idx | question | answer
---|---|---
55,201 | What is a "log-F" distribution? | It is a Type IV Generalized Logistic distribution.
Begin with the pdf of an $F(\nu_1, \nu_2)$ distribution ($\nu_1$ is "df1" and $\nu_2$ is "df2"), written $f(x)$. The pdf for the logarithm $y = \log(x)$ will be the coefficient of $dy$ in
$$f(\exp(y))\, |\mathrm d \exp(y)| =\frac{1}{B\left(\frac{\nu _1}{2},\frac{\nu _2}{2}\right)} \left(\frac{\nu_1 \exp(y)}{\nu_2 + \nu_1 \exp(y)}\right)^{\nu_1/2} \left(\frac{\nu_2}{\nu_2 + \nu_1 \exp(y)}\right)^{\nu_2/2} \mathrm d y.$$
Letting $\mu = \log(\nu_2) - \log(\nu_1)$ exhibits the pdf in the form
$$\frac{1}{B\left(\frac{\nu _1}{2},\frac{\nu _2}{2}\right)}\frac{\exp(-\nu_2(y-\mu)/2)}{\left(1 + \exp(-(y-\mu))\right)^{(\nu_1+\nu_2)/2}} $$
which (upon setting $\alpha = \nu_1/2$ and $\beta=\nu_2/2$) is recognizable as being derived from
$$\frac{\exp(-\beta y)}{\left(1 + \exp(-y)\right)^{\alpha+\beta}}$$
via a shift of location to $\mu = \log(\beta) - \log(\alpha) = \log(\nu_2) - \log(\nu_1)$ and normalization to unit probability.
Incidentally, if we let $l(y) = 1/(1 + \exp(-y))$ be the logistic transformation (whose graph is a "sigmoid" that maps the real numbers to the interval $(0,1)$), this PDF can be presented in a more symmetric form
$$l(y)^\alpha \left(1 - l(y)\right)^\beta$$
which is reminiscent of the Beta distribution (and will be converted into it via $l$). Thus $\alpha$ and $\beta$ control the mean and variance of the distribution in a familiar way.
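As a quick numerical sanity check of this derivation (not part of the original answer; the degrees of freedom below are arbitrary), one can compare a histogram of the log of simulated F draws with the change-of-variables density:
nu1 <- 5; nu2 <- 7                                  # illustrative degrees of freedom
y <- log(rf(1e5, df1 = nu1, df2 = nu2))             # Y = log(X), X ~ F(nu1, nu2)
flog <- function(y) df(exp(y), nu1, nu2) * exp(y)   # change of variables: f_X(e^y) e^y
hist(y, breaks = 100, freq = FALSE)
curve(flog(x), add = TRUE, col = "red", lwd = 2)    # overlays the histogram closely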
55,202 | Gibbs Sampling and Probability Notation | Very clean and clear presentation of the issue! This is exactly a hidden Ising model.
Your probabilities $\mathbb{P}(X_i=1|X_{[-i]},Y_i)$ and $\mathbb{P}(X_i=-1|X_{[-i]},Y_i)$ give you the way to simulate $X_i$ conditional on the others and on the observations. That those probabilities are non-integer is not an issue.
Maybe what is confusing you is the proportional $\propto$ symbol in
$$
\mathbb{P}(X_i|X_{[-i]},Y) \propto \exp{\left\lbrace X_i \sum_{j \in N_i} X_j + \log \mathbb{P}(Y_i|X_i) \right\rbrace}
$$
Indeed, without the $\propto$ symbol, we have
\begin{align*}
\mathbb{P}(X_i=1|X_{[-i]},Y) = \exp{\left\lbrace 1 \sum_{j \in N_i} X_j + \log \mathbb{P}(Y_i|1) \right\rbrace} \Big/ \\
\left[\exp{\left\lbrace 1 \sum_{j \in N_i} X_j + \log \mathbb{P}(Y_i|1) \right\rbrace}+
\exp{\left\lbrace -1 \sum_{j \in N_i} X_j + \log \mathbb{P}(Y_i|-1) \right\rbrace}\right]
\end{align*}
In your R code, you therefore have to compute both values:
for (i in seq(9)) {
  # the log P(Y_i | X_i = +1) and log P(Y_i | X_i = -1) terms of the formula above
  logPYgplus  = log(pnorm(q=Y[i], mean=mu[1], sd=sig[1]^2))
  logPYgminus = log(pnorm(q=Y[i], mean=mu[2], sd=sig[2]^2))
  # unnormalised conditional weights for X_i = +1 and X_i = -1
  PXiplus  = exp( J*sum(X[t,N[[i]]]) + logPYgplus)
  PXiminus = exp(-J*sum(X[t,N[[i]]]) + logPYgminus)
  # draw X_i = +1 with probability PXiplus / (PXiplus + PXiminus)
  X[t,i] = 1
  if (PXiplus < runif(1)*(PXiplus+PXiminus)) {
    X[t,i] = -1
  }
}
where I assumed the index 1 corresponds to $X_i=1$ and the index 2 to $X_i=-1$ in your definition of the vectors $\mu$ and $\sigma$. (It is a matter of convention.)
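For readers who do not have the original question at hand, here is a minimal setup sketch under which this loop runs; every object below (grid size, J, mu, sig, Y, the neighbour list N and the chain storage X) is an illustrative assumption, not taken from the question.
set.seed(1)
J <- 1                                    # illustrative coupling strength
mu <- c(1, -1); sig <- c(1, 1)            # emission parameters for X_i = +1 and X_i = -1
Y <- rnorm(9, mean = sample(mu, 9, replace = TRUE))   # fake observations on a 3 x 3 grid
grid <- matrix(1:9, nrow = 3)
N <- lapply(1:9, function(i) {            # neighbour list of the 3 x 3 lattice
  r <- row(grid)[i]; cc <- col(grid)[i]
  nb <- rbind(c(r - 1, cc), c(r + 1, cc), c(r, cc - 1), c(r, cc + 1))
  nb <- nb[nb[, 1] %in% 1:3 & nb[, 2] %in% 1:3, , drop = FALSE]
  grid[nb]
})
n_iter <- 1000                            # number of Gibbs sweeps
X <- matrix(1, nrow = n_iter, ncol = 9)   # chain storage; row t holds the t-th state
t <- 2; X[t, ] <- X[t - 1, ]              # in the full sampler, loop t over 2:n_iter
Wrapping the loop over i above inside for (t in 2:n_iter) { X[t, ] <- X[t - 1, ]; ... } then gives a complete Gibbs run.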
55,203 | Gibbs Sampling and Probability Notation | Answer to the follow-up question
Unfortunately, your intuition about the smoothing distribution $\mathbb{P}(x|y)$ is not correct as $\mathbb{P}(x|y)$ is not the product of the full conditionals:
$$
\prod_{i=1}^9 \mathbb{P}(x_i|x_{[-i]},y) \ne \mathbb{P}(x|y)
$$
even up to a proportionality constant. And even less on a log scale. Note also that the time step $t$ should not appear in the formula as time is only connected with Gibbs iterations in your problem, not with the initial statistical problem.
Now, you know $\mathbb{P}(x|y)$ up to a constant, that is, $\mathbb{P}(x|y) \propto f(x|y)$ for a function $f$ you can evaluate. The missing normalising constant is
$$
\sum_{x\in\{-1,1\}^9} f(x|y)
$$
which remains a manageable sum, hence can be calculated.
My first solution for the marginal posterior density of $X_i$ (given $Y$) would be to use the decomposition
\begin{align*}
\mathbb{P}(x_i|y) &= \sum_{x_{[-i]}} \mathbb{P}((x_i,x_{[-i]})|y)\\
&= \sum_{x_{[-i]}} \mathbb{P}(x_i|x_{[-i]},y) \mathbb{P}(x_{[-i]}|y)\\
&= \mathbb{E}\left[ \mathbb{P}(x_i|X_{[-i]},y) | y \right]
\end{align*}
where the expectation is understood as the one of $X_{[-i]}$ given $y$. Since the Gibbs sampler returns simulations from all subsets of $X$ given $y$, a converging approximation to the above is
$$
\hat{\mathbb{P}}(x_i|y) = \frac{1}{T} \sum_{t=1}^T \mathbb{P}(x_i|x_{[-i]}^t,y)
$$
where $x^t$ denotes the value of the Markov chain at the $t$-th iteration.
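A minimal sketch of this averaging, assuming the Gibbs draws are stored in a $T \times 9$ matrix Xchain and that p_plus_given_rest(i, x, y) returns the same conditional $\mathbb{P}(X_i=1|x_{[-i]},y)$ used inside the Gibbs sweep (both names are placeholders, not from the original answer):
phat_plus <- sapply(1:9, function(i)
  mean(apply(Xchain, 1, function(x) p_plus_given_rest(i, x, y))))
# phat_plus[i] is the Rao-Blackwellised estimate of P(X_i = 1 | y)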
However, since your state space $\{-1,1\}^9$ is small, you can also derive this marginal exactly:
$$
\mathbb{P}(x_i|y) = \sum_{x_{[-i]}\in\{-1,1\}^8} \mathbb{P}((x_i,x_{[-i]})|y) .
$$
Hence an exact derivation of the entropy as well.
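Here is a sketch of that exact computation, assuming a function logf(x) that returns the unnormalised log of $\mathbb{P}(x|y)$ for a configuration $x\in\{-1,1\}^9$ (the Ising plus emission terms of your model); the function name is a placeholder.
states <- as.matrix(expand.grid(rep(list(c(-1, 1)), 9)))   # all 2^9 = 512 configurations
logw  <- apply(states, 1, logf)
w     <- exp(logw - max(logw))            # subtract the max for numerical stability
post  <- w / sum(w)                       # exact P(x | y) for every configuration
p_plus <- colSums(post * (states == 1))   # exact marginals P(X_i = 1 | y)
H_i <- -(p_plus * log(p_plus) + (1 - p_plus) * log(1 - p_plus))   # entropy of each marginal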
55,204 | R: Relationship between hourly mortality and lifespan given that the latter is approximated by an exponential distribution? | The hazard function will be $\frac{f(t)}{1-F(t)}$.
Assuming lifetime to be exponential, the formula is given at that link; it's $\lambda$, the rate parameter of the exponential; you estimate that rate parameter $\lambda$ from the data exactly the same way you would for any exponential distribution.
The hourly mortality rate is the integral of the hazard rate across an hour, but
since the hazard is constant and times are measured in hours, it's trivial.
--
Discussion on the suitability of the model:
If lifespan were truly exponential, hazard would be constant. But that doesn't look exponential to me.
While it's possible to get a nonparametric estimate of the hazard function, with so little data, I'd probably start with a Weibull model, since it includes your suggested exponential life model as a special case and can accommodate both increasing and decreasing hazard functions.
library(survival)
survreg(Surv(beelife) ~ 1)
Call:
survreg(formula = Surv(beelife) ~ 1)
Coefficients:
(Intercept)
3.416515
Scale= 0.7265582
Loglik(model)= -140.5 Loglik(intercept only)= -140.5
n= 33
plot(ecdf(beelife))
lines((0:90),pweibull(0:90,1/.726558,exp(3.4165)),col=2)
The fit looks pretty reasonable.
Further, if we look at the standard error of the log(scale) coefficient, it suggests that exponential (which corresponds to log-scale of 0 if I understand everything right) seems inconsistent with the data:
Value Std. Error z p
(Intercept) 3.417 0.133 25.66 3.50e-145
Log(scale) -0.319 0.137 -2.32 2.01e-02
Scale= 0.727
Weibull distribution
Loglik(model)= -140.5 Loglik(intercept only)= -140.5
Number of Newton-Raphson Iterations: 6
n= 33
If I did it right, the fitted hazard function $h(t) = f(t)/(1-F(t))$ should look like this:
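A sketch of how that curve can be drawn from the fit above (assuming the model is stored as fit <- survreg(Surv(beelife) ~ 1); in survreg's Weibull parameterisation, shape = 1/scale and scale = exp(intercept)):
fit <- survreg(Surv(beelife) ~ 1)   # same model as above, now stored
shape <- 1 / fit$scale              # roughly 1/0.7266 = 1.38
scale <- exp(coef(fit))             # roughly exp(3.4165)
curve(dweibull(x, shape, scale) / (1 - pweibull(x, shape, scale)),
      from = 0, to = 90, xlab = "Hours", ylab = "h(t)")
Because the fitted shape exceeds 1, the plotted hazard increases with age, which is the sense in which the data look non-exponential.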
Another model I might consider, which also includes the exponential as a special case, would be the gamma. It also has increasing and decreasing hazards. However, I doubt it would do any better in this case.
55,205 | R: Relationship between hourly mortality and lifespan given that the latter is approximated by an exponential distribution? | The question asks about the relationship between hourly mortality and exponential distribution of lifespans.
The universe divides time into short ticks. For each bee a die is rolled. The die has many sides, most of them blank--but one is black. Death comes for the bee whenever the black side shows.
We are about to simulate this with the computer, but being lazy I don't want to slice time as finely as Death does. So instead of rolling a die every second, say, maybe I will roll a die every hour: but I will make it 3600 times more likely to show its black side to make up for the paucity of rolls. Because the chance of death in one second or even one hour is so small, this is a good approximation.
To exploit the computer's ability to do a lot of calculations with arrays at once, I will simulate an entire hive one bee at a time. Let the die roll again and again, once per hour, until the bee dies. Then continue with a newly born (reincarnated?) bee, repeating until many bees are born and die. The results will be the same as if all the bees had coexisted.
Here is R code. Its input is the array of bee lifetimes, bees, used to estimate the exponential rate.
set.seed(17) # (Allows reproducible results)
n <- 10^5 # Number of time slices
units <- 1 # Number of hours per time slice
rate.hat <- 1 / mean(bees) # ML estimate of the death rate
deaths <- runif(n) < rate.hat * units # Roll Death's die
times <- which(deaths != 0) # Note when bees are killed
lifetimes <- (diff(c(0,times))-1) * units # The time differences are the bee lifetimes
Here is the first part of the deaths array, showing the results of the beginning die rolls (with "*" denoting the black face):
paste(ifelse(head(deaths, 100),"*", "."), collapse="")
...........*
......................................................*
..............*
..................
The first few times at which the black face appeared on the die:
head(times)
12 67 82 120 134 147
Thus the first bee was felled at the 12th tick (so I count its lifetime as just 11 ticks), the second one at the 67th tick (for a lifetime of 54 ticks), and so on: these are the positions of the stars in the previous output.
The figures plot these deaths (each is a vertical slash on the left, showing each lifetime as a horizontal gap between slashes) and the distribution of the lifetimes (on the right). The latter has an exponential distribution function superimposed. It's an excellent fit.
The code to plot the histogram is
hist(lifetimes, freq=FALSE, ylim=c(0, rate.hat), breaks=25, xlab="Hours")
curve(dexp(x, rate=rate.hat), col="Red", add=TRUE)
Why is the histogram exponential? Because on average the number of bees alive after a certain age is reduced by the same fraction during the next time slice. That means the histogram has to decrease exponentially.
By the way, the reason why 1/mean(bees) is the maximum likelihood estimator is that the exponential probability distribution function is of the form $\kappa \exp(-\kappa t)$ for lifetimes $t$. To understand this, recall that a PDF is a density: it gives us probability per unit time. Since $t$ is time (in hours), $\kappa$ must be a probability (of death) per unit time: it's the rate. Its logarithm equals $\log(\kappa) - \kappa t$. Therefore the log likelihood for a dataset of lifetimes $(t_1, t_2, \ldots, t_n)$ is
$$\log(L(\kappa)) = n \log(\kappa) - \sum_{i=1}^n \kappa\, t_i.$$
Calculus shows us (by taking the derivative with respect to $\kappa$ and setting that to zero) that $$\hat\kappa = \frac{n}{\sum_{i=1}^n t_i}$$ is the only critical point (and obviously is where the likelihood is maximized). This is the reciprocal of the mean lifetime. That is how hourly mortality fits in with the maximum likelihood machinery.
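As a quick numerical cross-check (a sketch, assuming bees holds the observed lifetimes in hours as above, so that the rate is well below 1 per hour):
negloglik <- function(kappa) -(length(bees) * log(kappa) - kappa * sum(bees))
optimize(negloglik, interval = c(1e-6, 1))$minimum   # numerical maximiser of the likelihood
1 / mean(bees)                                       # closed-form MLE; the two agree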
55,206 | R: Relationship between hourly mortality and lifespan given that the latter is approximated by an exponential distribution? | Here's an approach
Get the histogram of the data
bl <- c(7.1, 2.3, 9.6, 25.8, 14.6, 12.8, 20.9, 30.0, 71.1, 36.9, 3.9, 18.2, 24.3, 55.8, 84.7,
13.6, 10.8, 18.2, 19.0, 34.8, 27.1, 54.2, 34.8, 65.8, 33.3, 26.5, 9.5, 44.2, 35.7, 2.3,
4.0, 25.9, 41.3)
b1h <- hist(bl, freq = FALSE, xlim = c(0, quantile(bl, 0.999)), plot=TRUE)
df <- data.frame(density=b1h$density, mids=b1h$mids)
Plot the histogram as points
plot(b1h$mids,b1h$density,cex=1,xlab='Life (hr)',ylab='Probability')
Fit an exponential distribution to the histogram
fit <- nls(density ~ lambda*exp(-lambda*mids), start=list(lambda=.1), data=df)
lines(df$mids,predict(fit),col='red')
Alternatively, fit an exponential distribution directly to the data (fitdistr is in the MASS package)
library(MASS)
fit1 <- fitdistr(bl, "exponential")
curve(dexp(x, rate = fit1$estimate), col = "green", add = TRUE)
The mean life is 1/lambda, and the mortality rate is lambda
cat('from nls (red) the mean life is',1/coef(fit),'and mortality rate is',coef(fit),'per hour')
cat('from fitdistr (green) the mean life is',1/coef(fit1),'and mortality rate is',coef(fit1),'per hour')
The results are
from nls (red) the mean life is 35.47136 and mortality rate is 0.02819176 per hour
from fitdistr (green) the mean life is 27.84848 and mortality rate is 0.0359086 per hour
Function fitdistr provides the maximum likelihood estimate for the parameters. The nls function provides a least-squares estimate. Note that for an exponential distribution, the maximum likelihood estimate is given by n/sum(xi) (ref derivation).
55,207 | Predict magnitude from angle in linear regression | Here, we want to predict a linear dependent variable from circular independent variables. There are several ways to approach this. The main thing to check is whether the relation between your dependent variable (let's say $Y$) and the circular predictor (say $\theta$) has a sinusoidal shape. This is often the case, but not necessarily. Below is an example of data of this shape.
th <- rnorm(100, 1, 4) %% (2*pi)
err <- rnorm(100, mean = 0, sd = 0.8)
icp <- 10
bc <- 2
bs <- 3
y <- icp + bc * cos(th) + bs * sin(th) + err
plot(th, y)
If the data does have this shape, roughly, a good simple model for the data is then given by splitting the circular predictor $\theta$ up in a sine and a cosine component, and running a regular linear regression on these two components, in this case by:
lm(y ~ cos(th) + sin(th))
Call:
lm(formula = y ~ cos(th) + sin(th))

Coefficients:
(Intercept)      cos(th)      sin(th)
      10.12         2.04         2.95
Of course, this can be done for multiple predictors as well. A good introduction on this may be found in Pewsey, Neuhauser & Ruxton (2013), Circular Statistics in R.
As mentioned before, we may add terms as in a Fourier regression, but this can only be recommended if the relationship structurally exhibits very different forms, because higher-order Fourier regression introduces, IIRC, a large number of difficult to interpret parameters.
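For instance, a second harmonic can be added with one more sine/cosine pair (using the simulated th and y from above):
lm(y ~ cos(th) + sin(th) + cos(2 * th) + sin(2 * th))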
55,208 | Predict magnitude from angle in linear regression | According to the title of your question, you want a linear regression. There is no linear function of a circular argument (except the function $f(x)=0$). Therefore, you should use non-linear functions. I recommend replacing each circular variable $x$ by the two variables $\sin x$ and $\cos x$ and using both in the linear regression.
55,209 | How to build a function with the result of auto.arima in R? | You should use the following formula:
Y(t) = 0.3793*(Y(t-1) - 9132.46 - 22.0469*X(t-1)) + 9132.46 + 22.0469*X(t).
Example to replicate out-of-sample forecasts:
require(forecast)
set.seed(123)
n <- 100
xreg <- rnorm(n)
x <- arima.sim(n=n, model=list(ar=0.4)) + 2 + 0.8 * xreg
fit <- arima(x, order=c(1,0,0), include.mean=TRUE, xreg = xreg)
newxreg <- rnorm(5)
# forecasts using predict
pred <- predict(fit, n.ahead=5, newxreg = newxreg)$pred
# forecasts by hand
pred2 <- rep(NA, 5)
pred2[1] <- coef(fit)[1] * (x[n] - coef(fit)[2] - coef(fit)[3] * xreg[n]) +
  coef(fit)[2] + coef(fit)[3] * newxreg[1]
for (i in seq(2, 5))
  pred2[i] <- coef(fit)[1] * (pred2[i-1] - coef(fit)[2] - coef(fit)[3] * newxreg[i-1]) +
    coef(fit)[2] + coef(fit)[3] * newxreg[i]
cbind(pred, pred2)
# pred pred2
# 101 2.966803 2.966803
# 102 1.822288 1.822288
# 103 1.991748 1.991748
# 104 2.095386 2.095386
# 105 2.855613 2.855613
all.equal(as.vector(pred), pred2)
#[1] TRUE
55,210 | Logit Standard Error | For a binomial random variable $X\sim \text{Bin}(n,p)$, the sample proportion $k/n$ is a consistent estimator of the probability parameter $p$, $\hat p = k/n$. We then have the asymptotic normality result
$$\sqrt n (\hat p -p) \xrightarrow{d} N(0, p(1-p))$$
Applying the Delta Theorem,
$$\sqrt n (g(\hat p) -g(p)) \xrightarrow{d} N(0, p(1-p)[g'(p)]^2)$$
Set $g(z) \equiv \ln(z/(1-z))$. Then
$$g'(z) = \frac {1-z}{z}\cdot \frac {1}{(1-z)^2} = \frac 1{z(1-z)}$$
Therefore
$$\sqrt n \left(\ln\frac{\hat p}{1-\hat p} -\ln\frac{ p}{1-p}\right) \xrightarrow{d} N\left(0, \frac 1{p(1-p)}\right)$$
In finite samples then we have the approximation
$$\ln\frac{\hat p}{1-\hat p} \sim_{\text{approx.}} N\left(\ln\frac{p}{1-p}, \frac 1{np(1-p)}\right)$$
To estimate the variance we use $\hat p =k/n$ instead of $p$ and we have
$$\widehat{\operatorname{Var}} \left(\ln\frac{\hat p}{1-\hat p}\right) = \frac {1}{n(k/n)(1-k/n)} = \frac 1k +\frac 1{n-k} $$
This is also the result obtained from the empirical inverted Hessian of the relevant log-likelihood.
Obviously, as Glen_b mentioned, if the observed proportion is $0$ or $1$ then this formula does not work.
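A quick way to see that agreement in R (the counts below are illustrative, not from the question): fit an intercept-only logistic regression to $k$ successes out of $n$ and compare its reported standard error with $\sqrt{1/k + 1/(n-k)}$.
k <- 13; n <- 40                                   # illustrative counts
fit <- glm(cbind(k, n - k) ~ 1, family = binomial)
summary(fit)$coefficients[, "Std. Error"]          # SE of the fitted log-odds
sqrt(1/k + 1/(n - k))                              # the formula above; same value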
ADDENDUM
Following conversation in the comments, I think we can describe the problem as follows:
The moments of the finite distribution of the logit transform, denote it $Z$ for brevity, are undefined in the sense that they contain an indeterminate form. For example, denoting $p_k$ the theoretical probability of the binomial taking the value $k$, we have
$$E(Z) = \sum_{k=0}^{n}p_k\ln[k/(n-k)]$$
$$= (1-p)^n\cdot \lim_{k\rightarrow 0}\ln[k/(n-k)] +...(\text{finite terms})...+ p^n\cdot \lim_{k\rightarrow n}\ln[k/(n-k)]$$
$$=(1-p)^n\left(-\infty - \ln[n]\right) +...+p^n\cdot \left(\ln[n] - (-\infty)\right)$$
$$=(1-p)^n\cdot (-\infty) +...+ p^n\cdot \infty$$
i.e. it contains a $-\infty +\infty$ expression.
If we now take the limit as $n\rightarrow \infty$, we will face
$$...=0\cdot (-\infty) +...+ 0 \cdot \infty$$
In general, the expression $0\cdot \pm \infty$ is also indeterminate, but from what I know, in measure theory $0\cdot \pm \infty$ is defined to be equal to $0$. So, where does that leave us?
55,211 | Logit Standard Error | $X\sim \text{Bin}(n,p)$
$\hat p = X/n$
Since $\hat p$ has a non-zero chance of being both $0$ and $1$, $E(\log(\frac{\hat p}{1-\hat p}))$, and also $\text{Var}(\log(\frac{\hat p}{1-\hat p}))$, are undefined.
If you want some other answer, you'll need to keep $\hat p$ away from $0$ and $1$.
55,212 | How to update a probability that a defendant is guilty after testimonies of multiple unreliable witnesses? | Instead of a tree, use a contingency table (which is the same thing, but lays out the calculations in a more convenient form). Instead of probabilities, perform the calculations in terms of odds.
Because the problem eventually concerns multiple witnesses, let's address the case where after $n-1$ witnesses have come forward our belief in guilt has probability $q$, say (and therefore our belief in lack of guilt is $1-q$). Because the next witness will split each row into two parts in the proportion $p:(1-p)$, the table must divide up like this:
                                Witness:
                                Guilty         Not guilty  | Total
Prior belief is guilty:         pq             (1-p)q      | q
             ... not guilty:    (1-p)(1-q)     p(1-q)      | 1-q
The updated belief in guilt when the witness testifies to guilt will be found in the proportions of the "Guilty" column appearing in the first row. (The second column becomes irrelevant.) Since the total in the "Guilty" column is $pq + (1-p)(1-q)$, this proportion equals
$$\frac{pq}{pq + (1-p)(1-q)}.$$
A neater way to express this is in terms of the odds, assuming neither $p$ nor $q$ equals $1$. (When either does equal $1$, the odds become infinite but the calculations in terms of probabilities are easy.) The odds are found by dividing the belief in guilt (first row) given the witness is testifying to guilt (left column) by the belief in not guilty (second row) within the same column. The final odds are therefore
$$\frac{pq}{(1-p)(1-q)} = \frac{p}{1-p} \frac{q}{1-q}.$$
We see that the initial odds of $q/(1-q)$ have been updated to odds of $p/(1-p)\times q/(1-q)$. In other words,
The posterior odds of guilt are the prior odds of guilt $q/(1-q)$ multiplied by the witness's odds of guilt $p/(1-p)$.
This updating will happen each time a witness comes forward. (That explains how to extend the probability tree, if one really wanted to: it consists of a sequence of identical steps, each leading to the same calculations.) Therefore, after $n$ witnesses appear, the initial odds $q_0 = (1/2)/(1 - 1/2) = 1$ have been multiplied by $\left(p/(1-p)\right)^n$. That is, with $n$ witnesses the odds of our belief in guilt should be updated to
$$\text{Odds(Guilty)} = \left(\frac{p}{1-p}\right)^n \frac{q_0}{1-q_0} = \left(\frac{p}{1-p}\right)^n.$$
This forms a geometric sequence with initial value $q_0/(1-q_0)=1$ and common ratio $\lambda=p/(1-p)$. When $\lambda\gt 1$ (equivalently, $p\gt 1/2$), which occurs when the witnesses are somewhat reliable, the sequence increases without bound, showing that the probability of guilt is growing large. When $\lambda\lt 1$ (i.e., $p\lt 1/2$), which occurs when the witnesses are more likely to lie than not, the sequence converges to zero, showing that the probability of guilt is becoming small: a pack of lying bastards should convince you the defendant is innocent! When $\lambda=1$ (which corresponds to witnesses who are doing no better than randomly guessing), the odds remain the same as always.
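A small numerical illustration of this geometric behaviour (the witness reliability $p$ below is an arbitrary illustrative value):
p <- 0.7; n <- 1:10
odds <- (p / (1 - p))^n              # posterior odds after n concurring witnesses
round(odds / (1 + odds), 3)          # corresponding posterior probabilities of guilt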
55,213 | How to update a probability that a defendant is guilty after testimonies of multiple unreliable witnesses? | P(guilty) = 0.5
P(witness says guilty | guilty) = p (Witness is telling the truth)
P(witness says guilty | not guilty) = 1- p (Witness is telling a lie)
P(guilty and witness says guilty) = 0.5p
P(not guilty and witness says guilty) = 0.5(1-p)
P(witness says guilty) = 0.5[p + (1-p)] = 0.5
By Bayes' rule,
P(guilty | witness says guilty)= P(guilty and witness says guilty)/P(witness says guilty) = 0.5p/0.5 = p
Source: http://slaystats.com/adms2320assignment.html
55,214 | null hypothesis change | The form you are describing is commonly referred to as a one-tailed test. As $\alpha$ now refers only to a single tail of the distribution, the critical value will change. For example, on a Z test, the critical value for $\alpha = .05$ on a two-tailed test is $\approx 1.96$, but for a one-tailed test with the same alpha the critical value would be $\approx 1.64$, i.e. the value beyond which a single tail of the distribution contains $\alpha$ of the probability.
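The two critical values quoted here can be reproduced directly in R:
qnorm(1 - 0.05/2)   # two-tailed critical value, about 1.96
qnorm(1 - 0.05)     # one-tailed critical value, about 1.64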
Edit: Note that this answer only applies to tests where those $H_0$ and $H_1$s are sensible. There are contexts in which the critical value is drawn from a distribution where answering hypotheses in that form is not possible without additional work outside of the typical use of the statistic. For example, one would not easily use hypotheses of that form in relationship to an F test. See Why do we use a one-tailed test F-test in analysis of variance (ANOVA)? for more details on this disclaiming point.
55,215 | What is meant by effective parameters in machine learning | Some models (here: linear regression) have parameters $\beta$:
$$ \hat{y} = \sum_{i\in\{1..p\}} \beta_i x_i + \beta_0 $$
For the same number of input features, more complex models (here: basis expansion to a quadratic model) have more parameters:
$$ \hat{y} = \sum_{i,j\in\{1..p\} \atop i\le j} \beta_{ij} x_ix_j + \sum_{i\in\{1..p\}} \beta_{i} x_i + \beta_0 $$
In general, models with more parameters are more flexible (because there are more parameters to tune to fit the model to the data), but are more difficult to fit and thus more likely tend to overfit. Regularization helps overcome these problems by reducing the degrees of freedom for tuning, thus reducing the complexity a bit:
$$ \hat{y} = \sum_{i,j\in\{1..p\} \atop i\le j} \beta_{ij} x_ix_j + \sum_{i\in\{1..p\}} \beta_{i} x_i + \beta_0 \textrm{, with } \sum \beta^2 \textrm{ small} $$
We can express the 'model complexity' or 'degrees of freedom for tuning' in terms of 'effective parameters': For the model that you investigate, find the parameter-based model (without regularization) that has the same degree of freedom for tuning. The number of parameters for that model is the effective number of parameters for your model.
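As a concrete sketch of this idea (not from the original answer): for ridge regression, i.e. least squares with $\sum \beta^2$ kept small via a penalty $\lambda$, the effective number of parameters is commonly measured by the trace of the hat matrix $X(X^\top X+\lambda I)^{-1}X^\top$; at $\lambda=0$ it equals the ordinary parameter count $p$, and it shrinks as $\lambda$ grows.
set.seed(1)
n <- 50; p <- 5
X <- matrix(rnorm(n * p), n, p)
eff_df <- function(lambda)
  sum(diag(X %*% solve(crossprod(X) + lambda * diag(p)) %*% t(X)))
sapply(c(0, 1, 10, 100), eff_df)   # starts at 5 and decreases as lambda grows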
This generalization can also be extended further to models that are not based on parameters (e.g. k nearest neighbor). Then the degree of complexity is known as Vapnik–Chervonenkis (VC) dimension. Conveniently, for (non-regularized) linear regression, it is $p+1$, the actual number of parameters.
Elements of Statistical Learning, section 7.9, gives more information about that (and they seem to use the terms 'VC', 'effective number of parameters' and 'model complexity' more or less interchangeably).
55,216 | What is meant by effective parameters in machine learning | "Effective parameters" can also be referred to as "effective degrees of freedom". In a linear model, we note that the leverages -- the amount the fitted value changes with the actual value, $\frac{\partial \hat y_i}{\partial y_i}$ -- can be added up to obtain the degrees of freedom of the model. This way of calculating degrees of freedom can then be applied to non-linear models.
For example suppose your "model" is to take the average of the $k$ nearest neighbours. For each observation, the fitted value is the average of $k$ values. The observation itself will be amongst these $k$ nearest points, and its own contribution will be $y_i/k$, hence the leverage for every observation is $1/k$, and for $n$ observations the effective degrees of freedom is $n/k$.
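A small numerical check of this claim (a sketch with arbitrary data): build the $k$-nearest-neighbour smoother matrix for a one-dimensional sample and verify that its diagonal entries are $1/k$ and its trace is $n/k$.
set.seed(1)
n <- 20; k <- 5
x <- sort(runif(n))
S <- matrix(0, n, n)
for (i in 1:n) {
  nn <- order(abs(x - x[i]))[1:k]   # the k nearest points, including point i itself
  S[i, nn] <- 1 / k
}
diag(S)        # every leverage equals 1/k = 0.2
sum(diag(S))   # effective degrees of freedom = n/k = 4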
"Effective parameters" can also be referred to as "effective degrees of freedom". In a linear model, we note that the leverages -- the amount the fitted value changes with the actual value $\partial \hat y_i \over \partial y_i $ -- can be added up to obtain the degrees of freedom of the model. This way of calculating degrees of freedom can then be applied to non-linear models.
For example suppose your "model" is to take the average of the $k$ nearest neighbours. For each observation, the fitted value is the average of $k$ values. The observation itself will be amongst these $k$ nearest points, and its own contribution will be $y_i/k$, hence the leverage for every observation is $1/k$, and for $n$ observations the effective degrees of freedom is $n/k$. | What is meant by effective parameters in machine learning
"Effective parameters" can also be referred to as "effective degrees of freedom". In a linear model, we note that the leverages -- the amount the fitted value changes with the actual value $\partial \ |
55,217 | Formula for calculating sample size for hypergeometric distribution | First of all for background:
“The hypergeometric distribution applies to sampling without replacement from a finite population whose elements can be classified into two mutually exclusive categories like Pass/Fail” (Wikipedia)
That being said, if the population is extremely large relative to the sample, the results will closely approximate the binomial distribution even though sampling is without replacement.
Equation for sample size calculation for small populations:
Hypergeometric distribution
$$n = \frac{N z^2 p q}{E^2 (N-1) + z^2 p q}$$
Where:
n = Minimum sample size
N = Population size
z = Confidence level ($z_{\alpha/2}$)
p = Proportion of events in population
q = Proportion of non-events in population
E = Accuracy of sample proportions
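As a minimal R sketch of the formula above (the values plugged in are an arbitrary assumed example):
hyper_n <- function(N, p, E, conf = 0.95) {
  z <- qnorm(1 - (1 - conf) / 2)   # two-sided critical value
  q <- 1 - p
  ceiling(N * z^2 * p * q / (E^2 * (N - 1) + z^2 * p * q))
}
hyper_n(N = 5000, p = 0.5, E = 0.05)   # about 357; the simple binomial formula below gives 385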
Simple binomial distribution (included for comparison)
$$n = \frac{z^2 p q}{E^2}$$
Useful links and resources:
Formula and examples: University of Regina
Online calculator
Google has the book: "Six sigma and beyond"
Useful examples on "stattrek.com"
Wolfram Alpha
digitheadslabnotebook.blogspot.com
55,218 | Formula for calculating sample size for hypergeometric distribution | Using the formula provided by Stan in his answer to his own question and plugging in the values for $N$, $p$ and $q$ in the question, i.e.
Population: $N = 1,000,000$
proportion of white marbles: $p = 0.001$
proportion of black marbles: $q = 1 - p$
and assuming a precision
$E = 0.05$
we end up with
$$
n = \frac{1,000,000 \cdot z_{1-\alpha / 2}^2 \cdot 0.000999}{2499.998 + z_{1-\alpha / 2}^2 \cdot 0.000999}
$$
Here we need a clarification:
$z$ is not the confidence level but is usually interpreted as the $(1-\alpha/2)$ quantile of the standard normal distribution.
The confidence level is $1-\alpha$.
Typical values of $\alpha$ are 0.01, 0.05 and 0.1 and are set to control the probability of a type I error in hypothesis testing. Wikipedia
Using the $(1-\alpha/2)$ quantile implies we want to conduct a two sided hypothesis test (otherwise we would have to use the $(1-\alpha)$ quantile).
Back to the example:
Let's use $\alpha = 0.01$, i.e. a 99% confidence level. Now we get a result for sample size $n$:
$$
\begin{align}
n &= \frac{1,000,000 \cdot 2.5758^2 \cdot 0.000999}{2499.998 + 2.5758^2 \cdot 0.000999}\\
&= 2.6513
\end{align}
$$
Discussion of result:
Clearly it is impractical to draw 2.6513 marbles from the urn, so the practical options are to draw 2 or 3 marbles.
Neither would be very satisfying...
Imagine we opt for drawing 3 marbles. What are the probabilities of drawing 0, 1, 2 or even 3 white marbles?
$P(0) = 0.997003$
$P(1) = 0.002994009$
$P(2) \approx 0$
$P(3) \approx 0$
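These probabilities can be checked directly with R's dhyper (this simply recomputes the numbers listed above):
# N = 1,000,000 marbles, 1,000 of them white, draw n = 3
dhyper(0:3, m = 1000, n = 999000, k = 3)   # approx. 0.997003, 0.002994, 3e-06, 1e-09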
Regarding a hypothesis test $H_0: p_0 = 0.001$ we are fine because the chances of rejecting the true hypothesis are at most 1% (as required by setting $\alpha = 0.01$).
Caution:
If it is not just about hypothesis testing but also about estimating the proportion of white marbles in the population, you would not want to do this based on a sample size of just 3, because your estimate would be limited by design to the values 0%, 33.3%, 66.7% or 100%. Not quite what you had in mind when specifying a precision of 0.05 (i.e. 5 percentage points).
55,219 | Non-parametric version of paired t-test (Mann–Whitney U test) | The non-parametric analog of the paired $t$-test is the Wilcoxon signed-rank test.
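A minimal R illustration with made-up paired data (an assumed example, not part of the original answer):
set.seed(1)
before <- rnorm(20, mean = 10)
after  <- before + rnorm(20, mean = 0.5)
wilcox.test(before, after, paired = TRUE)   # Wilcoxon signed-rank test on the paired differences
t.test(before, after, paired = TRUE)        # the parametric counterpart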
55,220 | Simulation of KS-test with estimated parameters | It is a Lilliefors test, and your first and third items are pretty close to how to do it.
The statement that the Lilliefors test is only for the normal distribution is wrong. He did one for the exponential as well (you can even see that in the references at the bottom of the Wikipedia page you linked to), and the technique should work with quite a few other distributions.
However, I don't think you quite have it right.
There are a couple of ways of organizing it, but one approach that works for continuous distributions is as follows:
Repeat many times:
Simulate a sample of the desired sample size from the assumed distribution.
Estimate the parameters of the distribution.
Treating the estimated parameters as the population values, transform to uniformity via the probability integral transform. (You can compute a KS statistic without transforming at this step; however, it makes the computation a bit simpler.)
Compute a KS test statistic.
Collect the simulated statistics, and work out the proportion of times the simulated statistic is at least as extreme (more consistent with $H_1$) as the observed sample value.
If you have it right you should be able to reproduce the results from Lilliefors' paper (to the limited accuracy he had, anyway).
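Here is a minimal R sketch of that recipe for an exponential null; the sample, the rate, and the number of replicates are assumptions, and the KS statistic is computed directly against the fitted CDF rather than after the uniformity transform:
set.seed(1)
x <- rexp(50, rate = 2)                   # "observed" sample
rate_hat <- 1 / mean(x)                   # estimate the parameter from the sample
ks_obs <- ks.test(x, "pexp", rate = rate_hat)$statistic
ks_sim <- replicate(2000, {
  y <- rexp(length(x), rate = rate_hat)   # simulate from the fitted distribution
  ks.test(y, "pexp", rate = 1 / mean(y))$statistic   # re-estimate, then compute the KS statistic
})
mean(ks_sim >= ks_obs)                    # simulated p-value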
55,221 | Fit measures for GMM Arellano-Bond estimator in R | If you are using the pgmm command from the package plm you do have good ways to assess model fit available to you (if this is not what you're using it's what you should be using). The standard way to test model fit would be to look at the $J$-test of overidentifying restrictions, also called the Sargan test (see here or here). Strictly speaking this is a test of the validity of all your instruments (given that some subset of your instruments is valid). This test is listed under "Sargan test" when you use the summary command on your pgmm model. summary also shows tests of serial correlation of the residuals.
I gather what you want is less to test the Arellano-Bond model itself and more to test the degree to which the predicted time series seems to be close to the result. You can actually reproduce the square correlation used in Bloom et al. (2001) pretty easily. The fitted values are exactly what you need. Starting with an example from the plm documentation found here:
data("EmplUK", package = "plm")
## Arellano and Bond (1991), table 4b
z1 <- pgmm(log(emp) ~ lag(log(emp), 1:2) + lag(log(wage), 0:1)
+ log(capital) + lag(log(output), 0:1) | lag(log(emp), 2:99),
data = EmplUK, effect = "twoways", model = "twosteps")
# Shows Sargan test and serial correlation of residuals.
summary(z1)
# Getting the actual Y values out. There is likely a more elegant way to do this.
Y <- c()
for( i in 1:length(z1$model)){ Y <- c(Y,z1$model[[i]][,1])}
# Note that the fitted-values are exactly what you need.
Yhat <- z1$fitted.values[1:length(z1$fitted.values)]
# Squared correlation of fitted values and actual values
cor(Y,Yhat)^2
55,222 | detect line in scatter | A Hough Transformation should help. It's mainly used for detecting lines in images, but (x,y) pairs can be considered a sparse representation of a bitmap image.
The idea is for each (x,y) point in the input, compute a list of (slope, intercept) pairs which represent "all" lines that pass through that point. "all" is constrained to some ranges of slope and intercept and granularity limit. If you look at or analyze the resulting collection of (slope, intercept) pairs, you'll see a cluster for each line in the input.
Often the (slope, intercept) pairs are represented as an image or scatter plot with transparency, so that clusters or co-incident values appear as dark spots (or bright spots depending on your color settings).
(Addition after comment from @whuber: There are other possible transformations besides slope/intercept. Angle/intercept puts the first coordinate in a nicer range, and angle/distance-to-origin avoids the intercept bounds issue for near-vertical lines.)
Here's a quick R attempt starting with your code.
#############################
# Hough transform:
# for each point find slopes and intercepts that go through the line
#############################
# first, set up a grid of intercepts to cycle through
dy <- max(points$y) - min(points$y)
intercepts <- seq(min(points$y) - dy, max(points$y) + dy, dy/50) # the intercept grid for the lines to consider
# a function that takes a point and a grid of intercepts and returns a data frame of slope-intercept pairs
compute_slopes_and_intercepts <- function(x,y,intercepts) {
data.frame(intercept=intercepts,
slope=(y-intercepts) / x)
}
# apply the function above to all points
all_slopes_and_intercepts.list <- apply(points,1, function(point) compute_slopes_and_intercepts(point['x'],point['y'],intercepts))
# bind together all resulting data frames
all_slopes_and_intercepts <- do.call(rbind,all_slopes_and_intercepts.list)
# plot the slope-intercept representation
plot(all_slopes_and_intercepts$intercept, all_slopes_and_intercepts$slope, pch=19,col=rgb(50,50,50,2,maxColorValue=255),ylim=c(-5,5))
# circle the true value
slope <- (end$y - start$y) / (end$x - start$x)
intercept <- start$y - start$x * slope
points(intercept, slope, col='red', cex = 4)
This generates the following plot:
In the plot, the actual slope and intercept of the true line is circled. Alternatively, use ggplot2 and stat_bin2d showing count per bin.
We'll do something similar to ggplot above to find a "best guess" estimate:
# Make a best guess. Bin the data according to a fixed grid and count the number of slope-intercept pairs in each bin
slope_intercepts = all_slopes_and_intercepts
bin_width.slope=0.05
bin_width.intercept=0.05
slope_intercepts$slope.cut <- cut(slope_intercepts$slope,seq(-5,5,by=bin_width.slope))
slope_intercepts$intercept.cut <- cut(slope_intercepts$intercept,seq(-5,5,by=bin_width.intercept))
accumulator <- aggregate(slope ~ slope.cut + intercept.cut, data=slope_intercepts, length)
head(accumulator[order(-accumulator$slope),]) # the best guesses
(best.grid_cell <- accumulator[which.max(accumulator$slope),c('slope.cut','intercept.cut')]) # the best guess
# as the best guess take the mean of slope and intercept in the best grid cell
best.slope_intercepts <- slope_intercepts[slope_intercepts$slope.cut == best.grid_cell$slope.cut & slope_intercepts$intercept.cut == best.grid_cell$intercept.cut,]
(best.guess <- colMeans(best.slope_intercepts[,1:2],na.rm = TRUE))
points(best.guess['intercept'], best.guess['slope'], col='blue', cex = 4)
This could be improved in all sorts of ways, e.g. by running a kernel density estimation on the data and picking the likelihood maximum.
55,223 | LCA number of parameters & degrees of freedom | poLCA and mclust both perform model-based cluster analysis, based on finite mixture models. However, poLCA is designed for Latent Class Analysis (LCA), which is the name for a particular class of mixture models suitable for categorical (polytomous) data. Conversely, mclust estimates Gaussian mixtures, so it is suitable for quantitative variables.
You should choose between the two classes of models by analyzing the nature and structure of your variables.
Note that with LCA you are considering the variables as qualitative, that is, the information about the ordering of the modalities is ignored.
As regards poLCA, you have too many unique values in each variable for the model to be identifiable. The number of independent parameters is related to the number of modalities (what you called unique values) of each variable and must be lower than the number of distinct configurations of the variables (in your case distinct observed 5-tuples of outcomes among the units, which is $\leq 200$). In particular, if $m_a$, $m_b$, $m_c$ are the numbers of modalities for a 3-variable model with $k$ latent classes, then the number of independent parameters is:
$$
(k-1)+ k\cdot[(m_a-1)+(m_b-1)+(m_c-1)]
$$
So, yes: if you want to use LCA, you need to aggregate the modalities in order to reduce the number of parameters.
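A one-line R version of this parameter count (the modality counts below are just an assumed example):
lca_n_params <- function(k, modalities) (k - 1) + k * sum(modalities - 1)
lca_n_params(k = 3, modalities = c(4, 3, 5))   # (3-1) + 3*(3+2+4) = 29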
Btw, to run poLCA multiple times, you can simply use the nrep option.
55,224 | LCA number of parameters & degrees of freedom | As has been noted, poLCA only handles categorical data. So what does that mean for how it processes your data? From the help documentation for poLCA:
...Manifest variables must contain only integer values, and must be coded
with consecutive values from 1 to the maximum number of outcomes for
each variable.
That means that for each of your variables, poLCA will assume each value is a unique category, and that there are as many possible categories (outcomes) as the highest value in the variable. Taking a look at the data you provided, that would imply that RevCd is a categorical variable with more than 600 unique levels! There appear to be similar problems with ProcCd2, and possibly the other three variables as well (it's hard to be certain without knowing more about the structure of your data set).
mclust, on the other hand, is designed to work only with continuous data. That means that when you give it a categorical variable, it just processes it as though it were continuous (which is almost certainly not what you want). Assume your variable FundType is categorical with three levels 1, 2, and 3 corresponding to three different types of funds. If you process that as though it were continuous, then you're not estimating anything to do with three different funds, you're estimating some made up underlying continuous variable (let's call it "fundness"), which increases linearly from fund 1 to fund 2, and fund 2 to fund 3. Except in some rare cases with ordinal categorical variables, this is nonsense.
My guess is that you want to use a combination of both continuous and categorical variables to determine latent classes.
There's no mathematical reason you can't do this, but neither poLCA nor mclust will run the model you need. depmixS4 can handle both categorical and continuous manifest variables, so you may want to start there. Also see this post with a related question: Latent class model with both continuous and categorical indicators in R
55,225 | Studentized residual distribution | In my opinion there are two possible explanations here:
Externally studentized residuals are based on data with one observation deleted, which may account for the loss of the single degree of freedom.
There is inconsistency across books what $``k"$ actually refers to in a multiple regression model. If $k$ is the number of regressors, then the correct degrees of freedom is $n-k-1$. On the other hand, if $k$ is the number of regression coefficients (which is usually the number of regressors plus one, for the intercept), then the correct degrees of freedom is $n-k.$
Note: In general, the distribution of studentized residuals doesn't depend on whether there are dummy variables in the model or not. To be clear, let the regression model be $Y=X\beta +\epsilon$, where $X \in R^{n \times (k+1)}$, where $n$ is the number of observations and $k$ is the number of regressors. The design matrix $X$ may contain continuous variables, dummy variables and/or both. In this general framework, the internally Studentized residual is defined as
$$
r_i = \frac{e_i}{\sqrt{MSE(1-h_{ii})}}
$$
where $e_i$ is the $i^{th}$ residual, and $H=(h_{ij})=X(X'X)^{-}X'$ is the so-called "hat" matrix. The internally Studentized residuals do not follow a $t$-distribution, because $e_i$ and $MSE$ are not independent.
The externally Studentized residual is defined as
$$
t_i = \frac{e_i}{\sqrt{MSE_{(i)}(1-h_{ii})}}
$$
where $MSE_{(i)}$ is the mean-square error from the regression model fitted with the $i^{th}$ observation deleted. In this case, $e_i$ and $MSE_{(i)}$ are independent, and it can be shown that $t_i \sim t_{n-k-2}$, the loss of the extra one degree of freedom due to the deletion of observation $i$.
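For what it's worth, these are essentially the quantities returned by R's rstandard() and rstudent(); a quick check on a built-in data set:
fit <- lm(dist ~ speed, data = cars)
h <- hatvalues(fit); e <- resid(fit); mse <- sum(e^2) / df.residual(fit)
all.equal(unname(rstandard(fit)), unname(e / sqrt(mse * (1 - h))))   # internally Studentized
head(rstudent(fit))                                                  # externally Studentized (leave-one-out)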
I hope this makes it clearer. So to understand the degrees of freedom in your case, you should consider the design matrix as a whole, and not break it into two parts - one with continuous predictors and one with dummy(s). Once you do that, figure out whether the Studentized residuals in question are externally or internally studentized. Then apply the above.
55,226 | Assigning meaningful cluster name automatically | One technique for this is unsupervised multi-document keyword extraction. That is, extracting the most salient words and/or collocations for each of your clusters. A number of methods are available to do that, most of which are centered around either using graph centrality measures (think: Google's PageRank) or using language processing (NER/noun phrase detection) of each document, or simply using all possible n-grams and then extracting the most significant entities/phrases according to, e.g., their log(TF)-IDF scoring or log(TF)-Entropy score. Instead of using all n-grams, you could also detect collocations (ideally, with a likelihood ratios estimator, p. 161 ff.) on a large, independent corpus, and then use those (only) to form any higher n-gram.
However, in your case, maybe, statistical (feature) selection techniques might be more appropriate, like Bootstrap testing to extract the terms (again, either sampling over all possible n-grams or by selecting specific, pre-computed collocations) that are significant for each cluster, relative to all others. Note that you can use either the raw word counts or plug in any re-weighted count from the TF-IDF family of term weighting functions mentioned above. However, these statistical techniques require that you have access to all words (viz, word counts) among all clusters and indeed have the computational resources to do an all-against-all comparison, something that might be infeasible if your clusters and word data are too large.
55,227 | Assigning meaningful cluster name automatically | Yes it makes sense, however frequencies of single words may lead to trivial topics. You need to do some kind of normalisation by using TF-IDF and finding the most informative word(s) in the cluster (IDFs should be computed on the whole corpus). In other words you want the most frequent word(s) in the cluster that are in-frequent in other clusters.
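A tiny base-R sketch of that idea, using two made-up word lists as the 'clusters' (assumed data, not from the answer):
docs <- list(c1 = c("price", "offer", "sale", "price"),
             c2 = c("match", "team", "win", "match"))
vocab <- unique(unlist(docs))
idf <- log(length(docs) / sapply(vocab, function(w) sum(sapply(docs, function(d) w %in% d))))
tfidf <- sapply(docs, function(d) as.numeric(table(factor(d, levels = vocab))) * idf)
rownames(tfidf) <- vocab
sapply(colnames(tfidf), function(j) rownames(tfidf)[order(tfidf[, j], decreasing = TRUE)[1:2]])  # top terms per cluster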
It would be better to go for sets of 2-3 consecutive words (easily discoverable by running some adaptation of the Apriori algorithm on each cluster). Also search for meme-tracking algorithms as an alternative option.
55,228 | Compute pdf of a k-th order statistic | Preamble
As per the above comments, order statistics from non-identical distributions typically require complicated calculations, and generally yield complicated solutions, which makes them well-suited for solving with computer algebra systems. I am not aware that one can generally derive closed-form solutions as a function of the sample size $n$ and $k$-th order statistic (though it may be possible for your example)... but one CAN certainly obtain quite neat solutions to your problem, given any arbitrary integer value for $n$ and $k$ of your own choice. I am going to pursue the computer algebra approach here, because those tools are familiar to me, and because it makes short shrift of a lot of messy algebra.
The Problem
Let $X_i$ denote a continuous random variable with pdf $f(x; m_i)$, such that $(X_1,X_2,\dots,X_n)$ are independent but not identical variables due to differing parameters $m_i$, for $i = 1,\dots,n$.
For the OP's question, we have a $Uniform(m,1)$ parent where identicality is relaxed by replacing parameter $m$ with $m_i$, for $i = 1, \dots, n$. Thus, the pdf $f(x; m_i)$, can be written:
To illustrate, here is plot of the family of pdf's, when $n = 4$, and $m_i = \frac{i}{5}$.
Solution
If $X_i$ has pdf $f(x; m_i)$, then, for any sample size $n$, the pdf of the $k$-th order statistic is given by:
$\qquad \qquad $OrderStatNonIdentical[k, {$f_i$}, {n}]
where OrderStatNonIdentical is a function from the mathStatica package for Mathematica, and where $n$ and $k$ are integers. For the OP's question, in a sample of size $n = 4$, the pdf of the 2nd smallest order statistic is given immediately by:
Here is a plot of the pdf of the 2nd order statistic (just derived), when the sample size is $n=4$, and $m_i = i/5$:
Similarly, the pdf of the 3rd order statistic is:
... and here is a plot of same:
Monte Carlo check
When doing symbolic work, it is always a good idea to check one's work using numerical methods, to make sure no errors have crept in. Here is a quick Monte Carlo check of the $k = 2$ case solution derived above, again with $m_i = i/5$, and $n = 4$. The ragged blue line is the empirical pdf (blue), plotted on top of the theoretical solution (dashed red line) derived above:
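For readers without mathStatica, a small R re-creation of that Monte Carlo check under the same assumed setup ($m_i = i/5$, $n = 4$, $k = 2$):
set.seed(1)
m <- (1:4) / 5
x2 <- replicate(1e5, sort(runif(4, min = m, max = 1))[2])   # 2nd order statistic of X_i ~ Uniform(m_i, 1)
plot(density(x2), main = "2nd order statistic, X_i ~ Uniform(i/5, 1)")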
All looks fine :)
General Solutions by Induction
It appears possible to attain general symbolic solutions by induction, at least for the 1st and 2nd order statistics. In particular:
the pdf of the 1st order statistic, irrespective of the size of $n$, has form:
$$ \begin{cases}
\frac{n(1-x)^{n-1}}{\prod_{i=1}^n (1-m_i)} & m_n < x < 1 \\
\quad \quad \dots & \quad \quad \dots \\
\frac{3(1-x)^2}{(1-m_1)(1-m_2)(1-m_3)} & m_3 < x \leq m_4 \\
\frac{2(1-x)}{(1-m_1)(1-m_2)} & m_2 < x \leq m_3 \\
\frac{1}{1-m_1} & m_1 < x \leq m_2 \\
0 & \text{otherwise}
\end{cases}$$
the pdf of the 2nd order statistic has form:
$$ \begin{cases}
\frac{(n-1)(1-x)^{n-2}}{\prod_{i=1}^n (1-m_i)}(n x-\sum_{i=1}^n m_i) & m_n < x < 1 \\
\quad \quad \dots & \quad \quad \dots \\
\frac{2(1-x)^1}{(1-m_1)(1-m_2)(1-m_3)} (3x-m_1-m_2-m_3) & m_3 < x \leq m_4 \\
\frac{1(1-x)^0}{(1-m_1)(1-m_2)} (2x-m_1-m_2) & m_2 < x \leq m_3 \\
0 & \text{otherwise}
\end{cases}$$
55,229 | How the hypergeometric distribution sums to 1? | The reference mentions that this identity is from combinatorics: that is, it counts things.
What does it count? Consider $N$ objects. Once and for all, divide those $N$ things into a group of $K$ of them, which I will call "red," and the remainder, which I will call "blue." Each subset of $n$ such objects determines, and is determined by, its red objects and its blue objects. The number of such sets with $k$ red objects (and therefore $n-k$ blue objects) equals the number of ways to choose $k$ red objects from all $K$ ones (written $\color{red}{\binom{K}{k}}$) times the number of ways to choose the remaining $n-k$ blue objects from all the $N-K$ ones (written $\color{blue}{\binom{N-K}{n-k}}$).
Now if $k$ is not between $0$ and $K$, then there is no $k$-element subset of $K$ things, so $\binom{K}{k}=0$ in such cases. Similarly, $\binom{N-K}{n-k}=0$ if $n-k$ is not between $0$ and $N-K$. (This not only makes sense, it is actually how good software will evaluate these quantities. Ask R, for instance, to compute choose(5,6) or choose(5,-1): it will return the correct value of $0$ in both cases.)
Summing over all possible numbers $k$ shows that
$$\binom{N}{n} = \sum_k \color{red}{\binom{K}{k}}\color{blue}{\binom{N-K}{n-k}}$$
and as you read this you should say to yourself "any $n$ objects are comprised of some number $k$ of red objects and the remaining $n-k$ blue objects."
The sum needs to include all $k$ for which both the terms $\color{red}{\binom{K}{k}}$ and $\color{blue}{\binom{N-K}{n-k}}$ are nonzero, but it's fine to include any other values of $k$ because they will just introduce some extra zeros into the sum, which does not change it. We just need to make sure all relevant $k$ are included. It suffices to find an obvious lower bound for it ($0$ will do nicely and is more practicable than $-\infty$!) and an obvious upper bound ($N$ works because we cannot find more than $N$ objects altogether). A slightly better upper bound is $n$ (because $k$ counts the red objects in a set of $n$ things). Thus, writing these bounds explicitly and dividing both sides by $\binom{N}{n}$, we obtain
$$1 = \sum_{0\le k\le n}\frac{\color{red}{\binom{K}{k}}\color{blue}{\binom{N-K}{n-k}}}{\binom{N}{n}} .$$
Despite the notation, this formula does not implicitly assert that all values of $k$ in the range from $0$ to $n$ can occur in this distribution. About the only reason to fiddle with the inequalities and figure out what the smallest possible range of $k$ can be would be for writing programs that loop over these values: that might save a little time adding up some zeros.
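A quick numerical illustration of both points in R (the particular values of $N$, $K$ and $n$ are arbitrary):
choose(5, 6); choose(5, -1)   # both 0, as described above
N <- 20; K <- 7; n <- 5
sum(choose(K, 0:n) * choose(N - K, n - (0:n)) / choose(N, n))   # = 1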
55,230 | How the hypergeometric distribution sums to 1? | I don't think there is a problem here.
Note that $min(n,K)=n$ when $n\leq K$ and $min(n,K)=K$ when $K<n$. Whenever $n\leq K$, we will not have any problem as the upper bound in Vandermonde's identity is $n$. Also when $K<n$, we have: $k\leq K<n$. This is because $k$ is the number of successes and $K$ is the number of success states in the population. So obviously $k\leq K$. Therefore in this case, again we will not have any problem as the upper bound in Vandermonde's identity is $n$.
Also $\max(0,n+K-N)=0$ when $n+K-N\leq 0$ and $\max(0,n+K-N)=n+K-N$ when $n+K-N>0$. Therefore, if $n+K-N\leq 0$, then we will not have any problem as the lower bound in Vandermonde's identity is $0$. Now for the final case, suppose $n+K-N>0$. So $K>N-n\geq 0$. We know that $k\leq K$. So we either have:
$K\geq k>N-n\geq 0$
$K>N-n\geq k\geq 0$ or
$K>N-n\geq 0 \geq k$.
Now looking at 1, 2 and 3 above, we can see that in cases 1 and 2, as well as in case 3 when $k=0$, we will not have any problem, as the lower bound in Vandermonde's identity is $0$. Note that in case 3, $k<0$ is not possible, since the number of successes cannot be negative.
Note that $min(n,K)=n$ when $n\leq K$ and $min(n,K)=K$ when $K<n$. Whenever $n\leq K$, we will not have any problem as the upper bound in Vandermonde's ident | How the hypergeometric distribution sums to 1?
I don't think there is a problem here.
Note that $min(n,K)=n$ when $n\leq K$ and $min(n,K)=K$ when $K<n$. Whenever $n\leq K$, we will not have any problem as the upper bound in Vandermonde's identity is $n$. Also when $K<n$, we have: $k\leq K<n$. This is because $k$ is the number of successes and $K$ is the number of success states in the population. So obviously $k\leq K$. Therefore in this case, again we will not have any problem as the upper bound in Vandermonde's identity is $n$.
Also $max(0,n-K-N)=0$ when $n+K-N\leq 0$ and $max(0,n-K-N)=n-K-N$ when $n+K-N>0$. Therefore, if $n+K-N\leq 0$, then we will not have any problem as the lower bound in Vandermonde's identity is $0$. Now for the final case, suppose $n+K-N>0$. So $K>N-n\geq 0$. We know that $k\leq K$. So we either have:
$K\geq k>N-n\geq 0$
$K>N-n\geq k\geq 0$ or
$K>N-n\geq 0 \geq k$.
Now looking at 1, 2 and 3 above, we can see that again in case 1, 2 as well as case 3 when $k=0$, we will not have any problem as the lower bound in Vandermonde's identity is $0$. Note that in case 3, $k<0$ is not possible, since the number of successes cannot be negative. | How the hypergeometric distribution sums to 1?
I don't think there is a problem here.
Note that $min(n,K)=n$ when $n\leq K$ and $min(n,K)=K$ when $K<n$. Whenever $n\leq K$, we will not have any problem as the upper bound in Vandermonde's ident |
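As a quick numerical check of the identity discussed in the answers above (not part of the original posts; the values of Npop, Ksucc and ndraw are arbitrary), R's dhyper() confirms that the probabilities sum to 1 whether you sum over 0..n or over the tighter range, because the extra terms are zeros:
# hypergeometric pmf sums to 1; k values outside the "natural" range contribute zeros
Npop = 20; Ksucc = 6; ndraw = 9
sum(dhyper(0:ndraw, m = Ksucc, n = Npop - Ksucc, k = ndraw))    # 1
sum(dhyper(max(0, ndraw + Ksucc - Npop):min(ndraw, Ksucc),
           m = Ksucc, n = Npop - Ksucc, k = ndraw))             # also 1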
55,231 | When interpreting interactions in a factorial ANOVA, is it necessary to look at the residual cell means? | While it looks generally wise to require closer investigation of the data, the authors overlooked important facts that weaken their argument. I'll show these facts on the illustrative examples they provide.
1) The first is about the possible interaction of child's sex and health status on "grief" (about the child's death?). This is --as often in psychometrics-- measured on an ordinal, not a metric scale. So it is not permissible to calculate differences in grief. (What should it be, after all? How far are you bowed down to the floor with grief?) This eradicates a major requisite of the authors' argument, namely the decomposition of the effects into interaction and main effect by taking differences of the means (also not appropriate in an ordinal analysis). In the end, for such an ordinal layout, all you can say is in fact "healthy male > healthy female > sick female > sick male".
This is connected to a noteworthy feature of the term "interaction" in such settings, and to another flaw of both the authors and those they criticize. Namely, the only way to prove a $2\times 2$ ordinal interaction to be significant would be an X-shaped interaction plot (like in Figure 1 in the paper). Why?
Assume a "<"-shaped interaction plot like in the first example (Table 1). Since the sizes of the differences on the grief scale are arbitrary, you can choose a monotonic transformation that moves the 3 close to 1 and the -1 close to -3. This does not destroy the essential information content of the data. But now you almost have an "="-shaped interaction plot and would conclude (even from ANOVA, which is not appropriate for ordinal data anyway; one should use nonparametric procedures there) that there is no interaction.
So an ordinal interaction would have been "healthy male >= sick female > healthy female >= sick male". This X-shaped pattern cannot be destroyed by monotonically rescaling the (arbitrary) ordinal scaling of "grief".
2) Concerning the second example, there is a totally different flaw.
This example considers something metric, namely the numbers of hits of baseball players, who were subjected to a $2\times 2$ layout of possibly interacting conditions. Now it is OK to calculate differences in hits, and a decomposition into main effect and interaction effect is allowed. But is it unique?
We can never tell. Consider Table 6:
                   a0            a1
                 b0     b1     b0     b1
group mean        3      3      5      7
row effect     -1.5   -1.5    1.5    1.5
column effect  -0.5    0.5   -0.5    0.5
grand mean               4.5
interaction    +0.5   -0.5   -0.5   +0.5
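For readers who want to reproduce the decomposition, here is a small R sketch (only the cell means 3, 3, 5, 7 are taken from Table 6; everything else is plain arithmetic):
# decompose the 2x2 cell means into grand mean, row/column effects and interaction
m = matrix(c(3, 3, 5, 7), nrow = 2, byrow = TRUE,
           dimnames = list(c("a0", "a1"), c("b0", "b1")))
grand = mean(m)                                   # 4.5
row.eff = rowMeans(m) - grand                     # -1.5  1.5
col.eff = colMeans(m) - grand                     # -0.5  0.5
inter = sweep(sweep(m - grand, 1, row.eff), 2, col.eff)
inter                                             # +0.5 -0.5 / -0.5 +0.5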
What makes the authors believe that -1.5 is an unbiased estimator of the row effect of a0 and 1.5 of a1? They chose these values analogously to least squares estimation, but LSE can only estimate the expected value. It cannot tell us how to decompose unknown parameters into even more unknown summands.
And we are interested in these unknown summands! Why is there a column effect of $\pm 0.5$ between a0b0 and a0b1 if both cell values are exactly the same? It is because of the other cells. That means, due to completely different baseball players, namely those under condition a1, we conclude that if we treated a player of group a0 with condition b1 instead of b0, he would hit once more per game? Although in group a0 no difference between conditions b1 and b0 has been observed? Can this be true? Or is it simply a statistical mirage?
The statistical background of this phenomenon was worked out by Rao (1962) and is called estimability. It can be shown that in this simple $2\times2$ layout with all four interaction effects, the main effects are not estimable; that means they depend on something arbitrary. That is what causes the mirage.
The main effect estimators can only become unique if we remove the interactions from the model. So Rosnow and Rosenthal want to compare terms that are simply not present at the same time.
This error leads also to the erroneous conclusion that significant interactions are always X-shaped.
But they are not completely wrong: If you don't find a significant interaction in ANOVA and want to start considering only the main effects, one should have in mind that a type-II-error could have occured, and that there is in fact an interaction that biases the estimation and tests of the main effects. So an interaction plot with confidence intervals would be a good idea, as it also sheds more light on the effects themselves. | When interpreting interactions in a factorial ANOVA, is it necessary to look at the residual cell me | While it looks generally wise to require closer investigation of the data, the authors overlooked important facts that weaken their argumentation. I'll show these fact on the illustrational examples t | When interpreting interactions in a factorial ANOVA, is it necessary to look at the residual cell means?
While it looks generally wise to require closer investigation of the data, the authors overlooked important facts that weaken their argumentation. I'll show these fact on the illustrational examples they provide.
1) The first is about the possible interaction of child's sex and health status to "grief" (about the child's death?). This is --as often in psychometrics-- measured on an ordinal, not metric scale. So it is not allowed do calculate differences in grief. (What should if be after all? How far you are bowed down to the floor with grief?) This eradicates a major requisite of the author's argumentation, namely the decomposition of the effects into interaction and main effect by taking differences of the means (also not appropriate in ordinal analysis). In the end, for such an ordinal layout, all you can say is in fact "healthy male > healthy female > sick female > sick male".
This is in conjunction to a noteworthiness of the term "interaction" in such settings, and another flaw both of the authors and those they criticize. Namely, the only way to prove an $2\times 2$ ordinal interaction to be significant would be a X shaped interaction plot. (Like in Figure 1 in the paper). Why?
Assume a "<" shaped interaction plot like in the first example (Table 1). As the size of differences on the grief scale, you can choose a monotonic transformation that moves the 3 close to 1 and -1 close to -3. This does not destroy the essential information content in the data. But now you almost have a "=" shaped interaction plot and would conclude (even from ANOVA, which is not appropriate to ordinal data anyway; one should use nonparametric procedures there) that there is no interaction.
So an ordinal interaction would have been "healthy male >= sick female > healthy female >= sick male". This X-shaped pattern cannot be destroyed by monotonically rescaling the (arbitrary) ordinal scaling of "grief".
2) Concerning the second example, there is a totally different flaw.
This example considers something metric, namly the numbers of hits of baseball players, who were subjected to $2\times 2$ possibly interacting conditions. Now it is OK to calculate differences in hits, and a decomposition into main effect and interaction effect is allowed. But is it unique?
We can never tell. Consider Table 6:
a0 a1
b0 b1 b0 b1
group mean 3 3 5 7
row effect -1.5 -1.5 1.5 1.5
column effect -0.5 0.5 -0.5 0.5
grand mean 4.5
interaction +0.5 -0.5 -0.5 +0.5
What makes the authors believe that -1.5 is an unbiased estimator of the row effect of a0 and 1.5 of a1? They chose these values analogously to least squares estimation, but LSE can only estimate the expected value. It cannot tell us how to decompose unknown parameters into even more unknown summands.
And we are interested in these unkown summands! Why is there a colum effect of +/-0.5 between a0b0 and a0b1 if both cell values are exactly the same? It is because of the other cells. That means, due to completely different baseball players, namly those under condition a1, we conclude that if we would treat a player of group a0 with condition b1 instead of b0, he would hit once more per game? Although in group a0 no difference between condition b1 and b0 has been observed? Can this be true? Or is it simply a statistical mirage?
The statistical background of this phenomenon has been discovered by Rao (1962) and is called estimability. It can be shown that in this simple $2\times2$ layout with all four interaction effects, main effects are not estimable, that means they depend on something arbitrary. That causes this mirage.
The main effect estimators can only become unique if we remove the interactions from the model. So Rosnow and Rosenthal want to compare terms that are simply not present at the same time.
This error leads also to the erroneous conclusion that significant interactions are always X-shaped.
But they are not completely wrong: If you don't find a significant interaction in ANOVA and want to start considering only the main effects, one should have in mind that a type-II-error could have occured, and that there is in fact an interaction that biases the estimation and tests of the main effects. So an interaction plot with confidence intervals would be a good idea, as it also sheds more light on the effects themselves. | When interpreting interactions in a factorial ANOVA, is it necessary to look at the residual cell me
While it looks generally wise to require closer investigation of the data, the authors overlooked important facts that weaken their argumentation. I'll show these fact on the illustrational examples t |
55,232 | When interpreting interactions in a factorial ANOVA, is it necessary to look at the residual cell means? | This will not be a full answer, but I'll say a few things.
I am suspicious of any advice that says that such and such is "absolutely necessary." While looking at interaction effects is one way to help understand an interaction, it is not the only possible way, nor may it be the best way, in some contexts.
I also tend to discount advice that does not include the word "graph" or a synonym thereof. We do statistics because we want to understand what is going on in the data. A person who is unduly focused on $F$ and $t$ statistics, $P$ values, and residual effects is looking at trees and probably not appreciating the forest. I think an interaction plot -- or a dozen of them, depending on the complexity of the situation -- is almost always a very good start in understanding an interaction. Even in a case where the interaction $F$ test is not significant, an interaction plot that looks like an interaction is present provides a pretty strong message about the inadequacy of the data to identify it.
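For concreteness, a minimal R sketch of such a plot (toy simulated data; the cell means and factor names are made up, not taken from either paper):
# draw an interaction plot for a 2x2 layout before worrying about F tests
set.seed(3)
d = expand.grid(a = c("a0", "a1"), b = c("b0", "b1"), rep = 1:10)
cell.means = c(a0.b0 = 3, a1.b0 = 5, a0.b1 = 3, a1.b1 = 7)
d$y = rnorm(nrow(d), mean = cell.means[paste(d$a, d$b, sep = ".")])
with(d, interaction.plot(a, b, y, ylab = "mean response"))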
So my general answer to the question is to try to really understand what is going on, and use whatever it takes to achieve that understanding. | When interpreting interactions in a factorial ANOVA, is it necessary to look at the residual cell me | This will not be a full answer, but I'll say a few things.
I am suspicious of any advice that says that such and such is "absolutely necessary." While looking at interaction effects is one way to help | When interpreting interactions in a factorial ANOVA, is it necessary to look at the residual cell means?
This will not be a full answer, but I'll say a few things.
I am suspicious of any advice that says that such and such is "absolutely necessary." While looking at interaction effects is one way to help understand an interaction, it is not the only possible way, nor may it be the best way, in some contexts.
I also tend to discount advice that does not include the word "graph" or a synonym thereof. We do statistics because we want to understand what is going on in the data. A person who is unduly focused on $F$ and $t$ statistics, $P$ values, and residual effects is looking at trees and probably not appreciating the forest. I think an interaction plot -- or a dozen of them, depending on the complexity of the situation -- is almost always a very good start in understanding an interaction. Even in a case where the interaction $F$ test is not significant, an interaction plot that looks like an interaction is present provides a pretty strong message about the inadequacy of the data to identify it.
So my general answer to the question is to try to really understand what is going on, and use whatever it takes to achieve that understanding. | When interpreting interactions in a factorial ANOVA, is it necessary to look at the residual cell me
This will not be a full answer, but I'll say a few things.
I am suspicious of any advice that says that such and such is "absolutely necessary." While looking at interaction effects is one way to help |
55,233 | Sample covariance matrix | I believe there is a confusion of notation here. The paper uses $\mathbf I$ to denote the identity matrix, while you seem to use this symbol to denote a matrix where all its elements are equal to one, which the paper expresses using the notation $\mathbf1\mathbf1'$, $\mathbf1$ denoting a column vector of ones. (Moreover the paper defines $\mathbf X$ as a matrix where the series of a regressor is in one row, not in one column as is the most traditional case, but you don't seem troubled by this).
If we get past this notational issue, then the two expressions are the same (taking into account that you use $\mathbf I$ to denote $\mathbf 1\mathbf 1'$), since the matrix (using the paper's notation)
$$Q_T = \mathbf I - \frac{1}{T} \mathbf 1\mathbf1'$$
is idempotent, $Q_TQ_T= Q_T$ and symmetric, $Q_T' = Q_T$, and what it does is to de-mean the regressors (center them around their mean).
Assume we have 2 regressors $x,z$, with three observations. The authors define the regressor matrix as the $2 \times 3$ matrix
$$X = \left [\begin{matrix} x_{1} & x_{2} & x_{3}\\
z_{1} & z_{2} & z_{3}\\
\end{matrix}\right]$$
and so
$$Q_3 = \left[ \begin{matrix} 1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
\end{matrix} \right] - \frac{1}{T} \left[ \begin{matrix} 1 & 1 & 1\\
1 & 1 & 1\\
1 & 1 & 1\\
\end{matrix} \right]$$
Calculating $Q_3 X'$ shows the effect of $Q_3$.
Then the authors' expression is compactly written
$$S = \frac 1 T XQ_TX' =\frac 1 T XQ_TQ_TX' = \frac 1 T(Q_T'X')'(Q_TX') = \frac 1 T(Q_TX')'(Q_TX')$$
where we have used the properties of the $Q_T$ matrix stated. If we define the transpose matrix of regressors de-meaned by $\tilde X' = Q_TX'$, then we have
$$S = \frac 1 T\tilde X \tilde X'$$
which has dimensions $2 \times 2$ as it should, and is the sample covariance matrix, given how the original regressor matrix is defined.
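A quick numerical check of this claim (a sketch with simulated numbers, nothing taken from the paper itself):
# verify that (1/T) X Q_T X' reproduces the divisor-T sample covariance matrix
set.seed(42)
Tn = 5                                    # number of observations (the paper's T)
X = matrix(rnorm(2 * Tn), nrow = 2)       # regressors in rows, as in the paper
Q = diag(Tn) - matrix(1, Tn, Tn) / Tn     # centering matrix I - (1/T) 11'
X %*% Q %*% t(X) / Tn                     # S
cov(t(X)) * (Tn - 1) / Tn                 # same matrix; cov() uses divisor T-1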
If you too, write $\mathbf I$ for the identity matrix, then your expression is simply mistaken, since it does not lead to subtraction of the sample mean from the observations. | Sample covariance matrix | I believe there is a confusion of notation here. The paper uses $\mathbf I$ to denote the identity matrix, while you seem to use this symbol to denote a matrix where all its elements are equal to one, | Sample covariance matrix
I believe there is a confusion of notation here. The paper uses $\mathbf I$ to denote the identity matrix, while you seem to use this symbol to denote a matrix where all its elements are equal to one, which the paper expresses using the notation $\mathbf1\mathbf1'$, $\mathbf1$ denoting a column vector of ones. (Moreover the paper defines $\mathbf X$ as a matrix where the series of a regressor is in one row, not in one column as is the most traditional case, but you don't seem troubled by this).
If we get past this notational issue, then the two expressions are the same (taking into account that you use $\mathbf I$ to denote $\mathbf 1\mathbf 1'$), since the matrix (using the paper's notation)
$$Q_T = \mathbf I - \frac{1}{T} \mathbf 1\mathbf1'$$
is idempotent, $Q_TQ_T= Q_T$ and symmetric, $Q_T' = Q_T$, and what it does is to de-mean the regressors (center them around their mean).
Assume we have 2 regressors $x,z$, with three observations. The authors define the regressor matrix as $ 2 \times 3$
$$X = \left [\begin{matrix} x_{1} & x_{2} & x_{3}\\
z_{1} & z_{2} & z_{3}\\
\end{matrix}\right]$$
and so
$$Q_3 = \left[ \begin{matrix} 1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
\end{matrix} \right] - \frac{1}{T} \left[ \begin{matrix} 1 & 1 & 1\\
1 & 1 & 1\\
1 & 1 & 1\\
\end{matrix} \right]$$
Calculating $Q_3 X'$ shows the effect of $Q_3$.
Then the authors expression is compactly written
$$S = \frac 1 T XQ_TX' =\frac 1 T XQ_TQ_TX' = \frac 1 T(Q_T'X')'(Q_TX') = \frac 1 T(Q_TX')'(Q_TX')$$
where we have used the properties of the $Q_T$ matrix stated. If we define the transpose matrix of regressors de-meaned by $\tilde X' = Q_TX'$, then we have
$$S = \frac 1 T\tilde X \tilde X'$$
which has dimensions $2 \times 2$ as it should, and is the sample covariance matrix, given how the original regressor matrix is defined.
If you too, write $\mathbf I$ for the identity matrix, then your expression is simply mistaken, since it does not lead to subtraction of the sample mean from the observations. | Sample covariance matrix
I believe there is a confusion of notation here. The paper uses $\mathbf I$ to denote the identity matrix, while you seem to use this symbol to denote a matrix where all its elements are equal to one, |
55,234 | Sample covariance matrix | The simplest way to understand the equation for S is by rearranging it as follows, using the expression for means vector $m=\frac{1}{T}X1$ given in the same paper:
$S=\frac{1}{T}\left(XX'-T\left(\frac{1}{T}X1\right)\left(1'X'\frac{1}{T}\right)\right)=\frac{1}{T}XX'-mm'$.
It should be obvious now how the equation for S corresponds to the usual covariance formula $E[xy]-E[x]E[y]$.
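A short numerical check of this rearrangement (simulated data; variables in rows, as in the paper):
# S = (1/T) X X' - m m' equals the divisor-T covariance matrix
set.seed(1)
Tn = 6
X = matrix(rnorm(3 * Tn), nrow = 3)
m = X %*% rep(1, Tn) / Tn                 # column vector of variable means, m = X1/T
X %*% t(X) / Tn - m %*% t(m)              # S
cov(t(X)) * (Tn - 1) / Tn                 # matches (divisor-T covariance)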
The confusion is in the somewhat unusual notation $1$ for a conformant vector of ones: a $T\times 1$ vector of all 1s. So the mean vector $m$ simply contains the mean of each variable, $m_i=\frac{1}{T}\sum_{t=1}^T X_{it}$; i.e., the product of an $N\times T$ matrix and a $T\times 1$ vector is an $N\times 1$ vector. Note that $X1$ is the vector of row sums, i.e. the sum of each variable.
Now, $11'$ is the product of a $T\times 1$ and a $1\times T$ vector; it's a $T\times T$ matrix of all 1s.
$I$ is a usual $T\times T$ identity matrix. | Sample covariance matrix | The simplest way to understand the equation for S is by rearranging it as follows, using the expression for means vector $m=\frac{1}{T}X1$ given in the same paper:
$S=\frac{1}{T}\left(XX'-T\left(\frac | Sample covariance matrix
The simplest way to understand the equation for S is by rearranging it as follows, using the expression for means vector $m=\frac{1}{T}X1$ given in the same paper:
$S=\frac{1}{T}\left(XX'-T\left(\frac{1}{T}X1\right)\left(1'X'\frac{1}{T}\right)\right)=\frac{1}{T}XX'-mm'$.
It should be obvious now how the equation for S corresponds to a usual covariance formula $E[xy]-E[x]E[y]$
The confusion is in somewhat unusual notation $1$ for conformant vector of ones: a $T\times 1$ vector of all 1s. So, the mean vector $m$ is simply a mean of each variable $m_i=\frac{\sum_{t=1}^T X_{it}\times 1}{T}$, i.e. a product of $N\times T$ matrix and $T\times 1$ vector is $N\times 1$ vector. Note, that $X1$ is the sum of each variable.
Now, $11'$ is the product of $T\times 1$ and $1\times T$ vectors, it's a $T\times T$ matrix of all 1s.
$I$ is a usual $T\times T$ identity matrix. | Sample covariance matrix
The simplest way to understand the equation for S is by rearranging it as follows, using the expression for means vector $m=\frac{1}{T}X1$ given in the same paper:
$S=\frac{1}{T}\left(XX'-T\left(\frac |
55,235 | Meaning of Bagged Random Forests? | As a colleague of the authors, I can address this question.
To directly answer the OP, @rapaio is correct: the top quotation means that the authors created 10 separate bags each with a random forest of 10 trees -- there will be 100 total trees.
As @rapaio mentioned, there's no clear-cut reason why this performed better than 100 bags or a 100-tree random forest. The Weka implementation made it easy to perform all five experiments (single tree, 10 bags, 100 bags, a 100-tree random forest, 10 bags of 10-tree random forests) and the authors felt the result was interesting enough to mention. Whether this is something that shows up as a general trend across other datasets (such as Kaggle or UCI) could be the basis for a good research paper.
@rapaio did not mention it, but there may also be some interplay between the various oob estimates and the final performance metric -- AUC (not accuracy). Perhaps this is a trend only observed when optimizing for rank-order and not necessarily discrete predictions. | Meaning of Bagged Random Forests? | As a colleague of the authors, I can address this question.
To directly answer the OP, @rapaio is correct: the top quotation means that the authors created 10 separate bags each with a random forest | Meaning of Bagged Random Forests?
As a colleague of the authors, I can address this question.
To directly answer the OP, @rapaio is correct: the top quotation means that the authors created 10 separate bags each with a random forest of 10 trees -- there will be 100 total trees.
As @rapaio mentioned, there's no clear cut reason why this performed better than 100 bags or 100 random forest. The Weka implementation made it easy to perform all five experiments experiments (single tree, 10 bags, 100 bags, 100 random forest, 10 bags of 10 random forests) and the authors felt the result was interesting enough to mention. Whether this is something that shows up as a general trend against other datasets (such as Kaggle or UCI) could be the basis for a good research paper.
@rapaio did not mention it, but there may also be some interplay between the various oob estimates and the final performance metric -- AUC (not accuracy). Perhaps this is a trend only observed when optimizing for rank-order and not necessarily discrete predictions. | Meaning of Bagged Random Forests?
As a colleague of the authors, I can address this question.
To directly answer the OP, @rapaio is correct: the top quotation means that the authors created 10 separate bags each with a random forest |
55,236 | Meaning of Bagged Random Forests? | I would venture that it refers to regular random forests, but the author wants to bring out the distinction between a) the bagging / bootstrapping of the observations used for each tree and b) the random selection of a subset of the input parameters. not sure though. | Meaning of Bagged Random Forests? | I would venture that it refers to regular random forests, but the author wants to bring out the distinction between a) the bagging / bootstrapping of the observations used for each tree and b) the ran | Meaning of Bagged Random Forests?
I would venture that it refers to regular random forests, but the author wants to bring out the distinction between a) the bagging / bootstrapping of the observations used for each tree and b) the random selection of a subset of the input parameters. not sure though. | Meaning of Bagged Random Forests?
I would venture that it refers to regular random forests, but the author wants to bring out the distinction between a) the bagging / bootstrapping of the observations used for each tree and b) the ran |
55,237 | Meaning of Bagged Random Forests? | I tried to understand why bagging 10 random forests would work better than a random forest with 100 trees and I see no rational reason. I do not exclude that there could be some Weka implementation details.
However, to answer your question, I believe it is talking about a bagging ensemble with 10 bags and, in each bag, a random forest with 10 trees.
I do not have a proof, but there are some elements which I consider provide enough evidence for this:
In the first paragraph, the last sentence, which starts with "we found that neither 100..", talks about building either a bagging ensemble with 100 trees or a random forest with 100 trees. Now 100 is 10 x 10, that is, 10 bags x a random forest with 10 trees.
"10 bagged random forests" means the same thing as in the previous point (even if it is not usual to bag random forests)
in Weka it is very easy to combine classifiers in this way, since they can be chained together because they implement the same interface (this is a programming detail; I am a programmer, and I confirm that Weka obviously intended to make this kind of experimenting as easy as possible)
there is a way to decrease the total computed out-of-bag (OOB) error if you group 10 random forests with 10 trees each into bagging, instead of one random forest with 100 trees. If you have 10 bags and in each bag one random forest with 10 trees, then the OOB error is computed on the held-out instances of the 10 bootstrap samples from the bags. These 10 OOB sample sets are classified with a random forest instead of a single random tree. Of course, the random forest has in general less variability, so the OOB errors from the 10 bags could be lower than the average of OOB errors for 100 OOB sets predicted with single trees. I do not know if this is a real improvement; it might be. From my point of view it is not: it looks like the introduction of an optimistic bias into OOB estimation, but this is only my intuition, and since I don't have extensive experience, I might be wrong. Even so, it looks like a plausible track
last argument is a personal intuition: I asked myself why one would over-complicate these kind of formulations for nothing? I found reasonable to believe they did that with a purpose, and a "reasonable reason" is that they simply used a bagging of random forest. | Meaning of Bagged Random Forests? | I tried to understand why bagging 10 random forests would work better than a random forest with 100 tress and I see no rational reason. I do not exclude that there could be some Weka implementation de | Meaning of Bagged Random Forests?
I tried to understand why bagging 10 random forests would work better than a random forest with 100 tress and I see no rational reason. I do not exclude that there could be some Weka implementation details.
However to answer your question I believe it is talking about a bagging ensemble with 10 bags and in each bag a random forest with 10 trees.
I do not have a proof, but there are some elements which I consider it provides enough evidence for this:
In the first paragraph, the last sentence which starts with "we found that neither 100.." it talks about building either a bagging with 100 trees or a random forest with 100 trees. Now 100 is 10 x 10, which are 10 bags x a random forest with 10 trees.
10 bagged random forests means the same thing from the previous point (even if it is not usual to bag random forests)
in Weka is very easy to combine in this way some classifiers, since they can be chained together due to the fact that they implement the same interface (this is a programming stuff, I am a programmer, and I confirm that it is obvious that Weka intended to make such kind of experimenting as easy as possible)
there is a way to decrease the total compute out-of-the-bag error if you group 10 random forests with 10 trees into bagging, instead of 100 random forest. If you have 10 bags and in each bag one rf with 10 trees, than the oob error is computed for the other instance of the 10 bootstrap samples from the bags. These 10 oob samples sets are classified with a random forest instead of a normal random tree. Of course, the random forest has in general less variability, so the oob errors from the 10 bags could be lower than if you have the average of oob errors for 100 oobs predicted with trees. I do not know if this is a real improvement, it might be. From my point of view it is not, it looks like an introduction of a optimistic bias in oob estimation, but this is only my intuition and since I don't have an extensive experience, I might be wrong. Even so, it looks like a plausible track
last argument is a personal intuition: I asked myself why one would over-complicate these kind of formulations for nothing? I found reasonable to believe they did that with a purpose, and a "reasonable reason" is that they simply used a bagging of random forest. | Meaning of Bagged Random Forests?
I tried to understand why bagging 10 random forests would work better than a random forest with 100 tress and I see no rational reason. I do not exclude that there could be some Weka implementation de |
55,238 | Comparing different deep learning models? | Bengio's review of representation learning is probably as close as it gets. | Comparing different deep learning models? | Bengio's review of representation learning is probably as close as it gets. | Comparing different deep learning models?
Bengio's review of representation learning is probably as close as it gets. | Comparing different deep learning models?
Bengio's review of representation learning is probably as close as it gets.
55,239 | Comparing different deep learning models? | Here is a good resource I use for comparison - their performance on different benchmark datasets.
This is an excellent site that has an ordered list of results from most of the noteworthy papers.
Take note of the "details" button in the right column; it gives a short description of the testing methodology, for example whether an ensemble was used, what type of augmentation, etc.
A word of advice: the best thing you can do is try it yourself. From my personal experience, a significant part of the performance differences published in deep learning papers has more to do with training know-how (proper choice of hyperparameters and augmentation) than with the actual architecture. Sad but true.
UPDATE:
The site I posted now seems to be less frequently updated. I would humbly still suggest it as an excellent starting point. But I suggest all avid explorers to look further into citations of these original papers to see how the research evolved. | Comparing different deep learning models? | Here is a good resource I use for comparison - their performance on different benchmark datasets.
This is an excellent site that does that has an ordered results of most of the noteworthy papers.
Ta | Comparing different deep learning models?
Here is a good resource I use for comparison - their performance on different benchmark datasets.
This is an excellent site that does that has an ordered results of most of the noteworthy papers.
Take note of the "details" button on the right column, It gives a short description on their testing methodology, for example if an ensemble was used, what type of augmentation, etc.
A word of advice, the best thing you can do is try it yourself. From my personal experience, significant part of the performance difference published in deep learning papers has to do with training knowhow - proper choice of hyperparameters and augmentation than the actual architecture. sad but true.
UPDATE:
The site I posted now seems to be less frequently updated. I would humbly still suggest it as an excellent starting point. But I suggest all avid explorers to look further into citations of these original papers to see how the research evolved. | Comparing different deep learning models?
Here is a good resource I use for comparison - their performance on different benchmark datasets.
This is an excellent site that does that has an ordered results of most of the noteworthy papers.
Ta |
55,240 | Converting log odds coefficients to probabilities | The relation between odds & probabilities is non-linear, so a model with a constant odds ratio between males & females doesn't translate into one with a constant probability ratio (a.k.a. relative risk) between males & females—the latter depends on the intercept & values of other predictors. And you apply the inverse logit function to get a probability from an odds, not to get a probability ratio from an odds ratio.
Here the probability ratio between black males & black females is
$$\frac{\frac{\exp (-1.0976 + 0.4035)}{1 + \exp (-1.0976 + 0.4035)}}{
\frac{\exp (-1.0976)}{1 + \exp (-1.0976)}}\approx 1.331$$
while that between Hispanic males & Hispanic females is
$$\frac{\frac{\exp (-1.0976 + 0.1846 + 0.4035)}{1 + \exp (-1.0976 + 0.1846 + 0.4035)}}{
\frac{\exp (-1.0976 + 0.1846)}{1 + \exp (-1.0976 + 0.1846)}}\approx 1.311$$
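Both ratios can be reproduced with R's inverse-logit function plogis() (the coefficient values are the ones quoted above):
# probability ratios implied by the logistic model, via the inverse logit
b0 = -1.0976; b.male = 0.4035; b.hisp = 0.1846
plogis(b0 + b.male) / plogis(b0)                      # black males vs. females, ~1.33
plogis(b0 + b.hisp + b.male) / plogis(b0 + b.hisp)    # Hispanic males vs. females, ~1.31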
If you had another race with a large odds ratio the probability ratio could be very different. | Converting log odds coefficients to probabilities | The relation between odds & probabilities is non-linear, so a model with a constant odds ratio between males & females doesn't translate into one with a constant probability ratio (a.k.a. relative ris | Converting log odds coefficients to probabilities
The relation between odds & probabilities is non-linear, so a model with a constant odds ratio between males & females doesn't translate into one with a constant probability ratio (a.k.a. relative risk) between males & females—the latter depends on the intercept & values of other predictors. And you apply the inverse logit function to get a probability from an odds, not to get a probability ratio from an odds ratio.
Here the probability ratio between black males & black females is
$$\frac{\frac{\exp (-1.0976 + 0.4035)}{1 + \exp (-1.0976 + 0.4035)}}{
\frac{\exp (-1.0976)}{1 + \exp (-1.098)}}\approx 1.331$$
while that between Hispanic males & Hispanic females is
$$\frac{\frac{\exp (-1.0976 + 0.1846 + 0.4035)}{1 + \exp (-1.0976 + 0.1846 + 0.4035)}}{
\frac{\exp (-1.0976+ 0.1846 )}{1 + \exp (-1.098+ 0.1846)}}\approx 1.311$$
If you had another race with a large odds ratio the probability ratio could be very different. | Converting log odds coefficients to probabilities
The relation between odds & probabilities is non-linear, so a model with a constant odds ratio between males & females doesn't translate into one with a constant probability ratio (a.k.a. relative ris |
55,241 | Interpreting regression coefficients of log(y+1) transformed responses | An alternative to thinking in terms of a GM is semi-elasticity.
Your model for the expected value is something like $$E[\ln(y+1) \vert x]= \alpha + \beta \cdot x + \gamma \cdot z$$
Taking the derivative of that with respect to $x$, you get $$\frac{\partial E[\ln(y+1) \vert x]}{\partial x}= \frac{1}{y+1}\cdot \frac{\partial y}{\partial x} = \beta $$
This can be rewritten as $$\frac{100 \cdot \frac{\Delta y}{y+1}}{\Delta x}=100 \cdot \beta,$$
which is almost the same as the equation for semi-elasticity,
$$\epsilon = \frac{100 \cdot \frac{\Delta y}{y}}{\Delta x}.$$
Common practice is to say that when $y$ is large, the two equations are basically the same, and to interpret $\beta$ as the percentage change in $y$ associated with one additional unit of $x$.
If your X is a set of dummies rather than continuous, you can do this instead.
Having said all that, it is good to think carefully about the origin of the zeros in your model. If there are lots of them, adding .01 can often drastically change your results compared to adding 1 or .001. Since the choice of constant is usually arbitrary, doing some robustness checks with different constants or even tuning the constant can work well. Another option is GLM, like a Poisson with a heteroskedastic variance that relaxes the mean-variance equality assumption, which obviates the need for any transformation entirely. The latter is my favorite option.
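To illustrate that last point, a hedged sketch with simulated count data (nothing here comes from the question itself): a quasi-Poisson GLM targets the log-scale effect directly, while the slope from regressing log(y+1) only roughly approximates it.
set.seed(123)
n = 500
x = rnorm(n)
y = rpois(n, lambda = exp(1 + 0.3 * x))        # counts including some zeros
coef(lm(log(y + 1) ~ x))["x"]                  # attenuated relative to 0.3
coef(glm(y ~ x, family = quasipoisson))["x"]   # close to the true 0.3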
If zeros come from a different process than the positives, the interpretation gets more fraught. One example is expenditures on vacation and the number of children, where a big share of HHs spend nothing. More children make it less likely that a family goes on vacation, but conditional on going on one, more children lead to higher expenditures. Here the zeros come from a different choice process than the positive values, and so a more complicated model is needed. | Interpreting regression coefficients of log(y+1) transformed responses | An alternative to thinking in terms of a GM is semi-elasticity.
Your model for the expected value is something like $$E[\ln(y+1) \vert x]= \alpha + \beta \cdot x + \gamma \cdot z$$
Taking the derivati | Interpreting regression coefficients of log(y+1) transformed responses
An alternative to thinking in terms of a GM is semi-elasticity.
Your model for the expected value is something like $$E[\ln(y+1) \vert x]= \alpha + \beta \cdot x + \gamma \cdot z$$
Taking the derivative of that with respect to $x$, you get $$\frac{\partial E[\ln(y+1) \vert x]}{\partial x}= \frac{1}{y+1}\cdot \frac{\partial y}{\partial x} = \beta $$
This can be rewritten as $$\frac{100 \cdot \frac{\Delta y}{y+1}}{\Delta x}=100 \cdot \beta,$$
which is almost the same as the equation for semi-elasticity,
$$\epsilon = \frac{100 \cdot \frac{\Delta y}{y}}{\Delta x}.$$
Common practice is to say that when $y$ is large, the two equations are basically the same, and to interpret $\beta$ as the percentage change in $y$ associated with one additional unit of $x$.
If your X is a set of dummies rather than continuous, you can do this instead.
Having said all that, it is good to think carefully about the origin of the zeros in your model. If there are lots of them, adding .01 can often drastically change your results compared to adding 1 or .001. Since the choice of constant is usually arbitrary, doing some robustness checks with different constants or even tuning the constant can work well. Another option is GLM, like a Poisson with a heteroskedastic variance that relaxes the mean-variance equality assumption, which obviates the need for any transformation entirely. The latter is my favorite option.
If zeros come from a different process than the positives, the interpretation gets more fraught. One example is expenditures on vacation and the number of children, where a big share of HHs spend nothing. More children make it less likely that a family goes on vacation, but conditional on going on one, more children lead to higher expenditures. Here the zeros come from a different choice process than the positive values, and so a more complicated model is needed. | Interpreting regression coefficients of log(y+1) transformed responses
An alternative to thinking in terms of a GM is semi-elasticity.
Your model for the expected value is something like $$E[\ln(y+1) \vert x]= \alpha + \beta \cdot x + \gamma \cdot z$$
Taking the derivati |
55,242 | Interpreting regression coefficients of log(y+1) transformed responses | Disclaimer: I would highly advise against fitting log(y+1) in order to prevent the logarithm of negative values. This might be commonly used, but that alone does not yet make it a good practice. In many cases there are better techniques. In the regression case one can use GLM or non-linear regression.
We can estimate it with a linear approximation by looking at the derivative of
$$\frac{\partial}{\partial x} f(x) = \frac{\partial}{\partial x}\sqrt[n]{\prod_{i=1}^n(y_i + 1 + x)} $$
We have
$$\frac{\partial}{\partial x} \log f(x) = \frac{1}{n} \sum_{i=1}^{n}\frac{1}{y_i +1 +x} $$
And
$$\frac{\partial}{\partial x} f(x) = f(x)\frac{\partial}{\partial x} \left[\log f(x)\right] = f(x) \frac{1}{n} \sum_{i=1}^{n} \frac{1}{y_i +1 +x}$$
The linear approximation is then
$$ \sqrt[n]{\prod_{i=1}^n(y_i)} \approx \sqrt[n]{\prod_{i=1}^n(y_i+1)} \cdot \left( 1- \overline{1/y}\right) $$
So the difference between the geometric means of $y$ and $y+1$ relates approximately to the mean of $1/y$ and you could compute bounds of the estimate by the means of $1/y$ and $1/(y+1)$
Computational example:
set.seed(1)
n = 10
y = runif(n,2,10)
# true value 5.896058
prod(y)^(1/n)
# lowerbound 5.673379
exp(mean(log(y+1))) * (1-mean(1/y))
# upperbound 5.911496
exp(mean(log(y+1))) * (1-mean(1/(y+1))) | Interpreting regression coefficients of log(y+1) transformed responses | Disclaimer: I would highly advise against fitting log(y+1) in order to prevent the logarithm of negative values. This might be commonly used, but that alone does not yet make it a good practice. In ma | Interpreting regression coefficients of log(y+1) transformed responses
Disclaimer: I would highly advise against fitting log(y+1) in order to prevent the logarithm of negative values. This might be commonly used, but that alone does not yet make it a good practice. In many cases there are better techniques. In the regression case one can use GLM or non-linear regression.
We can estimate it with a linear approximation by looking at the derivative of
$$\frac{\partial}{\partial x} f(x) = \frac{\partial}{\partial x}\sqrt[n]{\prod_{i=1}^n(y_i + 1 + x)} $$
We have
$$\frac{\partial}{\partial x} \log f(x) = \frac{1}{n} \sum_{i=1}^{n}\frac{1}{y_i +1 +x} $$
And
$$\frac{\partial}{\partial x} f(x) = f(x)\frac{\partial}{\partial x} \left[\log f(x)\right] = f(x) \frac{1}{n} \sum_{i=1}^{n} \frac{1}{y_i +1 +x}$$
The linear approximation is then
$$ \sqrt[n]{\prod_{i=1}^n(y_i)} \approx \sqrt[n]{\prod_{i=1}^n(y_i+1)} \cdot \left( 1- \overline{1/y}\right) $$
So the difference between the geometric means of $y$ and $y+1$ relates approximately to the mean of $1/y$ and you could compute bounds of the estimate by the means of $1/y$ and $1/(y+1)$
Computational example:
set.seed(1)
n = 10
y = runif(n,2,10)
# true value 5.896058
prod(y)^(1/n)
# lowerbound 5.673379
exp(mean(log(y+1))) * (1-mean(1/y))
# upperbound 5.911496
exp(mean(log(y+1))) * (1-mean(1/(y+1))) | Interpreting regression coefficients of log(y+1) transformed responses
Disclaimer: I would highly advise against fitting log(y+1) in order to prevent the logarithm of negative values. This might be commonly used, but that alone does not yet make it a good practice. In ma |
55,243 | GEE iteration process | You assume:
a link function $g(\mu_{ij})=\mathbf{x}_{ij}'\boldsymbol{\beta}$;
the conditional variance of each $y_{ij}$, $\text{Var}(y_{ij}\mid\mathbf{x}_{ij})=\phi v(\mu_{ij})$;
the pairwise within-subject association, $\mathbf{V}_i=\mathbf{A}^{1/2}_i\mathbf{C}_i\mathbf{A}^{1/2}_i$, where $\mathbf{A}_i=\text{diag}[\text{Var}(y_{ij}\mid\mathbf{x}_{ij})]$ and $\mathbf{C}_i$ is a correlation matrix depending on a set of parameters $\boldsymbol{\alpha}$.
GEE minimizes $\sum_i(\mathbf{y}_i-\boldsymbol\mu_i)'\mathbf{V}_i^{-1}(\mathbf{y}_i-\boldsymbol\mu_i)$, where $\mu_{ij}=g^{-1}(\mathbf{x}_{ij}'\boldsymbol{\beta})$, with respect to $\boldsymbol{\beta}$. To do so, it must solve the generalized estimating equations:
$$\sum_{i=1}^N\mathbf{D}_i'\mathbf{V}^{-1}_i(\mathbf{y}_i-\boldsymbol\mu_i)=0$$
where $\mathbf{D}_i$ is the matrix of the derivatives of $\boldsymbol{\mu}_i$ with respect to $\boldsymbol\beta$ and is therefore a function of $\boldsymbol\beta$, while $\mathbf{V}_i$ is a function of $\boldsymbol\beta$, $\boldsymbol\alpha$, and $\phi$.
The estimation procedure estimates a first $\mathbf{V}_i$ matrix from the residuals of an OLS linear regression, then the two-stage iterative procedure is:
given the current estimate of $\mathbf{V}_i$, i.e. of $\boldsymbol\beta$, $\boldsymbol\alpha$ and $\phi$, compute an updated estimate of $\boldsymbol{\beta}$ as the solution to the generalized estimating equations;
use this new estimate $\hat{\boldsymbol\beta}$ to update estimates of $\boldsymbol\alpha$ and $\phi$, and so of $\mathbf{V}_i$, from the standardized residuals. | GEE iteration process | You assume:
a link function $g(\mu_{ij})=\mathbf{x}_{ij}'\boldsymbol{\beta}$;
the conditional variance of each $y_{ij}$, $\text{Var}(y_{ij}\mid\mathbf{x}_{ij})=\phi v(\mu_{ij})$;
the pairwise within- | GEE iteration process
You assume:
a link function $g(\mu_{ij})=\mathbf{x}_{ij}'\boldsymbol{\beta}$;
the conditional variance of each $y_{ij}$, $\text{Var}(y_{ij}\mid\mathbf{x}_{ij})=\phi v(\mu_{ij})$;
the pairwise within-subject association, $\mathbf{V}_i=\mathbf{A}^{1/2}_i\mathbf{C}_i\mathbf{A}^{1/2}_i$, where $\mathbf{A}_i=\text{diag}[\text{Var}(y_{ij}\mid\mathbf{x}_{ij})]$ and $\mathbf{C}_i$ is a correlation matrix depending on a set of parameters $\boldsymbol{\alpha}$.
GEE minimizes $\sum_i(\mathbf{y}_i-\boldsymbol\mu_i)'\mathbf{V}_i^{-1}(\mathbf{y}_i-\boldsymbol\mu_i)$, where $\mu_{ij}=g^{-1}(\mathbf{x}_{ij}'\boldsymbol{\beta})$, with respect to $\boldsymbol{\beta}$. To do so, it must solve the generalized estimating equations:
$$\sum_{i=1}^N\mathbf{D}_i'\mathbf{V}^{-1}_i(\mathbf{y}_i-\boldsymbol\mu_i)=0$$
where $\mathbf{D}_i$ is the matrix of the derivatives of $\boldsymbol{\mu}_i$ with respect to $\boldsymbol\beta$ and is therefore a function of $\boldsymbol\beta$, while $\mathbf{V}_i$ is a function of $\boldsymbol\beta$, $\boldsymbol\alpha$, and $\phi$.
The estimation procedure estimates a first $\mathbf{V}_i$ matrix from the residuals of an OLS linear regression, then the two-stage iterative procedure is:
given current estimate of $\mathbf{V}_i$, i.e. of $\boldsymbol\beta$ $\boldsymbol\alpha$ and $\phi$, compute an updated estimate of $\boldsymbol{\beta}$ as the solution to the generalized estimating equations;
use this new estimate $\hat{\boldsymbol\beta}$ to update estimates of $\boldsymbol\alpha$ and $\phi$, and so of $\mathbf{V}_i$, from the standardized residuals. | GEE iteration process
You assume:
a link function $g(\mu_{ij})=\mathbf{x}_{ij}'\boldsymbol{\beta}$;
the conditional variance of each $y_{ij}$, $\text{Var}(y_{ij}\mid\mathbf{x}_{ij})=\phi v(\mu_{ij})$;
the pairwise within- |
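The two-stage procedure described in this answer is what GEE software iterates until the estimates stabilize. As a hedged illustration (simulated data and the geepack package; neither comes from the original answer):
library(geepack)
set.seed(1)
d = data.frame(id = rep(1:50, each = 4), x = rnorm(200))
d$y = 1 + 0.5 * d$x + rep(rnorm(50), each = 4) + rnorm(200)   # within-subject correlation
fit = geeglm(y ~ x, id = id, data = d, family = gaussian, corstr = "exchangeable")
summary(fit)   # reports beta with robust SEs, the estimated correlation alpha, and the scale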
55,244 | Remove effect of a factor on continuous proportion data using regression in R | The big issue, I think, is that you need a model of the data to work with. These could be regression models or simply your data fit to a distribution. Your data could theoretically be consistent with a number of models / distributions (such as the normal distribution, proportions of successes arising from a binomial distribution with the appropriate total numbers of trials, or a beta distribution). Since you say that these are fake data just to illustrate the problem, and that you have continuous proportions, I will simply go with the idea that these proportions come from a beta distribution.
In R, you can model beta distributed data using the betareg package. You should read the vignette (pdf), which is very helpful for understanding the ideas about beta regression and how to use the package to do it. Note in particular that they parameterize the beta distribution differently than the typical parameterization. Instead of using $\alpha$ and $\beta$ (which are, e.g., the shape1 and shape2 arguments in the standard ?pbeta, etc., functions), they use the mean proportion, $\mu$, and the precision, $\phi$, which are defined as follows:
\begin{align}
\mu &= \frac{\alpha}{(\alpha + \beta)} &\text{and,}& &\phi = \alpha + \beta \\
\ &\ &\ & &\ \\
& &\text{thus,}& & \\
\ &\ &\ & &\ \\
\alpha &= \mu\times\phi &\text{and,}& &\beta = \phi - \alpha
\end{align}
We can use these equations to move between the two parameterizations and therefore the two sets of functions. Then we can link the modeling in betareg to the distribution and quantile functions. The operative transformation will be based on an analogy to how qq-plots can be used to compare two distributions.
First, let's model these data with betareg and determine if the precision is constant between the groups:
library(betareg)
library(lmtest)
# test of constant precision based on beta distribution:
cp = betareg(x~f, df)
vp = betareg(x~f|f, df)
lrtest(cp, vp)
# Likelihood ratio test
#
# Model 1: x ~ f
# Model 2: x ~ f | f
# #Df LogLik Df Chisq Pr(>Chisq)
# 1 3 9.2136
# 2 4 9.4793 1 0.5314 0.466
For these (fake) data, there is little reason to believe that the precision differs, but for your real data, they might. So I will demonstrate the more complicated version, which you could simplify if you want. At any rate, the next step is to determine the estimated $\alpha$ (shape1) and $\beta$ (shape2) parameters for A and B by converting the model's coefficients to the distribution parameters (don't forget the link functions!) and using the above formulas to convert between the two parameterizations:
summary(vp)
# ...
# Coefficients (mean model with logit link):
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -1.3153 0.2165 -6.074 1.25e-09 ***
# fB 1.4260 0.3178 4.487 7.23e-06 ***
#
# Phi coefficients (precision model with log link):
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) 3.0094 0.5695 5.284 1.26e-07 ***
# fB -0.5848 0.7943 -0.736 0.462
# ...
lo.to.prop = function(x){
o = exp(x)
p = o / (o+1)
return(p)
}
alpha.A = lo.to.prop(coef(vp)[1] ) * exp(coef(vp)[3] )
alpha.B = lo.to.prop(coef(vp)[2]+coef(vp)[1]) * exp(coef(vp)[4]+coef(vp)[3])
beta.A = exp( coef(vp)[3] ) - alpha.A
beta.B = exp( coef(vp)[4]+coef(vp)[3]) - alpha.B
Due to the complexity of this, we might do a common-sense check on the final values. (This also suggests another--simpler--way to get these parameters, if you don't need to model them and/or don't need to test if the precisions are the same.)
cbind(c(alpha.A, beta.A), c(alpha.B, beta.B))
# [,1] [,2]
# (Intercept) 4.290073 5.960932
# (phi)_(Intercept) 15.984533 5.336616
# parameterization for beta distribution:
library(fitdistrplus)
fitdist(with(df, x[f=="A"]), "beta")
# Fitting of the distribution ' beta ' by maximum likelihood
# Parameters:
# estimate Std. Error
# shape1 4.290883 2.389871
# shape2 15.987505 9.295580
fitdist(with(df, x[f=="B"]), "beta")
# Fitting of the distribution ' beta ' by maximum likelihood
# Parameters:
# estimate Std. Error
# shape1 5.960404 3.379447
# shape2 5.335963 3.010541
Now, we convert the raw proportions in B to their corresponding beta probability (that is, how far through their own beta distribution they lie), and convert those probabilities to quantiles of A's beta distribution:
p.B = pbeta(df$x[df$f=="B"], shape1=alpha.B, shape2=beta.B)
q.B.as.A = qbeta(p.B, shape1=alpha.A, shape2=beta.A)
For a final step, let's take a look at the new values:
cbind(df$x[df$f=="B"], q.B.as.A)
# q.B.as.A
# [1,] 0.50 0.18502881
# [2,] 0.30 0.08784683
# [3,] 0.45 0.15784496
# [4,] 0.60 0.24648412
# [5,] 0.54 0.20838604
# [6,] 0.77 0.38246863 | Remove effect of a factor on continuous proportion data using regression in R | The big issue, I think, is that you need a model of the data to work with. These could be regression models or simply your data fit to a distribution. Your data could theoretically be consistent wit | Remove effect of a factor on continuous proportion data using regression in R
The big issue, I think, is that you need a model of the data to work with. These could be regression models or simply your data fit to a distribution. Your data could theoretically be consistent with a number of models / distributions (such as the normal distribution, proportions of successes arising from a binomial distribution with the appropriate total numbers of trials, or a beta distribution). Since you say that these are fake data just to illustrate the problem, and that you have continuous proportions, I will simply go with the idea that these proportions come from a beta distribution.
In R, you can model beta distributed data using the betareg package. You should read the vignette (pdf), which is very helpful for understanding the ideas about beta regression and how to use the package to do it. Note in particular that they parameterize the beta distribution differently than the typical parameterization. Instead of using $\alpha$ and $\beta$ (which are, e.g., the shape1 and shape2 arguments in the standard ?pbeta, etc., functions), they use the mean proportion, $\mu$, and the precision, $\phi$, which are defined as follows:
\begin{align}
\mu &= \frac{\alpha}{(\alpha + \beta)} &\text{and,}& &\phi = \alpha + \beta \\
\ &\ &\ & &\ \\
& &\text{thus,}& & \\
\ &\ &\ & &\ \\
\alpha &= \mu\times\phi &\text{and,}& &\beta = \phi - \alpha
\end{align}
We can use these equations to move between the two parameterizations and therefore the two sets of functions. Then we can link the modeling in betareg to the distribution and quantile functions. The operative transformation will be based on an analogy to how qq-plots can be used to compare two distributions.
First, let's model these data with betareg and determine if the precision is constant between the groups:
library(betareg)
library(lmtest)
# test of constant precision based on beta distribution:
cp = betareg(x~f, df)
vp = betareg(x~f|f, df)
lrtest(cp, vp)
# Likelihood ratio test
#
# Model 1: x ~ f
# Model 2: x ~ f | f
# #Df LogLik Df Chisq Pr(>Chisq)
# 1 3 9.2136
# 2 4 9.4793 1 0.5314 0.466
For these (fake) data, there is little reason to believe that the precision differs, but for your real data, they might. So I will demonstrate the more complicated version, which you could simplify if you want. At any rate, the next step is to determine the estimated $\alpha$ (shape1) and $\beta$ (shape2) parameters for A and B by converting the model's coefficients to the distribution parameters (don't forget the link functions!) and using the above formulas to convert between the two parameterizations:
summary(vp)
# ...
# Coefficients (mean model with logit link):
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -1.3153 0.2165 -6.074 1.25e-09 ***
# fB 1.4260 0.3178 4.487 7.23e-06 ***
#
# Phi coefficients (precision model with log link):
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) 3.0094 0.5695 5.284 1.26e-07 ***
# fB -0.5848 0.7943 -0.736 0.462
# ...
lo.to.prop = function(x){
o = exp(x)
p = o / (o+1)
return(p)
}
alpha.A = lo.to.prop(coef(vp)[1] ) * exp(coef(vp)[3] )
alpha.B = lo.to.prop(coef(vp)[2]+coef(vp)[1]) * exp(coef(vp)[4]+coef(vp)[3])
beta.A = exp( coef(vp)[3] ) - alpha.A
beta.B = exp( coef(vp)[4]+coef(vp)[3]) - alpha.B
Due to the complexity of this, we might do a common-sense check on the final values. (This also suggests another--simpler--way to get these parameters, if you don't need to model them and/or don't need to test if the precisions are the same.)
cbind(c(alpha.A, beta.A), c(alpha.B, beta.B))
# [,1] [,2]
# (Intercept) 4.290073 5.960932
# (phi)_(Intercept) 15.984533 5.336616
# parameterization for beta distribution:
library(fitdistrplus)
fitdist(with(df, x[f=="A"]), "beta")
# Fitting of the distribution ' beta ' by maximum likelihood
# Parameters:
# estimate Std. Error
# shape1 4.290883 2.389871
# shape2 15.987505 9.295580
fitdist(with(df, x[f=="B"]), "beta")
# Fitting of the distribution ' beta ' by maximum likelihood
# Parameters:
# estimate Std. Error
# shape1 5.960404 3.379447
# shape2 5.335963 3.010541
Now, we convert the raw proportions in B to their corresponding beta probability (that is, how far through their own beta distribution they lie), and convert those probabilities to quantiles of A's beta distribution:
p.B = pbeta(df$x[df$f=="B"], shape1=alpha.B, shape2=beta.B)
q.B.as.A = qbeta(p.B, shape1=alpha.A, shape2=beta.A)
For a final step, let's take a look at the new values:
cbind(df$x[df$f=="B"], q.B.as.A)
# q.B.as.A
# [1,] 0.50 0.18502881
# [2,] 0.30 0.08784683
# [3,] 0.45 0.15784496
# [4,] 0.60 0.24648412
# [5,] 0.54 0.20838604
# [6,] 0.77 0.38246863 | Remove effect of a factor on continuous proportion data using regression in R
The big issue, I think, is that you need a model of the data to work with. These could be regression models or simply your data fit to a distribution. Your data could theoretically be consistent wit |
55,245 | Why we shouldn't be obsessed with unbiasedness | Some people are annoyingly obsessed with unbiasedness. Bias and dispersion are measures of uncertainty which roughly correspond to accuracy and precision. You usually have a trade-off between accuracy and precision: some estimators may be more precise but less accurate, and vice versa.
MSE is the sum of the squared bias and the variance: $MSE=E[(\hat{\beta}-\beta)^2]=E[(\hat{\beta}-E[\hat{\beta}]+E[\hat{\beta}]-\beta)^2]=\text{bias}^2+\text{Var}[\hat{\beta}]$
Example. Let's say your estimator follows a Breit-Wigner (Cauchy) distribution.
There is no unbiased estimator. The bias is defined as $\text{bias}=E[\hat{\beta}]-\beta$, where $\beta$ is the true value and $\hat{\beta}$ is its estimator. In this case $E[\hat{\beta}]$ is not defined mathematically, so you can't compute the bias.
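A small simulation illustrating the point (a sketch; the location is taken to be 0, and rcauchy() is R's generator for the Cauchy, i.e. Breit-Wigner, distribution):
# the sample mean of Cauchy data never settles down; the sample median does
set.seed(7)
sims = replicate(1000, {x = rcauchy(1000); c(mean = mean(x), median = median(x))})
apply(sims, 1, quantile, probs = c(0.05, 0.95))   # mean varies wildly, median stays tight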
This is an extreme example where unbiased estimator does not exist at all. | Why we shouldn't be obsessed with unbiasedness | Some people are annoyingly obsessed with unbiasedness. Bias and dispersion are measures of uncertainty which roughly correspond to accuracy and precision. You usually have a trade-off between accuracy | Why we shouldn't be obsessed with unbiasedness
55,246 | Why we shouldn't be obsessed with unbiasedness | The motivation of the statement is a rejection of some very important work from Rao-Blackwell about the provision of so called UMVUEs (Uniform minimum variance unbiased estimators). This showed that there is a lower bound on variance of unbiased estimators, and if you achieve that, you usually get a very good estimator. Many of our most popular statistics are special cases of that, such as the z-test for normal data, or the least squares regression models (among linear estimators, which makes LS BLUE (best linear unbiased estimator)).
The big question is what happens when we stop caring about bias? Do we get better estimators? The answer is YES. For inference, plenty of biased estimators are extremely efficient at detecting associations in data. A great way of evaluating the quality of an estimator is with its MSE (mean squared error), or:
$\mbox{MSE}(\hat{\theta}) = \mathcal{E}\left( (\hat{\theta} - \theta)^2\right)$
This can be rewritten as:
$\mbox{MSE}(\hat{\theta}) = \mathcal{E}\left( (\mathcal{E}(\hat{\theta}) - \theta)^2\right) + \mathcal{E} \left( (\mathcal{E}(\hat{\theta}) - \hat{\theta} )^2 \right)$
which is the sum of the squared bias and the variance of the estimator.
An example of an estimator that has better MSE than the UMVUE for a multivariate normal model is ridge regression. What your professor may have referred to regarding "shrink to zero" is shrinkage: the tendency for high dimensional statistics to have poor out-of-sample validity. The "squared error loss" (which is variance plus squared-bias) can be improved upon relative to the multivariate normal MLE. The ridge estimator uses an L2 penalty to penalize highly variable estimates. The LASSO uses an L1 penalty and also has shrinkage properties. The L1 penalty forces relatively small estimates to be exactly zero, but it is an abuse of terminology to claim they "shrink to zero" when shrinkage refers to optimizing MSE. Rather, we constrain.
http://www.few.vu.nl/~wvanwie/Courses/HighdimensionalDataAnalysis/WNvanWieringen_HDDA_Lecture234_RidgeRegression_20182019.pdf
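As a rough sketch of the ridge idea (illustrative only, not taken from the lecture notes above), the L2-penalized coefficients have a closed form, and a modest penalty can reduce estimation error relative to OLS even though the estimates are biased:
set.seed(1)
n <- 50; p <- 20
X <- matrix(rnorm(n * p), n, p)
beta <- rnorm(p, sd = 0.3)
y <- X %*% beta + rnorm(n)
lambda <- 2
b.ols   <- solve(crossprod(X), crossprod(X, y))                      # unbiased, higher variance
b.ridge <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))   # biased, shrunken toward zero
c(ols = sum((b.ols - beta)^2), ridge = sum((b.ridge - beta)^2))      # estimation error; ridge is often (not always) smaller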
Bayes estimators which are posterior means, e.g. under a conjugate prior, minimize posterior expected squared error loss.
Bearing that in mind, if MSE were all we cared about controlling for, you can come up with a plethora of estimators which are a little biased but with much smaller variance than the UMVUE. And given that UMVUE estimators can have complicated asymptotic distributions, or are sometimes difficult to find, or sometimes don't exist, we often seek out biased estimators that lack all those problems.
55,247 | Data mining techniques in R for advertising and sales data | Given that you have a time series, with possible influences of trend and seasonality on sales, I recommend that you look for time series techniques that can handle causal effects such as advertising. This thread should be a good starting point, although your focus appears not to be forecasting.
Try something like this:
> library(forecast)
> model <- auto.arima(data_ts[,"Sales"],xreg=data_ts[,"Advertising"])
This will build an ARIMAX model for sales, with advertising as an external variable. You can then do summary(model) to see, e.g., parameter estimates.
> summary(model)
Series: data_ts[, "Sales"]
ARIMA(0,0,0)(0,1,0)[12]
Coefficients:
data_ts[, "Advertising"]
1.6445
s.e. 0.6574
sigma^2 estimated as 575.3: log likelihood=-51
AIC=106 AICc=106.57 BIC=108.35
Training set error measures:
ME RMSE MAE MPE MAPE MASE
Training set -2.821585 13.84857 9.039446 -40.91741 64.68516 0.5506261
ACF1
Training set 0.003027406
We see that ARIMAX believes that each unit of advertising increases sales by 1.64. You can plot:
plot(data_ts[,"Sales"])
lines(data_ts[,"Advertising"],col="red")
If you have future values data_ts_ad_future for your advertising, you can forecast and plot point forecasts and prediction intervals:
set.seed(1)
data_ts_ad_future <- ts(sample(data_ts[,"Advertising"],12,replace=TRUE),
start=c(2009,1),frequency=frequency(data_ts[,"Advertising"]))
fcst <- forecast(model,xreg=data_ts_ad_future)
plot(fcst)
lines(data_ts[,"Advertising"],col="red")
lines(data_ts_ad_future,col="red",lty=2)
55,248 | Data mining techniques in R for advertising and sales data | Keeping models as simple as possible(but not too simple) is very important. There is absolutely no proof that one should incorporate seasonal differencing into a reasonable model for your data. Some analysts believe that complicated models will yield higher consulting fees. Differencing is a form of a transformation(complication) and like other transformations (drugs/alcohol) can have consequences. Early references (before Intervention Detection ala http://www.unc.edu/~jbhill/tsay.pdf) not being aware of the need for seasonal pulses often incorporated seasonal AR or seasonal differencing when a few seasonal indicators night be sufficient. The ACF of the original series is devoid of any need for seasonal differencing. The acf of lag 12 is induced by a few months have year-to-year similarities BUT this is atypical as only three months exhibit similarities (March +22, April +12 and August +35). . A useful model which includes 3 seasonal pulses and 3 pulse effects (one-time only) is as follows. . The acf of the residuals suggests sufficiency which is visually supported by a plot of the residuals . The actual/fit and forecast plot is as follows which provides forecasts that are much more pleasing to my eye than others presented here. Note that my forecasts for Y are based upon the expected X for next year using the most recent year's values as a typical baseline. Different forecasts for X will translate into different forecasts for Y. . Notice that the forecast pattern for AUTOBOX and auto.arima are "somewhat similar but have a different level". Also note the "false forecast" of a high July next year from auto.arima as it believes the July 1992 value as the basis rather than challenging the 1992/7 value as being exceptional as AUTOBOX does. AUTOBOX senses that July 1992 is significantly higher than expectations (52 versus 21 and 33 for prior two years) thus contains an an "outlier" by a magnitude of +22.8623 . Adjusting that July value by subtracting 22.8623 yields a forecast that is not flawed by the unusual. In contrast the values for August are 22 and for the last two years a confirming 62 and 65 thus the estimated August effect is a plus 35.698. In the absence of an ARIMA effect the adjustment for August would have been [(62+65)/2] -22 or 41.5 .
Finally, software availability often limits what some researchers can do or even know about what can be done, as they personally don't have access to innovative methods. The advantage of Stack Exchange is the free openness and exchange of ideas and approaches.
The example comes from the 1982 book by Abraham and Ledolter. http://tinyurl.com/mxurcxy
See page 70, where they analyze this time series. They find that there are lag effects of advertising while auto.arima does not. Notice that AUTOBOX includes an AR(1) structure for the noise, which translates to a lag structure on both Y and X, thus generally supporting the textbook solution of lagged dependence. The problem with auto.arima is that if you don't treat the anomalies, the error variance is enlarged, and thus the necessary AR structure is missed due to the downward bias in the acf, as the acf is partially based on the (inflated) error variance.
To answer Whuber's question, I present here exactly how the forecast is made for time period 37.
55,249 | Weighted Linear Regression R | Ordinary least squares minimizes the sum of squared residuals (residual = measured value - fitted value). Weighted least squares weights the squared residuals. From help("lm"):
weighted least squares is used with weights weights (that is,
minimizing sum(w*e^2))
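A minimal sketch of what that means in practice (hypothetical data; the weights w are illustrative):
set.seed(1)
df <- data.frame(x = 1:20, y = 2 + 3 * (1:20) + rnorm(20))
w  <- 1 / (1:20)                               # example weights: early points count more
fit.ols <- lm(y ~ x, data = df)                # minimizes sum(e^2)
fit.wls <- lm(y ~ x, data = df, weights = w)   # minimizes sum(w * e^2)
coef(fit.ols); coef(fit.wls)                   # the weighted fit pulls toward the heavily weighted points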
55,250 | Weighted Linear Regression R | In simple terms, it means that each data point is assigned a weight which either increases or decreases the influence of that data point on the final model. Thus the slope and intercept change, because the fit is driven by the weighted data rather than by the raw data alone.
If you want more detailed information, I'd suggest stats.stackexchange.com. This forum is really just for programming questions, not statistical questions (unless the two blend extensively).
55,251 | Analysis of the Residuals vs Fitted | Note that within each diagonal band, the residual decreases by one unit for every one-unit increase in the fitted value. This looks to me like the response for a given subject remains relatively constant and the predictors change a little bit between observations. Thus when the change in predictors predicts a unit increase in mean response (fitted value) and the observed response remains the same, the residual for that observation decreases by one unit.
Because you have multiple observations from each subject, your data exhibits clustering, and you should use a method that handles clustered data if you wish to keep all of your observations as-is. However, if your predictors are indeed less stable or more prone to measurement error than your response, you might think about regressing the mean response on the mean predictor values per subject, or some similar procedure.
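For example, a sketch with assumed names (a data frame dat with columns subject, y and x): either average within subject and refit, or fit a mixed model that handles the clustering directly.
# per-subject means, then an ordinary regression on the aggregated data
agg <- aggregate(cbind(y, x) ~ subject, data = dat, FUN = mean)
fit.means <- lm(y ~ x, data = agg)
# or keep all rows and model the clustering with a random intercept
library(lme4)
fit.mixed <- lmer(y ~ x + (1 | subject), data = dat)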
55,252 | Sample from a custom continuous distribution in R | If you want to sample from a certain pdf, you can use
rejection sampling, which requires nothing more than the density function and an upper bound that is at least as large as the largest value of the density function. The disadvantage is that it can be a very inefficient way to sample, depending on the shape of the density function.
inverse transform sampling is the preferred way if the inverse of the distribution function is known. This is the case for the likelihood in your example since it is a Gaussian distribution and the associated quantile function (=inverse distribution function) is available in R. In general, inverse transform sampling works by sampling from a uniform distribution on the interval [0,1] and using the obtained values as the argument of the quantile function. The resulting values from the quantile function then follow the specified probability distribution.
To elaborate on the example: Since the likelihood is for a Gaussian distribution, it is maximized by setting $\mu$ to the arithmetic mean of the $D$ values
$$\mu = \frac{1}{n}\sum_i^n D_i$$
from which the variance can be calculated by
$$\text{Var}(\mu)=\frac{1}{n^2}\sum_i^n \text{Var}(D_i)=\frac{\sigma^2}{n}$$
Thus the $\mu$ value follows a Gaussian distribution $N(\mu,\sigma/\sqrt{n})$. Instead of implementing inverse transform sampling yourself, you could also use the rnorm function.
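A small sketch of that last point, assuming D and sigma are defined as in the question: draw uniforms and push them through the Gaussian quantile function, which is exactly what rnorm would do for you.
n <- length(D)
u <- runif(10000)
mu.draws <- qnorm(u, mean = mean(D), sd = sigma / sqrt(n))   # inverse transform sampling
# equivalently: rnorm(10000, mean = mean(D), sd = sigma / sqrt(n))
hist(mu.draws, breaks = 50)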
55,253 | Sample from a custom continuous distribution in R | There are some posts on the internet about rejection sampling, but I found this one to be the most helpful. My example is from there, with minor modifications. If you need to speed things up a bit more, you could use data.table package which can make you feel dizzy from speed gains. I didn't bother because this particular implementation was instantaneous for me.
Here's my implementation of a function for $x \geq 0$.
kindaSortaLikeAcdfWeibullButNotReally <- function(x, sigma, b, mx) {
exp(-(x/sigma)^(-b)) * mx
}
xrange <- 400 # function range from 0 (implicit) to x
N <- 100000 # number of samples
b <- -2.16
mx <- 35.48
sigma <- 147.17
xy <- data.frame(proposed = runif(N, min = 0, max = xrange))
xy$fit <- kindaSortaLikeAcdfWeibullButNotReally(x = xy$proposed,
sigma = sigma, b = b, mx = mx)
xy$random <- runif(N, min = 0, max = 1)
maxDens <- max(xy$fit)
xy$accepted <- with(xy, random <= fit/maxDens)
# retain only those values which are "below" the custom distribution
xy <- xy[xy$accepted, ]
hist(xy$proposed, freq = FALSE, breaks = 100, col = "light grey")
# divide by maxDens * 130 so the curve overlays the density histogram nicely
curve(kindaSortaLikeAcdfWeibullButNotReally(x, sigma = sigma, b = b, mx = mx)/(maxDens * 130),
from = 0, to = 400, add = TRUE, col = "red", lwd = 2)
Here's an image which shows how the algorithm works. You find the distribution (fit, in black dots), throw a bunch of values in a square around the distribution (random column) and see if it's higher than the fit or not.
# modify above example to put
mx <- 1
xrange <- 300
# xy <- xy[xy$accepted, ] # skip this step - you'll see why if you don't
library(ggplot2)
xys <- xy[order(xy$proposed), ]
xys <- xys[seq(1, nrow(xys), by = 11), ]
ggplot(xy, aes(x = proposed, y = fit/maxDens)) +
theme_bw() +
scale_color_brewer(palette = "Set1") +
geom_line(alpha = 0.5) +
geom_point(data = xys, aes(y = random, color = accepted), alpha = 0.5) +
geom_point(data = xys, aes(x = proposed, y = fit/maxDens), alpha = 0.5) +
geom_segment(data = xys, aes(x = proposed, y = random, xend = proposed, yend = fit/maxDens), alpha = 0.3)
55,254 | Regression model for proportion or count when counts of outcome and total events are often zero | Probably the most common way to look at this kind of thing, if you're only interested in the proportions, is to assume that at the $i$th location $A_i$ & $B_i$ are independent Poisson variables with rates $\lambda_i$ & $\mu_i$ respectively. (That doesn't seem unreasonable for two types of car crashes at the same location over a limited period of time.) The joint mass function is
$$\newcommand{\e}{\mathrm{e}} f_{A_i,B_i}(a_i,b_i) = \frac{\lambda_i^{a_i} \e^{-\lambda_i}}{a_i!} \cdot \frac{\mu_i^{b_i} \e^{-\mu_i}}{b_i!}$$
Reparametrize with
$$\pi_i = \frac{\lambda_i}{\lambda_i+\mu_i}$$
$$\nu_i= \mu_i+\lambda_i$$
, let
$$N_i = A_i+B_i$$
, & the joint density can be written as
$$f_{A_i,N_i}(a_i,n_i)=\frac{1}{a_i!\,(n_i-a_i)!}\cdot\pi_i^{a_i} (1-\pi_i)^{n_i-a_i}\cdot \nu_i^{n_i} \e^{-\nu_i}$$
Note that $\pi_i$, what you're interested in, & $\nu_i$, the nuisance parameter, separate cleanly; $N_i$ is sufficient for $\nu_i$, & $(A_i,N_i)$ sufficient for $\pi_i$. Sum over $a_i$ to get the marginal distribution of $N_i$, which is also Poisson, with rate $\nu_i$:
$$f_{N_i}(n_i)= \frac{\nu_i^{n_i} \e^{-\nu_i}}{n_i!}$$
Conditioning on the observed value of the ancillary complement $N_i=n_i$ gives
$$f_{A_i|N_i=n_i}(a_i;n_i)=\frac{n_i!}{a_i!\,(n_i-a_i)!}\cdot\pi_i^{a_i} (1-\pi_i)^{n_i-a_i}$$
, i.e. a binomial distribution for $A_i$ successes out of $n_i$ trials.
I'm not sure what your concern is about locations where there are no events—there's simply no data at these to estimate the proportion of type-A crashes because there weren't any crashes. That doesn't stop you estimating $\pi_i$ at other locations. If location is the only predictor you have a simple $2\times k$ contingency table for the $k$ locations with data. If there are continuous predictors you can use a logistic regression model. If you want to make estimates for the $n=0$ locations you need in some way to borrow information from other locations: e.g. with predictors whose coefficients are estimated from other locations, treating location as a random effect. A Bayesian multi-level model might be quite useful, as some locations will have small, though non-zero, event counts, & estimates for these will be pulled further in the direction of the global model.
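In R, the conditional binomial model this leads to can be fitted directly; the data frame and predictor names below are assumptions for illustration:
# proportion of type-A crashes among all crashes, using only locations with at least one event
crash <- subset(dat, A + B > 0)
fit <- glm(cbind(A, B) ~ x1 + x2, family = binomial, data = crash)
# borrowing strength across locations with a random intercept (a multi-level flavour)
library(lme4)
fit.ml <- glmer(cbind(A, B) ~ x1 + x2 + (1 | location), family = binomial, data = crash)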
55,255 | Regression model for proportion or count when counts of outcome and total events are often zero | I may be missing something about your motivation to 'control' for the total number of events, but how about the following:
Concentrate on modelling the rate (not the count) of A per area. To do this you would need to control for different numbers of total events in each area. You'd do this by adding an offset of $\log(A+B)$ to a regular Poisson regression (or over-dispersed, zero-inflated, etc. variant) for each area.
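A minimal sketch of that model in R (the data frame and predictor names are assumed; the offset requires at least one event per area):
events <- subset(dat, A + B > 0)                 # log(0) offset is undefined for zero-event areas
fit <- glm(A ~ x1 + x2 + offset(log(A + B)),
           family = poisson, data = events)
summary(fit)                                     # coefficients act on the log rate of A per total event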
This, btw, is my reading of the second disjunct in your comment: "I want to model the proportion of successes at the location level OR I want to model the count of successes controlling for the total number of events at the location."
55,256 | Regression model for proportion or count when counts of outcome and total events are often zero | Have you thought about using Tukey's folded logs, as in $$\frac{1}{2} \cdot \ln \left( \frac{A + 1/6}{A+B + 1/3}\right) - \frac{1}{2} \cdot \ln \left(1 - \frac{A + 1/6}{A+B + 1/3}\right)$$
You can justify this type of transformation with some Bayesian arguments. For example, here's some data illustrating this transformation:
A B ratio flog
100 0 1 3.1992975
1 0 1 .97295507
0 1 0 -.97295507
0 0 . 0
0 25 0 -2.5086399
0 75 0 -3.0557337
50 100 1/3 -.34574233
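These values are easy to reproduce in R (a quick sketch with A and B as plain vectors):
A <- c(100, 1, 0, 0, 0, 0, 50)
B <- c(0, 0, 1, 0, 25, 75, 100)
p <- (A + 1/6) / (A + B + 1/3)
flog <- 0.5 * log(p) - 0.5 * log(1 - p)   # Tukey's folded log with the 1/6 starting value
cbind(A, B, flog)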
One of the things I really like about it is that $A=100,B=0$ has a higher value than $A=1,B=0$ even if the proportions are both 1. The disadvantage of this approach is that the coefficients will be hard to interpret.
If you have a different prior, you can adjust accordingly.
55,257 | Why is a symmetric distribution sufficient for the sample mean and variance to be uncorrelated? | Too long for a comment:
It is not true that sample mean and variance are always independent if the distribution is symmetric. For example, take a sample from a distribution which takes values $\pm1$ with equal probability: if the sample mean is $\pm1$ then the sample variance will be $0$, while if the sample mean is not $\pm1$ then the sample variance will be positive.
It is true that the distributions of the sample mean and variance have zero correlation (if they have a correlation) if the distribution is symmetric. This is because $E(s_X^2|\bar{X}-\mu=k)=E(s_X^2|\bar{X}-\mu=-k)$ by symmetry.
Neither of these points deals with the statement in the book, which says 'only if' but not 'if'.
For an example of the final statement, if most of a distribution is closely clustered but there can be the occasional very large value, then the sample mean will be largely determined by the number of very large values in the sample, and the more of them there are, the higher the sample variance will be too, leading to high correlation.
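A quick simulation (an illustrative sketch) makes the contrast concrete: the correlation between sample means and sample variances is near zero for a symmetric distribution but clearly positive for a right-skewed one.
sim.cor <- function(rdist, n = 10, reps = 20000) {
  stats <- replicate(reps, { x <- rdist(n); c(mean(x), var(x)) })
  cor(stats[1, ], stats[2, ])
}
set.seed(1)
sim.cor(rnorm)   # symmetric: close to 0
sim.cor(rexp)    # right-skewed: clearly positive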
55,258 | A seeming paradox with rational agents not coming to the same conclusion given the same data | I am afraid that you fell victim to the usual misinterpretation of the (essentially vacuous) dictum "rational agents given the same information must come to the same conclusion". "Information" in this context is not just "data". It includes also the information-processing procedures a rational agent will use. Also, it includes the structure of preferences or attitudes towards uncertainty.
Let's say person A and person B have the same data. Do they process the data in the same way? If not, they do not have "same information" (and there are many different ways to process the data, all compatible with rationality). If yes, do they quantify the costs of making wrong inferences in the same way? If not (and there are many "cost-quantification" approaches compatible with rationality - say, a risk-averse person is no less rational than a risk-neutral one), they do not have "same information". If yes, they are a priori identical (for all aspects relevant to the issue at hand), and that's why the dictum becomes vacuous, and so no paradox emerges: persons A and B in your specific example obviously do not have the same information, in this more general and complete sense.
"Rationality" is a very minimum set of requirements, having mainly to do with internal consistency. There are a lot of different inferential setups that are internally consistent, and so they all are "rational" -and they lead to different conclusions. | A seeming paradox with rational agents not coming to the same conclusion given the same data | I am afraid that you fell victim to the usual misinterpretation of the (essentially vacuous) dictum "rational agents given the same information must come to the same conclusion". "Information" in this | A seeming paradox with rational agents not coming to the same conclusion given the same data
55,259 | A seeming paradox with rational agents not coming to the same conclusion given the same data | Let me first point out that you appear to have a common misunderstanding about the meaning of p-values. In conventional (frequentist) statistical analysis, the p-value is the probability of getting a sample statistic (say a sample mean) as far or further from the proposed null value as yours, if the null value is the true value. Importantly, there is no such thing as (e.g.) "bananas cure cancer with probability at least 99.99%". The fact that a p-value might be $< 0.0001$ very much does not imply that there is a 99.99% probability the alternative hypothesis is true (or a 0.01% probability the null hypothesis is true). For more on this topic, it may help you to read this CV thread: What is the meaning of p values and t values in statistical tests?
That having been said, it is possible to assert a (subjective) probability associated with the null hypothesis within the Bayesian framework. Bayes' rule is:
$$
Pr(H_0|D) = \frac{Pr(D|H_0)}{Pr(D)}Pr(H_0)
$$
In words, the probability that the null hypothesis is true that you should believe after having seen some data is equal to the distinctiveness of the data with respect to the null hypothesis (indexed by the quotient on the RHS) multiplied by the probability that the null hypothesis is true that you believed before having seen the data in question. To make this easier, consider the following example1:
MAMMOGRAPHY
A reporter for a women's monthly magazine would like to write an article about breast cancer. As a part of her research, she focuses on mammography as an indicator of breast cancer. She wonders what it really means if a woman tests positive for breast cancer during her routine mammography examination. She has the following data:
The probability that a woman who undergoes a mammography will have breast cancer is 1%.
If a woman undergoing a mammography has breast cancer, the probability that she will test positive is 80%.
If a woman undergoing a mammography does not have breast cancer, the probability that she will test positive is 10%.
What is the probability that a woman who has undergone a mammography actually has breast cancer, if she tests positive?
How can we figure out that probability? We must revise the a priori probability that a woman who undergoes a mammography has breast cancer, p(cancer) which according to the text is 1% or p=.01, in light of the new information that the test was positive. That is, we are looking for the conditional probability of p(cancer|positive). The probability of a positive result given breast cancer, p(positive|cancer), is 80% or p=.8, and the probability of a positive result given no breast cancer, p(positive|no cancer), is 10% or p=.1.
Thus, we have:
$$
Pr({\rm cancer|positive}) = \frac{0.80}{\underbrace{0.80\!\times\! 0.01}_{Pr(D)\text{ w/ cancer}}\;+\;\underbrace{0.10\!\times\! 0.99}_{\Pr(D)\text{ w/o cancer}}} 0.01 = 0.075
$$
(The denominator of the fraction in Bayes' rule is often hard for people to understand. In this case, it is possible to enumerate the possible probabilities of the data, and $Pr(D)$ is simply the sum of all the individual enumerated probabilities. For greater clarity, I annotated them here. Often, the set of possible probabilities is much harder to determine. In practice, people often ignore the denominator and replace the equals sign with $\propto$, 'proportional to'.)
Now in this example, the cancer rate is known beforehand. To make this example more like your new research finding example, let's imagine that no one knows exactly what the cancer rate is, but two different doctors believe the cancer rate is 1%, and 5% respectively. If we use the latter value in the equation above, we get:
$$
Pr({\rm cancer|positive}) = \frac{0.80}{0.80\!\times\! 0.05\;+\;0.10\!\times\! 0.95} 0.05 = 0.296
$$
The probability is now 29.6%, which is very different from the 7.5% above. So who is right? We don't really know, but the important part is that both doctors are rational in believing their (very different) probabilities that their patient has breast cancer. To put this a different way, what is rational isn't the probability that each believes, but rather the manner in which they change their belief in light of new evidence. Since both doctors changed their belief using a correct application of Bayes' rule, both are rational, even though they came to different conclusions. The reason they didn't end up with the same probability is because they didn't believe in the same probability beforehand; this is what @AlecosPapadopoulos meant by 'they do not have "same information"'.
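The two doctors' calculations are one line of R each (a small sketch using the sensitivity and false-positive rate from the example):
post <- function(prior, sens = 0.80, fpr = 0.10) {
  sens * prior / (sens * prior + fpr * (1 - prior))
}
post(0.01)   # first doctor: about 0.075
post(0.05)   # second doctor: about 0.296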
1. This example is copied from: Sedlmeier, Improving Statistical Reasoning, pp. 8-9.
55,260 | A seeming paradox with rational agents not coming to the same conclusion given the same data | The reason that your question appears to be both difficult and strange is that conventional accounts of statistics do not include the philosophy of statistics. In particular for your question we need to consider two normative principles. It is the likelihood principle that suggests that identical evidence should lead to equivalent inferences and it is the repeated sampling principle that suggests that we should assess statistical methods on the basis of how frequently they lead us astray. Both principles are reasonable, but they conflict. Thus we end up in a position where your question can exist.
Inferential methods can be fully compliant with the likelihood principle or with the repeated sampling principle but, often, not with both. Rational minds can disagree about which principle is more important.
Inferences are made by people, not by statistical algorithms, and rational minds can disagree about the acceptable degree of subjectivism and the mechanism for its incorporation into inferential considerations. | A seeming paradox with rational agents not coming to the same conclusion given the same data | The reason that your question appears to be both difficult and strange is that conventional accounts of statistics do not include the philosophy of statistics. In particular for your question we need | A seeming paradox with rational agents not coming to the same conclusion given the same data
The reason that your question appears to be both difficult and strange is that conventional accounts of statistics do not include the philosophy of statistics. In particular for your question we need to consider two normative principles. It is the likelihood principle that suggests that identical evidence should lead to equivalent inferences and it is the repeated sampling principle that suggests that we should assess statistical methods on the basis of how frequently they lead us astray. Both principles are reasonable, but they conflict. Thus we end up in a position where your question can exist.
Inferential methods can be fully compliant with the likelihood principle or with the repeated sampling principle but, often, not with both. Rational minds can disagree about which principle is more important.
Inferences are made by people, not by statistical algorithms, and rational minds can disagree about the acceptable degree of subjectivism and the mechanism for its incorporation into inferential considerations. | A seeming paradox with rational agents not coming to the same conclusion given the same data
The reason that your question appears to be both difficult and strange is that conventional accounts of statistics do not include the philosophy of statistics. In particular for your question we need |
55,261 | To know clearly about the population | They can be confusing. However attempts were made to define them:
Two Types of Population in Research
Target Population
Target population refers to the ENTIRE group of individuals or objects
to which researchers are interested in generalizing the conclusions.
The target population usually has varying characteristics and it is
also known as the theoretical population.
Accessible Population
The accessible population is the population in research to which the
researchers can apply their conclusions. This population is a subset
of the target population and is also known as the study population. It
is from the accessible population that researchers draw their samples.
Here is another example:
Basically, target population (also known as theoretical population) is the group to whom we wish to generalize our findings.
Study population (also known as accessible population) is the actual sampling frame, from which we randomly drew our sample. This is the group to whom we actually can generalize our findings, because we sampled from them.
Having said this, to be safe I would recommend checking the glossary and definitions in the text to make sure the authors really do use the same system. | To know clearly about the population | They can be confusing. However attempts were made to define them:
Two Types of Population in Research
Target Population
Target population refers to the ENTIRE group of individuals or objects
to whi | To know clearly about the population
They can be confusing. However attempts were made to define them:
Two Types of Population in Research
Target Population
Target population refers to the ENTIRE group of individuals or objects
to which researchers are interested in generalizing the conclusions.
The target population usually has varying characteristics and it is
also known as the theoretical population.
Accessible Population
The accessible population is the population in research to which the
researchers can apply their conclusions. This population is a subset
of the target population and is also known as the study population. It
is from the accessible population that researchers draw their samples.
Here is another example:
Basically, target population (also known as theoretical population) is the group to whom we wish to generalize our findings.
Study population (also known as accessible population) is the actual sampling frame, from which we randomly drew our sample. This is the group to whom we actually can generalize our findings, because we sampled from them.
Having said this, to be safe I would recommend checking the glossary and definitions in the text to make sure the authors really do use the same system. | To know clearly about the population
They can be confusing. However attempts were made to define them:
Two Types of Population in Research
Target Population
Target population refers to the ENTIRE group of individuals or objects
to whi |
55,262 | To know clearly about the population | Easily speaking:
Target population is the population you are interested in your study;
Study population is a sub population that you are taking from the target population for doing your study
Theoretical population is the same as target population, which is the population you want your study to be generalized to.
For example, you want to do a study: the average height of all men between age 20 to 30 in Canada, then the target and theoretical population is all men between age 20 to 30; and your study population would be say all men between age 20 to 30 in Ontario. | To know clearly about the population | Easily speaking:
Target population is the population you are interested in your study;
Study population is a sub population that you are taking from the target population for doing your study
Theoret | To know clearly about the population
Easily speaking:
Target population is the population you are interested in your study;
Study population is a sub population that you are taking from the target population for doing your study
Theoretical population is the same as target population, which is the population you want your study to be generalized to.
For example, you want to do a study: the average height of all men between age 20 to 30 in Canada, then the target and theoretical population is all men between age 20 to 30; and your study population would be say all men between age 20 to 30 in Ontario. | To know clearly about the population
Easily speaking:
Target population is the population you are interested in your study;
Study population is a sub population that you are taking from the target population for doing your study
Theoret |
55,263 | To know clearly about the population | Target population, as the name suggests, is the population that is of interest to the researcher. Also known as the theoretical population, it serves as the main environment for the researcher's hypothesis in a general term.
For instance, one may be thinking about doing a research on the criteria for recruiting high school teachers in Liberia. High school teachers in Liberia is the target population, which is quite large, difficult to manage, difficult to sample, etc.
On the other hand, the study population, which is also known as the accessible population, is the population that is derived from the target population for the smooth conduction of the research in a specific term.
For instance, instead of the entire population of high school teachers in Liberia, the researcher could consider doing a study on the high school teachers of the Monrovia Consolidated School System (MCSS). This specific group of people is the study population, which is manageable, much easy to sample, etc. | To know clearly about the population | Target population, as the name suggests, is the population that is of interest to the researcher. Also known as the theoretical population, it serves as the main environment for the researcher's hypot | To know clearly about the population
Target population, as the name suggests, is the population that is of interest to the researcher. Also known as the theoretical population, it serves as the main environment for the researcher's hypothesis in a general term.
For instance, one may be thinking about doing a research on the criteria for recruiting high school teachers in Liberia. High school teachers in Liberia is the target population, which is quite large, difficult to manage, difficult to sample, etc.
On the other hand, the study population, which is also known as the accessible population, is the population that is derived from the target population for the smooth conduction of the research in a specific term.
For instance, instead of the entire population of high school teachers in Liberia, the researcher could consider doing a study on the high school teachers of the Monrovia Consolidated School System (MCSS). This specific group of people is the study population, which is manageable, much easy to sample, etc. | To know clearly about the population
Target population, as the name suggests, is the population that is of interest to the researcher. Also known as the theoretical population, it serves as the main environment for the researcher's hypot |
55,264 | Variance within each cluster | According to the Hastie equation 14.31 (see also Halkidi et al. 2001), the within-cluster variance $W(C_{k})$ of a cluster $C_{k}$ is defined (for the Euclidean distance) as $\sum_{x_{i}\in{C_{k}}}\|x_{i}-\bar{x}_{k}\|^2$ , where $\bar{x}_{k}$ is the mean of cluster $C_{k}$ (also called the cluster centroid, its values are the coordinate-wise average of the data points in $C_{k}$), and {${x_{1}, ..., x_{N}}$} is the set of observations (they are vectors, i.e., one coordinate per dimension). In plain English, the cluster variance is the coordinate-wise squared deviations from the mean of the cluster of all the observations belonging to that cluster. The total within cluster scatter (for the entire set of observations) is simply $W=\sum\limits_{k=1}^K\sum_{x_{i}\in{C_{k}}}\|x_{i}-\bar{x}_{k}\|^2$ for K clusters and N observations with $K<N$. The goal of a clustering algorithm such as K-means is to minimize this quantity (or to maximize the between-cluster variance $B$). The total point scatter in the data set (the information) $T$ is equal to $W+B$. When clustering, we just make sense of the information originally present by decreasing $W$ and increasing $B$ as much as possible, however, $T$ remains constant, there is no information loss (as opposed to dimensionality reduction techniques such as PCA, for instance). | Variance within each cluster | According to the Hastie equation 14.31 (see also Halkidi et al. 2001), the within-cluster variance $W(C_{k})$ of a cluster $C_{k}$ is defined (for the Euclidean distance) as $\sum_{x_{i}\in{C_{k}}}\|x | Variance within each cluster
According to the Hastie equation 14.31 (see also Halkidi et al. 2001), the within-cluster variance $W(C_{k})$ of a cluster $C_{k}$ is defined (for the Euclidean distance) as $\sum_{x_{i}\in{C_{k}}}\|x_{i}-\bar{x}_{k}\|^2$ , where $\bar{x}_{k}$ is the mean of cluster $C_{k}$ (also called the cluster centroid, its values are the coordinate-wise average of the data points in $C_{k}$), and {${x_{1}, ..., x_{N}}$} is the set of observations (they are vectors, i.e., one coordinate per dimension). In plain English, the cluster variance is the coordinate-wise squared deviations from the mean of the cluster of all the observations belonging to that cluster. The total within cluster scatter (for the entire set of observations) is simply $W=\sum\limits_{k=1}^K\sum_{x_{i}\in{C_{k}}}\|x_{i}-\bar{x}_{k}\|^2$ for K clusters and N observations with $K<N$. The goal of a clustering algorithm such as K-means is to minimize this quantity (or to maximize the between-cluster variance $B$). The total point scatter in the data set (the information) $T$ is equal to $W+B$. When clustering, we just make sense of the information originally present by decreasing $W$ and increasing $B$ as much as possible, however, $T$ remains constant, there is no information loss (as opposed to dimensionality reduction techniques such as PCA, for instance). | Variance within each cluster
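As a rough illustration (not part of the original answer), base R's kmeans() reports exactly these quantities; here it is run on the built-in iris measurements.
set.seed(1)
fit <- kmeans(iris[, 1:4], centers = 3, nstart = 25)
fit$tot.withinss                                        # W: total within-cluster scatter
fit$betweenss                                           # B: between-cluster scatter
fit$totss                                               # T: total point scatter
all.equal(fit$totss, fit$tot.withinss + fit$betweenss)  # T = W + B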
According to the Hastie equation 14.31 (see also Halkidi et al. 2001), the within-cluster variance $W(C_{k})$ of a cluster $C_{k}$ is defined (for the Euclidean distance) as $\sum_{x_{i}\in{C_{k}}}\|x |
55,265 | Compare means of two datasets of binary data | You can express your data in the form of a contingency table.
For a small N you can use Fisher's exact test to test whether your measurements a and b are dependent on each other.
For a larger N you can use the chi-squared test | Compare means of two datasets of binary data | You can express your data in the form of a contingency table.
For a small N you can use Fisher's exact test to test whether your measurements a and b are dependent on each other.
For a larger N you c | Compare means of two datasets of binary data
You can express your data in the form of a contingency table.
For a small N you can use Fisher's exact test to test whether your measurements a and b are dependent on each other.
For a larger N you can use the chi-squared test | Compare means of two datasets of binary data
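A minimal R sketch of both tests, assuming the two binary measurements have been tabulated into a 2x2 contingency table (the counts below are invented for illustration).
tab <- matrix(c(12,  5,
                 8, 15),
              nrow = 2, byrow = TRUE,
              dimnames = list(group = c("a", "b"), outcome = c("0", "1")))
fisher.test(tab)   # exact test, suitable for small N
chisq.test(tab)    # chi-squared test, suitable for larger N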
You can express your data in the form of a contingency table.
For a small N you can use Fisher's exact test to test whether your measurements a and b are dependent on each other.
For a larger N you c |
55,266 | Compare means of two datasets of binary data | Since in your case means and proportions are the same
you can use proportions test for testing the null that
the proportions (probabilities of success) in groups
are the same.
As for reference I can suggest the
Hollander and Wolfe's book Nonparametric Statistical Methods | Compare means of two datasets of binary data | Since in your case means and proportions are the same
you can use proportions test for testing the null that
the proportions (probabilities of success) in groups
are the same.
As for reference I ca | Compare means of two datasets of binary data
Since in your case means and proportions are the same
you can use proportions test for testing the null that
the proportions (probabilities of success) in groups
are the same.
As for reference I can suggest the
Hollander and Wolfe's book Nonparametric Statistical Methods | Compare means of two datasets of binary data
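A quick sketch of such a proportions test in base R (the counts are invented for illustration).
successes <- c(30, 45)          # number of 1s in each group
totals    <- c(100, 120)        # group sizes
prop.test(successes, totals)    # tests whether the two proportions are equal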
Since in your case means and proportions are the same
you can use proportions test for testing the null that
the proportions (probabilities of success) in groups
are the same.
As for reference I ca |
55,267 | Conditional Expectations and Variances | The conditional version of the law of total variance works just the same.
Expectation follows trivially from the tower property, and for variance:
\begin{align}
\mathbb{V} [X | Y] &= \mathbb{E}[(X - \mathbb{E}[X | Y])^2 | Y] \\
&= \mathbb{E}[X^2 | Y] - (\mathbb{E}[X | Y])^2 \\
&= \mathbb{E}[\mathbb{E}[X^2 | Y,Z] | Y] - (\mathbb{E}[X | Y])^2 \\
&= \mathbb{E}[\mathbb{V}[X | Y,Z] + (\mathbb{E}[X | Y,Z])^2 | Y] - (\mathbb{E}[\mathbb{E}[X | Y,Z] | Y])^2 \\
&= \mathbb{E}[\mathbb{V}[X | Y,Z] | Y] + \mathbb{V}[\mathbb{E}[X | Y,Z] | Y] \\
\end{align} | Conditional Expectations and Variances | The conditional version of the law of total variance works just the same.
Expectation follows trivially from the tower property, and for variance:
\begin{align}
\mathbb{V} [X | Y] &= \mathbb{E}[(X - E | Conditional Expectations and Variances
The conditional version of the law of total variance works just the same.
Expectation follows trivially from the tower property, and for variance:
\begin{align}
\mathbb{V} [X | Y] &= \mathbb{E}[(X - \mathbb{E}[X | Y])^2 | Y] \\
&= \mathbb{E}[X^2 | Y] - (\mathbb{E}[X | Y])^2 \\
&= \mathbb{E}[\mathbb{E}[X^2 | Y,Z] | Y] - (\mathbb{E}[X | Y])^2 \\
&= \mathbb{E}[\mathbb{V}[X | Y,Z] + (\mathbb{E}[X | Y,Z])^2 | Y] - (\mathbb{E}[\mathbb{E}[X | Y,Z] | Y])^2 \\
&= \mathbb{E}[\mathbb{V}[X | Y,Z] | Y] + \mathbb{V}[\mathbb{E}[X | Y,Z] | Y] \\
\end{align} | Conditional Expectations and Variances
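A quick Monte Carlo sanity check of this identity, with an arbitrarily chosen joint distribution for (X, Y, Z); only the bookkeeping matters here.
set.seed(42)
n <- 1e6
y <- sample(1:2, n, replace = TRUE)
z <- sample(1:2, n, replace = TRUE)
x <- rnorm(n, mean = y + 2 * z, sd = y)    # X depends on both Y and Z
sel <- y == 1                              # condition on Y = 1
lhs <- var(x[sel])                         # V[X | Y = 1]
ev  <- tapply(x[sel], z[sel], var)         # V[X | Y = 1, Z = z]
ee  <- tapply(x[sel], z[sel], mean)        # E[X | Y = 1, Z = z]
pz  <- prop.table(table(z[sel]))           # P(Z = z | Y = 1)
rhs <- sum(pz * ev) + sum(pz * (ee - sum(pz * ee))^2)
c(lhs = lhs, rhs = rhs)                    # should agree up to Monte Carlo error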
The conditional version of the law of total variance works just the same.
Expectation follows trivially from the tower property, and for variance:
\begin{align}
\mathbb{V} [X | Y] &= \mathbb{E}[(X - E |
55,268 | If C wins B 80% of the time, and B wins A 80% of the time, how often would C beat A? | This information is not enough. Let's look more precisely at what I mean by the lack of information. An event $CA$ means that $C$ wins against $A$, and $\overline{CA}$ is the event that $C$ loses against $A$. Then we have:
\begin{equation}
p(CB) = p(CB|CA)p(CA) + p(CB|\overline{CA})p(\overline{CA})
\end{equation}
and following that we have,
\begin{equation}
p(CB|CA) = p(CB|CA,BA)P(BA) + p(CB|CA,\overline{BA})P(\overline{BA})
\end{equation}
however you don't have information on the conditional probabilities, having the prior probabilities such as $p(CA)$ and $p(BA)$ is not logically sufficient to infer $p(CB)$. | If C wins B 80% of the time, and B wins A 80% of the time, how often would C beat A? | this information is not enough. Let's look more precisely what I mean by the lack of information. An event $CA$ means that $C$ wins against $A$ and event $\overline{CA}$ for when $C$ looses against $A | If C wins B 80% of the time, and B wins A 80% of the time, how often would C beat A?
This information is not enough. Let's look more precisely at what I mean by the lack of information. An event $CA$ means that $C$ wins against $A$, and $\overline{CA}$ is the event that $C$ loses against $A$. Then we have:
\begin{equation}
p(CB) = p(CB|CA)p(CA) + p(CB|\overline{CA})p(\overline{CA})
\end{equation}
and following that we have,
\begin{equation}
p(CB|CA) = p(CB|CA,BA)P(BA) + p(CB|CA,\overline{BA})P(\overline{BA})
\end{equation}
however you don't have information on the conditional probabilities, having the prior probabilities such as $p(CA)$ and $p(BA)$ is not logically sufficient to infer $p(CB)$. | If C wins B 80% of the time, and B wins A 80% of the time, how often would C beat A?
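To make the point concrete, here are two toy models (my own construction, not from the answer) that both reproduce the stated 80% figures yet imply very different values for the missing probability.
sA <- 1; sB <- 4; sC <- 16   # Bradley-Terry style strengths, chosen for illustration
sC / (sC + sB)               # P(C beats B) = 0.8
sB / (sB + sA)               # P(B beats A) = 0.8
sC / (sC + sA)               # P(C beats A) ~ 0.94 under this model
# A cyclic model could instead set P(C beats A) = 0.2 while keeping the same two
# 80% figures, so the given marginal probabilities alone cannot pin the answer down.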
this information is not enough. Let's look more precisely what I mean by the lack of information. An event $CA$ means that $C$ wins against $A$ and event $\overline{CA}$ for when $C$ looses against $A |
55,269 | If C wins B 80% of the time, and B wins A 80% of the time, how often would C beat A? | In the Elo model, C would be over one class interval (+240) stronger, B would be the average player (0), and A over one class interval weaker (-240). The Elo model predicts a win for C against A with 95% probability (rating diff = 480).
See here for the rating probabilities: https://www.fide.com/docs/regulations/FIDE%20Rating%20Regulations%202022.pdf.
In a real world example, A, B and C could be equally strong. Where C is the "fear opponent" of B and B is the "fear opponent" of A. | If C wins B 80% of the time, and B wins A 80% of the time, how often would C beat A? | In the Elo model, C would be over one class interval (+240) stronger. B the average player (=0) and A over one class interval weaker -(240). The Elo model predicts a win for C against A with 95% proba | If C wins B 80% of the time, and B wins A 80% of the time, how often would C beat A?
In the Elo model, C would be over one class interval (+240) stronger, B would be the average player (0), and A over one class interval weaker (-240). The Elo model predicts a win for C against A with 95% probability (rating diff = 480).
See here for the rating probabilities: https://www.fide.com/docs/regulations/FIDE%20Rating%20Regulations%202022.pdf.
In a real world example, A, B and C could be equally strong. Where C is the "fear opponent" of B and B is the "fear opponent" of A. | If C wins B 80% of the time, and B wins A 80% of the time, how often would C beat A?
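For reference, the familiar logistic form of the Elo expectation is easy to sketch in R; the 240 and 480 point differences come from the answer above, while the formula is the standard logistic one (FIDE's published table, based on a normal curve, gives the slightly higher ~95%).
elo_expected <- function(diff) 1 / (1 + 10^(-diff / 400))
elo_expected(240)   # ~0.80, matching the stated pairwise results
elo_expected(480)   # ~0.94, close to the ~95% read off the FIDE table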
In the Elo model, C would be over one class interval (+240) stronger. B the average player (=0) and A over one class interval weaker -(240). The Elo model predicts a win for C against A with 95% proba |
55,270 | Cluster Analysis in R | You're trying to measure the Euclidean distance of categories. Euclidean distance is the "normal" distance on numbers: the Euclidean distance of 7 and 10 is 3, the euclidean distance of -1 and 1 is 2.
If you give your categories numbers, then you'll calculate the distances between these numbers - but will they make sense? Say I have the category "Favourite Ice Cream" with entries "Vanilla", "Strawberry" and "Hedgehog", and I call these 1, 2 and 3. Then R will calculate the distance between Vanilla and Strawberry as 1, between Strawberry and Hedgehog as 1 and between Vanilla and Hedgehog as 2. But this distance doesn't correspond to anything real - the fact the distance from Vanilla to Hedgehog is twice as far as from Strawberry to Hedgehog doesn't correspond to anything in real life (people who like Hedgehog ice cream are not twice as different from Vanilla lovers as they are to Strawberry lovers). But your clustering would be based on these numbers, and equally meaningless.
So you need a way for clustering your mixed numerical and categorical data with a more rational method. In the R package cluster you have the method ?daisy, but it's worth reading the manual page to understand which options make sense for you. | Cluster Analysis in R | You're trying to measure the Euclidean distance of categories. Euclidean distance is the "normal" distance on numbers: the Euclidean distance of 7 and 10 is 3, the euclidean distance of -1 and 1 is 2 | Cluster Analysis in R
You're trying to measure the Euclidean distance of categories. Euclidean distance is the "normal" distance on numbers: the Euclidean distance of 7 and 10 is 3, the euclidean distance of -1 and 1 is 2.
If you give your categories numbers, then you'll calculate the distances between these numbers - but will they make sense? Say I have the category "Favourite Ice Cream" with entries "Vanilla", "Strawberry" and "Hedgehog", and I call these 1, 2 and 3. Then R will calculate the distance between Vanilla and Strawberry as 1, between Strawberry and Hedgehog as 1 and between Vanilla and Hedgehog as 2. But this distance doesn't correspond to anything real - the fact the distance from Vanilla to Hedgehog is twice as far as from Strawberry to Hedgehog doesn't correspond to anything in real life (people who like Hedgehog ice cream are not twice as different from Vanilla lovers as they are to Strawberry lovers). But your clustering would be based on these numbers, and equally meaningless.
So you need a way for clustering your mixed numerical and categorical data with a more rational method. In the R package cluster you have the method ?daisy, but it's worth reading the manual page to understand which options make sense for you. | Cluster Analysis in R
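A minimal sketch of what that looks like with cluster::daisy and the Gower dissimilarity; the toy data frame is invented for illustration.
library(cluster)
df <- data.frame(flavour = factor(c("Vanilla", "Strawberry", "Hedgehog", "Vanilla")),
                 height  = c(120, 95, 130, 118))
d <- daisy(df, metric = "gower")   # sensible dissimilarities for mixed factor/numeric data
as.matrix(d)
hc <- hclust(d)                    # the dissimilarity object can feed a clustering routine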
You're trying to measure the Euclidean distance of categories. Euclidean distance is the "normal" distance on numbers: the Euclidean distance of 7 and 10 is 3, the euclidean distance of -1 and 1 is 2 |
55,271 | Cluster Analysis in R | If you want to see the similarities within your data across different dimensions, you might want to use some descriptive plots such as boxplots for numeric data, barplots for counts of your categorical data, crosstabs and, if required, stacked boxplots (boxplots where you compare some numeric variable, split according to one categorical variable).
See:
http://www.statmethods.net/stATS/descriptives.html
http://www.statmethods.net/stATS/frequencies.html
That is how you typically learn about your data.
If you want to do clustering, you can do it either on numeric data only using the kmeans function, or you can use the R package cluster that accepts a dissimilarity matrix as an input.
Using an appropriate distance function (such as the daisy function in the above package), you can calculate distances even between mixed-type data.
But please consider if calculating a numeric distance between objects like (Dog, height: 67cm, color: black) and (Giraffe, height: 485cm, color: brown/white) is really the road you want to go down and what is your actual objective you are trying to achieve.
More often than not, people want to find clusters to predict something, in which case clustering is the wrong approach. | Cluster Analysis in R | If you want to see the similarities within your data across different dimensions, you might want to use some descriptive plots such as boxplots for numeric data, barplots for counts of your categoric | Cluster Analysis in R
If you want to see the similarities within your data across different dimensions, you might want to use some descriptive plots such as boxplots for numeric data, barplots for counts of your categorical data, crosstabs and, if required, stacked boxplots (boxplots where you compare some numeric variable, split according to one categorical variable).
See:
http://www.statmethods.net/stATS/descriptives.html
http://www.statmethods.net/stATS/frequencies.html
That is how you typically learn about your data.
If you want to do clustering, you can do it either on numeric data only using the kmeans function, or you can use the R package cluster that accepts a dissimilarity matrix as an input.
Using an appropriate distance function (such as the daisy function in the above package), you can calculate distances even between mixed-type data.
But please consider if calculating a numeric distance between objects like (Dog, height: 67cm, color: black) and (Giraffe, height: 485cm, color: brown/white) is really the road you want to go down and what is your actual objective you are trying to achieve.
More often than not, people want to find clusters to predict something, in which case clustering is the wrong approach. | Cluster Analysis in R
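A short sketch of those descriptive displays using a built-in data set (mtcars), just to show the idea.
boxplot(mpg ~ cyl, data = mtcars)    # numeric variable split by a categorical one
barplot(table(mtcars$cyl))           # counts of a categorical variable
xtabs(~ cyl + gear, data = mtcars)   # crosstab of two categorical variables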
If you want to see the similarities within your data across different dimensions, you might want to use some descriptive plots such as boxplots for numeric data, barplots for counts of your categoric |
55,272 | Not reporting significant results from a model I am reporting other results from | I would like to see the entire model included somewhere in the paper, probably in the methods section or in a supplement/appendix. If the model is huge, a table is a nice way to present this. You could even organize the table to distinguish between "interesting" variables (e.g., drug dose) and boring covariates that are likely to be trivially true (e.g., rater). I saw this done recently and thought it was very convenient, but I'm having a hard time tracking the paper down.
However, once you've "disclosed" your model, there's no need to give equal time to each of its components when discussing the results. You could ignore the covariates completely, or you could start the results section with something like
After controlling for observer bias (see complete model in Appendix 1), we found that....
If the bias is particularly surprising or you have done something to control it (e.g., provide raters with a standardized rubric), then it might be worth a slightly longer mention. | Not reporting significant results from a model I am reporting other results from | I would like to see the entire model included somewhere in the paper, probably in the methods section or in a supplement/appendix. If it the model is huge, a table is a nice way to present this. You c | Not reporting significant results from a model I am reporting other results from
I would like to see the entire model included somewhere in the paper, probably in the methods section or in a supplement/appendix. If the model is huge, a table is a nice way to present this. You could even organize the table to distinguish between "interesting" variables (e.g., drug dose) and boring covariates that are likely to be trivially true (e.g., rater). I saw this done recently and thought it was very convenient, but I'm having a hard time tracking the paper down.
However, once you've "disclosed" your model, there's no need to give equal time to each of its components when discussing the results. You could ignore the covariates completely, or you could start the results section with something like
After controlling for observer bias (see complete model in Appendix 1), we found that....
If the bias is particularly surprising or you have done something to control it (e.g., provide raters with a standardized rubric), then it might be worth a slightly longer mention. | Not reporting significant results from a model I am reporting other results from
I would like to see the entire model included somewhere in the paper, probably in the methods section or in a supplement/appendix. If it the model is huge, a table is a nice way to present this. You c |
55,273 | Not reporting significant results from a model I am reporting other results from | If the bulk of your paper is taken up with discussion of the ANOVA results it would seem odd to omit to mention controlling for observer bias; & odd to want to, as your having taken it into account forestalls potential criticism for not having done so. If you're giving effect estimates and standard errors you need to specify the model they're embedded in if you want anyone else to be able to use them. If, on the other hand, you merely mention in passing some conclusions drawn from the ANOVA, it doesn't seem necessary to go into too much detail about it. | Not reporting significant results from a model I am reporting other results from | If the bulk of your paper is taken up with discussion of the ANOVA results it would seem odd to omit to mention controlling for observer bias; & odd to want to, as your having taken it into account fo | Not reporting significant results from a model I am reporting other results from
If the bulk of your paper is taken up with discussion of the ANOVA results it would seem odd to omit to mention controlling for observer bias; & odd to want to, as your having taken it into account forestalls potential criticism for not having done so. If you're giving effect estimates and standard errors you need to specify the model they're embedded in if you want anyone else to be able to use them. If, on the other hand, you merely mention in passing some conclusions drawn from the ANOVA, it doesn't seem necessary to go into too much detail about it. | Not reporting significant results from a model I am reporting other results from
If the bulk of your paper is taken up with discussion of the ANOVA results it would seem odd to omit to mention controlling for observer bias; & odd to want to, as your having taken it into account fo |
55,274 | What are the differences between the linear regression and multiple linear regression? | By linear regression I assume that you mean simple linear regression. The difference is in the number of independent explanatory variables you use to model your dependent variable.
Simple linear regression
$Y=\beta X+\beta_0$
Multiple linear regression
$Y=\beta_1 X_1+\beta_2 X_2+...+ \beta_m X_m + \beta_0$
Where the $\beta$'s are the parameters to fit. In other words
simple linear regression is just a special case of multiple linear regression.
This is not the same as multivariate linear regression which has more than one dependent variable. | What are the differences between the linear regression and multiple linear regression? | By linear regression I assume that you mean simple linear regression. The difference is in the number of independent explanatory variables you use to model your dependent variable.
Simple linear regre | What are the differences between the linear regression and multiple linear regression?
By linear regression I assume that you mean simple linear regression. The difference is in the number of independent explanatory variables you use to model your dependent variable.
Simple linear regression
$Y=\beta X+\beta_0$
Multiple linear regression
$Y=\beta_1 X_1+\beta_2 X_2+...+ \beta_m X_m + \beta_0$
Where the $\beta$'s are the parameters to fit. In other words
simple linear regression is just a special case of multiple linear regression.
This is not the same as multivariate linear regression which has more than one dependent variable. | What are the differences between the linear regression and multiple linear regression?
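A quick illustration in R on a built-in data set; the choice of predictors is arbitrary.
fit_simple   <- lm(mpg ~ wt, data = mtcars)              # one explanatory variable
fit_multiple <- lm(mpg ~ wt + hp + qsec, data = mtcars)  # several explanatory variables
coef(fit_simple)
coef(fit_multiple)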
By linear regression I assume that you mean simple linear regression. The difference is in the number of independent explanatory variables you use to model your dependent variable.
Simple linear regre |
55,275 | Rejection region or p-value | In situations like these - it's best to look at things from the reader's perspective. Would the reader care about the actual value of the test statistic? Do you want the reader to know that the $T$-statistic is $2.79$ or $F = 8.91$? In most cases, the reader would not be interested in these values, so just give the p-value along with the test that you used and an estimate of the magnitude of your effect size. | Rejection region or p-value | In situations like these - it's best to look at things from the reader's perspective. Would the reader care about the actual value of the test statistic? Do you want the reader to know that the $T$-st | Rejection region or p-value
In situations like these - it's best to look at things from the reader's perspective. Would the reader care about the actual value of the test statistic? Do you want the reader to know that the $T$-statistic is $2.79$ or $F = 8.91$? In most cases, the reader would not be interested in these values, so just give the p-value along with the test that you used and an estimate of the magnitude of your effect size. | Rejection region or p-value
In situations like these - it's best to look at things from the reader's perspective. Would the reader care about the actual value of the test statistic? Do you want the reader to know that the $T$-st |
55,276 | Rejection region or p-value |
I suggest you put an exact p-value for the test you've done. However, if the p-value is very small, e.g. 0.000001, then I would write it as p-value < 0.0001.
Hope this helps. | Rejection region or p-value | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| Rejection region or p-value
I suggest you put an exact p-value for the test you've done. However, if the p-value is very small, e.g. 0.000001, then I would write it as p-value < 0.0001.
Hope this helps. | Rejection region or p-value
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
55,277 | Rejection region or p-value | I would recommend reporting $p$-values and, if you have the space, the test statistics and rejection region as well. There is a reason for this: most research journals and software report $p$-values. First of all, let me say that the conclusions based on the $p$-value and rejection-region approaches are basically the same, i.e. if your $p$-value is less than or equal to $\alpha$ (i.e. your significance level) then the test statistic will fall into the rejection region, and vice versa. The rejection region method only provides a decision for a specific value of $\alpha$. However, by providing the $p$-value, your reader not only can make his/her decision based on that particular $\alpha$ level but can also see all values of $\alpha$ for which the null hypothesis would be rejected. As a result, the $p$-value provides more flexibility than the rejection region method.
You can also have a look at the last paragraph in page 449 of "Statistical Ideas and Methods" by Jessica Utts, Robert Heckard here that addresses this property. | Rejection region or p-value | I would recommend to report $p$-values and if you have the space go for the test statistics and rejection region. But there is a reason behind that as in the most research journals and software you ca | Rejection region or p-value
I would recommend reporting $p$-values and, if you have the space, the test statistics and rejection region as well. There is a reason for this: most research journals and software report $p$-values. First of all, let me say that the conclusions based on the $p$-value and rejection-region approaches are basically the same, i.e. if your $p$-value is less than or equal to $\alpha$ (i.e. your significance level) then the test statistic will fall into the rejection region, and vice versa. The rejection region method only provides a decision for a specific value of $\alpha$. However, by providing the $p$-value, your reader not only can make his/her decision based on that particular $\alpha$ level but can also see all values of $\alpha$ for which the null hypothesis would be rejected. As a result, the $p$-value provides more flexibility than the rejection region method.
You can also have a look at the last paragraph in page 449 of "Statistical Ideas and Methods" by Jessica Utts, Robert Heckard here that addresses this property. | Rejection region or p-value
I would recommend to report $p$-values and if you have the space go for the test statistics and rejection region. But there is a reason behind that as in the most research journals and software you ca |
55,278 | A question on SEM and path analysis | A few thoughts come to mind. I hope they are helpful.
Let's say you have exposure X, outcome Y, and mediator Z.
1) Baron and Kenny is, in my opinion, not a very good way to address mediation, at least not without a lot of thoughtfulness. The main problem is potential "collider bias" REF. If there are confounders of the Z-Y relationship ( Z <-- C --> Y ), these will then act as confounders on the X-Y relationship once you have adjusted for Z, so interpreting the difference in the coefficient for X between models isn't as straightforward as some people make it out to be.
2) Mediation is a fundamentally causal question. Before building your SEM, I would use a Directed Acyclic Graph REF to draw out all of your hypothesized causal relationships. This would include any mediating influence between variables. You should then identify the relationship(s) you are most interested in for your research, and use the DAG to identify potential confounders...including those of the Z --> Y relationship (given your interest in potential mediation).
3) I would not view your SEM as a collection of linear regressions (though technically that is exactly what it is). The beauty of SEMs is that they are a holistic, theoretical statement about how you think the universe works, that can then be tested against the data. The SEM, like a DAG, should only include what you need to answer your research question. From this perspective, making each relationship in the SEM a "research question" is letting the tail wag the dog. You should have a research question as your starting point, and then build the SEM as appropriate. | A question on SEM and path analysis | A few thoughts come to mind. I hope they are helpful.
Let's say you have exposure X, outcome Y, and mediator Z.
1) Baron and Kenny is, in my opinion, not a very good way to address mediation, at leas | A question on SEM and path analysis
A few thoughts come to mind. I hope they are helpful.
Let's say you have exposure X, outcome Y, and mediator Z.
1) Baron and Kenny is, in my opinion, not a very good way to address mediation, at least not without a lot of thoughtfulness. The main problem is potential "collider bias" REF. If there are confounders of the Z-Y relationship ( Z <-- C --> Y ), these will then act as confounders on the X-Y relationship once you have adjusted for Z, so interpreting the difference in the coefficient for X between models isn't as straightforward as some people make it out to be.
2) Mediation is a fundamentally causal question. Before building your SEM, I would use a Directed Acyclic Graph REF to draw out all of your hypothesized causal relationships. This would include any mediating influence between variables. You should then identify the relationship(s) you are most interested in for your research, and use the DAG to identify potential confounders...including those of the Z --> Y relationship (given your interest in potential mediation).
3) I would not view your SEM as a collection of linear regressions (though technically that is exactly what it is). The beauty of SEMs is that they are a holistic, theoretical statement about how you think the universe works, that can then be tested against the data. The SEM, like a DAG, should only include what you need to answer your research question. From this perspective, making each relationship in the SEM a "research question" is letting the tail wag the dog. You should have a research question as your starting point, and then build the SEM as appropriate. | A question on SEM and path analysis
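For what it's worth, a basic mediation SEM of this kind can be sketched in R with lavaan; the simulated data and the a/b/c labels are mine, purely for illustration.
library(lavaan)
set.seed(1)
n <- 500
X <- rnorm(n); Z <- 0.5 * X + rnorm(n); Y <- 0.4 * Z + 0.3 * X + rnorm(n)
dat <- data.frame(X, Z, Y)
model <- '
  Z ~ a * X            # exposure -> mediator
  Y ~ b * Z + c * X    # mediator and direct effect -> outcome
  indirect := a * b
  total    := c + a * b
'
fit <- sem(model, data = dat)
summary(fit)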
A few thoughts come to mind. I hope they are helpful.
Let's say you have exposure X, outcome Y, and mediator Z.
1) Baron and Kenny is, in my opinion, not a very good way to address mediation, at leas |
55,279 | A question on SEM and path analysis | Trying different methods seems wise to me for the sake of understanding any differences in the results you receive from different analyses. Baron & Kenny's approach has received a fair amount of criticism (e.g., Pardo & Román, 2013; Hayes, 2009; Zhao, Lynch, & Chen, 2009; Krause et al., 2010), so alternatives to that would seem particularly worth exploring, ideally with attention to the problems that necessitate alternatives. [This may be an incomplete answer to your first question...]
If you are interested theoretically in the mediation question, it makes sense to incorporate it into your introduction and discussion. I would argue that you should have (and communicate) some justification for every path in a model based on prior theory, but in some cases, the question may be simple, such as, "Will variable $a$ predict variable $b$ in roughly the same manner as previous researchers have found?" or, "Will the model fit adequately if I fix this path to one/zero?" Questions like these aren't really interesting so long as the answers are affirmative, so I would only recommend explaining why you're modeling these potentially (even probably, I'd hope) uninteresting paths the way you are (probably to reflect some well-supported theoretical assumption), and discussing any problematic results if they arise. By comparison, a novel question of mediation is probably more interesting regardless of whether the result is problematic: even a "null" result is still a result! | A question on SEM and path analysis | Trying different methods seems wise to me for the sake of understanding any differences in the results you receive from different analyses. Baron & Kenny's approach has received a fair amount of criti | A question on SEM and path analysis
Trying different methods seems wise to me for the sake of understanding any differences in the results you receive from different analyses. Baron & Kenny's approach has received a fair amount of criticism (e.g., Pardo & Román, 2013; Hayes, 2009; Zhao, Lynch, & Chen, 2009; Krause et al., 2010), so alternatives to that would seem particularly worth exploring, ideally with attention to the problems that necessitate alternatives. [This may be an incomplete answer to your first question...]
If you are interested theoretically in the mediation question, it makes sense to incorporate it into your introduction and discussion. I would argue that you should have (and communicate) some justification for every path in a model based on prior theory, but in some cases, the question may be simple, such as, "Will variable $a$ predict variable $b$ in roughly the same manner as previous researchers have found?" or, "Will the model fit adequately if I fix this path to one/zero?" Questions like these aren't really interesting so long as the answers are affirmative, so I would only recommend explaining why you're modeling these potentially (even probably, I'd hope) uninteresting paths the way you are (probably to reflect some well-supported theoretical assumption), and discussing any problematic results if they arise. By comparison, a novel question of mediation is probably more interesting regardless of whether the result is problematic: even a "null" result is still a result! | A question on SEM and path analysis
Trying different methods seems wise to me for the sake of understanding any differences in the results you receive from different analyses. Baron & Kenny's approach has received a fair amount of criti |
55,280 | Likelihood for negative binomial distribution | Actually I'd like to disagree slightly with a previous answer. Yes it's true that the estimates themselves that come from a MLE are indeed invariant to transformations of the parameters, so it's correct that you can just take $\hat{\phi} = 1 + \hat{m}/\hat{r}$
However, the question also asked about the standard error estimate (or in more complete terms, the covariance matrix) of the estimates $\hat{\phi}$ and $\hat{m}$ as well. Those are not invariant, especially under a non-linear transformation such as this one. To compute a first order approximation to the standard error (or more properly, the covariance matrix) under such a change of variables, you need to compute a Jacobian. Section 2 of this reference describes how to use the Jacobian $J$ in order to transform the uncertainties; essentially $C^{'} = JCJ^{T}$. Alternatively, equations 3, 9, and 17 of this reference also sketch out the basic concept pretty well too. For a non-linear transformation of variables such as the one in this question, approximating it to first order using the Jacobian can sometimes give poor results. If an accurate error estimate is important to you, you may want to explore an unscented transform as well, which is an approximately equivalent numerical framework for performing the same types of calculations. | Likelihood for negative binomial distribution | Actually I'd like to disagree slightly with a previous answer. Yes it's true that the estimates themselves that come from a MLE are indeed invariant to transformations of the parameters, so it's corr | Likelihood for negative binomial distribution
Actually I'd like to disagree slightly with a previous answer. Yes it's true that the estimates themselves that come from a MLE are indeed invariant to transformations of the parameters, so it's correct that you can just take $\hat{\phi} = 1 + \hat{m}/\hat{r}$
However, the question also asked about the standard error estimate (or in more complete terms, the covariance matrix) of the estimates $\hat{\phi}$ and $\hat{m}$ as well. Those are not invariant, especially under a non-linear transformation such as this one. To compute a first order approximation to the standard error (or more properly, the covariance matrix) under such a change of variables, you need to compute a Jacobian. Section 2 of this reference describes how to use the Jacobian $J$ in order to transform the uncertainties; essentially $C^{'} = JCJ^{T}$. Alternatively, equations 3, 9, and 17 of this reference also sketch out the basic concept pretty well too. For a non-linear transformation of variables such as the one in this question, approximating it to first order using the Jacobian can sometimes give poor results. If an accurate error estimate is important to you, you may want to explore an unscented transform as well, which is an approximately equivalent numerical framework for performing the same types of calculations. | Likelihood for negative binomial distribution
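A minimal delta-method sketch of that Jacobian step for phi = 1 + m/r; the numbers are invented, only the algebra matters.
m <- 2.5; r <- 1.8                      # MLEs of the mean and size parameters
C <- matrix(c(0.04, 0.01,
              0.01, 0.09), nrow = 2)    # their estimated covariance matrix
J <- matrix(c(1,   1 / r,
              0,  -m / r^2), nrow = 2)  # Jacobian of (m, phi) with respect to (m, r)
C_new <- J %*% C %*% t(J)               # approximate covariance of (m_hat, phi_hat)
sqrt(diag(C_new))                       # first-order standard errors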
Actually I'd like to disagree slightly with a previous answer. Yes it's true that the estimates themselves that come from a MLE are indeed invariant to transformations of the parameters, so it's corr |
55,281 | Likelihood for negative binomial distribution | One of the nice things about maximum-likelihood estimators is that they're invariant to transformations of the parameters (one-to-one transformations at any rate). So you can just take $\hat{\phi}=1+\hat{m}/\hat{r}$. | Likelihood for negative binomial distribution | One of the nice things about maximum-likelihood estimators is that they're invariant to transformations of the parameters (one-to-one transformations at any rate). So you can just take $\hat{\phi}=1+\ | Likelihood for negative binomial distribution
One of the nice things about maximum-likelihood estimators is that they're invariant to transformations of the parameters (one-to-one transformations at any rate). So you can just take $\hat{\phi}=1+\hat{m}/\hat{r}$. | Likelihood for negative binomial distribution
One of the nice things about maximum-likelihood estimators is that they're invariant to transformations of the parameters (one-to-one transformations at any rate). So you can just take $\hat{\phi}=1+\ |
55,282 | Does the central limit theorem imply the law of large numbers | WLLN, yes.
Here is a general claim: Suppose $\{ f_n \}$, $f$, and $g$ are random variables, and
$$
\sqrt{n} (f_n - f) \stackrel{d}{\mapsto} g.
$$
Let's say the CDF of $g$ is continuous everywhere. Then $f_n \rightarrow f$ in probability. This is because $\sqrt{n} (f_n - f)$ is bounded in probability/uniformly tight, so $f_n - f = O_p(1)/\sqrt{n} \rightarrow 0$ in probability. | Does the central limit theorem imply the law of large numbers | WLLN, yes.
Here is a general claim: Suppose $\{ f_n \}$, $f$, and $g$ are random variables, and
$$
\sqrt{n} (f_n - f) \stackrel{d}{\mapsto} g.
$$
Let's say the CDF of $g$ is continuous everywhere. The | Does the central limit theorem imply the law of large numbers
WLLN, yes.
Here is a general claim: Suppose $\{ f_n \}$, $f$, and $g$ are random variables, and
$$
\sqrt{n} (f_n - f) \stackrel{d}{\mapsto} g.
$$
Let's say the CDF of $g$ is continuous everywhere. Then $f_n \rightarrow f$ in probability. This is because $\sqrt{n} (f_n - f)$ is bounded in probability/uniformly tight, so $f_n - f = O_p(1)/\sqrt{n} \rightarrow 0$ in probability. | Does the central limit theorem imply the law of large numbers
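A tiny simulation illustrating the claim; the exponential distribution is an arbitrary choice.
set.seed(7)
mu <- 1
for (n in c(100, 10000)) {
  xbar <- replicate(2000, mean(rexp(n, rate = 1 / mu)))
  cat("n =", n,
      " sd of sqrt(n)*(xbar - mu):", round(sd(sqrt(n) * (xbar - mu)), 3),
      " sd of xbar - mu:", round(sd(xbar - mu), 4), "\n")
}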
WLLN, yes.
Here is a general claim: Suppose $\{ f_n \}$, $f$, and $g$ are random variables, and
$$
\sqrt{n} (f_n - f) \stackrel{d}{\mapsto} g.
$$
Let's say the CDF of $g$ is continuous everywhere. The |
55,283 | Mann-Whitney U test with very large sample size? | This is not a problem of the t-test, but of any test in which the power of the test depends on the sample size. This is called "overpowering". And yes, changing the test to Mann-Whitney will not help.
Therefore, apart from asking whether the results are statistically significant, you need to ask yourself whether the observed effect size is significant in the common sense of the word (i.e., meaningful). This requires more than statistical knowledge, but also your expertise in the field you are investigating.
In general, there are two ways you can look at the effect size. One way is to scale the difference between the means in your data by its standard deviation. Since standard deviation is in the same units as your means and describes the dispersion of your data, you can express the difference between your groups in terms of standard deviation. Also, when you estimate the variance / standard deviation in your data, it does not necessarily decrease with the number of samples (unlike standard deviation of the mean).
This is, for example, the reasoning behind Cohen's $d$:
$$d = \frac{ \bar{x}_1 - \bar{x}_2 }{ s}$$
...where $s$ is the square root of the pooled variance.
$$s = \sqrt{\frac{ s_1^2\cdot(n_1-1) + s_2^2\cdot(n_2 - 1) }{ N - 2 } }$$
(where $N=n_1+n_2$ and $s_1$ and $s_2$ are the standard deviations in group 1 and 2, respectively; that is, $s_1 = \sqrt{ \frac{\sum(x_i-\bar{x_1})^2 }{n_1 -1 }} $).
Another way of looking at the effect size -- and frankly, one that I personally prefer -- is to ask what part (percentage) of the variability in the data can be explained by the estimated effect. You can estimate the variance between and within the groups and see how they relate (this is actually what ANOVA is, and t-test is in principle a special case of ANOVA).This is the reasoning behind the coefficient of determination, $r^2$, and the related $\eta^2$ and $\omega^2$ stats. Now, in a t-test, $\eta^2$ can easily be calculated from the $t$ statistic itself:
$$\eta^2 = \frac{ t^2}{t^2 + n_1 + n_2 - 2 }$$
This value can be directly interpreted as "fraction of variance in the data which is explained by the difference between the groups". There are different rules of thumb to say what is a "large" and what is a "small" effect, but it all depends on your particular question. 1% of the variance explained can be laughable, or can be just enough. | Mann-Whitney U test with very large sample size? | This is not a problem of the t-test, but of any test in which the power of the test depends on the sample size. This is called "overpowering". And yes, changing the test to Mann-Whitney will not help. | Mann-Whitney U test with very large sample size?
This is not a problem of the t-test, but of any test in which the power of the test depends on the sample size. This is called "overpowering". And yes, changing the test to Mann-Whitney will not help.
Therefore, apart from asking whether the results are statistically significant, you need to ask yourself whether the observed effect size is significant in the common sense of the word (i.e., meaningful). This requires more than statistical knowledge, but also your expertise in the field you are investigating.
In general, there are two ways you can look at the effect size. One way is to scale the difference between the means in your data by its standard deviation. Since standard deviation is in the same units as your means and describes the dispersion of your data, you can express the difference between your groups in terms of standard deviation. Also, when you estimate the variance / standard deviation in your data, it does not necessarily decrease with the number of samples (unlike standard deviation of the mean).
This is, for example, the reasoning behind Cohen's $d$:
$$d = \frac{ \bar{x}_1 - \bar{x}_2 }{ s}$$
...where $s$ is the square root of the pooled variance.
$$s = \sqrt{\frac{ s_1^2\cdot(n_1-1) + s_2^2\cdot(n_2 - 1) }{ N - 2 } }$$
(where $N=n_1+n_2$ and $s_1$ and $s_2$ are the standard deviations in group 1 and 2, respectively; that is, $s_1 = \sqrt{ \frac{\sum(x_i-\bar{x_1})^2 }{n_1 -1 }} $).
Another way of looking at the effect size -- and frankly, one that I personally prefer -- is to ask what part (percentage) of the variability in the data can be explained by the estimated effect. You can estimate the variance between and within the groups and see how they relate (this is actually what ANOVA is, and t-test is in principle a special case of ANOVA).This is the reasoning behind the coefficient of determination, $r^2$, and the related $\eta^2$ and $\omega^2$ stats. Now, in a t-test, $\eta^2$ can easily be calculated from the $t$ statistic itself:
$$\eta^2 = \frac{ t^2}{t^2 + n_1 + n_2 - 2 }$$
This value can be directly interpreted as "fraction of variance in the data which is explained by the difference between the groups". There are different rules of thumb to say what is a "large" and what is a "small" effect, but it all depends on your particular question. 1% of the variance explained can be laughable, or can be just enough. | Mann-Whitney U test with very large sample size?
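A short sketch of both effect-size calculations in R, on simulated data chosen to mimic the huge-n, tiny-effect situation; all numbers are invented.
set.seed(3)
g1 <- rnorm(5000, mean = 0.00, sd = 1)
g2 <- rnorm(5000, mean = 0.05, sd = 1)
t_out <- t.test(g1, g2, var.equal = TRUE)
t_out$p.value                               # can be 'significant' despite a trivial effect
n1 <- length(g1); n2 <- length(g2)
s  <- sqrt(((n1 - 1) * var(g1) + (n2 - 1) * var(g2)) / (n1 + n2 - 2))
d  <- (mean(g1) - mean(g2)) / s             # Cohen's d
eta2 <- unname(t_out$statistic^2 / (t_out$statistic^2 + n1 + n2 - 2))
c(d = d, eta2 = eta2)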
This is not a problem of the t-test, but of any test in which the power of the test depends on the sample size. This is called "overpowering". And yes, changing the test to Mann-Whitney will not help. |
55,284 | Simulating longitudinal lognormal data in R | Without more information, it is difficult to guess exactly what kind of data you would like to simulate, but here is an example. In my experience, growth data are often (approximately) linear when transforming (log, square-root, etc.) the predictor (time) variable, so this would be my suggestion.
library(MASS)
library(nlme)
### set number of individuals
n <- 200
### average intercept and slope
beta0 <- 1.0
beta1 <- 6.0
### true autocorrelation
ar.val <- .4
### true error SD, intercept SD, slope SD, and intercept-slope cor
sigma <- 1.5
tau0 <- 2.5
tau1 <- 2.0
tau01 <- 0.3
### maximum number of possible observations
m <- 10
### simulate number of observations for each individual
p <- round(runif(n,4,m))
### simulate observation moments (assume everybody has 1st obs)
obs <- unlist(sapply(p, function(x) c(1, sort(sample(2:m, x-1, replace=FALSE)))))
### set up data frame
dat <- data.frame(id=rep(1:n, times=p), obs=obs)
### simulate (correlated) random effects for intercepts and slopes
mu <- c(0,0)
S <- matrix(c(1, tau01, tau01, 1), nrow=2)
tau <- c(tau0, tau1)
S <- diag(tau) %*% S %*% diag(tau)
U <- mvrnorm(n, mu=mu, Sigma=S)
### simulate AR(1) errors and then the actual outcomes
dat$eij <- unlist(sapply(p, function(x) arima.sim(model=list(ar=ar.val), n=x) * sqrt(1-ar.val^2) * sigma))
dat$yij <- (beta0 + rep(U[,1], times=p)) + (beta1 + rep(U[,2], times=p)) * log(dat$obs) + dat$eij
### note: use arima.sim(model=list(ar=ar.val), n=x) * sqrt(1-ar.val^2) * sigma
### construction, so that the true error SD is equal to sigma
### create grouped data object
dat <- groupedData(yij ~ obs | id, data=dat)
### profile plots
plot(dat, pch=19, cex=.5)
### fit corresponding growth model
res <- lme(yij ~ log(obs), random = ~ log(obs) | id, correlation = corAR1(form = ~ 1 | id), data=dat)
summary(res)
A single run of this yields the following profile plots:
And the output from the model:
Linear mixed-effects model fit by REML
Data: dat
AIC BIC logLik
5726.028 5762.519 -2856.014
Random effects:
Formula: ~log(obs) | id
Structure: General positive-definite, Log-Cholesky parametrization
StdDev Corr
(Intercept) 2.611384 (Intr)
log(obs) 2.092532 0.391
Residual 1.509075
Correlation Structure: AR(1)
Formula: ~1 | id
Parameter estimate(s):
Phi
0.3708575
Fixed effects: yij ~ log(obs)
Value Std.Error DF t-value p-value
(Intercept) 1.409415 0.2104311 1158 6.69775 0
log(obs) 6.076326 0.1601022 1158 37.95279 0
Correlation:
(Intr)
log(obs) 0.166
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-2.58849482 -0.53571963 0.04011378 0.52296310 3.11959082
Number of Observations: 1359
Number of Groups: 200
So, the estimates are quite close to the actual parameter values (of course, the larger $n$ is, the better this will work). Maybe this gives you a starting point for tweaking the code to your needs. | Simulating longitudinal lognormal data in R | Without more information, it is difficult to guess exactly what kind of data you would like to simulate, but here is an example. In my experience, growth data are often (approximately) linear when tra | Simulating longitudinal lognormal data in R
Without more information, it is difficult to guess exactly what kind of data you would like to simulate, but here is an example. In my experience, growth data are often (approximately) linear when transforming (log, square-root, etc.) the predictor (time) variable, so this would be my suggestion.
library(MASS)
library(nlme)
### set number of individuals
n <- 200
### average intercept and slope
beta0 <- 1.0
beta1 <- 6.0
### true autocorrelation
ar.val <- .4
### true error SD, intercept SD, slope SD, and intercept-slope cor
sigma <- 1.5
tau0 <- 2.5
tau1 <- 2.0
tau01 <- 0.3
### maximum number of possible observations
m <- 10
### simulate number of observations for each individual
p <- round(runif(n,4,m))
### simulate observation moments (assume everybody has 1st obs)
obs <- unlist(sapply(p, function(x) c(1, sort(sample(2:m, x-1, replace=FALSE)))))
### set up data frame
dat <- data.frame(id=rep(1:n, times=p), obs=obs)
### simulate (correlated) random effects for intercepts and slopes
mu <- c(0,0)
S <- matrix(c(1, tau01, tau01, 1), nrow=2)
tau <- c(tau0, tau1)
S <- diag(tau) %*% S %*% diag(tau)
U <- mvrnorm(n, mu=mu, Sigma=S)
### simulate AR(1) errors and then the actual outcomes
dat$eij <- unlist(sapply(p, function(x) arima.sim(model=list(ar=ar.val), n=x) * sqrt(1-ar.val^2) * sigma))
dat$yij <- (beta0 + rep(U[,1], times=p)) + (beta1 + rep(U[,2], times=p)) * log(dat$obs) + dat$eij
### note: use arima.sim(model=list(ar=ar.val), n=x) * sqrt(1-ar.val^2) * sigma
### construction, so that the true error SD is equal to sigma
### create grouped data object
dat <- groupedData(yij ~ obs | id, data=dat)
### profile plots
plot(dat, pch=19, cex=.5)
### fit corresponding growth model
res <- lme(yij ~ log(obs), random = ~ log(obs) | id, correlation = corAR1(form = ~ 1 | id), data=dat)
summary(res)
A single run of this yields the following profile plots:
And the output from the model:
Linear mixed-effects model fit by REML
Data: dat
AIC BIC logLik
5726.028 5762.519 -2856.014
Random effects:
Formula: ~log(obs) | id
Structure: General positive-definite, Log-Cholesky parametrization
StdDev Corr
(Intercept) 2.611384 (Intr)
log(obs) 2.092532 0.391
Residual 1.509075
Correlation Structure: AR(1)
Formula: ~1 | id
Parameter estimate(s):
Phi
0.3708575
Fixed effects: yij ~ log(obs)
Value Std.Error DF t-value p-value
(Intercept) 1.409415 0.2104311 1158 6.69775 0
log(obs) 6.076326 0.1601022 1158 37.95279 0
Correlation:
(Intr)
log(obs) 0.166
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-2.58849482 -0.53571963 0.04011378 0.52296310 3.11959082
Number of Observations: 1359
Number of Groups: 200
So, the estimates are quite close to the actual parameter values (of course, the larger $n$ is, the better this will work). Maybe this gives you a starting point for tweaking the code to your needs. | Simulating longitudinal lognormal data in R
Without more information, it is difficult to guess exactly what kind of data you would like to simulate, but here is an example. In my experience, growth data are often (approximately) linear when tra |
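One point the code above does not address is the "lognormal" part of the question. If the outcome itself is meant to be lognormal, a simple option (my own assumption, not part of the original code) is to treat yij as the log-scale response and exponentiate it:
### assumed interpretation: yij is the response on the log scale
dat$wij <- exp(dat$yij)
### the growth model would then be fitted to log(wij), i.e. back on the yij scale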
55,285 | 2D multivariate normal coverage probability | Here's my understanding of your question: you have a circular covariance matrix, $$C = \left( \begin{array}{cc} \sigma^{2} & 0 \\ 0 & \sigma^{2} \end{array} \right)$$ together with a position vector $ \overrightarrow{X} = [x,y]$ and a 2D Gaussian pdf defined by those two items, i.e., $$p(x,y) = \frac{1}{2\pi\sigma^{2}} e^{-\frac{X^{T} C^{-1} X}{2}} dx dy = \frac{1}{2\pi\sigma^{2}} e^{-\frac{(x^{2} + y^{2})}{2 \sigma^{2}}} dx dy$$ and you want to know what encircling radius $r$, in polar coordinates, will enclose 90% of the volume inside the pdf.
If that is indeed your question, you may answer it by first transforming to polar coordinates, and then performing a pair of integrals. Using the fact that this transformation requires a Jacobian (see example 3 in the link) and the fact that $r^{2} = x^{2} + y^{2}$, then in polar coordinates, the equivalent expression becomes: $$p(r,\theta) = \frac{1}{2 \pi \sigma^{2}} e^{-r^{2}/2 \sigma^{2}} r dr d \theta $$ You can marginalize this over $ \theta $ to get $$ p(r) = \int_{\theta=0}^{\theta=2\pi} \frac{1}{2 \pi \sigma^{2}} e^{-r^{2}/2 \sigma^{2}} r dr d \theta = \frac{r}{\sigma^{2}} e^{-r^{2} / 2 \sigma^{2}} dr $$ which you may recognize as a Rayleigh distribution. The Rayleigh distribution has a cdf: $$ p_{c}(r) = \int_{0}^{r} \frac{z}{\sigma^{2}} e^{-z^{2} / 2 \sigma^{2}} dz = 1 - e^{-r^{2} / 2 \sigma^{2}} $$
You can solve for the 90% encircling radius by setting: $$ 0.9 = 1 - e^{-r^{2} / 2 \sigma^{2}}$$ or $$ r = \sqrt{-2 \ln (0.1)} \sigma \approx 2.15 \sigma $$ so your first instinct appears to have been the correct one.
Phrased a little differently, the marginal approach is wrong because a square whose sides measure $2 \times 1.64 \sigma$ covers a smaller area than a circle of radius $2.15 \sigma$ and therefore encloses a volume containing less than 90% of the probability.
Alternatively, you can think of it this way: if you assign a coverage area of $\pm 1.64 \sigma $ in each dimension, then the probability for each coordinate of the $(x, y)$ arrival vector to lie separately within the coverage is indeed 0.9, but the joint probability for both components to fall simultaneously within the coverage area is smaller than that: it's $0.9^{2} = 0.81$. | 2D multivariate normal coverage probability | Here's my understanding of your question: you have a circular covariance matrix, $$C = \left( \begin{array}{cc} \sigma^{2} & 0 \\ 0 & \sigma^{2} \end{array} \right)$$ together with a position vector $ | 2D multivariate normal coverage probability
Here's my understanding of your question: you have a circular covariance matrix, $$C = \left( \begin{array}{cc} \sigma^{2} & 0 \\ 0 & \sigma^{2} \end{array} \right)$$ together with a position vector $ \overrightarrow{X} = [x,y]$ and a 2D Gaussian pdf defined by those two items, i.e., $$p(x,y) = \frac{1}{2\pi\sigma^{2}} e^{-\frac{X^{T} C^{-1} X}{2}} dx dy = \frac{1}{2\pi\sigma^{2}} e^{-\frac{(x^{2} + y^{2})}{2 \sigma^{2}}} dx dy$$ and you want to know what encircling radius $r$, in polar coordinates, will enclose 90% of the volume inside the pdf.
If that is indeed your question, you may answer it by first transforming to polar coordinates, and then performing a pair of integrals. Using the fact that this transformation requires a Jacobian (see example 3 in the link) and the fact that $r^{2} = x^{2} + y^{2}$, then in polar coordinates, the equivalent expression becomes: $$p(r,\theta) = \frac{1}{2 \pi \sigma^{2}} e^{-r^{2}/2 \sigma^{2}} r dr d \theta $$ You can marginalize this over $ \theta $ to get $$ p(r) = \int_{\theta=0}^{\theta=2\pi} \frac{1}{2 \pi \sigma^{2}} e^{-r^{2}/2 \sigma^{2}} r dr d \theta = \frac{r}{\sigma^{2}} e^{-r^{2} / 2 \sigma^{2}} dr $$ which you may recognize as a Rayleigh distribution. The Rayleigh distribution has a cdf: $$ p_{c}(r) = \int_{0}^{r} \frac{z}{\sigma^{2}} e^{-z^{2} / 2 \sigma^{2}} dz = 1 - e^{-r^{2} / 2 \sigma^{2}} $$
You can solve for the 90% encircling radius by setting: $$ 0.9 = 1 - e^{-r^{2} / 2 \sigma^{2}}$$ or $$ r = \sqrt{-2 \ln (0.1)} \sigma \approx 2.15 \sigma $$ so your first instinct appears to have been the correct one.
Phrased a little differently, the marginal approach is wrong because a square whose sides measure $2 \times 1.64 \sigma$ covers a smaller area than a circle of radius $2.15 \sigma$ and therefore encloses a volume containing less than 90% of the probability.
Alternatively, you can think of it this way: if you assign a coverage area of $\pm 1.64 \sigma $ in each dimension, then the probability for each coordinate of the $(x, y)$ arrival vector to lie separately within the coverage is indeed 0.9, but the joint probability for both components to fall simultaneously within the coverage area is smaller than that: it's $0.9^{2} = 0.81$. | 2D multivariate normal coverage probability
Here's my understanding of your question: you have a circular covariance matrix, $$C = \left( \begin{array}{cc} \sigma^{2} & 0 \\ 0 & \sigma^{2} \end{array} \right)$$ together with a position vector $ |
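A quick R check of the $r \approx 2.15\sigma$ result (with $\sigma = 1$ for illustration); since $r^{2}/\sigma^{2}$ follows a chi-squared distribution with 2 degrees of freedom, the same radius also drops out of qchisq:
sigma <- 1
r90 <- sqrt(-2 * log(0.1)) * sigma        # closed form from the Rayleigh cdf, about 2.146
sqrt(qchisq(0.90, df = 2)) * sigma        # identical value via the chi-squared quantile
# Monte Carlo confirmation: fraction of 2D normal draws falling inside radius r90
set.seed(42)
xy <- matrix(rnorm(2e5, sd = sigma), ncol = 2)
mean(sqrt(rowSums(xy^2)) <= r90)          # close to 0.90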
55,286 | Proof of Convergence in Probability | The meaning of $P$ is monotone relative to set containment is this:
If $A \subseteq B$ then $P(A)\leq P(B)$.
Now here for every $\omega\in\Omega$, we have
$$\begin{align*}
A &= \left\{\omega : \left| (X_n(\omega) + Y_n(\omega)) - ( X(\omega) + Y(\omega) ) \right| \geq \epsilon \right\}, \text{ and}\\
B &= \{ \omega : \left| X_n(\omega) - X(\omega) \right| + \left| Y_n(\omega) - Y(\omega) \right| \geq \epsilon \}.
\end{align*}$$
From your first inequality and for every $\omega\in\Omega$ and $\varepsilon >0$, we have
$$\left\{ \left| (X_n(\omega) + Y_n(\omega) - (X(\omega) + Y(\omega)) \right| \geq \varepsilon \right\} \subseteq \left\{ \left|X_n(\omega) - X(\omega)\right| + \left|Y_n(\omega) - Y(\omega)\right| \geq \varepsilon \right\}.$$
So from the statement above
$$P\left( \left| (X_n(\omega) + Y_n(\omega) - (X(\omega) + Y(\omega)) \right| \geq \varepsilon \right) \leq P\left( \left|X_n(\omega) - X(\omega)\right| + \left|Y_n(\omega) - Y(\omega)\right| \geq \varepsilon \right).$$
To show the last inequality, first we prove that for every $\omega\in\Omega$ and $\varepsilon >0$ we have
$$\begin{multline*}
\{\omega : \left|X_n(\omega) - X(\omega)\right| + \left|Y_n(\omega) - Y(\omega)\right| \geq \varepsilon \}\\
\subseteq \{\omega : \left|X_n(\omega) - X(\omega) \right| \geq \varepsilon/2\} \cup \{\omega : \left|Y_n(\omega) - Y(\omega) \right| \geq \varepsilon/2\}.
\end{multline*}$$
This can be shown by contradiction: suppose $\omega$ belongs to the set on the left-hand side but to neither set on the right-hand side, so that
$$\left|X_n(\omega) - X(\omega) \right| < \varepsilon/2 \quad \text{ and } \quad \left|Y_n(\omega) - Y(\omega) \right| < \varepsilon/2.$$
Adding these two inequalities gives $\left|X_n(\omega) - X(\omega)\right| + \left|Y_n(\omega) - Y(\omega)\right| < \varepsilon$, which contradicts the fact that $\omega$ belongs to the left-hand set.
Hence,
$$\begin{align*}
P\bigl( \{ \omega : &\left| X_n(\omega)-X(\omega) \right| + \left| Y_n(\omega) - Y(\omega) \right|\geq \varepsilon \} \bigr)\\
&\leq P\left( \{\omega : \left| X_n(\omega) - X(\omega) \right| \geq \varepsilon/2\} \cup \{\omega : \left| Y_n(\omega) - Y(\omega) \right| \geq \varepsilon/2 \} \right)\\
&\leq P\left( \{\omega : \left| X_n(\omega) - X(\omega) \right| \geq \varepsilon/2\}\right) + P\left( \{\omega : \left| Y_n(\omega) - Y(\omega) \right| \geq \varepsilon/2 \} \right),
\end{align*}$$
where the last inequality comes from $P(C \cup D) \leq P(C) + P(D)$. Cheers! | Proof of Convergence in Probability | The meaning of $P$ is monotone relative to set containment is this:
If $A \subseteq B$ then $P(A)\leq P(B)$.
Now here for every $\omega\in\Omega$, we have
$$\begin{align*}
A &= \left\{\omega : \lef | Proof of Convergence in Probability
The meaning of $P$ is monotone relative to set containment is this:
If $A \subseteq B$ then $P(A)\leq P(B)$.
Now here for every $\omega\in\Omega$, we have
$$\begin{align*}
A &= \left\{\omega : \left| (X_n(\omega) + Y_n(\omega)) - ( X(\omega) + Y(\omega) ) \right| \geq \epsilon \right\}, \text{ and}\\
B &= \{ \omega : \left| X_n(\omega) - X(\omega) \right| + \left| Y_n(\omega) - Y(\omega) \right| \geq \epsilon \}.
\end{align*}$$
From your first inequality and for every $\omega\in\Omega$ and $\varepsilon >0$, we have
$$\left\{ \left| (X_n(\omega) + Y_n(\omega) - (X(\omega) + Y(\omega)) \right| \geq \varepsilon \right\} \subseteq \left\{ \left|X_n(\omega) - X(\omega)\right| + \left|Y_n(\omega) - Y(\omega)\right| \geq \varepsilon \right\}.$$
So from the statement above
$$P\left( \left| (X_n(\omega) + Y_n(\omega) - (X(\omega) + Y(\omega)) \right| \geq \varepsilon \right) \leq P\left( \left|X_n(\omega) - X(\omega)\right| + \left|Y_n(\omega) - Y(\omega)\right| \geq \varepsilon \right).$$
To show the last inequality, first we prove that for every $\omega\in\Omega$ and $\varepsilon >0$ we have
$$\begin{multline*}
\{\omega : \left|X_n(\omega) - X(\omega)\right| + \left|Y_n(\omega) - Y(\omega)\right| \geq \varepsilon \}\\
\subseteq \{\omega : \left|X_n(\omega) - X(\omega) \right| \geq \varepsilon/2\} \cup \{\omega : \left|Y_n(\omega) - Y(\omega) \right| \geq \varepsilon/2\}.
\end{multline*}$$
This can be shown by contradiction: suppose $\omega$ belongs to the set on the left-hand side but to neither set on the right-hand side, so that
$$\left|X_n(\omega) - X(\omega) \right| < \varepsilon/2 \quad \text{ and } \quad \left|Y_n(\omega) - Y(\omega) \right| < \varepsilon/2.$$
Adding these two inequalities gives $\left|X_n(\omega) - X(\omega)\right| + \left|Y_n(\omega) - Y(\omega)\right| < \varepsilon$, which contradicts the fact that $\omega$ belongs to the left-hand set.
Hence,
$$\begin{align*}
P\bigl( \{ \omega : &\left| X_n(\omega)-X(\omega) \right| + \left| Y_n(\omega) - Y(\omega) \right|\geq \varepsilon \} \bigr)\\
&\leq P\left( \{\omega : \left| X_n(\omega) - X(\omega) \right| \geq \varepsilon/2\} \cup \{\omega : \left| Y_n(\omega) - Y(\omega) \right| \geq \varepsilon/2 \} \right)\\
&\leq P\left( \{\omega : \left| X_n(\omega) - X(\omega) \right| \geq \varepsilon/2\}\right) + P\left( \{\omega : \left| Y_n(\omega) - Y(\omega) \right| \geq \varepsilon/2 \} \right),
\end{align*}$$
where the last inequality comes from $P(C \cup D) \leq P(C) + P(D)$. Cheers! | Proof of Convergence in Probability
The meaning of $P$ is monotone relative to set containment is this:
If $A \subseteq B$ then $P(A)\leq P(B)$.
Now here for every $\omega\in\Omega$, we have
$$\begin{align*}
A &= \left\{\omega : \lef |
55,287 | Proof of Convergence in Probability | If you're aware of an equivalent characterization of convergence in probability, the proof is really short. It is known that $X_n \rightarrow X$ in probability if and only if, for any subsequence $\left \{a_1, a_2, \ldots \right\}$, there exists a sub-subsequence $\left\{a'_1,a'_2,\ldots \right \}$ such that $X_{a'_n} \rightarrow X$ almost surely. This can be proven using the Borel-Cantelli Lemma.
Then, to show that $X_n + Y_n \rightarrow X + Y$ in probability, let $\left\{a_1,a_2,\ldots \right\}$ be an arbitrary subsequence. By taking two successive subsequences of the $\left\{a_n\right\}$, we find a sub-sub-subsequence $\left\{a''_n\right\}$ such that $X_{a''_n} \rightarrow X$ and $Y_{a''_n} \rightarrow Y$, both almost surely. From basic real analysis, $X_{a''_n} + Y_{a''_n} \rightarrow X + Y$ almost surely. By the characterization, we know that $X_n + Y_n \rightarrow X + Y$ in probability. | Proof of Convergence in Probability | If you're aware of an equivalent characterization of convergence in probability, the proof is really short. It is known that $X_n \rightarrow X$ in probability if and only if, for any subsequence $\l | Proof of Convergence in Probability
If you're aware of an equivalent characterization of convergence in probability, the proof is really short. It is known that $X_n \rightarrow X$ in probability if and only if, for any subsequence $\left \{a_1, a_2, \ldots \right\}$, there exists a sub-subsequence $\left\{a'_1,a'_2,\ldots \right \}$ such that $X_{a'_n} \rightarrow X$ almost surely. This can be proven using the Borel-Cantelli Lemma.
Then, to show that $X_n + Y_n \rightarrow X + Y$ in probability, let $\left\{a_1,a_2,\ldots \right\}$ be an arbitrary subsequence. By taking two successive subsequences of the $\left\{a_n\right\}$, we find a sub-sub-subsequence $\left\{a''_n\right\}$ such that $X_{a''_n} \rightarrow X$ and $Y_{a''_n} \rightarrow Y$, both almost surely. From basic real analysis, $X_{a''_n} + Y_{a''_n} \rightarrow X + Y$ almost surely. By the characterization, we know that $X_n + Y_n \rightarrow X + Y$ in probability. | Proof of Convergence in Probability
If you're aware of an equivalent characterization of convergence in probability, the proof is really short. It is known that $X_n \rightarrow X$ in probability if and only if, for any subsequence $\l |
55,288 | Vector space model: cosine similarity vs euclidean distance | To complement other answers:
Cosine similarity of $x, y$ : $\frac{\langle x, y\rangle}{\|x\|\|y\|}$
Euclidean distance (squared) between $x, y$: $\|x-y\|^2 = \|x\|^2 +\|y\|^2 - 2\langle x , y\rangle$
Assuming that $x, y$ are normalized to unit length
Cosine similarity: $\langle x , y\rangle$
Euclidean distance (squared): $2(1 - \langle x , y\rangle)$
As you can see, minimizing (square) euclidean distance is equivalent to maximizing cosine similarity if the vectors are normalized. | Vector space model: cosine similarity vs euclidean distance | To complement other answers:
Cosine similarity of $x, y$ : $\frac{\langle x, y\rangle}{\|x\|\|y\|}$
Euclidean distance (squared) between $x, y$: $\|x-y\|^2 = \|x\|^2 +\|y\|^2 - 2\langle x , y\rangle$ | Vector space model: cosine similarity vs euclidean distance
To complement other answers:
Cosine similarity of $x, y$ : $\frac{\langle x, y\rangle}{\|x\|\|y\|}$
Euclidean distance (squared) between $x, y$: $\|x-y\|^2 = \|x\|^2 +\|y\|^2 - 2\langle x , y\rangle$
Assuming that $x, y$ are normalized to unit length
Cosine similarity: $\langle x , y\rangle$
Euclidean distance (squared): $2(1 - \langle x , y\rangle)$
As you can see, minimizing (square) euclidean distance is equivalent to maximizing cosine similarity if the vectors are normalized. | Vector space model: cosine similarity vs euclidean distance
To complement other answers:
Cosine similarity of $x, y$ : $\frac{\langle x, y\rangle}{\|x\|\|y\|}$
Euclidean distance (squared) between $x, y$: $\|x-y\|^2 = \|x\|^2 +\|y\|^2 - 2\langle x , y\rangle$ |
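A small R illustration of the identity above, using two arbitrary vectors that are first normalized to unit length:
set.seed(3)
x <- rnorm(5); y <- rnorm(5)
x <- x / sqrt(sum(x^2))                   # normalize to unit length
y <- y / sqrt(sum(y^2))
cos.sim <- sum(x * y)                     # cosine similarity of unit vectors
sq.euc  <- sum((x - y)^2)                 # squared Euclidean distance
c(sq.euc, 2 * (1 - cos.sim))              # the two numbers coincide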
55,289 | Vector space model: cosine similarity vs euclidean distance | You can use the Euclidean distance, as long as you use an appropriate transformation rule, e.g:
$dist = 1 -sim$, $dist = \frac{1-sim}{sim}$, $dist = \sqrt{1-sim}$ or $dist = -\log(sim)$.
However, it is important to remember that in general a distance is not a similarity. The latter is subjectively driven (are two objects $X$ and $Y$ similar if their calculated similarity score $sim(X,Y)$ exceeds 0.85193?).
A distance, in contrast, is a real metric that satisfies a number of well-founded properties. Have a look at "Encyclopedia of Distances" | Vector space model: cosine similarity vs euclidean distance | You can use the Euclidean distance, as far as you use an appropriate transformation rule, e.g:
$dist = 1 -sim$, $dist = \frac{1-sim}{sim}$, $dist = \sqrt{1-sim}$ or $dist = -\log(sim)$.
However, it i | Vector space model: cosine similarity vs euclidean distance
You can use the Euclidean distance, as long as you use an appropriate transformation rule, e.g:
$dist = 1 -sim$, $dist = \frac{1-sim}{sim}$, $dist = \sqrt{1-sim}$ or $dist = -\log(sim)$.
However, it is important to remember that in general a distance is not a similarity. The latter is subjectively driven (are two objects $X$ and $Y$ similar if their calculated similarity score $sim(X,Y)$ exceeds 0.85193?).
A distance, in contrast, is a real metric that satisfies a number of well-founded properties. Have a look at "Encyclopedia of Distances" | Vector space model: cosine similarity vs euclidean distance
You can use the Euclidean distance, as far as you use an appropriate transformation rule, e.g:
$dist = 1 -sim$, $dist = \frac{1-sim}{sim}$, $dist = \sqrt{1-sim}$ or $dist = -\log(sim)$.
However, it i |
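For illustration, here are a few similarity values pushed through the transformation rules listed above in R; all four transformations are decreasing in sim, which is the property that matters:
sim <- c(0.99, 0.9, 0.5, 0.1)
cbind(sim,
      d1 = 1 - sim,
      d2 = (1 - sim) / sim,
      d3 = sqrt(1 - sim),
      d4 = -log(sim))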
55,290 | Vector space model: cosine similarity vs euclidean distance | If you don't normalize the vectors to be all the same length then their length will depend on the length of the document. Usually, in document classification we don't want to be biased by the document lengths. This is one reason why cosine similarity is preferred. | Vector space model: cosine similarity vs euclidean distance | If you don't normalize the vectors to be all the same length then their length will depend on the length of the document. Usually, in document classification we don't want to be biased by the document | Vector space model: cosine similarity vs euclidean distance
If you don't normalize the vectors to be all the same length then their length will depend on the length of the document. Usually, in document classification we don't want to be biased by the document lengths. This is one reason why cosine similarity is preferred. | Vector space model: cosine similarity vs euclidean distance
If you don't normalize the vectors to be all the same length then their length will depend on the length of the document. Usually, in document classification we don't want to be biased by the document |
55,291 | Normality Testing - Choose the transformation that makes the data "most normal"? | You can say a lot about a lower bound for an upper percentile, even with small amounts of data.
Suppose the 99th percentile of the true (but unknown) distribution from which $n$ values are obtained (independently) is the number $x_{0.99}$. Then the chance that $k$ or more of your data exceed $x_{0.99}$ is given by the Binomial distribution and equals
$$\sum_{j=k}^{n}\binom{n}{j}(0.01)^j(1-0.01)^{n-j}.$$
For instance, suppose the 99th percentile were less than $6$ but larger than $5.5$, which are the two largest values in the dataset (the $6$ appears three times). Then $k=3$; a quick computation shows that the chance of obtaining three or more values above the 99th percentile out of $n=20$ is a mere $43$ per million: that's really small and much smaller than the $100 - 95\%$ risk you are willing to run with $95\%$ confidence. Put another way, we can assert with as much as $100(1 - 43/1000000) = 99.9957\%$ confidence that the 99th percentile is no lower than $5.5$.
This calculation would not change under any monotonic, increasing re-expression of the data (such as the square root, logarithm, or negative reciprocal). Thus the question of data distribution is practically irrelevant.
This calculation is called a non-parametric lower tolerance limit for the upper $99$th percentile with $95\%$ confidence. There are parametric calculations that rely on knowledge (or assumptions) of the underlying distribution governing the data. However, the apparently discrete nature of these data (they occur in multiples of $1/2$) shows that any continuous distribution, such as a Normal, is at best a crude approximation. Thus it is safest--and scarcely gives up any precision--to use a method that assumes little or nothing about the distribution. That's what "nonparametric" means, and shows it is a robust procedure. For instance, if you were to lose as much as a quarter of these data, then even in the worst case (where the highest five values went missing) your conclusion would not change. (Your statements of the confidence level might vary, but they would always exceed the desired $95\%$.) | Normality Testing - Choose the transformation that makes the data "most normal"? | You can say a lot about a lower bound for an upper percentile, even with small amounts of data.
Suppose the 99th percentile of the true (but unknown) distribution from which $n$ values are obtained (i | Normality Testing - Choose the transformation that makes the data "most normal"?
You can say a lot about a lower bound for an upper percentile, even with small amounts of data.
Suppose the 99th percentile of the true (but unknown) distribution from which $n$ values are obtained (independently) is the number $x_{0.99}$. Then the chance that $k$ or more of your data exceed $x_{0.99}$ is given by the Binomial distribution and equals
$$\sum_{j=k}^{n}\binom{n}{j}(0.01)^j(1-0.01)^{n-j}.$$
For instance, suppose the 99th percentile were less than $6$ but larger than $5.5$, which are the two largest values in the dataset (the $6$ appears three times). Then $k=3$; a quick computation shows that the chance of obtaining three or more values above the 99th percentile out of $n=20$ is a mere $43$ per million: that's really small and much smaller than the $100 - 95\%$ risk you are willing to run with $95\%$ confidence. Put another way, we can assert with as much as $100(1 - 43/1000000) = 99.9957\%$ confidence that the 99th percentile is no lower than $5.5$.
This calculation would not change under any monotonic, increasing re-expression of the data (such as the square root, logarithm, or negative reciprocal). Thus the question of data distribution is practically irrelevant.
This calculation is called a non-parametric lower tolerance limit for the upper $99$th percentile with $95\%$ confidence. There are parametric calculations that rely on knowledge (or assumptions) of the underlying distribution governing the data. However, the apparently discrete nature of these data (they occur in multiples of $1/2$) shows that any continuous distribution, such as a Normal, is at best a crude approximation. Thus it is safest--and scarcely gives up any precision--to use a method that assumes little or nothing about the distribution. That's what "nonparametric" means, and shows it is a robust procedure. For instance, if you were to lose as much as a quarter of these data, then even in the worst case (where the highest five values went missing) your conclusion would not change. (Your statements of the confidence level might vary, but they would always exceed the desired $95\%$.) | Normality Testing - Choose the transformation that makes the data "most normal"?
You can say a lot about a lower bound for an upper percentile, even with small amounts of data.
Suppose the 99th percentile of the true (but unknown) distribution from which $n$ values are obtained (i |
55,292 | Normality Testing - Choose the transformation that makes the data "most normal"? | Any attempt to use the data to fit the distribution, when the choice is among 3 or more distributions, will result in at most tiny improvements over nonparametric methods, due to model uncertainty. For example if you try different distributions to get agreement with the empirical CDF the true variance of the final estimates will equal the variance of the empirical CDF. | Normality Testing - Choose the transformation that makes the data "most normal"? | Any attempt to use the data to fit the distribution, when the choice is among 3 or more distributions, will result in at most tiny improvements over nonparametric methods, due to model uncertainty. F | Normality Testing - Choose the transformation that makes the data "most normal"?
Any attempt to use the data to fit the distribution, when the choice is among 3 or more distributions, will result in at most tiny improvements over nonparametric methods, due to model uncertainty. For example if you try different distributions to get agreement with the empirical CDF the true variance of the final estimates will equal the variance of the empirical CDF. | Normality Testing - Choose the transformation that makes the data "most normal"?
Any attempt to use the data to fit the distribution, when the choice is among 3 or more distributions, will result in at most tiny improvements over nonparametric methods, due to model uncertainty. F |
55,293 | Autocorrelation and trends | The autocorrelation (acf) function summarizes the correlation at different lags and is a descriptive statistic. If there is a "trend" in the data then the acf will suggest non-stationarity. However, a non-stationary acf does not necessarily suggest a "trend". If the series is impacted by one or more level/step shifts, the acf will suggest non-stationarity (a symptom), but the cause is simply a shift in the mean at one or more points in time. "Trends" in time series can sometimes be adequately handled by incorporating a deterministic structure. There are a few forms (the software package and/or the skilled analyst needs to help here) to determine which form is the most adequate. One form is to incorporate a predictor series (x1) such as 1,2,3,4,...,n which would imply one and only one trend. Another might be to incorporate two input series (x1 and x2) where x1=1,2,3,4,...,n and x2=0,0,0,0,0,1,2,3,...,n reflecting two trends, where the second trend starts at period 6 in this example. Of course there could be multiple "trends" or breakpoints in trend.
An alternative way of incorporating a "trend" is to model the data as some variety of a differencing model of the form (1-B)Y(T) = constant + [theta(B)/phi(B)]*A(T), which suggests one and only one trend. Good analytics will suggest the "most correct" approach to an individual data set. If you have exogenous/causal/supporting/input series then the "trend" in Y could well be explained by one or more of these series. The acf is of little or no help in deciding which "trend model" is appropriate. | Autocorrelation and trends | The autocorrelation(acf) function summarizes the correlation of different lags and is a descriptive statistic. If there is a "trend" in the data then the acf will suggest non-stationarity. However a n | Autocorrelation and trends
The autocorrelation (acf) function summarizes the correlation at different lags and is a descriptive statistic. If there is a "trend" in the data then the acf will suggest non-stationarity. However, a non-stationary acf does not necessarily suggest a "trend". If the series is impacted by one or more level/step shifts, the acf will suggest non-stationarity (a symptom), but the cause is simply a shift in the mean at one or more points in time. "Trends" in time series can sometimes be adequately handled by incorporating a deterministic structure. There are a few forms (the software package and/or the skilled analyst needs to help here) to determine which form is the most adequate. One form is to incorporate a predictor series (x1) such as 1,2,3,4,...,n which would imply one and only one trend. Another might be to incorporate two input series (x1 and x2) where x1=1,2,3,4,...,n and x2=0,0,0,0,0,1,2,3,...,n reflecting two trends, where the second trend starts at period 6 in this example. Of course there could be multiple "trends" or breakpoints in trend.
An alternative way of incorporating a "trend" is to model the data as some variety of a differencing model of the form (1-B)Y(T) = constant + [theta(B)/phi(B)]*A(T), which suggests one and only one trend. Good analytics will suggest the "most correct" approach to an individual data set. If you have exogenous/causal/supporting/input series then the "trend" in Y could well be explained by one or more of these series. The acf is of little or no help in deciding which "trend model" is appropriate. | Autocorrelation and trends
The autocorrelation(acf) function summarizes the correlation of different lags and is a descriptive statistic. If there is a "trend" in the data then the acf will suggest non-stationarity. However a n |
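A brief R sketch of the two ideas above: a series with a deterministic trend produces a slowly decaying sample acf, and "broken trend" regressors like x1 and x2 can be constructed directly (the breakpoint at period 6 is simply the example used above):
set.seed(7)
n <- 100
y <- 0.3 * (1:n) + arima.sim(list(ar = 0.5), n)   # series with a deterministic trend
acf(y)                                            # decays very slowly, i.e. looks non-stationary
x1 <- 1:n                                         # single deterministic trend
x2 <- c(rep(0, 5), seq_len(n - 5))                # second trend starting at period 6
fit <- lm(y ~ x1 + x2)
summary(fit)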
55,294 | Autocorrelation and trends | The autocorrelation function (ACF) is a theoretical object related to the population moments. What happens when these moments are not finite?
The sample autocorrelation function (SACF) is a descriptive statistic and is a function of sample moments, mainly the sample mean. What is a breakpoint value for the sample mean? Is it small or large? If you know these, then you would know the dangers related to inference from these sample values.
Objects calculated from the sample always exist, though these estimates will probably diverge when the underlying population moments are not really finite or the process goes through some kind of change.
Regards,
-A | Autocorrelation and trends | Autocorrelation function (ACF) is an theoretical object related to the population moments. What happens when these moments do not exist as finite?
Sample autocorrelation function (SACF) is a descripti | Autocorrelation and trends
The autocorrelation function (ACF) is a theoretical object related to the population moments. What happens when these moments are not finite?
The sample autocorrelation function (SACF) is a descriptive statistic and is a function of sample moments, mainly the sample mean. What is a breakpoint value for the sample mean? Is it small or large? If you know these, then you would know the dangers related to inference from these sample values.
Objects calculated from the sample always exist, though these estimates will probably diverge when the underlying population moments are not really finite or the process goes through some kind of change.
Regards,
-A | Autocorrelation and trends
Autocorrelation function (ACF) is an theoretical object related to the population moments. What happens when these moments do not exist as finite?
Sample autocorrelation function (SACF) is a descripti |
55,295 | How to identify outliers and conduct robust PCA? | Robust PCA is a very active research area, and identifying and removing outliers in a sound way is quite delicate. (I've written two papers in this field, so I do know a bit about it.) While I don't know SPSS, you may be able to implement the relatively simple Algorithm (1) here.
This algorithm (not mine) has rigorous guarantees but requires only some basic computations and a "while" loop. Assuming you are searching for $d$ principal components, the basic procedure is
Compute PCA on your data,
Project your data on to the top $d$ principal components,
Throw away "at random" one of the data points whose projection is "too large", and
Repeat this "a few" times.
Everything in quotation marks is a heuristic; you can find the details in the paper.
The idea behind this procedure is that vectors whose projection after PCA is large may have affected the estimate too much, and so you may want to throw them away. It turns out that choosing the ones to throw away "at random" is actually a reasonable thing to do.
If anyone actually wants to take the time to write the SPSS code for this, I'm sure @cathy would appreciate it. | How to identify outliers and conduct robust PCA? | Robust PCA is a very active research area, and identifying and removing outliers in a sound way is quite delicate. (I've written two papers in this field, so I do know a bit about it.) While I don't k | How to identify outliers and conduct robust PCA?
Robust PCA is a very active research area, and identifying and removing outliers in a sound way is quite delicate. (I've written two papers in this field, so I do know a bit about it.) While I don't know SPSS, you may be able to implement the relatively simple Algorithm (1) here.
This algorithm (not mine) has rigorous guarantees but requires only some basic computations and a "while" loop. Assuming you are searching for $d$ principal components, the basic procedure is
Compute PCA on your data,
Project your data on to the top $d$ principal components,
Throw away "at random" one of the data points whose projection is "too large", and
Repeat this "a few" times.
Everything in quotation marks is a heuristic; you can find the details in the paper.
The idea behind this procedure is that vectors whose projection after PCA is large may have affected the estimate too much, and so you may want to throw them away. It turns out that choosing the ones to throw away "at random" is actually a reasonable thing to do.
If anyone actually wants to take the time to write the SPSS code for this, I'm sure @cathy would appreciate it. | How to identify outliers and conduct robust PCA?
Robust PCA is a very active research area, and identifying and removing outliers in a sound way is quite delicate. (I've written two papers in this field, so I do know a bit about it.) While I don't k |
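Since the rest of this document uses R rather than SPSS, here is a very rough R sketch of the heuristic as paraphrased above; the number of iterations and the "too large" cut-off are my own arbitrary choices, not those of the cited paper:
robust.pca.sketch <- function(X, d = 2, n.drop = 10) {
  X <- as.matrix(X)
  for (i in seq_len(n.drop)) {
    pc  <- prcomp(X, center = TRUE)
    len <- sqrt(rowSums(pc$x[, 1:d, drop = FALSE]^2))  # length of each projection
    big <- which(len > quantile(len, 0.95))            # "too large": arbitrary cut-off
    if (length(big) == 0) break
    drop.one <- big[sample.int(length(big), 1)]        # pick one such point at random
    X <- X[-drop.one, , drop = FALSE]                  # and throw it away
  }
  prcomp(X, center = TRUE)                             # PCA on the cleaned data
}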
55,296 | Independence of multivariate normal distribution | Indeed, for the normal distribution, uncorrelatedness implies independence.
For your first case, showing formally independence between the random vector $(X_1, X_2)$ and the scalar random variable $X_3$ can be done by showing that the conditional on $X_3$ mean and conditional covariance matrix of the random vector $(X_1, X_2)$ is equal to the unconditional mean and covariance matrix. Using notation as stated in this Wikipedia article,
$\Sigma$ is the $3\times 3$ covariance matrix of the joint distribution of your three variables. It is then unequally partitioned into sub-matrices. Denoting $v_{ij}$ the elements of $\Sigma$, we have
$$\Sigma_{11} =\left[\begin{matrix} v_{11} &v_{12} \\ v_{21} & v_{22}
\end{matrix}\right] = \left[\begin{matrix} 4 &-1 \\ -1 & 5
\end{matrix}\right] $$
$$\Sigma_{12} =\left[\begin{matrix} v_{13} \\v_{23}
\end{matrix}\right] = \left[\begin{matrix} 0 \\0
\end{matrix}\right] \qquad \Sigma_{21} = \Sigma_{12}'$$
$$\Sigma_{22} = v_{33} = \sigma_3^2 = 2 $$
Then the conditional expectation vector-function $E\Big[(X_1,X_2)\mid X_3\Big]$ is
$$E\Big[(X_1,X_2)\mid X_3\Big] = \left[\begin{matrix} \mu_1 \\ \mu_2\end{matrix} \right] + \Sigma_{12}\Sigma^{-1}_{22}(X_3 - \mu_3) $$
which is equal to the unconditional mean vector since $\Sigma_{12} = \mathbb 0$. The analogous result can be easily shown to hold for the covariance matrix.
As for your second question, one can proceed as follows:
Define
$Y_1 = X_1 - X_2$ and $Y_2 = X_1 + X_2 - X_3$. The $Y$'s are still normally distributed random variables. So independence is here too equivalent to uncorrelatedness, i.e. zero covariance. So all you have to do is calculate
$$\operatorname{Cov}(Y_1,Y_2) = E(Y_1Y_2) - E(Y_1)E(Y_2)$$
and see whether it equals zero (normally, it won't). | Independence of multivariate normal distribution | Indeed, for the normal distribution, uncorrelatedness implies independence.
For your first case, showing formally independence between the random vector $(X_1, X_2)$ and the scalar random variable $X_ | Independence of multivariate normal distribution
Indeed, for the normal distribution, uncorrelatedness implies independence.
For your first case, showing formally independence between the random vector $(X_1, X_2)$ and the scalar random variable $X_3$ can be done by showing that the conditional on $X_3$ mean and conditional covariance matrix of the random vector $(X_1, X_2)$ is equal to the unconditional mean and covariance matrix. Using notation as stated in this Wikipedia article,
$\Sigma$ is the $3\times 3$ covariance matrix of the joint distribution of your three variables. It is then unequally partitioned into sub-matrices. Denoting $v_{ij}$ the elements of $\Sigma$, we have
$$\Sigma_{11} =\left[\begin{matrix} v_{11} &v_{12} \\ v_{21} & v_{22}
\end{matrix}\right] = \left[\begin{matrix} 4 &-1 \\ -1 & 5
\end{matrix}\right] $$
$$\Sigma_{12} =\left[\begin{matrix} v_{13} \\v_{23}
\end{matrix}\right] = \left[\begin{matrix} 0 \\0
\end{matrix}\right] \qquad \Sigma_{21} = \Sigma_{12}'$$
$$\Sigma_{22} = v_{33} = \sigma_3^2 = 2 $$
Then the conditional expectation vector-function $E\Big[(X_1,X_2)\mid X_3\Big]$ is
$$E\Big[(X_1,X_2)\mid X_3\Big] = \left[\begin{matrix} \mu_1 \\ \mu_2\end{matrix} \right] + \Sigma_{12}\Sigma^{-1}_{22}(X_3 - \mu_3) $$
which is equal to the unconditional mean vector since $\Sigma_{12} = \mathbb 0$. The analogous result can be easily shown to hold for the covariance matrix.
As for your second question, one can proceed as follows:
Define
$Y_1 = X_1 - X_2$ and $Y_2 = X_1 + X_2 - X_3$. The $Y$'s are still normally distributed random variables. So independence is here too equivalent to uncorrelatedness, i.e. zero covariance. So all you have to do is calculate
$$\operatorname{Cov}(Y_1,Y_2) = E(Y_1Y_2) - E(Y_1)E(Y_2)$$
and see whether it equals zero (normally, it won't). | Independence of multivariate normal distribution
Indeed, for the normal distribution, uncorrelatedness implies independence.
For your first case, showing formally independence between the random vector $(X_1, X_2)$ and the scalar random variable $X_ |
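For the second part, the covariance can be read off directly from the covariance matrix given above (variances 4, 5 and 2, with $\operatorname{Cov}(X_1,X_2) = -1$ and the remaining covariances zero), for example in R:
Sigma <- matrix(c( 4, -1, 0,
                  -1,  5, 0,
                   0,  0, 2), nrow = 3, byrow = TRUE)
a1 <- c(1, -1,  0)         # Y1 = X1 - X2
a2 <- c(1,  1, -1)         # Y2 = X1 + X2 - X3
t(a1) %*% Sigma %*% a2     # Cov(Y1, Y2) = -1, so Y1 and Y2 are not independent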
55,297 | How can a combination of model parameters have a lower standard error than each individual coefficient? | Denote $\hat b_1$, $\hat b_2$ two estimated coefficients/estimator functions. These are random variables, and have been estimated using the same data (same realizations of random variables). So naturally, they are not independent. Apart from extreme cases of totally non-linear dependence, this means that their co-variance will not be zero.
Define now their sum, which is another random variable, say,
$$b_s = \hat b_1+\hat b_2$$
The variance of $b_s$ is
$$\text {Var} (b_s) = \text {Var} (\hat b_1+\hat b_2) = \text {Var} (\hat b_1) + \text {Var} (\hat b_2) + 2 \text {Cov}(\hat b_1,\hat b_2)$$
If $\text {Cov}(\hat b_1,\hat b_2)$ is negative then $\text {Var} (b_s) < \text {Var} (\hat b_1)+ \text {Var} (\hat b_2)$.
I leave for you to think about the intuition behind this mathematical result. | How can a combination of model parameters have a lower standard error than each individual coefficie | Denote $\hat b_1$, $\hat b_2$ two estimated coefficients/estimator functions. These are random variables, and have been estimated using the same data (same realizations of random variables). So natura | How can a combination of model parameters have a lower standard error than each individual coefficient?
Denote $\hat b_1$, $\hat b_2$ two estimated coefficients/estimator functions. These are random variables, and have been estimated using the same data (same realizations of random variables). So naturally, they are not independent. Apart from extreme cases of totally non-linear dependence, this means that their co-variance will not be zero.
Define now their sum, which is another random variable, say,
$$b_s = \hat b_1+\hat b_2$$
The variance of $b_s$ is
$$\text {Var} (b_s) = \text {Var} (\hat b_1+\hat b_2) = \text {Var} (\hat b_1) + \text {Var} (\hat b_2) + 2 \text {Cov}(\hat b_1,\hat b_2)$$
If $\text {Cov}(\hat b_1,\hat b_2)$ is negative then $\text {Var} (b_s) < \text {Var} (\hat b_1)+ \text {Var} (\hat b_2)$.
I leave for you to think about the intuition behind this mathematical result. | How can a combination of model parameters have a lower standard error than each individual coefficie
Denote $\hat b_1$, $\hat b_2$ two estimated coefficients/estimator functions. These are random variables, and have been estimated using the same data (same realizations of random variables). So natura |
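In R the same calculation can be done directly from the estimated covariance matrix of the coefficients via vcov(); the data below are simulated with strongly collinear predictors so that the covariance term is clearly negative:
set.seed(11)
n  <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.2)             # nearly collinear with x1
y  <- 1 + x1 + x2 + rnorm(n)
fit <- lm(y ~ x1 + x2)
V <- vcov(fit)                            # covariance matrix of the estimates
se.b1  <- sqrt(V["x1", "x1"])
se.b2  <- sqrt(V["x2", "x2"])
se.sum <- sqrt(V["x1", "x1"] + V["x2", "x2"] + 2 * V["x1", "x2"])
c(se.b1 = se.b1, se.b2 = se.b2, se.sum = se.sum)   # se.sum is much smaller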
55,298 | Methods for randomly allocating people between active and placebo in clinical trials | If you have good reasons to think that these three factors influence the outcome, you may want to block them. Otherwise, you could still use blocking (e.g., Imai, King, & Stuart, 2008), but it's probably not that important in that case.
Search for blocking on this site, or Wikipedia, or ...
Imai, K., King, G., & Stuart, E. A. (2008). Misunderstandings between experimentalists and observationalists about causal inference. Journal of the Royal Statistical Society, 171, 481–502. doi: 10.1111/j.1467-985X.2007.00527.x | Methods for randomly allocating people between active and placebo in clinical trials | If you have good reasons to think that these three factors influence the outcome, you may want to block them. Otherwise, you could still use blocking (e.g., Imai, King, & Stuart, 2008), but it's proba | Methods for randomly allocating people between active and placebo in clinical trials
If you have good reasons to think that these three factors influence the outcome, you may want to block them. Otherwise, you could still use blocking (e.g., Imai, King, & Stuart, 2008), but it's probably not that important in that case.
Search for blocking on this site, or Wikipedia, or ...
Imai, K., King, G., & Stuart, E. A. (2008). Misunderstandings between experimentalists and observationalists about causal inference. Journal of the Royal Statistical Society, 171, 481–502. doi: 10.1111/j.1467-985X.2007.00527.x | Methods for randomly allocating people between active and placebo in clinical trials
If you have good reasons to think that these three factors influence the outcome, you may want to block them. Otherwise, you could still use blocking (e.g., Imai, King, & Stuart, 2008), but it's proba |
55,299 | Methods for randomly allocating people between active and placebo in clinical trials | Assuming all three factors are prognostic (i.e. strongly related to the primary outcome) the most common methods of balancing them within the treatment groups are:
Random permuted blocks within strata
Minimisation
In both cases you must categorise your age variable - it's usual to create 2 categories based on median age in your patient population unless you have other pre-defined cut-points. Let's assume you categorise into 65-75 and 76-90 years.
The potential problem with using blocks within strata is that you have 8 strata (2 age-groups x 2 sexes x 2 APOE states) and only 120 patients. In the best case assuming you recruit 50% females and 50% APOE+, you would have 15 patients per strata. In reality you're likely to have some strata with fewer patients and others with more. If, for instance, being APOE+ is rare (5%) then the APOE+ strata (65-75/Male/APOE+, 76-90/Male/APOE+, 65-75/Female/APOE+, 76-90/Female/APOE+) are likely to have only 1 or 2 patients each. This would mean you don't have enough patients within these strata to complete any blocks, so the blocking doesn't get a chance to work and balance the treatments. You can choose very small blocks (size 2) to combat this, but it makes the allocation sequence more predictable which is undesirable.
Minimisation doesn't suffer from this problem with sparse strata, but is complicated to implement. You would need to use some specialist software or an online randomisation service if you choose minimisation. | Methods for randomly allocating people between active and placebo in clinical trials | Assuming all three factors are prognostic (i.e. strongly related to the primary outcome) the most common methods of balancing them within the treatment groups are:
Random permuted blocks within strat | Methods for randomly allocating people between active and placebo in clinical trials
Assuming all three factors are prognostic (i.e. strongly related to the primary outcome) the most common methods of balancing them within the treatment groups are:
Random permuted blocks within strata
Minimisation
In both cases you must categorise your age variable - it's usual to create 2 categories based on median age in your patient population unless you have other pre-defined cut-points. Let's assume you categorise into 65-75 and 76-90 years.
The potential problem with using blocks within strata is that you have 8 strata (2 age-groups x 2 sexes x 2 APOE states) and only 120 patients. In the best case assuming you recruit 50% females and 50% APOE+, you would have 15 patients per strata. In reality you're likely to have some strata with fewer patients and others with more. If, for instance, being APOE+ is rare (5%) then the APOE+ strata (65-75/Male/APOE+, 76-90/Male/APOE+, 65-75/Female/APOE+, 76-90/Female/APOE+) are likely to have only 1 or 2 patients each. This would mean you don't have enough patients within these strata to complete any blocks, so the blocking doesn't get a chance to work and balance the treatments. You can choose very small blocks (size 2) to combat this, but it makes the allocation sequence more predictable which is undesirable.
Minimisation doesn't suffer from this problem with sparse strata, but is complicated to implement. You would need to use some specialist software or an online randomisation service if you choose minimisation. | Methods for randomly allocating people between active and placebo in clinical trials
Assuming all three factors are prognostic (i.e. strongly related to the primary outcome) the most common methods of balancing them within the treatment groups are:
Random permuted blocks within strat |
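A minimal R sketch of random permuted blocks within a single stratum (block size 4, two arms); looping this over the 8 strata gives the full allocation list. This is only an illustration, not a substitute for proper randomisation software or an online service:
permuted.blocks <- function(n.blocks, block.size = 4,
                            arms = c("Active", "Placebo")) {
  one.block <- function() sample(rep(arms, each = block.size / length(arms)))
  unlist(replicate(n.blocks, one.block(), simplify = FALSE))
}
set.seed(2015)
permuted.blocks(n.blocks = 4)             # allocations for 16 patients in one stratum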
55,300 | To what extent can statistics improve patient's treatment? | @user32240, first of all, I am sorry for your loss. It is quite difficult and painful to watch people we love fail in treatment or watch a friend's family member fail in treatment. I think we are a pattern seeking species and that trait is part of what helped us evolve. For example, we were able to recognize a pattern in weather that translated into knowing when to best protect ourselves from floods. Patients will seek a pattern and blame the event closest to the change. This is why the scientific method developed, to ensure that we weren't exposing patients to risky procedures simply due to anecdotal evidence.
On an individual basis, we might find someone's symptoms are exacerbated by an irritant or relieved by a treatment, but it is only after multiple exposure or withdrawal that we can comfortably arrive at a conclusion of association. It seems impossible to find anything meaningful with a sample size of 1 and no repeated exposures. Sometimes the best we can do is to not lose that data and combine it with the experience with other patients. Perhaps in the future someone will be helped. | To what extent can statistics improve patient's treatment? | @user32240, first of all, I am sorry for your loss. It is quite difficult and painful to watch people we love fail in treatment or watch a friend's family member fail in treatment. I think we are a | To what extent can statistics improve patient's treatment?
@user32240, first of all, I am sorry for your loss. It is quite difficult and painful to watch people we love fail in treatment or watch a friend's family member fail in treatment. I think we are a pattern seeking species and that trait is part of what helped us evolve. For example, we were able to recognize a pattern in weather that translated into knowing when to best protect ourselves from floods. Patients will seek a pattern and blame the event closest to the change. This is why the scientific method developed, to ensure that we weren't exposing patients to risky procedures simply due to anecdotal evidence.
On an individual basis, we might find someone's symptoms are exacerbated by an irritant or relieved by a treatment, but it is only after multiple exposure or withdrawal that we can comfortably arrive at a conclusion of association. It seems impossible to find anything meaningful with a sample size of 1 and no repeated exposures. Sometimes the best we can do is to not lose that data and combine it with the experience with other patients. Perhaps in the future someone will be helped. | To what extent can statistics improve patient's treatment?
@user32240, first of all, I am sorry for your loss. It is quite difficult and painful to watch people we love fail in treatment or watch a friend's family member fail in treatment. I think we are a |