idx | question | answer
---|---|---
49,601 | Probability of two people meeting | The side of the square is one, so its area is $1 \times 1 = 1$.
Each leg of the upper white right-angled triangle is $1 - w_1$, so its area is $(1-w_1)(1-w_1)/2$.
Similarly for the bottom triangle.
So the grey area = area of the square minus the two white triangles, which gives formula $P2$.
Finally, note that each 'point' in the square is equally likely, so the area of the 'valid' points (the grey area) divided by the area of 'all' points (the whole square) is the probability of a 'valid' event (i.e. the two persons meeting). Since the area of the square is 1, the result is again formula $P2$.
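A quick numerical check of this argument, as a minimal R sketch under the usual assumptions (arrival times uniform on a 1-hour window, each person waiting 10 minutes, so $w_1 = 1/6$):
# Monte Carlo check of the geometric argument above.
set.seed(1)
w1 <- 1/6
x <- runif(1e6)                # arrival time of person A
y <- runif(1e6)                # arrival time of person B
mean(abs(x - y) <= w1)         # simulated probability of meeting
1 - 2 * (1 - w1)^2 / 2         # square minus the two triangles, i.e. formula P2 (11/36)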
49,602 | Probability of two people meeting | First of all, this is a very good example of an application of geometric probability; it involves a continuous distribution.
The idea behind the 1×1 square is as follows. Imagine a square with both axes representing time, each running over one hour, so the square covers 1 hour × 1 hour. Let the two people be A and B. If A arrives at time t = 0 he waits for 10 minutes; if he arrives at t = 0.001 seconds, he waits until 10 minutes 0.001 seconds; similarly, if he arrives at t = 1 minute he waits until 11 minutes, and so on. Associate A with the horizontal time axis and B with the vertical time axis. Now imagine a vertical slot of 10 minutes moving along the x-axis as time passes; to be clear, the slot is a vertical bar moving along the x-axis with time. So A arrives at some instant within the 1-hour frame and waits for 10 minutes.
The same holds for B: this time consider a horizontal bar of 10 minutes moving upwards as time passes. This means B also arrives at some instant within the hour and waits for 10 minutes.
Now the task is to see where the two people can meet within this 1 hour × 1 hour frame.
When we move the horizontal and vertical bars together we get an area that is common to both bars; this represents the times at which they are together. The common region obtained by moving the two bars is the diagonal band you can see in one of the links above.
Finding the probability is now easy, since we know the favourable area and the total area:
Probability = favourable area / total area
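A minimal R sketch of the area argument, assuming a 10-minute wait in a 1-hour window, so the favourable region is the diagonal band $|x - y| \le 1/6$ inside the unit square:
# Width of the diagonal band at horizontal position x, integrated over [0, 1];
# the total area of the square is 1, so the ratio is the probability.
band_width <- function(x, w = 1/6) pmin(x + w, 1) - pmax(x - w, 0)
favourable <- integrate(band_width, 0, 1)$value
favourable / 1                  # approximately 11/36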
49,603 | Probability of two people meeting | Besides the geometric interpretation, the probability that the two people meet within the hour is given by the probability that the second person arrives in a suitable interval ($\pm 10$ minutes), conditioned on the time of arrival of the first person and averaged over that arrival time:
$$p(\text{meet}) = \int p(y \in \Delta T_x \mid x) \, p(x) \, dx = \int p(y \in \Delta T_x) \, p(x) \, dx,$$
where we can assume independence between the arrival times $x$ and $y$, and we define $\Delta T_x$ as the time window suitable for meeting:
$$\Delta T_x = [\max(0, x-10), \min(x+10, 60)].$$
As depicted in the figure, the length of $\Delta T_x$ is symmetric and constant except in the first and last 10 minutes of the hour, where it varies linearly.
Putting it all together, and splitting the range of the first arrival time $x$ into three intervals (0–10 min, 10–50 min, 50–60 min), the overall probability is
$$p(\text{meet}) = \frac{1}{60}\int_0^{10}\frac{10+x}{60}\,dx + \frac{2}{3}\cdot\frac{20}{60} + \frac{1}{60}\int_{50}^{60} \frac{70-x}{60}\,dx = \\
= \frac{2}{60}\int_0^{10} \frac{10+x}{60}\,dx + \frac{2}{3}\cdot\frac{20}{60} = \frac{1}{30}\cdot\frac{150}{60} + \frac{2}{9} = \frac{1}{12} + \frac{2}{9} = \frac{11}{36}.$$
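A numerical check of this decomposition, as a small R sketch (times in minutes, uniform arrivals over 60 minutes, 10-minute wait):
# |Delta T_x| in minutes, integrated against the uniform density 1/60 of x,
# piece by piece over the three intervals used above.
len <- function(x) pmin(x + 10, 60) - pmax(x - 10, 0)
piece <- function(a, b) integrate(function(x) len(x) / 60 * 1 / 60, a, b)$value
piece(0, 10) + piece(10, 50) + piece(50, 60)   # approximately 11/36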
49,604 | Probability of two people meeting | First of all, we can count how many different ways the first person can arrive, i.e.,
if he arrives at 3:00 pm he leaves at 3:10 pm; similarly
3:01 pm to 3:11 pm,
3:02 pm to 3:12 pm, and so on,
...
3:50 pm to 4:00 pm.
Therefore, counting the combinations above, he can arrive in 51 ways, and similarly the second person can also arrive in 51 ways.
So the number of combinations of arrivals of the two persons is $51 \times 51$, and among these there are 51 ways in which both arrive at the same time.
Therefore, the probability of arriving at the same time is $\frac{51}{51 \times 51}$, hence the probability is $\frac{1}{51}$.
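A literal enumeration of this counting argument in R; it only checks the arithmetic of the 51 × 51 grid as stated above, under the assumption that arrivals happen on whole minutes from 3:00 to 3:50:
# Count the pairs of whole-minute arrival slots in which both arrive at the
# same time, out of all 51 * 51 possible pairs.
slots <- 0:50
pairs <- expand.grid(a = slots, b = slots)
mean(pairs$a == pairs$b)        # 51 / (51 * 51) = 1/51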
49,605 | How to understand the plotting of the cox.zph function in R? | When interpreting the output of cox.zph it is just as much (or even more) the "flatness" of the line as the straightness of the line that is important. If the line is straight but slanted upward, it implies non-proportionality in the form of a rising hazard ratio over time. See Therneau and Grambsch's text in their chapter on "Functional Form".
Regarding the values ... the estimation and inferences are all on the log-hazard scale and 0.46 looks about right for an estimate of the mean value of that plotted line.
49,606 | How to understand the plotting of the cox.zph function in R? | The curve is a natural spline fit (by default, with 4 degrees of freedom) of the time-varying estimates of beta (the log of the hazard ratio). If that line is fairly flat and straight, then proportionality is supported. The dashed lines are confidence intervals at two standard errors. See the help pages for cox.zph and plot.cox.zph for more information.
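A minimal sketch of producing and reading such a plot, assuming the survival package and its built-in lung data:
# Fit a Cox model, test the proportional-hazards assumption, and plot the
# smoothed time-varying coefficients with their dashed +/- 2 SE bands.
library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
zp  <- cox.zph(fit)
zp               # per-covariate and global tests of proportional hazards
plot(zp)         # one panel per covariate: spline fit of beta(t), roughly flat if PH holds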
49,607 | What is the most computationally efficient way to sample from an unnormalized density? | First of all, $P(\theta,D)=P(D|\theta)P(\theta)$ and not $P(\theta|D)$. Perhaps it is a typo, since you refer to it as an un-normalized version. Secondly, you may not need to run two rejection sampling algorithms, since the prior $P(\theta)$ can usually be sampled directly and you can then accept or reject using $P(D|\theta)$. What is the dimension of $\theta$? Only if you are interested in the joint (or marginals, expectations, etc.) of a many-dimensional distribution does Gibbs make sense. I think Metropolis-Hastings might be beneficial if direct sampling from $P(\theta)$ with rejections based on $P(D|\theta)$ leads to a very low overall acceptance rate.
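A minimal R sketch of this prior-as-proposal rejection sampler, under assumed toy choices: prior $\theta \sim N(0,1)$, a Bernoulli likelihood with success probability plogis($\theta$), and hypothetical data D.
# Accept a prior draw theta with probability P(D|theta)/M, where M bounds the
# likelihood; accepted draws are samples from the (unnormalized) posterior.
set.seed(1)
D <- c(1, 1, 0, 1, 0, 1, 1, 1, 0, 1)
loglik <- function(theta) sum(dbinom(D, 1, plogis(theta), log = TRUE))
logM   <- loglik(optimize(loglik, c(-10, 10), maximum = TRUE)$maximum)
draws  <- replicate(20000, {
  theta <- rnorm(1)                                   # sample from the prior
  if (log(runif(1)) < loglik(theta) - logM) theta else NA
})
posterior <- draws[!is.na(draws)]
length(posterior) / length(draws)                     # overall acceptance rate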
There are some symbolic math capabilities in Mathematica and SymPy that I know of.
49,608 | What is the most computationally efficient way to sample from an unnormalized density? | It all depends on three aspects of your system:
the number of modes
dimensionality
the correlation structure of $\theta$
If you expect all parameters to have a unique solution and your posterior to be unimodal, sampling is quite easy with all the methods you cited, and I wouldn't really bother looking further, since the chain will always end up around the global maximum of the posterior.
If $\theta$ has lots of coordinates, slice sampling might still work, although I don't know that method really well. However, being a form of Gibbs sampling, I have the feeling that slice sampling has the following drawback. Reading Neal's 2003 article, page 712, slice sampling is performed in the following way (a minimal R sketch follows the three steps):
(a) Draw a real value, $y$, uniformly from $(0, f(x_0))$, thereby defining a horizontal
“slice”: $S = \{x : y < f(x)\}$. Note that $x_0$ is always within $S$.
(b) Find an interval, $I = (L, R)$, around $x_0$ that contains all, or much, of the
slice.
(c) Draw the new point, $x_1$, from the part of the slice within this interval.
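A minimal univariate sketch of these three steps in R, with a simple stepping-out and shrinkage scheme and a standard normal target $f$ (all choices here are illustrative):
f <- function(x) exp(-x^2 / 2)              # unnormalized target density
slice_step <- function(x0, w = 1) {
  y <- runif(1, 0, f(x0))                   # (a) slice height
  L <- x0 - runif(1, 0, w); R <- L + w      # (b) interval around x0 ...
  while (f(L) > y) L <- L - w               #     ... stepped out to cover the slice
  while (f(R) > y) R <- R + w
  repeat {                                  # (c) sample within the interval,
    x1 <- runif(1, L, R)                    #     shrinking it on rejection
    if (f(x1) > y) return(x1)
    if (x1 < x0) L <- x1 else R <- x1
  }
}
x <- numeric(5000); x[1] <- 0
for (i in 2:5000) x[i] <- slice_step(x[i - 1])
c(mean(x), sd(x))                           # should be close to 0 and 1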
If I'm not mistaken, step (b) could actually be very painful if the coordinates of $\theta$ (in Neal's notation, the $x_i$) are highly correlated. You would end up with a very small interval $I$, and convergence would be slow.
I am personally a big fan of hybrid Monte Carlo (Duane et al. 1987), which is Monte Carlo combined with molecular dynamics (e.g. overrelaxation; see also the end of Neal's paper). It has the advantage that you propose concerted changes of $\theta$'s coordinates, since they are all modified at the same time. It comes with additional parameters you need to tune, but I believe it's really powerful.
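A minimal sketch in R of one hybrid/Hamiltonian Monte Carlo update, for a hypothetical strongly correlated 2-d Gaussian target; here U is the negative log-density, grad_U its gradient, and the step size and path length are illustrative choices.
Sigma_inv <- solve(matrix(c(1, 0.95, 0.95, 1), 2))
U      <- function(q) 0.5 * sum(q * (Sigma_inv %*% q))
grad_U <- function(q) as.vector(Sigma_inv %*% q)
hmc_step <- function(q, eps = 0.1, L = 20) {
  p <- rnorm(length(q))                            # fresh momenta
  q_new <- q; p_new <- p - eps * grad_U(q) / 2     # leapfrog: half momentum step
  for (l in 1:L) {
    q_new <- q_new + eps * p_new                   # full position step
    if (l < L) p_new <- p_new - eps * grad_U(q_new)
  }
  p_new <- p_new - eps * grad_U(q_new) / 2         # final half momentum step
  accept <- log(runif(1)) < U(q) + sum(p^2) / 2 - U(q_new) - sum(p_new^2) / 2
  if (accept) q_new else q                         # Metropolis correction
}
q <- c(0, 0)
for (i in 1:1000) q <- hmc_step(q)                 # all coordinates move together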
EDIT: Here are the references. Since it's copyrighted material, I link to the journal pages where you can try to download the papers.
This one is free
This one needs a paid subscription, but you can find it on Google if you look for it.
49,609 | How to quantify correlation stability? | You might want to compare constant conditional correlation with dynamic conditional correlation. In R, the ccgarch package will be helpful. In Matlab, Kevin Sheppard has an implementation of DCC.
49,610 | How to quantify correlation stability? | You could start with a simple 'rolling' analysis of the correlation to see how stable it is over time. Here is an example in R:
#Get Data
require(quantmod)
getSymbols(c('SPY','EEM'))
#Rolling Correlation (30 days)
require(PerformanceAnalytics)
chart.RollingCorrelation(Cl(SPY), Cl(EEM), width=30)
49,611 | How does "ward" clustering (in R's hclust function) work? | The distance between two clusters is calculated using the Lance–Williams update formula; see the Wikipedia entry. It holds that
$$
\tfrac{2}{3}\,\lvert 2-3\rvert + \tfrac{2}{3}\,\lvert 1-3\rvert - \tfrac{1}{3}\cdot 1 = 1.666667
$$
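A quick check against hclust itself, assuming the example consists of the three points 1, 2, 3 on a line (which matches the arithmetic above); in recent versions of R the classic "ward" criterion applied to unsquared dissimilarities is called "ward.D".
# The second merge height should equal the Lance-Williams value 5/3 above.
h <- hclust(dist(c(1, 2, 3)), method = "ward.D")
h$height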
49,612 | How does "ward" clustering (in R's hclust function) work? | It is in fact (in words) the absolute distance from the extreme value to the overall mean, plus two times the absolute distance from the mean of the two moderate values to the overall mean, minus a third of the absolute distance from one of the moderate values to the mean of the two moderate values, minus a third of the absolute distance from the other moderate value to the mean of the two moderate values.
Try this with
plot(hclust(dist(c(0,18,126)),method = "ward"))
and the absolute distance from 126 to 48, plus twice the absolute distance from 9 to 48, minus a third of the absolute distance from 18 to 9, minus a third of the absolute distance from 0 to 9, gives $78 + 2\times 39 - 9/3 - 9/3 = 150$.
49,613 | How to simplify a stretched exponential fit? | The first thing to do, if possible, is to take care of the heteroscedasticity. Notice how the spread of the residuals consistently increases with the fit: in fact, the spread seems to increase almost quadratically with larger fit.
A standard cure is to return to the original response ($\log(xy)$) and apply a strong transformation, such as a logarithm or even a reciprocal: something in that range is suggested by this pattern of heteroscedasticity. Then redo the fitting and recheck the residuals.
It's a good idea to fit lines by eye, using graphs of the transformed $xy$ against $z$ (or $\log(z)$). This usually reveals more than any amount of manipulating a regression routine. Once you have a suitable model, you can finally use least squares (or robust regression) to produce a final fit.
In this instance you might also want to explore the relationships among $x$ and $z$ and $y$ and $z$ separately to see whether just one of $x$, $y$ is causing the sudden change in slope between 2.9 and 3.6. The change clearly is not quadratic: both "limbs" of the residual plot are linear. One way to model this change--if it persists after you have dealt with the heteroscedasticity--is with a changepoint model that posits one value of the slope $\beta_1$ for, say, $z \le 3.2$, and a different value for $z \gt 3.2$.
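A small R sketch of the changepoint idea with simulated (hypothetical) data: a hinge term lets the slope change at $z = 3.2$ while keeping the fit continuous.
set.seed(1)
z <- runif(200, 2.5, 4)
resp <- ifelse(z <= 3.2, 1 + 0.5 * z, 1 + 0.5 * 3.2 + 2 * (z - 3.2)) + rnorm(200, 0, 0.1)
fit <- lm(resp ~ z + I(pmax(z - 3.2, 0)))   # broken-stick parameterization
coef(fit)                                   # slope below 3.2, plus the change in slope above it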
Finally, in Monte-Carlo simulations you have full control and you know exactly the mechanism that produces the responses. It can be useful to subject this to some analysis to find out just what the correct relationship among the triples ought to be.
49,614 | X-mean algorithm BIC calculation question | Let the clusters be indexed by $j = 1, \ldots, K$ with $K_j \gt 0$ points in cluster $j$. Let $\mu_j$ (no parentheses around the subscript) designate the mean of cluster $j$. Then, because by definition $\mu_{(i)}$ is the mean of whichever cluster $x_i$ belongs to, we can group the terms in the summation by cluster:
$$\eqalign{
\sigma^2 &= \frac{1}{R-K}\sum_{i}(x_i - \mu_{(i)})^2 \\
&= \frac{1}{R-K}\sum_{j=1}^K\sum_{k=1}^{K_j}(x_k - \mu_j)^2 \\
&= \frac{1}{R-K}\sum_{j=1}^K K_j \frac{1}{K_j}\sum_{k=1}^{K_j}(x_k - \mu_j)^2 \\
&= \frac{1}{R-K}\sum_{j=1}^K K_j \sigma_j^2
},$$
with $\sigma_j^2$ being the variance within cluster $j$ (where we must use $K_j$ instead of $K_j-1$ in the denominators to handle singleton clusters). I believe this is what you were expecting.
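A numerical illustration of this grouping identity in R, with hypothetical one-dimensional data split into $K = 3$ clusters by k-means:
set.seed(1)
x  <- c(rnorm(20, 0), rnorm(15, 5), rnorm(10, 12))
km <- kmeans(x, centers = 3)
R  <- length(x); K <- 3
lhs <- sum((x - km$centers[km$cluster])^2) / (R - K)    # pooled form
Kj  <- tabulate(km$cluster)
sj2 <- sapply(1:K, function(j) mean((x[km$cluster == j] - km$centers[j])^2))
rhs <- sum(Kj * sj2) / (R - K)                          # cluster-by-cluster form
c(lhs, rhs)                                             # identical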
49,615 | Interpret t-values when not assuming normal distribution of the error term | If the residuals are not normal (and note that this assumption concerns the theoretical error terms rather than the observed residuals), but not overly skewed or plagued by outliers, then the Central Limit Theorem applies and the inference on the slopes (t-tests, confidence intervals) will be approximately correct. The quality of the approximation depends on the sample size and on the degree and type of non-normality in the residuals.
The CLT works fine for the inference on the slopes, but does not apply to prediction intervals for new data.
If you're not happy with the CLT argument (small sample sizes, skewness, just not sure, want a second opinion, want to convince a skeptic, etc.), then you can use bootstrap or permutation methods, which do not depend on the normality assumption.
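A minimal sketch of the bootstrap route in R, with hypothetical skewed errors (case resampling, no normality assumption):
set.seed(1)
n <- 50
x <- runif(n); y <- 1 + 2 * x + (rexp(n) - 1)            # skewed, mean-zero errors
d <- data.frame(x, y)
slope <- function(d, i) coef(lm(y ~ x, data = d[i, ]))[2]
boot_slopes <- replicate(2000, slope(d, sample(n, replace = TRUE)))
quantile(boot_slopes, c(0.025, 0.975))                   # percentile 95% CI for the slope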
49,616 | Interpret t-values when not assuming normal distribution of the error term | If the errors are not normally distributed, asymptotic results can be used. Suppose your model is
$$y_i=x_i'\beta+\varepsilon_i$$
where $(y_i,x_i',\varepsilon_i)$, $i=1,...,n$ is an iid sample. Assume
\begin{align*}
E(\varepsilon_i|x_i)&=0 \\
E(\varepsilon_i^2|x_i)&=\sigma^2
\end{align*}
and
$$\operatorname{rank}(E x_i x_i')=K,$$
where $K$ is the number of coefficients. Then the usual OLS estimate $\hat\beta$ is asymptotically normal:
$$\sqrt{n}(\hat\beta-\beta)\to N\left(0,\sigma^2\left[E(x_ix_i')\right]^{-1}\right).$$
The practical implication of this result is that the usual t-statistics become z-statistics, i.e. their distribution is approximately normal instead of Student's t. So you can interpret the t-statistics as usual; only the p-values should be computed from the normal distribution.
Note that since this result is asymptotic, it may be a poor approximation for small sample sizes. The assumptions used can also be relaxed.
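A small simulation sketch in R, with exponential (hence non-normal) errors, checking that the usual slope statistic is well calibrated for a moderately large $n$:
set.seed(1)
n <- 200; beta <- 2
zstat <- replicate(2000, {
  x <- runif(n); y <- 1 + beta * x + (rexp(n) - 1)       # skewed, mean-zero errors
  s <- summary(lm(y ~ x))$coefficients
  (s[2, 1] - beta) / s[2, 2]                             # (estimate - truth) / std. error
})
mean(abs(zstat) > qnorm(0.975))                          # rejection rate, close to 0.05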
49,617 | How to combine two independent repeated experiments with different success probabilities? | Interesting problem.
Let's first generalize it and simplify the notation. There are two multinomial distributions, one with probabilities $(p_1, p_2, \ldots, p_n)$ = $(2/(n+1), 1/(n+1), \ldots, 1/(n+1))$ and the other with probabilities $(q_1,q_2, \ldots, q_n)$ = $(3/(2n+1), 2/(2n+1), \ldots, 2/(2n+1))$. The probabilities are in descending order: $p_1 \ge p_2 \ge \cdots \ge p_n \gt 0$ and $q_1 \ge q_2 \ge \cdots \ge q_n \gt 0$.
You make $t$ independent observations of each, with counts $k_i$ and $m_i$ ($i=1, 2, \ldots, n$), respectively. However, you do not know the subscripts: you only have the ordered pairs $\left((k_{\sigma(1)},m_{\sigma(1)}), \ldots, (k_{\sigma(n)}, m_{\sigma(n)})\right)$ for some unknown permutation $\sigma$ of the subscripts.
You propose identifying which of these pairs corresponds to subscript $1$ by fixing positive coefficients $x$ and $y$ and computing the statistics
$$z_i = x k_i + y m_i, \quad i = 1, 2, \ldots, n,$$
and nominating the subscript with the largest value of $z_i$.
Let's assume your loss function is simply the indicator of correctness, so that your aim is to maximize the chance that $z_1$ is the largest of the $z_i$.
To get a handle on what the optimal values of $x$ and $y$ ought to be, consider the case where both $n$ and $t$ are large. Large $n$ allows us to ignore the slight dependency of the $k_i$ (and $m_i$), treating them as if they were independent. Large $t$ allows us to adopt Normal approximations to the distributions of the $k_i$ and $m_i$. These state that, to a good approximation,
$$k_i \sim N(p_i t, p_i(1-p_i)t); \quad m_i \sim N(q_i t, q_i(1-q_i)t)$$
(where the parameters are the mean and variance). Therefore
$$z_i \sim N((x p_i + y q_i)t, (x^2 p_i(1-p_i) + y^2 q_i(1-q_i))t).$$
To maximize the chance of making a correct determination, we want to maximize the probability that $z_1 \gt z_i$ for $i \gt 1$. Because
$$\eqalign{
z_1 - z_i \sim & N((x(p_1-p_i) + y(q_1-q_i))t, \\
&(x^2 [p_1(1-p_1) + p_i(1-p_i)] + y^2 [ q_1(1-q_1)+q_i(1-q_i)])t),
}$$
this is tantamount to maximizing the z-score,
$$z = \frac{(x(p_1-p_i) + y(q_1-q_i))t}{\sqrt{(x^2 [p_1(1-p_1) + p_i(1-p_i)] + y^2 [ q_1(1-q_1)+q_i(1-q_i)])t}}.$$
This expression takes the form
$$z = \sqrt{t} \frac{a x + b y}{\sqrt{c x^2 + d y^2}}$$
for strictly positive coefficients $a, b, c, d$ (guaranteeing $z$ will be positive, which should be obvious). Note, too, that only the ratio $\xi = x/y$ matters, because rescaling $x$ and $y$ does not change the ordering of the $z_i$. It therefore suffices to maximize the square of this expression,
$$z^2 = t \frac{(a\xi + b)^2}{c\xi^2 + d},$$
for $\xi \gt 0$.
This straightforward problem has the solution
$$\eqalign{
\xi = &\frac{a d}{b c} \\
= &\frac{(p_1 - p_i)\left(q_1(1-q_1)+q_i(1-q_i)\right)}{(q_1 - q_i)\left(p_1(1-p_1) + p_i(1-p_i)\right)} \\
= &\frac{1/(n+1)(3(2n-2)/(2n+1)^2 + 2(2n-1)/(2n+1)^2)}{1/(2n+1)(2(n-1)/(n+1)^2 + n/(n+1)^2)} \\
= & \frac{10 n^2 + 2 n - 8}{6 n^2 - n - 2}
}$$
for all $i \gt 1$. Recalling the assumption that $n$ is large, we retain only the highest powers of $n$ and find (choosing $x=1$ so that $y = 1/\xi$):
$$x = 1, \quad y = 3/5 = 0.6.$$
This literally answers the question: the value $\log_2(3/2) \approx 0.585$ is not quite the right weight, although it's (surprisingly) close.
This analysis does not answer the more basic question, though: as a function of $n$ and $t$, what is the best weight? This can be found with a similar analysis using much more painful calculations concerning the multinomial distribution in place of the Normal approximations. I suspect, without any proof, that the formula for $\xi$ will work well even for small $n$ and small $t$.
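A simulation sketch of the decision rule in R, for hypothetical sizes $n = 50$ and $t = 200$, comparing the weight $y = 3/5$ with $y = \log_2(3/2)$:
set.seed(1)
n <- 50; t <- 200
p <- c(2, rep(1, n - 1)) / (n + 1)
q <- c(3, rep(2, n - 1)) / (2 * n + 1)
hit_rate <- function(y, reps = 2000) mean(replicate(reps, {
  k <- rmultinom(1, t, p); m <- rmultinom(1, t, q)
  which.max(k + y * m) == 1            # does z_i = k_i + y * m_i pick subscript 1?
}))
c(hit_rate(3/5), hit_rate(log2(3/2)))  # estimated success rates for the two weights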
49,618 | How to tell the "closeness" of two variables | Solution
When you assume the residuals (the vertical deviations in a graph of $n$ data) are independently and identically distributed with some normal distribution of zero mean, the estimate of the slope, centered at the true value and scaled by its standard error, has a Student t distribution with $n-2$ degrees of freedom. Because the theoretical value has essentially zero error, we can ignore this complication and treat the theoretical value as a constant. Therefore we refer the ratio
$$t = (0.0106623 - 0.0075) / 0.0011 = 2.88$$
to Student's t distribution (as a two sided test, because in principle the slope could have been greater or less than the theoretical value and you just want to see whether the difference could be attributed to chance).
Whether this deviation is "significant" depends on your criterion for significance and on the degrees of freedom. For example, if you want 95% or greater significance, then this difference will be significant if and only if you have six or more data values. This conclusion follows from noting that the 95% two-sided critical value with $5-2 = 3$ degrees of freedom is $3.182$, greater than $2.88$, and the critical value with $6-2 = 4$ d.f. is $2.776$, less than $2.88$.
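The same computation as a couple of lines of R, assuming for illustration six data values so that there are 4 degrees of freedom:
tstat <- (0.0106623 - 0.0075) / 0.0011
tstat                            # about 2.87
2 * pt(-abs(tstat), df = 4)      # two-sided p-value, just under 0.05
qt(0.975, df = c(3, 4))          # critical values 3.182 and 2.776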
Discussion
If the uncertainty in the theoretical value were appreciable compared to the standard error of the slope ($0.0011$) and you had relatively few data points (perhaps 10 or fewer), the problem would become more difficult:
First, you don't know the distribution of the theoretical error.
Second, you probably don't know for sure that it is a standard error (people often report confidence limits or two or three standard errors or even standard deviations without clearly specifying what they have computed).
Third, the sum of a t-distributed value (your error) and another distribution (the theoretical error) can have a mathematically less tractable distribution.
Mitigating these complications, though, is a simple consideration: if the theoretical uncertainty were largish, then it would add to the overall uncertainty in the difference between the theoretical and estimated values, thereby lowering the t-statistic. In some cases such a semi-quantitative result might be good enough. (The addition is in terms of variances: you sum the squares of the two standard errors, obtaining the square of the standard error of the difference, and (therefore) take its square root.)
For instance, if the theoretical uncertainty were equal to the uncertainty of the estimate, the t-statistic would be reduced to $2.03$. The distribution of the difference would be approximately Normal, but with slightly longer tails, so referring the value of $2.03$ to a standard Normal distribution would slightly overestimate the significance. Well, we can compute that $4.2\%$ of the standard Normal distribution is more extreme than $\pm 2.03$. Thus--still in this hypothetical situation with a largish standard error for the theoretical result--you would not conclude the difference is significant if your criterion for significance exceeds $100 - 4.2 = 95.8\%$. Otherwise, the picture is murky and the determination depends on the resolution of the difficulties enumerated above.
49,619 | Getting an average measurement based on two raters for cases where data is missing for one rater | The idea above sounds rather like single imputation. That is a better idea when faced with missing data than either list-wise or pair-wise deletion; however, it's still not a good approach.
A better approach could be multiple imputation. Essentially, you simulate 3-10 datasets conditional on your observed data, perform all of your analyses on each of these datasets, and combine the results at the end. The purpose of simulating multiple datasets is to ensure that the uncertainty in the imputation process is accounted for.
This can be done using the Multiple Imputation procedure in SPSS (I believe it's on the Analyze menu).
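For an R-based sketch of the same workflow, assuming the mice package; the ratings data frame here is simulated purely for illustration, with the second rater partly missing.
set.seed(1)
rater1 <- rnorm(40, 50, 10)
rater2 <- rater1 + rnorm(40, 0, 5)
rater2[sample(40, 10)] <- NA                      # rater 2 missing for some cases
ratings <- data.frame(rater1, rater2)
library(mice)
imp  <- mice(ratings, m = 5, printFlag = FALSE)   # 5 imputed datasets
fits <- with(imp, lm(rater1 ~ rater2))            # analyse each imputed dataset
summary(pool(fits))                               # combine the results (Rubin's rules)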
However, while multiple imputation has been shown to be valid with large datasets, there is not as much information on its use in small samples.
A good introduction (from an educational perspective) can be found here
A paper on its use in small samples (in a longitudinal context) can be found here
49,620 | What is the pull distribution? | I think this CDF public analysis note can be a good answer.
or the ps file from CDF.
49,621 | Can anyone explain quantile maximum probability estimation (QMPE)? | I'm late to the party but it seems this question still needs an answer, and, surprising to me, I don't see any other questions on this topic. So I'll go ahead and offer this.
To summarize, the QMPE approach is to formulate a likelihood function in terms of the quantiles of the observed data and not the observed data themselves. Then parameters are sought which maximize the likelihood, by differentiating the log-likelihood wrt the parameters and applying a standard optimization method.
It appears that the authors are proposing to ignore the exact values of the data and work only with quantiles. I'm not sure I agree with that, but in any event, the likelihood function as they state it is correct if one only has quantiles at hand, which is sometimes the case.
The likelihood function can be derived by considering the data falling between quantiles as being interval-censored; you know how many data are in each interval, but you don't know where they are. (That isn't how it's motivated by Heathcote et al., but that's what makes the most sense to me.) For each inter-quantile interval, there is a term
(P(datum in interval | parameters))^(number of data in interval)
Now the number of data in each interval is just n[i] = N (p[i] - p[i - 1]), where N is the total number of data and p[i] is the proportion corresponding to the i-th quantile q[i], i.e. # { data <= q[i] } / N = p[i], and P(datum in interval | parameters) is just cdf(q[i] | parameters) - cdf(q[i - 1] | parameters), where cdf is the cumulative distribution function of the assumed distribution. Then the log-likelihood is the summation of
N (p[i] - p[i - 1]) log(cdf(q[i] | parameters) - cdf(q[i - 1] | parameters))
over the inter-quantile intervals indexed by i. This is the same for any particular distribution; just plug in the appropriate cdf.
To find the parameters which maximize the likelihood, given a list of quantiles, any method for maximizing a function can be employed. If cdf is differentiable, it's probably most convenient to make use of the gradient wrt the parameters. I don't know if anything can be said about the derivatives of cdf wrt the parameters in general, although maybe if some family is assumed, such as location-scale distributions, some general results can be worked out.
The factor N is constant wrt the parameters so it's convenient to omit it. It's often more convenient, when working with optimization software, to search for a minimum instead of a maximum, so one can look at the negative log-likelihood instead of the log-likelihood, and minimize that instead of maximizing.
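A minimal sketch of this in R for a hypothetical normal model: the negative log-likelihood is summed over the inter-quantile intervals (with the constant factor N omitted, as above) and minimized with a general-purpose optimizer.
set.seed(1)
x <- rnorm(500, mean = 10, sd = 2)           # pretend only its quantiles are known
p <- c(0.1, 0.3, 0.5, 0.7, 0.9)
q <- quantile(x, p)
nll <- function(par) {
  mu <- par[1]; sigma <- exp(par[2])         # sigma on the log scale
  qq <- c(-Inf, q, Inf)                      # interval boundaries
  mass <- diff(pnorm(qq, mu, sigma))         # cdf(q[i]) - cdf(q[i - 1])
  -sum(diff(c(0, p, 1)) * log(mass))         # -(p[i] - p[i-1]) * log(mass), N dropped
}
fit <- optim(c(median(q), log(diff(range(q)) / 2)), nll)
c(fit$par[1], exp(fit$par[2]))               # estimates of mu and sigma, near 10 and 2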
About maximizing wrt the parameters, Heathcote et al. make use of a particular line-search method, but that is not essential; given the NLL and its gradient, many optimization algorithms are applicable.
To get a starting place for the parameter search, I think it would be convenient to compute the ordinary complete-data maximum likelihood estimates, assuming that all n[i] data in the i-th inter-quantile interval fall at the middle of the interval; such an estimate could be conveniently computed if the function to compute ML estimates allows for weighted data, since then the data are the interval midpoints with weights equal to the mass in each interval. Heathcote et al. find a starting point by estimating parameters by matching moments.
It has been interesting reading about this topic. Hope this helps someone else understand what's going on. | Can anyone explain quantile maximum probability estimation (QMPE)? | I'm late to the party but it seems this question still needs an answer, and, surprising to me, I don't see any other questions on this topic. So I'll go ahead and offer this.
To summarize, the QMPE ap | Can anyone explain quantile maximum probability estimation (QMPE)?
I'm late to the party but it seems this question still needs an answer, and, surprising to me, I don't see any other questions on this topic. So I'll go ahead and offer this.
To summarize, the QMPE approach is to formulate a likelihood function in terms of the quantiles of the observed data and not the observed data themselves. Then parameters are sought which maximize the likelihood, by differentiating the log-likelihood wrt the parameters and applying a standard optimization method.
It appears that the authors are proposing to ignore the exact values of the data and only work with quantiles. I'm not sure I agree with that, but in any event, the likelihood function as they state it is correct if one only has quantiles at hand, which is sometimes the case.
The likelihood function can be derived by considering the data falling between quantiles as being interval-censored; you know how many data are in each interval, but you don't know where they are. (That isn't how it's motivated by Heathcote et al., but that's what makes the most sense to me.) For each inter-quantile interval, there is a term
(P(datum in interval | parameters))^(number of data in interval)
Now the number of data in each interval is just n[i] = N (p[i] - p[i - 1]) where N is the total number of data and p[i] is the proportion corresponding to the i-th quantile q[i], i.e. # { data <= q[i] } / N = p[i], and P(datum in interval | parameters) is just cdf(q[i] | parameters) - cdf(q[i - 1] | parameters) where cdf is the cumulative distribution function of the assumed distribution. Then the log-likelihood is the summation of
N (p[i] - p[i - 1]) log(cdf(q[i] | parameters) - cdf(q[i - 1] | parameters))
over the inter-quantile intervals indexed by i. This is the same for any particular distribution; just plug in the appropriate cdf.
To find the parameters which maximize the likelihood, given a list of quantiles, any method for maximizing a function can be employed. If cdf is differentiable, it's probably most convenient to make use of the gradient wrt the parameters. I don't know if anything can be said about the derivatives of cdf wrt the parameters in general, although maybe if some family is assumed, such as location-scale distributions, some general results can be worked out.
The factor N is constant wrt the parameters so it's convenient to omit it. It's often more convenient, when working with optimization software, to search for a minimum instead of a maximum, so one can look at the negative log-likelihood instead of the log-likelihood, and minimize that instead of maximizing.
About maximizing wrt the parameters, Heathcote et al. make use of a particular line-search method, but that is not essential; given the NLL and its gradient, many optimization algorithms are applicable.
To get a starting place for the parameter search, I think it would be convenient to compute the ordinary complete-data maximum likelihood estimates, assuming that all n[i] data in the i-th inter-quantile interval fall at the middle of the interval; such an estimate could be conveniently computed if the function to compute ML estimates allows for weighted data, since then the data are the interval midpoints with weights equal to the mass in each interval. Heathcote et al. find a starting point by estimating parameters by matching moments.
It has been interesting reading about this topic. Hope this helps someone else understand what's going on. | Can anyone explain quantile maximum probability estimation (QMPE)?
I'm late to the party but it seems this question still needs an answer, and, surprising to me, I don't see any other questions on this topic. So I'll go ahead and offer this.
To summarize, the QMPE ap |
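Below is a minimal R sketch of the quantile-based negative log-likelihood described in the answer above, assuming a normal distribution purely for illustration (any parametric cdf could be substituted); the function name and the simulated data are made up.

```r
# Negative log-likelihood built from quantiles only; a normal cdf is assumed here.
qmpe_nll <- function(par, qs, ps) {
  mu <- par[1]; sigma <- exp(par[2])                      # log-scale sd keeps it positive
  cdf  <- pnorm(c(-Inf, qs, Inf), mean = mu, sd = sigma)  # cdf at the interval boundaries
  mass <- diff(cdf)        # P(datum in each inter-quantile interval | parameters)
  n_i  <- diff(c(0, ps, 1))  # proportion of data in each interval (the constant N is omitted)
  -sum(n_i * log(mass))
}

# Example: recover the parameters from the deciles of simulated data.
set.seed(1)
x  <- rnorm(500, mean = 3, sd = 2)
ps <- seq(0.1, 0.9, by = 0.1)
qs <- quantile(x, probs = ps)
fit <- optim(c(mean(x), log(sd(x))), qmpe_nll, qs = qs, ps = ps)
c(mean = fit$par[1], sd = exp(fit$par[2]))   # roughly (3, 2)
```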
49,622 | Can anyone explain quantile maximum probability estimation (QMPE)? | Just a small suggestion:
Have you checked out the Newcastle Cognition Lab's page on QMPE?
It has source code, a getting started guide, and a few other resources. | Can anyone explain quantile maximum probability estimation (QMPE)? | Just a small suggestion:
Have you checked out the Newcastle Cognition Lab's page on QMPE?
It has source code, a getting started guide, and a few other resources. | Can anyone explain quantile maximum probability estimation (QMPE)?
Just a small suggestion:
Have you checked out the Newcastle Cognition Lab's page on QMPE?
It has source code, a getting started guide, and a few other resources. | Can anyone explain quantile maximum probability estimation (QMPE)?
Just a small suggestion:
Have you checked out the Newcastle Cognition Lab's page on QMPE?
It has source code, a getting started guide, and a few other resources. |
49,623 | Structural equation modeling for experimental design data | There is no simple yes or no answer. People constantly attempt to make inferences about causal relationships. The question is what assumptions you have to make, and how sensitive your inferences are to changing those assumptions.
The causal effects you can identify with the fewest assumptions are the effects of the things you randomly assign: A, B, and the interaction A*B, on Y1, Y2, Y3, and Y4.
I'm likely to be skeptical of a claim to have identified the causal effect of any of the non-randomized variables on anything else. The scientific context (which you have not provided) will shape what is considered a reasonable inference. | Structural equation modeling for experimental design data | There is no simple yes or no answer. People constantly attempt to make inferences about causal relationships. The question is what assumptions you have to make, and how sensitive your inferences are | Structural equation modeling for experimental design data
There is no simple yes or no answer. People constantly attempt to make inferences about causal relationships. The question is what assumptions you have to make, and how sensitive your inferences are to changing those assumptions.
The causal effects you can identify with the fewest assumptions are the effects of the things you randomly assign: A, B, and the interaction A*B, on Y1, Y2, Y3, and Y4.
I'm likely to be skeptical of a claim to have identified the causal effect of any of the non-randomized variables on anything else. The scientific context (which you have not provided) will shape what is considered a reasonable inference. | Structural equation modeling for experimental design data
There is no simple yes or no answer. People constantly attempt to make inferences about causal relationships. The question is what assumptions you have to make, and how sensitive your inferences are |
49,624 | Generating dependent time series from a given distribution? | You can use Markov chains. You will have to specify a density $p(x_t|x_{t-1})$. Of course you will have to be able to sample from that marginal. Then just sample... | Generating dependent time series from a given distribution? | You can use Markov chains. You will have to specify a density $p(x_t|x_{t-1})$. Of course you will have to be able to sample from that marginal. Then just sample... | Generating dependent time series from a given distribution?
You can use Markov chains. You will have to specify a density $p(x_t|x_{t-1})$. Of course you will have to be able to sample from that marginal. Then just sample... | Generating dependent time series from a given distribution?
You can use Markov chains. You will have to specify a density $p(x_t|x_{t-1})$. Of course you will have to be able to sample from that marginal. Then just sample...
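A minimal R illustration of the Markov-chain idea above, assuming a Gaussian conditional density $p(x_t|x_{t-1})$ chosen only for illustration:

```r
# Simulate a chain by repeatedly drawing from p(x_t | x_{t-1}) = N(phi * x_{t-1}, sigma^2).
simulate_chain <- function(n, phi = 0.8, sigma = 1, x0 = 0) {
  x <- numeric(n)
  x[1] <- x0
  for (t in 2:n) {
    x[t] <- rnorm(1, mean = phi * x[t - 1], sd = sigma)  # sample from the conditional density
  }
  x
}

series <- simulate_chain(1000)
acf(series)   # successive values are clearly dependent
```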
49,625 | Generating dependent time series from a given distribution? | One way is to use transformations of random variables. It's easy to generate dependent Gaussians; then transform them to uniform variates with the CDF of the gaussian, and then transform the uniform variates to your distribution with the inverse CDF of your distribution. | Generating dependent time series from a given distribution? | One way is to use transformations of random variables. It's easy to generate dependent Gaussians; then transform them to uniform variates with the CDF of the gaussian, and then transform the uniform | Generating dependent time series from a given distribution?
One way is to use transformations of random variables. It's easy to generate dependent Gaussians; then transform them to uniform variates with the CDF of the gaussian, and then transform the uniform variates to your distribution with the inverse CDF of your distribution. | Generating dependent time series from a given distribution?
One way is to use transformations of random variables. It's easy to generate dependent Gaussians; then transform them to uniform variates with the CDF of the gaussian, and then transform the uniform |
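A minimal R sketch of the transformation approach above; the AR(1) process and the exponential target distribution are arbitrary choices made for illustration:

```r
set.seed(1)
n   <- 1000
phi <- 0.8
z <- as.numeric(arima.sim(model = list(ar = phi), n = n))  # dependent Gaussians (AR(1))
u <- pnorm(z, sd = sqrt(1 / (1 - phi^2)))  # Gaussian CDF (AR(1) stationary sd) -> roughly uniform
x <- qexp(u, rate = 2)                     # inverse CDF of the target distribution

hist(x)   # the marginal looks exponential
acf(x)    # the dependence is carried over
```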
49,626 | Fast integration of a posterior distribution | How accurate does your posterior cdf need to be? You might consider replacing the continuous prior with a discrete approximation:
$p^*(\theta) \propto p(\theta) 1(\theta\in t_1, \dots, t_k)$
where $p(\theta)$ is your original continuous prior.
Then to compute the posterior you just calculate likelihood x prior
$p(\theta|x) \propto p^*(\theta)p(x|\theta)$
over the support of the prior $t_1, \dots, t_k$ and renormalize.
This is called "griddy Gibbs" by some. It can be quite effective if you have an informative prior in which case you can choose the grid points non-uniformly (and, of course, if you can live with a discrete approximation coarse enough to be computationally feasible). | Fast integration of a posterior distribution | How accurate does your posterior cdf need to be? You might consider replacing the continuous prior with a discrete approximation:
$p^*(\theta) \propto p(\theta) 1(\theta\in t_1, \dots, t_k)$
where $p( | Fast integration of a posterior distribution
How accurate does your posterior cdf need to be? You might consider replacing the continuous prior with a discrete approximation:
$p^*(\theta) \propto p(\theta) 1(\theta\in t_1, \dots, t_k)$
where $p(\theta)$ is your original continuous prior.
Then to compute the posterior you just calculate likelihood x prior
$p(\theta|x) \propto p^*(\theta)p(x|\theta)$
over the support of the prior $t_1, \dots, t_k$ and renormalize.
This is called "griddy Gibbs" by some. It can be quite effective if you have an informative prior in which case you can choose the grid points non-uniformly (and, of course, if you can live with a discrete approximation coarse enough to be computationally feasible). | Fast integration of a posterior distribution
How accurate does your posterior cdf need to be? You might consider replacing the continuous prior with a discrete approximation:
$p^*(\theta) \propto p(\theta) 1(\theta\in t_1, \dots, t_k)$
where $p( |
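A minimal R sketch of the discrete ("griddy") approximation above, assuming a binomial likelihood and an illustrative grid and prior (all numbers are made up):

```r
x <- 30; n <- 40                               # made-up data: x successes in n trials
theta <- seq(0.5, 1, length.out = 501)         # grid points t_1, ..., t_k over the prior's support
prior <- rep(1, length(theta))                 # a flat prior evaluated on the grid
post  <- prior * dbinom(x, size = n, prob = theta)   # likelihood x prior at each grid point
post  <- post / sum(post)                      # renormalize
plot(theta, cumsum(post), type = "l")          # discrete approximation to the posterior cdf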
49,627 | Fast integration of a posterior distribution | There may be a simpler approach, simply by applying the usual Beta conjugate to the binomial, and then requiring $\theta \in [\frac12,1]$. You can do this with an indicator function, for example as in
$$p(\theta) \propto \theta^{\alpha-1} {(1-\theta)}^{\beta-1} \mathbb{1}[{\tfrac12 \le \theta \le 1}]$$
Now apply your $p(n,x | \theta) \propto \theta^x {(1-\theta)}^{n-x}$ to get the posterior density
$$p(\theta|x) \propto \theta^{\alpha+x-1} {(1-\theta)}^{\beta+n-x-1} \mathbb{1}[{\tfrac12 \le \theta \le 1}].$$
The posterior cumulative distribution function for $\theta \in [\frac12,1]$ is then $\dfrac{I_\theta(\alpha+x, \beta+n-x)-I_{\frac12}(\alpha+x, \beta+n-x)}{1-I_{\frac12}(\alpha+x, \beta+n-x)}$ with $I$ representing a regularised incomplete beta function or the cumulative distribution function of a Beta distribution, which any decent statistical program will calculate quickly, such as R's pbeta function. | Fast integration of a posterior distribution | There may be a simpler approach, simply by applying the usual Beta conjugate to the binomial, and then requiring $\theta \in [\frac12,1]$. You can do this with an indicator function, for example as in | Fast integration of a posterior distribution
There may be a simpler approach, simply by applying the usual Beta conjugate to the binomial, and then requiring $\theta \in [\frac12,1]$. You can do this with an indicator function, for example as in
$$p(\theta) \propto \theta^{\alpha-1} {(1-\theta)}^{\beta-1} \mathbb{1}[{\tfrac12 \le \theta \le 1}]$$
Now apply your $p(n,x | \theta) \propto \theta^x {(1-\theta)}^{n-x}$ to get the posterior density
$$p(\theta|x) \propto \theta^{\alpha+x-1} {(1-\theta)}^{\beta+n-x-1} \mathbb{1}[{\tfrac12 \le \theta \le 1}].$$
The posterior cumulative distribution function for $\theta \in [\frac12,1]$ is then $\dfrac{I_\theta(\alpha+x, \beta+n-x)-I_{\frac12}(\alpha+x, \beta+n-x)}{1-I_{\frac12}(\alpha+x, \beta+n-x)}$ with $I$ representing a regularised incomplete beta function or the cumulative distribution function of a Beta distribution, which any decent statistical program will calculate quickly, such as R's pbeta function. | Fast integration of a posterior distribution
There may be a simpler approach, simply by applying the usual Beta conjugate to the binomial, and then requiring $\theta \in [\frac12,1]$. You can do this with an indicator function, for example as in |
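A quick numerical illustration in R of the truncated-Beta posterior cdf above, using pbeta for the regularised incomplete beta function; the data and prior parameters are made-up values:

```r
x <- 30; n <- 40                 # made-up data
a <- 1; b <- 1                   # Beta(1, 1), i.e. uniform, before truncation to [1/2, 1]
post_cdf <- function(theta) {
  (pbeta(theta, a + x, b + n - x) - pbeta(0.5, a + x, b + n - x)) /
    (1 - pbeta(0.5, a + x, b + n - x))
}
post_cdf(0.8)                    # posterior Pr(theta <= 0.8) given theta restricted to [1/2, 1]
```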
49,628 | Fast integration of a posterior distribution | You can always use Monte Carlo Integration or the Midpoint method. With Monte Carlo, you simply generate a bunch of points in your parameter space and see if they are in the area or volume or hyper-dimensional space you are trying to integrate.
From:
http://farside.ph.utexas.edu/teaching/329/lectures/node109.html
"Let us now consider the so-called Monte-Carlo method for evaluating multi-dimensional integrals. Consider, for example, the evaluation of the area, , enclosed by a curve, . Suppose that the curve lies wholly within some simple domain of area , as illustrated in Fig. 97. Let us generate points which are randomly distributed throughout . Suppose that of these points lie within curve . Our estimate for the area enclosed by the curve is simply" the ratio of random points in the space times the size of the space. The link has a nice picture and a description of the inferior midpoint method that I would suggest skipping. | Fast integration of a posterior distribution | You can always use Monte Carlo Integration or the Midpoint method. With Monte Carlo, you simply generate a bunch of points in your parameter space and see if they are in the area or volume or hyper-di | Fast integration of a posterior distribution
You can always use Monte Carlo Integration or the Midpoint method. With Monte Carlo, you simply generate a bunch of points in your parameter space and see if they are in the area or volume or hyper-dimensional space you are trying to integrate.
From:
http://farside.ph.utexas.edu/teaching/329/lectures/node109.html
"Let us now consider the so-called Monte-Carlo method for evaluating multi-dimensional integrals. Consider, for example, the evaluation of the area, , enclosed by a curve, . Suppose that the curve lies wholly within some simple domain of area , as illustrated in Fig. 97. Let us generate points which are randomly distributed throughout . Suppose that of these points lie within curve . Our estimate for the area enclosed by the curve is simply" the ratio of random points in the space times the size of the space. The link has a nice picture and a description of the inferior midpoint method that I would suggest skipping. | Fast integration of a posterior distribution
You can always use Monte Carlo Integration or the Midpoint method. With Monte Carlo, you simply generate a bunch of points in your parameter space and see if they are in the area or volume or hyper-di |
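A minimal R sketch of the hit-or-miss Monte Carlo idea above, estimating the area of the unit circle from points generated in an enclosing square:

```r
set.seed(1)
N   <- 100000
pts <- matrix(runif(2 * N, min = -1, max = 1), ncol = 2)  # random points in the [-1, 1]^2 square
inside <- rowSums(pts^2) <= 1                             # which points fall inside the unit circle
4 * mean(inside)                                          # square's area times hit fraction; ~ pi
```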
49,629 | Resampling within a survey to account for missing data | Your question is above my pay grade, as it were, but I can suggest a first look at the R survey package, which might implement some of the routines that you'd use to answer your questions. | Resampling within a survey to account for missing data | Your question is above my pay grade, as it were, but I can suggest a first look at the R survey package, which might implement some of the routines that you'd use to answer your questions. | Resampling within a survey to account for missing data
Your question is above my pay grade, as it were, but I can suggest a first look at the R survey package, which might implement some of the routines that you'd use to answer your questions. | Resampling within a survey to account for missing data
Your question is above my pay grade, as it were, but I can suggest a first look at the R survey package, which might implement some of the routines that you'd use to answer your questions. |
49,630 | Resampling within a survey to account for missing data | Standard formulas for standard errors of a proportion would be suitable. With regards to your question about which companies the "n=100 sample" plan to use in the future, these standard errors would be based on n = 100. If this yields standard errors that are too large for your liking, then you need to increase your sample size.
In some cases you might be able to increase your effective sample size by engaging in more targeted sampling of the subset of the population that interests you (i.e., with company X, but planning to use company X less in the future). | Resampling within a survey to account for missing data | Standard formulas for standard errors of a proportion would be suitable. With regards to your question about which companies the "n=100 sample" plan to use in the future, these standard errors would b | Resampling within a survey to account for missing data
Standard formulas for standard errors of a proportion would be suitable. With regards to your question about which companies the "n=100 sample" plan to use in the future, these standard errors would be based on n = 100. If this yields standard errors that are too large for your liking, then you need to increase your sample size.
In some cases you might be able to increase your effective sample size by engaging in more targeted sampling of the subset of the population that interests you (i.e., with company X, but planning to use company X less in the future). | Resampling within a survey to account for missing data
Standard formulas for standard errors of a proportion would be suitable. With regards to your question about which companies the "n=100 sample" plan to use in the future, these standard errors would b |
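A minimal R illustration of the standard error of a proportion mentioned above, with a made-up sample proportion for the n = 100 subsample:

```r
n     <- 100
p_hat <- 0.35                              # e.g. 35 of the 100 say they will use company X less
se    <- sqrt(p_hat * (1 - p_hat) / n)     # standard error of the sample proportion
p_hat + c(-1, 1) * 1.96 * se               # approximate 95% confidence interval
```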
49,631 | Comparing cosine similarities for tf-idf vectors for documents with different length | According to Wikipedia's article of tf-idf:
The term count in the given document is simply the number of times a given term appears in
that document. This count is usually normalized to prevent a bias towards longer documents
(which may have a higher term count regardless of the actual importance of that term in the
document) to give a measure of the importance of the term t within the particular document d
So, normalize the frequency of a term t by the length of the document d in which it occurs. Then you can compute cosine similarity between your tf-idf vectors. | Comparing cosine similarities for tf-idf vectors for documents with different length | According to Wikipedia's article of tf-idf:
The term count in the given document is simply the number of times a given term appears in
that document. This count is usually normalized to prevent a | Comparing cosine similarities for tf-idf vectors for documents with different length
According to Wikipedia's article of tf-idf:
The term count in the given document is simply the number of times a given term appears in
that document. This count is usually normalized to prevent a bias towards longer documents
(which may have a higher term count regardless of the actual importance of that term in the
document) to give a measure of the importance of the term t within the particular document d
So, normalize the frequency of a term t by the length of the document d in which it occurs. Then you can compute cosine similarity between your tf-idf vectors. | Comparing cosine similarities for tf-idf vectors for documents with different length
According to Wikipedia's article of tf-idf:
The term count in the given document is simply the number of times a given term appears in
that document. This count is usually normalized to prevent a |
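A minimal R illustration of the length normalization described above, using a toy document:

```r
doc    <- c("the", "cat", "sat", "on", "the", "mat")
counts <- table(doc)            # raw term counts
tf     <- counts / length(doc)  # term frequencies normalized by document length
tf
```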
49,632 | Comparing cosine similarities for tf-idf vectors for documents with different length | The cosine similarity is still a valid measure. Actually, this is the rule that tf-idf weights have different lengths for different documents, simply because they do not use exactly the same words. Notice that a missing word in a tf-idf vector is actually a word with a frequency of 0.
So you elongate both vectors to the same length by adding a couple of 0's and then you compute the cosine similarity. | Comparing cosine similarities for tf-idf vectors for documents with different length | The cosine similarity is still a valid measure. Actually, this is the rule that tf-idf weights have different lengths for different documents, simply because they do not use exactly the same words. No | Comparing cosine similarities for tf-idf vectors for documents with different length
The cosine similarity is still a valid measure. Actually, this is the rule that tf-idf weights have different lengths for different documents, simply because they do not use exactly the same words. Notice that a missing word in a tf-idf vector is actually a word with a frequency of 0.
So you elongate both vectors to the same length by adding a couple of 0's and then you compute the cosine similarity. | Comparing cosine similarities for tf-idf vectors for documents with different length
The cosine similarity is still a valid measure. Actually, this is the rule that tf-idf weights have different lengths for different documents, simply because they do not use exactly the same words. No |
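A minimal R sketch of the zero-padding idea above, with made-up weights standing in for tf-idf values:

```r
w1 <- c(cat = 0.5, sat = 0.3, mat = 0.2)                 # weights for document 1
w2 <- c(cat = 0.4, dog = 0.6, mat = 0.1, bone = 0.2)     # weights for document 2
vocab <- union(names(w1), names(w2))                     # shared vocabulary
v1 <- setNames(numeric(length(vocab)), vocab); v1[names(w1)] <- w1   # missing words get 0
v2 <- setNames(numeric(length(vocab)), vocab); v2[names(w2)] <- w2
sum(v1 * v2) / sqrt(sum(v1^2) * sum(v2^2))               # cosine similarity
```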
49,633 | Estimating speed from position updates with uncertain time intervals | Because you trust the GPS positions (and therefore the distances computed from them) but the times have errors, regress the times against the cumulative distances.
To account for acceleration and deceleration, consider a model of the form
$$\text{Time} = t = \beta_0 + \beta_1 X + \beta_2 X^2 + \varepsilon$$
where $X$ is distance and $\varepsilon$ represents the time errors. After fitting this you can estimate $\widehat{dt/dX} = \hat{\beta}_1 + 2 \hat{\beta}_2 X$, whence
$$\text{Speed estimate} = \widehat{dX/dt} = 1/\widehat{dt/dX} = \frac{1}{\hat{\beta}_1 + 2 \hat{\beta}_2 X}.$$
This will likely work best for times near the middle of your dataset. Thus, for instance, you could maintain a circular buffer of $k$ observations which at any moment would span times $t_1, t_2, \ldots, t_k$ and use them to estimate speeds near the mean position $\frac{1}{k}\sum_{i=1}^{k}X_i$.
As an example I generated distances according to the formula $X(t) = 15 + t - t^2/25$ for times $t=1, 2, \ldots, 10$. This represents a period of uniform deceleration from a unit speed. Note that it is not the same as the postulated model: the variation in time is given by the quadratic formula. I then varied those times by independent standard normal variates. Compared to the nominal sampling interval this is a huge error: you can't even be sure that two successive times are actually in the right order. This is a fairly severe test of the method.
Here is a plot of a typical realization, with the red dots showing the true values, the blue ones showing the observed values (i.e., the true values with time jittering), and the ordinary least squares fit of the time. I used $k=10$.
Note that the time measurements are so awful, the boat seems to reverse its course several times, especially near times 2 and 4.
Unlike many of the realizations I looked at, this fit doesn't look terribly good in the center: the line departs from the red dots. Be that as it may, let's look at how the estimated and actual speeds varied during this period:
(Note that you can estimate a speed at any time, not just a measured time, because the fitted coefficients give a formula for the speed.)
In this plot of speed versus position (not time), the blue curve is the estimate and the red curve is the actual speed. Obviously they are not very close near the ends, but for the middle times the estimate is excellent. (The middle times are around positions 19-20, as the first plot indicates.) Note that you would run into severe difficulties estimating speeds directly from the sequence of positions, because (due to the errors in the times) the boat seems to be jumping around forwards and backwards. If you treat the backwards jumps as true reversals you would grossly overestimate the speeds, but if you treat them as negative speeds you would run into severe problems when averaging. The moral here is that it's best to fit a model of position versus time and only then attempt to estimate speeds; don't attempt to estimate speeds directly.
The larger $k$ is, the more accurate you can expect to be. Use the machinery of OLS to estimate the prediction error of the time for any desired position; from that you can propagate the error to the speed estimate.
The speed estimates might jump around a little due to the addition of new values and dropping of old values as you go along. You could get fancier and use weighted regression, decreasing the weights smoothly to zero near the extreme positions. In doing so, a plot of estimated speeds versus position would be a little smoother. (This technique is akin to the recently popularized "Geographically Weighted Regression" of Fotheringham, Charlton, and Brunsdon.) | Estimating speed from position updates with uncertain time intervals | Because you trust the GPS positions (and therefore the distances computed from them) but the times have errors, regress the times against the cumulative distances.
To account for acceleration and dece | Estimating speed from position updates with uncertain time intervals
Because you trust the GPS positions (and therefore the distances computed from them) but the times have errors, regress the times against the cumulative distances.
To account for acceleration and deceleration, consider a model of the form
$$\text{Time} = t = \beta_0 + \beta_1 X + \beta_2 X^2 + \varepsilon$$
where $X$ is distance and $\varepsilon$ represents the time errors. After fitting this you can estimate $\widehat{dt/dX} = \hat{\beta}_1 + 2 \hat{\beta}_2 X$, whence
$$\text{Speed estimate} = \widehat{dX/dt} = 1/\widehat{dt/dX} = \frac{1}{\hat{\beta}_1 + 2 \hat{\beta}_2 X}.$$
This will likely work best for times near the middle of your dataset. Thus, for instance, you could maintain a circular buffer of $k$ observations which at any moment would span times $t_1, t_2, \ldots, t_k$ and use them to estimate speeds near the mean position $\frac{1}{k}\sum_{i=1}^{k}X_i$.
As an example I generated distances according to the formula $X(t) = 15 + t - t^2/25$ for times $t=1, 2, \ldots, 10$. This represents a period of uniform deceleration from a unit speed. Note that it is not the same as the postulated model: the variation in time is given by the quadratic formula. I then varied those times by independent standard normal variates. Compared to the nominal sampling interval this is a huge error: you can't even be sure that two successive times are actually in the right order. This is a fairly severe test of the method.
Here is a plot of a typical realization, with the red dots showing the true values, the blue ones showing the observed values (i.e., the true values with time jittering), and the ordinary least squares fit of the time. I used $k=10$.
Note that the time measurements are so awful, the boat seems to reverse its course several times, especially near times 2 and 4.
Unlike many of the realizations I looked at, this fit doesn't look terribly good in the center: the line departs from the red dots. Be that as it may, let's look at how the estimated and actual speeds varied during this period:
(Note that you can estimate a speed at any time, not just a measured time, because the fitted coefficients give a formula for the speed.)
In this plot of speed versus position (not time), the blue curve is the estimate and the red curve is the actual speed. Obviously they are not very close near the ends, but for the middle times the estimate is excellent. (The middle times are around positions 19-20, as the first plot indicates.) Note that you would run into severe difficulties estimating speeds directly from the sequence of positions, because (due to the errors in the times) the boat seems to be jumping around forwards and backwards. If you treat the backwards jumps as true reversals you would grossly overestimate the speeds, but if you treat them as negative speeds you would run into severe problems when averaging. The moral here is that it's best to fit a model of position versus time and only then attempt to estimate speeds; don't attempt to estimate speeds directly.
The larger $k$ is, the more accurate you can expect to be. Use the machinery of OLS to estimate the prediction error of the time for any desired position; from that you can propagate the error to the speed estimate.
The speed estimates might jump around a little due to the addition of new values and dropping of old values as you go along. You could get fancier and use weighted regression, decreasing the weights smoothly to zero near the extreme positions. In doing so, a plot of estimated speeds versus position would be a little smoother. (This technique is akin to the recently popularized "Geographically Weighted Regression" of Fotheringham, Charlton, and Brunsdon.) | Estimating speed from position updates with uncertain time intervals
Because you trust the GPS positions (and therefore the distances computed from them) but the times have errors, regress the times against the cumulative distances.
To account for acceleration and dece |
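A minimal R sketch of the approach above, following the worked example: simulate the positions, jitter the times, regress time on distance and distance squared, and invert the fitted derivative to estimate speed.

```r
set.seed(2)
t_true <- 1:10
X      <- 15 + t_true - t_true^2 / 25        # positions from the worked example
t_obs  <- t_true + rnorm(10)                 # times jittered by standard normal errors
fit    <- lm(t_obs ~ X + I(X^2))             # Time = b0 + b1 * X + b2 * X^2 + error
b      <- coef(fit)
speed_hat <- function(x) 1 / (b[2] + 2 * b[3] * x)   # dX/dt = 1 / (dt/dX)
speed_hat(X[5])                              # estimated speed near the middle of the record
1 - 2 * t_true[5] / 25                       # true speed at that moment, since dX/dt = 1 - 2t/25
```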
49,634 | Principal components of spatial variables | Your idea about the "rasters" is not very clearly stated, but you might have a look at the paper by Borcard and Legendre (1994) and their later works on spatial eigenvector-based analyses to see if one of the approaches will fit to your problem.
Borcard, D., Legendre, P., (1994) Environmental control and spatial structure in ecological communities: an example using oribatid mites (Acari, Oribatei). Environmental and Ecological Statistics 1, 37–61. | Principal components of spatial variables | Your idea about the "rasters" is not very clearly stated, but you might have a look at the paper by Borcard and Legendre (1994) and their later works on spatial eigenvector-based analyses to see if on | Principal components of spatial variables
Your idea about the "rasters" is not very clearly stated, but you might have a look at the paper by Borcard and Legendre (1994) and their later works on spatial eigenvector-based analyses to see if one of the approaches will fit to your problem.
Borcard, D., Legendre, P., (1994) Environmental control and spatial structure in ecological communities: an example using oribatid mites (Acari, Oribatei). Environmental and Ecological Statistics 1, 37–61. | Principal components of spatial variables
Your idea about the "rasters" is not very clearly stated, but you might have a look at the paper by Borcard and Legendre (1994) and their later works on spatial eigenvector-based analyses to see if on |
49,635 | Is it possible to fit a multivariate regression model where the independent variable is latent? | Might we reformulate the question as: 'I have N M-variate observations which I assume to be generated by N corresponding P-variate latent variables i.e. for each case/row M observed numbers are generated by P unobserved numbers. I have an idea that this mapping is linear with an M x P matrix of coefficients and I want to know what the latent matrix values should be.'?
If that's accurate then you have a multivariate version of the regression calibration problem. Normally one knows X and Y and estimates beta, whereas here one knows Y and beta and estimates / 'backs-out' X.
This is what motivates suncoolsu's question about control - the question is about what distribution assumptions can be made about the marginal distribution of X (if any). Your EM idea will make sense if you are happy to make distributional assumptions about P(Y | X; beta) and P(X) to apply Bayes theorem (although you won't need to iterate.)
Or maybe that's not the problem you're facing and I just don't understand your description. | Is it possible to fit a multivariate regression model where the independent variable is latent? | Might we reformulate the question as: 'I have N M-variate observations which I assume to be generated by N corresponding P-variate latent variables i.e. for each case/row M observed numbers are genera | Is it possible to fit a multivariate regression model where the independent variable is latent?
Might we reformulate the question as: 'I have N M-variate observations which I assume to be generated by N corresponding P-variate latent variables i.e. for each case/row M observed numbers are generated by P unobserved numbers. I have an idea that this mapping is linear with an M x P matrix of coefficients and I want to know what the latent matrix values should be.'?
If that's accurate then you have a multivariate version of the regression calibration problem. Normally one knows X and Y and estimates beta, whereas here one knows Y and beta and estimates / 'backs-out' X.
This is what motivates suncoolsu's question about control - the question is about what distribution assumptions can be made about the marginal distribution of X (if any). Your EM idea will make sense if you are happy to make distributional assumptions about P(Y | X; beta) and P(X) to apply Bayes theorem (although you won't need to iterate.)
Or maybe that's not the problem you're facing and I just don't understand your description. | Is it possible to fit a multivariate regression model where the independent variable is latent?
Might we reformulate the question as: 'I have N M-variate observations which I assume to be generated by N corresponding P-variate latent variables i.e. for each case/row M observed numbers are genera |
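A minimal R sketch of the 'backing out' idea above under fully Gaussian assumptions; the prior, noise level and dimensions are illustrative choices, not part of the original answer:

```r
set.seed(3)
M <- 5; P <- 2; sigma <- 0.5
B <- matrix(rnorm(M * P), M, P)              # known M x P coefficient matrix
x_true <- rnorm(P)                           # latent P-variate value for one case, x ~ N(0, I)
y <- as.vector(B %*% x_true + sigma * rnorm(M))   # observed M-variate y | x ~ N(B x, sigma^2 I)
A <- crossprod(B) / sigma^2 + diag(P)        # posterior precision of x given y
x_hat <- solve(A, crossprod(B, y) / sigma^2) # posterior mean E[x | y]
cbind(x_true, x_hat)
```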
49,636 | Output layer of artificial neural networks when learning non-linear functions with limited value range | I am opposed to cutting values off, since this will lead to an undifferentiable transfer function and your gradient based training algorithm might screw up.
The sigmoid function at the output layer is fine: $\sigma(x) = \frac{1}{1 + e^{-x}}$. It will squash any output to lie within $(0, 1)$. So you can get arbitrarily close to the targets.
However, if you use the squared error you will lose the property of a "matching loss function". When using linear outputs for a squared error, the derivatives of the error reduce to $y - t$ where $y$ is the output and $t$ the corresponding target value. So you have to check your gradients.
I have personally had good results with sigmoids as outputs when I have targets in that range and using sum of squares error anyway. | Output layer of artificial neural networks when learning non-linear functions with limited value ran | I am opposed to cutting values off, since this will lead to an undifferentiable transfer function and your gradient based training algorithm might screw up.
The sigmoid function at the output layer is | Output layer of artificial neural networks when learning non-linear functions with limited value range
I am opposed to cutting values off, since this will lead to an undifferentiable transfer function and your gradient based training algorithm might screw up.
The sigmoid function at the output layer is fine: $\sigma(x) = \frac{1}{1 + e^{-x}}$. It will squash any output to lie within $(0, 1)$. So you can get arbitrarily close to the targets.
However, if you use the squared error you will lose the property of a "matching loss function". When using linear outputs for a squared error, the derivatives of the error reduce to $y - t$ where $y$ is the output and $t$ the corresponding target value. So you have to check your gradients.
I have personally had good results with sigmoids as outputs when I have targets in that range and using sum of squares error anyway. | Output layer of artificial neural networks when learning non-linear functions with limited value ran
I am opposed to cutting values off, since this will lead to an undifferentiable transfer function and your gradient based training algorithm might screw up.
The sigmoid function at the output layer is |
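A minimal R illustration of the sigmoid squashing described above; the raw activations and the target range are made up, and rescaling only matters if the range is not (0, 1):

```r
sigmoid <- function(x) 1 / (1 + exp(-x))
a <- 0; b <- 1                 # known range of the targets
raw <- c(-3, 0, 4.2)           # unbounded activations from the final linear layer
a + (b - a) * sigmoid(raw)     # squashed into (a, b), so predictions cannot leave the target range
```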
49,637 | Output layer of artificial neural networks when learning non-linear functions with limited value range | If you use a logistic activation function in the output layer it will restrict the output to the range 0-1 as you require.
However if you have a regression problem with a restricted output range the sum-of-squares error metric may not be ideal and maybe a beta noise model might be more appropriate (c.f. beta regression, which IIRC is implemented in an R package, but I have never used it myself) | Output layer of artificial neural networks when learning non-linear functions with limited value ran | If you use a logistic activation function in the output layer it will restrict the output to the range 0-1 as you require.
However if you have a regression problem with a restricted output range the | Output layer of artificial neural networks when learning non-linear functions with limited value range
If you use a logistic activation function in the output layer it will restrict the output to the range 0-1 as you require.
However if you have a regression problem with a restricted output range the sum-of-squares error metric may not be ideal and maybe a beta noise model might be more appropriate (c.f. beta regression, which IIRC is implemented in an R package, but I have never used it myself) | Output layer of artificial neural networks when learning non-linear functions with limited value ran
If you use a logistic activation function in the output layer it will restrict the output to the range 0-1 as you require.
However if you have a regression problem with a restricted output range the |
49,638 | Output layer of artificial neural networks when learning non-linear functions with limited value range | If you know an absolute range for the output, but there is no reason to expect it to have the non-linear characteristic of the typical logistic activation function (i.e. a value in the middle is just as likely as a value near 0 or 1), then you can just transform the output by dividing by the absolute maximum. If the minimum were not 0, you could subtract the absolute minimum before dividing by the value (maximum - minimum).
So basically don't try to train the neural network to the raw value, train it to the percentile value (0 for minimum, 1 for maximum). | Output layer of artificial neural networks when learning non-linear functions with limited value ran | If you know an absolute range for the output, but there is no reason to expect it to have the non-linear characteristic of the typical logistic activation function (i.e. a value in the middle is just | Output layer of artificial neural networks when learning non-linear functions with limited value range
If you know an absolute range for the output, but there is no reason to expect it to have the non-linear characteristic of the typical logistic activation function (i.e. a value in the middle is just as likely as a value near 0 or 1), then you can just transform the output by dividing by the absolute maximum. If the minimum were not 0, you could subtract the absolute minimum before dividing by the value (maximum - minimum).
So basically don't try to train the neural network to the raw value, train it to the percentile value (0 for minimum, 1 for maximum). | Output layer of artificial neural networks when learning non-linear functions with limited value ran
If you know an absolute range for the output, but there is no reason to expect it to have the non-linear characteristic of the typical logistic activation function (i.e. a value in the middle is just |
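A minimal R illustration of the rescaling suggested above: train on (y - min) / (max - min) and map predictions back afterwards; all numbers are made up:

```r
y_min <- 10; y_max <- 50                    # known absolute range of the target
y <- c(12, 25, 49)                          # raw training targets
y_scaled <- (y - y_min) / (y_max - y_min)   # what the network is trained against
pred_scaled <- c(0.10, 0.48, 0.95)          # hypothetical network outputs in [0, 1]
y_min + pred_scaled * (y_max - y_min)       # mapped back to the original scale
```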
49,639 | Output layer of artificial neural networks when learning non-linear functions with limited value range | "Would it work to use the linear function and simply cut all values below 0 to 0, and values above 1 to 1?"
I believe in many cases the cut-off value should be the percentage split of the training data. Eg if your training data has 13% - 0's and 87% - 1's, then the cut-off would be 0.13; For example anything 0.13 and below on the output is 0 and anything 0.14 and above is 1. Obviously there is more uncertainty the closer to the cut-off the output provides. It may also help adjusting the cut-off limits especially where the cost of a mis-classification is high. This link may help a little http://timmanns.blogspot.com/2009/11/building-neural-networks-on-unbalanced.html | Output layer of artificial neural networks when learning non-linear functions with limited value ran | "Would it work to use the linear function and simply cut all values below 0 to 0, and values above 1 to 1?"
I believe in many cases the cut-off value should be the percentage split of the training dat | Output layer of artificial neural networks when learning non-linear functions with limited value range
"Would it work to use the linear function and simply cut all values below 0 to 0, and values above 1 to 1?"
I believe in many cases the cut-off value should be the percentage split of the training data. Eg if your training data has 13% - 0's and 87% - 1's, then the cut-off would be 0.13; For example anything 0.13 and below on the output is 0 and anything 0.14 and above is 1. Obviously there is more uncertainty the closer to the cut-off the output provides. It may also help adjusting the cut-off limits especially where the cost of a mis-classification is high. This link may help a little http://timmanns.blogspot.com/2009/11/building-neural-networks-on-unbalanced.html | Output layer of artificial neural networks when learning non-linear functions with limited value ran
"Would it work to use the linear function and simply cut all values below 0 to 0, and values above 1 to 1?"
I believe in many cases the cut-off value should be the percentage split of the training dat |
49,640 | How to graphically compare predicted and actual values from multivariate regression in R? | In addition to @mpiktas's comment, you can also have a look at the rms package from Frank Harrell. The advantage is that it handles both LM and GLM for model fitting and prediction; see for example the plot.Predict() function. If you're planning to do serious job in regression modeling, this package and its companion Hmisc are really good. | How to graphically compare predicted and actual values from multivariate regression in R? | In addition to @mpiktas's comment, you can also have a look at the rms package from Frank Harrell. The advantage is that it handles both LM and GLM for model fitting and prediction; see for example th | How to graphically compare predicted and actual values from multivariate regression in R?
In addition to @mpiktas's comment, you can also have a look at the rms package from Frank Harrell. The advantage is that it handles both LM and GLM for model fitting and prediction; see for example the plot.Predict() function. If you're planning to do serious job in regression modeling, this package and its companion Hmisc are really good. | How to graphically compare predicted and actual values from multivariate regression in R?
In addition to @mpiktas's comment, you can also have a look at the rms package from Frank Harrell. The advantage is that it handles both LM and GLM for model fitting and prediction; see for example th |
49,641 | Probability calculation, system uptime, likelihood of occurence | Okay, so here is my answer that I promised. I initially thought it would be quickish, but my answer has become quite large, so at the beginning, I state my general results first, and leave the gory details down the bottom for those who want to see it.
I must thank @terry felkrow for this fascinating question - if I could give you +10 I would! This basically is a prime example of the slickness and elegance of Bayesian and Maximum Entropy methods. I have had much fun working it out!
SUMMARY
Exact result
$$Pr(\theta \in (0,S)|F_{obs},T_U,T_D)=1-\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)^{F_{obs}+1}$$
Where $\theta$ is the time of the first down time (in seconds) observed by the user, $T_U$ is the number of "up time" seconds observed , $T_D$ is the number of "down time" seconds observed, and $F_{obs}$ is the number of "down periods" (F for "failures"; $\frac{T_D}{F_{obs}}$ is the average number of seconds spent in "down time") observed
For your case, $F_{obs}$ is not given, but I would guess that you could find out what it was (which is why I gave the answer for known $F_{obs}$). Now because you know $T_D$, this tells you a bit about $F_{obs}$, and you should be able to pose an "Expected Value" or educated guess of $F_{obs}$, call it $\hat{F}$. Now using the geometric distribution with probability parameter $p=\frac{1}{\hat{F}}$ (this is the Maximum Entropy distribution for fixed mean equal to $\hat{F}$), to integrate out $F_{obs}$ gives the probability of (see details for the maths):
$$Pr(\theta \in (0,S)|\hat{F},T_U,T_D)=1-\frac{\Bigg(\frac{T_U}{T_U+T_D}\Bigg)\Bigg(\frac{T_U}{T_U+S}\Bigg)^2}{\hat{F}-(\hat{F}-1)\Bigg(\frac{T_U}{T_U+S}\Bigg)}$$
So for your specific case, the table below shows the resulting probabilities for different $F$, assuming it is known (column 2) or "expected" (column 3). You can see that knowing $F_{obs}$, compared to knowing only a "rough" guess $\hat{F}$, only matters when it is very large (i.e. when the observed average down time is 1 second or less).
$$
\begin{array}{c|c|c}
F & Pr(\theta \in (0,S)|F_{obs},T_U,T_D) & Pr(\theta \in (0,S)|\hat{F},T_U,T_D) \\
\hline
1,000,000 & 0.625 & 0.499 \\
\hline
500,000 & 0.393 & 0.336 \\
\hline
250,000 & 0.227 & 0.207 \\
\hline
125,000 & 0.128 & 0.122 \\
\hline
62,500 & 0.074 & 0.072 \\
\hline
31,250 & 0.045 & 0.045 \\
\hline
15,685 & 0.031 & 0.030 \\
\hline
7,812 & 0.023 & 0.023 \\
\hline
1 & 0.016 & 0.016
\end{array}
$$
DETAILS
It is based on example 3 in the paper below
Jaynes, E. T., 1976. `Confidence Intervals vs Bayesian Intervals,' in Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, W. L. Harper and C. A. Hooker (eds.), D. Reidel, Dordrecht, p. 175; pdf
It supposes that the probability that a machine will operate without failure for a time $t$, is given by
$$Pr(\theta \geq t)=e^{-\lambda t};\ \ 0<t,\lambda < \infty$$
Where $\lambda$ is an unknown "rate of failure", to be estimated from some data.
I will use this to model the failure times in 2 separate cases, where "failure" indicates going from "working" to "down time", and the other way around. You can think of this like modeling two "memoryless" procedures. We first "wait" for the down time, from time $t=t_{0u}=0$, to time $t=t_{1d}$ (so that there was $t_1$ seconds of uninterrupted "operating" time). This has a failure rate of $\lambda_d$. At time $t=t_{1d}$ a new process takes over and now we "wait" for the down time to "fail" at time $t=t_{1u}$. It is also supposed that the rate of failure is constant over time, and that the process has independent increments (i.e. if you know where the process is at time $t=s$, then all other information about the process prior to time $t<s$ is irrelevant). This is what is known as a first order Markov process, also known as a "memoryless" process (for obvious reasons).
Okay, the problem goes as follows, Jaynes eq (8) gives the density that $r$ units out of $n$ will fail at the times $t_1 ,t_2 ,\dots,t_r$, and the remaining (n-r) do not fail at time t as
$$p(t_1 ,t_2 ,\dots,t_r | \lambda,n)=[\lambda^r exp(-\lambda \sum_{i}t_i)][exp(-(n-r)\lambda t)]$$
Then assigning a uniform prior (the particular prior you use won't matter in your case because you have so much data, the likelihood will dominate any reasonably "flat" prior) to $\lambda$, this gives the posterior predictive distribution (see Jaynes' paper for details, eq (9)-(13)):
$$Pr(\theta\geq\theta_0|n,t_1 ,\dots,t_r)=\int_0^{\infty}Pr(\theta\geq\theta_0|\lambda)p( \lambda | t_1 ,t_2 ,\dots,t_r,n)d\lambda=\Bigg(\frac{T}{T+\theta_0}\Bigg)^{r+1}$$
Where $T=\sum_{i}t_i + (n-r)t$ is the total time the devices operated without failure. This indicates that you only needed to know the total "failure free time", which you have both given as $T_D=500,000$ and $T_U=31,556,926-500,000=31,056,926$. Also, for your problem we always observed either $n$ or $n-1$ "failures" by time $t$, depending on whether the system was "down" or "up" at time $t$.
Now if you knew what $F_{obs}$ was, then you just plug in $r=F_{obs}$ to the above equation. The probability that a user will not be in the "down time" in the first $S$ seconds given that the system was "up" when they started is then
$$Pr(\theta\geq S|[\text{Up at start} ],F_{obs},T_U)=\Bigg(\frac{T_U}{T_U+30}\Bigg)^{F_{obs}+1}$$
But the story is not yet finished, because we can marginalise (remove conditions) further. To make the equations shorter, let $A$ stand for the system was up when the user started, and let $B$ stand for no down time in $S$ seconds. Then, by the law of total probability, we have
$$Pr(B|F_{obs},T_U,T_D)=Pr(B|F_{obs},T_U,T_D,A)Pr(A|F_{obs},T_U,T_D)$$
$$+Pr(B|F_{obs},T_U,T_D,\overline{A})Pr(\overline{A}|T_U,T_D)$$
Now $\overline{A}$ means that the system was down when the user started, so that it is impossible for $B$ to be true (i.e. no down time) when $\overline{A}$ is true. Thus, $Pr(B|F_{obs},T_U,T_D,\overline{A})=0$, and we just have to multiply by $Pr(A|F_{obs},T_U,T_D)$. This is given by $\frac{T_U}{T_U+T_D}$, because none of the information contained in $F_{obs},T_U,T_D$ gives any reason to favor any particular time over any other.
$$Pr(\theta\geq S|F_{obs},T_U,T_D)=\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)^{F_{obs}+1}$$
Taking 1 minus this gives the desired result.
NOTE: We may have additional knowledge which would favor certain times, such as knowing what time of day is more likely to have a system outage, or we may believe that system outage is related to the number of users; this analysis ignores such information, and so could be improved upon by taking it into account.
NOTE: if you only knew a rough guess of $F_{obs}$, say $\hat{F}$, you could (in theory) use the geometric distribution (which has the largest entropy for a fixed mean) for $F_{obs}$ with probability parameter $p=\frac{1}{\hat{F}}$ and marginalise over $F_{obs}$ to give:
$$Pr(\theta \geq S|T_U,T_D)=\frac{T_U}{T_U+T_D}\sum_{i=1}^{i=\infty} p(1-p)^{i-1}\Bigg(\frac{T_U}{T_U+S}\Bigg)^{i+1}$$
$$=\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)\sum_{i=1}^{i=\infty} p(1-p)^{i-1}\Bigg(\frac{T_U}{T_U+S}\Bigg)^{i}$$
$$=\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)\sum_{i=1}^{i=\infty} p(1-p)^{i-1} exp\Bigg(i log\Bigg[\frac{T_U}{T_U+S}\Bigg]\Bigg)$$
Now the summation is just the moment generating function, $m_{X}(t)=E[exp(tX)]$, evaluated at $t=log\Bigg[\frac{T_U}{T_U+S}\Bigg]$. The mgf for the geometric distribution is given by:
$$m_{X}(t)=E[exp(tX)]=\frac{pe^t}{1-(1-p)e^t}$$
$$\rightarrow m_{X}(log\Bigg[\frac{T_U}{T_U+S}\Bigg])=\frac{p\Bigg[\frac{T_U}{T_U+S}\Bigg]}{1-(1-p)\Bigg[\frac{T_U}{T_U+S}\Bigg]}$$
And this gives a marginal probability of (noting $p=\frac{1}{\hat{F}}$):
$$Pr(\theta \geq S|T_U,T_D)=\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)\frac{\frac{1}{\hat{F}}\Bigg[\frac{T_U}{T_U+S}\Bigg]}{1-(1-\frac{1}{\hat{F}})\Bigg[\frac{T_U}{T_U+S}\Bigg]}$$
Rearranging terms gives the final result:
$$Pr(\theta \in (0,S)|T_U,T_D)=1-Pr(\theta \geq S|T_U,T_D)=1-\frac{\Bigg(\frac{T_U}{T_U+T_D}\Bigg)\Bigg(\frac{T_U}{T_U+S}\Bigg)^2}{\hat{F}-(\hat{F}-1)\Bigg(\frac{T_U}{T_U+S}\Bigg)}$$ | Probability calculation, system uptime, likelihood of occurence | Okay, so here is my answer that I promised. I initially thought it would be quickish, but my answer has become quite large, so at the begining, I state my general results first, and leave the gory de | Probability calculation, system uptime, likelihood of occurence
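A quick numerical check in R of the two closed-form expressions above, using $T_U = 31,556,926 - 500,000$, $T_D = 500,000$ and $S = 30$ seconds from the question; it reproduces several entries of the table:

```r
T_U <- 31556926 - 500000   # seconds of up time over the year, as in the question
T_D <- 500000              # seconds of down time
S   <- 30                  # length of the user's visit in seconds

p_known <- function(F_obs) 1 - (T_U / (T_U + T_D)) * (T_U / (T_U + S))^(F_obs + 1)
p_guess <- function(Fhat) {
  r <- T_U / (T_U + S)
  1 - (T_U / (T_U + T_D)) * r^2 / (Fhat - (Fhat - 1) * r)
}
round(p_known(c(1e6, 5e5, 1)), 3)   # 0.625, 0.393, 0.016, matching column 2
round(p_guess(c(1e6, 5e5, 1)), 3)   # 0.499, 0.336, 0.016, matching column 3
```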
Okay, so here is my answer that I promised. I initially thought it would be quickish, but my answer has become quite large, so at the beginning, I state my general results first, and leave the gory details down the bottom for those who want to see it.
I must thank @terry felkrow for this fascinating question - if I could give you +10 I would! This basically is a prime example of the slickness and elegance of Bayesian and Maximum Entropy methods. I have had much fun working it out!
SUMMARY
Exact result
$$Pr(\theta \in (0,S)|F_{obs},T_U,T_D)=1-\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)^{F_{obs}+1}$$
Where $\theta$ is the time of the first down time (in seconds) observed by the user, $T_U$ is the number of "up time" seconds observed , $T_D$ is the number of "down time" seconds observed, and $F_{obs}$ is the number of "down periods" (F for "failures"; $\frac{T_D}{F_{obs}}$ is the average number of seconds spent in "down time") observed
For your case, $F_{obs}$ is not given, but I would guess that you could find out what it was (which is why I gave the answer for known $F_{obs}$). Now because you know $T_D$, this tells you a bit about $F_{obs}$, and you should be able to pose an "Expected Value" or educated guess of $F_{obs}$, call it $\hat{F}$. Now using the geometric distribution with probability parameter $p=\frac{1}{\hat{F}}$ (this is the Maximum Entropy distribution for fixed mean equal to $\hat{F}$), to integrate out $F_{obs}$ gives the probability of (see details for the maths):
$$Pr(\theta \in (0,S)|\hat{F},T_U,T_D)=1-\frac{\Bigg(\frac{T_U}{T_U+T_D}\Bigg)\Bigg(\frac{T_U}{T_U+S}\Bigg)^2}{\hat{F}-(\hat{F}-1)\Bigg(\frac{T_U}{T_U+S}\Bigg)}$$
So for your specific case, the table below shows the resulting probabilities for different $F$, assuming it is known (column 2) or "expected" (column 3). You can see that knowing $F_{obs}$, compared to knowing only a "rough" guess $\hat{F}$, only matters when it is very large (i.e. when the observed average down time is 1 second or less).
$$
\begin{array}{c|c|c}
F & Pr(\theta \in (0,S)|F_{obs},T_U,T_D) & Pr(\theta \in (0,S)|\hat{F},T_U,T_D) \\
\hline
1,000,000 & 0.625 & 0.499 \\
\hline
500,000 & 0.393 & 0.336 \\
\hline
250,000 & 0.227 & 0.207 \\
\hline
125,000 & 0.128 & 0.122 \\
\hline
62,500 & 0.074 & 0.072 \\
\hline
31,250 & 0.045 & 0.045 \\
\hline
15,685 & 0.031 & 0.030 \\
\hline
7,812 & 0.023 & 0.023 \\
\hline
1 & 0.016 & 0.016
\end{array}
$$
DETAILS
It is based on example 3 in the paper below
Jaynes, E. T., 1976. `Confidence Intervals vs Bayesian Intervals,' in Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, W. L. Harper and C. A. Hooker (eds.), D. Reidel, Dordrecht, p. 175; pdf
It supposes that the probability that a machine will operate without failure for a time $t$, is given by
$$Pr(\theta \geq t)=e^{-\lambda t};\ \ 0<t,\lambda < \infty$$
Where $\lambda$ is an unknown "rate of failure", to be estimated from some data.
I will use this to model the failure times in 2 separate cases, where "failure" indicates going from "working" to "down time", and the other way around. You can think of this like modeling two "memoryless" procedures. We first "wait" for the down time, from time $t=t_{0u}=0$, to time $t=t_{1d}$ (so that there was $t_1$ seconds of uninterrupted "operating" time). This has a failure rate of $\lambda_d$. At time $t=t_{1d}$ a new process takes over and now we "wait" for the down time to "fail" at time $t=t_{1u}$. It is also supposed that the rate of failure is constant over time, and that the process has independent increments (i.e. if you know where the process is at time $t=s$, then all other information about the process prior to time $t<s$ is irrelevant). This is what is known as a first order Markov process, also known as a "memoryless" process (for obvious reasons).
Okay, the problem goes as follows: Jaynes' eq. (8) gives the density that $r$ units out of $n$ will fail at the times $t_1, t_2, \dots, t_r$, and that the remaining $(n-r)$ have not failed by time $t$, as
$$p(t_1 ,t_2 ,\dots,t_r | \lambda,n)=[\lambda^r exp(-\lambda \sum_{i}t_i)][exp(-(n-r)\lambda t)]$$
Then assigning a uniform prior (the particular prior you use won't matter in your case because you have so much data; the likelihood will dominate any reasonably "flat" prior) to $\lambda$, this gives the posterior predictive distribution (see Jaynes' paper for details, eqs. (9)-(13)):
$$Pr(\theta\geq\theta_0|n,t_1 ,\dots,t_r)=\int_0^{\infty}Pr(\theta\geq\theta_0|\lambda)p( \lambda | t_1 ,t_2 ,\dots,t_r,n)d\lambda=\Bigg(\frac{T}{T+\theta_0}\Bigg)^{r+1}$$
Where $T=\sum_{i}t_i + (n-r)t$ is the total time the devices operated without failure. This indicates that you only needed to know the total "failure free time", which you have given for both cases as $T_D=500,000$ and $T_U=31,556,926-500,000=31,056,926$. Also, for your problem we always observed either $n$ or $n-1$ "failures" by time $t$, depending on whether the system was "down" or "up" at time $t$.
Now if you knew what $F_{obs}$ was, then you just plug in $r=F_{obs}$ to the above equation. The probability that a user will not be in the "down time" in the first $S$ seconds given that the system was "up" when they started is then
$$Pr(\theta\geq S|[\text{Up at start} ],F_{obs},T_U)=\Bigg(\frac{T_U}{T_U+30}\Bigg)^{F_{obs}+1}$$
But the story is not yet finished, because we can marginalise (remove conditions) further. To make the equations shorter, let $A$ stand for the system was up when the user started, and let $B$ stand for no down time in $S$ seconds. Then, by the law of total probability, we have
$$Pr(B|F_{obs},T_U,T_D)=Pr(B|F_{obs},T_U,T_D,A)Pr(A|F_{obs},T_U,T_D)$$
$$+Pr(B|F_{obs},T_U,T_D,\overline{A})Pr(\overline{A}|T_U,T_D)$$
Now $\overline{A}$ means that the system was down when the user started, so that it is impossible for $B$ to be true (i.e. no down time) when $\overline{A}$ is true. Thus, $Pr(B|F_{obs},T_U,T_D,\overline{A})=0$, and we just have to multiply by $Pr(A|F_{obs},T_U,T_D)$. This is given by $\frac{T_U}{T_U+T_D}$, because none of the information contained in $F_{obs},T_U,T_D$ gives any reason to favor any particular time over any other time.
$$Pr(\theta\geq S|F_{obs},T_U,T_D)=\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)^{F_{obs}+1}$$
Taking 1 minus this gives the desired result.
NOTE: We may have additional knowledge which would favor certain times, such as knowing what time of day is more likely to have a system outage, or we may believe that system outage is related to the number of users; this analysis ignores such information, and so could be improved upon by taking it into account.
NOTE: if you only knew a rough guess of $F_{obs}$, say $\hat{F}$, you could (in theory) use the geometric distribution (which has the largest entropy for a fixed mean) for $F_{obs}$ with probability parameter $p=\frac{1}{\hat{F}}$ and marginalise over $F_{obs}$ to give:
$$Pr(\theta \geq S|T_U,T_D)=\frac{T_U}{T_U+T_D}\sum_{i=1}^{i=\infty} p(1-p)^{i-1}\Bigg(\frac{T_U}{T_U+S}\Bigg)^{i+1}$$
$$=\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)\sum_{i=1}^{i=\infty} p(1-p)^{i-1}\Bigg(\frac{T_U}{T_U+S}\Bigg)^{i}$$
$$=\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)\sum_{i=1}^{i=\infty} p(1-p)^{i-1} exp\Bigg(i log\Bigg[\frac{T_U}{T_U+S}\Bigg]\Bigg)$$
Now the summation is just the moment generating function, $m_{X}(t)=E[exp(tX)]$, evaluated at $t=log\Bigg[\frac{T_U}{T_U+S}\Bigg]$. The mgf for the geometric distribution is given by:
$$m_{X}(t)=E[exp(tX)]=\frac{pe^t}{1-(1-p)e^t}$$
$$\rightarrow m_{X}(log\Bigg[\frac{T_U}{T_U+S}\Bigg])=\frac{p\Bigg[\frac{T_U}{T_U+S}\Bigg]}{1-(1-p)\Bigg[\frac{T_U}{T_U+S}\Bigg]}$$
And this gives a marginal probability of (noting $p=\frac{1}{\hat{F}}$):
$$Pr(\theta \geq S|T_U,T_D)=\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)\frac{\frac{1}{\hat{F}}\Bigg[\frac{T_U}{T_U+S}\Bigg]}{1-(1-\frac{1}{\hat{F}})\Bigg[\frac{T_U}{T_U+S}\Bigg]}$$
Rearranging terms gives the final result:
$$Pr(\theta \in (0,S)|T_U,T_D)=1-Pr(\theta \geq S|T_U,T_D)=1-\frac{\Bigg(\frac{T_U}{T_U+T_D}\Bigg)\Bigg(\frac{T_U}{T_U+S}\Bigg)^2}{\hat{F}-(\hat{F}-1)\Bigg(\frac{T_U}{T_U+S}\Bigg)}$$ | Probability calculation, system uptime, likelihood of occurence
Okay, so here is my answer that I promised. I initially thought it would be quickish, but my answer has become quite large, so at the begining, I state my general results first, and leave the gory de |
49,642 | Predicting a semi-deterministic process | If you want to forecast time-series data, first you need to check whether it is stationary. Basically this means checking whether data has trends. If for example some time trend is present, you can concern yourself only with its forecast, because time-trends usually dominate everything else. For stationary time series it is good to use Box-Jenkins approach. This in the end will give you some kind of ARMA model (autoregressive model suggested by @whuber is a subset of this model). Since you have three time series you may look into VAR models.
If you use R, then first step can be performed by function stl, it is function from standard R. Autoregressive models can be fit automatically by auto.arima in package forecast. This function can either fit your desired model, or find the best specification for certain definition of best. You might look into that package more, since it is specially designed for forecasting time series. For VAR model use VAR function from vars package. This package has a nice vignette describing its capabilities. | Predicting a semi-deterministic process | If you want to forecast time-series data, first you need to check whether it is stationary. Basically this means checking whether data has trends. If for example some time trend is present, you can co | Predicting a semi-deterministic process
If you want to forecast time-series data, first you need to check whether it is stationary. Basically this means checking whether the data has trends. If, for example, some time trend is present, you can concern yourself only with its forecast, because time trends usually dominate everything else. For stationary time series it is good to use the Box-Jenkins approach. This will in the end give you some kind of ARMA model (the autoregressive model suggested by @whuber is a subset of this model). Since you have three time series you may look into VAR models.
If you use R, then the first step can be performed by the function stl; it is a function from standard R. Autoregressive models can be fitted automatically by auto.arima in the package forecast. This function can either fit your desired model, or find the best specification for a certain definition of best. You might look into that package more, since it is specially designed for forecasting time series. For a VAR model use the VAR function from the vars package. This package has a nice vignette describing its capabilities. | Predicting a semi-deterministic process
If you want to forecast time-series data, first you need to check whether it is stationary. Basically this means checking whether data has trends. If for example some time trend is present, you can co |
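To make the R workflow in this answer concrete, here is a minimal sketch; the three series are simulated stand-ins (monthly frequency chosen only for illustration), and the functions are the ones named above (stl, forecast::auto.arima, vars::VAR):
library(forecast)
library(vars)
y1 <- ts(cumsum(rnorm(120)), frequency = 12)   # placeholder series; substitute your own data
y2 <- ts(cumsum(rnorm(120)), frequency = 12)
y3 <- ts(cumsum(rnorm(120)), frequency = 12)
plot(stl(y1, s.window = "periodic"))           # trend / seasonal / remainder decomposition
fit1 <- auto.arima(y1)                         # automatic ARIMA specification for one series
forecast(fit1, h = 12)                         # 12-step-ahead univariate forecast
fit_var <- VAR(ts.union(y1, y2, y3), p = 2)    # joint VAR(2) for the three series
predict(fit_var, n.ahead = 12)                 # multivariate forecast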
49,643 | Cluster data points by distance between clusters | Create a graph in which the points are nodes and two points are connected with an edge if and only if they lie within distance $d$ of each other. Stated in these terms, your criteria become
Every node in a cluster of two or more nodes is connected to at least one other node in that cluster.
No two points in any disjoint clusters can be connected to each other.
In short, you want to compute the connected components of this graph. Linear-time algorithms are well known and simple to execute, as described in the Wikipedia article.
To do this efficiently for a lot of points in 3D, you want to limit the amount of distance calculation you perform. Data structures such as octrees, or even simpler ones (such as exploiting a lexicographic sorting of the points) work well. | Cluster data points by distance between clusters | Create a graph in which the points are nodes and two points are connected with an edge if and only if they lie within distance $d$ of each other. Stated in these terms, your criteria become
Every no | Cluster data points by distance between clusters
Create a graph in which the points are nodes and two points are connected with an edge if and only if they lie within distance $d$ of each other. Stated in these terms, your criteria become
Every node in a cluster of two or more nodes is connected to at least one other node in that cluster.
No two points in any disjoint clusters can be connected to each other.
In short, you want to compute the connected components of this graph. Linear-time algorithms are well known and simple to execute, as described in the Wikipedia article.
To do this efficiently for a lot of points in 3D, you want to limit the amount of distance calculation you perform. Data structures such as octrees, or even simpler ones (such as exploiting a lexicographic sorting of the points) work well. | Cluster data points by distance between clusters
Create a graph in which the points are nodes and two points are connected with an edge if and only if they lie within distance $d$ of each other. Stated in these terms, your criteria become
Every no |
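To illustrate the connected-components idea (the brute-force version, not the octree-accelerated one), here is a small R sketch using the igraph package; the point matrix pts and the distance d are made-up placeholders:
library(igraph)
pts <- matrix(runif(300), ncol = 3)            # 100 random 3-D points (placeholder data)
d   <- 0.15                                    # distance threshold
adj <- (as.matrix(dist(pts)) <= d) * 1         # all pairwise distances: fine only for modest n
g   <- graph_from_adjacency_matrix(adj, mode = "undirected", diag = FALSE)
membership <- components(g)$membership         # connected components = the required clusters
table(membership)                              # cluster sizes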
49,644 | Leave-one-out cross validation and boosted regression trees | It is hard to tell without data, but the set may be "too homogeneous" to make LOO work -- imagine you have a set $X$ and you duplicate all objects to make a set $X_d$ -- while BRT usually have very good accuracy on its train, it is pretty obvious that LOO on $X_d$ will probably give identical results to test-on-train.
So if the accuracy is good I would even try resampling CV (on each of let's say 10 folds you make a train set of a size equal to the full set by sampling objects with replacement, and a test set from the objects that were not placed in the train set -- this should split them in about a 1:2 proportion) on this data to verify this result.
EDIT: More precise algorithm of resampling CV
Given a dataset with $N$ objects and $M$ attributes:
The training set is made by randomly selecting $N$ objects from the original set with replacement
The objects that were not selected in step 1 form the test set (this is roughly $\frac{1}{3}N$ objects)
The classifier is trained on the training set and tested on the test set, and the measured error is gathered
Steps 1-3 are repeated $T$ times, where $T$ is more less arbitrary, say 10, 15 or 30 | Leave-one-out cross validation and boosted regression trees | It is hard to tell without data, but the set may be "too homogeneous" to make LOO work -- imagine you have a set $X$ and you duplicate all objects to make a set $X_d$ -- while BRT usually have very go | Leave-one-out cross validation and boosted regression trees
It is hard to tell without data, but the set may be "too homogeneous" to make LOO work -- imagine you have a set $X$ and you duplicate all objects to make a set $X_d$ -- since BRTs usually have very good accuracy on their training set, it is pretty obvious that LOO on $X_d$ will probably give identical results to test-on-train.
So if the accuracy is good I would even try resampling CV (on each of let's say 10 folds you make a train set of a size equal to the full set by sampling objects with replacement, and a test set from the objects that were not placed in the train set -- this should split them in about a 1:2 proportion) on this data to verify this result.
EDIT: More precise algorithm of resampling CV
Given a dataset with $N$ objects and $M$ attributes:
The training set is made by randomly selecting $N$ objects from the original set with replacement
The objects that were not selected in step 1 form the test set (this is roughly $\frac{1}{3}N$ objects)
The classifier is trained on the training set and tested on the test set, and the measured error is gathered
Steps 1-3 are repeated $T$ times, where $T$ is more or less arbitrary, say 10, 15 or 30 | Leave-one-out cross validation and boosted regression trees
It is hard to tell without data, but the set may be "too homogeneous" to make LOO work -- imagine you have a set $X$ and you duplicate all objects to make a set $X_d$ -- while BRT usually have very go |
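A runnable sketch of the resampling CV described above; lm on a toy data frame stands in for the boosted-regression-tree fit, which is the only piece you would swap out:
set.seed(1)
dat <- data.frame(x = rnorm(200)); dat$y <- dat$x + rnorm(200)   # toy stand-in data
T_reps <- 30
errs   <- numeric(T_reps)
for (b in seq_len(T_reps)) {
  tr  <- sample(nrow(dat), nrow(dat), replace = TRUE)        # step 1: bootstrap a train set of full size
  te  <- setdiff(seq_len(nrow(dat)), tr)                     # step 2: the objects not drawn become the test set
  fit <- lm(y ~ x, data = dat[tr, ])                         # step 3: fit (replace lm with the BRT fit) ...
  errs[b] <- mean((predict(fit, dat[te, ]) - dat$y[te])^2)   # ... and measure the held-out error
}
mean(errs)                                                   # step 4: average over the repetitions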
49,645 | Choosing the right threshold for a biometric trait authentication system | Generally, the cut-off value is chosen such as to maximize the compromise between sensitivity (Se) and specificity (Sp). You can generate a regular sequence of thresholds and plot the resulting ROC curve, as shown below, based on the DiagnosisMed R package.
Actually, the raw data looks like
test.values TP FN FP TN Sensitivity Specificity
1 0.037 51 0 97 0 1 0.0000
2 0.038 51 0 96 1 1 0.0103
3 0.039 51 0 91 6 1 0.0619
4 0.040 51 0 84 13 1 0.1340
5 0.041 51 0 74 23 1 0.2371
6 0.042 51 0 67 30 1 0.3093
and the optimal threshold is found as
test.values TP FN FP TN Sensitivity Specificity
47 0.194 43 8 8 89 0.8431 0.9175
To sum up, I would suggest to generate a regular sequence of possible thresholds and compute Se and Sp in each case; then, choose the one that maximize Se and (1-Sp) (or use other criteria if you want to minimize FP or FN rates). | Choosing the right threshold for a biometric trait authentication system | Generally, the cut-off value is chosen such as to maximize the compromise between sensitivity (Se) and specificity (Sp). You can generate a regular sequence of thresholds and plot the resulting ROC cu | Choosing the right threshold for a biometric trait authentication system
Generally, the cut-off value is chosen such as to maximize the compromise between sensitivity (Se) and specificity (Sp). You can generate a regular sequence of thresholds and plot the resulting ROC curve, as shown below, based on the DiagnosisMed R package.
Actually, the raw data looks like
test.values TP FN FP TN Sensitivity Specificity
1 0.037 51 0 97 0 1 0.0000
2 0.038 51 0 96 1 1 0.0103
3 0.039 51 0 91 6 1 0.0619
4 0.040 51 0 84 13 1 0.1340
5 0.041 51 0 74 23 1 0.2371
6 0.042 51 0 67 30 1 0.3093
and the optimal threshold is found as
test.values TP FN FP TN Sensitivity Specificity
47 0.194 43 8 8 89 0.8431 0.9175
To sum up, I would suggest generating a regular sequence of possible thresholds and computing Se and Sp in each case; then choose the one that maximizes Se and Sp (or use other criteria if you want to minimize FP or FN rates).
Generally, the cut-off value is chosen such as to maximize the compromise between sensitivity (Se) and specificity (Sp). You can generate a regular sequence of thresholds and plot the resulting ROC cu |
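The Se/Sp scan is easy to do by hand as well; a minimal R sketch (the scores and labels are simulated, but the 51 positives and 97 negatives mirror the counts in the output above):
set.seed(1)
truth <- rep(c(1, 0), c(51, 97))                           # 51 genuine, 97 impostor attempts
score <- c(rnorm(51, 0.25, 0.08), rnorm(97, 0.12, 0.05))   # made-up matcher scores
ths   <- sort(unique(score))
se    <- sapply(ths, function(th) mean(score[truth == 1] >= th))
sp    <- sapply(ths, function(th) mean(score[truth == 0] <  th))
plot(1 - sp, se, type = "l")                               # empirical ROC curve
ths[which.max(se + sp)]                                    # threshold maximising Se + Sp; substitute your own criterion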
49,646 | Generalization of the Signal-Noise ratio for non-Gaussian processes | I think signal to noise ratio is very common in signal processing regardless of the form of the noise distribution. It is like the reciprocal of the coefficient of variation not the ratio of two variances. | Generalization of the Signal-Noise ratio for non-Gaussian processes | I think signal to noise ratio is very common in signal processing regardless of the form of the noise distribution. It is like the reciprocal of the coefficient of variation not the ratio of two vari | Generalization of the Signal-Noise ratio for non-Gaussian processes
I think signal to noise ratio is very common in signal processing regardless of the form of the noise distribution. It is like the reciprocal of the coefficient of variation not the ratio of two variances. | Generalization of the Signal-Noise ratio for non-Gaussian processes
I think signal to noise ratio is very common in signal processing regardless of the form of the noise distribution. It is like the reciprocal of the coefficient of variation not the ratio of two vari |
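In symbols, one common amplitude-based convention consistent with this statement is
$$\mathrm{SNR}=\frac{\mu}{\sigma}=\frac{1}{\mathrm{CV}},\qquad \mathrm{CV}=\frac{\sigma}{\mu},$$
which needs only a finite mean $\mu$ and standard deviation $\sigma$, not any Gaussian assumption.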
49,647 | Generalization of the Signal-Noise ratio for non-Gaussian processes | Speaking from an engineering viewpoint, there are many different definitions of signal to noise ratio (see for example, this question on dsp.SE) depending on the application and the author, and the key property that they all share is that the performance parameters of interest (e.g. bit error rate) generally are monotone functions of the signal to noise ratio.
In the presence of Gaussian noise, it is often possible to calculate
explicitly the parameter of interest as a function of the signal to noise
ratio. In other kinds of noise, one might have to be content with bounds.
But in all instances, all engineers are in agreement that large signal to
noise ratio is better than small signal to noise ratio even if they cannot agree on what is meant by signal to noise ratio, and are unable to determine
exactly how the signal to noise ratio determines the parameters of interest to
them. | Generalization of the Signal-Noise ratio for non-Gaussian processes | Speaking from an engineering viewpoint, there are many different definitions of signal to noise ratio (see for example, this question on dsp.SE) depending on the application and the author, and the ke | Generalization of the Signal-Noise ratio for non-Gaussian processes
Speaking from an engineering viewpoint, there are many different definitions of signal to noise ratio (see for example, this question on dsp.SE) depending on the application and the author, and the key property that they all share is that the performance parameters of interest (e.g. bit error rate) generally are monotone functions of the signal to noise ratio.
In the presence of Gaussian noise, it is often possible to calculate
explicitly the parameter of interest as a function of the signal to noise
ratio. In other kinds of noise, one might have to be content with bounds.
But in all instances, all engineers are in agreement that large signal to
noise ratio is better than small signal to noise ratio even if they cannot agree on what is meant by signal to noise ratio, and are unable to determine
exactly how the signal to noise ratio determines the parameters of interest to
them. | Generalization of the Signal-Noise ratio for non-Gaussian processes
Speaking from an engineering viewpoint, there are many different definitions of signal to noise ratio (see for example, this question on dsp.SE) depending on the application and the author, and the ke |
49,648 | How to test for parameter stationarity? | This problem is encountered in quality control/statistical process control settings. There's a large literature, as you have hinted, because different parameters as estimated in various ways from different forms of sampling different distributions can be expected to vary in different ways. The purpose is to detect that variation on-line as soon as possible after it occurs without triggering too many false detections along the way. Consider using a control chart (1, 2). In your concrete situation a good choice is a combined Shewhart-CUSUM control chart. | How to test for parameter stationarity? | This problem is encountered in quality control/statistical process control settings. There's a large literature, as you have hinted, because different parameters as estimated in various ways from dif | How to test for parameter stationarity?
This problem is encountered in quality control/statistical process control settings. There's a large literature, as you have hinted, because different parameters as estimated in various ways from different forms of sampling different distributions can be expected to vary in different ways. The purpose is to detect that variation on-line as soon as possible after it occurs without triggering too many false detections along the way. Consider using a control chart (1, 2). In your concrete situation a good choice is a combined Shewhart-CUSUM control chart. | How to test for parameter stationarity?
This problem is encountered in quality control/statistical process control settings. There's a large literature, as you have hinted, because different parameters as estimated in various ways from dif |
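A hand-rolled CUSUM sketch in R (this shows only the CUSUM half of the combined Shewhart-CUSUM chart; the data, the reference value k and the decision limit h are made up for illustration):
set.seed(1)
x <- rnorm(200, mean = rep(c(0, 0.5), each = 100))   # simulated estimates with a shift at t = 101
k <- 0.25; h <- 5                                    # reference value and decision interval
cpos <- cneg <- numeric(length(x))
for (t in 2:length(x)) {
  cpos[t] <- max(0, cpos[t - 1] + x[t] - k)          # upper CUSUM
  cneg[t] <- min(0, cneg[t - 1] + x[t] + k)          # lower CUSUM
}
which(cpos > h | cneg < -h)[1]                       # first time the chart signals a change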
49,649 | How to test for parameter stationarity? | This is a pretty general problem in time series analysis. I'd probably start by looking at some descriptive statistics like the cross-correlation to see if the samples are roughly independent over time. You could also test whether the correlation between successive samples is significant.
Or you could go the model-fitting route in which case one simple thing to do is to fit an auto-regressive model with some order k and then do model comparison versus the static model. If you assume that $\theta$ just follows a Gaussian random walk, then model you're describing is exactly a Kalman filter. So that might be another thing to look at. | How to test for parameter stationarity? | This is a pretty general problem in time series analysis. I'd probably start by looking at some descriptive statistics like the cross-correlation to see if the samples are roughly independent over tim | How to test for parameter stationarity?
This is a pretty general problem in time series analysis. I'd probably start by looking at some descriptive statistics like the cross-correlation to see if the samples are roughly independent over time. You could also test whether the correlation between successive samples is significant.
Or you could go the model-fitting route, in which case one simple thing to do is to fit an auto-regressive model with some order k and then do model comparison versus the static model. If you assume that $\theta$ just follows a Gaussian random walk, then the model you're describing is exactly a Kalman filter. So that might be another thing to look at. | How to test for parameter stationarity?
This is a pretty general problem in time series analysis. I'd probably start by looking at some descriptive statistics like the cross-correlation to see if the samples are roughly independent over tim |
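For the model-comparison route, a small base-R sketch (theta below is a simulated placeholder for the sequence of estimated parameters):
set.seed(1)
theta  <- cumsum(rnorm(500, sd = 0.1)) + rnorm(500)  # replace with your own parameter estimates
acf(theta)                                           # descriptive check of serial dependence
ar_fit <- arima(theta, order = c(1, 0, 0))           # AR(1) with a mean term
static <- arima(theta, order = c(0, 0, 0))           # "static" model: constant mean plus noise
AIC(ar_fit); AIC(static)                             # the lower AIC indicates the better description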
49,650 | From Marginal Exp-Norm Distributions to What Conditionals and Joint? | Would copulas be any use here? I don't know enough about them, or your problem, to be sure. | From Marginal Exp-Norm Distributions to What Conditionals and Joint? | Would copulas be any use here? I don't know enough about them, or your problem, to be sure. | From Marginal Exp-Norm Distributions to What Conditionals and Joint?
Would copulas be any use here? I don't know enough about them, or your problem, to be sure. | From Marginal Exp-Norm Distributions to What Conditionals and Joint?
Would copulas be any use here? I don't know enough about them, or your problem, to be sure. |
49,651 | Forecasting unemployment rate with plm | It is a known bug in plm package, when effect="individual" the pgmm crashes. The bug is fixed in plm version 1.2-7.
As for simulation, you need to calculate estimates of individual effects, since they are not estimated in GMM. At the current moment, the plm does not have the functions for predicting GMM model. I have created and put them here, but I do not advise to use them if you are not familiar with R. I have submitted these functions (forecast.pgmm and predict.pgmm) to plm maintainers, but they did not include them yet in plm package. | Forecasting unemployment rate with plm | It is a known bug in plm package, when effect="individual" the pgmm crashes. The bug is fixed in plm version 1.2-7.
As for simulation, you need to calculate estimates of individual effects, since th | Forecasting unemployment rate with plm
It is a known bug in the plm package: when effect="individual", pgmm crashes. The bug is fixed in plm version 1.2-7.
As for simulation, you need to calculate estimates of the individual effects, since they are not estimated in GMM. At the moment, plm does not have functions for predicting from a GMM model. I have created and put them here, but I do not advise using them if you are not familiar with R. I have submitted these functions (forecast.pgmm and predict.pgmm) to the plm maintainers, but they have not yet included them in the plm package. | Forecasting unemployment rate with plm
It is a known bug in plm package, when effect="individual" the pgmm crashes. The bug is fixed in plm version 1.2-7.
As for simulation, you need to calculate estimates of individual effects, since th |
49,652 | Are there any R functions which support Reversible Jump MCMC for a GLM or SGLM? [closed] | WinBUGS has support for RJMCMC with an addon. I've used it for GLMs, including those with an intrinsic CAR component. Not R, obviously, but through R2WinBUGS you can patch them together. | Are there any R functions which support Reversible Jump MCMC for a GLM or SGLM? [closed] | WinBUGS has support for RJMCMC with an addon. I've used it for GLMs, including those with an intrinsic CAR component. Not R, obviously, but through R2WinBUGS you can patch them together. | Are there any R functions which support Reversible Jump MCMC for a GLM or SGLM? [closed]
WinBUGS has support for RJMCMC with an addon. I've used it for GLMs, including those with an intrinsic CAR component. Not R, obviously, but through R2WinBUGS you can patch them together. | Are there any R functions which support Reversible Jump MCMC for a GLM or SGLM? [closed]
WinBUGS has support for RJMCMC with an addon. I've used it for GLMs, including those with an intrinsic CAR component. Not R, obviously, but through R2WinBUGS you can patch them together. |
49,653 | What are good references for dynamic pricing? | This article is highly cited:
"Yield Management at American Airlines" by Barry C. Smith et al.
Links:
JSTOR
free PDF 1, broken at 06.09.12
free PDF 2, broken at 02.01.18
free PDF 3 | What are good references for dynamic pricing? | This article is highly cited:
"Yield Management at American Airlines" by Barry C. Smith et al.
Links:
JSTOR
free PDF 1, broken at 06.09.12
free PDF 2, broken at 02.01.18
free PDF 3 | What are good references for dynamic pricing?
This article is highly cited:
"Yield Management at American Airlines" by Barry C. Smith et al.
Links:
JSTOR
free PDF 1, broken at 06.09.12
free PDF 2, broken at 02.01.18
free PDF 3 | What are good references for dynamic pricing?
This article is highly cited:
"Yield Management at American Airlines" by Barry C. Smith et al.
Links:
JSTOR
free PDF 1, broken at 06.09.12
free PDF 2, broken at 02.01.18
free PDF 3 |
49,654 | Learning parameters of a mixture of Gaussian using MLE | EM essentially solves the maximum likelihood problem and therefore has the same properties w.r.t. sample sizes. EM for Gaussian mixture models is known to converge asymptotically to a local maximum and exhibits first order convergence (see this paper).
BTW, there are some results which quantify how good the EM solution is in terms of the parameters of the data distribution. See this paper, which shows that the goodness depends on the separation of the mixture components (measured by variances). A lot of papers have analyzed mixture models using this criterion.
If you don't want to use EM for mixture models, you can take a fully Bayesian approach. | Learning parameters of a mixture of Gaussian using MLE | EM essentially solves the maximum likelihood problem and therefore has the same properties w.r.t. sample sizes. EM for Gaussian mixture models is known to converge asymptotically to a local maximum an | Learning parameters of a mixture of Gaussian using MLE
EM essentially solves the maximum likelihood problem and therefore has the same properties w.r.t. sample sizes. EM for Gaussian mixture models is known to converge asymptotically to a local maximum and exhibits first order convergence (see this paper).
BTW, there are some results which quantify how good the EM solution is in terms of the parameters of the data distribution. See this paper, which shows that the goodness depends on the separation of the mixture components (measured by variances). A lot of papers have analyzed mixture models using this criterion.
If you don't want to use EM for mixture models, you can take a fully Bayesian approach. | Learning parameters of a mixture of Gaussian using MLE
EM essentially solves the maximum likelihood problem and therefore has the same properties w.r.t. sample sizes. EM for Gaussian mixture models is known to converge asymptotically to a local maximum an |
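For reference, a bare-bones EM iteration for a two-component univariate Gaussian mixture (illustration only; in practice you would add a convergence check and multiple starts, or use a package such as mclust):
set.seed(1)
x  <- c(rnorm(150, 0, 1), rnorm(150, 4, 1))          # simulated data from a two-component mixture
w  <- 0.5; mu <- c(-1, 5); s <- c(1, 1)              # crude starting values
for (it in 1:200) {
  d1 <- w * dnorm(x, mu[1], s[1]); d2 <- (1 - w) * dnorm(x, mu[2], s[2])
  r  <- d1 / (d1 + d2)                               # E-step: responsibilities for component 1
  w  <- mean(r)                                      # M-step: mixing weight, means and sds
  mu <- c(sum(r * x) / sum(r), sum((1 - r) * x) / sum(1 - r))
  s  <- c(sqrt(sum(r * (x - mu[1])^2) / sum(r)),
          sqrt(sum((1 - r) * (x - mu[2])^2) / sum(1 - r)))
}
c(weight = w, mu = mu, sd = s)                       # a local maximum of the mixture likelihood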
49,655 | Tangency portfolio in R | I haven't looked at your code yet, but here are two pointers:
Rmetrics has the tangencyPortfolio function in the fPortfolio package: http://help.rmetrics.org/fPortfolio/html/class-fPORTFOLIO.html
Here is a solution from David Ruppert's "Statistics and Finance" book: http://www.stat.tamu.edu/~ljin/Finance/chapter5/Fig5_9.txt | Tangency portfolio in R | I haven't looked at your code yet, but here are two pointers:
Rmetrics has the tangencyPortfolio function in the fPortfolio package: http://help.rmetrics.org/fPortfolio/html/class-fPORTFOLIO.html
Her | Tangency portfolio in R
I haven't looked at your code yet, but here are two pointers:
Rmetrics has the tangencyPortfolio function in the fPortfolio package: http://help.rmetrics.org/fPortfolio/html/class-fPORTFOLIO.html
Here is a solution from David Ruppert's "Statistics and Finance" book: http://www.stat.tamu.edu/~ljin/Finance/chapter5/Fig5_9.txt | Tangency portfolio in R
I haven't looked at your code yet, but here are two pointers:
Rmetrics has the tangencyPortfolio function in the fPortfolio package: http://help.rmetrics.org/fPortfolio/html/class-fPORTFOLIO.html
Her |
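As a cross-check on whatever package you end up using, the unconstrained tangency weights also have the textbook closed form $w \propto \Sigma^{-1}(\mu - r_f)$; a minimal R sketch with made-up returns:
set.seed(1)
R  <- matrix(rnorm(1000, 0.01, 0.05), ncol = 4)   # placeholder returns for 4 assets
rf <- 0.002                                       # assumed risk-free rate per period
mu <- colMeans(R); Sigma <- cov(R)
w  <- solve(Sigma, mu - rf)                       # proportional to Sigma^{-1} (mu - rf)
w / sum(w)                                        # rescaled so the weights sum to one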
49,656 | How to fit a negative binomial distribution in R while incorporating censoring [closed] | You can try gamlss.cens package. | How to fit a negative binomial distribution in R while incorporating censoring [closed] | You can try gamlss.cens package. | How to fit a negative binomial distribution in R while incorporating censoring [closed]
You can try gamlss.cens package. | How to fit a negative binomial distribution in R while incorporating censoring [closed]
You can try gamlss.cens package. |
49,657 | How to fit a negative binomial distribution in R while incorporating censoring [closed] | Another R package that seems to do what you want, is pscal. The associated vignette has lots of examples. | How to fit a negative binomial distribution in R while incorporating censoring [closed] | Another R package that seems to do what you want, is pscal. The associated vignette has lots of examples. | How to fit a negative binomial distribution in R while incorporating censoring [closed]
Another R package that seems to do what you want, is pscal. The associated vignette has lots of examples. | How to fit a negative binomial distribution in R while incorporating censoring [closed]
Another R package that seems to do what you want, is pscal. The associated vignette has lots of examples. |
49,658 | Why do language models like InstructGPT and LLM utilize reinforcement learning instead of supervised learning to learn based on user-ranked examples? | Supervised LLM training only gives the model positive examples, i.e. ones it should produce. It does not provide the negative ones, and a naive attempt to do so would probably fail due to the sheer volume of negatives in the space of possible outputs.
Indeed, you probably could somehow penalize the model for producing outputs like "afsjkafnkfkasfjk nasjfasfas" but that would be a poor negative sample as the model would probably not produce this gibberish in the first place. Coming up with a particular set of useful negative examples is hard and probably depends on a particular model. This is where RL comes in: it allows you to operate on the models' outputs themselves, which is exactly the thing you want to improve. | Why do language models like InstructGPT and LLM utilize reinforcement learning instead of supervised | Supervised LLM training only gives the model positive examples, i.e. ones it should produce. It does not provide the negative ones, and a naive attempt to do so would probably fail due to the sheer vo | Why do language models like InstructGPT and LLM utilize reinforcement learning instead of supervised learning to learn based on user-ranked examples?
Supervised LLM training only gives the model positive examples, i.e. ones it should produce. It does not provide the negative ones, and a naive attempt to do so would probably fail due to the sheer volume of negatives in the space of possible outputs.
Indeed, you probably could somehow penalize the model for producing outputs like "afsjkafnkfkasfjk nasjfasfas" but that would be a poor negative sample as the model would probably not produce this gibberish in the first place. Coming up with a particular set of useful negative examples is hard and probably depends on a particular model. This is where RL comes in: it allows you to operate on the models' outputs themselves, which is exactly the thing you want to improve. | Why do language models like InstructGPT and LLM utilize reinforcement learning instead of supervised
Supervised LLM training only gives the model positive examples, i.e. ones it should produce. It does not provide the negative ones, and a naive attempt to do so would probably fail due to the sheer vo |
49,659 | Why do language models like InstructGPT and LLM utilize reinforcement learning instead of supervised learning to learn based on user-ranked examples? | Your LLM will give you a categorical distribution as output, over which you can sample, and thus use RL to estimate the gradient...
What you are suggesting looks more like a GAN where, instead of a discriminator, you have an ordinal NN, over whose output you take the gradient in order to maximize it... however, this is unstable and usually the generator (the LLM) will have a very easy time maximizing the output of the discriminator, which is known as mode alignment...
You can probably fine-tune an LLM with a loss like:
$$
\nabla L(\theta) = \nabla(-D_{kl}(\pi_{\theta}(y|x)||\pi_{orig}(y|x))) + \nabla D(\pi_{\theta}(y|x)|x)
$$
where the second term is the gradient flowing from the conditional discriminator (whose maximization should give you more human-like responses), and the first one is just a penalization term that keeps you from going too far from the pre-trained model
However, in my opinion, this will just make the LLM overfit the discriminator... | Why do language models like InstructGPT and LLM utilize reinforcement learning instead of supervised | Your LLM will give you a categorical distribution as output, over which you can sample, and thus use RL to estimate the gradient...
What you are suggesting looks more like a GAN which instead of using | Why do language models like InstructGPT and LLM utilize reinforcement learning instead of supervised learning to learn based on user-ranked examples?
Your LLM will give you a categorical distribution as output, over which you can sample, and thus use RL to estimate the gradient...
What you are suggesting looks more like a GAN where, instead of a discriminator, you have an ordinal NN, over whose output you take the gradient in order to maximize it... however, this is unstable and usually the generator (the LLM) will have a very easy time maximizing the output of the discriminator, which is known as mode alignment...
You can probably fine-tune an LLM with a loss like:
$$
\nabla L(\theta) = \nabla(-D_{kl}(\pi_{\theta}(y|x)||\pi_{orig}(y|x))) + \nabla D(\pi_{\theta}(y|x)|x)
$$
where the second term is the gradient flowing from the conditional discriminator (whose maximization should give you more human-like responses), and the first one is just a penalization term that keeps you from going too far from the pre-trained model
However, in my opinion, this will just make the LLM overfit the discriminator... | Why do language models like InstructGPT and LLM utilize reinforcement learning instead of supervised
Your LLM will give you a categorical distribution as output, over which you can sample, and thus use RL to estimate the gradient...
What you are suggesting looks more like a GAN which instead of using |
49,660 | Why do language models like InstructGPT and LLM utilize reinforcement learning instead of supervised learning to learn based on user-ranked examples? | The paper LIMA: Less Is More for Alignment uploaded a few days ago to arXiv shows that fine-tuning with the standard supervised loss without any reinforcement learning works fine:
Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences. We measure the relative importance of these two stages by training LIMA, a 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples in the training data, including complex queries that range from planning trip itineraries to speculating about alternate history. Moreover, the model tends to generalize well to unseen tasks that did not appear in the training data. In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard and 65% versus DaVinci003, which was trained with human feedback. Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output. | Why do language models like InstructGPT and LLM utilize reinforcement learning instead of supervised | The paper LIMA: Less Is More for Alignment uploaded a few days ago to arXiv shows that fine-tuning with the standard supervised loss without any reinforcement learning works fine:
Large language mode | Why do language models like InstructGPT and LLM utilize reinforcement learning instead of supervised learning to learn based on user-ranked examples?
The paper LIMA: Less Is More for Alignment uploaded a few days ago to arXiv shows that fine-tuning with the standard supervised loss without any reinforcement learning works fine:
Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences. We measure the relative importance of these two stages by training LIMA, a 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples in the training data, including complex queries that range from planning trip itineraries to speculating about alternate history. Moreover, the model tends to generalize well to unseen tasks that did not appear in the training data. In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard and 65% versus DaVinci003, which was trained with human feedback. Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output. | Why do language models like InstructGPT and LLM utilize reinforcement learning instead of supervised
The paper LIMA: Less Is More for Alignment uploaded a few days ago to arXiv shows that fine-tuning with the standard supervised loss without any reinforcement learning works fine:
Large language mode |
49,661 | Does Frequentist statistics still make sense when the experiment is not repeatable? | You have hit on the major challenge to the frequentist interpretation
As a preliminary observation, the usual examples of non-repeatable events used for this question relate to predictions of one-off things like the outcome of an election. Unlike your vase example, these are situations where the event of interest is not repeatable, even in principle. For example, if a polling firm was previously attempting to determine the probability that Obama would beat Romney in the 2012 US election, what is the frequentist interpretation of the probability of this one-off event (in the non-repeatable context in which it occurs)? I will make reference to these kinds of events rather than your vase event, since they better capture the philosophical issue you are raising.
What you have hit on here is really the main philosophical challenge made against the frequentist interpretation of probability. We often wish to make probabilistic predictions in relation to one-off events that occur in a very specific context and which cannot be repeated in the same context. Much of the discipline of predictive analysis applies to one-off events and probability and statistics seems like a natural discipline to use to inform such prediction. For example, a polling company might want to determine the probability that Obama will beat Romney in the 2012 election. (Imagine asking this question in, say, late October of 2012, when we don't yet know the answer.) But what, if anything, would the "probability" of this one-off event mean in the frequentist paradigm? Even if people were to run the presidential election between Obama and Romney over and over again, after the first time it would never again occur under the same political context as the one that was of actual interest (and so it would not really be a repetition of the same election). So the question to the frequentist is: Can we use probability in such cases, and if so, how can it be accorded the frequentist interpretation?
Frequentists have generated two main answers to this challenge. One answer to this challenge is that we shouldn't use probability in such contexts and that the attempt to do so is illusory because the event under consideration is non-repeatable even in principle. This view delimits the application of probability and statistics and says that it should only be applied in a relatively narrow class of problems where we have events that are (at least in principle) repeatable. In that case it is possible to deploy the frequentist interpretation with respect to the infinite sequence of repetitions that is (in principle) possible. Another answer to this challenge is to invoke the metaphysical hypothesis of a "multiverse" and say that any event that is a one-off event in our own universe is just one manifestation of an infinite set of outcomes under the same context that occur (or in principle could occur) in parallel universes. This view claims that any event is (in principle) repeatable by virtue of the fact that other outcomes were possible and that all possible outcomes occur in "some universe". As you can see, these issues stray into philosophical territory and raise questions about the admissibility and sensibleness of speculating on unobserved (and unobservable) repetitions of an experiment or event. There may be other answers I'm unfamiliar with, but they would probably be variations of these two general strategies (i.e., either delimit the scope of application of probability, or somehow assert that all events are repeatable in principle).
While I would encourage you to read more broadly on philosophy and probability to learn more about this topic, my own view is that neither of the above responses of the frequentist viewpoint are satisfactory. My view is that the frequentist approach to probability cannot explain probability in all contexts and so should not be seen as a valid basis for probability theory. The more sensible approach is to view probability as an epistemological concept --- a decision-making tool developed to assess uncertainty, subject to some important measurement and consistency desiderata. As I've noted in many other posts (see e.g., here and here), the frequentist interpretation is valid to the extent that it essentially just asserts the LLN --- all practitioners of all philosophical schools that use probability accept the LLN and thereby accept the frequentist interpretation in contexts where it applies. | Does Frequentist statistics still make sense when the experiment is not repeatable? | You have hit on the major challenge to the frequentist interpretation
As a preliminary observation, the usual examples of non-repeatable events used for this question relate to predictions of one-off | Does Frequentist statistics still make sense when the experiment is not repeatable?
You have hit on the major challenge to the frequentist interpretation
As a preliminary observation, the usual examples of non-repeatable events used for this question relate to predictions of one-off things like the outcome of an election. Unlike your vase example, these are situations where the event of interest is not repeatable, even in principle. For example, if a polling firm was previously attempting to determine the probability that Obama would beat Romney in the 2012 US election, what is the frequentist interpretation of the probability of this one-off event (in the non-repeatable context in which it occurs)? I will make reference to these kinds of events rather than your vase event, since they better capture the philosophical issue you are raising.
What you have hit on here is really the main philosophical challenge made against the frequentist interpretation of probability. We often wish to make probabilistic predictions in relation to one-off events that occur in a very specific context and which cannot be repeated in the same context. Much of the discipline of predictive analysis applies to one-off events and probability and statistics seems like a natural discipline to use to inform such prediction. For example, a polling company might want to determine the probability that Obama will beat Romney in the 2012 election. (Imagine asking this question in, say, late October of 2012, when we don't yet know the answer.) But what, if anything, would the "probability" of this one-off event mean in the frequentist paradigm? Even if people were to run the presidential election between Obama and Romney over and over again, after the first time it would never again occur under the same political context as the one that was of actual interest (and so it would not really be a repetition of the same election). So the question to the frequentist is: Can we use probability in such cases, and if so, how can it be accorded the frequentist interpretation?
Frequentists have generated two main answers to this challenge. One answer to this challenge is that we shouldn't use probability in such contexts and that the attempt to do so is illusory because the event under consideration is non-repeatable even in principle. This view delimits the application of probability and statistics and says that it should only be applied in a relatively narrow class of problems where we have events that are (at least in principle) repeatable. In that case it is possible to deploy the frequentist interpretation with respect to the infinite sequence of repetitions that is (in principle) possible. Another answer to this challenge is to invoke the metaphysical hypothesis of a "multiverse" and say that any event that is a one-off event in our own universe is just one manifestation of an infinite set of outcomes under the same context that occur (or in principle could occur) in parallel universes. This view claims that any event is (in principle) repeatable by virtue of the fact that other outcomes were possible and that all possible outcomes occur in "some universe". As you can see, these issues stray into philosophical territory and raise questions about the admissibility and sensibleness of speculating on unobserved (and unobservable) repetitions of an experiment or event. There may be other answers I'm unfamiliar with, but they would probably be variations of these two general strategies (i.e., either delimit the scope of application of probability, or somehow assert that all events are repeatable in principle).
While I would encourage you to read more broadly on philosophy and probability to learn more about this topic, my own view is that neither of the above responses of the frequentist viewpoint are satisfactory. My view is that the frequentist approach to probability cannot explain probability in all contexts and so should not be seen as a valid basis for probability theory. The more sensible approach is to view probability as an epistemological concept --- a decision-making tool developed to assess uncertainty, subject to some important measurement and consistency desiderata. As I've noted in many other posts (see e.g., here and here), the frequentist interpretation is valid to the extent that it essentially just asserts the LLN --- all practitioners of all philosophical schools that use probability accept the LLN and thereby accept the frequentist interpretation in contexts where it applies. | Does Frequentist statistics still make sense when the experiment is not repeatable?
You have hit on the major challenge to the frequentist interpretation
As a preliminary observation, the usual examples of non-repeatable events used for this question relate to predictions of one-off |
49,662 | Does Frequentist statistics still make sense when the experiment is not repeatable? | Simple answer, yes the frequentist approach is still viable. We start with a (admittedly idealized) model. Take your vase example. Pre-sample, under the model we know that the confidence procedure with random sampling produces CIs covering the parameter 95% of the time. No actual repetitions are required. Also, if you look at a large number of independent 95% CIs from different problems then if their underlying models were valid (a big ask) approximately 95% would contain their respective parameter.
Regarding your real life example using past one-off data, the question is whether it is reasonable to treat the data as if it was a realization from an underlying model with random variables. For this to give valid frequentist inferences, you would also need to make sure that the choice of model by you was made before you saw the actual data. | Does Frequentist statistics still make sense when the experiment is not repeatable? | Simple answer, yes the frequentist approach is still viable. We start with a (admittedly idealized) model. Take your vase example. Pre-sample, under the model we know that the confidence procedure wit | Does Frequentist statistics still make sense when the experiment is not repeatable?
Simple answer: yes, the frequentist approach is still viable. We start with an (admittedly idealized) model. Take your vase example. Pre-sample, under the model we know that the confidence procedure with random sampling produces CIs covering the parameter 95% of the time. No actual repetitions are required. Also, if you look at a large number of independent 95% CIs from different problems then, if their underlying models were valid (a big ask), approximately 95% would contain their respective parameter.
Regarding your real life example using past one-off data, the question is whether it is reasonable to treat the data as if it was a realization from an underlying model with random variables. For this to give valid frequentist inferences, you would also need to make sure that the choice of model by you was made before you saw the actual data. | Does Frequentist statistics still make sense when the experiment is not repeatable?
Simple answer, yes the frequentist approach is still viable. We start with a (admittedly idealized) model. Take your vase example. Pre-sample, under the model we know that the confidence procedure wit |
49,663 | Book recommendations for beginners about probability distributions | I am not familiar with any book that would meet your requirements, but there's a Harvard course Statistics 110 by Joe Blitzstein that focuses exactly on the intuitions behind the distributions and probability theory concepts. (It is freely available online.) | Book recommendations for beginners about probability distributions | I am not familiar with any book that would meet your requirements, but there's a Harvard course Statistics 110 by Joe Blitzstein that focuses exactly on the intuitions behind the distributions and pro | Book recommendations for beginners about probability distributions
I am not familiar with any book that would meet your requirements, but there's a Harvard course Statistics 110 by Joe Blitzstein that focuses exactly on the intuitions behind the distributions and probability theory concepts. (It is freely available online.) | Book recommendations for beginners about probability distributions
I am not familiar with any book that would meet your requirements, but there's a Harvard course Statistics 110 by Joe Blitzstein that focuses exactly on the intuitions behind the distributions and pro |
49,664 | Book recommendations for beginners about probability distributions | Blitzstein & Hwang's "Introduction to Probability" is in the same vein as the online course referenced by Tim but a lot more breadth. Highly recommended. | Book recommendations for beginners about probability distributions | Blitzstein & Hwang's "Introduction to Probability" is in the same vein as the online course referenced by Tim but a lot more breadth. Highly recommended. | Book recommendations for beginners about probability distributions
Blitzstein & Hwang's "Introduction to Probability" is in the same vein as the online course referenced by Tim but a lot more breadth. Highly recommended. | Book recommendations for beginners about probability distributions
Blitzstein & Hwang's "Introduction to Probability" is in the same vein as the online course referenced by Tim but a lot more breadth. Highly recommended. |
49,665 | justification for 'population prediction intervals'? | In the econometrics literature this is referred to as the method of Krinsky and Robb (1986, 1990, & 1991).
I think the argument goes as follows:
Assume we have consistently estimated the expectation and the covariance matrix of an asymptotically normal estimator $\hat{\Theta}$ of $\Theta$. Then, by the law of large numbers, we can estimate the expectation, variance, and quantiles of the distribution of $f(\hat{\Theta})$ by drawing a random sample $\left\{\tilde{\Theta}^{(1)},\ldots,\tilde{\Theta}^{(M)}\right\}$ of size $M$ (where $M$ is large) from the asymptotic normal distribution of $\hat{\Theta}$ and using the sample mean/variance/quantiles of $\left\{f\left(\tilde{\Theta}^{(1)}\right),\ldots,f\left(\tilde{\Theta}^{(M)}\right)\right\}$ as consistent estimates for their population counterparts.
From the simulation studies I've read about this method I remember that it was, overall, not superior to the delta method. | justification for 'population prediction intervals'? | In the econometrics literature this is referred to as the method of Krinsky and Robb (1986, 1990, & 1991).
I think the argument goes as follows:
Assume we have consistently estimated the expectation a | justification for 'population prediction intervals'?
In the econometrics literature this is referred to as the method of Krinsky and Robb (1986, 1990, & 1991).
I think the argument goes as follows:
Assume we have consistently estimated the expectation and the covariance matrix of an asymptotically normal estimator $\hat{\Theta}$ of $\Theta$. Then, by the law of large numbers, we can estimate the expectation, variance, and quantiles of the distribution of $f(\hat{\Theta})$ by drawing a random sample $\left\{\tilde{\Theta}^{(1)},\ldots,\tilde{\Theta}^{(M)}\right\}$ of size $M$ (where $M$ is large) from the asymptotic normal distribution of $\hat{\Theta}$ and using the sample mean/variance/quantiles of $\left\{f\left(\tilde{\Theta}^{(1)}\right),\ldots,f\left(\tilde{\Theta}^{(M)}\right)\right\}$ as consistent estimates for their population counterparts.
From the simulation studies I've read about this method I remember that it was, overall, not superior to the delta method. | justification for 'population prediction intervals'?
In the econometrics literature this is referred to as the method of Krinsky and Robb (1986, 1990, & 1991).
I think the argument goes as follows:
Assume we have consistently estimated the expectation a |
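A minimal R sketch of that simulation argument (MASS::mvrnorm for the draws; theta_hat, V_hat and f below are placeholders for the estimate, its estimated covariance matrix and the function of interest):
library(MASS)
theta_hat <- c(1, 0.5); V_hat <- diag(c(0.04, 0.01))      # stand-ins for the real estimates
f <- function(th) exp(th[1]) / (1 + exp(th[1] + th[2]))   # some nonlinear function of the parameters
draws <- mvrnorm(10000, mu = theta_hat, Sigma = V_hat)    # sample from the estimated asymptotic normal
fvals <- apply(draws, 1, f)                               # push each draw through f
c(mean = mean(fvals), quantile(fvals, c(0.025, 0.975)))   # simulated mean and 95% interval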
49,666 | Biased coin game | Well, I've done a simulation where I used a simple criterion of saying the coin is biased if the current proportion is >0.55 else the coin is fair (middle between 0.5 and 0.6). Code in R:
res=replicate(1e3,{
  # draw the coin: fair (p = 0.5) or biased (p = 0.6), each with probability 1/2
  k=sample(c(0.5,0.6),1)
  # simulate 2000 tosses of that coin and track the running proportion of heads
  s=sample(0:1,2e3,prob=c(1-k,k),replace=T)
  tmp=cumsum(s)/seq_along(s)
  # declare "biased" whenever the running proportion exceeds 0.55; pay +10k if the
  # declaration is correct, -20k if not, minus the number of flips made so far
  ifelse(ifelse(k==0.6,1,0)==ifelse(tmp>0.55,1,0),1e4,-2e4)-seq_along(tmp)
})
and this is what the average reward plot looks like
note: i only tried up to 2000 draws as it has already converged somewhat by this time. | Biased coin game | Well, I've done a simulation where I used a simple criterion of saying the coin is biased if the current proportion is >0.55 else the coin is fair (middle between 0.5 and 0.6). Code in R:
res=replicat | Biased coin game
Well, I've done a simulation where I used a simple criterion of saying the coin is biased if the current proportion is >0.55 else the coin is fair (middle between 0.5 and 0.6). Code in R:
res=replicate(1e3,{
k=sample(c(0.5,0.6),1)
s=sample(0:1,2e3,prob=c(1-k,k),replace=T)
tmp=cumsum(s)/seq_along(s)
ifelse(ifelse(k==0.6,1,0)==ifelse(tmp>0.55,1,0),1e4,-2e4)-seq_along(tmp)
})
and this is what the average reward plot looks like
note: i only tried up to 2000 draws as it has already converged somewhat by this time. | Biased coin game
Well, I've done a simulation where I used a simple criterion of saying the coin is biased if the current proportion is >0.55 else the coin is fair (middle between 0.5 and 0.6). Code in R:
res=replicat |
49,667 | Biased coin game | I'd keep track of the Bayes factor for the two hypotheses over time, and make a decision once the evidence passes a threshold.
Let $F$ be the event that the coin is fair and $B$ be the event that the coin is biased. Let $n$ be the number of coin tosses observed, and let $h, t$ be the observed number of heads and tails. The Bayes factor here is
$$K(h, t) = \frac{P(F \mid h, t)}{P(B \mid h, t)} = \frac{\binom{n}{h}0.5^h0.5^t}{\binom{n}{h}0.6^h0.4^t} \frac{P(F)}{P(B)}.$$
You told us that there's a 50% chance we get a fair coin, so $P(F) = P(B) = 0.5$. The Bayes factor is a measure of how many times more likely one model is than another. My proposal is that you stop once $K(h, t)$ is greater than some threshold $T$, or less than $1/T$.
Cancelling out common terms and taking logarithms to make the Bayes factor nice to work with, we have
$$\log K(h, t) = n \log 0.5 - h \log 0.6 - t \log 0.4.$$
You stop once $\log K(h, t) > \log T$, or $\log K < -\log T$.
We need an estimate of how many observations we will see before we make a decision for a given threshold $T$. We can approximate this by taking the expectation of $\log K$. If $F$, then
$$\log K(h, t) = n \log 0.5 - (n/2) \log 0.6 - (n/2) \log 0.4 = n\log(0.5\cdot0.6^{-0.5}\cdot0.4^{-0.5}) \approx 0.02n.$$
For $\log K$ to exceed $\log T$, we will therefore need $0.02n \approx \log T$, or $n \approx 50 \log T$. I quickly checked that this is an OK approximation in R (code below). If we observe $\log K$ until it exceeds $1$ then we should wait approximately 50 steps. And running the code below shows that it is approximately 50:
set.seed(1)
samples = numeric(1000)
for (i in 1:1000) {
samples[i] = min(which((1:1000)*log(0.5/0.4) - cumsum(rbinom(1000, 1, 0.5))*log(0.6/0.4) > 1))
}
mean(samples)
53.961
If $B$, then
$$\log K(h, t) = n \log 0.5 - 0.6n \log 0.6 - 0.4n \log 0.4 = n\log(0.5\cdot0.6^{-0.6}\cdot0.4^{-0.4}) \approx -0.02n.$$
For $\log K$ to be less than $-\log T$ we need $n \approx 50 \log T$ again.
If we stop when $K > T$, then
$$\frac{P(F \mid h, t)}{1 - P(F \mid h, t)} = T,$$
so
$$P(F \mid h, t) = \frac{T}{T + 1}.$$
Similarly, if we stop when $K < 1/T$ then
$$P(B \mid h, t) = \frac{T}{T + 1}.$$
Hence the probability that we make the right decision is $T/(T + 1)$.
The expected benefit to the game is the expected value of the \$10k vs -\$20k payoff given that the probability we're choosing correctly minus the expected number of turns until we get to threshold $T$ or $1/T$:
$$E_T = \frac{T}{T + 1}10{,}000 - \frac{1}{T + 1}20{,}000 - 50\log T.$$
Differentiation w.r.t. $T$ gives
$$\frac{1}{(T + 1)^2} 10{,}000 + \frac{1}{(T + 1)^2} 20{,}000 - \frac{1}{T} 50.$$
Setting this to $0$ and solving gives:
$$\hat{T} = 299 + 10\sqrt{894} \approx 598, \log \hat{T} \approx 6.4.$$
That's quite a high threshold! Here's a plot of the expected value depending on $T$ from Wolfram: | Biased coin game | I'd keep track of the Bayes factor for the two hypotheses over time, and make a decision once the evidence passes a threshold.
Let $F$ be the event that the coin is fair and $B$ be the event that the | Biased coin game
I'd keep track of the Bayes factor for the two hypotheses over time, and make a decision once the evidence passes a threshold.
Let $F$ be the event that the coin is fair and $B$ be the event that the coin is biased. Let $n$ be the number of coin tosses observed, and let $h, t$ be the observed number of heads and tails. The Bayes factor here is
$$K(h, t) = \frac{P(F \mid h, t)}{P(B \mid h, t)} = \frac{\binom{n}{h}0.5^h0.5^t}{\binom{n}{h}0.6^h0.4^t} \frac{P(F)}{P(B)}.$$
You told us that there's a 50% chance we get a fair coin, so $P(F) = P(B) = 0.5$. The Bayes factor is a measure of how many times more likely one model is than another. My proposal is that you stop once $K(h, t)$ is greater than some threshold $T$, or less than $1/T$.
Cancelling out common terms and taking logarithms to make the Bayes factor nice to work with, we have
$$\log K(h, t) = n \log 0.5 - h \log 0.6 - t \log 0.4.$$
You stop once $\log K(h, t) > \log T$, or $\log K < -\log T$.
We need an estimate of how many observations we will see before we make a decision for a given threshold $T$. We can approximate this by taking the expectation of $\log K$. If $F$, then
$$\log K(h, t) = n \log 0.5 - (n/2) \log 0.6 - (n/2) \log 0.4 = n\log(0.5\cdot0.6^{-0.5}\cdot0.4^{-0.5}) \approx 0.02n.$$
For $\log K$ to exceed $\log T$, we will therefore need $0.02n \approx \log T$, or $n \approx 50 \log T$. I quickly checked that this is an OK approximation in R (code below). If we observe $\log K$ until it exceeds $1$ then we should wait approximately 50 steps. And running the code below shows that it is approximately 50:
set.seed(1)
samples = numeric(1000)
for (i in 1:1000) {
samples[i] = min(which((1:1000)*log(0.5/0.4) - cumsum(rbinom(1000, 1, 0.5))*log(0.6/0.4) > 1))
}
mean(samples)
53.961
If $B$, then
$$\log K(h, t) = n \log 0.5 - 0.6n \log 0.6 - 0.4n \log 0.4 = n\log(0.5\cdot0.6^{-0.6}\cdot0.4^{-0.4}) \approx -0.02n.$$
For $\log K$ to be less than $-\log T$ we need $n \approx 50 \log T$ again.
If we stop when $K > T$, then
$$\frac{P(F \mid h, t)}{1 - P(F \mid h, t)} = T,$$
so
$$P(F \mid h, t) = \frac{T}{T + 1}.$$
Similarly, if we stop when $K < 1/T$ then
$$P(B \mid h, t) = \frac{T}{T + 1}.$$
Hence the probability that we make the right decision is $T/(T + 1)$.
The expected benefit to the game is the expected value of the \$10k vs -\$20k payoff given that the probability we're choosing correctly minus the expected number of turns until we get to threshold $T$ or $1/T$:
$$E_T = \frac{T}{T + 1}10{,}000 - \frac{1}{T + 1}20{,}000 - 50\log T.$$
Differentiation w.r.t. $T$ gives
$$\frac{1}{(T + 1)^2} 10{,}000 + \frac{1}{(T + 1)^2} 20{,}000 - \frac{1}{T} 50.$$
Setting this to $0$ and solving gives:
$$\hat{T} = 299 + 10\sqrt{894} \approx 598, \log \hat{T} \approx 6.4.$$
That's quite a high threshold! Here's a plot of the expected value depending on $T$ from Wolfram: | Biased coin game
I'd keep track of the Bayes factor for the two hypotheses over time, and make a decision once the evidence passes a threshold.
Let $F$ be the event that the coin is fair and $B$ be the event that the |
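A quick numerical check in R of the threshold that maximizes the approximate expected value $E_T$ derived in the answer above:
ET <- function(T) T / (T + 1) * 10000 - 1 / (T + 1) * 20000 - 50 * log(T)
opt <- optimize(ET, interval = c(1, 1e5), maximum = TRUE)
opt$maximum     # about 598, i.e. log(T) of roughly 6.4
ET(opt$maximum) # expected net benefit at that threshold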
49,668 | Strange result with GLMM (binomial) | You construct a Wald confidence interval for the log odds of passing the test: $\hat{\theta} \pm z_{1-\alpha/2}\operatorname{SE}(\hat{\theta})$. This is based on the theory that the maximum likelihood estimator (MLE) is asymptotically Normal. However, since the probability of passing the test is very close to 1 (its upper bound), the distribution of the log odds estimator $\hat{\theta}$ is somewhat asymmetric, so not close to Normal. (Of course the approximation gets better as the sample size increases.)
broom.mixed::tidy(model, "fixed", conf.int = TRUE, conf.method = "Wald") %>%
mutate(
across(c(estimate, conf.low, conf.high), plogis)
)
#> # A tibble: 1 × 5
#> effect term estimate conf.low conf.high
#> <chr> <chr> <dbl> <dbl> <dbl>
#> 1 fixed (Intercept) 1.00 0.683 1.00
Instead, construct a profile likelihood confidence interval, which doesn't assume that the log-likelihood function is quadratic at the MLE (that is, that the estimator is approximately Normal) or even that it is symmetric. So the profile confidence interval has better statistical properties in this case, and it is narrow, as you expect.
Constructing confidence intervals based on profile likelihood
broom.mixed::tidy(model, "fixed", conf.int = TRUE, conf.method = "profile") %>%
mutate(
across(c(estimate, conf.low, conf.high), plogis)
)
#> Computing profile confidence intervals ...
#> # A tibble: 1 × 5
#> effect term estimate conf.low conf.high
#> <chr> <chr> <dbl> <dbl> <dbl>
#> 1 fixed (Intercept) 1.00 0.985 1 | Strange result with GLMM (binomial) | You construct a Wald confidence interval for the log odds of passing the test: $\hat{\theta} \pm z_{1-\alpha/2}\operatorname{SE}(\hat{\theta})$. This is based on the theory that the maximum likelihood | Strange result with GLMM (binomial)
You construct a Wald confidence interval for the log odds of passing the test: $\hat{\theta} \pm z_{1-\alpha/2}\operatorname{SE}(\hat{\theta})$. This is based on the theory that the maximum likelihood estimator (MLE) is asymptotically Normal. However, since the probability of passing the test is very close to 1 (its upper bound), the distribution of the log odds estimator $\hat{\theta}$ is somewhat asymmetric, so not close to Normal. (Of course the approximation gets better as the sample size increases.)
broom.mixed::tidy(model, "fixed", conf.int = TRUE, conf.method = "Wald") %>%
mutate(
across(c(estimate, conf.low, conf.high), plogis)
)
#> # A tibble: 1 × 5
#> effect term estimate conf.low conf.high
#> <chr> <chr> <dbl> <dbl> <dbl>
#> 1 fixed (Intercept) 1.00 0.683 1.00
Instead construct a profile likelihood confidence interval which doesn't assume that the log-likelihood function is Normal at the MLE or even that it is symmetric. So the profile confidence interval has better statistical properties in this case and it is narrow as you expect.
Constructing confidence intervals based on profile likelihood
broom.mixed::tidy(model, "fixed", conf.int = TRUE, conf.method = "profile") %>%
mutate(
across(c(estimate, conf.low, conf.high), plogis)
)
#> Computing profile confidence intervals ...
#> # A tibble: 1 × 5
#> effect term estimate conf.low conf.high
#> <chr> <chr> <dbl> <dbl> <dbl>
#> 1 fixed (Intercept) 1.00 0.985 1 | Strange result with GLMM (binomial)
You construct a Wald confidence interval for the log odds of passing the test: $\hat{\theta} \pm z_{1-\alpha/2}\operatorname{SE}(\hat{\theta})$. This is based on the theory that the maximum likelihood |
49,669 | Standardizing neural network inputs with a linear layer? | One way to do standardization is to subtract some value (e.g. the sample mean $\hat \mu$) and divide by another value (e.g. the sample standard deviation $\hat \sigma$):
$$
z = \frac{x - \hat \mu}{\hat \sigma}.
$$
When $X$ is a matrix, we can compute the columns' means and standard deviations; each is a vector. Then we can center and scale each vector $x$ with these vectors:
$$\begin{align}
z &= \operatorname{diag}(\hat \sigma)^{-1}(x - \hat \mu) \\
&= \operatorname{diag}(\hat \sigma)^{-1}x - \operatorname{diag}(\hat \sigma)^{-1} \hat\mu \\
&= Ax + b
\end{align}$$
This should be recognizable as the same operations of a linear layer: matrix-vector multiplication and vector-vector addition.
In other words, if you assign $A,b$ the exact values that you want to use and then never update those values, the linear layer will do standardization for you. This is the stated design goal: put the manipulations for standardization into the PyTorch object, instead of standardizing the data prior to handing it off to the model. | Standardizing neural network inputs with a linear layer? | One way to do standardization is to subtract some value (e.g. the sample mean $\hat \mu$) and divide by another value (e.g. the sample standard deviation $\hat \sigma$):
$$
z = \frac{x - \hat \mu}{\ha | Standardizing neural network inputs with a linear layer?
One way to do standardization is to subtract some value (e.g. the sample mean $\hat \mu$) and divide by another value (e.g. the sample standard deviation $\hat \sigma$):
$$
z = \frac{x - \hat \mu}{\hat \sigma}.
$$
When $X$ is a matrix, we can compute the columns' means and standard deviations; each is a vector. Then we can center and scale each vector $x$ with these vectors:
$$\begin{align}
z &= \left(\hat \sigma I\right)^{-1}(x - \hat \mu) \\
&= \left(\hat \sigma I\right)^{-1}x - \left(\hat \sigma I\right)^{-1} \hat\mu \\
&= Ax + b
\end{align}$$
This should be recognizable as the same operations of a linear layer: matrix-vector multiplication and vector-vector addition.
In other words, if you assign $A,b$ the exact values that you want to use and then never update those values, the linear layer will do standardization for you. This is the stated design goal: put the manipulations for standardization into the PyTorch object, instead of standardizing the data prior to handing it off to the model. | Standardizing neural network inputs with a linear layer?
One way to do standardization is to subtract some value (e.g. the sample mean $\hat \mu$) and divide by another value (e.g. the sample standard deviation $\hat \sigma$):
$$
z = \frac{x - \hat \mu}{\ha |
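A small numeric illustration in R of the identity above (toy data, no deep-learning library needed): column standardization is exactly the affine map $z = Ax + b$ with $A$ the diagonal matrix of reciprocal standard deviations and $b = -\hat\mu/\hat\sigma$.
set.seed(1)
X <- matrix(rnorm(20, mean = 5, sd = 3), nrow = 5)       # toy data: 5 observations, 4 features
mu <- colMeans(X)
s  <- apply(X, 2, sd)
A  <- diag(1 / s)
b  <- -mu / s
Z_scale  <- scale(X)                                     # the usual centring and scaling
Z_affine <- t(apply(X, 1, function(x) drop(A %*% x) + b))
max(abs(Z_scale - Z_affine))                             # ~ 0, up to floating-point error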
49,670 | double-tail Bayesian "p value" à la MCMCglmm | You might find Reconnecting p-Value and Posterior Probability Under One- and Two-Sided Tests by Shi and Yin (2021) useful.
They show interesting connections between the two-sided posterior probability $\mathrm{{PoP}_2}$ and the (frequentist) p-value if flat or "non-informative" priors are used. In particular, $\mathrm{{PoP}_2}$ for two independent samples with normally or Bernoulli distributed outcomes is defined as
$$
\mathrm{{PoP}_2}=2[1-\max{\{\mathbb{P}(\beta<0|\mathrm{data}),\mathbb{P}(\beta>0|\mathrm{data})\}}],
$$
where $\beta$ is the difference of the two population means. | double-tail Bayesian "p value" à la MCMCglmm | You might find Reconnecting p-Value and Posterior Probability Under One- and Two-Sided Tests by Shi and Yin (2021) useful.
They show interesting connections between the two-sided posterior probability | double-tail Bayesian "p value" à la MCMCglmm
You might find Reconnecting p-Value and Posterior Probability Under One- and Two-Sided Tests by Shi and Yin (2021) useful.
They show interesting connections between the two-sided posterior probability $\mathrm{{PoP}_2}$ and the (frequentist) p-value if flat or "non-informative" priors are used. In particular, $\mathrm{{PoP}_2}$ for two independent samples with normally or Bernoulli distributed outcomes is defined as
$$
\mathrm{{PoP}_2}=2[1-\max{\{\mathbb{P}(\beta<0|\mathrm{data}),\mathbb{P}(\beta>0|\mathrm{data})\}}],
$$
where $\beta$ is the difference of the two population means. | double-tail Bayesian "p value" à la MCMCglmm
You might find Reconnecting p-Value and Posterior Probability Under One- and Two-Sided Tests by Shi and Yin (2021) useful.
They show interesting connections between the two-sided posterior probability |
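As a small R sketch, with simulated draws standing in for an actual MCMC sample of $\beta$, the quantity is simply:
set.seed(1)
beta_draws <- rnorm(4000, mean = 0.3, sd = 0.2)   # stand-in for posterior draws of beta
PoP2 <- 2 * (1 - max(mean(beta_draws < 0), mean(beta_draws > 0)))
PoP2   # comparable in spirit to the two-tailed pMCMC reported by MCMCglmm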
49,671 | Show that no two sets in the probability space with $\mathbb{P}(\{k\})=2^{-k!}$ are independent | The question outlines a rigorous proof -- but where does the idea come from?
It all becomes clear when you write the probabilities in binary: from the binary representation of one of these probabilities $\mathbb{P}(A)$ you can read off the elements of $A,$ at least when $1\notin A.$ Just look at positions $2, 6, 24, 120, 720, \ldots, k!, \ldots$ in that number: the elements of $A$ are those $k$ for which a $1$ appears in position $k!.$
Consider how you would multiply two such probabilities in evaluating $\mathbb{P}(A)$ and $\mathbb{P}(B).$ Provided neither of $A$ and $B$ contains $1,$ you must compute the product of the sums
$$\mathbb{P}(A)\mathbb{P}(B) = \sum_{i\in A}2^{-i!}\sum_{j\in B}2^{-j!} = \sum_{(i,j)\in A\times B}2^{-(i!+j!)} = \sum_{k\in\mathbb N} \left(\sum_{(i,j)\in A\times B\mid i!+j!=k} 1\right)2^{-k} .$$
In the right hand sum, ordered pairs $(i,i)$ can appear at most once, but ordered pairs $(i,j)$ for $i\ne j$ can appear either once or twice (a fact proven below). When they appear twice (that is, both $i$ and $j$ are in both $A$ and $B$), note that they still sum to a power of $2:$
$$2^{-(i!+j!)} + 2^{-(i!+j!)} = 2^{-(i!+j!-1)}.$$
This forces on us a consideration of what the sums of two factorials, $i!+j!,$ might be. Certainly both $i$ and $j$ are $2$ or greater. Without any loss of generality, let $i\le j.$ These constraints $2\le i\le j$ easily imply the following inequalities:
$$j! \lt \color{red}{i! + j! - 1} \lt \color{red}{i! + j!} \le 2j! \lt (j+1)j! = (j+1)!.$$
(The proof that a power $k = i!+j!$ cannot appear more than twice comes down to showing that $i!+j!$ determines the set $\{i,j\},$ which corresponds only to the ordered pairs $i\in A,j\in B$ and $j\in A, i\in B.$ The foregoing inequalities show that $j$ (the larger of the two numbers) can be recovered by finding the largest factorial less than $k=i!+j!;$ and now that $j$ is found, $i$ is found by computing $k-j! = i!,$ which determines $i$ uniquely.)
In all cases, the powers of $2$ appearing in the right hand sum, when expressed in binary, are never themselves factorials (because they are all squeezed between two successive factorials).
Consequently, unless at least one of $A$ and $B$ is empty, it is impossible for this product to be the probability of any set, much less $A\cap B,$ because--just like the probabilities of $A$ and $B$--the binary representation of $\mathbb{P}(A\cap B)$ has ones only in the places $2,6,24,\ldots$ after the binary point. This proves (3).
The proof of (1) seems elusive until you realize that, very generally, independence of events $A$ and $B$ is equivalent to independence of their complements, too; so if $1\in A,$ replace $A$ by its complement and if $1\in B,$ replace $B$ by its complement, and only then proceed as before to check for independence.
Finally, there's no longer any need to prove (2), but we ought to establish that $\mathbb{P}$ is a valid probability measure. This comes down to showing that $\mathbb{P}(\{1\})$ is non-negative--but that's obvious, since in binary this probability has ones everywhere except at the digits in locations $2,6,24,\ldots$ after the binary point,
0.101110111111111111111110111111111111111111111111111111111111...B
We may evaluate it in double precision with a simple calculation, such as this R expression
sum(2^(-setdiff(1:60, factorial(2:5))))
0.7343749403953552 | Show that no two sets in the probability space with $\mathbb{P}(\{k\})=2^{-k!}$ are independent | The question outlines a rigorous proof -- but where does the idea come from?
It all becomes clear when you write the probabilities in binary: from the binary representation of one of these probabiliti | Show that no two sets in the probability space with $\mathbb{P}(\{k\})=2^{-k!}$ are independent
The question outlines a rigorous proof -- but where does the idea come from?
It all becomes clear when you write the probabilities in binary: from the binary representation of one of these probabilities $\mathbb{P}(A)$ you can read off the elements of $A,$ at least when $1\notin A.$ Just look at positions $2, 6, 24, 120, 720, \ldots, k!, \ldots$ in that number: the elements of $A$ are those $k$ for which a $1$ appears in position $k!.$
Consider how you would multiply two such probabilities in evaluating $\mathbb{P}(A)$ and $\mathbb{P}(B).$ Provided neither of $A$ and $B$ contains $1,$ you must compute the product of the sums
$$\mathbb{P}(A)\mathbb{P}(B) = \sum_{i\in A}2^{-i!}\sum_{j\in B}2^{-j!} = \sum_{(i,j)\in A\times B}2^{-(i!+j!)} = \sum_{k\in\mathbb N} \left(\sum_{(i,j)\in A\times B\mid i!+j!=k} 1\right)2^{-k} .$$
In the right hand sum, ordered pairs $(i,i)$ can appear at most once, but ordered pairs $(i,j)$ for $i\ne j$ can appear either once or twice (a fact proven below). When they appear twice (that is, both $i$ and $j$ are in both $A$ and $B$), note that they still sum to a power of $2:$
$$2^{-(i!+j!)} + 2^{-(i!+j!)} = 2^{-(i!+j!-1)}.$$
This forces on us a consideration of what the sums of two factorials, $i!+j!,$ might be. Certainly both $i$ and $j$ are $2$ or greater. Without any loss of generality, let $i\le j.$ These constraints $2\le i\le j$ easily imply the following inequalities:
$$j! \lt \color{red}{i! + j! - 1} \lt \color{red}{i! + j!} \le 2j! \lt (j+1)j! = (j+1)!.$$
(The proof that a power $k = i!+j!$ cannot appear more than twice comes down to showing that $i!+j!$ determines the set $\{i,j\},$ which corresponds only to the ordered pairs $i\in A,j\in B$ and $j\in A, i\in B.$ The foregoing inequalities show that $j$ (the larger of the two numbers) can be recovered by finding the largest factorial less than $k=i!+j!;$ and now that $j$ is found, $i$ is found by computing $k-j! = i!,$ which determines $i$ uniquely.)
In all cases, the powers of $2$ appearing in the right hand sum, when expressed in binary, are never themselves factorials (because they are all squeezed between two successive factorials).
Consequently, unless at least one of $A$ and $B$ is empty, it is impossible for this product to be the probability of any set, much less $A\cap B,$ because--just like the probabilities of $A$ and $B$--the binary representation of $\mathbb{P}(A\cap B)$ has ones only in the places $2,6,24,\ldots$ after the binary point. This proves (3).
The proof of (1) seems elusive until you realize that, very generally, independence of events $A$ and $B$ is equivalent to independence of their complements, too; so if $1\in A,$ replace $A$ by its complement and if $1\in B,$ replace $B$ by its complement, and only then proceed as before to check for independence.
Finally, there's no longer any need to prove (2), but we ought to establish that $\mathbb{P}$ is a valid probability measure. This comes down to showing that $\mathbb{P}(\{1\})$ is non-negative--but that's obvious, since in binary this probability has ones everywhere except at the digits in locations $2,6,24,\ldots$ after the binary point,
0.101110111111111111111110111111111111111111111111111111111111...B
We may evaluate it in double precision with a simple calculation, such as this R expression
sum(2^(-setdiff(1:60, factorial(2:5))))
0.7343749403953552 | Show that no two sets in the probability space with $\mathbb{P}(\{k\})=2^{-k!}$ are independent
The question outlines a rigorous proof -- but where does the idea come from?
It all becomes clear when you write the probabilities in binary: from the binary representation of one of these probabiliti |
49,672 | References on data partitioning (cross-validation, train/val/test set construction) when data are non-IID | It really all boils down to two rules of thumb:
When splitting your data, leave out what you want to predict. If you want to generalize to new hospitals, rather than new patients at the same hospital, leave out one hospital at a time when doing CV — do not leave out one patient at a time, as this only tests your ability to generalize to patients at the same hospital.
When doing cross-validation, split your test data into folds that can be considered approximately independent. For example, with time series data, you want to leave out a single run/“chunk” of observations at a time. If you have a time series running from 1900 to 2000 and want to use 10 folds, the first fold should be the first 10 years, the second the next 10, and so on. The idea here is that even if a time series isn’t independent, we can think of 10 years as enough for most of the correlation to disappear, especially if we’re comparing models that are already OK at dealing with the time series structure of our data. If we assign each year to a random fold, a model can easily “Cheat” by assuming that 2020 will look just the same as 2021 and 2019, but it’s hard to predict 2020 from 2010. A correlogram can help you identify how long a lag is enough that you can consider each block “Basically independent” of the others.
Some relevant papers:
https://onlinelibrary.wiley.com/doi/abs/10.1111/ecog.02881
https://www.sciencedirect.com/science/article/pii/S0020025511006773
https://www.tandfonline.com/doi/full/10.1080/00949655.2020.1783262
You can also check out the Sperrorest R package for this. | References on data partitioning (cross-validation, train/val/test set construction) when data are no | It really all boils down to two rules of thumb:
When splitting your data, leave out what you want to predict. If you want to generalize to new hospitals, rather than new patients at the same hospital | References on data partitioning (cross-validation, train/val/test set construction) when data are non-IID
It really all boils down to two rules of thumb:
When splitting your data, leave out what you want to predict. If you want to generalize to new hospitals, rather than new patients at the same hospital, leave out one hospital at a time when doing CV — do not leave out one patient at a time, as this only tests your ability to generalize to patients at the same hospital.
When doing cross-validation, split your test data into folds that can be considered approximately independent. For example, with time series data, you want to leave out a single run/“chunk” of observations at a time. If you have a time series running from 1900 to 2000 and want to use 10 folds, the first fold should be the first 10 years, the second the next 10, and so on. The idea here is that even if a time series isn’t independent, we can think of 10 years as enough for most of the correlation to disappear, especially if we’re comparing models that are already OK at dealing with the time series structure of our data. If we assign each year to a random fold, a model can easily “Cheat” by assuming that 2020 will look just the same as 2021 and 2019, but it’s hard to predict 2020 from 2010. A correlogram can help you identify how long a lag is enough that you can consider each block “Basically independent” of the others.
Some relevant papers:
https://onlinelibrary.wiley.com/doi/abs/10.1111/ecog.02881
https://www.sciencedirect.com/science/article/pii/S0020025511006773
https://www.tandfonline.com/doi/full/10.1080/00949655.2020.1783262
You can also check out the Sperrorest R package for this. | References on data partitioning (cross-validation, train/val/test set construction) when data are no
It really all boils down to two rules of thumb:
When splitting your data, leave out what you want to predict. If you want to generalize to new hospitals, rather than new patients at the same hospital |
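A minimal R sketch of the two rules above (the column names hospital_id and year are invented for illustration); each list element is the set of row indices held out in one cross-validation iteration:
leave_one_hospital_out <- function(data) {
  split(seq_len(nrow(data)), data$hospital_id)                  # rule 1: hold out whole hospitals
}
blocked_time_folds <- function(data, n_folds = 10) {
  split(seq_len(nrow(data)), cut(data$year, breaks = n_folds))  # rule 2: contiguous blocks of years
}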
49,673 | How do I summarize the “contribution” of covariates in a GLM? | People do compare $R^2$ values, but for some GLMs there are approximations to $R^2$, but they are of limited value. A little more common is to compare deviance values (these are at least well defined). But comparing $R^2$ or deviance is more about statistical significance than the real importance of a variable.
I prefer to look at predictions, set all the variables that you are not currently examining to their mean, median, most common category, or other meaningful value, then set the predictor variable of interest to some meaningful values and make predictions from these values, then compare the predictions.
Another option for looking at importance is a nomogram. This stack overflow post (https://stackoverflow.com/questions/38276973/r-how-to-read-nomograms-to-predict-the-desired-variable) shows and example of a nomogram (and the code used to create it) and the answer shows how to interpret it. But this lets us see the relative importance of the different variables at a glance. Look at the line for Blood Pressure, you can see that a big change in blood pressure will only change the total points by a very small amount (and therefore the final prediction will change very little), but a modest change in Age will have a much larger effect. If there were categorical variables in that nomogram (other than the one involved in an interaction) then there would be a line with a tick mark for each category and you could see how far apart the ticks are for relative importance. | How do I summarize the “contribution” of covariates in a GLM? | People do compare $R^2$ values, but for some GLMs there are approximations to $R^2$, but they are of limited value. A little more common is to compare deviance values (these are at least well defined | How do I summarize the “contribution” of covariates in a GLM?
People do compare $R^2$ values, but for some GLMs there are approximations to $R^2$, but they are of limited value. A little more common is to compare deviance values (these are at least well defined). But comparing $R^2$ or deviance is more about statistical significance than the real importance of a variable.
I prefer to look at predictions, set all the variables that you are not currently examining to their mean, median, most common category, or other meaningful value, then set the predictor variable of interest to some meaningful values and make predictions from these values, then compare the predictions.
Another option for looking at importance is a nomogram. This stack overflow post (https://stackoverflow.com/questions/38276973/r-how-to-read-nomograms-to-predict-the-desired-variable) shows and example of a nomogram (and the code used to create it) and the answer shows how to interpret it. But this lets us see the relative importance of the different variables at a glance. Look at the line for Blood Pressure, you can see that a big change in blood pressure will only change the total points by a very small amount (and therefore the final prediction will change very little), but a modest change in Age will have a much larger effect. If there were categorical variables in that nomogram (other than the one involved in an interaction) then there would be a line with a tick mark for each category and you could see how far apart the ticks are for relative importance. | How do I summarize the “contribution” of covariates in a GLM?
People do compare $R^2$ values, but for some GLMs there are approximations to $R^2$, but they are of limited value. A little more common is to compare deviance values (these are at least well defined |
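A short R sketch of the prediction-comparison idea with simulated data (the variables and coefficients are invented purely for illustration):
set.seed(1)
dat <- data.frame(age = rnorm(500, 50, 10), bp = rnorm(500, 120, 15))
dat$y <- rbinom(500, 1, plogis(-10 + 0.18 * dat$age + 0.005 * dat$bp))
fit <- glm(y ~ age + bp, family = binomial, data = dat)
vary_age <- data.frame(age = c(40, 60), bp = median(dat$bp))     # move age, hold bp at its median
vary_bp  <- data.frame(age = median(dat$age), bp = c(105, 135))  # move bp, hold age at its median
predict(fit, vary_age, type = "response")   # predicted risk changes a lot
predict(fit, vary_bp,  type = "response")   # predicted risk barely moves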
49,674 | Uniformly Most Powerful Test for Weibull Distribution | You've got $L=\frac{\theta_a^n}{\theta_0^n}\,exp\left({-\frac{\theta_a-\theta_0}
{\theta_0\theta_a}\sum_{i=1}^ny_i^m}\right)<k$, which is good. Now we take $log$ of both sides:
$$n log\left(\frac{\theta_a}{\theta_0}\right)+\left(\frac{\theta_0-\theta_a}{\theta_0\theta_a}\right)\sum_{i=1}^{n}{y_i^m} < log(k)$$
and so, since $\theta_a>\theta_0$ makes the coefficient on the sum negative, small values of the likelihood ratio correspond to large values of the sum; the rejection region is of the form
$$\left\{ \sum_{i=1}^{n}{y_i^m} > c \right\}.$$
Now, for part (b), there's something to note here: $y^m$ has an exponential distribution, and so $\sum_{i=1}^n{y^m_i}\sim \Gamma(n,\theta)$. Under the null, $\frac{2\sum_{i=1}^{n}{y_i^m}}{\theta_0}$ has a $\chi^2$ distribution with $2n$ degrees of freedom (look for the relation between gamma and chi-squared), so the rejection region can be rewritten as $\frac{2\sum_{i=1}^{n}{y_i^m}}{\theta_0} > \frac{2c}{\theta_0}$.
Now let's solve (b):
$$\theta_0=100,\theta_a=400,\alpha=0.05,\beta=0.05$$
When $H_0$ is true, we get $\alpha$ using:
$$\alpha=P\left(\frac{2\sum_{i=1}^{n}{y_i^m}}{100} > \chi^2_{0.05}\right)=0.05.$$
When $H_a$ is true, we get $\beta$ using:
$$\beta=P\left(\frac{2\sum_{i=1}^{n}{y_i^m}}{100} \le \chi^2_{0.05} \middle| \theta=400\right)=P\left(\frac{2\sum_{i=1}^{n}{y_i^m}}{400} \le \frac{1}{4}\chi^2_{0.05} \middle| \theta=400\right)=P\left(\chi^2\le\frac{1}{4}\chi^2_{0.05}\right)=0.05$$
So, we need to find the row in $\chi^2$ table where $\frac{1}{4}\chi^2_{0.05}=\chi^2_{0.95}$:
You can see that for $12$ degrees of freedom, $\chi^2_{0.95}=5.226$ and $\chi^2_{0.05}=21.03$, which is the closest we get for achieving $\frac{1}{4}\chi^2_{0.05}=\chi^2_{0.95}$. Recall that this has $2n$ degrees of freedom, so the appropriate sample size is $6$. | Uniformly Most Powerful Test for Weibull Distribution | You've got $L=\frac{\theta_a^n}{\theta_0^n}\,exp\left({-\frac{\theta_a-\theta_0}
{\theta_0\theta_a}\sum_{i=1}^ny_i^m}\right)<k$, which is good. Now we take $log$ of both sides:
$$n log\left(\frac{\the | Uniformly Most Powerful Test for Weibull Distribution
You've got $L=\frac{\theta_a^n}{\theta_0^n}\,exp\left({-\frac{\theta_a-\theta_0}
{\theta_0\theta_a}\sum_{i=1}^ny_i^m}\right)<k$, which is good. Now we take $log$ of both sides:
$$n log\left(\frac{\theta_a}{\theta_0}\right)+\left(\frac{\theta_0-\theta_a}{\theta_0\theta_a}\right)\sum_{i=1}^{n}{y_i^m} < log(k)$$
and so the test itself is in the form:
$$\left\{ \sum_{i=1}^{n}{y_i^m} < c \right\}$$
(rejecting if $\sum_{i=1}^{n}{y_i^m} > c$).
Now, for part (b), there's something to note here: $y^m$ has an exponential distribution, and so the $\sum{y^m_i}\sim \Gamma(n,\theta)$. Under the null we get that $\frac{2\sum_{i=1}^{n}{y_i^m}}{\theta_0} > \frac{2c}{\theta_0}$ has a $\chi^2$ distribution with $2n$ degrees of freedom (look for the relation between gamma and chi-squared).
Now let's solve (b):
$$\theta_0=100,\theta_a=400,\alpha=0.05,\beta=0.05$$
When $H_0$ is true, we get $\alpha$ using:
$$\alpha=P\left(\frac{2\sum_{i=1}^{n}{y_i^m}}{100} > \chi^2_{0.05}\right)=0.05.$$
When $H_a$ is true, we get $\beta$ using:
$$\beta=P\left(\frac{2\sum_{i=1}^{n}{y_i^m}}{100} \le \chi^2_{0.05} \middle| \theta=400\right)=P\left(\frac{2\sum_{i=1}^{n}{y_i^m}}{400} \le \frac{1}{4}\chi^2_{0.05} \middle| \theta=400\right)=P\left(\chi^2\le\frac{1}{4}\chi^2_{0.05}\right)=0.05$$
So, we need to find the row in $\chi^2$ table where $\frac{1}{4}\chi^2_{0.05}=\chi^2_{0.95}$:
You can see that for $12$ degrees of freedom, $\chi^2_{0.95}=5.226$ and $\chi^2_{0.05}=21.03$, which is the closest we get for achieving $\frac{1}{4}\chi^2_{0.05}=\chi^2_{0.95}$. Recall that this has $2n$ degrees of freedom, so the appropriate sample size is $6$. | Uniformly Most Powerful Test for Weibull Distribution
You've got $L=\frac{\theta_a^n}{\theta_0^n}\,exp\left({-\frac{\theta_a-\theta_0}
{\theta_0\theta_a}\sum_{i=1}^ny_i^m}\right)<k$, which is good. Now we take $log$ of both sides:
$$n log\left(\frac{\the |
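The table lookup can be reproduced in R as a quick check of the condition that one quarter of the upper 5% point equals the upper 95% point of $\chi^2_{2n}$:
n <- 1:10
cbind(n, df = 2 * n,
      upper5_over_4 = qchisq(0.95, 2 * n) / 4,   # (1/4) * chi-squared_{0.05} in the answer's notation
      upper95       = qchisq(0.05, 2 * n))       # chi-squared_{0.95}
# the two columns essentially meet at n = 6 (df = 12): 21.03 / 4 = 5.26 vs 5.23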
49,675 | Numbers of double-headed, double-tailed, and normal coins based on one toss per coin | This can be done using Bayes' Theorem. It requires you to have a (somewhat subjective) prior distribution for how many of each type of coin the bucket contains. For example, let $A$ be the event that the bucket contains 90 double-headed coins and 10 normal coins, and let $B$ be the event that you observe exactly 90 heads and 10 tails. You would plug in your prior probabilities for $A$ and $B$ into this formula to get an updated posterior probability for $A$:
$P(A|B) = \frac{P(B|A)P(A)}{P(B)} = \frac{2^{-10}P(A)}{P(B)}$ | Numbers of double-headed, double-tailed, and normal coins based on one toss per coin | This can be done using Bayes' Theorem. It requires you to have a (somewhat subjective) prior distribution for how many of each type of coin the bucket contains. For example, let $A$ be the event that | Numbers of double-headed, double-tailed, and normal coins based on one toss per coin
This can be done using Bayes' Theorem. It requires you to have a (somewhat subjective) prior distribution for how many of each type of coin the bucket contains. For example, let $A$ be the event that the bucket contains 90 double-headed coins and 10 normal coins, and let $B$ be the event that you observe exactly 90 heads and 10 tails. You would plug in your prior probabilities for $A$ and $B$ into this formula to get an updated posterior probability for $A$:
$P(A|B) = \frac{P(B|A)P(A)}{P(B)} = \frac{2^{-10}P(A)}{P(B)}$ | Numbers of double-headed, double-tailed, and normal coins based on one toss per coin
This can be done using Bayes' Theorem. It requires you to have a (somewhat subjective) prior distribution for how many of each type of coin the bucket contains. For example, let $A$ be the event that |
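A small R sketch of the full calculation under an assumed setup (100 coins, each tossed once, 90 heads observed, and a uniform prior over compositions; all of these are assumptions made only for illustration):
n_coins <- 100; heads_obs <- 90
grid <- expand.grid(dh = 0:n_coins, dt = 0:n_coins)      # dh double-headed, dt double-tailed
grid$nn <- n_coins - grid$dh - grid$dt                   # nn normal coins
grid <- grid[grid$nn >= 0, ]
grid$lik <- dbinom(heads_obs - grid$dh, size = grid$nn, prob = 0.5)  # P(observed heads | composition); 0 for impossible ones
grid$post <- grid$lik / sum(grid$lik)                    # uniform prior cancels in Bayes' rule
head(grid[order(-grid$post), ], 5)                       # most probable compositions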
49,676 | Heteroscedasticity and non-normal errors are only an issue when predicting from a linear model - why? | I suspect that Gelman and Hill are either overstating the case concerning whether normality is an issue, or that their comments are taken out of context. While linearity (or more precisely, correct functional specification) is usually more important, there are cases of non-normality that indeed should be of great concern. These include:
Extremely heavy-tailed distributions. In such a case, you definitely should not use ordinary least squares to estimate the trend, even if it is linear. Quantile regression is a good alternative.
Binary or highly discrete dependent variable. In this case, some maximum likelihood method would be preferred, even if the response function happened to be linear (which is very unlikely in this case).
As far as needing normality and homoscedasticity specifically for prediction (assuming OLS), imagine you are regressing an individual stock return percentage ($Y$) on the S&P 500 return ($X$). You would like to predict what would happen to your return when the market declines by 1% (i.e., when $X=-1$). In this case, you know that your $Y$ is a random variable, with some distribution.
Now, unthinking application of OLS would give you (essentially) the following prediction: $Y$ will lie within $\pm 3\times rmse$ of the regression prediction, with approximately 99.7% probability.
The problem regarding the homoscedasticity assumption of OLS is that $rmse$, being a pooled estimate across all values of $X$, may over- or under-estimate the conditional standard deviation of $Y$ when $X=-1$, depending upon the nature of the heteroscedasticity. This results in a prediction interval that may be too wide, or it may be too narrow.
The problem with the normality assumption of OLS is that, say if the distribution of returns is heavy-tailed (and it probably is), there will occasionally be returns that are much farther than three standard deviations from the conditional mean. So the usual predictions are overly optimistic in terms of bounding risk. Further, if the distribution of returns is asymmetric, then the prediction interval should be correspondingly asymmetric.
Another good example is multinomial logistic regression. Say $Y$ is choice of toothpaste brand, and $X$ is age. What is the prediction of $Y$ when $X=50$ years old? It is certainly not some average of Crest, Colgate, Mr. Toms, etc. The prediction is the distribution itself, which is certainly not a normal distribution. Rather, it is a simple discrete probability distribution on all the brands.
Regression may be best understood as a model for the conditional distribution of $Y$, given $X$. This representation resolves a lot of supposed conflicts between OLS and ML; it leads seamlessly to likelihoods and Bayes; it puts ordinary regression, heteroscedastic regression, ANOVA, Poisson regression, multinomial logistic regression, survival analysis, quantile regression, neural net regression, tree regression, etc., all under a common umbrella; and it includes the mean portion of the model that people are usually interested in as a special case. | Heteroscedasticity and non-normal errors are only an issue when predicting from a linear model - why | I suspect that Gelman and Hill are either overstating the case concerning whether normality is an issue, or that their comments are taken out of context. While linearity (or more precisely, correct fu | Heteroscedasticity and non-normal errors are only an issue when predicting from a linear model - why?
I suspect that Gelman and Hill are either overstating the case concerning whether normality is an issue, or that their comments are taken out of context. While linearity (or more precisely, correct functional specification) is usually more important, there are cases of non-normality that indeed should be of great concern. These include:
Extremely heavy-tailed distributions. In such a case, you definitely should not use ordinary least squares to estimate the trend, even if it is linear. Quantile regression is a good alternative.
Binary or highly discrete dependent variable. In this case, some maximum likelihood method would be preferred, even if the response function happened to be linear (which is very unlikely in this case).
As far as needing normality and homoscedasticity specifically for prediction (assuming OLS), imagine you are regressing an individual stock return percentage ($Y$) on the S&P 500 return ($X$). You would like to predict what would happen to your return when the market declines by 1% (i.e., when $X=-1$). In this case, you know that your $Y$ is a random variable, with some distribution.
Now, unthinking application of OLS would give you the (essentially) the following prediction: $Y$ will lie within $\pm 3\times rmse $ of the regression prediction, with approximate 99.7% probability.
The problem regarding the homoscedasticity assumption of OLS is that $rmse$, being a pooled estimate across all values of $X$, may over- or under- estimate the conditional standard deviation of $Y$ when $X=1$, depending upon the nature of the heteroscedasticity. This results in a prediction interval that may be too wide, or it may be too narrow.
The problem with the normality assumption of OLS is that, say if the distribution of returns is heavy-tailed (and it probably is), there will occasionally be returns that are much farther than three standard deviations from the conditional mean. So the usual predictions are overly optimistic in terms of bounding risk. Further, if the distribution of returns is asymmetric, then the prediction interval should be correspondingly asymmetric.
Another good example is multinomial logistic regression. Say $Y$ is choice of toothpaste brand, and $X$ is age. What is the prediction of $Y$ when $X=50$ years old? It is certainly not some average of Crest, Colgate, Mr. Toms, etc. The prediction is the distribution itself, which is certainly not a normal distribution. Rather, it is a simple discrete probability distribution on all the brands.
Regression may be best understood as a model for the conditional distribution of $Y$, given $X$. This representation resolves a lot of supposed conflicts between OLS and ML; it leads seamlessly to likelihoods and Bayes; it puts ordinary regression, heteroscedastic regression, ANOVA, Poisson regression, multinomial logistic regression, survival analysis, quantile regression, neural net regression, tree regression, etc., all under a common umbrella; and it includes the mean portion of the model that people are usually interested in as a special case. | Heteroscedasticity and non-normal errors are only an issue when predicting from a linear model - why
I suspect that Gelman and Hill are either overstating the case concerning whether normality is an issue, or that their comments are taken out of context. While linearity (or more precisely, correct fu |
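A small simulated R example of the heteroscedasticity point (the data-generating process is invented and is not the stock-return example): a single pooled rmse gives intervals that are too wide where the conditional spread is small and too narrow where it is large.
set.seed(1)
x <- runif(5000, -2, 2)
y <- 1 + 2 * x + rnorm(5000, sd = 0.2 + 0.8 * abs(x))    # spread grows with |x|
fit <- lm(y ~ x)
rmse <- summary(fit)$sigma                               # one pooled estimate for all x
new <- data.frame(x = c(0, -2))
cbind(new, fit = predict(fit, new),
      naive_half_width = 2 * rmse,                       # the same +/- 2*rmse everywhere
      true_cond_sd = 0.2 + 0.8 * abs(new$x))             # 0.2 at x = 0 versus 1.8 at x = -2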
49,677 | Estimate $E[X_1 | X_1>X_2>\cdots>X_k]$ with simulation | I have never heard of this algorithm by Geweke et al. but since it is a special case of importance sampling the fact that a single weight takes all the mass is indicative that the importance function is poorly suited for the target. Given that the integral is over the set$$\mathfrak H=\{x\in\mathbb R^k;~x_1>x_2>\cdots>x_k\}$$the importance function $G$ should be concentrated on that set, i.e., $G(\mathfrak H)=1$, in which case$$\mathbb E[X_1|X_1>X_2>\cdots>X_k] \approx
\sum_{s=1}^S x_1^s \dfrac{\text dF}{\text dG}(x^s)\Big/\sum_{s=1}^S \dfrac{\text dF}{\text dG}(x^s)$$
Here is a toy implementation in R, using the product of Normal $\prod_{i=1}^k\mathcal{N}(i,1/i^2)$ as the target and the product of Normal $\mathcal N(1,1)\times\prod_{i=2}^k\mathcal N^-(x_{i-1},1/i)$ as importance distribution (where $\mathcal N^-(x_{i-1},1/i)$ denotes the half-normal truncated to the left of its mean, $x_{i-1}$):
# target density
tgt=function(x,k=length(x)) sum(dnorm(x,mean=1:k,sd=1/(1:k),log=TRUE))
# importance density
imp=function(x,k=length(x)) dnorm(x[1],mean=1,log=TRUE)+sum(
dnorm(x[2:k],mean=x[-k],sd=1/(2:k),log=TRUE))
# importance simulation with S iid k-dim vectors
rtr=function(S,k){
X=matrix(0,S,k)
X[,1]=rnorm(S)+1
for(i in 2:k)X[,i]=X[,i-1]-abs(rnorm(S,sd=1/i))#truncated Normal
X}
# importance weight
wt=function(X){
w=1:(dim(X)[1])
for(i in 1:length(w))w[i]=tgt(X[i,])-imp(X[i,])
w}
# mean approximation
mn=function(S,k=2){
w=wt(X<-rtr(S,k))
print(c(sum(X[,1]*(p<-exp(w-max(w))))/sum(p),
round(sum(p)^2/sum(p^2))),digits=4) #ESS
}
# verification for k=2
vf=function(S){
X=rbind(rnorm(S)+1,rnorm(S)/2+2)
sum(X[1,X[1,]>X[2,]])/sum(X[1,]>X[2,])}
While the importance approximation is variable (as shown by the effective sample size check), it returns a value in the same range as the sheer Monte Carlo approximation
> vf(1e7)
[1] 2.289312
> mn(1e7,2)
[1] 2.276 48689
The attempt of running $R$ independent importance runs and taking
$$\mathbb E[X_1|X_1>X_2>\cdots>X_k] \approx \frac{1}{R}\sum_r x^{s^*_r}_1$$is incorrect because the take-it-all realisations $x^{s^*_r}$ are not realisations from the target distribution. | Estimate $E[X_1 | X_1>X_2>\cdots>X_k]$ with simulation | I have never heard of this algorithm by Geweke et al. but since it is a special case of importance sampling the fact that a single weight takes all the mass is indicative that the importance function | Estimate $E[X_1 | X_1>X_2>\cdots>X_k]$ with simulation
I have never heard of this algorithm by Geweke et al. but since it is a special case of importance sampling the fact that a single weight takes all the mass is indicative that the importance function is poorly suited for the target. Given that the integral is over the set$$\mathfrak H=\{x\in\mathbb R^k;~x_1>x_2>\cdots>x_k\}$$the importance function $G$ should be concentrated on that set, i.e., $G(\mathfrak H)=1$, in which case$$\mathbb E[X_1|X_1>X_2>\cdots>X_k] \approx
\sum_{s=1}^S x_1^s \dfrac{\text dF}{\text dG}(x^s)\Big/\sum_{s=1}^S \dfrac{\text dF}{\text dG}(x^s)$$
Here is a toy implementation in R, using the product of Normal $\prod_{i=1}^k\mathcal{N}(i,1/i^2)$ as the target and the product of Normal $\mathcal N(1,1)\times\prod_{i=2}^k\mathcal N^-(x_{i-1},1/i)$ as importance distribution (where $\mathcal N^-(x_{i-1},1/i)$ denotes the half-normal truncated to the left of its mean, $x_{i-1}$):
# target density
tgt=function(x,k=length(x)) sum(dnorm(x,mean=1:k,sd=1/(1:k),log=TRUE))
# importance density
imp=function(x,k=length(x)) dnorm(x[1],mean=1,log=TRUE)+sum(
dnorm(x[2:k],mean=x[-k],sd=1/(2:k),log=TRUE))
# importance simulation with S iid k-dim vectors
rtr=function(S,k){
X=matrix(0,S,k)
X[,1]=rnorm(S)+1
for(i in 2:k)X[,i]=X[,i-1]-abs(rnorm(S,sd=1/i))#truncated Normal
X}
# importance weight
wt=function(X){
w=1:(dim(X)[1])
for(i in 1:length(w))w[i]=tgt(X[i,])-imp(X[i,])
w}
# mean approximation
mn=function(S,k=2){
w=wt(X<-rtr(S,k))
print(c(sum(X[,1]*(p<-exp(w-max(w))))/sum(p),
round(sum(p)^2/sum(p^2))),digits=4) #ESS
}
# verification for k=2
vf=function(S){
X=rbind(rnorm(S)+1,rnorm(S)/2+2)
sum(X[1,X[1,]>X[2,]])/sum(X[1,]>X[2,])}
While the importance approximation is variable (as shown by the effective sample size check), it returns a value in the same range as the sheer Monte Carlo approximation
> vf(1e7)
[1] 2.289312
> mn(1e7,2)
[1] 2.276 48689
The attempt running R independent importance runs and taking
$$\mathbb E[X_1|X_1>X_2>\cdots>X_k] \approx \frac{1}{R}\sum_r x^{s^*_r}_1$$is incorrect because the take-it-all realisations $x^{s^*_r}$ are not realisations from the target distribution. | Estimate $E[X_1 | X_1>X_2>\cdots>X_k]$ with simulation
I have never heard of this algorithm by Geweke et al. but since it is a special case of importance sampling the fact that a single weight takes all the mass is indicative that the importance function |
49,678 | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadditivity | This is a visual answer: a careful consideration of the second figure shows what is going on. Everything else in this post is only a gloss on that figure.
In this figure of the $(x,y)$ plane, region $I$ (blue, top) consists of all $y$ values exceeding a quantile $Q_{90}(Y),$ region $II$ (red, right) consists of all $x$ values exceeding a quantile $Q_{90}(X),$ and therefore their intersection (purple, top right) shows all points $(x,y)$ where both $x\ge Q_{90}(X)$ and $y \ge Q_{90}(Y).$
This figure has been drawn to scale, so that the line $x+y=Q_{90}(X)+Q_{90}(Y)$ will make an angle of $-45$ degrees, it will pass through the central point $(Q_{90}(X), Q_{90}(Y)),$ and all points $(x,y)$ whose sum exceeds this threshold will lie to the upper right of that line:
The question wonders about the circumstances that would permit the probability of the combined three regions $B\cup C \cup D$ to exceed $100\%-90\%,$ for then the $90^\text{th}$ quantile of $X+Y$ would have to lie even further above and to the right.
This way of reframing the question makes the answer generally clear, whether or not $X$ and $Y$ are independent: we need only concentrate most of the probability from regions $I\cup II,$ as colored in the first figure, within the three regions $B\cup C\cup D$ (that green triangle in the second figure).
When $X$ and $Y$ are independent it's a little tricky to do that, because independence implies $$\Pr(C) = \Pr(I\cap II) = \Pr(I)\Pr(II) = (1-0.9)(1-0.9) = 0.01,$$ severely limiting how much probability we can assign to $C.$ Observe, though, that including $B$ and $D$ can greatly increase the probability of $B\cup C\cup D.$ If we make sure most of the probability of the events $I$ and $II$ is located way above and way to the right (respectively), then $B \cup C$ can include almost all of the probability of $I.$ That is, we can make $\Pr(B\cup C) \le 1 - 0.9=0.1$ arbitrarily close to $0.1$ and likewise we can make $\Pr(D\cup C)$ arbitrarily close to $0.1.$ Consequently we obtain the bounds, which clearly can be approached as nearly as we might desire, of
$$\Pr(B\cup C\cup D) = \Pr(B\cup C) + \Pr(D\cup C) - \Pr(C) \le 0.1 + 0.1 - 0.01 = 0.19.$$
These considerations lead immediately to a simple example: let's put atoms at appropriate places within the figure. For instance, at the places where the symbols "A" through "E" are drawn, assign concentrated probabilities symmetrically in $x$ and $y$ as follows by choosing an arbitrarily tiny positive $\epsilon:$
$$\cases{A: 0 \\ B: 1-0.9-\epsilon\approx 0.1 \\ C: (1-0.9)^2=0.01 \\ D: 1-0.9-\epsilon\approx 0.1 \\E: 0}$$
and distribute the rest of the probability to the left and below everything else, again symmetrically, so that $X$ and $Y$ have identical and independent distributions.
Notice this says almost nothing about how heavy the tail of this common distribution might be! In the example, this distribution could have an upper limit just barely greater than $Q_{90}.$ | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadd | This is a visual answer: a careful consideration of the second figure shows what is going on. Everything else in this post is only a gloss on that figure.
In this figure of the $(x,y)$ plane, region | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadditivity
This is a visual answer: a careful consideration of the second figure shows what is going on. Everything else in this post is only a gloss on that figure.
In this figure of the $(x,y)$ plane, region $I$ (blue, top) consists of all $y$ values exceeding a quantile $Q_{90}(Y),$ region $II$ (red, right) consists of all $x$ values exceeding a quantile $Q_{90}(X),$ and therefore their intersection (purple, top right) shows all points $(x,y)$ where both $x\ge Q_{90}(X)$ and $y \ge Q_{90}(Y).$
This figure has been drawn to scale, so that the line $x+y=Q_{90}(X)+Q_{90}(Y)$ will make an angle of $-45$ degrees, it will pass through the central point $(Q_{90}(X), Q_{90}(Y)),$ and all points $(x,y)$ whose sum exceeds this threshold will lie to the upper right of that line:
The question wonders about the circumstances that would permit the probability of the combined three regions $B\cup C \cup D$ to exceed $100\%-90\%,$ for then the $90^\text{th}$ quantile of $X+Y$ would have to lie even further above and to the right.
This way of reframing the question makes the answer generally clear, whether or not $X$ and $Y$ are independent: we need only concentrate most of the probability from regions $I\cup II,$ as colored in the first figure, within the three regions $B\cup C\cup D$ (that green triangle in the second figure).
When $X$ and $Y$ are independent it's a little tricky to do that, because independence implies $$\Pr(C) = \Pr(I\cap II) = \Pr(I)\Pr(II) = (1-0.9)(1-0.9) = 0.01,$$ severely limiting how much probability we can assign to $C.$ Observe, though, that including $B$ and $D$ can greatly increase the probability of $B\cup C\cup D.$ If we make sure most of the probability of the events $I$ and $II$ is located way above and way to the right (respectively), then $B \cup C$ can include almost all of the probability of $I.$ That is, we can make $\Pr(B\cup C) \le 1 - 0.9=0.1$ arbitrarily close to $0.1$ and likewise we can make $\Pr(D\cup C)$ arbitrarily close to $0.1.$ Consequently we obtain the bounds, which clearly can be approached as nearly as we might desire, of
$$\Pr(B\cup C\cup D) = \Pr(B\cup C) + \Pr(D\cup C) - \Pr(C) \le 0.1 + 0.1 - 0.01 = 0.19.$$
These considerations lead immediately to a simple example: let's put atoms at appropriate places within the figure. For instance, at the places where the symbols "A" through "E" are drawn, assign concentrated probabilities symmetrically in $x$ and $y$ as follows by choosing an arbitrarily tiny positive $\epsilon:$
$$\cases{A: 0 \\ B: 1-0.9-\epsilon\approx 0.1 \\ C: (1-0.9)^2=0.01 \\ D: 1-0.9-\epsilon\approx 0.1 \\E: 0}$$
and distribute the rest of the probability to the left and below everything else, again symmetrically, so that $X$ and $Y$ have identical and independent distributions.
Notice this says almost nothing about how heavy the tail of this common distribution might be! In the example, this distribution could have an upper limit just barely greater than $Q_{90}.$ | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadd
This is a visual answer: a careful consideration of the second figure shows what is going on. Everything else in this post is only a gloss on that figure.
In this figure of the $(x,y)$ plane, region |
49,679 | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadditivity | TLDR; Let's define $Q_X(0.9) = c$ as the reserved costs. And $2c$ will be the costs reserved for two variables $X$ and $Y$.
For the specific case in the question, the probability for $X+Y$ to exceed $2c$ is larger than 10%, namely 13.16%.
This is because the probability for $X$ and $Y$ to exceed $2c$ is already individually more than 5%, namely 6.52%.
When $X$ and $Y$ are independent then the probability for the sum $X+Y>2c$ is larger than the sum of probabilities $X>2c$ and $Y>2c$.
A sufficient condition for $Q_{X+Y}(0.9)> Q_X(0.9) + Q_Y(0.9)$ can be expressed in terms of the survival function.
Note that
There is an equivalent statement in terms of quantile functions and survival functions $$Q_{X+Y}(0.9)> 2c \quad \equiv \quad S_{X+Y}(2c) > 0.1$$ Note the blue arrows in the image below. Vertical arrow: The survival function of $X+Y$ in the point $2c$ is above $0.1$. Horizontal arrow: The 0.9 quantile of $X+Y$ is above $2c$.
The survival function of the sum is, up to a second-order term, at least the sum of the two survival functions: with non-negative costs, $X+Y$ is larger than $2c$ whenever either $X$ or $Y$ is larger than $2c$, so $$S_{X+Y}(2c) \geq S_X(2c) + S_Y(2c) - S_X(2c)S_Y(2c),$$ and the product term is negligible when these tail probabilities are small.
So a sufficient condition for $Q_{X+Y}(0.9)> Q_X(0.9) + Q_Y(0.9)$ is when
$$S_{X}(2c) > \frac{S_{X}(c)}{2}$$
where $c = Q_X(0.9) = Q_Y(0.9)$. This is because if $S_X(2c)$ is larger than half the $S_X(c)$, then $S_{X+Y}(2c)$ (which is at least twice $S_X(2c)$) will be larger than $S_X(c)$ and this is equivalent to the 0.9-th quantile being larger than $2c$
Relationship with tails.
For distributions where the survival function approaches a power law
$$S(x) = Pr[X>x] \sim x^{-\alpha}$$
The condition $Pr[X>2x] > Pr[X>x]/2$ will be fulfilled if $\alpha < 1$.
This condition does not relate to just any distribution with heavy tails. The cases with $\alpha < 1$ have an infinite or undefined mean.
So the case of the lognormal distribution, which does not approach a power-law tail, is not a property of the tails. It happens because close to $0$ the survival function falls less quickly than $1/x$, but in the tails this is no longer the case.
See the image below, where the distributions of $2X$ and $X+Y$ are compared in terms of the survival function. At some point the survival function of $X+Y$ drops below the survival function of $2X$; this is also the point beyond which the property for the quantile function no longer holds.
In the image, we plotted gray broken lines that relate to the $\propto 1/x$ relationship. The point where the two survival functions cross is also where the slope becomes steeper than the $1/x$ relationship. | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadd | TLDR; Let's define $Q_X(0.9) = c$ as the reserved costs. And $2c$ will be the costs reserved for two variables $X$ and $Y$.
For the specific case in the question, the probability for $X+Y$ to exceed $ | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadditivity
TLDR; Let's define $Q_X(0.9) = c$ as the reserved costs. And $2c$ will be the costs reserved for two variables $X$ and $Y$.
For the specific case in the question, the probability for $X+Y$ to exceed $2c$ is larger than 10%, namely 13.16%.
This is because the probability for $X$ and $Y$ to exceed $2c$ is already individually more than 5%, namely 6.52%.
When $X$ and $Y$ are independent then the probability for the sum $X+Y>2c$ is larger than the sum of probabilities $X>2c$ and $Y>2c$.
A sufficient condition for $Q_{X+Y}(0.9)> Q_X(0.9) + Q_Y(0.9)$ can be expressed in terms of the survival function.
Note that
There is an equivalent statement in terms of quantile functions and survival functions $$Q_{X+Y}(0.9)> 2c \quad \equiv \quad S_{X+Y}(2c) > 0.1$$ Note the blue arrows in the image below. Vertical arrow: The survival function of $X+Y$ in the point $2c$ is above $0.1$. Horizontal arrow: The 0.9 quantile of $X+Y$ is above $2c$.
The survival function of a sum is larger than the sum of the survival functions. $$S_{X+Y}(2c) \geq S_X(2c) + S_Y(2c)$$ This is because $X+Y$ is larger than $2c$ when either $X$ or $Y$ is larger than $2c$
So a sufficient condition for $Q_{X+Y}(0.9)> Q_X(0.9) + Q_Y(0.9)$ is when
$$S_{X}(2c) > \frac{S_{X}(c)}{2}$$
where $c = Q_X(0.9) = Q_Y(0.9)$. This is because if $S_X(2c)$ is larger than half the $S_X(c)$, then $S_{X+Y}(2c)$ (which is at least twice $S_X(2c)$) will be larger than $S_X(c)$ and this is equivalent to the 0.9-th quantile being larger than $2c$
Relationship with tails.
For distributions where the survival function approaches a power law
$$S(x) = Pr[X>x] \sim x^{-\alpha}$$
The condition $Pr[X>2x] > Pr[X>x]/2$ will be fulfilled if $\alpha < 1$.
This condition does not relate to just any distribution with heavy tails. the cases with $\alpha < 1$ have an infinite or undefined mean.
So the case with the lognormal distribution, which does not approach a power-law tail, is not a property of the tails. It happens because in the beginning close to $0$ the survival function will fall less quickly then $1/x$, but in the tails, this is not the case anymore.
See the image below where the distribution of $2X$ and $X+Y$ is compared in terms of the survival function. At some point the survival function of $X+Y$ is below the survival function of $2X$, this is also the point where no more the property for the quantile function is true.
In the image, we plotted gray broken lines that relate to the $\propto 1/x$ relationship. The point where the two survival functions cross is also where the slope becomes steeper than the $1/x$ relationship. | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadd
TLDR; Let's define $Q_X(0.9) = c$ as the reserved costs. And $2c$ will be the costs reserved for two variables $X$ and $Y$.
For the specific case in the question, the probability for $X+Y$ to exceed $ |
49,680 | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadditivity | As distributions get more right-tailed, the $90^{th}$ percentile of $X+Y$ approaches the $\frac{3}{\sqrt{10}}$th quantile of $X$, roughly the $95^{th}$ percentile. In other words, the top 10% of combined losses will come from roughly 5% in which the first loss is as high as possible, and 5% in which the second loss is as high as possible. (The number $3/\sqrt{10}\simeq.9487$ is a more exact limit which avoids double-counting when both $X$ and $Y$ are high.) So the counterintuitive situation in the post will arise whenever $Q_{95}(X)\gg2Q_{90}(X)$, which is the case with $LN(0,3)$.
Here is an analogous situation where the math is simpler.
Suppose 90% of the losses are uniformly distributed between 1 and 10, and 10% of the losses are uniformly distributed between 10 and 100. Then
$$Q_{90}(X)=10,\ \, Q_{95}(X)=55,\ \ Q_{90}(X+Y)\simeq 60$$
The quantile function for $X$ has an especially simple form:
$$Q_p(X)=\begin{cases}
\ \ 10p+1\quad\ \text{ if }\ \ 0.0<p<0.9\\
900p-800\ \text{ if }\ \ 0.9<p<1.0\\
\end{cases}$$
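A quick Monte Carlo check of these numbers in R (a sketch; the mixture is sampled exactly as described above):
set.seed(1)
rmix <- function(n) ifelse(runif(n) < 0.9, runif(n, 1, 10), runif(n, 10, 100))
x <- rmix(1e6); y <- rmix(1e6)
quantile(x, c(0.90, 0.95))   # roughly 10 and 55
quantile(x + y, 0.90)        # roughly 60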
The graph below shows the percentile for the first loss on the $x-$axis, the percentile for the second loss on the $y-$axis, and the blue region where the sum of the two losses is less than 60, i.e. $Q_p(X)+Q_q(Y)<60$:
The blue region has 90% of the area of the square, which is 90% of the probability. It intersects the axes near 0.95, leaving out roughly 5% of the square on top and 5% of the square on the right, corresponding to the 5% highest $X$ and $Y$. | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadd | As distributions get more right-tailed, the $90^{th}$ percentile of $X+Y$ approaches the $\frac{3}{\sqrt{10}}$th quantile of $X$, roughly the $95^{th}$ percentile. In other words, the top 10% of combi | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadditivity
As distributions get more right-tailed, the $90^{th}$ percentile of $X+Y$ approaches the $\frac{3}{\sqrt{10}}$th quantile of $X$, roughly the $95^{th}$ percentile. In other words, the top 10% of combined losses will come from roughly 5% in which the first loss is as high as possible, and 5% in which the second loss is as high as possible. (The number $3/\sqrt{10}\simeq.9487$ is a more exact limit which avoids double-counting when both $X$ and $Y$ are high.) So the counterintuitive situation in the post will arise whenever $Q_{95}(X)\gg2Q_{90}(X)$, which is the case with $LN(0,3)$.
Here is an analogous situation where the math is simpler.
Suppose 90% of the losses are uniformly distributed between 1 and 10, and 10% of the losses are uniformly distributed between 10 and 100. Then
$$Q_{90}(X)=10,\ \, Q_{95}(X)=55,\ \ Q_{90}(X+Y)\simeq 60$$
The quantile function for $X$ has an especially simple form:
$$Q_p(X)=\begin{cases}
\ \ 10p+1\quad\ \text{ if }\ \ 0.0<p<0.9\\
900p-800\ \text{ if }\ \ 0.9<p<1.0\\
\end{cases}$$
The graph below shows the percentile for the first loss on the $x-$axis, the percentile for the second loss on the $y-$axis, and the blue region where the sum of the two losses is less than 60, i.e. $Q_p(X)+Q_q(Y)<60$:
The blue region has 90% of the area of the square, which is 90% of the probability. It intersects the axes near 0.95, leaving out roughly 5% of the square on top and 5% of the square on the right, corresponding to the 5% highest $X$ and $Y$. | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadd
As distributions get more right-tailed, the $90^{th}$ percentile of $X+Y$ approaches the $\frac{3}{\sqrt{10}}$th quantile of $X$, roughly the $95^{th}$ percentile. In other words, the top 10% of combi |
49,681 | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadditivity | Using the business world example: Quantile measures of risk exclude worst-case scenarios. When a distribution is very fat-tailed, a measure such as $Q_{90}$ will exclude almost all bad scenarios and deceptively present an innocent-looking risk.
But when risks are summed, we are now looking at the risk of at-least-one-bad-thing-happening. The more risks, the higher the chance that at least one thing will go wrong, and the higher the risk reserve for a rainy day that must be carried. | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadd | Using the business world example: Quantile measures of risk exclude worst-case scenarios. When a distribution is very fat-tailed, a measure such as $Q_{90}$ will exclude almost all bad scenarios and | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadditivity
Using the business world example: Quantile measures of risk exclude worst-case scenarios. When a distribution is very fat-tailed, a measure such as $Q_{90}$ will exclude almost all bad scenarios and deceptively present an innocent-looking risk.
But when risks are summed, we are now looking at the risk of at-least-one-bad-thing-happening. The more risks, the higher the chance that at least one thing will go wrong, and the higher the risk reserve for a rainy day that must be carried. | What is an intuitive explanation for Q90 (X+Y) > Q90(X) + Q90(Y) in fat-tailed variables. Non Subadd
Using the business world example: Quantile measures of risk exclude worst-case scenarios. When a distribution is very fat-tailed, a measure such as $Q_{90}$ will exclude almost all bad scenarios and |
49,682 | Lavaan mediation + moderation + 2 X's | I simulated some irrelevant data:
library(lavaan)
library(MASS)    # provides mvrnorm(), used to simulate the data below
dat <- as.data.frame(mvrnorm(5e2, rep(0, 6), matrix(.25, 6, 6) + .75 * diag(6)))
colnames(dat) <- c("X", "X2", "M1", "M2", "W", "Y")
dat$X2 <- dat$X ^ 2
head(dat)
# X X2 M1 M2 W Y
# 1 -1.2556613 1.57668526 0.9558917 -0.6155703 -0.4540778 -0.3747208
# 2 -0.8463471 0.71630340 0.2066086 -0.1680590 -0.9181445 -0.2819832
# 3 0.1081230 0.01169059 -0.1934604 1.2662057 0.4817797 -0.5342619
# 4 0.6180336 0.38196555 0.3301925 -0.1277026 1.3222274 0.3626635
Given these data, this is the model code you want:
summary(sem(
"M1 ~ a11 * X + a12 * X2 + w10 * W + w11 * X:W + w12 * X2:W
M2 ~ a21 * X + a22 * X2 + w20 * W + w21 * X:W + w22 * X2:W + d * M1
Y ~ c1 * X + c2 * X2 + w30 * W + w31 * X:W + w32 * X2:W + b1 * M1 + b2 * M2
a11db2 := a11 * d * b2
a12db2 := a12 * d * b2
a11b1 := a11 * b1
a12b1 := a12 * b1
a21b2 := a21 * b2
a22b2 := a22 * b2",
dat), rsquare = TRUE)
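If you also want the effects at a non-zero value of the moderator, the same labels can be reused in additional defined parameters. A hypothetical sketch (not part of the model above) for the conditional effects at W = 1, to be appended inside the model string:
a11_w1  := a11 + w11            # conditional path X -> M1 when W = 1
ind1_w1 := (a11 + w11) * b1     # conditional indirect effect X -> M1 -> Y when W = 1
ind2_w1 := (a21 + w21) * b2     # conditional indirect effect X -> M2 -> Y when W = 1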
With this, you can work out the exact combinations that get you the total indirect, direct and total effects. The nature of W makes things a bit difficult. Right now, the computed effects at the bottom are for when W = 0. You might want to play around with the computed effects depending on the type of variable W is. | Lavaan mediation + moderation + 2 X's | I simulated some irrelevant data:
library(lavaan)
dat <- as.data.frame(mvrnorm(5e2, rep(0, 6), matrix(.25, 6, 6) + .75 * diag(6)))
colnames(dat) <- c("X", "X2", "M1", "M2", "W", "Y")
dat$X2 <- dat$X | Lavaan mediation + moderation + 2 X's
I simulated some irrelevant data:
library(lavaan)
dat <- as.data.frame(mvrnorm(5e2, rep(0, 6), matrix(.25, 6, 6) + .75 * diag(6)))
colnames(dat) <- c("X", "X2", "M1", "M2", "W", "Y")
dat$X2 <- dat$X ^ 2
head(dat)
# X X2 M1 M2 W Y
# 1 -1.2556613 1.57668526 0.9558917 -0.6155703 -0.4540778 -0.3747208
# 2 -0.8463471 0.71630340 0.2066086 -0.1680590 -0.9181445 -0.2819832
# 3 0.1081230 0.01169059 -0.1934604 1.2662057 0.4817797 -0.5342619
# 4 0.6180336 0.38196555 0.3301925 -0.1277026 1.3222274 0.3626635
Given these data, this is the model code you want:
summary(sem(
"M1 ~ a11 * X + a12 * X2 + w10 * W + w11 * X:W + w12 * X2:W
M2 ~ a21 * X + a22 * X2 + w20 * W + w21 * X:W + w22 * X2:W + d * M1
Y ~ c1 * X + c2 * X2 + w30 * W + w31 * X:W + w32 * X2:W + b1 * M1 + b2 * M2
a11db2 := a11 * d * b2
a12db2 := a12 * d * b2
a11b1 := a11 * b1
a12b1 := a12 * b1
a21b2 := a21 * b2
a22b2 := a22 * b2",
dat), rsquare = TRUE)
With this, you can work out the exact combinations that get you the total indirect, direct and total effects. The nature of W makes things a bit difficult. Right now, the computed effects at the bottom are for when W = 0. You might want to play around with the computed effects depending on the type of variable W is. | Lavaan mediation + moderation + 2 X's
I simulated some irrelevant data:
library(lavaan)
dat <- as.data.frame(mvrnorm(5e2, rep(0, 6), matrix(.25, 6, 6) + .75 * diag(6)))
colnames(dat) <- c("X", "X2", "M1", "M2", "W", "Y")
dat$X2 <- dat$X |
49,683 | Lavaan mediation + moderation + 2 X's | Moderation is basically the product of the IVs ($X_i$) with the moderator ($W$). Since you have two IVs, you have two products, $X_1W$ and $X_2W$; you just have to do it once for each predictor $X_i$ on each outcome ($M_1,M_2,Y$), since you want the moderator on each of them.
I used the notation in the graph for the code,
Y ~ cprime1*X1 + cprime2*X2 + w3_1*X1:W + w3_2*X2:W + controls
M2 ~ aprime1*X1 + aprime2*W + w2_1*X1:W + w2_2*X2:W + d*M1 + controls
M1 ~ a1*X1 + a2*W + w1_1*X1:W + w1_2*X2:W + controls
plus your indirect effects.
You can also consider a triple interaction X1:X2:W. You would have to add the double interaction of $X_1X_2$ and the triple interaction $X_1X_2W$. At this point you would have an IV and two moderators (conceptually speaking; statistically, it is arbitrary because there is no difference between the variables). I don't think it is what you want though. | Lavaan mediation + moderation + 2 X's | Moderation is basically the product of the IVs ($X_i$) with the moderator ($W$). Since you have two IVs, you have two products $X_1W$ and $W_2W$, you just have to do it once for each predictor $X_i$ o | Lavaan mediation + moderation + 2 X's
Moderation is basically the product of the IVs ($X_i$) with the moderator ($W$). Since you have two IVs, you have two products $X_1W$ and $W_2W$, you just have to do it once for each predictor $X_i$ on each outcome ($M_1,M_2,Y$), since you want the moderator on each of them.
I used the notation in the graph for the code,
Y ~ cprime1*X + cprime2*X2 + w3_1*X1:W + w3_2*X2:W + controls
M2 ~ aprime1*X1 + aprime2*W + w2_1*X1:W + w2_2*X2:W + + d*M1 + controls
M1 ~ a1*X1 + a2*W + w2_1*X1:W + w2_2*X2:W + controls
plus your indirect effects.
You can also consider a triple interaction X1:X2:W. You would have to add the double interaction of $X_1X_2$ and the triple interaction $X_1X_2W$. At this point you would have an IV and two moderators (conceptually speaking; statistically, it is arbitrary because there is no difference between the variables). I don't think it is what you want though. | Lavaan mediation + moderation + 2 X's
Moderation is basically the product of the IVs ($X_i$) with the moderator ($W$). Since you have two IVs, you have two products $X_1W$ and $W_2W$, you just have to do it once for each predictor $X_i$ o |
49,684 | Are there any conjugate likelihood distributions for a Categorical Prior? | Any likelihood function will do this: Since $Z \sim \text{Cat}(\pi)$, it is a discrete random variable with some finite number of possible states. Regardless of the likelihood function, it will still be distributed over these same states a posteriori and so it will still have a categorical distribution. This merely reflects the fact that every distribution over a finite set of outcomes is a categorical distribution.
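As a minimal R sketch of this point, where pi is the vector of prior probabilities and lik the likelihood of the observed data under each state (both names are hypothetical):
posterior <- function(pi, lik) {
  w <- lik * pi
  w / sum(w)          # still a categorical distribution over the same states
}
posterior(pi = c(0.2, 0.3, 0.5), lik = c(0.10, 0.05, 0.01))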
If you would like an explicit result, you can just apply Bayes theorem. For any observed value $o$ giving likelihood $L_o$, the resulting posterior distribution is:
$$Z|o \sim \text{Cat}(\pi^{*})
\quad \quad \quad
\pi_z^{*} \equiv \frac{L_o(z) \cdot \pi_z}{\sum_z L_o(z) \cdot \pi_z }.$$ | Are there any conjugate likelihood distributions for a Categorical Prior? | Any likelihood function will do this: Since $Z \sim \text{Cat}(\pi)$, it is a discrete random variable with some finite number of possible states. Regardless of the likelihood function, it will still | Are there any conjugate likelihood distributions for a Categorical Prior?
Any likelihood function will do this: Since $Z \sim \text{Cat}(\pi)$, it is a discrete random variable with some finite number of possible states. Regardless of the likelihood function, it will still be distributed over these same states a posteriori and so it will still have a categorical distribution. This merely reflects the fact that every distribution over a finite set of outcomes is a categorical distribution.
If you would like an explicit result, you can just apply Bayes theorem. For any observed value $o$ giving likelihood $L_o$, the resulting posterior distribution is:
$$Z|o \sim \text{Cat}(\pi^{*})
\quad \quad \quad
\pi_z^{*} \equiv \frac{L_o(z) \cdot \pi_z}{\sum_z L_o(z) \cdot \pi_z }.$$ | Are there any conjugate likelihood distributions for a Categorical Prior?
Any likelihood function will do this: Since $Z \sim \text{Cat}(\pi)$, it is a discrete random variable with some finite number of possible states. Regardless of the likelihood function, it will still |
49,685 | Unbiased estimator of $X^k$ given independent unbiased estimators of $X$ | The obvious analogy here is to use independent estimators $Y_1,...,Y_c$ with expectations:
$$\mathbb{E}(Y_i) = X^{p_i}
\quad \quad \quad \sum p_i = k.$$
In theory this is possible so long as you have a method to construct the required estimators for this problem. The independence requirement will generally make it impractical, since in most cases it will require you to partition your dataset to form the estimators separately. Consequently, while it is possible to get an unbiased estimator by this method, it will tend to be a poor estimator (with high variance), owing to the fact that it uses a partition of the data where only a small amount of the data is used for each part of the estimator.
An example using IID data with zero mean: Suppose you have IID data $X_1,...,X_n$ from some distribution with zero mean and finite variance $\sigma^2 < \infty$, and you want to estimate $\mathbb{E}(X^k)$. In this case the sample mean and sample variance give unbiased estimators for the first and second raw moments:
$$\mathbb{E}(\bar{X}) = \mathbb{E}(X) = 0
\quad \quad \quad
\mathbb{E}(S^2) = \mathbb{E}(X^2) = \sigma^2.$$
Now, suppose we partition our data into $c < k$ parts where we have at least two data points in each part (so that we can form the sample variance). Denote these partition parts by $\boldsymbol{X}_{(1)},...,\boldsymbol{X}_{(c)}$ and denote statistics using these samples with the corresponding subscripts. Now, choose some values $p_1,...,p_c \in \{ 1,2 \}$ with $p_1 + \cdots + p_c = k$ and form the estimator:
$$\text{Est} \equiv \prod_{i:p_i = 1} \bar{X}_{(i)} \times \prod_{i:p_i = 2} S_{(i)}^2 .$$
The partition of the data means that the parts are independent, so we have:
$$\begin{align}
\mathbb{E}(\text{Est})
&= \prod_{i:p_i = 1} \mathbb{E}(\bar{X}_{(i)}) \times \prod_{i:p_i = 2} \mathbb{E}(S_{(i)}^2) \\[6pt]
&= \prod_{i:p_i = 1} \mathbb{E}(X) \times \prod_{i:p_i = 2} \mathbb{E}(X^2) \\[6pt]
&= \prod_{i:p_i = 1} \mathbb{E}(X^{p_i}) \times \prod_{i:p_i = 2} \mathbb{E}(X^{p_i}) \\[6pt]
&= \prod_{i} \mathbb{E}(X^{p_i}) \\[6pt]
&= \mathbb{E}(X^{\sum p_i}) \\[12pt]
&= \mathbb{E}(X^k). \\[12pt]
\end{align}$$
Note that although this is an unbiased estimator, it will be a terrible estimator when the number of partition pieces is large. | Unbiased estimator of $X^k$ given independent unbiased estimators of $X$ | The obvious analogy here is to use independent estimators $Y_1,...,Y_c$ with expectations:
$$\mathbb{E}(Y_i) = X^{p_i}
\quad \quad \quad \sum p_i = k.$$
In theory this is possible so long as you have | Unbiased estimator of $X^k$ given independent unbiased estimators of $X$
The obvious analogy here is to use independent estimators $Y_1,...,Y_c$ with expectations:
$$\mathbb{E}(Y_i) = X^{p_i}
\quad \quad \quad \sum p_i = k.$$
In theory this is possible so long as you have a method to construct the required estimators for this problem. The independence requirement will generally make it impractical, since in most cases it will require you to partition your dataset to form the estimators seperately. Consequently, while it is possible to get an unbiased estimator by this method, it will tend to be a poor estimator (with high variance), owing to the fact that it uses a partition of the data where only a small amount of the data is used for each part of the estimator.
An example using IID data with zero mean: Suppose you have IID data $X_1,...,X_n$ from some distribution with zero mean and finite variance $\sigma^2 < \infty$, and you want to estimate $\mathbb{E}(X^k)$. In this case the sample mean and sample variance give unbiased estimators for the first and second raw moments:
$$\mathbb{E}(\bar{X}) = \mathbb{E}(X) = 0
\quad \quad \quad
\mathbb{E}(S^2) = \mathbb{E}(X^2) = \sigma^2.$$
Now, suppose we partition our data into $c < k$ parts where we have at least two data points in each part (so that we can form the sample variance). Denote these partition parts by $\boldsymbol{X}_{(1)},...,\boldsymbol{X}_{(c)}$ and denote statistics using these samples with the corresponding subscripts. Now, choose some values $p_1,...,p_c \in \{ 1,2 \}$ with $p_1 + \cdots + p_c = k$ and form the estimator:
$$\text{Est} \equiv \prod_{i:p_i = 1} \bar{X}_{(i)} \times \prod_{i:p_i = 2} S_{(i)}^2 .$$
The partition of the data means that the parts are independent, so we have:
$$\begin{align}
\mathbb{E}(\text{Est})
&= \prod_{i:p_i = 1} \mathbb{E}(\bar{X}_{(i)}) \times \prod_{i:p_i = 2} \mathbb{E}(S_{(i)}^2) \\[6pt]
&= \prod_{i:p_i = 1} \mathbb{E}(X) \times \prod_{i:p_i = 2} \mathbb{E}(X^2) \\[6pt]
&= \prod_{i:p_i = 1} \mathbb{E}(X^{p_i}) \times \prod_{i:p_i = 2} \mathbb{E}(X^{p_i}) \\[6pt]
&= \prod_{i} \mathbb{E}(X^{p_i}) \\[6pt]
&= \mathbb{E}(X^{\sum p_i}) \\[12pt]
&= \mathbb{E}(X^k). \\[12pt]
\end{align}$$
Note that although this is an unbiased estimator, it will be a terrible estimator when the number of partition pieces is large. | Unbiased estimator of $X^k$ given independent unbiased estimators of $X$
The obvious analogy here is to use independent estimators $Y_1,...,Y_c$ with expectations:
$$\mathbb{E}(Y_i) = X^{p_i}
\quad \quad \quad \sum p_i = k.$$
In theory this is possible so long as you have |
49,686 | Unbiased estimator of $X^k$ given independent unbiased estimators of $X$ | To illustrate the point that the answer depends on the underlying statistical model: If $Y\sim\mathcal E(1/X)$, an exponential variable, then
$$\mathbb E[Y^k]= X^k\Gamma(k+1)$$
meaning that $Y^k/\Gamma(k+1)=Y^k/k!$ is an unbiased estimator of $X^k$, based on a single observation. This extends to Gamma variables, obviously. | Unbiased estimator of $X^k$ given independent unbiased estimators of $X$ | To illustrate the point that the answer depends on the underlying statistical model: If $Y\sim\mathcal E(1/X)$, an exponential variable, then
$$\mathbb E[Y^k]= X^k\Gamma(k+1)$$
meaning that $Y^k/\Gamm | Unbiased estimator of $X^k$ given independent unbiased estimators of $X$
To illustrate the point that the answer depends on the underlying statistical model: If $Y\sim\mathcal E(1/X)$, an exponential variable, then
$$\mathbb E[Y^k]= X^k\Gamma(k+1)$$
meaning that $Y^k/\Gamma(k+1)=Y^k/k!$ is an unbiased estimator of $X^k$, based on a single observation. This extends to Gamma variables, obviously. | Unbiased estimator of $X^k$ given independent unbiased estimators of $X$
To illustrate the point that the answer depends on the underlying statistical model: If $Y\sim\mathcal E(1/X)$, an exponential variable, then
$$\mathbb E[Y^k]= X^k\Gamma(k+1)$$
meaning that $Y^k/\Gamm |
49,687 | Analysis for a design with variation among and within trees | Your model:
y ~ Sex*Side + (1 | Site) + (1 | Site:Tree)
makes sense to me conceptually. However with only 5 sites, this is rather few for fitting random intercepts, so I would suggest also fitting
y ~ Sex*Side + Site*Tree
Hopefully the inferences for Sex, Side and their interaction will be similar in both models.
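A minimal sketch of fitting both specifications, assuming a data frame dat with columns y, Sex, Side, Site and Tree (the names follow the formulas above):
library(lme4)
m_mixed <- lmer(y ~ Sex * Side + (1 | Site) + (1 | Site:Tree), data = dat)
m_fixed <- lm(y ~ Sex * Side + Site * Tree, data = dat)
summary(m_mixed)   # compare the Sex, Side and Sex:Side estimates across the two fits
summary(m_fixed)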
You don't need anything special to account for the within and between factors. | Analysis for a design with variation among and within trees | Your model:
y ~ Sex*Side + (1 | Site) + (1 | Site:Tree)
makes sense to me conceptually. However with only 5 sites, this is rather few for fitting random intercepts, so I would suggest also fitting
y | Analysis for a design with variation among and within trees
Your model:
y ~ Sex*Side + (1 | Site) + (1 | Site:Tree)
makes sense to me conceptually. However with only 5 sites, this is rather few for fitting random intercepts, so I would suggest also fitting
y ~ Sex*Side + Site*Tree
Hopefully the inferences for Sex, Side and their interaction will be similar in both models.
You don't need anything special to account for the within and between factors. | Analysis for a design with variation among and within trees
Your model:
y ~ Sex*Side + (1 | Site) + (1 | Site:Tree)
makes sense to me conceptually. However with only 5 sites, this is rather few for fitting random intercepts, so I would suggest also fitting
y |
49,688 | Comparisons between independent geometric random variables | By remembering how the geometric distribution arises, we can solve this problem with almost no calculation.
The problem can be seen as a competition
A geometric random variable $W$ models the number of failures in a sequence of independent Bernoulli trials before the first success is observed. Its parameter $p$ is the chance of success in each trial.
The usual metaphor for a Bernoulli$(p)$ trial is the flip of a coin with probability $p.$ The problem, then, can be phrased in terms of a competition. It consists of a series of turns that is continued until a definite outcome is achieved:
You hold a coin with probability $p_1$ of heads and I hold a coin with probability $p_2$ of heads. On each turn we both flip our coins. If both coins come up heads, we tie; if only your coin is heads, you win; if only my coin is heads, I win; and otherwise (both tails) we continue the series. What are the chances that (i) you win, (ii) I win, (iii) we tie, (iv) the series goes on forever?
The competition will have a definite outcome
Let's deal with that last possibility right away: at each turn the series will continue only when we each observe tails, which has a chance of $q=(1-p_1)(1-p_2).$ The chance of continuing through $n=1,2,\ldots$ turns without a definite outcome therefore is $q^n.$ Provided $q\lt 1,$ this converges to $0,$ demonstrating there is a vanishingly small chance that the series goes longer than $n$ turns. Unless both coins always come up tails ($p_1=p_2=0$), then, the chance of (iv) is zero.
The problem can be restated in terms of the competition's outcome
We have seen the game will eventually terminate. If, after it is over, the loser were to continue flipping until they, too, observed a heads, then the numbers of flips will both be realizations of geometric random variables $W_1$ and $W_2$ with parameters $p_1$ and $p_2.$ Evidently, you win when $W_1$ is less than $W_2,$ I win when $W_1$ exceeds $W_2,$ and otherwise we tie.
A simple equation determines the chance you win
Let's consider your chances of winning in a little more detail. You can win exactly when either (a) you toss a heads and I toss a tail on the current turn or (b) we both toss tails on the current turn, in which case the game effectively starts over at the beginning. The chance of (a) is $p_1(1-p_2)$ (because our tosses are independent) and the chance of (b) is $(1-p_1)(1-p_2).$ Therefore,
$$\Pr(W_1 \lt W_2) = \Pr(\text{You win}) = p_1(1-p_2) + (1-p_1)(1-p_2)\Pr(\text{You win}).$$
This simple (linear) equation for your winning chances is easily solved to give
$$\Pr(W_1 \lt W_2) = \Pr(\text{You win}) = \frac{p_1(1-p_2)}{1 - (1-p_1)(1-p_2)} = \frac{p_1 -p_1p_2}{p_1+p_2-p_1p_2}.$$
The rest is easy
Interchanging our roles merely swaps the subscripts, from which we read off
$$\Pr(W_1 \gt W_2) = \Pr(W_2 \lt W_1) = \frac{p_2 -p_1p_2}{p_1+p_2-p_1p_2}.$$
The chance of a tie plus the chance that somebody wins must equal $1,$ because the chance that this game goes on forever is zero. Thus
$$\Pr(W_1=W_2) = 1 - (\Pr(W_1 \lt W_2) + \Pr(W_1 \gt W_2)) = \frac{p_1p_2}{p_1+p_2-p_1p_2}.$$
Simulations indicate this answer is correct
As a check, I simulated this game ten million times where your coin, with $p_1 = 9/10,$ is at a slight disadvantage to mine with $p_2=10/11.$ Here are the frequencies of the results compared to the formula:
Lose Tie Win
Simulation 0.0827 0.826 0.0917
Theory 0.0826 0.826 0.0917
True, most of the time we tie (because both coins so strongly favor heads), but I win noticeably more often than you do, despite the tiny difference in the coins.
Here is the R code for the simulation. It takes a few seconds to run.
p1 <- 9/10 # Your chances of heads
p2 <- 10/11 # My chances of heads
n <- 1e7 # Number of iterations
set.seed(17)
W1 <- rgeom(n, p1)
W2 <- rgeom(n, p2)
Outcome <- ifelse(W1 > W2, "Win", ifelse(W1 < W2, "Lose", "Tie"))  # "Win" is the event W1 > W2: your first head comes later, so my coin wins
print(rbind(Simulation = table(Outcome) / n,
Theory = c(Win=p1 - p1*p2, Tie=p1*p2, Lose=p2-p1*p2)/(p1 + p2 - p1*p2)),
digits=3)
``` | Comparisons between independent geometric random variables | By remembering how the geometric distribution arises, we can solve this problem with almost no calculation.
The problem can be seen as a competition
A geometric random variable $W$ models the number o | Comparisons between independent geometric random variables
By remembering how the geometric distribution arises, we can solve this problem with almost no calculation.
The problem can be seen as a competition
A geometric random variable $W$ models the number of failures in a sequence of independent Bernoulli trials before the first success is observed. Its parameter $p$ is the chance of success in each trial.
The usual metaphor for a Bernoulli$(p)$ trial is the flip of a coin with probability $p.$ The problem, then, can be phrased in terms of a competition. It consists of a series of turns that is continued until a definite outcome is achieved:
You hold a coin with probability $p_1$ of heads and I hold a coin with probability $p_2$ of heads. On each turn we both flip our coins. If both coins come up heads, we tie; if only your coin is heads, you win; if only my coin is heads, I win; and otherwise (both tails) we continue the series. What are the chances that (i) you win, (ii) I win, (iii) we tie, (iv) the series goes on forever?
The competition will have a definite outcome
Let's deal with that last possibility right away: at each turn the series will continue only when we each observe tails, which has a chance of $q=(1-p_1)(1-p_2).$ The chance of continuing through $n=1,2,\ldots$ turns without a definite outcome therefore is $q^n.$ Provided $q\lt 1,$ this converges to $0,$ demonstrating there is a vanishingly small chance that the series goes longer than $n$ turns. Unless both coins always come up tails ($p_1=p_2=0$), then, the chance of (iv) is zero.
The problem can be restated in terms of the competition's outcome
We have seen the game will eventually terminate. If, after it is over, the loser were to continue flipping until they, too, observed a heads, then the numbers of flips will both be realizations of geometric random variables $W_1$ and $W_2$ with parameters $p_1$ and $p_2.$ Evidently, you win when $W_1$ is less than $W_2,$ I win when $W_1$ exceeds $W_2,$ and otherwise we tie.
A simple equation determines the chance you win
Let's consider your chances of winning in a little more detail. You can win exactly when either (a) you toss a heads and I toss a tail on the current turn or (b) we both toss tails on the current turn, in which case the game effectively starts over at the beginning. The chance of (a) is $p_1(1-p_2)$ (because our tosses are independent) and the chance of (b) is $(1-p_1)(1-p_2).$ Therefore,
$$\Pr(W_1 \lt W_2) = \Pr(\text{You win}) = p_1(1-p_2) + (1-p_1)(1-p_2)\Pr(\text{You win}).$$
This simple (linear) equation for your winning chances is easily solved to give
$$\Pr(W_1 \lt W_2) = \Pr(\text{You win}) = \frac{p_1(1-p_2)}{1 - (1-p_1)(1-p_2)} = \frac{p_1 -p_1p_2}{p_1+p_2-p_1p_2}.$$
The rest is easy
Interchanging our roles merely swaps the subscripts, from which we read off
$$\Pr(W_1 \gt W_2) = \Pr(W_2 \lt W_1) = \frac{p_2 -p_1p_2}{p_1+p_2-p_1p_2}.$$
The chance of a tie plus the chance that somebody wins must equal $1,$ because the chance that this game goes on forever is zero. Thus
$$\Pr(W_1=W_2) = 1 - (\Pr(W_1 \lt W_2) + \Pr(W_1 \gt W_2)) = \frac{p_1p_2}{p_1+p_2-p_1p_2}.$$
Simulations indicate this answer is correct
As a check, I simulated this game ten million times where your coin, with $p_1 = 9/10,$ is at a slight disadvantage to mine with $p_2=10/11.$ Here are the frequencies of the results compared to the formula:
Lose Tie Win
Simulation 0.0827 0.826 0.0917
Theory 0.0826 0.826 0.0917
True, most of the time we tie (because both coins so strongly favor heads), but I win noticeably more often than you do, despite the tiny difference in the coins.
Here is the R code for the simulation. It takes a few seconds to run.
p1 <- 9/10 # Your chances of heads
p2 <- 10/11 # My chances of heads
n <- 1e7 # Number of iterations
set.seed(17)
W1 <- rgeom(n, p1)
W2 <- rgeom(n, p2)
Outcome <- ifelse(W1 > W2, "Win", ifelse(W1 < W2, "Lose", "Tie"))
print(rbind(Simulation = table(Outcome) / n,
Theory = c(Win=p1 - p1*p2, Tie=p1*p2, Lose=p2-p1*p2)/(p1 + p2 - p1*p2)),
digits=3)
``` | Comparisons between independent geometric random variables
By remembering how the geometric distribution arises, we can solve this problem with almost no calculation.
The problem can be seen as a competition
A geometric random variable $W$ models the number o |
49,689 | Comparisons between independent geometric random variables | In accordance with whuber's suggestion, I am posting an extended version of some comments that I made on whuber's answer as a separate answer of my own.
The experiment consists of players A and B each (independently) tossing their individual coins that turn up Heads with probabilities $p_A$ and $p_B$ respectively. Repeated independent trials of this experiment are performed until at least one of A and B tosses a Head for the first time, at which point the game ends with A the winner if the outcome is $(H,T)$, B the winner if the outcome is $(T,H)$, and a tie if the outcome is $(H,H)$. The game ends on the very first trial on which the outcome is NOT $(T,T)$. Clearly, if $p_A=p_B=0$ (both players have two-tailed coins), the outcome of each trial is $(T,T)$ and the game never ends, and so to exclude this trivial case, we assume that both $p_A$ and $p_B$ cannot have value $0$. If exactly one of $p_A$ and $p_B$ has value $0$, then with $\{X,Y\} = \{A, B\}$ where $p_X = 0$ and $p_Y > 0$, we can say that Y is guaranteed to win the game (ties are impossible), and it takes an average of $\frac{1}{p_Y}$ trials for Y to actually win the game by tossing a Head.
So, assuming that $p_A > 0$, $p_B > 0$, the game is guaranteed to end in a finite number of trials (cf. whuber's answer cited above). Because of independence, we can ignore all trials on which $(T,T)$ is the outcome and concentrate on the very first trial on which the outcome $(T,T)$ does not occur meaning that the outcome is necessarily either $(H,T)$ in which case A wins, or $(T,H)$ in which case B wins, or $(H,H)$ in which case there is a tie. Note that the game ends at this point. So, all previous trials (if any) have resulted in $(T,T)$ and the current trial is the very first one on which the outcome is not $(T,T)$. Since the game ends at this point, there are no future trials to consider.
Given that the event $\{(H,T), (T,H), (H,H)\}$ has occurred, what is the conditional probability that the outcome is $(H,T)$ and so A wins? the conditional probability that the outcome is $(T,H)$ and so B wins?
the conditional probability that the outcome is $(H,H)$ and so the game ends in a tie? We have
\begin{align}
P((H,T)\mid (T,T)^c)
&= \frac{P(H,T)}{P(\text{at least one of A and B tosses a Head})}\\
&= \frac{p_A(1-p_B)}{p_A + p_B - p_Ap_B}\tag{1}\\
&= P(\text{A wins}),\\
P((T,H)\mid (T,T)^c)
&= \frac{P(T,H)}{P(\text{at least one of A and B tosses a Head})}\\
&= \frac{p_B(1-p_A)}{p_A + p_B - p_Ap_B}\tag{2}\\
&= P(\text{B wins}),\\
P((H,H)\mid (T,T)^c)
&= \frac{P(H,H)}{P(\text{at least one of A and B tosses a Head})}\\
&= \frac{p_Ap_B}{p_A + p_B - p_Ap_B}\tag{3}\\
&= P(\text{game is tied}).
\end{align}
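For reference, a small R helper (the function name is mine, not from the answer) that evaluates $(1)$, $(2)$ and $(3)$ for given coin probabilities:
game_probs <- function(pA, pB) {
  denom <- pA + pB - pA * pB               # P(at least one Head on a trial)
  c(A_wins = pA * (1 - pB) / denom,
    B_wins = pB * (1 - pA) / denom,
    tie    = pA * pB / denom)
}
game_probs(9/10, 10/11)   # gives 0.0826, 0.0917 and 0.826, matching the frequencies in whuber's simulation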
But, as whuber cogently asked earlier, why am I claiming that the conditional probabilities computed in $(1), (2)$, and $(3)$ (note that they add up to $1$) are respectively equal to the unconditional probabilities of A winning, B winning, and the game being tied? Well, the game ends on the trial being considered and we are just looking at the reduced sample space $\Omega^\prime = \{(H,T), (T,H), (H,H)\}$ and the conditional probability measure that assigns the probabilities given by $(1), (2)$, and $(3)$ to these outcomes.
Alternatively, consider the mutually exclusive events $C= \{(H,T)\}$ and $D = \{(T,H),(H,H)\}$. It is a standard result in probability theory that on a sequence of independent trials, the (unconditional) probability that $C$ occurs before $D$ does (and so A wins) is given by
\begin{align}P(\text{C occurs before D}) &= \frac{P(C)}{P(C)+P(D)}\\
&= \frac{p_A(1-p_B)}{p_A(1-p_B) + p_B(1-p_A) + p_Ap_B}\\
&= \frac{p_A(1-p_B)}{p_A + p_B - p_Ap_B}
\end{align}
which is the same value as in $(1)$. The careful but incredulous reader is invited to work out the other cases similarly to verify that right sides of $(2)$ and $(3)$ do indeed give the respective unconditional probabilities of B winning, and of the game ending in a tie. | Comparisons between independent geometric random variables | In accordance with whuber's suggestion, I am posting an extended version of some comments that I made on whuber's answer as a separate answer of my own.
The experiment consists of players A and B each | Comparisons between independent geometric random variables
In accordance with whuber's suggestion, I am posting an extended version of some comments that I made on whuber's answer as a separate answer of my own.
The experiment consists of players A and B each (independently) tossing their individual coins that turn up Heads with probabilities $p_A$ and $p_B$ respectively. Repeated independent trials of this experiment are performed until at least one of A and B tosses a Head for the first time, at which point the game ends with A the winner if the outcome is $(H,T)$, B the winner if the outcome is $(T,H)$, and a tie if the outcome is $(H,H)$. The game ends on the very first trial on which the outcome is NOT $(T,T)$. Clearly, if $p_A=p_B=0$ (both players have two-tailed coins), the outcome of each trial is $(T,T)$ and the game never ends, and so to exclude this trivial case, we assume that both $p_A$ and $p_B$ cannot have value $0$. If exactly one of $p_A$ and $p_B$ has value $0$, then with $\{X,Y\} = \{A, B\}$ where $p_X = 0$ and $p_Y > 0$, we can say that Y is guaranteed to win the game (ties are impossible), and it takes an average of $\frac{1}{p_Y}$ trials for Y to actually win the game by tossing a Head.
So, assuming that $p_A > 0$, $p_B > 0$, the game is guaranteed to end in a finite number of trials (cf. whuber's answer cited above). Because of independence, we can ignore all trials on which $(T,T)$ is the outcome and concentrate on the very first trial on which the outcome $(T,T)$ does not occur meaning that the outcome is necessarily either $(H,T)$ in which case A wins, or $(T,H)$ in which case B wins, or $(H,H)$ in which case there is a tie. Note that the game ends at this point. So, all previous trials (if any) have resulted in $(T,T)$ and the current trial is the very first one on which the outcome is not $(T,T)$. Since the game ends at this point, there are no future trials to consider.
Given that the event $\{(H,T), (T,H), (H,H)\}$ has occurred, what is the conditional probability that the outcome is $(H,T)$ and so A wins? the conditional probability that the outcome is $(T,H)$ and so B wins?
the conditional probability that the outcome is $(H,H)$ and so the game ends in a tie? We have
\begin{align}
P((H,T)\mid (T,T)^c)
&= \frac{P(H,T)}{P(\text{at least one of A and B tosses a Head})}\\
&= \frac{p_A(1-p_B)}{p_A + p_B - p_Ap_B}\tag{1}\\
&= P(\text{A wins}),\\
P((T,H)\mid (T,T)^c)
&= \frac{P(T,H)}{P(\text{at least one of A and B tosses a Head})}\\
&= \frac{p_B(1-p_A)}{p_A + p_B - p_Ap_B}\tag{2}\\
&= P(\text{B wins}),\\
P((H,H)\mid (T,T)^c)
&= \frac{P(H,H)}{P(\text{at least one of A and B tosses a Head})}\\
&= \frac{p_Ap_B}{p_A + p_B - p_Ap_B}\tag{3}\\
&= P(\text{game is tied}).
\end{align}
But, as whuber cogently asked earlier, why am I claiming that the conditional probabilities computed in $(1), (2)$, and $(3)$ (note that they add up to $1$) are respectively equal to the unconditional probabilities of A winning, B winning, and the game being tied? Well, the game ends on the trial being considered and we are just looking at the reduced sample space $\Omega^\prime = \{(H,T), (T,H), (H,H)\}$ and the conditional probability measure that assigns the probabilities given by $(1), (2)$, and $(3)$ to these outcomes.
Alternatively, consider the mutually exclusive events $C= \{H,T)\}$ and $D = \{(T,H),(H,H)\}$. It is a standard result in probability theory that on a sequence of independent trials, the (unconditional) probability that $C$ occurs before $D$ does (and so A wins) is given by
\begin{align}P(\text{C occurs before D}) &= \frac{P(C)}{P(C)+P(D)}\\
&= \frac{p_A(1-p_B)}{p_A(1-p_B) + p_B(1-p_A) + p_Ap_B}\\
&= \frac{p_A(1-p_B)}{p_A + p_B - p_Ap_B}
\end{align}
which is the same value as in $(1)$. The careful but incredulous reader is invited to work out the other cases similarly to verify that right sides of $(2)$ and $(3)$ do indeed give the respective unconditional probabilities of B winning, and of the game ending in a tie. | Comparisons between independent geometric random variables
In accordance with whuber's suggestion, I am posting an extended version of some comments that I made on whuber's answer as a separate answer of my own.
The experiment consists of players A and B each |
49,690 | Comparisons between independent geometric random variables | As I indicated in a comment above, there is a cited path to the solution available here, courtesy of the Math forum on Stack Exchange, on the topic 'Difference between two independent geometric distributions'.
However, I have also found another route that is worthy of mention as it may have more broad applications (that is, for other distributions than the Geometric). It is especially understandable for those acquainted with the Monte Carlo CDF inversion method to generate random deviates. The method basically consists of equating a uniform deviate on (0,1) to the CDF and solve (if possible directly or otherwise) to obtain X, the associated random deviate for the distribution of interest.
Note: I have constructed a spreadsheet with varying parameter values for ${p_1}$ and ${p_2}$ and feel confident that the methodology presented is sound in practice and theory, based on repeated runs of 1,000 generated pairs of uniform random deviates and the corresponding Geometric deviates they produce (CDF reference here).
In the case of the Geometric, the math proceeds as follows:
$${U = CDF = 1 - (1-p)^X}$$
which can produce a pair of random Geometric deviates starting with two Uniform random deviates for two Geometric distributions with corresponding different parameters:
$${X1 = Ln(1-U1)/Ln(1-p_1)}$$
$${X2 = Ln(1-U2)/Ln(1-p_2)}$$
Note: for efficiency, and for mathematical convenience later, since both U1 and 1-U1 are uniformly distributed random deviates, one can replace 1-U1 with just U1. [EDIT] Although originally I did not round the generated deviates to the nearest integers, a subsequent investigation working with integer values for the Geometric deviates, and returning a blank cell for ties (so that the observed simulation statistics excluded incidents of draws), produced results actually closer to the expected theoretical value detailed below, albeit a little more variable because of the varying, reduced sample size (from ignoring draws).
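For readers who want to try the inversion idea without a spreadsheet, a minimal R sketch (the parameter values and sample size are illustrative choices, not those from the spreadsheet; floor() is the exact inverse-CDF counterpart of the rounding discussed above):
set.seed(1)
p1 <- 0.3; p2 <- 0.5; n <- 1e5
U1 <- runif(n); U2 <- runif(n)
X1 <- floor(log(U1) / log(1 - p1))   # U used in place of 1 - U, as noted above
X2 <- floor(log(U2) / log(1 - p2))
mean(X1 < X2)                        # Monte Carlo estimate of Pr[X1 < X2]
p1 * (1 - p2) / (p1 + p2 - p1 * p2)  # closed-form value, for comparison (cf. the other answers)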
So, how does this assist in the calculations of the respective probabilities involving the two Geometric variables? Quite directly actually by straight substitution:
$${ Pr[ X1 < X2] = Pr[ Ln(U1)/Ln(1-p_1) < Ln(U2)/Ln(1-p_2)]}$$
Taking the exponential of both sides of the inequality implies:
$${ Pr[ Exp(Ln(U1))*Exp(-Ln(1-p_1)) < Exp(Ln(U2))*Exp(-Ln(1-p_2)) ]}$$
Or:
$${ Pr[ U1/(1-p_1) < U2/(1-p_2)]}$$
And, most importantly, as we can now introduce the topic of the Uniform Ratio Distribution also discussed in Wikipedia:
$${ Pr[ (Z =) U1/U2 < (1-p_1)/(1-p_2)]}$$
is provided directly and facilely by CDF of the Uniform Ratio Distribution in the cases where ${(1-p_1)/(1-p_2) < 1}$ (or greater), which equivalently implies corresponding values when say ${p_1 > p_2}$ (or ${p_1 < p_2}$).
Based on my simulation, I observed that the sampling observed value became much closer to the theoretical value postulated by the Uniform Ratio distribution, on a starting simulation of 1,000 random deviate pairs, as the absolute difference between ${p_1}$ and ${p_2}$ increased.
[EDIT] To end confusion as to my claims and provide verification of reported accuracy, I now provide a link to a Published Google Spreadsheet. It updates every 5 minutes from my master copy and is read-only, but fully populated with 1,000 rows. I can periodically recalculate to provide alternate simulations results. [EDIT][EDIT]Link removed: Spreadsheet has since evolved into the basis of a Casino Game Provisional Patent (I may sometime in the future post more details upon request). | Comparisons between independent geometric random variables | As I indicated in a comment above there is a cited path to the solution available here courtesy of the Math forum on Stack Exchange on the topic 'Difference between two independent geometric distribut | Comparisons between independent geometric random variables
As I indicated in a comment above there is a cited path to the solution available here courtesy of the Math forum on Stack Exchange on the topic 'Difference between two independent geometric distribution.
However, I have also found another route that is worthy of mention as it may have more broad applications (that is, for other distributions than the Geometric). It is especially understandable for those acquainted with the Monte Carlo CDF inversion method to generate random deviates. The method basically consists of equating a uniform deviate on (0,1) to the CDF and solve (if possible directly or otherwise) to obtain X, the associated random deviate for the distribution of interest.
Note: I have constructed a spreadsheet with varying parameters values for ${p_1}$ and ${p_2}$ and feel confident that the methodology presented is sound in practice and theory, based on repeated runs lengths of a 1,000 pairs of generated pairs of uniform random deviates and the corresponding produces Geometric distribution (CDF reference here).
In the case of the Geometric, the math proceeds as follows:
$${U = CDF = 1 - (1-p)^X}$$
which can produce a pair of random Geometric deviates starting with two Uniform random deviates for two Geometric distributions with corresponding different parameters:
$${X1 = Ln(1-U1)/Ln(1-p_1)}$$
$${X2 = Ln(1-U2)/Ln(1-p_2)}$$
Note: for efficiency, and mathematical convenient latter, as both U1 and 1-U1 are uniformly distributed random deviates, one can replace 1-U1 with just U1. [EDIT] Although originally, I did not round the generated deviates to nearest integers, subsequent investigation working with integer values for the Geometric deviates, and returning a blank cell with ties (so the observed simulation statistics were absence incidents of draws), produced results actually closer to the expected theoretical value detailed below, albeit a little more variable from a varying reduced sample size (per ignoring draws).
So, how does this assist in the calculations of the respective probabilities involving the two Geometric variables? Quite directly actually by straight substitution:
$${ Pr[ X1 < X2] = Pr[ Ln(U1)/Ln(1-p_1) < Ln(U2)/Ln(1-p_2)]}$$
Taking the exponential of both sides of the inequality implies:
$${ Pr[ Exp(Ln(U1)*Exp(-Ln(1-p_1)) < Exp(Ln(U2))*Exp(-Ln(1-p_2) ]}$$
Or:
$${ Pr[ U1/(1-p_1) < U2/(1-p_2)]}$$
And, most importantly, as we can now introduce the topic of the Uniform Ratio Distribution also discussed in Wikipedia:
$${ Pr[ (Z =) U1/U2 < (1-p_1)/(1-p_2)]}$$
is provided directly and facilely by CDF of the Uniform Ratio Distribution in the cases where ${(1-p_1)/(1-p_2) < 1}$ (or greater), which equivalently implies corresponding values when say ${p_1 > p_2}$ (or ${p_1 < p_2}$).
Based on my simulation, I observed that the sampling observed value became much closer to the theoretical value postulated by the Uniform Ratio distribution, on a starting simulation of 1,000 random deviate pairs, as the absolute difference between ${p_1}$ and ${p_2}$ increased.
[EDIT] To end confusion as to my claims and provide verification of reported accuracy, I now provide a link to a Published Google Spreadsheet. It updates every 5 minutes from my master copy and is read-only, but fully populated with 1,000 rows. I can periodically recalculate to provide alternate simulations results. [EDIT][EDIT]Link removed: Spreadsheet has since evolved into the basis of a Casino Game Provisional Patent (I may sometime in the future post more details upon request). | Comparisons between independent geometric random variables
As I indicated in a comment above there is a cited path to the solution available here courtesy of the Math forum on Stack Exchange on the topic 'Difference between two independent geometric distribut |
49,691 | How do I analyze bimodal distibuted data with a linear mixed model | I try to sum up what I‘ve learned from the comments to close the question:
Linear mixed effect models do not necessarily need normally distributed data; here is a link to another Post dealing with the same question
Not the data itself but the residuals of the model should be normally distributed
One of the most important things to look at while working with lme models is to find the right model syntax representing your experiment correctly; resources which helped me find that are the following ones:
A Hitchhiker's Guide to Mixed Models for Randomized Experiments by Piepho et al.
Categorical random effects with lme4 by lionel
This Post from amoeba R's lmer cheat sheet | How do I analyze bimodal distibuted data with a linear mixed model | I try to sum up what I‘ve learned from the comments to close the question:
Linear mixed effect models do not necessarily need normally distributed data; here is a link to another Post dealing with th | How do I analyze bimodal distibuted data with a linear mixed model
I try to sum up what I‘ve learned from the comments to close the question:
Linear mixed effect models do not necessarily need normally distributed data; here is a link to another Post dealing with the same question
Not the data itself but the residuals of the model should be normally distributed
One of the most important things to look at while working with lme models, is to find the right model syntax representing your experiment correctly, resources which helped me finding that are the following ones:
A Hitchhiker's Guide to Mixed Models for Randomized Experiments by Piepho et al.
Categorical random effects with lme4 by lionel
This Post from amoeba R's lmer cheat sheet | How do I analyze bimodal distibuted data with a linear mixed model
I try to sum up what I‘ve learned from the comments to close the question:
Linear mixed effect models do not necessarily need normally distributed data; here is a link to another Post dealing with th |
49,692 | Is there a hard distinction between hyperparameter vs parameter in machine learning? | That's a great question - I'm not sure what the best way to answer this, but in a statistical framework, I believe the differences are a bit more clearly cut. I'll be curious to see how others answer this from a purer ML/DL perspective.
I think one way in which they differ is that parameters (at least from a statistical standpoint) are something on which you can make inference, whereas a hyper-parameter is an element of the algorithm that is tuned to optimize it.
For a concrete example, say you are running a LASSO-type penalty for a linear regression model. The $\beta$ weights/coefficients are parameters as one can make a decision on the estimated values and determine relevance or directionality (i.e., check which coefficients are not 0 in a LASSO procedure, or which "protect against" vs. "increase" risk). Using the same LASSO example, the $\alpha$ weight on a penalty function can be considered a hyper-parameter, since the actual value of the $\alpha$ would not provide any insight into the model/post-hoc analysis.
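A tiny illustration of that distinction with the LASSO in R via glmnet, assuming a predictor matrix x and response y (note that glmnet calls the penalty weight lambda, the $\alpha$ of the paragraph above):
library(glmnet)
cv  <- cv.glmnet(x, y, alpha = 1)                        # the penalty weight is the hyper-parameter, tuned by CV
fit <- glmnet(x, y, alpha = 1, lambda = cv$lambda.min)
coef(fit)                                                # the beta coefficients are the parameters one interprets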
This is a bit of a "statistical" perspective of a difference b/w what is a parameter vs. a hyper-parameter, though that's one option with how to differentiate. With non-parametric algorithms, decision trees, and neural networks, this is where I think there are more gray areas. | Is there a hard distinction between hyperparameter vs parameter in machine learning? | That's a great question - I'm not sure what the best way to answer this, but in a statistical framework, I believe the differences are a bit more clearly cut. I'll be curious to see how others answer | Is there a hard distinction between hyperparameter vs parameter in machine learning?
That's a great question - I'm not sure what the best way to answer this, but in a statistical framework, I believe the differences are a bit more clearly cut. I'll be curious to see how others answer this from a purer ML/DL perspective.
I think one way in which they differ is that parameters (at last from a statistical standpoint) are something on which you can make inference on, whereas a hyper-parameter is an element of the algorithm that is tuned to optimize it.
For a concrete example, say you are running a LASSO-type penalty for a linear regression model. The $\beta$ weights/coefficients are parameters as one can make a decision on the estimated values and determine relevance or directionality (i.e., check which coefficients are not 0 in a LASSO procedure, or which "protect agaisnt" vs. "increase" risk). Using the same LASSO example, the $\alpha$ weight on a penalty function can be considered a hyper parameter, since the actual value of the $\alpha$ would not provide any insight into the model/post-hoc analysis.
This is a bit of a "statistical" perspective of a difference b/w what is a parameter vs. a hyper-parameter, though that's one option with how to differentiate. With non-parametric algorithms, decision trees, and neural networks, this is where I think there are more gray areas. | Is there a hard distinction between hyperparameter vs parameter in machine learning?
That's a great question - I'm not sure what the best way to answer this, but in a statistical framework, I believe the differences are a bit more clearly cut. I'll be curious to see how others answer |
49,693 | K-mean clustering label problem | Unfortunately, you cannot do that. Firstly, because your old cluster assignments will not be the same as the new cluster assignments. You can only try to define a mapping afterwards (not saying this is easy), which may not be successful if the two runs significantly differ. | K-mean clustering label problem | Unfortunately, you cannot do that. Firstly, because your old cluster assignments will not be the same as the new cluster assignments. You can only try to define a mapping afterwards (not saying this i | K-mean clustering label problem
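One way to attempt such a mapping afterwards is to match the new cluster centers to the old ones; a rough R sketch (assuming the same data matrix x and k clusters in both runs):
old <- kmeans(x, centers = k)
new <- kmeans(x, centers = k)
d   <- as.matrix(dist(rbind(old$centers, new$centers)))[1:k, (k + 1):(2 * k)]
map <- apply(d, 2, which.min)    # for each new cluster, the nearest old cluster
relabelled <- map[new$cluster]   # not guaranteed to be one-to-one if the runs differ a lot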
Unfortunately, you cannot do that. Firstly, because your old cluster assignments will not be the same as the new cluster assignments. You can only try to define a mapping afterwards (not saying this is easy), which may not be successful if the two runs significantly differ. | K-mean clustering label problem
Unfortunately, you cannot do that. Firstly, because your old cluster assignments will not be the same as the new cluster assignments. You can only try to define a mapping afterwards (not saying this i |
49,694 | Mixed ANOVA normality: which variables should be examined? (in universal and practical application with stats::aov) | TL;DR:
ANOVA pools information among all observations to get the best estimates of fixed effects, random effects, and error variance. If you want to examine normality of ANOVA residuals, doing so after all fixed and random effects are taken into account thus makes the most sense. Reliable ANOVA estimates don't require normality of residuals; the issue is the distribution of the test statistics. In repeated-measures ANOVA, issues like imbalance or mis-specification of correlation structures might be even more substantial obstacles to reliable statistical tests.
ANOVA is simply a particular type of a linear model, as described for example on this page of one of the sites that was linked from the question, and discussed extensively here. Like all linear models, ANOVA combines information from the combinations of predictor values to model the outcome values as a function of the predictors plus an error term. The error term is assumed to have a certain distribution shared among all cases, Gaussian with zero mean for standard ANOVA. Information about the distribution of the error terms is obtained by pooling across all the observations, smoothing out the vagaries that can happen just by chance within individual cells of the ANOVA design. A standard normal q-q diagnostic plot thus examines all the residual values, not those within individual cells.
Despite the usual assumption of Gaussian errors in an ANOVA model, the significance tests don't necessarily require that assumption to be met. Significance tests in ANOVA are tests on regression coefficients. It's thus the sampling distributions of those regression coefficients that must adequately meet assumptions when one performs a standard parametric test.
As @whuber put it in a crucially important comment:
What you really want to know is whether the assumed distributions of the ANOVA test statistics are sufficiently accurate to compute the p-values in which you are interested.
If the model assumptions are met and the shared error term has a Gaussian distribution then you know that tests on regression coefficients will be valid.* But strict normality of the error term isn't required for tests on the regression coefficients to be valid. Think about normally distributed error terms as sufficient but not always necessary for an adequately reliable significance test on linear model regression coefficients, including ANOVA.
That's not to say that it's useless to examine the distribution of residuals around model predictions that incorporate information from all cases. For example, the R lme4 package provides a normal q-q plot as one of its diagnostic plots; see page 33 of the vignette.
What you will often find, however, is that substantial deviations from normality in such a plot of residuals mean that the model itself is poorly specified. That might be the most useful information from such a plot.
With a mixed ANOVA model having only fixed categorical predictors and including all interactions, you shouldn't have to worry about linearity in the fixed-effect predictors themselves. But there could be an incorrect handling of the outcome variable (e.g., if it's fundamentally log-normal rather than normal), omission of critical covariates associated both with outcome and with the included predictors, or mis-specification of the random-effects structure. Fix those problems exposed by the diagnostic plot rather than obsess about the normality per se.
To evaluate the model all the diagnostic plots should be examined: not only the q-q plot for normality of residuals but also the fitted vs. residual plot and the scale-location plot and the various profile plots (see page 36 of the vignette) for mixed models and their random effects. Examine undue influence of particular observations, e.g. with the influence.ME package in R. This process, rather than a simple examination of normality, is critical to evaluating and improving the quality of the model specification.
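Here is a minimal sketch of those checks in R (simulated data and made-up variable names, with lme4 assumed; the real model would use your own fixed and random effects):

    # a minimal sketch: residual and random-effect diagnostics for a mixed model
    library(lme4)
    set.seed(1)
    d <- expand.grid(subj = factor(1:20), cond = factor(c("a", "b")), rep = 1:5)
    d$y <- as.numeric(d$cond) + rnorm(nrow(d))
    m <- lmer(y ~ cond + (1 | subj), data = d)

    r <- resid(m)
    qqnorm(r); qqline(r)              # normal q-q plot of the residuals
    plot(fitted(m), r)                # fitted vs. residuals: look for trends or funnels
    qqnorm(ranef(m)$subj[, 1])        # rough normality check of the random intercepts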
If the model is properly specified then the normality assumption on the sampling distribution of the regression coefficients can be reasonably reliable. With enough data the Central Limit Theorem can help with that despite non-normal residuals, although how much data is "enough" depends on the particular case. See this answer, for example. If you don't want to rely on that assumption, bootstrapping provides a way to get non-parametric confidence intervals. But that should be done only when the model itself is adequately specified.
As an edit to the question notes, some diagnostic plots can be generated from repeated-measures data analyzed by aov, which according to its manual page fits "an analysis of variance model by a call to lm for each stratum." Each stratum is a partitioning of the means of the observations by progressively more complex models, starting with the overall mean. As Venables and Ripley say on page 283 with respect to a simpler split-plot design:
Multistratum models may be fitted using aov, and are specified by a model formula of the form
response ~ mean.formula + Error (strata.formula)
In our example the strata.formula is B/V, specifying strata 2 and 3; the fourth stratum is included automatically as the "within" stratum, the residual stratum from the strata formula.
For more complicated models, the last stratum is thus the automatically included "within" stratum. Continuing on page 284: "It is not possible to associate [fitted values and residuals from the last stratum] uniquely with the plots of the original experiment." You need the residuals from "the projections of the original data vector onto the subspaces defined by each line in the analysis of variance tables." The residuals can be examined for every stratum, but only the final stratum takes all aspects of the model into account. This answer shows the code for the Venables and Ripley example in which the fourth stratum is the "within" stratum.
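For a rough idea of what that looks like in code, here is a sketch on simulated, balanced data rather than the Venables and Ripley example; it assumes, as in that example, that stats::proj() on the aov fit returns per-stratum projections with a "Residuals" column:

    # a minimal sketch: residuals from the final stratum of a multistratum aov fit
    set.seed(1)
    d <- expand.grid(subj = factor(1:12), A = factor(c("a1", "a2")),
                     B = factor(c("b1", "b2")), rep = 1:2)
    d$y <- rnorm(nrow(d))
    fit <- aov(y ~ A * B + Error(subj/(A * B)), data = d)

    pr  <- proj(fit)                            # one projection matrix per stratum
    res <- pr[[length(pr)]][, "Residuals"]      # residuals of the last ("within") stratum
    qqnorm(res); qqline(res)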
Before proceeding with aov, however, pay attention to the following quote from its help page:
Note
aov is designed for balanced designs, and the results can be hard to interpret without balance: beware that missing values in the response(s) will likely lose the balance. If there are two or more error strata, the methods used are statistically inefficient without balance, and it may be better to use lme in package nlme.
*This is more complicated with mixed models, for which there is dispute about the number of degrees of freedom to use in the test. But that dispute won't be resolved by examining the distribution of residuals. Tests on mixed models can also involve assumptions about the covariance structure of correlated observations.
49,695 | Who invented the "Histogram"? | The best account I know of at the moment is an article by Rufilanchas from 2017 [1]: in it, he says that Pearson, the first person to use the word (though not the first to use such a diagram), used it in relation to his belief that a vertical alignment of columns to represent frequency distributions is preferable to a horizontal one:
"...Pearson, who seemed to be mostly interested in the psychological effect of the difference in orientation of the bars, found some “optical advantage of vertical over horizontal columns” (Pearson, 1938: 144), hence the choice for a word specifically meaning a vertical structure like a mast as the element for the root of the new word “histogram”..."
We see how this preference for the vertical arrangement of the columns relates to the prefix 'histo' in a little more detail in the Oxford English Dictionary which states its etymology: "... Etymology: < histo- (in histology n.), ultimately < ancient Greek ἱστός mast, (upright) beam of a loom [3], (woven) web < ἵστασθαι , medio-passive of ἱστάναι to (cause to) stand (see stand v.)..."
References and notes:
[1] Rufilanchas, D.R. On the origin of Karl Pearson’s term 'histogram'. Revista Estadistica Española. 2017:192. pg 29-35 (you can get articles and issues at https://www.ine.es/ss/Satellite?c=Page&cid=1254735226759&pagename=ProductosYServicios%2FPYSLayout&L=0).
[2] "histo-, comb. form." OED Online. Oxford University Press, September 2021. Web. 1 October 2021.
[3] Though not part of this question, here it leads to how the prefix 'histo' relates to the medical term histology: tissues appear like the woven cloth made using the vertical loom, both of which use the word 'histo'; see Mossakowska-Gaubert, M. "A new kind of loom in early Roman Egypt? How iconography could explain (or not) papyrological evidence" (2020), in Mossakowska-Gaubert, M. Egyptian textiles and their production: 'word' and 'object'. Zea Books, pg 13-21, and https://chs.harvard.edu/susan-t-edmunds-picturing-homeric-weaving/#n.22.
49,696 | Do variable-selection methods (e.g. Elastic Net; Lasso) invalidate theory-based models in fields where little is known? | Doubly robust methods (Urminsky et al. "Using Double-Lasso Regression for Principled Variable Selection") have become very popular recently since they allow (see page 18, Concluding Remarks) "identifying which covariates to include and not include in analyses" (even if the number of variables is larger than the sample size as in your case).
This empirical approach alone cannot solve your problem (and I think none will do so entirely) since you will need some theory (in my view, and in the view of the well-known authors cited above [p. 18]):
the analytic method presented here cannot determine either the role that selected variables should play, or how their effects on the relationship of interest should be interpreted. A confound, a manipulation check and a mediator may all have similar statistical relationships in the data (MacKinnon, Krull, & Lockwood, 2000; Zhao, Lynch, & Chen, 2010), and these distinctions should typically be made on theoretical grounds.
That means that, with regard to your question, the elastic net, or any empirical approach on its own, will not necessarily say anything about the underlying "nature".
But the doubly robust approach might still be what you are looking for [p. 18]:
However, either including all covariates or ignoring covariates entirely, either because of the conceptual difficulty of identifying the theoretical role of the variable or because of the potential for covariates to be used improperly (i.e., in p-hacking), is no solution. Failing to control for valid covariates can yield biased parameter estimates in correlational analyses or in imperfectly randomized experiments and contributes to underpowered analyses even in effectively randomized experiments. As demonstrated in the analyses, double lasso variable selection can be useful as a principled method to identify covariates in analyses of correlations, moderation, mediation and experimental interventions, as well as to test for the effectiveness of randomization. While variable selection methods are no substitute for thinking about what the variables mean, the approach presented here can provide an empirical basis for determining which variables to think hard about.
There are also R-packages available.
Why a doubly robust method, and not, let's say, the Lasso alone [p. 5]?
The goal is to identify covariates for inclusion in two steps, finding those that predict the dependent variable and those that predict the independent variable. The second step is important, because exclusion of a covariate that is a modest predictor of the dependent variable but a strong predictor of the independent variable can create a substantial omitted variable bias.
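A minimal sketch of the double-lasso idea with glmnet (simulated data and made-up names; the packages referenced by the authors wrap the procedure, including the post-selection inference, much more carefully):

    # a minimal sketch of double-lasso covariate selection
    library(glmnet)
    set.seed(1)
    n <- 60; p <- 100
    W <- matrix(rnorm(n * p), n, p)     # candidate covariates (p > n)
    x <- W[, 1] + rnorm(n)              # focal predictor
    y <- 0.5 * x + W[, 1] + rnorm(n)    # outcome

    sel_y <- which(coef(cv.glmnet(W, y), s = "lambda.min")[-1] != 0)  # step 1: predict the outcome
    sel_x <- which(coef(cv.glmnet(W, x), s = "lambda.min")[-1] != 0)  # step 2: predict the focal predictor
    keep  <- union(sel_y, sel_x)

    summary(lm(y ~ x + W[, keep, drop = FALSE]))   # final regression with the selected controls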
49,697 | Do variable-selection methods (e.g. Elastic Net; Lasso) invalidate theory-based models in fields where little is known? | Prediction models are about making good predictions. That's what you are optimizing (in terms of the metric you optimized your elastic net parameters for) when you go for your second option. Whatever hyperparameter settings help the model predict well, as assessed by k-fold CV, get used, and then you get some resulting coefficients that are non-zero. You should really not overinterpret those, because post-model-selection inference is difficult. There's quite a bit of literature about post-selection inference that tries to find ways of doing this that are in some sense "valid", but it's tricky. Certainly, with the numbers you describe you would expect to miss out on some predictors that are in truth quite relevant, just by chance. There's also some serious risk that some spurious predictors end up in your model, but that's where methods for post-selection inference would come in to limit that to some degree.
However, don't expect too much. You have a tiny dataset and realistically only so much can be done (see the 2nd quote here: https://en.wikiquote.org/wiki/John_Tukey).
The first approach is less problematic for interpreting the coefficients, because you at least do not have the model selection messing everything up in terms of interpretation. However, you should still be careful not to overinterpret statistical significance (firstly, predictors that are truly important might fail to reach significance just by chance and less important ones can reach it, secondly you of course have a multiple comparison problem) or the coefficients (due to the small sample size even changing sign - aka type S error - or simply completely getting the magnitude wrong - aka type M error - are very real issues).
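One way to see how shaky the selected set is at such sample sizes is to refit the penalized model on bootstrap resamples and record how often each predictor is selected; a minimal sketch with glmnet and simulated data (a fixed mixing weight is assumed for brevity):

    # a minimal sketch: selection frequency of an elastic net over bootstrap resamples
    library(glmnet)
    set.seed(1)
    n <- 50; p <- 80
    x <- matrix(rnorm(n * p), n, p)
    y <- x[, 1] - x[, 2] + rnorm(n)

    sel_freq <- rowMeans(replicate(50, {
      i  <- sample(n, replace = TRUE)
      cv <- cv.glmnet(x[i, ], y[i], alpha = 0.5)
      as.numeric(coef(cv, s = "lambda.min")[-1] != 0)
    }))
    round(head(sort(sel_freq, decreasing = TRUE), 10), 2)

Predictors that drop in and out across resamples are exactly the ones whose non-zero coefficients should not be over-read.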
49,698 | Multiple regression with mixed continuous/categorical variables: Dummy coding, scaling, regularization | We do standardization/normalization to put our features into the $[0,1]$ or $[-1,1]$ range. Suppose we are using min-max normalization to put the values in the range $[0,1]$. The answers to your questions are as follows.
Should I standardize/scale my data WITH or WITHOUT dummy coded cat. variables?
There is no clear yes/no answer to this question, but it is not mandatory to scale one-hot-encoded or dummy-encoded features. The intuition for why that is so is as follows.
Say you have two encoded vectors $A = [0 1 0]$ and $B = [1 0 0]$. You can see that $|A| = \sqrt{0^2+1^2+0^2}$ and $|B|=\sqrt{1^2+0^2+0^2}$ will always equal $1$, and the distance between them will be $\sqrt{1^2 + 1^2} = \sqrt{2} \approx 1.41$. This is why you need not standardize: the magnitude of a one-hot encoded feature is $1$ and the distance between the vectors is $\sqrt{2}$, so the variance in such a feature is not large enough for standardization to be worthwhile. When should you consider standardizing? When you have vectors like $[111011]$ and $[000001]$, in which the variability is much higher.
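A quick check of those numbers in base R, using the same two vectors:

    A <- c(0, 1, 0); B <- c(1, 0, 0)
    c(sqrt(sum(A^2)), sqrt(sum(B^2)))   # both norms equal 1
    sqrt(sum((A - B)^2))                # distance is sqrt(2), about 1.41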
What is the generally recommended preprocessing order in total?
You should do Dummy coding -> polynomial transformation -> standardization/scaling -> fit model.
The reason for doing polynomial featurization before standardization is quite simple: if you standardize first, your variable will be in the range $[0,1]$, and squaring it will then make the polynomial feature very small, which can hurt the numerical stability of that feature in your model.
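A minimal sketch of that order in R (made-up data and names; the categorical variable is dummy coded, the continuous variables get polynomial terms, only those continuous columns are scaled, and then a penalized model is fitted):

    # a minimal sketch: dummy coding -> polynomial terms -> scaling -> fit
    set.seed(1)
    d <- data.frame(x1 = rnorm(100), x2 = rnorm(100),
                    g  = factor(sample(c("a", "b", "c"), 100, replace = TRUE)))
    y <- d$x1 + d$x1^2 + (d$g == "b") + rnorm(100)

    X <- model.matrix(~ poly(x1, 2, raw = TRUE) + x2 + g, data = d)[, -1]  # dummies + polynomials
    X[, 1:3] <- scale(X[, 1:3])          # scale only the continuous columns, not the 0/1 dummies

    library(glmnet)
    fit <- cv.glmnet(X, y, alpha = 0)    # e.g. a ridge fit on the prepared design matrix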
Your remaining questions are not clear to me; please elaborate on them.
Hope this helps!
49,699 | What can we say about P(X<Y and X<Z)? | Yes, there is a bound.
Since $Y$ and $Z$ are exchangeable, denote
$$
P(X<Y<Z)=P(X<Z<Y)=p_1 \\
P(Y<X<Z)=P(Z<X<Y)=p_2 \\
P(Y<Z<X)=P(Z<Y<X)=p_3 \\
$$
So the target can be rewritten as $P(X<Y\text{ and }X<Z) = P(X<Y<Z) + P(X<Z<Y) = 2p_1$.
According to the permutations and the additional condition, $p_1,p_2,p_3$ satisfy the following relations:
$$
\begin{cases}
2p_1+2p_2+2p_3=1\\
2p_1+p_2=\frac{2}{3}\\
\end{cases}
$$
Solving this linear system, we get
$$
2p_1=\frac{2}{3}-p_2\\
2p_3=\frac{1}{3}-p_2
$$
In order to make $p_1\ge 0,p_2\ge 0,p_3\ge 0$, $p_2$ must satisfy $0\le p_2\le \frac{1}{3}$, so that $2p_1=\frac{2}{3}-p_2 \ge \frac{1}{3}$ and $2p_1 \le \frac{2}{3}$. That is,
$$
\frac{1}{3} \le P(X<Y\text{ and }X<Z) \le \frac{2}{3}
$$
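A quick simulation is consistent with this. Take one particular case (not a proof): $Y,Z$ i.i.d. standard normal and $X$ normal with a shifted mean chosen so that $P(X<Y)=2/3$; the estimated probability then lands inside $[1/3, 2/3]$:

    # a minimal sketch: one distribution satisfying the assumptions, checked by simulation
    set.seed(1)
    n <- 1e6
    delta <- sqrt(2) * qnorm(2/3)    # shift so that P(X < Y) = 2/3
    X <- rnorm(n, mean = -delta)
    Y <- rnorm(n); Z <- rnorm(n)

    mean(X < Y)            # close to 2/3, the assumed condition
    mean(X < Y & X < Z)    # falls between 1/3 and 2/3, as the bound requires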
49,700 | Formula of the Chebyshev's inequality for an asymmetric interval | What you are observing here is an idiosyncrasy of the general Chebyshev inequality. Generally speaking, the inequality gets better as the midpoint of the interval gets closer to the mean $\mu$, and it also gets better as the length of the interval increases. However, if you hold one of the bounds constant and move the other one out to expand the interval, eventually you pass a point where the midpoint of the interval is now far from the mean, and the effect of further movement of the midpoint away from the mean outweighs the effect of expanding the length of the interval. As such, the probability bound gets worse rather than getting better.
Describing the phenomenon in greater generality: A simpler and more general way to frame this phenomenon is in terms of the standardised part-lengths of the interval, which I will denote by:
$$k_- = \frac{\mu-l}{\sigma}
\quad \quad \quad \quad \quad
k_+ = \frac{u-\mu}{\sigma}.$$
The lower probability bound given by the Chebyshev inequality can be written as:
$$B(k_-, k_+) = 4 \cdot \frac{k_- k_+ - 1}{(k_- + k_+)^2},$$
and the bound is "binding" (i.e., greater than zero) if and only if the interval contains the mean $\mu$ in its interior and we also have $k_- k_+>1$. If you hold one of these arguments constant, it can easily be shown that this function is strictly quasi-concave in the other argument. In particular, holding $k_-$ constant and varying $k_+$ gives the maximiser:
$$\underset{k_+}{\text{arg max}} \ B(k_-, k_+) = \hat{k}_+ = k_- + \frac{2}{k_-}
\quad \quad \quad \quad \quad
\underset{k_+}{\text{max}} \ B(k_-, k_+) = B(\hat{k}_+) = \frac{k_-^2}{k_-^2+1}.$$
The bound function is increasing up to $k_+ = \hat{k}_+$ and then after this it decreases. As stated above, this occurs because after we get past this point, the negative effect of moving the midpoint of the interval away from the mean outweighs the positive effect of making the interval wider.
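To see that rise and fall numerically, here is a small sketch that simply evaluates the formulas above in R (my own code, with an arbitrary choice of $k_-$):

    # a minimal sketch: the bound B(k_minus, k_plus) at a fixed k_minus
    B <- function(k_minus, k_plus) 4 * (k_minus * k_plus - 1) / (k_minus + k_plus)^2

    k_minus <- 1.5
    k_hat   <- k_minus + 2 / k_minus            # the maximising k_plus (about 2.83)
    round(B(k_minus, seq(1, 10, by = 0.5)), 3)  # increases up to k_hat, then decreases
    B(k_minus, k_hat)                           # equals k_minus^2 / (k_minus^2 + 1) = 0.6923...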
Of course, the true probability of the interval cannot be getting smaller as you move a boundary point outward to make the interval larger. Thus, you may legitimately use the probability bound at $k_+ = \hat{k}_+$ whenever you have $k_+ > \hat{k}_+$ (and it is desirable to do this, since that lower bound is larger). Indeed, this is what is done in adjusted versions of the interval. In the adjusted version, we take the generalised Chebyshev interval to be given by the formula you have written, but adjusted so that it doesn't get smaller as you move outward.