idx (int64, 1–56k) | question (string, 15–155 chars) | answer (string, 2–29.2k chars, nullable) | question_cut (string, 15–100 chars) | answer_cut (string, 2–200 chars, nullable) | conversation (string, 47–29.3k chars) | conversation_cut (string, 47–301 chars)
---|---|---|---|---|---|---
50,001 | State space model with regression effects | The way this is done is to first establish the relationship between $\alpha_{t}$ and $\alpha_{t}^{\ast}$ and proceed from there. We take the initial state equations above and take
$$\alpha_{t}^{\ast} = \mathsf{T}_{t}^{-1}\mathsf{W}_{t}\beta + \alpha_{t},$$
we see that we can write
$$\alpha_{t + 1}^{\ast} = \mathsf{T}_{t}\alpha_{t}^{\ast} + \mathsf{R}_{t}\eta_{t}.$$
That's that one done. Now, for the first of the above equations we take
$$\mathsf{X}_{t}^{\ast} = \mathsf{X}_{t} - \mathsf{Z}_{t}\mathsf{T}_{t}^{-1}\mathsf{W}_{t},$$
and we can write
$$y_{t} = \mathsf{X}_{t}^{\ast}\beta + \mathsf{Z}_{t}\alpha_{t}^{\ast} + \epsilon_{t}.$$
You can convince yourself by substituting the expressions for $\alpha_{t}^{\ast}$ and $\mathsf{X}_{t}^{\ast}$ back into the equations and you will see that you get the initial ones back.
I hope this helps. | State space model with regression effects | The way this is done, is to first establish the relationship between $\alpha_{t}$ and $\alpha_{t}^{\ast}$ and proceed from there. We take the initial state equations above and take
$$\alpha_{t}^{\ast} | State space model with regression effects
The way this is done, is to first establish the relationship between $\alpha_{t}$ and $\alpha_{t}^{\ast}$ and proceed from there. We take the initial state equations above and take
$$\alpha_{t}^{\ast} = \mathsf{T}_{t}^{-1}\mathsf{W}_{t}\beta + \alpha_{t},$$
we see that we can write
$$\alpha_{t + 1}^{\ast} = \mathsf{T}_{t}\alpha_{t}^{\ast} + \mathsf{R}_{t}\eta_{t}.$$
That's that one done. Now, for the first of the above equations we take
$$\mathsf{X}_{t}^{\ast} = \mathsf{X}_{t} - \mathsf{Z}_{t}\mathsf{T}_{t}^{-1}\mathsf{W}_{t},$$
and we can write
$$y_{t} = \mathsf{X}_{t}^{\ast}\beta + \mathsf{Z}_{t}\alpha_{t}^{\ast} + \epsilon_{t}.$$
You can convince yourself by substituting the expressions for $\alpha_{t}^{\ast}$ and $\mathsf{X}_{t}^{\ast}$ back into the equations and you will see that you get the initial ones back.
I hope this helps. | State space model with regression effects
The way this is done, is to first establish the relationship between $\alpha_{t}$ and $\alpha_{t}^{\ast}$ and proceed from there. We take the initial state equations above and take
$$\alpha_{t}^{\ast} |
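A note on the verification step in the answer above: for the observation equation the substitution can be written out explicitly. Using the definitions of $\mathsf{X}_{t}^{\ast}$ and $\alpha_{t}^{\ast}$,
$$\mathsf{X}_{t}^{\ast}\beta + \mathsf{Z}_{t}\alpha_{t}^{\ast} = \left(\mathsf{X}_{t} - \mathsf{Z}_{t}\mathsf{T}_{t}^{-1}\mathsf{W}_{t}\right)\beta + \mathsf{Z}_{t}\left(\mathsf{T}_{t}^{-1}\mathsf{W}_{t}\beta + \alpha_{t}\right) = \mathsf{X}_{t}\beta + \mathsf{Z}_{t}\alpha_{t},$$
so $y_{t} = \mathsf{X}_{t}^{\ast}\beta + \mathsf{Z}_{t}\alpha_{t}^{\ast} + \epsilon_{t}$ collapses back to the original observation equation; the state equation is checked by the same kind of substitution.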
50,002 | multi stage binomial "process" | From $$\mathbb{E}[s^{X_1}]=(sp+q)^K$$
(where $q=1-p$), it is rather straightforward to show that
$$\mathbb{E}[s^{X_1+\ldots+X_\ell}]=\left\{s(1-q^\ell)+q^\ell\right\}^K$$ Indeed, if we assume it holds for a given $\ell$ (and it does for $\ell=1$), then
\begin{align*}\mathbb{E}[s^{X_1+\ldots+X_{\ell+1}}]&=\mathbb{E}[\mathbb{E}[s^{X_1+\ldots+X_{\ell+1}}|X_1+\ldots+X_\ell]]\\ &=\mathbb{E}[s^{X_1+\ldots+X_{\ell}}(sp+q)^{K-X_1-\ldots-X_\ell}]\\
&=(sp+q)^K \left\{ \frac{s}{sp+q}\,(1-q^\ell)+q^\ell\right\}^K\\
&=\left\{s(1-q^{\ell+1})+q^{\ell+1}\right\}^K
\end{align*}
From there, it follows that, for a given $\ell$, $X_1+\ldots+X_\ell$ is distributed as a Binomial $\text{B}(K,1-q^\ell)$ random variable. Hence,
\begin{align*}\mathbb{P}(L=\ell)&=\mathbb{P}(X_1+\ldots+X_{\ell-1}<K=X_1+\ldots+X_{\ell})\\ &=\mathbb{E}[\mathbb{P}(K=X_1+\ldots+X_{\ell}|X_1+\ldots+X_{\ell-1})\mathbb{I}_{X_1+\ldots+X_{\ell-1}<K}]\\&=\mathbb{E}[p^{K-X_1-\ldots-X_{\ell-1}}\,\mathbb{I}_{X_1+\ldots+X_{\ell-1}<K}]\\&=\sum_{i=0}^{K-1} {K \choose i} (1-q^{\ell-1})^i (q^{\ell-1})^{K-i} p^{K-i}\\
&=\sum_{i=0}^{K-1} {K \choose i} (1-q^{\ell-1})^i \left[q^{\ell-1} p\right]^{K-i}\\
&=\left[1-q^{\ell-1}+q^{\ell-1} p\right]^K-(1-q^{\ell-1})^K
\end{align*}
This gives you the distribution of $L$.
As a checkup, you can run the following code
T=10^6        # number of simulated sequences
N=13          # plays the role of K in the derivation above
p=.85
ell=rep(1,T)  # stage at which the K-th success is reached, per sequence
for (t in 1:T){
  x=rbinom(1,N,p)
  while (x<N){ ell[t]=ell[t]+1; x=x+rbinom(1,N-x,p)}
}
and compare the frequencies with
probel=function(N,p,el){
(1-(1-p)^(el-1)+p*(1-p)^(el-1))^N-(1-(1-p)^(el-1))^N} | multi stage binomial "process" | From $$\mathbb{E}[s^{X_1}]=(sp+q)^K$$
(where $q=1-p$), it is rather straightforward to show that
$$\mathbb{E}[s^{X_1+\ldots+X_\ell}]=\left\{s(1-q^\ell)+q^\ell\right\}^K$$ Indeed, if we assume it holds | multi stage binomial "process"
From $$\mathbb{E}[s^{X_1}]=(sp+q)^K$$
(where $q=1-p$), it is rather straightforward to show that
$$\mathbb{E}[s^{X_1+\ldots+X_\ell}]=\left\{s(1-q^\ell)+q^\ell\right\}^K$$ Indeed, if we assume it holds for a given $\ell$ (and it does for $\ell=1$), then
\begin{align*}\mathbb{E}[s^{X_1+\ldots+X_{\ell+1}}]&=\mathbb{E}[\mathbb{E}[s^{X_1+\ldots+X_{\ell+1}}|X_1+\ldots+X_\ell]]\\ &=\mathbb{E}[s^{X_1+\ldots+X_{\ell}}(sp+q)^{K-X_1-\ldots-X_\ell}]\\
&=(sp+q)^K \left\{ \frac{s}{sp+q}\,(1-q^\ell)+q^\ell\right\}^K\\
&=\left\{s(1-q^{\ell+1})+q^{\ell+1}\right\}^K
\end{align*}
From there, it follows that, for a given $\ell$, $X_1+\ldots+X_\ell$ is distributed as a Binomial $\text{B}(K,1-q^\ell)$ random variable. Hence,
\begin{align*}\mathbb{P}(L=\ell)&=\mathbb{P}(X_1+\ldots+X_{\ell-1}<K=X_1+\ldots+X_{\ell})\\ &=\mathbb{E}[\mathbb{P}(K=X_1+\ldots+X_{\ell}|X_1+\ldots+X_{\ell-1})\mathbb{I}_{X_1+\ldots+X_{\ell-1}<K}]\\&=\mathbb{E}[p^{K-X_1-\ldots-X_{\ell-1}}\,\mathbb{I}_{X_1+\ldots+X_{\ell-1}<K}]\\&=\sum_{i=0}^{K-1} {K \choose i} (1-q^{\ell-1})^i (q^{\ell-1})^{K-i} p^{K-i}\\
&=\sum_{i=0}^{K-1} {K \choose i} (1-q^{\ell-1})^i \left[q^{\ell-1} p\right]^{K-i}\\
&=\left[1-q^{\ell-1}+q^{\ell-1} p\right]^K-(1-q^{\ell-1})^K
\end{align*}
This gives you the distribution of $L$.
As a checkup, you can run the following code
T=10^6
N=13
p=.85
ell=rep(1,T)
for (t in 1:T){
x=rbinom(1,N,p)
while (x<N){ ell[t]=ell[t]+1; x=x+rbinom(1,N-x,p)}}
and compare the frequencies with
probel=function(N,p,el){
(1-(1-p)^(el-1)+p*(1-p)^(el-1))^N-(1-(1-p)^(el-1))^N} | multi stage binomial "process"
From $$\mathbb{E}[s^{X_1}]=(sp+q)^K$$
(where $q=1-p$), it is rather straightforward to show that
$$\mathbb{E}[s^{X_1+\ldots+X_\ell}]=\left\{s(1-q^\ell)+q^\ell\right\}^K$$ Indeed, if we assume it holds |
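One way to carry out the comparison suggested in the answer above, assuming the objects T, N, p, ell and the function probel() from the code above are in the workspace (the table layout is just one illustrative choice):
empir <- table(factor(ell, levels=1:max(ell)))/T   # simulated frequencies of L
theor <- probel(N, p, 1:max(ell))                  # closed-form probabilities
round(cbind(simulated=c(empir), exact=theor), 4)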
50,003 | Linear regression VS linear modeling | Comment made into an answer per suggestion of gung.
Linear modeling can have meanings, outside Statistics, well beyond the Wikipedia entry Linear Model in whuber's comment above. For instance, Linear Programming https://en.wikipedia.org/wiki/Linear_programming is the minimization or maximization of a linear function of several (could be millions) variables subject to linear constraints on those variables. Creation of the model to be solved by Linear Programming is considered to be linear modeling.
Without Linear Programming (it is widely used in oil refining), the gasoline (petrol) you buy for your car would be more expensive, and transportation would cost more (aside from petrol cost). I would venture to say that Linear Programming (to include Mixed Integer Linear Programming) plays a far more important role in the U.S. and world economies than does linear regression, and is THE most important and greatest-impact linear modeling that is performed.
That said, I'm a nonlinear guy, so I see nonlinearity everywhere. On the other hand, I sometimes see how to restrict linearity to cost functions (input data to optimization), and thereby still perform "linear modeling" and solution, even though I have managed to get (sneak) significant and vital nonlinearity into the "linear" model. | Linear regression VS linear modeling | Comment made into an answer per suggestion of gung.
Linear modeling can have meanings, outside Statistics, well beyond the Wikipedia entry Linear Model in whuber's comment above. For instance, Linear | Linear regression VS linear modeling
Comment made into an answer per suggestion of gung.
Linear modeling can have meanings, outside Statistics, well beyond the Wikipedia entry Linear Model in whuber's comment above. For instance, Linear Programming https://en.wikipedia.org/wiki/Linear_programming is the minimization or maximization of a linear function of several (could be millions) variables subject to linear constraints on those variables. Creation of the model to be solved by Linear Programming is considered to be linear modeling.
Without Linear Programming (it is widely used in oil refining), the gasoline (petrol) you buy for your car would be more expensive, and transportation would cost more (aside from petrol cost). I would venture to say that Linear Programming (to include Mixed Integer Linear Programming) plays a far more important role in the U.S. and world economies than does linear regression, and is THE most important and greatest impact linear modeling which is performed.
That said, I'm a nonlinear guy, so I see nonlinearity everywhere. On the other hand, I sometimes see how to restrict linearity to cost functions (input data to optimization), and thereby still perform "linear modeling" and solution, even though I have managed to get (sneak) significant and vital nonlinearity into the "linear" model. | Linear regression VS linear modeling
Comment made into an answer per suggestion of gung.
Linear modeling can have meanings, outside Statistics, well beyond the Wikipedia entry Linear Model in whuber's comment above. For instance, Linear |
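To make the linear-programming idea in the answer above concrete, here is a toy problem solved with R's lpSolve package; the package choice and all the numbers are illustrative assumptions, not anything from the answer itself. The problem is: maximise $3x_1+2x_2$ subject to $x_1+x_2\le 4$, $x_1+3x_2\le 6$ and $x_1,x_2\ge 0$.
library(lpSolve)                        # assumed installed
res <- lp(direction = "max",
          objective.in = c(3, 2),       # coefficients of the linear objective
          const.mat    = rbind(c(1, 1), c(1, 3)),
          const.dir    = c("<=", "<="),
          const.rhs    = c(4, 6))       # decision variables are non-negative by default
res$solution                            # optimal (x1, x2)
res$objval                              # optimal objective value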
50,004 | Linear regression VS linear modeling | From my point of view, linear regression is one kind of linear modeling. Thus, this modeling can refer to a full rank model (regression) or to a model not of full rank (experimental designs, for example). Modeling is a more general term, with several applications. | Linear regression VS linear modeling | From my point of view, linear regression is one kind of linear modeling. Thus, this modeling can refer to a full rank model (regression) or to a model not of full rank (experimental designs, for examp | Linear regression VS linear modeling
From my point of view, linear regression is one kind of linear modeling. Thus, this modeling can refer to a full rank model (regression) or to a model not of full rank (experimental designs, for example). Modeling is a more general term, with several applications. | Linear regression VS linear modeling
From my point of view, linear regression is one kind of linear modeling. Thus, this modeling can refer to a full rank model (regression) or to a model not of full rank (experimental designs, for examp |
50,005 | Linear regression VS linear modeling | My impression is that the term linear regression (especially linear regression 'analysis') is used more often to explain relations while modeling is used more often in context of predictions and predictive models. | Linear regression VS linear modeling | My impression is that the term linear regression (especially linear regression 'analysis') is used more often to explain relations while modeling is used more often in context of predictions and predi | Linear regression VS linear modeling
My impression is that the term linear regression (especially linear regression 'analysis') is used more often to explain relations while modeling is used more often in context of predictions and predictive models. | Linear regression VS linear modeling
My impression is that the term linear regression (especially linear regression 'analysis') is used more often to explain relations while modeling is used more often in context of predictions and predi |
50,006 | What are some good references on how probability theory got mathematically rigorous? | I don't know if this counts as an answer, or just a comment (moderators please!), but I believe one should look to the works of Anders Hald.
A History of Parametric Statistical Inference from Bernoulli to Fisher, 1713–1935 (2007). Springer.
A History of Probability and Statistics and Their Applications before 1750 (2003). Wiley.
A History of Mathematical Statistics from 1750 to 1930 (1998). Wiley.
ADDENDUM
The question was cross-posted to math.SE, but given its nature, I believe this time cross-posting had a beneficial effect - the books suggested by the other answer are totally different, see
https://math.stackexchange.com/questions/1030229/what-are-some-good-references-on-how-probability-theory-got-mathematically-rigor | What are some good references on how probability theory got mathematically rigorous? | I don't know if this counts as an answer, or just a comment (moderators please!), but I believe one should look to the works of Anders Hald.
A History of Parametric Statistical Inference from Bernoull | What are some good references on how probability theory got mathematically rigorous?
I don't know if this counts as an answer, or just a comment (moderators please!), but I believe one should look to the works of Anders Hald.
A History of Parametric Statistical Inference from Bernoulli to Fisher, 1713–1935 2007 Springer
A History of Probability and Statistics and Their Applications before 1750 (2003) Wiley
A History of Mathematical Statistics from 1750 to 1930 (1998).Wiley
ADDENDUM
The question was cross-posted to math.SE, but given its nature, I believe this time cross-posting had a beneficial effect - the books suggested by the other answer are totally different, see
https://math.stackexchange.com/questions/1030229/what-are-some-good-references-on-how-probability-theory-got-mathematically-rigor | What are some good references on how probability theory got mathematically rigorous?
I don't know if this counts as an answer, or just a comment (moderators please!), but I believe one should look to the works of Anders Hald.
A History of Parametric Statistical Inference from Bernoull |
50,007 | Is it possible to have a case where $D'$ is zero but Logistic Regression is still able to classify accurately? | Your intuition is correct: such an example is impossible.
To see why not, consider both $M_1$ and $M_2$ as collections of $p$-vectors. Because the predicted value of any vector in a logistic regression is a linear function, perfect prediction means there exists a codimension-$1$ affine hyperplane that separates all the points in $M_1$ from those in $M_2$. That implies their centroids cannot coincide, QED.
In this figure $p=2$ and the groups have sizes $30$ (red circles) and $10$ (blue triangles). Their centroids are shown as corresponding filled graphics. Perfect separation occurs, as shown by the gray dotted line. Since the centroids must lie on opposite sides of this line, they cannot coincide. | Is it possible to have a case where $D'$ is zero but Logistic Regression is still able to classify a | Your intuition is correct: such an example is impossible.
To see why not, consider both $M_1$ and $M_2$ as collections of $p$-vectors. Because the predicted value of any vector in a logistic regressi | Is it possible to have a case where $D'$ is zero but Logistic Regression is still able to classify accurately?
Your intuition is correct: such an example is impossible.
To see why not, consider both $M_1$ and $M_2$ as collections of $p$-vectors. Because the predicted value of any vector in a logistic regression is a linear function, perfect prediction means there exists a codimension-$1$ affine hyperspace that separates all the points in $M_1$ from those in $M_2$. That implies their centroids cannot coincide, QED.
In this figure $p=2$ and the groups have sizes $30$ (red circles) and $10$ (blue triangles). Their centroids are shown as corresponding filled graphics. Perfect separation occurs, as shown by the gray dotted line. Since the centroids must lie on opposite sides of this line, they cannot coincide. | Is it possible to have a case where $D'$ is zero but Logistic Regression is still able to classify a
Your intuition is correct: such an example is impossible.
To see why not, consider both $M_1$ and $M_2$ as collections of $p$-vectors. Because the predicted value of any vector in a logistic regressi |
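A small R sketch of the geometric point in the answer above, with group sizes matching the described figure (30 and 10 points) and a configuration that is separable by construction; the data are purely illustrative:
set.seed(17)
m1 <- cbind(runif(30, -3, -1), rnorm(30))   # group 1, entirely to the left of the line x = 0
m2 <- cbind(runif(10,  1,  3), rnorm(10))   # group 2, entirely to the right of it
colMeans(m1)                                # centroid of group 1
colMeans(m2)                                # centroid of group 2: necessarily different
all(m1[, 1] < 0) && all(m2[, 1] > 0)        # the line x = 0 separates the groups perfectly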
50,008 | group fixed-effects, not individual-fixed effects using plm in R | I have worked on similar projects and am confronting one right now. The way that we handle this is to put in a fixed effect for each village and then to cluster the standard errors by village. This is not a perfect solution, but is fairly standard practice.
The plm package in R, the xtreg ..., fe command in Stata, and the traditional fixed-effects (within) estimator are designed to follow individuals. I believe one name for the method that you want is a hierarchical linear model.
The simplest implementation in R would be something like
myLM <- lm(y ~ x + v + v.t*t, data=df)
where y is the outcome of interest, x is some set of controls, v is a factor variable for the villages, v.t is a binary (factor) variable indicating whether a village was treated, and t is an indicator for pre-post treatment.
For standard inference, it is typical and recommended to produce clustered standard errors using either the multiwayvcov package or the clusterSEs package.
Another method for inference, and the preferred method in Bertrand, Duflo & Mullainathan, 2004 is to perform a placebo test, where you vary "treatment" across all villages, form an empirical CDF, and see where the effect of treatment for the truly treated village sits in that distribution. Note that this is roughly the same method recommended for inference with synthetic controls of Abadie, Diamond, and Hainmueller, and has ties back to Fisher's 1935 text. | group fixed-effects, not individual-fixed effects using plm in R | I have worked on similar projects and am confronting one right now. The way that we handle this is to put in a fixed effect for each village and then to cluster the standard errors by village. This is | group fixed-effects, not individual-fixed effects using plm in R
I have worked on similar projects and am confronting one right now. The way that we handle this is to put in a fixed effect for each village and then to cluster the standard errors by village. This is not a perfect solution, but is fairly standard practice.
The plm package in R and xtreg ..., fe command in Stata, and the traditional fixed effect (within) estimator are designed to follow individuals. I believe one of the names for the method that you want is called a hierarchical linear model.
The simplest implementation in R would be something like
myLM <- lm(y ~ x + v + v.t*t, data=df)
where y is the outcome of interest, x is some set of controls, v is a factor variable for the villages, v.t is a binary (factor) variable indicating whether a village was treated, and t is an indicator for pre-post treatment.
For standard inference, it is typical and recommended to produce clustered standard errors use either the multiwayvcov package or clusterSEs package.
Another method for inference, and the preferred method in Bertrand, Duflo & Mullainathan, 2004 is to perform a placebo test, where you vary "treatment" across all villages, form an empirical CDF, and see where the effect of treatment for the truly treated village sits in that distribution. Note that this is roughly the same method recommended for inference with synthetic controls of Abadie, Diamond, and Hainmueller, and has ties back to Fisher's 1935 text. | group fixed-effects, not individual-fixed effects using plm in R
I have worked on similar projects and am confronting one right now. The way that we handle this is to put in a fixed effect for each village and then to cluster the standard errors by village. This is |
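A hedged sketch of the clustered-standard-error step described in the answer above, using the multiwayvcov package it mentions together with lmtest::coeftest (an extra package assumed here); y, x, v, v.t, t and df are the hypothetical variables from the answer, not a real dataset:
library(multiwayvcov)
library(lmtest)
myLM <- lm(y ~ x + v + v.t*t, data=df)
vc   <- cluster.vcov(myLM, df$v)   # variance-covariance matrix clustered by village
coeftest(myLM, vcov = vc)          # coefficient table with village-clustered standard errors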
50,009 | What is the difference between the M5 regression model tree and the Cubist method for regression? | As you mentioned, in the documentation for cubist here, they state that it is an extension to the M5 model. The specifications seem to overlap with the description of the M5 model that you have mentioned above. In the caret documentation, they specify M5, M5Rules and cubist as M5 (RWeka) Models here.
I guess M5 is the RWeka package implementation and cubist comes from the separate Cubist package (more recent), and hence is an improvement over the algorithm. It is still uncertain how it may be better. Some further light from anyone would be great.
As you mentioned, in the documentation for cubist here, they state that it is an extension to M5 model. The specifications seems to be overlapping with description of M5 model that you have mentioned above. In caret documentation, they specify M5, M5Rules and cubist as M5 (RWeka) Models here.
I guess M5 is RWeka package implementation and cubist is from separate cubist package implementation (more recent), and hence an improvement over the algorithm. It is still uncertain, how it may be better. Some further light by anyone would be great. | What is the difference between the M5 regression model tree and the Cubist method for regression?
As you mentioned, in the documentation for cubist here, they state that it is an extension to M5 model. The specifications seems to be overlapping with description of M5 model that you have mentioned |
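For what it's worth, a minimal sketch of fitting both families directly from the underlying packages (Cubist and RWeka, which caret wraps as "cubist" and "M5"); dat and the response y are hypothetical placeholders:
library(Cubist)
library(RWeka)
x <- dat[, setdiff(names(dat), "y")]
fit_cubist <- cubist(x = x, y = dat$y, committees = 1)   # rule-based model with linear models in the rules
fit_m5     <- M5P(y ~ ., data = dat)                     # M5' model tree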
50,010 | What is the difference between the M5 regression model tree and the Cubist method for regression? | From what I have understood, in the cubist algorithm a linear model is made per decision node, and that model is then extended each node further down the tree. This should result in a more continuous predicted value whereas M5P might suffer from discontinuities when jumping from one leaf node to the other by crossing a decision node. | What is the difference between the M5 regression model tree and the Cubist method for regression? | From what I have understood, in the cubist algorithm a linear model is made per decision node, and that model is then extended each node further down the tree. This should result in a more continuous | What is the difference between the M5 regression model tree and the Cubist method for regression?
From what I have understood, in the cubist algorithm a linear model is made per decision node, and that model is then extended each node further down the tree. This should result in a more continuous predicted value whereas M5P might suffer from discontinuities when jumping from one leaf node to the other by crossing a decision node. | What is the difference between the M5 regression model tree and the Cubist method for regression?
From what I have understood, in the cubist algorithm a linear model is made per decision node, and that model is then extended each node further down the tree. This should result in a more continuous |
50,011 | How to determine overlap of two empirical distribution based on quantiles? | Because you will be doing this for $\binom{10}{2}=45$ pairs of distributions, you will want a reasonably efficient method.
The question asks to solve (at least approximately) an equation of the form $G_0(\alpha)-G_1(1-\alpha)=0$ where the $G_i$ are the inverse empirical CDFs. Equivalently, you could solve $F_0(z)+F_1(z)-1=0$ where the $F_i$ are the empirical CDFs. That is best done with a root-finding method which does not assume the function is differentiable (or even continuous) because these functions are discontinuous: they jump at the data values.
In R, uniroot will do the job. Although it assumes the functions are continuous (it uses Brent's Method, I believe), R's implementation of the empirical CDFs makes them look sufficiently continuous. To make this method work you need to bracket the root between known bounds, but this is easy: it must lie within the range of the union of both datasets.
The code is remarkably simple: given two data arrays x and y, create their empirical CDF functions F.x and F.y, then invoke uniroot. That's all you need.
overlap <- function(x, y) {
  F.x <- ecdf(x); F.y <- ecdf(y)
  z <- uniroot(function(z) F.x(z) + F.y(z) - 1, interval = c(min(c(x,y)), max(c(x,y))))
  return(list(Root=z, F.x=F.x, F.y=F.y))
}
It is reasonably fast: applied to all $45$ pairs of ten datasets ranging in size from $1000$ to $8000$, it found the answers in a total of $0.12$ seconds.
Alternatively, notice that the desired point is the median of an equal mixture of the two distributions. When the two datasets are the same size, just obtain the median of the union of all the data! You can generalize this to datasets of different sizes by computing weighted medians. This capability is available via quantile regression (in the quantreg package), which accommodates weights: regress the data against a constant and weight them in inverse proportion to the sizes of the datasets.
overlap.rq <- function(x, y) {
  library(quantreg)
  fit <- rq(c(x,y) ~ 1,
            weights=c(rep(1/length(x), length(x)), rep(1/length(y), length(y))))
  return(coef(fit))
}
Timing tests show this is at least three times slower than the root-finding method and it does not scale as well for larger datasets: on the preceding test with $45$ pairs of datasets it took $1.67$ seconds, more than ten times slower. The chief advantage is that this particular implementation of weighted medians will issue warnings when the answer does not appear unique, whereas Brent's method tends to find unique answers right in the middle of an interval of possible answers.
As a demonstration, here is a plot of two empirical CDFs along with vertical lines showing the two solutions (and horizontal lines marking the levels of $\alpha$ and $1-\alpha$). In this particular case, the two methods produce the same answer so only one vertical line appears.
#
# Generate some data.
#
set.seed(17)
x <- rnorm(32, 5, 2)
y <- rgamma(10, 2)
#
# Compute the solution two ways.
#
solution <- overlap(x, y)
solution.rq <- overlap.rq(x, y)
F.x <- solution$F.x; F.y <- solution$F.y; z <- solution$Root
alpha <- c(F.x(z$root), F.y(z$root))
#
# Plot the ECDFs and the results.
#
plot(range(c(x,y)), 0:1, type="n", xlab="z", ylab="Probability", main="CDFs")
curve(F.x(x), add=TRUE, lwd=2, col="Red")
curve(F.y(x), add=TRUE, lwd=2, col="Blue")
abline(v=z$root, lty=2)
abline(v=solution.rq, lty=2, col="Green")
abline(h=alpha, lty=3, col="Gray") | How to determine overlap of two empirical distribution based on quantiles? | Because you will be doing this for $\binom{10}{2}=45$ pairs of distributions, you will want a reasonably efficient method.
The question asks to solve (at least approximately) an equation of the form $ | How to determine overlap of two empirical distribution based on quantiles?
Because you will be doing this for $\binom{10}{2}=45$ pairs of distributions, you will want a reasonably efficient method.
The question asks to solve (at least approximately) an equation of the form $G_0(\alpha)-G_1(1-\alpha)=0$ where the $G_i$ are the inverse empirical CDFs. Equivalently, you could solve $F_0(z)+F_1(z)-1=0$ where the $F_i$ are the empirical CDFs. That is best done with a root-finding method which does not assume the function is differentiable (or even continuous) because these functions are discontinuous: they jump at the data values.
In R, uniroot will do the job. Although it assumes the functions are continuous (it uses Brent's Method, I believe), R's implementation of the empirical CDFs makes them look sufficiently continuous. To make this method work you need to bracket the root between known bounds, but this is easy: it must lie within the range of the union of both datasets.
The code is remarkably simple: given two data arrays x and y, create their empirical CDF functions F.x and F.y, then invoke uniroot. That's all you need.
overlap <- function(x, y) {
F.x <- ecdf(x); F.y <- ecdf(y)
z <- uniroot(function(z) F.x(z) + F.y(z) - 1, interval<-c(min(c(x,y)), max(c(x,y))))
return(list(Root=z, F.x=F.x, F.y=F.y))
}
It is reasonably fast: applied to all $45$ pairs of ten datasets ranging in size from $1000$ to $8000$, it found the answers in a total of $0.12$ seconds.
Alternatively, notice that the desired point is the median of an equal mixture of the two distributions. When the two datasets are the same size, just obtain the median of the union of all the data! You can generalize this to datasets of different sizes by computing weighted medians. This capability is available via quantile regression (in the quantreg package), which accommodates weights: regress the data against a constant and weight them in inverse proportion to the sizes of the datasets.
overlap.rq <- function(x, y) {
library(quantreg)
fit <- rq(c(x,y) ~ 1, data=d,
weights=c(rep(1/length(x), length(x)), rep(1/length(y), length(y))))
return(coef(fit))
}
Timing tests show this is at least three times slower than the root-finding method and it does not scale as well for larger datasets: on the preceding test with $45$ pairs of datasets it took $1.67$ seconds, more than ten times slower. The chief advantage is that this particular implementation of weighted medians will issue warnings when the answer does not appear unique, whereas Brent's method tends to find unique answers right in the middle of an interval of possible answers.
As a demonstration, here is a plot of two empirical CDFs along with vertical lines showing the two solutions (and horizontal lines marking the levels of $\alpha$ and $1-\alpha$). In this particular case, the two methods produce the same answer so only one vertical line appears.
#
# Generate some data.
#
set.seed(17)
x <- rnorm(32, 5, 2)
y <- rgamma(10, 2)
#
# Compute the solution two ways.
#
solution <- overlap(x, y)
solution.rq <- overlap.rq(x, y)
F.x <- solution$F.x; F.y <- solution$F.y; z <- solution$Root
alpha <- c(F.x(z$root), F.y(z$root))
#
# Plot the ECDFs and the results.
#
plot(interval, 0:1, type="n", xlab="z", ylab="Probability", main="CDFs")
curve(F.x(x), add=TRUE, lwd=2, col="Red")
curve(F.y(x), add=TRUE, lwd=2, col="Blue")
abline(v=z$root, lty=2)
abline(v=solution.rq, lty=2, col="Green")
abline(h=alpha, lty=3, col="Gray") | How to determine overlap of two empirical distribution based on quantiles?
Because you will be doing this for $\binom{10}{2}=45$ pairs of distributions, you will want a reasonably efficient method.
The question asks to solve (at least approximately) an equation of the form $ |
50,012 | How to determine overlap of two empirical distribution based on quantiles? | I hit upon the idea of using the empirical cumulative distribution function. The answer is approximate to any desired degree of significant digits. Here is what I've come up with:
CDF.intersect<-function(a, b){
  #a and b are vectors of the same metric, intent is to find cdf
  if(median(a) < median(b)){
    Fn1<-ecdf(a)
    Fn2<-ecdf(b)
  } else{
    Fn1<-ecdf(b)
    Fn2<-ecdf(a)
  }
  x<-seq(min(c(a,b)), max(c(a,b)), length.out=100000)
  for (i in 1:100000){
    y<-(1-Fn1(x[i]))-Fn2(x[i])
    z<-x[i]
    if (y<=0.00001) break
  }
  out<-data.frame("Threshold"=z, "Upper Quantile of Lower Distribution"= 1-Fn1(z),
                  "Lower Quantile of Upper Distribution" = Fn2(z))
  return(out)
} | How to determine overlap of two empirical distribution based on quantiles? | I hit upon the idea of using the empirical cumulative distribution function. The answer is approximate to any desired degree of significant digits. Here is what I've come up with:
CDF.intersect<-funct | How to determine overlap of two empirical distribution based on quantiles?
I hit upon the idea of using the empirical cumulative distribution function. The answer is approximate to any desired degree of significant digits. Here is what I've come up with:
CDF.intersect<-function(a, b){
#a and b are vectors of the same metric, intent is to find cdf
if(median(a) < median(b)){
Fn1<-ecdf(a)
Fn2<-ecdf(b)
} else{
Fn1<-ecdf(b)
Fn2<-ecdf(a)
}
x<-seq(min(c(a,b)), max(c(a,b)), length.out=100000)
for (i in 1:100000){
y<-(1-Fn1(x[i]))-Fn2(x[i])
z<-x[i]
if (y<=0.00001) break
}
out<-data.frame("Threshold"=z, "Upper Quantile of Lower Distribution"= 1-Fn1(z),
"Lower Quantile of Upper Distribution" = Fn2(z))
return(out)
} | How to determine overlap of two empirical distribution based on quantiles?
I hit upon the idea of using the empirical cumulative distribution function. The answer is approximate to any desired degree of significant digits. Here is what I've come up with:
CDF.intersect<-funct |
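An illustrative call of the function above on simulated data (the inputs are made up; any two numeric vectors on the same scale would do):
set.seed(1)
a <- rnorm(1000, 5, 2)
b <- rgamma(1000, shape = 2)
CDF.intersect(a, b)   # returns the crossing threshold and the two matching quantiles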
50,013 | Maximizing likelihood versus MCMC sampling: Comparing Parameters and Deviance | Imagine that your posterior is somewhat Gaussian. The expectation of the squared Euclidean norm of the best of $N$ draws depends on the dimension $d$ and is asymptotically $O(1/N^{2/d})$, but the average remains $O(1/N)$. As soon as you have more than 2 dimensions, the average converges faster. You have $d=4$ so the average of the points around the mode will be a much better estimate of the mode than the point closest to the mode.
If that's not immediately intuitive, imagine you're drawing from a standard multivariate normal with identity covariance and $d=1000$. The closest vector to $0$ is still going to be very far on average; in high dimensions, most of the mass of a Gaussian is away from the center. The average will be much closer as values from different draws cancel out in each dimension.
Try it in python
import numpy as np

def draw(d):
    # True when the mean of 1000 draws is closer to 0 than the single closest draw
    x = np.random.randn(1000*d).reshape((1000,d))
    return np.sum(np.mean(x,axis=0)**2) < np.min(np.sum(x**2,axis=1))

for d in range(1,4):
    print(d, np.mean([draw(d) for i in range(0,1000)]))
Edit: so I got nerdsniped into computing the multiplicative factor in the expectation of the minimum squared euclidean norm when drawing from a standard multivariate normal with dimension $d$. It is asymptotically equivalent to
$$2\Gamma\left(1+\frac{d}{2}\right)^{\frac{2}{d}}\Gamma\left(1+\frac{2}{d}\right) n^{-2/d}$$
while the average is of course $d n^{-1}$ | Maximizing likelihood versus MCMC sampling: Comparing Parameters and Deviance | Imagine that your posterior is somewhat Gaussian. The expectation of the squared euclidean norm of the best of $N$ draw depends on the dimension $d$ and is asymptotically $O(1/N^{2/d})$, but the avera | Maximizing likelihood versus MCMC sampling: Comparing Parameters and Deviance
Imagine that your posterior is somewhat Gaussian. The expectation of the squared euclidean norm of the best of $N$ draw depends on the dimension $d$ and is asymptotically $O(1/N^{2/d})$, but the average remains $O(1/N)$. As soon as you have more than 2 dimensions, the average converges faster. You have $d=4$ so the average of the points around the mode will be a much better estimate of the mode than the point closest to the mode.
If that's not immediately intuitive, imagine you're drawing from a standard multivariate normal with identity covariance and $d=1000$. The closest vector to $0$ is still going to be very far on average; in high dimensions, most of the mass of a Gaussian is away from the center. The average will be much closer as values from different draws cancel out in each dimensions.
Try it in python
def draw(d):
x = np.random.randn(1000*d).reshape((1000,d))
return np.sum(np.mean(x,axis=0)**2) < np.min(np.sum(x**2,axis=1))
for d in range(1,4):
print d, np.mean( [draw(d) for i in range(0,1000)] )
Edit: so I got nerdsniped into computing the multiplicative factor in the expectation of the minimum squared euclidean norm when drawing from a standard multivariate normal with dimension $d$. It is asymptotically equivalent to
$$2\Gamma\left(1+\frac{d}{2}\right)^{\frac{2}{d}}\Gamma\left(1+\frac{2}{d}\right) n^{-2/d}$$
while the average is of course $d n^{-1}$ | Maximizing likelihood versus MCMC sampling: Comparing Parameters and Deviance
Imagine that your posterior is somewhat Gaussian. The expectation of the squared euclidean norm of the best of $N$ draw depends on the dimension $d$ and is asymptotically $O(1/N^{2/d})$, but the avera |
50,014 | Assuming a probability density for MLE to do model selection | Motivation: I am trying to use Akaike Information Criterion to assess model ranking and over-fitting risk for a set of nonlinear models.
As I understand it, I must compute the maximum likelihood estimator for each model.
If you want an AIC, yes, you would need MLE. But the AIC is not automatically ideal. [One thing you should be careful of is if you're selecting a model on the same data you're using to do inference, you'll have a variety of problems.]
I could assume the residuals are Gaussian* and then the MLE is a least squares one. I am not convinced this is adequate; if my models are nonlinear, does that imply a more complex probability density?
I don't think so. If your model is of the form $y=f(X,\beta)+\varepsilon$, I don't see why nonlinear $f$ would imply anything about $\varepsilon$.
Your actual distribution about $E(Y)$ is almost certainly more complex than a Gaussian. (When are data exactly Gaussian? I'd have thought almost never.)
And, if my residuals aren't Gaussian and I don't have a good assumption for a probability density, how would I choose one?
If you don't have a good basis for one, choosing a model by looking at the same data you use to fit the model would bias your AICs anyway. [You might be able to get around that by sample-splitting, or cross-validation for example.]
I can also check this by observing them, no?
Your residuals are proxies for your errors, yes, so for example, skewed residuals might indicate non-normality.
If I check these before each analysis for each model, what would I do if they aren't normal?
You might consider a class of models such as generalized nonlinear models (like GLMs but nonlinear in the parameters) and still use AIC.
You might stick with least-squares even though it's not ML, and treat AIC simply as (a monotonic transformation of) a penalized MSE.
Depending on what you need your models to achieve, you might consider modifying the current model to one with a slightly heavier tail, such as a contaminated-normal model, perhaps still with ML.
You might consider using some different criterion to do both model selection and model fitting.
But in any case your actual errors won't be exactly normal; the question is the degree to which that will impact your inference. (The AIC, for example, may perform reasonably well at trading off fit for model complexity whether or not the fitted model is exactly right.)
though this is just for one set of data, I do not necessarily know that the errors will always be normal for each model against any possible data set.
Note that the marginal distribution of residuals will only approximate the error distribution if the other (more important) assumptions hold up; you should check that the model for the mean and the variance is reasonable first before trying to worry much about normality. | Assuming a probability density for MLE to do model selection | Motivation: I am trying to use Akaike Information Criterion to assess model ranking and over-fitting risk for a set of nonlinear models.
As I understand it, I must compute the maximum likelihood estim | Assuming a probability density for MLE to do model selection
Motivation: I am trying to use Akaike Information Criterion to assess model ranking and over-fitting risk for a set of nonlinear models.
As I understand it, I must compute the maximum likelihood estimator for each model.
If you want an AIC, yes, you would need MLE. But the AIC is not automatically ideal. [One thing you should be careful of is if you're selecting a model on the same data you're using to do inference, you'll have a variety of problems.]
I could assume the residuals are Gaussian* and then the MLE is a least squares one. I am not convinced this is adequate; if my models are nonlinear, does that imply a more complex probability density?
I don't think so. If your model is of the form $y=f(X,\beta)+\varepsilon$, I don't see why nonlinear $f$ would imply anything about $\varepsilon$.
Your actual distribution about $E(Y)$ is almost certainly more complex than a Gaussian/ (When are data exactly Gaussian? I'd have thought almost never.)
And, if my residuals aren't Gaussian and I don't have a good assumption for a probability density, how would I choose one?
If you don't have a good basis for one, choosing a model by looking at the same data you use to fit the model would bias your AICs anyway. [You might be able to get around that by sample-splitting, or cross-validation for example.]
I can also check this by observing them, no?
Your residuals are proxies for your errors, yes, so for example, skewed residuals might indicate non-normality.
If I check these before each analysis for each model, what would I do if they aren't normal?
You might consider a class of models such as generalized nonlinear models (like GLMs but nonlinear in the parameters) and still use AIC.
You might stick with least-squares even though it's not ML, and treat AIC simply as a (monotonic transformation of) a penalized-MSE.
Depending on what you need your models to achieve, you might consider modifying the current model to one with a slightly heavier tail, such as a contaminated-normal model, perhaps still with ML.
You might consider using some different criterion to do both model selection and model fitting.
But in any case your actual errors won't be exactly normal; the question is the degree to which that will impact your inference. (The AIC, for example, may perform reasonably well at trading off fit for model complexity whether or not the fitted model is exactly right.)
though this is just for one set of data, I do not necessarily know that the errors will always be normal for each model against any possible data set.
Note that the marginal distribution of residuals will only approximate the error distribution if the other (more important) assumptions hold up; you should check that the model for the mean and the variance is reasonable first before trying to worry much about normality. | Assuming a probability density for MLE to do model selection
Motivation: I am trying to use Akaike Information Criterion to assess model ranking and over-fitting risk for a set of nonlinear models.
As I understand it, I must compute the maximum likelihood estim |
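A small numerical check of the "penalized MSE" reading of AIC mentioned in the answer above, under a Gaussian error assumption; the model, data and starting values are invented for illustration. With $p$ mean parameters and RSS the residual sum of squares, $\mathrm{AIC} = n\log(2\pi\,\mathrm{RSS}/n) + n + 2(p+1)$, which is what R reports for a least-squares fit:
set.seed(1)
x <- runif(50); y <- 2*exp(1.5*x) + rnorm(50, sd = 0.5)
fit <- nls(y ~ a*exp(b*x), start = list(a = 1, b = 1))   # a nonlinear least-squares fit
rss <- sum(resid(fit)^2); n <- length(y); p <- 2
n*log(2*pi*rss/n) + n + 2*(p + 1)   # AIC computed directly from the RSS
AIC(fit)                            # agrees with R's value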
50,015 | On the Autocorrelation Matrix of an ARMA(2,2) to derive the Yule Walker Equations | Let's define a general ARMA model of orders $(p,q)$ as follows:
$$
\psi_t \equiv \sum_{i=0}^p \alpha_i\, y_{t-i} = \sum_{i=0}^q \theta_i\, \epsilon_{t-i} \,,
\mbox{ with } \epsilon_t \sim NID\,(0, \sigma^2_\epsilon) \,.
$$
where $\alpha_0$ and $\theta_0$ are normalised to $1$.
You can check that multiplying $\psi_t$ by $\psi_{t-\tau}$ and taking expectations on both sides of the equation yields:
\begin{equation}
\sum_{i=0}^p \sum_{j=0}^p \alpha_i \alpha_j \gamma_{\tau+j-i} =
\sigma^2_\epsilon \sum_{j=0}^{q-\tau} \theta_j \theta_{j+\tau} \,,
\end{equation}
where $\gamma_i$ is the autocovariance of order $i$.
The mapping between the autocovariances and the parameters in an ARMA model is not as rewarding as in an AR model. The above equation does not return a system of equations that can be easily solved to obtain an estimate of the parameters by the method of moments. The Yule-Walker equations are instead easy to solve and return an estimate of the AR coefficients.
Although it is not straightforward, the method of moments can still be applied for an ARMA model by means of a two-step procedure: the first step uses the Yule-Walker equations and the second step is based on the equation given above. If your question goes in this direction I could give you further details about it.
Edit
The following is an extract from pp. 545-546 in D.S.G. Pollock (1999) A handbook of time series analysis, signal processing and dynamics, Academic Press (notation changed: $\theta$ here is $\mu$ in the original source):
1)
\begin{eqnarray}
\begin{array}{lcl}
E(\psi_t\psi_{t-\tau}) &=&
E\left\{ \left( \sum_i \theta_i \epsilon_{t-i} \right)
\left( \sum_j \theta_j \epsilon_{t-\tau-j} \right) \right\} \\
&=&
\sum_i \sum_j \theta_i \theta_j E(\epsilon_{t-i} \epsilon_{t-\tau-j}) \\
&=&
\sigma^2_\epsilon \sum_j \theta_j \theta_{j+\tau} \,.
\end{array}
\end{eqnarray}
2)
\begin{eqnarray}
\begin{array}{lcl}
E(\psi_t\psi_{t-\tau}) &=&
E\left\{ \left( \sum_i \alpha_i y_{t-i} \right)
\left( \sum_j \alpha_j y_{t-\tau-j} \right) \right\} \\
&=&
\sum_i \sum_j \alpha_i \alpha_j E(y_{t-i} y_{t-\tau-j}) \\
&=&
\sum_i \sum_j \alpha_i \alpha_j \gamma_{\tau+j-i} \,.
\end{array}
\end{eqnarray}
Putting (1) and (2) together:
$$
\sum_i\sum_j \alpha_i\alpha_j\gamma_{\tau+j-i} =
\sigma^2_\epsilon \sum_j \theta_j \theta_{j+\tau} \,.
$$ | On the Autocorrelation Matrix of an ARMA(2,2) to derive the Yule Walker Equations | Let's define a general ARMA model of orders $(p,q)$ as follows:
$$
\psi_t \equiv \sum_{i=0}^p \alpha_i\, y_{t-i} = \sum_{i=0}^q \theta_i\, \epsilon_{t-i} \,,
\mbox{ with } \epsilon_t \sim NID\,(0, \si | On the Autocorrelation Matrix of an ARMA(2,2) to derive the Yule Walker Equations
Let's define a general ARMA model of orders $(p,q)$ as follows:
$$
\psi_t \equiv \sum_{i=0}^p \alpha_i\, y_{t-i} = \sum_{i=0}^q \theta_i\, \epsilon_{t-i} \,,
\mbox{ with } \epsilon_t \sim NID\,(0, \sigma^2_\epsilon) \,.
$$
where $\alpha_0$ and $\theta_0$ are normalised to $1$.
You can check that multiplying $\psi_t$ by $\psi_{t-\tau}$ and taking expectations in both sides of the equation yields:
\begin{equation}
\sum_{i=0}^p \sum_{j=0}^p \alpha_i \alpha_j \gamma_{\tau+j-i} =
\sigma^2_\epsilon \sum_{j=0}^{q-\tau} \theta_j \theta_{j+\tau} \,,
\end{equation}
where $\gamma_i$ is the autocovariance of order $i$.
The mapping between the autocovariances and the parameters in an ARMA model is not as rewarding as in an AR model. The above equation does not return a system of equations that can be easily solved to obtain an estimate of the parameters by the method of moments. The Yule-Walker equations are instead easy to solve and return an estimate of the AR coefficients.
Although it is not straightforward, the method of moments can still be applied for an ARMA model by means of a two-steps procedure: the first step uses the Yule-Walker equations and the second step is based on the equation given above. If your question goes in this direction I could give you further details about it.
Edit
The following is an extract from pp. 545-546 in D.S.G. Pollock (1999) A handbook of time series analysis, signal processing and dynamics, Academic Press (changed notation $\theta$ is $\mu$ in the original source):
1)
\begin{eqnarray}
\begin{array}{lcl}
E(\psi_t\psi_{t-\tau}) &=&
E\left\{ \left( \sum_i \theta_i \epsilon_{t-i} \right)
\left( \sum_j \theta_j \epsilon_{t-\tau-j} \right) \right\} \\
&=&
\sum_i \sum_j \theta_i \theta_j E(\epsilon_{t-i} \epsilon_{t-\tau-j}) \\
&=&
\sigma^2_\epsilon \sum_j \theta_j \theta_{j+\tau} \,.
\end{array}
\end{eqnarray}
2)
\begin{eqnarray}
\begin{array}{lcl}
E(\psi_t\psi_{t-\tau}) &=&
E\left\{ \left( \sum_i \alpha_i y_{t-i} \right)
\left( \sum_j \alpha_j y_{t-\tau-j} \right) \right\} \\
&=&
\sum_i \sum_j \alpha_i \alpha_j E(y_{t-i} y_{t-\tau-j}) \\
&=&
\sum_i \sum_j \alpha_i \alpha_j \gamma_{\tau+j-i} \,.
\end{array}
\end{eqnarray}
Putting (1) and (2) together:
$$
\sum_i\sum_j \alpha_i\alpha_j\gamma_{\tau+j-i} =
\sigma^2_\epsilon \sum_j \theta_j \theta_{j+\tau} \,.
$$ | On the Autocorrelation Matrix of an ARMA(2,2) to derive the Yule Walker Equations
Let's define a general ARMA model of orders $(p,q)$ as follows:
$$
\psi_t \equiv \sum_{i=0}^p \alpha_i\, y_{t-i} = \sum_{i=0}^q \theta_i\, \epsilon_{t-i} \,,
\mbox{ with } \epsilon_t \sim NID\,(0, \si |
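For completeness, the first (Yule-Walker) step mentioned above can be made explicit. Multiplying $\sum_{i=0}^p \alpha_i\, y_{t-i} = \sum_{j=0}^q \theta_j\, \epsilon_{t-j}$ by $y_{t-\tau}$ and taking expectations, the right-hand side vanishes for $\tau > q$ (because $y_{t-\tau}$ involves only shocks up to time $t-\tau < t-j$), which gives the extended Yule-Walker equations
$$\sum_{i=0}^p \alpha_i\, \gamma_{\tau-i} = 0\,, \qquad \tau = q+1, \ldots, q+p\,.$$
These are linear in $\alpha_1,\ldots,\alpha_p$ and can be solved directly in the first step.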
50,016 | Fitting two different mixture distributions [closed] | I would recommend using the flexmix package. You can see an example on page 9 of the vignette here: https://cran.r-project.org/web/packages/flexmix/vignettes/flexmix-intro.pdf.
If you wanted, for example, a mixture of a Poisson and a normal distribution, you'd want something like:
example <- flexmix(y ~ 1, data=dat, k=2,
                   model=list(FLXMRglm(~ x, family="gaussian"),
                              FLXMRglm(~ x, family="poisson")))
If you wanted, for example, a | Fitting two different mixture distributions [closed]
I would recommend using the flexmix package. You can see in the page 9 of the vignette here: https://cran.r-project.org/web/packages/flexmix/vignettes/flexmix-intro.pdf.
If you wanted, for example, a mixture of a poisson and a normal distribution, you'd want something like:
example <- flexmix(y~1, data=dat, k=2,
model=list(FLXMRglm( ~ x, family ="gaussian"),
FLXMRglm( ~ x, family="poisson"))) | Fitting two different mixture distributions [closed]
I would recommend using the flexmix package. You can see in the page 9 of the vignette here: https://cran.r-project.org/web/packages/flexmix/vignettes/flexmix-intro.pdf.
If you wanted, for example, a |
50,017 | Estimating standard error of parameters of linear model fitted using gradient descent | I found that the bootstrap gives estimates that are pretty close to those from OLS, but it works with literally any training algorithm.
The bootstrap is a kind of Monte Carlo method and roughly boils down to repeated sampling with replacement from the original dataset and collecting values of a target statistic. Having a set of statistic values, it becomes trivial to calculate their mean and standard error. G. James et al. provide experimental evidence of the closeness of OLS and bootstrap results. Without further explanation, I'm giving a link to their excellent work (see pages 187-190 for the bootstrap explanation and 195-197 for experiments):
G. James et al. An introduction to Statistical Learning | Estimating standard error of parameters of linear model fitted using gradient descent | I found that bootstrap gives estimates that are pretty close to those from OLS, but works with literally any training algorithm.
Bootstrap is a kind of Monte Carlo method and roughly boils down to re | Estimating standard error of parameters of linear model fitted using gradient descent
I found that bootstrap gives estimates that are pretty close to those from OLS, but works with literally any training algorithm.
Bootstrap is a kind of Monte Carlo method and roughly boils down to repeated sampling with replacement from original dataset and collecting values of a target statistic. Having a set of statistic values, it becomes trivial to calculate their mean and standard error. G. James et al. provide experimental evidence of closeness of OLS and bootstrap results. Without further explanation, I'm giving a link to their excellent work (see pages 187-190 for bootstrap explanation and 195-197 for experiments):
G. James et al. An introduction to Statistical Learning | Estimating standard error of parameters of linear model fitted using gradient descent
I found that bootstrap gives estimates that are pretty close to those from OLS, but works with literally any training algorithm.
Bootstrap is a kind of Monte Carlo method and roughly boils down to re |
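A minimal sketch of the procedure described above, in base R; the data-generating model, sample size and number of resamples are arbitrary choices, and lm() stands in for whatever training algorithm (e.g. gradient descent) is actually used:
set.seed(1)
n <- 100
x <- rnorm(n); y <- 2 + 3*x + rnorm(n)
B <- 2000
boot.coefs <- replicate(B, {
  idx <- sample(n, replace = TRUE)   # resample rows with replacement
  coef(lm(y[idx] ~ x[idx]))          # refit the model on the bootstrap sample
})
apply(boot.coefs, 1, sd)                          # bootstrap standard errors
summary(lm(y ~ x))$coefficients[, "Std. Error"]   # compare with the OLS formula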
50,018 | Regression with t-distributed errors and MASS::rlm | It looks to me like you can supply your own psi function.
The already-supplied psi-functions are just simple functions (try MASS:::psi.huber and MASS:::psi.hampel for example). They're rather neatly set up in that the same function supplies both the weight function $\psi(u)/u$ (when called with deriv=0, the default) and the derivative $\psi'(u)$ (when called with deriv=1); that is the form rlm expects for its psi argument. Note that each function takes different tuning parameters.
I think you could just supply your own psi function (called psi.t say, with df as one of the parameters). Or if you think some particular value - say 5 df - is always going to be adequate, you could write an even simpler function with df hard-coded.
Note that with the t-distribution you have a redescending psi-function, so the issues mentioned in the help in relation to the Hampel and bisquare psi functions also apply to yours (i.e. multiple local minima). You might want to try starting at the solution for a Huber, perhaps after choosing a value of $k$ that describes the central part of the psi function for a $t_4$ or $t_5$ reasonably well.
For a parametric model, $\psi(x) = -\tfrac{d}{dx}\log f(x) = -f'(x)/f(x)$. The $\psi$-function for a $t_\nu$ is reasonably simple; Wikipedia gives it (up to a constant factor, which doesn't change the estimate) as
$$\psi(x) = \frac{x}{x^2+\nu}$$
Consequently, assuming I made no errors, $\psi'(x) = \frac{x^2+\nu - x\cdot 2x}{(x^2+\nu)^2}= \frac{\nu - x^2}{(x^2+\nu)^2}$.
I don't think you get very much choice on the variance parameter though; I think you would have to settle for a robust scale (not that this is necessarily a bad thing; see the information relating to scale.est and method="MM" for what options there are). Alternatively, if you really want to use the t-assumption for finding the scale as well, you could instead use the various functions for maximizing likelihood to attempt an ML solution to the whole problem (perhaps after using rlm to get essentially all the way there for the mean-function). | Regression with t-distributed errors and MASS::rlm | It looks to me like you can supply your own psi function.
The already-supplied psi-functions are just simple functions (try MASS:::psi.huber and MASS:::psi.hampel for example). They're rather neatly | Regression with t-distributed errors and MASS::rlm
It looks to me like you can supply your own psi function.
The already-supplied psi-functions are just simple functions (try MASS:::psi.huber and MASS:::psi.hampel for example). They're rather neatly set up in that the same function supplies both the psi function and its derivative, depending on whether they're called with deriv=0 (the default) or not. Note that each function takes different parameters.
I think you could just supply your own psi function (called psi.t say, with df as one of the parameters). Or if you think some particular value - say 5 df - is always going to be adequate, you could write an even simpler function with df hard-coded.
Note that with the t-distribution you have a redescending psi-function, so the issues mentioned in the help in relation to the Hampel and bisquare psi functions also apply to yours (i.e. multiple local minima). You might want to try starting at the solution for a Huber, perhaps after choosing a value of $k$ that describes the central part of the psi function for a $t_4$ or $t_5$ reasonably well.
For a parametric model, $\psi(x) = -\log(f(x))=-f'(x)/f(x)$. The $\psi-$function for a $t_\nu$ is reasonably simple; Wikipedia gives it as
$$\psi(x) = \frac{x}{x^2+\nu}$$
Consequently, assuming I made no errors, $\psi'(x) = \frac{x^2+\nu - x\cdot 2x}{(x^2+\nu)^2}= \frac{\nu - x^2}{(x^2+\nu)^2}$.
I don't think you get very much choice on the variance parameter though; I think you would have to settle for a robust scale (not that this is necessarily a bad thing; see the information relating to scale.est and method="MM" for what options there are). Alternatively, if you really want to use the t-assumption for finding the scale as well, you could instead use the various functions for maximizing likelihood to attempt an ML solution to the whole problem (perhaps after using rlm to get essentially all the way there for the mean-function). | Regression with t-distributed errors and MASS::rlm
It looks to me like you can supply your own psi function.
The already-supplied psi-functions are just simple functions (try MASS:::psi.huber and MASS:::psi.hampel for example). They're rather neatly |
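Putting the pieces above together, a hedged sketch of what such a function might look like, following the pattern of MASS:::psi.huber (deriv=0 returns the weight $\psi(u)/u$, deriv=1 returns $\psi'(u)$); the name psi.t, the default df and the commented-out rlm call are illustrative, and $\psi$ is scaled so the weight equals 1 at $u=0$:
psi.t <- function(u, df = 4, deriv = 0) {
  if (!deriv) return(df/(u^2 + df))   # weight psi(u)/u, for psi(u) = df*u/(u^2 + df)
  df*(df - u^2)/(u^2 + df)^2          # psi'(u)
}
library(MASS)
## fit <- rlm(y ~ x, psi = psi.t, init = "lts")   # hypothetical data; redescending psi, so start from a robust fit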
50,019 | Log-rank / Cox analysis with very unequal sized groups: alternative calculations of p-value? | In these kinds of comparisons, you'll find that a two-sample test becomes, very approximately, a one-sample test where all the power comes from the smaller group (they are being "calibrated" to the larger group), and so the sample-size considerations for one-sample tests apply to that group. Three deaths do not suffice to estimate a Cox model. Survival models are driven by the numbers of events, not the denominator.
If there is no censoring in these data, you can condition upon the failures observed after a fixed point and compare survival by looking at the proportions that did not survive beyond that fixed point. It is a basic proportions test of a contingency table and achievable via Fisher's Exact Test, which is accurate in small samples.
$$
\begin{array}{ccc}
& \mbox{Died} & \mbox{Lived} \\
\overline{\mbox{Fix} }& 11,174 & 626,551\\
\mbox{Fix} & 3& 35\\
\end{array}
$$
The benefit of using an Exact test is that it is effectively answering the question of "what is the probability I may have seen 0, 1, 2, or 3 deaths out of 38 in the Fix group given that my expected death rate is ($0.02 = 11174 / 637725$). The effect of the large non-fix group is that the variability in expected rate will be very low and almost entirely determined by those data. | Log-rank / Cox analysis with very unequal sized groups: alternative calculations of p-value? | In these kinds of comparisons, you'll find that what happens is a two-sample test becomes very approximately a one sample test where all the power comes from the smaller group (they are being "calibra | Log-rank / Cox analysis with very unequal sized groups: alternative calculations of p-value?
50,020 | Log-rank / Cox analysis with very unequal sized groups: alternative calculations of p-value? | Do not pay too much attention to the p-values. They just give the probability of seeing a survival difference at least as large as the one in your particular study sample when there really isn't a difference in the population as a whole.
You evidently want to use predictor variables for each new patient to classify relative risk. It's better to base that classification on an estimate of each new patient's predicted survival or equivalent (like time to some undesired event), and also on the reliability of the estimate, rather than on the p-values from analyses of your study sample.
There are simple predict functions for R coxph and survreg objects, but you will be better off learning to use the rms package in R, which provides ways to validate your model and even build nomograms for prediction. The author, Frank Harrell, is a regular contributor to this site. | Log-rank / Cox analysis with very unequal sized groups: alternative calculations of p-value? | Do not pay too much attention to the p-values. They just provide a probability that you might accidentally have found a survival difference in your particular study sample when there really isn't a di | Log-rank / Cox analysis with very unequal sized groups: alternative calculations of p-value?
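A hedged sketch of that rms workflow (the data frame and variable names below are placeholders, not from the question, and the 365-day horizon is an arbitrary choice):
library(rms)
dd <- datadist(mydata); options(datadist = "dd")
fit <- cph(Surv(time, status) ~ age + sex + fix, data = mydata,
           x = TRUE, y = TRUE, surv = TRUE)
validate(fit, B = 200)                         # bootstrap-validated indexes of model performance
surv1 <- Survival(fit)                         # survival-probability function for this fit
plot(nomogram(fit, fun = function(lp) surv1(365, lp), funlabel = "1-year survival"))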
50,021 | Difference-in-differences with no pre-treatment? | The issue I see with your approach is that you will not be able to see anything about the pre-treatment differences unless you have very precise information about the experiment or policy. It will be hard or even impossible to say something about the common trend assumption between the treatment and control groups which is a vital part of difference in differences.
For instance, say you have a job market program which is mandatory but in period 1 only motivated individuals will attend it. In period 2, which is the starting point of your data, the policy maker forces the other individuals to attend the job market program, and finally in period 3 you see all "treated" individuals. In this case it is hard to claim that those treated in period 1 and those treated in period 2 have the same trend in their outcomes:
due to the unobserved factors that led to treatment selection in the first round
due to the fact that individuals in period 1 have already been treated so their trend already changed (if the policy had an effect).
Of course this is a very artificial example and problematic mostly because treatment is non-random but I guess you will see the point. Without more knowledge about the experiment you can not credibly sell a difference in differences analysis in this set-up because you cannot say anything about the pre-treatment differences in the outcome of the two groups. Even if you know that treatment was random, you can't be sure about this common trend assumption. Actually, you rarely can be sure about it anyway but with pre-treatment data you can have at least an idea. | Difference-in-differences with no pre-treatment? | The issue I see with your approach is that you will not be able to see anything about the pre-treatment differences unless you have very precise information about the experiment or policy. It will be | Difference-in-differences with no pre-treatment?
50,022 | Gaussian Mixture Model parameters from density | You can use minimum squared errors in order to estimate/fit a mixture density to your data (Note that this method also inherits problems of uniqueness of the estimators, as any other approach in the context of finite mixtures).
Basically, the idea is to minimize the distances between a mixture density (with a fixed number of mixture components) and the observed density values, as a function of the parameters of the mixture you want to fit. The following R code shows an example with simulated data. The simulated data are obtained by simulating from a two-component Gaussian mixture and then using the command hist() (which emulates your context where only the density values are observed). As you can see, the estimators are very accurate in this example. The accuracy depends on how "informative" the grid where you observe the density values is.
rm(list=ls())
library(mixtools)
# Simulated data
set.seed(100)
n <- 500
lambda <- rep(1, 2)/2
mu <- c(0, 5)
sigma <- rep(1, 2)
mixnorm.data <- rnormmix(n, lambda, mu, sigma)
##A histogram of the simulated data.
hist(mixnorm.data,breaks=50)
# Binning the data
x.data <- hist(mixnorm.data,breaks=50,plot=FALSE)$mids
den.data <- hist(mixnorm.data,breaks=50,plot=FALSE)$density
# Sum of squared errors
ld <- function(param){
mu1 = param[1]
mu2 = param[2]
sigma1 = param[3]
sigma2 = param[4]
eps = param[5]
if(sigma1>0 & sigma2>0 & eps >0 & eps<1){
return(sum((eps*dnorm(x.data,mu1,sigma1) + (1-eps)*dnorm(x.data,mu2,sigma2) - den.data)^2))
}
else return(Inf)
}
# Optimization step (run the optimizer once and keep the full result)
opt <- optim(c(0,5,1,2,0.5), ld)
opt
# Estimators of the parameters
MSEPAR <- opt$par
# Fitted density
dmix <- Vectorize(function(x){
mu1 = MSEPAR[1]
mu2 = MSEPAR[2]
sigma1 = MSEPAR[3]
sigma2 = MSEPAR[4]
eps = MSEPAR[5]
return( eps*dnorm(x,mu1,sigma1) + (1-eps)*dnorm(x,mu2,sigma2))
})
hist(mixnorm.data,breaks=50,probability=T)
curve(dmix,add=T,col="red") | Gaussian Mixture Model parameters from density | You can use minimum squared errors in order to estimate/fit a mixture density to your data (Note that this method also inherits problems of uniqueness of the estimators, as any other approach in the c | Gaussian Mixture Model parameters from density
50,023 | Gaussian Mixture Model parameters from density | If you are willing to code a bit, you can implement your own version of the EM algorithm that takes into account the density value of a grid point. So instead of using the usual likelihood function:
$$
\log L(\Theta) = \sum_{t=1}^L \log\left[{\sum_{i=1}^N w_i\,\phi(\boldsymbol r_t|\boldsymbol \mu_i,\boldsymbol \Sigma_i)} \right]
$$
where the outer sum is over the data points, the inner sum is over the mixture components, and the $w_i$ are the mixing weights. You would instead want a density-weighted likelihood function:
$$
\log L(\Theta) = \sum_{t=1}^L p(\boldsymbol r_t) \log\left[{\sum_{i=1}^N w_i\,\phi(\boldsymbol r_t|\boldsymbol \mu_i,\boldsymbol \Sigma_i)} \right]
$$
This would also affect the M-step - you'd have to multiply the responsibilities (that is, the posterior probability of each mixture component at each data point) by the density at that point, and make sure things are still normalized correctly. Something like this was done in this paper:
http://www.ncbi.nlm.nih.gov/pubmed/18708469 | Gaussian Mixture Model parameters from density | If you are willing to code a bit, you can implement your own version of the EM algorithm that takes into account the density value of a grid point. So instead of using the usual likelihood function:
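A hedged sketch of that idea in R for a one-dimensional, two-component mixture, where x is the grid and w the density value at each grid point; this only illustrates the weighting, it is not the algorithm of the cited paper, and the starting values are arbitrary.
weighted_em_gmm <- function(x, w, mu = quantile(x, c(.25, .75)),
                            sigma = c(sd(x), sd(x)), pi_k = c(.5, .5), iter = 200) {
  w <- w / sum(w)                             # treat the density values as point masses
  for (it in seq_len(iter)) {
    # E-step: responsibilities of each component at each grid point
    d1 <- pi_k[1] * dnorm(x, mu[1], sigma[1])
    d2 <- pi_k[2] * dnorm(x, mu[2], sigma[2])
    r1 <- d1 / (d1 + d2); r2 <- 1 - r1
    # M-step: the usual updates, but every grid point counts with weight w
    n1 <- sum(w * r1); n2 <- sum(w * r2)
    mu    <- c(sum(w * r1 * x) / n1, sum(w * r2 * x) / n2)
    sigma <- c(sqrt(sum(w * r1 * (x - mu[1])^2) / n1),
               sqrt(sum(w * r2 * (x - mu[2])^2) / n2))
    pi_k  <- c(n1, n2)                        # already sum to one because sum(w) = 1
  }
  list(mu = mu, sigma = sigma, pi = pi_k)
}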
$ | Gaussian Mixture Model parameters from density
If you are willing to code a bit, you can implement your own version of the EM algorithm that takes into account the density value of a grid point. So instead of using the usual likelihood function:
$$
\log L(\Theta) = \sum_{t=1}^L\left [{\sum_{i=1}^N \phi(\boldsymbol r_t|\boldsymbol \mu_i,\boldsymbol \Sigma_i)} \right]
$$
where the outer sum is over the data points and the inner sum is over the GMM. You would instead want a density-weighted likelihood function:
$$
\log L(\Theta) = \sum_{t=1}^Lp(\boldsymbol r_t)\left [{\sum_{i=1}^N \phi(\boldsymbol r_t|\boldsymbol \mu_i,\boldsymbol \Sigma_i)} \right]
$$
This would also affect the M-step - you'd have to multiply the responsibilities (that is, the probability of each data point) by the density at that point, and make sure things are still normalized correctly. Something like this was done in this paper:
http://www.ncbi.nlm.nih.gov/pubmed/18708469 | Gaussian Mixture Model parameters from density
If you are willing to code a bit, you can implement your own version of the EM algorithm that takes into account the density value of a grid point. So instead of using the usual likelihood function:
$ |
50,024 | Sample Mean of AR(1) model | FIRST STEP
Sometimes, patience and algebra are still required to obtain what we need to obtain. In your case, by repeated substitution as already suggested we get
$$X_t = \sum_{j=0}^{t-1}\phi^j\epsilon_{t-j}$$
and we note that, although not clearly stated in the question, here we have $E(\epsilon_t) = \mu$, not necessarily zero. The sample mean for a sample of size $T$ is therefore
$$\bar X = \frac 1T\sum_{t=1}^TX_t = \frac 1T\sum_{t=1}^T\sum_{j=0}^{t-1}\phi^j\epsilon_{t-j}$$
Don't despair at this point. Patiently write out the internal sums for each $t=1,...T$ ($T$ is still finite) and you will see that you can re-arrange them as a sum in the innovations, each innovation being multiplied by a different constant term (although these constant terms will obviously form a recognizable pattern). So this will be a linear combination of i.i.d. random variables. So it will be a sum of independently but not identically distributed random variables...
SECOND STEP
So we have that
$$T\bar X = \sum_{t=1}^T\Big( \epsilon_t + \phi\epsilon_{t-1} + \phi^2\epsilon_{t-2}+...+\phi^{t-1}\epsilon_1\Big)$$
$$\begin{align} =& \epsilon_1 &\\
+&\phi\epsilon_1 +\epsilon_2 \\
+&...\\
+&\phi^{T-1}\epsilon_1+\phi^{T-2}\epsilon_2+...+ \epsilon_T\\
\end{align}$$
(reversing the order and summing per innovation)
$$=\epsilon_T + (1+\phi)\epsilon_{T-1} + (1+\phi+\phi^2)\epsilon_{T-2} +...+(1+\phi+\phi^2+...+\phi^{T-1})\epsilon_1$$
$$\Rightarrow \bar X = \frac 1T \sum_{t=1}^T\left[\left(\sum_{j=t}^T\phi^{T-j}\right)\epsilon_t\right] $$
So we see that the sample mean is a sum of independently but not identically distributed random variables. Therefore, we can invoke this variant of the classical Central Limit Theorem, and check if and when the Lindeberg and/or Lyapunov conditions hold (without the need to go into martingale theory, Gordin's conditions etc, which are the "time series course" material you mentioned that prove directly a CLT for dependent processes).
I would suggest to start with the case $\phi=1$. | Sample Mean of AR(1) model | FIRST STEP
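A quick numerical check of the stationary case in R (a sketch with zero-mean normal innovations and $|\phi|<1$; the long-run variance of the sample mean is approximately $\sigma^2/\{T(1-\phi)^2\}$):
set.seed(1)
phi <- 0.6; Tn <- 500; nrep <- 2000
xbar <- replicate(nrep, mean(arima.sim(list(ar = phi), n = Tn)))
z <- xbar * sqrt(Tn) * (1 - phi)          # standardise by the long-run standard deviation
qqnorm(z); qqline(z)                      # points near the line => approximate normality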
Sometimes, patience and algebra are still required to obtain what we need to obtain. In your case, by repeated substitution as already suggested we get
$$X_t = \sum_{j=0}^t\phi^j\epsilon_{t | Sample Mean of AR(1) model
FIRST STEP
Sometimes, patience and algebra are still required to obtain what we need to obtain. In your case, by repeated substitution as already suggested we get
$$X_t = \sum_{j=0}^t\phi^j\epsilon_{t-j}$$
and we note that, although not clearly stated in the question, here we have $E(\epsilon_t) = \mu$, not necessarily zero. The sample mean for a sample of size $T$ is therefore
$$\bar X = \frac 1T\sum_{t=1}^TX_t = \frac 1T\sum_{t=1}^T\sum_{j=0}^t\phi^j\epsilon_{t-j}$$
Don't despair at this point. Patiently write out the internal sums for each $t=1,...T$ ($T$ is still finite) and you will see that you can re-arrange them as a sum in the innovations, each innovation being multiplied by a different constant term (although these constant terms will obviously form a recognizable pattern). So this will be a linear combination of i.i.d. random variables. So it will be a sum of independently but not identically distributed random variables...
SECOND STEP
So we have that
$$T\bar X = \sum_{t=1}^T\Big( \epsilon_t + \phi\epsilon_{t-1} + \phi^2\epsilon_{t-2}+...+\phi^{t-1}\epsilon_1\Big)$$
$$\begin{align} =& \epsilon_1 &\\
+&\phi\epsilon_1 +\epsilon_2 \\
+&...\\
+&\phi^{T-1}\epsilon_1+\phi^{T-2}\epsilon_2+...+ \epsilon_T\\
\end{align}$$
(reversing the order and summing per innovation)
$$=\epsilon_T + (1+\phi)\epsilon_{T-1} + (1+\phi+\phi^2)\epsilon_{T-2} +...+(1+\phi+\phi^2+...+\phi^{T-1})\epsilon_1$$
$$\Rightarrow \bar X = \frac 1T \sum_{t=1}^T\left[\left(\sum_{j=t}^T\phi^{T-j}\right)\epsilon_t\right] $$
So we see that the sample mean is a sum of independently but not identically distributed random variables. Therefore, we can invoke this variant of the classical Central Limit Theorem, and check if and when the Lindeberg and/or Lyapunov conditions hold (without the need to go into martingale theory, Gordin's conditions etc, which are the "time series course" material you mentioned that prove directly a CLT for dependent processes).
I would suggest to start with the case $\phi=1$. | Sample Mean of AR(1) model
FIRST STEP
Sometimes, patience and algebra are still required to obtain what we need to obtain. In your case, by repeated substitution as already suggested we get
$$X_t = \sum_{j=0}^t\phi^j\epsilon_{t |
50,025 | How to select kernel for Gaussian Process? | One possibility you might try is simulating Gaussian Processes with different kernels. In that way, you can get a feel for what the different kernels will produce. This can most easily be done by selecting a grid of values and simulating from the multivariate normal implied by that grid. To make things easier, just use a zero vector for your mean function. You can also see with this method if the properties of your simulated draws tend to match up with how your time series data looks.
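A minimal sketch of that simulation in R (zero mean function; the squared-exponential and Ornstein-Uhlenbeck kernels below use unit length-scales, which is an arbitrary choice, and a small jitter keeps the covariance matrices numerically positive definite):
library(MASS)                                  # for mvrnorm
x <- seq(0, 10, length.out = 200)
K_se <- exp(-outer(x, x, "-")^2 / 2) + 1e-8 * diag(length(x))
K_ou <- exp(-abs(outer(x, x, "-")))   + 1e-8 * diag(length(x))
draws_se <- mvrnorm(3, mu = rep(0, length(x)), Sigma = K_se)
draws_ou <- mvrnorm(3, mu = rep(0, length(x)), Sigma = K_ou)
matplot(x, t(draws_se), type = "l", ylab = "f(x)", main = "Squared exponential: smooth draws")
matplot(x, t(draws_ou), type = "l", ylab = "f(x)", main = "Ornstein-Uhlenbeck: rough draws")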
For example, you will see that the squared exponential kernel is very smooth. In fact, draws from a Gaussian Process with a squared exponential kernel will be continuous with probability one and also in fact infinitely differentiable with probability one. This is one property of the squared exponential that makes it very useful. Another reason for why it gets a lot of use is its clear connection with a Gaussian density.
Other kernels such as the Ornstein–Uhlenbeck covariance function will produce much rougher draws and may be more desirable in terms of a model. | How to select kernel for Gaussian Process? | One possibility you might try is simulating Gaussian Processes with different kernels. In that way, you can get a feel for what the different kernels will produce. This can most easily be done by sele | How to select kernel for Gaussian Process?
One possibility you might try is simulating Gaussian Processes with different kernels. In that way, you can get a feel for what the different kernels will produce. This can most easily be done by selecting a grid of values and simulating from the multivariate normal implied by that grid. To make things easier, just use a zero vector for your mean function. You can also see with this method if the properties of your simulated draws tend to match up with how your time series data looks.
For example, you will see that the squared exponential kernel is very smooth. In fact, draws from a Gaussian Process with a squared exponential kernel will be continuous with probability one and also in fact infinitely differentiable with probability one. This is one property of the squared exponential that makes it very useful. Another reason for why it gets a lot of use is its clear connection with a Gaussian density.
Other kernels such as the Ornstein–Uhlenbeck covariance function will produce much rougher draws and may be more desirable in terms of a model. | How to select kernel for Gaussian Process?
One possibility you might try is simulating Gaussian Processes with different kernels. In that way, you can get a feel for what the different kernels will produce. This can most easily be done by sele |
50,026 | How to select kernel for Gaussian Process? | Set aside a second set of training data, and "train" your model architecture using that.
i.e.
1) select an arbitrary kernel
2) train it using training set 1
3) evaluate it on training set 2 (using accuracy, precision, recall, whatever)
4) if !tired: goto 1)
5) else: return kernel with highest evaluation score from step 3)
It would probably make sense to start with "simple" kernels, and gradually try more complicated ones. The simple models will perform nominally on training set 2. As the kernel gets more complicated, the model will start to perform better. As the kernel gets insanely complicated, the model will perform worse on training set 2, as the insanely complicated model starts overfitting. This is a good time to stop. | How to select kernel for Gaussian Process?
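For concreteness, a hedged R sketch of that loop for GP regression on simulated data (three toy kernels, posterior mean only, noise variance assumed known):
set.seed(42)
x1 <- runif(80, 0, 10); y1 <- sin(x1) + rnorm(80, sd = 0.3)   # training set 1
x2 <- runif(40, 0, 10); y2 <- sin(x2) + rnorm(40, sd = 0.3)   # training set 2
kernels <- list(se  = function(a, b) exp(-outer(a, b, "-")^2 / 2),
                ou  = function(a, b) exp(-abs(outer(a, b, "-"))),
                lin = function(a, b) outer(a, b))
score <- sapply(kernels, function(k) {
  K  <- k(x1, x1) + 0.3^2 * diag(length(x1))   # kernel matrix plus noise variance
  mu <- k(x2, x1) %*% solve(K, y1)             # GP posterior mean at the set-2 inputs
  sqrt(mean((y2 - mu)^2))                      # evaluation: RMSE on set 2
})
score
names(which.min(score))                        # the kernel with the best evaluation score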
i.e.
1) select an arbitrary kernel
2) train it using training set 1
3) evaluate it on training set 2 (usi | How to select kernel for Gaussian Process?
Set aside a second set of training data, and "train" your model architecture using that.
i.e.
1) select an arbitrary kernel
2) train it using training set 1
3) evaluate it on training set 2 (using accuracy, precision, recall, whatever)
4) if !tired: goto 1)
5) else: return kernel with highest evaluation score from step 3)
It would probably make sense to start with "simple" kernels, and gradually try more complicated ones. The simple models will perform nominally on training set 2. As the kernel get more complicated, the model will start to perform better. As the kernel gets insanely complicated, the model will perform worse on training set 2, as the insanely complicated model starts overfitting. This is is good time to stop. | How to select kernel for Gaussian Process?
Set aside a second set of training data, and "train" your model architecture using that.
i.e.
1) select an arbitrary kernel
2) train it using training set 1
3) evaluate it on training set 2 (usi |
50,027 | MCMC: examples of when direct sampling is difficult (but Metropolis Hastings is easy) | I don't have a great example off the top of my head, but MH is easy compared to direct sampling whenever the parameter's prior is not conjugate with that parameter's likelihood. In fact this is the only reason I have ever seen MH preferred. A toy example is that $p \sim \text{Beta}(\alpha, \beta)$, and you wanted to have (independent) priors $\alpha, \beta \sim \text{Gamma}()$. This is not conjugate and you would need to use MH for $\alpha$ and $\beta$.
This presentation gives an example of a Poisson GLM which uses MH for drawing the GLM coefficients.
If you don't already know, it might be worth noting that direct sampling is just the case of MH when we always accept the drawn value. So whenever we can direct sample we should, to avoid having to tune our proposal distribution. | MCMC: examples of when direct sampling is difficult (but Metropolis Hastings is easy) | I don't have a great example off the top of my head, but MH is easy compared to direct sampling whenever the parameter's prior is not conjugate with that parameter's likelihood. In fact this is the on | MCMC: examples of when direct sampling is difficult (but Metropolis Hastings is easy)
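A hedged sketch of random-walk MH for that toy example, with data simulated from a Beta distribution and, as my own assumption, independent Gamma(1, 1) priors on $\alpha$ and $\beta$:
set.seed(7)
p <- rbeta(100, shape1 = 2, shape2 = 5)                  # simulated Beta data
log_post <- function(a, b) {
  if (a <= 0 || b <= 0) return(-Inf)
  sum(dbeta(p, a, b, log = TRUE)) + dgamma(a, 1, 1, log = TRUE) + dgamma(b, 1, 1, log = TRUE)
}
n_iter <- 5000
draws <- matrix(NA_real_, n_iter, 2, dimnames = list(NULL, c("alpha", "beta")))
cur <- c(1, 1); cur_lp <- log_post(cur[1], cur[2])
for (i in seq_len(n_iter)) {
  prop <- cur + rnorm(2, sd = 0.25)                      # symmetric random-walk proposal
  prop_lp <- log_post(prop[1], prop[2])
  if (log(runif(1)) < prop_lp - cur_lp) { cur <- prop; cur_lp <- prop_lp }
  draws[i, ] <- cur
}
colMeans(draws[-(1:1000), ])                             # posterior means after burn-in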
I don't have a great example off the top of my head, but MH is easy compared to direct sampling whenever the parameter's prior is not conjugate with that parameter's likelihood. In fact this is the only reason I have ever seen MH preferred. A toy example is that $p \sim \text{Beta}(\alpha, \beta)$, and you wanted to have (independent) priors $\alpha, \beta \sim \text{Gamma}()$. This is not conjugate and you would need to use MH for $\alpha$ and $\beta$.
This presentation gives an example of a Poisson GLM which uses MH for drawing the GLM coefficients.
If you don't already know, it might be worth noting that direct sampling is just the case of MH when we always accept the drawn value. So whenever we can direct sample we should, to avoid having to tune our proposal distribution. | MCMC: examples of when direct sampling is difficult (but Metropolis Hastings is easy)
I don't have a great example off the top of my head, but MH is easy compared to direct sampling whenever the parameter's prior is not conjugate with that parameter's likelihood. In fact this is the on |
50,028 | Mixed effects modelling; what to do when model is over-specified? | The Keep it maximal proposal is not to be taken as a dogma. Be more pragmatic, and try to determine what level of model complexity your data will support (or at least a maximal level that will be supported).
Computationally speaking: The IWRLS estimation procedure used might not converge to the optimal parameter values; as a result your inference will be wrong. In addition, a large number of parameters in a model may result in a very flat (conceptually speaking) log-likelihood surface and as a consequence the optimization problem you were previously solving "easily" just becomes extremely hairy.
The reasonable thing to do is to reduce the number of groups you are estimating. Right now you have an over-parametrized LME model; as you assume a model $y\sim N(X\beta,ZDZ^T+\sigma^2I)$, what is written essentially has more variance parameters than data points.
Check for starters something like: y ~ x1*x2*z1*z2 + (1+z1|ID) + (1+z2|ID) if you feel you should have correlated random slopes and intercepts in your model. This is already quite explicit for a covariance structure anyway. You can modify the model later if it does not fit your modeling assumptions.
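In lme4 syntax that reduced specification would be fitted as below (dat, y, x1, x2, z1, z2 and ID are the question's names, assumed available):
library(lme4)
m <- lmer(y ~ x1 * x2 * z1 * z2 + (1 + z1 | ID) + (1 + z2 | ID), data = dat)
summary(m)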
And to get back to your original final question: No, there is no way to automatically specify the maximal random effects structure of your model; unfortunately there is no $silver$ $bullet$ for that statistical question. | Mixed effects modelling; what to do when model is over-specified?
The Keep it maximal proposal is not to be taken as a dogma. Be more pragmatic, and try to determine what level of model complexity your data will support (or at least a maximal level that will be supported).
Computationally speaking: The IWRLS estimation procedure used might not converge to the optimal parameter values; as a result your inference will be wrong. In addition, a large number of parameters in a model may results in a very flat (conceptually speaking) log-likelihood surface and as a consequence the optimization problem you were previously solving "easily" just become extremely hairy.
The reasonable thing to do is to reduce the number of groups you are estimating. Right now you have an over-parametrized LME models; as you assume a model $y\sim N(X\beta,ZDZ^T+\sigma^2I)$ what is written essentially has more variance parameters than data-points.
Check for starters something like: y ~ x1*x2*z1*z2+ (1+z1|ID) + (1+z2|ID)) if you feel you should have correlated random slopes and intercepts in your model. This is already quite explicit for a covariance structure anyway. You can modify the model later if it does not fit your modeling assumptions.
And to get back to you original final question: No, there is no way to automatically specify the maximal random effects structure of your model; unfortunately there is no $silver$ $bullet$ for that statistical question. | Mixed effects modelling; what to do when model is over-specified?
The Keep it maximal proposal is not to be taken as a dogma. Be more pragmatic, and try to determine what level of model complexity your data will support (or at least a maximal level that will be supp |
50,029 | demonstration of benefits of ridge regression over ordinary regression | I try for an answer, but a rather general one.
(1) It depends on what you mean by "performing better". Often, performance is measured in terms of the capability to generalize and forecast. For this cross-validation is an often used tool, where you repeatedly divide the data into a training and test set, fit the model using the training set, and then take the deviation between forecast and test set as a measure for the generalization capability.
(2) There are many of these situations. The main reason is that ridge regression often can avoid overfitting. A basic example is given at the beginning of Bishop's machine learning book: Here, a polynomial of order nine is fitted to random realizations of a sine curve with added noise. Without ridge regression, the fit obviously seems to overfit for $M=9$:
... well, obviously at least when you additionally see the corresponding sine curve. But even without this information, one should -- according to Ockham's razor -- prefer simple models, in this case the $M=3$ polynomial on the left.
Here are the corresponding results using ridge regression:
You see the benefits, but also the dangers. If you choose $\lambda=1$ (i.e. $\ln \lambda =0$) as is done on the right-hand side, you obtain a fit which most people will find disappointing. For $\ln \lambda = -18$ you retain a simple and obviously appropriate description similar to the $M=3$ polynomial.
The parameter $\lambda$ therefore is seen to reduce the complexity of the model. That is, you can assume a sophisticated model and let the procedure automatically reduce the complexity when it is needed. In general, this is a way to avoid the task of finding an appropriate model specific to each new dataset -- instead, you simply pick a general model and then reduce its complexity until you hopefully get the desired result. Of course, by this you get a further free parameter $\lambda$ which must be properly estimated. Again, this is often done by cross-validation.
Finally, here you see the influence of the ridge parameter on training and test error:
Naturally, with growing $\lambda$, the training error increases as the residual sum of squares becomes larger. At the same time, however, you see that the test error reaches a minimum somewhere around $\ln \lambda=-30$, which suggests that this is a good value for generalization tasks. | demonstration of benefits of ridge regression over ordinary regression
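A small self-contained R sketch in the spirit of that example (degree-9 polynomial, noisy sine, ordinary least squares versus ridge judged on a test grid; the exact numbers depend on the seed):
set.seed(3)
x  <- seq(0, 1, length.out = 12)
y  <- sin(2 * pi * x) + rnorm(12, sd = 0.25)
P  <- poly(x, 9)                                   # orthogonal degree-9 basis
ridge_fit <- function(lambda) {
  X <- cbind(1, P)
  solve(crossprod(X) + diag(c(0, rep(lambda, 9))), crossprod(X, y))  # intercept not penalised
}
xt <- seq(0, 1, length.out = 200)
Xt <- cbind(1, predict(P, xt))
rmse <- function(lambda) sqrt(mean((sin(2 * pi * xt) - Xt %*% ridge_fit(lambda))^2))
round(sapply(c(0, 1e-4, 1e-2, 0.1, 1), rmse), 3)   # lambda = 0 is the ordinary regression fit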
(1) It depends on what you mean by "performing better". Often, performance is measured in terms of the capability to generalize and forecast. For this cr | demonstration of benefits of ridge regression over ordinary regression
I try for an answer, but a rather general one.
(1) It depends on what you mean by "performing better". Often, performance is measured in terms of the capability to generalize and forecast. For this cross-validation is an often used tool, where you repeatedly divide the data into a training and test set, fit the model using the training set, and then take the deviation between forecast and test set as a measure for the generalization capability.
(2) There are many of these situations. The main reason is that ridge regression often can avoid overfitting. A basic example is given at the beginning of Bishop's machine learning book: Here, a polynomial of order nine is fitted to random realizations of a sine curve with added noise. Without ridge regression, the fit obviously seems to overfit for $M=9$:
... well, obviously at least when you additionally see the corresponding sine curve. But even without this information, one should -- according to Ockham's razor -- prefer simple models, in this case the $M=3$ polynomial on the left.
Here are the corresponding results using ridge regression:
You see the benefits, but also the dangers. If you choose $\lambda=1$ (i.e. $\ln \lambda =0$) as is done on the right-hand side, you obtain a fit which most people will find disappointing. For $\ln \lambda = -18$ you retain a simple and obviously appropriate description similar to the $M=3$ polynomial.
The parameter $\lambda$ therefore is seen to reduce the complexity of the model. That is, you can assume a sophisticated model and let the procedure automatically reduce the complexity when it is needed. In general, this is a way to avoid the task of finding an appropriate model specific to each new dataset -- instead, you simply pick a general model and then reduce its complexity until you hopefully get the desired result. Of course, by this you get a further free parameter $\lambda$ which must be properly estimated. Again, this is often done by cross-validation.
Finally, here you see the influence of the ridge parameter on training and test error:
Naturally, with growing $\lambda$, the training error increases as the residual sum of squares becomes larger. At the same time. however, On the other hand, you see that the test-error reaches a minimum somewhere around $\ln \lambda=-30$, which suggests that this is a good value for generalization tasks. | demonstration of benefits of ridge regression over ordinary regression
I try for an answer, but a rather general one.
(1) It depends on what you mean by "performing better". Often, performance is measured in terms of the capability to generalize and forecast. For this cr |
50,030 | Probability generating function for negative values of random variables? | As whuber stated above, there is really no problem here as long as the resultant sum is well-defined in some neighborhood of a finite point in $\mathbb{C}$ (we can always shift things around to find the moments, if they exist, or probabilities, no matter what the number is). There are a few things you can think about. First, if the probabilities $p_n$ have support on $\text{Supp}(p) = \{-N, 1-N,...\} \cup \mathbb{N}$, then there is clearly no problem at all since we will define the generating function as
$$
G(z) = \sum_{n \geq -N}p_n z^n = z^{-N}\sum_{n \geq 0}q_n z^n,
$$
where $q_n = p_{n - N}$, so that you are back to the familiar positive power case.
Likewise, if your probability distribution is supported on $-\mathbb{N} \cup \{0,1,...,N\}$, define $r_n = p_{N-n}$ and a similar result is apparent.
So the only real issue becomes when your series is a doubly-infinite Laurent series:
$$
G(z) = \sum_{-\infty}^{\infty}p_n z^{n}.
$$
with nonzero $p_n\ \forall n < 0$.
This definitely can be an issue---see the wikipedia article on Laurent series for a discussion. The gist of the problem is that, in this case, $G$ might have an essential singularity or some nontrivial behavior on the inner disc of its annulus of convergence.
This gives (yet another) reason to prefer characteristic functions when possible... | Probability generating function for negative values of random variables? | As whuber stated above, there is really no problem here as long as the resultant sum is well-defined in some neighborhood of a finite point in $\mathbb{C}$ (we can always shift things around to find t | Probability generating function for negative values of random variables?
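As a concrete doubly-infinite example, take the Skellam distribution $X = N_1 - N_2$ with $N_1 \sim \text{Poisson}(\mu_1)$ and $N_2 \sim \text{Poisson}(\mu_2)$ independent. Then
$$
G(z) = E[z^X] = \exp\{\mu_1(z-1) + \mu_2(1/z - 1)\},
$$
a Laurent series with nonzero coefficients at every integer power, converging on the whole annulus $0 < |z| < \infty$, with an essential singularity at $z=0$ of exactly the kind mentioned above.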
As whuber stated above, there is really no problem here as long as the resultant sum is well-defined in some neighborhood of a finite point in $\mathbb{C}$ (we can always shift things around to find the moments, if they exist, or probabilities, no matter what the number is). There are a few things you can think about. First, if the probabilities $p_n$ have support on $\text{Supp}(p) = \{-N, 1-N,...\} \cup \mathbb{N}$, then there is clearly no problem at all since we will define the generating function as
$$
G(z) = \sum_{n \geq -N}p_n z^n = \sum_{n \geq 0}q_n z^n,
$$
where $q_n = p_{n - N}$, so that you are back to the familiar positive power case.
Likewise, if your probability distribution is supported on $-\mathbb{N} \cap \{0,1,...,N\}$, define $r_n = p_{N-n}$ and a similar result is apparent.
So the only real issue becomes when your series is a doubly-infinite Laurent series:
$$
G(z) = \sum_{-\infty}^{\infty}p_n z^{n}.
$$
with nonzero $p_n\ \forall n < 0$.
This definitely can be an issue---see the wikipedia article on Laurent series for a discussion. The gist of the problem is that, in this case, $G$ might have an essential singularity or some nontrivial behavior on the inner disc of its annulus of convergence.
This gives (yet another) reason to prefer characteristic functions when possible... | Probability generating function for negative values of random variables?
As whuber stated above, there is really no problem here as long as the resultant sum is well-defined in some neighborhood of a finite point in $\mathbb{C}$ (we can always shift things around to find t |
50,031 | Probability generating function for negative values of random variables? | I believe it's basically because usually the treatment relies on results that apply to sums of non-negative powers.
An example of the sort of thing that's relied on would be Abel's theorem. With r.v.s that take negative values, you'd have to try to establish the radius of convergence without it.
So there are some issues to deal with when the X can be negative (though of course we still have MGFs, characteristic functions and so on in any case). You might find this [1] and some of its references useful. The discussion is of extending from random variables on the non-negative integers to more general cases (as an example, the treatment establishes that a connection between the characteristic function and pgf still holds for variables also taking negative values as long as the tails decay at least exponentially).
So it seems it can be extended in the sense you'd like, at least under certain conditions.
[1] Esquível, M.L. (2004), Probability Generating Functions For Discrete Real Valued Random Variables (author's link)
An example of the sort of thing that's relied on would be Abel's theorem. With r.v. | Probability generating function for negative values of random variables?
I believe it's basically because usually the treatment relies on results that apply to sums of non-negative powers.
An example of the sort of thing that's relied on would be Abel's theorem. With r.v.s that take negative values, you'd have to try to establish the radius of convergence without it.
So there are some issues to deal with when the X can be negative (though of course we still have MGFs, characteristic functions and so on in any case). You might find this [1] and some of its references useful. The discussion is of extending from random variables on the non-negative integers to more general cases (as an example, the treatment establishes that a connection between the characteristic function and pgf still holds for variables also taking negative values as long as the tails decay at least exponentially).
So it seems it can be extended in the sense you'd like, at least under certain conditions.
[1] Esquível, M.L. (2004),
Probability Generating Functions For Discrete Real Valued Random Variables,
(author's link) | Probability generating function for negative values of random variables?
I believe it's basically because usually the treatment relies on results that apply to sums of non-negative powers.
An example of the sort of thing that's relied on would be Abel's theorem. With r.v. |
50,032 | Probability generating function for negative values of random variables? | The (probability) generating function (a/k/a the factorial moment generatring function) is defined as $$h_X(t) = E\{t^X\}.$$ The garden-variety moment generating function is defined as $$M_X(t) = E\{e^{tX}\}.$$ For either to be useful it must exist in a neighborhood of 0 (to pull moments off) or in the case of $h_X(t)$ a neighborhood of 1 (to pull discrete probabilities off). In either case, if one exists, the other must also exist: What is $E\{e^{(\ln{u}\cdot X}\}$?
This shows a backdoor to $h_X$: evaluate $M_X(\ln{t})$. Do be sure to check the neighborhoods of 0 and 1 once you've done the formalism. What is unclear to me at this point is how you would strip the pmf from this. For non-negative discrete random variables, $f_X(k) = h_X^{(k)}(1)$. Do you integrate $h$ to get to negative values of $X$? Or perhaps you don't or can't strip off the pmf values for $x<0$? It's late here, I'll have to think on this more tomorrow. | Probability generating function for negative values of random variables? | The (probability) generating function (a/k/a the factorial moment generatring function) is defined as $$h_X(t) = E\{t^X\}.$$ The garden-variety moment generating function is defined as $$M_X(t) = E\{e | Probability generating function for negative values of random variables?
The (probability) generating function (a/k/a the factorial moment generatring function) is defined as $$h_X(t) = E\{t^X\}.$$ The garden-variety moment generating function is defined as $$M_X(t) = E\{e^{tX}\}.$$ For either to be useful it must exist in a neighborhood of 0 (to pull moments off) or in the case of $h_X(t)$ a neighborhood of 1 (to pull discrete probabilities off). In either case, if one exists, the other must also exist: What is $E\{e^{(\ln{u}\cdot X}\}$?
This shows a backdoor to $h_X$: evaluate $M_X(\ln{t})$. Do be sure to check the neighborhoods of 0 and 1 once you've done the formalism. What is unclear to me at this point is how you would strip the pmf from this. For non-negative discrete random variables, $f_X(k) = h_X^{(k)}(1)$. Do you integrate $h$ to get to negative values of $X$? Or perhaps you don't or can't strip off the pmf values for $x<0$? It's late here, I'll have to think on this more tomorrow. | Probability generating function for negative values of random variables?
The (probability) generating function (a/k/a the factorial moment generatring function) is defined as $$h_X(t) = E\{t^X\}.$$ The garden-variety moment generating function is defined as $$M_X(t) = E\{e |
50,033 | Cointegration - same thing as stationary residuals? | No, this is not true. In order to consider a cointegrating relationship your variables need to be at least integrated of order one, $I(1)$. In order to carry out a cointegration analysis you would first have to conduct a unit root test to see if your time series are in fact $I(1)$. Then you could conduct a cointegration test on the relevant series, some of the more popular being the Johansen trace test/maximum eigenvalue test (estimated using maximum likelihood) or the more robust Engle-Granger method (estimated using OLS). If you only have two variables or only suspect one cointegrating relationship you could use the Engle-Granger method, while the Johansen test can accommodate several cointegrating relationships.
Consider an economic example: You are interested in testing whether or not money and output cointegrate. You would first run a/several unit root test/s on the series in order to see whether or not they were in fact $I(1)$. If they were in fact $I(1)$ you could test for a cointegrating relationship using the Engle-Granger method since you only have two variables, hence you can at most have one cointegrating vector.
First you would run the regression: $y_{t}=\beta_{0}+\beta_{1}m_{t}+u_{t}$, where $\beta_{0}$ is a constant, $m_{t}$ is money, $y_{t}$ is output and $u_{t}$ is the error term. After running this regression you would run a unit root test on the residuals to see if they were stationary or $I(1)$. If they are stationary the series cointegrate! What is crucial for cointegration is that the series share a common stochastic trend and that they are at least integrated of order 1. By just regressing one $I(1)$ series on another $I(1)$ series you could end up with a spurious regression if they do not share a common stochastic trend, i.e. cointegrate. You could also deal with the spurious regression problem by including enough lags for each variable of interest when using a dynamic model!
Note that you can have different kinds of non-stationarity. A trend-stationary series which has an upwards trend is non-stationary. By detrending the series or including a time trend you can make these residuals stationary (see the Frisch–Waugh–Lovell theorem) although there is no cointegration present at all. Further, you can have non-stationary series due to level shifts (structural breaks) or sub-samples with differing degrees of volatility. You can have an $I(1)$ series which can be made stationary by differencing it once.
Hopefully this answered your question. I would recommend you to read up on stationarity, integration and cointegration. | Cointegration - same thing as stationary residuals?
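A hedged R sketch of the Engle-Granger two-step procedure on simulated series that cointegrate by construction (the tseries package supplies the tests used here):
set.seed(123)
n <- 200
m <- cumsum(rnorm(n))                 # an I(1) "money" series
y <- 0.5 + 1.2 * m + rnorm(n)         # cointegrated with m by construction
library(tseries)
adf.test(m); adf.test(y)              # both levels should look I(1)
step1 <- lm(y ~ m)                    # step 1: static cointegrating regression
adf.test(residuals(step1))            # step 2: unit root test on the residuals
po.test(cbind(y, m))                  # Phillips-Ouliaris test, with appropriate critical values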
$. In order to carry out a cointegration analysis | Cointegration - same thing as stationary residuals?
No, this is not true. In order to consider a cointegrating relationship your variables need to be at least integrated of order one, $I\left(1\right)
$. In order to carry out a cointegration analysis you would first have to conduct a unit root test to see if your time series are in fact $I\left(1\right)
$. Then you could conduct a cointegration test on the relevant series, some of the more popular being the Johansen trace test/maximum eigenvalue test (estimated using maximum likelihood) or the more robust Engle-Granger method (estimated using OLS). If you only have two variables or only suspect one cointegrating relationship you could use the Engle-Granger while the Johansen test can accommodate several cointegrating relationship.
Consider an economic example: You are interested in testing whether or not the money and output cointegrate. You would first run a/several unit root test/s on the series in order to see whether or not they were in fact $I(1)
$. If they were in fact $I\left(1\right)
$ you could test for a cointegrating relationship using the Engle-Granger method since you only have two variables, hence you can at most have one cointegrating vector.
First you would run the regression: $y_{t}=\beta_{0}+\beta_{1}m_{t}+u_{t}
$, where $\beta_{0}
$ is a constant, $m_{t}
$ is money, $y_{t}
$ is output and $u_{t}
$ is the error term. After running this regression you would run a unit root test on the residuals to see if they were stationary or $I\left(1\right)
$. If they are stationary the series cointegrate! What is crucial for cointegration is that the series share a common stochastic trend and that they are at least integrated of order 1. By just regressing one $I\left(1\right)
$ series on another $I\left(1\right)
$ series you could end up with a spurious regression if they do not share a common stochastic trend, i.e. cointegrate. You could also deal with the spurious regression problem by including enough lags for each variable of interest when using a dynamic model!
Note that you can have different kinds of non-stationarity. A trend-stationary series which has an upwards trend is non-stationary. By detrending the series or including a time trend you can make these residuals stationary (see the Frisch–Waugh–Lovell theorem) although there is no cointegration present at all. Further, you can have non-stationary series due to level shifts (structural breaks) or sub-samples with differing degree of volatility. You can have an $I\left(1\right)
$ series which can be made stationary by differencing it once.
Hopefully this answered your question. I would recommend you to read up on stationarity, integration and cointegration. | Cointegration - same thing as stationary residuals?
No, this is not true. In order to consider a cointegrating relationship your variables need to be at least integrated of order one, $I\left(1\right)
$. In order to carry out a cointegration analysis |
50,034 | Smoothing dirty data? | There are methods which use the knowledge of the point in time of the unusual event which leads to a window of response before and after the known event. These methods are called different things but one name is Dynamic Regression or Transfer Functions or armaX models. | Smoothing dirty data? | There are methods which use the knowledge of the point in time of the unusual event which leads to a window of response before and after the known event. These methods are called different things but | Smoothing dirty data?
There are methods which use the knowledge of the point in time of the unusual event which leads to a window of response before and after the known event. These methods are called different things but one name is Dynamic Regression or Transfer Functions or armaX models. | Smoothing dirty data?
There are methods which use the knowledge of the point in time of the unusual event which leads to a window of response before and after the known event. These methods are called different things but |
50,035 | Is $H=\min(t_1,...,t_n)$ a Copula? | One way to prove a function $H$ on the $n$ cube $[0,1]^n$ is a copula is to exhibit a random variable whose distribution function restricted to the cube is $H.$
To that end, let $X$ be a univariate random variable with a uniform distribution on $[0,1],$ which means that for all $t\in[0,1],$ $\Pr(X\le t)=t.$ Define the $n$-vector-valued variable $\mathbf X$ as
$$\mathbf{X} = (X,X,\ldots, X).$$
Let $(t_1,\ldots, t_n)\in [0,1]^n$ and note (to justify the third equality below) that $\min(t_1,\ldots, t_n)\in[0,1].$ Successive application of the definitions of $\mathbf X,$ $\min,$ the uniform distribution, and $H$ justifies these four equalities:
$$\begin{aligned}
\Pr(X_1\le t_1, \ldots, X_n\le t_n) &= \Pr(X \le t_1, \ldots, X\le t_n) \\
&= \Pr(X \le \min(t_1,\ldots,t_n)) \\
&= \min(t_1,\ldots, t_n) \\
&= H(t_1,\ldots, t_n),
\end{aligned}$$
QED. | Is $H=\min(t_1,...,t_n)$ a Copula? | One way to prove a function $H$ on the $n$ cube $[0,1]^n$ is a copula is to exhibit a random variable whose distribution function restricted to the cube is $H.$
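A quick numerical check of this argument in R for $n = 3$:
set.seed(1)
x <- runif(1e5)
tt <- c(0.7, 0.4, 0.9)
mean(x <= tt[1] & x <= tt[2] & x <= tt[3])   # empirical H(t1, t2, t3)
min(tt)                                      # the claimed value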
To that end, let $X$ be a univariate ra | Is $H=\min(t_1,...,t_n)$ a Copula?
One way to prove a function $H$ on the $n$ cube $[0,1]^n$ is a copula is to exhibit a random variable whose distribution function restricted to the cube is $H.$
To that end, let $X$ be a univariate random variable with a uniform distribution on $[0,1],$ which means that for all $t\in[0,1],$ $\Pr(X\le t)=t.$ Define the $n$-vector-valued variable $\mathbf X$ as
$$\mathbf{X} = (X,X,\ldots, X).$$
Let $(t_1,\ldots, t_n)\in [0,1]^n$ and note (to justify the third equality below) that $\min(t_1,\ldots, t_n)\in[0,1].$ Successive application of the definitions of $\mathbf X,$ $\min,$ the uniform distribution, and $H$ justifies these four equalities:
$$\begin{aligned}
\Pr(X_1\le t_1, \ldots, X_n\le t_n) &= \Pr(X \le t_1, \ldots, X\le t_n) \\
&= \Pr(X \le \min(t_1,\ldots,t_n)) \\
&= \min(t_1,\ldots, t_n) \\
&= H(t_1,\ldots, t_n),
\end{aligned}$$
QED. | Is $H=\min(t_1,...,t_n)$ a Copula?
One way to prove a function $H$ on the $n$ cube $[0,1]^n$ is a copula is to exhibit a random variable whose distribution function restricted to the cube is $H.$
To that end, let $X$ be a univariate ra |
50,036 | Confidence interval for a proportion estimated through stratified sampling | I have no real answer for you, only some thoughts. You are unlucky in that illness is so rare.
I'll first note that this design would have caused trouble even if illness was common. For example, the SE formula for the weighted prevalence requires $n_h$>1 observation per stratum (Cochran, 1977, Chapter 5).
You ask if it is okay to ignore the stratification and apply a formula for an exact CI. There's no real justification for this formula: the theory assumes simple random sampling (SRS). In that design every observation has the same probability of selection. In your design, a stratified sample, the probabilities range from 1/2 to 1/15, or, more formally $1/N_h$, where $N_h$ is the size of stratum h. The SRS CI endpoint will be biased if you over-sampled or under-sampled strata with higher expected prevalences.
You can, however check on this directional bias. You have some knowledge of risk predictors for the illness-the characteristics you used to form the strata. As best you can, form G groups of strata with different levels of risk and rank the groups from lowest expected risk to highest. Then plot the individual $N_h$ and the group mean $N_h$ against group number. A positive trend (average $N_h$ increasing with group number) will indicate that you under-sampled the higher risk groups. This might partly account for the failure to see any cases. A negative trend would show that you over-sampled the high risk groups. In that case the failure to see cases is partly due to bad luck and to taking too small a sample.
Theory for Simple Random Sampling without replacement
Let the unknown number of patients with illness be D, assumed >0; then the prevalence of illness $P$ is
$$
P = \frac{D}{N}
$$
Note that D can take only integer values.
Suppose number of observed patients with the condition is T. Then T has a hypergeometric distribution, not a binomial distribution, because the population size is finite (Cochran, 1977, p. 55). (This accounts for the appearance of the finite population correction for variances in sampling without replacement).
The parameters for the hypergeometric distribution are $N$, the population size, $D$ the number of patients with the illness in the population, and $n$, the sample size. The probability that $T = d$ is:
$$
\text{Pr($T =d \vert N, n,D$)} =\dfrac{ { D\choose{d}} {N -D\choose{n-d}}} {{N \choose{n}}}
$$
Confidence interval for SRS without replacement
I'll demonstrate the CI that would have been valid for a simple random sample. With population size $N$, events in the population, $d$ events in the sample, and a sample size of $n$. The one-sided $1-\alpha$ endpoint for $D$ is the largest value D for which
$$
P(T \leq d \> \vert \> N, n, D) > \alpha
$$
where T has a hypergeometric distribution with parameters (N, n, and D). This CI is based on inverting a hypothesis test about D. See, e.g. Blaker, 2000.
With d = 0, this is
$$
P(T =0 \> \vert \> N, n, D) > \alpha
$$
In your study, $N=2500$, $n= 40$, and $d=0$. Suppose this data had been generated by a SRS. I used Stata's hypergeometric function to generate a one-sided 80% CI. I choose 80% because in such a situation, my practice is to trade confidence for a smaller interval.
Under SRS, the upper bound of the one-sided 80% (actually 79.8%) hypergeometric CI for $D$ would be $D_u$ =9, which corresponds to a prevalence of $\hat{P}$= 9/250 = 3.6%. The corresponding one-sided binomial interval which ignores the finite sampling would $\hat{P}$= 3.9%. You can see that the hypergeometric interval is shorter. Both intervals are likely to be conservative, with the true probability of coverage greater than the nominal 80% (Blaker, 2000).
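The same search is easy in R (the values below are the ones that reproduce the worked figures above; adjust them to your own study):
N <- 250; samp <- 40; alpha <- 0.20                               # population size, sample size, 1 - confidence
p0  <- sapply(0:(N - samp), function(D) dhyper(0, m = D, n = N - D, k = samp))  # P(T = 0 | D)
D_u <- max(which(p0 > alpha)) - 1                                 # largest D still consistent with d = 0
c(D_u = D_u, prevalence_bound = D_u / N)                          # upper bound for D and for the prevalence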
Actual distribution: weighted sum of Bernoulli variables
Let $h$ index strata. In stratum $h$, let $n_h$ be the sample size (= 1 here), $d_h$ be the number with illness in the sample (= 0 or 1 here), $D_h$ be the unknown number of patients in the population with illness, and $P_h = D_h/N_h$ be the unknown prevalence in stratum $h$.
The sum of the $D_h$, $D$, is the unknown number of ill patients in the population. The estimated prevalence is
\begin{align}
\hat{P} & = \frac{\hat{D}}{N}
\end{align}
with
\begin{align}
\hat{D} = \sum_h \dfrac{N_h}{n_h} d_h = \sum_h N_h d_h
\end{align}
With $n_h = 1$, the distribution of $d_h$ is that of a Bernoulli 0-1 random variable with probability $p_h = D_h/N_h$. Thus $\hat{D}$ is a weighted sum of these.
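A toy illustration of the estimator (strata and outcomes invented here purely for illustration):

    # Weighted estimate D_hat = sum_h (N_h / n_h) d_h with one sampled patient per stratum.
    N_h <- c(2, 5, 10, 15)        # hypothetical stratum sizes
    d_h <- c(0, 1, 0, 0)          # illness indicator for the sampled patient in each stratum
    D_hat <- sum(N_h * d_h)       # 5
    P_hat <- D_hat / sum(N_h)     # estimated prevalence, 5/32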
I don't know how to do a hypothesis test for $D$ in this situation, so I don't have a test to invert to get a confidence interval. The problem is that there is no single probability distribution for $\hat{D}$ for each possible value $D_0$; there is a different distribution for each compatible set of the $D_h$ for which $\sum_h D_h = D_0$.
Other Designs
Confronted with a population with a rare outcome, there are not many good choices. A larger sample would have helped. For a rare outcome such as yours, I would have tried inverse sampling: sample randomly until one case was found, so that the number of trials is the random variable. There are CI formulas for the case of independent samples (see Zou, 2010), but I haven't found one for the case of without-replacement sampling, where the relevant distribution is the "negative hypergeometric", which is the same as the beta-binomial distribution.
There is a theory of optimal design, and I state it for background. According to the theory, selection probability $\pi$ for an observation should be proportional to the expected "size" of the observation, in this case its risk of disease. For stratified sampling (Cochran, 1977, Chapter 5), you'd form a small number of strata in which the observations have similar expected very low risks $P_h$, then make the selection fraction $n_h/N_h \propto P_h(1-P_h)$, which is very close to $P_h$ for small risks. It's unlikely that you'd be able to quantify actual risks, but you get the idea: higher risk patients are selected with higher probabilities.
A practical tactic is to identify a group of $N_1$ patients with risks so low that you are very sure there are no cases among them, and omit them from the inverse sampling. This leaves $N_2 = N - N_1$ people. If the upper CI endpoint from inverse or random sampling in this group is $\hat{P_2}$, the estimated prevalence in the population is $\hat{P} = \dfrac{N_2}{N} \hat{P_2}$.
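For illustration only (all numbers invented): with $N = 250$, ruling out $N_1 = 100$ clearly low-risk patients leaves $N_2 = 150$; if sampling in that group gave an upper bound of $\hat{P_2} = 4\%$, the population bound would be $\hat{P} = \dfrac{150}{250} \times 4\% = 2.4\%$.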
References
H. Blaker, 2000. Confidence curves and improved exact confidence intervals for discrete distributions. Canadian Journal of Statistics 28, no. 4: 783-798.
Cochran, William G. 1977. Sampling Techniques. New York: Wiley.
Zou, G.Y. 2010. Confidence interval estimation under inverse sampling. Computational Statistics & Data Analysis 54, no. 1: 55-64.
50,037 | Expectation of conditional normal distribution | To summarize the comments:
Since we assume that $s_1$ and $s_2$ jointly follow a standard bivariate normal distribution, with correlation coefficient $\rho$, then the joint density is
$$f(s_1,s_2) = \frac{1}{2 \pi \sqrt{1-\rho^2}}
\exp\left\{-\frac{s_1^2 +s_2^2 -2\rho s_1s_2}{2(1-\rho^2)}\right\} $$
We also have
$$E(s_1|s_1>r_1,\ s_2>r_2) = \frac {E(s_1;\{s_1>r_1,\ s_2>r_2\})}{P(s_1>r_1,\ s_2>r_2)}$$
$$=\frac {\int_{r_2}^{\infty}\int_{r_1}^{\infty}s_1f(s_1,s_2)ds_1ds_2}{\int_{r_2}^{\infty}\int_{r_1}^{\infty}f(s_1,s_2)ds_1ds_2} $$
The fact that the conditioning statement includes, and places bounds on, both variables does not permit us to simplify this ratio of integrals (as would be possible in the cases described by the OP in the comments). Moreover, as @whuber writes, these integrals do not have an analytical solution for $\rho \notin \{0,\pm1\}$, and must be computed numerically for each $\{r_1, r_2\}$.
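Although there is no closed form, the ratio of integrals is easy to approximate; here is a simple Monte Carlo sketch in R (my own illustration, with arbitrary values $\rho = 0.5$, $r_1 = 0$, $r_2 = 0.5$):

    set.seed(1)
    rho <- 0.5; r1 <- 0; r2 <- 0.5
    s1 <- rnorm(1e6)
    s2 <- rho * s1 + sqrt(1 - rho^2) * rnorm(1e6)   # standard bivariate normal with correlation rho
    keep <- (s1 > r1) & (s2 > r2)
    mean(s1[keep])                                   # estimate of E(s1 | s1 > r1, s2 > r2)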
50,038 | I would like help calculating the probability of a simple problem | How would you keep track of the person's walk? All you need to do is (1) remember whether their previous step was a fall or not and (2) note when two falls occur in a row. That is a data structure with three states:
Previous step was not a fall.
Previous step was a fall.
At some point in the past, two steps in a row were falls.
Each step in the walk is a random transition between states. The new state is determined entirely by whether the next step is a fall (with probability $p$) or not (with complementary probability $1-p$). Of course once state (3) is entered it doesn't really matter what the next step is: you stay in state (3) with probability $1$.
This information can be summarized with a transition matrix $\mathbb{Q}$ whose rows and columns index the states. Row $i$ lists the transition probabilities from state $i$ into the other states.
$$\mathbb{Q} = \left(
\begin{array}{ccc}
1-p & p & 0 \\
1-p & 0 & p \\
0 & 0 & 1 \\
\end{array}
\right).$$
The same information can also be neatly drawn with a graph having one node for each state and directed edges denoting the transitions. The edge labels give the transition probabilities:
There are several ways to calculate with this information. The machinery of linear algebra shows how to compute powers of $\mathbb{Q}$ and extract the answers from their coefficients. A more elementary approach exploits the inherent recursive nature of this system. Let $f(n,p,i)$ be the chance of reaching state (3) (two falls in a row) within $n$ steps beginning at state $i$. The first part of the question asks for $f(n,p,1)$ (with $n=1000$ and $p=10^{-5}$). The graph tells us
$f(n,p,3) = 1$ because we are already in state (3).
$f(n,p,2) = (1-p) f(n-1,p,1) + p f(n-1,p,3)$ because from state (2) transitions are possible to state (1) (with probability $1-p$) and to state (3) (with probability $p$).
$f(n,p,1) = (1-p) f(n-1,p,1) + p f(n-1,p,2)$ for comparable reasons.
These can be combined by solving for $f(n,p,1)$ in terms of $f(n^\prime,p,1)$ for smaller values of $n^\prime$. To abbreviate the notation let $f(n) = f(n,p,1)$:
$$f(n) = p^2 + (1-p)f(n-1) + p(1-p)f(n-2).$$
If you were to work this out manually, you would begin with a list of the known values of $f$ for $n=0,1$: $$0, 0, \ldots$$ Then you would augment this list using the recurrence relation $f(2) = p^2 + (1-p) f(1) + p(1-p) f(0) = p^2$, producing $$0, 0, p^2, \ldots$$ At the next step $f(3) = p^2 + (1-p) f(2) + p(1-p)f(1) = p^2 + (1-p)p^2 = 2p^2 - p^3$, extending the list to $0,0,p^2, 2p^2-p^3,\ldots$. This straightforward and fast method will enable you to compute values of $f$ for small $n$ with no trouble, as illustrated with this R code:
f <- function(n, p=10^-5, x=double(0)) {
  y <- x                                   # 'x' may hold previously computed values (a cache)
  if (length(y) < 2) y <- c(0,0,NA)        # initial conditions f(0) = f(1) = 0
  if (length(y) <= n) y <- c(y, rep(NA, 2^ceiling(log(n+1,2))-length(y)))   # grow the cache
  i <- which.max(is.na(y))                 # first entry not yet computed
  if (i <= n+1) for (j in i:(n+1)) y[j] <- p^2 + (1-p)*y[j-1] + p*(1-p)*y[j-2]   # the recurrence
  return (list(f=y[n+1], cache=y))         # y[k] stores f(k-1)
}
f(1000, 10^(-5))$f
The output of 9.9899e-08 was produced in less than $0.006$ seconds on this machine.
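As a cross-check, the linear-algebra route mentioned above gives the same number; this sketch is mine, not part of the original answer:

    # The (1,3) entry of Q^n is the chance of having reached state (3) within n steps from state (1).
    p <- 1e-5
    Q <- matrix(c(1-p, p, 0,
                  1-p, 0, p,
                    0, 0, 1), nrow = 3, byrow = TRUE)
    Qn <- diag(3)
    for (i in 1:1000) Qn <- Qn %*% Q
    Qn[1, 3]   # about 9.9899e-08, matching f(1000)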
A little analysis will take us a lot further.
This inhomogeneous linear difference equation for $f$ is a slight generalization of the Fibonacci numbers $F_n$, which satisfy $F_{n} = F_{n-1} + F_{n-2}$ (as if the constants $1-p$ and $p(1-p)$ were both replaced by $1$ and $p^2$ were replaced by $0$ in the equation for $f(n)$.) We may thereby emulate the well-known analyses of the Fibonacci numbers by using either linear algebra (applied to $\mathbb{Q}$) or combinatorial methods (for the linear difference equation) to obtain a closed-form solution--based on the initial conditions $f(0)=f(1)=0$--as
$$f(n) = 1-\frac{(d+p+1) \phi_{+}^n-(-d+p+1) \phi_{-}^n}{2 d}$$
where
$d = \sqrt{(1-p)^2 + 4p(1-p)} = \sqrt{1 + (2-3p)p}$
$\phi_{+} = (1-p+d)/2,\ \phi_{-} = (1-p-d)/2.$
Because $0\le p\le 1$, $\phi_{-}$ will be negative and $\phi_{+}$ will be positive (but less than $1$) and larger than $\phi_{-}$ in size. Indeed, with smallish values of $p$, $\phi_{-}$ will be close to $0$ and $\phi_{+}$ close to $1$. Thus, even for small $n$, $\phi_{-}^n$ can be treated as approximately zero. When that term is neglected we obtain
$$f(n) \approx 1-\frac{(d+p+1) \phi_{+}^n}{2 d}.$$
As an example of the utility of this expression, suppose $p=1/2$. Then $d = \sqrt{1 + (2-3/2)/2} = \sqrt{5}/2$ and $\phi_{+} = (1+\sqrt{5})/4.$ The approximate formula works out to
$$f(n) \approx 1 - 1.17082 (0.809017)^n.$$
The first few values are
$$-0.2, 0.05, 0.23, 0.380, 0.498, 0.5942, 0.6717, 0.7344, 0.78514, 0.826176, 0.859374, \ldots$$
while the correct values are
$$0.0, 0.00, 0.25, 0.375, 0.500, 0.5938, 0.6719, 0.7344, 0.78515, 0.826172, 0.859375, \ldots$$
The approximation rapidly improves as $n$ grows.
50,039 | kozachenko-leonenko entropy estimation | Use the k-th nearest neighbor instead, for k as large as needed to obtain an $\epsilon_i > 0$. To reflect this in the Kozachenko-Leonenko estimator, simply replace $\psi(1)$ with $\psi(k)$. Since k is allowed to vary from point to point, you could for instance look for the "closest distinct neighbor" each time. (If you find all your $x_i$ to be equal, simply set $\hat{H}(X) = 0$.)
50,040 | Neural Network: What if there are multiple right answers for a given set of inputs? | A neural network can in principle deal with this. Actually, I believe they are among the best models for this task. The question is whether it is modeled correctly.
Say you are looking at a regression problem and minimize the sum of squares, i.e.
$$L(\theta) = \sum_i (\hat{y}_i - y_i)^2.$$
Here, $L$ is the loss function we minimize with respect to the parameters $\theta$ of our neural net $f$, which we use to find an approximation $\hat{y}_i = f(x_i; \theta)$ of $y_i$.
What will this loss function result in for ambiguous data like $(x_1, y_1), (x_1, y_2)$ with $y_1 \neq y_2$? It will make the function $f$ predict the mean of both.
This is a property which not only holds for neural nets, but also for linear regression, random forests, gradient boosting machines etc--basically every model that is trained with a squared error.
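A tiny illustration of this averaging behaviour (my own, using ordinary least squares as the simplest squared-error model):

    d <- data.frame(x = c(0, 0, 1), y = c(-1, 1, 2))            # ambiguous targets at x = 0
    predict(lm(y ~ x, data = d), newdata = data.frame(x = 0))   # 0, the mean of -1 and 1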
It now makes sense to investigate where the squared error comes from, so that we can adapt it. I have explained elsewhere that the squared error stems from the log-likelihood of a Gaussian assumption: $p(y|x) = \mathcal{N}(f(x; \theta), \sqrt{1 \over 2})$. Gaussians are unimodal, which means that this assumption is the core error in the model. If you have ambiguous outputs, you need an output model with many modes.
The most commonly used one is mixture density networks, which assume that the output $p(y|x)$ is actually a mixture of Gaussians, e.g.
$$p(y|x) = \sum_j \pi_j(x) \mathcal{N}(y|\mu_j(x), \Sigma_j(x)).$$
Here, $\mu_j(x), \Sigma_j(x)$ and $\pi_j(x)$ are all distinct output units of the neural nets. Training is done via differentiating the log-likelihood and back-propagation.
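As a sketch of what gets minimized (my own illustration, univariate case with scalar $\sigma_j$ rather than full $\Sigma_j$):

    # Per-observation negative log-likelihood of the mixture, given the network outputs for one x.
    mdn_nll <- function(y, pi, mu, sigma) -log(sum(pi * dnorm(y, mean = mu, sd = sigma)))
    # Hypothetical two-component output for an ambiguous input whose targets sit near -1 and 1:
    mdn_nll(1, pi = c(0.5, 0.5), mu = c(-1, 1), sigma = c(0.1, 0.1))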
There are many other ways, though:
This idea is applicable also to GBMs and RFs.
A completely different strategy would be to estimate a complicated joint likelihood $p(x, y)$ which allows conditioning on $x$, yielding a complex $p(y|x)$. Efficient inference/estimation will be an issue here.
A quite different example is certain Bayesian approaches which give rise to multimodal output distributions as well. Efficient inference/estimation is a problem here as well.
50,041 | Neural Network: What if there are multiple right answers for a given set of inputs? | Perhaps an RNN can solve this "order doesn't matter" problem.
Consider the task of image captioning which has been successfully implemented by Stanford and Google. Now consider that an image might have multiple equally correct solutions, "dog playing with cat" or "cat playing with dog". I believe using an RNN (recurrent neural network) to spit out the text is the key to getting around this, because the RNN knows that if it already said "dog playing with" the next word should be "cat", and vice versa for "cat playing with" -> dog. http://cs.stanford.edu/people/karpathy/cvpr2015.pdf
50,042 | Neural Network: What if there are multiple right answers for a given set of inputs? | Firstly there is no reason for back propagation to 'fail' in the case of ambiguous data. Here is why.
Neural nets work by producing a truly highly non-linear function by composing linear functions with a non-linear activation function. The model class of neural nets consists of functions of this form. Roughly speaking, a neural net produces a function as follows: at each stage a decision is made as to how many variables (features) to create. Each new variable is created by composing the non-linear activation function with an arbitrary linear combination of the previous variables. That means that $(n+1)\times m$ constants are created. Each new variable is some unknown linear combination of the n variables of the previous stage plus a constant.
One wishes to minimize the difference between actual observations and predictions according to some loss function $L(\Theta ,x_i)$, where $\Theta$ is all the parameters created by the model, that is, the unknown sets of coefficients of all the linear functions. Thus the loss function is a function of the parameter set ${\Theta}$, and one wishes to minimize $L$ with respect to the $\theta$'s.
In the case discussed by bayerj, that loss function is $L_{\Theta} = \sum_i (y_i - F_{\Theta} (x_i) )^2$, where $i$ runs over all the observations. The model is (in theory only!) fitted by finding the parameters $\Theta$ which minimize this highly non-linear and non-convex function.
In general that is impossible to do. What one can do is find local minima of the function $L_{\Theta}$ as a function of ${\Theta} = (\theta_1, \ldots, \theta_M)$. Local minima can be calculated by various methods including gradient descent, which in the context of neural nets is called 'back propagation'. So there is nothing ambiguous about the y's being multivalued. That is because one is interested in solving the system of equations $\frac{\partial L}{\partial w_i}=0$ for $i = 1 \ldots (n+1)m$. There is no inconsistency because it treats each variable set and its outcome as constants.
I will end with a little thought experiment and a bigger thought experiment. An ordinary least squares model is a trivial case of a neural net in which the activation is linear and there is only one layer and one output. Imagine a data set consisting of $x_i = i, y_i = 3x_i + 'noise'$ and i running from 0 to 10,000. If I add a new observation $(x,y) = (0, 10)$ we have the ambiguous data points (0, noise) and (0, 10). However the other 9,999 observations favor the data point (0, noise) and the model will reflect a value much closer to zero than to 5 = (0+10)/2.
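A quick check of this thought experiment (my sketch):

    set.seed(1)
    x <- 0:10000
    y <- 3 * x + rnorm(length(x))
    fit <- lm(y ~ x, data = data.frame(x = c(x, 0), y = c(y, 10)))   # add the contradictory point (0, 10)
    predict(fit, newdata = data.frame(x = 0))                        # close to 0, not to 5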
Bigger thought experiment. Imagine one is trying to discover probability of loan default using a neural net with income data and loan to income ratio (LTI). Suppose one trains on 1000 people who don't default and 150 who do. Now add example #1151, a person with characteristics of the top 1% of the no-default crowd, but assign him to the default outcome. For example Mr. #1151 could have just discovered that his true love in life is gambling and not going to work every day. The model will still have no choice but to characterize him as default = 0. In essence, if one looks at the loss function for the first 1150 people, $L(x_i) = \mbox{nnet.train}(x_i) '=' 0$ will be almost identical. Adding $L(x_{1151})$ to the mix will still leave a loss function majorized by the behaviour on the first 1150 examples. It cannot average.
50,043 | Neural Network: What if there are multiple right answers for a given set of inputs? | If you have more than one output $o_1, o_2,..,o_n$ for the $n$ possible correct answers, I have found the following error function works for up to about 4 expected answers:
$$E(o) = \tanh|e-o_1| \tanh|e-o_2|...\tanh|e-o_n|$$
Where $e$ is the expected result which is a random one of the correct answers.
Notice that the error is $E(o_n)=0$ for all $n$ and that nowhere $E''(x)=0$.
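One possible reading of this construction (an assumption on my part, since the notation is terse) is that the correct answers are the roots of $E$, so the error vanishes at each of them and only there; a quick numeric check:

    E <- function(o, answers) prod(tanh(abs(o - answers)))
    answers <- c(0.2, 0.8)                  # hypothetical correct answers
    sapply(answers, E, answers = answers)   # both 0
    E(0.5, answers)                         # positive elsewhere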
50,044 | Univariate priors for the parameters of a Beta distribution | Any prior on $\alpha$ (or $\beta$) is admissible as long as it satisfies the requirements of the beta distribution in your parameterization, usually $\alpha >0$ and $\beta >0$, and as long as it yields a finite posterior. Assuming univariate priors and independence of $\alpha$ and $\beta$, one option might be the exponential distribution, since it's bounded by $0$. Additionally, it has a mode at $0$, meaning that plausible values will tend to be small. Some might find this attractive because they may desire only vague prior information. In this case, your prior is $$p(\alpha)=\lambda_\alpha\exp(-\lambda_\alpha \alpha)$$$$p(\beta)=\lambda_\beta\exp(-\lambda_\beta \beta)$$
But this is just an example. Any non-negative prior is an option. Modern Bayesian inference software such as Stan does not restrict you to conjugate priors.
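A minimal sketch of the resulting (unnormalized) log-posterior, assuming rates $\lambda_\alpha = \lambda_\beta = 1$ purely for illustration:

    log_post <- function(alpha, beta, y, lambda_a = 1, lambda_b = 1) {
      if (alpha <= 0 || beta <= 0) return(-Inf)
      sum(dbeta(y, alpha, beta, log = TRUE)) +
        dexp(alpha, rate = lambda_a, log = TRUE) + dexp(beta, rate = lambda_b, log = TRUE)
    }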
50,045 | Test the randomness (uniformly distributed) on a 64 bit float random generator | As it stands, this is not a good way to test whether floating point numbers are uniformly distributed. Like Aksakal, I wondered about whether the bits of the exponent part of the floating point representation would be uniformly distributed. The answer to this is that they aren't uniformly distributed, because there are very many more numbers with large exponents than there are numbers with small exponents.
I wrote a small test program that confirms this. It generates $N = 1 \text{ million}$ uniformly distributed random floating point numbers, and as a control, $N$ random integers. (There were various problems generating 64 bit floating point numbers, see e.g. here, and 32 bits seems sufficient for demonstration purposes.)
First, the control case. The plot of the bins of bits for integers is just as you suggested, with each bin $\approx N/2$.
Now for the floating point numbers. A plot of the sorted numbers is a straight line, indicating that they would pass the Kolmogorov–Smirnov test for uniformity.
But the bins are definitely not uniform.
If you plot only bins 1 to 23 together with bin 32, you do get bins $\approx N/2$, but bins 24 to 31 show a clear increasing pattern. These bits correspond precisely with the bits for the exponent in 32 bit floating point numbers. The IEEE single precision floating point definition stipulates
the least significant 23 bits are for the mantissa
the next 8 bits are for the exponent
the most significant bit is for the sign
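A minimal R sketch of this sort of bit count (my own; the original test program is not shown, and the bit ordering below assumes a little-endian machine, so columns 1-23 are the mantissa bits, 24-31 the exponent, and 32 the sign):

    N <- 1e5
    u <- runif(N)
    bytes <- writeBin(u, raw(), size = 4)                        # store as 32-bit floats
    bits  <- matrix(as.integer(rawToBits(bytes)), ncol = 32, byrow = TRUE)
    colSums(bits)   # mantissa columns near N/2; exponent columns show the increasing pattern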
Another way to see this is to consider a simpler example. Think about generating numbers in base 10 between 0 and $10^7$, with a base 10 exponent. Numbers between 0 and 1 would have an exponent of 0. Numbers between 1 and 10 would have an exponent of 1, numbers between 10 and 100 an exponent of 2, ..., and numbers between $10^6$ and $10^7$ an exponent of 7. The numbers $10^4$ to $10^7$ are $(10^7-10^4)/10^7=99.9\%$ of the range and in binary their exponents range from 101 to 111, so you'd expect the most significant bit to occur at least 99.9% of the time, not 50% of the time.
It would be possible, with some care, to use an approach like this to get the expected frequencies for each bin in the binary exponent of a floating point number, and use this in a $\chi^2$ test, but Kolmogorov–Smirnov is a better approach in theory and easy to implement. Nevertheless a test like this could pick up distributional biases in the implementation of a random number generator that Kolmogorov–Smirnov might not. For example, when I first tried generating 64 bit double precision floating point random numbers in C++, I forgot to change to a 64 bit Mersenne Twister engine. The sorted numbers give a straight line plot, but you can see from the plots of the bins of the bits that the 64 bit Mersenne Twister engine is superior to the 32 bit one (as you would expect).
(Note in both cases that the last bit, the sign bit, is zero, due to the difficulties of generating random numbers across the whole range.)
50,046 | Test the randomness (uniformly distributed) on a 64 bit float random generator | have you looked at A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications by NIST?
I think it's a great place to start your analysis.
50,047 | Interaction effects in non-linear models | The way I solved the issue that interaction effects in terms of marginal effects differ across observations is that, in my article, I did not look at interaction effects in terms of marginal effects but in terms of odds ratios.
With marginal effects you try to fit a linear line on top of a non-linear line, and this does not fit perfectly. It is these deviations that are the cause of the variation in marginal effects across observations.
There is no such "leakage" between a logit model and odds ratios, so I can describe an interaction effect in that model with just one parameter (a ratio of odds ratios) that works for all observations.
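A simulated illustration (mine, not from the text) of the single-parameter interaction on the odds-ratio scale:

    set.seed(1)
    n  <- 5000
    x1 <- rbinom(n, 1, 0.5); x2 <- rbinom(n, 1, 0.5)
    y  <- rbinom(n, 1, plogis(-1 + 0.5 * x1 + 0.5 * x2 + 0.7 * x1 * x2))
    fit <- glm(y ~ x1 * x2, family = binomial)
    exp(coef(fit)["x1:x2"])   # estimated ratio of odds ratios, near exp(0.7), the same for every observation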
50,048 | How can I estimate the shape of a curve where the predictor variable is right censored interval variable? | It turns out that the problem of regression with an interval-censored independent variable is much less studied than regression with an interval-censored dependent variable. There are at least a dozen studies on this topic, but as an applied researcher with limited mathematical statistics, I found few of them accessible.
An exception to this is a recent paper in Psychological Methods by Timothy R. Johnson and Michelle M. Wiest. In line with a number of other researchers, they frame interval censoring as a type of missing data problem. Unlike other papers which demonstrate methods that are daunting to implement, Johnson and Wiest provide a number of JAGS model specifications in an appendix which can be modified to suit the problem at hand. Their methods are extensible to all generalised linear models with interval-censored and/or top-coded covariates.
50,049 | What are some differences between confirmatory analysis and exploratory analysis? | First, EDA is done on the data set to understand the data and prepare hypotheses; then confirmatory analysis is done. In EDA we mostly do visual analysis, whereas in confirmatory analysis we take probability models into consideration.
Comparison from here:
Confirmatory Analysis
Inferential Statistics - Deductive Approach
Heavy reliance on probability models
Must accept untestable assumptions
Look for definite answers to specific questions
Emphasis on numerical calculations
Hypotheses determined at outset
Hypothesis tests and formal confidence interval estimation
Advantages
Provide precise information in the right circumstances
Well-established theory and methods
Disadvantages
Misleading impression of precision in less than ideal circumstances
Analysis driven by preconceived ideas
Difficult to notice unexpected results
Exploratory Analysis
Descriptive Statistics - Inductive Approach
Look for flexible ways to examine data without preconceptions
Attempt to evaluate validity of assumptions
Heavy reliance on graphical displays
Let data suggest questions
Focus on indications and approximate error magnitudes
Advantages
Flexible ways to generate hypotheses
More realistic statements of accuracy
Does not require more than data can support
Promotes deeper understanding of processes
Statistical learning
Disadvantages
Usually does not provide definitive answers
Difficult to avoid optimistic bias produced by overfitting
Requires judgement and artistry - can't be cookbooked
For further reading read this.
50,050 | What are some differences between confirmatory analysis and exploratory analysis? | I don't think there is a set recipe for when to perform which. You have to use the tools required for the task, whether they are most useful for an exploratory analysis or testing hypotheses. It is likely you will begin with hypotheses (that's why you collected this data in the first place right?) and then test them. Your results may not be what you expect. Then you go back to exploring the data and generate new hypotheses. This is just how the scientific method works. | What are some differences between confirmatory analysis and exploratory analysis? | I don't think there is a set recipe for when to perform which. You have to use the tools required for the task, whether they are most useful for an exploratory analysis or testing hypotheses. It is | What are some differences between confirmatory analysis and exploratory analysis?
I don't think there is a set recipe for when to perform which. You have to use the tools required for the task, whether they are most useful for an exploratory analysis or testing hypotheses. It is likely you will begin with hypotheses (that's why you collected this data in the first place right?) and then test them. Your results may not be what you expect. Then you go back to exploring the data and generate new hypotheses. This is just how the scientific method works. | What are some differences between confirmatory analysis and exploratory analysis?
I don't think there is a set recipe for when to perform which. You have to use the tools required for the task, whether they are most useful for an exploratory analysis or testing hypotheses. It is |
50,051 | Importance Sampling to evaluate integral in R | When running the code provided for the second function I get
> c(mean(Y),var(Y))
[1] 3.2981238 0.5203621
> integrate(f,0.01,1)
3.19264 with absolute error < 1.1e-06
which means that the true value of the integral is close to 3.2, not to 0.70.
If you want to integrate f from 0.3 to 8, then the importance function must be
> w <- function(x) dunif(x,0.3,8)/dnorm(x,0.5,0.25)
and I changed the handling of the NA's in the problem by
> f <- function(x) (x>0)*(1+sinh(2*x)*log(abs(x)))^(-1)
which leads to
> Y <- w(X)*f(X)
> integrate(f,0.3,8)
2.77512 with absolute error < 2.4e-05
> integrate(f,0.3,8)$val/(8-.3)
[1] 0.3604052
which shows a good fit even though the Normal importance function N(0.5,0.5) may be too concentrated, i.e., it has too small a variance to cover (0.3,8) with some reasonable probability. If one changes the scale of the Normal,
> X <- rnorm(1e5,0.5,2.25)
> Y <- w(X)*f(X)
> c(mean(Y),var(Y))
[1] 0.3664046 1.0230197
> integrate(f,0.3,8)$va/(8-.3)
[1] 0.3604052
which shows a fairly good fit. When using a smaller standard deviation like $\sigma=0.1$, I find a poorer fit
> c(mean(Y),var(Y))
[1] 0.3370849 484.6246147
even when compared with $\sigma=1$:
> c(mean(Y),var(Y))
[1] 0.3606815 0.3232931
To make sense of those variations, I ran the experiment for a range of values of $\sigma$, from $0.1$ to $4$, with $10^6$ normal simulations, and got the following graph (with a log scale on the first axis)
which shows the improvement brought by large enough $\sigma$'s, with an optimum around $\sigma=0.74$.
Here is the relevant R code
w <- function(x) dunif(x,0.3,8)/dnorm(x,0.5,sigma)  # importance weight for a N(0.5, sigma) proposal
X0=rnorm(1e6)  # standard normal draws, rescaled below for each value of sigma
sigs=varz=meanz=seq(.1,4,le=50)
for (i in 1:50){
sigma=sigs[i];X=0.5+sigma*X0;Y=w(X)*f(X)  # f and w as defined above
varz[i]=var(Y);meanz[i]=mean(Y)} | Importance Sampling to evaluate integral in R | When running the code provided for the second function I get
> c(mean(Y),var(Y))
[1] 3.2981238 0.5203621
> integrate(f,0.01,1)
3.19264 with absolute error < 1.1e-06
which means that the true value of | Importance Sampling to evaluate integral in R
When running the code provided for the second function I get
> c(mean(Y),var(Y))
[1] 3.2981238 0.5203621
> integrate(f,0.01,1)
3.19264 with absolute error < 1.1e-06
which means that the true value of the integral is close to 3.2, not to 0.70.
If you want to integrate f from 0.3 to 8, then the importance function must be
> w <- function(x) dunif(x,0.3,8)/dnorm(x,0.5,0.25)
and I changed the handling of the NA's in the problem by
> f <- function(x) (x>0)*(1+sinh(2*x)*log(abs(x)))^(-1)
which leads to
> Y <- w(X)*f(X)
> integrate(f,0.3,8)
2.77512 with absolute error < 2.4e-05
> integrate(f,0.3,8)$val/(8-.3)
[1] 0.3604052
which shows a good fit even though the Normal importance function N(0.5,0.5) may be too concentrated, i.e., it has too small a variance to cover (0.3,8) with some reasonable probability. If one changes the scale of the Normal,
> X <- rnorm(1e5,0.5,2.25)
> Y <- w(X)*f(X)
> c(mean(Y),var(Y))
[1] 0.3664046 1.0230197
> integrate(f,0.3,8)$va/(8-.3)
[1] 0.3604052
which shows a fairly good fit. When using a smaller standard deviation like $\sigma=0.1$, I find a poorer fit
> c(mean(Y),var(Y))
[1] 0.3370849 484.6246147
even when compared with $\sigma=1$:
> c(mean(Y),var(Y))
[1] 0.3606815 0.3232931
To make sense of those variations, I ran the experiment for a range of values of $\sigma$, from $0.1$ to $4$, with $10^6$ normal simulations, and got the following graph (with a log scale on the first axis)
which shows the improvement brought by large enough $\sigma$'s, with an optimum around $\sigma=0.74$.
Here is the relevant R code
w <- function(x) dunif(x,0.3,8)/dnorm(x,0.5,sigma)  # importance weight for a N(0.5, sigma) proposal
X0=rnorm(1e6)  # standard normal draws, rescaled below for each value of sigma
sigs=varz=meanz=seq(.1,4,le=50)
for (i in 1:50){
sigma=sigs[i];X=0.5+sigma*X0;Y=w(X)*f(X)  # f and w as defined above
varz[i]=var(Y);meanz[i]=mean(Y)} | Importance Sampling to evaluate integral in R
When running the code provided for the second function I get
> c(mean(Y),var(Y))
[1] 3.2981238 0.5203621
> integrate(f,0.01,1)
3.19264 with absolute error < 1.1e-06
which means that the true value of |
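A short sketch of how the variance-versus-$\sigma$ figure described above can be reproduced, assuming sigs and varz come from the loop in that answer (the dashed line marking the empirical optimum is an illustrative addition):
plot(sigs, varz, log = "x", type = "b", pch = 19,
     xlab = expression(sigma), ylab = "variance of the IS estimator")
abline(v = sigs[which.min(varz)], lty = 2)  # empirical optimum, near sigma = 0.74 in the text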
50,052 | Regression with "unidirectional" noise | This set up is equivalent to the Deterministic (Efficiency/Productivity) Frontier Analysis in Econometrics, where the econometrician is trying to measure how far a firm/unit of production is from full-efficiency in the utilization of production factors. The $f(x)$ function is the full-efficiency production function (i.e. it gives maximum output given technology and inputs $x$, the "production frontier") and the error embodies a measurement of the distance of actual output from the theoretical maximum,
$$q_i = f(x_i) + \epsilon_i,\;\; \epsilon_i\le 0$$
This model has been largely abandoned, because in it, one of the regularity conditions for maximum likelihood estimation is violated: since $\epsilon_i\le 0$ we have always
$$q_i \le f(x_i)$$
which makes the range (i.e. the support) of the random variable $q_i$ (actual production) dependent on the parameters to be estimated (that are included in $f(x_i)$). Then the standard asymptotic properties of maximum likelihood estimators cannot be invoked, i.e. it is unknown whether they hold or not.
So it has been replaced by the Stochastic Frontier framework, where alongside the one-sided error-term, a zero-mean symmetric disturbance (usually assumed normal) is added (that represents chance effects on the output of the firm that are not related to the "internal efficiency" of the firm):
$$q_i = f(x_i) + u_i+\epsilon_i,\;\; E(u_i\mid x_i) = 0,\;\;\epsilon_i\le 0$$
which deals with the issue mentioned above (and it is after all, more realistic also). Can you augment your model also, by adding a symmetric zero-mean error? Then the machinery of ML estimation is already in place in the Stochastic Frontier literature, with more than one stochastic specifications worked out. | Regression with "unidirectional" noise | This set up is equivalent to the Deterministic (Efficiency/Productivity) Frontier Analysis in Econometrics, where the econometrician is trying to measure how far a firm/unit of production is from full | Regression with "unidirectional" noise
This set up is equivalent to the Deterministic (Efficiency/Productivity) Frontier Analysis in Econometrics, where the econometrician is trying to measure how far a firm/unit of production is from full-efficiency in the utilization of production factors. The $f(x)$ function is the full-efficiency production function (i.e. it gives maximum output given technology and inputs $x$, the "production frontier") and the error embodies a measurement of the distance of actual output from the theoretical maximum,
$$q_i = f(x_i) + \epsilon_i,\;\; \epsilon_i\le 0$$
This model has been largely abandoned, because in it, one of the regularity conditions for maximum likelihood estimation is violated: since $\epsilon_i\le 0$ we have always
$$q_i \le f(x_i)$$
which makes the range (i.e. the support) of the random variable $q_i$ (actual production) dependent on the parameters to be estimated (that are included in $f(x_i)$). Then the standard asymptotic properties of maximum likelihood estimators cannot be invoked, i.e. it is unknown whether they hold or not.
So it has been replaced by the Stochastic Frontier framework, where alongside the one-sided error-term, a zero-mean symmetric disturbance (usually assumed normal) is added (that represents chance effects on the output of the firm that are not related to the "internal efficiency" of the firm):
$$q_i = f(x_i) + u_i+\epsilon_i,\;\; E(u_i\mid x_i) = 0,\;\;\epsilon_i\le 0$$
which deals with the issue mentioned above (and it is after all, more realistic also). Can you augment your model also, by adding a symmetric zero-mean error? Then the machinery of ML estimation is already in place in the Stochastic Frontier literature, with more than one stochastic specifications worked out. | Regression with "unidirectional" noise
This set up is equivalent to the Deterministic (Efficiency/Productivity) Frontier Analysis in Econometrics, where the econometrician is trying to measure how far a firm/unit of production is from full |
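As a hedged illustration of the composed-error idea above, here is a small R simulation with an invented linear frontier $f(x)$, a symmetric noise $u$ and a one-sided half-normal term $\epsilon\le 0$ (all coefficients are made up for the example):
set.seed(1)
n <- 200
x <- runif(n, 1, 10)            # hypothetical input
fx <- 2 + 0.5 * x               # hypothetical frontier f(x)
u <- rnorm(n, 0, 0.2)           # symmetric zero-mean noise
eps <- -abs(rnorm(n, 0, 0.4))   # one-sided inefficiency, always <= 0
q <- fx + u + eps               # observed output sits (mostly) below the frontier
plot(x, q, pch = 19, col = "grey50")
abline(a = 2, b = 0.5, lwd = 2) # the true frontier
abline(lm(q ~ x), lty = 2)      # naive OLS fit, shifted down by E[eps] < 0
Dedicated stochastic-frontier estimators handle the maximum-likelihood step for this composed error; the toy plot only shows why plain OLS estimates $f(x)+E[\epsilon]$ rather than the frontier itself.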
50,053 | Are sampling weights necessary in logistic regression? | The sampling weights are designed to account for the non-simple random sample nature of your sample. Therefore, they are just as needed in one form of regression as another. Exactly how to do this may be complicated; e.g. in SAS there is PROC SURVEYLOGISTIC to deal with various sorts of samples. In R there is the survey package which I think does similar things (but I have not used it). | Are sampling weights necessary in logistic regression? | The sampling weights are designed to account for the non-simple random sample nature of your sample. Therefore, they are just as needed in one form of regression as another. Exactly how to do this may | Are sampling weights necessary in logistic regression?
The sampling weights are designed to account for the non-simple random sample nature of your sample. Therefore, they are just as needed in one form of regression as another. Exactly how to do this may be complicated; e.g. in SAS there is PROC SURVEYLOGISTIC to deal with various sorts of samples. In R there is the survey package which I think does similar things (but I have not used it). | Are sampling weights necessary in logistic regression?
The sampling weights are designed to account for the non-simple random sample nature of your sample. Therefore, they are just as needed in one form of regression as another. Exactly how to do this may |
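For the R route mentioned above, a minimal weighted logistic fit with the survey package could look like the following; the data frame, variable names and weights are all invented for illustration:
library(survey)
set.seed(2)
dat <- data.frame(x = rnorm(500))                    # toy covariate
dat$y <- rbinom(500, 1, plogis(-0.5 + 0.8 * dat$x))  # toy binary outcome
dat$pweight <- runif(500, 1, 5)                      # made-up sampling weights
des <- svydesign(ids = ~1, weights = ~pweight, data = dat)
fit <- svyglm(y ~ x, design = des, family = quasibinomial())
summary(fit)   # weighted logistic regression with design-based standard errors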
50,054 | How to measure uncertainty of a parameter when false positives exist? | In principle this is a classification problem. If you knew which observations were true positives, you could just take those observations and estimate the mean and the variance from them. Doing so, you implicitly assume that the true value follows a normal (or, more accurately, Student's t) distribution defined by the obtained mean $\overline{x}$ and variance $s$:
$$\mu \sim N\left(\overline{x},s\right)$$
Because you are not certain which observations are true positives, there are various scenarios that need consideration. If you constructed 100 scenarios, then in 80 of them you would include an observation to which you assign an 80% probability of being a true positive. The probability model gets more complicated:
$$p(\mu) = p\left(\mu\ |\, \text{subset of the }X_i\right)\times
p\left(\text{subset of the }X_i\right)$$
where the first factor on the right hand side denotes the probability for a certain value $\mu$ to be the true value given the subset of observations. The second factor denotes the probability that a certain subset of the observations contains all the true positives and no false positives.
So how to get the standard deviation? You could write a computer program that samples values for $\mu$ from the above stated probability distribution. If you afterwards have a list of $\mu_1,\mu_2,\dots,\mu_N$, you can calculate mean and variance in the usual way:
$$\mu = \frac{1}{N}\sum_{i=1}^N \mu_i \hspace{1em}\mbox{and}\hspace{1em}
\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(\mu_i-\mu)^2$$
Additionally, just to make sure that the variance (or standard deviation) is a meaningful measure for the uncertainty, you could draw a histogram for the $\mu_1,\mu_2,\dots$ and compare it to a normal distribution characterized by $\mu$ and $\sigma$.
In order to get the $\mu_i$'s you can perform the sampling in two steps. Firstly, sample a subset of the $X_i$. The following pseudo code demonstrates how this could be done:
for each observation X_i do
P := probability for X_i to be a true positive
R := random number in [0,1] drawn from uniform distribution
if R smaller P
accept X_i as true positive
otherwise
reject X_i for the scenario
If you work with statistical programming language $R$ you could also simply use the sample function.
For the sampled subset of the $X_i$'s calculate a mean $\overline{x}$ and variance $s$ in the usual way. Use that to draw a sample data point $\mu_i$ from the normal (better T-student) distribution characterized by mean $\overline{x}$ and variance $s$.
Repeat these two steps until you have enough data points $\mu_i$ to get convergence for $\mu$ and $\sigma$. | How to measure uncertainty of a parameter when false positives exist? | In principle this is a classification problem. If you would know which observation is a true positive, you could just take these observations and estimate the mean and the variance for them. Doing so, | How to measure uncertainty of a parameter when false positives exist?
In principle this is a classification problem. If you knew which observations were true positives, you could just take those observations and estimate the mean and the variance from them. Doing so, you implicitly assume that the true value follows a normal (or, more accurately, Student's t) distribution defined by the obtained mean $\overline{x}$ and variance $s$:
$$\mu \sim N\left(\overline{x},s\right)$$
Because you are not certain which observations are true positives, there are various scenarios that need consideration. If you constructed 100 scenarios, then in 80 of them you would include an observation to which you assign an 80% probability of being a true positive. The probability model gets more complicated:
$$p(\mu) = p\left(\mu\ |\, \text{subset of the }X_i\right)\times
p\left(\text{subset of the }X_i\right)$$
where the first factor on the right hand side denotes the probability for a certain value $\mu$ to be the true value given the subset of observations. The second factor denotes the probability that a certain subset of the observations contains all the true positives and no false positives.
So how to get the standard deviation? You could write a computer program that samples values for $\mu$ from the above stated probability distribution. If you afterwards have a list of $\mu_1,\mu_2,\dots,\mu_N$, you can calculate mean and variance in the usual way:
$$\mu = \frac{1}{N}\sum_{i=1}^N \mu_i \hspace{1em}\mbox{and}\hspace{1em}
\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(\mu_i-\mu)^2$$
Additionally, just to make sure that the variance (or standard deviation) is a meaningful measure for the uncertainty, you could draw a histogram for the $\mu_1,\mu_2,\dots$ and compare it to a normal distribution characterized by $\mu$ and $\sigma$.
In order to get the $\mu_i$'s you can perform the sampling in two steps. Firstly, sample a subset of the $X_i$. The following pseudo code demonstrates how this could be done:
for each observation X_i do
P := probability for X_i to be a true positive
R := random number in [0,1] drawn from uniform distribution
if R smaller P
accept X_i as true positive
otherwise
reject X_i for the scenario
If you work with statistical programming language $R$ you could also simply use the sample function.
For the sampled subset of the $X_i$'s calculate a mean $\overline{x}$ and variance $s$ in the usual way. Use that to draw a sample data point $\mu_i$ from the normal (better T-student) distribution characterized by mean $\overline{x}$ and variance $s$.
Repeat these two steps until you have enough data points $\mu_i$ to get convergence for $\mu$ and $\sigma$. | How to measure uncertainty of a parameter when false positives exist?
In principle this is a classification problem. If you would know which observation is a true positive, you could just take these observations and estimate the mean and the variance for them. Doing so, |
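A minimal R version of the two-step sampling scheme above; the data, the true-positive probabilities and the use of the standard error $s/\sqrt{n}$ for the mean are all illustrative assumptions:
set.seed(3)
x <- rnorm(30, mean = 10, sd = 2)   # observed values
p <- runif(30, 0.6, 1)              # assumed probabilities of being a true positive
N <- 10000
mu <- numeric(N)
for (k in 1:N) {
  keep <- runif(30) < p             # step 1: sample a plausible subset of true positives
  xs <- x[keep]
  m <- mean(xs); s <- sd(xs)
  # step 2: draw mu from the t-based sampling distribution of the subset mean
  mu[k] <- m + s / sqrt(length(xs)) * rt(1, df = length(xs) - 1)
}
c(mean = mean(mu), sd = sd(mu))     # point estimate and its standard deviation
hist(mu, breaks = 50)               # check whether a normal summary is reasonable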
50,055 | Definition of statistical model in case of hierarchical model | I wonder what is $\Theta$ in the case of a hierarchical model. Is it composed of all the latent variables of the model or only the one at the top level? Does this include the hyper-parameters?
As far as I undestand, the definition you point out from Wikipedia outlines that a parametric model is collection $\mathcal{P}$ of some $\mathbb{P}_{\theta}$ distributions.
Each of these distributions has a finite dimensional vector of parameters $\theta$, and each of these $\theta$'s has a feasible region $\Theta$. For the model to be parametric, each $\theta$ must be finite-dimensional, that is, $\Theta \subseteq \mathbb{R}^d$. So, answering your question, $\Theta$ refers only to the "possible outcomes" of the parameters of one distribution.
For example:
Suppose you have a model with likelihood:
$Y \sim Normal(\mu, \sigma)$,
then the feasible set for $\theta = \begin{bmatrix} \mu \\ \sigma \end{bmatrix}$ of this Normal distribution is $\Theta = \begin{bmatrix} (-\infty; \infty) \\ (0; \infty) \end{bmatrix} \subseteq \mathbb{R}^2$ (sorry for the abuse of notation here).
Now an example of an hierarchical model:
$$Y_1 \sim Normal(\mu_1, \sigma)$$
$$Y_2 \sim Normal(\mu_2, \sigma)$$
$$\mu_1 \sim Normal(\alpha, 10)$$
$$\mu_2 \sim Normal(\alpha, 10)$$
Following Wikipedia's definition, we would have, for each distribution respectively:
$$\Theta = \begin{bmatrix} (-\infty; \infty) \\ (0; \infty) \end{bmatrix} \subseteq \mathbb{R}^2$$
$$\Theta = \begin{bmatrix} (-\infty; \infty) \\ (0; \infty) \end{bmatrix} \subseteq \mathbb{R}^2$$
$$\Theta = \begin{bmatrix} (-\infty; \infty) \end{bmatrix} \subseteq \mathbb{R}$$
$$\Theta = \begin{bmatrix} (-\infty; \infty) \end{bmatrix} \subseteq \mathbb{R}$$
(ok, this is also abuse of notation, you should have $\theta_i$ and $\Theta_i$ for each of the 4 distributions, but I hope you can get the idea.)
My concern is that this definition has an influence on the definition of model identifiability.
I don't see how this definition itself could impact the definition of model identifiability. Let's also remember that Bayesian and frequentist identifiability are different concepts, and since you use the Bayesian tag in your question, this discussion might be of interest. | Definition of statistical model in case of hierarchical model | I wonder what is $\Theta$ in the case of a hierarchical model. Is it composed of all the latent variables of the model or only the one at the top level? Does this include the hyper-parameters?
As fa | Definition of statistical model in case of hierarchical model
I wonder what is $\Theta$ in the case of a hierarchical model. Is it composed of all the latent variables of the model or only the one at the top level? Does this include the hyper-parameters?
As far as I understand, the definition you point out from Wikipedia outlines that a parametric model is a collection $\mathcal{P}$ of some $\mathbb{P}_{\theta}$ distributions.
Each of these distributions has a finite dimensional vector of parameters $\theta$, and each of these $\theta$'s has a feasible region $\Theta$. For the model to be parametric, each $\theta$ must be finite-dimensional, that is, $\Theta \subseteq \mathbb{R}^d$. So, answering your question, $\Theta$ refers only to the "possible outcomes" of the parameters of one distribution.
For example:
Suppose you have a model with likelihood:
$Y \sim Normal(\mu, \sigma)$,
then the feasible set for $\theta = \begin{bmatrix} \mu \\ \sigma \end{bmatrix}$ of this Normal distribution is $\Theta = \begin{bmatrix} (-\infty; \infty) \\ (0; \infty) \end{bmatrix} \subseteq \mathbb{R}^2$ (sorry for the abuse of notation here).
Now an example of an hierarchical model:
$$Y_1 \sim Normal(\mu_1, \sigma)$$
$$Y_2 \sim Normal(\mu_2, \sigma)$$
$$\mu_1 \sim Normal(\alpha, 10)$$
$$\mu_2 \sim Normal(\alpha, 10)$$
Following Wikipedia's definition, we would have, for each distribution respectively:
$$\Theta = \begin{bmatrix} (-\infty; \infty) \\ (0; \infty) \end{bmatrix} \subseteq \mathbb{R}^2$$
$$\Theta = \begin{bmatrix} (-\infty; \infty) \\ (0; \infty) \end{bmatrix} \subseteq \mathbb{R}^2$$
$$\Theta = \begin{bmatrix} (-\infty; \infty) \end{bmatrix} \subseteq \mathbb{R}$$
$$\Theta = \begin{bmatrix} (-\infty; \infty) \end{bmatrix} \subseteq \mathbb{R}$$
(ok, this is also abuse of notation, you should have $\theta_i$ and $\Theta_i$ for each of the 4 distributions, but I hope you can get the idea.)
My concern is that this definition has an influence on the definition of model identifiability.
I don't see how this definition itself could impact the definition of model identifiability. Let's also remember that Bayesian and frequentist identifiability are different concepts, and since you use the Bayesian tag in your question, this discussion might be of interest. | Definition of statistical model in case of hierarchical model
I wonder what is $\Theta$ in the case of a hierarchical model. Is it composed of all the latent variables of the model or only the one at the top level? Does this include the hyper-parameters?
As fa |
50,056 | Definition of statistical model in case of hierarchical model | The parameters have no concept of the hierarchy, nor should they. $\Theta$ is the space of possibilities. Consider the example:
$ Y_{ij} \sim \text{Bernoulli}(p_i),$
$p_i \sim \text{Beta}(\alpha, \beta),$
for $i = 1, \ldots, 10$ subjects and $j=1, \ldots 5$ binary outcomes within each subject.
In this case, $\Theta = (0, \infty) \times (0, \infty)$, the space of possibilities for $(\alpha, \beta)$. Once you have those, you can compute any probability. | Definition of statistical model in case of hierarchical model | The parameters have no concept of the hierarchy, nor should they. $\Theta$ is the space of possibilities. Consider the example:
$ Y_{ij} \sim \text{Bernoulli}(p_i),$
$p_i \sim \text{Beta}(\alpha, \be | Definition of statistical model in case of hierarchical model
The parameters have no concept of the hierarchy, nor should they. $\Theta$ is the space of possibilities. Consider the example:
$ Y_{ij} \sim \text{Bernoulli}(p_i),$
$p_i \sim \text{Beta}(\alpha, \beta),$
for $i = 1, \ldots, 10$ subjects and $j=1, \ldots 5$ binary outcomes within each subject.
In this case, $\Theta = (0, \infty) \times (0, \infty)$, the space of possibilities for $(\alpha, \beta)$. Once you have those, you can compute any probability. | Definition of statistical model in case of hierarchical model
The parameters have no concept of the hierarchy, nor should they. $\Theta$ is the space of possibilities. Consider the example:
$ Y_{ij} \sim \text{Bernoulli}(p_i),$
$p_i \sim \text{Beta}(\alpha, \be |
50,057 | Variance of arrival process with shifted exponential distribution | Assuming that the inter-arrivals say $X_n$ ($n \geqslant 1$) are
independent, you have a renewal process, see e.g. this course, or the
classical references quoted in it: the book by D.R. Cox Renewal
Theory or the one by S. Karlin and H.M. Taylor A First Course in
Stochastic Processes, vol. 1 chap. 5.
The $n$-th arrival time from $t=0$ is the sum $S_n:= X_1 + X_2 + \dots
+ X_n$ for a specific initial condition: when $t=0$ is an
arrival time. Then $X_1$ is distributed as are the $X_n$ for $n >
1$. A variant takes a specific stationary distribution
for the first arrival $X_1$, leading to the stationary renewal
process. Yet the initial condition has no impact in the long run.
Let $N(t)$ be the number of arrivals on $(0,\,t)$. When $t$ is large,
a renewal theorem states that $N(t)$ is approximately normal with mean
$t/\mu$ and variance $t \sigma^2 / \mu^3$ where $\mu$ and $\sigma^2$
are the inter-arrival mean and variance. In your case, the theorem
applies with $\mu = \theta + 1/\lambda$ and $\sigma = 1/\lambda$.
The distribution of $N(t)$ can also be found by noticing that
$\text{Pr}\{N(t) \geq n\} = F_n(t)$ where $F_n$ is the distribution
function of the sum $S_n$, and thus $\text{Pr}\{N(t) = n\} = F_n(t) -
F_{n+1}(t)$. In your case, $X_n = X_n^\star +\theta$ where
$X_n^\star$ follows a standard exponential with mean $1/\lambda$, so
$F_n(t) = F_n^\star(t-n\theta)$ where $F_n^\star$ is the
distribution function of the sum $S_n^\star$ relative to the
$X_k^\star$, and an explicit formula based on Erlang's distribution
can be used in numerical computations. Assume an arrival at $t=0$ and
let $t^\star := t -n \theta$; if $t^\star > 0$, then
$$
F_n^\star(t^\star) = 1 -
\sum_{k=0}^{n-1} e^{-\lambda t^\star} \frac{(\lambda t^\star)^k}{k!}
= \text{Pr}\left\{N^\star \ge n \right\}
$$
where $N^\star$ is Poisson with mean $\lambda t^\star$. A similar
formula can be used for $F_{n+1}(t)$. The number of probability masses
$\text{Pr}\{N(t) = n\}$ to be computed must be such that the total
mass is close to $1$.
theta <- 0.4; lambda <- 1.0;
mu <- theta + 1 / lambda; sigma <- 1 / lambda
t <- 10;
## asymptotic 'Exp'ectation and 'Var'iance from the central limit renewal thm
aExpN <- t / mu
aVarN <- t * sigma^2 / mu^3
## compute the distribution: 'nMax' should be chosen suitably.
## Pr{ N(t) = n } is 'prob[n + 1]' since array indices are >= 1
nMax <- 100; prob <- rep(0, nMax + 1)
for (n in 0:nMax){
tStar <- t - n * theta
if (tStar > 0) {
prob[n + 1] <- prob[n + 1] +
ppois(n - 1, lambda = lambda * tStar, lower.tail = FALSE)
}
tStar <- t - (n + 1) * theta
if (tStar > 0) {
prob[n + 1] <- prob[n + 1] -
ppois(n, lambda = lambda * tStar, lower.tail = FALSE)
}
}
names(prob) <- 0:nMax
ExpN <- sum((0:nMax) * prob)
VarN <- sum((0:nMax)^2 * prob) - ExpN^2
## compute (estimate) expectation and variance using a simulation
nSim <- 500000
set.seed(12345) ## to be reproducible
X <- theta + matrix(rexp(100 * nSim, rate = lambda), nrow = nSim, ncol = 100)
Nsim <- apply(X, MARGIN = 1, FUN = function(x) { sum(cumsum(x) < t) } )
## compare empirical and numerical distributions
prob1 <- table(Nsim) / length(Nsim)
prob2 <- cbind(prob1, prob[names(prob1)])
colnames(prob2) <- c("sim", "num")
barplot(t(prob2), beside = TRUE, legend = TRUE,
main = sprintf(paste("distr. of the number of arrivals",
"lambda = %5.2f, theta = %5.2f"), lambda, theta))
## compare Expectation and variance
res <- rbind(asympt = c(aExpN, aVarN),
sim = c(mean(Nsim), var(Nsim)),
num = c(ExpN, VarN))
colnames(res) <- c("Exp", "Var")
res | Variance of arrival process with shifted exponential distribution | Assuming that the inter-arrivals say $X_n$ ($n \geqslant 1$) are
independent, you have a renewal process, see e.g. this course, or the
classical references quoted in it: the book by D.R. Cox Renewal
T | Variance of arrival process with shifted exponential distribution
Assuming that the inter-arrivals say $X_n$ ($n \geqslant 1$) are
independent, you have a renewal process, see e.g. this course, or the
classical references quoted in it: the book by D.R. Cox Renewal
Theory or the one by S. Karlin and H.M. Taylor A First Course in
Stochastic Processes, vol. 1 chap. 5.
The $n$-th arrival time from $t=0$ is the sum $S_n:= X_1 + X_2 + \dots
+ X_n$ for a specific initial condition: when $t=0$ is an
arrival time. Then $X_1$ is distributed as are the $X_n$ for $n >
1$. A variant takes a specific stationary distribution
for the first arrival $X_1$, leading to the stationary renewal
process. Yet the initial condition has no impact in the long run.
Let $N(t)$ be the number of arrivals on $(0,\,t)$. When $t$ is large,
a renewal theorem states that $N(t)$ is approximately normal with mean
$t/\mu$ and variance $t \sigma^2 / \mu^3$ where $\mu$ and $\sigma^2$
are the inter-arrival mean and variance. In your case, the theorem
applies with $\mu = \theta + 1/\lambda$ and $\sigma = 1/\lambda$.
The distribution of $N(t)$ can also be found by noticing that
$\text{Pr}\{N(t) \geq n\} = F_n(t)$ where $F_n$ is the distribution
function of the sum $S_n$, and thus $\text{Pr}\{N(t) = n\} = F_n(t) -
F_{n+1}(t)$. In your case, $X_n = X_n^\star +\theta$ where
$X_n^\star$ follows a standard exponential with mean $1/\lambda$, so
$F_n(t) = F_n^\star(t-n\theta)$ where $F_n^\star$ is the
distribution function of the sum $S_n^\star$ relative to the
$X_k^\star$, and an explicit formula based on Erlang's distribution
can be used in numerical computations. Assume an arrival at $t=0$ and
let $t^\star := t -n \theta$; if $t^\star > 0$, then
$$
F_n^\star(t^\star) = 1 -
\sum_{k=0}^{n-1} e^{-\lambda t^\star} \frac{(\lambda t^\star)^k}{k!}
= \text{Pr}\left\{N^\star \ge n \right\}
$$
where $N^\star$ is Poisson with mean $\lambda t^\star$. A similar
formula can be used for $F_{n+1}(t)$. The number of probability masses
$\text{Pr}\{N(t) = n\}$ to be computed must be such that the total
mass is close to $1$.
theta <- 0.4; lambda <- 1.0;
mu <- theta + 1 / lambda; sigma <- 1 / lambda
t <- 10;
## asymptotic 'Exp'ectation and 'Var'iance from the central limit renewal thm
aExpN <- t / mu
aVarN <- t * sigma^2 / mu^3
## compute the distribution: 'nMax' should be chosen suitably.
## Pr{ N(t) = n } is 'prob[n + 1]' since array indices are >= 1
nMax <- 100; prob <- rep(0, nMax + 1)
for (n in 0:nMax){
tStar <- t - n * theta
if (tStar > 0) {
prob[n + 1] <- prob[n + 1] +
ppois(n - 1, lambda = lambda * tStar, lower.tail = FALSE)
}
tStar <- t - (n + 1) * theta
if (tStar > 0) {
prob[n + 1] <- prob[n + 1] -
ppois(n, lambda = lambda * tStar, lower.tail = FALSE)
}
}
names(prob) <- 0:nMax
ExpN <- sum((0:nMax) * prob)
VarN <- sum((0:nMax)^2 * prob) - ExpN^2
## compute (estimate) expectation and variance using a simulation
nSim <- 500000
set.seed(12345) ## to be reproducible
X <- theta + matrix(rexp(100 * nSim, rate = lambda), nrow = nSim, ncol = 100)
Nsim <- apply(X, MARGIN = 1, FUN = function(x) { sum(cumsum(x) < t) } )
## compare empirical and numerical distributions
prob1 <- table(Nsim) / length(Nsim)
prob2 <- cbind(prob1, prob[names(prob1)])
colnames(prob2) <- c("sim", "num")
barplot(t(prob2), beside = TRUE, legend = TRUE,
main = sprintf(paste("distr. of the number of arrivals",
"lambda = %5.2f, theta = %5.2f"), lambda, theta))
## compare Expectation and variance
res <- rbind(asympt = c(aExpN, aVarN),
sim = c(mean(Nsim), var(Nsim)),
num = c(ExpN, VarN))
colnames(res) <- c("Exp", "Var")
res | Variance of arrival process with shifted exponential distribution
Assuming that the inter-arrivals say $X_n$ ($n \geqslant 1$) are
independent, you have a renewal process, see e.g. this course, or the
classical references quoted in it: the book by D.R. Cox Renewal
T |
50,058 | What is an "Unpaired Bland-Altman plot"? | I have never heard of this name.
In fact, the plot looks just like any Bland Altman plot I have seen, other than there are two sets of data overlaid on the plot. I guess the "unpaired" indicates that you cannot tell which MLEM and ST-MLEM data are coming from the same patient, because there is no linkage between the blue and the pink data points on the plot. | What is an "Unpaired Bland-Altman plot"? | I have never heard of this name.
In fact, the plot looks just like any Bland Altman plot I have seen, other than there are two sets of data overlaid on the plot. I guess the "unpaired" indicates that | What is an "Unpaired Bland-Altman plot"?
I have never heard of this name.
In fact, the plot looks just like any Bland Altman plot I have seen, other than there are two sets of data overlaid on the plot. I guess the "unpaired" indicates that you cannot tell which MLEM and ST-MLEM data are coming from the same patient, because there is no linkage between the blue and the pink data points on the plot. | What is an "Unpaired Bland-Altman plot"?
I have never heard of this name.
In fact, the plot looks just like any Bland Altman plot I have seen, other than there are two sets of data overlaid on the plot. I guess the "unpaired" indicates that |
50,059 | When using a Neural Network to classify more than two classes, is it better to have multiple output nodes (one for each class) or one output node? | Usually when you design a learning process with neural nets, you have to be aware of any structure you induce. This induced structure might be learned by the net, since neural nets are very capable of incorporating patterns. The easiest way to induce a desired or undesired structure (a learning bias) in the learning process is to manipulate the inputs and outputs accordingly.
Without further knowledge of your specific problem, I suspect that the first option will induce an ordering relation between the outputs. What it means is that the neural net will probably "consider" that the distance between $O_1$ and $O_3$ is larger than the distance between $O_1$ and $O_2$. If there is such an ordering, then you can proceed this way. Anyway, you might also have to scale the outputs, so that the final learned values are not $1, 2, 3$, but smaller values centered at $0$.
However, usually there are no such ordering relations between classes, so the second option "breaks" this possible induced structure. Each option has the same chances to be learned as any other. Going further with this rationale, I believe you have to also consider using softmax node outputs, which have the nice feature that the obtained output values comes from a probability function (the values are in $[0, 1]$ and the sum of the outputs is $1 = \sum_{j=1}^{3}O_{ij}$). For softmax nodes you can see how they are working on wikipedia | When using a Neural Network to classify more than two classes, is it better to have multiple output | Usually when you design a learning process with neural nets, you have to be aware of any structure you induce. This induced structure might be learned by net, since neural nets are very capable of inc | When using a Neural Network to classify more than two classes, is it better to have multiple output nodes (one for each class) or one output node?
Usually when you design a learning process with neural nets, you have to be aware of any structure you induce. This induced structure might be learned by the net, since neural nets are very capable of incorporating patterns. The easiest way to induce a desired or undesired structure (a learning bias) in the learning process is to manipulate the inputs and outputs accordingly.
Without further knowledge of your specific problem, I suspect that the first option will induce an ordering relation between the outputs. What it means is that the neural net will probably "consider" that the distance between $O_1$ and $O_3$ is larger than the distance between $O_1$ and $O_2$. If there is such an ordering, then you can proceed this way. Anyway, you might also have to scale the outputs, so that the final learned values are not $1, 2, 3$, but smaller values centered at $0$.
However, usually there are no such ordering relations between classes, so the second option "breaks" this possible induced structure. Each option has the same chances to be learned as any other. Going further with this rationale, I believe you have to also consider using softmax node outputs, which have the nice feature that the obtained output values comes from a probability function (the values are in $[0, 1]$ and the sum of the outputs is $1 = \sum_{j=1}^{3}O_{ij}$). For softmax nodes you can see how they are working on wikipedia | When using a Neural Network to classify more than two classes, is it better to have multiple output
Usually when you design a learning process with neural nets, you have to be aware of any structure you induce. This induced structure might be learned by net, since neural nets are very capable of inc |
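A tiny sketch of the second option in plain R, with a one-hot target coding and a softmax transformation of the three output nodes (no particular network library is assumed):
classes <- c(2, 1, 3, 2)
targets <- diag(3)[classes, ]   # one-hot rows: no ordering between classes is imposed
targets
softmax <- function(z) exp(z - max(z)) / sum(exp(z - max(z)))
softmax(c(1.2, -0.3, 0.5))      # output-node activations become probabilities summing to 1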
50,060 | Why do we say that the variance of the error terms is constant? | The error term ($\epsilon_i$) is indeed a random variable. The normality assumption holds if it has Normal distribution - $\epsilon_i$ ~ $N(\mu,\sigma)$. You are right when you say:
I always think about the error term in a linear regression model as a random variable, with some distribution and a variance
The assumption of constant variance (aka homoscedasticity) holds if the dispersion of the residuals is homogeneous along the range of values in $X$ or $Y$. This pattern of dispersion can vary.
So if the error terms come from this random variable, why do we say that they have a constant variance?
One error observation alone does not have variance. The variances come from subsets of groups of error observations. For a better comprehension, look into this picture, borrowed from @caracal's answer here.
It also helps to look at some plots which illustrate the opposite of homoscedasticity (non-constant variance). | Why do we say that the variance of the error terms is constant? | The error term ($\epsilon_i$) is indeed a random variable. The normality assumption holds if it has Normal distribution - $\epsilon_i$ ~ $N(\mu,\sigma)$. You are right when you say:
I always think ab | Why do we say that the variance of the error terms is constant?
The error term ($\epsilon_i$) is indeed a random variable. The normality assumption holds if it has Normal distribution - $\epsilon_i$ ~ $N(\mu,\sigma)$. You are right when you say:
I always think about the error term in a linear regression model as a random variable, with some distribution and a variance
The assumption of constant variance (aka homoscedasticity) holds if the dispersion of the residuals is homogeneous along the range of values in $X$ or $Y$. This pattern of dispersion can vary.
So if the error terms come from this random variable, why do we say that they have a constant variance?
One error observation alone does not have variance. The variances come from subsets of groups of error observations. For a better comprehension, look into this picture, borrowed from @caracal's answer here.
It also helps to look at some plots which illustrate the opposite of homoscedasticity (non-constant variance). | Why do we say that the variance of the error terms is constant?
The error term ($\epsilon_i$) is indeed a random variable. The normality assumption holds if it has Normal distribution - $\epsilon_i$ ~ $N(\mu,\sigma)$. You are right when you say:
I always think ab |
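A short simulation in the spirit of the plots referenced above, contrasting residual spread that stays constant with spread that grows with $x$ (both data sets are simulated):
set.seed(5)
x <- runif(200, 0, 10)
y1 <- 1 + 2 * x + rnorm(200, sd = 2)         # homoscedastic errors
y2 <- 1 + 2 * x + rnorm(200, sd = 0.5 * x)   # error variance grows with x
op <- par(mfrow = c(1, 2))
plot(fitted(lm(y1 ~ x)), resid(lm(y1 ~ x)), xlab = "fitted", ylab = "residual",
     main = "constant variance"); abline(h = 0, lty = 2)
plot(fitted(lm(y2 ~ x)), resid(lm(y2 ~ x)), xlab = "fitted", ylab = "residual",
     main = "non-constant variance"); abline(h = 0, lty = 2)
par(op)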
50,061 | Averaging LASSO coefficients for repeated random partitioning of data | A similar thing with bootstrap replication is implemented in the "bolasso" function of the R package "mht" (for multiple hypothesis testing), and published here http://www.di.ens.fr/sierra/pdfs/icml_bolasso.pdf but they take the intersection of the sets of predictors with nonzero coefficients from all the replication samples, and then fit unregularized least squares estimators using only those variables.
You pointed out the problem with taking the union of the supports, that you lose the advantage of dimensionality reduction, and your Lasso estimates are still biased. | Averaging LASSO coefficients for repeated random partitioning of data | A similar thing with bootstrap replication is implemented in the "bolasso" function of the R package "mht" (for multiple hypothesis testing), and published here http://www.di.ens.fr/sierra/pdfs/icml_b | Averaging LASSO coefficients for repeated random partitioning of data
A similar thing with bootstrap replication is implemented in the "bolasso" function of the R package "mht" (for multiple hypothesis testing), and published here http://www.di.ens.fr/sierra/pdfs/icml_bolasso.pdf but they take the intersection of the sets of predictors with nonzero coefficients from all the replication samples, and then fit unregularized least squares estimators using only those variables.
You pointed out the problem with taking the union of the supports, that you lose the advantage of dimensionality reduction, and your Lasso estimates are still biased. | Averaging LASSO coefficients for repeated random partitioning of data
A similar thing with bootstrap replication is implemented in the "bolasso" function of the R package "mht" (for multiple hypothesis testing), and published here http://www.di.ens.fr/sierra/pdfs/icml_b |
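The intersection-of-supports idea can be sketched with glmnet alone; this is a generic bootstrap-lasso sketch on simulated data, not the mht::bolasso interface itself:
library(glmnet)
set.seed(6)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - 2 * X[, 2] + rnorm(n)   # only the first two predictors matter
B <- 50
support <- matrix(FALSE, B, p)
for (b in 1:B) {
  idx <- sample(n, replace = TRUE)    # bootstrap resample
  cv <- cv.glmnet(X[idx, ], y[idx])
  support[b, ] <- as.matrix(coef(cv, s = "lambda.min"))[-1, 1] != 0
}
keep <- which(colMeans(support) == 1) # variables selected in every replication
keep
if (length(keep) > 0) summary(lm(y ~ X[, keep]))   # unpenalised refit on the stable support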
50,062 | When estimating population mean, how can one half of the sample mean have lower risk than the sample mean itself? | You don't really need a simulation to see how this can happen: @whuber's comment essentially nails it.
Imagine that the population is described by $\mathcal N(1,10)$, i.e. the population mean is $\mu=1$ and the standard deviation is $\sigma=10$. Let your sample size be $n=10$. The variance of the sample mean (MSE) will be around $\sigma^2/n=10$, so you will get values of the sample mean that are quite far from the true mean $\mu=1$. Taking one half of the sample mean will reduce the variance by a factor of four, bringing the estimate much closer to zero and, as a consequence, much closer to $\mu=1$. | When estimating population mean, how can one half of the sample mean have lower risk than the sample | You don't really need a simulation to see how this can happen: @whuber's comment essentially nails it.
Imagine that the population is described by $\mathcal N(1,10)$, i.e. population mean is $\mu=1$ a | When estimating population mean, how can one half of the sample mean have lower risk than the sample mean itself?
You don't really need a simulation to see how this can happen: @whuber's comment essentially nails it.
Imagine that the population is described by $\mathcal N(1,10)$, i.e. the population mean is $\mu=1$ and the standard deviation is $\sigma=10$. Let your sample size be $n=10$. The variance of the sample mean (MSE) will be around $\sigma^2/n=10$, so you will get values of the sample mean that are quite far from the true mean $\mu=1$. Taking one half of the sample mean will reduce the variance by a factor of four, bringing the estimate much closer to zero and, as a consequence, much closer to $\mu=1$. | When estimating population mean, how can one half of the sample mean have lower risk than the sample
You don't really need a simulation to see how this can happen: @whuber's comment essentially nails it.
Imagine that the population is described by $\mathcal N(1,10)$, i.e. population mean is $\mu=1$ a |
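The point can also be checked numerically; under the setup above ($\mu=1$, $\sigma=10$, $n=10$) the risk of the halved mean should be about $\sigma^2/(4n) + (\mu/2)^2 = 2.75$ versus $10$ for the mean itself:
set.seed(7)
mu <- 1; sigma <- 10; n <- 10
xbar <- replicate(1e5, mean(rnorm(n, mu, sigma)))
mean((xbar - mu)^2)       # MSE of the sample mean, about sigma^2 / n = 10
mean((xbar / 2 - mu)^2)   # MSE of half the sample mean, about 2.5 + 0.25 = 2.75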
50,063 | Ordinal/continuous vs dummy variable for time series regression/data mining | Modeling time continuously introduces the assumption that there is a linear influence of time upon the outcome, conditional upon $x$. However, adjusting for time as a fixed and random effect makes this interpretation a bit untenable.
Yes it does matter, it matters in absolutely all scenarios. You can verify this by simulating data according to either linear model. When you fit categorical effects for linear time, you still consistently estimate the linear trend in time, but you "spend more" with regards to the degrees of freedom.
In general, yes. There are fewer effects in the first model. However, the overarching idea of which model (categorical effects versus linear time) is correct can be most correctly addressed by asking: What is the scientific question? | Ordinal/continuous vs dummy variable for time series regression/data mining | Modeling time continuously introduces the assumption that there is a linear influence of time upon the outcome, conditional upon $x$. However, adjusting for time as a fixed and random effect makes thi | Ordinal/continuous vs dummy variable for time series regression/data mining
Modeling time continuously introduces the assumption that there is a linear influence of time upon the outcome, conditional upon $x$. However, adjusting for time as a fixed and random effect makes this interpretation a bit untenable.
Yes it does matter, it matters in absolutely all scenarios. You can verify this by simulating data according to either linear model. When you fit categorical effects for linear time, you still consistently estimate the linear trend in time, but you "spend more" with regards to the degrees of freedom.
In general, yes. There are fewer effects in the first model. However, the overarching idea of which model (categorical effects versus linear time) is correct can be most correctly addressed by asking: What is the scientific question? | Ordinal/continuous vs dummy variable for time series regression/data mining
Modeling time continuously introduces the assumption that there is a linear influence of time upon the outcome, conditional upon $x$. However, adjusting for time as a fixed and random effect makes thi |
50,064 | Ordinal/continuous vs dummy variable for time series regression/data mining | What makes you think that time has any effect on the dependent variable?
I'd suggest plotting the dependent variable against time to gauge what sort of model might be useful.
Both approaches - a linear (or non-linear) time trend and seasonal dummy variables - might be necessary. (Normally dummy variables are used for seasonal or calendar effects or shocks.)
If you fit a dummy time variable for every time period and you don't have many observations per time period you could easily end up over fitting. Also, if you use a series of independent dummy variables you have no idea what the effect of the next time period will be - since it will be independent as well. This makes it less useful for forecasting than other ways of using time in a model.
Perhaps an even more complex process such as ARIMA might be useful. Something like the forecast package in R might be useful for understanding the time series. For fitting a model you might want to look beyond OLS and consider auto-regressive or dynamic regression models. | Ordinal/continuous vs dummy variable for time series regression/data mining | What makes you think that time has any effect on the dependent variable?
I'd suggest plotting the dependent variable against time to gauge what sort of model might be useful.
Both approaches - a linea | Ordinal/continuous vs dummy variable for time series regression/data mining
What makes you think that time has any effect on the dependent variable?
I'd suggest plotting the dependent variable against time to gauge what sort of model might be useful.
Both approaches - a linear (or non-linear) time trend and seasonal dummy variables - might be necessary. (Normally dummy variables are used for seasonal or calendar effects or shocks.)
If you fit a dummy time variable for every time period and you don't have many observations per time period you could easily end up over fitting. Also, if you use a series of independent dummy variables you have no idea what the effect of the next time period will be - since it will be independent as well. This makes it less useful for forecasting than other ways of using time in a model.
Perhaps an even more complex process such as ARIMA might be useful. Something like the forecast package in R might be useful for understanding the time series. For fitting a model you might want to look beyond OLS and consider auto-regressive or dynamic regression models. | Ordinal/continuous vs dummy variable for time series regression/data mining
What makes you think that time has any effect on the dependent variable?
I'd suggest plotting the dependent variable against time to gauge what sort of model might be useful.
Both approaches - a linea |
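A compact way to see the degrees-of-freedom point made in the two answers above: fit the same simulated data once with numeric time and once with time as a factor (all numbers are invented):
set.seed(8)
tt <- rep(1:8, each = 25)                      # 8 time periods
x <- rnorm(length(tt))
y <- 0.3 * tt + 0.5 * x + rnorm(length(tt))    # truly linear time trend
m_lin <- lm(y ~ x + tt)                        # 1 df spent on time
m_fac <- lm(y ~ x + factor(tt))                # 7 df spent on time dummies
anova(m_lin, m_fac)    # with a genuinely linear trend the extra dummies add little
coef(m_lin)["tt"]      # recovers the trend of about 0.3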
50,065 | Sufficiency of order statistics | As mentioned in comments, it's clearly not true for discrete random variables.
The problem is, as the original poster suggested in comments, that we can get ties.
The nonzero probability of ties makes the equality
$P(X_1, \ldots, X_n|X_{(1)}, \ldots, X_{(n)}) = \frac{1}{n!}$
- which works in the continuous case - untrue in general.
(This is a familiar problem when dealing with nonparametric tests.) | Sufficiency of order statistics | As mentioned in comments, it's clearly not true for discrete random variables.
The problem is, as the original poster suggested in comments, that we can get ties.
The nonzero probability of ties make | Sufficiency of order statistics
As mentioned in comments, it's clearly not true for discrete random variables.
The problem is, as the original poster suggested in comments, that we can get ties.
The nonzero probability of ties makes the equality
$P(X_1, \ldots, X_n|X_{(1)}, \ldots, X_{(n)}) = \frac{1}{n!}$
- which works in the continuous case - untrue in general.
(This is a familiar problem when dealing with nonparametric tests.) | Sufficiency of order statistics
As mentioned in comments, it's clearly not true for discrete random variables.
The problem is, as the original poster suggested in comments, that we can get ties.
The nonzero probability of ties make |
50,066 | conditional sampling of bivariate normals | If you had another bound (such as $\epsilon_2 > T_3$), you could sample uniformly and then weight the sample using the bivariate normal density. You would have zero rejection. Maybe in your application it is not too unreasonable to impose such a bound?
Probably better:
You find the intersection between the two linear conditions. Then you generate an r.v. $x_1$ from an exponential or a truncated normal along one of the two conditions (say along $\epsilon_1 = T_1$). Then, if the angle between the 2 linear conditions is acute, you draw uniformly (and perpendicularly to $\epsilon_1 = T_1$) along the line between $x_1$ and $a\epsilon_1 + b\epsilon_2 = T_2$. If it is obtuse, you draw perpendicularly to $\epsilon_1 = T_1$ from a truncated normal or exponential. There is no rejection involved, and you don't need the area to be bounded, but you get a weighted sample. | conditional sampling of bivariate normals | If you had another bound (such as $\epsilon_2 > T3$), you could sample uniformly and then weights the sample using the bivariate normal density. You would have zero rejection. Maybe in your applicatio | conditional sampling of bivariate normals
If you had another bound (such as $\epsilon_2 > T_3$), you could sample uniformly and then weight the sample using the bivariate normal density. You would have zero rejection. Maybe in your application it is not too unreasonable to impose such a bound?
Probably better:
You find the intersection between the two linear conditions. Then you generate an r.v. $x_1$ from an exponential or a truncated normal along one of the two conditions (say along $\epsilon_1 = T_1$). Then, if the angle between the 2 linear conditions is acute, you draw uniformly (and perpendicularly to $\epsilon_1 = T_1$) along the line between $x_1$ and $a\epsilon_1 + b\epsilon_2 = T_2$. If it is obtuse, you draw perpendicularly to $\epsilon_1 = T_1$ from a truncated normal or exponential. There is no rejection involved, and you don't need the area to be bounded, but you get a weighted sample. | conditional sampling of bivariate normals
If you had another bound (such as $\epsilon_2 > T3$), you could sample uniformly and then weights the sample using the bivariate normal density. You would have zero rejection. Maybe in your applicatio |
50,067 | conditional sampling of bivariate normals | I have used the Gibbs sampling approach. This way only the beginning of the Gibbs sampling is thrown out (the stabilization period). Thus the number of wasted samples does not increase with the number of required samples.
Conditional on observing $\varepsilon_1$, $\varepsilon_2$ is sampled from a normal distribution with the bound $b\varepsilon_2< Th_2 - a\varepsilon_1$.
Conditional on observing $\varepsilon_2$, $Th_1<\varepsilon_1< (Th_2 - b\varepsilon_2)/a$.
Below code sets $a=\sqrt{t1}$, $b=\sqrt{t2-t1}$.
nScens = 1E8;
epsilon1 = randn(nScens, 1);
epsilon2 = randn(nScens, 1);
Th1 = -3;
Th2 = -2.9;
t1 = 700;
t2 = 707;
ind = epsilon1 > Th1 & ( epsilon1*sqrt(t1) + epsilon2*sqrt(t2-t1))/sqrt(t2) < Th2;
sum(ind)
figure(1)
subplot(121)
scatter(epsilon1(ind), epsilon2(ind),'.' )
axis([ -3 -2.5 -5 1])
subplot(122)
smoothhist2D([epsilon1(ind), epsilon2(ind)],5, [100,100],[], 'contour')
axis([ -3 -2.5 -5 1])
% gibbs sampler
nGibbs = 75000;
epsilon1Gibbs = 0;
for i=1:nGibbs
epsilon2Gibbs = norminv( normcdf( (Th2*sqrt(t2) - epsilon1Gibbs*sqrt(t1) )/sqrt(t2-t1) )*rand );
p = ( -normcdf(Th1) + normcdf( (Th2*sqrt(t2) - epsilon2Gibbs*sqrt(t2-t1) )/sqrt(t1) ) )*rand + normcdf(Th1);
epsilon1Gibbs = norminv( p );
epsilonGibbs(i, :) = [epsilon1Gibbs epsilon2Gibbs];
end
indGibbs = 2500:nGibbs;
figure(2)
subplot(121)
scatter(epsilonGibbs(indGibbs,1), epsilonGibbs(indGibbs,2),'.' )
axis([ -3 -2.5 -5 1])
subplot(122)
smoothhist2D( epsilonGibbs(indGibbs,:) ,5, [100,100],[], 'contour')
axis([ -3 -2.5 -5 1])
Brute force sampling:
Gibbs sampling: | conditional sampling of bivariate normals | I have used the Gibbs sampling approach. This way only the beginning of the Gibbs sampling is thrown out (stabilization period). Thus number of waisted samples is not increasing with the number of req | conditional sampling of bivariate normals
I have used the Gibbs sampling approach. This way only the beginning of the Gibbs sampling is thrown out (the stabilization period). Thus the number of wasted samples does not increase with the number of required samples.
Conditional on observing $\varepsilon_1$, $\varepsilon_2$ is sampled from a normal distribution with the bound $b\varepsilon_2< Th_2 - a\varepsilon_1$.
Conditional on observing $\varepsilon_2$, $Th_1<\varepsilon_1< (Th_2 - b\varepsilon_2)/a$.
Below code sets $a=\sqrt{t1}$, $b=\sqrt{t2-t1}$.
nScens = 1E8;
epsilon1 = randn(nScens, 1);
epsilon2 = randn(nScens, 1);
Th1 = -3;
Th2 = -2.9;
t1 = 700;
t2 = 707;
ind = epsilon1 > Th1 & ( epsilon1*sqrt(t1) + epsilon2*sqrt(t2-t1))/sqrt(t2) < Th2;
sum(ind)
figure(1)
subplot(121)
scatter(epsilon1(ind), epsilon2(ind),'.' )
axis([ -3 -2.5 -5 1])
subplot(122)
smoothhist2D([epsilon1(ind), epsilon2(ind)],5, [100,100],[], 'contour')
axis([ -3 -2.5 -5 1])
% gibbs sampler
nGibbs = 75000;
epsilon1Gibbs = 0;
for i=1:nGibbs
epsilon2Gibbs = norminv( normcdf( (Th2*sqrt(t2) - epsilon1Gibbs*sqrt(t1) )/sqrt(t2-t1) )*rand );
p = ( -normcdf(Th1) + normcdf( (Th2*sqrt(t2) - epsilon2Gibbs*sqrt(t2-t1) )/sqrt(t1) ) )*rand + normcdf(Th1);
epsilon1Gibbs = norminv( p );
epsilonGibbs(i, :) = [epsilon1Gibbs epsilon2Gibbs];
end
indGibbs = 2500:nGibbs;
figure(2)
subplot(121)
scatter(epsilonGibbs(indGibbs,1), epsilonGibbs(indGibbs,2),'.' )
axis([ -3 -2.5 -5 1])
subplot(122)
smoothhist2D( epsilonGibbs(indGibbs,:) ,5, [100,100],[], 'contour')
axis([ -3 -2.5 -5 1])
Brute force sampling:
Gibbs sampling: | conditional sampling of bivariate normals
I have used the Gibbs sampling approach. This way only the beginning of the Gibbs sampling is thrown out (stabilization period). Thus number of waisted samples is not increasing with the number of req |
50,068 | conditional sampling of bivariate normals | One simple approach that would involve a huge reduction in the rejection rate would be to rotate the coordinates $(\epsilon_1,\epsilon_2)$ to, say, $(X_1,X_2)$ such that the line $a\epsilon_1+b\epsilon_2=T_2$ becomes vertical ($cX_1=\tau_2$, say). Then generate from the truncated normal such that $cX_1<\tau_2$. Then generate an independent $X_2$, reject those pairs which fail the other (rotated) condition, and rotate the accepted pairs back.
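For illustration, here is a minimal R sketch of that rotation (the thresholds and time points are simply reused from the Gibbs-sampling answer's example and are not part of this approach itself):
Th1 <- -3; Th2 <- -2.9
t1 <- 700; t2 <- 707
a <- sqrt(t1); b <- sqrt(t2 - t1); cc <- sqrt(a^2 + b^2)   # cc = sqrt(t2)
n  <- 1e5
x1 <- qnorm(runif(n) * pnorm(Th2))   # X1 < Th2 is exactly the rotated second condition here
x2 <- rnorm(n)                       # independent, untruncated
e1 <- (a * x1 - b * x2) / cc         # rotate back to the original coordinates
e2 <- (b * x1 + a * x2) / cc
keep <- e1 > Th1                     # the only condition left to check
mean(keep)                           # acceptance rate
accepted <- cbind(e1, e2)[keep, ]    # the retained (epsilon1, epsilon2) pairs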
The rejection rate will be substantial (it will likely exceed 50%, for example), but probably won't be at all extreme, as it certainly would be if you didn't generate from the extreme-tail truncated normal to begin with. | conditional sampling of bivariate normals | One simple approach that would involve a huge reduction in the rejection rate would be to rotate the coordinates $(\epsilon_1,\epsilon_2)$ to say $(X_1,X_2)$ such that the line $aε_1+bε_2=T_2$ becomes | conditional sampling of bivariate normals
One simple approach that would involve a huge reduction in the rejection rate would be to rotate the coordinates $(\epsilon_1,\epsilon_2)$ to, say, $(X_1,X_2)$ such that the line $a\epsilon_1+b\epsilon_2=T_2$ becomes vertical ($cX_1=\tau_2$, say). Then generate from the truncated normal such that $cX_1<\tau_2$. Then generate an independent $X_2$, reject those pairs which fail the other (rotated) condition, and rotate the accepted pairs back.
The rejection rate will be substantial (it will likely exceed 50%, for example), but probably won't be at all extreme, as it certainly would be if you didn't generate from the extreme-tail truncated normal to begin with. | conditional sampling of bivariate normals
One simple approach that would involve a huge reduction in the rejection rate would be to rotate the coordinates $(\epsilon_1,\epsilon_2)$ to say $(X_1,X_2)$ such that the line $aε_1+bε_2=T_2$ becomes |
50,069 | Chi Square test for survey data | It appears that you are first doing an omnibus test (Chi square test for independence) with 2 df to determine if the "like status" and "gender" are independent or not. And then you are doing post-hoc tests on the individual rows (Chi square goodness of fit tests) to see if the males/females are equally likely under each row. According to This Link under the section "Post Hoc Follow-up Tests", these post-hoc tests are allowable. Each row would generate a Chi square test with 1 df. They would test, for instance "Ho: men and women 'are likers' at the same rate", for each row.
However, I am leery that no adjustment was made for multiple comparisons. Since it appears you are doing three of these 1 df tests, you should adjust your $\alpha$ to correct the familywise error rate (Bonferroni correction for instance).
If your client wants to know how much more likely men are to be a "liker", etc. you could (a), provide a point estimate based on your data as Peter Flom suggested, or (b) you could construct a CI for the difference between the two proportions if you want an interval estimate. Along with the statement that the difference is significant (or not significant), my guess is that a point estimate would suffice for your clients.
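For instance, a minimal R sketch with made-up counts (the real numbers would come from the client's table): prop.test returns both the 1 df test for that row and a confidence interval for the difference between the two proportions, and setting conf.level = 1 - .05/3 builds the Bonferroni adjustment for three row-wise comparisons directly into the interval.
likers <- c(60, 45)     # hypothetical counts of "likers" among men and women
totals <- c(100, 102)   # hypothetical numbers of men and women
prop.test(likers, totals, conf.level = 1 - 0.05 / 3)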
Other than the problem with not controlling the familywise error rate, the analysis seems adequate to me. I hope this helps. | Chi Square test for survey data | It appears that you are first doing an omnibus test (Chi square test for independence) with 2 df to determine if the "like status" and "gender" are independent or not. And then you are doing post-hoc | Chi Square test for survey data
It appears that you are first doing an omnibus test (Chi square test for independence) with 2 df to determine if the "like status" and "gender" are independent or not. And then you are doing post-hoc tests on the individual rows (Chi square goodness of fit tests) to see if the males/females are equally likely under each row. According to This Link under the section "Post Hoc Follow-up Tests", these post-hoc tests are allowable. Each row would generate a Chi square test with 1 df. They would test, for instance "Ho: men and women 'are likers' at the same rate", for each row.
However, I am leery that no adjustment was made for multiple comparisons. Since it appears you are doing three of these 1 df tests, you should adjust your $\alpha$ to correct the familywise error rate (Bonferroni correction for instance).
If your client wants to know how much more likely men are to be a "liker", etc. you could (a), provide a point estimate based on your data as Peter Flom suggested, or (b) you could construct a CI for the difference between the two proportions if you want an interval estimate. Along with the statement that the difference is significant (or not significant), my guess is that a point estimate would suffice for your clients.
Other than the problem with not controlling the familywise error rate, the analysis seems adequate to me. I hope this helps. | Chi Square test for survey data
It appears that you are first doing an omnibus test (Chi square test for independence) with 2 df to determine if the "like status" and "gender" are independent or not. And then you are doing post-hoc |
50,070 | Chi Square test for survey data | The portion after "this is what the code does instead" seems off, although it is hard to tell.
The client's request is reasonable. It isn't answered by chi-square, but it is still a reasonable request. The proportion of men who liked it is 54/99 = about 54%, of women it is 46/103 = about 45% (you can calculate the exact values), so the difference is about 10 percentage points.
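In R, for example, the point estimates implied by those counts are simply:
p_men   <- 54 / 99    # about 0.545
p_women <- 46 / 103   # about 0.447
p_men - p_women       # about 0.10, i.e. roughly 10 percentage points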
The chi-square reported here is about two variables: Liking and sex. Specifically, it looks at whether they are associated. Given that one variable is ordinal, there are more powerful tests than regular chi-square. | Chi Square test for survey data | The portion after "this is what the code does instead" seems off, although it is hard to tell.
The client's request is reasonable. It isn't answered by chi-square, but it still a reasonable request. | Chi Square test for survey data
The portion after "this is what the code does instead" seems off, although it is hard to tell.
The client's request is reasonable. It isn't answered by chi-square, but it is still a reasonable request. The proportion of men who liked it is 54/99 = about 54%, of women it is 46/103 = about 45% (you can calculate the exact values), so the difference is about 10 percentage points.
The chi-square reported here is about two variables: Liking and sex. Specifically, it looks at whether they are associated. Given that one variable is ordinal, there are more powerful tests than regular chi-square. | Chi Square test for survey data
The portion after "this is what the code does instead" seems off, although it is hard to tell.
The client's request is reasonable. It isn't answered by chi-square, but it still a reasonable request. |
50,071 | How do I deal with large data similarity computation? | I have been doing a similar procedure on a regular basis lately. It isn't quick and it takes a decent chunk of HDD space if you process a lot of files. As a note, the data I work with has fewer "features", more "users", and I use perl to process it.
First off, I would not recommend storing the data together as a single matrix, since most programs (certainly R) will not be able to handle it. If you store each user as a separate file (.txt or whatever other format works better for you), you can then access them individually, even with R.
Then, as a new document comes in, you will have to do 100,000 comparisons each between two vectors of length 10 million.
Here's an example in R with two random binary vectors of length 10,000,000.
x=as.numeric(rnorm(10000000)<0)
y=as.numeric(rnorm(10000000)<0)
sim = crossprod(x,y)/sqrt(crossprod(x)*crossprod(y))
[,1]
[1,] 0.4999211
Since the two vectors in this example are random 0,1 vectors, they have a cosine similarity of 0.5. This one similarity (cosine sim) calculation took less than a second without me trying to optimize it.
To see how long your process would take, you could loop this code over 100,000 iterations and store each similarity result to a results vector that contains all its matches. I tried the above code with 1000 iterations and it took about 70 seconds.
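To make that concrete, here is a rough sketch of such a loop (the stored users are simulated inside the loop purely as placeholders; in practice you would read each user's file at that point):
new_doc   <- as.numeric(rnorm(10000000) < 0)
denom_new <- sqrt(crossprod(new_doc))
n_users   <- 1000                 # scale up towards 100,000 for the real run
sims <- numeric(n_users)
for (i in seq_len(n_users)) {
  user    <- as.numeric(rnorm(10000000) < 0)  # placeholder for: load user i's vector
  sims[i] <- crossprod(new_doc, user) / (denom_new * sqrt(crossprod(user)))
}
head(sort(sims, decreasing = TRUE))  # the closest matches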
You can also insert whatever similarity measure you desire. It is certainly doable in terms of computation time, but you may want to optimize this if you need it done faster. Hope this gives you an idea of what it might take computationally. | How do I deal with large data similarity computation? | I have been doing a similar procedure on a regular basis lately. It isn't quick and it takes a decent chunk of HDD space if you process a lot of files. As a note, the data I work with has fewer "fea | How do I deal with large data similarity computation?
I have been doing a similar procedure on a regular basis lately. It isn't quick and it takes a decent chunk of HDD space if you process a lot of files. As a note, the data I work with has fewer "features", more "users", and I use perl to process it.
First off, I would not recommend storing the data together as a single matrix, since most programs (certainly R) will not be able to handle it. If you store each user as a separate file (.txt or whatever other format works better for you), you can then access them individually, even with R.
Then, as a new document comes in, you will have to do 100,000 comparisons each between two vectors of length 10 million.
Here's an example in R with two random binary vectors of length 10,000,000.
x=as.numeric(rnorm(10000000)<0)
y=as.numeric(rnorm(10000000)<0)
sim = crossprod(x,y)/sqrt(crossprod(x)*crossprod(y))
[,1]
[1,] 0.4999211
Since the two vectors in this example are random 0,1 vectors, they have a cosine similarity of 0.5. This one similarity (cosine sim) calculation took less than a second without me trying to optimize it.
To see how long your process would take, you could loop this code over 100,000 iterations and store each similarity result to a results vector that contains all its matches. I tried the above code with 1000 iterations and it took about 70 seconds.
You can also insert whatever similarity measure you desire. It is certainly doable in terms of computation time, but you may want to optimize this if you need it done faster. Hope this gives you an idea of what it might take computationally. | How do I deal with large data similarity computation?
I have been doing a similar procedure on a regular basis lately. It isn't quick and it takes a decent chunk of HDD space if you process a lot of files. As a note, the data I work with has fewer "fea |
50,072 | How do I deal with large data similarity computation? | What you're talking about is a "Vector Space Model" of information retrieval. Wikipedia lists some programs which help with this - the one I'm most familiar with is Lucene.
This page describes their algorithm. The major points are that 1) you can invert your index, 2) you can look through indices in parallel and 3) you can limit to just the top $k$. All of these things give you a pretty nice speedup. | How do I deal with large data similarity computation? | What you're talking about is a "Vector Space Model" of information retrieval. Wikipedia lists some programs which help with this - the one I'm most familiar with is Lucene.
This page describes their a | How do I deal with large data similarity computation?
What you're talking about is a "Vector Space Model" of information retrieval. Wikipedia lists some programs which help with this - the one I'm most familiar with is Lucene.
This page describes their algorithm. The major points are that 1) you can invert your index, 2) you can look through indices in parallel and 3) you can limit to just the top $k$. All of these things give you a pretty nice speedup. | How do I deal with large data similarity computation?
What you're talking about is a "Vector Space Model" of information retrieval. Wikipedia lists some programs which help with this - the one I'm most familiar with is Lucene.
This page describes their a |
50,073 | How do I deal with large data similarity computation? | You could try sorting your data by the total number of "1s" in each row (vector length). This would give you a space to start searching when you're given a new user. For example, if the new user has a length of 1342, you could check all entries with lengths plus or minus 500. You can do this efficiently if the data are sorted. Obviously, this requires an upfront investment of compute time to pre-sort your data.
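A minimal R sketch of that idea on a small simulated binary matrix (the real data would of course be much larger and probably kept out of memory):
set.seed(1)
m   <- matrix(rbinom(200 * 50, 1, 0.1), nrow = 200)  # 200 "users", 50 features
len <- rowSums(m)
ord <- order(len)
m_sorted   <- m[ord, ]
len_sorted <- len[ord]
new_user <- rbinom(50, 1, 0.1)
target   <- sum(new_user)
window   <- 5                                         # +/- tolerance on the length
candidates <- which(len_sorted >= target - window &
                    len_sorted <= target + window)
# compare new_user only against m_sorted[candidates, ]
length(candidates)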
The best solution will probably depend on the special features of the data you have (you mention that the data are sparse, so you should try to exploit that somehow). My answer would be effective if the difference in the length of two vectors correlates with the distance between them (e.g. Hamming distance). You could check to see if this is true on a random sample of your data by making a simple scatter plot.
In general, your best bet would be to determine some scalar function that is a good predictor of how similar two entries are, then sort your data by that function, and then search locally when given a new entry. My first guess would be to try vector length as that function, but there is a decent chance you'll be able to find a better one by playing around with your data. | How do I deal with large data similarity computation? | You could try sorting your data by the total number of "1s" in each row (vector length). This would give you a space to start searching when you're given a new user. For example, if the new user has a | How do I deal with large data similarity computation?
You could try sorting your data by the total number of "1s" in each row (vector length). This would give you a space to start searching when you're given a new user. For example, if the new user has a length of 1342, you could check all entries with lengths plus or minus 500. You can do this efficiently if the data are sorted. Obviously, this requires an upfront investment of compute time to pre-sort your data.
The best solution will probably depend on the special features of the data you have (you mention that the data are sparse, so you should try to exploit that somehow). My answer would be effective if the difference in the length of two vectors correlates with the distance between them (e.g. Hamming distance). You could check to see if this is true on a random sample of your data by making a simple scatter plot.
In general, your best bet would be to determine some scalar function that is a good predictor of how similar two entries are, then sort your data by that function, and then search locally when given a new entry. My first guess would be to try vector length as that function, but there is a decent chance you'll be able to find a better one by playing around with your data. | How do I deal with large data similarity computation?
You could try sorting your data by the total number of "1s" in each row (vector length). This would give you a space to start searching when you're given a new user. For example, if the new user has a |
50,074 | How do I deal with large data similarity computation? | There are a number of non-euclidean distance measures, some of which are specifically used for binary data. Two distance-measures are:
1) Simple Matching Coefficient;
2) Jaccard Coefficient.
They have somewhat different strengths and weaknesses. In the simple matching coefficient, both mutual absences and mutual presences contribute to the similarity, whereas the Jaccard coefficient focuses on mutual presences only.
You can look up those measures in more detail if you want (Simple Matching and Jaccard are summarized here: http://stat.ethz.ch/education/semesters/ss2012/ams/slides/v4.2.pdf).
If you are using R, the "binary" method of the base function "dist" gives the asymmetric binary (Jaccard-type) distance, and the command "vegdist" in the package "vegan" has the Jaccard index; the simple matching coefficient can be computed directly (see the sketch below) or obtained via "daisy" in the "cluster" package with symmetric binary variables.
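For concreteness, here is how the two coefficients look when computed by hand for two short made-up binary vectors:
x <- c(1, 0, 1, 1, 0, 0, 1)
y <- c(1, 0, 0, 1, 0, 1, 1)
a <- sum(x == 1 & y == 1)    # joint presences
d <- sum(x == 0 & y == 0)    # joint absences
n <- length(x)
smc     <- (a + d) / n       # simple matching: 0/0 agreements count
jaccard <- a / (n - d)       # Jaccard: 0/0 agreements are ignored
c(smc = smc, jaccard = jaccard)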
(Edit): I just found something, which, depending on your hardware, might yield some benefit. If you have an NVIDIA multicore GPU (which is fairly common), the package 'rpud' has a function rpuDist() which computes a number of the standard distance metrics using the GPU with great improvement, as shown here: http://www.r-tutor.com/gpu-computing/clustering/distance-matrix
I haven't tested it, and with a dataset your size there might be other bottlenecks, but it might be worth having a go at it. Also, it appears that this, and another package (gputools) are only available on Linux, so that is another limitation...
Hope that helps! | How do I deal with large data similarity computation? | There are a number of non-euclidean distance measures, some of which are specifically used for binary data. Two distance-measures are:
1) Simple Matching Coefficient;
2) Jaccard Coefficient.
They have | How do I deal with large data similarity computation?
There are a number of non-euclidean distance measures, some of which are specifically used for binary data. Two distance-measures are:
1) Simple Matching Coefficient;
2) Jaccard Coefficient.
They have somewhat different strengths and weaknesses. In the simple matching coefficient, both mutual absences and mutual presences contribute to the similarity, whereas the Jaccard coefficient focuses on mutual presences only.
You can look up those measures in more detail if you want (Simple Matching and Jaccard are summarized here: http://stat.ethz.ch/education/semesters/ss2012/ams/slides/v4.2.pdf).
If you are using R, the "binary" method of the base function "dist" gives the asymmetric binary (Jaccard-type) distance, and the command "vegdist" in the package "vegan" has the Jaccard index; the simple matching coefficient can be obtained via "daisy" in the "cluster" package with symmetric binary variables.
(Edit): I just found something, which, depending on your hardware, might yield some benefit. If you have an NVIDIA multicore GPU (which is fairly common), the package 'rpud' has a function rpuDist() which computes a number of the standard distance metrics using the GPU with great improvement, as shown here: http://www.r-tutor.com/gpu-computing/clustering/distance-matrix
I haven't tested it, and with a dataset your size there might be other bottlenecks, but it might be worth having a go at it. Also, it appears that this, and another package (gputools) are only available on Linux, so that is another limitation...
Hope that helps! | How do I deal with large data similarity computation?
There are a number of non-euclidean distance measures, some of which are specifically used for binary data. Two distance-measures are:
1) Simple Matching Coefficient;
2) Jaccard Coefficient.
They have |
50,075 | RandomForestClassifier Parameter Optimization | To answer your second question, why accuracy tails off, I put together an example in R that should resemble your problem. I generated ~50 good predictors and ~1000 bad predictors (that are just randomly assigned dummy variables). I start by increasing the number of good predictors, and then after maxing those out I incrementally add in all of the bad predictors.
This illustrates what you observe in your data - up to a point the predictors are good and adding value, then at some point you're adding in the worse features and they start to drown out the good features.
The (admittedly messy) code is below:
library(data.table)
library(randomForest)
set.seed(343)
y <- sample(c(0,1), size=1500, replace=TRUE, prob=c(.8,.2))
pct_seq <- seq(.2,.1,by=-.002)
good.x <- sample(c(1,0), size=1500, replace=TRUE, prob=c(.21,.79))
for(i in pct_seq) {
samp1 <- sample(c(1,0), size=1500, replace=TRUE, prob=c(i,1-i))
samp0 <- sample(c(1,0), size=1500, replace=TRUE, prob=c(i/5,1-i/5))
good.x <- cbind(good.x,ifelse(y==1,samp1, samp0))
}
pct_seq <- rep(.02,1000)
bad.x <- sample(c(1,0), size=1500, replace=TRUE, prob=c(.01,.99))
for(i in pct_seq) {
samp1 <- sample(c(1,0), size=1500, replace=TRUE, prob=c(i,1-i))
samp0 <- sample(c(1,0), size=1500, replace=TRUE, prob=c(i,1-i))
bad.x <- cbind(bad.x,ifelse(y==1,samp1, samp0))
}
x <- cbind(good.x,bad.x)
y.fac <- as.factor(y)
var.seq <- c(seq(11,51, by=10), seq(151,951, by=100))
model.results <- data.frame(0,0)
for (j in var.seq) {
print(j)
print(randomForest(x[,1:j],y.fac,ntree=1000))
} | RandomForestClassifier Parameter Optimization | To answer your second question, why accuracy tails off, I put together an example in R that should resemble your problem. I generated ~50 good predictors and ~1000 bad predictors (that are just random | RandomForestClassifier Parameter Optimization
To answer your second question, why accuracy tails off, I put together an example in R that should resemble your problem. I generated ~50 good predictors and ~1000 bad predictors (that are just randomly assigned dummy variables). I start by increasing the number of good predictors, and then after maxing those out I incrementally add in all of the bad predictors.
This illustrates what you observe in your data - up to a point the predictors are good and adding value, then at some point you're adding in the worse features and they start to drown out the good features.
The (admittedly messy) code is below:
library(data.table)
library(randomForest)
set.seed(343)
y <- sample(c(0,1), size=1500, replace=TRUE, prob=c(.8,.2))
pct_seq <- seq(.2,.1,by=-.002)
good.x <- sample(c(1,0), size=1500, replace=TRUE, prob=c(.21,.79))
for(i in pct_seq) {
samp1 <- sample(c(1,0), size=1500, replace=TRUE, prob=c(i,1-i))
samp0 <- sample(c(1,0), size=1500, replace=TRUE, prob=c(i/5,1-i/5))
good.x <- cbind(good.x,ifelse(y==1,samp1, samp0))
}
pct_seq <- rep(.02,1000)
bad.x <- sample(c(1,0), size=1500, replace=TRUE, prob=c(.01,.99))
for(i in pct_seq) {
samp1 <- sample(c(1,0), size=1500, replace=TRUE, prob=c(i,1-i))
samp0 <- sample(c(1,0), size=1500, replace=TRUE, prob=c(i,1-i))
bad.x <- cbind(bad.x,ifelse(y==1,samp1, samp0))
}
x <- cbind(good.x,bad.x)
y.fac <- as.factor(y)
var.seq <- c(seq(11,51, by=10), seq(151,951, by=100))
model.results <- data.frame(0,0)
for (j in var.seq) {
print(j)
print(randomForest(x[,1:j],y.fac,ntree=1000))
} | RandomForestClassifier Parameter Optimization
To answer your second question, why accuracy tails off, I put together an example in R that should resemble your problem. I generated ~50 good predictors and ~1000 bad predictors (that are just random |
50,076 | RandomForestClassifier Parameter Optimization | There are a few unorthodox (but not wrong) steps in your approach:
1) Usually, one does not use feature selection in sequence with classification. RF are usually used for one or for the other. It is not clear from the question whether you use the first step to select the "good" words and only use them in the second step, or not.
2) The three usual hyperparameters to set in RF are (in order of importance, as I gather from people's impressions; I do not know of any empirical research on this):
the number of (randomly chosen) features to select in each tree construction step (max_features in sklearn, mtry in R)
the number of trees per forest (n_estimators in sklearn, ntree in R)
the maximum depth of the tree or some measure of the size of the tree (here things diverge: sklearn limits the depth of the tree via max_depth, while R limits the size of the tree via nodesize and maxnodes)
You decided only to select the first hyperparameter, and not the others, which is ok, since it is considered as the most important (again, I know of no empirical evidence to that effect).
3) You used way too many repetitions (500) to select the hyperparameter - my limited experience is that far fewer repetitions are needed (around 10) (but I do not have experience in text data, which is sparse - many 0s). | RandomForestClassifier Parameter Optimization | There is few unorthodox (but not wrong) steps in your approach:
1) Usually, one does not use feature selection in sequence with classification. RF are usually used for one or for the other. It is not | RandomForestClassifier Parameter Optimization
There are a few unorthodox (but not wrong) steps in your approach:
1) Usually, one does not use feature selection in sequence with classification. RF are usually used for one or for the other. It is not clear from the question whether you use the first step to select the "good" words and only use them in the second step, or not.
2) The three usual hyperparameters to set in RF are (in order of importance, as I gather from people's impressions; I do not know of any empirical research on this):
the number of (randomly chosen) features to select in each tree construction step (max_features in sklearn, mtry in R)
the number of trees per forest (n_estimators in sklearn, ntree in R)
the maximum depth of the tree or some measure of the size of the tree (here things diverge: sklearn limits the depth of the tree via max_depth, while R limits the size of the tree via nodesize and maxnodes)
You decided only to select the first hyperparameter, and not the others, which is ok, since it is considered as the most important (again, I know of no empirical evidence to that effect).
3) You used way too many repetitions (500) to select the hyperparameter - my limited experience is that far fewer repetitions are needed (around 10) (but I do not have experience in text data, which is sparse - many 0s). | RandomForestClassifier Parameter Optimization
There is few unorthodox (but not wrong) steps in your approach:
1) Usually, one does not use feature selection in sequence with classification. RF are usually used for one or for the other. It is not |
50,077 | RandomForestClassifier Parameter Optimization | "Number of Features" parameter holds for the amount of randomness in the Random Forest (the fewer features you choose the more random your forest is).
If you have lots of "relevant" features, you can choose small feature set to build each tree. But if only a fraction of your features is relevant, you better choose more features for each tree. In this case you can also perform feature selection before using Random Forest. | RandomForestClassifier Parameter Optimization | "Number of Features" parameter holds for the amount of randomness in the Random Forest (the fewer features you choose the more random your forest is).
If you have lots of "relevant" features, you can | RandomForestClassifier Parameter Optimization
"Number of Features" parameter holds for the amount of randomness in the Random Forest (the fewer features you choose the more random your forest is).
If you have lots of "relevant" features, you can choose small feature set to build each tree. But if only a fraction of your features is relevant, you better choose more features for each tree. In this case you can also perform feature selection before using Random Forest. | RandomForestClassifier Parameter Optimization
"Number of Features" parameter holds for the amount of randomness in the Random Forest (the fewer features you choose the more random your forest is).
If you have lots of "relevant" features, you can |
50,078 | 1 control group vs. 2 treatments: one ANOVA or two t-tests? | You don't have to run an ANOVA first, but most people do out of habit. (Whether reviewers will give you a hard time about not having done so is a separate issue.) Note that the original Dunnett's test required that the conditions have equal $n$s. The test has since been generalized, so it is fine if you do not have equal $n$s, just be sure you are running the right test (and citing it properly). You can also run two t-tests instead of either an ANOVA or Dunnett's test, but if you want to control for type I error inflation, you will need to use the Bonferroni correction as your tests would not be independent. | 1 control group vs. 2 treatments: one ANOVA or two t-tests? | You don't have to run an ANOVA first, but most people do out of habit. (Whether reviewers will give you a hard time about not having done so is a separate issue.) Note that the original Dunnett's te | 1 control group vs. 2 treatments: one ANOVA or two t-tests?
You don't have to run an ANOVA first, but most people do out of habit. (Whether reviewers will give you a hard time about not having done so is a separate issue.) Note that the original Dunnett's test required that the conditions have equal $n$s. The test has since been generalized, so it is fine if you do not have equal $n$s, just be sure you are running the right test (and citing it properly). You can also run two t-tests instead of either an ANOVA or Dunnett's test, but if you want to control for type I error inflation, you will need to use the Bonferroni correction as your tests would not be independent. | 1 control group vs. 2 treatments: one ANOVA or two t-tests?
You don't have to run an ANOVA first, but most people do out of habit. (Whether reviewers will give you a hard time about not having done so is a separate issue.) Note that the original Dunnett's te |
50,079 | 1 control group vs. 2 treatments: one ANOVA or two t-tests? | If you have three groups you should do an ANOVA (after checking assumptions of normality etc of course) which will test if the three groups differ overall. If that is the case you can then either do contrasts or post-hoc tests to test your hypotheses directly, e.g. does group 1 differ from group 2. How to do contrasts or post-hoc tests depends on the software you use (e.g. R, SPSS etc). | 1 control group vs. 2 treatments: one ANOVA or two t-tests? | If you have three groups you should do an ANOVA (after checking assumptions of normality etc of course) which will test if the three groups differ overall. If that is the case you can then either do c | 1 control group vs. 2 treatments: one ANOVA or two t-tests?
If you have three groups you should do an ANOVA (after checking assumptions of normality etc of course) which will test if the three groups differ overall. If that is the case you can then either do contrasts or post-hoc tests to test your hypotheses directly, e.g. does group 1 differ from group 2. How to do contrasts or post-hoc tests depends on the software you use (e.g. R, SPSS etc). | 1 control group vs. 2 treatments: one ANOVA or two t-tests?
If you have three groups you should do an ANOVA (after checking assumptions of normality etc of course) which will test if the three groups differ overall. If that is the case you can then either do c |
50,080 | Is MLE more efficient than Moment method? | I just wanted to chime in with a story. Last Joint Statistical Meetings, I saw Donald Rubin speak after a few presentations at a causal inference session. He started poking fun at the presenters because their methods were based on inverse probability weighting schemes (resembling the Horvitz-Thompson estimator in sampling theory). Anyway, I'll never forget the quote (paraphrasing):
"Horvitz-Thompson is just glorified Method of Moments. We've known that was inferior to
Maximum Likelihood since Fisher in the 40s!" | Is MLE more efficient than Moment method? | I just wanted to chime in with a story. Last Joint Statistical Meetings, I saw Donald Rubin speak after a few presentations at a causal inference session. He started poking fun at the presenters becau | Is MLE more efficient than Moment method?
I just wanted to chime in with a story. Last Joint Statistical Meetings, I saw Donald Rubin speak after a few presentations at a causal inference session. He started poking fun at the presenters because their methods were based on inverse probability weighting schemes (resembling the Horvitz-Thompson estimator in sampling theory). Anyway, I'll never forget the quote (paraphrasing):
"Horvitz-Thompson is just glorified Method of Moments. We've known that was inferior to
Maximum Likelihood since Fisher in the 40s!" | Is MLE more efficient than Moment method?
I just wanted to chime in with a story. Last Joint Statistical Meetings, I saw Donald Rubin speak after a few presentations at a causal inference session. He started poking fun at the presenters becau |
50,081 | Is MLE more efficient than Moment method? | Percentile estimates will not have a normal distribution, even asymptotically. Since you know your data are normal, why not consider a tolerance interval. It will not contain the 99.5 and .05 percentiles, per se, but you can set one up to cover 99% of the possible values with X% confidence (adjustable). If your goal is coverage of possible values, this will be sufficient. However, if you actually want the percentiles themselves, then see this paper and this | Is MLE more efficient than Moment method? | Percentile estimates will not have a normal distribution, even asymptotically. Since you know your data are normal, why not consider a tolerance interval. It will not contain the 99.5 and .05 percenti | Is MLE more efficient than Moment method?
Percentile estimates will not have a normal distribution, even asymptotically. Since you know your data are normal, why not consider a tolerance interval. It will not contain the 99.5 and .05 percentiles, per se, but you can set one up to cover 99% of the possible values with X% confidence (adjustable). If your goal is coverage of possible values, this will be sufficient. However, if you actually want the percentiles themselves, then see this paper and this | Is MLE more efficient than Moment method?
Percentile estimates will not have a normal distribution, even asymptotically. Since you know your data are normal, why not consider a tolerance interval. It will not contain the 99.5 and .05 percenti |
50,082 | State of the art: Non-parametric density estimation with a boundary and data clumped near zero [duplicate] | If you know the range of your data, you can use the inverse probit transformation. On a couple of examples, the fit looked very satisfying visually.
This approach is explained in more detail in a clear paper [1]. I think there should be an R implementation, but I couldn't find it (perhaps you can contact the author).
The approach can also be adapted to the case where your random variable is distributed in $[0,+\infty)$.
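A minimal R sketch of the idea (the beta sample below is just made-up boundary-clumped data on $(0,1)$, and the bandwidth choice is the ordinary default on the transformed scale, not the paper's refined version):
set.seed(1)
x  <- rbeta(500, 0.7, 3)        # artificial data on (0, 1), piled up near zero
z  <- qnorm(x)                  # probit transform to the real line
bw <- bw.nrd0(z)                # standard bandwidth on the z-scale
grid <- seq(0.005, 0.995, length.out = 200)
zg   <- qnorm(grid)
fz   <- sapply(zg, function(u) mean(dnorm((u - z) / bw)) / bw)  # KDE of z
fx   <- fz / dnorm(zg)          # change of variables back to the x-scale
plot(grid, fx, type = "l", xlab = "x", ylab = "estimated density")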
[1] G. Geenens, Probit transformation for kernel density estimation on the unit interval. | State of the art: Non-parametric density estimation with a boundary and data clumped near zero [dupl | If you know the range of your data, you can use
the inverse probit transformation. On a couple
of examples, the fit looked very satisfying visually.
This approach is explained in more detail in a c | State of the art: Non-parametric density estimation with a boundary and data clumped near zero [duplicate]
If you know the range of your data, you can use the inverse probit transformation. On a couple of examples, the fit looked very satisfying visually.
This approach is explained in more detail in a clear paper [1]. I think there should be an R implementation, but I couldn't find it (perhaps you can contact the author).
The approach can also be adapted to the case where your random variable is distributed in $[0,+\infty)$.
[1] G. Geenens, Probit transformation for kernel density estimation on the unit interval. | State of the art: Non-parametric density estimation with a boundary and data clumped near zero [dupl
If you know the range of your data, you can use
the inverse probit transformation. On a couple
of examples, the fit looked very satisfying visually.
This approach is explained in more detail in a c |
50,083 | Why describe a sample as i.i.d.? | You have to recall a random variable is just a function that maps an event space into a probability space. For a single realization from a single observation, it may seem redundant to consider that such mappings are defined similarly over $n=1000$ replications. However, the statistical experiment is based on some summary measure or "data reduction" defined on the event space and probability space. The fact that IID has reduced these concepts to mere cartesian products of the basic observation is a product of the stringent IID assumption.
Designating each random variable allows you to formalize the event space, define estimators and calculate their distribution, and set up probability models for outcomes. In many experiments, $X_1, X_2, \ldots, X_n$ are neither independent nor are they identically distributed, such as with Urn models. So you can represent the probability as the product of conditional probabilities for each of $X_1$, $X_2 | X_1$, $X_3 | X_2, X_1$, etc. Indeed many useful limit theorems can be derived in the presence of mildly correlated observations and or distributional differences such as the general Lyapunov or Lindeberg-Feller Central Limit Theorem. | Why describe a sample as i.i.d.? | You have to recall a random variable is just a function that maps an event space into a probability space. For a single realization from a single observation, it may seem redundant to consider that su | Why describe a sample as i.i.d.?
You have to recall a random variable is just a function that maps an event space into a probability space. For a single realization from a single observation, it may seem redundant to consider that such mappings are defined similarly over $n=1000$ replications. However, the statistical experiment is based on some summary measure or "data reduction" defined on the event space and probability space. The fact that IID has reduced these concepts to mere cartesian products of the basic observation is a product of the stringent IID assumption.
Designating each random variable allows you to formalize the event space, define estimators and calculate their distribution, and set up probability models for outcomes. In many experiments, $X_1, X_2, \ldots, X_n$ are neither independent nor are they identically distributed, such as with Urn models. So you can represent the probability as the product of conditional probabilities for each of $X_1$, $X_2 | X_1$, $X_3 | X_2, X_1$, etc. Indeed many useful limit theorems can be derived in the presence of mildly correlated observations and or distributional differences such as the general Lyapunov or Lindeberg-Feller Central Limit Theorem. | Why describe a sample as i.i.d.?
You have to recall a random variable is just a function that maps an event space into a probability space. For a single realization from a single observation, it may seem redundant to consider that su |
50,084 | GLM with Temporal Data | I'm still learning a lot in this area, but since you don't have an answer yet, my thoughts are...
The correlation structure you specify in the various functions that allow it (gls, lme, etc.) is for within-group correlation, so I don't believe AR1 is correct, since the multiple measurements are within the same timeframe.
Perhaps you want (I created dat2, which centers your variables):
library (nlme)
gls (wat ~ rain + temp, dat2, correlation=corCompSymm (form = ~1 | month))
which gives answers, in your example, similar to GEE:
library (geepack)
geeglm (wat ~ rain + temp, data = dat2, id = month, corstr = "exchangeable")
Unfortunately, I've read several papers on GEE v GLMM and still haven't figured out whether GEE would be applicable in such a case. There are several threads on this, one of which is:
What is the difference between generalized estimating equations and GLMM?
Hope that helps. | GLM with Temporal Data | I'm still learning a lot in this area, but since you don't have an answer yet, my thoughts are...
The correlation structure you specify in the various functions that allow it (gls, lme, etc) are for w | GLM with Temporal Data
I'm still learning a lot in this area, but since you don't have an answer yet, my thoughts are...
The correlation structure you specify in the various functions that allow it (gls, lme, etc.) is for within-group correlation, so I don't believe AR1 is correct, since the multiple measurements are within the same timeframe.
Perhaps you want (I created dat2, which centers your variables):
library (nlme)
gls (wat ~ rain + temp, dat2, correlation=corCompSymm (form = ~1 | month))
which gives answers, in your example, similar to GEE:
library (geepack)
geeglm (wat ~ rain + temp, data = dat2, id = month, corstr = "exchangeable")
Unfortunately, I've read several papers on GEE v GLMM and still haven't figured out whether GEE would be applicable in such a case. There are several threads on this, one of which is:
What is the difference between generalized estimating equations and GLMM?
Hope that helps. | GLM with Temporal Data
I'm still learning a lot in this area, but since you don't have an answer yet, my thoughts are...
The correlation structure you specify in the various functions that allow it (gls, lme, etc) are for w |
50,085 | How to generate from the copula by inverse conditional cdf function of the copula? | A typical approach (see e.g. Nelsen 2006, p. 41) is to sample two independent uniformly distributed random vectors $u$ and $y$ of the desired sample length. The conditional copula $C_u$ (conditioned on $u$) is given through the partial derivative: $$ C_u(v) = \frac{\partial}{\partial u} C(u,v) $$
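As a concrete illustration, here is a minimal R sketch for the Clayton copula, where solving $C_{u}(v)=y$ for $v$ has a closed form (the parameter value is an arbitrary choice for the example):
set.seed(42)
n     <- 2000
theta <- 2
u <- runif(n)                  # first uniform coordinate
y <- runif(n)                  # auxiliary uniform, plays the role of C_u(v)
v <- (u^(-theta) * (y^(-theta / (1 + theta)) - 1) + 1)^(-1 / theta)
plot(u, v, pch = ".")          # sample from the Clayton copula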
Hence, one needs to solve $C_u(v)=y$ for $v$ to get the desired pair $(u,v)$. For a "custom made" copula, one has to calculate its partial derivative and its quasi-inverse. In case the copula is not completely "custom made" it might already be covered in other statistical software. One might for instance take a look into the R packages copula and VineCopula offering a rich set of families (speaking from my R experience, there are more in R and of course in other languages). | How to generate from the copula by inverse conditional cdf function of the copula? | A typical approach (see e.g. Nelsen 2006, p. 41) is to sample two independent uniform distributed random vectors $u$ and $y$ of the desired sample length. The conditional copula $C_u$ (conditioned on | How to generate from the copula by inverse conditional cdf function of the copula?
A typical approach (see e.g. Nelsen 2006, p. 41) is to sample two independent uniformly distributed random vectors $u$ and $y$ of the desired sample length. The conditional copula $C_u$ (conditioned on $u$) is given through the partial derivative: $$ C_u(v) = \frac{\partial}{\partial u} C(u,v) $$
Hence, one needs to solve $C_u(v)=y$ for $v$ to get the desired pair $(u,v)$. For a "custom made" copula, one has to calculate its partial derivative and its quasi-inverse. In case the copula is not completely "custom made" it might already be covered in other statistical software. One might for instance take a look into the R packages copula and VineCopula offering a rich set of families (speaking from my R experience, there are more in R and of course in other languages). | How to generate from the copula by inverse conditional cdf function of the copula?
A typical approach (see e.g. Nelsen 2006, p. 41) is to sample two independent uniform distributed random vectors $u$ and $y$ of the desired sample length. The conditional copula $C_u$ (conditioned on |
50,086 | Joint pdf of a continuous and a discrete rv | Sheldon, Sheldon. How come you have to ask a question about math to people like us?
In survival analysis, your setting is called "competing risks". The joint distribution of the earliest failure time and the type of failure is fully described by the so-called "cumulative incidence function" (it even allows for censoring, i.e. no failure until the end of the time horizon). I am quite sure that you will find relevant information in the literature cited in
Assumptions and pitfalls in competing risks model | Joint pdf of a continuous and a discrete rv | Sheldon, Sheldon. How comes that you have to ask a question about math to people like us?
In survival analysis, your setting is called "competing risk". The joint distribution of the earliest failure | Joint pdf of a continuous and a discrete rv
Sheldon, Sheldon. How come you have to ask a question about math to people like us?
In survival analysis, your setting is called "competing risks". The joint distribution of the earliest failure time and the type of failure is fully described by the so-called "cumulative incidence function" (it even allows for censoring, i.e. no failure until the end of the time horizon). I am quite sure that you will find relevant information in the literature cited in
Assumptions and pitfalls in competing risks model | Joint pdf of a continuous and a discrete rv
Sheldon, Sheldon. How comes that you have to ask a question about math to people like us?
In survival analysis, your setting is called "competing risk". The joint distribution of the earliest failure |
50,087 | Joint pdf of a continuous and a discrete rv | In simplistic terms, there is no such thing as a joint density of a continuous random variable and a discrete random variable because all the probability mass lies on two straight lines ($v=0$ and $v=1$) and on these lines, the joint density, being the probability mass per unit area, is infinite. On the other hand, the line density of the mass on the two lines is a (univariate) exponential density (measured in probability mass per unit length). More specifically, the line density on the line $v=0$ is the density of $U_2$ and the line density on the line $v=1$ is the density of $U_1$. | Joint pdf of a continuous and a discrete rv | In simplistic terms, there is no such thing as a joint density of a continuous random variable and a discrete random variable because all the probability mass lies
on two straight lines ($v=0$ and $v= | Joint pdf of a continuous and a discrete rv
In simplistic terms, there is no such thing as a joint density of a continuous random variable and a discrete random variable because all the probability mass lies on two straight lines ($v=0$ and $v=1$) and on these lines, the joint density, being the probability mass per unit area, is infinite. On the other hand, the line density of the mass on the two lines is a (univariate) exponential density (measured in probability mass per unit length). More specifically, the line density on the line $v=0$ is the density of $U_2$ and the line density on the line $v=1$ is the density of $U_1$. | Joint pdf of a continuous and a discrete rv
In simplistic terms, there is no such thing as a joint density of a continuous random variable and a discrete random variable because all the probability mass lies
on two straight lines ($v=0$ and $v= |
50,088 | Joint pdf of a continuous and a discrete rv | What you have here is a mixture model, specifically a mixture of exponentials. If I understand your problem setup correctly, I believe what you're looking for looks something like this:
$$
u \sim f(x) =
\begin{cases}
f_{Y_1}(x), & V=1 \\
f_{Y_2}(x), & V=0
\end{cases}
$$
or alternatively
$$u \sim f(x) = \theta f_{Y_1}(x) + (1-\theta)f_{Y_2}(x)$$
Where $\theta$ is the expected proportion of samples generated by $Y_1$ (or using your formulation, $\theta = E[V]$).
You can confirm this experimentally. Here's a mixture model with arbitrarily selected parameters r1, r2 and theta:
n=1e5
theta=.2
v=rbinom(n,1,theta)
r1=5; r2=1
sample=v*rexp(n,r1) + (1-v)*rexp(n,r2)
f=function(x){theta*dexp(x,r1) + (1-theta)*dexp(x,r2)}
plot(density(sample), xlim=c(0,6))
xv=seq(from=0,to=6, length.out=1e4)
lines(xv,f(xv), col='red') | Joint pdf of a continuous and a discrete rv | What you have here is a mixture model, specifically a mixture of exponentials. If I understand your problem setup correctly, I believe what you're looking for looks something like this:
$$
u \sim f(x) | Joint pdf of a continuous and a discrete rv
What you have here is a mixture model, specifically a mixture of exponentials. If I understand your problem setup correctly, I believe what you're looking for looks something like this:
$$
u \sim f(x) =
\begin{cases}
f_{Y_1}(x), & V=1 \\
f_{Y_2}(x), & V=0
\end{cases}
$$
or alternatively
$$u \sim f(x) = \theta f_{Y_1}(x) + (1-\theta)f_{Y_2}(x)$$
Where $\theta$ is the expected proportion of samples generated by $Y_1$ (or using your formulation, $\theta = E[V]$).
You can confirm this experimentally. Here's a mixture model with arbitrarily selected parameters r1, r2 and theta:
n=1e5
theta=.2
v=rbinom(n,1,theta)
r1=5; r2=1
sample=v*rexp(n,r1) + (1-v)*rexp(n,r2)
f=function(x){theta*dexp(x,r1) + (1-theta)*dexp(x,r2)}
plot(density(sample), xlim=c(0,6))
xv=seq(from=0,to=6, length.out=1e4)
lines(xv,f(xv), col='red') | Joint pdf of a continuous and a discrete rv
What you have here is a mixture model, specifically a mixture of exponentials. If I understand your problem setup correctly, I believe what you're looking for looks something like this:
$$
u \sim f(x) |
50,089 | Duration analysis of unemployment | First, the incidental parameter problem is pretty easy to solve in a discrete duration model. As long as you are willing to assume a logistic form for your model, you can eliminate the incidental parameters via a clever conditioning argument. The usual cite in economics is Chamberlain (1980, Rev Econ Stud). If you prefer a textbook, there is Greene's Econometric Analysis (any of the recent editions) --- look up "fixed effects model binary choice" or "Chamberlain" in the index. In the seventh edition, the discussion runs from pg 721 through 725. The resulting estimator is usually called "fixed-effects logit" or "Chamberlain's estimator."
To be clear, you DO NOT just run a logistic regression with a bunch of dummy variables for households. If you are a Stata user, the xtlogit command with the fe option runs Chamberlain's fixed effects logit model. In R, I don't know how to do it. There are a couple of questions on this here at cross validated (one, two), and one over at stack overflow. The answers in those threads seem mostly to misunderstand what the Chamberlain estimator is, and I think the right conclusion from them is that Chamberlain's estimator is not currently implemented in R. (I would love to be corrected if I am wrong)
Looking over your question again, I wonder whether you really want a fixed effects estimator. As with any fixed effects estimator, you are not going to be able to directly estimate the effects of any household characteristic which does not change over time. Generally, schooling, occupation, household size are fixed or almost fixed in a short panel. If you include time dummies (and why wouldn't you?), then any characteristic which changes regularly with time within each household cannot be included. Age, for example, since, once you control for time, it is just birth date, a fixed characteristic of household head. Similarly, even the effect of the duration of the unemployment spell cannot be measured once you have time dummies and household dummies, unless some households have multiple unemployment spells. | Duration analysis of unemployment | First, the incidental parameter problem is pretty easy to solve in a discrete duration model. As long as you are willing to assume a logistic form for your model, you can eliminate the incidental par | Duration analysis of unemployment
First, the incidental parameter problem is pretty easy to solve in a discrete duration model. As long as you are willing to assume a logistic form for your model, you can eliminate the incidental parameters via a clever conditioning argument. The usual cite in economics is Chamberlain (1980, Rev Econ Stud). If you prefer a textbook, there is Greene's Econometric Analysis (any of the recent editions) --- look up "fixed effects model binary choice" or "Chamberlain" in the index. In the seventh edition, the discussion runs from pg 721 through 725. The resulting estimator is usually called "fixed-effects logit" or "Chamberlain's estimator."
To be clear, you DO NOT just run a logistic regression with a bunch of dummy variables for households. If you are a Stata user, the xtlogit command with the fe option runs Chamberlain's fixed effects logit model. In R, I don't know how to do it. There are a couple of questions on this here at cross validated (one, two), and one over at stack overflow. The answers in those threads seem mostly to misunderstand what the Chamberlain estimator is, and I think the right conclusion from them is that Chamberlain's estimator is not currently implemented in R. (I would love to be corrected if I am wrong)
Looking over your question again, I wonder whether you really want a fixed effects estimator. As with any fixed effects estimator, you are not going to be able to directly estimate the effects of any household characteristic which does not change over time. Generally, schooling, occupation, household size are fixed or almost fixed in a short panel. If you include time dummies (and why wouldn't you?), then any characteristic which changes regularly with time within each household cannot be included. Age, for example, since, once you control for time, it is just birth date, a fixed characteristic of household head. Similarly, even the effect of the duration of the unemployment spell cannot be measured once you have time dummies and household dummies, unless some households have multiple unemployment spells. | Duration analysis of unemployment
First, the incidental parameter problem is pretty easy to solve in a discrete duration model. As long as you are willing to assume a logistic form for your model, you can eliminate the incidental par |
50,090 | Minimum Sample Size Required to Estimate the Probability $P(X \le c)$ for a Constant $c$ (Given a Confidence Level & Confidence Interval) | The Dvoretzky-Kiefer-Wolfowitz inequality can be used here. The required sample size $b$ (I'm using $b$ to distinguish it from $n$ because you already set your population size as $n$ in the problem statement) is determined by $$b \geq \left( {1 \over 2 \epsilon^2 } \right) \mathrm{ln} \left( {2 \over \alpha} \right),$$ where $\epsilon$ is how close you want your empirical cdf to be to the true cdf and $1-\alpha$ is the confidence level.
So, for example, if you want to estimate $F(c)$ within $\epsilon = 0.01$ with 95% confidence, the formula gives a sample size of $$b \geq 18444.4,$$ or $b = 18445.$
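A quick check of that number in R:
eps   <- 0.01
alpha <- 0.05
ceiling(log(2 / alpha) / (2 * eps^2))   # 18445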
This will cover any and all $c,$ so it is possible you can do much better. Perhaps one of the commenters will fill in the details on a more efficient solution for a single value of $c.$ | Minimum Sample Size Required to Estimate the Probability $P(X \le c)$ for a Constant $c$ (Given a Co | The Dvoretzky-Kiefer-Wolfowitz inequality can be used here. The required sample size $b$ (I'm using $b$ to distinguish it from $n$ because you already set your population size as $n$ in the problem st | Minimum Sample Size Required to Estimate the Probability $P(X \le c)$ for a Constant $c$ (Given a Confidence Level & Confidence Interval)
The Dvoretzky-Kiefer-Wolfowitz inequality can be used here. The required sample size $b$ (I'm using $b$ to distinguish it from $n$ because you already set your population size as $n$ in the problem statement) is determined by $$b \geq \left( {1 \over 2 \epsilon^2 } \right) \mathrm{ln} \left( {2 \over \alpha} \right),$$ where $\epsilon$ is how close you want your empirical cdf to be to the true cdf and $1-\alpha$ is the confidence level.
So, for example, if you want to estimate $F(c)$ within $\epsilon = 0.01$ with 95% confidence, the formula gives a sample size of $$b \geq 18444.4,$$ or $b = 18445.$
This will cover any and all $c,$ so it is possible you can do much better. Perhaps one of the commenters will fill in the details on a more efficient solution for a single value of $c.$ | Minimum Sample Size Required to Estimate the Probability $P(X \le c)$ for a Constant $c$ (Given a Co
The Dvoretzky-Kiefer-Wolfowitz inequality can be used here. The required sample size $b$ (I'm using $b$ to distinguish it from $n$ because you already set your population size as $n$ in the problem st |
50,091 | Maximum likelihood estimate: Is this possible to solve? | The second problem (d), where the mean is equal to the variance, is discussed on p. 53 of Asymptotic Theory of Statistics and Probability by Anirban DasGupta (2008). The $\mathcal{N}(\theta, \theta)$ distribution, the normal distribution with an equal mean and variance, can be seen as a continuous analog of the Poisson distribution.
I will try to outline a path to the solutions.
The log-likelihood function of a $\mathcal{N}(\mu, \sigma^{2})$ is given by:
$$
\ell(\mu, \sigma^2)=-\frac{n}{2}\log(2\pi)-\frac{n}{2}\log(\sigma^2)-\frac{1}{2\sigma^{2}}\sum_{i=1}^{n}(x_{i}-\mu)^{2}.
$$
Setting $\mu=\sigma^{2}=\theta$ yields
$$
\ell(\theta)=-\frac{n}{2}\log(2\pi)-\frac{n}{2}\log(\theta)-\frac{1}{2\theta}\sum_{i=1}^{n}(x_{i}-\theta)^{2}.
$$
Expanding the term under the sum leads to
$$
\begin{align}
\ell(\theta) &=-\frac{n}{2}\log(2\pi)-\frac{n}{2}\log(\theta)-\frac{1}{2\theta}\left(\sum_{i=1}^{n}x_{i}^{2}-2\theta\sum_{i=1}^{n}x_{i}+n\theta^{2}\right) \\
&=-\frac{n}{2}\log(2\pi)-\frac{n}{2}\log(\theta)-\frac{s}{2\theta}+t-\frac{n\theta}{2} \\
\end{align}
$$
where $s=\sum_{i=1}^{n}x_{i}^{2}$ and $t=\sum_{i=1}^{n}x_{i}$. Taking the first derivative wrt $\theta$ gives
$$
S(\theta)=\frac{d}{d\theta}\ell(\theta)=\frac{s}{2\theta^{2}}-\frac{n}{2\theta}-\frac{n}{2}.
$$
So $s$ is the minimal sufficient statistic. The maximum likelihood estimator $\hat{\theta}$ can be found by setting $S(\theta)=0$ and solving for $\theta$.
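Carrying that last step through (this extra step is my own addition, not in the original answer): setting $S(\theta)=0$ and multiplying by $2\theta^{2}/n$ gives
$$
\theta^{2}+\theta-\frac{s}{n}=0,
$$
so the only admissible (positive) root, and hence the MLE, is
$$
\hat{\theta}=\frac{-1+\sqrt{1+4s/n}}{2}.
$$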
Applying the same procedure to $\mathcal{N}(\mu=\theta, \sigma^{2}=\theta^{2})$, the log-likelihood function is
$$
\ell(\theta)=-\frac{n}{2}\log(2\pi)-\frac{n}{2}\log(\theta^{2})-\frac{1}{2\theta^{2}}\sum_{i=1}^{n}(x_{i}-\theta)^{2}.
$$
This leads to the following score function (again, with $s=\sum_{i=1}^{n}x_{i}^{2}$ and $t=\sum_{i=1}^{n}x_{i}$):
$$
S(\theta)=\frac{s}{\theta^{3}}-\frac{t}{\theta^2}-\frac{n}{\theta}.
$$
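Again as a sketch of the next step (my own addition): setting $S(\theta)=0$ and multiplying by $\theta^{3}$ gives the quadratic $n\theta^{2}+t\theta-s=0$, whose positive root is
$$
\hat{\theta}=\frac{-t+\sqrt{t^{2}+4ns}}{2n}.
$$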
Fisher information
The Fisher information is defined as the negative second derivative of the log-likelihood function:
$$
I(\theta)=-\frac{d^{2}\,\ell(\theta)}{d\,\theta^{2}}=-\frac{d\,S(\theta)}{d\,\theta}.
$$
The observed Fisher information is $I(\hat{\theta})$, the Fisher information evaluated at the maximum likelihood estimate.
For the second question (d), we have:
$$
I(\theta)=-\frac{d}{d\,\theta}\left(\frac{s}{2\theta^{2}}-\frac{n}{2\theta}-\frac{n}{2} \right) = \frac{s}{\theta^{3}}-\frac{n}{2\theta^{2}}.
$$
And for the first question (c), we have:
$$
I(\theta)=-\frac{d}{d\,\theta}\left(\frac{s}{\theta^{3}}-\frac{t}{\theta^2}-\frac{n}{\theta}\right) = \frac{3s}{\theta^{4}}-\frac{2t}{\theta^{3}}-\frac{n}{\theta^{2}}.
$$
To get the observed Fisher information, plug in the maximum likelihood estimates.
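A small numerical sanity check of the $\mathcal{N}(\theta,\theta)$ case, using the closed-form root sketched above (my own code, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 4.0
x = rng.normal(theta_true, np.sqrt(theta_true), size=500)

n, s = len(x), np.sum(x ** 2)

# MLE: positive root of theta^2 + theta - s/n = 0
theta_hat = (-1 + np.sqrt(1 + 4 * s / n)) / 2

# Observed Fisher information I(theta_hat) = s/theta^3 - n/(2 theta^2)
obs_info = s / theta_hat ** 3 - n / (2 * theta_hat ** 2)

print(theta_hat, obs_info, 1 / np.sqrt(obs_info))  # estimate, information, approx. standard error
```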
Gamma distribution
It looks right to me but you don't need the sums in the expressions of the Fisher information. | Maximum likelihood estimate: Is this possible to solve? | The second problem (d), where the mean is equal to the variance is discussed on pp. 53 of Asymptotic Theory of Statistics and Probability by Anirban DasGupta (2008). The $\mathcal{N}(\theta, \theta)$ | Maximum likelihood estimate: Is this possible to solve?
The second problem (d), where the mean is equal to the variance is discussed on pp. 53 of Asymptotic Theory of Statistics and Probability by Anirban DasGupta (2008). The $\mathcal{N}(\theta, \theta)$ distribution, the normal distribution with an equal mean and variance can be seen as a continuous analog of the Poisson distribution.
I will try to outline a path to the solutions.
The log-likelihood function of a $\mathcal{N}(\mu, \sigma^{2})$ is given by:
$$
\ell(\mu, \sigma^2)=-\frac{n}{2}\log(2\pi)-\frac{n}{2}\log(\sigma^2)-\frac{1}{2\sigma^{2}}\sum_{i=1}^{n}(x_{i}-\mu)^{2}.
$$
Setting $\mu=\sigma^{2}=\theta$ yields
$$
\ell(\theta)=-\frac{n}{2}\log(2\pi)-\frac{n}{2}\log(\theta)-\frac{1}{2\theta}\sum_{i=1}^{n}(x_{i}-\theta)^{2}.
$$
Expanding the term under the sum leads to
$$
\begin{align}
\ell(\theta) &=-\frac{n}{2}\log(2\pi)-\frac{n}{2}\log(\theta)-\frac{1}{2\theta}\left(\sum_{i=1}^{n}x_{i}^{2}-2\theta\sum_{i=1}^{n}x_{i}+n\theta^{2}\right) \\
&=-\frac{n}{2}\log(2\pi)-\frac{n}{2}\log(\theta)-\frac{s}{2\theta}+t-\frac{n\theta}{2} \\
\end{align}
$$
where $s=\sum_{i=1}^{n}x_{i}^{2}$ and $t=\sum_{i=1}^{n}x_{i}$. Taking the first derivative wrt $\theta$ gives
$$
S(\theta)=\frac{d}{d\theta}\ell(\theta)=\frac{s}{2\theta^{2}}-\frac{n}{2\theta}-\frac{n}{2}.
$$
So $s$ is the minimal sufficient statistic. The maximum likelihood estimator $\hat{\theta}$ can be found by setting $S(\theta)=0$ and solving for $\theta$.
Applying the same procedure to $\mathcal{N}(\mu=\theta, \sigma^{2}=\theta^{2})$, the log-likelihood function is
$$
\ell(\theta)=-\frac{n}{2}\log(2\pi)-\frac{n}{2}\log(\theta^{2})-\frac{1}{2\theta^{2}}\sum_{i=1}^{n}(x_{i}-\theta)^{2}.
$$
This leads to the following score function (again, with $s=\sum_{i=1}^{n}x_{i}^{2}$ and $t=\sum_{i=1}^{n}x_{i}$):
$$
S(\theta)=\frac{s}{\theta^{3}}-\frac{t}{\theta^2}-\frac{n}{\theta}.
$$
Fisher information
The Fisher information is defined as the negative second derivative of the log-likelihood function:
$$
I(\theta)=-\frac{d^{2}\,\ell(\theta)}{d\,\theta^{2}}=-\frac{d\,S(\theta)}{d\,\theta}.
$$
The observed Fisher information is $I(\hat{\theta})$, the Fisher information evaluated at the maximum likelihood estimate.
For the second question (d), we have:
$$
I(\theta)=-\frac{d}{d\,\theta}\left(\frac{s}{2\theta^{2}}-\frac{n}{2\theta}-\frac{n}{2} \right) = \frac{s}{\theta^{3}}-\frac{n}{2\theta^{2}}.
$$
And for the first question (c), we have:
$$
I(\theta)=-\frac{d}{d\,\theta}\left(\frac{s}{\theta^{3}}-\frac{t}{\theta^2}-\frac{n}{\theta}\right) = \frac{3s}{\theta^{4}}-\frac{2t}{\theta^{3}}-\frac{n}{\theta^{2}}.
$$
To get the observed Fisher information, plug in the maximum likelihood estimates.
Gamma distribution
It looks right to me but you don't need the sums in the expressions of the Fisher information. | Maximum likelihood estimate: Is this possible to solve?
The second problem (d), where the mean is equal to the variance is discussed on pp. 53 of Asymptotic Theory of Statistics and Probability by Anirban DasGupta (2008). The $\mathcal{N}(\theta, \theta)$ |
50,092 | Using KNN for prediction, how should I normalize my data? | I think that depends on the data. If you know your feature is bounded, you could scale it to $[0,1]$. If it's binary I guess $\{0,1\}$ is a good choice, perhaps $\{-1,1\}$. Now, if it's unbounded, the standardization to $\text Z$-scores $\overline x = 0$, $\sigma=1$ is a reasonable choice. | Using KNN for prediction, how should I normalize my data? | I think that depends on the data. If you know your feature is bounded, you could scale it to $[0,1]$. If it's binary I guess $\{0,1\}$ is a good choice, perhaps $\{-1,1\}$. Now, if it's unbounded, the | Using KNN for prediction, how should I normalize my data?
I think that depends on the data. If you know your feature is bounded, you could scale it to $[0,1]$. If it's binary I guess $\{0,1\}$ is a good choice, perhaps $\{-1,1\}$. Now, if it's unbounded, the standardization to $\text Z$-scores $\overline x = 0$, $\sigma=1$ is a reasonable choice. | Using KNN for prediction, how should I normalize my data?
I think that depends on the data. If you know your feature is bounded, you could scale it to $[0,1]$. If it's binary I guess $\{0,1\}$ is a good choice, perhaps $\{-1,1\}$. Now, if it's unbounded, the |
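As a concrete illustration of the options mentioned in the answer above, a minimal sketch with scikit-learn (assuming a plain numeric feature matrix; the numbers are made up):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 250.0]])

X_minmax = MinMaxScaler().fit_transform(X)    # bounded features -> [0, 1]
X_zscore = StandardScaler().fit_transform(X)  # unbounded features -> mean 0, sd 1

print(X_minmax)
print(X_zscore)
```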
50,093 | Using KNN for prediction, how should I normalize my data? | Similar to K-means, KNN uses a distance measure. Therefore:
It is better to normalize features. If not, the features with larger values will be dominant.
If you have too many discrete variables and use dummy coding, distance measures would not work well.
Also, I think my answers for K-means would answer your question of what may happen if we do not normalize features.
Standardizing some features in K-Means | Using KNN for prediction, how should I normalize my data? | Similar to K-means, KNN uses distance measure. Therefore
It is better to normalize features. If not, the features with larger values will be dominant.
If you have too many discrete variables and use | Using KNN for prediction, how should I normalize my data?
Similar to K-means, KNN uses a distance measure. Therefore:
It is better to normalize features. If not, the features with larger values will be dominant.
If you have too many discrete variables and use dummy coding, distance measures would not work well.
Also, I think my answers for K-means would answer your question of what may happen if we do not normalize features.
Standardizing some features in K-Means | Using KNN for prediction, how should I normalize my data?
Similar to K-means, KNN uses distance measure. Therefore
It is better to normalize features. If not, the features with larger values will be dominant.
If you have too many discrete variables and use |
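To see why the larger-valued feature dominates an unscaled distance, here is a toy sketch (my own example, with made-up income/age numbers):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# income in dollars and age in years are on wildly different scales
X = np.array([[30_000.0, 25.0],
              [31_000.0, 60.0],
              [33_000.0, 30.0],
              [29_500.0, 58.0]])

def distances_to_first(Z):
    return np.linalg.norm(Z[1:] - Z[0], axis=1)

print(distances_to_first(X))                                  # driven almost entirely by income
print(distances_to_first(StandardScaler().fit_transform(X)))  # both features now contribute
```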
50,094 | What's the probability that as I roll dice I'll see a sum of $7$ on them before I see a sum of $8$? | Question 1
Why does the author say
We could assume that the sample space $S$ contains all sequences of outcomes that terminate as
soon as either the sum $T = 7$ or the sum $T = 8$ is obtained. Then we could find the
sum of the probabilities of all the sequences that terminate when the value $T = 7$ is
obtained.
Answer
Sample space $S$ has $m \rightarrow \infty$ sequences of length $n \rightarrow \infty$ that end in either $7$ or $8$. Out of these sequences we're interested in summing up the probabilities of all the series that end in a $7$. The probability of a sequence of precisely $n$ throws ending in a $7$ is:
$$
P_n(7) = \left(\frac{25}{36}\right)^{n-1} \cdot \frac{6}{36}
$$
However, since $n$ can take any value up to infinity, the overall probability of ending a sequence of any length in a $7$ is the sum of the prob. of ending a seq. after one throw plus the prob. of ending a sequence after two throws, and so on. This is the geometric series:
$$
\Phi_7 = P_1(7) + P_2(7) + P_3(7) + ... + P_n(7)
$$
which, as $n \rightarrow \infty$, sums up to (basic geometric sum formula)
$$
\Phi_7 = \lim_{n \rightarrow \infty} \frac{\frac{6}{36}\left(1-\left(\frac{25}{36}\right)^n\right)}{1-\frac{25}{36}} = \lim_{n \rightarrow \infty} \frac{6}{11}\left(1-\left(\frac{25}{36}\right)^n\right) = \frac{6}{11}
$$
This is the probability of ending a sequence of throws in a $7$ without ever hitting $8$. It's the answer you're looking for using the first, "more complicated" method.
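If you want to convince yourself numerically, a quick Monte Carlo sketch (my own code, not part of the original answer) lands on the same $6/11 \approx 0.545$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_games, wins = 100_000, 0
for _ in range(n_games):
    while True:
        total = rng.integers(1, 7) + rng.integers(1, 7)  # sum of two fair dice
        if total == 7:
            wins += 1
            break
        if total == 8:
            break

print(wins / n_games, 6 / 11)
```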
Question 2
How can we go from lengthy sequences of outcomes that terminate as
soon as either the sum $T = 7$ or the sum $T = 8$ is obtained to just the outcome of the experiment for which either $T = 7$ or $T = 8$ ?
Answer
This will become clear if we rephrase the first method a little bit. Sample space $S$ has $m \rightarrow \infty$ sequences of length $n \rightarrow \infty$ which end in either a $7$ or an $8$. The probability of you running a sequence of length $n$ which ends in $7$ is the probability
$$
P_n(7)|(P_n(7) \cup P_n(8)) = \frac{P_n(7) \cap (P_n(7)\cup P_n(8))}{P_n(7)\cup P_n(8)} = \frac{P_n(7)}{P_n(7) \cup P_n(8)}
$$
$$
\frac{P_n(7)}{P_n(7) \cup P_n(8)} = \frac{\left(\frac{25}{36}\right)^{n-1} \cdot \frac{6}{36}}{\left(\frac{25}{36}\right)^{n-1} \cdot \frac{6}{36} + \left(\frac{25}{36}\right)^{n-1} \cdot \frac{5}{36}} = \frac{6}{11}
$$
This is a lot of LaTeX for not a very impressive statement but it is useful because we can now use it to prove by induction the jump from a sequence of length $n$ to a sequence of length $1$. If we run the same formula for $n-1$ we get
$$
P_{n-1}(7)|(P_{n-1}(7) \cup P_{n-1}(8)) = \frac{P_{n-1}(7)}{P_{n-1}(7) \cup P_{n-1}(8)}
$$
where
$$
\frac{P_{n-1}(7)}{P_{n-1}(7) \cup P_{n-1}(8)} = \frac{\left(\frac{25}{36}\right)^{n-2} \cdot \frac{6}{36}}{\left(\frac{25}{36}\right)^{n-2} \cdot \frac{6}{36} + \left(\frac{25}{36}\right)^{n-2} \cdot \frac{5}{36}} = \frac{6}{11}
$$
But this means that
$$
\frac{P_n(7)}{P_n(7) \cup P_n(8)} = \frac{P_{n-1}(7)}{P_{n-1}(7) \cup P_{n-1}(8)}
$$
and it follows, by induction, that
$$
\frac{P_n(7)}{P_n(7) \cup P_n(8)} = \frac{P_{1}(7)}{P_{1}(7) \cup P_{1}(8)}
$$
Therefore, whatever value $n$ takes, the probability of a sequence of that length ending in $7$ given that it ends in either $7$ or $8$ is equal to the probability of a sequence of length $1$ ending in $7$ given it is a part of $S$. | What's the probability that as I roll dice I'll see a sum of $7$ on them before I see a sum of $8$? | Question 1
Why does the author say
We could assume that the sample space $S$ contains all sequences of outcomes that terminate as
soon as either the sum $T = 7$ or the sum $T = 8$ is obtained. Th | What's the probability that as I roll dice I'll see a sum of $7$ on them before I see a sum of $8$?
Question 1
Why does the author say
We could assume that the sample space $S$ contains all sequences of outcomes that terminate as
soon as either the sum $T = 7$ or the sum $T = 8$ is obtained. Then we could find the
sum of the probabilities of all the sequences that terminate when the value $T = 7$ is
obtained.
Answer
Sample space $S$ has $m \rightarrow \infty$ sequences of length $n \rightarrow \infty$ that end in either $7$ or $8$. Out of these sequences we're interested in summing up the probabilities of all the series that end in a $7$. The probability of a sequence of precisely $n$ throws ending in a $7$ is:
$$
P_n(7) = \left(\frac{25}{36}\right)^{n-1} \cdot \frac{6}{36}
$$
However, since $n$ can take any value up to infinity, the overall probability of ending a sequence of any length in a $7$ is the sum of the prob. of ending a seq. after one throw plus the prob. of ending a sequence after two throws, and so on. This is the geometric series:
$$
\Phi_7 = P_1(7) + P_2(7) + P_3(7) + ... + P_n(7)
$$
which, as $n \rightarrow \infty$, sums up to (basic geometric sum formula)
$$
\Phi_7 = \lim_{n \rightarrow \infty} \frac{\frac{6}{36}\left(1-\left(\frac{25}{36}\right)^n\right)}{1-\frac{25}{36}} = \lim_{n \rightarrow \infty} \frac{6}{11}\left(1-\left(\frac{25}{36}\right)^n\right) = \frac{6}{11}
$$
This is the probability of ending a sequence of throws in a $7$ without ever hitting $8$. It's the answer you're looking for using the first, "more complicated" method.
Question 2
How can we go from lengthy sequences of outcomes that terminate as
soon as either the sum $T = 7$ or the sum $T = 8$ is obtained to just the outcome of the experiment for which either $T = 7$ or $T = 8$ ?
Answer
This will become clear if we rephrase the first method a little bit. Sample space $S$ has $m \rightarrow \infty$ sequences of length $n \rightarrow \infty$ which end in either a $7$ or an $8$. The probability of you running a sequence of length $n$ which ends in $7$ is the probability
$$
P_n(7)|(P_n(7) \cup P_n(8)) = \frac{P_n(7) \cap (P_n(7)\cup P_n(8))}{P_n(7)\cup P_n(8)} = \frac{P_n(7)}{P_n(7) \cup P_n(8)}
$$
$$
\frac{P_n(7)}{P_n(7) \cup P_n(8)} = \frac{\left(\frac{25}{36}\right)^{n-1} \cdot \frac{6}{36}}{\left(\frac{25}{36}\right)^{n-1} \cdot \frac{6}{36} + \left(\frac{25}{36}\right)^{n-1} \cdot \frac{5}{36}} = \frac{6}{11}
$$
This is a lot of LaTeX for not a very impressive statement but it is useful because we can now use it to prove by induction the jump from a sequence of length $n$ to a sequence of length $1$. If we run the same formula for $n-1$ we get
$$
P_{n-1}(7)|(P_{n-1}(7) \cup P_{n-1}(8)) = \frac{P_{n-1}(7)}{P_{n-1}(7) \cup P_{n-1}(8)}
$$
where
$$
\frac{P_{n-1}(7)}{P_{n-1}(7) \cup P_{n-1}(8)} = \frac{\left(\frac{25}{36}\right)^{n-2} \cdot \frac{6}{36}}{\left(\frac{25}{36}\right)^{n-2} \cdot \frac{6}{36} + \left(\frac{25}{36}\right)^{n-2} \cdot \frac{5}{36}} = \frac{6}{11}
$$
But this means that
$$
\frac{P_n(7)}{P_n(7) \cup P_n(8)} = \frac{P_{n-1}(7)}{P_{n-1}(7) \cup P_{n-1}(8)}
$$
and it follows, by induction, that
$$
\frac{P_n(7)}{P_n(7) \cup P_n(8)} = \frac{P_{1}(7)}{P_{1}(7) \cup P_{1}(8)}
$$
Therefore, whatever value $n$ takes, the probability of a sequence of that length ending in $7$ given that it ends in either $7$ or $8$ is equal to the probability of a sequence of length $1$ ending in $7$ given it is a part of $S$. | What's the probability that as I roll dice I'll see a sum of $7$ on them before I see a sum of $8$?
Question 1
Why does the author say
We could assume that the sample space $S$ contains all sequences of outcomes that terminate as
soon as either the sum $T = 7$ or the sum $T = 8$ is obtained. Th |
50,095 | What's the probability that as I roll dice I'll see a sum of $7$ on them before I see a sum of $8$? | Start from a definition of what the probability of an event $A$ is, e.g.
$$
P(A) = \frac{\text{number of ways $A$ can happen}}{\text{total number of things that can happen}}
$$ For example, the probability that you rolled a two, given that you know you rolled an even number, is $1/3$, since there is just one way that a two can come up, and 3 ways that an even number could have come up.
So, using this definition, $P(\text{we roll a 7 before we roll an 8})$ is: out of all the sequences of two-dice rolls that end in either 7 or 8, the fraction in which a 7 occurs before an 8.
The author notes that such a strategy for determining the probability is a bit too complex and continues to describe a simpler strategy
This can be justified by noting that
$$P(\text{you roll 7 before 8}| \text{you rolled either a 7 or an 8}) = P(\text{you roll 7 before 8})$$ that is, the event of rolling a 7 before an 8 is independent of the event of rolling either a 7 or an 8 (i.e. if I told you the result of one throw of the dice was either 7 or 8, it would not impact the probability that a 7 happens before an 8) | What's the probability that as I roll dice I'll see a sum of $7$ on them before I see a sum of $8$? | Is a definition of what the probability of an event $A$ is e.g.
$$
P(A) = \frac{\text{number of ways $A$ can happen}}{\text{total number of things that can happen}}
$$ Like when you try to figure o | What's the probability that as I roll dice I'll see a sum of $7$ on them before I see a sum of $8$?
Start from a definition of what the probability of an event $A$ is, e.g.
$$
P(A) = \frac{\text{number of ways $A$ can happen}}{\text{total number of things that can happen}}
$$ For example, the probability that you rolled a two, given that you know you rolled an even number, is $1/3$, since there is just one way that a two can come up, and 3 ways that an even number could have come up.
So, using this definition, $P(\text{we roll a 7 before we roll an 8})$ is: out of all the sequences of two-dice rolls that end in either 7 or 8, the fraction in which a 7 occurs before an 8.
The author notes that such a strategy for determining the probability is a bit too complex and continues to describe a simpler strategy
This can be justified by noting that
$$P(\text{you roll 7 before 8}| \text{you rolled either a 7 or an 8}) = P(\text{you roll 7 before 8})$$ that is, the event of rolling a 7 before an 8 is independent of the event of rolling either a 7 or an 8 (i.e. if I told you the result of one throw of the dice was either 7 or 8, it would not impact the probability that a 7 happens before an 8) | What's the probability that as I roll dice I'll see a sum of $7$ on them before I see a sum of $8$?
Is a definition of what the probability of an event $A$ is e.g.
$$
P(A) = \frac{\text{number of ways $A$ can happen}}{\text{total number of things that can happen}}
$$ Like when you try to figure o |
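The single-roll argument in the answer above can be checked exactly with a couple of lines (my own sketch):

```python
from fractions import Fraction

p7 = Fraction(6, 36)  # ways to roll a sum of 7 with two dice
p8 = Fraction(5, 36)  # ways to roll a sum of 8

print(p7 / (p7 + p8))  # Fraction(6, 11): P(7 before 8)
```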
50,096 | How can I combine data from 2 separate experiments? | Effects should be different across experiments, and so should variances. That's the nature of sampling. What you have is just different samples being different. There's no way to know which estimate of variance is closer to true value, or even guess at it with the information you've given and equal N's in the samples. So, while you'd like it to be the smaller one, that may not be correct. More than likely the average variance is best.
Your general tactic here of searching for an effect may eventually bear fruit. You may combine all of the subjects into one experiment, run a few more, drop an outlier here or there, look for various analysis techniques, and voila, a significant effect. Maybe you won't do all of that, but I'm trying to point out that you're thinking about it wrong. Use the data you have to make your best determination about the truth of the matter, not to show an effect.
An important thing to keep in mind is that an unstated assumption about any statistical test is that you're performing it because you want to know the answer to the test, not because you've previously done other tests and failed to find what you would like to find. So now, because you've already done the test, the rate of Type I error is no longer what you set it to be, alpha. You're increasing the probability of finding an effect whether there is one or not.
That said, you could do something that's not a test. You could construct a confidence interval of the effect through a mega-analysis (just combine all of the data) and report that as a higher quality estimate of the effect than either experiment had alone. You will have to concede that what you've done is post hoc and describe the tests that you did do already. But this is probably the best way to report what you've done so far. | How can I combine data from 2 separate experiments? | Effects should be different across experiments, and so should variances. That's the nature of sampling. What you have is just different samples being different. There's no way to know which estimate o | How can I combine data from 2 separate experiments?
Effects should be different across experiments, and so should variances. That's the nature of sampling. What you have is just different samples being different. There's no way to know which estimate of variance is closer to true value, or even guess at it with the information you've given and equal N's in the samples. So, while you'd like it to be the smaller one, that may not be correct. More than likely the average variance is best.
Your general tactic here of searching for an effect may eventually bear fruit. You may combine all of the subjects into one experiment, run a few more, drop an outlier here or there, look for various analysis techniques, and voila, a significant effect. Maybe you won't do all of that, but I'm trying to point out that you're thinking about it wrong. Use the data you have to make your best determination about the truth of the matter, not to show an effect.
An important thing to keep in mind is that an unstated assumption about any statistical test is that you're performing it because you want to know the answer to the test, not because you've previously done other tests and failed to find what you would like to find. So now, because you've already done the test, the rate of Type I error is no longer what you set it to be, alpha. You're increasing the probability of finding an effect whether there is one or not.
That said, you could do something that's not a test. You could construct a confidence interval of the effect through a mega-analysis (just combine all of the data) and report that as a higher quality estimate of the effect than either experiment had alone. You will have to concede that what you've done is post hoc and describe the tests that you did do already. But this is probably the best way to report what you've done so far. | How can I combine data from 2 separate experiments?
Effects should be different across experiments, and so should variances. That's the nature of sampling. What you have is just different samples being different. There's no way to know which estimate o |
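A minimal sketch of the mega-analysis idea from the answer above, assuming you have the raw scores for both conditions from both experiments (the data here are simulated placeholders, not real results):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# placeholder scores: condition A and condition B, pooled across the two experiments
a = np.concatenate([rng.normal(0.3, 1, 20), rng.normal(0.3, 1, 20)])
b = np.concatenate([rng.normal(0.0, 1, 20), rng.normal(0.0, 1, 20)])

n1, n2 = len(a), len(b)
diff = a.mean() - b.mean()
sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)  # pooled variance
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
tcrit = stats.t.ppf(0.975, n1 + n2 - 2)

print(diff, (diff - tcrit * se, diff + tcrit * se))  # pooled effect and its 95% CI
```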
50,097 | How to rate successive predictions of the outcome of an event which are made while it is taking place? | What I am more interested in learning about are approaches that are specifically tailored to my scenario, i.e. which take into account the fact that these predictions are all made on the same outcome and each prediction is made with successively more information.
That's the key: for your predictor to be acceptable, it should get better the closer we get to the end of the match, because it uses more and more information. To apply a modified Brier-score logic, let $o_F$ be a binary $\{0,1\}$ representing the final outcome of the game (say "$1$" = player A wins), $I_k$ be the set containing information available up to and including events as of stage $k$ of the game, and let $f_k(o_F=1\mid I_k)$ be the predicted probability that player A will win, given this information. Then we can define a "cumulative" Brier-like score as
$$BS_k = \frac {1}{k+1}\sum_{i=0}^k \Big(f_i(o_F=1\mid I_i) - o_F\Big)^2 \qquad k=0,...,N$$
(I have included ${k=0}$ to cover the prediction before the game starts). Then, a reasonable demand for a good predictor is that the sequence $\{BS_k\}$ be decreasing. Comparing two competing predictors would amount to comparing their rates of decrease.
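A minimal sketch of that running score (my own code; the function name and the toy predictions are made up):

```python
import numpy as np

def cumulative_brier(preds, outcome):
    # preds[i] = predicted P(player A wins) using information up to stage i;
    # outcome = 1 if player A actually won, 0 otherwise
    preds = np.asarray(preds, dtype=float)
    return np.cumsum((preds - outcome) ** 2) / np.arange(1, len(preds) + 1)

# a predictor that converges on the right answer as the match unfolds
print(cumulative_brier([0.5, 0.6, 0.7, 0.9, 0.99], outcome=1))  # a decreasing sequence
```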
You could easily try also a "moving window" expression, where some past information gets discarded if it becomes "old enough" - it depends on what information you will deem relevant in predicting the outcome, and will eventually include as input to your predictor.
Of course, if your predictors are human beings, you don't need to find out their prediction functions - you will just record their predictions and compare them. | How to rate successive predictions of the outcome of an event which are made while it is taking plac | What I am more interested in learning about are approaches that are specifically tailored to my scenario, i.e. which take into account the fact that these predictions are all made on the same outcome | How to rate successive predictions of the outcome of an event which are made while it is taking place?
What I am more interested in learning about are approaches that are specifically tailored to my scenario, i.e. which take into account the fact that these predictions are all made on the same outcome and each prediction is made with successively more information.
That's the key: for your predictor to be acceptable, it should get better the closer we get to the end of the match, because it uses more and more information. To apply a modified Brier-score logic, let $o_F$ be a binary $\{0,1\}$ representing the final outcome of the game (say "$1$" = player A wins), $I_k$ be the set containing information available up to and including events as of stage $k$ of the game, and let $f_k(o_F=1\mid I_k)$ be the predicted probability that player A will win, given this information. Then we can define a "cumulative" Brier-like score as
$$BS_k = \frac {1}{k+1}\sum_{i=0}^k \Big(f_i(o_F=1\mid I_i) - o_F\Big)^2 \qquad k=0,...,N$$
(I have included ${k=0}$ to cover the prediction before the game starts). Then, a reasonable demand for a good predictor is that the sequence $\{BS_k\}$ be decreasing. Comparing two competing predictors would amount to comparing their rates of decrease.
You could easily try also a "moving window" expression, where some past information gets discarded if it becomes "old enough" - it depends on what information you will deem relevant in predicting the outcome, and will eventually include as input to your predictor.
Of course, if your predictors are human beings, you don't need to find out their prediction functions - you will just record their predictions and compare them. | How to rate successive predictions of the outcome of an event which are made while it is taking plac
What I am more interested in learning about are approaches that are specifically tailored to my scenario, i.e. which take into account the fact that these predictions are all made on the same outcome |
50,098 | Back-propagation in Neural Nets with >2 hidden layers | This is just a simple computation of the partial derivative and observation, that the derivative on the layer $i$ (from top) can be fully computed using partial derivative for weights in layer $i-1$. This applies to any number of layers, but this leads to so called "vanishing gradient phenomenon" which is a reason for not using multiple hidden layers in general (at least with basic architecture and basic training). To overcome this issue, deep learning has been proposed in recent years (like for example Deep Convolutional Networks, Deep Belief Networks, Deep Autoencoders, Deep Boltzmann Machines etc.) | Back-propagation in Neural Nets with >2 hidden layers | This is just a simple computation of the partial derivative and observation, that the derivative on the layer $i$ (from top) can be fully computed using partial derivative for weights in layer $i-1$. | Back-propagation in Neural Nets with >2 hidden layers
This is just a simple computation of the partial derivatives, together with the observation that the derivative at layer $i$ (counting from the top) can be fully computed using the partial derivatives for the weights in layer $i-1$. This applies to any number of layers, but it leads to the so-called "vanishing gradient" phenomenon, which is a reason for not using many hidden layers in general (at least with a basic architecture and basic training). To overcome this issue, deep learning has been proposed in recent years (for example Deep Convolutional Networks, Deep Belief Networks, Deep Autoencoders, Deep Boltzmann Machines, etc.) | Back-propagation in Neural Nets with >2 hidden layers
This is just a simple computation of the partial derivative and observation, that the derivative on the layer $i$ (from top) can be fully computed using partial derivative for weights in layer $i-1$. |
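A toy illustration of the vanishing-gradient effect mentioned in the answer above, using the standard backprop recursion with sigmoid units (my own sketch; the layer width and weight scale are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, width = 10, 50
grad = np.ones(width)  # stands in for the error signal at the output layer

for layer in range(n_layers):
    z = rng.normal(size=width)
    W = rng.normal(scale=0.1, size=(width, width))
    sig = 1 / (1 + np.exp(-z))
    grad = (W.T @ grad) * sig * (1 - sig)   # delta_l = (W^T delta_{l+1}) * sigma'(z_l)
    print(layer + 1, np.linalg.norm(grad))  # the norm shrinks roughly geometrically
```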
50,099 | How do you check the linearity of a multiple regression | Indeed, the most common and easiest way would be to use a scatter plot of the residuals versus the predicted values; a horizontal band of points indicates a linear relationship. | How do you check the linearity of a multiple regression | Indeed, the most common and easiest way would be to use a scatter plot of the residuals versus the predicted values; a horizontal band of points indicates a linear relationship. | How do you check the linearity of a multiple regression
Indeed, the most common and easiest way would be to use a scatter plot of the residuals versus the predicted values; a horizontal band of points indicates a linear relationship. | How do you check the linearity of a multiple regression
Indeed, the most common and easiest way would be to use a scatter plot of the residuals versus the predicted values; a horizontal band of points indicates a linear relationship.
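For reference, a minimal sketch of that residuals-versus-fitted plot in Python (simulated data, my own example):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 1 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=200)

model = LinearRegression().fit(X, y)
fitted = model.predict(X)
residuals = y - fitted

plt.scatter(fitted, residuals, s=10)
plt.axhline(0, color="grey")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.show()  # a patternless horizontal band is consistent with linearity
```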
50,100 | Cross validation for lasso logistic regression | The short answer is, it's up to you, depending on your interest. In the past I have used AIC for Lasso.
However, it sounds like you are using this model for prediction, and thus using the misclassification rate is a good idea. However, misclassification can be categorized in many ways. Are you interested in the absolute % classified correctly? Or maybe you just care about, of those classified as 1 (or yes, etc.), how many of those were classified correctly? I would do some reading into Positive Predictive values, Negative predictive values, etc.
https://en.wikipedia.org/wiki/Positive_and_negative_predictive_values
In addition, when doing your cross-validation, there are a plethora of criteria you could use to validate your model. A short list of other common criteria:
$R^2$
$MSE$
Mallows' $C_p$
$AIC$
Look them up and see which is most relevant to you! | Cross validation for lasso logistic regression | The short answer is, it's up to you, depending on your interest. In the past I have used AIC for Lasso.
However it sounds like you are using this model for prediction, and thus using the mis-classifica | Cross validation for lasso logistic regression
The short answer is, it's up to you, depending on your interest. In the past I have used AIC for Lasso.
However, it sounds like you are using this model for prediction, and thus using the misclassification rate is a good idea. However, misclassification can be categorized in many ways. Are you interested in the absolute % classified correctly? Or maybe you just care about, of those classified as 1 (or yes, etc.), how many of those were classified correctly? I would do some reading into Positive Predictive values, Negative predictive values, etc.
https://en.wikipedia.org/wiki/Positive_and_negative_predictive_values
In addition, when doing your cross-validation, there are a plethora of criteria you could use to validate your model. A short list of other common criteria:
$R^2$
$MSE$
Mallows' $C_p$
$AIC$
Look them up and see which is most relevant to you! | Cross validation for lasso logistic regression
The short answer is, it's up to you, depending on your interest. In the past I have used AIC for Lasso.
However it sounds like you are using this model for prediction, and thus using the mis-classifica |
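A minimal sketch of cross-validated lasso (L1) logistic regression in scikit-learn, where the `scoring` argument is the place to plug in whichever criterion you settle on (synthetic data, my own example):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# L1-penalised logistic regression, choosing the penalty strength by 10-fold CV;
# swap scoring="accuracy" for "neg_log_loss", "roc_auc", etc. as appropriate
model = LogisticRegressionCV(
    penalty="l1", solver="liblinear", Cs=20, cv=10, scoring="accuracy"
).fit(X, y)

print(model.C_, np.sum(model.coef_ != 0))  # chosen penalty strength and surviving features
```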