idx | question | answer
---|---|---
54,801 | How can I generate a time series with autocorrelation at lags other than 1? | The problem is solved once you can solve it for lag 1, because you can take $k$ such independent (or at least uncorrelated) time series $X_t^{(1)},$ $X_t^{(2)},$ through $X_t^{(k)}$ and interleave them to form a time series
$$X_t = X_1^{(1)}, X_1^{(2)}, \ldots, X_1^{(k)},\ X_2^{(1)}, \ldots, X_2^{(k)},\ X_3^{(1)}, \ldots$$
with zero autocorrelation at all but lags that are multiples of $k.$ Let's call this the "$k$-interleaved series."
Consider a portion of a time series process $X_t$, which we may index at times $t=1,2,\ldots,n.$ For the autocorrelations to be meaningful, we must assume the entire process is at least weakly second order stationary. The question requires the covariance matrix of the random vector $\mathcal{X}=(X_1,X_2,\ldots,X_n)$ to have the form
$$\mathbb{P}_n(\rho) = \pmatrix{1 & \rho & \color{gray}0 & \color{gray}0 & \cdots & \color{gray}0 &\color{gray}0\\
\rho & 1 & \rho & \color{gray}0 & \cdots & \color{gray}0& \color{gray}0\\
\color{gray}0 & \rho & 1 & \rho & \cdots & \color{gray}0& \color{gray}0\\
\vdots & \vdots & \vdots & \ddots & \cdots & \vdots& \vdots\\
\color{gray}0 & \color{gray}0 & \color{gray}0 & \color{gray}0 & \cdots &\rho & \color{gray}0\\
\color{gray}0 & \color{gray}0 & \color{gray}0 & \color{gray}0 & \cdots &1& \rho\\
\color{gray}0 & \color{gray}0 & \color{gray}0 & \color{gray}0 & \cdots & \rho & 1
}$$
where $\rho$ is the common lag-1 autocovariance.
Regardless of what value $\rho$ might have, this matrix has the eigenvectors
$$q_{j;n} = \left(\sin\left(\frac{\pi i j}{n+1}\right)\right)_{i=1,2,\ldots,n}$$
with corresponding eigenvalues $\lambda_{j;n}(\rho)$ (which may easily be computed from the eigenvalue equation: $\lambda_{j;n}$ is the common ratio of the components of $\mathbb{P}_n(\rho)q_{j;n}$ and the components of $q_{j;n}$); explicitly, $\lambda_{j;n}(\rho) = 1 + 2\rho\cos\left(\frac{\pi j}{n+1}\right).$ Provided $|\rho|\le 1/2,$ all those eigenvalues will be non-negative, which implies there exists a random vector $\mathcal X$ with $\mathbb{P}_n(\rho)$ as its covariance matrix. The orthogonal matrix $\mathbb{Q}_n = (q_{ij}) = (q_1;q_2;\cdots;q_n)$ whose columns comprise the normalized eigenvectors diagonalizes $\mathbb{P}_n(\rho).$
Consequently, when $\mathcal{Y}_n(\rho) = (Y_{j;n}(\rho))$ is any
vector of $n$ independent zero-mean variables with variances
$$\operatorname{Var}(Y_{j;n}) = \lambda_{j;n}(\rho),$$ the variable
$$\mathcal{X}_n(\rho) = \mathbb{Q}_n \mathcal{Y}_n(\rho)$$
has all the properties desired of $X_t.$
This demonstration leads to a simple and fairly efficient algorithm to generate realizations of the process $X_t.$ (It requires $O(n^2)$ computation, but likely could be sped up to $O(n\log(n))$ using the FFT.) On this workstation it requires less than 0.02 seconds to generate one realization with $n=2000.$
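Before the optimized implementation given below, here is a direct (if slower, roughly $O(n^3)$) rendering of this construction that forms $\mathbb{P}_n(\rho)$ explicitly and uses eigen(); it is only a sketch to clarify what the algorithm computes, not the code used for the figures:
n <- 200; rho <- 0.5
P <- diag(n)
P[abs(row(P) - col(P)) == 1] <- rho # tridiagonal correlation matrix P_n(rho)
ev <- eigen(P, symmetric = TRUE) # columns of ev$vectors form Q_n
stopifnot(min(ev$values) >= 0) # guaranteed when |rho| <= 1/2
y <- rnorm(n, sd = sqrt(ev$values)) # independent components with Var = lambda_j
x <- as.numeric(ev$vectors %*% y) # X = Q Y has covariance P_n(rho)
acf(x)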
Here are the empirical acfs of eight simulated series with $n=2000$ and lag-1 coefficient $\rho=1/2:$
All the lags but the first differ only randomly from zero, while the lag-1 coefficient differs only randomly from $\rho.$
When these $8$ series are interleaved, they form one series of length $8\times 2000=16000$ and, by design, the only nonzero autocorrelation coefficient occurs at lag $8,$ where it equals $\rho:$
What would such a series look like? Here are the first $128$ values of the interleaved series from the simulation:
There is a hint of a seasonal cycle of period 8. However, as we know, this series has no seasonal fluctuations. For instance, the stl function in R finds only a tiny seasonal component of variance $0.0017$ compared to residuals of variance $0.82,$ almost 500 times larger.
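The stl comparison can be reproduced after running the code below; a sketch (the seasonal and remainder variances will vary slightly from run to run):
z <- ts(c(t(x)), frequency = n.sim) # the n.sim-interleaved series
dec <- stl(z, s.window = "periodic")
apply(dec$time.series, 2, var) # seasonal variance is tiny relative to the remainder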
This is the R code used to produce the simulated data and graphics.
n <- 2000 # Time series length (2 or longer)
rho <- 0.5 # Lag-1 autocorrelation (safe between -0.5 and +0.5).
n.sim <- 8 # Number of realizations to create
rf <- rnorm # Generates zero-mean random values
system.time({
#
# Compute eigenvector components.
#
y <- sin(seq(0, 2*pi, length.out=2*n+3))[-(2*n+3)]
y <- y / sqrt(sum(y^2)/2) # Normalized
#
# Compute eigenvalues.
#
f <- function(i, y, rho) {
n1 <- length(y) / 2
j <- 0:2 * i %% (2*n1) + 1
sum(y[j] * c(rho, 1, rho)) / y[j[2]]
}
lambda <- sapply(1:n, f, y=y, rho=rho)
if (min(lambda) < 0) warning("Correlation ", rho, " for n = ", n, " is impossible.")
#
# Generate random values.
#
e <- matrix(rf(n * n.sim), n) * sqrt(lambda) # In columns
#
# Take linear combinations of them.
#
x <- t(sapply(seq_len(n), function(i) {
j <- (1:n * i) %% (2*(n+1)) + 1
colSums(e * y[j])
}))
})
#
# The interleaved ACF.
#
acf(c(t(x)), lwd=2, main=bquote(paste("ACF of the ", .(n.sim), "-interleaved series")))
#
# Plot the ACFs.
#
par(mfrow=c(2,4))
invisible(apply(x, 2, acf, lag.max=10, main="", bty="n", lwd=2))
par(mfrow=c(1,1))
#
# Plot part of one simulated series.
#
plot(ts(c(t(x))[1:128]), ylab="Simulated Values",
main=bquote(paste("Prefix of the ", .(n.sim), "-interleaved series for ", rho==.(rho)))) | How can I generate a time series with autocorrelation at lags other than 1? | The problem is solved once you can solve it for lag 1, because you can take $k$ such independent (or at least uncorrelated) time series $X_t^{(1)},$ $X_t^{(2)},$ through $X_t^{(k)}$ and interleave the | How can I generate a time series with autocorrelation at lags other than 1?
54,802 | How can I generate a time series with autocorrelation at lags other than 1? | Edit: It looks like I misinterpreted this question. The question appears to be asking for a method to generate a random vector with non-zero auto-correlation at one lag. My answer gives a method to generate a random vector with non-zero auto-regression at only one lag, which is a different thing (and gives non-zero auto-correlation at multiple lags). I am going to leave the answer here because I think it may still be useful to some users interested in the content anyway.
Use the rGARMA function in the ts.extend package
You can generate random vectors from any stationary Gaussian ARMA model using the ts.extend package. This package generates random vectors directly from the multivariate normal distribution using the computed autocorrelation matrix for the random vector, so it gives random vectors from the exact distribution and does not require "burn-in" iterations. Here is an example from an $\text{AR}(2)$ model with non-zero autoregression only at lag $\ell = 2$.
#Load the package
library(ts.extend)
#Set parameters
AR <- c(0, 0.4)
m <- 100
#Generate n = 12 random vectors from this model
set.seed(1)
SERIES <- rGARMA(n = 12, m = m, ar = AR)
#Plot the series using ggplot2 graphics
library(ggplot2)
plot(SERIES)
54,803 | Bayesian methods are about averaging over uncertainty rather than optimization. Explain? | When doing model selection, each model $\mathfrak M$ is given an evidence $\mathfrak e(\mathfrak M)$ that writes as the corresponding integrated likelihood
$$\mathfrak e(\mathfrak M) = \int f_{\mathfrak M}(\mathbf x|\theta_{\mathfrak M})\,\pi_{\mathfrak M}(\theta_{\mathfrak M})\,\text{d}\theta_{\mathfrak M}$$Each model is then given a posterior probability $\pi(\mathfrak M|\mathbf x)$. The decision to select a model, if need be, is based on the maximisation of a utility function
$$\arg\max_{\mathfrak M} \mathbb E[U(\mathfrak M,\theta_{\mathfrak M})|\mathbf x]$$Hence, it is correct that in the decision, optimisation occurs.
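For contrast, the "averaging" half of the quote usually refers to what happens before any such decision: predictions integrate over both parameter and model uncertainty rather than plugging in an optimum, e.g. (in the standard Bayesian model averaging formulation)
$$p(y_{\text{new}}\mid \mathbf x) = \sum_{\mathfrak M} \pi(\mathfrak M\mid\mathbf x)\int p(y_{\text{new}}\mid\theta_{\mathfrak M},\mathfrak M)\,\pi(\theta_{\mathfrak M}\mid\mathbf x,\mathfrak M)\,\text{d}\theta_{\mathfrak M},$$with no optimisation involved until a terminal decision is required.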
54,804 | Bayesian methods are about averaging over uncertainty rather than optimization. Explain? | I did a cursory search for the context of the quote you give. I find it showing up a lot in slides by Ghahramani.
In those slides he's using that statement to suggest that MAP is not a Bayesian method. My experience with Bayesian methods has been that the result of inference is usually a distribution over parameters that you then average over in some way to get an estimate/prediction.
I think Xi'an demonstrates that getting the evidence itself doesn't require optimization. The only point where optimization comes into play is when you have to make a decision to select a model based on some utility function.
54,805 | Is there a maximum number of independent variables for generalized additive model? | The number of predictors should reflect your understanding of the system you are modelling and the hypotheses you are testing.
Technically, you can have as many covariates in the model as observations minus 2 (one for the intercept and one to have some hope of estimating everything). In a GAM, with 100 observations and 10 covariates using say the default basis size in mgcv in R for each covariate, you would be fitting a model with 91 coefficients (10 covariates times 9 basis functions as k = 10 but only 9 get used due to identifiability constraints, plus one for the intercept).
Using method = 'REML' you could likely fit this model, and the wiggliness penalties on the smooth functions would more than likely allow it to be estimated, assuming none of the covariates is concurved with the other covariates (concurvity being the nonlinear version of collinearity). If you want to do selection of terms then you'll need to use something like a second penalty on the null space of the basis (via select = TRUE in mgcv).
In practice, however, with so many covariates given the data set size, you are going to run into the issue of having highly uncertain estimated smooth functions.
With a GAM therefore, especially if you are limited by sample size, you need to think very carefully about the expected wiggliness of each of the smooth effects of the covariates you want to include in the model. Do you really need 9 degrees of wiggliness freedom to represent all of the smooth effects of the covariates? For many situations in GAMs, especially when we are not modelling temporal or spatial trends (or both), the effects of covariates tend to be relatively smooth. So perhaps you can get away with k = 6 to get 5 basis functions per smooth. That will give you some headroom with which to estimate the model. Then you can use several checks to see if the basis expansion should have used more basis functions for some of the covariates (say via gam.check() in mgcv).
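For concreteness, a minimal mgcv sketch along these lines (the response y, covariates x1–x3, and data frame df are hypothetical placeholders):
library(mgcv)
fit <- gam(y ~ s(x1, k = 6) + s(x2, k = 6) + s(x3, k = 6), data = df, method = "REML", select = TRUE) # select = TRUE adds the null-space penalty
summary(fit)
gam.check(fit) # checks whether the basis dimension k looks adequate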
In summary:
you should always think about the terms you are including in the model and justify their inclusion:
You don't want to be seduced by some unexpected but spuriously significant effect that arose because of the particular sample of data you collected or the variables you threw into the model.
Likewise, you wouldn't throw all variables at your disposal into a model, because of collinearity (or concurvity) problems.
So some thought has to precede model fitting.
In a GAM you also have to think about the size of the basis expansion used to represent the effect of each covariate on the response. You want the basis expansions to be rich enough to capture the true effect or a close approximation to it, but you don't want to needlessly waste degrees of freedom on extra basis functions that will just be penalised almost entirely out of the model.
So think about how complex (wiggly) each of the anticipated smooth effects should be and set the basis size accordingly.
Finally, if you really don't know which variables should be in the model, then applying the Double Penalty approach of Marra & Wood (2012) can put a second penalty on the null space of the basis (those basis functions that are perfectly smooth from the viewpoint of the penalty), which can shrink terms out of the model entirely in a similar way to the Lasso penalty.
54,806 | Is there a maximum number of independent variables for generalized additive model? | There is no such thing as the right number. This is totally dependent on the problem.
If you want to use classical regression methods the only "real" restriction is that you should have less than 100 variables.
I usually use a lasso regression with a cross-validated lambda to select the variables for the GAM model.
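A rough sketch of that workflow, using cv.glmnet from glmnet for the lasso step and then mgcv for the GAM (X is a hypothetical numeric predictor matrix with clean column names and y the response):
library(glmnet)
library(mgcv)
cvfit <- cv.glmnet(X, y, alpha = 1) # lasso with cross-validated lambda
cf <- as.matrix(coef(cvfit, s = "lambda.min"))
keep <- setdiff(rownames(cf)[cf[, 1] != 0], "(Intercept)") # predictors surviving the lasso
form <- reformulate(paste0("s(", keep, ")"), response = "y")
fit <- gam(form, data = data.frame(y = y, X), method = "REML")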
54,807 | Find mgf from joint pmf | This problem is intriguing because it indicates something symmetrical lurks. One can't help feeling there is a simple, low-computation, insightful solution. Indeed, a little staring at the pmf suggests defining
$$z = y-x.$$
We may assume $z \ge 0$ (because when $z \lt 0$ the $z!$ terms in the denominator have poles, which makes those probabilities zero). Now, for non-negative integers $x$ and $z$
$$p_{XZ}(x,z) = e^{-2}\, \frac{1}{x!}\frac{1}{z!} = \frac{e^{-1}}{x!}\frac{e^{-1}}{z!} $$
is manifestly symmetric under exchanging $x$ and $z.$ It is immediately recognizable as the probability distribution of two independent Poisson$(1)$ variables $X$ and $Z$ where $Z=Y-X.$
Consequently--as you know, or can look up, or can easily compute--its moment generating function is
$$m_{XZ}(s,u) = E[e^{sX + uZ}] = E[e^{sX}]E[e^{uZ}] = \exp(e^s - 1)\exp(e^u-1)=\exp(e^s+e^u-2).$$
This function remains symmetric under an exchange of its arguments $s$ and $u.$ At this point we must break the symmetry to return to the original problem--but the calculations remain simple. Since $Y=X+Z,$
$$\eqalign{
m_{XY}(s,t) &= E[e^{sX+tY}] = E[e^{sX + t(X+Z)}] = E[e^{(s+t)X+tZ}] = m_{XZ}(s+t, t) \\&= \exp(e^{s+t} + e^t - 2).
}$$
Even if your goal is only to simplify the sums you obtained, this insight into the variables and the underlying symmetric formulation can help guide your work.
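If desired, the result is easy to check by simulation (a quick sketch; the evaluation point $(s,t)$ is arbitrary):
set.seed(1)
N <- 1e6
x <- rpois(N, 1); z <- rpois(N, 1); y <- x + z # X, Z independent Poisson(1), Y = X + Z
s <- 0.3; t <- -0.2
c(empirical = mean(exp(s * x + t * y)), theoretical = exp(exp(s + t) + exp(t) - 2))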
54,808 | Find mgf from joint pmf | The joint pmf of $(X,Y)$ is of the form
\begin{align}
p(x,y)&=\frac{e^{-2}}{y!}\binom{y}{x}\mathbf1_{x=0,1,\ldots,y\,;\,y=0,1,\ldots}
\\&=\binom{y}{x}\frac{1}{2^y}\mathbf1_{x=0,1,\ldots,y}\,\frac{e^{-2}2^y}{y!}\mathbf1_{y=0,1,\ldots}
\end{align}
This shows that $X\mid Y\sim\mathsf{Bin}\left(Y,\frac{1}{2}\right)$, where $Y\sim\mathsf{Poisson}(2)$.
Using the law of total expectation:
\begin{align}
E\left[e^{t_1X+t_2Y}\right]&=E\left[E\left[e^{t_1X+t_2Y}\mid Y\right]\right]
\\&=E\left[e^{t_2Y}E\left[e^{t_1X}\mid Y\right]\right]
\end{align}
Now recall the MGF of the binomial distribution and the PGF of the Poisson distribution, respectively.
Edit.
To clarify, my hint above is the same thing as you have done. To finish it off, note that the last sum is just $E(a^Y)$ for some $a=a(t_1,t_2)$, which is just the PGF of $Y$.
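Carrying the hint through (a sketch): the binomial MGF gives $E\left[e^{t_1X}\mid Y\right]=\left(\frac{1+e^{t_1}}{2}\right)^{Y}$, so
$$E\left[e^{t_1X+t_2Y}\right]=E\left[\left(\frac{e^{t_2}(1+e^{t_1})}{2}\right)^{Y}\right]=\exp\left\{2\left(\frac{e^{t_2}(1+e^{t_1})}{2}-1\right)\right\}=\exp\left(e^{t_1+t_2}+e^{t_2}-2\right),$$
using the Poisson$(2)$ PGF $E\left[a^Y\right]=e^{2(a-1)}$; this agrees with the form obtained in the other answer.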
54,809 | Which R-squared value to report while using a fixed effects model - within, between or overall? | All three of these values provide some insight into your model, so you may need to report all three, but the within value is typically of main interest, as fixed-effects is known as the within estimator. At least in Stata, it comes from OLS-estimated mean-deviated model:
$$
\left ( y_{it} - \bar{y_{i}} \right ) = \left ( x_{it} - \bar{x_{i}} \right )\boldsymbol{\beta } + \left ( \epsilon _{it} - \bar{\epsilon _{i}} \right )
$$
The definition of each of R-squared value is below:
Within: How much of the variation in the dependent variable within household units is captured by your model (i.e., how well do your explanatory variables account for changes in DV within each of the households over time). As I said above, in Stata it comes from the OLS-estimated mean-deviated model and is calculated as the squared correlation between actual and predicted values of the DV (which, in the OLS case, is equal to the ratio of their variances - the formal definition of R-squared). A short base-R illustration follows this list.
Between: How much of the variation in the dependent variable between household units is captured by your model (i.e., how well do your explanatory variables account for differences in DV between households).
Overall: weighted average of the two
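As a rough base-R illustration of the within definition above (hypothetical data frame pdata with response y, a single regressor x, and household identifier id; this mimics the mean-deviated OLS, not Stata's exact routine):
y_w <- pdata$y - ave(pdata$y, pdata$id) # demean within households
x_w <- pdata$x - ave(pdata$x, pdata$id)
fit_w <- lm(y_w ~ x_w) # the within (fixed-effects) estimator
r2_within <- cor(fitted(fit_w), y_w)^2 # squared correlation of predicted and actual demeaned DV
r2_within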
More detailed information (calculation of each one) can be obtained from the Stata manual: https://www.stata.com/manuals13/xtxtreg.pdf
Also, if you don't already know, if you are using xtreg, fe for your estimation, the within R-squared is obtained in a manner that assumes that groups (households, in your case) are fixed quantities, so their effects are removed from the model. There is also areg procedure that estimates coefficients for each dummy variable for your groups. More information can be found at:
https://www.stata.com/support/faqs/statistics/areg-versus-xtreg-fe
https://kb.iu.edu/auur
https://dss.princeton.edu/training/Panel101.pdf
54,810 | Roots within the unit circle and non-stationarity | Not restricted to time-series analysis, characteristic equations (CE) are used in many applications or problems, such as differential/difference equation solving, signal processing, control systems etc. And they are directly related to commonly used transforms, e.g. the Z, (discrete/continuous) Fourier, and Laplace transforms. Using the back-shift operator is a type of analysis allowing one to transform the time-series equations into another domain and deduce related properties. Most analyses are not easy to do in the time domain, which is why transforms exist.
And the CE obtained via the backshift operator is one such way of analysing the model; it is a matter of which transform you use. You could get the CE of $y_t+\alpha y_{t-1}+\beta y_{t-2}=\epsilon_t$ as $1+\alpha B+\beta B^2=0$, or, if you use the $\mathcal{Z}$ transform, it would be $1+\alpha z^{-1}+\beta z^{-2}=0$, which requires the roots, i.e. the $z$'s, to be inside the unit circle for stationarity, instead of outside as with $B$, since the roots obtained from $B$ and $z$ are reciprocals of each other.
If we go back to what you're accustomed to, i.e. using the back-shift operator $B$, we can try to find the roots' relation to stationarity. Consider the time series $y_t=2y_{t-1}+\epsilon_t$: we take the previous output, double it, and add $\epsilon_t$ to obtain the current output. The process is clearly exploding. The CE of this is $1-2B=0$, which yields $|B|=1/2<1$. But if it were $y_t=0.5y_{t-1}+\epsilon_t$, the root would be $|B|=2>1$. This gives some intuition for why you need $|B|>1$ for stationarity.
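A two-line simulation makes the contrast visible (a sketch using stats::filter for the AR(1) recursion):
set.seed(1)
eps <- rnorm(200)
y_stationary <- stats::filter(eps, filter = 0.5, method = "recursive") # root of 1 - 0.5B is B = 2
y_exploding <- stats::filter(eps, filter = 2.0, method = "recursive") # root of 1 - 2B is B = 0.5
par(mfrow = c(1, 2)); plot.ts(y_stationary); plot.ts(y_exploding); par(mfrow = c(1, 1))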
Now consider the general case, $y_t=\alpha y_{t-1}+\epsilon_t$. We can obtain the following equation by back-substitution: i.e. $y_t=\alpha^2y_{t-2}+\alpha\epsilon_{t-1}+\epsilon_t$, and so...
$$y_t=\sum_{i=0}^\infty{\alpha^i\epsilon_{t-i}}$$
mean and variance of $y_t$ would be
$$E[y_t]=\sum_{i=0}^{\infty}{\alpha^iE[\epsilon_{t-i}]}=\mu_\epsilon\sum_{i=0}^\infty\alpha^i=\frac{\mu_\epsilon}{1-\alpha} \ \ \text{iff} \ \ |\alpha|<1 \rightarrow |B|>1$$
since CE is $1-B\alpha=0\rightarrow B=1/\alpha$.
Similarly for the variance, we have
$$\sigma_Y^2=var(y_t)=\sum_{i=0}^{\infty}\alpha^{2i}\sigma_{\epsilon}^2=\frac{\sigma_{\epsilon}^2}{1-\alpha^2} \ \ \text{iff} \ \ |\alpha|^2<1 \rightarrow |\alpha|=1/|B|<1$$
The auto-correlation is $r_Y(k)=E[y_t y_{t-k}]:$
$$E[y_t y_{t-1}]=E[(\alpha y_{t-1}+\epsilon_t)y_{t-1}]=\alpha\sigma_Y^2+\mu_\epsilon\mu_Y$$
$$E[y_t y_{t-2}]=E[(\alpha y_{t-1}+\epsilon_t)y_{t-2}]=\alpha r_Y(1)+\mu_\epsilon\mu_Y=\alpha^2\sigma_Y^2+\mu_\epsilon\mu_Y(1+\alpha)$$
...
$$E[y_t y_{t-k}]=\alpha^k\sigma_Y^2+\mu_\epsilon\mu_Y(1+\alpha+...+\alpha^{k-1})$$
which does not depend on $t$, given that $\sigma_Y^2$ and the means do not depend on $t$.
This is for AR(1), but AR(k) can be reduced down to a series of AR(1)'s:
$$(1-\alpha B)(1-\beta B)y_t=\epsilon_t \ \rightarrow \ x_t := (1-\beta B)y_t \ \ \ \& \ \ (1-\alpha B)x_t=\epsilon_t$$
and the analysis can be performed recursively.
Here, we're actually referring to weak stationarity, but if $\epsilon_t$ is Gaussian (as the noise process is typically assumed to be), weak stationarity coincides with strict stationarity.
54,811 | Roots within the unit circle and non-stationarity | From my perspective, you can simply understand the characteristic equations as restrictions on your target: your target has to be the root of your characteristic equations. If you don't understand where the time-series AR, MA, or ARMA models come from, you can read the book "Analysis of Financial Time Series" by Ruey S. Tsay. I would also highly recommend checking out the YouTube channel "ritvikmath"; his explanations of some of the terminology are really easy to follow.
We have to understand "what is stationarity" first. Normally we refer to "stationarity" as weak stationarity, since strict stationarity is a very strong condition and hard to verify empirically. Weak stationarity means the mean $\mu$ and the (auto)covariance are constant or time-invariant. If the roots of the characteristic equation (in $z$, the reciprocal of $B$) are greater than 1 in modulus, then by the expressions for the mean and variance these will not be time-invariant. That's why those roots have to be within the unit circle.
54,812 | Truncated Gamma Distribution | A generic if rarely mentioned result about conjugate families is that they are defined in terms of an arbitrary dominating measure $\mu$ (written $\mu$ here to avoid a clash with the hyperparameter $\lambda$ below). This means that their density wrt this dominating measure is provided by the corresponding exponential family shape
$$\exp\{A(\theta)\cdot S_0 -\lambda \psi(\theta)\}$$
but that the dominating measure $\mu$ may include an arbitrary function $C(\theta)$ of $\theta$ wrt a standard dominating measure like the Lebesgue measure (or any other), in particular the indicator of a particular subset of the parameter space.
In the case of the Poisson example, with likelihood$$\theta^S\exp\{-n\theta\}$$the prior with density against the Lebesgue measure$$\pi(\theta)\propto \theta^{S_0}\exp\{-\lambda\theta\}\mathbb{I}_{(a,b)}(\theta)$$is associated with the posterior$$\pi(\theta)\propto \theta^{S+S_0}\exp\{-(n+\lambda)\theta\}\mathbb{I}_{(a,b)}(\theta)$$which is indeed of the same shape, $S_0$ being replaced with $S+S_0$ and $\lambda$ with $n+\lambda$.
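For instance, drawing from the resulting truncated posterior is straightforward by inverse-CDF sampling (a sketch; S, S0, n, lambda, a, b are placeholders, and the shape $S+S_0+1$ assumes the density above is written against the Lebesgue measure):
rtgamma <- function(m, shape, rate, a, b) {
u <- runif(m, pgamma(a, shape, rate), pgamma(b, shape, rate)) # uniform on the truncated CDF range
qgamma(u, shape, rate) # invert the Gamma CDF
}
theta <- rtgamma(1e4, shape = S + S0 + 1, rate = n + lambda, a = a, b = b)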
54,813 | Multilevel model with 4 levels? | There is no restriction to the number of "levels" in lme4. The package will be able to fit any number of levels provided that the data supports such a random effects structure.
We can demonstrate with the following simulation of a 4-level dataset similar to that as described in the OP:
> set.seed(15)
> library(lme4)
> dt1 <- data.frame(expand.grid(SchoolID = LETTERS[1:6], FacultyID = LETTERS[1:6], CourseID = LETTERS[1:10], StudentID = 1:100, Score = c(NA, NA, NA)))
> dt1$Score <- as.numeric(dt1$SchoolID) + as.numeric(dt1$FacultyID) + as.numeric(dt1$CourseID) + as.numeric(dt1$StudentID) + rnorm(nrow(dt1), 0,5)
> lmm1 <- lmer(Score ~ 1 + (1 | SchoolID/FacultyID/CourseID/StudentID), data = dt1)
> summary(lmm1)
Random effects:
Groups Name Variance Std.Dev.
StudentID:(CourseID:(FacultyID:SchoolID)) (Intercept) 841.6574 29.0113
CourseID:(FacultyID:SchoolID) (Intercept) 0.8581 0.9263
FacultyID:SchoolID (Intercept) 2.5579 1.5993
SchoolID (Intercept) 2.8880 1.6994
Residual 24.9743 4.9974
Number of obs: 108000, groups:
StudentID:(CourseID:(FacultyID:SchoolID)), 36000; CourseID:(FacultyID:SchoolID), 360; FacultyID:SchoolID, 36; SchoolID, 6
We could also fit a 5-level model if we wished:
> dt2 <- data.frame(expand.grid(CityID = LETTERS[1:6], SchoolID = LETTERS[1:6], FacultyID = LETTERS[1:6], CourseID = LETTERS[1:10], StudentID = 1:20, Score = c(NA, NA, NA)))
> dt2$Score <- as.numeric(dt2$CityID) + as.numeric(dt2$SchoolID) + as.numeric(dt2$FacultyID) + as.numeric(dt2$CourseID) + as.numeric(dt2$StudentID) + rnorm(nrow(dt2), 0, 5)
> lmm2 <- lmer(Score ~ 1 + (1 | CityID/SchoolID/FacultyID/CourseID/StudentID), data = dt2)
> summary(lmm2)
Random effects:
Groups Name Variance Std.Dev.
StudentID:(CourseID:(FacultyID:(SchoolID:CityID))) (Intercept) 34.778 5.897
CourseID:(FacultyID:(SchoolID:CityID)) (Intercept) 7.418 2.724
FacultyID:(SchoolID:CityID) (Intercept) 2.516 1.586
SchoolID:CityID (Intercept) 2.873 1.695
CityID (Intercept) 2.922 1.709
Residual 24.940 4.994
Number of obs: 129600, groups:
StudentID:(CourseID:(FacultyID:(SchoolID:CityID))), 43200; CourseID:(FacultyID:(SchoolID:CityID)), 2160; FacultyID:(SchoolID:CityID), 216; SchoolID:CityID, 36; CityID, 6
[ Note that this 2nd model may take a while to fit ! ]
The partially crossed structure would be best represented by ensuring that the factors in each clusters are coded uniquely and lme4 should then be able to handle the partially crossed / partially nested structure simply by specifying the random effects as
(1 | SchoolID) + (1 | FacultyID) + (1 | CourseID) + (1 | StudentID)
This means that, for example, if you have StudentID 1 in Faculty A and Student 1 in Faculty B and these are different (ie, these 2 students are nested in their respective Faculties), then they should be coded as something like StudentID 1A and StudentID 1B respectively. We can demonstrate this with the dt1 dataset above, by re-coding the factors as follows:
> dt1.1 <- dt1
> dt1.1$FacultyID <- paste(dt1$SchoolID, dt1$FacultyID, sep = ".")
> dt1.1$CourseID <- paste(dt1.1$FacultyID, dt1$CourseID, sep = ".")
> dt1.1$StudentID <- paste(dt1.1$CourseID, dt1$StudentID, sep = ".")
> lmm1.1 <- lmer(Score ~ 1 + (1 | SchoolID) + (1 | FacultyID) + (1 | CourseID) + (1 | StudentID), data = dt1.1)
> summary(lmm1.1)
Random effects:
Groups Name Variance Std.Dev.
StudentID (Intercept) 841.6568 29.0113
CourseID (Intercept) 0.8584 0.9265
FacultyID (Intercept) 2.5585 1.5995
SchoolID (Intercept) 2.8893 1.6998
Residual 24.9743 4.9974
Number of obs: 108000, groups: StudentID, 36000; CourseID, 360; FacultyID, 36; SchoolID, 6
Note that the model output is the same as for lmm1 above, although presented slightly differently.
So far the data are fully nested. That is, each student is enrolled on 1 and only 1 course, one course "belongs" to one and only 1 Faculty etc. To simulate a crossed factor, for example a student that is enrolled on 2 courses, we can simply combine the relevant student IDs: First we identify the student IDs that we want to combine:
> dt1.1[dt1.1$StudentID == "A.A.A.31" | dt1.1$StudentID == "A.A.B.31", ]
SchoolID FacultyID CourseID StudentID Score
10801 A A.A A.A.A A.A.A.31 33.00600
10837 A A.A A.A.B A.A.B.31 33.69633
46801 A A.A A.A.A A.A.A.31 33.03089
46837 A A.A A.A.B A.A.B.31 33.00802
82801 A A.A A.A.A A.A.A.31 41.68804
82837 A A.A A.A.B A.A.B.31 31.26155
and give them the same (unique) ID:
> dt1.1[dt1.1$StudentID == "A.A.A.31" | dt1.1$StudentID == "A.A.B.31", ]$StudentID <- "CCCC"
And then we can fit the model with the same call:
lmm1.1 <- lmer(Score ~ 1 + (1 | SchoolID) + (1 | FacultyID) + (1 | CourseID) + (1 | StudentID), data = dt1.1)
> summary(lmm1.1)
Random effects:
Groups Name Variance Std.Dev.
StudentID (Intercept) 841.6867 29.0118
CourseID (Intercept) 0.8312 0.9117
FacultyID (Intercept) 2.5570 1.5991
SchoolID (Intercept) 2.8851 1.6986
Residual 24.9742 4.9974
Number of obs: 108000, groups: StudentID, 35999; CourseID, 360; FacultyID, 36; SchoolID, 6
Note that we now have 35,999 StudentIDs, rather than 36,000.
See here for more info about coding crossed and nested factors.
54,814 | How to account for temporal autocorrelation in logistic regression with longitudinal data? | If you were to include the time variable in the specification of your random effects, you would account for the correlations in the repeated measurements of your outcome variable ThermalResponse, e.g., something like
fm1 <- glmer(ThermalResponse ~ Temperature + Humidity + (TimeSeries | Location),
data = thermoreg, family = binomial())
You could further evaluate whether the correlations in the data are more complex by including nonlinear terms of time in the specification of the random-effects part, for example using a second-degree polynomial, e.g.,
fm2 <- glmer(ThermalResponse ~ Temperature + Humidity + (poly(TimeSeries, 2) | Location),
data = thermoreg, family = binomial())
and compare with the previous model using a likelihood ratio test, i.e.,
anova(fm1, fm2)
Note that both glmer() and glmmTMB() use the Laplace approximation in the calculation of the likelihood function of the model, which is known not to be optimal for binary data. A better approach is to use the adaptive Gaussian quadrature approximation provided, for example, by the GLMMadaptive package.
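A minimal sketch of that alternative, assuming the same thermoreg data frame and variable names as in the models above (GLMMadaptive uses adaptive Gauss-Hermite quadrature by default):
library(GLMMadaptive)
# sketch only: the same fixed and random effects as fm1, refitted with GLMMadaptive
fm3 <- mixed_model(fixed = ThermalResponse ~ Temperature + Humidity,
                   random = ~ TimeSeries | Location,
                   data = thermoreg, family = binomial())
summary(fm3)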
54,815 | How to account for temporal autocorrelation in logistic regression with longitudinal data? | Switch to the glmmTMB package: it allows different covariance structures, like AR1, on random effects, and it supports multiple distributions, including the binomial. I use it all the time for logistic mixed models with temporal autocorrelation. Also, the reason lme freaked out is that it requires normally distributed data, which yours are not; that is why you are using the binomial distribution.
Link with an explanation of the package:
https://rdrr.io/cran/glmmTMB/man/glmmTMB.html
Here is an example of someone using an AR1 structure in glmmTMB on a random effect:
https://github.com/glmmTMB/glmmTMB/issues/329
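As a rough sketch of what such a model could look like, using the variable names from the question (as in the previous answer); note that ar1() in glmmTMB needs the time index to be a factor:
library(glmmTMB)
thermoreg$times <- factor(thermoreg$TimeSeries)   # helper column: AR1 time index as a factor
fm_ar1 <- glmmTMB(ThermalResponse ~ Temperature + Humidity +
                    ar1(times + 0 | Location),
                  data = thermoreg, family = binomial())
summary(fm_ar1)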
Good luck!
54,816 | How to account for temporal autocorrelation in logistic regression with longitudinal data? | If I understand you correctly, you want to argue that there is an effect of the temperature controlling for time and the location. Thus, you could use a fixed-effect interaction between time and the location, like in this example
n_times <- 100L # number of time periods
n_sites <- 10L # number of sites
n_sub <- 2L # number of subjects / site
#####
# simulate data
set.seed(58550574)
df <- lapply(1:n_sites, function(site) {
# temporal site effect
ti <- Reduce(function(x, y) .5 * x + y, rnorm(n_times), accumulate = TRUE)
z <- runif(1, -1, 1) # location effect
do.call(rbind, lapply(1:n_sub, function(id) {
x <- runif(n_times, -1, 1) # variable of interest
y <- (1 + exp(-(z + .5 * x + ti)))^(-1) > runif(n_times)
data.frame(id = id, x = x, y = y, site = site, time = 1:n_times)
}))
})
df <- within(do.call(rbind, df), {
site <- as.factor(site)
time <- as.factor(time)
})
#####
# fit model and show variable of interest with Z-score
fit <- glm(y ~ x + site * time, binomial(), df)
summary(fit)$coefficients["x", , drop = FALSE]
#R Estimate Std. Error z value Pr(>|z|)
#R x 0.833 0.183 4.55 5.43e-06
# not sure this is a good idea but we do the plots anyway to show
# that there is no sign of autocorrelation, as expected
par(mar = c(5, 4, .5, .5), mfcol = c(3, 3))
tapply(
residuals(fit, type = "pearson"), list(df$site, df$id), acf, main = "")
# fit model faster
library(speedglm)
fit <- speedglm(
y ~ x + site * time, family = binomial(), data = df, sparse = TRUE)
summary(fit)$coefficients["x", , drop = FALSE]
#R Estimate Std. Error z value Pr(>|z|)
#R x 0.833 0.183 4.55 5.43e-06
"My overall goal is to determine the temperature at which these birds begin experiencing heat stress. I really don't care about the effect of time, I just want to control for the autocorrelation caused by the fact that this data is from a time-series."
The question is how much variation there is in the temperature which is unrelated to location and time. This seems like something that might be an issue regardless of the model you use.
Update
It seems like you just did acf(residuals(<symbol for glmer output>)). What you get out of this will likely not make any sense, and it depends on the order of the original data. The code below shows an example of this (it was too long to put into a comment).
library(lme4)
# ordered by events
cbpp <- cbpp[order(cbpp$incidence/cbpp$size), ]
gm1a <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd), cbpp,
binomial, nAGQ = 0)
acf(residuals(gm1a))
# random order
set.seed(6490517)
cbpp <- cbpp[sample.int(nrow(cbpp)), ]
gm1a <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd), cbpp,
binomial, nAGQ = 0)
acf(residuals(gm1a))
What I think you want/intended to do is to plot the within-location autocorrelation of the Pearson residuals. Some caution may be needed in the interpretation of the plot, as you observe binary events, which does not justify a normal approximation of the Pearson residuals.
54,817 | Limits of a density function | Yes.
Suppose the limit is anything else, so $\lim_{t \rightarrow \infty} f(t) = a \neq 0$. Since a density is non-negative, this means $a > 0$. Then, by the definition of the limit, there is an $N$ so that for all $t > N$, $| f(t) - a | < \frac{a}{2}$. In particular, $f(t) > \frac{a}{2}$ in this region.
But then:
$$
\int_{\mathbf{R}} f(t) dt \geq \int_{N}^{\infty} f(t) dt \geq \int_{N}^{\infty} \frac{a}{2} dt = \infty
$$
So $f$ cannot be a density function.
54,818 | Variance of unbiased estimator for the shape parameter of Pareto distribution | I will write the standard Pareto distribution with density
$$
f(x;\alpha,x_m)=\frac{\alpha x_m^\alpha}{x^{\alpha+1}}\cdot I(x > x_m),
$$ for some $\alpha>0, x_m>0$. Then the loglikelihood function can be written
$$
\ell(\alpha,x_m)=n\log\alpha + n\alpha\log x_m - (\alpha+1) \sum_i \log x_i
$$ (for $x_m<\min_i x_i$, otherwise $-\infty$.)
So the maximum likelihood estimators can be found to be
$$
\begin{align}
\hat{x}_m &= \min_i x_i \\
\hat{\alpha}&= \frac{n}{\sum_i \log(x_i/\hat{x}_m)}
\end{align}
$$
but these are not unbiased!
In Barry C. Arnold: Pareto Distributions, Second Edition, the following results are given, which can then be used to find an unbiased estimator and its variance (for $\alpha$, as asked). First, the exact distributions of the ML estimators can be found: $\hat{x}_m \sim \mathcal{P}(x_m,n \alpha)$, and with $Y_i=\log X_i$ we have $Y_i - \log x_m \sim \Gamma(1,\alpha^{-1})$ (that is, exponential), and we have the following densities
$$
\begin{align}
f_{\hat{x}_m}(u)&=n\alpha x_m^{n\alpha} u^{-(n\alpha+1)},\quad u>x_m \\
f_{\hat{\alpha}}(v)&= \frac{(\alpha n)^{n-1}}{\Gamma(n-1)v^n}e^{-n\alpha/v},\quad v>0 \quad (\text{inverse gamma})
\end{align}
$$
Then we can find the expectations and variances
$$ \DeclareMathOperator{\E}{\mathbb{E}}\DeclareMathOperator{\V}{\mathbb{V}}
\begin{align}
\E \hat{x}_m &= x_m (1-\frac1{n\alpha})^{-1} \\
\V \hat{x}_m&= \frac{n\alpha \, x_m^2}{(n\alpha - 1)^2 (n\alpha - 2)} \quad (n\alpha > 2) \\
\E \hat{\alpha} &=\alpha n (n-2)^{-1} \\
\V \hat{\alpha}&= \alpha^2 n^2 (n-2)^{-2} (n-3)^{-1}
\end{align}
$$
Using this we can modify the ML estimator to find the unbiased estimator
$$ \hat{\alpha}_u=\frac{n-2}{\sum_i \log(x_i/\hat{x}_m)} $$
with variance
$$ \V \hat{\alpha}_u = \frac{\alpha^2}{n-3}.$$
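A quick Monte Carlo check of the unbiased estimator and its variance (a sketch; the values of $\alpha$, $x_m$, $n$ and the number of replications are arbitrary choices):
set.seed(1)
alpha <- 3; x_m <- 2; n <- 20; R <- 1e5
alpha_u <- replicate(R, {
  x <- x_m / runif(n)^(1 / alpha)   # inverse-CDF draws from a Pareto(x_m, alpha)
  (n - 2) / sum(log(x / min(x)))    # the unbiased estimator above
})
mean(alpha_u)   # close to alpha = 3
var(alpha_u)    # close to alpha^2 / (n - 3) = 9 / 17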
Addendum: The referenced book states, but does not prove (p. 229), that the family of distributions of the maximum likelihood estimators is complete. A consequence of this is that the unbiased estimator above is UMVUE, that is, it has the lowest variance among all unbiased estimators. Two papers are referred to (paper1 and paper2); I did not find ungated links.
54,819 | Interpreting BLUPs or VarCorr estimates in mixed models? | As explained in the answer you cited above, the covariance matrices refer to two different models: one to the marginal model (integrating out the random effects), and the other to the model conditional on the random effects.
It is not that one is better than the other; they simply do not refer to the same model. Which one you select depends on your question of interest.
Regarding your second question, you have to be a bit more specific about what exactly you mean by "BLUPs" under the two models. For example, the empirical Bayes estimates of the random effects are derived using the same idea under the two approaches, i.e., as the mode of the conditional distribution of the random effects given the observed outcomes. You can add these to the fixed-effects part to obtain subject-specific predictions, taking into account though that in the case of glmer() you also have a link function.
EDIT: Regarding the two models mentioned above; say, $y$ is your outcome variable and $b$ are the random effects. A general definition of a mixed model is:
$$\left\{
\begin{array}{l}
y \mid b \sim \mathcal F_y(\theta_y),\\\\
b \sim \mathcal N(0, D),
\end{array}
\right.$$
where $\mathcal F_y(\theta_y)$ is an appropriate distribution for the outcome $y$, e.g., it could be Gaussian (in which case you obtain a linear mixed model), Binomial (and you obtain a mixed effects logistic regression), Poisson (mixed effects Poisson regression), etc.
The random effects are estimated as the modes of the posterior distribution
$$
p(b \mid y) = \frac{p(y \mid b) \; p(b)}{p(y)},
$$
where $p(y \mid b)$ is the probability density or probability mass function behind $\mathcal F_y$, and $p(b)$ the probability density function of the multivariate normal distribution for the random effects.
With regard to your question, the covariance matrix of the empirical Bayes estimates obtained from ranef() is related to the covariance of this posterior distribution, whereas VarCorr() is giving the $D$ matrix, which is the covariance matrix of the prior distribution of the random effects. These are not the same.
EDIT 2: A relevant feature of the estimation of the random effects is shrinkage. That is, the estimates of the random effects are shrunk towards the overall mean of the model. The degree of shrinkage depends on
The relative magnitude of the variance of the random effects versus the variance of the error terms. I.e., the larger the variance of the random effects with respect to the error variance, the smaller the degree of shrinkage.
The number of repeated measurements. The more repeated measurements, the smaller the degree of shrinkage.
The following code illustrates this in the simple random-intercepts model:
prior_vs_post <- function (prior_variance = 1, error_variance = 1,
repeated_meas_per_id = 5) {
require("lme4")
n <- 1000 # number of subjects
beta <- 10 # fixed effect intercept
b <- rnorm(n, 0, sqrt(prior_variance)) # random intercepts
DF <- data.frame(id = rep(seq_len(n), each = repeated_meas_per_id))
# simulate normal data conditional on the random intercepts
DF$y <- rnorm(n * repeated_meas_per_id, beta + b[DF$id], sqrt(error_variance))
###############
# Fit the model
fm <- lmer(y ~ 1 + (1 | id), data = DF)
c(estimated_prior_variance = VarCorr(fm)$id[1L],
BLUPs_variance = var(ranef(fm)$id[[1L]]))
}
# high variance of REs, low variance error terms
# 2 repeated measurements => low shrinkage
prior_vs_post(prior_variance = 10, error_variance = 1,
repeated_meas_per_id = 2)
#> estimated_prior_variance BLUPs_variance
#> 11.05215 10.58501
# high variance of REs, low variance error terms
# 20 repeated measurements => almost no shrinkage
prior_vs_post(prior_variance = 10, error_variance = 1,
repeated_meas_per_id = 20)
#> estimated_prior_variance BLUPs_variance
#> 10.07539 10.02580
# low variance REs, high variance error terms,
# 20 repeated measurements => considerable shrinkage
prior_vs_post(prior_variance = 1, error_variance = 10,
repeated_meas_per_id = 20)
#> estimated_prior_variance BLUPs_variance
#> 1.0002202 0.6666536
# low variance REs, high variance error terms,
# 2 repeated measurements => extreme shrinkage
prior_vs_post(prior_variance = 1, error_variance = 10,
repeated_meas_per_id = 2)
#> estimated_prior_variance BLUPs_variance
#> 0.9479291 0.1574824
54,820 | What is a consequence of an ill-conditioned Hessian matrix? | It is easiest understood by considering the problem of solving the linear system
$$
Ax = b
$$
where $b$ and $A$ are the problem data, and $x$ the parameters we are trying to estimate. In practice you have errors in $b$ which propagate through $A$. How?
Assume we have only errors in the measurements, $b$, and denote by $\delta b$ and $\delta x$ the errors in the measurements and in the estimate, respectively.
Because of linearity,
$$
\delta b = A \delta x
$$
In order to see how the measurement errors are magnified by the matrix $A$, you can calculate
$$
\frac{||\delta x ||}{||x||}/\frac{||\delta b ||}{||b||}
$$
We have that this number is bounded by the condition number of $A$,
$$
cond(A) = \frac{\sigma_{1}}{\sigma_{n}}
$$
where $\sigma_{1}$ and $\sigma_{n}$ are the largest and smallest singular values of $A$, respectively (for a symmetric positive definite matrix, such as the Hessian at a minimum, these coincide with the eigenvalues). Hence, the bigger the condition number, the higher the magnification of errors.
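A small numerical illustration of this magnification (the matrix and the perturbation are made up for the example):
A <- matrix(c(1, 1, 1, 1.0001), 2, 2)   # nearly singular, hence ill-conditioned
b <- c(2, 2.0001)
x <- solve(A, b)                        # solution (1, 1)
db <- c(0, 1e-4)                        # a tiny error in the measurements b
dx <- solve(A, b + db) - x              # the induced error in x
(sqrt(sum(dx^2)) / sqrt(sum(x^2))) / (sqrt(sum(db^2)) / sqrt(sum(b^2)))   # error magnification
kappa(A, exact = TRUE)                  # condition number, which bounds the ratio above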
In the optimization setting, small eigenvalues of the Hessian correspond to directions in which the objective is nearly flat; combined with much larger eigenvalues in other directions (that is, a large condition number), this leads to oscillations and slow convergence of gradient-based methods.
This issue has motivated a lot of research on the optimization of neural networks (as you already point out), which has led to the development of techniques like momentum (see "On the importance of initialization and momentum in deep learning") and early stopping. This blog entry provides a very nice description of this topic.
54,821 | What is a consequence of an ill-conditioned Hessian matrix? | Many optimization methods, such as Newton's, require the computation of the inverse of the Hessian.
The condition number of a matrix $H$ is usually defined as the ratio between its largest and smallest singular values,
$$
\kappa(H)=\frac{\sigma_1}{\sigma_n}.
$$
If this number is large, that is, $\sigma_n$ is small with respect to $\|H\|$, the matrix is said to be ill-conditioned.
Now, consider the singular value decomposition of $H=U\Sigma V^T$. The inverse, provided $H$ is non-singular, can be computed as $H^{-1}=V\Sigma^+U^T$, where
$$
\Sigma^+_{ij}=\begin{cases}1/\Sigma_{ij} &\text{if } i=j, \\ 0 & \text{otherwise.} \end{cases}
$$
Since $\Sigma_{nn}=\sigma_n$, whenever this value is small enough, the computation of $1/\Sigma_{nn}$ might introduce significant numerical instability.
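A small numerical sketch of this point (the matrix is made up): the SVD-based inverse contains the factor $1/\sigma_n$, which blows up as the matrix approaches singularity.
H <- matrix(c(2, 1.999, 1.999, 2), 2, 2)    # eigenvalues 3.999 and 0.001
s <- svd(H)
H_inv <- s$v %*% diag(1 / s$d) %*% t(s$u)   # V Sigma^+ U^T
s$d[1] / s$d[2]                             # condition number, about 4000
max(abs(H_inv - solve(H)))                  # agrees with the direct inverse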
54,822 | lmer for repeated measures | This seems to be a longitudinal study, with measurements over time for each pair. As a first step, based on the date variable, you could construct a follow-up time variable, which is the time from the first measurement. Think carefully, however, whether the first measurement really is time 0 for each pair in your experiment, or whether another date would be more appropriate.
Then, you include random effects for the pair grouping variable, but do not also include it as a fixed effect. You could start with a random intercepts model, e.g.,
fm1 <- lmer(time.con ~ follow_up_time + (1 | pair),
data = your_data)
This model postulates that the correlations over time within a pair remain constant. You could extend the model by assuming that the correlations decrease with the time span between measurements, using a random intercepts and random slopes model, e.g.,
fm2 <- lmer(time.con ~ follow_up_time + (follow_up_time | pair),
data = your_data)
To evaluate if you need the random slopes, you could do a likelihood ratio test, i.e.,
anova(fm1, fm2)
54,823 | lmer for repeated measures | (my first attempted answer on stackexchange...fingers crossed that this works)
I'm not an expert, but I'll offer some feedback. I don't have enough reputation to put this in a comment, so here's an answer.
I guess the first question to ask is, what do you wish to learn from the data? You say you want to compare them. By that, do you just want to see the means for each pair? If so, you could look at either the unadjusted means or shrunk means:
The unadjusted means by using lm(). That is, without pooling you could use:
lm(time.con ~ Pair)
To get the coefficients:
coef( lm(time.con ~ Pair) )
The shrunk means by using lmer(). That is, with pooling you could use:
lmer(time.con ~ (1|Pair))
To get the coefficients:
coef( lmer(time.con ~ (1|Pair)) )
If that's not what you have in mind when you say you want to compare them, then what does "compare" mean to you in this context?
Finally, since your response variable has a hard lower boundary at 0, with some values close to it including one right at 0, the typical assumption of a Gaussian error term might not be the best. If you want to account for this by using generalized linear models, glm() and glmer() might be helpful. Hopefully someone else can add more details on whether or not it'd be worth it to go to a generalized linear model.
For more background, I think this is a good resource freely available on the web:
https://bookdown.org/roback/bookdown-bysh/ch-MLRreview.html
I worked through an old version, but it looks like they updated it a few days ago.
54,824 | pymc3: acceptance probabilities and divergencies after tuning | You might have better luck on our discourse: https://discourse.pymc.io/
A couple of notes: you need to use pm.sample(..., nuts_kwargs=dict(target_accept=0.95)) instead to have target_accept applied (in the next release this behavior is changed, in line with what you tried). That might already solve your problem.
If not, these problems do not look too terrible, so you're probably fine. If you still wanted to investigate this further, I would look at where exactly the divergences appear. ArviZ has some better plots for this; see here for one example: https://arviz-devs.github.io/arviz/examples/plot_parallel.html or https://arviz-devs.github.io/arviz/examples/plot_pair.html. Basically you want to find the dimensions where the divergences cluster. That tells you where the problem is.
54,825 | Central limit theorem (CLT) writing | The notation $\stackrel{d}{\rightarrow}$ in the CLT is shorthand for the formal limit statement:
$$\lim_{n \rightarrow \infty} \mathbb{P} \Big( \sqrt{n} (\bar{X}_n - \mu) \leqslant t \Big) = \Phi(t | 0, \sigma^2).$$
You will notice that the formal statement is a limit statement in $n$, and so all of the values of $n$ are on the left-hand side of the equation. On the right-hand side this value does not appear, since the limit operation removes it. The latter statement in your question is generally considered an acceptable shorthand, but it is a slight abuse of notation with respect to this formal limit statement, since $n$ appears on the right-hand side of the limiting symbol. So the reason it is not commonly used is that, formally speaking, the limit of a function taken with respect to $n$ cannot itself be a function of $n$.
54,826 | Simple linear regression: If Y and X are both normal, what's the exact null distribution of the parameters? | Since you have specified that $X$ and $Y$ are independent, the conditional mean of $Y$ given $X$ is:
$$\mathbb{E}(Y|X) = \mathbb{E}(Y) = c,$$
which implies that:
$$\beta_0 = c \quad \quad \quad \beta_1 = 0 \quad \quad \quad \varepsilon \sim \text{N}(0, d).$$
In this case there is nothing to test --- your regression parameters are fully determined by the distributional assumptions you have made at the start of the question.
Remember that a regression model is a model designed to describe the conditional distribution of $Y$ given $X$. If you assume independence of these variables then this pre-empts the entire modelling exercise.
54,827 | Simple linear regression: If Y and X are both normal, what's the exact null distribution of the parameters? | In simple linear regression the estimate of $\beta_0$ is computed as:
$$\hat\beta_0 = \frac {1}{n} S_y - \frac {1}{n} S_x \frac {n S_{xy} - S_x S_y}{ n S_{xx} - S_x S_x}$$
with $S_x = \sum x_i $, $S_y = \sum y_i $, $S_{xx} = \sum x_i x_i $, $S_{xy} = \sum x_i y_i $
You could say it will be a linear sum of the $y_i$
$$\hat\beta_0 = \frac {1} {n} \sum c_i y_i $$
with
$$c_i = 1 - \frac {S_x \left( n x_i - S_x \right)}{n S_{xx} - S_x S_x} $$
This does not seem to follow an easy distribution (or at least not a typical, well-known distribution): with both the $x_i$ and the $y_i$ random you have
$$\hat\beta_0 \sim N(\mu, \sigma^2)$$
where $\mu$ and $\sigma$ are themselves random variables, depending on the distribution of $X$ as well (if every $y_i$ has the identical distribution $N(a,b)$, then $\mu = a$, independent of the distribution of $X$).
However, if you condition on the $x_i$, then $\hat\beta_0$ follows a regular normal distribution (note that the $y_i$ do not need to be distributed according to identical normal distributions).
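A small simulation of that last point (a sketch; the sample size and distributions are arbitrary choices): holding the $x_i$ fixed and redrawing independent $y_i$, the intercept estimate is normal.
set.seed(42)
x <- rnorm(30)                                         # fix the covariate values once
b0_hat <- replicate(1e4, coef(lm(rnorm(30) ~ x))[1])   # refit with fresh independent y
qqnorm(b0_hat); qqline(b0_hat)                         # a straight line: conditional normality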
In testing you often do not know the variance of this normal distribution and you will estimate it based on the residuals. Then you will use the t-distribution.
54,828 | Is a neural network consisting of a single softmax classification layer only a linear classifier? | A neural network with no hidden layers and a softmax output layer is exactly logistic regression (possibly with more than 2 classes), when trained to minimize categorical cross-entropy (equivalently maximize the log-likelihood of a multinomial model).
Your explanation is right on the money: a linear combination of inputs learns linear functions, and the softmax function yields a probability vector over classes.
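A tiny illustration of that point (the weights, biases and input are made-up numbers): with a single softmax layer, the class probabilities, and in particular the predicted class, depend on the input only through the linear scores $Wx + b$.
softmax <- function(z) exp(z - max(z)) / sum(exp(z - max(z)))
W <- matrix(c(1, -1, 0.5, 2, 0, -1), nrow = 3)   # 3 classes, 2 input features
b <- c(0, 0.5, -0.5)
x <- c(1, 2)
z <- drop(W %*% x + b)                  # linear scores
softmax(z)                              # probability vector over the 3 classes
which.max(softmax(z)) == which.max(z)   # softmax is monotone, so the argmax is the linear one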
54,829 | Asymmetric cost function in neural networks | "This might explain why there are not many papers on asymmetric loss functions."
That's not true. Cross-entropy is used as the loss function in most classification problems (and in problems that aren't standard classification, like for example autoencoder training and segmentation problems), and it's not symmetric.
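A quick numerical check of that asymmetry (the probabilities are arbitrary): swapping target and prediction changes the cross-entropy.
ce <- function(y, p) -(y * log(p) + (1 - y) * log(1 - p))   # binary cross-entropy
ce(0.8, 0.3)   # about 1.03
ce(0.3, 0.8)   # about 1.19, so ce(y, p) != ce(p, y)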
54,830 | Asymmetric cost function in neural networks | It's not correct that there are few papers that use an asymmetric loss function. For instance, the cross-entropy loss is asymmetric, and there are gazillions of papers that use neural networks with a cross-entropy loss. Same for the hinge loss.
It's not correct that neural networks necessarily perform badly if you use an asymmetric loss function.
There are many possible reasons why a neural network might perform badly. If you wanted to test whether your loss is responsible for the problem, you could replace your asymmetric loss with a symmetric loss that is approximately equal in the regime of interest. For instance, the Taylor series approximation of the function $f(x) = b(e^{ax} + ax - 1)$ is $f(x) = 2abx + \frac12 a^2 b x^2 + O(x^3)$, so you could try training your network using the symmetric loss function $g(y,\hat{y}) = \frac12 a^2 b (y-\hat{y})^2$, which keeps only the symmetric quadratic term, and see how well it works. I conjecture it will behave about the same, but that's something you could test empirically; a quick numerical check of this approximation is sketched below.
It is unusual to min-max normalize the outputs of the network. I'm not even sure what that would involve. Also, if you are using the sigmoid activation function, then your outputs are already constrained to lie in 0..1 (or -1..1 for tanh), so it is not clear why you are normalizing them.
It is known that sigmoid and tanh activation functions often don't work that well; training can be very slow, or you can have problems with dead neurons. Modern networks usually use a different activation function, e.g., ReLu.
There are many details that determine whether a neural network trains effectively, involving initialization, the optimization algorithm, learning rates, network architecture, and more. I don't think you have any justification for concluding that the poor performance you are observing necessarily has anything to do with the asymmetry in your loss function. And a question here might not be the best way to debug your network (certainly, the information provided here isn't enough to do so, and such a question is unlikely to be of interest to others in the future).
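A quick numerical check of the quadratic approximation mentioned above (the values of a and b are arbitrary):
a <- 2; b <- 1.5
f <- function(x) b * (exp(a * x) + a * x - 1)            # the asymmetric loss
quad <- function(x) 2 * a * b * x + 0.5 * a^2 * b * x^2  # its second-order Taylor approximation
x <- c(-0.1, -0.01, 0.01, 0.1)
cbind(f = f(x), quadratic = quad(x))                     # close for small x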
It's not correct that there are few papers that use an asymmetric loss function. For instance, the cross-entropy loss is asymmetric, and there are gazillions of papers that use neural networks with a cross-entropy loss. Same for the hinge loss.
It's not correct that neural networks necessarily perform badly if you use an asymmetric loss function.
There are many possible reasons why a neural network might perform badly. If you wanted to test whether your loss is responsible for the problem, you could replace your asymmetric loss with a symmetric loss that is approximately equal for the regime of interest. For instance, the Taylor series approximation of the function $f(x) = b(e^{ax} + ax - 1)$ is $f(x) = -b + 2abx + \frac12 a^2 b x^2 + O(x^3)$, so you could try training your network using the symmetric loss function $g(y,\hat{y}) = -b + \frac12 a^2 b (y-\hat{y})^2$ and see how well it works. I conjecture it will behave about the same, but that's something you could test empirically.
It is unusual to min-max normalize the outputs of the network. I'm not even sure what that would involve. Also, if you are using the sigmoid activation function, then your outputs are already constrained to the range 0..1 (or -1..1 for tanh), so it is not clear why you are normalizing them.
It is known that sigmoid and tanh activation functions often don't work that well; training can be very slow, or you can have problems with dead neurons. Modern networks usually use a different activation function, e.g., ReLU.
There are many details to make a neural network train effectively, based on initialization, the optimization algorithm, learning rates, network architecture, and more. I don't think you have any justification for concluding that the poor performance you are observing necessarily has anything to do with the asymmetry in your loss function. And a question here might not be the best way to debug your network (certainly, the information provided here isn't enough to do so, and such a question is unlikely to be of interest to others in the future). | Asymmetric cost function in neural networks
It's not correct that there are few papers that use an asymmetric loss function. For instance, the cross-entropy loss is asymmetric, and there are gazillions of papers that use neural networks with a |
54,831 | Asymmetric cost function in neural networks | There are some examples of research papers using asymmetric cost functions / loss functions. One example is "Residual value forecasting using asymmetric cost functions" published in the International Journal of Forecasting. Various estimation methods considering asymmetric costs were used and compared, including neural networks.
https://www.sciencedirect.com/science/article/abs/pii/S0169207018300335 | Asymmetric cost function in neural networks | There are some examples for research papers using asymmetric cost functions / loss functions. One example is "Residual value forecasting using asymmetric cost functions" published in the International | Asymmetric cost function in neural networks
There are some examples of research papers using asymmetric cost functions / loss functions. One example is "Residual value forecasting using asymmetric cost functions" published in the International Journal of Forecasting. Various estimation methods considering asymmetric costs were used and compared, including neural networks.
https://www.sciencedirect.com/science/article/abs/pii/S0169207018300335 | Asymmetric cost function in neural networks
There are some examples for research papers using asymmetric cost functions / loss functions. One example is "Residual value forecasting using asymmetric cost functions" published in the International |
54,832 | Separate Models vs Flags in the same model | I will write an answer assuming some form of regression (-like) model. You say neural network, much the same will apply, nevertheless it will be helpful to understand the issues in a simpler setting. And you should probably try some simpler model before throwing the data at a neural network ...
So let $Y_i$ be the response, $x_i$ a covariate vector (all variables except brand indicator, possibly including interactions, not including brand), and $I_i$ (values 0 or 1) the brand indicator. We can write a model with linear predictor as (error terms not included)
$$ Y_i = \beta_0 + \beta^T x_i +\alpha I_i.
$$ This will simply give two parallel lines, the models for the two brands will only have different intercepts, otherwise equal.
Then we can include all the interactions between brand and other variables
$$ Y_i = \beta_0 + \beta^T x_i + \alpha I_i + \gamma^T
\cdot \begin{cases} x_i & \text{for } I_i = 1 \\ 0 & \text{for } I_i = 0 \end{cases}
$$ since the product between $x_i$ and the brand indicator will be either $x_i$ or the zero vector. This now splits nicely in two equations, one for each brand, as
$$
Y_i=\begin{cases} \beta_0 + \beta^T x_i & \text{for $I_i=0$} \\
\beta_0+\alpha+(\beta+\gamma)^T x_i &\text{for $I_i=1$} \end{cases}
$$
so the model with all interactions is algebraically the same as fitting two separate models. (In practice it will not be exactly the same, because fitting separately will give two different variance estimates. If there are many groups, this can lead to a serious loss of degrees of freedom for variance estimation.) So this should give some hint about the difference.
If you want an in-between model, with some but not all interactions, you should fit one model.
EDIT To make it more clear, a simple simulated example in R:
set.seed(7*11*13) # My public seed
n <- 100
x1 <- rnorm(n, 10, 3)
x2 <- rnorm(n, 15, 4)
x3 <- x2 + rnorm(n, 5, 2)
Sex <- rbinom(n, 1, 0.5)
Y <- 5 + x1 + Sex*x2 + Sex*x3 + 5*Sex + rnorm(n, 0, 3)
mydata <- data.frame(x1, x2, x3, Y, Sex=as.factor(Sex))
# Two separate models:
modsep0 <- lm(Y ~ x1 + x2 + x3, data=mydata, subset=Sex==0)
modsep1 <- lm(Y ~ x1 + x2 + x3, data=mydata, subset=Sex==1)
modflags <- lm(Y ~ Sex + x1 + x2 + x3 + Sex:x1 + Sex:x2 + Sex:x3,
data=mydata)
Then the model summaries:
summary(modsep0)
Call:
lm(formula = Y ~ x1 + x2 + x3, data = mydata,
subset = Sex == 0)
Residuals:
Min 1Q Median 3Q Max
-7.0752 -1.0416 -0.0627 1.0081 6.0612
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.2936 2.6214 2.401 0.0202 *
x1 0.9968 0.1104 9.029 5.34e-12 ***
x2 -0.2664 0.2369 -1.125 0.2662
x3 0.1193 0.1975 0.604 0.5487
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.962 on 49 degrees of freedom
Multiple R-squared: 0.64, Adjusted R-squared: 0.618
F-statistic: 29.04 on 3 and 49 DF, p-value: 6.166e-11
summary(modsep1)
Call:
lm(formula = Y ~ x1 + x2 + x3, data = mydata,
subset = Sex == 1)
Residuals:
Min 1Q Median 3Q Max
-5.7700 -1.9096 0.3434 2.1454 5.8608
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.4998 2.5982 1.732 0.0905 .
x1 0.9738 0.1431 6.805 2.47e-08 ***
x2 0.5550 0.2700 2.055 0.0459 *
x3 1.6171 0.2246 7.199 6.64e-09 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.888 on 43 degrees of freedom
Multiple R-squared: 0.9075, Adjusted R-squared: 0.9011
F-statistic: 140.7 on 3 and 43 DF, p-value: < 2.2e-16
summary(modflags)
Call:
lm(formula = Y ~ Sex + x1 + x2 + x3 + Sex:x1 + Sex:x2 +
Sex:x3, data = mydata)
Residuals:
Min 1Q Median 3Q Max
-7.0752 -1.3782 0.0071 1.9441 6.0612
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.29355 2.59092 2.429 0.0171 *
Sex1 -1.79380 3.69469 -0.486 0.6285
x1 0.99681 0.10912 9.135 1.51e-14 ***
x2 -0.26644 0.23415 -1.138 0.2581
x3 0.11926 0.19517 0.611 0.5427
Sex1:x1 -0.02304 0.18152 -0.127 0.8993
Sex1:x2 0.82139 0.36019 2.280 0.0249 *
Sex1:x3 1.49783 0.29991 4.994 2.79e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.927 on 92 degrees of freedom
Multiple R-squared: 0.9824, Adjusted R-squared: 0.981
F-statistic: 732.4 on 7 and 92 DF, p-value: < 2.2e-16
Then start comparing coefficients, beginning with the intercepts (remember that the Sex variable could not be included in the separate models, so there it is absorbed into the intercept); start with
6.29355-1.79380
[1] 4.49975
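In the same way, the slopes for the Sex == 1 group are recovered by adding the main effect and the interaction; a small check using the objects above (the numbers in the comments come from the printed summaries), including a comparison of the fitted values:
coef(modflags)["x2"] + coef(modflags)["Sex1:x2"]   # -0.26644 + 0.82139 = 0.55495
coef(modsep1)["x2"]                                #  0.5550
coef(modflags)["x3"] + coef(modflags)["Sex1:x3"]   #  0.11926 + 1.49783 = 1.61709
coef(modsep1)["x3"]                                #  1.6171
all.equal(unname(fitted(modflags)[mydata$Sex == "1"]),
          unname(fitted(modsep1)))   # should be TRUE: the fits are algebraically equivalent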
and continue from there ... | Separate Models vs Flags in the same model | I will write an answer assuming some form of regression (-like) model. You say neural network, much the same will apply, nevertheless it will be helpful to understand the issues in a simpler setting. | Separate Models vs Flags in the same model
I will write an answer assuming some form of regression (-like) model. You say neural network, much the same will apply, nevertheless it will be helpful to understand the issues in a simpler setting. And you should probably try some simpler model before throwing the data at a neural network ...
So let $Y_i$ be the response, $x_i$ a covariate vector (all variables except brand indicator, possibly including interactions, not including brand), and $I_i$ (values 0 or 1) the brand indicator. We can write a model with linear predictor as (error terms not included)
$$ Y_i = \beta_0 + \beta^T x_i +\alpha I_i.
$$ This will simply give two parallel lines, the models for the two brands will only have different intercepts, otherwise equal.
Then we can include all the interactions between brand and other variables
$$ Y_i = \beta_0 + \beta^T x_i + \alpha I_i + \gamma^T
\cdot \begin{cases} x_i & \text{for } I_i = 1 \\ 0 & \text{for } I_i = 0 \end{cases}
$$ since the product between $x_i$ and the brand indicator will be either $x_i$ or the zero vector. This now splits nicely in two equations, one for each brand, as
$$
Y_i=\begin{cases} \beta_0 + \beta^T x_i & \text{for $I_i=0$} \\
\beta_0+\alpha+(\beta+\gamma)^T x_i &\text{for $I_i=1$} \end{cases}
$$
so the model with all interactions is algebraically the same as fitting two separate models. (In practice it will not be exactly the same, because fitting separately will give two different variance estimates. If there are many groups, this can lead to a serious loss of degrees of freedom for variance estimation.) So this should give some hint about the difference.
If you want an in-between model, with some but not all interactions, you should fit one model.
EDIT To make it more clear, a simple simulated example in R:
set.seed(7*11*13) # My public seed
n <- 100
x1 <- rnorm(n, 10, 3)
x2 <- rnorm(n, 15, 4)
x3 <- x2 + rnorm(n, 5, 2)
Sex <- rbinom(n, 1, 0.5)
Y <- 5 + x1 + Sex*x2 + Sex*x3 + 5*Sex + rnorm(n, 0, 3)
mydata <- data.frame(x1, x2, x3, Y, Sex=as.factor(Sex))
# Two separate models:
modsep0 <- lm(Y ~ x1 + x2 + x3, data=mydata, subset=Sex==0)
modsep1 <- lm(Y ~ x1 + x2 + x3, data=mydata, subset=Sex==1)
modflags <- lm(Y ~ Sex + x1 + x2 + x3 + Sex:x1 + Sex:x2 + Sex:x3,
data=mydata)
Then the model summaries:
summary(modsep0)
Call:
lm(formula = Y ~ x1 + x2 + x3, data = mydata,
subset = Sex == 0)
Residuals:
Min 1Q Median 3Q Max
-7.0752 -1.0416 -0.0627 1.0081 6.0612
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.2936 2.6214 2.401 0.0202 *
x1 0.9968 0.1104 9.029 5.34e-12 ***
x2 -0.2664 0.2369 -1.125 0.2662
x3 0.1193 0.1975 0.604 0.5487
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.962 on 49 degrees of freedom
Multiple R-squared: 0.64, Adjusted R-squared: 0.618
F-statistic: 29.04 on 3 and 49 DF, p-value: 6.166e-11
summary(modsep1)
Call:
lm(formula = Y ~ x1 + x2 + x3, data = mydata,
subset = Sex == 1)
Residuals:
Min 1Q Median 3Q Max
-5.7700 -1.9096 0.3434 2.1454 5.8608
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.4998 2.5982 1.732 0.0905 .
x1 0.9738 0.1431 6.805 2.47e-08 ***
x2 0.5550 0.2700 2.055 0.0459 *
x3 1.6171 0.2246 7.199 6.64e-09 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.888 on 43 degrees of freedom
Multiple R-squared: 0.9075, Adjusted R-squared: 0.9011
F-statistic: 140.7 on 3 and 43 DF, p-value: < 2.2e-16
summary(modflags)
Call:
lm(formula = Y ~ Sex + x1 + x2 + x3 + Sex:x1 + Sex:x2 +
Sex:x3, data = mydata)
Residuals:
Min 1Q Median 3Q Max
-7.0752 -1.3782 0.0071 1.9441 6.0612
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.29355 2.59092 2.429 0.0171 *
Sex1 -1.79380 3.69469 -0.486 0.6285
x1 0.99681 0.10912 9.135 1.51e-14 ***
x2 -0.26644 0.23415 -1.138 0.2581
x3 0.11926 0.19517 0.611 0.5427
Sex1:x1 -0.02304 0.18152 -0.127 0.8993
Sex1:x2 0.82139 0.36019 2.280 0.0249 *
Sex1:x3 1.49783 0.29991 4.994 2.79e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.927 on 92 degrees of freedom
Multiple R-squared: 0.9824, Adjusted R-squared: 0.981
F-statistic: 732.4 on 7 and 92 DF, p-value: < 2.2e-16
Then start comparing coefficients, beginning with the intercepts (remember that the Sex variable could not be included in the separate models, so there it is absorbed into the intercept); start with
6.29355-1.79380
[1] 4.49975
and continue from there ... | Separate Models vs Flags in the same model
I will write an answer assuming some form of regression (-like) model. You say neural network, much the same will apply, nevertheless it will be helpful to understand the issues in a simpler setting. |
54,833 | Association, relationship and correlation | They may sometimes be used as if they mean the same thing but correlation is more specific, and association is more general, with relationship being between the two.
Correlation means that they move together (positive correlation indicates increasing and decreasing together, negative correlation means they move in opposite direction). Linear correlation is more specific still; then they move in proportion, not just in the same (or opposite) direction.
A relationship suggests that as one variable changes the other tends to change as well. For example, two variables may have a quadratic relationship. This may occur with correlation or variables may be related but uncorrelated. That is, unless accompanied by some qualifying term, I tend to interpret "relationship" to essentially imply "functional relationship". However the word "relation" in the mathematical sense is more general and many people often use it more broadly without qualifying it. For example, imagine points scattered about a set of concentric ring-shapes. I probably wouldn't say that the variables were related (at least not without adding something to say that they weren't functionally related) but some people would happily do so.
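As a small illustration of "related but uncorrelated", take a quadratic relationship with a regressor symmetric about zero (a quick R sketch):
set.seed(1)
x <- rnorm(10^5)   # symmetric about 0
y <- x^2           # perfectly (functionally) related to x
cor(x, y)          # essentially 0: related, but not (linearly) correlated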
Association may be more general still; (to me at least) it suggests almost any form of dependence between the variables. For example, imagine two variables where all the points tend to lay within small discs of probability (and not outside them), but where these discs are scattered about in a way that looks random. Then I would usually avoid the term "relationship" (qualified or not) and stick to associated or dependent.
Further example: If you look at this image (from the Wikipedia article on Correlation and dependence), the first row displays different amounts of linear correlation, while the last row shows variables that are uncorrelated but dependent. With the first and fourth, I'd definitely say that the variable on the y-axis had a relationship with the one on the x-axis, and I'd call all of them associated. (I might use "related" for the 5th and 6th, but I'd tend to want to clarify that they weren't functionally related.)
However, as with many terms in statistics, there is variation in how people use terms like relationship and association; you may see them used almost interchangeably, with association used less generally than just any form of dependence and/or relationship used more generally than I have.
If a variable is correlated with another, then it could also be called related or associated with it. However, a variable may be related to another, or associated with another, but not be correlated to it. | Association, relationship and correlation | They may sometimes be used as if they mean the same thing but correlation is more specific, and association is more general, with relationship being between the two.
Correlation means that they move t | Association, relationship and correlation
They may sometimes be used as if they mean the same thing but correlation is more specific, and association is more general, with relationship being between the two.
Correlation means that they move together (positive correlation indicates increasing and decreasing together, negative correlation means they move in opposite direction). Linear correlation is more specific still; then they move in proportion, not just in the same (or opposite) direction.
A relationship suggests that as one variable changes the other tends to change as well. For example, two variables may have a quadratic relationship. This may occur with correlation or variables may be related but uncorrelated. That is, unless accompanied by some qualifying term, I tend to interpret "relationship" to essentially imply "functional relationship". However the word "relation" in the mathematical sense is more general and many people often use it more broadly without qualifying it. For example, imagine points scattered about a set of concentric ring-shapes. I probably wouldn't say that the variables were related (at least not without adding something to say that they weren't functionally related) but some people would happily do so.
Association may be more general still; (to me at least) it suggests almost any form of dependence between the variables. For example, imagine two variables where all the points tend to lay within small discs of probability (and not outside them), but where these discs are scattered about in a way that looks random. Then I would usually avoid the term "relationship" (qualified or not) and stick to associated or dependent.
Further example: If you look at this image (from the Wikipedia article on Correlation and dependence), the first row displays different amounts of linear correlation, while the last row shows variables that are uncorrelated but dependent. With the first and fourth, I'd definitely say that the variable on the y-axis had a relationship with the one on the x-axis, and I'd call all of them associated. (I might use "related" for the 5th and 6th, but I'd tend to want to clarify that they weren't functionally related.)
However, as with many terms in statistics, there is variation in how people use terms like relationship and association; you may see them used almost interchangeably, with association used less generally than just any form of dependence and/or relationship used more generally than I have.
If a variable is correlated with another, then it could also be called related or associated with it. However, a variable may be related to another, or associated with another, but not be correlated to it. | Association, relationship and correlation
They may sometimes be used as if they mean the same thing but correlation is more specific, and association is more general, with relationship being between the two.
Correlation means that they move t |
54,834 | Question about the latent variable in EM algorithm | There is a lot of confusion in the question, confusion that could be reduced by looking at a textbook on the topic, or even the original 1977 paper by Dempster, Laird and Rubin.
Here is an excerpt of our book, Introducing Monte Carlo Methods with R, followed by my answer:
Assume that we observe $X_1, \ldots, X_n$, jointly distributed from $g({\mathbf x}|\theta)$ that satisfies
$$
g({\mathbf x}|\theta)=\int_{\cal Z} f({\mathbf x}, {\mathbf z}|\theta)\, \text{d}{\mathbf z},
$$
and that we want to compute $\hat{\theta} = \arg\max L(\theta|{\mathbf x})= \arg\max g({\mathbf x}|\theta)$.
Since the augmented data is ${\mathbf z}$, where $({\mathbf X}, {\mathbf Z}) \sim f({\mathbf x},{\mathbf z}| \theta)$
the conditional distribution of the missing data ${\mathbf Z}$ given the observed data ${\mathbf x}$ is
$$
k({\mathbf z}| \theta, {\mathbf x}) = f({\mathbf x}, {\mathbf z}|\theta)\big/g({\mathbf x}|\theta)\,.
$$
Taking the logarithm of this expression
leads to the following relationship between the complete-data likelihood $L^c(\theta|{\mathbf x},
{\mathbf z})$ and the observed-data likelihood $L(\theta|{\mathbf x})$. For any value $\theta_0$,
$$
\log L(\theta|{\mathbf x})= \mathbb{E}_{\theta_0}[\log L^c(\theta|{\mathbf x},{\mathbf Z})]
-\mathbb{E}_{\theta_0}[\log k({\mathbf Z}| \theta, {\mathbf x})],\qquad(1)
$$where the expectation is with respect to $k({\mathbf z}| \theta_0, {\mathbf x})$. In the EM algorithm,
while we aim at maximizing $\log L(\theta|{\mathbf x})$, only the first term on the right side of
(1) will be considered.
Denoting$$
Q(\theta |\theta_0, {\mathbf x}) = \mathbb{E}_{\theta_0}
[\log L^c(\theta|{\mathbf x},{\mathbf Z})],
$$
the EM algorithm indeed proceeds iteratively by maximizing
$Q(\theta |\theta_0, {\mathbf x})$ at each iteration and, if $\hat{\theta}_{(1)}$
is the value of $\theta$ maximizing $Q(\theta |\theta_0, {\mathbf x})$,
by replacing $\theta_0$ by the updated value $\hat{\theta}_{(1)}$. In this manner, a sequence of estimators
$\{\hat{\theta}_{(j)}\}_j$ is obtained, where $\hat{\theta}_{(j)}$ is defined as the value of
$\theta$ maximizing $Q(\theta |\hat{\theta}_{(j-1)}, {\mathbf x})$; that is,$$
Q(\hat{\theta}_{(j)} |\hat{\theta}_{(j-1)}, {\mathbf x})
= \max_{\theta}\,Q(\theta |\hat{\theta}_{(j-1)},
{\mathbf x}).$$This iterative scheme thus contains both an expectation step
and a maximization step, giving the algorithm its name.
EM Algorithm
Pick a starting value $\hat{\theta}_{(0)}$ and set $m=0$
Repeat
Compute the E-step
$$
Q(\theta|\hat{\theta}_{(m)}, {\mathbf x})
=\mathbb{E}_{\hat{\theta}_{(m)}} [\log L^c(\theta|{\mathbf x}, {\mathbf Z})]\,,
$$
where the expectation is with respect to $k({\mathbf z}|\hat{\theta}_{(m)},{\mathbf x})$.
Maximize $Q(\theta|\hat{\theta}_{(m)}, {\mathbf x})$ in
$\theta$ and take the M-step
$$
\hat\theta_{(m+1)}=\arg\max_\theta \; Q(\theta|\hat{\theta}_{(m)}, {\mathbf x})
$$
and set $m=m+1$
until a fixed point is reached; i.e., $\hat\theta_{(m+1)}=\hat{\theta}_{(m)}$.
For the normal mixture, using the missing data structure exhibited previously leads to an objective function
equal to
$$
Q(\theta^\prime|\theta,\mathbf{x}) = -\frac{1}{2}\,\sum_{i=1}^n
\mathbb{E}_\theta\left[\left. Z_i (x_i-\mu_1)^2 + (1-Z_i) (x_i-\mu_2)^2 \right| \mathbf{x} \right].
$$
Solving the M-step then provides the closed-form expressions
$$
\mu_1^\prime = \mathbb{E}_\theta\left[ \sum_{i=1}^n Z_i x_i |\mathbf{x} \right]
\bigg/ \mathbb{E}_\theta\left[ \sum_{i=1}^n Z_i| \mathbf{x} \right]
$$
and
$$
\mu_2^\prime = \mathbb{E}_\theta\left[ \sum_{i=1}^n (1-Z_i) x_i |\mathbf{x} \right]
\bigg/ \mathbb{E}_\theta\left[ \sum_{i=1}^n (1-Z_i)| \mathbf{x} \right].
$$
Since
$$
\mathbb{E}_\theta\left[Z_i|\mathbf{x} \right]=\frac{\varphi(x_i-\mu_1)}{ \varphi(x_i-\mu_1)+3\varphi(x_i-\mu_2)}\,,
$$
the EM algorithm can easily be implemented in this setting.
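For concreteness, here is a minimal R sketch of this EM iteration for the two-component location mixture of the excerpt (weights 1/4 and 3/4, unit variances); the simulated data and the starting values are only illustrative:
set.seed(1)
x  <- c(rnorm(250, 0), rnorm(750, 4))   # illustrative data from .25 N(0,1) + .75 N(4,1)
mu <- c(-1, 5)                          # starting values for (mu1, mu2)
for (m in 1:100) {
  # E-step: E[Z_i | x] = phi(x - mu1) / (phi(x - mu1) + 3 phi(x - mu2))
  z  <- dnorm(x, mu[1]) / (dnorm(x, mu[1]) + 3 * dnorm(x, mu[2]))
  # M-step: the closed-form weighted means given above
  mu <- c(sum(z * x) / sum(z), sum((1 - z) * x) / sum(1 - z))
}
mu                                      # estimates of (mu1, mu2)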
Whatever the mixture involved, the latent variables $Z_i$ are Multinomial $\mathcal{M}_M(1;\pi_1,\ldots,\pi_M)$ which means only one component of the vector $Z_i$ is equal to one and all of the $M-1$ others are zero. (Note the difference with the question in the notations: the original notation $\mathcal{M}(M;\pi_1,\ldots,\pi_M)$ fails to indicate how many draws are taken, that is, what is the sum of the components of $Z_i$.).When $k=2$ as in the above excerpt, $Z_i$ is an integer in $\{0,1\}$. There may be a confusion between a Multinomial distribution and the property of a distribution (like some mixtures) to be multimodal. The $Z_i$ do not have a multimodal distribution, taking only two values, even conditional on the $X_i$'s, while the $X_i$'s may, at least unconditionally. | Question about the latent variable in EM algorithm | There is a lot of confusion in the question, confusion that could be reduced by looking at a textbook on the paper, or even the original 1977 paper by Dempster, Laird and Rubin.
Here is an excerpt of | Question about the latent variable in EM algorithm
There is a lot of confusion in the question, confusion that could be reduced by looking at a textbook on the topic, or even the original 1977 paper by Dempster, Laird and Rubin.
Here is an excerpt of our book, Introducing Monte Carlo Methods with R, followed by my answer:
Assume that we observe $X_1, \ldots, X_n$, jointly distributed from $g({\mathbf x}|\theta)$ that satisfies
$$
g({\mathbf x}|\theta)=\int_{\cal Z} f({\mathbf x}, {\mathbf z}|\theta)\, \text{d}{\mathbf z},
$$
and that we want to compute $\hat{\theta} = \arg\max L(\theta|{\mathbf x})= \arg\max g({\mathbf x}|\theta)$.
Since the augmented data is ${\mathbf z}$, where $({\mathbf X}, {\mathbf Z}) \sim f({\mathbf x},{\mathbf z}| \theta)$
the conditional distribution of the missing data ${\mathbf Z}$ given the observed data ${\mathbf x}$ is
$$
k({\mathbf z}| \theta, {\mathbf x}) = f({\mathbf x}, {\mathbf z}|\theta)\big/g({\mathbf x}|\theta)\,.
$$
Taking the logarithm of this expression
leads to the following relationship between the complete-data likelihood $L^c(\theta|{\mathbf x},
{\mathbf z})$ and the observed-data likelihood $L(\theta|{\mathbf x})$. For any value $\theta_0$,
$$
\log L(\theta|{\mathbf x})= \mathbb{E}_{\theta_0}[\log L^c(\theta|{\mathbf x},{\mathbf Z})]
-\mathbb{E}_{\theta_0}[\log k({\mathbf Z}| \theta, {\mathbf x})],\qquad(1)
$$where the expectation is with respect to $k({\mathbf z}| \theta_0, {\mathbf x})$. In the EM algorithm,
while we aim at maximizing $\log L(\theta|{\mathbf x})$, only the first term on the right side of
(1) will be considered.
Denoting$$
Q(\theta |\theta_0, {\mathbf x}) = \mathbb{E}_{\theta_0}
[\log L^c(\theta|{\mathbf x},{\mathbf Z})],
$$
the EM algorithm indeed proceeds iteratively by maximizing
$Q(\theta |\theta_0, {\mathbf x})$ at each iteration and, if $\hat{\theta}_{(1)}$
is the value of $\theta$ maximizing $Q(\theta |\theta_0, {\mathbf x})$,
by replacing $\theta_0$ by the updated value $\hat{\theta}_{(1)}$. In this manner, a sequence of estimators
$\{\hat{\theta}_{(j)}\}_j$ is obtained, where $\hat{\theta}_{(j)}$ is defined as the value of
$\theta$ maximizing $Q(\theta |\hat{\theta}_{(j-1)}, {\mathbf x})$; that is,$$
Q(\hat{\theta}_{(j)} |\hat{\theta}_{(j-1)}, {\mathbf x})
= \max_{\theta}\,Q(\theta |\hat{\theta}_{(j-1)},
{\mathbf x}).$$This iterative scheme thus contains both an expectation step
and a maximization step, giving the algorithm its name.
EM Algorithm
Pick a starting value $\hat{\theta}_{(0)}$ and set $m=0$
Repeat
Compute the E-step
$$
Q(\theta|\hat{\theta}_{(m)}, {\mathbf x})
=\mathbb{E}_{\hat{\theta}_{(m)}} [\log L^c(\theta|{\mathbf x}, {\mathbf Z})]\,,
$$
where the expectation is with respect to $k({\mathbf z}|\hat{\theta}_{(m)},{\mathbf x})$.
Maximize $Q(\theta|\hat{\theta}_{(m)}, {\mathbf x})$ in
$\theta$ and take the M-step
$$
\hat\theta_{(m+1)}=\arg\max_\theta \; Q(\theta|\hat{\theta}_{(m)}, {\mathbf x})
$$
and set $m=m+1$
until a fixed point is reached; i.e., $\hat\theta_{(m+1)}=\hat{\theta}_{(m)}$.
For the normal mixture, using the missing data structure exhibited previously leads to an objective function
equal to
$$
Q(\theta^\prime|\theta,\mathbf{x}) = -\frac{1}{2}\,\sum_{i=1}^n
\mathbb{E}_\theta\left[\left. Z_i (x_i-\mu_1)^2 + (1-Z_i) (x_i-\mu_2)^2 \right| \mathbf{x} \right].
$$
Solving the M-step then provides the closed-form expressions
$$
\mu_1^\prime = \mathbb{E}_\theta\left[ \sum_{i=1}^n Z_i x_i |\mathbf{x} \right]
\bigg/ \mathbb{E}_\theta\left[ \sum_{i=1}^n Z_i| \mathbf{x} \right]
$$
and
$$
\mu_2^\prime = \mathbb{E}_\theta\left[ \sum_{i=1}^n (1-Z_i) x_i |\mathbf{x} \right]
\bigg/ \mathbb{E}_\theta\left[ \sum_{i=1}^n (1-Z_i)| \mathbf{x} \right].
$$
Since
$$
\mathbb{E}_\theta\left[Z_i|\mathbf{x} \right]=\frac{\varphi(x_i-\mu_1)}{ \varphi(x_i-\mu_1)+3\varphi(x_i-\mu_2)}\,,
$$
the EM algorithm can easily be implemented in this setting.
Whatever the mixture involved, the latent variables $Z_i$ are Multinomial $\mathcal{M}_M(1;\pi_1,\ldots,\pi_M)$ which means only one component of the vector $Z_i$ is equal to one and all of the $M-1$ others are zero. (Note the difference with the question in the notations: the original notation $\mathcal{M}(M;\pi_1,\ldots,\pi_M)$ fails to indicate how many draws are taken, that is, what is the sum of the components of $Z_i$.).When $k=2$ as in the above excerpt, $Z_i$ is an integer in $\{0,1\}$. There may be a confusion between a Multinomial distribution and the property of a distribution (like some mixtures) to be multimodal. The $Z_i$ do not have a multimodal distribution, taking only two values, even conditional on the $X_i$'s, while the $X_i$'s may, at least unconditionally. | Question about the latent variable in EM algorithm
There is a lot of confusion in the question, confusion that could be reduced by looking at a textbook on the topic, or even the original 1977 paper by Dempster, Laird and Rubin.
Here is an excerpt of |
54,835 | Question about the latent variable in EM algorithm | If I correctly read between the lines, your question is about the difference between the distribution of $[z]$ (i.e., the prior distribution of the latent variable), and the distribution of $[z \mid y]$ (i.e., the posterior distribution of the latent variable given the data $y$).
The prior probabilities are the same for all subjects, i.e., $\pi$. However, the posterior probabilities will be different per subject because each subject will have his/her own probability of belonging to the first component (i.e., the component for which $z_i = 1$) depending on their data $y_i$. Hence, if you plot the posterior probabilities, the plot can be multimodal. | Question about the latent variable in EM algorithm | If I correctly read between the lines, your question is about the difference between the distribution of $[z]$ (i.e., the prior distribution of the latent variable), and the distribution of $[z \mid y | Question about the latent variable in EM algorithm
If I correctly read between the lines, your question is about the difference between the distribution of $[z]$ (i.e., the prior distribution of the latent variable), and the distribution of $[z \mid y]$ (i.e., the posterior distribution of the latent variable given the data $y$).
The prior probabilities are the same for all subjects, i.e., $\pi$. However, the posterior probabilities will be different per subject because each subject will have his/her own probability of belonging to the first component (i.e., the component for which $z_i = 1$) depending on their data $y_i$. Hence, if you plot the posterior probabilities, the plot can be multimodal. | Question about the latent variable in EM algorithm
If I correctly read between the lines, your question is about the difference between the distribution of $[z]$ (i.e., the prior distribution of the latent variable), and the distribution of $[z \mid y |
54,836 | Ways of Testing Linearity Assumption in Multiple Regression apart from Residual Plots | What you can do is fit a model that relaxes the linearity assumption, using, e.g., splines, and compare it with the model that assumes linearity. For example, in R, for a linear regression model you can do something like this:
library("splines")
# linear effect of age on y
fm_linear <- lm(y ~ age + sex, data = your_data)
# nonlinear effect of age on y using natural cubic splines
fm_non_linear <- lm(y ~ ns(age, 3) + sex, data = your_data)
# F-test between the two models
anova(fm_linear, fm_non_linear) | Ways of Testing Linearity Assumption in Multiple Regression apart from Residual Plots | What you can do is fit a model that relaxes the linearity assumption, using, e.g., splines, and compare it with the model that assumes linearity. For example, in R, for a linear regression model you c | Ways of Testing Linearity Assumption in Multiple Regression apart from Residual Plots
What you can do is fit a model that relaxes the linearity assumption, using, e.g., splines, and compare it with the model that assumes linearity. For example, in R, for a linear regression model you can do something like that:
library("splines")
# linear effect of age on y
fm_linear <- lm(y ~ age + sex, data = your_data)
# nonlinear effect of age on y using natural cubic splines
fm_non_linear <- lm(y ~ ns(age, 3) + sex, data = your_data)
# F-test between the two models
anova(fm_linear, fm_non_linear) | Ways of Testing Linearity Assumption in Multiple Regression apart from Residual Plots
What you can do is fit a model that relaxes the linearity assumption, using, e.g., splines, and compare it with the model that assumes linearity. For example, in R, for a linear regression model you c |
54,837 | How to interpret the result of Friedman's test? | The problem is that the matrix has no informative row or column labels, which makes the results difficult to understand. ‡
In the following, since there are no column labels, the columns will be labeled V1 to V7 by default. This will make it easy to evaluate the comparisons between them.
if(!require(PMCMR)){install.packages("PMCMR")}
Input =("
0.9753954 0.9771529 0.9789104 0.9789104 0.9806678 0.9771529 0.9806678
0.9736380 0.9806678 0.9806678 0.9806678 0.9841828 0.9771529 0.9771529
0.9753954 0.9841828 0.9806678 0.9771529 0.9806678 0.9771529 0.9718805
0.9771529 0.9859402 0.9789104 0.9789104 0.9824253 0.9824253 0.9841828
0.9736380 0.9806678 0.9771529 0.9824253 0.9824253 0.9806678 0.9771529
0.9701230 0.9789104 0.9736380 0.9806678 0.9841828 0.9824253 0.9753954
0.9912127 0.9912127 0.9859402 0.9859402 0.9859402 0.9841828 0.9824253
0.9789104 0.9806678 0.9859402 0.9859402 0.9841828 0.9806678 0.9789104
0.9806678 0.9841828 0.9876977 0.9824253 0.9841828 0.9859402 0.9841828
0.9789104 0.9771529 0.9753954 0.9789104 0.9666081 0.9613357 0.9630931
")
Matrix = as.matrix(read.table(textConnection(Input)))
Matrix
### V1 V2 V3 V4 V5 V6 V7
### [1,] 0.9753954 0.9771529 0.9789104 0.9789104 0.9806678 0.9771529 0.9806678
### [2,] 0.9736380 0.9806678 0.9806678 0.9806678 0.9841828 0.9771529 0.9771529
### [3,] 0.9753954 0.9841828 0.9806678 0.9771529 0.9806678 0.9771529 0.9718805
### [4,] 0.9771529 0.9859402 0.9789104 0.9789104 0.9824253 0.9824253 0.9841828
### [5,] 0.9736380 0.9806678 0.9771529 0.9824253 0.9824253 0.9806678 0.9771529
### [6,] 0.9701230 0.9789104 0.9736380 0.9806678 0.9841828 0.9824253 0.9753954
### [7,] 0.9912127 0.9912127 0.9859402 0.9859402 0.9859402 0.9841828 0.9824253
### [8,] 0.9789104 0.9806678 0.9859402 0.9859402 0.9841828 0.9806678 0.9789104
### [9,] 0.9806678 0.9841828 0.9876977 0.9824253 0.9841828 0.9859402 0.9841828
### [10,] 0.9789104 0.9771529 0.9753954 0.9789104 0.9666081 0.9613357 0.9630931
library(PMCMR)
posthoc.friedman.nemenyi.test(Matrix)
### Pairwise comparisons using Nemenyi multiple comparison test
with q approximation for unreplicated blocked data
### data: Matrix
### V1 V2 V3 V4 V5 V6
### V2 0.088 - - - - -
### V3 0.310 0.998 - - - -
### V4 0.185 1.000 1.000 - - -
### V5 0.027 1.000 0.958 0.991 - -
### V6 0.804 0.830 0.987 0.946 0.576 -
### V7 0.987 0.436 0.804 0.645 0.207 0.996
###
### P value adjustment method: none
The output above is a table of p-values, each comparing two groups. If you are using p = 0.05 as your cutoff, the only significant comparison is V1 vs. V5 (p = 0.027). The rest of the p-values are all greater than 0.05.
It may be useful to translate this matrix of p-values to a compact letter display. In this output, groups sharing a letter are not significantly different. For this I'll use the fullPTable function in the rcompanion package † and multcompLetters from multcompView.
if(!require(multcompView)){install.packages("multcompView")}
if(!require(PMCMR)){install.packages("PMCMR")}
if(!require(rcompanion)){install.packages("rcompanion")}
library(PMCMR)
library(rcompanion)
library(multcompView)
PT = posthoc.friedman.nemenyi.test(Matrix)$p.value
PT1 = fullPTable(PT)
PT1
library(multcompView)
multcompLetters(PT1)
### V1 V2 V3 V4 V5 V6 V7
### "a" "ab" "ab" "ab" "b" "ab" "ab"
V1 and V5 are the only two groups not sharing a letter.
Addition: PMCMRplus package
There are a few different post-hoc tests available for Friedman's test in PMCMRplus package. Functions begin with frdAllPairs. The Nemenyi test appears to produce results similar to those above. For this example, it was necessary to add row labels to the matrix.
if(!require(PMCMRplus)){install.packages("PMCMRplus")}
library(PMCMRplus)
rownames(Matrix) = LETTERS[1:10]
frdAllPairsNemenyiTest(Matrix)
# Pairwise comparisons using Nemenyi-Wilcoxon-Wilcox all-pairs test for a two-way balanced complete block design
#
# V1 V2 V3 V4 V5 V6
# V2 0.088 - - - - -
# V3 0.310 0.998 - - - -
# V4 0.185 1.000 1.000 - - -
# V5 0.027 1.000 0.958 0.991 - -
# V6 0.804 0.830 0.987 0.946 0.576 -
# V7 0.987 0.436 0.804 0.645 0.207 0.996
#
# P value adjustment method: single-step
‡ Note: This answer addresses the primary question: conducting and interpreting Nemenyi test. It does not weigh in on the discussion in the comments, as to whether the generation of this data makes sense or if Friedman's test is the applicable test in this case.
† Caveat: I am the author of this package. | How to interpret the result of Friedman's test? | The problem is that the row and column labels for the matrix make the results difficult to understand. ‡
In the following, since there are no column labels, the columns will be labeled V1 to V7 by def | How to interpret the result of Friedman's test?
The problem is that the matrix has no informative row or column labels, which makes the results difficult to understand. ‡
In the following, since there are no column labels, the columns will be labeled V1 to V7 by default. This will make it easy to evaluate the comparisons between them.
if(!require(PMCMR)){install.packages("PMCMR")}
Input =("
0.9753954 0.9771529 0.9789104 0.9789104 0.9806678 0.9771529 0.9806678
0.9736380 0.9806678 0.9806678 0.9806678 0.9841828 0.9771529 0.9771529
0.9753954 0.9841828 0.9806678 0.9771529 0.9806678 0.9771529 0.9718805
0.9771529 0.9859402 0.9789104 0.9789104 0.9824253 0.9824253 0.9841828
0.9736380 0.9806678 0.9771529 0.9824253 0.9824253 0.9806678 0.9771529
0.9701230 0.9789104 0.9736380 0.9806678 0.9841828 0.9824253 0.9753954
0.9912127 0.9912127 0.9859402 0.9859402 0.9859402 0.9841828 0.9824253
0.9789104 0.9806678 0.9859402 0.9859402 0.9841828 0.9806678 0.9789104
0.9806678 0.9841828 0.9876977 0.9824253 0.9841828 0.9859402 0.9841828
0.9789104 0.9771529 0.9753954 0.9789104 0.9666081 0.9613357 0.9630931
")
Matrix = as.matrix(read.table(textConnection(Input)))
Matrix
### V1 V2 V3 V4 V5 V6 V7
### [1,] 0.9753954 0.9771529 0.9789104 0.9789104 0.9806678 0.9771529 0.9806678
### [2,] 0.9736380 0.9806678 0.9806678 0.9806678 0.9841828 0.9771529 0.9771529
### [3,] 0.9753954 0.9841828 0.9806678 0.9771529 0.9806678 0.9771529 0.9718805
### [4,] 0.9771529 0.9859402 0.9789104 0.9789104 0.9824253 0.9824253 0.9841828
### [5,] 0.9736380 0.9806678 0.9771529 0.9824253 0.9824253 0.9806678 0.9771529
### [6,] 0.9701230 0.9789104 0.9736380 0.9806678 0.9841828 0.9824253 0.9753954
### [7,] 0.9912127 0.9912127 0.9859402 0.9859402 0.9859402 0.9841828 0.9824253
### [8,] 0.9789104 0.9806678 0.9859402 0.9859402 0.9841828 0.9806678 0.9789104
### [9,] 0.9806678 0.9841828 0.9876977 0.9824253 0.9841828 0.9859402 0.9841828
### [10,] 0.9789104 0.9771529 0.9753954 0.9789104 0.9666081 0.9613357 0.9630931
library(PMCMR)
posthoc.friedman.nemenyi.test(Matrix)
### Pairwise comparisons using Nemenyi multiple comparison test
with q approximation for unreplicated blocked data
### data: Matrix
### V1 V2 V3 V4 V5 V6
### V2 0.088 - - - - -
### V3 0.310 0.998 - - - -
### V4 0.185 1.000 1.000 - - -
### V5 0.027 1.000 0.958 0.991 - -
### V6 0.804 0.830 0.987 0.946 0.576 -
### V7 0.987 0.436 0.804 0.645 0.207 0.996
###
### P value adjustment method: none
The output above is a table of p-values, each comparing two groups. If you are using p = 0.05 as your cutoff, the only significant comparison is V1 vs. V5 (p = 0.027). The rest of the p-values are all greater than 0.05.
It may be useful to translate this matrix of p-values to a compact letter display. In this output, groups sharing a letter are not significantly different. For this I'll use the fullPTable function in the rcompanion package † and multcompLetters from multcompView.
if(!require(multcompView)){install.packages("multcompView")}
if(!require(PMCMR)){install.packages("PMCMR")}
if(!require(rcompanion)){install.packages("rcompanion")}
library(PMCMR)
library(rcompanion)
library(multcompView)
PT = posthoc.friedman.nemenyi.test(Matrix)$p.value
PT1 = fullPTable(PT)
PT1
library(multcompView)
multcompLetters(PT1)
### V1 V2 V3 V4 V5 V6 V7
### "a" "ab" "ab" "ab" "b" "ab" "ab"
V1 and V5 are the only two groups not sharing a letter.
Addition: PMCMRplus package
There are a few different post-hoc tests available for Friedman's test in PMCMRplus package. Functions begin with frdAllPairs. The Nemenyi test appears to produce results similar to those above. For this example, it was necessary to add row labels to the matrix.
if(!require(PMCMRplus)){install.packages("PMCMRplus")}
library(PMCMRplus)
rownames(Matrix) = LETTERS[1:10]
frdAllPairsNemenyiTest(Matrix)
# Pairwise comparisons using Nemenyi-Wilcoxon-Wilcox all-pairs test for a two-way balanced complete block design
#
# V1 V2 V3 V4 V5 V6
# V2 0.088 - - - - -
# V3 0.310 0.998 - - - -
# V4 0.185 1.000 1.000 - - -
# V5 0.027 1.000 0.958 0.991 - -
# V6 0.804 0.830 0.987 0.946 0.576 -
# V7 0.987 0.436 0.804 0.645 0.207 0.996
#
# P value adjustment method: single-step
‡ Note: This answer addresses the primary question: conducting and interpreting Nemenyi test. It does not weigh in on the discussion in the comments, as to whether the generation of this data makes sense or if Friedman's test is the applicable test in this case.
† Caveat: I am the author of this package. | How to interpret the result of Friedman's test?
The problem is that the matrix has no informative row or column labels, which makes the results difficult to understand. ‡
In the following, since there are no column labels, the columns will be labeled V1 to V7 by def |
54,838 | Confidence intervals for autocorrelation function | A quick google search with "confidence intervals for acfs" yielded
Janet M. Box-Steffensmeier, John R. Freeman, Matthew P. Hitt, Jon C. W. Pevehouse: Time Series Analysis for the Social Sciences.
In there, on page 38, the standard error of an AC estimator at lag k is stated to be
$AC_{SE,k} = \sqrt{N^{-1}\left(1+2\sum_{i=1}^k[AC_i^2] \right)}$
where $AC_i$ is the AC estimate at lag i and N is the number of time steps in your sample. This is assuming that the true underlying process is actually MA. Assuming asymptotic normality of the AC estimator, you can then calculate the confidence interval at each lag as
$CI_{AC_{k}} = [AC_{k} - 1.96\times AC_{SE,k},\ AC_{k} + 1.96\times AC_{SE,k}]$ (the factor $N^{-1}$ is already included in $AC_{SE,k}$ above).
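In R these bands can be computed directly from the sample ACF; a small sketch (the simulated MA(1) series is only a placeholder for your data):
x  <- as.numeric(arima.sim(list(ma = 0.6), n = 500))  # placeholder series
r  <- acf(x, lag.max = 20, plot = FALSE)$acf[-1]      # AC_1, ..., AC_20
N  <- length(x)
se <- sqrt((1 + 2 * cumsum(r^2)) / N)                 # AC_SE,k as defined above
cbind(lower = r - 1.96 * se, upper = r + 1.96 * se)   # CI at each lag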
For some further info, see also this and this. | Confidence intervals for autocorrelation function | A quick google search with "confidence intervals for acfs" yielded
Janet M. Box-Steffensmeier, John R. Freeman, Matthew P. Hitt, Jon C. W. Pevehouse: Time Series Analysis for the Social Sciences.
In t | Confidence intervals for autocorrelation function
A quick google search with "confidence intervals for acfs" yielded
Janet M. Box-Steffensmeier, John R. Freeman, Matthew P. Hitt, Jon C. W. Pevehouse: Time Series Analysis for the Social Sciences.
In there, on page 38, the standard error of an AC estimator at lag k is stated to be
$AC_{SE,k} = \sqrt{N^{-1}\left(1+2\sum_{i=1}^k[AC_i^2] \right)}$
where $AC_i$ is the AC estimate at lag i and N is the number of time steps in your sample. This is assuming that the true underlying process is actually MA. Assuming asymptotic normality of the AC estimator, you can then calculate the confidence interval at each lag as
$CI_{AC_{k}} = [AC_{k} - 1.96\times AC_{SE,k},\ AC_{k} + 1.96\times AC_{SE,k}]$ (the factor $N^{-1}$ is already included in $AC_{SE,k}$ above).
For some further info, see also this and this. | Confidence intervals for autocorrelation function
A quick google search with "confidence intervals for acfs" yielded
Janet M. Box-Steffensmeier, John R. Freeman, Matthew P. Hitt, Jon C. W. Pevehouse: Time Series Analysis for the Social Sciences.
In t |
54,839 | Confidence intervals for autocorrelation function | When the ACF is estimated from data I think also the error should be directly computed from the same data. I think that is generally the safest and most conservative approach. I would just store the resulting products of the signal with itself after each shift in the row of a matrix. Then you have the full distribution of values at each shift. Computing the column-wise average and the std/sem gives you a estimate of the AC and its variation. Bootstrapping and other resampling procedures on that matrix enable you to estimate the confidence intervals. This has the advantage that there are no special assumptions to be made and it can always be made compatible with your specific AC computation (normalization, padding etc.). | Confidence intervals for autocorrelation function | When the ACF is estimated from data I think also the error should be directly computed from the same data. I think that is generally the safest and most conservative approach. I would just store the r | Confidence intervals for autocorrelation function
When the ACF is estimated from data, I think the error should also be computed directly from the same data. I think that is generally the safest and most conservative approach. I would just store the resulting products of the signal with itself after each shift in the rows of a matrix. Then you have the full distribution of values at each shift. Computing the column-wise average and the std/sem gives you an estimate of the AC and its variation. Bootstrapping and other resampling procedures on that matrix enable you to estimate the confidence intervals. This has the advantage that there are no special assumptions to be made and it can always be made compatible with your specific AC computation (normalization, padding etc.). | Confidence intervals for autocorrelation function
When the ACF is estimated from data I think also the error should be directly computed from the same data. I think that is generally the safest and most conservative approach. I would just store the r |
54,840 | Build a (normal?) distribution from $n$, quartiles and mean? | The answer is No, not exactly anyhow.
If you have two quartiles of a normal population then you can find $\mu$ and $\sigma.$ For example, the lower and upper quartiles of $\mathsf{Norm}(\mu = 100,\, \sigma = 10)$ are $93.255$ and $106.745,$ respectively.
qnorm(c(.25, .75), 100, 10)
[1] 93.2551 106.7449
Then $P\left(\frac{X-\mu}{\sigma} < -0.6745\right) = 0.25$
and $P\left(\frac{X-\mu}{\sigma} < 0.6745\right) = 0.75$
provide two equations that can be solved to find $\mu$ and $\sigma.$
qnorm(c(.25,.75))
[1] -0.6744898 0.6744898
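Using the quartiles shown above, the two equations can be solved directly:
q1 <- 93.2551; q3 <- 106.7449        # the two quartiles from above
z  <- qnorm(0.75)                    # 0.6744898
sigma <- (q3 - q1) / (2 * z)         # about 10
mu    <- (q1 + q3) / 2               # 100
c(mu = mu, sigma = sigma)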
However, sample quartiles are not population quartiles. There is not
enough information in any normal sample precisely to determine $\mu$ and
$\sigma.$
And you are not really sure your sample is from a normal population.
If the population has mean $\mu$ and median $\eta,$ then
the sample mean and median, respectively, are estimates of these two
parameters. If the population is symmetrical, then $\mu = \eta,$ but
you say the sample mean and median do not agree. So you cannot be sure
the population is symmetrical, much less normal. | Build a (normal?) distribution from $n$, quartiles and mean? | The answer is No, not exactly anyhow.
If you have two quartiles of a normal population then you can find $\mu$ and $\sigma.$ For example the lower and upper quantiles of $\mathsf{Norm}(\mu = 100,\, \s | Build a (normal?) distribution from $n$, quartiles and mean?
The answer is No, not exactly anyhow.
If you have two quartiles of a normal population then you can find $\mu$ and $\sigma.$ For example, the lower and upper quartiles of $\mathsf{Norm}(\mu = 100,\, \sigma = 10)$ are $93.255$ and $106.745,$ respectively.
qnorm(c(.25, .75), 100, 10)
[1] 93.2551 106.7449
Then $P\left(\frac{X-\mu}{\sigma} < -0.6745\right) = 0.25$
and $P\left(\frac{X-\mu}{\sigma} < 0.6745\right) = 0.75$
provide two equations that can be solved to find $\mu$ and $\sigma.$
qnorm(c(.25,.75))
[1] -0.6744898 0.6744898
However, sample quartiles are not population quartiles. There is not
enough information in any normal sample precisely to determine $\mu$ and
$\sigma.$
And you are not really sure your sample is from a normal population.
If the population has mean $\mu$ and median $\eta,$ then
the sample mean and median, respectively, are estimates of these two
parameters. If the population is symmetrical, then $\mu = \eta,$ but
you say the sample mean and median do not agree. So you cannot be sure
the population is symmetrical, much less normal. | Build a (normal?) distribution from $n$, quartiles and mean?
The answer is No, not exactly anyhow.
If you have two quartiles of a normal population then you can find $\mu$ and $\sigma.$ For example the lower and upper quantiles of $\mathsf{Norm}(\mu = 100,\, \s |
54,841 | Build a (normal?) distribution from $n$, quartiles and mean? | Based on @whuber's Comments about 'modeling', I gave some thought to relatively elementary methods that might be used to estimate the parameters of a normal distribution given the sample size, sample quartiles, and sample mean, assuming that data are normal.
Most of this will work better for very large $n.$ After some experimentation, I found that sample sizes around 35 are just large enough to get reasonably good results. All computations are done using R; seeds (based on current dates) are shown for simulations.
Hypothetical sample: So let's suppose we have a sample (rounded to two places) that is known to have come from a normal population of size
$n = 35,$ but with unknown $\mu$ and $\sigma.$ We are given that the sample mean is $A = 49.19,$ the sample median is $H = 47.72,$ and that the lower and upper quartiles are $Q_1 = 43.62,\, Q_3 = 54.73,$ respectively. (Sometimes reports and articles don't give complete datasets, but do give such summary data about the sample.)
Estimating the population mean. The best estimate of the population mean $\mu$ is the sample mean $A = \hat \mu = 49.19.$
Estimating the population standard deviation: If we knew it, a good estimate of the population standard deviation (SD) would be the sample SD $S,$ but that information is not given.
The interquartile range (IQR) of a standard normal population is $1.35,$ and the IQR of a very large normal sample would be about $1.35\sigma.$
diff(qnorm(c(.25, .75)))
[1] 1.34898
set.seed(1018); IQR(rnorm(10^6))
[1] 1.351207
The IQR of our sample of size $n = 35$ is $54.73 - 43.62 = 11.11$ and the expected IQR of a standard normal sample of size 35 is 1.274. So we can estimate $\sigma$ for our population using the sample IQR: $\check \sigma = 11.11/1.274 = 8.72.$
set.seed(910); m = 10^6; n = 25
iqr = replicate(m, IQR(rnorm(n)))
mean(iqr); sd(iqr)
[1] 1.274278
[1] 0.3024651
Assessing results: Thus, we can surmise that our normal population distribution is roughly,
$\mathsf{Norm}(49.19, 8.72).$ Actually, I simulated the sample from
$\mathsf{Norm}(50, 10).$
set.seed(2018); x = round(rnorm(35, 50, 10), 2); summary(x)
Min. 1st Qu. Median Mean 3rd Qu. Max.
31.73 43.62 47.72 49.19 54.73 70.99
It seems worthwhile to make two comparisons: (a) how well does the given information match the CDF of $\mathsf{Norm}(49.19, 8.72),$ and (b) how well does the CDF of this estimated distribution match what we know to be the true normal distribution. Of course,
in a practical situation, we would not know the true normal distribution, so the second comparison would be impossible.
In the figure below, the blue curve is the estimated CDF; the solid red points show the sample values $Q_1, H, Q_3,$ and the red circle shows $A.$ The CDF of the true normal distribution is shown as a broken curve. It is no surprise that the three values $Q_1, Q_3,$ and $A$ used to estimate normal parameters fall near the estimated normal CDF.
curve(pnorm(x, 49.19, 8.27), 0, 80, lwd=2, ylab="CDF",
main="CDF of NORM(49.19, 8.27)", xaxs="i", col="blue")
abline(h=0:1, col="green2")
points(c(43.62, 47.72, 54.73), c(.25, .5, .75), pch=19, col="red")
curve(pnorm(x, 50, 10), add=T, lty="dotted")
points(49.19, .50, col="red")
About symmetry: A remaining question is how much concern might have been appropriate about the normality of the data, upon noting that the sample mean exceeds the sample median by $D = A - H = 49.19 - 47.72 = 1.47.$
We can get a good idea by simulating the difference $D = A - H$ for many samples of size $n = 35$ from $\mathsf{Norm}(\mu = 49.19, \sigma = 8.27).$ A simple simulation shows that a larger positive difference might occur in such a normal sample about 11% of the time.
set.seed(918); m = 10^6; n = 25; d = numeric(m)
for (i in 1:m) {
y = rnorm(n, 49.19, 8.27)
d[i] = mean(y) - median(y) }
mean(d > 1.47)
[1] 0.113
Thus there is no significant evidence of skewness in the comparison of the our sample mean and median. Of course, the Laplace and Cauchy families of distributions are also symmetrical, so this would hardly be "proof" that the sample is from a normal population. | Build a (normal?) distribution from $n$, quartiles and mean? | Based on @whuber's Comments about 'modeling', I gave some thought to relatively elementary methods that might be used to estimate the parameters of a normal distribution given the sample size, sample | Build a (normal?) distribution from $n$, quartiles and mean?
Based on @whuber's Comments about 'modeling', I gave some thought to relatively elementary methods that might be used to estimate the parameters of a normal distribution given the sample size, sample quartiles, and sample mean, assuming that data are normal.
Most of this will work better for very large $n.$ After some experimentation, I found that sample sizes around 35 are just large enough to get reasonably good results. All computations are done using R; seeds (based on current dates) are shown for simulations.
Hypothetical sample: So let's suppose we have a sample (rounded to two places) that is known to have come from a normal population of size
$n = 35,$ but with unknown $\mu$ and $\sigma.$ We are given that the sample mean is $A = 49.19,$ the sample median is $H = 47.72,$ and that the lower and upper quartiles are $Q_1 = 43.62,\, Q_3 = 54.73,$ respectively. (Sometimes reports and articles don't give complete datasets, but do give such summary data about the sample.)
Estimating the population mean. The best estimate of the population mean $\mu$ is the sample mean $A = \hat \mu = 49.19.$
Estimating the population standard deviation: If we knew it, a good estimate of the population standard deviation (SD) would be the sample SD $S,$ but that information is not given.
The interquartile range (IQR) of a standard normal population is $1.35,$ and the IQR of a very large normal sample would be about $1.35\sigma.$
diff(qnorm(c(.25, .75)))
[1] 1.34898
set.seed(1018); IQR(rnorm(10^6))
[1] 1.351207
The IQR of our sample of size $n = 35$ is $54.73 - 43.62 = 11.11$ and the expected IQR of a standard normal sample of size 35 is 1.274. So we can estimate $\sigma$ for our population using the sample IQR: $\check \sigma = 11.11/1.274 = 8.72.$
set.seed(910); m = 10^6; n = 25
iqr = replicate(m, IQR(rnorm(n)))
mean(iqr); sd(iqr)
[1] 1.274278
[1] 0.3024651
Assessing results: Thus, we can surmise that our normal population distribution is roughly,
$\mathsf{Norm}(49.19, 8.72).$ Actually, I simulated the sample from
$\mathsf{Norm}(50, 10).$
set.seed(2018); x = round(rnorm(35, 50, 10), 2); summary(x)
Min. 1st Qu. Median Mean 3rd Qu. Max.
31.73 43.62 47.72 49.19 54.73 70.99
It seems worthwhile to make two comparisons: (a) how well does the given information match the CDF of $\mathsf{Norm}(49.19, 8.72),$ and (b) how well does the CDF of this estimated distribution match what we know to be the true normal distribution. Of course,
in a practical situation, we would not know the true normal distribution, so the second comparison would be impossible.
In the figure below, the blue curve is the estimated CDF; the solid red points show the sample values $Q_1, H, Q_3,$ and the red circle shows $A.$ The CDF of the true normal distribution is shown as a broken curve. It is no surprise that the three values $Q_1, Q_3,$ and $A$ used to estimate normal parameters fall near the estimated normal CDF.
curve(pnorm(x, 49.19, 8.27), 0, 80, lwd=2, ylab="CDF",
main="CDF of NORM(49.19, 8.27)", xaxs="i", col="blue")
abline(h=0:1, col="green2")
points(c(43.62, 47.72, 54.73), c(.25, .5, .75), pch=19, col="red")
curve(pnorm(x, 50, 10), add=T, lty="dotted")
points(49.19, .50, col="red")
About symmetry: A remaining question is how much concern might have been appropriate about the normality of the data, upon noting that the sample mean exceeds the sample median by $D = A - H = 49.19 - 47.72 = 1.47.$
We can get a good idea by simulating the difference $D = A - H$ for many samples of size $n = 35$ from $\mathsf{Norm}(\mu = 49.19, \sigma = 8.27).$ A simple simulation shows that a larger positive difference might occur in such a normal sample about 11% of the time.
set.seed(918); m = 10^6; n = 25; d = numeric(m)
for (i in 1:m) {
y = rnorm(n, 49.19, 8.27)
d[i] = mean(y) - median(y) }
mean(d > 1.47)
[1] 0.113
Thus there is no significant evidence of skewness in the comparison of our sample mean and median. Of course, the Laplace and Cauchy families of distributions are also symmetrical, so this would hardly be "proof" that the sample is from a normal population.
Based on @whuber's Comments about 'modeling', I gave some thought to relatively elementary methods that might be used to estimate the parameters of a normal distribution given the sample size, sample |
54,842 | Build a (normal?) distribution from $n$, quartiles and mean? | It is possible to estimate the parameters based on this information, but constructing some (approximate) likelihood function based on the given information $n,Q_1, Q_3, \bar{X}_n$ does not seem easy. In a way this is a follow-up on the answer by @BruceET, trying to formalize ideas in that answer.
Using the theory of order statistics we can construct a likelihood function based on $n,Q_1, Q_3$. How to also incorporate the observed mean does seem more difficult. To simplify I will assume $n=4k$ and that $X_{(k)}\le Q_1\le X_{(k+1)}$ and $X_{(3k)}\le Q_3\le X_{(3k+1)}$. An exact analysis (if at all possible) for small $n$ would need to know exactly how the quartiles were computed (different methods can give quite different answers for small $n$). Then we can find the likelihood
$$
L(\mu,\sigma) \propto \Phi(\frac{Q_1-\mu}{\sigma})^k \left[\Phi(\frac{Q_3-\mu}{\sigma})-\Phi(\frac{Q_1-\mu}{\sigma}) \right]^{2k}\left[1-\Phi(\frac{Q_3-\mu}{\sigma})\right]^k \cdot \phi(\frac{Q_1-\mu}{\sigma})\phi(\frac{Q_3-\mu}{\sigma})/\sigma^2
$$ where $\phi, \Phi$ are the standard normal pdf, cdf respectively. This can now be used as any other likelihood function.
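As a rough numerical sketch of how this likelihood could be used (my own addition, not part of the original argument): take the quartiles quoted in the other answer, $Q_1 = 43.62$ and $Q_3 = 54.73$, pretend $n = 36$ so that $n = 4k$ with $k = 9$, and minimize the negative log-likelihood with optim.
negloglik <- function(par, Q1, Q3, k) {
  mu <- par[1]; sigma <- par[2]
  if (sigma <= 0) return(Inf)
  z1 <- (Q1 - mu)/sigma; z3 <- (Q3 - mu)/sigma
  -( k * pnorm(z1, log.p = TRUE) +
     2 * k * log(pnorm(z3) - pnorm(z1)) +
     k * pnorm(z3, lower.tail = FALSE, log.p = TRUE) +
     dnorm(z1, log = TRUE) + dnorm(z3, log = TRUE) - 2 * log(sigma) )
}
optim(c(50, 10), negloglik, Q1 = 43.62, Q3 = 54.73, k = 9)$par   # rough MLE of (mu, sigma)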
But to extend this to a likelihood also using the mean, we need the joint distribution of $Q_1, Q_3, \bar{X}_n$ and that would probably have to be approximated in some way. That seems like a nice little project!
Other ideas to look into here is ABC (approximate bayesian computation) which seems a good fit to estimation based on (insufficient) summary statistics. Or maybe simulated maximum likelihood. I will come back here to look at that. | Build a (normal?) distribution from $n$, quartiles and mean? | It is possible to estimate the parameters based on this information, but to construct some (approximate) likelihood function based on the given information $n,Q_1, Q_3, \bar{X}_n$ do not see easy. In | Build a (normal?) distribution from $n$, quartiles and mean?
It is possible to estimate the parameters based on this information, but constructing some (approximate) likelihood function based on the given information $n,Q_1, Q_3, \bar{X}_n$ does not seem easy. In a way this is a follow-up on the answer by @BruceET, trying to formalize ideas in that answer.
Using the theory of order statistics we can construct a likelihood function based on $n,Q_1, Q_3$. How to also incorporate the observed mean does seem more difficult. To simplify I will assume $n=4k$ and that $X_{(k)}\le Q_1\le X_{(k+1)}$ and $X_{(3k)}\le Q_3\le X_{(3k+1)}$. An exact analysis (if at all possible) for small $n$ would need to know exactly how the quartiles were computed (different methods can give quite different answers for small $n$). Then we can find the likelihood
$$
L(\mu,\sigma) \propto \Phi(\frac{Q_1-\mu}{\sigma})^k \left[\Phi(\frac{Q_3-\mu}{\sigma})-\Phi(\frac{Q_1-\mu}{\sigma}) \right]^{2k}\left[1-\Phi(\frac{Q_3-\mu}{\sigma})\right]^k \cdot \phi(\frac{Q_1-\mu}{\sigma})\phi(\frac{Q_3-\mu}{\sigma})/\sigma^2
$$ where $\phi, \Phi$ are the standard normal pdf, cdf respectively. This can now be used as any other likelihood function.
But to extend this to a likelihood also using the mean, we need the joint distribution of $Q_1, Q_3, \bar{X}_n$ and that would probably have to be approximated in some way. That seems like a nice little project!
Other ideas to look into here is ABC (approximate bayesian computation) which seems a good fit to estimation based on (insufficient) summary statistics. Or maybe simulated maximum likelihood. I will come back here to look at that. | Build a (normal?) distribution from $n$, quartiles and mean?
It is possible to estimate the parameters based on this information, but to construct some (approximate) likelihood function based on the given information $n,Q_1, Q_3, \bar{X}_n$ do not see easy. In |
54,843 | What is intuition behind high variance of Monte Carlo method? [closed] | In RL, for value functions, the bias and variance refer to behaviour of different kinds of estimate for the value function. The value function's true value is the expected return from a specific starting state (and action for action values), assuming that all actions are selected according to the policy being evaluated. For control problems you can just aim for the optimal policy, but the bias and variance are considered with respect to the current "best guess" at the policy. For Monte Carlo control, that is either greedy policy with respect to current Q estimates, or $\epsilon$-greedy with respect to same (for off-policy and on-policy control respectively).
The definition of the action value function is
$$Q_{\pi}(s,a) = \mathbb{E}_{\pi}[G_t | S_t=s, A_t=a] = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty}\gamma^k R_{t+k+1} | S_t=s, A_t=a]$$
From this we can see, almost trivially, that Monte Carlo estimates are unbiased. That is because a single Monte Carlo estimate is given by
$$\hat{q}(s_t,a_t) = \sum_{k=0}^{\infty}\gamma^k r_{t+k+1}$$
It is clearly a sample of the same stochastic function given by the definition of Q. It is unbiased in the same way that a roll of a die is an unbiased estimate of the expected result of rolling that die.
Variance is slightly trickier. Fundamentally, the variance is high because the return is the sum of many random variables ($R_t ... \gamma^{T-t}R_T$, each of which depends on distributions not just in reward $R$, but also in the choice of action $A$ by the policy, and state transition dynamics that pick each $S$) and the variance of a sum is the sum of the variances when the variables are independent - however in RL, the sequence of $R_t$ can be highly correlated. The correlation is not guaranteed by the form of the MDP in general, but it can often be the case. As a result, I don't think you will find a general formula for variance of Monte Carlo estimates of Q, and the argument is more about intuition and empirical data.
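A toy numerical illustration of this point (my own, not from the original answer): a single recurrent state that loops back to itself with probability $p$ and gives reward 1 per step. The full discounted return (the Monte Carlo sample) has much larger variance than a one-step target built with the true value $v$; with a learned value estimate that one-step target would also be biased, which is the trade-off discussed next.
set.seed(1); p <- 0.9; gamma <- 0.95; m <- 10^4
v <- 1 / (1 - gamma * p)            # true expected return from the state
mc_return <- function() {           # one sampled full return
  g <- 0; k <- 0
  repeat {
    g <- g + gamma^k                # reward of 1 at step k, discounted
    if (runif(1) > p) break         # episode terminates with probability 1 - p
    k <- k + 1
  }
  g
}
td_target <- function() 1 + gamma * (runif(1) < p) * v   # one-step target using the true v
g_mc <- replicate(m, mc_return()); g_td <- replicate(m, td_target())
c(mean(g_mc), mean(g_td))           # both close to v (both unbiased here)
c(var(g_mc), var(g_td))             # the Monte Carlo returns vary far more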
It is easier to see that variance of Monte Carlo is higher in general than the variance of one-step Temporal Difference methods. The formula for a basic TD Target (equivalent to the return $G_t$ from Monte Carlo) is
$$\hat{q}(s_t,a_t) = r_{t+1} + \gamma \hat{q}(s_{t+1},a_{t+1})$$
This has only a fixed number of three random variables that have been sampled from the single step being used. Therefore you can expect it to have proportionally less variance than any Monte Carlo estimate based on a longer trajectory. The big problem with it is reliance on the estimate $\hat{q}$, because during the learning process this is very unlikely to be a perfect estimate. In fact it is biased towards whatever the starting values were for $\hat{q}$, which is typically either zero for simple estimation functions, or random such as when dealing with neural networks. That said, often the low variance is a deciding factor and TD learning will require less training data than Monte Carlo on the same problem. | What is intuition behind high variance of Monte Carlo method? [closed] | In RL, for value functions, the bias and variance refer to behaviour of different kinds of estimate for the value function. The value function's true value is the expected return from a specific start | What is intuition behind high variance of Monte Carlo method? [closed]
In RL, for value functions, the bias and variance refer to behaviour of different kinds of estimate for the value function. The value function's true value is the expected return from a specific starting state (and action for action values), assuming that all actions are selected according to the policy being evaluated. For control problems you can just aim for the optimal policy, but the bias and variance are considered with respect to the current "best guess" at the policy. For Monte Carlo control, that is either greedy policy with respect to current Q estimates, or $\epsilon$-greedy with respect to same (for off-policy and on-policy control respectively).
The definition of the action value function is
$$Q_{\pi}(s,a) = \mathbb{E}_{\pi}[G_t | S_t=s, A_t=a] = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty}\gamma^k R_{t+k+1} | S_t=s, A_t=a]$$
From this we can see, almost trivially, that Monte Carlo estimates are unbiased. That is because a single Monte Carlo estimate is given by
$$\hat{q}(s_t,a_t) = \sum_{k=0}^{\infty}\gamma^k r_{t+k+1}$$
It is clearly a sample of the same stochastic function given by the definition of Q. It is unbiased in the same way that a roll of a die is an unbiased estimate of the expected result of rolling that die.
Variance is slightly trickier. Fundamentally, the variance is high because the return is the sum of many random variables ($R_t ... \gamma^{T-t}R_T$, each of which depends on distributions not just in reward $R$, but also in the choice of action $A$ by the policy, and state transition dynamics that pick each $S$) and the variance of a sum is the sum of the variances when the variables are independent - however in RL, the sequence of $R_t$ can be highly correlated. The correlation is not guaranteed by the form of the MDP in general, but it can often be the case. As a result, I don't think you will find a general formula for variance of Monte Carlo estimates of Q, and the argument is more about intuition and empirical data.
It is easier to see that variance of Monte Carlo is higher in general than the variance of one-step Temporal Difference methods. The formula for a basic TD Target (equivalent to the return $G_t$ from Monte Carlo) is
$$\hat{q}(s_t,a_t) = r_{t+1} + \gamma \hat{q}(s_{t+1},a_{t+1})$$
This has only a fixed number of three random variables that have been sampled from the single step being used. Therefore you can expect it to have proportionally less variance than any Monte Carlo estimate based on a longer trajectory. The big problem with it is reliance on the estimate $\hat{q}$, because during the learning process this is very unlikely to be a perfect estimate. In fact it is biased towards whatever the starting values were for $\hat{q}$, which is typically either zero for simple estimation functions, or random such as when dealing with neural networks. That said, often the low variance is a deciding factor and TD learning will require less training data than Monte Carlo on the same problem. | What is intuition behind high variance of Monte Carlo method? [closed]
In RL, for value functions, the bias and variance refer to behaviour of different kinds of estimate for the value function. The value function's true value is the expected return from a specific start |
54,844 | Why do descriptive statistics contradict with regression coefficents? | This is not an issue with L1 norms nor with logistic regression specifically; it will happen with ordinary L2-norm multiple linear regression. The direction of an unconditional relationship can change once you condition on another variable (include an additional feature).
Consider 4 groups, where the $p=\text{Pr}(Y=1)$ is an increasing function of $x$ within each group but where the four groups get lower average $p$ as x increases:
Of course the population $p$'s are not observable in practice; you only observe the proportion of Y's that are 1. However, if you have many replicates at each x-value (say 50 or something), you could actually observe something very like this for the sample proportions at each x.
Now if you fit a logistic to just the y vs x values, ignoring the group factor, you will get a negative coefficient. But as soon as you include group, the coefficient for x will be positive.
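A small simulation along these lines (my own sketch with made-up numbers, just to show the sign flip):
set.seed(1)
n_per <- 200
base  <- rep(c(0, 2, 4, 6), each = n_per)               # group-specific x offsets
group <- factor(rep(1:4, each = n_per))
x     <- base + runif(4 * n_per)                        # x increases from group to group
eta   <- 2 + 1.0 * (x - base) - 1.5 * as.numeric(group) # p rises with x within a group,
y     <- rbinom(4 * n_per, 1, plogis(eta))              #   but drops across groups
coef(glm(y ~ x,         family = binomial))["x"]        # negative slope for x
coef(glm(y ~ x + group, family = binomial))["x"]        # positive slope (about +1)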
[In fact it is possible that the sign could even flip each time you include an additional feature.]
Further reading:
https://en.wikipedia.org/wiki/Simpson%27s_paradox
https://en.wikipedia.org/wiki/Omitted-variable_bias | Why do descriptive statistics contradict with regression coefficents? | This not an issue with L1 norms nor with logistic regression specifically; it will happen with ordinary L2-norm multiple linear regression. The direction of an unconditional relationship can change on | Why do descriptive statistics contradict with regression coefficents?
This is not an issue with L1 norms nor with logistic regression specifically; it will happen with ordinary L2-norm multiple linear regression. The direction of an unconditional relationship can change once you condition on another variable (include an additional feature).
Consider 4 groups, where the $p=\text{Pr}(Y=1)$ is an increasing function of $x$ within each group but where the four groups get lower average $p$ as x increases:
Of course the population $p$'s are not observable in practice; you only observe the proportion of Y's that are 1. However, if you have many replicates at each x-value (say 50 or something), you could actually observe something very like this for the sample proportions at each x.
Now if you fit a logistic to just the y vs x values, ignoring the group factor, you will get a negative coefficient. But as soon as you include group, the coefficient for x will be positive.
[In fact it is possible that the sign could even flip each time you include an additional feature.]
Further reading:
https://en.wikipedia.org/wiki/Simpson%27s_paradox
https://en.wikipedia.org/wiki/Omitted-variable_bias | Why do descriptive statistics contradict with regression coefficents?
This not an issue with L1 norms nor with logistic regression specifically; it will happen with ordinary L2-norm multiple linear regression. The direction of an unconditional relationship can change on |
54,845 | In propensity score matching, should a variable used in exact matching also be used in the model? | I always fall back on the propensity score tautology (Ho, Imai, King, & Stuart, 2007): a propensity score (model) should be evaluated for its ability to yield balanced samples. Try both methods and see which yields better balance. It's hard to make a general rule when every dataset is different and might have peculiarities.
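"Try both" can be made concrete along these lines (a rough sketch on simulated data; it assumes the MatchIt package, and argument details may differ between versions):
library(MatchIt)
set.seed(1)
n  <- 500
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rbinom(n, 1, 0.5)   # x3 is the exact-matching variable
tr <- rbinom(n, 1, plogis(0.5 * x1 + 0.5 * x2 + x3))
d  <- data.frame(tr, x1, x2, x3)
m_a <- matchit(tr ~ x1 + x2,      data = d, method = "nearest", exact = ~ x3)  # PS model without x3
m_b <- matchit(tr ~ x1 + x2 + x3, data = d, method = "nearest", exact = ~ x3)  # PS model with x3
summary(m_a); summary(m_b)   # compare standardized mean differences; keep the better-balanced specification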
If the variable does in fact predict selection into treatment, omitting it will yield the wrong propensity score model and therefore "incorrect" propensity scores, regardless of whether you exact match on the variable. These incorrect propensity scores might yield poor balance on the other covariates, even if the exact matching variable is perfectly balanced. | In propensity score matching, should a variable used in exact matching also be used in the model? | I always fall back on the propensity score tautology (Ho, Imai. King, & Stuart, 2007): a propensity score (model) should be evaluated for its ability to yield balance samples. Try both methods and see | In propensity score matching, should a variable used in exact matching also be used in the model?
I always fall back on the propensity score tautology (Ho, Imai, King, & Stuart, 2007): a propensity score (model) should be evaluated for its ability to yield balanced samples. Try both methods and see which yields better balance. It's hard to make a general rule when every dataset is different and might have peculiarities.
If the variable does in fact predict selection into treatment, omitting it will yield the wrong propensity score model and therefore "incorrect" propensity scores, regardless of whether you exact match on the variable. These incorrect propensity scores might yield poor balance on the other covariates, even if the exact matching variable is perfectly balanced. | In propensity score matching, should a variable used in exact matching also be used in the model?
I always fall back on the propensity score tautology (Ho, Imai. King, & Stuart, 2007): a propensity score (model) should be evaluated for its ability to yield balance samples. Try both methods and see |
54,846 | In propensity score matching, should a variable used in exact matching also be used in the model? | Yes, we can (and it is recommended to) use a variable $x$ that we used for matching in our final model. The matching itself can also have different steps as here, both exact and then PSM. Using multiple procedures in our analysis does not necessitate using a variable $x$ only in one of the steps.
Using certain covariates with matching procedures as well as other steps of the analysis falls broadly within the context of doubly robust estimators; Stuart (2011) Matching methods for causal inference: A review and a look forward and Kang & Schafer (2007) Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data are good places to look at this in more detail. As you correctly recognise, using $x$ again will potentially assist in terms of model efficiency (e.g. standard errors will be smaller). This is true even for exact matching followed by PS calculations as ultimately we will get the output of a logistic model.
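For concreteness, here is a minimal sketch of one common doubly robust estimator (AIPW) on simulated data (my own illustration of the general idea, not the specific procedure in the papers cited above); note that $x$ enters both the treatment model and the outcome model:
set.seed(2)
n  <- 2000
x  <- rnorm(n)
tr <- rbinom(n, 1, plogis(0.8 * x))
y  <- 1 + 2 * tr + 1.5 * x + rnorm(n)
ps <- fitted(glm(tr ~ x, family = binomial))                              # treatment (propensity) model
m1 <- predict(lm(y ~ x, subset = tr == 1), newdata = data.frame(x = x))   # outcome model, treated
m0 <- predict(lm(y ~ x, subset = tr == 0), newdata = data.frame(x = x))   # outcome model, control
mean(tr * (y - m1) / ps + m1) - mean((1 - tr) * (y - m0) / (1 - ps) + m0) # close to the true effect of 2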
As the matching procedure is not guaranteed to be perfect, using a variable $x$ both for matching as well as in our final model is almost certainly more helpful (e.g. it guards against misspecification of the PSM model and reduces the standard error of the final estimates) at the expense of having slightly fewer degrees of freedom in our model.
As always, doubly/triply/quadruply/quintuply robust methods, or any other matching methods (e.g. entropy balancing) cannot guard us against unmeasured confounding variables. | In propensity score matching, should a variable used in exact matching also be used in the model? | Yes, we can/is recommended to use a variable $x$ that we used for matching in our final model. The matching itself can also have different steps as here, both exact and then PSM. Using multiple proce
Yes, we can (and it is recommended to) use a variable $x$ that we used for matching in our final model. The matching itself can also have different steps as here, both exact and then PSM. Using multiple procedures in our analysis does not necessitate using a variable $x$ only in one of the steps.
Using certain covariates with matching procedures as well as other steps of the analysis falls broadly within the context of doubly robust estimators; Stuart (2011) Matching methods for causal inference: A review and a look forward and Kang & Schafer (2007) Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data are good places to look at this in more detail. As you correctly recognise, using $x$ again will potentially assist in terms of model efficiency (e.g. standard errors will be smaller). This is true even for exact matching followed by PS calculations as ultimately we will get the output of a logistic model.
As the matching procedure is not guaranteed to be perfect, using a variable $x$ both for matching as well as in our final model is almost certainly more helpful (e.g. it guards against misspecification of the PSM model and reduces the standard error of the final estimates) at the expense of having slightly fewer degrees of freedom in our model.
As always, doubly/triply/quadruply/quintuply robust methods, or any other matching methods (e.g. entropy balancing) cannot guard us against unmeasured confounding variables. | In propensity score matching, should a variable used in exact matching also be used in the model?
Yes, we can/is recommended to use a variable $x$ that we used for matching in our final model. The matching itself can also have different steps as here, both exact and then PSM. Using multiple proce |
54,847 | Simple Linear Regression: how does $\Sigma\hat{u_i}^2/\sigma^2$ follow chi squared distribution with df (n-2)? | The correct expression for the variance of the $i$th residual is explained here in detail.
I am using a slightly different notation but in the end it will all match with what you are working with.
Suppose we have a simple linear regression model $$y=\alpha+\beta x+\epsilon$$
where $\alpha+\beta x$ is the part of $y$ explained by $x$ and $\epsilon$ is the unexplained part or the error. Here $y$ is stochastic and $x$ is non-stochastic.
We consider the paired observations $(x_i,y_i)$ for $i=1,2,\ldots,n$ and assume that $\epsilon_i$ are i.i.d $\mathcal N(0,\sigma^2)$ for all $i$. This means we have some $Y_i\sim\mathcal N(\alpha+\beta x_i,\sigma^2)$, independently for all $i$.
Define $$s_{xx}=\sum_{i=1}^n (x_i-\bar x)^2\qquad,\qquad s_{yy}=\sum_{i=1}^n (y_i-\bar y)^2$$ and $$s_{xy}=\sum_{i=1}^n (x_i-\bar x)(y_i-\bar y)$$
From the normal equations, we have the least square estimates of $\alpha$ and $\beta$ :
$$\hat\alpha=\bar y-\hat\beta \bar x\qquad,\qquad\hat\beta=\frac{s_{xy}}{s_{xx}}$$
Let the residual variance be $$s^2=\frac{1}{n-2}\sum_{i=1}^n(y_i-\hat\alpha-\hat\beta x_i)^2$$
On simplification (using the normal equations, under which the residuals sum to zero and are orthogonal to the $x_i$),
\begin{align}
(n-2)s^2&=\sum_{i=1}^n (y_i-\hat\alpha-\hat\beta x_i)y_i
\\&=\sum_{i=1}^n \left\{(y_i-\bar y)-\hat\beta(x_i-\bar x)\right\}y_i
\\&=\sum_{i=1}^n (y_i-\bar y)y_i-\hat\beta \sum_{i=1}^n (x_i-\bar x)y_i
\\&=\sum_{i=1}^n (y_i-\bar y)^2-\hat\beta\sum_{i=1}^n (x_i-\bar x)(y_i-\bar y)
\\&=s_{yy}-\hat\beta s_{xy}
\\&=s_{yy}-\hat\beta^2 s_{xx}
\end{align}
Taking $\alpha'=\alpha+\beta\bar x$, joint pdf of $Y=(Y_1,\ldots,Y_n)$ for $(y_1,\ldots,y_n)\in\mathbb R^n$ is
$$f_{Y}(y_1,\ldots,y_n)=\frac{1}{(\sigma\sqrt{2\pi})^n}\exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^n\left(y_i-\alpha'-\beta(x_i-\bar x)\right)^2\right]$$
Consider the orthogonal transformation $(y_1,\ldots,y_n)\to(z_1,\ldots,z_n)$ such that
$$\begin{pmatrix}z_1\\z_2\\\vdots\\z_n\end{pmatrix}=\mathbf Q\begin{pmatrix}y_1\\y_2\\\vdots\\y_n\end{pmatrix}\,,$$
where $$\mathbf Q=\left[\begin{matrix}\frac{1}{\sqrt n}&\frac{1}{\sqrt n}&\cdots&\frac{1}{\sqrt n}\\\frac{x_1-\bar x}{\sqrt{s_{xx}}}&\frac{x_2-\bar x}{\sqrt{s_{xx}}}&\cdots&\frac{x_n-\bar x}{\sqrt{s_{xx}}}\\\vdots&\vdots&\cdots&\vdots\end{matrix}\right]$$ is an $n\times n$ orthogonal matrix with its first two rows fixed.
Then, $$z_1=\frac{1}{\sqrt n}\sum_{i=1}^n y_i=\sqrt{n}\bar y$$ and $$z_2=\frac{\sum (x_i-\bar x)y_i}{\sqrt{s_{xx}}}=\frac{s_{xy}}{\sqrt{s_{xx}}}=\hat\beta\sqrt{s_{xx}}$$
Note that $\sum\limits_{i=1}^n y_i^2=\sum\limits_{i=1}^n z_i^2$ by virtue of orthogonal transformation, which leads to
\begin{align}
\sum_{i=1}^n(y_i-\alpha'-\beta(x_i-\bar x))^2&=\sum_{i=1}^n y_i^2+n\alpha'^2+\beta^2\sum_{i=1}^n(x_i-\bar x)^2-2\alpha'n\bar y-2\beta\sum_{i=1}^n(x_i-\bar x)y_i
\\&=\sum_{i=1}^n z_i^2 +n\alpha'^2+\beta^2 s_{xx}-2\alpha'\sqrt n z_1-2\beta z_2\sqrt{s_{xx}}
\\&=(z_1-\sqrt n\alpha')^2+(z_2-\beta\sqrt{s_{xx}})^2+\sum_{i=3}^nz_i^2
\end{align}
For $(z_1,\ldots,z_n)\in\mathbb R^n$, joint density of $Z=(Z_1,\ldots,Z_n)$ becomes
$$f_{Z}(z_1,\ldots,z_n)=\frac{1}{(\sigma\sqrt{2\pi})^n}\exp\left[-\frac{1}{2\sigma^2}\left\{(z_1-\sqrt n\alpha')^2+(z_2-\beta\sqrt{s_{xx}})^2+\sum_{i=3}^nz_i^2\right\}\right]\,,$$
so that $Z_1,Z_2,\ldots,Z_n$ are independently distributed with
\begin{align}
Z_1&\sim\mathcal N(\sqrt n\alpha',\sigma^2)
\\Z_2&\sim\mathcal N(\beta\sqrt{s_{xx}},\sigma^2)
\\Z_i&\sim\mathcal N(0,\sigma^2)\qquad,\,i=3,4,\ldots,n
\end{align}
Now,
\begin{align}
(n-2)s^2&=s_{yy}-\hat\beta^2s_{xx}
\\&=\sum_{i=1}^ny_i^2-n\bar y^2-\hat\beta^2s_{xx}
\\&=\sum_{i=1}^nz_i^2-z_1^2-z_2^2
\\&=\sum_{i=3}^nz_i^2
\end{align}
And you have that $Z_3,\ldots,Z_n\sim\mathcal N(0,\sigma^2)$, independently.
This implies $$\sum_{i=3}^n\frac{Z_i^2}{\sigma^2}\sim\chi^2_{n-2}$$
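A quick Monte Carlo check of this claim, i.e. that $(n-2)s^2/\sigma^2\sim\chi^2_{n-2}$ (my own addition, with arbitrary parameter values):
set.seed(1); m <- 10^4; n <- 20; sigma <- 2
x <- 1:n
s2 <- replicate(m, {
  y <- 1 + 0.5 * x + rnorm(n, 0, sigma)
  sum(resid(lm(y ~ x))^2) / (n - 2)
})
mean((n - 2) * s2 / sigma^2)   # close to n - 2, the chi-square mean
var((n - 2) * s2 / sigma^2)    # close to 2(n - 2), the chi-square variance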
Or in other words, $$\frac{(n-2)s^2}{\sigma^2}\sim\chi^2_{n-2}$$ | Simple Linear Regression: how does $\Sigma\hat{u_i}^2/\sigma^2$ follow chi squared distribution with | The correct expression for the variance of the $i$th residual is explained here in detail.
I am using a slightly different notation but in the end it will all match with what you are working with.
Sup | Simple Linear Regression: how does $\Sigma\hat{u_i}^2/\sigma^2$ follow chi squared distribution with df (n-2)?
The correct expression for the variance of the $i$th residual is explained here in detail.
I am using a slightly different notation but in the end it will all match with what you are working with.
Suppose we have a simple linear regression model $$y=\alpha+\beta x+\epsilon$$
where $\alpha+\beta x$ is the part of $y$ explained by $x$ and $\epsilon$ is the unexplained part or the error. Here $y$ is stochastic and $x$ is non-stochastic.
We consider the paired observations $(x_i,y_i)$ for $i=1,2,\ldots,n$ and assume that $\epsilon_i$ are i.i.d $\mathcal N(0,\sigma^2)$ for all $i$. This means we have some $Y_i\sim\mathcal N(\alpha+\beta x_i,\sigma^2)$, independently for all $i$.
Define $$s_{xx}=\sum_{i=1}^n (x_i-\bar x)^2\qquad,\qquad s_{yy}=\sum_{i=1}^n (y_i-\bar y)^2$$ and $$s_{xy}=\sum_{i=1}^n (x_i-\bar x)(y_i-\bar y)$$
From the normal equations, we have the least square estimates of $\alpha$ and $\beta$ :
$$\hat\alpha=\bar y-\hat\beta \bar x\qquad,\qquad\hat\beta=\frac{s_{xy}}{s_{xx}}$$
Let the residual variance be $$s^2=\frac{1}{n-2}\sum_{i=1}^n(y_i-\hat\alpha-\hat\beta x_i)^2$$
On simplification (using the normal equations, under which the residuals sum to zero and are orthogonal to the $x_i$),
\begin{align}
(n-2)s^2&=\sum_{i=1}^n (y_i-\hat\alpha-\hat\beta x_i)y_i
\\&=\sum_{i=1}^n \left\{(y_i-\bar y)-\hat\beta(x_i-\bar x)\right\}y_i
\\&=\sum_{i=1}^n (y_i-\bar y)y_i-\hat\beta \sum_{i=1}^n (x_i-\bar x)y_i
\\&=\sum_{i=1}^n (y_i-\bar y)^2-\hat\beta\sum_{i=1}^n (x_i-\bar x)(y_i-\bar y)
\\&=s_{yy}-\hat\beta s_{xy}
\\&=s_{yy}-\hat\beta^2 s_{xx}
\end{align}
Taking $\alpha'=\alpha+\beta\bar x$, joint pdf of $Y=(Y_1,\ldots,Y_n)$ for $(y_1,\ldots,y_n)\in\mathbb R^n$ is
$$f_{Y}(y_1,\ldots,y_n)=\frac{1}{(\sigma\sqrt{2\pi})^n}\exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^n\left(y_i-\alpha'-\beta(x_i-\bar x)\right)^2\right]$$
Consider the orthogonal transformation $(y_1,\ldots,y_n)\to(z_1,\ldots,z_n)$ such that
$$\begin{pmatrix}z_1\\z_2\\\vdots\\z_n\end{pmatrix}=\mathbf Q\begin{pmatrix}y_1\\y_2\\\vdots\\y_n\end{pmatrix}\,,$$
where $$\mathbf Q=\left[\begin{matrix}\frac{1}{\sqrt n}&\frac{1}{\sqrt n}&\cdots&\frac{1}{\sqrt n}\\\frac{x_1-\bar x}{\sqrt{s_{xx}}}&\frac{x_2-\bar x}{\sqrt{s_{xx}}}&\cdots&\frac{x_n-\bar x}{\sqrt{s_{xx}}}\\\vdots&\vdots&\cdots&\vdots\end{matrix}\right]$$ is an $n\times n$ orthogonal matrix with its first two rows fixed.
Then, $$z_1=\frac{1}{\sqrt n}\sum_{i=1}^n y_i=\sqrt{n}\bar y$$ and $$z_2=\frac{\sum (x_i-\bar x)y_i}{\sqrt{s_{xx}}}=\frac{s_{xy}}{\sqrt{s_{xx}}}=\hat\beta\sqrt{s_{xx}}$$
Note that $\sum\limits_{i=1}^n y_i^2=\sum\limits_{i=1}^n z_i^2$ by virtue of orthogonal transformation, which leads to
\begin{align}
\sum_{i=1}^n(y_i-\alpha'-\beta(x_i-\bar x))^2&=\sum_{i=1}^n y_i^2+n\alpha'^2+\beta^2\sum_{i=1}^n(x_i-\bar x)^2-2\alpha'n\bar y-2\beta\sum_{i=1}^n(x_i-\bar x)y_i
\\&=\sum_{i=1}^n z_i^2 +n\alpha'^2+\beta^2 s_{xx}-2\alpha'\sqrt n z_1-2\beta z_2\sqrt{s_{xx}}
\\&=(z_1-\sqrt n\alpha')^2+(z_2-\beta\sqrt{s_{xx}})^2+\sum_{i=3}^nz_i^2
\end{align}
For $(z_1,\ldots,z_n)\in\mathbb R^n$, joint density of $Z=(Z_1,\ldots,Z_n)$ becomes
$$f_{Z}(z_1,\ldots,z_n)=\frac{1}{(\sigma\sqrt{2\pi})^n}\exp\left[-\frac{1}{2\sigma^2}\left\{(z_1-\sqrt n\alpha')^2+(z_2-\beta\sqrt{s_{xx}})^2+\sum_{i=3}^nz_i^2\right\}\right]\,,$$
so that $Z_1,Z_2,\ldots,Z_n$ are independently distributed with
\begin{align}
Z_1&\sim\mathcal N(\sqrt n\alpha',\sigma^2)
\\Z_2&\sim\mathcal N(\beta\sqrt{s_{xx}},\sigma^2)
\\Z_i&\sim\mathcal N(0,\sigma^2)\qquad,\,i=3,4,\ldots,n
\end{align}
Now,
\begin{align}
(n-2)s^2&=s_{yy}-\hat\beta^2s_{xx}
\\&=\sum_{i=1}^ny_i^2-n\bar y^2-\hat\beta^2s_{xx}
\\&=\sum_{i=1}^nz_i^2-z_1^2-z_2^2
\\&=\sum_{i=3}^nz_i^2
\end{align}
And you have that $Z_3,\ldots,Z_n\sim\mathcal N(0,\sigma^2)$, independently.
This implies $$\sum_{i=3}^n\frac{Z_i^2}{\sigma^2}\sim\chi^2_{n-2}$$
Or in other words, $$\frac{(n-2)s^2}{\sigma^2}\sim\chi^2_{n-2}$$ | Simple Linear Regression: how does $\Sigma\hat{u_i}^2/\sigma^2$ follow chi squared distribution with
The correct expression for the variance of the $i$th residual is explained here in detail.
I am using a slightly different notation but in the end it will all match with what you are working with.
Sup |
54,848 | Does it make sense to use an Early Stopping Metric like “mae” instaed of “val_loss” for regression problems? | In order to prevent overfitting, EarlyStopping should monitor a validation metric. Because your loss function is the mse, by monitoring val_loss you are essentially monitoring the validation Mean Squared Error. If you think that mae is a better metric for your task, you should monitor val_mae instead.
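For example, with the R interface to Keras this looks roughly as follows (my own sketch on toy data; the exact name under which the MAE is logged, e.g. val_mae versus val_mean_absolute_error, depends on the Keras version):
library(keras)
x <- matrix(rnorm(2000), ncol = 10); y <- rnorm(200)      # toy regression data
model <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = "relu", input_shape = ncol(x)) %>%
  layer_dense(units = 1)
model %>% compile(optimizer = "adam", loss = "mse", metrics = "mae")
es <- callback_early_stopping(monitor = "val_mae", patience = 10,
                              restore_best_weights = TRUE)
model %>% fit(x, y, validation_split = 0.2, epochs = 200,
              callbacks = list(es), verbose = 0)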
Why monitor a validation metric when performing early stopping?
Early stopping is mostly intended to combat overfitting in your model. Overfitting is a phenomenon, commonly occurring in Machine Learning, where a model performs worse on a validation/test set than on the training set.
Take a look at the image below:
The red and blue lines depict the model's loss, during training, on the training/validation sets respectively. As you can see, while the training loss continues to drop, the validation loss initially plateaus and then starts to rise. This happens because after some point the model learns to memorize the training data, while forgetting how to effectively differentiate between the classes. When we reach this point, even though we think the model is improving (as indicated by all the training set metrics), it has actually begun to diverge from its goal (to perform well on unseen data).
Early stopping is a method of combating this. By terminating the model, before it has completed its training we might get a better performance on unseen data. This works by monitoring a validation metric and terminating the model when this metric stops dropping. | Does it make sense to use an Early Stopping Metric like “mae” instaed of “val_loss” for regression p | In order to prevent overfitting, EarlyStopping should monitor a validation metric. Because your loss function is the mse, by monitoring val_loss you are essentially monitoring the validation Mean Squa | Does it make sense to use an Early Stopping Metric like “mae” instaed of “val_loss” for regression problems?
In order to prevent overfitting, EarlyStopping should monitor a validation metric. Because your loss function is the mse, by monitoring val_loss you are essentially monitoring the validation Mean Squared Error. If you think that mae is a better metric for your task, you should monitor val_mae instead.
Why monitor a validation metric when performing early stopping?
Early stopping is mostly intended to combat overfitting in your model. Overfitting is a phenomenon, commonly occurring in Machine Learning, where a model performs worse on a validation/test set than on the training set.
Take a look at the image below:
The red and blue lines depict the model's loss, during training, on the training/validation sets respectively. As you can see, while the training loss continues to drop, the validation loss initially plateaus and then starts to rise. This happens because after some point the model learns to memorize the training data, while forgetting how to effectively differentiate between the classes. When we reach this point, even though we think the model is improving (as indicated by all the training set metrics), it has actually begun to diverge from its goal (to perform well on unseen data).
Early stopping is a method of combating this. By terminating the model, before it has completed its training we might get a better performance on unseen data. This works by monitoring a validation metric and terminating the model when this metric stops dropping. | Does it make sense to use an Early Stopping Metric like “mae” instaed of “val_loss” for regression p
In order to prevent overfitting, EarlyStopping should monitor a validation metric. Because your loss function is the mse, by monitoring val_loss you are essentially monitoring the validation Mean Squa |
54,849 | Why is multicollinearity so bad for machine learning models and what can we do about it? | Multicollinearity simply implies that one or more of the features in your dataset are redundant to the model. Thus you get all the problems associated with more features (i.e. curse of dimensionality), but none of the benefits (e.g. making the classes easier to separate).
Many ML algorithms are impervious to problems of this nature. Algorithms that internally perform any form of feature selection and are good with high dimensional data (e.g. tree-based algorithms, lasso) are robust against multicollinearity.
$L_1$ regularization mainly helps models as it provides sparse solutions, robust against multicollinearity. $L_2$ doesn't help as much. Read this article if you are interested in the differences of the two.
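A quick illustration of that difference on two nearly collinear features (my own example; it assumes the glmnet package and an arbitrary penalty value):
library(glmnet)
set.seed(4)
n  <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.01)            # an almost exact copy of x1
y  <- 2 * x1 + rnorm(n)
X  <- cbind(x1, x2)
coef(glmnet(X, y, alpha = 1), s = 0.1)    # lasso: typically keeps one of the pair and zeroes the other
coef(glmnet(X, y, alpha = 0), s = 0.1)    # ridge: typically splits the weight between the two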
As a final note, multicollinearity isn't as big a problem in Machine Learning as you make it out to be. That being said, if such a problem is detected it is almost always beneficial to perform some sort of feature selection, or even PCA to help decorrelate the features. | Why is multicollinearity so bad for machine learning models and what can we do about it? | Multicollinearity simply imlies that one or more of the features in your dataset are useless to the model. Thus you get all the problems associated with more features (i.e. curse of dimensionality), b | Why is multicollinearity so bad for machine learning models and what can we do about it?
Multicollinearity simply implies that one or more of the features in your dataset are redundant to the model. Thus you get all the problems associated with more features (i.e. curse of dimensionality), but none of the benefits (e.g. making the classes easier to separate).
Many ML algorithms are impervious to problems of this nature. Algorithms that internally perform any form of feature selection and are good with high dimensional data (e.g. tree-based algorithms, lasso) are robust against multicollinearity.
$L_1$ regularization mainly helps models as it provides sparse solutions, robust against multicollinearity. $L_2$ doesn't help as much. Read this article if you are interested in the differences of the two.
As a final note, multicollinearity isn't as big a problem in Machine Learning as you make it out to be. That being said, if such a problem is detected it is almost always beneficial to perform some sort of feature selection, or even PCA to help decorrelate the features. | Why is multicollinearity so bad for machine learning models and what can we do about it?
Multicollinearity simply imlies that one or more of the features in your dataset are useless to the model. Thus you get all the problems associated with more features (i.e. curse of dimensionality), b |
54,850 | Why is multicollinearity so bad for machine learning models and what can we do about it? | The easiest way to understand is to imagine that you have two identical features, e.g. temperature in Celsius and in Fahrenheit. This is a case of perfect collinearity.
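To see what perfect collinearity does even in the simplest setting (a small added example, not from the original answer), give an ordinary linear model both versions of the temperature:
set.seed(3)
celsius    <- rnorm(100, 20, 5)
fahrenheit <- celsius * 9/5 + 32          # an exact linear function of celsius
y <- 0.5 * celsius + rnorm(100)
coef(lm(y ~ celsius + fahrenheit))        # fahrenheit is NA: only a combination of the two is identified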
Two things will happen, both bad. One is that at the very least you're going to waste some neurons. In the first layer you have $a^{[0]}_i$ inputs for $i=1,2,\dots,n$ features. Two of these features are essentially the same, but the model doesn't know it and assigns the weights $w_{ij}$ to them:$z_j=\sum_ia^{[0]}_i w_{ij}$, where $j=1,2,\dots,k^{[1]}$ neurons in the first layer. So, you wasted $k^{[1]}$ neurons right out of the gate.
The second thing is that this will make the optimal solution degenerate (non-unique). Again, consider the first layer's connections $a^{[0]}_1 w_{1j}+a^{[0]}_2 w_{2j}$: if the first and the second input are collinear, then there is an infinite number of combinations that would produce the same exact result as this sumproduct. This is going to confuse your optimizer and make its work a little harder. | Why is multicollinearity so bad for machine learning models and what can we do about it? | The easiest way to understand is to imagine that you have two identical features, e.g. temperature in Celsius and in Fahrenheit. This is a case of perfect collinearity.
Two things will happen, both bad | Why is multicollinearity so bad for machine learning models and what can we do about it?
The easiest way to understand is to imagine that you have two identical features, e.g. temperature in Celsius and in Fahrenheit. This is a case of perfect collinearity.
Two things will happen, both bad. One is that at the very least you're going to waste some neurons. In the first layer you have $a^{[0]}_i$ inputs for $i=1,2,\dots,n$ features. Two of these features are essentially the same, but the model doesn't know it and assigns the weights $w_{ij}$ to them:$z_j=\sum_ia^{[0]}_i w_{ij}$, where $j=1,2,\dots,k^{[1]}$ neurons in the first layer. So, you wasted $k^{[1]}$ neurons right out of the gate.
The second thing is that this will make the optimal solution degenerate (non-unique). Again, consider the first layer's connections $a^{[0]}_1 w_{1j}+a^{[0]}_2 w_{2j}$: if the first and the second input are collinear, then there is an infinite number of combinations that would produce the same exact result as this sumproduct. This is going to confuse your optimizer and make its work a little harder. | Why is multicollinearity so bad for machine learning models and what can we do about it?
The easiest way to understand is to imagine that you have two identical features, e.g. temperature in Celsius and Fehrenheits. This is a case of perfect collinearity.
Two things will happen, both bad |
54,851 | Conditional expectation on an estimator for defensive sampling | Yeah that’s what I thought when I first looked at it. But the conditional expectation is taken with respect to the conditional pmf of $Y$ given $X_i =x_i$. If $y=1$ the weight for $f(x_i) g_1^{-1}(x_i)$ is $\frac{\varrho g_1(x_i)}{g_1(x_i)\varrho + g_2(x_i) (1-\varrho)}$. You can follow a similar path to obtain the weight for $f(x_i) g_2^{-1}(x_i)$, then you add the two weighted values together, and then there’s some cancellation, and it ends up being what the book says. | Conditional expectation on an estimator for defensive sampling | Yeah that’s what I thought when I first looked at it. But the conditional expectation is taken with respect to the conditional pmf of $Y$ given $X_i =x_i$. If $y=1$ the weight for $f(x_i) g_1^{-1}(x_i | Conditional expectation on an estimator for defensive sampling
Yeah that’s what I thought when I first looked at it. But the conditional expectation is taken with respect to the conditional pmf of $Y$ given $X_i =x_i$. If $y=1$ the weight for $f(x_i) g_1^{-1}(x_i)$ is $\frac{\varrho g_1(x_i)}{g_1(x_i)\varrho + g_2(x_i) (1-\varrho)}$. You can follow a similar path to obtain the weight for $f(x_i) g_2^{-1}(x_i)$, then you add the two weighted values together, and then there’s some cancellation, and it ends up being what the book says. | Conditional expectation on an estimator for defensive sampling
Yeah that’s what I thought when I first looked at it. But the conditional expectation is taken with respect to the conditional pmf of $Y$ given $X_i =x_i$. If $y=1$ the weight for $f(x_i) g_1^{-1}(x_i |
54,852 | Conditional expectation on an estimator for defensive sampling | Hopefully a correct derivation based on Taylor's answer...
$$E\left[\frac{f(X_i)}{g_{Y_i}(X_i)}|X_i\right]=\int \frac{f(X_i)}{g_{Y_i}(X_i)}k(X_i,dY_i)=\sum_{j=1}^2 \frac{f(X_i)}{g_j(X_i)}\frac{p(X_i|Y_{i}=j)p(Y_{i}=j)}{p(X_i)}$$
where $k$ is a regular conditional distribution, $X_i$ is cont. and $Y_i$ is discrete,
$$\sum_{j=1}^2 \frac{f(X_i)}{g_j(X_i)}\frac{p(X_i|Y_{i}=j)p(Y_{i}=j)}{p(X_i)}=\frac{f(X_i)}{g_1(X_i)}\frac{g_1(X_i)\varrho}{g_1(X_i)\varrho+g_2(X_i)(1-\varrho)}+\frac{f(X_i)}{g_2(X_i)}\frac{g_2(X_i)(1-\varrho)}{g_1(X_i)\varrho+g_2(X_i)(1-\varrho)} = \frac{f(X_i)}{g_1(X_i)\varrho+g_2(X_i)(1-\varrho)}$$ | Conditional expectation on an estimator for defensive sampling | Hopefully a correct derivation based on Taylor's answer...
$$E\left[\frac{f(X_i)}{g_{Y_i}(X_i)}|X_i\right]=\int \frac{f(X_i)}{g_{Y_i}(X_i)}k(X_i,dY_i)=\sum_{j=1}^2 \frac{f(X_i)}{g_j(X_i)}\frac{p(X_i|Y | Conditional expectation on an estimator for defensive sampling
Hopefully a correct derivation based on Taylor's answer...
$$E\left[\frac{f(X_i)}{g_{Y_i}(X_i)}|X_i\right]=\int \frac{f(X_i)}{g_{Y_i}(X_i)}k(X_i,dY_i)=\sum_{j=1}^2 \frac{f(X_i)}{g_j(X_i)}\frac{p(X_i|Y_{i}=j)p(Y_{i}=j)}{p(X_i)}$$
where $k$ is a regular conditional distribution, $X_i$ is cont. and $Y_i$ is discrete,
$$\sum_{j=1}^2 \frac{f(X_i)}{g_j(X_i)}\frac{p(X_i|Y_{i}=j)p(Y_{i}=j)}{p(X_i)}=\frac{f(X_i)}{g_1(X_i)}\frac{g_1(X_i)\varrho}{g_1(X_i)\varrho+g_2(X_i)(1-\varrho)}+\frac{f(X_i)}{g_2(X_i)}\frac{g_2(X_i)(1-\varrho)}{g_1(X_i)\varrho+g_2(X_i)(1-\varrho)} = \frac{f(X_i)}{g_1(X_i)\varrho+g_2(X_i)(1-\varrho)}$$ | Conditional expectation on an estimator for defensive sampling
Hopefully a correct derivation based on Taylor's answer...
$$E\left[\frac{f(X_i)}{g_{Y_i}(X_i)}|X_i\right]=\int \frac{f(X_i)}{g_{Y_i}(X_i)}k(X_i,dY_i)=\sum_{j=1}^2 \frac{f(X_i)}{g_j(X_i)}\frac{p(X_i|Y |
54,853 | Conditional expectation on an estimator for defensive sampling | Just to step in a wee bit late,
\begin{align*}
\mathbb{E}\left[\dfrac{f(X_i)}{g_{Y_i}(X_i)}\big|X_i\right]
&= \dfrac{f(X_i)}{g_{1}(X_i)} \mathbb{P}(Y_i=1|X_i) + \dfrac{f(X_i)}{g_{2}(X_i)} \mathbb{P}(Y_i=2|X_i)\\
&= \dfrac{f(X_i)}{g_{1}(X_i)} \dfrac{\rho g_1(X_i)}{\rho g_1(X_i) +(1-\rho) g_2(X_i)} + \dfrac{f(X_i)}{g_{2}(X_i)} \dfrac{(1-\rho) g_2(X_i)}{\rho g_1(X_i) +(1-\rho) g_2(X_i)}\\
&= \dfrac{f(X_i)}{1} \dfrac{\rho {1}}{\rho g_1(X_i) +(1-\rho) g_2(X_i) } + \dfrac{f(X_i)}{1} \dfrac{(1-\rho) 1}{\rho g_1(X_i)+(1-\rho) g_2(X_i)}\\
&= \dfrac{f(X_i)}{\rho g_1(X_i)+(1-\rho) g_2(X_i)}\\
\end{align*} | Conditional expectation on an estimator for defensive sampling | Just to step in a wee bit late,
\begin{align*}
\mathbb{E}\left[\dfrac{f(X_i)}{g_{Y_i}(X_i)}\big|X_i\right]
&= \dfrac{f(X_i)}{g_{1}(X_i)} \mathbb{P}(Y_i=1|X_i) + \dfrac{f(X_i)}{g_{2}(X_i)} \mathbb{P}( | Conditional expectation on an estimator for defensive sampling
Just to step in a wee bit late,
\begin{align*}
\mathbb{E}\left[\dfrac{f(X_i)}{g_{Y_i}(X_i)}\big|X_i\right]
&= \dfrac{f(X_i)}{g_{1}(X_i)} \mathbb{P}(Y_i=1|X_i) + \dfrac{f(X_i)}{g_{2}(X_i)} \mathbb{P}(Y_i=2|X_i)\\
&= \dfrac{f(X_i)}{g_{1}(X_i)} \dfrac{\rho g_1(X_i)}{\rho g_1(X_i) +(1-\rho) g_2(X_i)} + \dfrac{f(X_i)}{g_{2}(X_i)} \dfrac{(1-\rho) g_2(X_i)}{\rho g_1(X_i) +(1-\rho) g_2(X_i)}\\
&= \dfrac{f(X_i)}{1} \dfrac{\rho {1}}{\rho g_1(X_i) +(1-\rho) g_2(X_i) } + \dfrac{f(X_i)}{1} \dfrac{(1-\rho) 1}{\rho g_1(X_i)+(1-\rho) g_2(X_i)}\\
&= \dfrac{f(X_i)}{\rho g_1(X_i)+(1-\rho) g_2(X_i)}\\
\end{align*} | Conditional expectation on an estimator for defensive sampling
Just to step in a wee bit late,
\begin{align*}
\mathbb{E}\left[\dfrac{f(X_i)}{g_{Y_i}(X_i)}\big|X_i\right]
&= \dfrac{f(X_i)}{g_{1}(X_i)} \mathbb{P}(Y_i=1|X_i) + \dfrac{f(X_i)}{g_{2}(X_i)} \mathbb{P}( |
54,854 | Why is Kendall's tau not consistent? | This is an issue of a measure (and hence the test based on it) not being able to pick up an association (an alternative under the test) it's not designed for.
In the same sense a t-test of means is not "consistent" against a difference in distributions that relates only to a change in the spread. Almost any test or statistic designed to measure something will suffer from this kind of "deficiency" of failing to detect something different from that.
The Kendall and Spearman correlations do indeed measure monotonic association; they're fine when that's what you're interested in. However they don't attempt to measure every kind of association (and it would be a mistake to treat them as if they were).
For example, if $X$ is standard normal, then the Spearman, Kendall and Pearson correlation of $X$ with $|X|$ is $0$, even though if you know $X$ you know $|X|$ (there's a functional dependence there). They are associated but the relationship is not monotonic (let alone linear).
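This is easy to check numerically (a small added example):
set.seed(5)
x <- rnorm(2000)
sapply(c("pearson", "spearman", "kendall"),
       function(m) cor(x, abs(x), method = m))   # all three are essentially zero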
The plot on the left shows perfect linear association. The Pearson, Spearman and Kendall correlations would all be 1. The plot in the center shows perfect monotonic association (when one variable increases, so does the other). The Spearman and Kendall correlations are still 1, but the Pearson correlation (which measures linear association) will be less than 1. The plot on the right shows a more general form of association, one that is non-monotonic. The sample Pearson, Spearman and Kendall correlations would all be essentially zero (the population measures will be exactly zero, by design).
Several additional illustrations of the third case (for less than perfect functional association) can be seen among the third row of plots here (specifically, the first and fourth item in that row):
This plot is from the Wikipedia page on Correlation and dependence, image is by Denis Boigelot, placed in the public domain.
We should not blame these measures for failing to do what they were not designed to do -- and in that sense, calling them inconsistent is potentially misplaced. In the sense it's being used in your quotes (outside what they're designed to do), in general almost any measure will be inconsistent.
[On the other hand, if the purpose of these sections is just to point out not to use these measures of association (or the corresponding tests) inappropriately, this is all to the good -- but then there seems little need to frame the issue this way.] | Why is Kendall's tau not consistent? | This is an issue of a measure (and hence the test based on it) not being able to pick up an association (an alternative under the test) it's not designed for.
In the same sense a t-test of means is no | Why is Kendall's tau not consistent?
This is an issue of a measure (and hence the test based on it) not being able to pick up an association (an alternative under the test) it's not designed for.
In the same sense a t-test of means is not "consistent" against a difference in distributions that relates only to a change in the spread. Almost any test or statistic designed to measure something will suffer from this kind of "deficiency" of failing to detect something different from that.
The Kendall and Spearman correlations do indeed measure monotonic association; they're fine when that's what you're interested in. However they don't attempt to measure every kind of association (and it would be a mistake to treat them as if they were).
For example, if $X$ is standard normal, then the Spearman, Kendall and Pearson correlation of $X$ with $|X|$ is $0$, even though if you know $X$ you know $|X|$ (there's a functional dependence there). They are associated but the relationship is not monotonic (let alone linear).
The plot on the left shows perfect linear association. The Pearson, Spearman and Kendall correlations would all be 1. The plot in the center shows perfect monotonic association (when one variable increases, so does the other). The Spearman and Kendall correlations are still 1, but the Pearson correlation (which measures linear association) will be less than 1. The plot on the right shows a more general form of association, one that is non-monotonic. The sample Pearson, Spearman and Kendall correlations would all be essentially zero (the population measures will be exactly zero, by design).
Several additional illustrations of the third case (for less than perfect functional association) can be seen among the third row of plots here (specifically, the first and fourth item in that row):
This plot is from the Wikipedia page on Correlation and dependence, image is by Denis Boigelot, placed in the public domain.
We should not blame these measures for failing to do what they were not designed to do -- and in that sense, calling them inconsistent is potentially misplaced. In the sense it's being used in your quotes (outside what they're designed to do), in general almost any measure will be inconsistent.
[On the other hand, if the purpose of these sections is just to point out not to use these measures of association (or the corresponding tests) inappropriately, this is all to the good -- but then there seems little need to frame the issue this way.] | Why is Kendall's tau not consistent?
This is an issue of a measure (and hence the test based on it) not being able to pick up an association (an alternative under the test) it's not designed for.
In the same sense a t-test of means is no |
54,855 | Eicker-Huber-White Robust Variance Estimator | There is a little mistake in your statement, as your $s_t^2$, $t=0,1$, define the sum of squared residuals belonging to the two groups of observations. The formula you refer to (unless your textbook has a typo) defines $s_t^2$ as the estimate of the error variances of the two groups (which, under heteroskedasticity, are allowed to differ). Hence, they are defined as your quantities $s_t^2$ divided by $n_t$ - the average of the squared residuals.
Thus, we need to show that, in your notation,
$$
\widehat{\mathbb{V}_{\text{EHW}}}(\hat\beta|\mathbf{X})=\frac{s_0^2}{n_0^2}+\frac{s_1^2}{n_1^2}.
$$
First, from this answer, note that
$$
(\mathbf{X}^T\mathbf{X})^{-1}=
\frac{1}{n_0n_1}
\begin{pmatrix}
n_1&-n_1\\
-n_1&n_0+n_1
\end{pmatrix},
$$
where I have used $n=n_0+n_1$ and hence $nn_1-n_1^2=n_0n_1$. Next, the "meat" matrix of the sandwich is, in matrix notation,
$$
\sum_{i=1}^n\hat{\epsilon}_i^2\mathbf{X}_i\mathbf{X}_i^T=\mathbf{X}^T\Sigma_{\hat\epsilon}\mathbf{X},
$$
where $\Sigma_{\hat\epsilon}$ is a diagonal matrix with the squared OLS residuals $\hat{\epsilon}_i^2=I(T_i=t)(Y_i-\bar{Y}_t)^2$ on the main diagonal. Thus,
$$
\mathbf{X}^T\Sigma_{\hat\epsilon}\mathbf{X}=
\begin{pmatrix}
\sum_{i=1}^n\hat{\epsilon}_i^2&\sum_{i=1}^nI(T_i=1)\hat{\epsilon}_i^2\\
\sum_{i=1}^nI(T_i=1)\hat{\epsilon}_i^2&\sum_{i=1}^nI(T_i=1)\hat{\epsilon}_i^2
\end{pmatrix}
$$
which, as the total sum of squared residuals is of course nothing but the sum of the sums of squares of the two subsamples, equals
$$
\mathbf{X}^T\Sigma_{\hat\epsilon}\mathbf{X}=
\begin{pmatrix}
s_0^2+s_1^2&s_1^2\\
s_1^2&s_1^2
\end{pmatrix}
$$
in your notation. Putting things together gives
$$
\begin{eqnarray*}
\widehat{\mathbb{V}_{\text{EHW}}}(\hat\alpha,\hat\beta|\mathbf{X})&=&(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\Sigma_{\hat\epsilon}\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\\
&=&
\frac{1}{n_0^2n_1^2}
\begin{pmatrix}
n_1&-n_1\\
-n_1&n_0+n_1
\end{pmatrix}
\begin{pmatrix}
s_0^2+s_1^2&s_1^2\\
s_1^2&s_1^2
\end{pmatrix}
\begin{pmatrix}
n_1&-n_1\\
-n_1&n_0+n_1
\end{pmatrix}
\end{eqnarray*}
$$
The $(2,2)$ element of that matrix (which corresponds to what we seek, i.e., $\widehat{\mathbb{V}_{\text{EHW}}}(\hat\beta|\mathbf{X})$) is, after direct multiplication,
$$
\frac{-n_1n_0s_1^2+n_1^2s_0^2+(n_0+n_1)n_0s_1^2}{n_0^2n_1^2}=\frac{n_1^2s_0^2+n_0^2s_1^2}{n_0^2n_1^2}.
$$
Here is a little illustration in R:
library(sandwich)
library(lmtest)
n <- 10
y <- rnorm(n) # some dependent variable
x1 <- rbinom(n, size=1, p=.4) # a dummy regressor
n1 <- length(y[x1==1]) # the no. of y's belonging to T_i=1
n0 <- length(y[x1==0]) # the no. of y's belonging to T_i=0
reg <- lm(y~x1)
prepackaged <- coeftest(reg, vcov = vcovHC(reg, "HC0"))[2,2]^2 # square to get variance instead of standard error
s1.squared <- sum((y[x1==1] - mean(y[x1==1]))^2) # "your" s_1^2
s0.squared <- sum((y[x1==0] - mean(y[x1==0]))^2) # "your" s_0^2
sum.of.variances <- s1.squared/n1^2 + s0.squared/n0^2
X <- cbind(rep(1,n),x1)
matrix.formulation <- (solve(t(X)%*%X) %*% t(X)%*%diag(resid(reg)^2)%*%X %*% solve(t(X)%*%X))[2,2]
isTRUE(all.equal(prepackaged, sum.of.variances)) && isTRUE(all.equal(prepackaged, matrix.formulation))  # all three agree
[1] TRUE | Eicker-Huber-White Robust Variance Estimator | There is a little mistake in your statement, as your $s_t^2$, $t=0,1$, define the sum of squared residuals belonging to the two groups of observations. The formula you refer to (unless your textbook h | Eicker-Huber-White Robust Variance Estimator
There is a little mistake in your statement, as your $s_t^2$, $t=0,1$, define the sum of squared residuals belonging to the two groups of observations. The formula you refer to (unless your textbook has a typo) defines $s_t^2$ as the estimate of the error variances of the two groups (which, under heteroskedasticity, are allowed to differ). Hence, they are defined as your quantities $s_t^2$ divided by $n_t$ - the average of the squared residuals.
Thus, we need to show that, in your notation,
$$
\widehat{\mathbb{V}_{\text{EHW}}}(\hat\beta|\mathbf{X})=\frac{s_0^2}{n_0^2}+\frac{s_1^2}{n_1^2}.
$$
First, from this answer, note that
$$
(\mathbf{X}^T\mathbf{X})^{-1}=
\frac{1}{n_0n_1}
\begin{pmatrix}
n_1&-n_1\\
-n_1&n_0+n_1
\end{pmatrix},
$$
where I have used $n=n_0+n_1$ and hence $nn_1-n_1^2=n_0n_1$. Next, the "meat" matrix of the sandwich is, in matrix notation,
$$
\sum_{i=1}^n\hat{\epsilon}_i^2\mathbf{X}_i\mathbf{X}_i^T=\mathbf{X}^T\Sigma_{\hat\epsilon}\mathbf{X},
$$
where $\Sigma_{\hat\epsilon}$ is a diagonal matrix with the squared OLS residuals $\hat{\epsilon}_i^2=I(T_i=t)(Y_i-\bar{Y}_t)^2$ on the main diagonal. Thus,
$$
\mathbf{X}^T\Sigma_{\hat\epsilon}\mathbf{X}=
\begin{pmatrix}
\sum_{i=1}^n\hat{\epsilon}_i^2&\sum_{i=1}^nI(T_i=1)\hat{\epsilon}_i^2\\
\sum_{i=1}^nI(T_i=1)\hat{\epsilon}_i^2&\sum_{i=1}^nI(T_i=1)\hat{\epsilon}_i^2
\end{pmatrix}
$$
which, as the total sum of squared residuals is of course nothing but the sum of the sums of squares of the two subsamples, equals
$$
\mathbf{X}^T\Sigma_{\hat\epsilon}\mathbf{X}=
\begin{pmatrix}
s_0^2+s_1^2&s_1^2\\
s_1^2&s_1^2
\end{pmatrix}
$$
in your notation. Putting things together gives
$$
\begin{eqnarray*}
\widehat{\mathbb{V}_{\text{EHW}}}(\hat\alpha,\hat\beta|\mathbf{X})&=&(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\Sigma_{\hat\epsilon}\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\\
&=&
\frac{1}{n_0^2n_1^2}
\begin{pmatrix}
n_1&-n_1\\
-n_1&n_0+n_1
\end{pmatrix}
\begin{pmatrix}
s_0^2+s_1^2&s_1^2\\
s_1^2&s_1^2
\end{pmatrix}
\begin{pmatrix}
n_1&-n_1\\
-n_1&n_0+n_1
\end{pmatrix}
\end{eqnarray*}
$$
The $(2,2)$ element of that matrix (which corresponds to what we seek, i.e., $\widehat{\mathbb{V}_{\text{EHW}}}(\hat\beta|\mathbf{X})$) is, after direct multiplication,
$$
\frac{-n_1n_0s_1^2+n_1^2s_0^2+(n_0+n_1)n_0s_1^2}{n_0^2n_1^2}=\frac{n_1^2s_0^2+n_0^2s_1^2}{n_0^2n_1^2}.
$$
Here is a little illustration in R:
library(sandwich)
library(lmtest)
n <- 10
y <- rnorm(n) # some dependent variable
x1 <- rbinom(n, size=1, p=.4) # a dummy regressor
n1 <- length(y[x1==1]) # the no. of y's belonging to T_i=1
n0 <- length(y[x1==0]) # the no. of y's belonging to T_i=0
reg <- lm(y~x1)
prepackaged <- coeftest(reg, vcov = vcovHC(reg, "HC0"))[2,2]^2 # square to get variance instead of standard error
s1.squared <- sum((y[x1==1] - mean(y[x1==1]))^2) # "your" s_1^2
s0.squared <- sum((y[x1==0] - mean(y[x1==0]))^2) # "your" s_0^2
sum.of.variances <- s1.squared/n1^2 + s0.squared/n0^2
X <- cbind(rep(1,n),x1)
matrix.formulation <- (solve(t(X)%*%X) %*% t(X)%*%diag(resid(reg)^2)%*%X %*% solve(t(X)%*%X))[2,2]
isTRUE(all.equal(prepackaged, sum.of.variances)) && isTRUE(all.equal(prepackaged, matrix.formulation))  # all three agree
[1] TRUE | Eicker-Huber-White Robust Variance Estimator
There is a little mistake in your statement, as your $s_t^2$, $t=0,1$, define the sum of squared residuals belonging to the two groups of observations. The formula you refer to (unless your textbook h |
54,856 | Direction of correlation | I believe that a Pearson correlation is not capable of telling you which 'direction' the relationship is.
A Pearson correlation is essentially looking for the tendency of one variable to vary in a certain way with another variable, but can not establish a causal link in that way.
For example, it is possible that having a lot of fun opens someone up to a lot of new experiences (so fun is influencing experience). It is also possible that life experience allows one to enjoy more things as you discover hobbies (so experience is influencing fun). Even a mixture of the two is plausible, or a third variable is in play (maybe rich people have more experience and more fun, but fun and experience are not related to each other at all causally).
I think typically, without resorting to making several probably fairly complicated assumptions about how you think the variables interact, establishing causality would require either an experimental design (or if this is not possible, a quasi-experimental design). A correlational study can't give you this type of information, which is pretty much why we have to run a lot of experiments rather than just mining for correlations everywhere. | Direction of correlation | I believe that a Pearson correlation is not capable of telling you which 'direction' the relationship is.
54,857 | Direction of correlation | The short answer is that you can't use Pearson correlation to determine causality. This is directly related to the popular saying that causation implies correlation, but not vice versa.
To give a reason, I think it's best to look at Wilks' lambda as a test of significance. Briefly, Wilks' lambda can be used as a significance test for the correlation between two multivariate random vectors. Let's say you have two random vectors $\bf X$ and $\bf Y$, each containing features that quantify "fun" and "experience", respectively. The question is whether we can regress $\bf X$ given $\bf Y$. To address this, there are several multivariate tests that measure the "association" or relationship between the two random vectors $\bf X$ and $\bf Y$ (Kshirsagar 1972). Wilks' test statistic (known as Wilks' lambda) is proportional to
$\Lambda \propto \prod_{i=1}^p (1-r_i^2) $
and is fully determined by the sample canonical correlation coefficients $r_i$ between $\bf X$ and $\bf Y$. These can be calculated using canonical correlation analysis, given an equal number of observations from $\bf X$ and $\bf Y$. The distribution of $\Lambda$ determines how likely it is that $\bf Y$ can be used to regress $\bf X$. Alternatively, had we selected $\bf X$ as the regressor, $\Lambda$ would have been the same, since the canonical correlations are symmetric with respect to $\bf X$ and $\bf Y$. Therefore, however certain you are that $\bf X$ can cause $\bf Y$ (as far as correlation is concerned), you're equally certain that $\bf Y$ can cause $\bf X$.
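As a quick illustration of that symmetry (a minimal sketch with simulated data; the matrices and their dimensions are made up, not from the original answer), the canonical correlations, and hence $\Lambda$, come out the same whichever block plays the role of regressor:
set.seed(1)
X <- matrix(rnorm(100 * 3), 100, 3) # hypothetical "fun" features
Y <- matrix(rnorm(100 * 2), 100, 2) # hypothetical "experience" features
prod(1 - cancor(X, Y)$cor^2)        # Wilks' lambda with Y as the "regressor"
prod(1 - cancor(Y, X)$cor^2)        # identical value with the roles swapped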
54,858 | Direction of correlation | You always have to remember that correlation is not causality! No correlation coefficient will tell you the causal relationship of your variables. One way you might probe the direction is to run a linear regression with lagged variables (if you have time series data) and see which variable seems to predict the other based on time. Otherwise only logical reasoning will help.
https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation
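A minimal sketch of such a lagged-variable regression (my example; the series names fun and experience are hypothetical, and both are assumed to be numeric vectors of equal length):
lag1 <- function(z) c(NA, head(z, -1))  # one-step lag, padding the first value with NA
summary(lm(experience ~ lag1(fun)))     # does past fun predict current experience?
summary(lm(fun ~ lag1(experience)))     # or does past experience predict current fun?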
54,859 | Maximum likelihood and Gumbel distribution. Does the likelihood have a global maximum? | I will prove a more general result: if a density is log-concave, then
the log-likelihood of the corresponding location-scale family has a
global maximum. The wanted result then follows,
since the Gumbel density is log-concave.
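(A check I am adding for completeness, not part of the original answer: with the standard Gumbel density $f^\star(y)=\exp(-y-e^{-y})$,
$$\log f^\star(y) = -y - e^{-y}, \qquad \frac{d^2}{dy^2}\,\log f^\star(y) = -e^{-y} < 0,$$
so the Gumbel density is indeed log-concave.)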
Consider a univariate density $f^\star(y)$ which is log-concave and
smooth on the real line; the parameterized density using the location $\mu$ and the
scale $\alpha >0$ is
$$
f(x;\,\mu,\,\alpha) := \frac{1}{\alpha} \, f^\star\left(\frac{x - \mu}{\alpha}\right).
$$
We can use the following alternative parameter vector $[\nu,\,\beta]$
with $\beta >0$
$$
\beta := 1 / \alpha, \qquad \nu := - \mu / \alpha.
$$
We have a one-to-one smooth transformation $[\mu,\,\alpha] \mapsto [\nu,\,\beta]$. Using the parameter vector $[\nu,\,\beta]$, the density at $x$ can be written $\beta\,f^\star(\beta x + \nu)$, and the log-likelihood for a sample $X_i$ is
$$
\log L = \sum_{i=1}^N \log\{ \beta \, f^\star(\beta X_i + \nu) \}.
$$
Each summand is $\log \beta$ plus the composition of the concave function $\log f^\star$ with the affine map $[\nu,\,\beta] \mapsto \beta X_i + \nu$, so the log-likelihood is a concave function of the vector
$[\nu,\,\beta]$, and hence a global maximum exists (possibly for
infinite $\nu$ or $\beta$). But this implies that a global maximum
exists as well for the location-scale parameterization
$[\mu,\,\alpha]$.
54,860 | Maximum likelihood and Gumbel distribution. Does the likelihood have a global maximum? | Here is the (log-)likelihood surface in $(u,\alpha)$ for a sample of 1,000 points from a standard Gumbel:
As you can see, the mode is located near $(0,1)$ which is the true value of the parameter. The graph was made by the following R code
library(VGAM)
obs=rgumbel(1e3)
loca=seq(min(obs),max(obs),le=1e2)
scala=seq(.1*sd(obs),10*sd(obs),le=1e2)
like=matrix(0,1e2,1e2)
for (i in 1:1e2)
for (j in 1:1e2)
like[i,j]=sum(dgumbel(x=obs,loc=loca[i],scal=scala[j],log=TRUE))
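The loop above fills the matrix like but stops short of drawing it; one simple way to display the surface (my addition, not necessarily how the original figure was produced) is:
contour(loca, scala, like, xlab = "location", ylab = "scale")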
54,861 | Binary logistic regression: Interpreting odds ratio vs. comparing predictive probabilities | We often interpret the odds ratio like you did: $\newcommand{\Odds}{\text{Odds}}\newcommand{\event}{\text{event}}\newcommand{\noevent}{\text{no event}}$
people with an x score of 1 are 12 times as likely as people with an x score of 0 to have a positive outcome on y
But likely is purposefully vague here so as not to confuse non-statisticians.
What we really mean is
people with an x score of 1 have an odds 12 times greater than people with an x score of 0 of having a positive outcome on y
As ttnphns showed in the comments, this is expressed as
$$ \frac{\Odds(\event|x=1)}{\Odds(\event|x=0)} = \frac{\frac{p(\event|x=1)}{p(\noevent|x=1)}}{\frac{p(\event|x=0)}{p(\noevent|x=0)}} $$
Basically, this is the difference between interpreting an odds ratio and a relative risk. (See here for more explanation.)
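To see why the distinction matters, here is a small worked example with made-up numbers (not from the original answer): suppose $p(\text{event}\mid x=0)=0.5$, so the odds in that group equal $1$. An odds ratio of $12$ then gives odds of $12$ in the $x=1$ group, i.e.
$$p(\text{event}\mid x=1)=\frac{12}{1+12}\approx 0.92,$$
so the relative risk is only about $0.92/0.5 \approx 1.8$, far from being "12 times as likely".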
54,862 | Interpretation of modes in periodogram | My apologies in advance, I wrote a bit of a novel.
tl;dr
1) Likely there is a mean or trend in the data. This is usually represented at the zero-frequency.
2) Yes, you could say there's a periodic component at this point with a period of 200 time units (seconds).
3) Yes, you can do this; it depends on the method. If you used spec.pgram() then:
pgram <- spec.pgram(yourDataVector)
new.df <- data.frame(x = pgram$freq, y = pgram$spec)
But, for the love of unbiased estimators, please don't use the periodogram: there are TONNES of statistically better methods, and Rayleigh was already commenting on the periodogram's poor properties around 1905. (Side note: spec.pgram doesn't actually calculate the raw periodogram but gives you a "direct estimate" of the spectrum using a 10% cosine taper - there are still better methods in terms of bias and variance.)
Full version:
1) The frequency at f = 0 or in the first frequency bin indicates that your data likely has a trend or mean that has not been removed. Also, the very low frequency signal near f = 0 doesn't actually look like it's at f = 0. It looks to me like there is a big peak off zero that could be interesting. It might be worthwhile just plotting the spectrum in the band f = 0 to f = 0.002 (you'd have to adjust based on what it looks like, of course).
2) Yes, you could say that there is a periodic component in your data with a period of 200 seconds that is contributing the most variance to your original time series. This is only really valid for frequencies above f = 0.0015 (ish), as there is certainly some low-powered signal here as well. But I think that would be a hard sell, as there are many frequencies near f = 0.005 that have similar power; a harmonic F-test (below) could help with this.
3) Yes, you can certainly do this except I am unsure of what you are using to calculate the periodogram (spec.pgram() ?).
And now, I would like to advocate for a theoretically and empirically better method if you will allow me. The periodogram is, in practice, a badly biased estimate and is also inconsistent (variance doesn't converge to zero as sample size increases).
Since you are using R already, the multitaper package implements a spectrum estimator that uses a set of orthogonal data tapers that optimally concentrate power in the band (-W, W) around the frequency of interest - in other words, you're actually estimating what you think you're estimating (low bias).
Because you are using multiple tapers, you can average the spectrum estimates from each taper which reduces the variance of your estimate (consistency).
Lastly, the package implements adaptive weighting which provides excellent balance between bias and variance.
Using the multitaper package is exceptionally easy and I will answer question 3) at the same time:
install.packages("multitaper")
library(multitaper)
s <- spec.mtm(yourDataVector, deltat = yourSamplingRate, nw = 6, k = 10, Ftest = TRUE)
s.df <- data.frame(x = s$freq, y = s$spec)
plot(s$freq, s$mtm$Ftest, type='l')
NOTE: spec.mtm() will remove a constant mean from the data unless you set the argument centre = "none" .
Okay, so the above is pretty straightforward. "nw" and "k" are the time-bandwidth parameter (setting the band (-W, W)) and the number of tapers, respectively. Normally you would set nw somewhere between 3 and 15 - it depends on how closely spaced you think modes / periodic components in your data would be. The number of tapers, k, should ALWAYS be an integer with k <= 2*nw (the higher order tapers are undesirable for the most part in this context).
The harmonic F-test (which we plot here) is basically testing whether the variance contributed by a frequency matches the expected under the null hypothesis of a white noise process. It follows an F-distribution with (2, 2*k-2) degrees of freedom. Therefore, you can detect significant frequencies, regardless of their spectral amplitude, using this method. A good "rule of thumb" is to set your confidence level to 1 - 1/N (where N is the number of samples). Or you could just set it to whatever you want. To get the critical value in R:
qf(1 - 1/N, 2, 2*s$mtm$k - 2)
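Using the same accessors as in the code above, one way (my sketch, assuming s and N are defined as described) to list the frequencies whose F-statistic exceeds that critical value is:
Fcrit <- qf(1 - 1/N, 2, 2*s$mtm$k - 2)
s$freq[s$mtm$Ftest > Fcrit]  # frequencies flagged by the harmonic F-test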
Lastly, in my experience, it is usually a good idea to plot your spectra on log-y axis as the high power frequencies will wash out everything else.
plot(s$freq, s$spec, type='l', log = 'y')
for example in base-R graphics.
D. J. Thomson - 1982 - "Spectrum estimation and harmonic analysis" is the seminal paper on this method.
54,863 | How to choose a way to obtain maximum likelihood estimation | There is nothing special about MLEs here, since what you're asking really applies to any optimisation problem. In this broader sense, you want to know what methods are desirable to maximise/minimise a function.
Applying standard calculus methods for optimisation, you should be able to get an equation for the critical points of the function and the second-order conditions for optimisation, and this will usually allow you to find the maximising point. Often the maximising point is a critical point, and in this case optimisation involves finding the solution(s) to the critical point equation. In some problems the critical point has an explicit closed-form solution, as in the example in your question. In other problems there is no closed-form solution to the critical point equation, and in this case it is usual to solve this equation via iterative numerical methods (i.e., start at some arbitrary point, and then move closer and closer to the critical point via some iterative algorithm).
The best way to distinguish these cases is simply to try to optimise algebraically, and see if this leads you to an explicit solution, or a case that requires numerical methods. If it is possible to get an explicit closed-form solution for the maximising value then you are finished, and no numerical methods are required. If you require numerical methods to find the solution to the critical point equation, then you can either form this iterative process algebraically, or rely on a package such as bbmle in R. At the end of the day, optimisation requires familiarity with calculus; with experience you will be able to anticipate which problems give closed-form solutions and which will require iterative methods.
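A minimal sketch of the iterative route (my example, using base R's optim() rather than bbmle, and assuming x is a vector of positive observations modelled, say, as Gamma, for which the shape estimate has no closed form):
negll <- function(par, x) -sum(dgamma(x, shape = par[1], rate = par[2], log = TRUE))
fit <- optim(c(1, 1), negll, x = x, method = "L-BFGS-B", lower = 1e-6)  # iterate from an arbitrary start
fit$par  # numerical maximum likelihood estimates of shape and rate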
54,864 | Coordinate prediction parameterization in object detection networks | The parametrization seems to originate from the R-CNN paper, Girshick et al., 2013: Rich feature hierarchies for accurate object detection and semantic segmentation. Note that SSD is also using this parametrization (see Eq. (2) in the paper).
Using this parametrization, the size of a bounding box is computed as $w=w_a\exp(t)$, where $w_a$ is the size of the anchor box and $t$ is the network output. This parametrization has some (nice) properties:
Predicted bounding box will always have positive size
If $t=0$, size of the predicted box is the same as the anchor box
Values $t<0$ shrink the bounding box "slowly" (large decrease in prediction is small decrease in size)
Values $t>0$ expand the bounding box "fast" (small increase in prediction is large increase in size)
The first property is very useful. It is hard to say if/how much the rest makes the optimization easier, but it seems to work nicely, since this is the de facto standard parametrization used in object detection.
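A quick numeric check of that asymmetry (my illustration, assuming an anchor width of 100 pixels):
w_a <- 100
round(w_a * exp(c(-1, 0, 1)), 1)  # 36.8, 100.0, 271.8: shrinking is gentler than expanding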
54,865 | Random forest - short text classification | Generally speaking, in a machine learning approach, it is recommended to test several models regardless of their theoretical performance, because their accuracy depends on the training dataset. True enough, a couple of algorithms are generally preferred for text classification (SVM, Naive Bayes, multinomial regressions) for various reasons such as linear separation or the curse of dimensionality (see this paper). However, nothing prevents you from trying a random forest and examining its performance.
Now to answer your questions.
Generally speaking, regardless of article, can random forest be used effectively for short text classification when the feature space is large ? It seems to me that there may be a lot of weak classifiers due to large feature space and only handful of features present in data to classify.
Given the nature of random forests (a bagged ensemble of decision trees), it is true that you may come up with a rather weak classifier, especially if only a couple of features are truly significant for determining the outcome. A boosting decision tree may be preferred (such as AdaBoost or gradient boosting) to weight relevant predictors.
However, keep in mind that in the case of text classification, a preprocessing phase is required to get either your TF or TF-IDF matrix, through which you have already made a selection of pertinent features. Potentially, all features are relevant in this matrix, so the random forest may be performant when you predict your outcome.
How to handle unseen words with random forest(not present in training set)? Should they be simply stripped from input or some technique similar to Laplace smoothing should be used ?
Since you are using word frequency as a feature, you have already defined the grid over which you will be evaluating the outcome of your message. Unseen words will be filtered and stripped in your algorithm because you will be evaluating if the words in your test set are present/absent in your TF matrix.
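As a tiny illustration of that filtering (my example; the vocabulary and message are made up), counting a new message against a fixed training vocabulary silently drops anything unseen:
vocab <- c("free", "win", "call")            # vocabulary fixed by the training TF matrix
msg   <- c("win", "a", "free", "xylophone")  # a new message containing unseen words
table(factor(msg, levels = vocab))           # unseen words are simply dropped from the counts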
If somebody perhaps read this short article, maybe could explain how author represented features for random forest ?
The author uses the scikit-learn library from Python. I conclude that the dataset was most likely a numerical dataframe (categorical values are typically converted into logical dummy booleans before modelling). From my understanding, the author used the following features:
Filtered words, based on their frequency ($>5$ and $<500$), and more specifically, if they are absent/present in the message.
Message length (number of characters in message), later transformed into five categorical booleans (message length $\le 40$, $\le 60$, ...)
Occurrences of character \$
Number of numeric strings
Presence/absence of non-alphabetic characters
54,866 | How to scale for SelectKBest for feature selection | I think the problem is that you're using the chi2 scoring function. If you instead use the f_classif scoring function, then there will not be any errors due to having negative values in your dataset. So if you want to use chi2, then you would need to transform your data somehow to get rid of negatives (you could normalize it so that all the values fall between 0 and 1, or you could set your minimum value to 0, or do countless things to remove the negatives). If you're already using some sort of normalized values such as z-scores and therefore don't want to do any more normalization, then you should consider using the ANOVA (f_classif) scoring function for your feature selection.
So essentially, to answer the question directly, additional scaling to get rid of negatives may not be necessary for selecting features from a dataset. If you are using z-score normalization or some other normalization that produces negatives (maybe your data falls between -1 and +1), you could just use the f_classif scoring function, which does not require non-negative values.
As one example of how you can make the data scale to use chi2: When I've used the chi2 scoring function in sklearn, I start with data that are not normalized at all. I then normalize the data so that it falls between 0 and 1 very simply by doing this:
normed_data= (data - data.min(0)) / data.ptp(0)
Here, data.min(0) returns the minimum value of each data column and data.ptp(0) returns the range of each data column. So normed_data ends up being a matrix where every column has been independently normalized to fall within the range of [0, 1].
54,867 | How to scale for SelectKBest for feature selection | You can use MinMaxScaler. By default, it will scale the data within the range $[0,1]$:
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(X)
X_scaled = scaler.transform(X)  # or, equivalently, scaler.fit_transform(X) in a single step
54,868 | Are observations independent in bootstrapped resamples? | Think of it this way:
You have a population of individuals. You select an individual at random from the population, measure his weight and return him to the population. You then select a second individual at random from the population, measure his weight and return him to the population. You continue this process until you end up with a set of 10 measured weights.
Two of the 10 sampled individuals give you the same weight of 70kg. Would you conclude that the two observations of 70kg are dependent just because they are the same? Not at all - each selection/draw from the population is independent of the others (i.e. the fact that you get a weight of 70kg in the first selection/draw does not influence in any way the result you may get in other draws).
Don't confuse the value of the draw with the random mechanism used to guarantee that the draws are independent of each other. Two independent draws can produce the same value - that doesn't alter the fact that they are independent.
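A quick simulation (my sketch, not from the original answer) makes this concrete: resample pairs with replacement from one fixed set of weights many times, and the first and second draws turn out to be uncorrelated.
x <- rnorm(10, mean = 70, sd = 5)                      # the fixed sample of 10 weights (hypothetical)
draws <- replicate(1e4, sample(x, 2, replace = TRUE))  # many independent pairs of draws
cor(draws[1, ], draws[2, ])                            # close to 0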
Edit:
In a repeated-measures study, you can select a random sample of subjects and then measure each subject several times on some outcome variable (e.g., blood pressure). The values of the outcomes corresponding to different subjects will be independent of each other. However, within each subject, the values of the outcome variable will likely not be independent of each other. That is because all of these values would be affected by a shared set of factors - some observed, some unobserved - corresponding to that subject (e.g., age, sex).
54,869 | Are observations independent in bootstrapped resamples? | Independence is a property of a collection of random variables defined on the same probability space. Whether or not your bootstrap samples are independent depends on which random variables you are considering to be underlying your samples.
Consider the random experiment:
Toss a coin twice, write down the outcomes, and sample twice with replacement from the outcomes.
With $\Omega_1:=\{0,1\}$ and $\Omega_2:=\{1,2\}$ you can model this random experiment with random variables on the space $$\Omega_3 = \Omega_1 \times \Omega_1 \times \Omega_2 \times \Omega_2.$$
With the random variables $$X_1:\Omega_3\to \mathbb{R},\ (a,b,c,d)\mapsto a,$$ $$X_2:\Omega_3\to \mathbb{R},\ (a,b,c,d)\mapsto b,$$ $$X_1^*:\Omega_3\to \mathbb{R},\ (a,b,c,d)\mapsto X_c(a,b,c,d),$$ $$X_2^*:\Omega_3\to \mathbb{R},\ (a,b,c,d)\mapsto X_d(a,b,c,d).$$ Here $X_1^*$ and $X_2^*$ are not independent.
However, if you consider the random experiment:
Given the outcomes of your coin toss, sample twice with replacement
and you model $X_1^*$ and $X_2^*$ to be the first and second sample, then they are independent.
54,870 | Are observations independent in bootstrapped resamples? | I'm the OP. The answers from others were very good, and I have accepted one of them. I am also answering here to unite what I have learned.
Yes. In this context, the observations are independent. I believe the key issue is whether we are conditioning on cluster membership.
A formal answer
Above, I tried to check whether independence holds from the definition. But the following statement was wrong:
$$f_{Y^{*}_i | Y^{*}_j}( y_i | y_j ) = 1\{y_i = y_j\} \ne f_{Y^{*}_i}( y_i )$$
because it holds only for those $i,j$ for which we know that they come from the same observation in the original dataset. Thus, I had neglected to condition on a critical piece of information. Let $R_i$ be the index of the row in the original dataset from which we drew $Y^{*}_i$. Then the correct way to state the above is:
$$f_{Y^{*}_i | Y^{*}_j, R_i, R_j}( y_i | y_j, r_i=r_j ) = 1\{y_i = y_j\} \ne f_{Y^{*}_i}( y_i )$$
However, without conditioning on $R_i, R_j$, the observation that $Y_j^{*}=y_j$ carries no information about $Y_i^{*}$, since we don't know that they came from the same row in the original data. Thus:
$$f_{Y^{*}_i | Y^{*}_j}( y_i | y_j ) = f_{Y^{*}_i}( y_i )$$
so independence holds.
A heuristic analog
Here is an analog to the more familiar notions of non-independence arising from the sampling mechanism. A classic way for non-independence to arise is through cluster-sampling, such as sampling schools and then sampling students within schools. This sampling scheme effectively "fixes" or conditions on cluster membership because we choose schools and then choose students within schools.
The critical analog to the bootstrapping independence is that if we instead sampled directly from the population of all students (rather than sampling schools), then the observations are independent whether we draw students from the same school or not. That's because we are no longer "conditioning" on cluster. Analogously, in the bootstrapping world, the with-replacement sampling mechanism means we aren't conditioning on cluster; every observation has an equal chance of being sampled.
(I think the analog to the first example regarding cluster-sampling would be if we first chose a sample of observations in the original dataset, and THEN repeated each chosen row a number of times to create the resample. That, I believe, would lead to non-independent observations.)
54,871 | Expected number of wins until $k$ consecutive wins | The answer is the expected number of games times the expected number of wins per game.
It would be reasonable to doubt this result: sure, it might be close; but couldn't it miss the mark slightly due to the fact that the game ends (by design) with a large streak of wins? Let's find a rigorous way to address this question.
Because the question is only about the number of wins, ignore all losses. The course of the game amounts to a series of runs of wins. It ends when a run of $k$ or more wins is observed.
This description is equivalent to the following more general situation. Let $f$ give the probabilities of any discrete distribution on the positive integers $\{1,2,3,\ldots\},$ let $X_i \sim_{iid} f,\ i=1, 2, 3, \ldots$ be a sequence of independent random variables with that distribution, define $\tau = \min\{i\mid X_i\ge k\}$ to be the stopping time for this process, and set
$$X = \sum_{i=1}^\tau \min(k, X_i) = X_1 + X_2 + \cdots + X_{\tau-1} + k.$$
We seek the value of $e = \mathbb{E}(X),$ the total of the $X_i$ until the process is stopped, counting only $k$ for the value of $X_\tau.$
There is a simple and obvious recursion: either $X_1 \ge k$ and the game stops or else we win $X_1$ games and everything starts all over again. Writing $\pi_i=f(i)$ for $i=1,2,\ldots, k-1$ and
$$\pi_k = f(k) + f(k+1) + f(k+2) + \cdots = 1 - (\pi_1 + \pi_2 + \cdots + \pi_{k-1})$$
for the chance of a run of $k$ or more wins, we stop the game and collect $k$ more wins with probability $\pi_k$. Otherwise we collect $i$ wins and restart the game from the beginning with probabilities $\pi_i,\ i=1, 2, \ldots, k-1.$ Equating the expectations gives
$$e = k\pi_k + \sum_{i=1}^{k-1} \pi_i(i + e).$$
The unique solution of this linear equation for $e$ is
$$e = \frac{1}{\pi_k} \sum_{i=1}^k i \pi_i.$$
In the situation given by the problem, $\pi_i \propto p^i$ for $i\lt k$ and
$$\pi_k \propto p^k + p^{k+1} + p^{k+2} + \cdots = \frac{p^k}{1-p}.$$
The sum of all $k$ values is $p/(1-p),$ whence
$$\eqalign{
\pi_i &= (1-p)p^{i-1}, & i=1, 2, \ldots, k-1; \\
\pi_k &= p^{k-1}.&
}$$
This implies
$$e = p^{1-k} \frac{1-p^k}{1-p},$$
exactly $p$ times the expected number of games.
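A short simulation (my addition, with arbitrary choices of p and k) agrees with this formula:
p <- 0.6; k <- 3
wins.until.streak <- function(p, k) {
  wins <- 0; run <- 0
  while (run < k) { if (runif(1) < p) { wins <- wins + 1; run <- run + 1 } else run <- 0 }
  wins
}
mean(replicate(1e5, wins.until.streak(p, k)))  # simulated expected number of wins
p^(1 - k) * (1 - p^k) / (1 - p)                # theoretical value, about 5.44 here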
54,872 | Expected number of wins until $k$ consecutive wins | The fact that it is the win probability $p$ times the expected number of games is a simple application of the optional stopping theorem
Let
$X_n$ be the total number of wins at round $n$. The recentered random variable $Y_n = X_n - p\ n$ is a martingale.
$\tau$ be the time till you win $k$ consecutive games. $\tau$ is obviously a stopping time.
You proved that $\mathbb E[\tau]$ is finite, and the increments of $Y_n$ are obviously bounded, so the conditions of case (b) in the Wikipedia article on the optional stopping theorem hold. The optional stopping theorem then tells us that:
$$ \mathbb E[Y_\tau] = \mathbb E[Y_0] = 0$$ in our case. And thus:
$$ \mathbb E[X_\tau - p \ \tau] = 0$$
$$ \mathbb E[X_\tau] = p \ \mathbb E[\tau]$$
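(A direct simulation in R illustrating this identity; $p=0.6$ and $k=3$ below are arbitrary example values.)
set.seed(2)
p <- 0.6; k <- 3                       # arbitrary win probability and streak length
play <- function() {                   # one game: stop at k consecutive wins
  wins <- games <- streak <- 0
  while (streak < k) {
    games <- games + 1
    if (runif(1) < p) { wins <- wins + 1; streak <- streak + 1 } else streak <- 0
  }
  c(wins = wins, games = games)
}
res <- replicate(2e4, play())
c(mean_wins = mean(res["wins", ]), p_times_mean_games = p * mean(res["games", ]))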
Just plug-in the expected value of the stopping time you computed, and you have the result. | Expected number of wins until $k$ consecutive wins | The fact that it is the win probability $p$ times the expected number of games is a simple application of the optional stopping theorem
Let
$X_n$ be the total number of wins at round $n$. The recente | Expected number of wins until $k$ consecutive wins
The fact that it is the win probability $p$ times the expected number of games is a simple application of the optional stopping theorem
Let
$X_n$ be the total number of wins at round $n$. The recentered random variable $Y_n = X_n - p\ n$ is a martingale.
$\tau$ be the time till you win $k$ consecutive games. $\tau$ is obviously a stopping time.
You proved that $\mathbb E[\tau]$ is finite, and the increments of $Y_n$ are obviously bounded, so the conditions of case (b) in the Wikipedia article on the optional stopping theorem hold. The optional stopping theorem then tells us that:
$$ \mathbb E[Y_\tau] = \mathbb E[Y_0] = 0$$ in our case. And thus:
$$ \mathbb E[X_\tau - p \ \tau] = 0$$
$$ \mathbb E[X_\tau] = p \ \mathbb E[\tau]$$
Just plug-in the expected value of the stopping time you computed, and you have the result. | Expected number of wins until $k$ consecutive wins
The fact that it is the win probability $p$ times the expected number of games is a simple application of the optional stopping theorem
Let
$X_n$ be the total number of wins at round $n$. The recente |
54,873 | If you're only interested in predicting $Y$ when $X>n$, should you use all $X$ in a regression anyway? | The point of regression is to learn from data points with other predictor values than those for which you want to make a prediction. So, the fact that you are not interested in predicting for certain predictor values is no reason in itself to restrict your dataset in that way.
Consider a more extreme example: Say you assume a linear model and want to predict $Y$ for $X=60$, given the data:
Would you rather go with the $Y$ value of your green data point at $60$, or with the value of the regression line? If you choose to restrict your data to what you want to predict, then you would go with the green dot, undermining the point of doing regression, and overfitting as a result.
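(A toy version of this picture in R; the simulated data, the cutoff at $X>50$ and the prediction point $X=60$ are made up purely for illustration.)
set.seed(3)
d <- data.frame(x = runif(200, 0, 60))
d$y <- 2 + 0.5 * d$x + rnorm(200, sd = 5)        # made-up noisy linear data
fit_all <- lm(y ~ x, data = d)                   # fit on every observation
fit_sub <- lm(y ~ x, data = d, subset = x > 50)  # fit only on the region of interest
predict(fit_all, newdata = data.frame(x = 60))   # stable, close to the true mean of 32
predict(fit_sub, newdata = data.frame(x = 60))   # much noisier -- closer to "going with the green dot"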
You should use all your data, as long as you believe the data satisfies the model assumptions. Even if you perhaps have enough data with $X>n$, what if you still want to predict for $X=n+1$? You would prefer to also include values $X\leq n$ in your data.
If including a subset of your data has as a consequence that your model assumptions are violated, then it can be better not to include it.
There are a couple of convenient plots, involving the fitted values, residuals, number of observations, etc., that help you to check whether the model assumptions are violated. Here is a guide on what the model assumptions of the linear model are, how to check them, and how to transform your data to possibly fix violations.
So, for your case, I would check whether the data satisfies your model assumptions for $X\leq n$ and decide what to do taking that into account. | If you're only interested in predicting $Y$ when $X>n$, should you use all $X$ in a regression anywa | The point of regression is to learn from data points with other predictor values than those for which you want to make a prediction. So, the fact that you are not interested in predicting for certain | If you're only interested in predicting $Y$ when $X>n$, should you use all $X$ in a regression anyway?
The point of regression is to learn from data points with other predictor values than those for which you want to make a prediction. So, the fact that you are not interested in predicting for certain predictor values is no reason in itself to restrict your dataset in that way.
Consider a more extreme example: Say you assume a linear model and want to predict $Y$ for $X=60$, given the data:
Would you rather go with the $Y$ value of your green data point at $60$, or with the value of the regression line? If you choose to restrict your data to what you want to predict, then you would go with the green dot, undermining the point of doing regression, and overfitting as a result.
You should use all your data, as long as you believe the data satisfies the model assumptions. Even if you perhaps have enough data with $X>n$, what if you still want to predict for $X=n+1$? You would prefer to also include values $X\leq n$ in your data.
If including a subset of your data has as a consequence that your model assumptions are violated, then it can be better not to include it.
There are a couple of convenient plots, involving the fitted values, residuals, number of observations, etc., that help you to check whether the model assumptions are violated. Here is a guide on what the model assumptions of the linear model are, how to check them, and how to transform your data to possibly fix violations.
So, for your case, I would check whether the data satisfies your model assumptions for $X\leq n$ and decide what to do taking that into account. | If you're only interested in predicting $Y$ when $X>n$, should you use all $X$ in a regression anywa
The point of regression is to learn from data points with other predictor values than those for which you want to make a prediction. So, the fact that you are not interested in predicting for certain |
54,874 | If you're only interested in predicting $Y$ when $X>n$, should you use all $X$ in a regression anyway? | If there is reason to think that a relationship between $X$ and $Y$ holds over the entire range of $X$ (possibly after transforming $X$), then yes, you should use all your data. This may hold, for instance, in physical systems.
However, if there is no such constant relationship, then using the full range of $X$ will lull you into a false sense of security.
This is extrapolation. Extrapolation can work fine, especially if you do not extrapolate "far out" (e.g., you observe $0\leq X\leq 100$ and want to predict $Y$ for $X=102$), but can go horribly wrong.
I would recommend that you try both approaches in a cross validation and see which one yields lower out-of-sample errors. | If you're only interested in predicting $Y$ when $X>n$, should you use all $X$ in a regression anywa | If there is reason to think that a relationship between $X$ and $Y$ holds over the entire range of $X$ (possibly after transforming $X$), then yes, you should use all your data. This may hold, for ins | If you're only interested in predicting $Y$ when $X>n$, should you use all $X$ in a regression anyway?
If there is reason to think that a relationship between $X$ and $Y$ holds over the entire range of $X$ (possibly after transforming $X$), then yes, you should use all your data. This may hold, for instance, in physical systems.
However, if there is no such constant relationship, then using the full range of $X$ will lull you into a false sense of security.
This is extrapolation. Extrapolation can work fine, especially if you do not extrapolate "far out" (e.g., you observe $0\leq X\leq 100$ and want to predict $Y$ for $X=102$), but can go horribly wrong.
I would recommend that you try both approaches in a cross validation and see which one yields lower out-of-sample errors. | If you're only interested in predicting $Y$ when $X>n$, should you use all $X$ in a regression anywa
If there is reason to think that a relationship between $X$ and $Y$ holds over the entire range of $X$ (possibly after transforming $X$), then yes, you should use all your data. This may hold, for ins |
54,875 | Why GLM Poisson model predict negative value for count data? | The Poisson GLM fits a model $y_i \sim \text{Pois}(\mu_i)$ with $\log(\mu_i) = x_i^\top \beta$, i.e., a log links the expectation $\mu_i$ to the so-called "linear predictor" $x_i^\top \beta$, often denoted $\eta_i$ in the GLM literature. Hence, at least two types of predictions may be of interest based on the coefficient estimates $\hat \beta$: the predicted link $\hat \eta_i = x_i^\top \hat \beta$ and the predicted expectation $\hat \mu_i = \exp(\hat \eta_i) = \exp(x_i^\top \hat \beta)$. The latter are typically of more interest in applications while the former are often employed in (diagnostic) graphics because they are on a linear scale.
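(A minimal sketch with simulated data; the data and the object name model1 are made up to mirror the question, and the two type arguments are described in the next paragraph.)
set.seed(4)
d <- data.frame(x = rnorm(100))
d$y <- rpois(100, lambda = exp(0.5 + 1 * d$x))   # simulated toy Poisson data
model1 <- glm(y ~ x, data = d, family = poisson)
head(predict(model1, type = "link"))       # eta-hat = x'beta-hat: can be negative
head(predict(model1, type = "response"))   # mu-hat = exp(eta-hat): always positive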
In R, both types of predictions are readily provided for glm objects as predict(model1, type = "link") (the default) and predict(model1, type = "response"), respectively. The former is employed in the graphical displays from plot(model1). | Why GLM Poisson model predict negative value for count data? | The Poisson GLM fits a model $y_i \sim \text{Pois}(\mu_i)$ with $\log(\mu_i) = x_i^\top \beta$, i.e., a log links the expectation $\mu_i$ to the so-called "linear predictor" $x_i^\top \beta$, often de | Why GLM Poisson model predict negative value for count data?
The Poisson GLM fits a model $y_i \sim \text{Pois}(\mu_i)$ with $\log(\mu_i) = x_i^\top \beta$, i.e., a log links the expectation $\mu_i$ to the so-called "linear predictor" $x_i^\top \beta$, often denoted $\eta_i$ in the GLM literature. Hence, at least two types of predictions may be of interest based on the coefficient estimates $\hat \beta$: the predicted link $\hat \eta_i = x_i^\top \hat \beta$ and the predicted expectation $\hat \mu_i = \exp(\hat \eta_i) = \exp(x_i^\top \hat \beta)$. The latter are typically of more interest in applications while the former are often employed in (diagnostic) graphics because they are on a linear scale.
In R, both types of predictions are readily provided for glm objects as predict(model1, type = "link") (the default) and predict(model1, type = "response"), respectively. The former is employed in the graphical displays from plot(model1). | Why GLM Poisson model predict negative value for count data?
The Poisson GLM fits a model $y_i \sim \text{Pois}(\mu_i)$ with $\log(\mu_i) = x_i^\top \beta$, i.e., a log links the expectation $\mu_i$ to the so-called "linear predictor" $x_i^\top \beta$, often de |
54,876 | Theoretical justification for training a multi-class classification model to be used for multi-label classification | A softmax output layer does not seem to make sense. The total probability of all classes would then be coerced to sum to 1. This does not make sense in a multi-label setting. Using a sigmoid instead would seem more logical (allows multiple classes to have high probability e.g. close to 1).
Perhaps what is being done in steps 2, 3 and 4 is a consequence of having to compensate for having used a softmax activation function for the output layer? At least the bit in step 2 ensures that this matches up with a softmax activation. I am not so sure that this evens out over epochs: each category will occur less frequently in the created training data than is really the case in the training data and you throw away all information on what categories tend to occur together (unless you have so little data that there is a concern about overfitting to that?!). Additionally, I assume you would get better performance (=not speed-wise, but from the prediction perspective) if you did have the multiple labels in each step. I find it hard to believe that this really costs that much in speed and I would assume that it would improve your predictions to use all the labels at all times.
In short, I see some reasons for why what you describe could go wrong and assume (without really knowing) that some of the contortions in the approach try to compensate for some of these. I do not know enough and have not tried this, so I cannot say how successful this would be. I conjecture that the correlation between the multiple labels is something that this approach would not capture.
Personally, I would be tempted to just do this as a proper multi-label prediction with a final dense layer with a number of units equal to the number of classes (i.e. encoded as e.g. 1 0 0 0 1 0 0 1 0 0 ... if an item falls into the 1st, 4th and 8th class) and a sigmoid activation (using e.g. binary crossentropy as the loss function). I believe this is the standard approach normally recommended for this situation. | Theoretical justification for training a multi-class classification model to be used for multi-label | A softmax output layer does not seem to make sense. The total probability of all classes would then be coerced to sum to 1. This does not make sense in a multi-label setting. Using a sigmoid instead w | Theoretical justification for training a multi-class classification model to be used for multi-label classification
A softmax output layer does not seem to make sense. The total probability of all classes would then be coerced to sum to 1. This does not make sense in a multi-label setting. Using a sigmoid instead would seem more logical (allows multiple classes to have high probability e.g. close to 1).
Perhaps what is being done in steps 2, 3 and 4 is a consequence of having to compensate for having used a softmax activation function for the output layer? At least the bit in step 2 ensures that this matches up with a softmax activation. I am not so sure that this evens out over epochs: each category will occur less frequently in the created training data than is really the case in the training data and you throw away all information on what categories tend to occur together (unless you have so little data that there is a concern about overfitting to that?!). Additionally, I assume you would get better performance (=not speed-wise, but from the prediction perspective) if you did have the multiple labels in each step. I find it hard to believe that this really costs that much in speed and I would assume that it would improve your predictions to use all the labels at all times.
In short, I see some reasons for why what you describe could go wrong and assume (without really knowing) that some of the contortions in the approach try to compensate for some of these. I do not know enough and have not tried this, so I cannot say how successful this would be. I conjecture that the correlation between the multiple labels is something that this approach would not capture.
Personally, I would be tempted to just do this as a proper multi-label prediction with a final dense layer with a number of units equal to the number of classes (i.e. encoded as e.g. 1 0 0 0 1 0 0 1 0 0 ... if an item falls into the 1st, 4th and 8th class) and a sigmoid activation (using e.g. binary crossentropy as the loss function). I believe this is the standard approach normally recommended for this situation. | Theoretical justification for training a multi-class classification model to be used for multi-label
A softmax output layer does not seem to make sense. The total probability of all classes would then be coerced to sum to 1. This does not make sense in a multi-label setting. Using a sigmoid instead w |
54,877 | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs? | Concerning the Type I and Type II error comparison, a die roll of an unbiased die yields an outcome from a discrete uniform distribution. To test a candidate die for bias, one has to assume a type of bias distribution that is a discrete nonuniform distribution. Since there are several different ways of biasing a die, there is no unique distribution shape for a biased die. The method presented here is perfectly general in that it can be applied to any observable die bias (some bias types can be hidden), indeed, to any discrete problem type, and using binning, to some continuous distribution problems. Also, there are several allowed statistical tests for bias that could be compared for power for the biased die problem. None of these allow for the usual normal distribution assumption to be applied to dice. However, the method here is a data treatment for comparing a known distribution with an unknown one, such that $\beta$ cannot be assigned a priori.
A die has one of two official number patterns with opposite sides adding to 7; (1) the 1,2,3 vertex and the 4,5,6 vertex numbers increase clockwise, (2) numbers$-$same vertices$-$increase counterclockwise. This puts some constraints on the possible shapes for probability mass distribution for a biased die, but the remaining choices are hardly unique, i.e., there are many.
Cheating at craps in the US with loaded dice was common, usually by the casino, until the innovation of placing bets with the house as well as against it in 1907. However, even recently, many Las Vegas casinos have been referred to their gaming commission for cheating with biased dice. In some jurisdictions, dice in current casino usage have security features including serial numbers.
Creative commons from https://boingboing.net/2015/07/31/dice.html.
However, in some locations at least, a casino may resist attempts to take any die home, unless it has been invalidated by mutilation.
Although it varies depending on the game and type of bet, the house take averages approximately 5%. I cannot do a full cost analysis without knowing who typically sues whom for what, how much it costs to record a number, $n$, of outcomes, how many dice are tested between lawsuits, what the savings would be for a particular level of exclusion of bias, and what level of bias is worth detecting. Depending on the particulars, both type I and II errors could have associated costs to either party in a lawsuit. As one will see, below, the confidence interval width of insignificantly biased dice proceeds as $\sim\dfrac{1}{\sqrt{n}}$, meaning that our cost for increased confidence for bias detection escalates as $\sim n^2$, when the cost of testing is $\sim n$.
Here are some methods of loading a die from the stack exchange gaming site
1) "Shaved" dice, which are not quite symmetrical, but slightly wider or narrower on one axis than on others. A shaved d6 with, say, the 1–6 axis longer than the others will roll those sides less often, making it "less swingy" than a fair d6 should be (but leaving the average roll unchanged). The name comes from cheaters actually shaving or sanding down dice to flatten them, but cheap dice may have this kind of bias simply due to being poorly made. Other similar biases due to asymmetric shape are also possible, especially in dice with many sides.
2) Uneven (concave / convex) faces may be more or less likely to "stick" to the table, favoring or disfavoring the opposite side. The precise effect may depend on the table material, and on how the dice are rolled. Again, cheap plastic dice can easily have this kind of bias, e.g. due to the plastic shrinking unevenly as it cools after molding. Uneven edges can also create bias, particularly if the edge is asymmetric (i.e. sharper on one side).
3) Actual "loaded" dice, i.e. dice with a center of gravity offset from their geometric center, may occur accidentally due to either bubbles trapped inside the plastic or, more commonly, simply due to the embossed numbers on the sides of the die affecting the balance. In fact, almost all dice, with the exception of high-quality casino dice deliberately balanced to avoid this kind of bias, will likely have it to some small extent.
Others
4) Partially melting die in a microwave,
5) Variable loaded die having lead shot in a solid oil filled center that melts in hand heat.
If I were loading a die myself, I might include a "now it's biased", "now it's unbiased" feature (e.g., using magnets).
A quick test is to float a die in salt water, where the concentration of salt is high enough to just float it, and give it a few flicks. If I were doing this myself, I would x-ray any opaque die. In a casino one can drop a die into a glass of water to see which side lands where, and players and dealers spin a die between two opposite vertices held between their fingers to see which side falls down. Also, so called "dice balancing calipers" are used to test for bias, and are available from gambling supply houses. Nothing will test for all possible methods of 'loading' a die. Keeping in mind that there is no simple test for human ingenuity, the most direct method of bias detection is to roll the die to see how it rolls. Under Las Vegas casino conditions, one would need an actual craps (or dice) table to do this, as casino dice have sharp edges and are not very useful for board games. In summary, one would not actually use statistics for a physical die bias testing, but on line casinos are another problem, and for those, there is no alternative to statistical testing.
One might think that the mean is a sensitive test choice, but not really because 3.5 is not an actual result. One can do sign testing on this, but this will be a lower power test, and like other location tests it would not detect numerically balanced biases. Fisher's exact test would be more interesting and possibly more powerful. If I assume that ordinary t-testing can be used, I will be violating assumptions as a discrete uniform distribution is not the same as a normal distribution. If I persist in doing this, the p-values resulting will be non-normally distributed, e.g., very approximately, a Laplace distribution (it may be a generalized normal distribution, do not know exactly).
(Image of $t$-test probabilities from 10,000 simulated rolls.) What exactly this is is not that interesting because a disadvantage of using the mean value is that it may not detect certain methods of biasing the results. For example, I could make a die so that it would be equally difficult to land on a one and a six, and the average would be unchanged (e.g., shaved die). So, we need something that tests for each of the six number "cells" simultaneously. Chi-square testing is just such a test.
Chi-squared distribution use for die roll testing is routine. This tests the outcome of each result against its expectation for each possible outcome with the least amount of fuss. And a common rule of thumb is to have at least five times as many rolls as there are sides on the die.
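(For comparison with the Mathematica analysis below, the same goodness-of-fit test is essentially a one-liner in R; the 10,000 fair rolls here are simulated.)
set.seed(6)
rolls <- sample(1:6, 10000, replace = TRUE)
chisq.test(table(rolls))        # H0: all six faces equally likely (5 df)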
The derivation of Pearson Chi-squared from the central limit theorem appears here and press the [show] link. An example for d6 testing appears here. An analysis of a simulation in Mathematica of 10,000 rolls (die) with testing appears next.
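(* Simulate nt fair-die rolls, bin the outcomes 1..6, and print the chi-squared p-value together with a Bonferroni-corrected Poisson confidence interval for each bin count. *)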
nt = 10000; die =
RandomVariate[DiscreteUniformDistribution[{1, 6}], nt]; h =
Histogram[die, Automatic, ChartStyle -> {Opacity[0.3]}];
Print["The number of 1's through 6's for 10000 simulated die rolls is ",
bc = BinCounts[die, {1, 7}], " having Chi-squared p = ",
bcdie = BinCounts[die, {1, 7}]; s = Sum[(nt/6 - bc[[j]])^2/(nt/6), {j, 1, 6}];
p = 1 - N[CDF[ChiSquareDistribution[5],s]],
". The Bonferroni corrected Poisson distribution confidence interval is from ",
low = InverseCDF[PoissonDistribution[10000/6], 0.025/6], " to ",
high = InverseCDF[PoissonDistribution[10000/6], 1 - 0.025/6]]
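(* Overlay the Poisson confidence band on the histogram of bin counts. *)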
lh = Plot[{low, high}, {x, 1, 7},
PlotRange -> {{1, 7}, {0, Automatic}}];
Show[lh, h]
The number of 1's through 6's for 10000 simulated die rolls is
{1679,1696,1619,1651,1689,1666} having Chi-squared p = 0.78885. The Bonferroni corrected Poisson distribution confidence interval is from 1560 to 1775.
The confidence intervals from 0.4167% to 99.583% for individual cells are equivalent to 95% CI's for every cell.
This amounts to a $+6.5$% and $-6.4$% spread using a Poisson counting model, suggesting that a $12.9$% total range around 1/6th of the number of rolls would not detect a loaded die on all faces, and any die face showing a count beyond that would suggest significant bias.
Now the 95% confidence interval, expressed as a fraction ($y$) of the expected count, shrinks with the number ($n$) of rolls as $y\sim\dfrac{12.8173}{n^{0.499151}}$, as per the following plot.
This fitted relation is a power function decreasing, suspiciously, in proportion to approximately $1/\sqrt{n}$. Note that the Poisson confidence interval itself, in blue, ratchets for low count rates, not unexpectedly as Poisson counting is a whole number result.
Calculation via simulation of Type I error (false positive rate, FPR) and Type II error (false negative rate, FNR) requires 1) models for biased and unbiased dice 2) varying amounts of bias 3) selection of alpha 4) multiple random outcomes to build up a table.
The model used was to simulate extra weight placed by drilling holes under the dots of the six, giving two bias models: one for which the six accounts for 4/10$^{\text{ths}}$ of the combined one-or-six results (a $\frac{3}{2}$ odds ratio of seeing a one versus a six) and one for which it accounts for 1/3$^{\text{rd}}$ (an odds ratio of two). As the six is opposite the one, we should roll a one 6/10$^{\text{ths}}$ and 2/3$^{\text{rds}}$ of the time that we see either number, respectively.
The biased die on the left panel is one trial of 1000 rolls of the die with a 1/3$^{\text{rd}}$ bias. The 95% and 99% confidence intervals are shown which represent the choices of $\alpha$ of 0.05 and 0.01 respectively. We then repeat this experiment 1000 times to accumulate histogram information, which is next displayed as probability histograms.
This shows the biased die results in orange and the unbiased die results in blue. The histogram categories are conveniently displayed at 0.05 and 0.01 probability widths and scaled for probability so that we can read power (1-FNR = 1 - type II error, in orange) and type I error (FPR, in blue) from the result in the first histogram category. Doing this repeatedly allows us to build up a table for 1000 trials of each level of bias for each type of error and each $\alpha$, shown next. This is done using the Chi-square distribution calculation, as it is more accurate than Poisson for this problem.
$$\left(
\begin{array}{ccccc}
\alpha & \text{Bias} & \text{n} & \text{FPR} & \text{FNR} \\
0.05 & \text{4/10} & 200 & 0.054 & 0.797 \\
0.05 & \text{4/10} & 1000 & 0.043 & 0.199 \\
0.05 & \text{4/10} & 4000 & 0.052 & 0. \\
0.01 & \text{4/10} & 200 & 0.008 & 0.918 \\
0.01 & \text{4/10} & 1000 & 0.008 & 0.375 \\
0.01 & \text{4/10} & 4000 & 0.019 & 0. \\
0.05 & \text{1/3} & 200 & 0.039 & 0.486 \\
0.05 & \text{1/3} & 1000 & 0.044 & 0.001 \\
0.05 & \text{1/3} & 4000 & 0.045 & 0. \\
0.01 & \text{1/3} & 200 & 0.011 & 0.728 \\
0.01 & \text{1/3} & 1000 & 0.017 & 0.004 \\
0.01 & \text{1/3} & 4000 & 0.012 & 0. \\
\end{array}
\right)$$
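(One row of this table can be reproduced with a short simulation in R, shown here as a sketch; on my reading of the bias model above, the 1/3 bias corresponds to face probabilities 2/9 for the one and 1/9 for the six, with the other faces at 1/6, and the results should come out close to the corresponding table row.)
set.seed(7)
n <- 1000; alpha <- 0.05
p_fair   <- rep(1/6, 6)
p_biased <- c(2/9, 1/6, 1/6, 1/6, 1/6, 1/9)      # the "1/3" bias model (my reading)
reject <- function(p) {
  counts <- tabulate(sample(1:6, n, replace = TRUE, prob = p), nbins = 6)
  chisq.test(counts)$p.value < alpha
}
c(FPR = mean(replicate(1000, reject(p_fair))),
  FNR = 1 - mean(replicate(1000, reject(p_biased))))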
Notice that the recovered type I errors are on the same order as those specified a priori, and are fairly accurate. Most notably, there is a variable not specified in the question. That is, the more subtle the bias, the more difficult it is to detect, i.e., the power then decreases.
Specifying a given ratio of type II error and $\alpha$, as is requested in the question, is ambiguous as it does not relate to accuracy of testing. What is really desired is high accuracy of testing, i.e., $\text{Accuracy} =1-(\text{FPR}+\text{FNR})$, which is achieved by minimizing the sum $(\text{FPR}+\text{FNR})$.
The above method is general, and applies to electronic results with a single biased die outcome, like an all-too-frequent one, something that is not possible with a physical die, which we could not bias for ones without reducing the frequency of sixes.
My advice is, if you want to gamble, buy shares. | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs? | Concerning the Type I and Type II error comparison, a die roll of an unbiased die yields an outcome from a discrete uniform distribution. To test a candidate die for bias, one has to assume a type of | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs?
Concerning the Type I and Type II error comparison, a die roll of an unbiased die yields an outcome from a discrete uniform distribution. To test a candidate die for bias, one has to assume a type of bias distribution that is a discrete nonuniform distribution. Since there are several different ways of biasing a die, there is no unique distribution shape for a biased die. The method presented here is perfectly general in that it can be applied to any observable die bias (some bias types can be hidden), indeed, to any discrete problem type, and using binning, to some continuous distribution problems. Also, there are several allowed statistical tests for bias that could be compared for power for the biased die problem. None of these allow for the usual normal distribution assumption to be applied to dice. However, the method here is a data treatment for comparing a known distribution with an unknown one, such that $\beta$ cannot be assigned a priori.
A die has one of two official number patterns with opposite sides adding to 7; (1) the 1,2,3 vertex and the 4,5,6 vertex numbers increase clockwise, (2) numbers$-$same vertices$-$increase counterclockwise. This puts some constraints on the possible shapes for probability mass distribution for a biased die, but the remaining choices are hardly unique, i.e., there are many.
Cheating at craps in the US with loaded dice was common, usually by the casino, until the innovation of placing bets with the house as well as against it in 1907. However, even recently, many Las Vegas casinos have been referred to their gaming commission for cheating with biased dice. In some jurisdictions, dice in current casino usage have security features including serial numbers.
Creative commons from https://boingboing.net/2015/07/31/dice.html.
However, in some locations at least, a casino may resist attempts to take any die home, unless it has been invalidated by mutilation.
Although it varies depending on the game and type of bet, the house take averages approximately 5%. I cannot do a full cost analysis without knowing who typically sues whom for what, how much it costs to record a number, $n$, of outcomes, how many dice are tested between lawsuits, what the savings would be for a particular level of exclusion of bias, and what level of bias is worth detecting. Depending on the particulars, both type I and II errors could have associated costs to either party in a lawsuit. As one will see, below, the confidence interval width of insignificantly biased dice proceeds as $\sim\dfrac{1}{\sqrt{n}}$, meaning that our cost for increased confidence for bias detection escalates as $\sim n^2$, when the cost of testing is $\sim n$.
Here are some methods of loading a die from the stack exchange gaming site
1) "Shaved" dice, which are not quite symmetrical, but slightly wider or narrower on one axis than on others. A shaved d6 with, say, the 1–6 axis longer than the others will roll those sides less often, making it "less swingy" than a fair d6 should be (but leaving the average roll unchanged). The name comes from cheaters actually shaving or sanding down dice to flatten them, but cheap dice may have this kind of bias simply due to being poorly made. Other similar biases due to asymmetric shape are also possible, especially in dice with many sides.
2) Uneven (concave / convex) faces may be more or less likely to "stick" to the table, favoring or disfavoring the opposite side. The precise effect may depend on the table material, and on how the dice are rolled. Again, cheap plastic dice can easily have this kind of bias, e.g. due to the plastic shrinking unevenly as it cools after molding. Uneven edges can also create bias, particularly if the edge is asymmetric (i.e. sharper on one side).
3) Actual "loaded" dice, i.e. dice with a center of gravity offset from their geometric center, may occur accidentally due to either bubbles trapped inside the plastic or, more commonly, simply due to the embossed numbers on the sides of the die affecting the balance. In fact, almost all dice, with the exception of high-quality casino dice deliberately balanced to avoid this kind of bias, will likely have it to some small extent.
Others
4) Partially melting die in a microwave,
5) Variable loaded die having lead shot in a solid oil filled center that melts in hand heat.
If I were loading a die myself, I might include a "now it's biased", "now it's unbiased" feature (e.g., using magnets).
A quick test is to float a die in salt water, where the concentration of salt is high enough to just float it, and give it a few flicks. If I were doing this myself, I would x-ray any opaque die. In a casino one can drop a die into a glass of water to see which side lands where, and players and dealers spin a die between two opposite vertices held between their fingers to see which side falls down. Also, so called "dice balancing calipers" are used to test for bias, and are available from gambling supply houses. Nothing will test for all possible methods of 'loading' a die. Keeping in mind that there is no simple test for human ingenuity, the most direct method of bias detection is to roll the die to see how it rolls. Under Las Vegas casino conditions, one would need an actual craps (or dice) table to do this, as casino dice have sharp edges and are not very useful for board games. In summary, one would not actually use statistics for a physical die bias testing, but on line casinos are another problem, and for those, there is no alternative to statistical testing.
One might think that the mean is a sensitive test choice, but not really because 3.5 is not an actual result. One can do sign testing on this, but this will be a lower power test, and like other location tests it would not detect numerically balanced biases. Fisher's exact test would be more interesting and possibly more powerful. If I assume that ordinary t-testing can be used, I will be violating assumptions as a discrete uniform distribution is not the same as a normal distribution. If I persist in doing this, the p-values resulting will be non-normally distributed, e.g., very approximately, a Laplace distribution (it may be a generalized normal distribution, do not know exactly).
(Image of $t$-test probabilities from 10,000 simulated rolls.) What exactly this is is not that interesting because a disadvantage of using the mean value is that it may not detect certain methods of biasing the results. For example, I could make a die so that it would be equally difficult to land on a one and a six, and the average would be unchanged (e.g., shaved die). So, we need something that tests for each of the six number "cells" simultaneously. Chi-square testing is just such a test.
Chi-squared distribution use for die roll testing is routine. This tests the outcome of each result against its expectation for each possible outcome with the least amount of fuss. And a common rule of thumb is to have at least five times as many rolls as there are sides on the die.
The derivation of Pearson Chi-squared from the central limit theorem appears here and press the [show] link. An example for d6 testing appears here. An analysis of a simulation in Mathematica of 10,000 rolls (die) with testing appears next.
nt = 10000; die =
RandomVariate[DiscreteUniformDistribution[{1, 6}], nt]; h =
Histogram[die, Automatic, ChartStyle -> {Opacity[0.3]}];
Print["The number of 1's through 6's for 10000 simulated die rolls is ",
bc = BinCounts[die, {1, 7}], " having Chi-squared p = ",
bcdie = BinCounts[die, {1, 7}]; s = Sum[(nt/6 - bc[[j]])^2/(nt/6), {j, 1, 6}];
p = 1 - N[CDF[ChiSquareDistribution[5],s]],
". The Bonferroni corrected Poisson distribution confidence interval is from ",
low = InverseCDF[PoissonDistribution[10000/6], 0.025/6], " to ",
high = InverseCDF[PoissonDistribution[10000/6], 1 - 0.025/6]]
lh = Plot[{low, high}, {x, 1, 7},
PlotRange -> {{1, 7}, {0, Automatic}}];
Show[lh, h]
The number of 1's through 6's for 10000 simulated die rolls is
{1679,1696,1619,1651,1689,1666} having Chi-squared p = 0.78885. The Bonferroni corrected Poisson distribution confidence interval is from 1560 to 1775.
The confidence intervals from 0.4167% to 99.583% for individual cells are equivalent to 95% CI's for every cell.
This amounts to a $+6.5$% and $-6.4$% spread using a Poisson counting model, suggesting that a $12.9$% total range around 1/6th of the number of rolls would not detect a loaded die on all faces, and any die face showing a count beyond that would suggest significant bias.
Now the 95% confidence interval, expressed as a fraction ($y$) of the expected count, shrinks with the number ($n$) of rolls as $y\sim\dfrac{12.8173}{n^{0.499151}}$, as per the following plot.
This fitted relation is a power function decreasing, suspiciously, in proportion to approximately $1/\sqrt{n}$. Note that the Poisson confidence interval itself, in blue, ratchets for low count rates, not unexpectedly as Poisson counting is a whole number result.
Calculation via simulation of Type I error (false positive rate, FPR) and Type II error (false negative rate, FNR) requires 1) models for biased and unbiased dice 2) varying amounts of bias 3) selection of alpha 4) multiple random outcomes to build up a table.
The model used was to simulate extra weight placed by drilling holes under the dots of the six, giving two bias models: one for which the six accounts for 4/10$^{\text{ths}}$ of the combined one-or-six results (a $\frac{3}{2}$ odds ratio of seeing a one versus a six) and one for which it accounts for 1/3$^{\text{rd}}$ (an odds ratio of two). As the six is opposite the one, we should roll a one 6/10$^{\text{ths}}$ and 2/3$^{\text{rds}}$ of the time that we see either number, respectively.
The biased die on the left panel is one trial of 1000 rolls of the die with a 1/3$^{\text{rd}}$ bias. The 95% and 99% confidence intervals are shown which represent the choices of $\alpha$ of 0.05 and 0.01 respectively. We then repeat this experiment 1000 times to accumulate histogram information, which is next displayed as probability histograms.
This shows the biased die results in orange and the unbiased die results in blue. The histogram categories are conveniently displayed at 0.05 and 0.01 probability widths and scaled for probability so that we can read power (1-FNR = 1 - type II error, in orange) and type I error (FPR, in blue) from the result in the first histogram category. Doing this repeatedly allows us to build up a table for 1000 trials of each level of bias for each type of error and each $\alpha$, shown next. This is done using the Chi-square distribution calculation, as it is more accurate than Poisson for this problem.
$$\left(
\begin{array}{ccccc}
\alpha & \text{Bias} & \text{n} & \text{FPR} & \text{FNR} \\
0.05 & \text{4/10} & 200 & 0.054 & 0.797 \\
0.05 & \text{4/10} & 1000 & 0.043 & 0.199 \\
0.05 & \text{4/10} & 4000 & 0.052 & 0. \\
0.01 & \text{4/10} & 200 & 0.008 & 0.918 \\
0.01 & \text{4/10} & 1000 & 0.008 & 0.375 \\
0.01 & \text{4/10} & 4000 & 0.019 & 0. \\
0.05 & \text{1/3} & 200 & 0.039 & 0.486 \\
0.05 & \text{1/3} & 1000 & 0.044 & 0.001 \\
0.05 & \text{1/3} & 4000 & 0.045 & 0. \\
0.01 & \text{1/3} & 200 & 0.011 & 0.728 \\
0.01 & \text{1/3} & 1000 & 0.017 & 0.004 \\
0.01 & \text{1/3} & 4000 & 0.012 & 0. \\
\end{array}
\right)$$
Notice that the recovered type I errors are on the same order as those specified a priori, and are fairly accurate. Most notably, there is a variable not specified in the question. That is, the more subtle the bias, the more difficult it is to detect, i.e., the power then decreases.
Specifying a given ratio of type II error and $\alpha$, as is requested in the question, is ambiguous as it does not relate to accuracy of testing. What is really desired is high accuracy of testing, i.e., $\text{Accuracy} =1-(\text{FPR}+\text{FNR})$, which is achieved by minimizing the sum $(\text{FPR}+\text{FNR})$.
The above method is general, and applies to electronic results with a single biased die outcome, like an all-too-frequent one, something that is not possible with a physical die, which we could not bias for ones without reducing the frequency of sixes.
My advice is, if you want to gamble, buy shares. | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs?
Concerning the Type I and Type II error comparison, a die roll of an unbiased die yields an outcome from a discrete uniform distribution. To test a candidate die for bias, one has to assume a type of |
54,878 | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs? | I'm not sure I'd approach the problem this way.
First, you have two hypotheses: the die is fair or not fair. If it is not fair, the distribution of throws will not be uniform (taking a mean of the throws is not an efficient way to measure this). Rather, I'd record the distribution of throws and after a few dozen, start calculating the empirical probability distributions (perhaps by chi-square or by way of binomial distribution depending on the game being played).
Factoring in the cost of a lawsuit doesn't enter into the equation for me, because the die should be eliminated regardless as soon as you're certain it's not fair. That said, there are applications of the sequential probability ratio test (SPRT) that are geared towards this. E.g., detecting manufacturing anomalies based on a sequential sample of product. | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs? | I'm not sure I'd approach the problem this way.
First, you have two hypotheses: the die is fair or not fair. If it is not fair, the distribution of throws will not be uniform (taking a mean of the t | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs?
I'm not sure I'd approach the problem this way.
First, you have two hypotheses: the die is fair or not fair. If it is not fair, the distribution of throws will not be uniform (taking a mean of the throws is not an efficient way to measure this). Rather, I'd record the distribution of throws and after a few dozen, start calculating the empirical probability distributions (perhaps by chi-square or by way of binomial distribution depending on the game being played).
Factoring in the cost of a lawsuit doesn't enter into the equation for me, because the die should be eliminated regardless as soon as you're certain it's not fair. That said, there are applications of the sequential probability ratio test (SPRT) that are geared towards this. E.g., detecting manufacturing anomalies based on a sequential sample of product. | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs?
I'm not sure I'd approach the problem this way.
First, you have two hypotheses: the die is fair or not fair. If it is not fair, the distribution of throws will not be uniform (taking a mean of the t |
54,879 | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs? | TLDR – the problem is underdefined
Aim
The details provided in the question can only lead to a relative balance of errors. Simply rearrange the equation and $P(\text{Type II error}) \approx 3P(\text{Type I error})/240000$.
This is not what is wanted or needed; rather, what is needed is to know what threshold for the mean would be used as a cut-off, i.e. what effect size minimises the cost function of your situation.
The costs of a type I and a type II error have been defined, but not the cost of the test used to determine these.
Why does this matter?
The effect size that you can reliably measure depends on the acceptable type I error and the acceptable type II error. Below is a typical sample size calculator (Box 1 in https://academic.oup.com/ndt/article/25/5/1388/1842407, open access). (OP indicated a generic answer was preferred, so I have used a formula for normal distributions as this is probably more widely used; the principles are the same for other distribution types.)
$$n = \frac{2(a + b)^2\sigma^2}{(\mu_1-\mu_2)^2},$$ where $a$ is the z multiplier to achieve the desired $\alpha$ level (1.96 for $\alpha$ = 0.05), while $b$ is the equivalent for $\beta$. $\sigma$ is the standard deviation of the test result and $(\mu_1-\mu_2)$ is the effect size (we can simplify this to the threshold $\mu_1$, since an unbiased average would be 0 and so we can drop $\mu_2$). $n$ in this specific example is the number of repeated trials to be carried out on an individual die. Rearrange this to solve for the effect size:
$$ \mu_1 = \sigma(a + b)\sqrt{2/n}$$
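(For instance, in R, with arbitrary illustrative numbers: two-sided $\alpha=0.05$, $\beta=0.20$, $\sigma\approx 1.7$ for a single fair-die roll, and $n=1000$.)
a     <- qnorm(1 - 0.05 / 2)     # z for two-sided alpha = 0.05
b     <- qnorm(1 - 0.20)         # z for beta = 0.20 (80% power)
sigma <- 1.7                     # roughly the SD of one fair d6 roll
n     <- 1000
sigma * (a + b) * sqrt(2 / n)    # smallest detectable shift in the mean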
What does this equation tell us?
The issue of type I and type II error trade-off only applies when sample size and effect size are set to a constant.
You can push both $\alpha$ and $\beta$ down ever lower and detect the same effect size by increasing the number of repetitions. But repeating a test increases the cost.
Since your costs link type I and II errors, you can replace $a+b$ with $Z_\alpha + Z_{3\alpha/240000}$
Then you will need to either:
1) define an upper limit on what you are willing to pay for testing the dice, this will then leave you with two variables to solve for (effect size and alpha)
2) solve for repetitions, alpha and effect size simultaneously
You either need to define even more limits up front or use some post-hoc decision criteria to decide which balance of the possible outcomes is best suited.
If this were a reflection of a real world example, you would want to do a more detailed cost/benefit assessment (non-comprehensive list).
The cost of repetitions for your measure of fairness (this will guide your sample number, if it is say £1 per 10 reps, then it will cost 33x the cost of the dice to do 1000 reps).
The cost of failing to detect a bias (you give this as £240000 in your example for legal expenses, but in the real world the loss of earnings due to reputation damage and many others may be worth throwing in)
The cost of rejecting a fair die (given as £3 in this case; in more generic cases there may be other issues to consider, including revenue boost due to enhanced reputation)
If you want estimates of absolute costs and risks you would need to define absolute values for how the scenario would be used in the real world. E.g. how many dice do you envisage deploying in the real world and how many times will each be used?
A useful reference:
https://www.graphpad.com/guides/prism/7/statistics/stat_sample_size_for_which_values_o.htm?toc=0&printWindow | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs? | TLDR – the problem is underdefined
Aim
The details provided in the question can only lead to a relative balance of errors. Simply rearrange the equation and $ P(TypeII Error) ≈ 3P(TypeI Error)/240000$ | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs?
TLDR – the problem is underdefined
Aim
The details provided in the question can only lead to a relative balance of errors. Simply rearrange the equation and $P(\text{Type II error}) \approx 3P(\text{Type I error})/240000$.
This is not what is wanted or needed; rather, what is needed is to know what threshold for the mean would be used as a cut-off, i.e. what effect size minimises the cost function of your situation.
The costs of a type I and a type II error have been defined, but not the cost of the test used to determine these.
Why does this matter?
The effect size that you can reliably measure depends on the acceptable type I error and the acceptable type II error. Below is a typical sample size calculator (Box 1 in https://academic.oup.com/ndt/article/25/5/1388/1842407, open access). (OP indicated a generic answer was preferred, so I have used a formula for normal distributions as this is probably more widely used; the principles are the same for other distribution types.)
$$n = \frac{2(a + b)^2\sigma^2}{(\mu_1-\mu_2)^2},$$ where $a$ is the z multiplier to achieve the desired $\alpha$ level (1.96 for $\alpha$ = 0.05), while $b$ is the equivalent for $\beta$. $\sigma$ is the standard deviation of the test result and $(\mu_1-\mu_2)$ is the effect size (we can simplify this to the threshold $\mu_1$, since an unbiased average would be 0 and so we can drop $\mu_2$). $n$ in this specific example is the number of repeated trials to be carried out on an individual die. Rearrange this to solve for the effect size:
$$ \mu_1 = \sigma(a + b)\sqrt{2/n}$$
What does this equation tell us?
The issue of type I and type II error trade-off only applies when sample size and effect size are set to a constant.
You can push both $\alpha$ and $\beta$ down ever lower and detect the same effect size by increasing the number of repetitions. But repeating a test increases the cost.
Since your costs link type I and II errors, you can replace $a+b$ with $Z_\alpha + Z_{3\alpha/240000}$
Then you will need to either:
1) define an upper limit on what you are willing to pay for testing the dice, this will then leave you with two variables to solve for (effect size and alpha)
2) solve for repetitions, alpha and effect size simultaneously
You either need to define even more limits up front or use some post-hoc decision criteria to decide which balance of the possible outcomes is best suited.
If this were a reflection of a real world example, you would want to do a more detailed cost/benefit assessment (non-comprehensive list).
The cost of repetitions for your measure of fairness (this will guide your sample number, if it is say £1 per 10 reps, then it will cost 33x the cost of the dice to do 1000 reps).
The cost of failing to detect a bias (you give this as £240000 in your example for legal expenses, but in the real world the loss of earnings due to reputation damage and many others may be worth throwing in)
The cost of rejecting a fair die (given as £3 in this case; in more generic cases there may be other issues to consider, including revenue boost due to enhanced reputation)
If you want estimates of absolute costs and risks you would need to define absolute values for how the scenario would be used in the real world. E.g. how many dice do you envisage deploying in the real world and how many times will each be used?
A useful reference:
https://www.graphpad.com/guides/prism/7/statistics/stat_sample_size_for_which_values_o.htm?toc=0&printWindow | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs?
TLDR – the problem is underdefined
Aim
The details provided in the question can only lead to a relative balance of errors. Simply rearrange the equation and $ P(TypeII Error) ≈ 3P(TypeI Error)/240000$ |
54,880 | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs? | As I understand, the core part of the question is "how to choose $\alpha$ and/or $\beta$"?
Maybe it's me, that is (non-statistically) biased to interpret the question like this. But recently I had numerous discussions and thoughts about this very question. In a nutshell: there is no general rule, it depends on each and every case.
Textbook examples usually contain an outside element, e.g. what kind of error is "morally"/"socially"/"politically"/etc. more or less acceptable. In your case it might be "financially". Examples are:
In law/court, what is worse: that a criminal walks free (type A) or that an innocent person is sentenced (type B).
In medicine, a test gives a false positive (a healthy person tests as sick) or a false negative (a sick person tests as healthy).
In any case, the discussion becomes something entirely non-statistical, non-mathematical. In the somewhat unusual/contrived/complicated example, the additional step involves the calculation of expectation values and the minimization of costs/risks in light of the interpretation.
I somewhat digress, but it's similar to the "risk management" in a famous movie, that is intended to offend: a car company calculates the cost of a call-back for repairs versus the cost of a lawsuit and ensuing damages for accidents that are caused by the particular failure. The second calculation is based entirely on statistics, but the decision rule surely isn't.
In the case of the casino, it's simply regulated. There are no lawsuits for "unfair" dice. And in the case of a truly unfair die, the plaintiff would have to prove that it is the casino's fault. In online casinos the inverse is true; there are departments for fraud detection. | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs? | As I understand, the core part of the question is "how to choose $\alpha$ and/or $\beta$"?
Maybe it's me, that is (non-statistically) biased to interpret the question like this. But recently I had num | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs?
As I understand, the core part of the question is "how to choose $\alpha$ and/or $\beta$"?
Maybe it's me, that is (non-statistically) biased to interpret the question like this. But recently I had numerous discussions and thoughts about this very question. In a nutshell: there is no general rule, it depends on each and every case.
Textbook examples usually contain an outside element, e.g. what kind of error is "morally"/"socially"/"politically"/etc. more or less acceptable. In your case it might be "financially". Examples are:
In law/court, what is worse: that a criminal walks free (type A) or that an innocent person is sentenced (type B).
In medicine, a test gives a false positive (a healthy person tests as sick) or a false negative (a sick person tests as healthy).
In any case, the discussion becomes something entirely non-statistical, non-mathematical. In the somewhat unusual/contrived/complicated example, the additional step involves the calculation of expectation values and the minimization of costs/risks in light of the interpretation.
I somewhat digress, but it's similar to the "risk management" in a famous movie, that is intended to offend: a car company calculates the cost of a call-back for repairs versus the cost of a lawsuit and ensuing damages for accidents that are caused by the particular failure. The second calculation is based entirely on statistics, but the decision rule surely isn't.
In the case of the casino, it's simply regulated. There are no lawsuits for "unfair" dice. And in the case of a truly unfair die, the plaintiff would have to prove that it is the casino's fault. In online casinos the inverse is true; there are departments for fraud detection. | How to reduce type I and II error for determining bias of loaded dice for reduced legal costs?
As I understand, the core part of the question is "how to choose $\alpha$ and/or $\beta$"?
Maybe it's me, that is (non-statistically) biased to interpret the question like this. But recently I had num |
54,881 | Find a matrix with orthonormal columns with minimal Frobenius distance to a given matrix | Q1. Proof
Given $\mathbf A$ that is square or tall, we want to maximize $\operatorname{tr}(\mathbf A^\top \mathbf X)$ subject to $\mathbf X^\top \mathbf X=\mathbf I$.
Let us denote by $\mathbf A = \mathbf{USV}^\top=\mathbf{\tilde U}\mathbf{\tilde S}\mathbf V^\top$ the "thin" and the "full" SVD of $\mathbf A$. Now we have:
\begin{align}
\operatorname{tr}(\mathbf X^\top \mathbf A) &=
\operatorname{tr}(\mathbf X^\top \mathbf {\tilde U}\mathbf {\tilde S}\mathbf V^\top) =
\operatorname{tr}(\mathbf {\tilde S} \mathbf V^\top\mathbf X^\top\mathbf{\tilde U}) \\&=
\operatorname{tr}(\mathbf{\tilde S} \mathbf P) =
\sum s_i P_{ii} \le \sum s_i =
\operatorname{tr}(\mathbf S).
\end{align}
Here what I called $\mathbf P$ is a matrix with orthonormal rows as can be verified directly:
$$\mathbf P\mathbf P^\top = \mathbf V^\top\mathbf X^\top\mathbf{\tilde U} \mathbf{\tilde U}^\top \mathbf X \mathbf V = \mathbf I.$$
So $\mathbf P$ must have all its elements not larger than one. It follows that the whole expression is not larger than the sum of singular values. Using $\mathbf X = \mathbf{UV}^\top$ yields exactly this value of the trace, hence it is the optimum. QED.
(This proof can be found e.g. in Gower & Dijksterhuis, Procrustes Problems, section 5.1. The proof is a little less confusing when $\mathbf A$ is square because then the thin and the full SVDs coincide.)
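(A numerical sanity check of this claim in R; the dimensions and the random $\mathbf A$ below are arbitrary.)
set.seed(5)
A <- matrix(rnorm(8 * 3), 8, 3)           # arbitrary random matrix
s <- svd(A)                               # thin SVD: A = U S V'
X_opt <- s$u %*% t(s$v)                   # claimed minimizer, orthonormal columns
dist2 <- function(X) sum((A - X)^2)       # squared Frobenius distance
rand_orth <- function() qr.Q(qr(matrix(rnorm(8 * 3), 8, 3)))
c(optimal = dist2(X_opt),
  best_of_1000_random = min(replicate(1000, dist2(rand_orth()))))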
Q2. What is it called?
Very similar problems have been studied in several different areas. I found three (!).
FIRST, there is orthogonal Procrustes problem: given $\mathbf A$ and $\mathbf B$, find orthogonal matrix $\boldsymbol\Omega$ minimizing $\|\mathbf A - \mathbf B\boldsymbol\Omega\|^2$. Writing it as $$\|\mathbf A - \mathbf B\boldsymbol\Omega\|^2 = \|\mathbf A\|^2 + \|\mathbf B\|^2 - 2\operatorname{tr}(\mathbf{AB}^\top\boldsymbol\Omega),$$ we see that the solution is given by the SVD of $\mathbf{AB}^\top$.
A related problem is minimizing $\|\mathbf A - \mathbf B\boldsymbol\Omega\|^2$ when $\boldsymbol\Omega$ is not square and has orthonormal rows. It has exactly the same solution. Actually, my problem can be seen as exactly this one when $\mathbf B=\mathbf I$.
However, when $\boldsymbol\Omega$ is not square and has orthonormal columns, the problem does not have a closed-form solution (see Procrustes Problems referenced above.)
SECOND, there is an orthogonalization problem: given $\mathbf A=\mathbf{USV}^\top$, find a square matrix $\boldsymbol\Omega$ that would make $\mathbf A\boldsymbol\Omega$ have orthonormal columns such that $\|\mathbf A - \mathbf A\boldsymbol\Omega\|^2$ were minimal. We already saw that the best $\mathbf A\boldsymbol\Omega$ is $\mathbf{UV}^\top$, and in order to achieve that $\boldsymbol\Omega$ should be equal to $\mathbf V\mathbf S^{-1}\mathbf V^\top = \mathbf C^{-1/2}$ where $\mathbf C = \mathbf A^\top\mathbf A$.
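Continuing the small numerical example above, one can verify that $\mathbf A\,\mathbf C^{-1/2}$ with $\mathbf C=\mathbf A^\top\mathbf A$ reproduces the same $\mathbf{UV}^\top$:
C <- crossprod(A)                                  # C = A'A
e <- eigen(C, symmetric = TRUE)
C_inv_sqrt <- e$vectors %*% diag(1 / sqrt(e$values)) %*% t(e$vectors)

max(abs(A %*% C_inv_sqrt - X))                     # ~ 0: A C^{-1/2} equals U V' from the SVD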
This is apparently known in physical chemistry as Löwdin's symmetric orthogonalization! According to Mayer, 2002, On Löwdin's method of symmetric orthogonalization,
Löwdin's symmetric orthogonalization scheme is well known to everybody working in quantum chemistry. It was introduced in 1950 [1] as a tool for transforming the generalized eigenvalue problem obtained in overlapping basis sets to an equivalent “standard” eigenvalue problem valid in an auxiliary orthogonal basis. (Many of us call the latter simply the “Löwdin basis.”)
[...] In 1957 Carlson and Keller proved [2] that the symmetrically orthogonalized (or simply “Löwdin-orthogonalized”) orbitals have a remarkable property: among all the possible orthonormal functions, the symmetrically orthogonalized ones are the closest in the least-squares sense to the original nonorthogonal functions.
THIRD, there is a problem of whitening: given centered $\mathbf A=\mathbf{USV}^\top$, find a square matrix $\boldsymbol\Omega$ that would make $\mathbf A\boldsymbol\Omega$ have uncorrelated columns with unit variance such that $\|\mathbf A - \mathbf A\boldsymbol\Omega\|^2$ were minimal. This of course differs from symmetric orthogonalization only by a constant, and the solution is $\boldsymbol\Omega = \mathbf C^{-1/2}$ where $\mathbf C = \mathbf A^\top\mathbf A/n$ is the corresponding covariance matrix.
This is called ZCA whitening. See my own answer in What is the difference between ZCA whitening and PCA whitening? and e.g. Kessy et al., 2018, Optimal Whitening and Decorrelation.
54,882 | Find a matrix with orthonormal columns with minimal Frobenius distance to a given matrix | Suppose we are given (square or tall) matrix $\mathrm B \in \mathbb R^{n \times p}$, where $n \geq p$. We have the following optimization problem in semi-orthogonal matrix $\mathrm X_1 \in \mathbb R^{n \times p}$
$$\begin{array}{ll} \text{minimize} & \| \mathrm X_1 - \mathrm B \|_{\text{F}}^2\\ \text{subject to} & \mathrm X_1^\top \mathrm X_1 = \mathrm I_p\end{array}$$
Let us introduce optimization variable $\mathrm X_2 \in \mathbb R^{n \times (n-p)}$, whose $n-p$ orthonormal columns are orthogonal to the $p$ columns of $\mathrm X_1$, i.e.,
$$\begin{array}{rl} \mathrm X_2^\top \mathrm X_2 &= \mathrm I_{n-p}\\ \mathrm X_1^\top \mathrm X_2 &= \mathrm O_{p \times (n-p)}\end{array}$$
Define
$$\mathrm X := \begin{bmatrix} \mathrm X_1 & \mathrm X_2\end{bmatrix}$$
Note that
$$\mathrm X_1 = \begin{bmatrix} \mathrm X_1 & \mathrm X_2\end{bmatrix} \underbrace{\begin{bmatrix} \,\,\mathrm I_{p}\\ \mathrm O\end{bmatrix}}_{=: \mathrm A \in \mathbb R^{n \times p}} = \mathrm X \mathrm A$$
where the zero matrix is $(n-p) \times p$. Thus, we have the Orthogonal Procrustes Problem in square, orthogonal matrix $\mathrm X \in \mathbb R^{n \times n}$
$$\begin{array}{ll} \text{minimize} & \| \mathrm X \mathrm A - \mathrm B \|_{\text{F}}^2\\ \text{subject to} & \mathrm X^\top \mathrm X = \mathrm I_n\end{array}$$
whose (well-known) solution is $\bar{\mathrm X} := \mathrm U \mathrm V^\top$, where the columns of $n \times n$ matrices $\rm U$ and $\rm V$ are the left and right singular vectors of $n \times n$ matrix $\rm B A^\top = \begin{bmatrix} \mathrm B & \mathrm O\end{bmatrix}$. Taking the first $p$ columns of $\bar{\mathrm X}$,
$$\bar{\mathrm X_1} := \mathrm U \mathrm V^\top \begin{bmatrix} \,\,\mathrm I_{p}\\ \mathrm O\end{bmatrix} = \mathrm U \mathrm V^\top \mathrm A$$
which is the solution of the initial optimization problem.
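As a numerical sanity check of this construction (a sketch in base R; $\mathrm B$ is randomly generated and assumed to have full column rank), the first $p$ columns of $\mathrm U \mathrm V^\top$ computed from $[\mathrm B \ \mathrm O]$ agree with the direct thin-SVD solution discussed in the other answer:
set.seed(2)
n <- 6; p <- 3
B <- matrix(rnorm(n * p), n, p)

s1 <- svd(cbind(B, matrix(0, n, n - p)))   # SVD of B A^T = [B, O]
X1 <- (s1$u %*% t(s1$v))[, 1:p]            # first p columns of U V'

s2 <- svd(B)                               # thin SVD of B
max(abs(X1 - s2$u %*% t(s2$v)))            # ~ 0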
Suppose we are given (square or tall) matrix $\mathrm B \in \mathbb R^{n \times p}$, where $n \geq p$. We have the following optimization problem in semi-orthogonal matrix $\mathrm X_1 \in \mathbb R^{n \times p}$
$$\begin{array}{ll} \text{minimize} & \| \mathrm X_1 - \mathrm B \|_{\text{F}}^2\\ \text{subject to} & \mathrm X_1^\top \mathrm X_1 = \mathrm I_p\end{array}$$
Let us introduce optimization variable $\mathrm X_2 \in \mathbb R^{n \times (n-p)}$, whose $n-p$ orthonormal columns are orthogonal to the $p$ columns of $\mathrm X_1$, i.e.,
$$\begin{array}{rl} \mathrm X_2^\top \mathrm X_2 &= \mathrm I_{n-p}\\ \mathrm X_1^\top \mathrm X_2 &= \mathrm O_{p \times (n-p)}\end{array}$$
Define
$$\mathrm X := \begin{bmatrix} \mathrm X_1 & \mathrm X_2\end{bmatrix}$$
Note that
$$\mathrm X_1 = \begin{bmatrix} \mathrm X_1 & \mathrm X_2\end{bmatrix} \underbrace{\begin{bmatrix} \,\,\mathrm I_{p}\\ \mathrm O\end{bmatrix}}_{=: \mathrm A \in \mathbb R^{n \times p}} = \mathrm X \mathrm A$$
where the zero matrix is $(n-p) \times p$. Thus, we have the Orthogonal Procrustes Problem in square, orthogonal matrix $\mathrm X \in \mathbb R^{n \times n}$
$$\begin{array}{ll} \text{minimize} & \| \mathrm X \mathrm A - \mathrm B \|_{\text{F}}^2\\ \text{subject to} & \mathrm X^\top \mathrm X = \mathrm I_n\end{array}$$
whose (well-known) solution is $\bar{\mathrm X} := \mathrm U \mathrm V^\top$, where the columns of $n \times n$ matrices $\rm U$ and $\rm V$ are the left and right singular vectors of $n \times n$ matrix $\rm B A^\top = \begin{bmatrix} \mathrm B & \mathrm O\end{bmatrix}$. Taking the first $p$ columns of $\bar{\mathrm X}$,
$$\bar{\mathrm X_1} := \mathrm U \mathrm V^\top \begin{bmatrix} \,\,\mathrm I_{p}\\ \mathrm O\end{bmatrix} = \mathrm U \mathrm V^\top \mathrm A$$
which is the solution of the initial optimization problem. | Find a matrix with orthonormal columns with minimal Frobenius distance to a given matrix
Suppose we are given (square or tall) matrix $\mathrm B \in \mathbb R^{n \times p}$, where $n \geq p$. We have the following optimization problem in semi-orthogonal matrix $\mathrm X_1 \in \mathbb R^{ |
54,883 | The dirty coins and the three judges | If you rephrase the question,
not asking about the 'true' $p_{coin}$, i.e. how often the coin itself lands heads or tails,
but instead analyzing the 'effective' $p_{judges}$, i.e. the probability that the judges say heads or tails,
then the regular analysis becomes 'ok'.
This analysis does not become more difficult. Even if $p_{judges}$ is not a constant in the way $p_{coin}$ is (the probability that the judges say heads or tails need not be the same on every throw), the distribution of the outcome can still be modeled as a Bernoulli distribution (a mixture of Bernoulli distributions is a Bernoulli distribution).
The outcomes from the three judges together could possibly be modeled as a beta-binomial distribution, or one could throw other, more specific analyses at it. But this depends on assumptions that we can make about the consistency and variability among judges. At the very least we can do more than fit the simple Bernoulli distribution.
For instance: in the case of the dirty coins we should expect some correlation between the occasions on which the judges make false/random judgments. We can expect that, if we split the judges' results into unanimous and non-unanimous, then the $p$ obtained from these different groups could differ. This allows us to make estimates about $p_{coin}$ based on the two versions of $p_{judges}$. (Without such an analysis we only know that $|p_{coin}-0.5| \geq |p_{judges}-0.5|$, but we do not know by how much without a good guess of the judges' reliability.)
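To illustrate that inequality, here is a small simulation under one simple, assumed error model (not part of the original argument): each judge reads the coin correctly with probability $r$ and otherwise guesses at random, and the majority vote of the three judges is recorded.
set.seed(3)
p_coin <- 0.7    # assumed true probability of heads
r      <- 0.8    # assumed probability that a judge reads the dirty coin correctly
n      <- 1e5

truth <- rbinom(n, 1, p_coin)
judge <- function() ifelse(rbinom(n, 1, r) == 1, truth, rbinom(n, 1, 0.5))
majority <- as.integer(judge() + judge() + judge() >= 2)

mean(truth)     # ~ 0.70
mean(majority)  # ~ 0.69 under this model: no further from 0.5 than p_coin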
54,884 | The dirty coins and the three judges | The problem simplifies if we quantify agreement, where by agreement we mean the fraction of $0$'s and $1$'s, out of the total number of trials, that are the same between any two measurements. Specifically, we quantify how each (equally good) judge's judgments agree with the median judgments, and how well the median judgments agree with the truth data, as below.
First note that each panel above represents 3 curves, and they superimpose. Those curves represent 5 different biases of coinage: $0.5, (0.2, 0.8),$ and $(0.1, 0.9)$. Thus, we have eliminated (apart from noise) the effect of bias on the measurements in a way that cannot be had from correlation. This occurs because agreement of a $0$ with a $0$ is as useful as agreement of a $1$ with a $1$. Thus, it does not matter what the relative number of $1$'s and $0$'s is, just whether those outcomes are the same or different for a single judge and the median judgments.
To use these curves, we calculate the agreement of one of the judges with the median judgments using available data. We then use the curves to back solve for the median judgments. That curve appears to be from total probability $\text{J}1_{\text{agr}}=\dfrac{3}{4}+\dfrac{\text{M}^{2}}{4}$, whose back solution is $\text{M}=\sqrt{4 \,\text{J}1_{\text{agr}}-3}$. We know from simulation that a particular value of median judgments corresponds to a unique (or nearly so) fraction of agreement with the truth data. This appears to be also from total probability $\text{Agr(frac)}=-\dfrac{\text{M}^3}{4}+\dfrac{3 \text{M}}{4}+\dfrac{1}{2}$. That value, in turn, times the $n$ trials, gives us a number of agreements. It is that number, $<n$, which is the basis for calculating confidence intervals. As that number is less than the number of trials, our confidence intervals will be different. As we shall see in a bit, how they differ depends on judge training. To complete our analysis, we need to make a further assumption and that is that an adequately trained judge makes unbiased errors of judgment. That is, his errors of judgment are $0$’s and $1$’s 50% of the time.
Suppose that there were $60$ yes’s (e.g., Yes, picture A is more diagnostic than picture B) and $40$ no’s with an agreement fraction of $0.8$. Then we write $60-x+40-x=80$, or $x=10$ giving us a truth value of $50$ yes’s and $30$ no’s in our “truth” population of $100\times0.8=80$. Then $P$(yes)$=\frac{50}{80}=0.625$, which finally gives us enough information to calculate confidence intervals. The 95% CI for our $50$ out of $80$ yes's was $40.92119$ to $57.83999$, and does not include $40$, where $40$ or $\dfrac{1}{2}$ of the $80$ results would be $p=0.0151145$ one-tailed. Note, as the binomial distribution only allows for whole numbers, I calculated the CI's using the binomial CDF real number equivalent $I_{1-p}(n-k,k+1)$, i.e., the regularized beta function of those parameters.
If we had used the original number of trials in our example; $100$, with a median judgement of $60$ yes's, our one-tailed $p(k\leq 50)$ would be $0.0270992$, and two-tails would have been $p>0.05$. So, it can make a difference how we calculate this. Moreover, training makes a difference. Untrained judgements may have errors in proportion to voting numbers, e.g., $60x+40x=80\rightarrow48$ yes votes out of $80$ [$p(k\leq40)=0.0444971$, one-tailed] and not $50/80,\,p(k\leq40)=0.0151145$ one-tailed. Even that may underestimate the effect of a lack of training, because the agreement of an untrained judge with the median may be worse as well. What is training? For example, for adjudicating diagnostic images, one either knows what all of the applicable diagnoses are, or the judgment is uninformed. That is, the training either covers off all possible implications or we are better off not using that judgement, at least in those circumstances where the outcome really needs adjudication, i.e., is not immediately obvious. So, we saw that the confidence intervals were relatively more narrow for trained judges than for the uncorrected problem, which, in turn, had a relatively smaller confidence interval than that for the corrected results for untrained judges, that is, for our single example case.
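The one-tailed probabilities quoted above are ordinary binomial CDF values (the regularized-beta expression reduces to the binomial CDF at integer counts), so they can be checked directly; a minimal sketch using the counts from the worked example:
pbinom(40, size = 80,  prob = 50/80)   # trained judges, corrected counts   (~ 0.015)
pbinom(40, size = 80,  prob = 48/80)   # untrained judges, corrected counts (~ 0.044)
pbinom(50, size = 100, prob = 60/100)  # uncorrected counts                 (~ 0.027)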
The polynomials used fit the above curves well, $r\sim 0.9999$ for median judgments agreement with truth data, and $\sim 0.9997$ for single judge agreement with the median judgment. The former was suggested by a linear transform of a sigmoidal-like smoothstep functional relationship with the data. The latter appears to be a simple probability calculation. Since we can test for each judge's results compared to the median, and the percentage of yes's and no's, it may be possible to extend the analysis here to include judges of unequal talent, contrarian judges who vote the opposite of what they actually think, and maybe even somewhat for biased judging, or at least we can test for the presence/absence of biased decision making among judges.
54,885 | An unbiased estimator of σ³ | Let us consider a Normal sample. Since$$\sigma^{-2}\sum_{i=1}^n (x_i-\bar{x}_n)^2\sim\chi^2_{n-1}$$which is also a Gamma $\mathcal{Ga}\left(\tfrac{n-1}{2},\tfrac{1}{2}\right)$ variate (shape $\tfrac{n-1}{2}$, rate $\tfrac{1}{2}$), the $3/2$ moment of this variate is
$$\int_0^\infty z^{3/2}z^{\frac{n-1}{2}-1}\exp\{-z/2\}\,\frac{2^{-\frac{n-1}{2}}}{\Gamma\left(\frac{n-1}{2}\right)}\,\text{d}z=\frac{2^{\frac{n-1}{2}+\frac{3}{2}}\,\Gamma\left(\frac{n-1}{2}+\frac{3}{2}\right)}{2^{\frac{n-1}{2}}\,\Gamma\left(\frac{n-1}{2}\right)}$$i.e.
$$\frac{2^{3/2}\,\Gamma\left(\frac{n-1}{2}+\frac{3}{2}\right)}{\Gamma\left(\frac{n-1}{2}\right)}$$This implies that
$$\frac{\Gamma\left(\frac{n-1}{2}\right)}{2^{3/2}\,\Gamma\left(\frac{n-1}{2}+\frac{3}{2}\right)}\left\{\sum_{i=1}^n (x_i-\bar{x}_n)^2\right\}^{3/2}$$is an unbiased estimator of $\sigma^3$ in the Normal case. Now, as discussed in my earlier answer to an earlier question, this estimator does not provide an unbiased estimator for other scale families of distributions and there is no such estimator for all distributions, relating to a 1968 paper by Peter Bickel and Erich Lehmann.
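A quick Monte Carlo check of this constant (a sketch; the sample size, mean and $\sigma$ are arbitrary choices):
set.seed(4)
n     <- 10
sigma <- 2
const <- gamma((n - 1) / 2) / (2^(3 / 2) * gamma((n - 1) / 2 + 3 / 2))

est <- replicate(1e5, {
  x <- rnorm(n, mean = 5, sd = sigma)
  const * sum((x - mean(x))^2)^(3 / 2)
})
mean(est)   # should be close to sigma^3 = 8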
54,886 | Selecting alpha and beta parameters for a beta distribution, based on a mode and a 95% credible interval | This is an optimisation problem to match a distribution to 3 parameters: 2 quantiles and the mode.
In fact, even the function you used earlier returns estimates from an optimisation. If you calculate the quantile values from those $\alpha, \beta$ parameters it gave you, you'll see they don't match up exactly:
> qbeta(p=c(0.025, 0.975), shape1=2.44, shape2=38.21)
[1] 0.01004775 0.14983899
We'll define an objective function that calculates the squared error between the known and optimised quantile values and the mode, for a given set of parameters $\alpha,\beta$ of the distribution (encoded in the params vector):
objective.function <- function(params) {
alpha <- params[1]
beta <- params[2]
intended.quantiles <- c(0.01, 0.15)
calculated.quantiles <- qbeta(p=c(0.025, 0.975), shape1=alpha, shape2=beta)
squared.error.quantiles <- sum((intended.quantiles - calculated.quantiles)^2)
intended.mode <- 0.05
calculated.mode <- calculate.mode(alpha, beta)
squared.error.mode <- (intended.mode - calculated.mode)^2
return(squared.error.quantiles + squared.error.mode)
}
calculate.mode <- function(alpha, beta) {
return((alpha-1) / (alpha+beta-2))
}
You already have some good starting values for $\alpha, \beta$, so let's get those ready:
starting.params <- c(2.44, 38.21)
Incidentally, this is what the PDF of a Beta distribution parameterised with these starting estimates looks like. Red lines are actual quantiles & mode, and blue lines are what you are trying to match. As you suggest, the quantiles look fine but the mode is off:
Now we use the nlm optimisation function to estimate optimal values $\alpha^*, \beta^*$, starting from those initial values. The algorithm converges:
nlm.result <- nlm(f = objective.function, p = starting.params)
optimal.alpha <- nlm.result$estimate[1]
optimal.beta <- nlm.result$estimate[2]
So the optimised estimates are:
$\alpha^* = 3.174725$
$\beta^* = 44.94454$
And the quantiles and mode corresponding to these optimal parameters are:
> qbeta(p=c(0.025, 0.975), shape1=optimal.alpha, shape2=optimal.beta)
[1] 0.01499578 0.15042877
> calculate.mode(optimal.alpha, optimal.beta)
[1] 0.04715437
Recreating the plot of the Beta PDF, this time with the new parameters, we can see that the mode is much better matched, at the expense of the lower $2.5\%$ quantile:
Possible enhancements:
This is a quick implementation that may not deal well with extreme values and numerical issues. Have a look at this answer for better code in the case of 2 quantiles only.
Investigate constrained optimisation packages, where the problem would be formulated as estimating 2 parameters ($\alpha, \beta$) subject to the constraint $\frac{\alpha - 1}{\alpha + \beta - 2} = 0.05$; a rough sketch of this route (eliminating the constraint by reparameterisation) is given below.
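For instance, the constraint can be eliminated by reparameterisation: solving $\frac{\alpha-1}{\alpha+\beta-2} = 0.05$ for $\beta$ gives $\beta = \frac{\alpha-1}{0.05} - \alpha + 2$, leaving a one-dimensional search over $\alpha$ that matches the two quantiles while hitting the mode exactly. A rough sketch (the search interval is an arbitrary choice):
beta.from.mode <- function(alpha, mode = 0.05) (alpha - 1) / mode - alpha + 2

quantile.error <- function(alpha) {
  beta <- beta.from.mode(alpha)
  sum((qbeta(c(0.025, 0.975), alpha, beta) - c(0.01, 0.15))^2)
}

fit <- optimize(quantile.error, interval = c(1.01, 50))
alpha.star <- fit$minimum
beta.star  <- beta.from.mode(alpha.star)

c(alpha.star, beta.star)
qbeta(c(0.025, 0.975), alpha.star, beta.star)
(alpha.star - 1) / (alpha.star + beta.star - 2)   # exactly 0.05 by construction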
54,887 | Difference between stochastic variational inference and variational inference? | Have a look at the paper Stochastic Variational Inference:
The coordinate ascent algorithm in Figure 3 is inefficient for large data sets because we must optimize the local variational parameters for each data point before re-estimating the global variational parameters. Stochastic variational inference uses stochastic optimization to fit the global variational parameters.
So instead of computing the natural gradient from the full dataset, you compute a noisy natural gradient from mini-batches (or single data points) sampled from the data.
54,888 | Difference between stochastic variational inference and variational inference? | Stochastic VI means you don't use the exact, complete, information you have [because it's too complicated, or computationally expensive] but rather a stochastic version of it.
While the paper about SVI only deals with the Exponential Family and one type of stochasticity, I think the term should also apply to any general-purpose VI algorithm that uses stochastic gradient ascent (e.g., Automatic Differentiation VI / ADVI), or to any VI algorithm that uses some form of stochasticity.
In my opinion you can divide the literature into 2:
SVI for Expo. Family (e.g., Stochastic Variational Inference, Hoffman et al. 2013)
SVI in General (e.g., ADVI, Kucukelbir et al. 2016)
The stochasticity doesn't have to come only from optimizing the "local" parameters by sampling from your dataset $x$, as in the global-local type of problems that the SVI paper looks at, but it can also come from using MC integration and sampling your parameters $\theta$, as the ADVI paper does.
54,889 | Can my standard laptop be used to run deep learning projects? | I have a hulking old refurbished T520 with the CD drive ripped out and the plastic busted out of the corner from when I dropped it. 8 GB of RAM. You know what? I've done quite a lot with keras, tensorflow and theano on that machine. Things start to go south with CNNs on bigger images, but the point is: no excuses! Install that stuff and start learning!!
@JanKukacka's right about other cloud options. I have used Amazon's GPU instance on EC2. It can get expensive - you'll want to spin up a spot instance if you can. People have complained that the EC2 GPU instances are not as fast as they should be, saying that a $100 GPU card on a dedicated machine can perform better. I don't know about that, but I can say it is way faster than my laptop.
Believe it or not I actually considered trying to add a GPU to the T520 (it's that special to me). But from my research, I took away the line, "laptops don't have GPUs!" from some message board and stopped looking. I'm expecting to see a comment below explaining why that's wrong.
Another complaint about using a cloud GPU is that there's a time-tax for everything you do, especially if you're cheap like me and want to terminate the instance when you're done using it. Spinning up the machine, moving data, etc. all take time and drain a non-trivial amount of energy. I have to admit, it would be nice to have a powerful GPU machine that's all mine all the time. But one thing that doesn't sound nice is configuring the GPU. At least with EC2 you can find community instances that are already configured to use the GPU with tensorflow or theano.
54,890 | Can my standard laptop be used to run deep learning projects? | The benchmarks described in
https://github.com/jcjohnson/cnn-benchmarks
are very instructive. If you want to estimate how much slower your model will be, a rough guideline is to time one forward and backward pass of the code described above on your own machine, divide that by the time reported for a GPU, and you will get an idea of how many times slower your model will run.
GPUs are not "necessary", they are just helpful. I ran into similar issues a while back, when I was training some models on a machine which did not have a GPU. Unfortunately, some of the open-source code assumes that you will have GPUs at your disposal, since otherwise the training time is unreasonably high. If you run code you pull from GitHub, you might have to spend a fair amount of time adapting it to a "CPU-only" mode (it's not merely a flip of a switch; you might need to hack the actual code, even code you get from reliable places like FB research... I am speaking from experience).
If you have access to only a CPU, it might not be worth training a model from scratch; simply download a trained model and backpropagate through only the last 1-2 layers.
As an instructive example, at TEST time, faster-rcnn is 10% slower (source - https://github.com/rbgirshick/py-faster-rcnn) on CPU alone; at training time, I can only imagine.
54,891 | Can my standard laptop be used to run deep learning projects? | As others have said in their answers, using your laptop should be perfectly fine for running inference on trained models. However, training a network from scratch (or even fine-tuning one) takes quite a long time, and having a training run occupy 100% of your laptop's CPU for a week just to find out you need to change some hyperparameters and start again is probably not the best thing for your workflow.
If you consider training your own models but don't want to invest in the high-end hardware, you might want to use some of the deep learning cloud computing platforms. For example, FloydHub seems to have very user-friendly interface (even though I have no personal experience with the service). Other options include Amazon Web Services, Google Cloud, and even Nvidia has their own cloud solution.
Also, I am not sure if you are affiliated with a university or a company, but it might be the case that the institution has some high-performance computing platform available. I would check with your boss/supervisor and colleagues from other departments.
54,892 | Can my standard laptop be used to run deep learning projects? | The key question is: are you going to train models yourself or use pretrained models?
If you can stick to pretrained models and do transfer learning, you should be fine - you can do this in any framework, for example see the pretrained models in Keras. You can actually do lots of interesting stuff with pretrained models - look up transfer learning. For example, you can use a couple of the lower-level layers from a network trained on ImageNet as a feature extractor, then put several fully connected layers on top of that and train them on a relatively small dataset with decent results, and it will work even on smaller machines (I recently did something similar without even using a GPU).
On the other hand, training your own models may be a bad idea - remember that some of the state-of-the-art models are trained for weeks in multi-machine (or multi-GPU) settings, and also for some applications the data may not be easily available (for example, check out the problems with Text summarization with Tensorflow that are described in this comparison of summarization methods).
54,893 | Using column weights to achieve a different LASSO penalty per coefficient | To simplify the discussion below, I will first consider the case that all $\lambda_i > 0$, and then show how to deal with some unpenalized predictors.
Part 1: All predictors are penalized ($\lambda_i > 0$ for all $i$)
This case indeed works in exactly the way you described in your question.
Let $\Lambda = \text{Diag}(\lambda_1,\dotsc, \lambda_p)$ be the diagonal matrix, where $\lambda_i$ is the penalty you want applied to the i-th predictor.
Then you can write the LASSO problem (design matrix $X$, response $Y$), as follows:
$$ \min_{\beta \in \mathbb R^p} || Y - X\beta||^2_2 + || \Lambda \beta ||_1 $$
Now note that multiplying $X$ by $\Lambda^{-1}$ from the right means you multiply the i-th column by $1/\lambda_i$ and note:
$$ ||Y - X\beta||_2^2 + || \Lambda \beta||_1 = ||Y - X\Lambda^{-1} \Lambda\beta||_2^2 + || \Lambda \beta||_1= ||Y - X\Lambda^{-1} \tilde{\beta}||_2^2 + || \tilde{\beta}||_1 $$
In the last step I defined $\tilde{\beta}= \Lambda \beta$. Hence the original LASSO problem must be equivalent to:
$$ \min_{\tilde{\beta} \in \mathbb R^p} || Y - X\Lambda^{-1} \tilde{\beta}||^2_2 + || \tilde{\beta} ||_1 $$
This is a LASSO problem in which everyone gets a penalty of $1$. It is trivial to extend this so that everyone gets a penalty of $\lambda$ (then the $\Lambda$ entries would not represent the penalization of the i-th predictor, but the relative penalization); you might want this if you want to nest the above within a cross-validation based tuning of the regularization parameter.
When you predict afterwards you need to remember what scaling you used though! E.g. if you use the original $X$, then use $\beta = \Lambda^{-1} \tilde{\beta}$!
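As a concrete illustration of Part 1, here is a sketch using glmnet as an example solver with a single common penalty; the weights, data and $\lambda$ are arbitrary choices of mine. (glmnet also exposes a penalty.factor argument that applies per-coefficient weights directly, including $0$ for unpenalized variables, so in practice you may not need the rescaling trick at all.)
library(glmnet)

set.seed(5)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), n, p)
y <- drop(X %*% c(2, -1, 0, 0, 1)) + rnorm(n)

w <- c(1, 1, 2, 2, 0.5)            # desired relative penalty weight for each coefficient
X_scaled <- sweep(X, 2, w, "/")    # divide the i-th column by w_i, i.e. X %*% diag(1/w)

fit <- glmnet(X_scaled, y, lambda = 0.1, standardize = FALSE, intercept = FALSE)

beta_tilde <- as.matrix(coef(fit))[-1, 1]   # coefficients on the rescaled design
beta_hat   <- beta_tilde / w                # undo the scaling: beta = diag(1/w) beta_tilde
beta_hat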
Part 2: Some unpenalized predictors (i.e. some $\lambda_i$ = 0)
Let's say you now want to solve a LASSO problem, in which some predictors, let's call them $Z$ are not penalized, i.e.:
$$ \min_{\beta, \gamma} || Y - X\beta - Z\gamma||^2_2 + || \Lambda \beta ||_1 $$
(Here I just split the full design matrix into two parts $X$ and $Z$ corresponding to penalized or unpenalized predictors.)
If your LASSO solver does not support unpenalized predictors, then as you mention in your comment, you could just use the technique from part 1 in which you essentially use a $\lambda_i$ really close to $0$ for unpenalized predictors. This would probably kind of work, except that it would be really bad from a numerical perspective, since some parts of $X\Lambda^{-1}$ would "blow" up.
Instead, there is a better way to do this by orthogonalization. You could proceed in the following steps:
Regress $Y \sim Z$, call the resulting coefficient $\tilde{\gamma}$ and also let $\tilde{Y}$ the residuals from this regression (i.e. $\tilde{Y} = Y - Z\tilde{\gamma}$).
Regress $X \sim Z$: For each column of $X$, say the $i$-th column, run the regression $X_i \sim Z$. Then call $\tilde{X}$ the design matrix whose $i$-th column is the residual from the $i$-th regression.
Run the following LASSO to get the fitted coefficient $\hat{\beta}$ (For this you will need the technique from Part 1.):
$$ \min_{\beta \in \mathbb R^p} || \tilde{Y} - \tilde{X}\beta||^2_2 + || \Lambda \beta ||_1 $$
Finally let $\hat{\gamma} = \tilde{\gamma} - (Z^TZ)^{-1}Z^TX\hat{\beta}$.
Then $(\hat{\beta}, \hat{\gamma})$ will be the solutions to the full LASSO problem.
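A minimal sketch of these four steps, continuing the simulated example from Part 1 above ($\mathrm Z$ is an assumed unpenalized block consisting of an intercept column and one extra covariate):
Z <- cbind(1, rnorm(n))                            # unpenalized predictors
y <- drop(X %*% c(2, -1, 0, 0, 1) + Z %*% c(1, 3)) + rnorm(n)

# Step 1: residualize y on Z
fit_yz      <- lm.fit(Z, y)
gamma_tilde <- fit_yz$coefficients
y_res       <- fit_yz$residuals

# Step 2: residualize every column of X on Z
X_res <- apply(X, 2, function(x) lm.fit(Z, x)$residuals)

# Step 3: weighted LASSO on the residualized data (column-scaling trick from Part 1)
fit_lasso <- glmnet(sweep(X_res, 2, w, "/"), y_res, lambda = 0.1,
                    standardize = FALSE, intercept = FALSE)
beta_hat  <- as.matrix(coef(fit_lasso))[-1, 1] / w

# Step 4: recover the coefficients of the unpenalized block
gamma_hat <- gamma_tilde - drop(solve(crossprod(Z), crossprod(Z, X %*% beta_hat)))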
Why does this work? This is a standard orthogonalization argument, similar to how the QR decomposition can be used to do linear regression. Essentially orthogonality (via the Pythagorean theorem -- I leave out the exact arguments as they are standard) allows us to split as follows (with $\hat{Y} = Y-\tilde{Y}$, $\hat{X} = X-\tilde{X}$):
$$ || Y-X\beta - Z\gamma||^2 = ||\tilde{Y} - \tilde{X}\beta||^2 + || \hat{Y} - \hat{X}\beta - Z\gamma||^2$$
So we want to solve:
$$ \min_{\beta, \gamma} \{ ||\tilde{Y} - \tilde{X}\beta||^2 + || \hat{Y} - \hat{X}\beta - Z\gamma||^2 + || \Lambda \beta ||_1 \}$$
Now if we optimize for fixed $\beta$ over $\gamma$ we will get the expression from Part 4 of the above procedure and furthermore we get rid of the $\beta$ appearing in the 2nd square above. What remains is only the LASSO from part 3. Putting everything together gives us the procedure outlined above.
Using column weights to achieve a different LASSO penalty per coefficient
To simplify the discussion below, I will first consider the case that all $\lambda_i > 0$, and then show how to deal with some unpenalized predictors.
Part 1: All predictors are penalized ($\lambda_i > 0$ for all $i$)
This case indeed works in exactly the way you described in your question.
Let $\Lambda = \text{Diag}(\lambda_1,\dotsc, \lambda_p)$ be the diagonal matrix, where $\lambda_i$ is the penalty you want applied to the i-th predictor.
Then you can write the LASSO problem (design matrix $X$, response $Y$), as follows:
$$ \min_{\beta \in \mathbb R^p} || Y - X\beta||^2_2 + || \Lambda \beta ||_1 $$
Now note that multiplying $X$ by $\Lambda^{-1}$ from the right means you multiply the i-th column by $1/\lambda_i$ and note:
$$ ||Y - X\beta||_2^2 + || \Lambda \beta||_1 = ||Y - X\Lambda^{-1} \Lambda\beta||_2^2 + || \Lambda \beta||_1= ||Y - X\Lambda^{-1} \tilde{\beta}||_2^2 + || \tilde{\beta}||_1 $$
In the last step I defined $\tilde{\beta}= \Lambda \beta$. Hence the original LASSO problem must be equivalent to:
$$ \min_{\tilde{\beta} \in \mathbb R^p} || Y - X\Lambda^{-1} \tilde{\beta}||^2_2 + || \tilde{\beta} ||_1 $$
This is a LASSO problem in which everyone gets a penalty of $1$. It is trivial to extend this so that everyone gets a penalty of $\lambda$ (then the $\Lambda$ entries would not represent the penalization of the i-th predictor, but the relative penalization); you might want this if you want to nest the above within a cross-validation based tuning of the regularization parameter.
When you predict afterwards you need to remember what scaling you used though! E.g. if you use the original $X$, then use $\beta = \Lambda^{-1} \tilde{\beta}$!
Part 2: Some unpenalized predictors (i.e. some $\lambda_i$ = 0)
Let's say you now want to solve a LASSO problem, in which some predictors, let's call them $Z$ are not penalized, i.e.:
$$ \min_{\beta, \gamma} || Y - X\beta - Z\gamma||^2_2 + || \Lambda \beta ||_1 $$
(Here I just split the full design matrix into two parts $X$ and $Z$ corresponding to penalized or unpenalized predictors.)
If your LASSO solver does not support unpenalized predictors, then as you mention in your comment, you could just use the technique from part 1 in which you essentially use a $\lambda_i$ really close to $0$ for unpenalized predictors. This would probably kind of work, except that it would be really bad from a numerical perspective, since some parts of $X\Lambda^{-1}$ would "blow" up.
Instead, there is a better way to do this by orthogonalization. You could proceed in the following steps:
Regress $Y \sim Z$, call the resulting coefficient $\tilde{\gamma}$ and also let $\tilde{Y}$ the residuals from this regression (i.e. $\tilde{Y} = Y - Z\tilde{\gamma}$).
Regress $X \sim Z$: For each column of $X$, say the $i$-th column, run the regression $X_i \sim Z$. Then call $\tilde{X}$ the design matrix whose $i$-th column is the residual from the $i$-th regression.
Run the following LASSO to get the fitted coefficient $\hat{\beta}$ (For this you will need the technique from Part 1.):
$$ \min_{\beta \in \mathbb R^p} || \tilde{Y} - \tilde{X}\beta||^2_2 + || \Lambda \beta ||_1 $$
Finally let $\hat{\gamma} = \tilde{\gamma} - (Z^TZ)^{-1}Z^TX\hat{\beta}$.
Then $(\hat{\beta}, \hat{\gamma})$ will be the solutions to the full LASSO problem.
Why does this work? This is a standard orthogonalization argument, similar to how the QR decomposition can be used to do linear regression. Essentially orthogonality (via the Pythagorean theorem -- I leave out the exact arguments as they are standard) allows us to split as follows (with $\hat{Y} = Y-\tilde{Y}$, $\hat{X} = X-\tilde{X}$):
$$ || Y-X\beta - Z\gamma||^2 = ||\tilde{Y} - \tilde{X}\beta||^2 + || \hat{Y} - \hat{X}\beta - Z\gamma||^2$$
So we want to solve:
$$ \min_{\beta, \gamma} \{ ||\tilde{Y} - \tilde{X}\beta||^2 + || \hat{Y} - \hat{X}\beta - Z\gamma||^2 + || \Lambda \beta ||_1 \}$$
Now if we optimize for fixed $\beta$ over $\gamma$ we will get the expression from Part 4 of the above procedure and furthermore we get rid of the $\beta$ appearing in the 2nd square above. What remains is only the LASSO from part 3. Putting everything together gives us the procedure outlined above. | Using column weights to achieve a different LASSO penalty per coefficient
To simplify the discussion below, I will first consider the case that all $\lambda_i > 0$, and then show how to deal with some unpenalized predictors.
Part 1: All predictors are penalized ($\lambda_i |
54,894 | Is choice of machine learning algorithm a secondary issue? | You want citations for the claims, but I think this advice, while useful, stems from this context specifically.
The fact that these methods work so well on Kaggle and other competitions has to do with the type of datasets in those competitions. Often, especially in elementary competitions, the data consist of many examples with heterogeneous data types (categorical, continuous, missing) that are all mildly predictive, and there are interaction effects to leverage. Housing-price competitions are a typical example. In such cases tree-based ensembles like XGBoost and Random Forests are extremely effective and practical: they are flexible enough to learn interactions and nonlinearities, don't overfit too much if done correctly, and can deal naturally with all kinds of data.
Both in more advanced competitions and in practice, with other types of data (images, text, or just not a lot of data) and other problem settings (forecasting, recommendation systems), XGBoost will typically not be the best solution, or at most one part of the approach. That is not to say there aren't many problems in practice where XGBoost is simply the best approach in terms of classification or regression performance.
On textual data or on images, tree-based methods are known not to work so well, and time-series data is a really different beast. An interesting example on Kaggle is the chess rating competition; you can find the winning methods here: https://www.kaggle.com/c/ChessRatings2/discussion/568.
54,895 | Is choice of machine learning algorithm a secondary issue? | Isn't the NFL concerned with algorithms which are data-independent? Feature engineering is completely data-dependent, because it is not a rigorous, well-defined algorithm, especially in the sense meant by Kagglers. For different data sets they choose different features based on expertise, intuition and a lot of hand-waving. It's not like they use a principled, well-defined a priori approach. In other words, they don't always use the same feature-engineering algorithm for different competitions.
54,896 | Is choice of machine learning algorithm a secondary issue? | Thank you for the interesting question. I think the question will be hard to conclusively answer without a more rigorous definition of feature engineering which seems hard to give.
I don't think there is any tension with no free lunch, since I think that feature engineering involves changing the optimisation problem. There is a version of no free lunch (equivalent to the original theorem) which states the following:
Theorem (Radcliffe and Surry, 1995). Consider functions $f,g$ lying in the set of all functions between finite sets $X$ and $Y$. Let $V_m(A, f)$ denote the length-$m$ performance vector generated by $A$ and $f$. For any two algorithms $A$ and $B$, and for any function $f$, there exists a function $g$ such that $V_m(A, f) = V_m(B, g)$.
I think this has an interesting interpretation in light of your question. If you're willing to model feature engineering as invertible transformations of your objective function $f$, then there is a sequence of permutations which takes $f$ to $g$, and algorithm $B$ does just as well on $g$ as $A$ does on $f$. So in this sense, it would seem to confirm the folklore that the algorithm is less important than the problem formulation. My notation is taken from (Schumacher et al. 2001), which compares several versions of no free lunch.
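As a toy demonstration of the flavour of this statement (my own sketch, and much weaker than the theorem itself: it only covers deterministic, non-adaptive enumeration on a tiny finite domain), one can build $g$ from $f$ by the permutation that matches the visit order of $B$ to that of $A$, after which the two performance vectors coincide:

```python
# Toy check of the permutation idea: hypothetical objective and two fixed visit orders.
X = [0, 1, 2, 3, 4]                        # finite search space
f = {0: 3, 1: 7, 2: 1, 3: 9, 4: 4}         # arbitrary objective values

order_A = [0, 1, 2, 3, 4]                  # "algorithm" A visits points in this order
order_B = [4, 2, 0, 3, 1]                  # "algorithm" B visits points in this order

# Define g = f composed with a permutation: B's i-th visited point gets the value
# that A's i-th visited point has under f.
g = {b: f[a] for a, b in zip(order_A, order_B)}

V_A_f = [f[x] for x in order_A]            # performance vector of A on f
V_B_g = [g[x] for x in order_B]            # performance vector of B on g
assert V_A_f == V_B_g                      # identical performance vectors
```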
Radcliffe, Nicholas J., and Patrick D. Surry. "Fundamental limitations on search algorithms: Evolutionary computing in perspective." Computer Science Today. Springer, Berlin, Heidelberg, 1995. 275-291.
Schumacher, Chris, Michael D. Vose, and L. Darrell Whitley. "The no free lunch and problem description length." Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation. Morgan Kaufmann Publishers Inc., 2001.
54,897 | Term for "extent to which a test throws away information"? | The term is simply "loss of information" (or "information loss"), & was introduced by Fisher shortly after he came up with the concept of sufficiency. See Fisher (1925), "Theory of Statistical Estimation", Proc. Camb. Philos. Soc., 22, pp 700 – 725. The usual way to measure it is in terms of Fisher information, the variance of the score:
$$\mathcal{I}_X(\theta)=\operatorname{Var}_X\left[\frac{\mathrm{d}\,\ell(\theta;X)}{\mathrm{d}\,\theta}\right]$$
where $\ell(\cdot)$ is the log-likelihood function. Given some regularity conditions
$$\mathcal{I}_T(\theta)\leq\mathcal{I}_D(\theta)$$
with equality only when $T$ is sufficient. So $\mathcal{I}_D(\theta)-\mathcal{I}_T(\theta)$ quantifies the extent to which information is thrown away by using the statistic $T$ for inference rather than the full data $D$.†
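For a concrete, standard example (mine, not from the sources cited here): if $D = (X_1,\ldots,X_n)$ are i.i.d. $N(\theta,1)$, then $\mathcal{I}_D(\theta)=n$. Using only $T=X_1$ gives $\mathcal{I}_T(\theta)=1$, an information loss of $n-1$; using the sufficient statistic $T=\bar{X}$ gives $\mathcal{I}_T(\theta)=n$, a loss of zero.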
Information loss is an important concept in conditional inference. If $(T,S)$ is jointly sufficient, while $S$ is ancillary, then the information lost by using $T$ for inference about $\theta$ can be recovered by conditioning on $S$:
$$\mathcal{I}_D(\theta;D)=\operatorname{E}_S\left[\mathcal{I}_{T|S}(\theta;T|S)\right]>\mathcal{I}_T(\theta;T)$$
See Fisher (1934), "Two New properties of Mathematical Likelihood", Proc. Royal Soc. Lond., A, 144, pp 285 – 307. Conditioning on an only approximate ancillary does result in information loss; an example is Fisher's Exact Test for a joint binomial sampling model, which is justified by the marginal totals' carrying only very little information about the odds ratio parameter. So information loss is a way to evaluate test statistics, though nugatory information loss doesn't imply high power—the power of FET is much reduced by the constriction of the sampling space owing to conditioning.
I'm not sure there's much point to considering the information loss in reducing the data to an accept/reject report according to a hypothesis test. A test $\phi(T)=\boldsymbol{1}_R(T)$ is a statistic too; but the log-likelihood function $\ell(\theta;\phi(T))$ is just the power function of the test (or its complement). Perhaps comparing information loss would be another way to decide between tests whose power functions cross.
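If one does want to quantify that loss, here is a small numerical sketch (my own toy setup, with arbitrary $n$, cut-off and $p$): for $X_1,\ldots,X_n$ i.i.d. Bernoulli$(p)$ the full data carry $n/(p(1-p))$ units of Fisher information, while the accept/reject report $B=\boldsymbol{1}\{T\geq c\}$, with $T=\sum_i X_i$, carries only $\pi'(p)^2/\{\pi(p)(1-\pi(p))\}$, where $\pi$ is the power function.

```python
import numpy as np
from scipy import stats

n, c, p = 20, 14, 0.5          # assumed sample size, rejection cut-off and parameter value
eps = 1e-5

def power(p):
    # pi(p) = P(T >= c) for T ~ Binomial(n, p)
    return stats.binom.sf(c - 1, n, p)

I_full = n / (p * (1 - p))                              # Fisher information in the full data
dpi = (power(p + eps) - power(p - eps)) / (2 * eps)     # numerical derivative of the power function
pi = power(p)
I_report = dpi ** 2 / (pi * (1 - pi))                   # Fisher information in the accept/reject report

print(I_full, I_report, I_report / I_full)              # fraction of information retained
```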
† For any $\theta_1,\theta_2$ the Kullback – Leibler divergence from the distribution with density $f(x;\theta_1)$ to that with $f(x;\theta_2)$ is equal to the KL divergence from the distribution with density $f(t;\theta_1)$ to that with $f(t;\theta_2)$ only when $t$ is sufficient for $\theta$. Each divergence, given more regularity conditions, approaches the Fisher information as $\theta_1$ gets closer to $\theta_2$. See Kullback & Leibler (1951), "On information and sufficiency", Ann. Math. Statist., 22, 1, pp 79 – 86.
54,898 | Term for "extent to which a test throws away information"? | The simple answer to the question (is there a term for the "extent to which we lose information" when reducing D to a test statistic t or to a rejection or non-rejection R?) is, first, that there is no simple loss of information, and measuring it can be difficult and deceptive. Secondly, when we perform hypothesis testing we have optimistically assumed that there is information relating to our hypothesis at all; this has nothing to do with information loss, rather we are identifying latent information.
To define what information loss is, we need to first define what information is. For example, in some contexts (not the one being discussed here) there may be no information loss; noise reduction or image compression can be lossless. From this it should be obvious that information loss is context dependent. The elephant in the room is that there is no information without context. Consider Shannon's opinion on this: "Information processing may more specifically be defined in terms used by Claude E. Shannon as the conversion of latent information into manifest information (McGonigle & Mastrian, 2011)."
To examine this question from the perspective of absolute information content requires quantification of information complexity, a difficult task; see Kolmogorov complexity. That is, absolute information content is the smallest encoding that can generate a data set. On the other hand, relative information content from Shannon entropy relates the amount of information to the natural logarithms (e.g. of 1's) of each of two data values, and as such is proportional to the bit length of a random datum. Such things are applied to AIC via measuring relative entropy, K-L divergence. (Edit: This applies to the other answer here, for which the connection between the Fisher metric and relative entropy is indeed a relation with K-L divergence.) Although interesting, relative information content is self-information in the self-comparative entropy context, and is not absolute entropy or absolute information content. Thus, as below, it does not really apply to the question asked.
For hypothesis testing Shannon entropy is not relevant, because we are testing for latent information; we are testing whether or not information of a certain type ($H0$) is latent within the total signal, where the total signal is noisy. For example, if we test for normality and cannot reject normality, then we cannot disprove that information is present; we have merely provided some clarity as to the presence of manifest information. Do we attribute absolute information content to noise? According to Wikipedia, Shannon would not: "Abstractly, information can be thought of as the resolution of uncertainty." For example, when we identify the mean and variance of a normal distribution, that distribution is completely specified to within a certain confidence interval, to be sure, but the point is that the residuals are then either unstructured and lack information in the Kolmogorov sense, or our evidence for normality is lacking.
Before going further, let us illustrate some of these concepts to show that they are necessary. Can absolutely random bits contain information? Consider cryptography. We can encrypt information in a random sequence of binary digits to yield a second string of random bits. The information that is potentially thus contained is limited from above by the bit length of the string. However, without the key, no information can be appreciated. Thus, information content is context dependent; no context (i.e., no key) means no manifest information. When we test hypotheses we are looking for a key. With apologies to those who cannot see out of two eyes or have trouble staring past two close images to fuse them into stereo vision, which is the "key" here: for a fun physical illustration of pattern in randomness, look at a few random-dot stereograms on the net. In fact, here is an example that I created myself.
Again, to see this, you have to look at it cross-eyed, but level, so that you see three pictures. Concentrate on the middle picture, and you will see three vertical bands that seem to be closer to you than the screen.
The purpose of showing this is that it was generated from a few lines of code, so that the Kolmogorov complexity is small, even as an upper limit. Moreover, the code specifies a bit length to repetition that is exactly the product of all of the primes less than the number of bits generated, in this case 1230 primes for 10000 bits, and an encoding repetition of $5.953232305570463\times10^{4301}$ bit length, a much larger number than the $10^4$ bits encoded. That means that cracking this code by brute force, without a relative key, is not possible.
In this example, the Kolmogorov complexity is small, most of which generates our latent information (the hidden vertical bars). In the relative information context, the latent information is much smaller than the self-referential information of 10000 bits, and the noise is perfect to within an astronomical context. For example, without the code itself, there is no cryptanalysis that will tell us whether the left or the right image is the encoding key, and indeed no guarantee that both images are not encoded, so that the key is relative; no current computer can crack much beyond 64-bit encryption, never mind a code whose encryption bit length is always astronomically longer than the encoded number of bits. The number of 1's in both images is identical, so there is no relative information content in a K-L divergence sense, i.e., the self-entropy is maximal and unchanged comparing images, unless comparative bit order is considered.
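For readers who want to try this themselves, here is a bare-bones sketch of the idea in Python (this is not the author's original code, which additionally uses the prime-based repetition scheme described above; the image size, band position and disparity are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, shift = 200, 200, 4                      # image size and disparity in pixels

left = rng.integers(0, 2, size=(h, w))         # pure random bits
right = left.copy()

# Hide a vertical band: inside the band the right image shows the left image's
# bits displaced horizontally by `shift`, so the band fuses at a different depth.
right[:, 80:120] = left[:, 80 + shift:120 + shift]

# Place the two images side by side; fused (e.g. cross-eyed), the band stands out
# even though each image on its own is indistinguishable from noise.
pair = np.hstack([left, np.ones((h, 4), dtype=int), right])
```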
Suppose we calculate the mean and the median of a sample of data. A first reaction to the difference between mean and median would be the argument that the median discards everything that is not the median value itself (for an odd sample size, say). Does it, now? To contrast a mean with a median is to contrast a parametric method with a ranking result, i.e., a nonparametric method.
Generally, the power of a nonparametric method is less than that of a perfectly specified parametric solution of the same type, but if the parametric model is not well chosen, that would typically not be true. That is, a nonparametric method that ranks results rather than using ordinary distance calculations discards information, but in effect the information loss is typically mild.
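To put a number on "typically mild" (a standard asymptotic fact, not a claim from this answer): for i.i.d. normal data the asymptotic relative efficiency of the sample median to the sample mean is $2/\pi \approx 0.64$, so the ranking-based summary does lose information, but not catastrophically.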
Real data is not very revealing; it has more noise than structure. Here is a data histogram displayed without information loss.
Now here is the same data put into wider histogram categories.
The naive view would be that we have lost a lot of information by putting the data into histogram categories; after all, in one category we now have the sum of 21 numbers, which destroys what those numbers were before addition. The more explanatory view is that in the new histogram, much of what we lost was noise, which in the Kolmogorov view is not information complexity, and which we needed to filter out in order to better appreciate what the signal, i.e., the information, was. That is, our "decoding key" is addition, and we did not lose much information by using it; in the process we increased our signal-to-noise ratio substantially. So too it is with hypothesis testing: what is important is not the potential for containing information measured as the bit length of the data, i.e., signal plus noise; rather, we only count the bit lengths of latent information, i.e., actual signal as information.
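Since the original histogram figures are not reproduced here, the effect is easy to recreate with simulated data (a hypothetical stand-in, not the author's data): the same sample binned finely gives jagged, noisy counts, while coarser bins trade resolution for a clearer view of the underlying shape.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=300)                     # stand-in sample

fine_counts, _ = np.histogram(x, bins=60)    # small counts per bin: relatively noisy
coarse_counts, _ = np.histogram(x, bins=10)  # summed counts per bin: smoother shape

print(fine_counts)
print(coarse_counts)
```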
With respect to the question, whatever the relative loss of information is, it is not relevant, as we are testing whether there is a significant absence of information in the data that relates to the $H0$ context alone, and not to a larger context of all possible $H0$; we do not lose information that we did not assume we had to begin with. If, on the other hand, we cannot show that latent information is absent, then we are interested in the type II error, the absence of false-negative testing, and it is the power of the test (as per the other answer above), if it is good enough, that finally allows us to estimate that information was likely manifest. In terms of Kolmogorov complexity, there is information in $H0$, and when we test whether the data has information of the same form as $H0$ using a t-statistic, we are not altering the information content; we are comparing that content with latent information in the data. The t-statistic need not have the same complexity as either the data or $H0$ to accomplish that goal. Thus, the t-statistic can be thought of by analogy as the fulcrum of a balance beam, and the information content of a t-statistic itself is not representative of the data alone but rather is information concerning the contrast of the data with $H0$. In other words, if the data were an orange, the t-statistic would be an apple: two different things really.
54,899 | A better model has higher residual deviance and AIC. How is it possible? | You do not reject model 2 because there is no effect, but because the effect is not significant enough.
The results are consistent.
Note that an insignificant anova result still acknowledges a (non-significant) difference between the two models: the likelihood ratio of the small and big models can be in favor of the big model, which then has the lower AIC or higher relative likelihood, even when the statistician/experimenter chooses to prefer the small model.
It is just that the anova test uses a conservative (different) decision criterion (the arbitrarily low p < 0.05) that is
not about deciding model 1 over model 2 or the other way around
but instead about deciding whether there is a significant difference between model 1 and model 2 (and in case of no significant difference the simpler model 1 is kept).
The choice to prefer model 1 over model 2 when the anova outcome indicates no significant difference is entirely due to the statistician's (subjective) use of the anova test and not due to the anova test itself.
The anova test does not indicate that model 2 is not better than model 1 (or model B better than model A); it only indicates that the observed difference (model 2 over model 1) is not significant (where the meaning of significance is subjective and set at some arbitrary p-value cut-off). The observed 'advantage'/'difference' for model B may happen by chance: if there were truly no difference (the null hypothesis), you would still occasionally (9.736% of the time) see such a difference in AIC, residual deviance, likelihood ratio, or whatever.
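A small calculation makes the two criteria commensurable (assuming, since the original output is not shown, that model B has exactly one extra parameter and that the anova comparison is a likelihood-ratio $\chi^2$ test): AIC prefers the bigger model whenever the likelihood-ratio statistic exceeds 2, which for one degree of freedom corresponds to a p-value below roughly 0.157.

```python
from scipy.stats import chi2

# p-value threshold at which a 1-df likelihood-ratio test and AIC (penalty of 2
# per extra parameter) start to disagree about the bigger model.
print(chi2.sf(2, df=1))   # approximately 0.157
```

So a p-value of about 0.097, as here, falls exactly in the range between 0.05 and 0.157 where AIC favours the bigger model while the conventional 5% test does not.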
54,900 | A better model has higher residual deviance and AIC. How is it possible? | The residual deviance of the larger model will always be smaller than (or equal to) that of the smaller model, similar to how adding variables can only increase R-squared in regression.
AIC is only slightly better for Model B; I suspect BIC is actually worse, as it penalizes additional variables more severely.
Different model fit statistics will not always agree. Here it looks like 2 out of the 3 will select the smaller model (assuming BIC is smaller for Model A). The final choice of which model to select is up to you.
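For reference (textbook definitions, not output from this thread): with $k$ parameters, maximized log-likelihood $\ln L$ and $n$ observations, $\mathrm{AIC} = 2k - 2\ln L$ and $\mathrm{BIC} = k\ln n - 2\ln L$. An extra parameter therefore improves AIC as soon as it raises $2\ln L$ by more than 2, but improves BIC only if it raises it by more than $\ln n$, which exceeds 2 whenever $n > e^2 \approx 7.4$; this is why BIC can prefer the smaller model while AIC (barely) prefers the larger one.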