50,501 | How does co-adaptation occur in deep neural nets | Co-adaptation occurs when the weights are not initialized properly, so that groups of neurons end up in the same local minimum according to their (similar) initializations.
To overcome this, you could use dropout / drop connect to break symmetry.
Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. R. (2012). Improving neural networks by preventing co-adaptation of feature detectors.
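For illustration only, here is a tiny base-R sketch of the masking idea behind dropout (inverted dropout with an arbitrary keep probability); it is a hypothetical example, not the implementation from Hinton et al. (2012).
set.seed(1)
h <- matrix(rnorm(5 * 4), nrow = 5)              # activations: 5 samples x 4 hidden units
p_keep <- 0.5                                    # keep probability (assumed value)
mask <- matrix(rbinom(length(h), 1, p_keep), nrow = nrow(h))
h_dropped <- h * mask / p_keep                   # randomly silence units; rescale to preserve the expected activation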
50,502 | Gelman & Hill ARM textbook, Question 3.2, R-squared | With these kinds of questions it is usually best to eschew computer coding until you are at least able to write down the various algebraic equations you are using. The key for these questions is to be able to interpret the written information to obtain corresponding algebraic equations from your model. Once you have the available conditions written down, that is more than half the battle, and solving them is usually fairly straightforward.
Letting $x_i$ be height (in inches) and $Y_i$ be earnings (in $1,000s), the log-linear model is:
$$\ln Y_i = \beta_0 + \beta_1 \ln x_i + \varepsilon_i \quad \quad \quad \varepsilon_i \sim \text{N}(0,\sigma^2).$$
Taking expectations gives the true regression line for the model:
$$\mathbb{E}(\ln Y_i|x_i) = \beta_0 + \beta_1 \ln x_i. \quad$$
The estimated regression line for the model is:
$$\quad \text{ } \ln \hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 \ln x_i.$$
Since both of the variables enter the model through their logarithms, the parameter $\beta_1$ represents the elasticity of the expected earnings with respect to height, and the parameter $\beta_0$ represents the expected log-earnings when the height is one unit (though this interpretation extrapolates beyond the data range). From the stated conditions we have the following four mathematical conditions:
A person who is 66 inches tall is predicted to have earnings of $30,000.
$$\ln 30 = \hat{\beta}_0 + \hat{\beta}_1 \ln 66.$$
Every increase of 1% in height corresponds to a predicted increase of 0.8% in earnings.
$$\hat{\beta}_1 = \frac{0.008}{0.01} = 0.8$$
The earnings of approximately 95% of people fall within a factor of 1.1 of predicted values.
$$\mathbb{P}( |\ln Y_i - \ln \hat{Y}_i| \leqslant 0.1 ) \approx 0.95.$$
Suppose the standard deviation of log heights is 5% in this population.
$$MS_{Tot} = 0.05^2.$$
So now, you need to use these conditions to find the various parameter estimates in the model, and the resultant goodness-of-fit statistics. The first two equations will allow you to find the coefficient estimates, and the third should then allow you to find the estimated standard deviation of the error term. (You might need an additional assumption for this one.) The fourth equation will then allow you to find the goodness-of-fit statistics for the model.
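For concreteness, a minimal R transcription of the first three conditions above; treating the residuals as normal (the "additional assumption" hinted at) turns the 95% statement into $1.96\hat{\sigma} \approx 0.1$, and the final goodness-of-fit step is left to the reader as in the answer.
b1    <- 0.008 / 0.01             # elasticity condition
b0    <- log(30) - b1 * log(66)   # prediction condition: ln(30) = b0 + b1*ln(66)
sigma <- 0.1 / 1.96               # assuming normal errors, P(|error| <= 1.96*sigma) ~ 0.95
c(b0 = b0, b1 = b1, sigma = sigma)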
50,503 | Gelman & Hill ARM textbook, Question 3.2, R-squared | $\log(x) - \log(y) \approx \%\Delta$ (a difference in logs is approximately a percentage difference).
So $\log(x) - \log(y) + 1.96\sigma = 1.1$ implies that $\sigma = 1.78$.
With standard deviation of log earnings (Gelman has heights in the book, but I think this is an error) at 5(%), $R^2 = 1 - 1.78/5 = 0.64$.
50,504 | Gelman & Hill ARM textbook, Question 3.2, R-squared | I had a different interpretation of (b). We have $\log(y) = a + b\log(x)$ from the regression. Hence,
$$sd(\log(y)) = |b|sd(\log(x)) = (0.8)\times(0.05)=0.04,$$
since $sd(\log(x))$ is given. Squaring this we get the regression sum of squares $SSR = 0.0016$. From (a) we have the residual standard deviation ($0.049$). Squaring this yields the error sum of squares $SSE = 0.0024.$ Thus, the total sum of squares on the log scale equals 0.004, giving an R-squared of $1 - 0.0016/0.004 = 0.6$
50,505 | Comparison of medians in samples with unequal variance, size and shape | I would suggest that you consider the relative summary effects/relative treatment effects methods of Akritas, Arnold, Brunner, etc. The best book on the subject, generally, is Nonparametric Analysis of Longitudinal Data in Factorial Experiments by Brunner, Domhof, and Langer, which is very well written and clear.
These methods are based on modified notions of Mann-Whitney and can accommodate many of the features you mention using standard statistical software.
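As a hypothetical illustration of the quantity these methods build on, here is a base-R sketch of the two-sample relative (treatment) effect $p = P(X_1 < X_2) + \tfrac{1}{2}P(X_1 = X_2)$ estimated from pooled mid-ranks; the data are simulated and none of the packages accompanying the book are used.
set.seed(42)
x1 <- rnorm(30, mean = 0, sd = 1)      # group 1 (made-up data)
x2 <- rnorm(20, mean = 0.5, sd = 3)    # group 2: different size, variance and shape
r  <- rank(c(x1, x2))                  # pooled mid-ranks handle ties
n1 <- length(x1); n2 <- length(x2)
p_hat <- (mean(r[(n1 + 1):(n1 + n2)]) - (n2 + 1) / 2) / n1
p_hat                                  # ~0.5: no tendency; > 0.5: group 2 tends to be larger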
50,506 | Comparison of medians in samples with unequal variance, size and shape | One (very conservative / low-power) option would be to use Mood's median test. It does not assume normality or homogeneity of variance (or is at least quite robust to unequal variances).
The test is done by making a cross tabulation of the number of observations less than or equal to the overall median vs. greater than the overall median for each group. From there, you run a chi-squared test on the table. For two group comparisons and when the groups are small, I'd use Fisher's exact test instead of chi-squared.
Edit: since I don't know if Mathematica has a function for Mood's median test, if you are willing to try using R, I have code that you can use.
Edit 2: Per Sosi's request, here's the code:
moods.median = function(v, f, exact = FALSE) {
  # v is the set of Values you want to test
  # f is a Factor or grouping variable
  # if you want to use Fisher's exact test, specify "exact = TRUE"
  if (length(v) != length(f)) {
    stop("v and f must have the same length")
  }
  # make a new matrix (note: cbind() with a character column stores v as character,
  # so convert back with as.numeric() before comparing with the median)
  m = cbind(as.character(f), v)
  colnames(m) = c("group", "value")
  # get the names of the factors/groups
  facs = unique(f)
  # count the number of factors/groups
  factorN = length(unique(f))
  if (factorN < 2) {
    stop("there must be at least two groups in variable 'f'")
  }
  # 2 rows (number of values > overall median & number of values <= overall median)
  # K-many columns for each level of the factor
  MoodsMedianTable = matrix(NA, nrow = 2, ncol = factorN)
  rownames(MoodsMedianTable) = c("> overall median", "<= overall median")
  colnames(MoodsMedianTable) = c(facs[1:factorN])
  colnames(MoodsMedianTable) = paste("Factor:", colnames(MoodsMedianTable))
  # get the overall median
  overallmedian = median(v)
  # put the following into the 2 by K table:
  for (j in 1:factorN) {  # for each factor level
    g = facs[j]  # assign a temporary "group name"
    # count the number of observations in the factor that are greater than
    # the overall median and save it to the table
    MoodsMedianTable[1, j] = sum(as.numeric(m[, 2][which(m[, 1] == g)]) > overallmedian)
    # count the number of observations in the factor that are less than
    # or equal to the overall median and save it to the table
    MoodsMedianTable[2, j] = sum(as.numeric(m[, 2][which(m[, 1] == g)]) <= overallmedian)
  }
  print(MoodsMedianTable)
  if (exact == FALSE) { return(chisq.test(MoodsMedianTable)) }
  if (exact == TRUE) { return(list(
    chisq.test(MoodsMedianTable),
    fisher.test(MoodsMedianTable))) }
}
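For reference, a hypothetical call to the function above on simulated groups of unequal size and spread (the data and group labels are made up):
set.seed(1)
vals   <- c(rnorm(15, 10, 1), rnorm(40, 11, 4), rnorm(25, 12, 2))
groups <- factor(rep(c("A", "B", "C"), times = c(15, 40, 25)))
moods.median(vals, groups)                # chi-squared test on the 2 x K table
moods.median(vals, groups, exact = TRUE)  # also returns Fisher's exact test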
50,507 | Comparison of medians in samples with unequal variance, size and shape | My personal opinion is that you have the order of your tests backwards. If the Kolmogorov–Smirnov test shows that the samples come from a similar distribution, then use the t-test, otherwise use the Mann-Whitney U test.
For unequal variances, there is a procedure by Fligner and Policello (1981) who describe a methodology which can be employed for computing an adjusted U statistic if the homogeneity of variance assumption underlying the Mann-Whitney U test is violated. The reference I have for the Fligner-Policello test is Fligner, M. A. & Policello, II, G.E. (1981). Robust rank procedures for the Behrens-Fisher problem. Journal of the American Statistical Association, 76, 162-174.
Welch's t-test seems to be implemented via the EqualVariances -> False option of MeanDifferenceCI.
EqualVariances (default False): whether the unknown population variances are assumed equal -- option for MeanDifferenceCI.
"Confidence intervals for the difference between means are also based
on Student's distribution if the variances are not known. If the
variances are assumed equal, MeanDifferenceCI is based on Student's
distribution with Length[list1]+Length[list2]-2 degrees of freedom.
If the population variances are not assumed equal, Welch's
approximation for the degrees of freedom is used". See:
http://reference.wolfram.com/language/HypothesisTesting/tutorial/HypothesisTesting.html
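In R (rather than Mathematica) the same sequence of checks can be sketched with built-in functions; the data here are simulated purely for illustration.
set.seed(7)
x <- rnorm(25, mean = 10, sd = 1)
y <- rnorm(60, mean = 11, sd = 4)    # unequal size and variance
ks.test(x, y)                        # compare the two distributions first
t.test(x, y)                         # Welch's t-test (var.equal = FALSE is the default)
wilcox.test(x, y)                    # Mann-Whitney U test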
50,508 | creating random variable with certain auto-correlation in R | In this case, each value returned by filter is a weighted sum of $m$ out of $n$ normal random variates $\{x_1,...,x_n\}$ with the $m$ weights, $w$, set equal to acf.target in the example provided by the OP. The $i^\text{th}$ term is:
$$y_i=\sum_{j=1}^m{w_j{}x_{\text{mod}(i-j+\lfloor(m-1)/2\rfloor,n)+1}}$$
The $i^{th}$ term returned by the acf function is:
$$a_i=\text{cor}(\{y_1,...,y_{n-i+1}\},\{y_i,...,y_{n}\})$$
$$=\frac{\text{cov}(\{y_1,...,y_{n-i+1}\},\{y_i,...,y_{n}\})}{\sum_{j=1}^mw_j^2}$$
$$\lim_{n\to\infty}a_i=\frac{\sum_{j=1}^{m-i+1}w_jw_{j+i-1}}{\sum_{j=1}^mw_j^2}\neq{w_i}$$
x <- rnorm(1e5)                       # white-noise input series
b <- 1.41519
acf.target <- (1:53)^(-b)             # desired autocorrelation at lags 0, 1, ..., 52
w <- acf.target                       # the OP uses the target ACF directly as filter weights
n <- 1:(length(w) - 1)                # lags at which to evaluate the implied ACF
y <- filter(x, w, circular = TRUE)
acf(y)
a <- c(1, sapply(n, function(i) sum(head(w, -i)*tail(w, -i)))/sum(w^2))  # ACF implied by w
lines(seq_along(a) - 1, a, col = "green")                                # overlay on the acf plot
If $\{\alpha_1,\ldots,\alpha_m\}$ are the desired autocorrelation values, a naive approach would be to iteratively update $w_i$:
$$w_i'=w_i\frac{\alpha_i}{a_i}$$
This works for the OP's example:
tol <- sqrt(.Machine$double.eps)
while (max(abs(acf.target - a)) > tol) {
a <- c(1, sapply(n, function(i) sum(head(w, -i)*tail(w, -i)))/sum(w^2))
w <- w*acf.target/a
}
y <- filter(x, w, circular = TRUE)
acf(y)
lines(seq_along(acf.target) - 1, acf.target, col = "green")
50,509 | Approximating P(A,B,C) using P(A,B), P(A,C), P(B,C), and P(A), P(B), P(C) | Bounding
$$
P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(B \cap C) - P(C \cap A) + P(A \cap B \cap C)
$$
$\implies$
$$
P(A \cap B \cap C) = P(A \cup B \cup C) - \big(P(A) + P(B) + P(C)\big) + \big(P(A \cap B) + P(B \cap C) + P(C \cap A)\big)
$$
and
$$ 0 \leq P(A \cup B \cup C) \leq 1$$
$\implies$
$$
- \big(P(A) + P(B) + P(C)\big) + \big(P(A \cap B) + P(B \cap C) + P(C \cap A)\big) \leq P(A \cap B \cap C) \leq 1- \big(P(A) + P(B) + P(C)\big) + \big(P(A \cap B) + P(B \cap C) + P(C \cap A)\big)
$$
Approximating
$$
\begin{align}
P(A \cap B \cap C) &= P\big((A \cap B) \cap C\big)\\
&= P(C|(A \cap B))P(A \cap B) \tag{1}\\
&or\ P(B|(A \cap C))P(A \cap C) \tag{2}\\
&or\ P(A|(B \cap C))P(B \cap C) \tag{3}
\end{align}
$$
Which assuming conditional independence is respectively equal to:
$$
\begin{align}
P(A \cap B \cap C)
&= P(C)P(A \cap B) \tag{1a}\\
&or\ P(B)P(A \cap C) \tag{2a}\\
&or\ P(A)P(B \cap C) \tag{3a}
\end{align}
$$
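As a numerical sanity check of the bounds above, here is a small base-R simulation over one arbitrary joint distribution of three binary events (all probabilities are made up):
set.seed(3)
p <- runif(8); p <- p / sum(p)                    # probabilities of the 8 outcomes of (A,B,C)
outc <- expand.grid(A = 0:1, B = 0:1, C = 0:1)
pr <- function(cond) sum(p[cond])                 # helper: probability of an event
PA  <- pr(outc$A == 1); PB <- pr(outc$B == 1); PC <- pr(outc$C == 1)
PAB <- pr(outc$A == 1 & outc$B == 1)
PBC <- pr(outc$B == 1 & outc$C == 1)
PCA <- pr(outc$C == 1 & outc$A == 1)
PABC <- pr(outc$A == 1 & outc$B == 1 & outc$C == 1)
lower <- -(PA + PB + PC) + (PAB + PBC + PCA)
upper <- 1 - (PA + PB + PC) + (PAB + PBC + PCA)
c(lower = lower, true = PABC, upper = upper)      # lower <= true <= upper always holds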
50,510 | Adjusting time-series before a sudden increase (the reason & time of the increase are known) | Detecting and incorporating Level Shifts (Step Shifts) is an integral part of both non-causal and causal modelling. The term Intervention Detection is often used. Some software implementations restrict identification to non-causal models while others do not. See http://www.unc.edu/~jbhill/tsay.pdf for a seminal article, and search here for relevant posts using the string "intervention detection". Typical output would include "what the series would have looked like without the intervention". These are often called the "adjusted values".
There may be a "de facto date" and a "de jure date" to the intervention, often coinciding but sometimes not due to dynamics in the series. If one knows the start date then one can simply add an indicator variable ( 0,0,0,0,1,1,1,1..) to reflect that knowledge and then use the estimated coefficient for that variable as the adjustment factor.
If you knew that an intervention had occurred at a particular point in time, you could estimate a model in which a 0/1 intervention variable I is used in conjunction with the ARIMA component.
For example, if you had pulses at two points in time (39 and 21), the augmented data matrix would include a pulse indicator column for each of those two time points.
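A hypothetical R sketch of the indicator-variable idea: simulate a series with a level shift, then estimate the shift by passing the 0/1 step variable to arima() through xreg; the shift date, model order and effect size are all made up.
set.seed(10)
n  <- 120
I  <- as.numeric(seq_len(n) >= 61)                       # step indicator: 0,...,0,1,...,1
y  <- arima.sim(list(ar = 0.5), n = n) + 5 * I           # simulated series with a level shift
fit <- arima(y, order = c(1, 0, 0), xreg = cbind(step = I))
coef(fit)                                                # "step" estimates the shift size
adjusted <- y - coef(fit)["step"] * I                    # the series with the intervention removed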
50,511 | Basu's Theorem Proof | The first questions were already answered in the comments. You could add that the inner expectation in
$E_{\theta}E_{\theta}[I_{V \in A}|T]$
is taken "with a fixed T = t", giving a function $g(t)=E_{\theta}[I_{V \in A}|T=t]$, as Xi'an said. The second $E_{\theta}$ then takes the expectation of g(t) (so you vary t now).
From this we conclude $E_{\theta}[g(t) − P(V \in A)] = 0$ for all θ in the sample space. (Question: Why is $g(t)$ subtracted from $P(V\in A)$ here? Why are we concluding from the above that the expectation is 0?)
This is using the definition of a complete statistic:
If you have a function h(T) that
1) does not depend on the parameter $\theta$ directly, but only on T
(as it is written, "h(T)"), and
2) for which $E_{\theta}(h(T)) = 0$ for whatever $\theta$ you pick,
then $h(t)$ is itself zero almost everywhere (or:
$P_{\theta}(h(T) = 0) = 1$), again for any value of $\theta$.
In the proof, T is complete, and the function h of T is $h(T) = g(T) − P(V \in A) = g(T) - c$. We need V to be ancillary because otherwise h would not be purely a function of T, but some $h(\theta, T)$.
"From this we conclude ...": your first five lines of formulas say that $E_{\theta}(I_{V \in A}) = E_{\theta}[g(T)]$, so $E_{\theta}[h(T)]$ is their difference and equals zero; thus the second point is fulfilled, too, and the conclusion follows.
I know it's over two years late. I just figured this out for myself (or so I think). My problem was overlooking the first requirement. Hope it makes sense.
50,512 | Are There Evaluation Criteria For Instrumental Variables? | Suppose you have several valid instrumental variables. Then you can think of the following when choosing among them:
If they capture the same local average treatment effects (i.e. they generate the same set of compliers), you can perhaps choose both and conduct an overidentifying restrictions test, or choose the one that has the strongest first stage (largest F-statistic).
If they do not capture the same local average treatment effect, then you should compute both IV estimators, and compare their value. Try to explain why they give different or the same answers by thinking about the local average treatment effects that they induce.
You cannot simply base your choice of instrument on the $R^2$.
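A hypothetical base-R sketch of comparing first-stage strength for two candidate instruments of one endogenous regressor (all variables simulated; in practice you would use your own data and the full IV machinery):
set.seed(5)
n  <- 500
z1 <- rnorm(n); z2 <- rnorm(n)
x  <- 0.6 * z1 + 0.1 * z2 + rnorm(n)           # x is driven much more strongly by z1
F1 <- summary(lm(x ~ z1))$fstatistic["value"]  # first-stage F-statistic using z1
F2 <- summary(lm(x ~ z2))$fstatistic["value"]  # first-stage F-statistic using z2
c(F_z1 = unname(F1), F_z2 = unname(F2))        # prefer the instrument with the larger first-stage F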
50,513 | Is it possible to construct a hypothesis test for the existence of a mean of a symmetric distribution? | Not really. See here for example: Test for finite variance?
50,514 | Plotting a tree timeline (evolution history) | Since the question is perfectly on-topic here, as it deals with "Data Visualization", I will reproduce the comment as an answer so that future viewers can benefit.
What is this type of graph called?
This graph is called a "tree diagram" and sometimes also a dendrogram.
Are there existing tools to produce this type of graph?
Yes, the tree diagram can be drawn in all the major data science tools.
Here is a link to the tutorial for plotting the tree diagram in D3.js
In Python, the pydot package can be used. Here is a link to a detailed tutorial.
Here is another tutorial in R, which gives a step-by-step guide on various types of tree diagrams.
I would recommend going with D3 for the dendrograms, owing to better aesthetics, ease of code and flexibility.
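As a minimal R illustration (not taken from the linked tutorials), a dendrogram can be drawn from a hierarchical clustering of a built-in data set:
hc <- hclust(dist(scale(USArrests)))            # hierarchical clustering of a built-in data set
plot(as.dendrogram(hc), main = "Dendrogram of USArrests")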
50,515 | Comparing CV Predictions across Folds for Random Forest | For each fold, you are building a classifier that makes predictions for the observations. The classifiers within each fold have slightly different training sets and different weights, but they are all attempting to estimate the same underlying model. So yes, you can combine the predictions. If you have multiple predictions for one observation, you could take the average prediction of several folds, or weight the predictions so that the more accurate models have more influence than less accurate ones. This applies to any "ensemble learning" system. Predictions for different observations should be made on the same scale (e.g. from -1 to +1 or 0 to +1) so I can't think of any reason not to combine them.
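A minimal sketch of pooling the out-of-fold predictions into one vector before computing a single performance measure; it uses logistic regression instead of a random forest, and the data, fold assignment and AUC formula are all illustrative assumptions.
set.seed(2)
n <- 300
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$y <- rbinom(n, 1, plogis(d$x1 - 0.5 * d$x2))
fold <- sample(rep(1:5, length.out = n))                 # 5-fold assignment
pred <- numeric(n)
for (k in 1:5) {
  fit <- glm(y ~ x1 + x2, family = binomial, data = d[fold != k, ])
  pred[fold == k] <- predict(fit, newdata = d[fold == k, ], type = "response")
}
r  <- rank(pred)                                          # AUC of the pooled predictions
n1 <- sum(d$y == 1); n0 <- sum(d$y == 0)                  # via the rank (Mann-Whitney) formula
(sum(r[d$y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)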
50,516 | Comparing CV Predictions across Folds for Random Forest | After speaking with a few other folks about this problem, I think that technically you can't directly compare probabilities predicted for different folds, but practically, in most cases, you can.
The time when you would not be able to is if you have a small, potentially diverse positive set. Then when you divide the positives into k folds, each of the folds of positives may not be that similar to one another, so the k-1 folds are actually going to vary a bit; this would make the trees that compose each of the forests more different - this would seem to indicate that you couldn't directly compare the predicted probabilities across folds.
Now in practice, if you have a decently-sized positive set, then when you split those positives up across folds, each k-1 set of folds that composes the folds will be pretty similar, thus the forests will end up not being that different (assuming you have enough trees). So in practice the predicted probabilities will end up being close to directly comparable.
50,517 | Comparing CV Predictions across Folds for Random Forest | I am not sure you can combine all the predictions of the k-folds.
However, you could stratify your K folds so that you have a similar number of positives in each fold, and the ROC performance would not vary due to an imbalanced dataset.
In Python, there is this class from scikit-learn that works very well: http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.StratifiedShuffleSplit.html#sklearn.cross_validation.StratifiedShuffleSplit
If you really have too few positive instances, you can work with bootstrapping instead of cross-validation (this paper explains it really well: http://scitation.aip.org/content/aapm/journal/medphys/35/4/10.1118/1.2868757 )
50,518 | One class SVM with caret in R using cross validation [closed] | A simple option is not to use caret and just use the tune function from e1071 (tune takes the fitting function, here svm, as its first argument, with the data and SVM parameters passed separately):
svm_model <- tune(svm, train.x = training, train.y = NULL, type = "one-classification",
                  nu = 0.01, gamma = 0.002, scale = TRUE, kernel = "radial", tunecontrol = tune.control(nrepeat = 3))
The default setting from tune is 10-fold CV. Using tune.control you can have this repeated as many times as you want; see ?tune.control for more options.
If you want to use caret you will have to build your own model, because at the moment there is no one-classification model. But if you follow the steps on the caret page for using your own model you could adjust the first example.
Also look at these examples (example1, example2).
50,519 | Controlling FWER using minP/maxT methods | If you feel comfortable with the Bonferroni correction, you should not have a problem with the minP method.
Suppose that you have K p-values: $P_1,P_2,...,P_K$. Consider the Bonferroni correction: each p-value is adjusted by the number of tests, $P_i^{Bon}= K*P_i$, and then you compare your adjusted p-values with $\alpha$ to determine significance.
Pretty simple, correct? But what is the null distribution of your p-values after the correction? Clearly, they are not uniform(0,1) anymore, right? Actually, they are uniform(0,K) distributed. This is pretty similar to your question: the p-values are truly not uniform(0,1) distributed, but we compare them with uniform(0,1) to determine significance.
The minP/maxT method adopts pretty much the same idea. We compare the statistics with a distribution from which they do not truly come. This does not imply that your null hypothesis has been changed; we have to adjust our statistics to avoid false positives and control the FWER. Therefore, we compare Bonferroni-adjusted p-values with uniform(0,1) even though they are truly from U(0,K). Likewise, we compare the t-test statistics in your example with the maximum T statistic even though they are truly from a t distribution.
If you want some mathematical proof, I posted it in another question that you referred to:
Why does max-t methods use only the maximum of generated t-values?
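A small simulation sketch of the single-step maxT adjustment (the null distribution is simulated directly here instead of by permutation, and all numbers are made up):
set.seed(4)
K <- 10; B <- 2000; df <- 20
Tobs  <- rt(K, df) + c(3, rep(0, K - 1))                  # K observed t statistics, one true effect
Tnull <- matrix(rt(K * B, df), nrow = B)                  # B null draws of all K statistics
maxT  <- apply(abs(Tnull), 1, max)                        # null distribution of the maximum |t|
p_adj <- sapply(abs(Tobs), function(t) mean(maxT >= t))   # maxT-adjusted p-values
p_adj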
50,520 | Expectation Maximization clarification questions | Just to elaborate a little further: In terms of $\theta$ we have
\begin{align*}
\sum_{i=1}^M \sum_{z^{(i)}=1}^K & Q(z^{(i)}) \log\left(\frac{P(x^{(i)},z^{(i)};\theta)}{Q(z^{(i)})}\right) \\
&= \sum_{i=1}^M \sum_{z^{(i)}=1}^K Q(z^{(i)}) \log P(x^{(i)},z^{(i)};\theta) - \sum_{i=1}^M \sum_{z^{(i)}=1}^K Q(z^{(i)}) \log Q(z^{(i)}) \\
&= \sum_{i=1}^M \sum_{z^{(i)}=1}^K Q(z^{(i)}) \log P(x^{(i)},z^{(i)};\theta) - const
\end{align*}
so yes, maximizing $\sum_{i=1}^M \sum_{z^{(i)}=1}^K Q(z^{(i)}) \log\left(\frac{P(x^{(i)},z^{(i)};\theta)}{Q(z^{(i)})}\right)$ in $\theta$ is the same as maximizing $\sum_{i=1}^M \sum_{z^{(i)}=1}^K Q(z^{(i)}) \log P(x^{(i)},z^{(i)};\theta)$ because they just differ by a constant (in terms of $\theta$).
On the second question: Just to add a few cents (i.e. one example) to the answer already given: Imagine the standard example of Gaussian Mixture Models. Here, $P(x^{(i)},z^{(i)};\theta) = e^{\text{someFunction}(x^{(i)};\theta_{z^{(i)}})}$ so that indeed, with the log inside the sum:
$$\sum_{i=1}^M \sum_{z^{(i)}=1}^K Q(z^{(i)}) \log P(x^{(i)},z^{(i)};\theta)
= \sum_{i=1}^M \sum_{z^{(i)}=1}^K Q(z^{(i)}) \text{someFunction}(x^{(i)};\theta_{z^{(i)}}) $$
is easier to maximize than
$$ \sum_{i=1}^M \log \left( \sum_{z^{(i)}=1}^K P(x^{(i)},z^{(i)};\theta)\right) $$
because in the second equation...
you have a log in front of a sum which makes the gradients more "complicated" than the other way around
you cannot use the simplification $\log e^{*} = *$
(2. is particularly important in real-world examples because many distributions that we use come in the shape of an exponential family...)
50,521 | Find the MLE of the proportion of employees falling in $[I_1,I_2]$ | Unless I've made an error, you're very close to the right answer.
I don't see how $X_{(n)}$ comes into the MLE. It looks to me like you can work out the MLE of $c$ and $A$ (neither of which involve $X_{(n)}$) and substitute them into the relevant places to get the MLE of the probability. After removing reference to $X_{(n)}$, I believe the only relevant cases are the first three, and they can all be written in one reasonably simple expression for the MLE of the required probability:
$\qquad\min((\frac{X_{(1)}}{I_1})^\hat{c},1)-\min((\frac{X_{(1)}}{I_2})^\hat{c},1)$
where $\hat{c}$ is the usual MLE for $c$ (which I'll leave for you to deal with).
As you see, aside from minor details already mentioned, that's very close to what you had already.
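A direct R transcription of the expression above; $\hat{c}$ is deliberately left as an input, matching the answer's decision not to derive the usual MLE for $c$ here.
mle_prob <- function(x, I1, I2, c_hat) {
  x1 <- min(x)                                    # X_(1), the sample minimum
  min((x1 / I1)^c_hat, 1) - min((x1 / I2)^c_hat, 1)
}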
50,522 | Linear OLS v Mixed-Effects Model with Correlated Regressors | Consistent with the conversation in the comments, it may actually be quite obvious: in mixed-effects, the overall increase of $y$ with increasing $x1$, with a positive correlation of cor(x1, y) = 0.7924759, is nicely captured in the plane formed by lm(y ~ x1 + x2) (depicted in the OP plots) or in the following graph:
library(car)
library(rgl)
scatter3d(y ~ x1 + x2)
rgl.snapshot(filename="scat1.png", fmt="png")
On the other hand, this is simply lost in lmer mixed-effects. Rather than a positive slope in the regression of $y$ over $x1$, all the slope coefficients are consistently negative, because at each level of stratification created by the discrete values of $x2$ (1, 2 and 3), the corresponding $x1$ do slope downwards:
scatter3d (x=x1,y=x2,z=y, groups=as.factor(x2))
(source here)
I tend to assume (and would like to get confirmation) that this may be intrinsic to the idea of mixed-effects, and that it cannot be corrected by modifying the lmer call syntax. I tried for instance lmer(y ~ (x1|x2), REML=F), lmer(y ~ x1 + (x1|x2), REML=F), among others, with conceptually similar outputs.
Leaving aside the fact that in the example in the OP each of the discrete values that $x2$ assumes as a distinct "unit" role, which is a coercion of the idea of a hierarchical model, where units are typically subjects, it is clear that a lmer mixed-effects model is not optimal in this situation, and that even in the case that the values of $x2$ were more 'natural' units (say, x2==1 corresponded to John; x2==2 was for Tom, and x2==3 were the values for Mary), a linear OLS lm with dummy variables would be more appropriate than a mixed-effects model. From a different perspective, the distribution of the $y$ values along each point in the $x1$ axis does not follow a Gaussian distributions both due to variations between units and within units, as is assumed of mixed-effects models.
As noted in one of the comments, a mixed-effects model would be more natural in a situation like the sleepstudy {lme4} dataset, where different individuals respond to sleep deprivation with progressive increases in reaction time, and the heteroskedasticity seems to increase as time goes on:
require(lme4)
attach(sleepstudy)
fit <- lm(Reaction ~ Days)
plot(fit, pch=19, cex=0.5, col='slategray')
lending itself to fitting different intercepts and slopes for each of the Subjects:
plot(Reaction ~ Days, xlim=c(0, 9), ylim=c(200, 480), type='n',
xlab='Days', ylab='Reaction')
for (i in sleepstudy$Subject){
points(Reaction ~ Days, sleepstudy[Subject==i,], pch=19,
cex = 0.5, col=i)
fit<-lm(Reaction ~ Days, sleepstudy[sleepstudy$Subject==i,])
a <- predict(fit)
lines(x=c(0:9),predict(fit), col=i, lwd=1)
}
This model would correspond to the lme4 call lmer(Reaction ~ Days + (Days|Subject)), for which coef(lmer(Reaction ~ Days + (Days|Subject))) returns a different intercept and slope for every subject. On the other hand, lmer(Reaction ~ Days + (1|Subject)) would produce different intercepts for each subject, but exactly the same slope for all subjects:
plot(Reaction ~ Days, xlim=c(0, 9), ylim=c(200, 480), type='n',
xlab='Days', ylab='Reaction')
for (i in sleepstudy$Subject){
points(Reaction ~ Days, sleepstudy[Subject==i,], pch=19,
cex = 0.5, col=i)
a<-coef(lmer(Reaction ~ Days + (1|Subject)))$Subject[i,1]
b<-coef(lmer(Reaction ~ Days + (1|Subject)))$Subject[1,2]
abline(a=a, b=b, col=i, lwd=1)
}
... clearly not as good a model as the relative AIC values also seem to indicate (AIC(lmer(Reaction ~ Days +(1|Subject)), lmer(Reaction ~ Days +(Days|Subject)) 1794.465 v 1755.628). | Linear OLS v Mixed-Effects Model with Correlated Regressors | Consistent with the conversation in the comments, it may actually be quite obvious: in mixed-effects, the overall increasing values of $y$ with increasing $x1$ as shown in:
with a positive correlatio | Linear OLS v Mixed-Effects Model with Correlated Regressors
Consistent with the conversation in the comments, it may actually be quite obvious: in mixed-effects, the overall increasing values of $y$ with increasing $x1$ as shown in:
with a positive correlation of cor(x1,y) [1] 0.7924759, is nicely captured in the plane formed by lm(y ~ x1 + x2) (depicted in the OP plots) or in the following graph:
library(car)
library(rgl)
scatter3d(y ~ x1 + x2)
rgl.snapshot(filename="scat1.png", fmt="png")
On the other hand, this is simply lost in lmer mixed-effects. Rather than a positive slope in the regression of $y$ on $x1$, all the slope coefficients are consistently negative, because at each level of stratification created by the discrete values of $x2$ (1, 2 and 3), $y$ slopes downwards in the corresponding $x1$ values:
scatter3d (x=x1,y=x2,z=y, groups=as.factor(x2))
(source here)
I tend to assume (and would like to get confirmation) that this may be intrinsic to the idea of mixed-effects, and that it cannot be corrected by modifying the lmer call syntax. I tried, for instance, lmer(y ~ (x1|x2), REML=F) and lmer(y ~ x1 + (x1|x2), REML=F), among others, with conceptually similar outputs.
Leaving aside the fact that in the example in the OP each of the discrete values that $x2$ assumes plays the role of a distinct "unit", which is a stretch of the idea of a hierarchical model, where units are typically subjects, it is clear that an lmer mixed-effects model is not optimal in this situation, and that even if the values of $x2$ were more 'natural' units (say, x2==1 corresponded to John, x2==2 to Tom, and x2==3 to Mary), a linear OLS lm with dummy variables would be more appropriate than a mixed-effects model. From a different perspective, the distribution of the $y$ values at each point on the $x1$ axis does not follow a Gaussian distribution arising from variation between units and within units, as is assumed in mixed-effects models.
As noted in one of the comments, a mixed-effects model would be more natural in a situation like the sleepstudy {lme4} dataset, where different individuals respond to sleep deprivation with progressive increases in reaction time, and the heteroskedasticity seems to increase as time goes on:
require(lme4)
attach(sleepstudy)
fit <- lm(Reaction ~ Days)
plot(fit, pch=19, cex=0.5, col='slategray')
lending itself to fitting different intercepts and slopes for each of the Subjects:
plot(Reaction ~ Days, xlim=c(0, 9), ylim=c(200, 480), type='n',
xlab='Days', ylab='Reaction')
for (i in sleepstudy$Subject){
points(Reaction ~ Days, sleepstudy[Subject==i,], pch=19,
cex = 0.5, col=i)
fit<-lm(Reaction ~ Days, sleepstudy[sleepstudy$Subject==i,])
a <- predict(fit)
lines(x=c(0:9),predict(fit), col=i, lwd=1)
}
This model would correspond to the lme4 call lmer(Reaction ~ Days + (Days|Subject)), for which coef(lmer(Reaction ~ Days + (Days|Subject))) returns a different intercept and slope for every subject. On the other hand, lmer(Reaction ~ Days + (1|Subject)) would produce different intercepts for each subject, but exactly the same slope for all subjects:
plot(Reaction ~ Days, xlim=c(0, 9), ylim=c(200, 480), type='n',
xlab='Days', ylab='Reaction')
for (i in sleepstudy$Subject){
points(Reaction ~ Days, sleepstudy[Subject==i,], pch=19,
cex = 0.5, col=i)
a<-coef(lmer(Reaction ~ Days + (1|Subject)))$Subject[i,1]
b<-coef(lmer(Reaction ~ Days + (1|Subject)))$Subject[1,2]
abline(a=a, b=b, col=i, lwd=1)
}
... clearly not as good a model as the relative AIC values also seem to indicate (AIC(lmer(Reaction ~ Days +(1|Subject)), lmer(Reaction ~ Days +(Days|Subject)) 1794.465 v 1755.628). | Linear OLS v Mixed-Effects Model with Correlated Regressors
Consistent with the conversation in the comments, it may actually be quite obvious: in mixed-effects, the overall increasing values of $y$ with increasing $x1$ as shown in:
with a positive correlatio |
50,523 | Division with lag operator | First of all, it is important to notice that $L$ is an operator that acts on the random variable that follows it. So the short answer is: no, you cannot. Let me give you an example:
Assume you have a time series of the form
$y_t- y_{t-1}=e_t - e_{t-1}$. Then, using the backward shift operator, we have $(1-L)y_t=(1-L)e_t$. It is obvious that we cannot cancel $1-L$ from the right and left sides of the equation (and get $y_t=e_t$).
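A quick numerical illustration of the point (a sketch: any constant shift of $e_t$ has exactly the same first differences, so the levels cannot be recovered by "cancelling" $1-L$):
set.seed(1)
e <- rnorm(100)
y <- e + 5                      # a different series with the same first differences
all.equal(diff(y), diff(e))     # TRUE: (1-L)y_t = (1-L)e_t
isTRUE(all.equal(y, e))         # FALSE: yet y_t is not e_t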
Now we consider the case where the left-hand side is a stationary process.
Let the process be $(1-\phi L)y_t=(1-\phi L)e_t$ where $|\phi|<1$. Then, using the power series expansion, we have $y_t=\sum_0^\infty \phi^i L^i (1-\phi L)e_t$, that is, $y_t=\sum_0^\infty \phi^i e_{t-i}- \sum_0^\infty \phi^{i+1} e_{t-i-1}$. | Division with lag operator | First of all it is important to notice that $L$ is an operator that works on the following random variable. Then a short answer is No you cannot. Let me give you an example,
Assume you have a time ser | Division with lag operator
First of all, it is important to notice that $L$ is an operator that acts on the random variable that follows it. So the short answer is: no, you cannot. Let me give you an example:
Assume you have a time series of the form
$y_t- y_{t-1}=e_t - e_{t-1}$. Then, using the backward shift operator, we have $(1-L)y_t=(1-L)e_t$. It is obvious that we cannot cancel $1-L$ from the right and left sides of the equation (and get $y_t=e_t$).
Now we consider the case where the left-hand side is a stationary process.
Let the process be $(1-\phi L)y_t=(1-\phi L)e_t$ where $|\phi|<1$. Then, using the power series expansion, we have $y_t=\sum_0^\infty \phi^i L^i (1-\phi L)e_t$, that is, $y_t=\sum_0^\infty \phi^i e_{t-i}- \sum_0^\infty \phi^{i+1} e_{t-i-1}$. | Division with lag operator
First of all it is important to notice that $L$ is an operator that works on the following random variable. Then a short answer is No you cannot. Let me give you an example,
Assume you have a time ser |
50,524 | Bayesian Updating | This isn't a typical Bayesian update setup - what is the sequence $B_i$? Usually these are the observed variables, while $A_i$ is a sequence of latent variables, the ones we wish to estimate. In that case,
we predict, based on the $B_1,...,B_{n-1}$, using
$$
P(A_n|\{B_1,...,B_{n-1}\}) = \int P(A_n|A_{n-1})P(A_{n-1}|\{B_1,...,B_{n-1}\})dA_{n-1},
$$
then update our bad prediction when $B_n$ arrives by
$$
P(A_n|\{B_1,...,B_n\}) = \frac{P(B_n|A_n)P(A_n|\{B_1,...,B_{n-1}\})}{P(B_n|\{B_1,...,B_{n-1}\})},
$$
So the prior you speak of here is $P(A_n|\{B_1,...,B_{n-1}\})$, the prediction carried forward from the previous estimate of the "posterior" (it is not strictly a posterior), which is then used in the update step. This follows the general principle in Bayesian forecasting - the current estimate of the prior contains everything we know about that density. It should be used in the next step.
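To make this concrete, here is a minimal numeric sketch of one predict/update cycle for a two-state latent variable (the transition matrix, likelihood values and previous posterior are made up for illustration):
trans <- matrix(c(0.9, 0.1,
                  0.2, 0.8), nrow = 2, byrow = TRUE)   # P(A_n = j | A_{n-1} = i)
lik <- c(0.7, 0.3)               # P(B_n = observed value | A_n = state 1, 2)
post_prev <- c(0.5, 0.5)         # P(A_{n-1} | B_1, ..., B_{n-1})
prior_n <- as.vector(post_prev %*% trans)              # predict step
post_n <- lik * prior_n / sum(lik * prior_n)           # update step (Bayes rule)
post_n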
Sorry for using integrals instead of summations - that's how I wrote it up. | Bayesian Updating | This isn't a typical Bayesian update setup - what is the sequence $B_i$? Usually these are the observed variables, while $A_i$ is a sequence of latent variables, the ones we wish to estimate. In that | Bayesian Updating
This isn't a typical Bayesian update setup - what is the sequence $B_i$? Usually these are the observed variables, while $A_i$ is a sequence of latent variables, the ones we wish to estimate. In that case,
we predict, based on the $B_1,...,B_{n-1}$, using
$$
P(A_n|\{B_1,...,B_{n-1}\}) = \int P(A_n|A_{n-1})P(A_{n-1}|\{B_1,...,B_{n-1}\})dA_{n-1},
$$
then update our bad prediction when $B_n$ arrives by
$$
P(A_n|\{B_1,...,B_n\}) = \frac{P(B_n|A_n)P(A_n|\{B_1,...,B_{n-1}\})}{P(B_n|\{B_1,...,B_{n-1}\})},
$$
So the prior you speak of here is $P(A_n|\{B_1,...,B_{n-1}\})$, the prediction carried forward from the previous estimate of the "posterior" (it is not strictly a posterior), which is then used in the update step. This follows the general principle in Bayesian forecasting - the current estimate of the prior contains everything we know about that density. It should be used in the next step.
Sorry for using integrals instead of summations - that's how I wrote it up. | Bayesian Updating
This isn't a typical Bayesian update setup - what is the sequence $B_i$? Usually these are the observed variables, while $A_i$ is a sequence of latent variables, the ones we wish to estimate. In that |
50,525 | Do I need to adjust the degrees of freedom returned by pool.compare() in MICE? | You posed two questions, so I will simply comment on them in order:
Question 1: High degrees of freedom:
Such high degrees of freedom are normal with pool.compare. The function implements the procedure by Meng & Rubin (1992), in which the denominator degrees of freedom for the test statistic $D_m$ are derived under the assumption that the complete-data degrees of freedom are infinite (see also Rubin, 1987).
Thus, the procedure will estimate the degrees of freedom smaller than in the hypothetical complete-data (i.e., smaller than infinity), which often results in relatively large denominator degrees of freedom in MI. Sometimes this is inappropriate, especially in smaller samples.
Question 2: Correction formula of Barnard & Rubin:
The correction formula in Barnard & Rubin (1999) addresses the aforementioned problem, but not for multiparameter tests (as done in pool.compare); rather, it applies to tests for scalar estimands (e.g., a single regression coefficient).
Therefore, this correction formula is not the way to go here. Luckily, there is also a correction formula available for multiparameter tests. That formula was proposed by Reiter (2007) and was originally developed for the procedure by Li, Raghunathan, and Rubin (1991).
However, these two procedures are asymptotically identical in many cases, and the expression for the degrees of freedom is the same in $D_1$ and $D_3$. Therefore, I would suggest you apply Reiter's correction formula to the results in pool.compare. The formula is not much more difficult to apply than that of Barnard & Rubin, and it is also implemented in a couple of R packages.
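For reference, a minimal sketch of obtaining the pooled test in R, to whose denominator degrees of freedom the correction would then be applied (nhanes ships with mice; pool.compare() is from older mice versions and has since been superseded by D1()/D3()):
library(mice)
imp  <- mice(nhanes, m = 20, seed = 1, printFlag = FALSE)
fit1 <- with(imp, lm(bmi ~ age + hyp))   # full model
fit0 <- with(imp, lm(bmi ~ age))         # restricted model
cmp  <- pool.compare(fit1, fit0)         # Wald-type D statistic by default
str(cmp)                                 # inspect the statistic and its denominator df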
You can find some very readable applications of Reiter's correction formula in the article by van Ginkel and Kronenberg (2014), who apply the procedure of Li et al. (1991) with Reiter's corrections to the ANOVA (recall that Meng & Rubin, 1992, and Li et al., 1991, can be thought of as interchangeable in this case).
Edit:
However, you may well observe no big difference. The outcome of your hypothesis test will likely remain the same. | Do I need to adjust the degrees of freedom returned by pool.compare() in MICE? | You posed two questions, so I will simply comment on them in order:
Question 1: High degrees of freedom:
Such high degrees of freedoms are normal with pool.compare. The function implements the procedu | Do I need to adjust the degrees of freedom returned by pool.compare() in MICE?
You posed two questions, so I will simply comment on them in order:
Question 1: High degrees of freedom:
Such high degrees of freedom are normal with pool.compare. The function implements the procedure by Meng & Rubin (1992), in which the denominator degrees of freedom for the test statistic $D_m$ are derived under the assumption that the complete-data degrees of freedom are infinite (see also Rubin, 1987).
Thus, the procedure will estimate the degrees of freedom smaller than in the hypothetical complete-data (i.e., smaller than infinity), which often results in relatively large denominator degrees of freedom in MI. Sometimes this is inappropriate, especially in smaller samples.
Question 2: Correction formula of Barnard & Rubin:
The correction formula in Barnard & Rubin (1999) addresses the aforementioned problem, but not for multiparameter tests (as done in pool.compare); rather, it applies to tests for scalar estimands (e.g., a single regression coefficient).
Therefore, this correction formula is not the way to go here. Luckily, there is also a correction formula available for multiparameter tests. That formula was proposed by Reiter (2007) and was originally developed for the procedure by Li, Raghunathan, and Rubin (1991).
However, these two procedures are asymptotically identical in many cases, and the expression for the degrees of freedom is the same in $D_1$ and $D_3$. Therefore, I would suggest you apply Reiter's correction formula to the results in pool.compare. The formula is not much more difficult to apply than that of Barnard & Rubin, and it is also implemented in a couple of R packages.
You can find some very readable applications of Reiter's correction formula in the article by van Ginkel and Kronenberg (2014), who apply the procedure of Li et al. (1991) with Reiter's corrections to the ANOVA (recall that Meng & Rubin, 1992, and Li et al., 1991, can be thought of as interchangeable in this case).
Edit:
However, you may well observe no big difference. The outcome of your hypothesis test will likely remain the same. | Do I need to adjust the degrees of freedom returned by pool.compare() in MICE?
You posed two questions, so I will simply comment on them in order:
Question 1: High degrees of freedom:
Such high degrees of freedoms are normal with pool.compare. The function implements the procedu |
50,526 | In learning theory, why can't we bound like $P[|E_{in}(g)-E_{out}(g)|>\epsilon] \leq 2e^{-2\epsilon^{2}N}$?($g$ is our learned hypothesis) | I believe the error is in the following equality
$$\sum_{h \in H} P[|E_{in}(g)-E_{out}(g)|>\epsilon \;\lvert \;g=h]\;P[g=h]
= \sum_{h \in H} P[|E_{in}(h)-E_{out}(h)|>\epsilon]\;P[g=h]$$
but it's hidden by the notation.
Consider your equality
$$ P [|E_{in}(g)-E_{out}(g)|>\epsilon \;\lvert \;g=h] = P [|E_{in}(h)-E_{out}(h)|>\epsilon] $$
It is only specific training sets $X, Y$ that produce $g = h$. And, unfortunately, it is exactly these data sets that bias the out of sample error to be higher than the training error. To produce an equality like this you have to change the probability space to include only those training data sets for which $g = h$, so $P$ just doesn't mean the same thing on the left and the right hand sides. | In learning theory, why can't we bound like $P[|E_{in}(g)-E_{out}(g)|>\epsilon] \leq 2e^{-2\epsilon^ | I believe the error is in the following equality
$$\sum_{h \in H} P[|E_{in}(g)-E_{out}(g)|>\epsilon \;\lvert \;g=h]\;P[g=h]
= \sum_{h \in H} P[|E_{in}(h)-E_{out}(h)|>\epsilon]\;P[g=h]$$
but its hidden | In learning theory, why can't we bound like $P[|E_{in}(g)-E_{out}(g)|>\epsilon] \leq 2e^{-2\epsilon^{2}N}$?($g$ is our learned hypothesis)
I believe the error is in the following equality
$$\sum_{h \in H} P[|E_{in}(g)-E_{out}(g)|>\epsilon \;\lvert \;g=h]\;P[g=h]
= \sum_{h \in H} P[|E_{in}(h)-E_{out}(h)|>\epsilon]\;P[g=h]$$
but it's hidden by the notation.
Consider your equality
$$ P [|E_{in}(g)-E_{out}(g)|>\epsilon \;\lvert \;g=h] = P [|E_{in}(h)-E_{out}(h)|>\epsilon] $$
It is only specific training sets $X, Y$ that produce $g = h$. And, unfortunately, it is exactly these data sets that bias the out of sample error to be higher than the training error. To produce an equality like this you have to change the probability space to include only those training data sets for which $g = h$, so $P$ just doesn't mean the same thing on the left and the right hand sides. | In learning theory, why can't we bound like $P[|E_{in}(g)-E_{out}(g)|>\epsilon] \leq 2e^{-2\epsilon^
I believe the error is in the following equality
$$\sum_{h \in H} P[|E_{in}(g)-E_{out}(g)|>\epsilon \;\lvert \;g=h]\;P[g=h]
= \sum_{h \in H} P[|E_{in}(h)-E_{out}(h)|>\epsilon]\;P[g=h]$$
but its hidden |
50,527 | Prior distribution on/of a parameter | Being that the first two results of a Google search (in English) for "prior distribution on" yield papers by Andrew Gelman, I'm sufficiently convinced that "on" is common enough not to cause confusion. Actually, there are contexts in which using "of" would strike some native English speakers as odd. For instance, the phrase "a uniform prior of $\log(\sigma)$"—rather than "on $\log(\sigma)$," as it reads in one of the Gelman papers—sounds a bit off to my ear. Moreover, "posterior on" is used commonly throughout the literature.
So, whether a matter of technical definition or simple convention, it's acceptable. My own bias is to use "on" more frequently than "of," and in some cases I've even seen "for." | Prior distribution on/of a parameter | Being that the first two results of a Google search (in English) for "prior distribution on" yield papers by Andrew Gelman, I'm sufficiently convinced that "on" is common enough not to cause confusion | Prior distribution on/of a parameter
Being that the first two results of a Google search (in English) for "prior distribution on" yield papers by Andrew Gelman, I'm sufficiently convinced that "on" is common enough not to cause confusion. Actually, there are contexts in which using "of" would strike some native English speakers as odd. For instance, the phrase "a uniform prior of $\log(\sigma)$"—rather than "on $\log(\sigma)$," as it reads in one of the Gelman papers—sounds a bit off to my ear. Moreover, "posterior on" is used commonly throughout the literature.
So, whether a matter of technical definition or simple convention, it's acceptable. My own bias is to use "on" more frequently than "of," and in some cases I've even seen "for." | Prior distribution on/of a parameter
Being that the first two results of a Google search (in English) for "prior distribution on" yield papers by Andrew Gelman, I'm sufficiently convinced that "on" is common enough not to cause confusion |
50,528 | Prior distribution on/of a parameter | Why do you want to use on instead of of?
I think that of is more correct because it refers to the fact that we are talking about the distribution of $\mu$ before (prior -> pre -> before) --- the observation of data.
Similarly, I think it is correct "posterior distribution of $\mu$", because it refers to the distribution of $\mu$ after (posterior -> post -> after) we have observed data. | Prior distribution on/of a parameter | Why do you want to use on instead of of?
I think that of is more correct because it refers to the fact that we are talking about the distribution of $\mu$ before (prior -> pre -> before) --- the obser | Prior distribution on/of a parameter
Why do you want to use on instead of of?
I think that of is more correct because it refers to the fact that we are talking about the distribution of $\mu$ before (prior -> pre -> before) --- the observation of data.
Similarly, I think it is correct "posterior distribution of $\mu$", because it refers to the distribution of $\mu$ after (posterior -> post -> after) we have observed data. | Prior distribution on/of a parameter
Why do you want to use on instead of of?
I think that of is more correct because it refers to the fact that we are talking about the distribution of $\mu$ before (prior -> pre -> before) --- the obser |
50,529 | Autoencoder doesn't work (can't learn features) | (Primary author of theanets here.) As hinted in the comments on your question, this is actually a difficult learning problem! The network is, as indicated by the optimized loss value during training, learning the optimal filters for representing this set of input data as well as it can.
The important thing to think about here is that the weights in the network are being tuned to represent the entire space of inputs, not just one input. My guess is that you're expecting the network to learn one gaussian blob feature, but that's not how this works.
From the network's perspective, it's being asked to represent an input that is sampled from this pool of data arbitrarily. Which pixels in the next sample will be zero? Which ones will be nonzero? The network doesn't know, because the inputs tile the entire pixel space with zero and nonzero pixels. The best representation for a set of data that fills the space uniformly is a bunch of more or less uniformly-distributed small values, which is what you're seeing.
In comparison, try limiting your input data to a subset of the gaussian blobs. Let's put them all in a diagonal stripe of pixels, for instance:
import climate
import matplotlib.pyplot as plt
import numpy as np
import skimage.filters
import theanets
climate.enable_default_logging()
def gen_inputs(x=28, sigma=2.0):
return np.array([
skimage.filters.gaussian_filter(i, sigma).astype('f')
for i in (np.eye(x*x)*2).reshape(x, x, x*x).transpose()
]).reshape(x*x, x*x)[10::27]
data = gen_inputs()
plt.imshow(data.mean(axis=0).reshape((28, 28)))
plt.show()
net = theanets.Autoencoder([784, 9, 784])
net.train(data, weight_l2=0.0001)
w = net.find('hid1', 'w').get_value().T
img = np.zeros((3 * 28, 3 * 28), float)
for r in range(3):
for c in range(3):
img[r*28:(r+1)*28, c*28:(c+1)*28] = w[r*3+c].reshape((28, 28))
plt.imshow(img)
plt.show()
Here's a plot of the mean data (the first imshow in the code):
And here's a plot of the learned features (the second imshow):
The features are responding to the mean of the entire dataset!
If you want to get the network to learn more "individual" features, it can be pretty tricky. Things you can play with:
Increase the number of hidden units, as suggested in the comments.
Try training with an L1 penalty on the hidden-unit activations (hidden_l1=0.5).
Try forcing the weights themselves to be sparse (weight_l1=0.5).
Good luck! | Autoencoder doesn't work (can't learn features) | (Primary author of theanets here.) As hinted in the comments on your question, this is actually a difficult learning problem! The network is, as indicated by the optimized loss value during training, | Autoencoder doesn't work (can't learn features)
(Primary author of theanets here.) As hinted in the comments on your question, this is actually a difficult learning problem! The network is, as indicated by the optimized loss value during training, learning the optimal filters for representing this set of input data as well as it can.
The important thing to think about here is that the weights in the network are being tuned to represent the entire space of inputs, not just one input. My guess is that you're expecting the network to learn one gaussian blob feature, but that's not how this works.
From the network's perspective, it's being asked to represent an input that is sampled from this pool of data arbitrarily. Which pixels in the next sample will be zero? Which ones will be nonzero? The network doesn't know, because the inputs tile the entire pixel space with zero and nonzero pixels. The best representation for a set of data that fills the space uniformly is a bunch of more or less uniformly-distributed small values, which is what you're seeing.
In comparison, try limiting your input data to a subset of the gaussian blobs. Let's put them all in a diagonal stripe of pixels, for instance:
import climate
import matplotlib.pyplot as plt
import numpy as np
import skimage.filters
import theanets
climate.enable_default_logging()
def gen_inputs(x=28, sigma=2.0):
return np.array([
skimage.filters.gaussian_filter(i, sigma).astype('f')
for i in (np.eye(x*x)*2).reshape(x, x, x*x).transpose()
]).reshape(x*x, x*x)[10::27]
data = gen_inputs()
plt.imshow(data.mean(axis=0).reshape((28, 28)))
plt.show()
net = theanets.Autoencoder([784, 9, 784])
net.train(data, weight_l2=0.0001)
w = net.find('hid1', 'w').get_value().T
img = np.zeros((3 * 28, 3 * 28), float)
for r in range(3):
for c in range(3):
img[r*28:(r+1)*28, c*28:(c+1)*28] = w[r*3+c].reshape((28, 28))
plt.imshow(img)
plt.show()
Here's a plot of the mean data (the first imshow in the code):
And here's a plot of the learned features (the second imshow):
The features are responding to the mean of the entire dataset!
If you want to get the network to learn more "individual" features, it can be pretty tricky. Things you can play with:
Increase the number of hidden units, as suggested in the comments.
Try training with an L1 penalty on the hidden-unit activations (hidden_l1=0.5).
Try forcing the weights themselves to be sparse (weight_l1=0.5).
Good luck! | Autoencoder doesn't work (can't learn features)
(Primary author of theanets here.) As hinted in the comments on your question, this is actually a difficult learning problem! The network is, as indicated by the optimized loss value during training, |
50,530 | Why non-negative regression? | This is adding prior knowledge to your model, in a somewhat Bayesian sense. Per the comments, not everyone would call this "regularization". I would, because it constrains the parameters to not vary wildly all over the place, just in a different way than (say) L1 or L2 regularization.
When does this make sense? Anytime we are sure that a regression coefficient should be positive, but because of the uncertainties involved, an unconstrained model might fit a negative one. For example:
As seanv507 writes, if we want to estimate item prices from total costs and item amounts, it makes sense to impose positive prices.
I do retail sales forecasting. We typically constrain promotion effects to be positive, and price effects to be negative.
Similarly, you may have cannibalization effects, which are typically very noisy and therefore prime candidates for any kind of regularization. If there is a promotion on Coca-Cola, we would expect Pepsi sales to go down, so in a model for Pepsi sales that includes promotions on Coca-Cola, we would constrain these coefficients to be negative (or at least non-positive).
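To make the constrained fit concrete, here is a minimal simulated sketch using the nnls package (any non-negative least squares routine would do):
library(nnls)
set.seed(1)
X <- cbind(1, matrix(rnorm(200), nrow = 100))   # intercept plus two predictors
beta <- c(1, 2, 0.1)                            # true coefficients, all non-negative
y <- drop(X %*% beta + rnorm(100))
rbind(ols  = coef(lm(y ~ X - 1)),               # unconstrained: a weak effect can come out negative
      nnls = nnls(X, y)$x)                      # constrained to be >= 0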
Yes, often you can make a case that the opposite sign of the coefficient could be possible. However, usually this involves rather tortuous logic, and you often find that constraining parameters makes for better models and predictions in the vast majority of cases, even if you will always find isolated cases that would have profited from unconstrained parameters. | Why non-negative regression? | This is adding prior knowledge to your model, in a somewhat Bayesian sense. Per the comments, not everyone would call this "regularization". I would, because it constrains the parameters to not vary w | Why non-negative regression?
This is adding prior knowledge to your model, in a somewhat Bayesian sense. Per the comments, not everyone would call this "regularization". I would, because it constrains the parameters to not vary wildly all over the place, just in a different way than (say) L1 or L2 regularization.
When does this make sense? Anytime we are sure that a regression coefficient should be positive, but because of the uncertainties involved, an unconstrained model might fit a negative one. For example:
As seanv507 writes, if we want to estimate item prices from total costs and item amounts, it makes sense to impose positive prices.
I do retail sales forecasting. We typically constrain promotion effects to be positive, and price effects to be negative.
Similarly, you may have cannibalization effects, which are typically very noisy and therefore prime candidates for any kind of regularization. If there is a promotion on Coca-Cola, we would expect Pepsi sales to go down, so in a model for Pepsi sales that includes promotions on Coca-Cola, we would constrain these coefficients to be negative (or at least non-positive).
Yes, often you can make a case that the opposite sign of the coefficient could be possible. However, usually this involves rather tortuous logic, and you often find that constraining parameters makes for better models and predictions in the vast majority of cases, even if you will always find isolated cases that would have profited from unconstrained parameters. | Why non-negative regression?
This is adding prior knowledge to your model, in a somewhat Bayesian sense. Per the comments, not everyone would call this "regularization". I would, because it constrains the parameters to not vary w |
50,531 | How to deal with missing coefficients while bootstrapping regressions | One method that can be used (with caution!!) is a stratified bootstrap. That is, suppose we have 20 subjects in group 1 and 20 in group 2. Then we can resample our data, conditional on these sample sizes (i.e. we resample 20 from group 1 and 20 from group 2). Because of this, we are now ensured that the estimator of the difference will be defined in each bootstrap sample.
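A minimal sketch of this group-wise resampling for a difference in means, with made-up data:
set.seed(1)
g1 <- rnorm(20, mean = 0)             # 20 subjects in group 1
g2 <- rnorm(20, mean = 1)             # 20 subjects in group 2
boot_diff <- replicate(2000, mean(sample(g2, replace = TRUE)) -
                             mean(sample(g1, replace = TRUE)))
quantile(boot_diff, c(0.025, 0.975))  # percentile interval for the group difference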
In terms of the caution, you need to realize that you potentially could be performing bootstrapping on very small subsamples! A trivial example: suppose we stratified by x, but x was continuous. Then each sample would be its own stratum and our estimated variance would be 0. Clearly a problem.
In your case, I'm sure you have more than 1 observation per level, but you still need to be wary of the results if the number of observations per level is very small. If that's the case, I would certainly consider Jeremy's suggestion of trying to combine levels that may be very similar to each other in nature. | How to deal with missing coefficients while bootstrapping regressions | One method that can be used (with caution!!) is a stratified bootstrap. That is, suppose we have 20 subjects in group 1 and 20 in group 2. Then we can resample our data, conditional on these sample si | How to deal with missing coefficients while bootstrapping regressions
One method that can be used (with caution!!) is a stratified bootstrap. That is, suppose we have 20 subjects in group 1 and 20 in group 2. Then we can resample our data, conditional on these sample sizes (i.e. we resample 20 from group 1 and 20 from group 2). Because of this, we are now ensured that the estimator of the difference will be defined in each bootstrap sample.
In terms of the caution, you need to realize that you potentially could be performing bootstrapping on very small subsamples! A trivial example: suppose we stratified by x, but x was continuous. Then each sample would be its own stratum and our estimated variance would be 0. Clearly a problem.
In your case, I'm sure you have more than 1 observation per level, but you still need to be wary of the results if the number of observations per level is very small. If that's the case, I would certainly consider Jeremy's suggestion of trying to combine levels that may be very similar to each other in nature. | How to deal with missing coefficients while bootstrapping regressions
One method that can be used (with caution!!) is a stratified bootstrap. That is, suppose we have 20 subjects in group 1 and 20 in group 2. Then we can resample our data, conditional on these sample si |
50,532 | How are subjects with only one observation used in fixed effect models? | In a frequentist approach this fixed effect model has an unidentifiability problem (https://en.wikipedia.org/wiki/Identifiability) for $\beta_1$ and $\alpha_i$ for those subjects that have only one measurement. These subjects do not have a slope and do not contribute to the estimation of $\beta_2$.
In a Bayesian approach whether and how the subjects with one measurement contribute to the estimation of $\beta_2$ depends on the specifications of the priors. | How are subjects with only one observation used in fixed effect models? | In a frequentist approach this fixed effect model has an unindentifiability problem (https://en.wikipedia.org/wiki/Identifiability) for $\beta_1$ and $\alpha_i$ for those subjects that have only one m | How are subjects with only one observation used in fixed effect models?
In a frequentist approach this fixed effect model has an unidentifiability problem (https://en.wikipedia.org/wiki/Identifiability) for $\beta_1$ and $\alpha_i$ for those subjects that have only one measurement. These subjects do not have a slope and do not contribute to the estimation of $\beta_2$.
In a Bayesian approach whether and how the subjects with one measurement contribute to the estimation of $\beta_2$ depends on the specifications of the priors. | How are subjects with only one observation used in fixed effect models?
In a frequentist approach this fixed effect model has an unindentifiability problem (https://en.wikipedia.org/wiki/Identifiability) for $\beta_1$ and $\alpha_i$ for those subjects that have only one m |
50,533 | Post propensity score matching analysis | Stuart et al. mention the R package Zelig, which seamlessly works for post-matching analysis after matching with MatchIt. It is mentioned quite often that you should NOT simply compare the means after matching, although this is quite common practice. You can make optimal use of the matching process by using regression models. When matching has been performed, further (parametric) statistical analyses need to be
performed. Matching is just a first step. It can be seen as a non-parametric method for pre-processing the data in order to create a quasi-randomized study and thus decrease or eliminate the dependence of the outcome variables on the confounding covariates. | Post propensity score matching analysis | Stuart et al. mention the R package Zelig, which seemlessly works for post-matching analysis after matching with MatchIt. It is mentioned quite often that you should NOT simply compare the means after | Post propensity score matching analysis
Stuart et al. mention the R package Zelig, which seamlessly works for post-matching analysis after matching with MatchIt. It is mentioned quite often that you should NOT simply compare the means after matching, although this is quite common practice. You can make optimal use of the matching process by using regression models. When matching has been performed, further (parametric) statistical analyses need to be
performed. Matching is just a first step. It can be seen as a non-parametric method for pre-processing the data in order to create a quasi-randomized study and thus decrease or eliminate the dependence of the outcome variables on the confounding covariates. | Post propensity score matching analysis
Stuart et al. mention the R package Zelig, which seemlessly works for post-matching analysis after matching with MatchIt. It is mentioned quite often that you should NOT simply compare the means after |
50,534 | Difference between Bag of words and Vector space model | I find the existing answer very misleading.
The Word vector (aka word embedding) is a concept coming from probabilistic language models (see [1]). It describes contextual similarity between the words in the language model and came into existence several decades after the VSM was proposed and successfully applied for text categorization, document summarization and information retrieval.
In the Vector Space Model (see [2]), it is not the word/term that is represented as a vector in an n-dimensional space but the document. The VSM is constructed to have a separate dimension for each distinct unigram word/term in the collection of terms aggregated from all BOWs in the document collection. In other words, in the VSM distinct terms become dimensions, not word vectors. Documents are vectors in the VSM, with coordinates given by the associated term weights along each corresponding dimension.
Bag-of-words (BOW), as an approach to document representation in IR, does not allow multiple instances of the same word, but rather represents an unordered list of distinct words associated with their frequencies in the document (see [3]).
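A tiny sketch of the resulting document-term matrix, in which each document is a row vector over the distinct terms (toy documents, base R only):
docs <- c("the cat sat on the mat", "the dog sat")
toks <- strsplit(docs, " ")
vocab <- sort(unique(unlist(toks)))
dtm <- t(sapply(toks, function(w) table(factor(w, levels = vocab))))
dtm   # rows are documents located in the term space; entries are term frequencies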
[1] Y. Bengio, R. Ducharme, P. Vincent, C. Janvin, A Neural Probabilistic Language Model, J. Mach. Learn. Res. 3 (2003) 1137–1155. doi:10.1162/153244303322533223.
[2] G. Salton, A. Wong, C. Yang S., A vector space model for automatic indexing, Commun. ACM. 18 (1975) 613–620. doi:10.1145/361219.361220.
[3] G. SALTON, C.S. YANG, ON THE SPECIFICATION OF TERM VALUES IN AUTOMATIC INDEXING, J. Doc. 29 (1973) 351–372. doi:10.1108/eb026562. | Difference between Bag of words and Vector space model | I find existing answer very misleading.
The Word vector (aka word embedding) is concept coming from probabilistic language models (see [1]). It describes contextual similarity between the words in the | Difference between Bag of words and Vector space model
I find the existing answer very misleading.
The Word vector (aka word embedding) is a concept coming from probabilistic language models (see [1]). It describes contextual similarity between the words in the language model and came into existence several decades after the VSM was proposed and successfully applied for text categorization, document summarization and information retrieval.
In the Vector Space Model (see [2]), it is not the word/term that is represented as a vector in an n-dimensional space but the document. The VSM is constructed to have a separate dimension for each distinct unigram word/term in the collection of terms aggregated from all BOWs in the document collection. In other words, in the VSM distinct terms become dimensions, not word vectors. Documents are vectors in the VSM, with coordinates given by the associated term weights along each corresponding dimension.
Bag-of-words (BOW), as an approach to document representation in IR, does not allow multiple instances of the same word, but rather represents an unordered list of distinct words associated with their frequencies in the document (see [3]).
[1] Y. Bengio, R. Ducharme, P. Vincent, C. Janvin, A Neural Probabilistic Language Model, J. Mach. Learn. Res. 3 (2003) 1137–1155. doi:10.1162/153244303322533223.
[2] G. Salton, A. Wong, C. Yang S., A vector space model for automatic indexing, Commun. ACM. 18 (1975) 613–620. doi:10.1145/361219.361220.
[3] G. SALTON, C.S. YANG, ON THE SPECIFICATION OF TERM VALUES IN AUTOMATIC INDEXING, J. Doc. 29 (1973) 351–372. doi:10.1108/eb026562. | Difference between Bag of words and Vector space model
I find existing answer very misleading.
The Word vector (aka word embedding) is concept coming from probabilistic language models (see [1]). It describes contextual similarity between the words in the |
50,535 | Difference between Bag of words and Vector space model | Note that the word "bag" means multiset, i.e., it allows multiple instances for each word. Thus:
Bag of words indicates the count of each word in the document. This simple model is used, for example, in naive Bayes
Word vector generalizes the idea of bag of words by assigning a ranking to each word in the document: often the occurrence count, but it can also be another ranking, such as TF-IDF
Note that each row in a Document Term Matrix (DTM) corresponds to a word vector. | Difference between Bag of words and Vector space model | Note that the word "bag" means multiset, i.e., it allows multiple instances for each word. Thus:
Bag of words indicates the count of each word in the document. This simple model is used, for example, | Difference between Bag of words and Vector space model
Note that the word "bag" means multiset, i.e., it allows multiple instances for each word. Thus:
Bag of words indicates the count of each word in the document. This simple model is used, for example, in naive Bayes
Word vector generalizes the idea of bag of words by assigning a ranking to each word in the document: often the occurrence count, but it can also be another ranking, such as TF-IDF
Note that each row in a Document Term Matrix (DTM) corresponds to a word vector. | Difference between Bag of words and Vector space model
Note that the word "bag" means multiset, i.e., it allows multiple instances for each word. Thus:
Bag of words indicates the count of each word in the document. This simple model is used, for example, |
50,536 | Simple way for histograms classification | You can use nearest neighbor classification, with an appropriate distance metric. For example histogram intersection distance, $\chi^2$ distance, F-divergence, jensen-shannon divergence, or any other of the divergence measures you like. | Simple way for histograms classification | You can use nearest neighbor classification, with an appropriate distance metric. For example histogram intersection distance, $\chi^2$ distance, F-divergence, jensen-shannon divergence, or any other | Simple way for histograms classification
You can use nearest neighbor classification, with an appropriate distance metric. For example histogram intersection distance, $\chi^2$ distance, F-divergence, jensen-shannon divergence, or any other of the divergence measures you like. | Simple way for histograms classification
You can use nearest neighbor classification, with an appropriate distance metric. For example histogram intersection distance, $\chi^2$ distance, F-divergence, jensen-shannon divergence, or any other |
50,537 | Denoising Autoencoders weights at test time | Yes, assuming the noise has the same nature (ie. setting each input to 0 with some probability p), the same reasoning applies and you should increase the weights at test time. Conversely, you could reduce them during training, which is not exactly equivalent because of the non-linearities, but in practice seems to behave well too.
However, if the noise is just Gaussian noise added to the inputs, then you should not change the weights at test time since Gaussian noise does not affect the scale of the inputs. | Denoising Autoencoders weights at test time | Yes, assuming the noise has the same nature (ie. setting each input to 0 with some probability p), the same reasoning applies and you should increase the weights at test time. Conversely, you could re | Denoising Autoencoders weights at test time
Yes, assuming the noise has the same nature (ie. setting each input to 0 with some probability p), the same reasoning applies and you should increase the weights at test time. Conversely, you could reduce them during training, which is not exactly equivalent because of the non-linearities, but in practice seems to behave well too.
However, if the noise is just Gaussian noise added to the inputs, then you should not change the weights at test time since Gaussian noise does not affect the scale of the inputs. | Denoising Autoencoders weights at test time
Yes, assuming the noise has the same nature (ie. setting each input to 0 with some probability p), the same reasoning applies and you should increase the weights at test time. Conversely, you could re |
50,538 | What will be the estimator for these parameters | I need to find an estimator for the parameters $(y_0,σ^2_η)$ from the
observations $y(t)$, in order to get an estimator for the parameter
$z_d$.
In your formulation, $$y(t) = y_0 + \eta(t)$$with $\eta(t)\stackrel{\text{iid}}{\sim}N(0,\sigma^2_\eta)$, i.e.$$y(t) \stackrel{\text{iid}}{\sim}N(y_0,\sigma^2_\eta)\qquad t=1,\ldots,T$$Therefore, if you are only interested in estimating $(y_0,σ^2_η)$, this is a standard normal model with MLEs$$\hat y_0=\frac{1}{T}\sum_{t=1}^T y(t)\quad \hat{σ^2_η}=\frac{1}{T}\sum_{t=1}^T (y(t)-\hat y_0)^2$$
If you assume further that$$y_0=z^d$$then$$\hat z=\hat{y_0}^{1/d}$$since the MLE of the transform is the transform of the MLE (assuming $\hat y_0>0$).
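A quick simulation check of these estimators (a sketch with made-up values, taking $d$ as known):
set.seed(1)
d <- 2; z <- 1.5; sigma_eta <- 0.3
y <- z^d + rnorm(200, sd = sigma_eta)       # simulated y(t), t = 1, ..., T
y0_hat <- mean(y)                           # MLE of y_0
s2_hat <- mean((y - y0_hat)^2)              # MLE of sigma^2_eta (divides by T, not T - 1)
c(y0_hat = y0_hat, s2_hat = s2_hat, z_hat = y0_hat^(1/d))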
As I mentioned in a comment, the information that $z$ is a generalised Gamma variate is not useful to estimate $z$.
Update
Now, if repeated observations are available, with $y_0(t)=z(t)^d$, and $y_0(t)\sim \text{Ga}(kn,\lambda)$, this item of information does not contain anything about $d$. Once $y_0$ is observed or estimated, you cannot derive an estimator of $d$ because the assumption for the Gamma distribution is on $y_0(t)$, not $z(t)$. The parameter $d$ is not identifiable for this experiment. | What will be the estimator for these parameters | I need to find an estimator for the parameters $(y_0,σ^2_η)$ from the
observations $y(t)$,in order to get an estimator for the parameter
$z_d$.
In your formulation, $$y(t) = y_0 + \eta(t)$$with $ | What will be the estimator for these parameters
I need to find an estimator for the parameters $(y_0,σ^2_η)$ from the
observations $y(t)$,in order to get an estimator for the parameter
$z_d$.
In your formulation, $$y(t) = y_0 + \eta(t)$$with $\eta(t)\stackrel{\text{iid}}{\sim}N(0,\sigma^2_\eta)$, i.e.$$y(t) \stackrel{\text{iid}}{\sim}N(y_0,\sigma^2_\eta)\qquad t=1,\ldots,T$$Therefore, if you are only interested in estimating $(y_0,σ^2_η)$, this is a standard normal model with MLEs$$\hat y_0=\frac{1}{T}\sum_{t=1}^T y(t)\quad \hat{σ^2_η}=\frac{1}{T}\sum_{t=1}^T (y(t)-\hat y_0)^2$$
If you assume further that$$y_0=z^d$$then$$\hat z=\hat{y_0}^{1/d}$$since the MLE of the transform is the transform of the MLE (assuming $\hat y_0>0$).
As I mentioned in a comment, the information that $z$ is a generalised Gamma variate is not useful to estimate $z$.
Update
Now, if repeated observations are available, with $y_0(t)=z(t)^d$, and $y_0(t)\sim \text{Ga}(kn,\lambda)$, this item of information does not contain anything about $d$. Once $y_0$ is observed or estimated, you cannot derive an estimator of $d$ because the assumption for the Gamma distribution is on $y_0(t)$, not $z(t)$. The parameter $d$ is not identifiable for this experiment. | What will be the estimator for these parameters
I need to find an estimator for the parameters $(y_0,σ^2_η)$ from the
observations $y(t)$,in order to get an estimator for the parameter
$z_d$.
In your formulation, $$y(t) = y_0 + \eta(t)$$with $ |
50,539 | Estimating robust standard errors in panel data regressions | 1) Given that you have specified "id" in the regression (I guess individuals or some other unit you follow over time), the cluster="group" standard errors are clustered at the individual level. This makes sense given that a person's error today may be correlated with her error of yesterday. For more information see page 14 of these notes.
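For instance, with the Grunfeld data that ships with plm, group-clustered (robust) standard errors for a within model can be obtained as follows (a sketch):
library(plm)
library(lmtest)
data("Grunfeld", package = "plm")
fe <- plm(inv ~ value + capital, data = Grunfeld,
          index = c("firm", "year"), model = "within")
coeftest(fe, vcov = vcovHC(fe, type = "HC0", cluster = "group"))   # SEs clustered by firm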
2) The default is to have individual effects in the model, which would be equivalent to having a dummy for $N-1$ individuals. If you specify the twoways option, then the model will also include $T-1$ time dummies in order to estimate both individual and time fixed effects (see p. 12, Croissant and Millo (2008) "Panel Data Econometrics in R: The plm Package", link). | Estimating robust standard errors in panel data regressions | 1) Given that you have specified "id" in the regression (I guess individuals or some other unit you follow over time), the cluster="group" standard errors are clustered at the individual level. This m
1) Given that you have specified "id" in the regression (I guess individuals or some other unit you follow over time), the cluster="group" standard errors are clustered at the individual level. This makes sense given that a person's error today may be correlated with her error of yesterday. For more information see page 14 of these notes.
2) The default is to have individual effects in the model, which would be equivalent to having a dummy for $N-1$ individuals. If you specify the twoways option, then the model will also include $T-1$ time dummies in order to estimate both individual and time fixed effects (see p. 12, Croissant and Millo (2008) "Panel Data Econometrics in R: The plm Package", link).
1) Given that you have specified "id" in the regression (I guess individuals or some other unit you follow over time), the cluster="group" standard errors are clustered at the individual level. This m |
50,540 | Is a logit model with a pseudo-R^2 of less than 0.5 a worse model than a coin toss? | (Percent correct)/(Total count) is usually termed the Correct Classification Rate. This is not one of the pseudo-R-squared indicators, and it's generally considered an inferior way of assessing model fit because it simplifies so much; it doesn't take into account the differences in predicted probability from observation to observation.
The pseudo-R-squared can be calculated in several different ways, as noted by @A. Webb and @kjetil b halvorsen, but by any of those methods, a result not just of 0.50 but even of, say, 0.03 will, in a sample of a few hundred or a thousand, reflect a model that is much more informative and/or a better guide to decision-making than a simple coin flip. This can be seen concretely by comparing the two distributions of predicted probabilities the model generates: one for observations with a "1" on the dependent variable and one for those with a "0." The predicted probabilities for the "1"s will be noticeably shifted right relative to those for the "0"s. An ROC curve, too, will mark out noticeably more area for this model than it would for a null model based on coin flips. | Is a logit model with a pseudo-R^2 of less than 0.5 a worse model than a coin toss? | (Percent correct)/(Total count) is usually termed the Correct Classification Rate. This is not one of the pseudo-R-squared indicators, and it's generally considered an inferior way of assessing model | Is a logit model with a pseudo-R^2 of less than 0.5 a worse model than a coin toss?
(Percent correct)/(Total count) is usually termed the Correct Classification Rate. This is not one of the pseudo-R-squared indicators, and it's generally considered an inferior way of assessing model fit because it simplifies so much; it doesn't take into account the differences in predicted probability from observation to observation.
The pseudo-R-squared can be calculated in several different ways, as noted by @A. Webb and @kjetil b halvorsen, but by any of those methods, a result not just of 0.50 but even of, say, 0.03 will, in a sample of a few hundred or a thousand, reflect a model that is much more informative and/or a better guide to decision-making than a simple coin flip. This can be seen concretely by comparing the two distributions of predicted probabilities the model generates: one for observations with a "1" on the dependent variable and one for those with a "0." The predicted probabilities for the "1"s will be noticeably shifted right relative to those for the "0"s. An ROC curve, too, will mark out noticeably more area for this model than it would for a null model based on coin flips. | Is a logit model with a pseudo-R^2 of less than 0.5 a worse model than a coin toss?
(Percent correct)/(Total count) is usually termed the Correct Classification Rate. This is not one of the pseudo-R-squared indicators, and it's generally considered an inferior way of assessing model |
50,541 | Elastic net: dealing with wide data with outliers | Couple of things I just thought I will mention. It is hard to mention specifics without actually looking at the results but hope this is helpful. Most of these things I am sure you already know but in case you missed something
1) Regarding the 10% vs. 90% issue: check whether you used glmnet or cv.glmnet. You should be using cv.glmnet. The first one does not tune the penalty parameter by cross-validation; it fits the whole path on the entire data set. 99% seems like an overfit estimate. You may not have made a mistake, but there is no harm in double-checking.
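A minimal sketch of tuning the elastic-net penalty by cross-validation with cv.glmnet, on simulated wide data:
library(glmnet)
set.seed(1)
n <- 60; p <- 500
X <- matrix(rnorm(n * p), n, p)
y <- drop(X[, 1:5] %*% rep(1, 5) + rnorm(n))
cvfit <- cv.glmnet(X, y, alpha = 0.5, nfolds = 10)   # penalty chosen by 10-fold CV
cvfit$lambda.min                                     # the cross-validated choice of lambda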
2) Since p >> N every point is technically an outlier (curse of dimensionality), so I am not quite sure what you mean. Nevertheless, there is this technique called the Bo-Lasso or the bootstrapped lasso. What it does, in essence, is run the sub-sample experiments as you have tried and retain only those predictors which appear in more than 80% of the LASSO fits. Needless to say it is slow, but the predictors it selects have some nice asymptotic properties.
see http://www.di.ens.fr/~fbach/fbach_bolasso_icml2008.pdf
3) As far as influential points, again, I am a little confused. What is an influential point? Most techniques like the LASSO are using L-1 penalties to generate the sparsity so as a result you have influential PREDICTORS not influential POINTS. On the other hand, if you work with something like Support Vector Machines (SVM) you will get influential POINTS (points of support is essentially I think what you are talking about)
Sorry for not being able to be more specific. I hope these tips help | Elastic net: dealing with wide data with outliers | Couple of things I just thought I will mention. It is hard to mention specifics without actually looking at the results but hope this is helpful. Most of these things I am sure you already know but in | Elastic net: dealing with wide data with outliers
Couple of things I just thought I will mention. It is hard to mention specifics without actually looking at the results but hope this is helpful. Most of these things I am sure you already know but in case you missed something
1) The 10% v/s 90% check if you used glmnet or cv.glmnet. You should be using cv.glmnet. The first one fits the penalty parameter on the entire data set. 99% seems like an overfit estimate. You may not have made a mistake but no harm in specifying.
2) Since p >> N every point is technically an outlier (curse of dimensionality) so I am not quite sure what you mean. Nevertheless, there is this technique called the Bo-Lasso or the bootstrapped lasso. What it does in essence is does the sub-sample experiments as you have tried and retains only those predictors which appear in more than 80% of the LASSO fits. Needless to say it is slow, but the predictors it selects have some nice asymptotic properties
see http://www.di.ens.fr/~fbach/fbach_bolasso_icml2008.pdf
3) As far as influential points, again, I am a little confused. What is an influential point? Most techniques like the LASSO are using L-1 penalties to generate the sparsity so as a result you have influential PREDICTORS not influential POINTS. On the other hand, if you work with something like Support Vector Machines (SVM) you will get influential POINTS (points of support is essentially I think what you are talking about)
Sorry for not being able to be more specific. I hope these tips help | Elastic net: dealing with wide data with outliers
Couple of things I just thought I will mention. It is hard to mention specifics without actually looking at the results but hope this is helpful. Most of these things I am sure you already know but in |
50,542 | Comparing different methods of discrete-time survival analysis | A Cox proportional hazards model with "exact" tie resolution, a.k.a a conditional logistic regression ...
A standard logistic regression with one data point per subject-month, with time represented as a categorical variable
A conditional logistic regression model and a standard binary regression model with the logistic function and with a categorical variable for time are the same thing. You end up with 36 different intercept terms (1 intercept and 35 dummy coefficients), or something similar depending on how you set up the dummy coding.
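A minimal sketch of that person-period logistic fit with a categorical time effect, on toy simulated data (all column names here are hypothetical):
set.seed(1)
pp <- do.call(rbind, lapply(1:300, function(id) {
  t_event <- sample(1:15, 1)                  # month of the event (censoring after month 12)
  months  <- seq_len(min(t_event, 12))
  data.frame(id = id, month = months, x = rnorm(1),
             event = as.integer(months == t_event))
}))
fit <- glm(event ~ factor(month) + x, family = binomial, data = pp)
head(coef(fit))   # one baseline intercept plus dummy coefficients for the months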
It seems to me that I should expect these models all to output similar results. In general, when should I prefer one over the other? (Or are there other types of models that I'm missing?)
It depends on what you want to achieve. If your goal is to say something about the proportional hazards between two observations, or about odds ratios, then the dummy coding approach may be preferable, as you make no assumptions about the intercept. You do, though, need to check the assumption of the link function you use (e.g., is the proportional hazards assumption justified in the Cox model?).
However, you cannot make predictions about the probability of survival in future periods in a model with 36 dummies, as you have no model for the intercept in future periods. This is not the case with a random effects model, where you have a model for the intercept, or with a parametric model for the intercept. You do, though, need to justify your assumptions about the distribution of the random effects or the parametric model you have chosen.
EDIT: I've preliminarily tried fitting some of them in R and have run into various random segfaults/stack-overflows ...
You can also check out the ddhazard function in my package dynamichazard. You can use it to fit a discrete time survival model with a random walk for the intercept and/or the coefficients.
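For readers who want to see the dummy-coded version in practice, here is a minimal R sketch. It assumes a hypothetical person-period data frame pp with one row per subject-month, an event indicator event, a month variable and a covariate x; all of these names are illustrative, not taken from the question.
# discrete-time survival as a binary regression on person-period data
fit <- glm(event ~ factor(month) + x,
           family = binomial(link = "logit"),
           data = pp)
summary(fit)   # one intercept plus 35 month dummies (for 36 periods), plus covariates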
50,543 | Categorical variable coding to compare all levels to all levels | You want all possible pairwise comparisons of levels, but there are much more pairs than there are degrees of freedom in the factor. Say the factor has five levels, then you need 4 parameters to code it, but there are $\binom{5}{2}$ pairs, that is, 10 pairs. So it is impossible to find a coding with one parameter for each comparison.
The solution is to use whatever coding you want, and then compute the 10 pairwise contrasts afterwards, after estimating the model, from the model output. In R, for instance, this could be done in many ways, either "by hand" or with the use of packages like contrast or multcomp.
Below an R example, done "by hand", for confidence intervals of all pairwise comparisons:
xfac <- factor(rep(1:5, each=10))
y <- rnorm(50, mean=c(rep(0, 20), rep(1, 30)), sd=2)
mod <- lm( y ~ 0 + xfac)
# generating a hypothesis contrasts matrix with 10 rows:
# each row is one contrast:
cmat <- matrix(0, 10, 5)
nam <- character(length=10)
row <- 0
for (i in 1:4) for (j in (i+1):5) {
row <- row+1
nam[row] <- paste("x[", i, "]-x[", j, "]",
sep="")
cmat[row, c(i, j)] <- c(1, -1)
}
rownames(cmat) <- nam
# We write a contrast testing function by hand:
my.contrast <- function(mod, cmat) {
co <- coef(mod)
CV <- vcov(mod)
se <- sqrt( diag( cmat %*% CV %*% t(cmat) ))
df <- mod$df.residual
contr <- cmat %*% co
ul <- qt(0.975, df=df)
ci <- cbind(contr-ul*se, contr+ul*se)
ci
}
And then using it gives the result:
> my.contrast(mod, cmat)
[,1] [,2]
x[1]-x[2] -1.946376 1.7921298
x[1]-x[3] -3.044916 0.6935897
x[1]-x[4] -2.136283 1.6022227
x[1]-x[5] -2.301393 1.4371135
x[2]-x[3] -2.967793 0.7707130
x[2]-x[4] -2.059160 1.6793460
x[2]-x[5] -2.224269 1.5142368
x[3]-x[4] -0.960620 2.7778861
x[3]-x[5] -1.125729 2.6127769
x[4]-x[5] -2.034362 1.7041439
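As noted above, the multcomp package can produce the same set of pairwise comparisons. A possible sketch, using a fit with the default treatment coding for simplicity; note that glht's default single-step adjustment gives simultaneous intervals, so they will be somewhat wider than the unadjusted ones computed by hand above.
library(multcomp)
mod2 <- lm(y ~ xfac)                           # same simulated data, default dummy coding
ph   <- glht(mod2, linfct = mcp(xfac = "Tukey"))
confint(ph)                                    # simultaneous 95% CIs for all 10 pairs
summary(ph)                                    # adjusted p-values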
50,544 | Categorical variable coding to compare all levels to all levels | In the case of ANOVA-style regression, when you include a categorical variable, one of its levels (the one you choose as the reference) is represented by the intercept. You can verify this: the intercept should be the mean of the outcome for the level that was "left out" of your model.
The logic for how the remaining levels are represented in the regression is as follows.
Dummy(b) = 1 or 0
Dummy(c) = 1 or 0
Dummy(d) = 1 or 0
then
Dummy(a) = b0, since all the other dummies are zero for level a.
Therefore y = b0 + b1 * b + b2 * c + b3 * d;
if all the dummies are zero then y = b0, where b0 is the intercept and equals the mean of the reference level.
Hope this helps.
J
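A tiny R check of the claim about the intercept, using a made-up four-level factor (all values simulated for illustration); the intercept of the dummy-coded fit equals the mean of the reference ("left out") level.
set.seed(1)
g <- factor(rep(c("a", "b", "c", "d"), each = 25))
y <- rnorm(100, mean = as.numeric(g))   # a different mean for each level
fit <- lm(y ~ g)                        # level "a" becomes the reference
coef(fit)[1]                            # intercept ...
mean(y[g == "a"])                       # ... equals the mean of the omitted level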
50,545 | Sensitivity analysis of machine learning techniques | You can use partial dependence plots, which will give you an estimate of the sensitivity of the predicted output with regards to each of the independent variables.
See chapter 10.13.2 of the Elements of Statistical Learning by Hastie, Tibshirani, Friedman.
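A rough R sketch of a one-variable partial dependence curve for an arbitrary fitted model fit, a predictor x1 and a data frame dat (all placeholder names); packages such as pdp automate this.
grid <- seq(min(dat$x1), max(dat$x1), length.out = 25)
pd <- sapply(grid, function(v) {
  tmp <- dat
  tmp$x1 <- v                         # hold x1 fixed at the grid value for every row
  mean(predict(fit, newdata = tmp))   # average the predictions over the observed data
})
plot(grid, pd, type = "l", xlab = "x1", ylab = "partial dependence")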
50,546 | Modelling flight delays with negative values | First, I agree that this is not count data.
If there are many flights that are canceled, then you might think of it as time to event data and look into survival analysis methods. This might depend on where and when you are: More flights are cancelled from Chicago in winter than from Phoenix in May.
Other than that, you might try quantile regression; I suggest this for two reasons: First, you might be particularly interested in long delays. If you are interested in this from a passenger POV, then a short delay in departure might not matter at all - these are often made up during the flight, and I think most passengers are more concerned with arrival time than departure time. But if you are the airport manager, then even a short delay might be a problem with scheduling runways and so on. Quantile regression lets you model the quantiles. Second, quantile regression makes no assumptions about the distribution of the residuals.
For the early departures, I think you have to figure out whether an early departure is better or worse or equivalent to an on-time departure.
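A hedged R sketch of the quantile regression idea with the quantreg package, assuming a hypothetical data frame delays with a delay column in minutes (negative values for early departures) and a few illustrative predictors.
library(quantreg)
fit_median <- rq(delay ~ carrier + hour + weather, tau = 0.5, data = delays)
fit_q90    <- rq(delay ~ carrier + hour + weather, tau = 0.9, data = delays)  # long delays
summary(fit_q90)   # no distributional assumption on the residuals is required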
50,547 | Model Selection and RFE using caret | You are fitting a @#$^ ton of models, even with the adaptive resampling. You can do it, but the tuning will take a lot of time regardless.
You would probably be better off fitting a model with built-in feature selection instead of using a feature selection wrapper.
sbf could work, and it probably wouldn't be hard to try, but there is still model tuning.
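A sketch of the built-in feature selection route in caret, assuming a predictor matrix x and outcome y (placeholder names); the lasso penalty in glmnet does the selection while train tunes the penalty.
library(caret)
ctrl <- trainControl(method = "cv", number = 5)
fit  <- train(x = x, y = y, method = "glmnet",
              trControl = ctrl, tuneLength = 10)
fit$bestTune
coef(fit$finalModel, s = fit$bestTune$lambda)   # zeroed coefficients are dropped predictors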
50,548 | non-classical measurement error in a binary outcome model | Citing from the survey article by Chen et al. (2011) "Nonlinear Models of Measurement Errors", Journal of Economic Literature:
The approximate bias depends on the derivatives of the regression function with respect to the mismeasured regressor and the curvature of the distribution functions of the true regressor and the mismeasured regressors. Locally the conditional mean function of the dependent variable given the mismeasured regressors is smoother than the conditional mean function given the true regressors, in analog to the attenuation bias on the regression coefficient in linear models.
This was a result from Chesher (1991). Also Carroll et al (1984) derive the bias in binary regression models with measurement error. The survey article discusses several types of measurement error in nonlinear models and potential ways around it.
For a practical implementation in Stata have a look at the Stata site titled "Stata software for generalized linear measurement error models" [link]. If you have an idea about the variance of the measurement error then things seem to become a little less complicated.
50,549 | "Only if" and "if" direction in Kolmogorov's Existence Theorem | I contacted the author: it was a typo, and he's added it to http://probability.ca/jeff/ftpdir/errata2.pdf.
50,550 | Least-square fit with uneven distribution of data | Assuming that some data is redundant simply because they are similar in value (on the x-axis) then your approach is correct (if ignoring issues of outliers). The technique you are looking for is called Kernel density estimation. The kernel bandwidth should be chosen based on the context of the data. If x-values within a certain distance are to be considered redundant then the bandwidth of the kernel should be wide enough to include any two values within that distance. Then you can use the inverse of the density estimation as the weights used in the least-squared regression.
Two-dimensional kernels can be used if two points can be considered non-redundant even though they have the same x-value (e.g. a single point in the top left of your graph would have the same weight as the values in the top right).
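A small R sketch of the inverse-density weighting for the one-dimensional case (the bandwidth is left at density()'s default here, but it should be chosen as discussed above).
dens <- density(x)                              # kernel density estimate of the x values
w    <- 1 / approx(dens$x, dens$y, xout = x)$y  # sparse regions get large weights
fit  <- lm(y ~ x, weights = w)
summary(fit)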
50,551 | Least-square fit with uneven distribution of data | For this problem, I found this article to be of great interest.
For what I understood, there are usually two ways to deal with such a non-well-distributed dataset:
either we resample the dataset, usually by deleting some data to transform the dataset into a well distributed one,
or we weight the data points, for example here by accounting for the density of points along x axis.
The second approach is the one they propose in the article. One advantage compared to the first method is that we do not delete data points so we can say in a way that we are more representative. They implement the method in a python package called denseweight. It is based on a kernel density estimation which allows you to get the weight for each point.
Then you can use a weighted linear regression with these weights, for example scipy.optimize.curve_fit with sigma parameter to account for the uncertainty of each data point. Setting sigma=1/weights will make isolated points more certain than dense groups of points according to their weight.
Here is an example with my dataset (x, y), either using denseweight module, or directly trying to use gaussian kernel density estimation:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from denseweight import DenseWeight
from scipy.stats import gaussian_kde
f = lambda x, a, b : a * x + b
fig, ax = plt.subplots()
ax.plot(x, y, '+')
# Standard linear regression:
popt, pcov = curve_fit(f, x, y)
xfit = np.array(ax.get_xlim())
yfit = popt[0] * xfit + popt[1]
ax.plot(xfit, yfit, '-', label='standard linear fit')
# Weighted linear regression with the denseweight module:
dw = DenseWeight(alpha=1)
weights = dw.fit(x)
popt, pcov = curve_fit(f, x, y, sigma=1/weights)
yfit = popt[0] * xfit + popt[1]
ax.plot(xfit, yfit, '-', label='weighted linear fit with denseweight module')
# Weighted linear regression with gaussian_kde :
kde = gaussian_kde(x)
popt, pcov = curve_fit(f, x, y, sigma=kde.pdf(x))
yfit = popt[0] * xfit + popt[1]
ax.plot(xfit, yfit, '-k', label='weighted linear fit with gaussian_kde')
ax.legend()
50,552 | Least-square fit with uneven distribution of data | You can use cubic splines. One advantage of splines over standard polynomial regression is that data points' influence is more local. This is due to irregularity at knots.
50,553 | Distribution of "normalised" Gaussian random variables | Not intended as an answer ... but more a comment that is too long for the comment box ...
Updated for OP's change of sample variance to sample standard deviation
To get an idea of the difficulty of the problem ... consider the simplest possible form this question can take, namely:
a sample of size $n = 2$, where ...
$X_1$ and $X_2$ are random draws from a common standard Normal parent.
Then, using the $(n-1)$ version of sample variance, and defining sample standard deviation as the square root of the latter, ... the problem is to find the distribution of:
$$Y = \frac{\sqrt{2}\, X_1}{\big|X_1-X_2\big|} \quad \quad \text{where } X_i \sim N(0,1) $$
This does not appear to be easy at all ... never mind solving for general $n \geq2$.
Monte Carlo simulation of the pdf (for different sample sizes $n$)
What will the general $n$ solution look like? The following diagram constructs the empirical Monte Carlo pdf of your ratio $Y$, for samples of size $n = 2, 3, 5$ and $25$. Each plot compares the:
empirical Monte Carlo pdf [squiggly blue curve] to
a standard Normal pdf (dashed red curve)
There is perhaps some solace in that, by the time $n = 25$, the distribution appears to be well-approximated by a standard Normal (provided the parent is standard Normal). But the small sample sizes are tricky.
If you are interested in general Normal distributions (i.e. with non-zero means), then there is, of course, the additional complication of asymmetry to the plots.
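A short R sketch of the Monte Carlo comparison described above; for $n = 2$ the explicit formula is used, and the same ratio for general $n$ is simply x[1] / sd(x).
set.seed(123)
y2  <- replicate(1e5, { x <- rnorm(2);  sqrt(2) * x[1] / abs(x[1] - x[2]) })
y25 <- replicate(1e5, { x <- rnorm(25); x[1] / sd(x) })
hist(y25, breaks = 100, freq = FALSE, xlim = c(-4, 4))
curve(dnorm(x), add = TRUE, lty = 2)   # close to standard Normal by n = 25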
50,554 | Robust option in Stata: why are the p values computed using a Student distribution? | Robust variance estimators require large samples to be valid. In small samples, they are biased downward, and the normal-distribution-based confidence intervals may have coverage way below nominal coverage rates.
Using a $t_{n-k}$-distribution approximation to be conservative is one possible solution: you hope that this fattens up the tails adequately before you offer up your tests to the journal referee gods. Other ideas are multiplying the squared residuals by $\frac{n}{n-k}$ (or something similar) to inflate them (which Stata also does), or higher order asymptotic expansions, or resampling methods like bootstrapping. Here $n$ is the number of observations and $k$ the number of parameters.
A nice survey of this literature is Imbens and Kolesar (2012). They give a great example where the $t$ approximation goes wrong in a setting where you have a binary treatment with very few treated observations. Using $n=n_T+n_C$ is far too generous.
If your sample size is large, using the $t$ versus the normal won't matter at all.
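A quick R illustration of the last point: the $t_{n-k}$ critical value is noticeably fatter than the normal one only when the residual degrees of freedom are small (k = 2 here purely for illustration).
n <- c(5, 10, 30, 100, 1000)
cbind(n = n, t_crit = qt(0.975, df = n - 2), z_crit = qnorm(0.975))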
50,555 | How to get p-values or confidence intervals for pearson correlation coefficient when the sample is small and potentially non-Gaussian? | In terms of the p-value, the answer can be found in an earlier post. Basically, use the permutation test for n<20. A generally normalizing transformation, such as rankit, will work for larger n's and will be more powerful (Bishara & Hittner, 2012). Of course, if you transform, you're no longer looking at the linear relationship on the original scale.
In terms of the confidence interval, the answer is less clear. There aren't many published large-scale Monte Carlo comparisons. Puth et al. (2014) have some evidence that the Fisher Z can be inadequate with large violations of normality. There was no general solution - even bootstrapping with BCa did not solve it. You might consider either:
a) Spearman CIs with Fisher Z. Instead of using $SE_z=1/\sqrt{n-3}$, use the Fieller et al. (1957) estimate of standard error for the Fisher Z:
$SE_z=1.03/\sqrt{n-3}$
b) Transforming via rankit, and then using the Fisher Z for the CI as usual
References:
Bishara, A. J., & Hittner, J. B. (2012). Testing the significance of a correlation with non-normal data: Comparison of Pearson, Spearman, transformation, and resampling approaches. Psychological Methods, 17, 399-417. doi:10.1037/a0028087
Fieller, E. C., Hartley, H. O., & Pearson, E. S. (1957). Tests for rank correlation coefficients. I. Biometrika, 44, 470-481.
Puth, M., Neuhäuser, M., & Ruxton, G. D. (2014). Effective use of Pearson’s product-moment correlation coefficient. Animal Behaviour, 93, 183-189.
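A small R sketch of option (a), plugging the Fieller et al. standard error quoted above into the usual Fisher-z interval; x and y stand for the two observed variables.
rs <- cor(x, y, method = "spearman")
n  <- length(x)
z  <- atanh(rs)                          # Fisher z transform
se <- 1.03 / sqrt(n - 3)                 # Fieller et al. (1957) standard error
tanh(z + c(-1, 1) * qnorm(0.975) * se)   # back-transformed 95% interval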
50,556 | How to get p-values or confidence intervals for pearson correlation coefficient when the sample is small and potentially non-Gaussian? | You could certainly perform a permutation test (of the null that the two are uncorrelated) in the manner you suggest, but you wouldn't normally "use that distribution to get a confidence interval" for the correlation.
You would instead use that distribution to get a p-value, or an acceptance (/rejection) region.
You could use another resampling approach - the bootstrap - to get an interval for the correlation, but that, too, is justified by large-sample arguments and may not get all that close to the desired coverage in small samples.
However, if your data are strongly non-normal, it's pretty common for the relationship between variables to be curved rather than linear. You might want to consider monotonic rather than linear association. In that case there are several measures of monotonic association (i.e. 'do the variables move up/down together?' type questions) that don't have normality assumptions required to test them.
What are you testing correlation for? (i.e. what's the aim here? What are you trying to figure out?)
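A minimal R sketch of the permutation test mentioned above (testing the null of no association between x and y, which again stand for the observed samples):
obs  <- cor(x, y)
perm <- replicate(10000, cor(x, sample(y)))   # shuffle y to break any association
mean(abs(perm) >= abs(obs))                   # approximate two-sided p-value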
50,557 | Violation of Gauss-Markov assumptions | There've been a couple answers and none of them have touched on what I thought were the most interesting questions asked, the bias and consistency of misspecified linear models. Since it seems pretty clear from the residuals that the model is misspecified with a quadratic term, let's take a look at what happens to our estimates. I'll leave this in terms of a general misspecification instead of solely a quadratic one for funsies.
Suppose we know an oracle who tells us the generating process for the data is $Y=X \beta +Z \alpha +\epsilon$. However, the model we choose to fit is $Y=X \beta+\epsilon$. Take note that the true model contains extra data in the form of Z and extra parameters in the form of the $\alpha $ term. Now, we could think of Z as being data we were unable to or chose not to collect but we could also think of the Z term as being data we collected and chose not to include in our model (like the situation you are in).
Now the typical parameter estimate is $ \hat{ \beta}=(X^{T}X)^{-1}X^{T}Y$. Biasedness relates to the expectation of our estimate and if we want to have consistency, we need that our bias disappears asymptotically. Keeping that in mind, we look at our expectation: $ E [\hat{ \beta}]=(X^{T}X)^{-1}X^{T}E [Y] = \beta +(X^{T}X)^{-1}X^{T}Z \alpha $.
So, if we misspecify and alpha is not a column of 0's, we end up with estimates which will certainly be biased by a factor of $(X^{T}X)^{-1}X^{T}Z \alpha $. Likewise, since consistency depends on asymptotic unbiasedness and our bias term has no reason to disappear asymptotically, we can expect the parameter estimates to fail to be consistent as well.
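A quick R simulation of the bias expression above, with one included regressor x and a single omitted regressor z standing in for the missing term (all values made up for illustration):
set.seed(42)
n <- 500
x <- rnorm(n)
z <- 0.7 * x + rnorm(n)                 # omitted regressor, correlated with x
y <- 1 + 2 * x + 1.5 * z + rnorm(n)     # data generated from the full model
coef(lm(y ~ x))["x"]                    # fitted slope drifts away from the true value 2
X <- cbind(1, x)
2 + (solve(t(X) %*% X) %*% t(X) %*% z * 1.5)[2]   # beta + (X'X)^{-1} X'Z alpha, slope entry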
50,558 | Violation of Gauss-Markov assumptions | It seems that your data does not follow a linear model.
50,559 | Violation of Gauss-Markov assumptions | First note that the data look to have a quadratic as opposed to linear relationship. This casts doubt on the linearity assumption:
$$ E[Y\,|\,einkommen] = \beta_{0}+\beta_{1}einkommen$$
My hint to you is this: assume to the contrary that the model is linear, perform the regression (either hypothetically or literally), then check the other assumptions: do they hold?
Extra hint: $E[u]=0$ does not imply $E[u_i|einkommen_i] = 0$ for all $i$.
50,560 | Violation of Gauss-Markov assumptions | The main issue is with assumption #5, i.e. homoscedasticity. Your error variance seems to change with income. It's higher in the middle than at the ends.
50,561 | How to take advantage of multiples series with the same behaviour for forecasting? | You could use generalized regression model for producing hierarchical forecasts from the individual forecasts.
Here is a link:
https://www.otexts.org/fpp/9/4
50,562 | How to take advantage of multiples series with the same behaviour for forecasting? | There is an article on the inside-R website which uses signal decomposition for training and testing and then forecasting with a neural net and which might be a useful alternative to generalized regression models. R code is supplied.
link here
50,563 | How to take advantage of multiples series with the same behaviour for forecasting? | Try finding the common factors, and model these factors. For instance, you could run PCA, and see if there are a few factors that explain the variance of your 300 series. It is possible that you may find a handful of principal components explain a huge chunk of the variance of all 300 series. In this case you'll model these few factors only. Then you can recover the original series from the factors.
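A sketch of that idea in R, assuming the 300 series are the columns of a numeric matrix Y with one row per time point (Y is a placeholder name):
pca <- prcomp(Y, center = TRUE, scale. = TRUE)
summary(pca)$importance[3, 1:5]   # cumulative variance explained by the first components
k <- 3                            # suppose a handful of components is enough
factors <- pca$x[, 1:k]           # factor series to model and forecast
# after forecasting the k factor series, map back via pca$rotation[, 1:k]
# and undo the scaling and centering to recover forecasts for the original series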
50,564 | Labeling a pool of unlabelled samples iteratively | To answer my own question, the optimal way to pick an initial sample according to information criteria such as entropy is a notorious problem called maximal entropy sampling. This turns out to be NP-hard, so I will probably select a small uniform sample of the data and then try to apply maximal entropy sampling afterwards.
For approximations, this post seems to give some pointers as well (though their proposed sample size is huge and probably not applicable to my scenario).
50,565 | How to compute confidence interval from a confidence distribution | In general you'd do it numerically. In some cases you could do it algebraically, but often there will be no explicit closed form algebraic solution.
I would like to figure out how to find confidence intervals for confidence distributions such as Betas or Gammas and I find it difficult to reach a closed-form solution for $θ$.
The inverse cdf for a number of common distributions (including the beta and gamma) is readily available in software.
However, even if all you can do is evaluate a cdf, decent root-finding software can solve an equation in the cdf for you (there's always at least bisection!).
R, for example, has many inverse cdfs built in (many more in packages), and has root-finding functionality.
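Two ways to do it in R, matching the description above; a Beta(2, 5) confidence distribution is used purely as an illustration:
# 1) built-in inverse cdf (quantile function)
qbeta(c(0.025, 0.975), shape1 = 2, shape2 = 5)
# 2) generic root-finding on the cdf, for when no q* function is available
lower <- uniroot(function(t) pbeta(t, 2, 5) - 0.025, interval = c(0, 1))$root
upper <- uniroot(function(t) pbeta(t, 2, 5) - 0.975, interval = c(0, 1))$root
c(lower, upper)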
50,566 | Intuition for the "information matrix equality" result? | I have always found this result to be counter-intuitive as well. In my case, this is due to a tendency to confuse the score function and the maximum likelihood estimator. (Oh, the shame!). In fact, they sort of pull in opposite directions.
Consider the one parameter case. In large samples, the log likelihood tends towards a quadratic function open below. The second derivative will be large and negative when the quadratic is tight around the maximum and the curvature (reciprocal of the second derivative) will be small. Intuitively, it makes sense that the variance of the scaled MLE should be the reciprocal of the Fisher information (LHS of the identity). And it is.
But the RHS of the identity is not the MLE. On the contrary. The score function depends on both the sample values and the true parameter. If we knew the true value of the parameter and drew many samples, $s(x,\theta)$ would vary about 0. The RHS is the variance of that quantity.
In the absurd case where the sample is of size 1 and the distribution is a normal with known variance, the RHS is the variance of $$\frac{x-\mu}{\sigma^2}$$
If I increase the sample to $n$ IID values, I am looking at the variance of $$\sum \frac{x_i - \mu}{\sigma^2}$$
The bigger my sample, the larger the variance, while of course, the variance of the MLE is shrinking. Meanwhile the Hessian is growing more negative, as I add more terms. Think of the score function as a random walk, whose variance grows with the sample size. At the same time, as the sample size grows, my knowledge of $\theta$ is also growing. But knowledge of $\theta$ is captured in the variance of the MLE, which is the reciprocal of the Information matrix (the LHS).
In the normal case, it is trivial to show that the RHS equals the LHS. From there, it makes sense that the two sides should tend to equality when the samples get large. I still find it amazing that the equality is not asymptotic, but that it holds semper ubique. However, you have seen the proof. I'm just trying to get at the intuition.
Later
Still trying to wrap my head around this, I went back to the actual proof of the identity, hoping that it would clarify why the result works. The proof depends totally on the fact that the score function is the derivative of the log likelihood - and happy cancellations occur when evaluating the integrals. Also relevant is the fact that a density function tends to 0 at both plus and minus infinity. There isn't normally a nice connection between the variance of a random variable and its expected derivative with respect to the parameter of the underlying distributional family. It's all part of the magic of logarithms and $$\frac{d \log(f(x))}{dx} = \frac{f'(x)}{f(x)}$$
I don't think your question has an answer. Or rather, the questions should be "why is the log likelihood a good thing?" and "Why is the Hessian called an information matrix?"; and the answer to those questions is the Information identity.
Regarding @singlepeaked 's comparison to the OLS variance, remember that the OLS estimates of the coefficients when $\sigma$ is known and the error is normal is also the maximum likelihood estimator. The var-cov matrix of those estimates is an example of the Information identity at work. | Intuition for the "information matrix equality" result? | I have always found this result to be counter-intuitive as well. In my case, this is due to a tendency to confuse the score function and the maximum likelihood estimator. (Oh, the shame!). In fact, th | Intuition for the "information matrix equality" result?
I have always found this result to be counter-intuitive as well. In my case, this is due to a tendency to confuse the score function and the maximum likelihood estimator. (Oh, the shame!). In fact, they sort of pull in opposite directions.
Consider the one parameter case. In large samples, the log likelihood tends towards a quadratic function open below. The second derivative will be large and negative when the quadratic is tight around the maximum and the curvature (reciprocal of the second derivative) will be small. Intuitively, it makes sense that the variance of the scaled MLE should be the reciprocal of the Fisher information (LHS of the identity). And it is.
But the RHS of the identity is not the MLE. On the contrary. The score function depends on both the sample values and the true parameter. If we knew the true value of the parameter and drew many samples, $s(x,\theta)$ would vary about 0. The RHS is the variance of that quantity.
In the absurd case where the sample is of size 1 and the distribution is a normal with known variance, the RHS is the variance of $$\frac{x-\mu}{\sigma^2}$$
If I increase the sample to $n$ IID values, I am looking at the variance of $$\sum \frac{x_i - \mu}{\sigma^2}$$
The bigger my sample, the larger the variance, while of course, the variance of the MLE is shrinking. Meanwhile the Hessian is growing more negative, as I add more terms. Think of the score function as a random walk, whose variance grows with the sample size. At the same time, as the sample size grows, my knowledge of $\theta$ is also growing. But knowledge of $\theta$ is captured in the variance of the MLE, which is the reciprocal of the Information matrix (the LHS).
In the normal case, it is trivial to show that the RHS equals the LHS. From there, it makes sense that the two sides should tend to equality when the samples get large. I still find it amazing that the equality is not asymptotic, but that it holds semper ubique. However, you have seen the proof. I'm just trying to get at the intuition.
Later
Still trying to wrap my head around this, I went back to the actual proof of the identity, hoping that it would clarify why the result works. The proof depends totally on the fact that the score function is the derivative of the log likelihood - and happy cancellations occur when evaluating the integrals. Also relevant is the fact that a density function tends to 0 at both plus and minus infinity. There isn't normally a nice connection between the variance of a random variable and its expected derivative with respect to the parameter of the underlying distributional family. It's all part of the magic of logarithms and $$\frac{d \log(f(x))}{dx} = \frac{f'(x)}{f(x)}$$
I don't think your question has an answer. Or rather, the questions should be "why is the log likelihood a good thing?" and "Why is the Hessian called an information matrix?"; and the answer to those questions is the Information identity.
Regarding @singlepeaked 's comparison to the OLS variance, remember that the OLS estimates of the coefficients when $\sigma$ is known and the error is normal is also the maximum likelihood estimator. The var-cov matrix of those estimates is an example of the Information identity at work. | Intuition for the "information matrix equality" result?
I have always found this result to be counter-intuitive as well. In my case, this is due to a tendency to confuse the score function and the maximum likelihood estimator. (Oh, the shame!). In fact, th |
50,567 | Intuition for the "information matrix equality" result? | I have to say that I am not sure what you are confused about.
"it seems like they would be zero or the same as the main diagonal?"
Not really - expected value of the score is zero but the cross products aren't (the different components of the score vector are not independent) - it is a matrix multiplication. | Intuition for the "information matrix equality" result? | I have to say that I am not sure what you are confused about.
"it seems like they would be zero or the same as the main diagonal?"
Not really - expected value of the score is zero but the cross produc | Intuition for the "information matrix equality" result?
I have to say that I am not sure what you are confused about.
"it seems like they would be zero or the same as the main diagonal?"
Not really - expected value of the score is zero but the cross products aren't (the different components of the score vector are not independent) - it is a matrix multiplication. | Intuition for the "information matrix equality" result?
I have to say that I am not sure what you are confused about.
"it seems like they would be zero or the same as the main diagonal?"
Not really - expected value of the score is zero but the cross produc |
50,568 | Paired or not paired? Comparing groups after propensity score matching | I personally find results are very similar when you use paired and unpaired tests. Yet, my recommendation, built upon studying quite extensively the topic, and following authoritative sources, such as this one from Austin, is now to use tests that recognize the clustering features of the dataset.
Thus, if I am using propensity score quantiles (e.g. quintiles), or propensity matched pairs, I routinely use meglm or xtgee in Stata for continuous or categorical variables, and stratified Cox proportional hazards analysis for survival analysis.
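For readers outside Stata, a roughly analogous cluster-aware analysis can be sketched with Python's statsmodels GEE, clustering on the matched-pair identifier. The toy data frame, the variable names and the exchangeable working correlation below are my illustrative assumptions, not part of the original recommendation:
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_pairs = 500
df = pd.DataFrame({
    "pair_id": np.repeat(np.arange(n_pairs), 2),   # one treated + one control per pair
    "treated": np.tile([1, 0], n_pairs),
})
# simulate a binary outcome with a pair-level random effect and a treatment effect
logit = -0.5 + 0.4 * df["treated"] + np.repeat(rng.normal(0, 0.3, n_pairs), 2)
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# GEE with an exchangeable working correlation acknowledges the pairing
fit = smf.gee("y ~ treated", groups="pair_id", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.summary())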
Specifically, the following excerpt, also from Austin, is very clear:
When estimating the statistical significance of treatment effects, the
use of methods that account for the matched nature of the sample is
recommended (Austin, 2009d, in press-b). Accordingly, McNemar's test
was used to assess the statistical significance of the risk
difference. Confidence intervals were constructed using a method
proposed by Agresti and Min (2004) that accounts for the matched
nature of the sample. The number needed to treat (NNT) is the
reciprocal of the absolute risk reduction. The relative risk was
estimated as the ratio of the probability of 3-year mortality in
treated participants compared with that of untreated participants in
the matched sample. Methods described by Agresti and Min were used to
estimate 95% confidence intervals.
We then estimated the effect of provision of smoking cessation
counseling on the time to death. Kaplan-Meier survival curves were
estimated separately for treated and untreated participants in the
propensity score matched sample. The log-rank test is not appropriate
for comparing the Kaplan-Meier survival curves between treatment
groups because the test assumes two independent samples (Harrington,
2005; Klein & Moeschberger, 1997). However, the stratified logrank
test is appropriate for matched pairs data (Klein & Moeschberger,
1997).
Finally, we used a Cox proportional hazards model to regress survival
time on an indicator variable denoting treatment status (smoking
cessation counseling vs. no counseling). As the propensity score
matched sample does not consist of independent observations, we used a
marginal survival model with robust standard errors (Lin & Wei, 1989).
An alternative to the use of a marginal model with robust variance
estimation would be to fit a Cox proportional hazards model that
stratified on the matched pairs (Cummings, McKnight, & Greenland,
2003). This approach accounts for the within-pair homogeneity by
allowing the baseline hazard function to vary across matched sets. | Paired or not paired? Comparing groups after propensity score matching | I personally find results are very similar when you use paired and unpaired tests. Yet, my recommendation, built upon studying quite extensively the topic, and following authoritative sources, such as | Paired or not paired? Comparing groups after propensity score matching
I personally find results are very similar when you use paired and unpaired tests. Yet, my recommendation, built upon studying quite extensively the topic, and following authoritative sources, such as this one from Austin, is now to use tests that recognize the clustering features of the dataset.
Thus, if I am using propensity score quantiles (e.g. quintiles), or propensity matched pairs, I routinely use meglm or xtgee in Stata for continuous or categorical variables, and stratified Cox proportional hazards analysis for survival analysis.
Specifically, the following excerpt, also from Austin, is very clear:
When estimating the statistical significance of treatment effects, the
use of methods that account for the matched nature of the sample is
recommended (Austin, 2009d, in press-b). Accordingly, McNemar's test
was used to assess the statistical significance of the risk
difference. Confidence intervals were constructed using a method
proposed by Agresti and Min (2004) that accounts for the matched
nature of the sample. The number needed to treat (NNT) is the
reciprocal of the absolute risk reduction. The relative risk was
estimated as the ratio of the probability of 3-year mortality in
treated participants compared with that of untreated participants in
the matched sample. Methods described by Agresti and Min were used to
estimate 95% confidence intervals.
We then estimated the effect of provision of smoking cessation
counseling on the time to death. Kaplan-Meier survival curves were
estimated separately for treated and untreated participants in the
propensity score matched sample. The log-rank test is not appropriate
for comparing the Kaplan-Meier survival curves between treatment
groups because the test assumes two independent samples (Harrington,
2005; Klein & Moeschberger, 1997). However, the stratified logrank
test is appropriate for matched pairs data (Klein & Moeschberger,
1997).
Finally, we used a Cox proportional hazards model to regress survival
time on an indicator variable denoting treatment status (smoking
cessation counseling vs. no counseling). As the propensity score
matched sample does not consist of independent observations, we used a
marginal survival model with robust standard errors (Lin & Wei, 1989).
An alternative to the use of a marginal model with robust variance
estimation would be to fit a Cox proportional hazards model that
stratified on the matched pairs (Cummings, McKnight, & Greenland,
2003). This approach accounts for the within-pair homogeneity by
allowing the baseline hazard function to vary across matched sets. | Paired or not paired? Comparing groups after propensity score matching
I personally find results are very similar when you use paired and unpaired tests. Yet, my recommendation, built upon studying quite extensively the topic, and following authoritative sources, such as |
50,569 | Weibull Mixture question | The Weibull survival function with shape parameter $k$ and scale parameter $\lambda$ (both positive) has the form
$$S(x; \lambda, k) = \exp\left(-(x/\lambda)^k\right)$$
for $x \gt 0.$ A finite mixture of $n$ such distributions is determined by positive mixture weights $p_i$ (necessarily summing to unity) and corresponding parameters and has survival function
$$S = \sum_{i=1}^n p_i \exp\left(-(x/\lambda_i)^{k_i}\right).$$
Equating these two expressions and some straightforward analysis show the following:
By studying the asymptotic behavior of $\log S$ for large $x,$ conclude that
$k = k_1 = k_2 = \ldots = k_n.$
Again by studying this asymptotic behavior assuming all the $k_i$ are equal to $k,$ conclude that
$\lambda = \lambda_1 = \ldots = \lambda_n.$
These are necessary and sufficient conditions.
Consequently
$$\eqalign{
S &= \sum_{i=1}^n p_i \exp\left(-(x/\lambda_i)^{k_i}\right) \\
&= \sum_{i=1}^n p_i \exp\left(-(x/\lambda)^k\right) \\
&= \left(\sum_{i=1}^n p_i\right) \exp\left(-(x/\lambda)^k\right) \\
&= \exp\left(-(x/\lambda)^k\right)
}$$
isn't really a mixture at all.
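A quick numerical companion to this conclusion (my own sketch, not part of the derivation): a Weibull survival function has $\log(-\log S(x))$ linear in $\log x$ with slope $k$, and a genuine two-component mixture visibly fails that check while a "mixture" of identical components passes it.
import numpy as np

def weibull_sf(x, lam, k):
    return np.exp(-(x / lam) ** k)

x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])

# two genuinely different components: the local slope of log(-log S) drifts
s_mix = 0.5 * weibull_sf(x, 1.0, 0.8) + 0.5 * weibull_sf(x, 3.0, 2.5)
print(np.diff(np.log(-np.log(s_mix))) / np.diff(np.log(x)))

# identical components: the slope is constant and equals k = 1.7
s_same = 0.5 * weibull_sf(x, 2.0, 1.7) + 0.5 * weibull_sf(x, 2.0, 1.7)
print(np.diff(np.log(-np.log(s_same))) / np.diff(np.log(x)))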
For an example of how such asymptotic investigations may be carried out rigorously, see this answer to the same question about Normal distributions. | Weibull Mixture question | The Weibull survival function with shape parameter $k$ and scale parameter $\lambda$ (both positive) has the form
$$S(x; \lambda, k) = \exp\left(-(x/\lambda)^k\right)$$
for $x \gt 0.$ A finite mixtur | Weibull Mixture question
The Weibull survival function with shape parameter $k$ and scale parameter $\lambda$ (both positive) has the form
$$S(x; \lambda, k) = \exp\left(-(x/\lambda)^k\right)$$
for $x \gt 0.$ A finite mixture of $n$ such distributions is determined by positive mixture weights $p_i$ (necessarily summing to unity) and corresponding parameters and has survival function
$$S = \sum_{i=1}^n p_i \exp\left(-(x/\lambda_i)^{k_i}\right).$$
Equating these two expressions and some straightforward analysis show the following:
By studying the asymptotic behavior of $\log S$ for large $x,$ conclude that
$k = k_1 = k_2 = \ldots = k_n.$
Again by studying this asymptotic behavior assuming all the $k_i$ are equal to $k,$ conclude that
$\lambda = \lambda_1 = \ldots = \lambda_n.$
These are necessary and sufficient conditions.
Consequently
$$\eqalign{
S &= \sum_{i=1}^n p_i \exp\left(-(x/\lambda_i)^{k_i}\right) \\
&= \sum_{i=1}^n p_i \exp\left(-(x/\lambda)^k\right) \\
&= \left(\sum_{i=1}^n p_i\right) \exp\left(-(x/\lambda)^k\right) \\
&= \exp\left(-(x/\lambda)^k\right)
}$$
isn't really a mixture at all.
For an example of how such asymptotic investigations may be carried out rigorously, see this answer to the same question about Normal distributions. | Weibull Mixture question
The Weibull survival function with shape parameter $k$ and scale parameter $\lambda$ (both positive) has the form
$$S(x; \lambda, k) = \exp\left(-(x/\lambda)^k\right)$$
for $x \gt 0.$ A finite mixtur |
50,570 | How many levels in multilevel modeling is too many? | This is hard to answer without much context. But in general, parameters of additional levels will be harder to estimate. For each additional level you will need much more data, specially for the variance-covariance parameters of the higher levels. See here for a related discussion. | How many levels in multilevel modeling is too many? | This is hard to answer without much context. But in general, parameters of additional levels will be harder to estimate. For each additional level you will need much more data, specially for the varia | How many levels in multilevel modeling is too many?
This is hard to answer without much context. But in general, parameters of additional levels will be harder to estimate. For each additional level you will need much more data, especially for the variance-covariance parameters of the higher levels. See here for a related discussion. | How many levels in multilevel modeling is too many?
This is hard to answer without much context. But in general, parameters of additional levels will be harder to estimate. For each additional level you will need much more data, specially for the varia |
50,571 | What is the "pdm" stat in the "rms" R package? | I don't know yet about the background of the statistic but below is an illustration how it is computed.
$$pdm = \frac{1}{n} \sum_{k=1}^n \left| \hat{P}(Y \geq median|X_k) - 0.5 \right| $$
It is an indication of how much the conditional predicted probability varies around the point of the marginal median.
library(rms)
###
### generate some data according to a latent-variable probit model
###
set.seed(1)
n = 10^2
k = 5
x = runif(n,-2,2) # predictor
noise = rnorm(n,0,1) # noise
y = x + noise # latent variable
bounds = seq(-2,2,1) # values to be predicted
z = as.numeric(sapply(y, FUN = function(yi) sum(bounds<yi))) # ordinal variable
### compute ordinal regression
mod = orm(z ~ x, family =probit)
### marginal median of the data
median = mod$stats[3]
### predictions hat p(y>=median | x) for all x in the data
pr = coef(mod)[median] + coef(mod)[k+1]*x
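### note: indexing by 'median' appears to work here because z takes the integer
### values 0..5, so the j-th intercept of orm() corresponds to P(z >= j) and
### coef(mod)[median] picks the intercept for P(z >= median); coef(mod)[k+1] is
### the regression coefficient on x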
prs = pnorm(pr)
### mean absolute deviation ... 0.3308655
mean(abs(prs-0.5))
### value from the function ... 0.3308655
mod$stats[14] | What is the "pdm" stat in the "rms" R package? | I don't know yet about the background of the statistic but below is an illustration how it is computed.
$$pdm = \frac{1}{n} \sum_{k=1}^n \left| \hat{P}(Y \geq median|X_k) - 0.5 \right| $$
It is an ind | What is the "pdm" stat in the "rms" R package?
I don't know yet about the background of the statistic but below is an illustration how it is computed.
$$pdm = \frac{1}{n} \sum_{k=1}^n \left| \hat{P}(Y \geq median|X_k) - 0.5 \right| $$
It is an indication of how much the conditional predicted probability varies around the point of the marginal median.
library(rms)
###
### generate some data according to a latent-variable probit model
###
set.seed(1)
n = 10^2
k = 5
x = runif(n,-2,2) # predictor
noise = rnorm(n,0,1) # noise
y = x + noise # latent variable
bounds = seq(-2,2,1) # values to be predicted
z = as.numeric(sapply(y, FUN = function(yi) sum(bounds<yi))) # ordinal variable
### compute ordinal regression
mod = orm(z ~ x, family =probit)
### marginal median of the data
median = mod$stats[3]
### predictions hat p(y>=median | x) for all x in the data
pr = coef(mod)[median] + coef(mod)[k+1]*x
prs = pnorm(pr)
### mean absolute deviation ... 0.3308655
mean(abs(prs-0.5))
### value from the function ... 0.3308655
mod$stats[14] | What is the "pdm" stat in the "rms" R package?
I don't know yet about the background of the statistic but below is an illustration how it is computed.
$$pdm = \frac{1}{n} \sum_{k=1}^n \left| \hat{P}(Y \geq median|X_k) - 0.5 \right| $$
It is an ind |
50,572 | What is the "pdm" stat in the "rms" R package? | This link: https://www.rdocumentation.org/packages/rms/versions/5.1-0/topics/validate.lrm
describes pdm as a "new" metric, which would imply that is has not been used before. | What is the "pdm" stat in the "rms" R package? | This link: https://www.rdocumentation.org/packages/rms/versions/5.1-0/topics/validate.lrm
describes pdm as a "new" metric, which would imply that is has not been used before. | What is the "pdm" stat in the "rms" R package?
This link: https://www.rdocumentation.org/packages/rms/versions/5.1-0/topics/validate.lrm
describes pdm as a "new" metric, which would imply that it has not been used before. | What is the "pdm" stat in the "rms" R package?
This link: https://www.rdocumentation.org/packages/rms/versions/5.1-0/topics/validate.lrm
describes pdm as a "new" metric, which would imply that it has not been used before.
50,573 | Maximum likelihood of multivariate t-distributed variable with scaled covariance | The EM algorithm is typically used to find MLEs of the parameters from an iid multivariate t sample. Instead of writing a long answer as to how to implement the algorithm, I refer you to McLachlan and Krishnan "The EM algorithm and Extensions", second edition. They show the MLE procedure for both known and unknown degrees of freedom, as well as ways to accelerate the algorithm.
If you don't have access to the book you can also look at this paper
https://arxiv.org/pdf/1707.01130.pdf
It turns out that estimating the degrees of freedom using ML results in an unbounded score function. The above paper finds estimators that minimize the MLq, which results in a bounded score. They also have a great review of the EM algorithm for the usual ML procedure.
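For orientation only, here is a minimal Python/numpy sketch of the E- and M-steps those references describe, with the degrees of freedom nu treated as known and fixed; the function name, the fixed iteration count and the toy data are my own choices:
import numpy as np

def fit_mvt_em(X, nu, n_iter=200):
    """EM for the location mu and scale matrix Sigma of a multivariate t with known nu."""
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    for _ in range(n_iter):
        diff = X - mu
        d = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sigma), diff)  # squared Mahalanobis
        w = (nu + p) / (nu + d)                       # E-step: conditional precision weights
        mu = (w[:, None] * X).sum(axis=0) / w.sum()   # M-step: weighted mean
        diff = X - mu
        Sigma = (w[:, None] * diff).T @ diff / n      # M-step: weighted scatter
    cov = Sigma * nu / (nu - 2) if nu > 2 else None   # covariance implied by the scale matrix
    return mu, Sigma, cov

X = np.random.default_rng(0).standard_t(df=5, size=(1000, 3))
print(fit_mvt_em(X, nu=5)[0])    # location estimate, close to the true zero vector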
Once you have estimators for both the degrees of freedom and the scale parameter, finding an estimate of the covariance is straight forward. | Maximum likelihood of multivariate t-distributed variable with scaled covariance | The EM algorithm is typically used to find MLEs of the parameters from an iid multivariate t sample. Instead of writting a long answer as to how to implement the algorithm I refer you to McLachlan an | Maximum likelihood of multivariate t-distributed variable with scaled covariance
The EM algorithm is typically used to find MLEs of the parameters from an iid multivariate t sample. Instead of writing a long answer as to how to implement the algorithm, I refer you to McLachlan and Krishnan "The EM algorithm and Extensions", second edition. They show the MLE procedure for both known and unknown degrees of freedom, as well as ways to accelerate the algorithm.
If you don't have access to the book you can also look at this paper
https://arxiv.org/pdf/1707.01130.pdf
It turns out that estimating the degrees of freedom using ML results in an unbounded score function. The above paper finds estimators that minimize the MLq, which results in a bounded score. They also have a great review of the EM algorithm for the usual ML procedure.
Once you have estimators for both the degrees of freedom and the scale parameter, finding an estimate of the covariance is straight forward. | Maximum likelihood of multivariate t-distributed variable with scaled covariance
The EM algorithm is typically used to find MLEs of the parameters from an iid multivariate t sample. Instead of writting a long answer as to how to implement the algorithm I refer you to McLachlan an |
50,574 | Unrealistically high significance when marginalizing over large number of parameters | As Cyan pointed out, this behaviour arises from the fact that the model does not put enough emphasis on the case $\mathbf{a}=0$. You say that you don't want to impose a 'tighter' prior on $\mathbf{a}$. I'm not sure what you mean by 'tighter', but increasing the prior probability of $\mathbf{a}=0$ is the only solution. This could be done by putting a point mass on $\mathbf{a}=0$, i.e. model averaging as suggested by Cyan, or using a prior with a sharp peak at 0 such as a T distribution with small $\nu$. | Unrealistically high significance when marginalizing over large number of parameters | As Cyan pointed out, this behaviour arises from the fact that the model does not put enough emphasis on the case $\mathbf{a}=0$. You say that you don't want to impose a 'tighter' prior on $\mathbf{a} | Unrealistically high significance when marginalizing over large number of parameters
As Cyan pointed out, this behaviour arises from the fact that the model does not put enough emphasis on the case $\mathbf{a}=0$. You say that you don't want to impose a 'tighter' prior on $\mathbf{a}$. I'm not sure what you mean by 'tighter', but increasing the prior probability of $\mathbf{a}=0$ is the only solution. This could be done by putting a point mass on $\mathbf{a}=0$, i.e. model averaging as suggested by Cyan, or using a prior with a sharp peak at 0 such as a T distribution with small $\nu$. | Unrealistically high significance when marginalizing over large number of parameters
As Cyan pointed out, this behaviour arises from the fact that the model does not put enough emphasis on the case $\mathbf{a}=0$. You say that you don't want to impose a 'tighter' prior on $\mathbf{a} |
50,575 | What is the limiting distribution of the sample mean? | You are correct that convergence in probability implies convergence in distribution as a weaker property. If the sample mean $\bar{X} \rightarrow_p \mu$ by the WLLN we know that $\bar{X} \rightarrow_d $ a constant. A different way to frame a similar question is to say, what is an approximating distribution of $\bar{X}_n$ ($n$ being the sample size in question). Then it would be right to say $\bar{X}_n \dot{\sim} \mathcal{N} \left( \mu, \sigma^2/n \right)$
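A tiny simulation sketch of that approximating-distribution statement (Python/numpy; mu, sigma and n are arbitrary choices of mine):
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 5.0, 2.0, 40
xbar = rng.normal(mu, sigma, size=(100000, n)).mean(axis=1)
print(xbar.mean(), xbar.std())       # close to mu and sigma / sqrt(n)
print(mu, sigma / np.sqrt(n))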
I think it's sloppy notation and the professor should have been clearer. In fact, in my theory classes, our professor had the deepest ire for what he considered a serious deficiency of understanding if students found limiting distributions that were functions of the $n$. | What is the limiting distribution of the sample mean? | You are correct that convergence in probability implies convergence in distribution as a weaker property. If the sample mean $\bar{X} \rightarrow_p \mu$ by the WLLN we know that $\bar{X} \rightarrow_d | What is the limiting distribution of the sample mean?
You are correct that convergence in probability implies convergence in distribution as a weaker property. If the sample mean $\bar{X} \rightarrow_p \mu$ by the WLLN we know that $\bar{X} \rightarrow_d $ a constant. A different way to frame a similar question is to say, what is an approximating distribution of $\bar{X}_n$ ($n$ being the sample size in question). Then it would be right to say $\bar{X}_n \dot{\sim} \mathcal{N} \left( \mu, \sigma^2/n \right)$
I think it's sloppy notation and the professor should have been clearer. In fact, in my theory classes, our professor had the deepest ire for what he considered a serious deficiency of understanding if students found limiting distributions that were functions of the $n$. | What is the limiting distribution of the sample mean?
You are correct that convergence in probability implies convergence in distribution as a weaker property. If the sample mean $\bar{X} \rightarrow_p \mu$ by the WLLN we know that $\bar{X} \rightarrow_d |
50,576 | Gamma vs tweedie distribution for large productivity dataset | The question you need to ask yourself is whether your response variable takes 0 values (not whether it takes very small values). Normally, if you have 0s in your data you shouldn't be able to fit a gamma distribution.
I would suggest then trying lognormal, gamma and inverse normal, which are the most common positive distributions. | Gamma vs tweedie distribution for large productivity dataset | The question you need to ask yourself is if your response variable takes 0 values (not if it takes very small values). Normally if you have 0s on your data you should'nt be able to fit a gamma distrib | Gamma vs tweedie distribution for large productivity dataset
The question you need to ask yourself is whether your response variable takes 0 values (not whether it takes very small values). Normally, if you have 0s in your data you shouldn't be able to fit a gamma distribution.
I would suggest then trying lognormal, gamma and inverse normal, which are the most common positive distributions. | Gamma vs tweedie distribution for large productivity dataset
The question you need to ask yourself is if your response variable takes 0 values (not if it takes very small values). Normally if you have 0s on your data you should'nt be able to fit a gamma distrib |
50,577 | Random variables with some properties (conditional expectation) | By keeping things as simple as possible we can construct a rather pretty solution.
Step 0 We have to begin somewhere. Since the variables are supposed to have strictly positive values, take the simplest positive number, $1$, and since $X$ appears first alphabetically, suppose $X=1$. In order to obtain $\mathbb{E}(Y|X=1) \gt 1$, $Y$ will have to have nonzero probability of exceeding $1$. The simplest possible way that could happen would be for all the probability to be assigned to a single value larger than $1$. The simplest number larger than $1$ is $2$. So, let
$$\mathbb{P}((X,Y)=(1,2)) = p,$$
say, and let's stipulate that $\mathbb{P}((1,y)) = 0$ for all $y\ne 2$.
Step 1 Now that $Y=2$ has nonzero probability, we are forced to consider $\mathbb{E}(X|Y=2)$. There is already probability $p$ that $X=1$ when $Y=2$. In order to make $\mathbb{E}(X|Y=2)\gt 2$, we will need to assign some probability to values of $X$ greater than $2$. Moreover, since we don't want to be assigning greater and greater probabilities to values--we want them to decrease so that they can sum to unity--we would prefer that some probability be assigned to $X\gt 3$ (for otherwise it would not be possible for the conditional expectation to be larger than $2$). The simplest number in that range is $4$, so let's stipulate that
$$\mathbb{P}((X,Y)=(4,2)) = q,$$
say. Then
$$\mathbb{E}(X|Y=2) = \frac{p(1) + q(4)}{p+q} \gt 2.$$
Writing
$$\zeta = q/p,$$
this implies $1 \gt \zeta \gt 1/2$.
So far we have obtained
$$\mathbb{P}((X,Y)=(1,2)) = p;\quad \mathbb{P}((X,Y)=(4,2)) = p\zeta.$$
Step 2 Now the tables are turned: we have assigned nonzero probability to $X=4$ and we need to determine what probabilities to assign to ordered pairs of the form $(4,Y)$. By interchanging the roles of $X$ and $Y$ and quadrupling their values, we can proceed exactly as in the last step to assign probability $(p\zeta)\zeta = p\zeta^2$ to the ordered pair $(4,8)=4(1,2)$ and in so doing guarantee, by construction, that $\mathbb{E}(Y|X=4) \gt 4$.
Steps $2n$ and $2n+1$ Continue switching the roles of $X$ and $Y$, quadrupling the values and multiplying each successive probability by $\zeta$.
Step $\omega$ Continuing in this vein produces a pair of random variables $(X,Y)$ which can be thought of as functions of the set of steps $\{0,1,2,\ldots,n,\ldots\} = \mathbb{N}$, the natural numbers. Such functions are sequences. They are
$$X = (x_n) = 1,4,4,16,16,64,64, \ldots, 2^{2\lfloor (n+1)/2 \rfloor}, \ldots$$
$$Y = (y_n) = 2,2,8,8,32,32, \ldots, 2^{2\lfloor n/2\rfloor + 1}, \ldots $$
The probability associated with the natural number $n$ is $p\zeta^n$. The total probability is $p + p\zeta + \cdots + p\zeta^n + \cdots = p/(1-\zeta)$. Because this must be unity, finally we learn that
$$p = 1-\zeta.$$
In order for $X$ and $Y$ to be random variables, the inverse images of any value must be measurable. The inverse images under $X$ are the doubleton sets $\{1,2\}, \{3,4\}, \ldots, \{2n-1, 2n\}, \ldots$ while the inverse images under $Y$ are the doubletons $\{0,1\}, \{2,3\}, \ldots, \{2n, 2n+1\}, \ldots$. Clearly these generate (via intersection) all singletons $\{n\}$, whence every subset of $\mathbb{N}$ must be measurable: this is the discrete measure on $\mathbb N$, given by its power set $\mathcal{P}(\mathbb{N})$. Therefore $X$ and $Y$ are random variables with respect to the probability space
$$(\mathbb{N}, \mathcal{P}(\mathbb{N}), \mathbb{P})$$
where $\mathbb{P}$ is completely determined by its values on the atoms,
$$\mathbb{P}(\{n\}) = (1-\zeta)\,\zeta^n,\ n=0, 1, 2, \ldots.$$
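Here is a short numerical check of the construction in Python (my sketch, using the illustrative value $\zeta = 3/4$ mentioned in the closing paragraph): it confirms that each conditional expectation strictly exceeds the conditioning value.
zeta = 0.75
p = 1 - zeta
# atoms (x_n, y_n, probability) for n = 0, 1, 2, ...
atoms = [(4 ** ((n + 1) // 2), 2 * 4 ** (n // 2), p * zeta ** n) for n in range(40)]

def cond_exp(given, of, value):
    rows = [a for a in atoms if a[given] == value]
    return sum(a[of] * a[2] for a in rows) / sum(a[2] for a in rows)

for x in (1, 4, 16, 64):
    print(x, cond_exp(0, 1, x))   # E[Y | X=x] > x
for y in (2, 8, 32, 128):
    print(y, cond_exp(1, 0, y))   # E[X | Y=y] > y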
The sense of "simplicity" adopted in this answer is objective, not subjective: it is the one John Conway describes in his book On Numbers and Games. We could go even a little further and name $\zeta = 3/4$ as the simplest possible value of $\zeta$ for which $1/2 \lt \zeta \lt 1$--but any such $\zeta$ will do. | Random variables with some properties (conditional expectation) | By keeping things as simple as possible we can construct a rather pretty solution.
Step 0 We have to begin somewhere. Since the variables are supposed to have strictly positive values, take the simp | Random variables with some properties (conditional expectation)
By keeping things as simple as possible we can construct a rather pretty solution.
Step 0 We have to begin somewhere. Since the variables are supposed to have strictly positive values, take the simplest positive number, $1$, and since $X$ appears first alphabetically, suppose $X=1$. In order to obtain $\mathbb{E}(Y|X=1) \gt 1$, $Y$ will have to have nonzero probability of exceeding $1$. The simplest possible way that could happen would be for all the probability to be assigned to a single value larger than $1$. The simplest number larger than $1$ is $2$. So, let
$$\mathbb{P}((X,Y)=(1,2)) = p,$$
say, and let's stipulate that $\mathbb{P}((1,y)) = 0$ for all $y\ne 2$.
Step 1 Now that $Y=2$ has nonzero probability, we are forced to consider $\mathbb{E}(X|Y=2)$. There is already probability $p$ that $X=1$ when $Y=2$. In order to make $\mathbb{E}(X|Y=2)\gt 2$, we will need to assign some probability to values of $X$ greater than $2$. Moreover, since we don't want to be assigning greater and greater probabilities to values--we want them to decrease so that they can sum to unity--we would prefer that some probability be assigned to $X\gt 3$ (for otherwise it would not be possible for the conditional expectation to be larger than $2$). The simplest number in that range is $4$, so let's stipulate that
$$\mathbb{P}((X,Y)=(4,2)) = q,$$
say. Then
$$\mathbb{E}(X|Y=2) = \frac{p(1) + q(4)}{p+q} \gt 2.$$
Writing
$$\zeta = q/p,$$
this implies $1 \gt \zeta \gt 1/2$.
So far we have obtained
$$\mathbb{P}((X,Y)=(1,2)) = p;\quad \mathbb{P}((X,Y)=(4,2)) = p\zeta.$$
Step 2 Now the tables are turned: we have assigned nonzero probability to $X=4$ and we need to determine what probabilities to assign to ordered pairs of the form $(4,Y)$. By interchanging the roles of $X$ and $Y$ and quadrupling their values, we can proceed exactly as in the last step to assign probability $(p\zeta)\zeta = p\zeta^2$ to the ordered pair $(4,8)=4(1,2)$ and in so doing guarantee, by construction, that $\mathbb{E}(Y|X=4) \gt 4$.
Steps $2n$ and $2n+1$ Continue switching the roles of $X$ and $Y$, quadrupling the values and multiplying each successive probability by $\zeta$.
Step $\omega$ Continuing in this vein produces a pair of random variables $(X,Y)$ which can be thought of as functions of the set of steps $\{0,1,2,\ldots,n,\ldots\} = \mathbb{N}$, the natural numbers. Such functions are sequences. They are
$$X = (x_n) = 1,4,4,16,16,64,64, \ldots, 2^{2\lfloor (n+1)/2 \rfloor}, \ldots$$
$$Y = (y_n) = 2,2,8,8,32,32, \ldots, 2^{2\lfloor n/2\rfloor + 1}, \ldots $$
The probability associated with the natural number $n$ is $p\zeta^n$. The total probability is $p + p\zeta + \cdots + p\zeta^n + \cdots = p/(1-\zeta)$. Because this must be unity, finally we learn that
$$p = 1-\zeta.$$
In order for $X$ and $Y$ to be random variables, the inverse images of any value must be measurable. The inverse images under $X$ are the doubleton sets $\{1,2\}, \{3,4\}, \ldots, \{2n-1, 2n\}, \ldots$ while the inverse images under $Y$ are the doubletons $\{0,1\}, \{2,3\}, \ldots, \{2n, 2n+1\}, \ldots$. Clearly these generate (via intersection) all singletons $\{n\}$, whence every subset of $\mathbb{N}$ must be measurable: this is the discrete measure on $\mathbb N$, given by its power set $\mathcal{P}(\mathbb{N})$. Therefore $X$ and $Y$ are random variables with respect to the probability space
$$(\mathbb{N}, \mathcal{P}(\mathbb{N}), \mathbb{P})$$
where $\mathbb{P}$ is completely determined by its values on the atoms,
$$\mathbb{P}(\{n\}) = (1-\zeta)\,\zeta^n,\ n=0, 1, 2, \ldots.$$
The sense of "simplicity" adopted in this answer is objective, not subjective: it is the one John Conway describes in his book On Numbers and Games. We could go even a little further and name $\zeta = 3/4$ as the simplest possible value of $\zeta$ for which $1/2 \lt \zeta \lt 1$--but any such $\zeta$ will do. | Random variables with some properties (conditional expectation)
By keeping things as simple as possible we can construct a rather pretty solution.
Step 0 We have to begin somewhere. Since the variables are supposed to have strictly positive values, take the simp |
50,578 | Fitting a Gaussian to a histogram when the bin size is significant | If you know that $y_i \in [x_j, x_{j+1})$, where $x_j$'s are cut points from a bin, then you can treat this as interval censored data. In other words, for your case, you can define your likelihood function as
$\displaystyle \prod_{i = 1}^n (\Phi(r_i|\mu, \sigma) - \Phi(l_i|\mu, \sigma) )$
Where $l_i$ and $r_i$ are the lower and upper limits of the bin in which the exact value lies.
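A minimal sketch of maximising that interval-censored likelihood in Python with scipy (the bin edges, the counts and the log-sigma parameterisation are illustrative assumptions of mine):
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

edges = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])   # bin boundaries
counts = np.array([5, 30, 80, 90, 40, 10])                  # observations per bin

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                                # keeps sigma positive
    mass = norm.cdf(edges[1:], mu, sigma) - norm.cdf(edges[:-1], mu, sigma)
    return -(counts * np.log(mass + 1e-300)).sum()

res = minimize(neg_log_lik, x0=[0.0, 0.0])
print(res.x[0], np.exp(res.x[1]))                            # fitted mu and sigma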
A note is that the log likelihood is not strictly concave for many of the models for interval censored data, but in practice this is not of much consequence. | Fitting a Gaussian to a histogram when the bin size is significant | If you know that $y_i \in [x_j, x_{j+1})$, where $x_j$'s are cut points from a bin, then you can treat this as interval censored data. In other words, for your case, you can define your likelihood fun | Fitting a Gaussian to a histogram when the bin size is significant
If you know that $y_i \in [x_j, x_{j+1})$, where $x_j$'s are cut points from a bin, then you can treat this as interval censored data. In other words, for your case, you can define your likelihood function as
$\displaystyle \prod_{i = 1}^n (\Phi(r_i|\mu, \sigma) - \Phi(l_i|\mu, \sigma) )$
Where $l_i$ and $r_i$ are the lower and upper limits of the bin in which the exact value lies.
A note is that the log likelihood is not strictly concave for many of the models for interval censored data, but in practice this is not of much consequence. | Fitting a Gaussian to a histogram when the bin size is significant
If you know that $y_i \in [x_j, x_{j+1})$, where $x_j$'s are cut points from a bin, then you can treat this as interval censored data. In other words, for your case, you can define your likelihood fun |
50,579 | Fitting a Gaussian to a histogram when the bin size is significant | You should treat each bin as if it were generating random points uniformly within its bounds. Therefore calculate a weighted average for each bin $(x_l, x_h]$ of $E(x) = \frac{x_h + x_l}{2}$ and $E(x^2) = \frac{x_h^2 + x_lx_h + x_l^2}{3}$. This weighted average determines a Gaussian.
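A small numpy sketch of that weighted-moment calculation (the bin edges and counts below are made up for illustration):
import numpy as np

edges = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
counts = np.array([10, 40, 35, 15])
w = counts / counts.sum()
xl, xh = edges[:-1], edges[1:]

Ex = (w * (xl + xh) / 2).sum()                      # E[x] under uniform-within-bin
Ex2 = (w * (xh**2 + xl * xh + xl**2) / 3).sum()     # E[x^2] under uniform-within-bin
print(Ex, np.sqrt(Ex2 - Ex**2))                     # mean and sd of the matched Gaussian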
You can incorporate a prior by treating this Gaussian as a likelihood. | Fitting a Gaussian to a histogram when the bin size is significant | You should treat each bin as if it were generating random points uniformly within its bounds. Therefore calculate a weighted average for each bin $(x_l, x_h]$ of $E(x) = \frac{x_h + x_l}{2}$ and $E(x^ | Fitting a Gaussian to a histogram when the bin size is significant
You should treat each bin as if it were generating random points uniformly within its bounds. Therefore calculate a weighted average for each bin $(x_l, x_h]$ of $E(x) = \frac{x_h + x_l}{2}$ and $E(x^2) = \frac{x_h^2 + x_lx_h + x_l^2}{3}$. This weighted average determines a Gaussian.
You can incorporate a prior by treating this Gaussian as a likelihood. | Fitting a Gaussian to a histogram when the bin size is significant
You should treat each bin as if it were generating random points uniformly within its bounds. Therefore calculate a weighted average for each bin $(x_l, x_h]$ of $E(x) = \frac{x_h + x_l}{2}$ and $E(x^ |
50,580 | Fitting a Gaussian to a histogram when the bin size is significant | Given whuber's comment on my last answer, I suggest you use that answer to find a mean and variance $\mu, \sigma^2$ as a starting point. Then, calculate the log-likelihood of having observed the bin counts you got $\ell$. Finally, optimize the mean and variance by gradient descent. It should be easy to calculate the gradients of the log-likelihood with respect the parameters. This log-likelihood seems to me to be convex. | Fitting a Gaussian to a histogram when the bin size is significant | Given whuber's comment on my last answer, I suggest you use that answer to find a mean and variance $\mu, \sigma^2$ as a starting point. Then, calculate the log-likelihood of having observed the bin | Fitting a Gaussian to a histogram when the bin size is significant
Given whuber's comment on my last answer, I suggest you use that answer to find a mean and variance $\mu, \sigma^2$ as a starting point. Then, calculate the log-likelihood of having observed the bin counts you got $\ell$. Finally, optimize the mean and variance by gradient descent. It should be easy to calculate the gradients of the log-likelihood with respect the parameters. This log-likelihood seems to me to be convex. | Fitting a Gaussian to a histogram when the bin size is significant
Given whuber's comment on my last answer, I suggest you use that answer to find a mean and variance $\mu, \sigma^2$ as a starting point. Then, calculate the log-likelihood of having observed the bin |
50,581 | Methods of fitting a dynamic linear model | The Kalman filter is the "forward filtering" part of FFBS, while the "backward sampling" part provides a draw from the joint distribution for the $\Theta_t$ for all $t$.
All the other ways of performing statistical parameter estimation, e.g. maximum likelihood, can be used for DLMs. Yes a particle filter could be used here, but since the model is linear and Gaussian, i.e. a DLM, you won't be gaining anything. Particle filters are better when you have non-linear or non-Gaussian models and thus cannot perform FFBS.
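To make the forward-filtering half concrete, here is a minimal Kalman filter sketch in Python for a local-level DLM with a scalar state (the variances, priors and simulated data are illustrative choices of mine; the backward-sampling step of FFBS is not shown):
import numpy as np

def kalman_filter_local_level(y, v, w, m0=0.0, c0=1e6):
    """Forward filtering for y_t = theta_t + noise(v), theta_t = theta_{t-1} + noise(w)."""
    m, c = m0, c0
    out = []
    for yt in y:
        a, r = m, c + w              # prior for theta_t given y_{1:t-1}
        q = r + v                    # one-step forecast variance
        k = r / q                    # Kalman gain
        m, c = a + k * (yt - a), r - k * r
        out.append((m, c))
    return out

rng = np.random.default_rng(0)
theta = np.cumsum(rng.normal(0, 1, 100))                  # latent random walk
y = theta + rng.normal(0, 0.5, 100)                       # noisy observations
print(kalman_filter_local_level(y, v=0.25, w=1.0)[-1])    # last filtered mean and variance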
Yes, it is possible to perform Metropolis-within-Gibbs for any Bayesian model. | Methods of fitting a dynamic linear model | The Kalman filter is the "forward filtering" part of FFBS, while the "backward sampling" part provides a draw from the joint distribution for the $\Theta_t$ for all $t$.
All the other ways of perform | Methods of fitting a dynamic linear model
The Kalman filter is the "forward filtering" part of FFBS, while the "backward sampling" part provides a draw from the joint distribution for the $\Theta_t$ for all $t$.
All the other ways of performing statistical parameter estimation, e.g. maximum likelihood, can be used for DLMs. Yes a particle filter could be used here, but since the model is linear and Gaussian, i.e. a DLM, you won't be gaining anything. Particle filters are better when you have non-linear or non-Gaussian models and thus cannot perform FFBS.
Yes, it is possible to perform Metropolis-within-Gibbs for any Bayesian model. | Methods of fitting a dynamic linear model
The Kalman filter is the "forward filtering" part of FFBS, while the "backward sampling" part provides a draw from the joint distribution for the $\Theta_t$ for all $t$.
All the other ways of perform |
50,582 | What are desirable characteristics of a test statistic? | For a test-statistic to be a statistical test you need to know the sampling distribution of that statistic if the null hypothesis is true. For some statistics it is easier to derive (asymptotically) what that distribution would be, and these statistics have been given names like t-statistic, F-statistic, etc. There exist many different test statistics because many will just test different null-hypotheses. Sometimes the difference is huge, sometimes the difference is extremely subtle. Sometimes different test statistics test exactly the same hypothesis. In those cases the difference could be statistical power, and sometimes it turns out to be the same test developed within different sub-disciplines of statistics and given different names. | What are desirable characteristics of a test statistic? | For a test-statistic to be a statistical test you need to know the sampling distribution of that statistic if the null hypothesis is true. For some statistics it is easier to derive (asymptotically) w | What are desirable characteristics of a test statistic?
For a test-statistic to be a statistical test you need to know the sampling distribution of that statistic if the null hypothesis is true. For some statistics it is easier to derive (asymptotically) what that distribution would be, and these statistics have been given names like t-statistic, F-statistic, etc. There exist many different test statistics because many will just test different null-hypotheses. Sometimes the difference is huge, sometimes the difference is extremely subtle. Sometimes different test statistics test exactly the same hypothesis. In those cases the difference could be statistical power, and sometimes it turns out to be the same test developed within different sub-disciplines of statistics and given different names. | What are desirable characteristics of a test statistic?
For a test-statistic to be a statistical test you need to know the sampling distribution of that statistic if the null hypothesis is true. For some statistics it is easier to derive (asymptotically) w |
50,583 | What are desirable characteristics of a test statistic? | tl;dr: your test needs to have statistical power, the concept of statistical power invalidates the whole raison d'etre of null hypothesis testing
Suppose you have run an experiment in a lab and you'd like to test whether there is an effect. Anathema! Don't you know you can only reject the absence of an effect? Haven't you read Popper? Don't you know it's unscientific to try and confirm anything, you can only try to disprove things.
Very well you say, so you conceive of a null hypothesis, you construct a statistic around it, and you test how likely you are to get such an extreme value under the null hypothesis. However, you find the process tedious, and you wonder if you could automate it a little more.
You then have a brilliant idea. To reject the null hypothesis, you will write a report on your experiment (containing the data). You will then put that report through a cryptographic hash function, like sha2. Since this hash is very unpredictable, if there is no effect, the first 7 bits will only be 0 about 1/128th of the time.
Therefore, you now have a universal null hypothesis test. Hash your paper, and test if the first 7 bits are 0. If they are, then you can reject the null hypothesis at p=0.78%. This suggests there may be an effect, where somehow the slime mold you've been studying is inverting the hash function.
Of course, in reality, you have about a 0.78% chance of rejecting the null-hypothesis, and all your rejections will be flukes. It is said that your test has no power.
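The rejection rate of this "universal test" is easy to simulate (a Python sketch; sha256 stands in for "sha2", the fake reports are random strings, and checking that the leading 7 bits are zero reproduces the 1/128, i.e. roughly 0.78%, rate discussed above):
import hashlib
import numpy as np

rng = np.random.default_rng(0)
trials, rejections = 100000, 0
for i in range(trials):
    report = f"experiment {i}: observed effect {rng.normal():.6f}".encode()
    digest = hashlib.sha256(report).digest()
    if digest[0] >> 1 == 0:        # the first 7 bits of the hash are all zero
        rejections += 1
print(rejections / trials)          # ~ 1/128 ~ 0.0078, no matter what the data say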
Very well then, let us use tests which have power! So how do we go about that? Well, we need an idea of what would happen if there were an effect, let's see... Horror! Despair! It seems that we must actually model our actual hypothesis... but... that is unspeakably unscientific. Hume showed that induction was impossible!
There's the dirty secret. The entire concept of null hypothesis testing is a contortion used to keep the pretense that we're not making assumptions about the effect we are after, that we are merely rejecting hypotheses. Bullshit. The effect is implicitly modeled the minute we start caring about the power of the test. | What are desirable characteristics of a test statistic? | tl;dr: your test needs to have statistical power, the concept of statistical power invalidates the whole raison d'etre of null hypothesis testing
Suppose you have run an experiment in a lab and you'd | What are desirable characteristics of a test statistic?
tl;dr: your test needs to have statistical power, the concept of statistical power invalidates the whole raison d'etre of null hypothesis testing
Suppose you have run an experiment in a lab and you'd like to test whether there is an effect. Anathema! Don't you know you can only reject the absence of an effect? Haven't you read Popper? Don't you know it's unscientific to try and confirm anything, you can only try to disprove things.
Very well you say, so you conceive of a null hypothesis, you construct a statistic around it, and you test how likely you are to get such an extreme value under the null hypothesis. However, you find the process tedious, and you wonder if you could automate it a little more.
You then have a brilliant idea. To reject the null hypothesis, you will write a report on your experiment (containing the data). You will then put that report through a cryptographic hash function, like sha2. Since this hash is very unpredictable, if there is no effect, the first 7 bits will only be 0 about 1/128th of the time.
Therefore, you now have a universal null hypothesis test. Hash your paper, and test if the first 7 bits are 0. If they are, then you can reject the null hypothesis at p=0.78%. This suggests there may be an effect, where somehow the slime mold you've been studying is inverting the hash function.
Of course, in reality, you have about a 0.78% chance of rejecting the null-hypothesis, and all your rejections will be flukes. It is said that your test has no power.
Very well then, let us use tests which have power! So how do we go about that? Well, we need an idea of what would happen if there were an effect, let's see... Horror! Despair! It seems that we must actually model our actual hypothesis... but... that is unspeakably unscientific. Hume showed that induction was impossible!
There's the dirty secret. The entire concept of null hypothesis testing is a contortion used to keep the pretense that we're not making assumptions about the effect we are after, that we are merely rejecting hypotheses. Bullshit. The effect is implicitly modeled the minute we start caring about the power of the test. | What are desirable characteristics of a test statistic?
tl;dr: your test needs to have statistical power, the concept of statistical power invalidates the whole raison d'etre of null hypothesis testing
Suppose you have run an experiment in a lab and you'd |
50,584 | What are desirable characteristics of a test statistic? | From more of an intro stat perspective, some tests are useful with some data and others are not. For example, if your data is normally distributed (or if you're looking at the means of samples greater than 30 in size) you can use a Z-test. They're easy to compute.
If you're forced to use a small sample, a better distribution to rely on is the T distribution, which takes into account the sample size. But once the sample size gets large enough, the T distribution starts to look like Z (the normal curve).
You can construct a 95% CI and then calculate T and find that they agree: the mean is in the confidence interval--T isn't large enough to reject. It's not all toeMAYtoe toeMAHtoe but especially at the intro stat level, it's all about the amount of data you have, the distribution of the data, and what you're asking. | What are desirable characteristics of a test statistic? | From more of an intro stat perspective, some tests are useful with some data and others are not. For example, if your data is normally distributed (or if you're looking at the means of samples greater | What are desirable characteristics of a test statistic?
From more of an intro stat perspective, some tests are useful with some data and others are not. For example, if your data is normally distributed (or if you're looking at the means of samples greater than 30 in size) you can use a Z-test. They're easy to compute.
If you're forced to use a small sample, a better distribution to rely on is the T distribution, which takes into account the sample size. But once the sample size gets large enough, the T distribution starts to look like Z (the normal curve).
You can construct a 95% CI and then calculate T and find that they agree: the mean is in the confidence interval--T isn't large enough to reject. It's not all toeMAYtoe toeMAHtoe but especially at the intro stat level, it's all about the amount of data you have, the distribution of the data, and what you're asking. | What are desirable characteristics of a test statistic?
From more of an intro stat perspective, some tests are useful with some data and others are not. For example, if your data is normally distributed (or if you're looking at the means of samples greater |
50,585 | Find conditional expectation given a discrete random variable whose range is N | Since
$$
X_N=\sum_{n\geqslant 1} X_n\mathbf{1}_{N=n}
$$
holds pointwise, we have
$$
{\rm E}[X_N]=\sum_{n\geqslant 1}\mu_np_n
$$
agreeing with your expression. Similarly,
$$
{\rm E}[X_N^2]=\sum_{n\geqslant 1}{\rm E}[X_n^2]p_n=\sum_{n\geqslant 1}(\sigma_n^2+\mu_n^2)p_n
$$
and hence
$$
{\rm Var}(X_N)=\sum_{n\geqslant 1} (\sigma_n^2+\mu_n^2)p_n-\left(\sum_{n\geqslant 1} \mu_np_n\right)^2
$$
also agreeing with your expression. | Find conditional expectation given a discrete random variable whose range is N | Since
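These expressions are easy to confirm by simulation; here is a Python sketch with an arbitrary choice of $p_n$, $\mu_n$ and $\sigma_n$ over three values of $N$:
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])       # P(N = n) for the three components
mu = np.array([0.0, 2.0, 5.0])
sd = np.array([1.0, 0.5, 2.0])

idx = rng.choice(3, size=1_000_000, p=p)
X = rng.normal(mu[idx], sd[idx])

print(X.mean(), (p * mu).sum())                                   # E[X_N]
print(X.var(), (p * (sd**2 + mu**2)).sum() - (p * mu).sum()**2)   # Var(X_N)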
$$
X_N=\sum_{n\geqslant 1} X_n\mathbf{1}_{N=n}
$$
holds pointwise, we have
$$
{\rm E}[X_N]=\sum_{n\geqslant 1}\mu_np_n
$$
agreeing with your expression. Similarly,
$$
{\rm E}[X_N^2]=\sum_{n\geq | Find conditional expectation given a discrete random variable whose range is N
Since
$$
X_N=\sum_{n\geqslant 1} X_n\mathbf{1}_{N=n}
$$
holds pointwise, we have
$$
{\rm E}[X_N]=\sum_{n\geqslant 1}\mu_np_n
$$
agreeing with your expression. Similarly,
$$
{\rm E}[X_N^2]=\sum_{n\geqslant 1}{\rm E}[X_n^2]p_n=\sum_{n\geqslant 1}(\sigma_n^2+\mu_n^2)p_n
$$
and hence
$$
{\rm Var}(X_N)=\sum_{n\geqslant 1} (\sigma_n^2+\mu_n^2)p_n-\left(\sum_{n\geqslant 1} \mu_np_n\right)^2
$$
also agreeing with your expression. | Find conditional expectation given a discrete random variable whose range is N
Since
$$
X_N=\sum_{n\geqslant 1} X_n\mathbf{1}_{N=n}
$$
holds pointwise, we have
$$
{\rm E}[X_N]=\sum_{n\geqslant 1}\mu_np_n
$$
agreeing with your expression. Similarly,
$$
{\rm E}[X_N^2]=\sum_{n\geq |
50,586 | Find conditional expectation given a discrete random variable whose range is N | This question is another clear case of applying identity: $E[f(X,Y)|Y=y]=E[f(X,y)|Y=y]$.
$$E[X_N|N=n]=E[X_n|N=n]=E[X_n]=\mu_n.$$
In the same way, for the variance we have:
$$Var[X_N|N=n]=E[(X_N-\mu_N)^2|N=n]=E[(X_n-\mu_n)^2|N=n]=E[(X_n-\mu_n)^2]=\sigma_n^2$$ | Find conditional expectation given a discrete random variable whose range is N | This question is another clear case of applying identity: $E[f(X,Y)|Y=y]=E[f(X,y)|Y=y]$.
$$E[X_N|N=n]=E[X_n|N=n]=E[X_n]=\mu _n$$.
In the same way, for the variance we have:
$$Var[X_N|N=n]=E[(X_N-\mu_N | Find conditional expectation given a discrete random variable whose range is N
This question is another clear case of applying identity: $E[f(X,Y)|Y=y]=E[f(X,y)|Y=y]$.
$$E[X_N|N=n]=E[X_n|N=n]=E[X_n]=\mu_n.$$
In the same way, for the variance we have:
$$Var[X_N|N=n]=E[(X_N-\mu_N)^2|N=n]=E[(X_n-\mu_n)^2|N=n]=E[(X_n-\mu_n)^2]=\sigma_n^2$$ | Find conditional expectation given a discrete random variable whose range is N
This question is another clear case of applying identity: $E[f(X,Y)|Y=y]=E[f(X,y)|Y=y]$.
$$E[X_N|N=n]=E[X_n|N=n]=E[X_n]=\mu _n$$.
In the same way, for the variance we have:
$$Var[X_N|N=n]=E[(X_N-\mu_N |
50,587 | What are some interesting examples of wrong or crazy inferences being drawn from Big Data? | One example could be Google's failure to predict flu trends. See for instance this Guardian article. | What are some interesting examples of wrong or crazy inferences being drawn from Big Data? | One example could be Google's failure to predict flu trends. See for instance this Guardian article. | What are some interesting examples of wrong or crazy inferences being drawn from Big Data?
One example could be Google's failure to predict flu trends. See for instance this Guardian article. | What are some interesting examples of wrong or crazy inferences being drawn from Big Data?
One example could be Google's failure to predict flu trends. See for instance this Guardian article. |
50,588 | Extracting city name from free text? | This task is typically referred as named entity normalization. Fuzzy string matching can be a good baseline if words are not too close (in terms of Levenshtein distance) in your dictionary. I have used the Python package fuzzywuzzy in the past for that purpose. | Extracting city name from free text? | This task is typically referred as named entity normalization. Fuzzy string matching can be a good baseline if words are not too close (in terms of Levenshtein distance) in your dictionary. I have us | Extracting city name from free text?
This task is typically referred to as named entity normalization. Fuzzy string matching can be a good baseline if words are not too close (in terms of Levenshtein distance) in your dictionary. I have used the Python package fuzzywuzzy in the past for that purpose. | Extracting city name from free text?
This task is typically referred as named entity normalization. Fuzzy string matching can be a good baseline if words are not too close (in terms of Levenshtein distance) in your dictionary. I have us |
50,589 | Extracting city name from free text? | What I currently do to normalize place names is to resort to geocoding using APIs such as Navitia or Google Maps, which already deal with the normalization process.
Once the lat/lng in hand, I reverse-geocode them, of course always using the same api so as to get normalized outputs.
Furthermore, these APIs return more information than just a normalized city name, including fields that allow you to add rows to your db in a uniquely identifying manner.
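As one concrete (and freely available) way to do that round trip in Python, here is a sketch with the geopy package and its Nominatim backend; this is a stand-in for the Navitia/Google APIs mentioned above, it needs network access, and the exact fields returned depend on the service:
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="city-normalization-demo")

raw = "ShangHai , china"
loc = geolocator.geocode(raw)                                    # free text -> coordinates
back = geolocator.reverse((loc.latitude, loc.longitude), language="en")
print(loc.latitude, loc.longitude)
print(back.address)                                              # normalized place name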
In reaction to the other answer, let's try fuzzywuzzy with the example you give.
>>> from fuzzywuzzy import fuzz
>>> a = "Shanghai, China"
>>> b = "China, ShangHai"
>>> fuzz.ratio(a, b)
53
>>> fuzz.partial_ratio(a, b)
53
>>> fuzz.token_sort_ratio(a, b)
100
>>> fuzz.token_set_ratio(a, b)
100
seems fair. | Extracting city name from free text? | What I currently do to normalize places name is resorting to geocoding using api such as, e.g. navitia or google map, which already deal with the normalization process.
Once the lat/lng in hand, I re | Extracting city name from free text?
What I currently do to normalize place names is to resort to geocoding using APIs such as Navitia or Google Maps, which already deal with the normalization process.
Once the lat/lng in hand, I reverse-geocode them, of course always using the same api so as to get normalized outputs.
Furthermore, these APIs return more information than just a normalized city name, including fields that allow you to add rows to your db in a uniquely identifying manner.
In reaction to the other answer, let's try fuzzywuzzy with the example you give.
>>> from fuzzywuzzy import fuzz
>>> a = "Shanghai, China"
>>> b = "China, ShangHai"
>>> fuzz.ratio(a, b)
53
>>> fuzz.partial_ratio(a, b)
53
>>> fuzz.token_sort_ratio(a, b)
100
>>> fuzz.token_set_ratio(a, b)
100
seems fair. | Extracting city name from free text?
What I currently do to normalize places name is resorting to geocoding using api such as, e.g. navitia or google map, which already deal with the normalization process.
Once the lat/lng in hand, I re |
50,590 | Propensity score matching: using alternative methods to create a distance measure | Logistic regression is mostly likely used because of historic convenience, well studied convergence properties, relative data-frugality in comparison with other ML learners as well as being readily available pretty much everywhere. Also the resulting probabilities are usually "well-calibrated" out-of-the-box and this is helpful as it does not lead to under-/over-estimation of the probability to receive treatment as well as makes the occurrence of "extreme probabilities" (near 0 or 1) less likely. GBMs specifically, do not give very well-calibrated probabilities out of the box, I provided relevant material and commentary in this CV.SE tread on: Biased prediction (overestimation) for xgboost. Similarly simply using a tree would not be strongly advisable as it would lead to discontinuities and non-strictly monotonic probabilities that could mess up ordering because of the ties. Finally SVMs are a bit of red-herring as strictly speaking they do not provide probabilities natively but we need to use Platt scaling to get a similar output. This brings us to the last point: we can always post-process our output to make it better calibrated. The success of that step will be crucial but to avoid getting it "very wrong" logistic regression presents a safe bet. I recently read A tutorial on calibration measurements and calibration models for clinical prediction models by Huang et al. and I found it very informative if you want to explore that point further. | Propensity score matching: using alternative methods to create a distance measure | Logistic regression is mostly likely used because of historic convenience, well studied convergence properties, relative data-frugality in comparison with other ML learners as well as being readily av | Propensity score matching: using alternative methods to create a distance measure
Logistic regression is most likely used because of historic convenience, well-studied convergence properties, and relative data-frugality in comparison with other ML learners, as well as being readily available pretty much everywhere. Also, the resulting probabilities are usually "well-calibrated" out of the box; this is helpful because it does not lead to under-/over-estimation of the probability of receiving treatment and makes the occurrence of "extreme probabilities" (near 0 or 1) less likely. GBMs, specifically, do not give very well-calibrated probabilities out of the box; I provided relevant material and commentary in this CV.SE thread: Biased prediction (overestimation) for xgboost. Similarly, simply using a tree would not be strongly advisable, as it would lead to discontinuities and non-strictly monotonic probabilities that could mess up the ordering because of ties. Finally, SVMs are a bit of a red herring: strictly speaking they do not provide probabilities natively, so we need to use Platt scaling to get a similar output. This brings us to the last point: we can always post-process our output to make it better calibrated. The success of that step is crucial, but to avoid getting it "very wrong", logistic regression presents a safe bet. I recently read A tutorial on calibration measurements and calibration models for clinical prediction models by Huang et al. and I found it very informative if you want to explore that point further. | Propensity score matching: using alternative methods to create a distance measure
Logistic regression is mostly likely used because of historic convenience, well studied convergence properties, relative data-frugality in comparison with other ML learners as well as being readily av |
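As a rough illustration of the point above, here is a sketch that estimates propensity scores with a plain logistic regression and, alternatively, wraps a boosted model in Platt-style recalibration; the data are simulated and the use of scikit-learn here is my choice, not something prescribed by the original answer.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.calibration import CalibratedClassifierCV

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                       # covariates (simulated)
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # binary treatment indicator (simulated)

# Logistic regression: propensity scores that tend to be reasonably calibrated out of the box.
ps_logit = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

# A boosted model post-processed with sigmoid (Platt) scaling, fitted via cross-validation.
gbm = CalibratedClassifierCV(GradientBoostingClassifier(), method="sigmoid", cv=5).fit(X, t)
ps_gbm = gbm.predict_proba(X)[:, 1]
print(ps_logit[:5], ps_gbm[:5])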
50,591 | When do (and don't) confidence intervals and credible intervals coincide? | I'm not sure you can consider this a complete answer, so you may want to double-check it yourself; however, here goes.
By the definition of confidence intervals, namely that
there's an $X\%$ chance that when computing the $X\%$ confidence intervals (CI) the true value $y$ will fall within the computed CI,
you can synthesize an experiment where you know the true parameter values $y$ and you simulate the noise (based on the assumed likelihood function), let's say $P=1000$ times. When you do the fit and compute the $X\%$ confidence intervals, $y$ should fall within the CIs $X\%$ of the time. If the observed coverage deviates from $X\%$ in a significant way, that should affect your decision.
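A minimal numpy sketch of that coverage check, using a normal-mean toy model in place of whatever likelihood you actually assumed (the true value, noise level and sample size below are made up):

import numpy as np

rng = np.random.default_rng(1)
y_true, sigma, n, P = 5.0, 2.0, 30, 1000           # fixed true value, noise sd, sample size, replications
covered = 0
for _ in range(P):
    sample = rng.normal(y_true, sigma, size=n)     # simulate the noise around the fixed truth
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = m - 1.96 * se, m + 1.96 * se          # 95% confidence interval (normal approximation)
    covered += (lo <= y_true <= hi)
print(covered / P)                                 # should land close to 0.95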
On the other hand, given the definition of the credible intervals where
Given the observed data, there is an $X\%$ probability that the true value $y$ falls within the $X\%$ credible interval
it means that
you must synthesize $P$ different parameters $\{y_p\}_{p=1,\ldots,P}$ (which are your true values),
solve using a Bayesian estimator $P$ times,
compute the $X\%$ credible intervals for each ($P$ times),
and expect that $X\%$ of the true values $y_p$ should fall within the credible intervals.
Note: $P$ and $X$ are the same in the aforementioned scenarios.
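The matching check for credible intervals, again as a toy sketch with a conjugate normal-normal model (the prior and noise settings are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(2)
P, n, sigma = 1000, 30, 2.0
mu0, tau0 = 0.0, 3.0                               # prior: y ~ N(mu0, tau0^2)
covered = 0
for _ in range(P):
    y_p = rng.normal(mu0, tau0)                    # draw a "true" parameter from the prior
    data = rng.normal(y_p, sigma, size=n)
    prec = 1 / tau0**2 + n / sigma**2              # conjugate posterior with known sigma
    post_mean = (mu0 / tau0**2 + data.sum() / sigma**2) / prec
    post_sd = np.sqrt(1 / prec)
    lo, hi = post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd   # central 95% credible interval
    covered += (lo <= y_p <= hi)
print(covered / P)                                 # close to 0.95 when the analysis prior matches the generating prior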
So to summarize, to be able to compare credible intervals to confidence intervals fairly, you need to follow their definitions. In the frequentist approach you assume a fixed set of parameters (remember, frequentists assume parameters are fixed) and simulate noise in the measurements (data), whereas in the Bayesian approach you assume your data is fixed, so you must "randomize" the parameters. If you follow this approach, credible and confidence intervals can be compared fairly (no matter the prior distribution). | When do (and don't) confidence intervals and credible intervals coincide? | I'm not sure you can consider this a complete answer so you can double check yourself, however, here goes.
By the definition of confidence intervals that
there's an $X\%$ chance that when computing | When do (and don't) confidence intervals and credible intervals coincide?
I'm not sure you can consider this a complete answer, so you may want to double-check it yourself; however, here goes.
By the definition of confidence intervals, namely that
there's an $X\%$ chance that when computing the $X\%$ confidence intervals (CI) the true value $y$ will fall within the computed CI,
you can synthesize an experiment where you know the true parameter values $y$ and you simulate the noise (based on the assumed likelihood function), let's say $P=1000$ times. When you do the fit and compute the $X\%$ confidence intervals, $y$ should fall within the CIs $X\%$ of the time. If the observed coverage deviates from $X\%$ in a significant way, that should affect your decision.
On the other hand, given the definition of the credible intervals where
Given the observed data, there is an $X\%$ probability that the true value $y$ falls within the $X\%$ credible interval
it means that
you must synthesize $P$ different parameters $\{y_p\}_{p=1,\ldots,P}$ (which are your true values),
solve using a Bayesian estimator $P$ times,
compute the $X\%$ credible intervals for each ($P$ times),
and expect that $X\%$ of the true values $y_p$ should fall within the credible intervals.
Note: $P$ and $X$ are the same in the aforementioned scenarios.
So to summarize, to be able to compare credible intervals to confidence intervals fairly, you need to follow their definitions. In the frequentist approach you assume a fixed set of parameters (remember, frequentists assume parameters are fixed) and simulate noise in the measurements (data), whereas in the Bayesian approach you assume your data is fixed, so you must "randomize" the parameters. If you follow this approach, credible and confidence intervals can be compared fairly (no matter the prior distribution). | When do (and don't) confidence intervals and credible intervals coincide?
I'm not sure you can consider this a complete answer so you can double check yourself, however, here goes.
By the definition of confidence intervals that
there's an $X\%$ chance that when computing |
50,592 | How to statistically compare groups for multiple density plots? | The problem with a chi-square is that it ignores the ordering, leading to a loss of power.
One possibility: there is a k-group version of a Kolmogorov-Smirnov test$^{[1]}$.
Another is a k-sample Anderson-Darling test. E.g. see Wikipedia.
A third possibility might be to look at an orthogonal-polynomial decomposition of a chi-square (or rather the first few terms in one), which would then take account of the ordering. See, for example, chapter 6 of Best (1999)$^{[2]}$.
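For what it's worth, the k-sample Anderson-Darling test is available in scipy (the k-group Kolmogorov-Smirnov test of Conover is not, so the pairwise two-sample KS test below is only a rough substitute); the three toy groups are simulated:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
groups = [rng.normal(loc, 1.0, size=200) for loc in (0.0, 0.1, 0.3)]   # three groups of observations

res = stats.anderson_ksamp(groups)           # k-sample Anderson-Darling test
print(res.statistic, res.significance_level)

print(stats.ks_2samp(groups[0], groups[1]))  # pairwise two-sample Kolmogorov-Smirnov comparison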
[1]: Conover, W. J. (1965),
"Several k-Sample Kolmogorov-Smirnov Tests",
The Annals of Mathematical Statistics, 36:3 (Jun.), pp. 1019-1026
[2]: Best, D.J. (1999),
Tests of fit and other nonparametric data analysis,
PhD Thesis, School of Mathematics and Applied Statistics, University of Wollongong
http://ro.uow.edu.au/theses/2061 (direct link) | How to statistically compare groups for multiple density plots? | The problem with a chi-square is it ignores the ordering, leading to a loss of power.
One possibility: there is a k-group version of a Kolmogorov-Smirnov test$^{[1]}$.
Another is a k-sample Anderson- | How to statistically compare groups for multiple density plots?
The problem with a chi-square is that it ignores the ordering, leading to a loss of power.
One possibility: there is a k-group version of a Kolmogorov-Smirnov test$^{[1]}$.
Another is a k-sample Anderson-Darling test. E.g. see Wikipedia.
A third possibility might be to look at an orthogonal-polynomial decomposition of a chi-square (or rather the first few terms in one), which would then take account of the ordering. See, for example, chapter 6 of Best (1999)$^{[2]}$.
[1]: Conover, W. J. (1965),
"Several k-Sample Kolmogorov-Smirnov Tests",
The Annals of Mathematical Statistics, 36:3 (Jun.), pp. 1019-1026
[2]: Best, D.J. (1999),
Tests of fit and other nonparametric data analysis,
PhD Thesis, School of Mathematics and Applied Statistics, University of Wollongong
http://ro.uow.edu.au/theses/2061 (direct link) | How to statistically compare groups for multiple density plots?
The problem with a chi-square is it ignores the ordering, leading to a loss of power.
One possibility: there is a k-group version of a Kolmogorov-Smirnov test$^{[1]}$.
Another is a k-sample Anderson- |
50,593 | Significant output in Levene's test for equality of variances in MANOVA; what to do? | First, make sure you look at the boxplots of the residuals instead of just using Levene's tests. The significant result could be due to outliers, a bimodal distribution, or skewness that you may need to address.
To get equal variances, try log-transforming (ln) your dependent variables before you run MANOVA.
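A sketch of that workflow with statsmodels, on a made-up data frame (the variable names, the lognormal toy data and the choice of statsmodels are all my assumptions):

import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "group": np.repeat(["a", "b", "c"], 50),
    "dv1": rng.lognormal(mean=0.0, sigma=0.5, size=150),
    "dv2": rng.lognormal(mean=0.2, sigma=0.5, size=150),
})
df["log_dv1"] = np.log(df["dv1"])    # ln-transform the dependent variables first
df["log_dv2"] = np.log(df["dv2"])

fit = MANOVA.from_formula("log_dv1 + log_dv2 ~ group", data=df)
print(fit.mv_test())                 # reports Wilks' lambda, Pillai's trace, Hotelling-Lawley and Roy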
Also, there are four test statistics that can be used in MANOVA. Hotelling's trace and Pillai's criterion are the least affected by violations in assumptions, but Wilk's is the most commonly used. | Significant output in Levene's test for equality of variances in MANOVA; what to do? | First, make sure you look at the boxplots of the residuals instead of just using Levene's tests. The significant result could be due to outliers, a bimodal distribution, or skewness that you may need | Significant output in Levene's test for equality of variances in MANOVA; what to do?
First, make sure you look at the boxplots of the residuals instead of just using Levene's tests. The significant result could be due to outliers, a bimodal distribution, or skewness that you may need to address.
To get equal variances, try log-transforming (ln) your dependent variables before you run MANOVA.
Also, there are four test statistics that can be used in MANOVA. Hotelling's trace and Pillai's criterion are the least affected by violations in assumptions, but Wilk's is the most commonly used. | Significant output in Levene's test for equality of variances in MANOVA; what to do?
First, make sure you look at the boxplots of the residuals instead of just using Levene's tests. The significant result could be due to outliers, a bimodal distribution, or skewness that you may need |
50,594 | Question about Dynkin Lehmann Scheffe Theorem | The following claim makes no sense:
$\frac{L_x(.)}{L_x(\theta_0)}$ is minimal sufficient
Lehmann and Scheffé proved:
If $T(x)=T(y)$ $\iff$ $\theta \mapsto \dfrac{p(x,\theta)}{p(y,\theta)}$ is a constant function, then $T$ is minimal sufficient.
Call $(*)$ = "$\theta \mapsto \frac{p(x,\theta)}{p(y,\theta)}$ is a constant function".
Taking an arbitrary $\theta_0 \in \Theta$ ($\theta_0=3$ in your example), $(*)$ means that $\dfrac{p(x,\theta)}{p(y,\theta)} = \dfrac{p(x,\theta_0)}{p(y,\theta_0)}$ or, in other words, $\dfrac{p(x,\theta)}{p(x,\theta_0)} = \dfrac{p(y,\theta)}{p(y,\theta_0)}$ for every $\theta$. The annotations in the table of the solution you posted are the values of $\frac{p(x,\theta)}{p(x,\theta_0)}$ for every $x$ and $\theta$. Call $r(x)$ the vector ${\left(\frac{p(x,\theta)}{p(x,\theta_0)}\right)}_{\theta \in \Theta}$. Then the procedure consists in assigning a value to $T(x)$ shared by all $x$ having the same $r(x)$. For example, $r(1)=r(2)$ in your problem, so we assign a common value to $T(1)$ and $T(2)$. Next, $r(3)$ is "alone" in your table ($r(x)\neq r(3)$ for $x\neq 3$), so we assign a value to $T(3)$ different from the previously assigned values, and so on... Constructing $T$ in this way, the condition of Lehmann-Scheffé's theorem is fulfilled, and the theorem applies.
Update
Sorry, I understand that now:
$\frac{L_x(.)}{L_x(\theta_0)}$ is minimal sufficient
this is nothing but my $r(x)$ ! | Question about Dynkin Lehmann Scheffe Theorem | The following claim makes no sense:
$\frac{L_x(.)}{L_x(\theta_0)}$ is minimal sufficient
Lehmann and Scheffé proved:
If $T(x)=T(y)$ $\iff$ $\theta \mapsto \dfrac{p(x,\theta)}{p(y,\theta)}$ is a con | Question about Dynkin Lehmann Scheffe Theorem
The following claim makes no sense:
$\frac{L_x(.)}{L_x(\theta_0)}$ is minimal sufficient
Lehmann and Scheffé proved:
If $T(x)=T(y)$ $\iff$ $\theta \mapsto \dfrac{p(x,\theta)}{p(y,\theta)}$ is a constant function, then $T$ is minimal sufficient.
Call $(*)$ = "$\theta \mapsto \frac{p(x,\theta)}{p(y,\theta)}$ is a constant function".
Taking an arbitrary $\theta_0 \in \Theta$ ($\theta_0=3$ in your example), $(*)$ means that $\dfrac{p(x,\theta)}{p(y,\theta)} = \dfrac{p(x,\theta_0)}{p(y,\theta_0)}$ or, in other words, $\dfrac{p(x,\theta)}{p(x,\theta_0)} = \dfrac{p(y,\theta)}{p(y,\theta_0)}$ for every $\theta$. The annotations in the table of the solution you posted are the values of $\frac{p(x,\theta)}{p(x,\theta_0)}$ for every $x$ and $\theta$. Call $r(x)$ the vector ${\left(\frac{p(x,\theta)}{p(x,\theta_0)}\right)}_{\theta \in \Theta}$. Then the procedure consists in assigning a value to $T(x)$ shared by all $x$ having the same $r(x)$. For example, $r(1)=r(2)$ in your problem, so we assign a common value to $T(1)$ and $T(2)$. Next, $r(3)$ is "alone" in your table ($r(x)\neq r(3)$ for $x\neq 3$), so we assign a value to $T(3)$ different from the previously assigned values, and so on... Constructing $T$ in this way, the condition of Lehmann-Scheffé's theorem is fulfilled, and the theorem applies.
Update
Sorry, I understand that now:
$\frac{L_x(.)}{L_x(\theta_0)}$ is minimal sufficient
this is nothing but my $r(x)$ ! | Question about Dynkin Lehmann Scheffe Theorem
The following claim makes no sense:
$\frac{L_x(.)}{L_x(\theta_0)}$ is minimal sufficient
Lehmann and Scheffé proved:
If $T(x)=T(y)$ $\iff$ $\theta \mapsto \dfrac{p(x,\theta)}{p(y,\theta)}$ is a con |
50,595 | Drawing numbered balls from an urn | I would suggest the following reason:
You have a random variable $X$, the sum of the balls in a round, which is an independent, identically distributed variable (by assumption?), whereas $\mu*$ is not.
You are interested in $Z=\frac{1}{N_x}\sum_{j=1}^{10^6}\sum_{i=1}^{n_j} x_{ij}$.
Well, the central limit theorem tells you that $Z$ 'approaches' a normal distribution with mean $\mu_X$ and standard deviation $\sigma_X/\sqrt{N_x}$.
See http://www.statisticalengineering.com/central_limit_theorem.html ... basically, whatever distribution you start off with, the distribution of the mean of a million samples will look like a Gaussian.
A further note (and how the CLT is proved): the easiest way to calculate the distribution of a sum of n iid random variables is to take the Fourier transform of the individual distribution, raise it to the nth power, and then invert the Fourier transform. So you can quite directly test the convergence for your particular distribution by generating the empirical pdf from your million samples and then calculating the FFT, etc., for sums of 10, 100, etc. (see convolution and sums of independent random variables).
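Here is a small numpy sketch of that Fourier/convolution idea, with a made-up per-draw distribution standing in for the empirical pdf of the balls:

import numpy as np

pmf = np.array([0.95, 0.03, 0.01, 0.005, 0.003, 0.002])   # hypothetical pmf on ball values 0..5
n = 100                                                    # number of draws being summed

size = n * (len(pmf) - 1) + 1              # support of the n-fold sum
f = np.fft.rfft(pmf, size)                 # transform of the single-draw pmf (zero-padded)
pmf_sum = np.fft.irfft(f ** n, size)       # n-fold convolution = inverse transform of the n-th power
pmf_sum = np.clip(pmf_sum, 0, None)
pmf_sum /= pmf_sum.sum()                   # clean up tiny numerical negatives

mean_single = (np.arange(len(pmf)) * pmf).sum()
print(n * mean_single, (np.arange(size) * pmf_sum).sum())  # the two means agree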
To convince yourself (or otherwise) that the CLT is applicable, it's probably simplest to use a bootstrap: a) sample with replacement from your 1 million sums of balls 1 million times to generate one sample of $Z$ = the sum of 1 million rounds, $z_k$; b) repeat a) 1000 or more times; c) generate a histogram of the distribution of $Z$ from your 1000 samples and compare it to the normal approximation implied by the CLT. | Drawing numbered balls from an urn | I would suggest the following reason:
You have a random variable X - the sum of balls in a round. Which is an independent identically distributed variable ( by assumption?) ( whereas $\mu*$ is not)
Y | Drawing numbered balls from an urn
I would suggest the following reason:
You have a random variable $X$, the sum of the balls in a round, which is an independent, identically distributed variable (by assumption?), whereas $\mu*$ is not.
You are interested in $Z=\frac{1}{N_x}\sum_{j=1}^{10^6}\sum_{i=1}^{n_j} x_{ij}$.
Well, the central limit theorem tells you that $Z$ 'approaches' a normal distribution with mean $\mu_X$ and standard deviation $\sigma_X/\sqrt{N_x}$.
See http://www.statisticalengineering.com/central_limit_theorem.html ... basically, whatever distribution you start off with, the distribution of the mean of a million samples will look like a Gaussian.
A further note (and how the CLT is proved): the easiest way to calculate the distribution of a sum of n iid random variables is to take the Fourier transform of the individual distribution, raise it to the nth power, and then invert the Fourier transform. So you can quite directly test the convergence for your particular distribution by generating the empirical pdf from your million samples and then calculating the FFT, etc., for sums of 10, 100, etc. (see convolution and sums of independent random variables).
To convince yourself (or otherwise) that the CLT is applicable, it's probably simplest to use a bootstrap: a) sample with replacement from your 1 million sums of balls 1 million times to generate one sample of $Z$ = the sum of 1 million rounds, $z_k$; b) repeat a) 1000 or more times; c) generate a histogram of the distribution of $Z$ from your 1000 samples and compare it to the normal approximation implied by the CLT. | Drawing numbered balls from an urn
I would suggest the following reason:
You have a random variable X - the sum of balls in a round. Which is an independent identically distributed variable ( by assumption?) ( whereas $\mu*$ is not)
Y |
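A minimal numpy version of the bootstrap check described in the answer above; since the real draws are not available here, the per-round sums are simulated (Poisson) stand-ins and the sizes are scaled down to keep it quick:

import numpy as np

rng = np.random.default_rng(5)
round_sums = rng.poisson(0.06, size=100_000)        # stand-in for the observed per-round sums

B = 500
boot_means = np.array([
    rng.choice(round_sums, size=round_sums.size, replace=True).mean()
    for _ in range(B)
])

mu_hat = round_sums.mean()
se_hat = round_sums.std(ddof=1) / np.sqrt(round_sums.size)   # normal approximation implied by the CLT
print(mu_hat, se_hat)
print(boot_means.mean(), boot_means.std(ddof=1))             # bootstrap mean and spread should be close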
50,596 | Drawing numbered balls from an urn | Is the mean really the best estimator and why?
If you're looking for the highest reward (or highest sum of ball numbers), then yes, the mean is the best estimator. It's also consistent with common sense to pick the robot which had the highest average in earlier rounds. Especially because the robots don't draw the balls randomly (though as stipulated in the problem, we don't know why or how they choose the balls).
Why not the mean over all the balls drawn?
Correct, you do have to do that (for each robot separately, of course). And based on the formula in your "Standard Solution", that's exactly what you're doing (per robot).
Though it doesn't match your interpretation in the line before that: "...make the decision based on the mean and the standard error on mean for the sum of the balls in each round".
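As a concrete version of "highest average plus a standard error", here is a sketch that scores three hypothetical robots from simulated per-round sums (the Poisson rates are invented for illustration):

import numpy as np

rng = np.random.default_rng(6)
rounds = {name: rng.poisson(lam, size=1_000_000)          # hypothetical per-round sums per robot
          for name, lam in [("A", 0.055), ("B", 0.060), ("C", 0.058)]}

for name, x in rounds.items():
    se = x.std(ddof=1) / np.sqrt(x.size)                  # standard error of the mean
    print(f"robot {name}: mean={x.mean():.5f} +/- {se:.5f}")

best = max(rounds, key=lambda k: rounds[k].mean())        # pick the robot with the highest average
print("pick:", best)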
What about the sensitivity to outliers?
In this case, I don't think outliers would cause a big problem, for the very reasons you've given in your "Additional Information". If 0 is drawn 95% of the time, the second most common is a mere 1, etc.; you can do the math. The 'handful of times' that there are high-numbered balls (even if there are several) won't have a big impact if you're doing it over a million rounds (since you eventually divide by N (10^6) to get the mean). Plus, one would expect the standard error to take this into account.
It seems a lot of information is lost by not using the knowledge of the set Bk, so is something more Bayesian advisable?
I'm going to refer to this paper by S.L. Scott because a) I couldn't put it better, b) I don't want to run the risk of oversimplifying the answer and c) I'm simply not sure with this particular problem: A modern Bayesian look at the multi-armed bandit
-
For a deeper understanding (including the 'regret' vs. 'reward' side of the story, etc.) I refer to: Best Arm Identification in Multi-Armed Bandit | Drawing numbered balls from an urn | Is the mean really the best estimator and why?
If you're looking for the highest reward (or highest sum of ball numbers), then yes, the mean is the best estimator. It's also consistent with common se | Drawing numbered balls from an urn
Is the mean really the best estimator and why?
If you're looking for the highest reward (or highest sum of ball numbers), then yes, the mean is the best estimator. It's also consistent with common sense to pick the robot which had the highest average in earlier rounds. Especially because the robots don't draw the balls randomly (though as stipulated in the problem, we don't know why or how they choose the balls).
Why not the mean over all the balls drawn?
Correct, you do have to do that (for each robot separately, of course). And based on the formula in your "Standard Solution", that's exactly what you're doing (per robot).
Though it doesn't match your interpretation in the line before that: "...make the decision based on the mean and the standard error on mean for the sum of the balls in each round".
What about the sensitivity to outliers?
In this case, I don't think outliers would cause a big problem, for the very reasons you've given in your "Additional Information". If 0 is drawn 95% of the time, the second most common is a mere 1, etc.; you can do the math. The 'handful of times' that there are high-numbered balls (even if there are several) won't have a big impact if you're doing it over a million rounds (since you eventually divide by N (10^6) to get the mean). Plus, one would expect the standard error to take this into account.
It seems a lot of information is lost by not using the knowledge of the set Bk, so is something more Bayesian advisable?
I'm going to refer to this paper by S.L. Scott because a) I couldn't put it better, b) I don't want to run the risk of oversimplifying the answer and c) I'm simply not sure with this particular problem: A modern Bayesian look at the multi-armed bandit
-
For a deeper understanding (including the 'regret' vs. 'reward' side of the story, etc.) I refer to: Best Arm Identification in Multi-Armed Bandit | Drawing numbered balls from an urn
Is the mean really the best estimator and why?
If you're looking for the highest reward (or highest sum of ball numbers), then yes, the mean is the best estimator. It's also consistent with common se |
50,597 | Weighted least squares | For weighted regression through the origin, I presume you either know or can show that $b_1=\frac{\sum_i w_ix_iy_i}{\sum_i w_ix_i^2} = \frac{\sum_i (w_ix_i)y_i}{\sum_i (w_ix_i)x_i}$
Since in this case $b_1=\frac{\sum_i y_i}{\sum x_i}$, you can see by inspection that the weights must be such that $w_ix_i$ is a constant.
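A quick numerical check of that claim with statsmodels (the simulated x and y are arbitrary): choosing weights proportional to $1/x_i$, so that $w_ix_i$ is constant, reproduces $\sum_i y_i/\sum_i x_i$.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.uniform(1, 10, size=50)
y = 2.0 * x + rng.normal(0, 1, size=50)

w = 1.0 / x                                        # w_i proportional to 1/x_i, so w_i * x_i is constant
b1 = sm.WLS(y, x, weights=w).fit().params[0]       # no intercept column, i.e. regression through the origin
print(b1, y.sum() / x.sum())                       # the two numbers coincide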
Can you do it now? | Weighted least squares | For weighted regression through the origin, I presume you either know or can show that $b_1=\frac{\sum_i w_ix_iy_i}{\sum_i w_ix_i^2} = \frac{\sum_i (w_ix_i)y_i}{\sum_i (w_ix_i)x_i}$
Since in this case | Weighted least squares
For weighted regression through the origin, I presume you either know or can show that $b_1=\frac{\sum_i w_ix_iy_i}{\sum_i w_ix_i^2} = \frac{\sum_i (w_ix_i)y_i}{\sum_i (w_ix_i)x_i}$
Since in this case $b_1=\frac{\sum_i y_i}{\sum x_i}$, you can see by inspection that the weights must be such that $w_ix_i$ is a constant.
Can you do it now? | Weighted least squares
For weighted regression through the origin, I presume you either know or can show that $b_1=\frac{\sum_i w_ix_iy_i}{\sum_i w_ix_i^2} = \frac{\sum_i (w_ix_i)y_i}{\sum_i (w_ix_i)x_i}$
Since in this case |
50,598 | Cumulative Hazard Function where "status" is dependent on "time" | I'm not going to give you a full answer on how to address this at the moment, but I do want to alert you to a very serious statistical issue in your dataset.
Standard survival analysis methods (such as the standard KM curves, Cox PH models and AFT models) all assume that the censoring is independent of the event time. But that appears not to be the case in your data! The fact that early events increase the probability of being censored is somewhat of a worst-case-scenario violation of this assumption.
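A toy simulation of that failure mode, using lifelines (the numbers and the censoring mechanism are invented purely to show the direction of the bias):

import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(8)
n = 5000
t_event = rng.exponential(10.0, size=n)                     # latent event times, true median = 10*ln(2)

# Informative censoring: subjects with early events are much more likely to be censored first.
p_censor = np.where(t_event < np.median(t_event), 0.6, 0.1)
censored = rng.uniform(size=n) < p_censor
obs_time = np.where(censored, t_event * rng.uniform(size=n), t_event)
event = (~censored).astype(int)

kmf = KaplanMeierFitter().fit(obs_time, event_observed=event)
print(kmf.median_survival_time_, 10.0 * np.log(2))          # the KM median overshoots the true median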
Unless there's something about the study design that I'm missing (totally possible...), you have Non-Ignorable missing data. As such, the best you can do is to build a sensitivity analysis. This is not such a simple task though. | Cumulative Hazard Function where "status" is dependent on "time" | I'm not going to give you a full answer on how to address this at the moment, but I do want to alert you a very serious statistical issue in your dataset.
Standard survival analysis methods (such as | Cumulative Hazard Function where "status" is dependent on "time"
I'm not going to give you a full answer on how to address this at the moment, but I do want to alert you to a very serious statistical issue in your dataset.
Standard survival analysis methods (such as the standard KM curves, Cox PH models and AFT models) all assume that the censoring is independent of the event time. But that appears not to be the case in your data! The fact that early events increase the probability of being censored is somewhat of a worst-case-scenario violation of this assumption.
Unless there's something about the study design that I'm missing (totally possible...), you have Non-Ignorable missing data. As such, the best you can do is to build a sensitivity analysis. This is not such a simple task though. | Cumulative Hazard Function where "status" is dependent on "time"
I'm not going to give you a full answer on how to address this at the moment, but I do want to alert you a very serious statistical issue in your dataset.
Standard survival analysis methods (such as |
50,599 | Cumulative Hazard Function where "status" is dependent on "time" | I don't know if I understood your question well, but here's my point of view. In my experience with real data (though nothing related to health data), I usually prefer not to deal with Cox models because I have never met a case with proportional hazards. I have always been faced with data where hazard rates are not constant at all over time.
The solutions I have taken are basically AFT (accelerated failure time) models, which are parametric regression models, in case you are not familiar with them.
I use both survreg and the psm family of functions (based on survreg) in the excellent package called rms. There you'll be able to find functions associated with the survival objects to calculate hazard ratios, quantiles, means, etc. of the parametric function you choose for your model.
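The answer refers to R's survreg and rms::psm; purely as a sketch of the same AFT idea in Python, here is a Weibull AFT fit with lifelines on simulated data (the covariate effect, censoring mechanism and column names are all made up):

import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(9)
n = 500
x = rng.normal(size=n)
T = 10 * rng.weibull(1.5, size=n) * np.exp(0.5 * x)   # covariate accelerates/decelerates event time
C = rng.exponential(20.0, size=n)                     # independent censoring times
df = pd.DataFrame({"time": np.minimum(T, C), "event": (T <= C).astype(int), "x": x})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="time", event_col="event")
aft.print_summary()                      # coefficients on the log-time scale
print(aft.predict_median(df).head())     # per-subject median survival times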
Even taking all of this into account, as far as I know there is an important area of improvement in modelling events where theoretical distributions don't fit the data well (multimodal distributions of time, for example), which requires almost artisanal modelling strategies to fit good models. | Cumulative Hazard Function where "status" is dependent on "time" | I don´t know if I understood well your question but here's my point of view about this... In my experience with real data ,even though with nothing related with health data, I usually prefer not to de | Cumulative Hazard Function where "status" is dependent on "time"
I don't know if I understood your question well, but here's my point of view. In my experience with real data (though nothing related to health data), I usually prefer not to deal with Cox models because I have never met a case with proportional hazards. I have always been faced with data where hazard rates are not constant at all over time.
The solutions I have taken are basically AFT (accelerated failure time) models, which are parametric regression models, in case you are not familiar with them.
I use both survreg and the psm family of functions (based on survreg) in the excellent package called rms. There you'll be able to find functions associated with the survival objects to calculate hazard ratios, quantiles, means, etc. of the parametric function you choose for your model.
Even taking all of this into account, as far as I know there is an important area of improvement in modelling events where theoretical distributions don't fit the data well (multimodal distributions of time, for example), which requires almost artisanal modelling strategies to fit good models. | Cumulative Hazard Function where "status" is dependent on "time"
I don´t know if I understood well your question but here's my point of view about this... In my experience with real data ,even though with nothing related with health data, I usually prefer not to de |
50,600 | Estimating Poisson process intensity using GLM | Can I assume the sojourn times to be exponentially i.i.d. given that the Poisson process is inhomogeneous?
In general the inter-event intervals will not be exponentially distributed. This paper by Yakovlev et al. (2008) derives an expression for the inter-event distribution of one-dimensional non-homogeneous Poisson processes (Equation 6) and gives counterexamples (e.g., Equation 8).
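A small sketch of why that is: simulate an inhomogeneous Poisson process by Lewis-Shedler thinning (the sinusoidal intensity below is invented) and look at the inter-event gaps, which need not be exponential.

import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
lam = lambda t: 2.0 + 1.5 * np.sin(2 * np.pi * t)     # time-varying intensity on [0, T]
lam_max, T = 3.5, 50.0

t, events = 0.0, []
while True:
    t += rng.exponential(1.0 / lam_max)               # candidate points from a homogeneous process
    if t > T:
        break
    if rng.uniform() < lam(t) / lam_max:              # keep a candidate with probability lam(t)/lam_max
        events.append(t)

gaps = np.diff(events)
print(stats.kstest(gaps, "expon", args=(0, gaps.mean())))   # rough check against an exponential fit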
Am I modeling a one-dimensional (time), three-dimensional (time and location) or a $p + 2$ dimensional Poisson process?
I'm guessing you want to model a three-dimensional process. It's not one-dimensional, because you will have different rates $\lambda(t, x, y)$ for different positions $(x, y)$. On the other hand, it's not $(p + 2)$-dimensional, since you probably observe only one $\mathbf{z}$ for every time and position, so that knowing $(t, x, y)$ implies knowing the value of $\mathbf{z}$. Of course you might then assume something like $\lambda(t, x, y) = \lambda(\mathbf{z}_{txy})$ or $\lambda(t, x, y) = \lambda(\mathbf{z}_{txy}, x, y)$, but I would still call that a three-dimensional process. | Estimating Poisson process intensity using GLM | Can I assume the sojourn times to be exponentially i.i.d. given that the Poisson process is inhomogeneous?
In general the inter-event intervals will not be exponentially distributed. This paper by Ya | Estimating Poisson process intensity using GLM
Can I assume the sojourn times to be exponentially i.i.d. given that the Poisson process is inhomogeneous?
In general the inter-event intervals will not be exponentially distributed. This paper by Yakovlev et al. (2008) derives an expression for the inter-event distribution of one-dimensional non-homogeneous Poisson processes (Equation 6) and gives counterexamples (e.g., Equation 8).
Am I modeling a one-dimensional (time), three-dimensional (time and location) or a $p + 2$ dimensional Poisson process?
I'm guessing you want to model a three-dimensional process. It's not one-dimensional, because you will have different rates $\lambda(t, x, y)$ for different positions $(x, y)$. On the other hand, it's not $(p + 2)$-dimensional, since you probably observe only one $\mathbf{z}$ for every time and position, so that knowing $(t, x, y)$ implies knowing the value of $\mathbf{z}$. Of course you might then assume something like $\lambda(t, x, y) = \lambda(\mathbf{z}_{txy})$ or $\lambda(t, x, y) = \lambda(\mathbf{z}_{txy}, x, y)$, but I would still call that a three-dimensional process. | Estimating Poisson process intensity using GLM
Can I assume the sojourn times to be exponentially i.i.d. given that the Poisson process is inhomogeneous?
In general the inter-event intervals will not be exponentially distributed. This paper by Ya |