50,901
GLMM for overdispersed data
### Overdispersion Problem

It looks like you're modeling a count variable as a binomial, and I think that's the source of your overdispersion. You could model everything as a binomial distribution, but the total for each observation is exactly the same. Plus, the count of diseased plants never reaches the maximum of 100, so it's not really censored the way a binomial would be.

EDIT: So, you could easily report this as a "rate" of disease over the total sample. In this way you could analyze the count of disease or the proportion (disease / total) as a negative binomial model.

EDIT2: Because there seems to be some hesitance to use a negative binomial, here is a list of recent phytopathology articles (same discipline as the OP) that model disease as a negative binomial (Prager et al., 2014; Mori et al., 2008; Passey et al., 2017; Paiva de Almeida et al., 2016).

A histogram of your y variable looks like a zero-inflated negative binomial. Note the long right tail that you typically see with a negative binomial or Poisson. There are a few different ways to handle this, but here's an easy solution:

    m4 <- glmer.nb(dis ~ trt + (1 | farm/bk), data = dinc)
    summary(m4)
    overdisp_fun(m4)

I got the following overdispersion results:

          chisq       ratio         rdf           p
    122.1655582   1.0811111 113.0000000   0.2617332

Looks good, right?

(EDIT: Ignore the struck-through section below.)

### Side Issue: Your Trees are Independent Observations

At first, it looks like each of the two trees should be a random effect. However, Tree 1 on farm 1 is not comparable to Tree 1 on farm 2. Therefore, you don't want to model the effect of Tree as a random effect. Imagine if each Tree was a different person. Adding a random effect for each person wouldn't matter unless you had multiple observations per person. Similarly, including the farm "block" doesn't really have an effect on the model.

### Alternative Models and Final Thoughts

- You could potentially check out a zero-inflated negative binomial, although your dispersion doesn't seem bad with a standard NB (a sketch with glmmTMB is given at the end of this answer).
- The MASS package is an alternative way to run an NB model.
- Additionally, you could run this as a quasi-Poisson.

I'll include some code below, in case you want to pursue this:

    require("MASS")
    m5 <- glmmPQL(dis ~ trt, random = ~ 1 | farm/bk,
                  family = negative.binomial(theta = 9.86), data = dinc)
    summary(m5)

    m6 <- glmmPQL(dis ~ trt, random = ~ 1 | farm/bk,
                  family = quasipoisson(link = 'log'), data = dinc)
    summary(m6)

Best of luck with your model!

EDIT: In case you'd like to run this as a "rate", please try this code:

    dinc$dis_prob <- dinc$dis / dinc$tot
    m7 <- glmmPQL(dis_prob ~ trt, random = ~ 1 | farm/bk,
                  family = quasipoisson(link = 'log'), data = dinc)
    summary(m7)
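As a hedged sketch of the zero-inflated negative binomial mentioned in the bullets above: glmmTMB is not part of the original answer, and the code assumes the same dinc data frame and formula as m4.

    library(glmmTMB)
    # Zero-inflated negative binomial with the same fixed and random effects as m4;
    # ziformula = ~ 1 gives a single zero-inflation probability for all observations.
    m4_zi <- glmmTMB(dis ~ trt + (1 | farm/bk),
                     ziformula = ~ 1,
                     family    = nbinom2,
                     data      = dinc)
    summary(m4_zi)
    AIC(m4)      # plain negative binomial fit from above
    AIC(m4_zi)   # zero-inflated version

If the zero-inflated fit does not lower the AIC appreciably, the plain NB above is probably enough.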
50,902
GLMM for overdispersed data
You are correct that you have modeled it appropriately. Each of your flowers is "nested" under a tree, and so they are not independent of each other. Your code is appropriate in that you have allowed the intercept to vary by tree. It also looks like you have examined the overdispersion (i.e., the overdisp_fun() that you used). Further, since farm has three levels, it is appropriate to just treat it as fixed (especially if you don't really care about the differences between farms). In this case, you test the inclusion of the fixed levels, and if they do not improve fit then you can discard them. Make sure that you are examining the AIC and BIC to help with model construction.
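A minimal sketch of that AIC/BIC comparison. The names (outcome, trt, tree, farm, dat) and the Poisson family are placeholders, since the original model is not shown here.

    library(lme4)
    # Model without and with the three farm levels as fixed effects;
    # the random intercept for tree encodes the nesting described above.
    fit_no_farm   <- glmer(outcome ~ trt + (1 | tree), family = poisson, data = dat)
    fit_with_farm <- glmer(outcome ~ trt + farm + (1 | tree), family = poisson, data = dat)
    AIC(fit_no_farm, fit_with_farm)
    BIC(fit_no_farm, fit_with_farm)
    anova(fit_no_farm, fit_with_farm)   # likelihood-ratio test of the fixed farm levels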
50,903
Word2Vec models for irrelevant word order
1. The simplest idea would be to copy the tags several times with different orderings, to make the data (sort of) invariant to permuting tags. In computer vision people do something similar to achieve translation invariance. You could also use node2vec if you have a meaningful graph structure on your tags (for example a graph where two tags are adjacent if at least N documents are tagged with both of them).

2. As mentioned in comments, it makes more sense to average these word vectors. Also, maybe this article can help.
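A minimal base-R sketch of the first idea; the docs list and tag names below are made up for illustration.

    # Each document's tags are copied several times in shuffled order, so the
    # resulting "sentences" become (approximately) order-invariant training data.
    docs <- list(c("python", "pandas", "dataframe"),
                 c("statistics", "regression"))
    n_copies  <- 5
    augmented <- unlist(lapply(docs, function(tags) {
      replicate(n_copies, paste(sample(tags), collapse = " "))
    }))
    head(augmented)   # feed these lines to your word2vec training routine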
50,904
Word2Vec models for irrelevant word order
I would suggest you study word embeddings a bit. You need to understand the mathematical significance of going from a one-hot encoded space to a dense vector space. However, to answer your questions (based on what I understood):

1) Assuming you are talking about English, use some pre-trained Word2Vec model and simply map each tag to a vector of some fixed length, say 10 or 50.

2) I'm not sure what adding the vectors would yield. You might be better off pairing each of Doc 1's tags with each of Doc 2's tags and then adding up their cosine similarities. For example, if Doc 1 has 5 tags and Doc 2 has 6 tags, then 30 pairwise similarities can be computed and used.
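A small sketch of the pairwise-similarity idea in point 2; the embedding matrices A and B are random placeholders standing in for the tag vectors of two documents.

    # Cosine similarity between every tag vector of Doc 1 and every tag vector of Doc 2.
    set.seed(1)
    A <- matrix(rnorm(5 * 50), nrow = 5)    # 5 tags in Doc 1, 50-dimensional embeddings
    B <- matrix(rnorm(6 * 50), nrow = 6)    # 6 tags in Doc 2
    rownorm <- function(M) M / sqrt(rowSums(M^2))
    sims <- rownorm(A) %*% t(rownorm(B))    # 5 x 6 = 30 pairwise cosine similarities
    mean(sims)                              # one possible aggregate document similarity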
50,905
Fisher Information Inequality of a function of a random variable
I am not aware of any general rule to pass over to the Fisher information of the floor function, but in this case it is possible to solve the problem for the specific distribution. The easiest way to do this is to explicitly determine the distribution of $Y = \text{floor} (X)$ and then find the Fisher information for the discrete distribution. For each possible argument value of the floor $y = 0,1,2, ...$ we have: $$p_Y(y) = \mathbb{P}(y \leqslant X < y+1) = (1-e^{-\lambda(y+1)}) - (1-e^{-\lambda y}) = (1 - e^{-\lambda}) e^{-\lambda y}.$$ This is a discrete distribution, with expected value: $$\mathbb{E}(Y) = \sum_{y=0}^\infty y \cdot p_Y(y) = (1 - e^{-\lambda}) \sum_{y=0}^\infty y e^{-\lambda y} = \frac{e^{-\lambda}}{1-e^{-\lambda}}.$$ This distribution has score function: $$s_y(\lambda) \equiv \frac{\partial}{\partial \lambda} \ln p_Y(y | \lambda) = \frac{\partial}{\partial \lambda} ( \ln(1 - e^{-\lambda}) -\lambda y ) = \frac{e^{-\lambda}}{1-e^{-\lambda}} - y.$$ Since the expected score is zero, the Fisher information is given by: $$\begin{equation} \begin{aligned} I_Y(\lambda) &= \mathbb{E} \Big[ s_Y(\lambda)^2 \Big] \\[6pt] &= \sum_{y=0}^\infty \Big( \frac{e^{-\lambda}}{1-e^{-\lambda}} - y \Big)^2 p_Y(y) \\[6pt] &= \sum_{y=0}^\infty \Big( \frac{e^{-2\lambda}}{(1-e^{-\lambda})^2} - \frac{2e^{-\lambda}}{1-e^{-\lambda}} \cdot y + y^2 \Big) (1 - e^{-\lambda}) e^{-\lambda y} \\[6pt] &= \frac{e^{-2\lambda}}{1-e^{-\lambda}} \sum_{y=0}^\infty e^{-\lambda y} - 2 e^{-\lambda} \sum_{y=0}^\infty y e^{-\lambda y} + (1 - e^{-\lambda}) \sum_{y=0}^\infty y^2 e^{-\lambda y} \\[6pt] &= \frac{e^{-2\lambda}}{(1-e^{-\lambda})^2} - \frac{2 e^{-2\lambda}}{(1-e^{-\lambda})^2} + \frac{e^{-\lambda}(1+e^{-\lambda})}{(1-e^{-\lambda})^2} \\[6pt] &= \frac{1}{(1-e^{-\lambda})^2} \Bigg[e^{-2\lambda} - 2 e^{-2\lambda} + e^{-\lambda}(1+e^{-\lambda}) \Bigg] \\[6pt] &= \frac{1}{(1-e^{-\lambda})^2} \Bigg[e^{-2\lambda} - 2 e^{-2\lambda} + e^{-\lambda}+e^{-2\lambda} \Bigg] \\[6pt] &= \frac{e^{-\lambda}}{(1-e^{-\lambda})^2}. \\[6pt] \end{aligned} \end{equation}$$ We can now verify the inequality $I_Y(\lambda) \leqslant I_X(\lambda)$ directly. Using a bit of algebra, and expanding the exponentials using the Maclaurin series representation, we obtain: $$\begin{equation} \begin{aligned} I_Y(\lambda) = \frac{e^{-\lambda}}{(1-e^{-\lambda})^2} &= \frac{e^{-\lambda}}{1 - 2e^{-\lambda} + e^{-2\lambda}} \\[6pt] &= \Big( \frac{1 - 2e^{-\lambda} + e^{-2\lambda}}{e^{-\lambda}} \Big)^{-1}\\[6pt] &= (e^{\lambda} - 2 + e^{-\lambda})^{-1} \\[6pt] &= \Big( \sum_{k=0}^\infty \frac{\lambda^k}{k!} - 2 + \sum_{k=0}^\infty \frac{(-1)^k \lambda^k}{k!} \Big)^{-1} \\[6pt] &= \Big( 1 + \sum_{k=1}^\infty \frac{\lambda^k}{k!} - 2 + 1 + \sum_{k=1}^\infty \frac{(-1)^k \lambda^k}{k!} \Big)^{-1} \\[6pt] &= \Big( \sum_{k=1}^\infty \frac{\lambda^k}{k!} + \sum_{k=1}^\infty \frac{(-1)^k \lambda^k}{k!} \Big)^{-1} \\[6pt] &= \Big( 2 \sum_{k=1}^\infty \frac{\lambda^{2k}}{(2k)!} \Big)^{-1} \\[6pt] &= \Big( \sum_{k=1}^\infty \frac{\lambda^{2k}}{(2k)! / 2} \Big)^{-1} \\[6pt] &= \frac{1}{ \lambda^2 + \frac{\lambda^4}{12} + \frac{\lambda^6}{360} + \cdots} \\[6pt] &\leqslant \frac{1}{\lambda^2} = I_X(\lambda). \end{aligned} \end{equation}$$
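As a quick numerical sanity check of the closed form (not part of the derivation above), one can sum the squared score against the pmf of $Y$ in R and compare it with $e^{-\lambda}/(1-e^{-\lambda})^2$ and $1/\lambda^2$.

    lambda <- 0.7
    y <- 0:1000                                   # truncated support; the tail is negligible here
    p <- (1 - exp(-lambda)) * exp(-lambda * y)    # pmf of Y derived above
    score <- exp(-lambda) / (1 - exp(-lambda)) - y
    sum(score^2 * p)                              # Fisher information by direct summation
    exp(-lambda) / (1 - exp(-lambda))^2           # closed form I_Y(lambda)
    1 / lambda^2                                  # I_X(lambda), which is larger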
50,906
Testing variable importance in prediction
I have used the randomForest package in R several times, and it has functions to measure variable importance such as importance() and varImpPlot(). As far as I know, varImpPlot visualizes the importance of each predictor with respect to the variable's contribution to the decrease in an error measure (e.g., mean squared error for regression, the Gini index for classification, etc.).

What I usually do to measure variable importance in a really simple way is estimate linear and lasso regressions and then see how much the coefficients were shrunk.

    library(MASS)
    library(randomForest)
    library(glmnet)
    data(Boston)

    # Random forest (based on the lab example from the book
    # "An Introduction to Statistical Learning")
    rf.boston <- randomForest(medv ~ ., data = Boston, mtry = 13, ntree = 25,
                              importance = TRUE)
    importance(rf.boston)

                %IncMSE IncNodePurity
    crim     2.80848132    1340.74773
    zn       2.12135233      34.74243
    indus    1.49676063     270.12398
    chas     0.06577971      31.42226
    nox      7.42985606    1381.58615
    rm      13.41323143   18128.73241
    age      6.28896854     487.95644
    dis      7.08361676    2621.61526
    rad      1.71445398     128.88846
    tax      6.48150760     557.31305
    ptratio  5.24860362     660.97934
    black    2.00139088     562.16876
    lstat    9.26159315   16553.01919

    varImpPlot(rf.boston)

    # Linear model
    lm.boston <- lm(formula = medv ~ ., data = Boston)

    # Lasso
    optim.lambda <- cv.glmnet(x = as.matrix(Boston[, -14]),
                              y = as.vector(Boston[, 14]))$lambda.1se
    lasso.boston <- glmnet(x = as.matrix(Boston[, -14]),
                           y = as.vector(Boston[, 14]), lambda = optim.lambda)

    sum.abs <- abs(coef(lasso.boston)[-1]) / abs(coef(lm.boston)[-1])
    sum.abs <- sum.abs[order(sum.abs, decreasing = F)]
    barplot(sum.abs, horiz = T, col = "red", las = 2)

    par(mfrow = c(2, 1))
    rf.boston <- randomForest(medv ~ ., data = Boston, mtry = 13, ntree = 25,
                              importance = FALSE)
    varImpPlot(rf.boston, main = "Variable importance (Random forest)")
    barplot(sum.abs, horiz = T, col = "red", las = 2,
            main = "Variable importance (Lasso)")

And a quick comparison for both: as far as I understand, the lasso approach could be a bit problematic when predictors are correlated. You can see that the rm variable has a larger lasso coefficient than the linear one.
50,907
Measuring the bias-variance tradeoff
$EEPSE = EE(\hat{f}(x_0)-y_0)^2$ is the average (or "expected", over different predictions $\hat{f}(x_0)$ coming from different datasets) expected prediction squared error (for a particular $\hat{f}(x_0)$ over many test points $y_0$, at $x_0$).

To measure the bias-variance trade-off in a concrete example with concrete values, consider e.g. a population with three equally likely values: $0$, $0.5$, and $1$ (the population itself is infinite). Now consider random datasets that consist of one observation from the population. There will be 3 types of datasets, which occur with the same (1/3) frequency. These datasets have observations "$0$", "$0.5$", and "$1$".

Now consider models of the type: "The prediction for the next observation is equal to the average in the dataset times alpha". The average in the dataset is equal to the observation in the dataset, as we have just one observation per dataset.

To measure the bias-variance tradeoff among different models (one model per "alpha"), plot the squared bias $[E(\hat{f}(x_0))-\mu_0]^2$, the variance $E(\hat{f}(x_0) - E(\hat{f}(x_0)))^2$, their sum, and their sum plus noise (which is the $EE(\hat{f}(x_0)-y_0)^2$). The "noise" is equal to $var(y_0)$.

What you will get is that the optimal alpha is $0.6$. Also, you will see that the unbiased model (alpha $= 1$) can never be optimal, as the slope of the EEPSE is positive at alpha $= 1$ (see the figure). This picture is true in general: there does not exist an unbiased model that is optimal. The only two exceptions are: the dataset is infinite (and equal to the population), or the population consists of one member only. In these two corner cases (not practical) alpha $= 1$ is optimal, as then $var(\hat{f}(x_0)) = 0$.

The optimal alpha is $$\frac{\mu_0^2}{\mu_0^2 + var(\hat{f}(x_0))} = \frac{0.5^2}{0.5^2 + var([0, 0.5, 1])} = 0.6,$$ which is always between $0$ and $1$, where $\hat{f}(x_0)$ is the prediction from the unbiased model "The prediction for the next observation is equal to the average in the dataset". The optimal model is: the prediction for the next observation is equal to the average in the dataset times 0.6.

This result may look "strange". It means that if we have one dataset and it consists of (the single) observation "1", our prediction for future observations should be 1 $\times$ alpha = 0.6; if we have the observation "0.5", our prediction should be 0.5 $\times$ 0.6 = 0.3; and if we have the observation "0", our prediction should be 0 $\times$ 0.6 = 0. This model results in a smaller $EEPSE$ than the model "the prediction for the next observation is equal to the average in the dataset".

Note that the "average in the population" results in the smallest $EEPSE$ (0.166667). So, we would like to predict "the average in the population". Alas, the "average in the dataset" is not the best prediction for the "average in the population". Multiplying an unbiased model by alpha in this case is called penalization. Note that the optimal alpha is in general unknown, as one needs to know the true $\mu_0$ in order to compute it. Still, the point to stress is that the optimal alpha is $< 1$. Even if a small penalization is used (alpha very close to $1$), we will still have a smaller test error than with the unbiased model.
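A short R sketch that reproduces these numbers under the setup described above (three equally likely values, one-observation datasets, prediction = alpha times the dataset average):

    vals  <- c(0, 0.5, 1)
    mu0   <- mean(vals)                    # 0.5
    v     <- mean((vals - mu0)^2)          # population variance = 1/6
    alpha <- seq(0, 1.2, by = 0.01)
    bias2    <- (alpha * mu0 - mu0)^2      # squared bias of alpha * (dataset average)
    variance <- alpha^2 * v                # its variance (datasets of size 1)
    eepse    <- bias2 + variance + v       # plus irreducible noise var(y_0)
    alpha[which.min(eepse)]                # 0.6, as claimed
    min(eepse)                             # about 0.267, below the 1/3 obtained at alpha = 1
    mu0^2 / (mu0^2 + v)                    # the closed-form optimum, also 0.6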
50,908
Measuring the bias-variance tradeoff
$E(y_0-\hat{f}(x_0))^2 = Var(\hat{f}(x_0)) + Bias(\hat{f}(x_0))^2 + Var(\epsilon)$

where

- $E(y_0-\hat{f}(x_0))^2$ is the expected test MSE,
- $Var(\hat{f}(x_0))$ is the variance of the fitted model,
- $Bias(\hat{f}(x_0))^2$ is the squared bias of the model,
- $Var(\epsilon)$ is the variance of the error terms.

Source: James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An Introduction to Statistical Learning (Vol. 112). New York: Springer.
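A small simulation sketch illustrating the decomposition, under assumed ingredients (f(x) = sin(x), a straight-line fit, sigma = 0.3) that are not from the book:

    set.seed(1)
    f  <- function(x) sin(x)
    x0 <- 2; sigma <- 0.3; n <- 30; reps <- 5000
    # Refit the model on many training sets and record its prediction at x0.
    fhat_x0 <- replicate(reps, {
      x <- runif(n, 0, 3)
      y <- f(x) + rnorm(n, sd = sigma)
      predict(lm(y ~ x), newdata = data.frame(x = x0))
    })
    y0 <- f(x0) + rnorm(reps, sd = sigma)                    # independent test responses at x0
    mean((y0 - fhat_x0)^2)                                   # expected test MSE (Monte Carlo)
    var(fhat_x0) + (mean(fhat_x0) - f(x0))^2 + sigma^2       # Var + Bias^2 + Var(eps)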
50,909
How do I show that the sample median minimizes the sum of absolute deviations? [duplicate]
When all of the $x_i$ are distinct, this is easy. If $$f(m) = \sum_{i=1}^{n} |x_i - m|$$ then $$f'(m) = \sum_{i=1}^{n} {\rm sign}(x_i - m),$$ which equals zero when there are an equal number of elements of $x_1, \ldots, x_n$ above and below $m$, which is the definition of the median, $m^{\star}$. As a function of $m$, $f$ is decreasing on $(-\infty, m^{\star})$ and increasing on $(m^{\star}, \infty)$, so $m^{\star}$ is a minimizer.

Note: If $n$ is even, the sample median is not necessarily uniquely defined. In that case, the "right" point estimate of the median is debatable. So $m^{\star}$ could be taken to be any value in the open interval where $f'$ is zero (a common convention is taking the midpoint; e.g., see what happens when you type median(1:4) into R, which gives 2.5), but the basic logic of what I wrote above shows it would still minimize the sum of absolute deviations. So would any value between 2 and 3 in that example.
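A quick numerical illustration of both points in R (an even-sized toy sample, so the minimizer is a whole interval):

    x <- c(1, 2, 3, 4)
    f <- function(m) sum(abs(x - m))
    m_grid <- seq(0, 5, by = 0.01)
    m_grid[which.min(sapply(m_grid, f))]   # a value in [2, 3]; median(x) returns 2.5
    c(f(2), f(2.5), f(3), f(1.5))          # the first three are equal (4), the last is larger (5)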
50,910
Half-normal probability plot
There are many different "finite-sample/discrete-sample" corrections; see for example here for the normal probability plot (not the half-normal). Looking at the Kutner et al. book, I see that they "link" their half-normal probability plot formula to the following formula for the normal PP: $$\Phi^{-1}\left(\frac{k-0.375}{n+0.25}\right)$$ This, I read, is the formula used in the book "Blom, G. (1958), Statistical Estimates and Transformed Beta Variables, New York: John Wiley and Sons", and it is also the one used in the software program Minitab. So I would suggest contacting the Minitab people (Minitab also has a half-normal PP feature) and asking them "what formula did you use for the half-normal plot, and where did you get it?" I also see that McCullagh & Nelder use the same expression as above for the normal plot, while offering a different expression for the half-normal plot. Nelder has passed away, but Peter McCullagh is still active, so one could always contact him directly for advice.
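For what it's worth, the Blom positions quoted above are easy to compute directly; here is a sketch on simulated data (the data and plot details are mine, not Kutner's):

    set.seed(1)
    x <- rnorm(40)
    n <- length(x)
    k <- seq_len(n)
    blom <- qnorm((k - 0.375) / (n + 0.25))   # Phi^{-1}((k - 0.375) / (n + 0.25))
    plot(blom, sort(x),
         xlab = "Theoretical quantiles (Blom positions)", ylab = "Ordered data")
    abline(lm(sort(x) ~ blom))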
50,911
The expected long run proportion of time the chain spends at $a$, given that it starts at $c$
"What is the expected long run proportion of time the chain spends at $a$, given that it starts at $b$?"

This exercise, technically, asks for the limiting probability value $\ell_b(a)$. You can note that the limiting distribution $\ell_b = \left(\frac{5}{13}, \frac{8}{13}, 0, 0, 0\right)$ that you correctly evaluated is also a stationary distribution of the given matrix; a limiting distribution will always be stationary. That matrix, however, is reducible, and so it can have more than one stationary distribution. In fact, a second one is $(0, 0, 0, 0, 1)$.

Now, the question you asked in the title is about the limiting distribution $\ell_c$, and specifically about its first value $\ell_c(a)$: $$\ell_c(a) = \lim_{n \to \infty}P(X_n=a \mid X_0= c) = \frac{5}{7}\cdot\frac{5}{13} = \frac{25}{91}$$ If I understand correctly, this is a self-study question, so I will leave it to you to find the middle steps of this solution. Consider that $\ell_c$ has non-zero values only for $a$, $b$ and $e$; it is, in fact, a weighted sum of the two stationary distributions above (and therefore a stationary distribution as well).
50,912
Outliers in Linear Regression that ONLY revert significance
I am on the side of saying this is merely another way of understanding deletion diagnostics. Perturbations of a point are closely related to that point's influence function, which is also estimated by deletion diagnostics. Typically the DFBETAs are used to show deletion diagnostics, but they can be scaled to standardized DFBETAs and then compared to critical values for their approximate $N(0,1)$ distribution. In terms of perturbations of a point, perturbing it to conform with the line of best fit obtained by deleting that point has essentially the same impact on the regression and on inference as deleting that point altogether. It is worth noting further that, for linear regression, any point not lying on the centroid of the predictor scale can be perturbed arbitrarily to achieve a particular slope or p-value. So we are at a loss to conceptualize what a meaningful difference or perturbation might be, aside from reproducing diagnostics that are already established.
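For concreteness, a minimal sketch of the standardized deletion diagnostics mentioned above, on simulated data; the 2/sqrt(n) cutoff is a common rule of thumb rather than anything specific to this question.

    set.seed(1)
    n <- 50
    x <- rnorm(n)
    y <- 1 + 2 * x + rnorm(n)
    y[n] <- y[n] + 6                      # perturb one point so it becomes influential
    fit <- lm(y ~ x)
    dfb <- dfbetas(fit)                   # standardized change in each coefficient when a point is deleted
    head(dfb)
    which(abs(dfb[, "x"]) > 2 / sqrt(n))  # observations with outsized influence on the slope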
50,913
How to analyse growth rate in R?
It would appear to me that you have daily time series data where you are measuring two characteristics. I would initially suggest using augmented ARIMA methods incorporating both memory and possible dummy indicators reflecting pulses, level/step shifts, and/or time trends. Secondly, it is possible to additionally relate these two series using a multivariate extension of ARIMA called transfer functions, while investigating the impact of some of your user-suggested explanatory variables. If you wish to post data on one of your subjects showing both characteristics, I might be able to help further. You can search SE for DAILY DATA or ARIMA to learn more about time series analysis. Model forecasts can always be converted to expected growth rates.
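A hypothetical sketch of the ARIMA-with-regressors idea using the forecast package; the series and the pulse location below are simulated placeholders, not your data.

    library(forecast)
    set.seed(1)
    size  <- cumsum(rnorm(200, mean = 0.1))          # placeholder for the second measured characteristic
    y     <- 5 + 0.3 * size + arima.sim(list(ar = 0.6), n = 200)
    pulse <- as.numeric(seq_along(y) == 120)         # example pulse (intervention) dummy
    fit   <- auto.arima(y, xreg = cbind(size, pulse))
    summary(fit)                                     # ARIMA errors plus regression effects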
50,914
Tutorial for feature extraction on unsupervised learning
- A nice reference is Dimensionality Reduction: A Short Tutorial by Ali Ghodsi. It covers PCA, Locally Linear Embedding, Multidimensional Scaling and Isomap.
- Dan Ventura provides us with some nice worked examples of manifold learning, specifically PCA, LLE and ISOMAP.
- Kilian Weinberger has a nice web page devoted to manifold learning.
- There is a high-level overview of Feature Engineering at Machine Learning Mastery that also has some references.
- Lawrence Cayton has an overview paper on Algorithms for Manifold Learning.
- Even though it is mostly about supervised feature extraction, I hate to omit mention of the work of Isabelle Guyon. She has a nice paper, An Introduction to Variable and Feature Selection, slides from a KDD tutorial, and her book on Feature Extraction.

All links checked as of 18 Jan 2017.
50,915
Tutorial for feature extraction on unsupervised learning
There are definitely a lot of blogs, explorations and tutorials out there, though unfortunately I can't point to a specific one. If you want an explanation, check the Udacity Georgia Tech ML course; it has a section about PCA/ICA (and maybe NMF).

You can stack them. Sometimes PCA is run first to reduce dimensions so that ICA/NMF doesn't have to do as much work. Running ICA/NMF first and then PCA probably wouldn't make much sense.

Yes, you can train them on only a subset of the data. The implementation should have predict and fit functions (or something similar).
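A rough sketch of both points (fitting the transform on a subset, then stacking), assuming the fastICA package; the data are random placeholders.

    library(fastICA)
    set.seed(1)
    X <- matrix(rnorm(1000 * 20), nrow = 1000)
    colnames(X) <- paste0("V", 1:20)
    train <- sample(1000, 300)                           # fit the transform on a subset only
    pca   <- prcomp(X[train, ], center = TRUE, scale. = TRUE)
    scores_rest <- predict(pca, newdata = X[-train, ])   # project the remaining rows
    ica <- fastICA(pca$x[, 1:5], n.comp = 3)             # ICA stacked on the leading PCA scores
    str(ica$S)                                           # the independent components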
50,916
Tutorial for feature extraction on unsupervised learning
I personally think feature extraction in unsupervised learning is not well defined. If there is no ground-truth label in the data, what is the goal of feature extraction, i.e., how do we know whether a derived feature is good or bad? We can derive new features from the data in any number of ways but will not know whether the new features are good. The classical way of doing feature engineering includes basis expansion, where we can select different bases, e.g., a polynomial basis, a Fourier basis, etc. Examples can be found here: What's wrong to fit periodic data with polynomials?
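A small sketch of the basis-expansion idea on simulated data (supervised here, precisely because without labels there is no criterion to judge the derived features):

    set.seed(1)
    x <- seq(0, 10, length.out = 200)
    y <- sin(x) + 0.1 * x + rnorm(200, sd = 0.2)
    poly_fit    <- lm(y ~ poly(x, 5))                                  # polynomial basis
    fourier_fit <- lm(y ~ sin(x) + cos(x) + sin(2 * x) + cos(2 * x))   # Fourier basis
    AIC(poly_fit, fourier_fit)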
50,917
Neural networks - how can I interpret what a hidden layer is doing to my data?
From my own understanding, it is notoriously difficult to infer useful things from the weights of a complex neural network. However, with 10 digits, you could play around a bit. Don't look just at the hidden-layer neuron with the strongest weight for 3; look at the top few. Then look at those hidden-layer neurons' input weights, without multiplying them by an input image. That shows you which regions of the input image each neuron "scans" for.

Now think heuristically about how such a neural network will work: it has to recognize broad features that make a 3 a 3 and discourage features that make it not a 3. So, positive weight for the arches, negative weight for a closed top loop (because that could be a hastily written 9 instead). The problem is that it gets quite complicated quite fast. As a simpler exercise, you could try to build a smaller neural network that just distinguishes between two relatively dissimilar shapes and build your intuition from there.
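A hypothetical sketch of "look at a hidden unit's input weights": W is assumed to be a 784 x n_hidden input-to-hidden weight matrix for 28 x 28 images; neither W nor these names come from the question.

    visualize_unit <- function(W, j) {
      w <- matrix(W[, j], nrow = 28, ncol = 28)   # reshape the unit's input weights to an image
      image(w, axes = FALSE,                      # orientation may need flipping depending on storage
            main = paste("Input weights of hidden unit", j))
    }
    W_fake <- matrix(rnorm(784 * 30), nrow = 784) # random weights, just to show the mechanics
    visualize_unit(W_fake, 1)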
50,918
Comparing differences of AIC of different data sets
The AIC criterion scales with the overall size of the dataset, and this is true for differences in AIC values as well. The criterion is based on the relationship $$ -2 \, \mathrm{E}[\log \mathrm{Pr}_{\hat \theta}(Y)] \approx -\frac{2}{N} \, \mathrm{E}[\mathrm{loglik}] + \frac{2d}{N} $$ where $d$ is the number of parameters in the likelihood function being maximized (Elements of Statistical Learning, equation 7.27). The term on the left is the expected out-of-sample "error" rate, using the log of the probability as the error metric. The right-hand side consists of the in-sample error rate estimated from the maximized log-likelihood, plus the term $2d/N$ correcting for the optimism of the maximized log-likelihood. The most important factor here is the $N$ in the denominator on the right-hand side.

The AIC is typically defined as $$ \mathrm{AIC} = -2 \, \mathrm{loglik} + 2d $$ (although the ESL textbook adds a $1/N$ factor). In this form, the AIC predicts $N$ times the out-of-sample error rate. To compare AIC differences from two samples, you should divide the AIC values by the sample size so that they are on equal terms.
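A toy sketch of the per-observation comparison (simulated data, two sample sizes):

    set.seed(1)
    d1 <- data.frame(x = rnorm(100));  d1$y <- 1 + 0.5 * d1$x + rnorm(100)
    d2 <- data.frame(x = rnorm(1000)); d2$y <- 1 + 0.5 * d2$x + rnorm(1000)
    delta1 <- AIC(lm(y ~ 1, data = d1)) - AIC(lm(y ~ x, data = d1))
    delta2 <- AIC(lm(y ~ 1, data = d2)) - AIC(lm(y ~ x, data = d2))
    c(delta1, delta2)                         # raw AIC differences grow with N
    c(delta1 / nrow(d1), delta2 / nrow(d2))   # per-observation differences are comparable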
50,919
Can you simultaneously fit logistic and ordinal logistic regression models?
This seems analogous to mixture models. These are commonly used when dealing with zero-inflated data, where one component of the model deals with the probability of getting a zero vs. a non-zero result, while a second component deals with the remaining variation in the data (usually counts). The second component can also model zeroes, so the first component basically accounts for the 'excess zeroes' in the data. I imagine such an approach could be adapted to dealing with ordinal data such as yours, but I don't know this for certain.
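A hypothetical two-part sketch along these lines: a logistic model for "lowest category vs the rest", then an ordinal model on the remaining categories. The variables (score, x, dat) and the MASS::polr choice are placeholders, not something from the question.

    library(MASS)
    set.seed(1)
    dat <- data.frame(x = rnorm(300))
    dat$score <- cut(dat$x + rnorm(300),
                     breaks = c(-Inf, -0.5, 0.3, 1, Inf),
                     labels = c("none", "low", "medium", "high"),
                     ordered_result = TRUE)
    part1 <- glm(I(score != "none") ~ x, family = binomial, data = dat)   # any response vs none
    part2 <- polr(droplevels(score) ~ x, data = subset(dat, score != "none"),
                  Hess = TRUE)                                            # severity, given a response
    summary(part1)
    summary(part2)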
50,920
Resources for building explanatory regression models?
One major difference between prediction and explanation is that it's hard to interpret interactions of order higher than two, even if they may be important for a better prediction. In addition, you may wish to restrict the model pool to hierarchical models only (if A*B is included, then both main effects, A and B, should be included). Other than that, you can use any method you want (stepwise, forward selection, backward elimination, AIC, etc). I find that combining backward elimination with AIC works fine: you drop the term with the largest p-value until you find the model with the lowest AIC. To interpret AIC values correctly, look up how Akaike weights are defined and calculate them for two or more competing models. Another possibility is to use CART which is also interpretable if you limit yourself to a single tree. That way, you can essentially consider higher order interactions without turning it into a black box.
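As an illustration (simulated data, not tied to any particular application), backward elimination guided by AIC and the Akaike weights mentioned above might look like this in R:
set.seed(7)
d <- data.frame(a = rnorm(200), b = rnorm(200), c = rnorm(200))
d$y <- 1 + 2 * d$a + 0.5 * d$a * d$b + rnorm(200)

full <- lm(y ~ a * b + c, data = d)                        # hierarchical: a*b implies a and b
reduced <- step(full, direction = "backward", trace = 0)   # drops terms while AIC improves

# Akaike weights: relative support for a small set of competing models
models <- list(full = full, reduced = reduced, small = lm(y ~ a, data = d))
aic <- sapply(models, AIC)
delta <- aic - min(aic)
round(exp(-delta / 2) / sum(exp(-delta / 2)), 3)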
Resources for building explanatory regression models?
One major difference between prediction and explanation is that it's hard to interpret interactions of order higher than two, even if they may be important for a better prediction. In addition, you ma
Resources for building explanatory regression models? One major difference between prediction and explanation is that it's hard to interpret interactions of order higher than two, even if they may be important for a better prediction. In addition, you may wish to restrict the model pool to hierarchical models only (if A*B is included, then both main effects, A and B, should be included). Other than that, you can use any method you want (stepwise, forward selection, backward elimination, AIC, etc). I find that combining backward elimination with AIC works fine: you drop the term with the largest p-value until you find the model with the lowest AIC. To interpret AIC values correctly, look up how Akaike weights are defined and calculate them for two or more competing models. Another possibility is to use CART which is also interpretable if you limit yourself to a single tree. That way, you can essentially consider higher order interactions without turning it into a black box.
Resources for building explanatory regression models? One major difference between prediction and explanation is that it's hard to interpret interactions of order higher than two, even if they may be important for a better prediction. In addition, you ma
50,921
Linear Regression and cost per employee vs total cost
"As my boss was interested in determining if economies of scale were present for the cost per end user data..." I guess this means whether cost per user goes down as the number of users increases. You start by assuming something like $$TC_i = B_0U_i^{\beta_1}e^{v_i} = f(U_i)$$ which to a degree is validated as a postulated specification because it provides a "good" descriptive relation at log-log level. Then note that $f(U_i)$ is a homogeneous function in $U_i$, with degree of homogeneity $\beta_1$, because $$f(\alpha U_i) = \alpha^{\beta_1}f(U_i)$$ The degree of homogeneity and economies of scale are closely linked. Here, if $\beta_1 < 1$ we get "cost economies of scale": if the number of users doubles, total cost won't double and so average cost will go down. So you can use the regression specification with Total Cost as the dependent variable, and see what estimate and what confidence interval you get on $\hat \beta_1$.
Linear Regression and cost per employee vs total cost
"As my boss was interested in determining if economies of scale were present for the cost per end user data... I guess this means whether cost per user goes down as number of users increases. You sta
Linear Regression and cost per employee vs total cost "As my boss was interested in determining if economies of scale were present for the cost per end user data... I guess this means whether cost per user goes down as number of users increases. You start by assuming something like $$TC_i = B_0U_i^{\beta_1}e^{v_i} = f(U_i)$$ which to a degree is validated as a postulated specification because it provides a "good" descriptive relation at log-log level. Then note that $f(U_i)$ is a homogeneous function in $U_i$, with degree of homogeneity $\beta_1$, because $$f(\alpha U_i) = \alpha^{\beta_i}f(U_i)$$ The degree of homogeneity and economies of scale are closely linked. Here if $\beta_i < 1$ we get "cost economies of scale": if number of users doubles, total cost won't double and so average cost will go down. So you can use the regression specification with Total Cost as the dependent variable, and see what estimate and what confidence interval you get on $\hat \beta_1$.
Linear Regression and cost per employee vs total cost "As my boss was interested in determining if economies of scale were present for the cost per end user data... I guess this means whether cost per user goes down as number of users increases. You sta
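As a hedged illustration of the check suggested in the answer above (simulated data and invented column names), the degree of homogeneity $\beta_1$ can be estimated with a log-log regression and its confidence interval compared against 1:
set.seed(3)
users <- round(runif(80, 50, 5000))
total_cost <- exp(2 + 0.8 * log(users) + rnorm(80, sd = 0.2))   # fake cost data
dat <- data.frame(users, total_cost)

fit <- lm(log(total_cost) ~ log(users), data = dat)
coef(fit)["log(users)"]       # point estimate of beta_1
confint(fit, "log(users)")    # an interval entirely below 1 suggests cost economies of scale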
50,922
Handling of categorical variables: rpart vs tree
Partially answered in comments: I don't know the full reason, but CART uses a trick to reduce the number of splits considered. For regression, the levels of a categorical predictor are replaced by mean of the outcome; for binary responses, levels are replaced by the proportion of outcomes in class 1 (see Elements of Statistical Learning book or link for reason). For categorical predictors, there are some approximations. I don't know why randomForest caps this at 32. – Peter Calhoun For some alternative ideas see Random Forest Regression with sparse data in Python
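A toy R illustration of the trick described in that comment (my own, for a 0/1 outcome): order the levels of the categorical predictor by their proportion of class 1, so only L-1 ordered splits need to be examined rather than all 2^(L-1) - 1 subsets.
set.seed(11)
x <- factor(sample(LETTERS[1:6], 300, replace = TRUE))
p <- c(A = .1, B = .7, C = .4, D = .9, E = .2, F = .5)
y <- rbinom(300, 1, p[as.character(x)])

prop1 <- tapply(y, x, mean)   # proportion of class 1 within each level
names(sort(prop1))            # levels re-ordered by that proportion
# Each cut point along this ordering corresponds to one candidate level-subset split.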
Handling of categorical variables: rpart vs tree
Partially answered in comments: I don't know the full reason, but CART uses a trick to reduce the number of splits considered. For regression, the levels of a categorical predictor are replaced by me
Handling of categorical variables: rpart vs tree Partially answered in comments: I don't know the full reason, but CART uses a trick to reduce the number of splits considered. For regression, the levels of a categorical predictor are replaced by mean of the outcome; for binary responses, levels are replaced by the proportion of outcomes in class 1 (see Elements of Statistical Learning book or link for reason). For categorical predictors, there are some approximations. I don't know why randomForest caps this at 32. – Peter Calhoun For some alternative ideas see Random Forest Regression with sparse data in Python
Handling of categorical variables: rpart vs tree Partially answered in comments: I don't know the full reason, but CART uses a trick to reduce the number of splits considered. For regression, the levels of a categorical predictor are replaced by me
50,923
Which is the best graph to describe a survival analysis with a time-dependent covariate?
The patients at risk in a survival analysis must start time at risk at entry into the risk set. For a time-dependent covariate, time zero is the time when the at-risk unit transfers from the initial to the new status. Thus time at risk in this new state begins at zero and everyone transferring is alive by definition and survival = 1.0 at time zero. The time of transfer in a time-dependent covariate analysis is usually arbitrary. If time at transfer to a new group is fixed, this suggests a possible crossover trial, where crossover time is specified, or some rare biologic condition where the timing of group transfer is uniform. Regardless, the best way to graph this new group assignment is to start all units at time 0 with survival at 1.0. For example, in the Stanford transplant data, transplant occurs at a random time and it makes most sense to plot two curves beginning at time 0 and survival = 1.0: 1) all patients waiting, 2) those patients transplanted. Using the landmark of the transplant time is not possible, because the times are non-uniform. Shifting to a shared landmark time is possible, if the time is uniform, but the curve shifted to the right seems to imply implausible survival functions at plausible times at risk following transfer to the new at-risk group.
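For reference, the Stanford example mentioned above ships with the R survival package in counting-process form; a minimal sketch of handling the time-dependent covariate in a model (as opposed to the plotting question itself) is:
library(survival)
data(heart)   # Stanford heart transplant data, one row per covariate period
head(heart)   # columns start, stop, event, transplant, age, ...
# transplant switches from 0 to 1 in the row beginning at each patient's transplant time
fit <- coxph(Surv(start, stop, event) ~ transplant + age + surgery, data = heart)
summary(fit)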
Which is the best graph to describe a survival analysis with a time-dependent covariate?
The patients at risk in a survival analysis must start time at risk at entry into the risk set. For a time-dependent covariate, time zero is the time when the at risk unit transfers from the initial t
Which is the best graph to describe a survival analysis with a time-dependent covariate? The patients at risk in a survival analysis must start time at risk at entry into the risk set. For a time-dependent covariate, time zero is the time when the at risk unit transfers from the initial to the new status. Thus time at risk in this new state begins at zero and everyone transferring is alive by definition and survival = 1.0 at time zero. The time of transfer in a time-dependent covariate analysis is usually arbitrary. If time at transfer to a new group is fixed, this suggests a possible crossover trial, where crossover time is specified, or some rare biologic condition where timing of group transfer is uniform. Regardless, the best way to graph this new group assignment is to start all units at tome 0 with survival at 1.0. For example, in the Stanford transplant data, transplant occurs at a random time and it makes most sense to plot two curves beginning at time 0 and survival = 1.0: 1) all patients waiting, 2) those patients transplanted. Using the landmark of the transplant time is not possible- the times are non-uniform. Shifting to a shared landmark time is possible, if the time is uniform, but the curve shifted to the right seems to imply implausible survival functions at plausible times at risk following transfer to the new at risk group.
Which is the best graph to describe a survival analysis with a time-dependent covariate? The patients at risk in a survival analysis must start time at risk at entry into the risk set. For a time-dependent covariate, time zero is the time when the at risk unit transfers from the initial t
50,924
Are there any differences between Approximate Dynamic programming and Adaptive dynamic programming
They both refer to the same thing. Reference: F. -Y. Wang, H. Zhang and D. Liu, "Adaptive Dynamic Programming: An Introduction," in IEEE Computational Intelligence Magazine, vol. 4, no. 2, pp. 39-47, May 2009, doi: 10.1109/MCI.2009.932261.
Are there any differences between Approximate Dynamic programming and Adaptive dynamic programming
They both refer to the same thing. Reference: F. -Y. Wang, H. Zhang and D. Liu, "Adaptive Dynamic Programming: An Introduction," in IEEE Computational Intelligence Magazine, vol. 4, no. 2, pp. 39-47,
Are there any differences between Approximate Dynamic programming and Adaptive dynamic programming They both refer to the same thing. Reference: F. -Y. Wang, H. Zhang and D. Liu, "Adaptive Dynamic Programming: An Introduction," in IEEE Computational Intelligence Magazine, vol. 4, no. 2, pp. 39-47, May 2009, doi: 10.1109/MCI.2009.932261.
Are there any differences between Approximate Dynamic programming and Adaptive dynamic programming They both refer to the same thing. Reference: F. -Y. Wang, H. Zhang and D. Liu, "Adaptive Dynamic Programming: An Introduction," in IEEE Computational Intelligence Magazine, vol. 4, no. 2, pp. 39-47,
50,925
Recursive neural network implementation in TensorFlow
Take a look at Tensorflow Fold: https://github.com/tensorflow/fold TensorFlow Fold is a library for creating TensorFlow models that consume structured data, where the structure of the computation graph depends on the structure of the input data.
Recursive neural network implementation in TensorFlow
Take a look at Tensorflow Fold: https://github.com/tensorflow/fold TensorFlow Fold is a library for creating TensorFlow models that consume structured data, where the structure of the computation gra
Recursive neural network implementation in TensorFlow Take a look at Tensorflow Fold: https://github.com/tensorflow/fold TensorFlow Fold is a library for creating TensorFlow models that consume structured data, where the structure of the computation graph depends on the structure of the input data.
Recursive neural network implementation in TensorFlow Take a look at Tensorflow Fold: https://github.com/tensorflow/fold TensorFlow Fold is a library for creating TensorFlow models that consume structured data, where the structure of the computation gra
50,926
Recursive neural network implementation in TensorFlow
These types of architectures are awkward in Tensorflow. I think someone did implement them though. http://www.kdnuggets.com/2016/06/recursive-neural-networks-tensorflow.html You might prefer to use DyNet, which was designed to allow for dynamically changing architectures. https://github.com/clab/dynet
Recursive neural network implementation in TensorFlow
These types of architectures are awkward in Tensorflow. I think someone did implement them though. http://www.kdnuggets.com/2016/06/recursive-neural-networks-tensorflow.html You might prefer to use Dy
Recursive neural network implementation in TensorFlow These types of architectures are awkward in Tensorflow. I think someone did implement them though. http://www.kdnuggets.com/2016/06/recursive-neural-networks-tensorflow.html You might prefer to use DyNet, which was designed to allow for dynamically changing architectures. https://github.com/clab/dynet
Recursive neural network implementation in TensorFlow These types of architectures are awkward in Tensorflow. I think someone did implement them though. http://www.kdnuggets.com/2016/06/recursive-neural-networks-tensorflow.html You might prefer to use Dy
50,927
Recursive neural network implementation in TensorFlow
I hope this is what you need: https://github.com/sapruash/RecursiveNN: Tensorflow implementation of Recursive Neural Networks using LSTM units as described in "Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks" by Kai Sheng Tai, Richard Socher, and Christopher D. Manning.
Recursive neural network implementation in TensorFlow
I hope this is what you need: https://github.com/sapruash/RecursiveNN: Tensorflow implementation of Recursive Neural Networks using LSTM units as described in "Improved Semantic Representations From
Recursive neural network implementation in TensorFlow I hope this is what you need: https://github.com/sapruash/RecursiveNN: Tensorflow implementation of Recursive Neural Networks using LSTM units as described in "Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks" by Kai Sheng Tai, Richard Socher, and Christopher D. Manning.
Recursive neural network implementation in TensorFlow I hope this is what you need: https://github.com/sapruash/RecursiveNN: Tensorflow implementation of Recursive Neural Networks using LSTM units as described in "Improved Semantic Representations From
50,928
Can any observation lie more than 3 SDs from the mean if there are 10 observations constrained between 0 and 1?
I believe you are correct. I think the general sketch of the proof comes from Cantelli's Lemma (which is related to Chebyshev's Inequality). Note that in our specific case, we get $$ P((X-E[X])/\sigma \geq k) \leq 1/(1+k^2)=1/10 \qquad\text{for k=3 in our case}$$ If we want a strict inequality for the difference, e.g. $k=3+\epsilon$ standard deviations for $\epsilon>0$, then the right hand side is strictly less than 1/10. But we have 10 observations, so each observation must have at least probability 1/10; hence we have a contradiction.
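A quick numerical check (an addition of mine) agrees: the largest possible sample z-score for $n$ observations is $(n-1)/\sqrt{n}$, attained when one observation differs from the other $n-1$ equal ones, and for $n=10$ this is about 2.846 < 3.
x <- c(1, rep(0, 9))            # one extreme point, nine equal points in [0, 1]
z <- (x - mean(x)) / sd(x)      # sd() uses the n - 1 denominator
max(abs(z))                     # about 2.846
(10 - 1) / sqrt(10)             # the theoretical maximum for n = 10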
Can any observation lie more than 3 SDs from the mean if there are 10 observations constrained betwe
I believe you are correct. I think the general sketch of the proof comes from Cantelli's Lemma (which is related to Chebyshev's Inequality). Note that in our specific case, we get $$ P((X-E[X])/\sigma
Can any observation lie more than 3 SDs from the mean if there are 10 observations constrained between 0 and 1? I believe you are correct. I think the general sketch of the proof comes from Cantelli's Lemma (which is related to Chebyshev's Inequality). Note that in our specific case, we get $$ P((X-E[X])/\sigma \geq k) \leq 1/(1+k^2)=1/10 \qquad\text{for k=3 in our case}$$ If we want a strict inequality for the difference, e.g. $k=3+\epsilon$ standard deviations for $\epsilon>0$, then the right hand side is strictly less than 1/10. But we have 10 observations, so each observation must have at least probability 1/10; hence we have a contradiction.
Can any observation lie more than 3 SDs from the mean if there are 10 observations constrained betwe I believe you are correct. I think the general sketch of the proof comes from Cantelli's Lemma (which is related to Chebyshev's Inequality). Note that in our specific case, we get $$ P((X-E[X])/\sigma
50,929
Reducing number of data points in excel while keeping the curve shape
I would do the following: The data very obviously follow a power law. Fit this non-linear model and find the N largest residuals. Re-estimate the model using a linear spline with a knot at each of those points. Output the predicted values and their inputs as a sequence of N points. This can be N=72 or any value you want (higher is better). You probably can't do this in Excel. In R, however, these models are covered elsewhere on SE and can be found by consulting ?nls, ?splines::bs, etc.
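A hedged R sketch of that workflow on simulated data (the names and the exact knot-selection rule are my reading of the steps above):
library(splines)
set.seed(5)
x <- seq(1, 500, length.out = 5000)
y <- 3 * x^0.6 + rnorm(length(x), sd = 2)     # stand-in for the dense curve

# 1. power-law fit
fit_pow <- nls(y ~ a * x^b, start = list(a = 1, b = 0.5))

# 2. knots at the interior inputs with the largest absolute residuals
N <- 70
inner <- which(x > min(x) & x < max(x))
knots <- sort(x[inner][order(abs(residuals(fit_pow))[inner], decreasing = TRUE)[1:N]])

# 3. re-estimate with a linear spline at those knots
fit_spl <- lm(y ~ bs(x, knots = knots, degree = 1))

# 4. reduced representation: the knots (plus endpoints) and the predictions there
grid <- c(min(x), knots, max(x))
reduced <- data.frame(x = grid, y = predict(fit_spl, newdata = data.frame(x = grid)))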
Reducing number of data points in excel while keeping the curve shape
I would do the following: The data very obviously follow a power law. Fit this non-linear model and find the highest N residuals. Re-estimate the model using a linear spline at each of the residuals.
Reducing number of data points in excel while keeping the curve shape I would do the following: The data very obviously follow a power law. Fit this non-linear model and find the highest N residuals. Re-estimate the model using a linear spline at each of the residuals. Output the predicted values and their inputs as a sequence of N points. This can be N=72 or any value you want (higher is better). You probably can't do this in Excel. R however... these models are covered elsewhere in SE and can be found by consulting ?nls, ?spline::bs, etc.
Reducing number of data points in excel while keeping the curve shape I would do the following: The data very obviously follow a power law. Fit this non-linear model and find the highest N residuals. Re-estimate the model using a linear spline at each of the residuals.
50,930
Reducing number of data points in excel while keeping the curve shape
You should try fitting your data with the equation below: data_y = data^a + b1*sin(c1*data) + b2*sin(c2*data) + b3*sin(c3*data) + b4*sin(c4*data)+d This is a combination of a power law and several trigonometric functions. These should be able to capture the characteristics of your data very well, and will reduce your data to only 10 parameters, while also keeping its general shape.
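If you want to try this in R, here is a hedged sketch with nls(), simplified to two sine terms; note that the data are simulated from the same form and the optimizer is started near the generating values, because with real data a fit like this is very sensitive to the starting values.
set.seed(9)
x <- seq(1, 100, by = 0.1)
y <- x^0.7 + 2 * sin(0.3 * x) + 1 * sin(0.9 * x) + 5 + rnorm(length(x), sd = 0.3)

fit <- nls(y ~ x^a + b1 * sin(c1 * x) + b2 * sin(c2 * x) + d,
           start = list(a = 0.7, b1 = 2, c1 = 0.3, b2 = 1, c2 = 0.9, d = 5))
coef(fit)   # the whole curve is now summarised by a handful of parameters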
Reducing number of data points in excel while keeping the curve shape
You should try fitting your data with the equation below: data_y = data^a + b1*sin(c1*data) + b2*sin(c2*data) + b3*sin(c3*data) + b4*sin(c4*data)+d This is a combination of a power law and several tr
Reducing number of data points in excel while keeping the curve shape You should try fitting your data with the equation below: data_y = data^a + b1*sin(c1*data) + b2*sin(c2*data) + b3*sin(c3*data) + b4*sin(c4*data)+d This is a combination of a power law and several trigonometric functions. These should be able to capture the characteristics of your data very well, and will reduce your data to only 10 parameters, while also keeping its general shape.
Reducing number of data points in excel while keeping the curve shape You should try fitting your data with the equation below: data_y = data^a + b1*sin(c1*data) + b2*sin(c2*data) + b3*sin(c3*data) + b4*sin(c4*data)+d This is a combination of a power law and several tr
50,931
Why iterative estimation of dispersion in negative binomial glm
It's not the same, but it can be close. In this example, the fitted intercept and exponent differ by less than $10^{-3}$ between the Poisson and the negative binomial model. The Poisson dispersion estimate is 1.08, while the negative binomial is 1.09.
R code:
library(ggplot2)
library(data.table)
library(MASS)

data <- data.table(x = 1:100)
data[, mu := 5 * x]                              # true mean
data[, y := rnbinom(100, mu = mu, size = 1)]     # simulate negative binomial counts

# Poisson fit and its fitted means
lmp <- glm(y ~ log(x), data = data, family = poisson(link = "log"))
data[, pred.poisson := predict(lmp, type = "response")]
disp.pois <- data[, theta.ml(y, mu, .N)]         # ML estimate of theta given the true means

# Negative binomial fit (theta estimated iteratively inside glm.nb)
lmnb <- glm.nb(y ~ log(x), data = data, link = "log")
data[, pred.nb := predict(lmnb, type = "response")]

ggplot(data) + geom_point(aes(x, y)) +
  geom_line(aes(x, mu, colour = "true mean")) +
  geom_line(aes(x, pred.poisson, colour = "poisson")) +
  geom_line(aes(x, pred.nb, colour = "nb"))
Why iterative estimation of dispersion in negative binomial glm
It's not the same but it can be close. In this example, offset and exponent are less than $10^{-3}$ apart between the poisson and negative binomial estimate. The Poisson dispersion estimate is 1.08, w
Why iterative estimation of dispersion in negative binomial glm It's not the same but it can be close. In this example, offset and exponent are less than $10^{-3}$ apart between the poisson and negative binomial estimate. The Poisson dispersion estimate is 1.08, while the negative binomial is 1.09 R code: library(ggplot2) library(data.table) library(MASS) data=data.table(x=1:100) data[,mu:=5*x] data[,y:=rnbinom(100, mu = mu, size=1)] lmp=glm(y~log(x),data=data,family=poisson(link="log")) data[,pred.poisson:=predict(lmp,type="response")] disp.pois=data[,theta.ml(y,mu,.N)] lmnb=glm.nb(y~log(x), data=data, link="log") data[,pred.nb:=predict(lmnb,type="response")] ggplot(data)+geom_point(aes(x,y))+geom_line(aes(x,mu,colour="true mean"))+ geom_line(aes(x,pred.poisson,colour="poisson"))+ geom_line(aes(x,pred.nb,colour="nb"))
Why iterative estimation of dispersion in negative binomial glm It's not the same but it can be close. In this example, offset and exponent are less than $10^{-3}$ apart between the poisson and negative binomial estimate. The Poisson dispersion estimate is 1.08, w
50,932
"Normalising" join probability of n events, by taking n-th root
To expand on my comment about your specific application, here's a practical example about deciding if normalization is appropriate. The example isn't about probabilities but it makes a point about normalization in general. Consider a school which has two children, one who is 10 years old and is 4ft tall, the other is 15 years old and is 5ft tall. You want to compare their success as basketball players by examining their heights. There are two example applications for this. You want to know who would be better for the school basketball team this year. The 15 year old kid is taller so he looks like a better player. Just like your probability example you might say "This is unfair, the 10 year old is shorter but he is tall for his age, we can't compare purely on height"; it's true that the 10 year old is tall for his age but that is irrelevant because you want a good basketball player for this year's school team. A second comparison you might do is to guess which child has a better chance of becoming an NBA star in his life. Neither of them can become a professional basketball player immediately so their immediate heights are not a good measure of their performance. Now it is valid to say "height is an unfair comparison, the 10 year old is tall for his age". You could normalise their heights by dividing their height by their age (or something more sophisticated) and this can give you a better measure of who will be taller when they play at a professional level. For your question about probabilities you might find that normalizing is a suitable way to compare the two probabilities, but if you just care about which event has a greater chance of occurring then it's not "unfair" on event $B$ at all.
"Normalising" join probability of n events, by taking n-th root
To expand on my comment about your specific application, here's a practical example about deciding if normalization is appropriate. The example isn't about probabilities but it makes a point about nor
"Normalising" join probability of n events, by taking n-th root To expand on my comment about your specific application, here's a practical example about deciding if normalization is appropriate. The example isn't about probabilities but it makes a point about normalization in general. Consider a school which has two children, one who is 10 years old and is 4ft tall, the other is 15 years old and is 5ft tall. You want to compare their success as a basketball player by examining their height. There are two example applications for this. You want to know who would be better for the school basketball team this year. The 15 year old kid is taller so he looks like a better player. Just like your probability example you might say "This is unfair, the 10 year old is shorter but he is tall for his age, we can't compare purely on height"; it's true that the 10 year old is tall for his age but that is irrelevant because you want a good basketball player for this year's school team. A second comparison you might do is to guess which child has a better chance of becoming an NBA star is his life. Neither of them can become a professional basketball player immediately so their immediate heights are not a good measure of their performance. Now it is valid to say "height is an unfair comparison, the 10 year old is tall for his age". You could normalise their heights by dividing their height by their age (or something more sophisticated) and this can give you a better measure of who will be taller when they play at a professional level. For your question about probabilities you might find that normalizing is a suitable way to compare the two probabilities, but if you just care about which event has a greater chance of occurring then it's not "unfair" on event $B$ at all.
"Normalising" join probability of n events, by taking n-th root To expand on my comment about your specific application, here's a practical example about deciding if normalization is appropriate. The example isn't about probabilities but it makes a point about nor
50,933
Bootstrapped confidence interval for the difference in means for paired data
The first method does not correspond to any resampling test I'm aware of in the literature. It seems like your goal, by resampling $X$ and $Y$ independently, is to generate data under the null hypothesis. This approach is inefficient because you are ignoring the pairing in the design. The preferred resampling method for generating data under the null hypothesis is the permutation test. Permutation testing for paired data is done by randomly negating the $X-Y$ differences; i.e., replacing them with $Y-X$. Here, the between-pair differences are preserved, but the within-pair differences are only preserved if the paired mean difference is 0. The second example is a proper description of a paired bootstrap.
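A hedged R sketch of both procedures on simulated paired data (my own illustration):
set.seed(13)
n <- 30
x <- rnorm(n, 10, 2)
y <- x + rnorm(n, 0.5, 1)    # paired, with a true mean of about -0.5 for the difference x - y
d <- x - y
obs <- mean(d)
B <- 5000

# Permutation test: randomly negate each difference under H0 (paired mean difference = 0)
perm <- replicate(B, mean(d * sample(c(-1, 1), n, replace = TRUE)))
p_value <- mean(abs(perm) >= abs(obs))

# Paired bootstrap: resample the differences (i.e. the pairs) with replacement
boot <- replicate(B, mean(sample(d, n, replace = TRUE)))
ci <- quantile(boot, c(0.025, 0.975))
c(observed = obs, p = p_value)
ci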
Bootstrapped confidence interval for the difference in means for paired data
The first method is no resampling test of which I'm aware in the literature. It seems like your goal, by resampling $X$ and $Y$ independently, is to generate data under the null hypothesis. This appro
Bootstrapped confidence interval for the difference in means for paired data The first method is no resampling test of which I'm aware in the literature. It seems like your goal, by resampling $X$ and $Y$ independently, is to generate data under the null hypothesis. This approach is inefficient because you are ignoring pairing in the design. The preferred resampling method for generating data under the null hypothesis is the permutation test. Permutation testing for paired data is done by randomly negating the $X-Y$ differences; i.e. replacing them with $Y-X$. Here, the between-pair differences are preserved, but the within-pair differences are only preserved if the paired mean difference is 0. The second example is a proper description of a paired bootstrap.
Bootstrapped confidence interval for the difference in means for paired data The first method is no resampling test of which I'm aware in the literature. It seems like your goal, by resampling $X$ and $Y$ independently, is to generate data under the null hypothesis. This appro
50,934
Is there a software that can draw a Bayesian model from JAGS?
It looks like Rasmus Bååth did exactly what you are looking for, by providing R scripts via his distribution_diagrams repository.
Is there a software that can draw a Bayesian model from JAGS?
It looks like Rasmus Bååth did exactly what you are looking for, by providing R scripts via his distribution_diagrams repository.
Is there a software that can draw a Bayesian model from JAGS? It looks like Rasmus Bååth did exactly what you are looking for, by providing R scripts via his distribution_diagrams repository.
Is there a software that can draw a Bayesian model from JAGS? It looks like Rasmus Bååth did exactly what you are looking for, by providing R scripts via his distribution_diagrams repository.
50,935
Is there a software that can draw a Bayesian model from JAGS?
Expanding on the answer above, here's a blog post about exactly this issue: http://doingbayesiandataanalysis.blogspot.com/2013/10/diagrams-for-hierarchical-models-new.html The post describes two different methods for creating the diagrams.
Is there a software that can draw a Bayesian model from JAGS?
Expanding on the answer above, here's a blog post about exactly this issue: http://doingbayesiandataanalysis.blogspot.com/2013/10/diagrams-for-hierarchical-models-new.html The post described two diffe
Is there a software that can draw a Bayesian model from JAGS? Expanding on the answer above, here's a blog post about exactly this issue: http://doingbayesiandataanalysis.blogspot.com/2013/10/diagrams-for-hierarchical-models-new.html The post described two different methods for creating the diagrams.
Is there a software that can draw a Bayesian model from JAGS? Expanding on the answer above, here's a blog post about exactly this issue: http://doingbayesiandataanalysis.blogspot.com/2013/10/diagrams-for-hierarchical-models-new.html The post described two diffe
50,936
What is the difference between dead neuron and killing the gradient?
They both get zero or very small gradients so they can barely get trained. The difference is, the activation values of “dead” ReLU neurons are almost always zero, whereas the activation values of saturated sigmoid neurons are close to 0 or 1.
What is the difference between dead neuron and killing the gradient?
They both get zero or very small gradients so they can barely get trained. The difference is, the activation values of “dead” ReLU neurons are almost always zero, whereas the activation values of satu
What is the difference between dead neuron and killing the gradient? They both get zero or very small gradients so they can barely get trained. The difference is, the activation values of “dead” ReLU neurons are almost always zero, whereas the activation values of saturated sigmoid neurons are close to 0 or 1.
What is the difference between dead neuron and killing the gradient? They both get zero or very small gradients so they can barely get trained. The difference is, the activation values of “dead” ReLU neurons are almost always zero, whereas the activation values of satu
50,937
What is the difference between dead neuron and killing the gradient?
Both of them will have very small gradients, hence both act as a showstopper to learning. The difference is that dead ReLU neurons are much less likely than saturated sigmoids. The gradient of a sigmoid is $S'(a) = S(a)(1 - S(a))$. When we start learning useful features in the later layers, the activations $S(a)$ are high (close to 1) and thus the gradient, and hence the learning of the previous layers, starts shrinking. No matter how careful you are with the network parameters, you will encounter saturated sigmoids and vanishing gradients in your network. For ReLU, however, the gradient is constant (equal to 1 on the active side) no matter how large the activations/features in your last layers are, so the previous layers continue to learn. Unless you really mess up the learning rate, the weight decay terms, or the biases in a way that forces the ReLUs to operate in the negative regime for all inputs, it is hard to get a lot of "dead" neurons compared to saturated sigmoids.
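A tiny numerical illustration of this point (values computed in R; my own addition):
a <- c(-10, -4, 0, 4, 10)
S <- plogis(a)                   # the sigmoid
sigmoid_grad <- S * (1 - S)      # about 4.5e-05 at a = 10: nearly saturated
relu_grad <- as.numeric(a > 0)   # exactly 1 wherever the unit is active, 0 otherwise
rbind(a, sigmoid_grad, relu_grad)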
What is the difference between dead neuron and killing the gradient?
Both of them will have very small gradients, hence both act as a showstopper to learning. Difference is that the likelihood of dead Relu neurons is much less as compared to saturated sigmoids The grad
What is the difference between dead neuron and killing the gradient? Both of them will have very small gradients, hence both act as a showstopper to learning. Difference is that the likelihood of dead Relu neurons is much less as compared to saturated sigmoids The gradient of a sigmoid is: S′(a)=S(a)(1−S(a)) When we start learning useful features in the later layers, the activations S(a) are high and thus the gradient or the learning of the previous layers starts reducing. Not matter how careful you are with network parameters you will encounter saturated sigmoids and vanishing gradients in your network. For RelU however the gradients are a constant no matter how good the activation/features are in your last layers, thus the previous layers continue to learn. Unless you really mess up the learning rate, or the weight decay terms, or your bias which forces the RelU's to operate in negative regime for all inputs, it is hard to get a lot of "dead" neurons as compared to saturated sigmoids.
What is the difference between dead neuron and killing the gradient? Both of them will have very small gradients, hence both act as a showstopper to learning. Difference is that the likelihood of dead Relu neurons is much less as compared to saturated sigmoids The grad
50,938
Should one use the same overdispersion parameter when comparing Binomial models?
On page 90 of McCullagh & Nelder, they state that many covariate selection procedures, including AIC minimization and tests using the F statistic, are equivalent to minimizing $Q = D + \alpha q \phi$. Here $D$ is the deviance, $\alpha$ is a function of the number of data points, $q$ is the number of covariates, and $\phi$ is the dispersion parameter. They cite a paper of Atkinson which I can't access for this statement. From the introduction, it seems Atkinson's actual statement is that the quantity to be minimized is $-L + \alpha q/2$, where $L$ is the maximized log likelihood of the model. Note the relationship $D/\phi = -2L + C$, where $C$ is a constant, from pages 33 and 34 of McCullagh and Nelder. It seems to me that in order to obtain their formulation from Atkinson's they already assume $\phi$ is the same across all candidate models. I think there are basically two issues here. First, if you are estimating a dispersion parameter in some way other than MLE, it's not even clear that Atkinson's quantity is defined, since $L$ should be the maximized log likelihood. If you are estimating a dispersion parameter for a binomial distribution, as McCullagh and Nelder do, you are already not using MLE to fit all the parameters of the model. Second, you may still want to use a criterion of this kind anyway by minimizing $D/(2\phi) + \alpha q/2$ among your candidate models. $D$ can be calculated for the model even if the regression parameters weren't obtained by maximizing the likelihood. If $\phi$ is constant across all candidate models this has heuristic value because it measures a trade off between model fit (with decreasing $D/\phi$) and complexity (with increasing $\alpha q$). This seems to be what McCullagh and Nelder are suggesting. However if the estimate for $\phi$ is not constant across all candidate models, then even this heuristic value is lost. $D/ \phi$ is no longer solely a measure of how well each candidate model fits the data but is also affected by changes in each model's dispersion parameter estimate, and the nature of the complexity-fit trade off becomes less clear. In fact if the differences in your scale parameter estimates are sufficiently large, minimizing this quantity amounts to minimizing the estimated scale parameter. Of course, the scale parameter estimate for each model will depend both on the number of parameters and the fit of the model. So I think the best answer to this question is: If you believe the complexity-fit trade off implied by your method of estimating $\phi$ will produce a better model than the quantity above, then allow it to vary for each model, or just use it directly. I would be skeptical of this belief, since any method of estimating $\phi$ was likely not intended to be a good procedure for model selection.
Should one use the same overdispersion parameter when comparing Binomial models?
On page 90 of McCullagh & Nelder, they state that many covariate selection procedures, including AIC minimization and tests using the F statistic, are equivalent to minimizing $Q = D + \alpha q \phi$.
Should one use the same overdispersion parameter when comparing Binomial models? On page 90 of McCullagh & Nelder, they state that many covariate selection procedures, including AIC minimization and tests using the F statistic, are equivalent to minimizing $Q = D + \alpha q \phi$. Here $D$ is the deviance, $\alpha$ is a function of the number of data points, $q$ is the number of covariates, and $\phi$ is the dispersion parameter. They cite a paper of Atkinson which I can't access for this statement. From the introduction, it seems Atkinson's actual statement is that the quantity to be minimized is $-L + \alpha q/2$, where $L$ is the maximized log likelihood of the model. Note the relationship $D/\phi = -2L + C$, where $C$ is a constant, from pages 33 and 34 of McCullagh and Nelder. It seems to me that in order to obtain their formulation from Atkinson's they already assume $\phi$ is the same across all candidate models. I think there are basically two issues here. First, if you are estimating a dispersion parameter in some way other than MLE, it's not even clear that Atkinson's quantity is defined, since $L$ should be the maximized log likelihood. If you are estimating a dispersion parameter for a binomial distribution, as McCullagh and Nelder do, you are already not using MLE to fit all the parameters of the model. Second, you may still want to use a criterion of this kind anyway by minimizing $D/(2\phi) + \alpha q/2$ among your candidate models. $D$ can be calculated for the model even if the regression parameters weren't obtained by maximizing the likelihood. If $\phi$ is constant across all candidate models this has heuristic value because it measures a trade off between model fit (with decreasing $D/\phi$) and complexity (with increasing $\alpha q$). This seems to be what McCullagh and Nelder are suggesting. However if the estimate for $\phi$ is not constant across all candidate models, then even this heuristic value is lost. $D/ \phi$ is no longer solely a measure of how well each candidate model fits the data but is also affected by changes in each model's dispersion parameter estimate, and the nature of the complexity-fit trade off becomes less clear. In fact if the differences in your scale parameter estimates are sufficiently large, minimizing this quantity amounts to minimizing the estimated scale parameter. Of course, the scale parameter estimate for each model will depend both on the number of parameters and the fit of the model. So I think the best answer to this question is: If you believe the complexity-fit trade off implied by your method of estimating $\phi$ will produce a better model than the quantity above, then allow it to vary for each model, or just use it directly. I would be skeptical of this belief, since any method of estimating $\phi$ was likely not intended to be a good procedure for model selection.
Should one use the same overdispersion parameter when comparing Binomial models? On page 90 of McCullagh & Nelder, they state that many covariate selection procedures, including AIC minimization and tests using the F statistic, are equivalent to minimizing $Q = D + \alpha q \phi$.
50,939
Should one use the same overdispersion parameter when comparing Binomial models?
I suspect the reason for the recommendation is that, in the old days, first the model was fit, then the dispersion parameter was calculated, and then the likelihood was adjusted for overdispersion. Deriving suitable test statistics for an LRT with adjusted dispersion parameters seems difficult, so it may be that people just said: whatever, we'll just develop a test conditional on a fixed dispersion, and that's it. Still, keeping the dispersion fixed seems weird to me. As you say, the most complex model likely fits the data more tightly, so using its dispersion parameter also for the simpler models should lead to suboptimal likelihoods, which would seem to create a bias towards larger complexity. A more sensible approach that makes use of modern computing power seems to me to fit both models, including the dispersion, with a full likelihood, and then do a simulated LRT for the comparison.
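The point above is general, but as a hedged sketch of what a full-likelihood fit plus simulated LRT can look like, here is a toy R example with negative binomial GLMs (simulated data and invented variable names), where the dispersion parameter theta is part of the likelihood of every candidate model:
library(MASS)
set.seed(21)
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y <- rnbinom(n, mu = exp(1 + 0.5 * x1), size = 2)   # x2 truly has no effect
dat <- data.frame(y, x1, x2)

m0 <- glm.nb(y ~ x1, data = dat)          # null model (theta estimated by ML)
m1 <- glm.nb(y ~ x1 + x2, data = dat)     # alternative model
lrt_obs <- as.numeric(2 * (logLik(m1) - logLik(m0)))

B <- 200    # increase in practice
lrt_sim <- replicate(B, {
  sim <- dat
  sim$y <- rnbinom(n, mu = fitted(m0), size = m0$theta)   # simulate under the null fit
  as.numeric(2 * (logLik(glm.nb(y ~ x1 + x2, data = sim)) -
                  logLik(glm.nb(y ~ x1, data = sim))))
})
mean(lrt_sim >= lrt_obs)    # simulated LRT p-value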
Should one use the same overdispersion parameter when comparing Binomial models?
I suspect the reason for the recommendation is that, in the old days, first the model was fit, then the dispersion parameter was calculated, and then the likelihood was adjusted for overdispersion. De
Should one use the same overdispersion parameter when comparing Binomial models? I suspect the reason for the recommendation is that, in the old days, first the model was fit, then the dispersion parameter was calculated, and then the likelihood was adjusted for overdispersion. Deriving suitable test statistics for a LRT with adjusted dispersion parameters seems difficult, so it may be that people just said: whatever, we'll just develop a test conditional on a fixed dispersion, and that's it. Still, keeping the dispersion fixed seems weird to me. As you say, the most complex model likely fits tighter to the data, so using its dispersion parameter also for the simpler models should lead to suboptimal likelihoods, which would seem to create a bias towards larger complexity. A more sensible approach that makes use of modern computing power seems to me to fit both models including the dispersion with a full likelihood, and then do a simulated LRT for the comparison.
Should one use the same overdispersion parameter when comparing Binomial models? I suspect the reason for the recommendation is that, in the old days, first the model was fit, then the dispersion parameter was calculated, and then the likelihood was adjusted for overdispersion. De
50,940
Online learning in LSTM
How can we do online learning in any of those models? Does it basically mean to set the batch_size to 1? yes Any ideas and thoughts about this? Is online learning really helpful? see Tradeoff batch size vs. number of iterations to train a neural network
Online learning in LSTM
How can we do online learning in any of those models? Does it basically mean to set the batch_size to 1? yes Any ideas and thoughts about this? Is online learning really helpful? see Tradeoff batch
Online learning in LSTM How can we do online learning in any of those models? Does it basically mean to set the batch_size to 1? yes Any ideas and thoughts about this? Is online learning really helpful? see Tradeoff batch size vs. number of iterations to train a neural network
Online learning in LSTM How can we do online learning in any of those models? Does it basically mean to set the batch_size to 1? yes Any ideas and thoughts about this? Is online learning really helpful? see Tradeoff batch
50,941
Is there a heuristic for determining the size of a fully connected layer at the end of a CNN?
Choosing a network architecture is a bit of a "black art". They might have tried multiple different parameters and chose one that worked well (evaluating each using cross-validation). Also, you can inform your choice by what has been reported in the research literature to work well on similar tasks, and use that as a starting point for experimentation. One consideration here is the number of weights that can be set independently: the more of those you have, the greater the risk of overfitting, and the greater the training time. So, increasing this number makes training take longer and runs a higher risk of overfitting, but potentially increases the expressiveness of your neural network. You probably want the number to be as small as possible, without sacrificing accuracy. So, you might try something small and increase it until you stop getting improvements in accuracy (measuring using cross-validation).
Is there a heuristic for determining the size of a fully connected layer at the end of a CNN?
Choosing a network architecture is a bit of a "black art". They might have tried multiple different parameters and chose one that worked well (evaluating each using cross-validation). Also, you can i
Is there a heuristic for determining the size of a fully connected layer at the end of a CNN? Choosing a network architecture is a bit of a "black art". They might have tried multiple different parameters and chose one that worked well (evaluating each using cross-validation). Also, you can inform your choice by what has been reported in the research literature to work well on similar tasks, and use that as a starting point for experimentation. One consideration here is the number of weights that can be set independently: the more of those you have, the greater the risk of overfitting, and the greater the training time. So, increasing this number makes training take longer and runs a higher risk of overfitting, but potentially increases the expressiveness of your neural network. You probably want the number to be as small as possible, without sacrificing accuracy. So, you might try something small and increase it until you stop getting improvements in accuracy (measuring using cross-validation).
Is there a heuristic for determining the size of a fully connected layer at the end of a CNN? Choosing a network architecture is a bit of a "black art". They might have tried multiple different parameters and chose one that worked well (evaluating each using cross-validation). Also, you can i
50,942
Estimating the mean of a random variable from greater than/less than answers
Here is my take on this question. I will assume that: $X_i \sim \mathcal{N}(\mu, \sigma^2)$, and the $X_i$'s are independent $\mu$ is unknown $\sigma^2$ is known (I'll discuss this assumption later.) Part 1: ML estimation given some data First, consider the case where we are given some data, and we want to estimate $\mu$. Denote the data by $\mathcal{D} = \{ (y_i, t_i) \mid i = 1, \ldots, n \}$, where $y_i \in \mathbf{R}$ and $$ t_i = \begin{cases} 1 & \text{if $X_i$ > $y_i$} \\ 0 & \text{otherwise} \end{cases} $$ Note that I use a lowercase letter for $y_i$ to emphasize that it is a value that we can observe, as opposed to $X_i$. We have $$ P(t_i = 1 \mid y_i) = P(X_i > y_i) = \Phi\left( \frac{\mu - y_i}{\sigma} \right) $$ and the likelihood of $\mu$ given the data is $$ \ell(\mu ; \mathcal{D}) = \prod_{i=1}^{n} \left( \Phi\left( \frac{\mu - y_i}{\sigma} \right) \right)^{t_i} \left( 1- \Phi\left( \frac{\mu - y_i}{\sigma} \right) \right)^{1-t_i} \qquad (*) $$ This function is log-concave, and has a unique maximizer if there is at least one $i$ such that $t_i = 1$, and at least one $i$ such that $t_i = 0$. Furthermore, I suspect that the maximizer is independent of the value of $\sigma^2$ (to be checked). Part 2: Active learning I think this is the more interesting part. Here, we'll assume that you start with $\mathcal{D} = \varnothing$, and you want to iteratively pick a value $y_i$ and observe the corresponding $t_i$, in such a way that you "learn the most" about $\mu$. There are many ways to go about this; in the following I'm taking a bayesian approach. We start by assuming a prior distribution on $\mu$, say $$ \mu \sim \mathcal{N}(0, \tau^2) $$ Given some data $\mathcal{D}$, your knowledge about $\mu$ is contained in the posterior distribution $$ p(\mu \mid \mathcal{D}) \propto p(\mathcal{D} \mid \mu) p(\mu) $$ Unfortunately, this posterior is not analytically tractable for the likelihood given above $(*)$. One practical way to bypass this problem is to approximate the posterior with a Gaussian distribution that is "closest" to the true posterior, in some sense. In particular, Expectation propagation and the Variational Gaussian approximation come to mind. One way to go about picking a value that leads to a lot of "information" about $\mu$ is to greedily maximize the expected reduction in the entropy of the posterior. Informally, the entropy of the posterior tells you how "unsure" you are about the value of $\mu$, and you'll want to pick a $y_i$ that is likely to reduce this uncertainty (I say "likely" because it will depend on the outcome $t_i$). In this particular case, as we are just estimating a single parameter, reducing the entropy can be understood to be simply reducing the variance of the posterior. Conjecture. let $p_i$ be the posterior on $\mu$ after $i$ steps (in particular, $p_0 = \mathcal{N}(0, \tau^2)$). Then, the point $y_{i+1}$ that maximizes the expected reduction in posterior entropy is given by $$ y_{i+1} = \mathbf{E}_{p_i}(\mu) $$ Basically, my conjecture is saying: just sample at your current best guess for $\mu$! Again, I believe that the assumption that $\sigma^2$ is fixed is not too important. I have the impression that what matters really is the ratio $\tau^2 / \sigma^2$. (This is again to be checked.)
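A hedged R sketch of Part 1 (an illustration of mine): simulate threshold/indicator data and maximize the log of the likelihood $(*)$ over $\mu$, with $\sigma$ treated as known.
set.seed(31)
mu_true <- 2; sigma <- 1; n <- 200
y <- runif(n, -1, 5)                                  # the thresholds we asked about
t <- rbinom(n, 1, pnorm((mu_true - y) / sigma))       # t_i = 1 means X_i > y_i

negloglik <- function(mu) {
  z <- (mu - y) / sigma
  -sum(t * pnorm(z, log.p = TRUE) +
       (1 - t) * pnorm(z, lower.tail = FALSE, log.p = TRUE))
}
optimize(negloglik, interval = c(-10, 10))$minimum    # ML estimate, close to mu_true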
Estimating the mean of a random variable from greater than/less than answers
Here is my take on this question. I will assume that: $X_i \sim \mathcal{N}(\mu, \sigma^2)$, and the $X_i$'s are independent $\mu$ is unknown $\sigma^2$ is known (I'll discuss this assumption later.)
Estimating the mean of a random variable from greater than/less than answers Here is my take on this question. I will assume that: $X_i \sim \mathcal{N}(\mu, \sigma^2)$, and the $X_i$'s are independent $\mu$ is unknown $\sigma^2$ is known (I'll discuss this assumption later.) Part 1: ML estimation given some data First, consider the case where we are given some data, and we want to estimate $\mu$. Denote the data by $\mathcal{D} = \{ (y_i, t_i) \mid i = 1, \ldots, n \}$, where $y_i \in \mathbf{R}$ and $$ t_i = \begin{cases} 1 & \text{if $X_i$ > $y_i$} \\ 0 & \text{otherwise} \end{cases} $$ Note that I use a lowercase letter for $y_i$ to emphasize that it is a value that we can observe, as opposed to $X_i$. We have $$ P(t_i = 1 \mid y_i) = P(X_i > y_i) = \Phi\left( \frac{\mu - y_i}{\sigma} \right) $$ and the likelihood of $\mu$ given the data is $$ \ell(\mu ; \mathcal{D}) = \prod_{i=1}^{n} \left( \Phi\left( \frac{\mu - y_i}{\sigma} \right) \right)^{t_i} \left( 1- \Phi\left( \frac{\mu - y_i}{\sigma} \right) \right)^{1-t_i} \qquad (*) $$ This function is log-concave, and has a unique maximizer if there is at least one $i$ such that $t_i = 1$, and at least one $i$ such that $t_i = 0$. Furthermore, I suspect that the maximizer is independent of the value of $\sigma^2$ (to be checked). Part 2: Active learning I think this is the more interesting part. Here, we'll assume that you start with $\mathcal{D} = \varnothing$, and you want to iteratively pick a value $y_i$ and observe the corresponding $t_i$, in such a way that you "learn the most" about $\mu$. There are many ways to go about this; in the following I'm taking a bayesian approach. We start by assuming a prior distribution on $\mu$, say $$ \mu \sim \mathcal{N}(0, \tau^2) $$ Given some data $\mathcal{D}$, your knowledge about $\mu$ is contained in the posterior distribution $$ p(\mu \mid \mathcal{D}) \propto p(\mathcal{D} \mid \mu) p(\mu) $$ Unfortunately, this posterior is not analytically tractable for the likelihood given above $(*)$. One practical way to bypass this problem is to approximate the posterior with a Gaussian distribution that is "closest" to the true posterior, in some sense. In particular, Expectation propagation and the Variational Gaussian approximation come to mind. One way to go about picking a value that leads to a lot of "information" about $\mu$ is to greedily maximize the expected reduction in the entropy of the posterior. Informally, the entropy of the posterior tells you how "unsure" you are about the value of $\mu$, and you'll want to pick a $y_i$ that is likely to reduce this uncertainty (I say "likely" because it will depend on the outcome $t_i$). In this particular case, as we are just estimating a single parameter, reducing the entropy can be understood to be simply reducing the variance of the posterior. Conjecture. let $p_i$ be the posterior on $\mu$ after $i$ steps (in particular, $p_0 = \mathcal{N}(0, \tau^2)$). Then, the point $y_{i+1}$ that maximizes the expected reduction in posterior entropy is given by $$ y_{i+1} = \mathbf{E}_{p_i}(\mu) $$ Basically, my conjecture is saying: just sample at your current best guess for $\mu$! Again, I believe that the assumption that $\sigma^2$ is fixed is not too important. I have the impression that what matters really is the ratio $\tau^2 / \sigma^2$. (This is again to be checked.)
Estimating the mean of a random variable from greater than/less than answers Here is my take on this question. I will assume that: $X_i \sim \mathcal{N}(\mu, \sigma^2)$, and the $X_i$'s are independent $\mu$ is unknown $\sigma^2$ is known (I'll discuss this assumption later.)
50,943
How to compute the AUROC for a single categorical variable
In short: yes, you could use a (simple) model to compute the AUC (AUROC) for categorical features too. When you compute the AUC for an ordinal feature, you use the feature itself like you would use a classification model output and apply the threshold to it (with one class lying below and the other above). Note that the complexity is determined by the (in this case non-existent) model: using a threshold on an ordinal feature boils down to using a linear separation that divides the feature into two parts. If you were to use a more complex model instead (e.g. a tree), you could easily obtain multiple parts too. For a categorical feature, doing so might make sense. This essentially is just answering the question "how likely is class 1 if my feature is blue?", which you could employ many model types for (small trees, etc.). Note that you can of course overfit this too, so using models with little complexity might be reasonable (like the linear separation for the ordinal feature). PS: you might need to encode your categorical variable with one-hot encoding for some models (that cannot make sense of categories themselves), e.g. if you want to use it in logistic regression. This makes the problem $N$-dimensional instead, with $N$ being the number of categories of your variable (though this is automatically done by most implementations).
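As a hedged sketch in R (toy data; the rank-sum formula for the AUC avoids extra packages): score the categorical feature with a one-variable logistic regression and compute the AUC of those scores.
set.seed(8)
n <- 1000
colr <- factor(sample(c("blue", "red", "green"), n, replace = TRUE))
p <- c(blue = 0.7, red = 0.3, green = 0.5)[as.character(colr)]
y <- rbinom(n, 1, p)

fit <- glm(y ~ colr, family = binomial)   # "how likely is class 1 given the category?"
score <- fitted(fit)

# AUC = P(score of a random positive > score of a random negative), ties counted as 1/2
(mean(rank(score)[y == 1]) - (sum(y) + 1) / 2) / sum(y == 0)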
How to compute the AUROC for a single categorical variable
In short: yes, you could use a (simple) model(s) to compute the AUC (AUROC) for categorial features too. When you compute the AUC for an ordinal feature, you use the feature itself like you would use
How to compute the AUROC for a single categorical variable In short: yes, you could use a (simple) model(s) to compute the AUC (AUROC) for categorial features too. When you compute the AUC for an ordinal feature, you use the feature itself like you would use a classification model output and apply the threshold to it (of which one class lies below and the other lies above). Note that the complexity is determined by the - in this case non-existing - model: using a threshold on an ordinal feature boils down to using a linear separation that divides the feature into two parts. If you would use a more complex model instead (e.g. tree), you could easily obtain multiple parts too. For a categorial feature, doing so might make sense. This essentially is just answering the question "how likely is class 1 if my feature is blue?", which you could employ many model types for (small trees, etc). Note that you can of course overfit this too, so using models with little complexity might be reasonable (like the linear separation for the ordinal feature) . PS: you might need to encode your categorial variable in one-hot encoding for some models (that cannot make meaning of categories themselves), e.g. if you want to use it in logistic regression. This makes the problem $N$ dimensional instead, with $N$ being the amount of categories of your variable (though this is automatically done with most implementations).
How to compute the AUROC for a single categorical variable In short: yes, you could use a (simple) model(s) to compute the AUC (AUROC) for categorial features too. When you compute the AUC for an ordinal feature, you use the feature itself like you would use
50,944
General-to-specific subset selection ("Autometrics") performing well in macroeconomics
Frank Harrell does not rule out intelligent use of backward elimination. He includes as a possibility (page 97, RMS, 2nd edition): Do limited backwards step-down variable selection if parsimony is more important than accuracy. This, however, is only to be done in the context of an already well-specified model. It is the last step before the "'final' model." As this paper linked from the related question emphasizes, the variable selection in GETS must begin with an already well-specified model: The search should start from a congruent statistical model to ensure that selection inferences are reliable. Problems such as residual autocorrelation and heteroscedasticity not only reveal mis-specification. They can deliver incorrect coefficient standard errors for test calculations. Consequently, the algorithm must test for model mis-specification in the initial general model. This is a good deal different from many questions on this site, where those without much evident statistical background often seem to want a plug-and-play approach to the entire problem. They start with some type of multiple regression, give little thought to underlying subject matter, data transformations, the peculiar problems in time series, or the like, and want to determine automatically "what are the most important variables"? Also, GETS seems inherently inapplicable to the $p>n$ setting, where so much of the difficulty (and interest, and poor statistical technique) in variable selection arises. Although time series are outside my expertise, I suspect that large time series from which autocorrelations have been removed effectively provide $n\gg p$, with many degrees of freedom left. I also wonder (without any solid knowledge of time series) whether removing time-based autocorrelations may in practice help minimize other sources of non-orthogonality among predictors. The various flavors of GETS pay a good deal of attention to tradeoffs between Type I and Type II errors in the steps of the reduction of the initial well-specified model to a reduced form, which may obviate shrinkage corrections (or include them implicitly in the estimates of the reduced model).
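For readers who want to see the bare-bones idea in code: the following is not Autometrics itself, just a base-R sketch of a BIC-penalised backward step-down from a deliberately general starting model, with simulated data; GETS adds mis-specification testing and multi-path search on top of this basic step.

```r
# Backward elimination from a "general" model (simulated data, base R only).
set.seed(1)
n <- 200
X <- as.data.frame(matrix(rnorm(n * 10), n, 10))
X$y <- 1 + 2 * X$V1 - 1.5 * X$V3 + rnorm(n)
general <- lm(y ~ ., data = X)                 # well-specified general model
reduced <- step(general, direction = "backward",
                k = log(n), trace = FALSE)     # BIC-penalised step-down
summary(reduced)
```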
General-to-specific subset selection ("Autometrics") performing well in macroeconomics
Frank Harrell does not rule out intelligent use of backward elimination. He includes as a possibility (page 97, RMS, 2nd edition): Do limited backwards step-down variable selection if parsimony is mo
General-to-specific subset selection ("Autometrics") performing well in macroeconomics Frank Harrell does not rule out intelligent use of backward elimination. He includes as a possibility (page 97, RMS, 2nd edition): Do limited backwards step-down variable selection if parsimony is more important than accuracy. This, however, is only to be done in the context of an already well-specified model. It is the last step before the "'final' model." As this paper linked from the related question emphasizes, the variable selection in GETS must begin with an already well-specified model: The search should start from a congruent statistical model to ensure that selection inferences are reliable. Problems such as residual autocorrelation and heteroscedasticity not only reveal mis-specification. They can deliver incorrect coefficient standard errors for test calculations. Consequently, the algorithm must test for model mis-specification in the initial general model. This is a good deal different from many questions on this site, where those without much evident statistical background often seem to want a plug-and-play approach to the entire problem. They start with some type of multiple regression, give little thought to underlying subject matter, data transformations, the peculiar problems in time series, or the like, and want to determine automatically "what are the most important variables"? Also, GETS seems inherently inapplicable to the $p>n$ setting, where so much of the difficulty (and interest, and poor statistical technique) in variable selection arises. Although time series are outside my expertise, I suspect that large time series from which autocorrelations have been removed effectively provide $n\gg p$, with many degrees of freedom left. I also wonder (without any solid knowledge of time series) whether removing time-based autocorrelations may in practice help minimize other sources of non-orthogonality among predictors. The various flavors of GETS pay a good deal of attention to tradeoffs between Type I and Type II errors in the steps of the reduction of the initial well-specified model to a reduced form, which may obviate shrinkage corrections (or include them implicitly in the estimates of the reduced model).
General-to-specific subset selection ("Autometrics") performing well in macroeconomics Frank Harrell does not rule out intelligent use of backward elimination. He includes as a possibility (page 97, RMS, 2nd edition): Do limited backwards step-down variable selection if parsimony is mo
50,945
If you know the central moments of the data $X$, find a function $f$ for which $f(X)$ has arbitrary central moments
The function $f(x)$ should be monotonically increasing, so that its inverse $g(y)=f^{-1}(y)$ is defined. Then you can use the theorem on transformations of random variables, which says that, when $\varphi(x)$ is the probability density of $X$ (with cumulative distribution function $\Phi$), then $Y=f(X)$ has the probability density $$\frac{d}{dy}\Phi(g(y))=\varphi(g(y))\cdot\frac{dg(y)}{dy}$$ When you choose a parametrized ansatz for the function $g(y)$, your moment conditions transform into a system of equations for the parameters. Depending on the parametric ansatz and the density function $\varphi$, this system of equations will require a numeric solution, but this is merely a technical point. I would thus conclude that the answer to your question is yes.
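As a hedged illustration of the numeric route, here is an R sketch that fits a monotone power ansatz $f(x) = a + b x^{\lambda}$ (a choice made purely for illustration) so that $f(X)$ comes as close as the ansatz allows to three target central moments of a gamma sample:

```r
# Moment matching by least squares on the (sample) moment conditions.
set.seed(1)
x <- rgamma(1e5, shape = 2)                    # the data X (positive here)
target <- c(mean = 0, var = 1, skew = 0)       # desired central moments
obj <- function(p) {
  a <- p[1]; b <- exp(p[2]); lam <- exp(p[3])  # exp() keeps f monotone increasing
  y <- a + b * x^lam
  m <- mean(y); v <- var(y); s <- mean((y - m)^3) / v^1.5
  sum((c(m, v, s) - target)^2)                 # squared distance to the targets
}
fit <- optim(c(0, 0, -1), obj, control = list(maxit = 2000))
c(a = fit$par[1], b = exp(fit$par[2]), lambda = exp(fit$par[3]))  # lambda near the cube-root region here
```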
If you know the central moments of the data $X$, find a function $f$ for which $f(X)$ has arbitrary
The function $f(x)$ should be monotonically increasing, such that its inverse $g(y)=f^{-1}(y)$ is defined. Then you can use the theorem about random variable transformations, which says that, when $\v
If you know the central moments of the data $X$, find a function $f$ for which $f(X)$ has arbitrary central moments The function $f(x)$ should be monotonically increasing, such that its inverse $g(y)=f^{-1}(y)$ is defined. Then you can use the theorem about random variable transformations, which says that, when $\varphi(x)$ is the probability density of $X$, then $Y=f(X)$ has the probability density $$\frac{d}{dy}\Phi(g(y))=\varphi(g(y))\cdot\frac{dg(y)}{dx}$$ When you choose a parametrized ansatz for the function $g(y)$, your moment condition transforms into a system of equations for the parameters. Depending on the parametric ansatz and the density function $\varphi$, this system of equations will require a numeric solution, but this is merely a technical point. I would thus conclude that the answer to your question is yes.
If you know the central moments of the data $X$, find a function $f$ for which $f(X)$ has arbitrary The function $f(x)$ should be monotonically increasing, such that its inverse $g(y)=f^{-1}(y)$ is defined. Then you can use the theorem about random variable transformations, which says that, when $\v
50,946
How to determine which variables to be used for cluster analysis
https://www.researchgate.net/profile/Federico_Marini/publication/230276990_Finding_relevant_clustering_directions_in_highdimensional_data_using_Particle_Swarm_Optimization/links/550c0b570cf20637993960f2.pdf This paper describes how you can find optimal clustering directions using particle swarm optimization. The algorithm uses binary PSO (BPSO). The MATLAB code for BPSO is available on MATLAB Central; you can modify its cost function as defined in this paper.
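This is not the BPSO method of the linked paper, but as a simple baseline for the same goal, one can brute-force small feature subsets and score each by average silhouette width in R (illustrative data from iris):

```r
# Exhaustive search over small feature subsets, scored by mean silhouette width.
library(cluster)
data(iris)
X <- scale(iris[, 1:4])
subsets <- unlist(lapply(2:3, function(k) combn(4, k, simplify = FALSE)),
                  recursive = FALSE)
scores <- sapply(subsets, function(s) {
  cl <- kmeans(X[, s, drop = FALSE], centers = 3, nstart = 10)$cluster
  mean(silhouette(cl, dist(X[, s, drop = FALSE]))[, 3])
})
subsets[[which.max(scores)]]   # best-scoring subset of variables
```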
How to determine which variables to be used for cluster analysis
https://www.researchgate.net/profile/Federico_Marini/publication/230276990_Finding_relevant_clustering_directions_in_highdimensional_data_using_Particle_Swarm_Optimization/links/550c0b570cf20637993960
How to determine which variables to be used for cluster analysis https://www.researchgate.net/profile/Federico_Marini/publication/230276990_Finding_relevant_clustering_directions_in_highdimensional_data_using_Particle_Swarm_Optimization/links/550c0b570cf20637993960f2.pdf This paper describes how you can find optimal clustering directions using particle swarm optimization. This algorithm uses binary-PSO (BPSO). The MatLab of BPSO is available on MatLab Central. You can modify your cost function as defined in this paper.
How to determine which variables to be used for cluster analysis https://www.researchgate.net/profile/Federico_Marini/publication/230276990_Finding_relevant_clustering_directions_in_highdimensional_data_using_Particle_Swarm_Optimization/links/550c0b570cf20637993960
50,947
Linear model comparison - which does my data fit best?
So I've been working on understanding this question for a while over the last 20 hours or so. There has been lots of useful discussion, but no definitive answer. I've found a couple of resources that might help others understand why I've chosen to go this route. 1) Soil Equilibria: What happens to acid rain? by Sharon Anthony, Michael Beug, Roxanne Hulet, and George Lisensky is a good chemistry learning book and has what I believe to be a thorough explanation of how to use a t-test, but not necessarily why to use one. 2) This blog post over at minitab.com explains when to use a t-test as well as additional information on how to use it. I think the kicker here is that, as per the original post, I am expecting a slope of either 1.00 or 1.16, depending on which solution I have, if I scatter plot Na and Cl. Another way to phrase this is that I am expecting a ratio of Cl to Na of either 1.00 or 1.16 for every sample in the set. This gives me two hypotheses to test with a one-sample t-test. The t statistic is defined as: $t=\frac{|\bar{x}-\mu_0| \sqrt N}{s}$ where $t$ gives a value for the comparison of the experimental mean to the known value, which we can then compare to a tabulated t value for our degrees of freedom ($N-1$) and confidence level (let's pick 95%). $s$ is the standard deviation, $\bar{x}$ is the mean Cl to Na ratio for this example, $N$ is the number of samples, and $\mu_0$ is the known value or hypothesis I want to test: For hypothesis 1) the ratio of Cl to Na is equal to 1.16 for each sample. Our mean Cl to Na ratio $\bar{x}$ is (in R) mean(Cl/Na) or 0.95. Similarly, $s$ = sd(Cl/Na) or 0.14. Now if we plug and chug into our t-test equation, we get a $t$ of 4.5. The corresponding tabulated t value for 95% confidence and 9 degrees of freedom is 2.26. Our calculated $t$ is greater than the tabulated t value, so the mean Cl/Na is different from 1.16 at the 95% confidence level. For hypothesis 2) the ratio of Cl to Na is equal to 1.00 for each sample. We use the same steps as in hypothesis 1 (only change $\mu_0$), which gives $t = 1.07$. 1.07 is less than our tabulated t value of 2.26 for the same degrees of freedom and confidence level of 95%, so we can say that our mean Cl/Na is not different from 1.00 at the 95% confidence level. So to answer the question, I most likely have a table salt dissolved in water solution based on a t-test. I hope some stat enthusiasts can comment on whether or not this is a valid answer! Edit: 9 degrees of freedom with 10 samples. Edit2: R t.test(Cl/Na,mu=1) apparently does not come to the same conclusion as I did above. I do not know why.
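To see the hand calculation and R's t.test side by side (with made-up ratio values, not the poster's data), something like this can be run:

```r
# Hypothetical Cl/Na ratios; hand-computed t vs. t.test() on the same numbers.
ratio <- c(0.95, 1.02, 0.81, 1.10, 0.99, 0.88, 1.05, 0.93, 0.79, 1.01)
n <- length(ratio)
t_hand <- abs(mean(ratio) - 1) * sqrt(n) / sd(ratio)
t_hand; qt(0.975, df = n - 1)      # compare |t| with the critical value
t.test(ratio, mu = 1)              # same t statistic (up to sign) plus a p-value
```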
Linear model comparison - which does my data fit best?
So I've been working on understanding this question for a while over the last 20 hours or so. There has been lots of useful discussion, but no definitive answer. I've found a couple resources that mig
Linear model comparison - which does my data fit best? So I've been working on understanding this question for a while over the last 20 hours or so. There has been lots of useful discussion, but no definitive answer. I've found a couple resources that might help others understand why I've chosen to go this route. 1) Soil Equilibria: What happens to acid rain? By Sharon Anthony, Michael Beug, Roxanne Hulet, and George Lisensky is a good chemistry learning book and has what I believe to be a thorough explanation of how to use a t-test, but not necessarily why to use one. 2) This blog post over at minitab.com explains when to use a t-test as well as additional information on how to use it. I think the kicker here is that as per the original post, I am expecting a slope of either 1.00 or 1.16, depending on which solution I have if I scatter plot Na and Cl. Another way to phrase this is I am expecting a ratio for Cl to Na of either 1.00 or 1.16 for every sample in the set. This gives me two hypothesis to test in my t-test. The t-test is defined as: $t=\frac{|x-known| * \sqrt N}s$ where $t$ will give a value for the comparison of the experimental mean to the known value, which we can then compare to a tabulated t table for our corresponding degrees of freedom ($N$) and confidence interval (lets pick 95%). $s$ is the standard deviation, $x$ is the mean Cl to Na ratio for this example, and $known$ is the known value or hypothesis I want to test: For hypothesis 1) the ratio of Cl to Na is equal to 1.16 for each sample. Our mean Cl to Na ratio $x$ is (in R) mean(Cl/Na) or 0.95. Similarly, $s$ = sd(Cl/Na) or 0.14. Now if we plug and chug into our t-test equation, we get a $t$ of 4.5. The corresponding $t$ in the tabulated table for 95% confidence and $N = 9$ is 2.26. Our calculated $t$ is greater than the tabulated t value so the mean Cl/Na is different than 1.16 at the 95% confidence interval. For hypothesis 2) the ratio of Cl to Na is equal to 1.00 for each sample. We will use the same steps as in hypothesis 1 (only change $known$), which gives $t = 1.07$. 1.07 is less than our tabulated t value of 2.26 for the same $N$ and confidence interval of 95%, so we can say that our mean Cl/Na is not different than 1.00 at the 95% confidence interval. So to answer the question, I most likely have a table salt dissolved in water solution based on a t-test. I hope some stat enthusiasts can comment on whether or not this is a valid answer! Edit: 9 degrees of freedom with 10 samples. Edit2: R t.test(Cl/Na,mu=1) apparently does not come to the same conclusion as I did above. I do not know why.
Linear model comparison - which does my data fit best? So I've been working on understanding this question for a while over the last 20 hours or so. There has been lots of useful discussion, but no definitive answer. I've found a couple resources that mig
50,948
Garson's algorithm for fully connected LSTMs
I'm actually doing a bit of work on this stuff at the minute. From what I've read in the literature, the connection weight method is actually better, as it takes into account the magnitude and sign of the network weights. So maybe that would be a better starting point. http://www.sciencedirect.com/science/article/pii/S0304380004001565 **Disclaimer: I've just had a paper accepted where I generalise that method to deep networks of arbitrary depth :) I can post a link when it's up. Re LSTMs, I'd say it would be a lot more complex to do, but either method could be used for them. Just a bit more thinking involved! Hope this helps a bit. There are not a lot of people doing work in this area of network summarisation.
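For what the connection-weight idea boils down to computationally, here is a rough R sketch for a single-hidden-layer network with made-up weights (the LSTM generalisation would need the gate weight matrices handled analogously):

```r
# Connection-weight style importance: sum over hidden units of
# (input->hidden weight) * (hidden->output weight), per input.
set.seed(1)
n_in <- 4; n_hid <- 3
w_in  <- matrix(rnorm(n_in * n_hid), n_in, n_hid)   # input -> hidden weights
w_out <- rnorm(n_hid)                               # hidden -> output weights
importance <- w_in %*% w_out                        # signed importance per input
importance
```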
Garson's algorithm for fully connected LSTMs
I'm actually doing a bit of work on this stuff at the minute. From what I've read in the literature, the connection weight method is actually better as it takes into account the magnitude and sign of
Garson's algorithm for fully connected LSTMs I'm actually doing a bit of work on this stuff at the minute. From what I've read in the literature, the connection weight method is actually better as it takes into account the magnitude and sign of the network. So maybe that would be a better starting point. http://www.sciencedirect.com/science/article/pii/S0304380004001565 **Disclaimer: I've just got a paper accepted where I generalise that method to arbitrary depth deep networks :) Can post a link when its up. Re LSTMs, I'd say it would be a lot more complex to do, but I would say either method could be used for it. Just a bit more thinking involved! Hope this helps some bit. Not a lot of people doing stuff in this area, as in network summarisation.
Garson's algorithm for fully connected LSTMs I'm actually doing a bit of work on this stuff at the minute. From what I've read in the literature, the connection weight method is actually better as it takes into account the magnitude and sign of
50,949
Nested Anova vs Multilevel Linear Models
After having spent a few days in the library, I still feel far from understanding all the implications of the two options. However, I have learned a few things, and if anyone else is in the same boat, I would very much recommend the book "Multilevel Analysis" by Snijders and Bosker to guide your decision. It is by far the best text on this topic that I could find and spans from nested ANOVAs to very complicated multilevel linear models. For me, it provided some clarification for my three points above: 1) Even though I have fairly balanced groups, my group sizes are small enough that changing a few values on the micro level can have a significant effect on the group variance. Multilevel linear models should be more stable in this situation because the group effects of one group inform those of other groups. S&B emphasize that this effect may be particularly noticeable for groups with <50 individuals. [+1 MLM] 2) S&B suggest having at least 10, ideally 20+ groups in an MLM analysis. I am definitely at the low end of this spectrum. [+1 ANOVA?] 3) Yes, F-tests for nested ANOVAs are definitely much more straightforward. S&B seem to believe that there is a case for p-values in MLMs too, but reading the book makes it very clear why it is a lot more complicated. [+1 ANOVA] I hope this helps someone. I am still very much unsure how to proceed, so if anyone has a comment I'd be really happy to read it. Cheers
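For anyone weighing the same choice, the two routes look like this in R on a made-up pupils-in-classes layout (assuming the lme4 package is available for the multilevel fit; the model is deliberately intercept-only):

```r
# Nested ANOVA vs. multilevel model on the same simulated grouping structure.
library(lme4)
set.seed(1)
d <- data.frame(class = factor(rep(1:12, each = 15)))
class_eff <- rnorm(12)
d$score <- class_eff[as.integer(d$class)] + rnorm(nrow(d))
summary(aov(score ~ 1 + Error(class), data = d))   # (1) class as an error stratum
summary(lmer(score ~ 1 + (1 | class), data = d))   # (2) random intercept per class
```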
Nested Anova vs Multilevel Linear Models
After having spent a few days in the library, I still feel far from understanding all implications of the two options. However, I have learned a few things and if anyone else is in the same boat, I wo
Nested Anova vs Multilevel Linear Models After having spent a few days in the library, I still feel far from understanding all implications of the two options. However, I have learned a few things and if anyone else is in the same boat, I would very much recommend the book "Multilevel Analysis" by Snijders and Bosker to guide your decision. It is by far the best text on this topic that I could find and spans from nested ANOVAs to very complicated multilevel linear models. For me, it provided some clarification for my three points above: 1) Even though I have fairly balanced groups, my group sizes are small enough that changing a few values on the micro level can have a significant effect on the group variance. Multilevel linear models should be more stable in this situation because group effects of one group inform those of other groups. S&B emphasize that this effect may be particularly noticeable for groups with <50 individuals. [+1 MLM] 2) S&B suggest having at least 10, ideally 20+ groups in an MLM analysis. I am definitely at the low end of this spectrum. [+1 ANOVA?] 3) Yes, F-tests for nested ANOVAs are definitely much more straight-forward. S&B seem to believe that there is a case for p-values in MLMs, too but reading the book makes very clear why it is a lot more complicated. [+1 ANOVA] I hope this helps someone. I am still very much unsure how to proceed so if anyone has a comment I'd be really happy to read it. Cheers
Nested Anova vs Multilevel Linear Models After having spent a few days in the library, I still feel far from understanding all implications of the two options. However, I have learned a few things and if anyone else is in the same boat, I wo
50,950
ELO rating for non-pairing sport + serious math
If you're interested in use (more than in development), you should give rankade, our ranking system, a try. Rankade is free and easy to use, and it differs from the Elo ranking system (here's a comparison) because it can manage matches with 2+ players, so it may fit your needs. Rankade's algorithm (the ree algorithm, which is extremely complex and, for the time being, not in the public domain) handles your expected-position structure ("compare a player's position with the expected position given the strength of the opposition"), and provides rankings, stats, graphs, and more.
ELO rating for non-pairing sport + serious math
If you're interested in use (more than in development), you should give a try to rankade, our ranking system. Rankade is free and easy to use, and it's different from Elo ranking system (here's a comp
ELO rating for non-pairing sport + serious math If you're interested in use (more than in development), you should give a try to rankade, our ranking system. Rankade is free and easy to use, and it's different from Elo ranking system (here's a comparison) because it can manage matches with 2+ players, and so it fits your needs, maybe. Rankade's algorithm (ree algorithm, that is extremely complex and not in the public domain, for the time being) manages your expected position structure ("compare players position with expected position given the strength of the opposition"), and provides rankings, and stats, and graphs, and more.
ELO rating for non-pairing sport + serious math If you're interested in use (more than in development), you should give a try to rankade, our ranking system. Rankade is free and easy to use, and it's different from Elo ranking system (here's a comp
50,951
R - How to fix NbClust error with error message: "The TSS matrix is indefinite. There must be too many missing values."
The answer you linked seems to suggest that negative eigenvalues tend to crop up with larger values of max.nc. So maybe try reducing that to something reasonable? I don't know how you'd go about interpreting 993 clusters in any case.
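A sketch of that suggestion, with simulated data and argument names as given in the NbClust documentation (treat this as an assumption-laden template rather than a recipe):

```r
# Keep max.nc modest and use a single index to limit the search.
library(NbClust)
set.seed(1)
X <- scale(matrix(rnorm(500 * 5), ncol = 5))
res <- NbClust(X, distance = "euclidean", min.nc = 2, max.nc = 10,
               method = "kmeans", index = "ch")
res$Best.nc   # suggested number of clusters under the chosen index
```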
R - How to fix NbClust error with error message: "The TSS matrix is indefinite. There must be too ma
The answer you linked seems to suggest that negative eigenvalues tend to crop up with larger values of max.nc. So maybe try reducing that to something reasonable? I don't know how you'd go about inter
R - How to fix NbClust error with error message: "The TSS matrix is indefinite. There must be too many missing values." The answer you linked seems to suggest that negative eigenvalues tend to crop up with larger values of max.nc. So maybe try reducing that to something reasonable? I don't know how you'd go about interpreting 993 clusters in any case.
R - How to fix NbClust error with error message: "The TSS matrix is indefinite. There must be too ma The answer you linked seems to suggest that negative eigenvalues tend to crop up with larger values of max.nc. So maybe try reducing that to something reasonable? I don't know how you'd go about inter
50,952
Is the choice of test statistics in hypothesis testing a completely philosophical one?
I am assuming that you are asking about the choice of test statistic within a specific statistical model rather than asking about the choice of statistical model. I am also assuming that you are asking about the test statistic to be used in a classical hypothesis test in the accept/reject manner. The choice of test statistic is made on the basis of the properties of the resulting test. There is good reason to choose the test statistic to optimise the power to discriminate between a true and a false test hypothesis, but it is also useful that the distribution of the test statistic be known. Student (Gosset) wanted to devise a significance test for means from small samples. His resulting t-test uses a particular test statistic, Student's t, not because he wanted to test the ratio of the mean to the standard error, but because the distribution of that test statistic is derivable. Whether you wish to call the choice of test statistic a "philosophical one" depends on what you mean by that. ;-)
Is the choice of test statistics in hypothesis testing a completely philosophical one?
I am assuming that you are asking about the choice of test statistic within a specific statistical model rather than asking about the choice of statistical model. I am also assuming that you are askin
Is the choice of test statistics in hypothesis testing a completely philosophical one? I am assuming that you are asking about the choice of test statistic within a specific statistical model rather than asking about the choice of statistical model. I am also assuming that you are asking about the test statistic to be used in a classical hypothesis test in the accept/reject manner. The choice of test statistic is made on the basis of the properties of the resulting test. There is good reason to choose the test statistic to optimise the power to discriminate between a true and false test hypothesis, but it is also useful that the distribution of the test statistic be known. Student (Gossett) wanted to devise a significance test for means from small samples. His resulting t-test uses a particular test statistic, Student's t, not because he wanted to test the ratio of the mean and standard error, but because the distribution of that test statistic is derivable. Whether you wish to call the choice of test statistic a "philosophical one" depends on what you mean by that. ;-)
Is the choice of test statistics in hypothesis testing a completely philosophical one? I am assuming that you are asking about the choice of test statistic within a specific statistical model rather than asking about the choice of statistical model. I am also assuming that you are askin
50,953
How do I classify images with non-rectangle shape with CNN?
You can perfectly well train a CNN on non-rectangular images. If you have a non-rectangular input image, you can still use square kernels. Note that the resulting size will differ in the two dimensions: if you have an 80*100 image and use a 5*5 filter, the resulting dimension will be 76*96 if you do not use any padding. You have to be careful with this dimension, because at the final layer you are probably going to use a fully connected layer whose size has to match exactly - otherwise you will get a dimension mismatch error. In your case, if you have some points missing, one option is to go with copy (replication) padding, which is similar to zero padding except that it copies the values of the border pixels into the newly added positions. And if you would have to add about 50% more pixels, it's better not to use that type of data for training - otherwise it is going to add noise to your data and make your classifier more confused.
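The output-size arithmetic mentioned above is easy to sanity-check with a couple of lines of R (generic formula, stride 1):

```r
# Output size of a convolution: floor((input + 2*pad - kernel) / stride) + 1.
conv_out <- function(input, kernel, pad = 0, stride = 1)
  floor((input + 2 * pad - kernel) / stride) + 1
conv_out(c(80, 100), kernel = 5)           # 76 96 (no padding)
conv_out(c(80, 100), kernel = 5, pad = 2)  # 80 100 ("same"-style padding)
```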
How do I classify images with non-rectangle shape with CNN?
You can perfectly train non-rectangular images with CNN. if you have non rectangular input image, you can still use the square kernels. Note that the resulting size will be different in two dimensions
How do I classify images with non-rectangle shape with CNN? You can perfectly train non-rectangular images with CNN. if you have non rectangular input image, you can still use the square kernels. Note that the resulting size will be different in two dimensions. If you have 80*100 image, and use 5*5 filter, resulting dimension will be 76*96 if you do not use any padding. You have to be careful with this dimension because - at the final layer you are probably going to use fully connected layer which size has to match exactly - otherwise you will get dimension mismatch error. In your case, if you have some points missing, one way to use is to go with copy padding, which is similar to zero padding except it adds the values of border pixels to newly added dimensions. And if you think you have to add more more pixels by about 50%, it's better not to use that type of data to train-otherwise its going to add more noise to your data - and making your classifier more confusing.
How do I classify images with non-rectangle shape with CNN? You can perfectly train non-rectangular images with CNN. if you have non rectangular input image, you can still use the square kernels. Note that the resulting size will be different in two dimensions
50,954
Link function in a Gamma-distribution GLM
why is the inverse used as the link function, i.e.: $μ=−(Xβ)^{−1}$ That's actually the mean-function $\mu(\eta)$. The link function is $\eta(\mu)$. However, both are in the form of a negative reciprocal in this case, since the negative of the reciprocal is its own inverse-function. In particular, why is the inverse the canonical link? Does it have to do with the natural parameters of the gamma distribution? Yes, that's where canonical links come from. See for example, the Wikipedia article on the Generalized Linear model, in the section on the link function: When using a distribution function with a canonical parameter $\theta$, the canonical link function is the function that expresses $\theta$ in terms of $\mu$ ('canonical parameter' being another term for natural parameter)
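As a small worked example, here is how a Gamma GLM with the reciprocal link is fitted in R; note that R parameterises the link as $1/\mu$, with the sign of the canonical $-1/\mu$ simply absorbed into the coefficients:

```r
# Simulated Gamma responses whose mean follows an inverse-link relationship.
set.seed(1)
x <- runif(200, 1, 3)
mu <- 1 / (0.5 + 0.3 * x)                  # true relationship on the 1/mu scale
y <- rgamma(200, shape = 5, rate = 5 / mu) # Gamma with mean mu
fit <- glm(y ~ x, family = Gamma(link = "inverse"))
coef(fit)                                  # roughly (0.5, 0.3) on the 1/mu scale
```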
Link function in a Gamma-distribution GLM
why is the inverse used as the link function, i.e.: $μ=−(Xβ)^{−1}$ That's actually the mean-function $\mu(\eta)$. The link function is $\eta(\mu)$. However, both are in the form of a negative recipro
Link function in a Gamma-distribution GLM why is the inverse used as the link function, i.e.: $μ=−(Xβ)^{−1}$ That's actually the mean-function $\mu(\eta)$. The link function is $\eta(\mu)$. However, both are in the form of a negative reciprocal in this case, since the negative of the reciprocal is its own inverse-function. In particular, why is the inverse the canonical link? Does it have to do with the natural parameters of the gamma distribution? Yes, that's where canonical links come from. See for example, the Wikipedia article on the Generalized Linear model, in the section on the link function: When using a distribution function with a canonical parameter $\theta$, the canonical link function is the function that expresses $\theta$ in terms of $\mu$ ('canonical parameter' being another term for natural parameter)
Link function in a Gamma-distribution GLM why is the inverse used as the link function, i.e.: $μ=−(Xβ)^{−1}$ That's actually the mean-function $\mu(\eta)$. The link function is $\eta(\mu)$. However, both are in the form of a negative recipro
50,955
Speed of convergence of probability
Consider $X_n$ such that $\mathbb P\left(X_n=n\right)=1/n$ and $\mathbb P\left(X_n=0\right)=1-1/n$, and $r_n:=1/n^2$. Since the sequence $X_n/r_n$ converges to $0$ in probability, we have in particular $X_n=O_p(r_n)$. However, for each positive $\varepsilon$, the quantity $\mathbb P\left(X_n\gt\varepsilon\right)$ is equal to $1/n$ if $n\gt \varepsilon$.
Speed of convergence of probability
Consider $X_n$ such that $\mathbb P\left(X_n=n\right)=1/n$ and $\mathbb P\left(X_n=0\right)=1-1/n$, and $r_n:=1/n^2$. Since the sequence $X_n/r_n$ converges to $0$ in probability, we have in particula
Speed of convergence of probability Consider $X_n$ such that $\mathbb P\left(X_n=n\right)=1/n$ and $\mathbb P\left(X_n=0\right)=1-1/n$, and $r_n:=1/n^2$. Since the sequence $X_n/r_n$ converges to $0$ in probability, we have in particular $X_n=O_p(r_n)$. However, for each positive $\varepsilon$, the quantity $\mathbb P\left(X_n\gt\varepsilon\right)$ is equal to $1/n$ if $n\gt \varepsilon$.
Speed of convergence of probability Consider $X_n$ such that $\mathbb P\left(X_n=n\right)=1/n$ and $\mathbb P\left(X_n=0\right)=1-1/n$, and $r_n:=1/n^2$. Since the sequence $X_n/r_n$ converges to $0$ in probability, we have in particula
50,956
Reject $H_0: \mu_1 = \mu_2 = \mu_3$ but not $H_0: \mu_1 = \mu_2, H_0: \mu_3 = \mu_2,$ or $H_0: \mu_1 = \mu_3$? [duplicate]
In mathematical terms, this amounts to checking whether or not the region made of the intersection of $(x-y)^2<8$, $(x-z)^2<8$, $(z-y)^2<8$, and $(x-z)^2+(x-y)^2-(x-y)(x-z)>9$ is empty. The first three constraints correspond to testing whether $X-Y\sim\mathcal{N}(0,2)$ is not too unlikely (and the same for $X-Z$ and $Y-Z$), while the last constraint is derived from the distribution $$(x-y\quad x-z)\left[\begin{matrix}2 &1\\ 1&2\end{matrix}\right]^{-1}\left(\begin{matrix}x-y\\ x-z\end{matrix} \right)\sim\chi^2_2$$under the null.
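A rough numerical check of that region (writing $d_1 = x-y$ and $d_2 = x-z$, so $z-y = d_1-d_2$, and using the rounded thresholds 8 and 9 from above) can be done by brute force in R:

```r
# Grid search over the pairwise-constrained box; report the largest joint form.
g  <- expand.grid(d1 = seq(-3, 3, by = 0.01), d2 = seq(-3, 3, by = 0.01))
ok <- with(g, d1^2 < 8 & d2^2 < 8 & (d1 - d2)^2 < 8)
max(with(g[ok, ], d1^2 + d2^2 - d1 * d2))  # tops out near 8, i.e. never exceeds 9
```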
Reject $H_0: \mu_1 = \mu_2 = \mu_3$ but not $H_0: \mu_1 = \mu_2, H_0: \mu_3 = \mu_2,$ or $H_0: \mu_1
In mathematical terms, this amounts to check whether or not the region made of the intersection of (x-y)²<8 (x-z)²<8 (z-y)²<8 (x-z)²+(x-y)²-(x-y)(x-z)>9 is empty. The first three constraints corresp
Reject $H_0: \mu_1 = \mu_2 = \mu_3$ but not $H_0: \mu_1 = \mu_2, H_0: \mu_3 = \mu_2,$ or $H_0: \mu_1 = \mu_3$? [duplicate] In mathematical terms, this amounts to check whether or not the region made of the intersection of (x-y)²<8 (x-z)²<8 (z-y)²<8 (x-z)²+(x-y)²-(x-y)(x-z)>9 is empty. The first three constraints correspond to testing whether $X-Y\sim\mathcal{N}(0,2)$ is not too unlikely (and the same for $X-Z$ and $Y-Z$), while the last constraint is derived from the distribution $$(x-y\quad x-z)\left[\begin{matrix}2 &1\\ 1&2\end{matrix}\right]^{-1}\left(\begin{matrix}x-y\\ x-z\end{matrix} \right)\sim\chi^2_2$$under the null.
Reject $H_0: \mu_1 = \mu_2 = \mu_3$ but not $H_0: \mu_1 = \mu_2, H_0: \mu_3 = \mu_2,$ or $H_0: \mu_1 In mathematical terms, this amounts to check whether or not the region made of the intersection of (x-y)²<8 (x-z)²<8 (z-y)²<8 (x-z)²+(x-y)²-(x-y)(x-z)>9 is empty. The first three constraints corresp
50,957
Statistical significance, repeatability, and sample size (50 shades of grey)
You pose several important questions, some focusing on hypothesis testing, some on multiplicity, and so forth. These are my answers: The typical approach is to repeat the experiment, analyzing it every time as if no prior study had been conducted. So in a standard frequentist framework only this eventual p counts. I find this wrong and wasteful, as a frequentist meta-analysis or a Bayesian synthesis would borrow information from prior studies as well. Notice indeed that a Bayesian meta-analysis without informative priors and a frequentist meta-analysis encompassing the same studies will provide very similar, if not identical, inferential estimates. Yet, take notice that the typical Food and Drug Administration approach for regulatory approval of drugs disregards prior studies for hypothesis testing. The problem is not simply sample size; it mainly has to do with the precision of the effect and the effect size you think meaningful. A drug which reduces blood pressure by 0.001 mm Hg might be shown efficacious in a mega-trial, but this statistical significance would not be clinically meaningful (that is why the ASA and many others are pushing for the abolition of p values and a transition to other approaches, such as confidence intervals). In any case, in a typical frequentist framework, a study can only test a single hypothesis (thus a single test) per study, with all the other analyses requiring some penalization for multiplicity. You may indeed perform a meta-analysis of meta-analyses (a meta-epidemiologic study), but this would mainly inform on the meta-analytic process, rather than on the intervention or effect size of interest. Accordingly, if you wish to simply combine studies, then you do not need a meta-meta-analysis. Selective publishing is very difficult to recognize by looking at a single study. If you have many studies, you can recognize peculiar patterns (e.g. p=0.049 occurring much more frequently than p=0.051). However, if you only have one study at hand, then the only hope resides in looking at the pre-specified protocol.
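To make the "borrowing information" point concrete, here is a toy inverse-variance (fixed-effect) pooling of three hypothetical study estimates in R; the numbers are invented purely for illustration:

```r
# Fixed-effect pooling: weight each study by the inverse of its variance.
est <- c(0.30, 0.22, 0.41)       # hypothetical effect estimates
se  <- c(0.15, 0.10, 0.20)       # their standard errors
w   <- 1 / se^2
pooled    <- sum(w * est) / sum(w)
pooled_se <- sqrt(1 / sum(w))    # smaller than any single-study SE
c(pooled = pooled, se = pooled_se)
```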
Statistical significance, repeatability, and sample size (50 shades of grey)
You pose several important questions, some focusing on hypothesis testing, some on multiplicity, and so forth. These are my answers: The typical approach is to repeat the experiment analyzing every t
Statistical significance, repeatability, and sample size (50 shades of grey) You pose several important questions, some focusing on hypothesis testing, some on multiplicity, and so forth. These are my answers: The typical approach is to repeat the experiment analyzing every time as no prior study has been conducted. So in a standard frequentist framework only this eventual p counts. I find it wrong and wasteful, as a frequentist meta-analysis or a Bayesian synthesis would borrow information from prior studies as well. Notice indeed that a Bayesian meta-analysis without informative priors and a frequentist meta-analysis encomppassing the same studies will provide very similar, if not identical, inferential estimates. Yet, take notice that the typical Food and Drug Administration approach for regulatory approval of drugs typically disregards prior studies for hypothesis testing. The problem is not simply sample size, it mainly has to do with the precision of the effect and the effect size you think meaningful. A drug which reduces blood pressure by 0.001 mm Hg might be shown efficacious in a mega-trial, but this statistical significance would not be clinically meaningful (that is why ASA and many others are pushing for the abolition of p values and a transition to other approaches, such as confidence intervals). In any case, in a typical frequentist framework, a study could only test a single hypothesis (single test thus), per study, with all the other analysis requiring some penalization for multiplicity. You may indeed perform a meta-analysis of meta-analyses (meta-epidemiologic study), but this would mainly inform on the meta-analytic process, rather than on the intervention or effect size of interest. Accordingly, if you wish to simply combine studies, than you do not need a meta-meta-analysis. Selective publishing is very difficult to recognize by looking at a single study. If you have many studies, you can recognize peculiar patterns (eg. p=0.049 occurring much more frequently than p=0.051). However, if you only have one study at hand, then the only hope resides in looking at the pre-specified protocol.
Statistical significance, repeatability, and sample size (50 shades of grey) You pose several important questions, some focusing on hypothesis testing, some on multiplicity, and so forth. These are my answers: The typical approach is to repeat the experiment analyzing every t
50,958
Time series with a sequence of zeros
A common approach to handle many zeroes in a time series is to use a Croston Model. To implement this model with your time series there are two R packages forecast and tsintermittent. The tsintermittent package optimizes the Croston model $\alpha$ parameter whereas the forecast package produces a forecast for a given $\alpha$ value. This is not the only approach to forecasting a time series w/ a sequence of zeroes but is a common approach. Another approach would be to fit a zero-inflated or hurdle model. However, as mentioned in the comments of the post by the OP, the measurements could be continuous on the positive half-line. With non-negative measurements and a zero point mass the most likely approach would be a Tweedie regression. Here is a good post from SO with example codes: https://stackoverflow.com/questions/21807118/r-codes-for-tweedie-compound-poisson-gamma
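A minimal sketch with the forecast package's croston() (the fixed-$\alpha$ implementation mentioned above), on a made-up intermittent series:

```r
# Croston forecast of an artificial series with many zeroes.
library(forecast)
set.seed(1)
y <- rpois(60, lambda = 0.4) * sample(1:5, 60, replace = TRUE)  # mostly zeroes
fc <- croston(y, h = 12)
fc$mean   # flat forecast of the demand rate over the next 12 periods
```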
Time series with a sequence of zeros
A common approach to handle many zeroes in a time series is to use a Croston Model. To implement this model with your time series there are two R packages forecast and tsintermittent. The tsintermitt
Time series with a sequence of zeros A common approach to handle many zeroes in a time series is to use a Croston Model. To implement this model with your time series there are two R packages forecast and tsintermittent. The tsintermittent package optimizes the Croston model $\alpha$ parameter whereas the forecast package produces a forecast for a given $\alpha$ value. This is not the only approach to forecasting a time series w/ a sequence of zeroes but is a common approach. Another approach would be to fit a zero-inflated or hurdle model. However, as mentioned in the comments of the post by the OP, the measurements could be continuous on the positive half-line. With non-negative measurements and a zero point mass the most likely approach would be a Tweedie regression. Here is a good post from SO with example codes: https://stackoverflow.com/questions/21807118/r-codes-for-tweedie-compound-poisson-gamma
Time series with a sequence of zeros A common approach to handle many zeroes in a time series is to use a Croston Model. To implement this model with your time series there are two R packages forecast and tsintermittent. The tsintermitt
50,959
Regression models with comparable MAE but differing R²
$R^2$ is a function of the MSE loss: $$ R^2=1-\dfrac{ \sum_i\big( y_i-\hat y_i \big)^2 }{\sum_i\big( y_i-\bar y \big)^2}=1-\dfrac{ n\,\text{MSE} }{\sum_i\big( y_i-\bar y \big)^2}. $$ Consequently, if you are comparing two models on their respective $R^2$ values, you are implicitly using MSE. Any model with a lower (worse) $R^2$ than another on the same data has a higher (worse) MSE. Since MAE does not square residuals, large misses are not punished as brutally as they are by MSE. My interpretation of your result that both models have comparable MAE but different MSE is that one (the low-$R^2$ model) tends to make small mistakes with the occasional terrible prediction, while the other tends to make larger mistakes but fewer of the colossal errors. This is likely to be reflected in plots of your distributions of model residuals.
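A toy R illustration of that interpretation: two residual patterns with similar MAE but very different MSE, and hence very different $R^2$ (computed here against an intercept-only baseline):

```r
# Pattern A: mostly tiny errors plus rare huge misses; pattern B: steady moderate errors.
set.seed(1)
y <- rnorm(1000, 10, 3)
res_a <- ifelse(runif(1000) < 0.02, rnorm(1000, 0, 15), rnorm(1000, 0, 0.8))
res_b <- runif(1000, -1.8, 1.8)
mae <- function(r) mean(abs(r)); mse <- function(r) mean(r^2)
r2  <- function(r) 1 - sum(r^2) / sum((y - mean(y))^2)
rbind(A = c(MAE = mae(res_a), MSE = mse(res_a), R2 = r2(res_a)),
      B = c(MAE = mae(res_b), MSE = mse(res_b), R2 = r2(res_b)))
```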
Regression models with comparable MAE but differing R²
$R^2$ is a function of MSE loss. $$ R^2=\dfrac{ \sum_i\big( y_i-\hat y_i \big)^2 }{\sum_i\big( y_i-\bar y \big)^2}\\=\dfrac{ nMSE }{\sum_i\big( y_i-\bar y \big)^2} $$ Consequently, if you are comparin
Regression models with comparable MAE but differing R² $R^2$ is a function of MSE loss. $$ R^2=\dfrac{ \sum_i\big( y_i-\hat y_i \big)^2 }{\sum_i\big( y_i-\bar y \big)^2}\\=\dfrac{ nMSE }{\sum_i\big( y_i-\bar y \big)^2} $$ Consequently, if you are comparing two models on their respective $R^2$ values, you are implicitly using MSE. Any model with lower (worse) $R^2$ than another of the same data has a higher (worse) MSE. Since MAE does not square residuals, large misses are not brutally punished the way they are by MSE. My interpretation of your results than both models have comparable MAE but different MSE is that one (low $R^2$) tends to make small mistakes with the occasional terrible prediction, while the other tends to make larger mistakes but fewer of the colossal errors. This is likely to be reflected in plots of your distributions of model residuals.
Regression models with comparable MAE but differing R² $R^2$ is a function of MSE loss. $$ R^2=\dfrac{ \sum_i\big( y_i-\hat y_i \big)^2 }{\sum_i\big( y_i-\bar y \big)^2}\\=\dfrac{ nMSE }{\sum_i\big( y_i-\bar y \big)^2} $$ Consequently, if you are comparin
50,960
What types of HMM are there?
I think your picture comes from Murphy's tutorial on graphical models. All these models can be described as "directed probabilistic graphical models" or "dynamic Bayesian networks". There are two problems we want to consider for these types of structured models: What is the structure (the connectivity among random variables)? What are the hidden variables (which random variables we cannot observe)? One can use domain knowledge to specify the structure or learn the structure from data; examples can be found here. Which variables are hidden usually comes from domain knowledge. In sum, the name of the model does not matter too much. All of them are directed probabilistic graphical models.
What types of HMM are there?
I think your picture coming from Murphy's tutorial on graphical model. And all these models can be named as "directed probabilistic graphical model" or "dynamic Bayesian network". There are two probl
What types of HMM are there? I think your picture coming from Murphy's tutorial on graphical model. And all these models can be named as "directed probabilistic graphical model" or "dynamic Bayesian network". There are two problems we want to consider on these types of structured models What is the structure (connectivity among random variables) What are the hidden variables (which random variable we cannot observe.) One can use knowledge specify the structure or learn the structure from data, Examples can be found here. Which variables are hidden usually comes from domain knowledge. In sum, the name of the model does not matter too much. All of them are directed probabilistic graphical models.
What types of HMM are there? I think your picture coming from Murphy's tutorial on graphical model. And all these models can be named as "directed probabilistic graphical model" or "dynamic Bayesian network". There are two probl
50,961
How to apply a model on dataset with missing data?
If you have access to the data set the model was trained on, you could impute the new data and then compare means, standard deviations, etc. to see how they differ. You could also work backwards and use the model as is on a data set and compute statistics for that set, then try out different imputation techniques on the test data set and continue to generate statistics and compare and contrast them. If predictor variables are missing, you might be able to throw out those data points if you have a sufficiently large data set to work off of. Also, if this is the case, you could retrain the model using imputation and cross-validation to achieve a desirable prediction score. Remember, if you do retrain the model on a new data set and there is missing data that you wish to impute, perform cross-validation and split your data set into train/test sets before imputation, as this will mimic real life. Then perform imputation on the training set, and once you have the trained model you want with the type of imputation you want, perform that same data preprocessing on your testing set. Hope this helps point you in the right direction!
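A minimal sketch of the train-then-apply workflow described above, using training-set medians as a deliberately simple imputation choice (made-up data):

```r
# Split first, learn the imputation on the training set, apply it to the test set.
set.seed(1)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$x1[sample(100, 15)] <- NA                      # introduce missingness
idx   <- sample(100, 70)
train <- d[idx, ]; test <- d[-idx, ]
med   <- median(train$x1, na.rm = TRUE)          # learned on the training set only
train$x1[is.na(train$x1)] <- med
test$x1[is.na(test$x1)]   <- med                 # same preprocessing applied to test
```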
How to apply a model on dataset with missing data?
If you have access to the data set the model was trained on, you could impute new data and then compare means, standard deviations etc. to see how they differ. You could also work backwards and use th
How to apply a model on dataset with missing data? If you have access to the data set the model was trained on, you could impute new data and then compare means, standard deviations etc. to see how they differ. You could also work backwards and use the model as is on a data set then compute statistics for that set, then try out different imputation techniques on the test data set and continue to generate statistics and compare and contrast them. If predictor variables are missing, you might be able to throw out those data points if you have a sufficiently large enough data set to work of off. Also if this is the case, you could retrain the model using imputation and cross validation to achieve a desirable prediction score. Remember if you do retrain the model on a new data set and there is missing data that you wish to impute, perform cross validation and split your data set into test/train sets before imputation. As this will mimic real life. Then perform imputation on the training set and once you have the trained model you want with the type of imputation you want. Perform that same data preprocessing on your testing set. Hope this helps point you in the right direction!
How to apply a model on dataset with missing data? If you have access to the data set the model was trained on, you could impute new data and then compare means, standard deviations etc. to see how they differ. You could also work backwards and use th
50,962
Minimal sufficiency with indicator functions
Someone described to me a clever proof sketch that I will flesh out for any interested parties. First, let's fix the problem about having 0 in the denominator by constructing an equivalent condition: $$ \frac{f(X|\theta)}{f(Y|\theta)} = \frac{ I( \max(X) < \theta ) }{ I( \max(Y) < \theta )} $$ is equivalent to: $$ \exists \; g(X,Y) \; \text{not depending on } \theta \; \text{such that:}$$ $$ \; {I( \max(X) < \theta ) = g(X,Y) \cdot I( \max(Y) < \theta )} \; \forall \; \theta \; \; \; \; \; \; (*)$$ Take $\max(X)$ as the candidate MSS. For the "if" direction, simply note that when $\max(X) = \max(Y)$, $(*)$ holds regardless of $\theta$, if we let $g(X,Y) = 1$. For the "only if" direction, we need to show that $(*)$ implies $\max(X) = \max(Y)$. We will prove the contrapositive: $\max(X) \ne \max(Y)$ implies $(*)$ does not hold, i.e., no $\theta$-free $g(X,Y)$ can satisfy the equality for all $\theta$. WLOG, let $\max(X) < \max(Y)$. Now there are three cases: (1) $\theta < \max(X) < \max(Y)$. Then both indicators are 0, so $g(X,Y)$ can be anything and $(*)$ will hold. (2) $\max(X) < \theta < \max(Y)$. Then the left-hand indicator is 1 while the right-hand indicator is 0, so no value of $g(X,Y)$ can fulfill $(*)$ for such $\theta$. (3) $\max(X) < \max(Y) < \theta$. Then the indicators are both 1, so $g(X,Y)$ must evaluate to 1 to fulfill $(*)$. Therefore, $\max(X) \ne \max(Y)$ implies that $(*)$ does not hold: no single $\theta$-free $g(X,Y)$ can satisfy the equality for every $\theta$, since case (2) already breaks it. And we're done.
Minimal sufficiency with indicator functions
Someone described to me a clever proof sketch that I will flesh out for any interested parties. First, let's fix the problem about having 0 in the denominator by constructing an equivalent condition:
Minimal sufficiency with indicator functions Someone described to me a clever proof sketch that I will flesh out for any interested parties. First, let's fix the problem about having 0 in the denominator by constructing an equivalent condition: $$ \frac{f(X|\theta)}{f(Y|\theta)} = \frac{ I( max(X) < \theta ) }{ I( max(Y) < \theta )} $$ is equivalent to: $$ \exists \; g(X,Y) \amalg \theta \; \text{such that:}$$ $$ \; {I( max(X) < \theta ) = g(X,Y) \cdot I( max(Y) < \theta )} \; \forall \; \theta \; \; \; \; \; \; (*)$$ Take $max(X)$ as the candidate MSS. For the "if" direction, simply note that when $max(X) = max(Y)$, $(*)$ holds regardless of $\theta$, if we let $g(X,Y) = 1$. For the "only if" direction, we need to show that $(*)$ implies $max(X) = max(Y)$. We will prove the contrapositive: $max(X) \ne max(Y)$ implies $(*)$ does not hold, i.e., $g(X,Y)$ depends on $\theta$. WLOG, let $max(X) < max(Y)$. Now there are three cases: (1) $\theta < max(X) < max(Y)$. Then both indicators are 0, so $g(X,Y)$ can be anything and $(*)$ will hold. (2) $max(X) < \theta < max(Y)$. Then the indicators don't match, so $g(X,Y)$ must evaluate to 0 to fulfill $(*)$. (3) $max(X) < max(Y) < \theta$. Then the indicators are both 1, so $g(X,Y)$ must evaluate to 1 to fulfill $(*)$. Therefore, $max(X) \ne max(Y)$ implies that $(*)$ does not hold, since $g(X,Y)$ would be a piecewise function dependent on $\theta$. And we're done.
Minimal sufficiency with indicator functions Someone described to me a clever proof sketch that I will flesh out for any interested parties. First, let's fix the problem about having 0 in the denominator by constructing an equivalent condition:
50,963
How does the complexity parameter correspond to the number of splits in cross validation in rpart?
There is one tree created, which is definitely overfitting the data. The specified minsplit essentially creates a tree that categorizes each terminal node into either all "present" or all "absent". rpart will not prune the tree for you, but can provide cross-validation for you to select the best subtree (i.e. select the complexity parameter $\alpha$). The best tree is any subset of the initial tree; below are a few options: library(rpart.plot) prp(tree,extra=1) #Initial tree with 16 splits prp(prune(tree,cp=0.042),extra=1) #Subtree with 10 splits prp(prune(tree,cp=0.068),extra=1) #Subtree with 5 splits prp(prune(tree,cp=0.14),extra=1) #Subtree with 1 split To decide which subtree is best, we have to perform cross-validation. First we have to determine the possible $\alpha$'s that would yield a subtree (from the initial tree). Then we divide the data into 10 groups and build 10 trees with the 'leave one group out' approach using a possible $\alpha$ to prune the tree. The left out group can determine which $\alpha$ worked best. The technical details can be seen in the rpart vignette The final tree that is returned is still the initial tree. You must use the prune function using the cross-validation plot to choose the best subtree. For this dataset, I don't think CART fits the data that well. If you perform 81-fold cross-validation (i.e. leave one observation out), you'll see five splits seems like the best tree. If you're looking for a model with better prediction accuracy, perhaps you should consider building a random forest.
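One common way to pick the complexity parameter from the cross-validation results, continuing with the tree object used in the snippets above (the rpart cptable columns are CP, xerror and xstd):

```r
# Inspect the cross-validation table and prune at the cp with smallest xerror.
printcp(tree)                      # CP table with xerror and xstd
plotcp(tree)                       # visual aid; the 1-SE rule is also common
best_cp <- tree$cptable[which.min(tree$cptable[, "xerror"]), "CP"]
pruned  <- prune(tree, cp = best_cp)
```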
How does the complexity parameter correspond to the number of splits in cross validation in rpart?
There is one tree created, which is definitely overfitting the data. The specified minsplit essentially creates a tree that categorizes each terminal node into either all "present" or all "absent".
How does the complexity parameter correspond to the number of splits in cross validation in rpart? There is one tree created, which is definitely overfitting the data. The specified minsplit essentially creates a tree that categorizes each terminal node into either all "present" or all "absent". rpart will not prune the tree for you, but can provide cross-validation for you to select the best subtree (i.e. select the complexity parameter $\alpha$). The best tree is any subset of the initial tree; below are a few options: library(rpart.plot) prp(tree,extra=1) #Initial tree with 16 splits prp(prune(tree,cp=0.042),extra=1) #Subtree with 10 splits prp(prune(tree,cp=0.068),extra=1) #Subtree with 5 splits prp(prune(tree,cp=0.14),extra=1) #Subtree with 1 split To decide which subtree is best, we have to perform cross-validation. First we have to determine the possible $\alpha$'s that would yield a subtree (from the initial tree). Then we divide the data into 10 groups and build 10 trees with the 'leave one group out' approach using a possible $\alpha$ to prune the tree. The left out group can determine which $\alpha$ worked best. The technical details can be seen in the rpart vignette The final tree that is returned is still the initial tree. You must use the prune function using the cross-validation plot to choose the best subtree. For this dataset, I don't think CART fits the data that well. If you perform 81-fold cross-validation (i.e. leave one observation out), you'll see five splits seems like the best tree. If you're looking for a model with better prediction accuracy, perhaps you should consider building a random forest.
How does the complexity parameter correspond to the number of splits in cross validation in rpart? There is one tree created, which is definitely overfitting the data. The specified minsplit essentially creates a tree that categorizes each terminal node into either all "present" or all "absent".
50,964
How to assess similarity of two sets of Principal Component Analysis loadings
Maybe the (modified) RV-coefficient is suitable for your problem. This measure computes the similarity/correlation between two matrices. Also note that a factor rotation is somewhat arbitrary; perhaps you should first do a procrustean factor rotation to 'match' the two factor structures.
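For reference, the plain (unmodified) RV coefficient is only a few lines of R; applying it directly to two loading matrices, as suggested above, looks like this with made-up matrices:

```r
# RV coefficient: tr(S1 S2) / sqrt(tr(S1^2) tr(S2^2)) with S = A A'.
rv <- function(A, B) {
  S1 <- tcrossprod(A); S2 <- tcrossprod(B)
  sum(S1 * S2) / sqrt(sum(S1 * S1) * sum(S2 * S2))
}
set.seed(1)
A <- matrix(rnorm(20), 10, 2)
B <- A + matrix(rnorm(20, sd = 0.3), 10, 2)
rv(A, B)   # close to 1 for similar loading patterns
```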
How to assess similarity of two sets of Principal Component Analysis loadings
Maybe the (modified) RV-coefficient is suitable for your problem. This measure computes the similarity/correlation between two matrices. Also note that a factor rotation is somewhat arbitrary, perhaps
How to assess similarity of two sets of Principal Component Analysis loadings Maybe the (modified) RV-coefficient is suitable for your problem. This measure computes the similarity/correlation between two matrices. Also note that a factor rotation is somewhat arbitrary, perhaps you should first do a procrustean factor rotation to 'match' the two factor structures.
How to assess similarity of two sets of Principal Component Analysis loadings Maybe the (modified) RV-coefficient is suitable for your problem. This measure computes the similarity/correlation between two matrices. Also note that a factor rotation is somewhat arbitrary, perhaps
50,965
Why does Box-Cox transformation fail in following situation?
(mostly copied from the comment by Nick Cox) The Box-Cox transform does not really fail: it is more that it is unnecessary, as there is no need for a transformation if max/min is small. With max/min small, all the observations are (relatively) far from zero, so any power transform is well approximated by a linear function over such a short interval!
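A quick illustration of why it is unnecessary, using MASS::boxcox() on data whose max/min ratio is close to 1: the profile likelihood over $\lambda$ is essentially flat, so the choice of power barely matters.

```r
# Flat Box-Cox profile when the response varies over a narrow positive range.
library(MASS)
set.seed(1)
x <- runif(100)
y <- 100 + x + rnorm(100, sd = 0.2)              # max(y)/min(y) is about 1.01
boxcox(lm(y ~ x), lambda = seq(-5, 5, 0.25))     # wide, flat confidence region
```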
Why does Box-Cox transformation fail in following situation?
(mostly copied from the comment by Nick Cox) The Box-Cox transform does not really fail: it is more that it is unnecessary, as there will be no need of transformation if max/min is small. Mostly, w
Why does Box-Cox transformation fail in following situation? (mostly copied from the comment by Nick Cox) The Box-Cox transform does not really fail: it is more that it is unnecessary, as there will be no need of transformation if max/min is small. Mostly, with max/min small all the observations are away from zero (relatively), so the power transform will be well approximated linearly over a short interval!
Why does Box-Cox transformation fail in following situation? (mostly copied from the comment by Nick Cox) The Box-Cox transform does not really fail: it is more that it is unnecessary, as there will be no need of transformation if max/min is small. Mostly, w
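A toy illustration of that point (my own example, not from the comment): when max/min is close to 1 the Box-Cox profile log-likelihood is nearly flat in lambda, so the estimate of lambda is meaningless but also harmless.
library(MASS)
set.seed(1)
y_narrow <- runif(200, min = 100, max = 105)       # max/min is about 1.05
y_wide   <- rexp(200) + 0.05                       # max/min is large
par(mfrow = c(1, 2))
boxcox(lm(y_narrow ~ 1), lambda = seq(-5, 5, 0.1)) # almost flat: the choice of lambda barely matters
boxcox(lm(y_wide ~ 1),   lambda = seq(-1, 1, 0.1)) # sharply peaked: the transformation matters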
50,966
How to find the deterministic function representation of a random variable?
This can be accomplished using inverse transformation sampling; it does not require a restriction on the desired distribution function. To do this, define $X \equiv f(\theta,\omega) \equiv \inf \{ r \in \mathbb{R} | F_\theta(r) \geqslant \omega \}$$^\dagger$ where $F_\theta$ is the desired distribution function conditional on the parameter $\theta$. Since the distribution $F_\theta$ is non-decreasing we have: $$\inf \{ r \in \mathbb{R} | F_\theta(r) \geqslant \omega \} \leqslant x \quad \quad \iff \quad \quad F_\theta(x) \geqslant \omega .$$ Hence, taking $\omega \sim \text{U}(0,1)$ independent of the parameter $\theta$ you then get: $$\begin{equation} \begin{aligned} \mathbb{P}(X \leqslant x | \theta) &= \mathbb{P} \Big( \inf \{ x \in \mathbb{R} | F_\theta(x) \geqslant \omega \} \leqslant x \Big| \theta \Big) \\[6pt] &= \mathbb{P} \Big( \omega \leqslant F_\theta(x) \Big| \theta \Big) \\[6pt] &= F_\theta(x), \\[6pt] \end{aligned} \end{equation}$$ which is the desired distribution for $X$ conditional on the parameter $\theta$. As you can see, there is no requirement on $F_\theta$ for this technique to work, though it is notable that it is not necessarily the most computationally efficient technique to generate your desired random variable. This technique can be extended to multivariate problems as shown in this related question. $^\dagger$ This infimum function is the "generalised inverse" of $F_\theta$. In the case where $F_\theta$ is continuous you get $\inf \{ r \in \mathbb{R} | F_\theta(r) \geqslant \omega \} = F_\theta^{-1}(\omega)$, which is the inverse in the regular functional sense.
How to find the deterministic function representation of a random variable?
This can be accomplished using inverse transformation sampling; it does not require a restriction on the desired distribution function. To do this, define $X \equiv f(\theta,\omega) \equiv \inf \{ r
How to find the deterministic function representation of a random variable? This can be accomplished using inverse transformation sampling; it does not require a restriction on the desired distribution function. To do this, define $X \equiv f(\theta,\omega) \equiv \inf \{ r \in \mathbb{R} | F_\theta(r) \geqslant \omega \}$$^\dagger$ where $F_\theta$ is the desired distribution function conditional on the parameter $\theta$. Since the distribution $F_\theta$ is non-decreasing we have: $$\inf \{ r \in \mathbb{R} | F_\theta(r) \geqslant \omega \} \leqslant x \quad \quad \iff \quad \quad F_\theta(x) \geqslant \omega .$$ Hence, taking $\omega \sim \text{U}(0,1)$ independent of the parameter $\theta$ you then get: $$\begin{equation} \begin{aligned} \mathbb{P}(X \leqslant x | \theta) &= \mathbb{P} \Big( \inf \{ x \in \mathbb{R} | F_\theta(x) \geqslant \omega \} \leqslant x \Big| \theta \Big) \\[6pt] &= \mathbb{P} \Big( \omega \leqslant F_\theta(x) \Big| \theta \Big) \\[6pt] &= F_\theta(x), \\[6pt] \end{aligned} \end{equation}$$ which is the desired distribution for $X$ conditional on the parameter $\theta$. As you can see, there is no requirement on $F_\theta$ for this technique to work, though it is notable that it is not necessarily the most computationally efficient technique to generate your desired random variable. This technique can be extended to multivariate problems as shown in this related question. $^\dagger$ This infimum function is the "generalised inverse" of $F_\theta$. In the case where $F_\theta$ is continuous you get $\inf \{ r \in \mathbb{R} | F_\theta(r) \geqslant \omega \} = F_\theta^{-1}(\omega)$, which is the inverse in the regular functional sense.
How to find the deterministic function representation of a random variable? This can be accomplished using inverse transformation sampling; it does not require a restriction on the desired distribution function. To do this, define $X \equiv f(\theta,\omega) \equiv \inf \{ r
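A small sketch of the construction above (my own code; the exponential rate and the discrete pmf are just examples):
set.seed(42)
omega <- runif(10000)                              # omega ~ U(0,1)
theta <- 2                                         # parameter of the target distribution
x_exp <- -log(1 - omega) / theta                   # closed-form inverse: Exponential(rate = theta)
p   <- c(0.2, 0.5, 0.3)                            # a hypothetical discrete pmf on {1, 2, 3}
cdf <- cumsum(p)
x_disc <- sapply(omega, function(w) min(which(cdf >= w)))   # inf{ r : F(r) >= w }
table(x_disc) / length(x_disc)                     # relative frequencies close to p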
50,967
n-gram language model
Language models are often used to compute the probability of a sentence. This is done by using the chain rule. For example if we want to estimate the probability of observing the sentence $w_1 w_2 w_3 w_4$ we can factorize it like so... $P(w_1, w_2, w_3, w_4) = P(w_4|w_3, w_2, w_1) P(w_3|w_2, w_1) P(w_2| w_1) P(w_1) $ Each of those terms is something that can be straightforwardly computed by the language model.
n-gram language model
Language models are often used to compute the probability of a sentence. This is done by using the chain rule. For example if we want to estimate the probability of observing the sentence $w_1 w_2 w_3
n-gram language model Language models are often used to compute the probability of a sentence. This is done by using the chain rule. For example if we want to estimate the probability of observing the sentence $w_1 w_2 w_3 w_4$ we can factorize it like so... $P(w_1, w_2, w_3, w_4) = P(w_4|w_3, w_2, w_1) P(w_3|w_2, w_1) P(w_2| w_1) P(w_1) $ Each of those terms is something that can be straightforwardly computed by the language model.
n-gram language model Language models are often used to compute the probability of a sentence. This is done by using the chain rule. For example if we want to estimate the probability of observing the sentence $w_1 w_2 w_3
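A toy numerical illustration of that product (the probabilities are made up; a real bigram model would estimate each term from counts and truncate the history to the previous word only):
p_w1     <- 0.10                    # P(w1)
p_w2_1   <- 0.30                    # P(w2 | w1)
p_w3_12  <- 0.25                    # P(w3 | w1, w2)
p_w4_123 <- 0.40                    # P(w4 | w1, w2, w3)
p_sentence <- p_w1 * p_w2_1 * p_w3_12 * p_w4_123
log_p <- log(p_w1) + log(p_w2_1) + log(p_w3_12) + log(p_w4_123)
c(p_sentence, exp(log_p))           # identical; work in log space for long sentences to avoid underflow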
50,968
How to specify when a level shift begins and ends or in the case of data series with multiple level shifts how to id when one level shift beings/ends?
I took a look at the data that you've posted. I appreciate your question because it gave me the opportunity to run it through with the software Autobox. You are quite correct when you say that there is a structural break in 2004, but had you considered that there may be multiple candidates for a break in parameters? See the following table for a list of these candidates. DIAGNOSTIC CHECK #4: THE CHOW PARAMETER CONSTANCY TEST The Critical value used for this test : .05 The minimum group or interval size was: 51 F TEST TO VERIFY CONSTANCY OF PARAMETERS CANDIDATE BREAKPOINT F VALUE P VALUE 52 2004/ 1 3.69947 .0267988938 57 2004/ 6 3.71222 .0264738875 62 2004/ 11 3.21210 .0427876196 67 2005/ 4 3.58410 .0299305651 72 2005/ 9 3.32751 .0382910942 77 2006/ 2 2.25996 .1075565489 82 2006/ 7 1.67456 .1905412741 87 2006/ 12 2.15920 .1186484852 92 2007/ 5 3.08986 .0481356253 97 2007/ 10 5.95847 .0031691039 102 2008/ 3 .291259 .7477028638 107 2008/ 8 1.21682 .2987975860 The table above is quite illuminating, and provides a useful illustration of why iterative selection processes can sometimes do better than the human eye. Not that any of this is your fault -- you've got a difficult data set in front of you. Am [I] to create a dummy variable for it taking value of 1 between 2004-2009 and 0 otherwise would that be correct? You were quite correct with not only the above statement but also the following one. This is the issue in time series. The Chicken or the Egg. So your next best bet is to take it in steps--try your AR(1)[12] with encoded level shifts, and then do the process in reverse. Eventually, both should converge on an answer; but that is only if your inclination of seasonality was right. If you look at the partial autocorrelation function or autocorrelation function of the series you have put forth, neither of them suggest the need for seasonal differencing. Autobox, after using the Chow test, determined that neither of the above was necessary. Candidate Period 97 (August 2007) was found to be the most statistically significant breakpoint. After deleting the first 96 values, things look a lot different than before. Y(T) =1649.3 +[X1(T)][(+ 2040.7 )] :PULSE 2008/ 9 108 +[X2(T)][(+ 1888.7 )] :PULSE 2009/ 6 117 +[X3(T)][(- 1144.0 )] :PULSE 2008/ 1 100 +[X4(T)][(+ 1797.5 )] :PULSE 2009/ 4 115 +[X5(T)][(+ 1543.3 )] :PULSE 2009/ 5 116 + [(1- .594B** 1)]**-1 [A(T)] It seems that the data is sufficiently explained by an AR(1) with a few identifiable outliers.
How to specify when a level shift begins and ends or in the case of data series with multiple level
I took a look at the data that you've posted. I appreciate your question because it gave me the opportunity to run it through with the software Autobox. You are quite correct when you say that there i
How to specify when a level shift begins and ends or in the case of data series with multiple level shifts how to id when one level shift beings/ends? I took a look at the data that you've posted. I appreciate your question because it gave me the opportunity to run it through with the software Autobox. You are quite correct when you say that there is a structural break in 2004, but had you considered that there may be multiple candidates for a break in parameters? See the following table for a list of these candidates. DIAGNOSTIC CHECK #4: THE CHOW PARAMETER CONSTANCY TEST The Critical value used for this test : .05 The minimum group or interval size was: 51 F TEST TO VERIFY CONSTANCY OF PARAMETERS CANDIDATE BREAKPOINT F VALUE P VALUE 52 2004/ 1 3.69947 .0267988938 57 2004/ 6 3.71222 .0264738875 62 2004/ 11 3.21210 .0427876196 67 2005/ 4 3.58410 .0299305651 72 2005/ 9 3.32751 .0382910942 77 2006/ 2 2.25996 .1075565489 82 2006/ 7 1.67456 .1905412741 87 2006/ 12 2.15920 .1186484852 92 2007/ 5 3.08986 .0481356253 97 2007/ 10 5.95847 .0031691039 102 2008/ 3 .291259 .7477028638 107 2008/ 8 1.21682 .2987975860 The table above is quite illuminating, and provides a useful illustration of why iterative selection processes can sometimes do better than the human eye. Not that any of this is your fault -- you've got a difficult data set in front of you. Am [I] to create a dummy variable for it taking value of 1 between 2004-2009 and 0 otherwise would that be correct? You were quite correct with not only the above statement but also the following one. This is the issue in time series. The Chicken or the Egg. So your next best bet is to take it in steps--try your AR(1)[12] with encoded level shifts, and then do the process in reverse. Eventually, both should converge on an answer; but that is only if your inclination of seasonality was right. If you look at the partial autocorrelation function or autocorrelation function of the series you have put forth, neither of them suggest the need for seasonal differencing. Autobox, after using the Chow test, determined that neither of the above was necessary. Candidate Period 97 (August 2007) was found to be the most statistically significant breakpoint. After deleting the first 96 values, things look a lot different than before. Y(T) =1649.3 +[X1(T)][(+ 2040.7 )] :PULSE 2008/ 9 108 +[X2(T)][(+ 1888.7 )] :PULSE 2009/ 6 117 +[X3(T)][(- 1144.0 )] :PULSE 2008/ 1 100 +[X4(T)][(+ 1797.5 )] :PULSE 2009/ 4 115 +[X5(T)][(+ 1543.3 )] :PULSE 2009/ 5 116 + [(1- .594B** 1)]**-1 [A(T)] It seems that the data is sufficiently explained by an AR(1) with a few identifiable outliers.
How to specify when a level shift begins and ends or in the case of data series with multiple level I took a look at the data that you've posted. I appreciate your question because it gave me the opportunity to run it through with the software Autobox. You are quite correct when you say that there i
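If you want to screen for candidate break dates yourself in open-source software, here is a rough sketch with the strucchange package (my addition; y stands for your monthly series, and the extraction of the chosen break index may need adjusting to your strucchange version):
library(strucchange)
bp <- breakpoints(y ~ 1)                      # search for mean-shift breakpoints
summary(bp)                                   # compares models with 0, 1, 2, ... breaks
brk <- breakpoints(bp)$breakpoints[1]         # index of the first selected break
shift <- as.integer(seq_along(y) > brk)       # 0/1 level-shift dummy from that date onward
arima(y, order = c(1, 0, 0), xreg = shift)    # AR(1) plus the encoded level shift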
50,969
How to specify when a level shift begins and ends or in the case of data series with multiple level shifts how to id when one level shift beings/ends?
@whuber, It's been 5 years and our latest version does detect a break at obs 122 in 2009. F TEST TO VERIFY CONSTANCY OF PARAMETERS CANDIDATE BREAKPOINT F VALUE P VALUE 52 2003/ 4 3.53 .031490 66 2004/ 6 2.85 .060928 80 2005/ 8 1.84 .162610 94 2006/ 10 1.26 .285174 108 2007/ 12 1.03 .358053 122 2009/ 2 4.56 .011817 * A change in the variance was suggested using the remaining data. ***** ITERATIVE RESULTS OF TSAY TEST ***** REGION 1 VAR REGION 2 VAR F VAL PROBABLITY 11 .21717E+06 40 .21545E+06 .9920685 .4536341 21 .21130E+06 30 .21879E+06 1.0354240 .4773001 31 .20461E+06 20 .23257E+06 1.1366749 .3672223 * 41 .20962E+06 10 .24051E+06 1.1473593 .3534678 An AR1 with an outlier and intercept are identified Y(T) = 1398.09 delay [X1(T)][(+ 978.07 )] :PULSE 2010/ 12 144 + [(1- .288B** 1)]**-1 [A(T)]
How to specify when a level shift begins and ends or in the case of data series with multiple level
@whuber, It's been 5 years and our latest version does detect a break at obs 122 in 2009. F TEST TO VERIFY CONSTANCY OF PARAMETERS CANDIDATE BREAKPOINT F V
How to specify when a level shift begins and ends or in the case of data series with multiple level shifts how to id when one level shift beings/ends? @whuber, It's been 5 years and our latest version does detect a break at obs 122 in 2009. F TEST TO VERIFY CONSTANCY OF PARAMETERS CANDIDATE BREAKPOINT F VALUE P VALUE 52 2003/ 4 3.53 .031490 66 2004/ 6 2.85 .060928 80 2005/ 8 1.84 .162610 94 2006/ 10 1.26 .285174 108 2007/ 12 1.03 .358053 122 2009/ 2 4.56 .011817 * A change in the variance was suggested using the remaining data. ***** ITERATIVE RESULTS OF TSAY TEST ***** REGION 1 VAR REGION 2 VAR F VAL PROBABLITY 11 .21717E+06 40 .21545E+06 .9920685 .4536341 21 .21130E+06 30 .21879E+06 1.0354240 .4773001 31 .20461E+06 20 .23257E+06 1.1366749 .3672223 * 41 .20962E+06 10 .24051E+06 1.1473593 .3534678 An AR1 with an outlier and intercept are identified Y(T) = 1398.09 delay [X1(T)][(+ 978.07 )] :PULSE 2010/ 12 144 + [(1- .288B** 1)]**-1 [A(T)]
How to specify when a level shift begins and ends or in the case of data series with multiple level @whuber, It's been 5 years and our latest version does detect a break at obs 122 in 2009. F TEST TO VERIFY CONSTANCY OF PARAMETERS CANDIDATE BREAKPOINT F V
50,970
What are the classification models that work on single-class classification problems?
There are plenty of possibilities for constructing one-class classifiers. I wrote a number of simple algorithms in the context of authorship verification. Here, only positive samples of one author X are given, so that the task is to judge whether a given document was written by X or not. However, the approach can be adapted to other fields besides authorship verification by just adjusting the features. Here are two of my papers: Oren Halvani, Lukas Graner, Inna Vogel. Authorship Verification in the Absence of Explicit Features and Thresholds. In: Pasi G., Piwowarski B., Azzopardi L., Hanbury A. (eds) Advances in Information Retrieval. ECIR 2018. Lecture Notes in Computer Science, vol 10772. Springer, Cham. https://doi.org/10.1007/978-3-319-76941-7_34 O. Halvani and M. Steinebach, "An Efficient Intrinsic Authorship Verification Scheme Based on Ensemble Learning," 2014 Ninth International Conference on Availability, Reliability and Security, 2014, pp. 571-578, doi: 10.1109/ARES.2014.84.
What are the classification models that work on single-class classification problems?
There are plenty possibilities to construct one-class-classifiers. I wrote a number of simple algorithms in the context of authorship verification. Here, only positive samples of one author X are gi
What are the classification models that work on single-class classification problems? There are plenty possibilities to construct one-class-classifiers. I wrote a number of simple algorithms in the context of authorship verification. Here, only positive samples of one author X are given, so that the task is to judge if a given document was written by X or not. However, it can be adapted to other fields besides authorship verification by just adjusting the features. Here are two of my papers: Oren Halvani, Lukas Graner, Inna Vogel. Authorship Verification in the Absence of Explicit Features and Thresholds In: Pasi G., Piwowarski B., Azzopardi L., Hanbury A. (eds) Advances in Information Retrieval. ECIR 2018. Lecture Notes in Computer Science, vol 10772. Springer, Cham. https://doi.org/10.1007/978-3-319-76941-7_34 O. Halvani and M. Steinebach, "An Efficient Intrinsic Authorship Verification Scheme Based on Ensemble Learning," 2014 Ninth International Conference on Availability, Reliability and Security, 2014, pp. 571-578, doi: 10.1109/ARES.2014.84.
What are the classification models that work on single-class classification problems? There are plenty possibilities to construct one-class-classifiers. I wrote a number of simple algorithms in the context of authorship verification. Here, only positive samples of one author X are gi
50,971
What are the classification models that work on single-class classification problems?
This is generally called One-Class Classification, Single-Class Classification, Outlier Detection or even Support Determination (i.e., what is the support of a distribution). These generally attempt to solve the problem of low-density rejection (i.e., rejecting points that fall in areas where the training data has low probability). See here for some theory, it also references various technical approaches and surveys, although this is an area of active research.
What are the classification models that work on single-class classification problems?
This is generally called One-Class Classification, Single-Class Classification, Outlier Detection or even Support Determination (i.e., what is the support of a distribution). These generally attempt t
What are the classification models that work on single-class classification problems? This is generally called One-Class Classification, Single-Class Classification, Outlier Detection or even Support Determination (i.e., what is the support of a distribution). These generally attempt to solve the problem of low-density rejection (i.e., rejecting points that fall in areas where the training data has low probability). See here for some theory, it also references various technical approaches and surveys, although this is an area of active research.
What are the classification models that work on single-class classification problems? This is generally called One-Class Classification, Single-Class Classification, Outlier Detection or even Support Determination (i.e., what is the support of a distribution). These generally attempt t
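One concrete implementation of low-density rejection is the one-class SVM. A rough sketch with e1071, using simulated data in place of the real "normal" class (nu roughly bounds the fraction of training points allowed to fall outside the estimated support):
library(e1071)
set.seed(1)
x_train <- matrix(rnorm(200 * 2), ncol = 2)                # the single observed class
oc <- svm(x_train, y = NULL, type = "one-classification",
          kernel = "radial", nu = 0.05)
x_new <- rbind(matrix(rnorm(10 * 2), ncol = 2),            # more of the same class
               matrix(rnorm(10 * 2, mean = 4), ncol = 2))  # points from elsewhere
predict(oc, x_new)        # TRUE = accepted as the known class, FALSE = rejected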
50,972
Permutation testing in multiply adjusted analyses
I think this paper might answer your question: Marti J. Anderson, "Permutation tests for univariate or multivariate analysis of variance and regression", http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.469.4226&rep=rep1&type=pdf Think about two-way ANOVA and permuting the rows. Then think about how to extend that to the case of continuous covariates. This might be an approach.
Permutation testing in multiply adjusted analyses
I think this paper might answer your question: Permutation tests for univariate or multivariate by Marti J. Anderson analysis of variance and regression http://citeseerx.ist.psu.edu/viewdoc/download?
Permutation testing in multiply adjusted analyses I think this paper might answer your question: Permutation tests for univariate or multivariate by Marti J. Anderson analysis of variance and regression http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.469.4226&rep=rep1&type=pdf Think about two-way ANOVA and permute the rows. Then think about how to extend to the occasion of continuous variables. This might be an approach.
Permutation testing in multiply adjusted analyses I think this paper might answer your question: Permutation tests for univariate or multivariate by Marti J. Anderson analysis of variance and regression http://citeseerx.ist.psu.edu/viewdoc/download?
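For concreteness, here is a rough sketch of one common scheme for permuting in the presence of covariates — a Freedman–Lane style test that permutes the residuals of the covariates-only model (my addition; the variable names are made up and this is only one of several valid permutation strategies):
perm_test <- function(y, x, covars, B = 2000) {
  reduced <- lm(y ~ ., data = covars)                 # covariates-only (reduced) model
  full    <- lm(y ~ x + ., data = covars)
  t_obs   <- summary(full)$coefficients["x", "t value"]
  res <- residuals(reduced); fit <- fitted(reduced)
  t_perm <- replicate(B, {
    y_star <- fit + sample(res)                       # permute the reduced-model residuals
    summary(lm(y_star ~ x + ., data = covars))$coefficients["x", "t value"]
  })
  mean(abs(t_perm) >= abs(t_obs))                     # two-sided permutation p-value
}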
50,973
The meaning of Kernel density estimation
Normalized densities are essentially likelihood ratios, not true probability densities. They will no longer integrate to 1. Also, the density values at 25%, 50%, 75% of the normalized maximum are not percentiles. To make this interpretable, you need to de-normalize and plot the level sets corresponding to some set of percentiles. Note that the KDE using Gaussian kernels will likely not be Gaussian.
The meaning of Kernel density estimation
Normalized densities are essentially likelihood ratios, not true probability densities. They will no longer integrate to 1. Also, plotting the density values at 25%,50%,75% of the normalized max are n
The meaning of Kernel density estimation Normalized densities are essentially likelihood ratios, not true probability densities. They will no longer integrate to 1. Also, plotting the density values at 25%,50%,75% of the normalized max are not percentiles. To make this interpretable, you need to de-normalize and plot the level sets corresponding to some set of percentiles. Note that the KDE using Gaussian densities will likely not be Gaussian.
The meaning of Kernel density estimation Normalized densities are essentially likelihood ratios, not true probability densities. They will no longer integrate to 1. Also, plotting the density values at 25%,50%,75% of the normalized max are n
50,974
Why is Levenberg-Marquardt only used with least squares problem?
Because it is based on a second-order approximation of the sum-of-squared-residuals objective: the Hessian is approximated by J'J built from the Jacobian of the residuals, which only exists when the objective has that squared-residual form. Hence it requires a sum-of-squares criterion. The method will fit any model you can write whose fitting criterion has this form.
Why is Levenberg-Marquardt only used with least squares problem?
Because it is based on a Second Order Approximation of the Squared Residual Function. Hence it requires "Squared Residual Function". The method will fit any model you can make which has the same form.
Why is Levenberg-Marquardt only used with least squares problem? Because it is based on a Second Order Approximation of the Squared Residual Function. Hence it requires "Squared Residual Function". The method will fit any model you can make which has the same form.
Why is Levenberg-Marquardt only used with least squares problem? Because it is based on a Second Order Approximation of the Squared Residual Function. Hence it requires "Squared Residual Function". The method will fit any model you can make which has the same form.
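To make that concrete, a bare-bones single Levenberg–Marquardt step (my addition): the Hessian is replaced by J'J, which is only available because the objective is a sum of squared residuals. In practice you would use an existing implementation such as minpack.lm::nls.lm rather than rolling your own.
lm_step <- function(beta, r_fun, J_fun, lambda = 1e-2) {
  r <- r_fun(beta)                       # residual vector at the current parameters
  J <- J_fun(beta)                       # Jacobian of the residuals
  A <- crossprod(J)                      # J'J: Gauss-Newton approximation to the Hessian
  g <- crossprod(J, r)                   # gradient of 0.5 * sum(r^2)
  beta - solve(A + lambda * diag(diag(A), ncol(A)), g)   # damped (Marquardt-scaled) update
}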
50,975
Count Outcomes in Three Card Poker
Here's a Python program that I believe is correct. It takes a little under 7 minutes on my machine to run through every pair of hands. from itertools import combinations from collections import Counter JACK, QUEEN, KING, ACE = 11, 12, 13, 14 N_HANDS = 22100 # 52 choose 3 deck = frozenset((rank, suit) for rank in range(2, ACE + 1) for suit in range(4)) assert len(deck) == 52 evaluate_hand_cache = {} def evaluate_hand(hand): if hand not in evaluate_hand_cache: evaluate_hand_cache[hand] = evaluate_hand_(hand) return evaluate_hand_cache[hand] def evaluate_hand_(hand): ranks = tuple(sorted((rank for (rank, _) in hand), reverse = True)) suits = [suit for (_, suit) in hand] if ranks[0] == ranks[1] == ranks[2]: # Three of a kind return (5, ranks[0]) straight = (ranks[0] == ranks[1] + 1 == ranks[2] + 2 or ranks == (ACE, 3, 2)) flush = suits[0] == suits[1] == suits[2] if straight and flush: return (6, ranks[0]) if straight: return (4, ranks[0]) if flush: return (3,) + ranks if ranks[0] == ranks[1]: # Pair return (2, ranks[0], ranks[2]) if ranks[1] == ranks[2]: # Also a pair return (2, ranks[2], ranks[0]) return (1,) + ranks results = Counter() divisor = N_HANDS // 100 for i, dealer_hand in enumerate(combinations(deck, 3)): if i % divisor == 0: print "{}% complete".format(i // divisor) dealer_hand = frozenset(dealer_hand) dealer_e = evaluate_hand(dealer_hand) for player_hand in combinations(deck - dealer_hand, 3): player_e = evaluate_hand(player_hand) outcome = ( 'Dealer does not qualify' if dealer_e < (2, QUEEN) else 'Player loses' if player_e < dealer_e else 'Player wins' if dealer_e < player_e else 'Tie') results[(player_e[0], outcome)] += 1 print "Player hand class,Outcome,Count" for (hand, outcome), n in sorted(results.items()): print ','.join([str(hand), outcome, str(n)]) The output is: Player hand class,Outcome,Count 1,Dealer does not qualify,264851952 1,Player loses,38038608 2,Dealer does not qualify,60280560 2,Player loses,8453520 2,Player wins,242784 2,Tie,2592 3,Dealer does not qualify,17630040 3,Player loses,1260596 3,Player wins,1298780 3,Tie,3288 4,Dealer does not qualify,11581584 4,Player loses,267804 4,Player wins,1392204 4,Tie,23688 5,Dealer does not qualify,836520 5,Player loses,3312 5,Player wins,118216 6,Dealer does not qualify,771024 6,Player loses,956 6,Player wins,112204 6,Tie,168
Count Outcomes in Three Card Poker
Here's a Python program that I believe is correct. It takes a little under 7 minutes on my machine to run through every pair of hands. from itertools import combinations from collections import Counte
Count Outcomes in Three Card Poker Here's a Python program that I believe is correct. It takes a little under 7 minutes on my machine to run through every pair of hands. from itertools import combinations from collections import Counter JACK, QUEEN, KING, ACE = 11, 12, 13, 14 N_HANDS = 22100 # 52 choose 3 deck = frozenset((rank, suit) for rank in range(2, ACE + 1) for suit in range(4)) assert len(deck) == 52 evaluate_hand_cache = {} def evaluate_hand(hand): if hand not in evaluate_hand_cache: evaluate_hand_cache[hand] = evaluate_hand_(hand) return evaluate_hand_cache[hand] def evaluate_hand_(hand): ranks = tuple(sorted((rank for (rank, _) in hand), reverse = True)) suits = [suit for (_, suit) in hand] if ranks[0] == ranks[1] == ranks[2]: # Three of a kind return (5, ranks[0]) straight = (ranks[0] == ranks[1] + 1 == ranks[2] + 2 or ranks == (ACE, 3, 2)) flush = suits[0] == suits[1] == suits[2] if straight and flush: return (6, ranks[0]) if straight: return (4, ranks[0]) if flush: return (3,) + ranks if ranks[0] == ranks[1]: # Pair return (2, ranks[0], ranks[2]) if ranks[1] == ranks[2]: # Also a pair return (2, ranks[2], ranks[0]) return (1,) + ranks results = Counter() divisor = N_HANDS // 100 for i, dealer_hand in enumerate(combinations(deck, 3)): if i % divisor == 0: print "{}% complete".format(i // divisor) dealer_hand = frozenset(dealer_hand) dealer_e = evaluate_hand(dealer_hand) for player_hand in combinations(deck - dealer_hand, 3): player_e = evaluate_hand(player_hand) outcome = ( 'Dealer does not qualify' if dealer_e < (2, QUEEN) else 'Player loses' if player_e < dealer_e else 'Player wins' if dealer_e < player_e else 'Tie') results[(player_e[0], outcome)] += 1 print "Player hand class,Outcome,Count" for (hand, outcome), n in sorted(results.items()): print ','.join([str(hand), outcome, str(n)]) The output is: Player hand class,Outcome,Count 1,Dealer does not qualify,264851952 1,Player loses,38038608 2,Dealer does not qualify,60280560 2,Player loses,8453520 2,Player wins,242784 2,Tie,2592 3,Dealer does not qualify,17630040 3,Player loses,1260596 3,Player wins,1298780 3,Tie,3288 4,Dealer does not qualify,11581584 4,Player loses,267804 4,Player wins,1392204 4,Tie,23688 5,Dealer does not qualify,836520 5,Player loses,3312 5,Player wins,118216 6,Dealer does not qualify,771024 6,Player loses,956 6,Player wins,112204 6,Tie,168
Count Outcomes in Three Card Poker Here's a Python program that I believe is correct. It takes a little under 7 minutes on my machine to run through every pair of hands. from itertools import combinations from collections import Counte
50,976
How to build scoring model (scorecard) from logistic regression?
The basic ideas are not that difficult: First model: You just multiply the respective coefficients with the new data points and see whether the sum is bigger than the negative intercept (then am is 1) Second model: You first bin the numerical variables into distinct intervals (with cut()) and then run the logistic regression again (dummy variables will be created automatically) . For new data points you check into which interval they would fall and add the resulting coefficients (if the interval is not present you assign 0). You again check whether the sum is bigger than the negative intercept (see first model above). You can scale all the coefficients and the intercept by multiplying with a factor (e.g. it is quite popular to take 20/ln(2)) As an example consider the following case where we want to build a toy scoring model for predicting am from the mtcars dataset: library(OneR) # for bin and eval_model function mtcars_bin <- bin(mtcars) m <- glm(am ~ hp + wt, data = mtcars_bin, family = binomial) ## Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred coefficients(m) #points for ranges ## (Intercept) hp(109,165] hp(165,222] hp(222,278] hp(278,335] ## 21.30781 19.62427 19.91155 40.03050 62.28971 ## wt(2.3,3.08] wt(3.08,3.86] wt(3.86,4.64] wt(4.64,5.43] ## -20.61467 -62.03146 -62.78544 -81.47841 prediction <- round(predict(m, type = 'response', mtcars_bin)) eval_model(prediction, mtcars_bin$am) ## ## Confusion matrix (absolute): ## Actual ## Prediction 0 1 Sum ## 0 18 1 19 ## 1 1 12 13 ## Sum 19 13 32 ## ## Confusion matrix (relative): ## Actual ## Prediction 0 1 Sum ## 0 0.56 0.03 0.59 ## 1 0.03 0.38 0.41 ## Sum 0.59 0.41 1.00 ## ## Accuracy: ## 0.9375 (30/32) ## ## Error rate: ## 0.0625 (2/32) ## ## Error rate reduction (vs. base rate): ## 0.8462 (p-value = 1.452e-05) ## ## different scaling coefficients(m) * 20/log(2) ## (Intercept) hp(109,165] hp(165,222] hp(222,278] hp(278,335] ## 614.8136 566.2367 574.5260 1155.0360 1797.3012 ## wt(2.3,3.08] wt(3.08,3.86] wt(3.86,4.64] wt(4.64,5.43] ## -594.8136 -1789.8496 -1811.6048 -2350.9701 Obviously this model can be further improved by cutting the intervals differently, e.g. combining the ones which give comparable scores, rounding etc., but the general principle stays the same.
How to build scoring model (scorecard) from logistic regression?
The basic ideas are not that difficult: First model: You just multiply the respective coefficients with the new data points and see whether the sum is bigger than the negative intercept (then am is 1
How to build scoring model (scorecard) from logistic regression? The basic ideas are not that difficult: First model: You just multiply the respective coefficients with the new data points and see whether the sum is bigger than the negative intercept (then am is 1) Second model: You first bin the numerical variables into distinct intervals (with cut()) and then run the logistic regression again (dummy variables will be created automatically) . For new data points you check into which interval they would fall and add the resulting coefficients (if the interval is not present you assign 0). You again check whether the sum is bigger than the negative intercept (see first model above). You can scale all the coefficients and the intercept by multiplying with a factor (e.g. it is quite popular to take 20/ln(2)) As an example consider the following case where we want to build a toy scoring model for predicting am from the mtcars dataset: library(OneR) # for bin and eval_model function mtcars_bin <- bin(mtcars) m <- glm(am ~ hp + wt, data = mtcars_bin, family = binomial) ## Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred coefficients(m) #points for ranges ## (Intercept) hp(109,165] hp(165,222] hp(222,278] hp(278,335] ## 21.30781 19.62427 19.91155 40.03050 62.28971 ## wt(2.3,3.08] wt(3.08,3.86] wt(3.86,4.64] wt(4.64,5.43] ## -20.61467 -62.03146 -62.78544 -81.47841 prediction <- round(predict(m, type = 'response', mtcars_bin)) eval_model(prediction, mtcars_bin$am) ## ## Confusion matrix (absolute): ## Actual ## Prediction 0 1 Sum ## 0 18 1 19 ## 1 1 12 13 ## Sum 19 13 32 ## ## Confusion matrix (relative): ## Actual ## Prediction 0 1 Sum ## 0 0.56 0.03 0.59 ## 1 0.03 0.38 0.41 ## Sum 0.59 0.41 1.00 ## ## Accuracy: ## 0.9375 (30/32) ## ## Error rate: ## 0.0625 (2/32) ## ## Error rate reduction (vs. base rate): ## 0.8462 (p-value = 1.452e-05) ## ## different scaling coefficients(m) * 20/log(2) ## (Intercept) hp(109,165] hp(165,222] hp(222,278] hp(278,335] ## 614.8136 566.2367 574.5260 1155.0360 1797.3012 ## wt(2.3,3.08] wt(3.08,3.86] wt(3.86,4.64] wt(4.64,5.43] ## -594.8136 -1789.8496 -1811.6048 -2350.9701 Obviously this model can be further improved by cutting the intervals differently, e.g. combining the ones which give comparable scores, rounding etc., but the general principle stays the same.
How to build scoring model (scorecard) from logistic regression? The basic ideas are not that difficult: First model: You just multiply the respective coefficients with the new data points and see whether the sum is bigger than the negative intercept (then am is 1
50,977
Use Available Pairs Method for Missing Data in OLS
It turns out there is a brief discussion of these methods in the book Missing Data Analysis by Little & Rubin chapter 3 section 4. Matthai (1951) and Wilks (1932) discussed the available cases covariance estimator. They both suggest using the $n_{jk}-1$ degrees of freedom correction to covariance estimates where $n_{jk}$ is the number of available pairs for the $j$, $k$-th covariance. Research through the 60s and 80s seems to suggest available-pairs methods only work reasonably well under a rather narrow set of conditions: in particular, the data ought to be MCAR and the correlations should be modest-to-small in size. Kim & Curry 1977, Van Praag Dijkstra and Van Velzen 1985, Haitovsky 1968, Van Guilder 1981). Some of the undesirable side effects when ideal conditions are not met are having correlation coefficients which exceed the -1, 1 range, singular covariance matrices. If that is possible, then it may not be possible to calculate $\left( \mathbf{X}^T\mathbf{X} \right)^{-1}$ without some ad hoc corrections. The counterexample that Little & Rubin give is the following: y1 y1 y3 1 1 NA 2 2 NA 3 3 NA 4 4 NA 1 NA 1 2 NA 2 3 NA 3 4 NA 4 NA 1 4 NA 2 3 NA 3 2 NA 4 1 The available pair analysis suggests the correlation from y1 to y2 is 1, and from y1 to y3 is 1. Intuitively that would mean the correlation between y2 and y3 is 1 but instead it is estimated to be -1. Based on the relatively simple mathematics, I would imagine that omitting the intercept and performing available pairs regression on centered predictor/outcome data would have no net effect on the estimates and their SEs.
Use Available Pairs Method for Missing Data in OLS
It turns out there is a brief discussion of these methods in the book Missing Data Analysis by Little & Rubin chapter 3 section 4. Matthai (1951) and Wilks (1932) discussed the available cases covaria
Use Available Pairs Method for Missing Data in OLS It turns out there is a brief discussion of these methods in the book Missing Data Analysis by Little & Rubin chapter 3 section 4. Matthai (1951) and Wilks (1932) discussed the available cases covariance estimator. They both suggest using the $n_{jk}-1$ degrees of freedom correction to covariance estimates where $n_{jk}$ is the number of available pairs for the $j$, $k$-th covariance. Research through the 60s and 80s seems to suggest available-pairs methods only work reasonably well under a rather narrow set of conditions: in particular, the data ought to be MCAR and the correlations should be modest-to-small in size. Kim & Curry 1977, Van Praag Dijkstra and Van Velzen 1985, Haitovsky 1968, Van Guilder 1981). Some of the undesirable side effects when ideal conditions are not met are having correlation coefficients which exceed the -1, 1 range, singular covariance matrices. If that is possible, then it may not be possible to calculate $\left( \mathbf{X}^T\mathbf{X} \right)^{-1}$ without some ad hoc corrections. The counterexample that Little & Rubin give is the following: y1 y1 y3 1 1 NA 2 2 NA 3 3 NA 4 4 NA 1 NA 1 2 NA 2 3 NA 3 4 NA 4 NA 1 4 NA 2 3 NA 3 2 NA 4 1 The available pair analysis suggests the correlation from y1 to y2 is 1, and from y1 to y3 is 1. Intuitively that would mean the correlation between y2 and y3 is 1 but instead it is estimated to be -1. Based on the relatively simple mathematics, I would imagine that omitting the intercept and performing available pairs regression on centered predictor/outcome data would have no net effect on the estimates and their SEs.
Use Available Pairs Method for Missing Data in OLS It turns out there is a brief discussion of these methods in the book Missing Data Analysis by Little & Rubin chapter 3 section 4. Matthai (1951) and Wilks (1932) discussed the available cases covaria
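Coding up the Little & Rubin toy data above makes the failure explicit (my addition): the available-pairs correlation matrix has a negative eigenvalue, so it is not positive semi-definite and cannot serve as a valid cross-product matrix without ad hoc fixes.
y1 <- c(1, 2, 3, 4, 1, 2, 3, 4, NA, NA, NA, NA)
y2 <- c(1, 2, 3, 4, NA, NA, NA, NA, 1, 2, 3, 4)
y3 <- c(NA, NA, NA, NA, 1, 2, 3, 4, 4, 3, 2, 1)
R <- cor(cbind(y1, y2, y3), use = "pairwise.complete.obs")
R                     # r(y1,y2) = 1, r(y1,y3) = 1, yet r(y2,y3) = -1
eigen(R)$values       # one eigenvalue is negative, so R is not a valid correlation matrix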
50,978
Alternatives to stepwise regression for generalized linear mixed models
How about the ensemble method of bootstrap aggregating, also known as bagging? Using this approach you essentially create a large number of replicates of the original dataset using simple random sampling with replacement (say 10,000 bootstrapped datasets). Then you implement a variable selection routine (perhaps best subsets or traditional stepwise selection methods) to select the coefficients or predictors that are significant for each of the bootstrapped samples. You run the routine on each bootstrapped sample and then look at how often each predictor is selected. Predictors that appear in, say, 90% or more of the resamples are then used in the final mixed model. There are many other methods that could be used too, but I highlight this one as it's simple to explain and usually very easy to implement. For more information see Breiman, Leo (1996). "Bagging predictors". Machine Learning 24 (2): 123–140. doi:10.1007/BF00058655.
Alternatives to stepwise regression for generalized linear mixed models
How about the ensemble method of boostrapped aggregating, also known as bragging? Using this approach you essentially create a large number of replicates of the original dataset using simple random s
Alternatives to stepwise regression for generalized linear mixed models How about the ensemble method of boostrapped aggregating, also known as bragging? Using this approach you essentially create a large number of replicates of the original dataset using simple random sampling with replacement (say 10,000 bootstrapped datasets) from your original dataset. Then you implement a variable selection routine (perhaps best subsets or traditional stepwise selection methods) to select the coefficients or predictors that are significant for each of the boostrapped samples. You perform the routines for each bootstrapped samples and then look at the rates of how often the predictors are selected. Predictors that appear in say 90% or more of the sample are then used in the final mixed model. There are many other methods that could be used too, but I highlight this one as it's simple to explain and usually very easy to implement. For more information see, Breiman, Leo (1996). "Bagging predictors". Machine Learning 24 (2): 123–140. doi:10.1007/BF00058655.
Alternatives to stepwise regression for generalized linear mixed models How about the ensemble method of boostrapped aggregating, also known as bragging? Using this approach you essentially create a large number of replicates of the original dataset using simple random s
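A rough sketch of the selection-frequency bookkeeping (my addition, using an ordinary lm with stepAIC and the built-in mtcars data purely for illustration — with a GLMM you would refit your mixed model inside the loop instead):
library(MASS)
set.seed(1)
dat <- mtcars                                       # stand-in for your data
B   <- 200                                          # use more in practice
sel <- matrix(0, nrow = B, ncol = ncol(dat) - 1,
              dimnames = list(NULL, setdiff(names(dat), "mpg")))
for (b in 1:B) {
  boot_dat <- dat[sample(nrow(dat), replace = TRUE), ]
  fit  <- stepAIC(lm(mpg ~ ., data = boot_dat), trace = FALSE)
  kept <- setdiff(names(coef(fit)), "(Intercept)")
  sel[b, kept] <- 1
}
sort(colMeans(sel), decreasing = TRUE)              # selection frequency of each predictor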
50,979
What is an example of data where the permutation test succeeds but a normal t-test fails?
In general, situations in which there is not enough data for the difference between the two sample means to have converged to something "near" the t distribution will cause the t-test to fail, in the sense of not having close to the specified probability of rejecting a true null hypothesis. Let's assume we are drawing two samples from some distribution that does not have a mean or variance, e.g, a Cauchy distribution. In that case, the two-sample t-test will fail for obvious reasons, but the permutation test will be unaffected. Here's an example showing, via simulation, roughly the distributions of the p-values observed from the t-test and the permutation test when comparing the sample means of two standard Cauchy variates with sample sizes of 10: p_value_t <- rep(0, 1000) p_value_perm <- rep(0, 1000) delta <- rep(0, 1000) for (i in seq_along(p_value_t)) { x1 <- rcauchy(10) x2 <- rcauchy(10) t_value <- (mean(x1) - mean(x2)) / sqrt(var(x1)/10 + var(x2)/10) p_value_t[i] <- pt(t_value, 18) x_all <- c(x1, x2) for (j in seq_along(delta)) { x_all <- sample(x_all) delta[j] <- mean(x_all[1:10]) - mean(x_all[11:20]) } p_value_perm[i] <- mean((mean(x1) - mean(x2)) < delta) } hist(p_value_t, main = "Histogram of t-test based p-values", xlab="p-value", ylab="Observed frequency") hist(p_value_perm, main = "Histogram of permutation based p-values", xlab="p-value", ylab="Observed frequency") ... and the resulting histograms: Clearly the p-values from the t-test do not have the desired Uniform distribution, but those from the permutation test look pretty good in this respect. We have made this example rather extreme, as the Cauchy does not have a mean or variance, but the fundamental principle holds: the t-test works reasonably well when the sample sizes are large enough so that the distribution of the difference between the sample means actually has, roughly, a t distribution with the calculated degrees of freedom, but breaks down when this assumption is violated more severely in practice.
What is an example of data where the permutation test succeeds but a normal t-test fails?
In general, situations in which there is not enough data for the difference between the two sample means to have converged to something "near" the t distribution will cause the t-test to fail, in the
What is an example of data where the permutation test succeeds but a normal t-test fails? In general, situations in which there is not enough data for the difference between the two sample means to have converged to something "near" the t distribution will cause the t-test to fail, in the sense of not having close to the specified probability of rejecting a true null hypothesis. Let's assume we are drawing two samples from some distribution that does not have a mean or variance, e.g, a Cauchy distribution. In that case, the two-sample t-test will fail for obvious reasons, but the permutation test will be unaffected. Here's an example showing, via simulation, roughly the distributions of the p-values observed from the t-test and the permutation test when comparing the sample means of two standard Cauchy variates with sample sizes of 10: p_value_t <- rep(0, 1000) p_value_perm <- rep(0, 1000) delta <- rep(0, 1000) for (i in seq_along(p_value_t)) { x1 <- rcauchy(10) x2 <- rcauchy(10) t_value <- (mean(x1) - mean(x2)) / sqrt(var(x1)/10 + var(x2)/10) p_value_t[i] <- pt(t_value, 18) x_all <- c(x1, x2) for (j in seq_along(delta)) { x_all <- sample(x_all) delta[j] <- mean(x_all[1:10]) - mean(x_all[11:20]) } p_value_perm[i] <- mean((mean(x1) - mean(x2)) < delta) } hist(p_value_t, main = "Histogram of t-test based p-values", xlab="p-value", ylab="Observed frequency") hist(p_value_perm, main = "Histogram of permutation based p-values", xlab="p-value", ylab="Observed frequency") ... and the resulting histograms: Clearly the p-values from the t-test do not have the desired Uniform distribution, but those from the permutation test look pretty good in this respect. We have made this example rather extreme, as the Cauchy does not have a mean or variance, but the fundamental principle holds: the t-test works reasonably well when the sample sizes are large enough so that the distribution of the difference between the sample means actually has, roughly, a t distribution with the calculated degrees of freedom, but breaks down when this assumption is violated more severely in practice.
What is an example of data where the permutation test succeeds but a normal t-test fails? In general, situations in which there is not enough data for the difference between the two sample means to have converged to something "near" the t distribution will cause the t-test to fail, in the
50,980
What is an example of data where the permutation test succeeds but a normal t-test fails?
I think a good example is the space shuttle data as given in the textbook $\it{The \ Statistical \ Sleuth}.$ The number of O-ring incidents is given by launch temperature as Temp $ \ \ \ \ \ \ \ \ \ \ \ \ $ Number of O-Ring Incidents $\lt 65^{\circ} \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1,1,1,3$ $\gt 65^{\circ} \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2$ A Welch's t-test on this data gives a p-value of $0.0853$ The permutation test gives $0.0099$
What is an example of data where the permutation test succeeds but a normal t-test fails?
I think a good example is the space shuttle data as given in the textbook $\it{The \ Statistical \ Sleuth}.$ The number of O-ring incidents is given by launch temperature as Temp $ \ \ \ \ \ \ \ \ \
What is an example of data where the permutation test succeeds but a normal t-test fails? I think a good example is the space shuttle data as given in the textbook $\it{The \ Statistical \ Sleuth}.$ The number of O-ring incidents is given by launch temperature as Temp $ \ \ \ \ \ \ \ \ \ \ \ \ $ Number of O-Ring Incidents $\lt 65^{\circ} \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1,1,1,3$ $\gt 65^{\circ} \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2$ A Welch's t-test on this data gives a p-value of $0.0853$ The permutation test gives $0.0099$
What is an example of data where the permutation test succeeds but a normal t-test fails? I think a good example is the space shuttle data as given in the textbook $\it{The \ Statistical \ Sleuth}.$ The number of O-ring incidents is given by launch temperature as Temp $ \ \ \ \ \ \ \ \ \
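For anyone who wants to reproduce this, a quick sketch (my addition; the permutation p-value is approximated by random re-labelling, so it will only roughly match the quoted value):
cold <- c(1, 1, 1, 3)                       # launches below 65 degrees
warm <- c(rep(0, 17), 1, 1, 2)              # launches above 65 degrees
t.test(cold, warm)                          # Welch's t-test
set.seed(1)
obs  <- mean(cold) - mean(warm)
pool <- c(cold, warm)
perm <- replicate(1e5, {
  idx <- sample(length(pool), length(cold))
  mean(pool[idx]) - mean(pool[-idx])
})
mean(perm >= obs)                           # one-sided permutation p-value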
50,981
Classification with unknown class
A viable alternative is to create two models: High vs. Low & Other Low vs. High & Other You'll get probabilities $\text{P(High|Data)}$ and $\text{P(Low|Data)}$. If neither probability is higher than a threshold (say $50\%$) you can label the instance as $\text{Unknown}$ instead. Example in R An example in R using kernlab's ksvm (any probabilistic classifier would work). library(kernlab) #our data x = as.matrix(iris[,-c(2,4,5)]) y = iris$Species #our new classes ysetosa = (y == "setosa") + 0 yversic = (y == "versicolor") + 0 #our two models fitsetosa = ksvm(y = ysetosa, x = x, type = "C-bsvc", prob.model = TRUE) fitversic = ksvm(y = yversic, x = x, type = "C-bsvc", prob.model = TRUE) #the class predictions predsetosa = predict(fitsetosa, x, type = "probabilities") predversic = predict(fitversic, x, type = "probabilities") #the unknown probability is 1 minus the other probabilities pred = cbind(setosa = predsetosa[,2L], versicolor = predversic[,2L], unknown = 1 - predsetosa[,2L] - predversic[,2L]) tail(pred) #> tail(pred) # setosa versicolor unknown #[145,] 0.009275878 0.005356246 0.9853679 #[146,] 0.009058278 0.141930931 0.8490108 #[147,] 0.009945749 0.101307355 0.8887469 #[148,] 0.009903443 0.034164283 0.9559323 #[149,] 0.009027848 0.002268708 0.9887034 #[150,] 0.009679991 0.028774113 0.9615459 We know the last 50 examples in iris are neither setosa nor versicolor, and this is reflected in the respective probabilities. Issues The difference can generate negative probabilities. Better methods for probability coupling exist and should be used instead. I'm fairly sure you can edit kernlab ones (mostly based on binary probabilities) to not sum to 1, which in practice would result in the example I created.
Classification with unknown class
A viable alternative is to create two models: High vs. Low & Other Low vs. High & Other You'll get probabilities $\text{P(High|Data)}$ and $\text{P(Low|Data)}$. If neither probability is higher tha
Classification with unknown class A viable alternative is to create two models: High vs. Low & Other Low vs. High & Other You'll get probabilities $\text{P(High|Data)}$ and $\text{P(Low|Data)}$. If neither probability is higher than a threshold (say $50\%$) you can label the instance as $\text{Unknown}$ instead. Example in R An example in R using kernlab's ksvm (any probabilistic classifier would work). library(kernlab) #our data x = as.matrix(iris[,-c(2,4,5)]) y = iris$Species #our new classes ysetosa = (y == "setosa") + 0 yversic = (y == "versicolor") + 0 #our two models fitsetosa = ksvm(y = ysetosa, x = x, type = "C-bsvc", prob.model = TRUE) fitversic = ksvm(y = yversic, x = x, type = "C-bsvc", prob.model = TRUE) #the class predictions predsetosa = predict(fitsetosa, x, type = "probabilities") predversic = predict(fitversic, x, type = "probabilities") #the unknown probability is 1 minus the other probabilities pred = cbind(setosa = predsetosa[,2L], versicolor = predversic[,2L], unknown = 1 - predsetosa[,2L] - predversic[,2L]) tail(pred) #> tail(pred) # setosa versicolor unknown #[145,] 0.009275878 0.005356246 0.9853679 #[146,] 0.009058278 0.141930931 0.8490108 #[147,] 0.009945749 0.101307355 0.8887469 #[148,] 0.009903443 0.034164283 0.9559323 #[149,] 0.009027848 0.002268708 0.9887034 #[150,] 0.009679991 0.028774113 0.9615459 We know the last 50 examples in iris are neither setosa nor versicolor, and this is reflected in the respective probabilities. Issues The difference can generate negative probabilities. Better methods for probability coupling exist and should be used instead. I'm fairly sure you can edit kernlab ones (mostly based on binary probabilities) to not sum to 1, which in practice would result in the example I created.
Classification with unknown class A viable alternative is to create two models: High vs. Low & Other Low vs. High & Other You'll get probabilities $\text{P(High|Data)}$ and $\text{P(Low|Data)}$. If neither probability is higher tha
50,982
Classification with unknown class
Yes, this looks like an anomaly detection problem. What you could also try is to generate artificial samples for your third class and train your model using them. Of course, the other question is how you generate them, but that depends heavily on the problem you are solving.
Classification with unknown class
Yes, looks like anomaly detection problem. What you could also try is to generate artificial samples for your third class and train your model using them. Of course, the other question is how you gene
Classification with unknown class Yes, looks like anomaly detection problem. What you could also try is to generate artificial samples for your third class and train your model using them. Of course, the other question is how you generate it. But this highly depends on problem you solve.
Classification with unknown class Yes, looks like anomaly detection problem. What you could also try is to generate artificial samples for your third class and train your model using them. Of course, the other question is how you gene
50,983
Classification with unknown class
This depends on your classifier. Any method that assigns weights to each class and then uses a decision rule could be modified in this way. For instance, a random forest typically uses majority voting. Say you have one with 1000 decision trees. You could modify the method so as to assign a class label only if one of your classes is predicted by at least, say, 600 decision trees, and output "Unknown" if the vote counts are too close, i.e. both are between 400 and 600.
Classification with unknown class
This depends on your classifier. Any method that assigns weights to each class and then use a decision rule could be modified to your method. For instance, a random forest typically uses majority voti
Classification with unknown class This depends on your classifier. Any method that assigns weights to each class and then use a decision rule could be modified to your method. For instance, a random forest typically uses majority voting. Say you have one with 1000 decision trees. You could modify the method so as to use majority vote only if one of your classes is predicted by at least, say, 600 decision trees, otherwise output "Unknown" if the vote counts are too close, i.e. both are between 400 and 600.
Classification with unknown class This depends on your classifier. Any method that assigns weights to each class and then use a decision rule could be modified to your method. For instance, a random forest typically uses majority voti
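A rough sketch of that voting-threshold rule using randomForest's out-of-bag vote fractions (my addition; the data and the 0.6 cut-off are purely illustrative):
library(randomForest)
set.seed(1)
dat <- droplevels(iris[iris$Species != "setosa", ])    # two overlapping classes
rf  <- randomForest(Species ~ ., data = dat, ntree = 1000)
votes <- rf$votes                                      # out-of-bag vote fractions per class
pred  <- ifelse(apply(votes, 1, max) >= 0.6,           # the "at least 600 of 1000 trees" rule
                colnames(votes)[max.col(votes)], "Unknown")
table(pred)                                            # any borderline cases with split votes get labelled "Unknown"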
50,984
Classification with unknown class
Probably you don't need to change your classifier at all, just the way you interpret its results. E.g. if you have 2 classifiers and the first one predicts a 10% probability that the sample belongs to its class and the second one predicts 9%, then it seems that the sample doesn't belong to either of those classes, so you can just label it as "other".
Classification with unknown class
Probably you don't need to change your classifier at all, just the way how you interpret results. E. g. if you have 2 classifiers and first one predict 10% probability that training sample belongs to
Classification with unknown class Probably you don't need to change your classifier at all, just the way how you interpret results. E. g. if you have 2 classifiers and first one predict 10% probability that training sample belongs to it's class and the second one predict 9% probability, then it seems that training sample doesn't belong to any of that classes, so you can just label it as "other".
Classification with unknown class Probably you don't need to change your classifier at all, just the way how you interpret results. E. g. if you have 2 classifiers and first one predict 10% probability that training sample belongs to
50,985
Performing Cross Validation to Compare Lasso and Other Regression Models in R
I'm not entirely sure I understand precisely where in the analysis pipeline your question is, but I think I can address it by walking through the steps you'll want to take. The software portion of your question is off-topic on CV, but the questions about CV are on-topic, so I'll answer those. My question is: is it technically proper CV to determine the overall CV error by averaging the error on each fold given that the lambda chosen for each fold will be producing a different lasso result? The elementary model development process is usually presented with respect to three partitions of your whole data set: train, test and validate. Training and test data are used together to tune model hyperparameters. Validation data is used to assess the performance of alternative models against data that wasn't used in model construction. The notion is that this is representative of new data that the model might encounter. A slightly more sophisticated elaboration on this process is nested cross-validation. This is preferred because, across the whole process, all data is eventually used in testing and training the model. Instead of using one partitioning of the data, you can do CV partitioning on the whole data set (the outer partition) and then again on the data left over when you hold out one of the outer partitions (the inner set). Here, you tune model hyperparameters on the inner set and have out-of-sample performance evaluated on the outer holdout set. The final model is prepared by composing a final partition over the entire data set, using CV to select a final tuple of hyperparameters and then, at last, estimating a single model on all available data given that selected tuple. In this way, the model building process kind of telescopes on itself, collapsing CV steps as we estimate the final model. It doesn't matter that alternative inner sets might give you different $\lambda_\text{min}$. What you're characterizing with your out-of-sample performance metrics is the model selection process itself. At the end of the day, you'll still only estimate one model, and that's the value of $\lambda_\text{min}$ that you care about. In the preceeding steps, you don't need to know the particular value of $\lambda_\text{min}$ except as a means to achieve out-of-sample estimates. While I know that there is some discussion about using stepwise regression, I have used the stepAIC function to prune my variable set. This is a bit of an understatement: it's not a discussion, it's a consensus that stepwise results are dubious. If you're fitting a lasso anyway, you can get statistically valid model by omitting the stepwise regression step from your analysis. Moreover, since the lasso step won't "see" the stepwise step, your results will have too-narrow error bands and cross-validation results will be irreparably biased. And lasso makes the entire stepwise step pointless anyway, because they solve the same problem! Lasso solves all of the variable selection problems that stepwise attempts to while avoiding the wealth of widely-accepted criticisms of stepwise strategies. There's no downside to using lasso on its own in this case. I'm convinced the only reason stepwise methods are included in R is for pedagogical reasons, and so that the functionality is available should someone need to demonstrate why it's hazardous.
Performing Cross Validation to Compare Lasso and Other Regression Models in R
I'm not entirely sure I understand precisely where in the analysis pipeline your question is, but I think I can address it by walking through the steps you'll want to take. The software portion of you
Performing Cross Validation to Compare Lasso and Other Regression Models in R I'm not entirely sure I understand precisely where in the analysis pipeline your question is, but I think I can address it by walking through the steps you'll want to take. The software portion of your question is off-topic on CV, but the questions about CV are on-topic, so I'll answer those. My question is: is it technically proper CV to determine the overall CV error by averaging the error on each fold given that the lambda chosen for each fold will be producing a different lasso result? The elementary model development process is usually presented with respect to three partitions of your whole data set: train, test and validate. Training and test data are used together to tune model hyperparameters. Validation data is used to assess the performance of alternative models against data that wasn't used in model construction. The notion is that this is representative of new data that the model might encounter. A slightly more sophisticated elaboration on this process is nested cross-validation. This is preferred because, across the whole process, all data is eventually used in testing and training the model. Instead of using one partitioning of the data, you can do CV partitioning on the whole data set (the outer partition) and then again on the data left over when you hold out one of the outer partitions (the inner set). Here, you tune model hyperparameters on the inner set and have out-of-sample performance evaluated on the outer holdout set. The final model is prepared by composing a final partition over the entire data set, using CV to select a final tuple of hyperparameters and then, at last, estimating a single model on all available data given that selected tuple. In this way, the model building process kind of telescopes on itself, collapsing CV steps as we estimate the final model. It doesn't matter that alternative inner sets might give you different $\lambda_\text{min}$. What you're characterizing with your out-of-sample performance metrics is the model selection process itself. At the end of the day, you'll still only estimate one model, and that's the value of $\lambda_\text{min}$ that you care about. In the preceeding steps, you don't need to know the particular value of $\lambda_\text{min}$ except as a means to achieve out-of-sample estimates. While I know that there is some discussion about using stepwise regression, I have used the stepAIC function to prune my variable set. This is a bit of an understatement: it's not a discussion, it's a consensus that stepwise results are dubious. If you're fitting a lasso anyway, you can get statistically valid model by omitting the stepwise regression step from your analysis. Moreover, since the lasso step won't "see" the stepwise step, your results will have too-narrow error bands and cross-validation results will be irreparably biased. And lasso makes the entire stepwise step pointless anyway, because they solve the same problem! Lasso solves all of the variable selection problems that stepwise attempts to while avoiding the wealth of widely-accepted criticisms of stepwise strategies. There's no downside to using lasso on its own in this case. I'm convinced the only reason stepwise methods are included in R is for pedagogical reasons, and so that the functionality is available should someone need to demonstrate why it's hazardous.
Performing Cross Validation to Compare Lasso and Other Regression Models in R I'm not entirely sure I understand precisely where in the analysis pipeline your question is, but I think I can address it by walking through the steps you'll want to take. The software portion of you
50,986
What is the difference between identifiable and estimable?
Identifiability is related to the mathematical model without consideration for any real-world noise in the observations. Estimability takes measurement noise into account. Jacquez, J.A. & P. Greif (1985) "Numerical Parameter Identifiability and Estimability: Integrating Identifiability, Estimability, and Optimal Sampling Design", Mathematical Biosciences 77(1):201-227.
What is the difference between identifiable and estimable?
Identifiability is related to the mathematical model without consideration for any real-world noise in the observations. Estimability takes measurement noise into account. Jacquez, J.A. & P. Greif (19
What is the difference between identifiable and estimable? Identifiability is related to the mathematical model without consideration for any real-world noise in the observations. Estimability takes measurement noise into account. Jacquez, J.A. & P. Greif (1985) "Numerical Parameter Identifiability and Estimability: Integrating Identifiability, Estimability, and Optimal Sampling Design", Mathematical Biosciences 77(1):201-227.
What is the difference between identifiable and estimable? Identifiability is related to the mathematical model without consideration for any real-world noise in the observations. Estimability takes measurement noise into account. Jacquez, J.A. & P. Greif (19
50,987
Joint probability of a minimum and maximum score after $n$ dice rolls
I'll post an "answer," mainly composed of wikichung's ideas. We independently roll the unbiased die $n$ times obtaining $X_1, \dots, X_n$, with minimum value $m$ and maximum value $M$. What is $P(m=2, M=5)$? You noted that $$ P(2 \leq X_i \leq 5; \forall i) = \prod_{i=1}^n \frac{4}{6} = \left(\frac{4}{6}\right)^n. $$ Now how does this relate to $P(m=2, M=5)$? It is too big! For example, we are counting events for which $m \neq 2$, such as $(5, 5, \dots, 5)$. However, if for each $i$, $2 \leq X_i \leq 5$, then $m \geq 2$ and $M \leq 5$. So we have determined that $$ P(m \geq 2, M \leq 5) = \left(\frac{4}{6}\right)^n. $$ By the same logic, for any $a \leq b$: $$ P(m \geq a, M \leq b) = \left(\frac{b-a+1}{6}\right)^n. $$ So I will leave the last question to you. How can we rewrite $P(m=2, M=5)$ by adding and subtracting terms of the form $P(m\geq a , M\leq b)?$ This is where you will need inclusion/exclusion. edit - Additional Information Define $S(m,M) = |\{$sequences with min $=m$ and max $=M\}|$, and $T(m,M) = |\{$sequences with min $\geq m$ and max $\leq M\}|$. Then we have: $$ S(2,5) = T(2,5) - T(3,5) - T(2,4) + ?. $$ There are some sequences counted by $T(2,5)$ which are not counted by $S(2,5)$, but which were subtracted twice by the two negative terms. What are they?
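For a numerical sanity check, here is a short R sketch that fills in the remaining term of the inclusion/exclusion identity (so it does give away the "?") and compares the result with a Monte Carlo estimate; n = 4 is an arbitrary choice for illustration.

n <- 4
p_box <- function(a, b) ((b - a + 1) / 6)^n          # P(min >= a, max <= b)
exact <- p_box(2, 5) - p_box(3, 5) - p_box(2, 4) + p_box(3, 4)

set.seed(1)
sims <- replicate(1e5, {
  r <- sample(1:6, n, replace = TRUE)
  min(r) == 2 && max(r) == 5
})
c(exact = exact, simulated = mean(sims))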
Joint probability of a minimum and maximum score after $n$ dice rolls
I'll post an "answer," mainly composed of wikichung's ideas. We independently roll the unbiased die $n$ times obtaining $X_1, \dots, X_n$, with minimum value $m$ and maximum value $M$. What is $P(m=
Joint probability of a minimum and maximum score after $n$ dice rolls I'll post an "answer," mainly composed of wikichung's ideas. We independently roll the unbiased die $n$ times obtaining $X_1, \dots, X_n$, with minimum value $m$ and maximum value $M$. What is $P(m=2, M=5)$? You noted that $$ P(2 \leq X_i \leq 5; \forall i) = \prod_{i=1}^n \frac{4}{6} = \left(\frac{4}{6}\right)^n. $$ Now how does this relate to $P(m=2, M=5)$? It is two big! For example we are counting events for which $m \neq 2$, such as $(5, 5, \dots, 5)$. However, if for each $i$, $2 \leq X_i \leq 5$, then $m \geq 2$ and $M \leq 5$. So we have determined that, $$ P(m \geq 2, M \leq 5) = \frac{4}{6}. $$ By the same logic for any $a \leq b$: $$ P(m \geq a, M \leq b) = \frac{b-a+1}{6}. $$ So I will leave the last question to you. How can we rewrite $P(m=2, M=5)$ by adding and subtracting terms of the form $P(m\geq a , M\leq b)?$ This is where you will need inclusion/exclusion. edit - Additional Information Define $S(m,M) = |\{$sequences with min $=m$ and max $=M\}|$, and $T(m,M) = |\{$sequences with min $\geq m$ and max $\leq M\}|$. Then we have: $$ S(2,5) = T(2,5) - T(3,5) - t(2,4) + ?. $$ There are some sequences counted by $T(2,5)$, which are not counted by $S(2,5)$, but which were doubly undone with the two negative terms. What are they?
Joint probability of a minimum and maximum score after $n$ dice rolls I'll post an "answer," mainly composed of wikichung's ideas. We independently roll the unbiased die $n$ times obtaining $X_1, \dots, X_n$, with minimum value $m$ and maximum value $M$. What is $P(m=
50,988
Sales forecast with an ARIMA model
Since your data has an upward trend, it is good that your model has an upward trend. The data looks exponential, so using a log transform is a good idea. However, it looks like your model's variance is lower than your data's variance. I would try more auto-regressive terms, e.g. ARIMA(7,1,0), ARIMA(9,1,0), etc. This might help. You could also average every 2 data points before analyzing. This would produce one data point per week and eliminate that short, regular fluctuation which is really not that interesting. (If possible...) This should produce a better forecast. Also, check that you reversed your log transform on the model results before plotting them against the actual data. This might be the cause of the variance mismatch. I like your idea of looking for other predictors. This would probably help with those outlier points that don't follow the underlying patterns.
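A rough sketch of these suggestions with the forecast package in R is below; y stands for the sales series (a ts object) and is assumed to exist, and lambda = 0 handles the log transform and its reversal automatically.

library(forecast)

fit7 <- Arima(y, order = c(7, 1, 0), lambda = 0)   # log transform via Box-Cox lambda = 0
fit9 <- Arima(y, order = c(9, 1, 0), lambda = 0)
AIC(fit7); AIC(fit9)

# Average consecutive pairs of observations before modelling (if that suits the data)
y2 <- ts(colMeans(matrix(head(y, 2 * floor(length(y) / 2)), nrow = 2)))

# Forecasts are back-transformed to the original scale, so the variance
# comparison against the raw data is apples-to-apples
plot(forecast(fit7, h = 20)); lines(y)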
Sales forecast with an ARIMA model
Since your data has an upward trend to it, it is good that your model has an upward trend. The data looks exponential, so using a log transform is a good idea. However, it looks like your model's var
Sales forecast with an ARIMA model Since your data has an upward trend to it, it is good that your model has an upward trend. The data looks exponential, so using a log transform is a good idea. However, it looks like your model's variance is lower than your data's variance. I would try more auto-regressive values. e.g. ARIMA(7,1,0), ARIMA(9,1,0), etc. This might help. You could also average every 2 data points before analyzing. This would produce one data point per week and eliminate that short, regular fluctuation which is really not that interesting. (If possible...) This should produce a better forecast. Also, check that you reversed your log transform on the model results before plotting it against the actual data. This might be the cause of the variance mis-match. I like your idea of looking for other predictors. This would probably help with those outlier points that don't follow the underlying patterns.
Sales forecast with an ARIMA model Since your data has an upward trend to it, it is good that your model has an upward trend. The data looks exponential, so using a log transform is a good idea. However, it looks like your model's var
50,989
Sales forecast with an ARIMA model
The forecast is upward trending, which seems way too aggressive. If you aren't looking for or adjusting for outliers, your model isn't robust. For example, periods 263, 301, 319, 321, 322 and 339 don't follow the up-down pattern. A model with double differencing, an AR2 and a separate AR1, plus adjusting for the outliers mentioned above, would do it. The model changed over time: the first 167 observations have different parameters than the rest, so you can use the Chow test to identify this and just use the data from 168 on.
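One way to act on this in R is to drop the early regime and hand the flagged periods to Arima as pulse dummies via xreg. The exact ARIMA order below is only a placeholder for the double-differenced AR structure described above, and y is an assumed name for the series.

library(forecast)

outlier_idx <- c(263, 301, 319, 321, 322, 339)
pulses <- sapply(outlier_idx, function(i) as.numeric(seq_along(y) == i))
colnames(pulses) <- paste0("pulse_", outlier_idx)

y2      <- window(y, start = time(y)[168])            # keep observations 168 onward
pulses2 <- pulses[168:length(y), , drop = FALSE]

fit <- Arima(y2, order = c(2, 2, 0), xreg = pulses2)  # placeholder order
summary(fit)
# A Chow-type test of the break near period 167 could be run with strucchange::sctest()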
Sales forecast with an ARIMA model
The forecast is upward trending which seems way too aggressive. If you aren't looking or adjusting for outliers, your model isn't robust. For example, periods 263, 301, 319, 321, 322 and 339 don't fol
Sales forecast with an ARIMA model The forecast is upward trending which seems way too aggressive. If you aren't looking or adjusting for outliers, your model isn't robust. For example, periods 263, 301, 319, 321, 322 and 339 don't follow the up down pattern. A model with double differencing, an AR2 and a separate AR1 plus adjusting for the outliers mentioned above would do it. The model changed over time and the first 167 has different parameters then after so you can use the Chow test to identify this and just use the data from 168 and on.
Sales forecast with an ARIMA model The forecast is upward trending which seems way too aggressive. If you aren't looking or adjusting for outliers, your model isn't robust. For example, periods 263, 301, 319, 321, 322 and 339 don't fol
50,990
Walkthrough of building a time series model (on real examples)
In terms of practical, down-to-earth examples, I might suggest reviewing some of my 583 replies to time-series model building questions. It is the only subject that I know and feel competent to comment on, and thus it is the only area in which I do so. @gung nicely pointed to one of them in his response. Most are real data case studies where the data is delivered by the OP and procedural issues are raised. In terms of theory/overview, I can recommend a presentation that I made 8 years ago to the International Society of Forecasters (http://www.autobox.com/stack/dpr-isf27.ppt). In particular, on slide 41 I presented an analysis of monthly ice cream sales from Norway. As shown, it is a univariate (non-causal) model, whereas when temperature is incorporated (not shown) the "seasonality" vanishes, as temperature is the driver and then needs to be forecasted in order to forecast ice cream sales.
Walkthrough of building a time series model (on real examples)
In terms of practical down-to-earth examples , I might suggest reviewing some of my 583 replies to time-series model building questions. It is the only subject that I know and feel competent to comme
Walkthrough of building a time series model (on real examples) In terms of practical down-to-earth examples , I might suggest reviewing some of my 583 replies to time-series model building questions. It is the only subject that I know and feel competent to comment on and thus is the only area that I do so. @gung nicely pointed to one of them in his response. Most are real data case studies where the data is delivered by the OP and procedural issues are raised. In terms of theory/overview I can recommend a presentation that I made 8 years ago to the International Society of Forecasters ( http://www.autobox.com/stack/dpr-isf27.ppt ) . In particular on slide 41 I presented an analysis of monthly ice cream sales from Norway. As shown it is a univariate (non-causal) model whereas when temperature is incorporated (not shown) the "seasonality" vanishes as temperature is the driver and then needs to be forecasted in order to forecast ice cream sales.
Walkthrough of building a time series model (on real examples) In terms of practical down-to-earth examples , I might suggest reviewing some of my 583 replies to time-series model building questions. It is the only subject that I know and feel competent to comme
50,991
Walkthrough of building a time series model (on real examples)
I'd expect most tutorials to start and end with descriptive time series analysis in an ARIMA framework, but it might also be interesting to look at tutorial treatments of structural time series analysis in a Bayesian framework with a focus on causal inference. If that's your bag, my current go-to resources are the following, all of which conveniently treat the same policy impact question. Books and papers Commandeur and Koopman is a gentle introduction to building and checking this class of models. The original paper Durbin and Harvey, 1986 is also worth a read. For a causal inference focus, the paper Brodersen et al. 2015 is good too. Data and code Most of the materials above make some use of the Seatbelts data that comes with R. See also the packages CausalImpact (plus vignette), bsts (sadly without vignette), and the StructTS function built into R.
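A tiny starting point in R, using the Seatbelts data mentioned above; the CausalImpact part is commented out and its pre/post indices are my reading of the February 1983 law change, so double-check them against the data before relying on them.

# Basic structural model (level, slope, seasonal) on the drivers series
fit <- StructTS(log(Seatbelts[, "drivers"]), type = "BSM")
plot(fitted(fit))   # filtered components
tsdiag(fit)         # residual diagnostics

# library(CausalImpact)
# y <- log(Seatbelts[, "drivers"]); x <- log(Seatbelts[, "kms"])
# impact <- CausalImpact(cbind(y, x), pre.period = c(1, 169), post.period = c(170, 192))
# plot(impact)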
Walkthrough of building a time series model (on real examples)
I'd expect most tutorials to start and end with descriptive time series analysis in an ARIMA framework, but it might also be interesting to look at tutorial treatments of structural time series analy
Walkthrough of building a time series model (on real examples) I'd expect most tutorials to start and end with descriptive time series analysis in an ARIMA framework, but it might also be interesting to look at tutorial treatments of structural time series analysis in a Bayesian framework with a focus on causal inference. If that's your bag, my current go-to resources are the following, all of which conveniently treat the same policy impact question. Books and papers Commandeur and Koopman is a gentle introduction to the model building and checking this class of models. The original paper Durbin and Harvey, 1986 is also worth a read. For a causal inference focus, the paper Brodersen et al. 2015 is good too. Data and code Most of the materials above make some use of the Seatbelts data that comes with R. See also the packages CausalImpact (plus vignette), bsts (sadly without vignette), and the structTS function built into R.
Walkthrough of building a time series model (on real examples) I'd expect most tutorials to start and end with descriptive time series analysis in an ARIMA framework, but it might also be interesting to look at tutorial treatments of structural time series analy
50,992
Walkthrough of building a time series model (on real examples)
MATLAB has a ton of end-to-end examples. Here's where one batch starts; just follow it to the end. It's part I, the last one is part X, but there are other related examples on multiplicative seasonality, de-trending, filters, etc.
Walkthrough of building a time series model (on real examples)
MATLAB has a ton of end to end examples. Here's where one batch starts, just follow it to the end. It's part I, the last one is part X, but there are other related examples on multiplicative seasonali
Walkthrough of building a time series model (on real examples) MATLAB has a ton of end to end examples. Here's where one batch starts, just follow it to the end. It's part I, the last one is part X, but there are other related examples on multiplicative seasonality, de-trending, filters etc.
Walkthrough of building a time series model (on real examples) MATLAB has a ton of end to end examples. Here's where one batch starts, just follow it to the end. It's part I, the last one is part X, but there are other related examples on multiplicative seasonali
50,993
Is there something clever I can do with a log of a sum?
If you're analyzing the sum of geometric means, then I'm afraid you can't go too far. You may try applying the geometric mean inequality to get an upper bound: $$ \left(a_1 a_2 \cdots a_n\right)^{1/n} \le \frac{1}{n} \sum_{k=1}^n a_k$$ In particular, Pólya's proof looks promising. Your starting expression is: $$\left(\prod_{i=1}^{n}X_{i}+\prod_{i=1}^{n}Y_{i}\right)^{\frac{1}{n}}$$ It's similar to a geometric mean, so it should have similar bounds. For instance, this must have some relation to the geometric mean of a sum $X_i+Y_i$, i.e. $\prod_{i=1}^n\left(X_i+Y_i\right)^\frac{1}{n}$
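If I recall correctly, the relevant relation is the superadditivity of the geometric mean for non-negative terms, $\left(\prod_i (X_i+Y_i)\right)^{1/n} \ge \left(\prod_i X_i\right)^{1/n} + \left(\prod_i Y_i\right)^{1/n}$; here is a quick numerical illustration in R, not a proof.

set.seed(1)
n <- 6
X <- runif(n); Y <- runif(n)
gm <- function(v) prod(v)^(1 / length(v))
c(lhs = gm(X + Y), rhs = gm(X) + gm(Y))   # lhs >= rhs in every draw tried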
Is there something clever I can do with a log of a sum?
If you're analyzing the sum of geometric means then, I'm afraid you can't go too far. You may try applying geometric mean inequality to get an upper bounds: $$ \left(a_1 a_2 \cdots a_n\right)^{1/n} \l
Is there something clever I can do with a log of a sum? If you're analyzing the sum of geometric means then, I'm afraid you can't go too far. You may try applying geometric mean inequality to get an upper bounds: $$ \left(a_1 a_2 \cdots a_n\right)^{1/n} \le \frac{1}{n} \sum_{k=1}^n a_k$$ Particularly, Poyla's proof looks promising. Your starting expression is: $$\left(\prod_{i=1}^{n}X_{i}+\prod_{i=1}^{n}Y_{i}\right)^{\frac{1}{n}}$$ It's like similar to a geometric mean, so it's got to have similar bounds. For instance, this must have some relation to the geometric mean of a sum $X_i+Y_i$, i.e. $\prod_{i=1}^n\left(X_i+Y_i\right)^\frac{1}{n}$
Is there something clever I can do with a log of a sum? If you're analyzing the sum of geometric means then, I'm afraid you can't go too far. You may try applying geometric mean inequality to get an upper bounds: $$ \left(a_1 a_2 \cdots a_n\right)^{1/n} \l
50,994
What are the votes in R's unsupervised random Forest?
Reading from here we confirm your understanding of unsupervised Breiman's Random Forests. Unsupervised learning In unsupervised learning the data consist of a set of x -vectors of the same dimension with no class labels or response variables. There is no figure of merit to optimize, leaving the field open to ambiguous conclusions. The usual goal is to cluster the data - to see if it falls into different piles, each of which can be assigned some meaning. The approach in random forests is to consider the original data as class 1 and to create a synthetic second class of the same size that will be labeled as class 2. The synthetic second class is created by sampling at random from the univariate distributions of the original data. Here is how a single member of class two is created - the first coordinate is sampled from the N values {x(1,n)}. The second coordinate is sampled independently from the N values {x(2,n)}, and so forth. Thus, class two has the distribution of independent random variables, each one having the same univariate distribution as the corresponding variable in the original data. Class 2 thus destroys the dependency structure in the original data. But now, there are two classes and this artificial two-class problem can be run through random forests. This allows all of the random forests options to be applied to the original unlabeled data set. If the oob misclassification rate in the two-class problem is, say, 40% or more, it implies that the x -variables look too much like independent variables to random forests. The dependencies do not have a large role and not much discrimination is taking place. If the misclassification rate is lower, then the dependencies are playing an important role. Formulating it as a two class problem has a number of payoffs. Missing values can be replaced effectively. Outliers can be found. Variable importance can be measured. Scaling can be performed (in this case, if the original data had labels, the unsupervised scaling often retains the structure of the original scaling). But the most important payoff is the possibility of clustering. The reason the total number of votes is the number of "true" samples (class 1) is simply due to the fact that there's no reason to return votes for "fake" samples (class 2). These are random and their probability density function is entirely known. Compare the distributions of vote in both yours PU and U models: png("unsupervisedRF.png") par(mfrow = c(1,2), mar = c(5.1, 3.6, 4.1, 1.6)) boxplot(rfPU$votes[1:150,], names = c("PU-1","PU-2"), col = c("green","red")) boxplot(rfU$votes, names = c("U-1","U-2"), col = c("green","red")) dev.off()
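For reference, omitting y in the randomForest R package triggers exactly this synthetic two-class procedure; the sketch below uses iris purely as an illustration, and the comments on votes and proximity simply restate the explanation above.

library(randomForest)

set.seed(1)
urf <- randomForest(x = iris[, 1:4], proximity = TRUE, ntree = 500)
urf                 # OOB error of the real-vs-synthetic two-class problem
dim(urf$votes)      # per the above, one row per *original* (class 1) observation

# Cluster on 1 - proximity, the usual payoff of the unsupervised run
hc <- hclust(as.dist(1 - urf$proximity), method = "average")
table(cutree(hc, k = 3), iris$Species)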
What are the votes in R's unsupervised random Forest?
Reading from here we confirm your understanding of unsupervised Breiman's Random Forests. Unsupervised learning In unsupervised learning the data consist of a set of x -vectors of the same dimensio
What are the votes in R's unsupervised random Forest? Reading from here we confirm your understanding of unsupervised Breiman's Random Forests. Unsupervised learning In unsupervised learning the data consist of a set of x -vectors of the same dimension with no class labels or response variables. There is no figure of merit to optimize, leaving the field open to ambiguous conclusions. The usual goal is to cluster the data - to see if it falls into different piles, each of which can be assigned some meaning. The approach in random forests is to consider the original data as class 1 and to create a synthetic second class of the same size that will be labeled as class 2. The synthetic second class is created by sampling at random from the univariate distributions of the original data. Here is how a single member of class two is created - the first coordinate is sampled from the N values {x(1,n)}. The second coordinate is sampled independently from the N values {x(2,n)}, and so forth. Thus, class two has the distribution of independent random variables, each one having the same univariate distribution as the corresponding variable in the original data. Class 2 thus destroys the dependency structure in the original data. But now, there are two classes and this artificial two-class problem can be run through random forests. This allows all of the random forests options to be applied to the original unlabeled data set. If the oob misclassification rate in the two-class problem is, say, 40% or more, it implies that the x -variables look too much like independent variables to random forests. The dependencies do not have a large role and not much discrimination is taking place. If the misclassification rate is lower, then the dependencies are playing an important role. Formulating it as a two class problem has a number of payoffs. Missing values can be replaced effectively. Outliers can be found. Variable importance can be measured. Scaling can be performed (in this case, if the original data had labels, the unsupervised scaling often retains the structure of the original scaling). But the most important payoff is the possibility of clustering. The reason the total number of votes is the number of "true" samples (class 1) is simply due to the fact that there's no reason to return votes for "fake" samples (class 2). These are random and their probability density function is entirely known. Compare the distributions of vote in both yours PU and U models: png("unsupervisedRF.png") par(mfrow = c(1,2), mar = c(5.1, 3.6, 4.1, 1.6)) boxplot(rfPU$votes[1:150,], names = c("PU-1","PU-2"), col = c("green","red")) boxplot(rfU$votes, names = c("U-1","U-2"), col = c("green","red")) dev.off()
What are the votes in R's unsupervised random Forest? Reading from here we confirm your understanding of unsupervised Breiman's Random Forests. Unsupervised learning In unsupervised learning the data consist of a set of x -vectors of the same dimensio
50,995
What's the maximum expectation of a conditional variance, $E[\operatorname{Var}(X+Z_1 \mid X+Z_2)]$?
Motivated by Prof. Guo's paper & Prof. Kim's course materials (see links in the comments below), I've found the answer to this question. I'll post it below in case other folks run into similar questions. The maximum value of $E[Var(X+Z_1\mid X+Z_2)]$ is $3/2$ and is indeed achieved by $X\sim N(0,1)$, as speculated in Remark #3 in the question. The proof is outlined below. First, note that $E[Var(X+Z_1\mid X+Z_2)]$ is the (average) MMSE of estimating $X+Z_1$ based on observation of $X+Z_2$. Clearly, for any distribution of $X$, the MMSE cannot be greater than the linear MMSE, or LMMSE. For any given distribution of $X$, the LMMSE $= \sigma_U^2-\frac{Cov(U,V)^2}{\sigma_V^2}=2-\frac{1}{\sigma_X^2+1}\leq 3/2$, independent of the distribution of $X$! ($U$ & $V$ denote $X+Z_1$ & $X+Z_2$, respectively.) Therefore, $E[Var(X+Z_1\mid X+Z_2)]\leq 3/2$ for any distribution of $X$. Moreover, if $X\sim N(0,1)$, $X+Z_1$ & $X+Z_2$ will be jointly Gaussian. It's well known that in this case the MMSE is identical to the LMMSE. So the maximum expectation is achieved by a Gaussian distribution of $X$.
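A quick Monte Carlo check in R (assuming, as I read the setup, that $Z_1, Z_2 \sim N(0,1)$ independently of $X$): for Gaussian $X$ the conditional variance equals the LMMSE, which is just the residual variance of regressing $U$ on $V$.

set.seed(1)
n  <- 1e6
X  <- rnorm(n); Z1 <- rnorm(n); Z2 <- rnorm(n)
U  <- X + Z1   # quantity to estimate
V  <- X + Z2   # observation

mean(residuals(lm(U ~ V))^2)    # approximately 1.5
var(U) - cov(U, V)^2 / var(V)   # sigma_U^2 - Cov(U,V)^2 / sigma_V^2, same value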
What's the maximum expectation of a conditional variance, $E[\operatorname{Var}(X+Z_1 \mid X+Z_2)]$?
Motivated by Prof. Guo's paper & Prof. Kim's course materials (see links in the comments below), I've found the answer to this question. I'll post it below in case other folks run into similar questi
What's the maximum expectation of a conditional variance, $E[\operatorname{Var}(X+Z_1 \mid X+Z_2)]$? Motivated by Prof. Guo's paper & Prof. Kim's course materials (see links in the comments below), I've found the answer to this question. I'll post it below in case other folks run into similar questions. The maximum value of $E[Var(X+Z_1\mid X+Z_2)]$ is $3/2$ and is indeed achieved by $X\sim N(0,1)$, as speculated in Remark #3 in the question. The proof is outlined below. First, note that $E[Var(X+Z_1\mid X+Z_2)]$ is the (average) MMSE of estimating $X+Z_1$ based on observation of $X+Z_2$. Clearly, for any distribution of $X$, the MMSE cannot be greater than linear MMSE, or LMMSE. For any given distribution of $X$, the LMMSE $= \sigma_U^2-\frac{Cov(U,V)^2}{\sigma_V^2}=2-\frac{1}{\sigma_X^2+1}\leq 3/2$, independent of the distribution of $X$! ($U$ & $V$ denotes $X+Z_1$ & $X+Z_2$, respectively.) Therefore, $E[Var(X+Z_1\mid X+Z_2)]\leq 3/2$ for any distribution of $X$. Moreover, if $X\sim N(0,1)$, $X+Z_1$ & $X+Z_2$ will be jointly Gaussian. It's well known that in this case MMSE is identical to LMMSE. So the maximum expectation is achieved by Gaussian distribution of $X$.
What's the maximum expectation of a conditional variance, $E[\operatorname{Var}(X+Z_1 \mid X+Z_2)]$? Motivated by Prof. Guo's paper & Prof. Kim's course materials (see links in the comments below), I've found the answer to this question. I'll post it below in case other folks run into similar questi
50,996
Shrinkage of the Sample Covariance matrix
What they evaluate there is $$ \mathbb{E}\big[ (\alpha f + (1-\alpha)s - \sigma)^2 \big], $$ (note this isn't identical to what you wrote in the question) where you are given $$ \mathbb{E}[f] = \phi, \qquad \mathbb{E}[s] = \sigma. $$ They evaluate this by writing $$ \alpha f + (1-\alpha)s-\sigma =: X, $$ and using the identity $$ \mathbb{E}[X^2] = \mathbb{V}[X] + (\mathbb{E}[X])^2. $$ Since $\phi$ and $\sigma$ are constants here, the usual variance-of-a-sum formula gives $$ \mathbb{V}[X] = \alpha^2 \mathbb{V}[f] + (1-\alpha)^2 \mathbb{V}[s] + 2\alpha(1-\alpha)\mathrm{Cov}[f,s], $$ and linearity of expectation, together with $\mathbb{E}[f]=\phi$ and $\mathbb{E}[s]=\sigma$, gives $$ \mathbb{E}[X] = \alpha(\phi-\sigma). $$ To generalize this to any other $\phi$, note that none of the properties of their matrices $\mathbf{F}$, $\mathbf{S}$, $\mathbf{\Sigma}$ are actually being used: this is simply an identity in terms of random variables $f,s$ and their means $\phi,\sigma$. It does not matter whether $\phi$ are the identity matrix elements or something else.
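A quick numerical confirmation of the decomposition (including the factor 2 on the covariance term) with simulated $f$ and $s$; all the numbers below are arbitrary choices for illustration only.

set.seed(1)
sigma <- 1; phi <- 1.4; alpha <- 0.3
f <- rnorm(1e6, mean = phi, sd = 0.5)
s <- sigma + 0.8 * (f - phi) + rnorm(1e6, 0, 0.3)   # E[s] = sigma, correlated with f
X <- alpha * f + (1 - alpha) * s - sigma

lhs <- mean(X^2)
rhs <- alpha^2 * var(f) + (1 - alpha)^2 * var(s) +
       2 * alpha * (1 - alpha) * cov(f, s) + (alpha * (phi - sigma))^2
c(lhs = lhs, rhs = rhs)   # agree up to Monte Carlo error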
Shrinkage of the Sample Covariance matrix
What they evaluate there is $$ \mathbb{E}\big[ (\alpha f + (1-\alpha)s - \sigma)^2 \big], $$ (note this isn't identical to what you wrote in the question) where you are given $$ \mathbb{E}[f] = \phi,
Shrinkage of the Sample Covariance matrix What they evaluate there is $$ \mathbb{E}\big[ (\alpha f + (1-\alpha)s - \sigma)^2 \big], $$ (note this isn't identical to what you wrote in the question) where you are given $$ \mathbb{E}[f] = \phi, \qquad \mathbb{E}[s] = \sigma. $$ They evaluate this by writing $$ \alpha f + (1-\alpha)s-\sigma =: X, $$ and using the identity $$ \mathbb{E}[X^2] = \mathbb{V}[X] + (\mathbb{E}[X])^2. $$ Since $\phi$ and $\sigma$ are constants here, $$ \mathbb{V}[X] = \alpha^2 \mathbb{V}[f] + (1-\alpha)^2 \mathbb{V}[s] + \alpha(1-\alpha)\mathrm{Cov}[f,s], $$ and $$ \mathbb{E}[X] = \alpha(\phi-\sigma) $$ due to $\mathbb{E}[f]=\phi$, $\mathbb{E}[s]=\sigma$, and the usual variance-of-sum formula. To generalize this to any other $\phi$, note that none of the properties of their matrices $\mathbf{F}$, $\mathbf{S}$, $\mathbf{\Sigma}$ are actually being used: this is simply an identity in terms of random variables $f,s$ and their means $\phi,\sigma$. It does not matter whether $\phi$ are the identity matrix elements or something else.
Shrinkage of the Sample Covariance matrix What they evaluate there is $$ \mathbb{E}\big[ (\alpha f + (1-\alpha)s - \sigma)^2 \big], $$ (note this isn't identical to what you wrote in the question) where you are given $$ \mathbb{E}[f] = \phi,
50,997
Modelling clustered data using boosted regression trees
If you want to use boosted trees instead of linear models, you essentially have three options: (i) you can ignore the grouping structure, (ii) you can include the grouping structure (i.e. the animal ID in your case) as a categorical variable (this corresponds to using fixed effects), (iii) or you can use a boosting algorithm that explicitly accounts for the clustering structure using random effects. Option (i) is in general not a good idea as you ignore potentially important information when ignoring the clustering. Also, option (ii), including the grouping ID (animal ID in your case) as a categorical variable, can have drawbacks: learning is not efficient when there are many categories and a relatively small number of samples per category, and tree algorithms can have problems with high-cardinality categorical variables. The third option avoids these issues. The GPBoost library with Python and R packages builds on LightGBM and allows for combining tree-boosting and mixed effects models. Simply speaking it is an extension of linear mixed effects models where the fixed-effects are learned using tree-boosting. See this blog post and Sigrist (2020) for further information. Disclaimer: I am the author of the GPBoost library.
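A rough sketch of option (iii) with the GPBoost R package is below. I am writing this from memory of the package documentation, so treat every function and argument name as an assumption to be checked against the current docs; group, X, y, X_new and group_new are illustrative names.

library(gpboost)

gp_model <- GPModel(group_data = group, likelihood = "gaussian")   # random effect per animal
bst <- gpboost(data = X, label = y, gp_model = gp_model,
               nrounds = 100, learning_rate = 0.05,
               objective = "regression_l2", verbose = 0)

# Predictions need both the predictors and the group IDs of the new data;
# check the docs for the exact structure of the returned object
pred <- predict(bst, data = X_new, group_data_pred = group_new)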
Modelling clustered data using boosted regression trees
If you want to use boosted trees instead of linear models, you essentially have three options: (i) you can ignore the grouping structure, (ii) you can include the grouping structure (i.e. the animal I
Modelling clustered data using boosted regression trees If you want to use boosted trees instead of linear models, you essentially have three options: (i) you can ignore the grouping structure, (ii) you can include the grouping structure (i.e. the animal ID in your case) as a categorical variable (this corresponds to using fixed effects), (iii) or you can use a boosting algorithm that explicitly accounts for the clustering structure using random effects. Option (i) is in general not a good idea as you ignore potentially important information when ignoring the clustering. Also, option (ii), including the grouping ID (animal ID in your case) as a categorical variable, can have drawbacks: learning is not efficient when there are many categories and a relatively small number of samples per category, and tree algorithms can have problems with high-cardinality categorical variables. The third option avoids these issues. The GPBoost library with Python and R packages builds on LightGBM and allows for combining tree-boosting and mixed effects models. Simply speaking it is an extension of linear mixed effects models where the fixed-effects are learned using tree-boosting. See this blog post and Sigrist (2020) for further information. Disclaimer: I am the author of the GPBoost library.
Modelling clustered data using boosted regression trees If you want to use boosted trees instead of linear models, you essentially have three options: (i) you can ignore the grouping structure, (ii) you can include the grouping structure (i.e. the animal I
50,998
SVM with non-negative weights
Initially, I think the model is fine, but I don't know of any package that can solve this model directly. One approach is to write the dual form of this problem; I believe it will be a QP problem with extra non-negativity constraints. Alternatively, you can also try to find algorithms which update $w_{i}$ separately, so that you can slightly rewrite the iteration formulas with $w_i^* = \max(w_i, 0)$.
SVM with non-negative weights
Initially, I think the model is fine but I don't know any package can solve this model directly. One method is that you may can try to write the dual form of this problem. I believe it will be a QP p
SVM with non-negative weights Initially, I think the model is fine but I don't know any package can solve this model directly. One method is that you may can try to write the dual form of this problem. I believe it will be a QP problem with extra nonnegative constrains. Alternatively, you also can try to find some algorithms which updates $w_{i}$ separately so that you can slightly rewrite the iteration formulas with $w_i^* = max(w_i, 0)$.
SVM with non-negative weights Initially, I think the model is fine but I don't know any package can solve this model directly. One method is that you may can try to write the dual form of this problem. I believe it will be a QP p
50,999
SVM with non-negative weights
As @Ben Dai suggests, this looks like it will be a quadratic programming problem with inequality constraints. There are various general quadratic programming packages available which may be used to solve the problem, such as the quadprog routine in the MATLAB optimisation toolbox. This is the only such routine I have used, but there are many other packages for this sort of problem.
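For the R side, the quadprog package plays the same role. Below is a sketch of how the primal soft-margin linear SVM with the extra w >= 0 constraints could be set up for solve.QP; it is my own formulation of the usual hinge-loss primal, with illustrative names, not a reference implementation.

library(quadprog)

svm_nonneg <- function(X, y, C = 1, eps = 1e-8) {
  n <- nrow(X); d <- ncol(X)
  # parameters: theta = (w (d), b (1), xi (n)); minimise 1/2 ||w||^2 + C * sum(xi)
  Dmat <- diag(c(rep(1, d), rep(eps, 1 + n)))   # tiny ridge keeps Dmat positive definite
  dvec <- c(rep(0, d), 0, rep(-C, n))           # solve.QP minimises 1/2 t'Dt - dvec't

  margin <- cbind(X * y, y, diag(n))            # y_i (w'x_i + b) + xi_i >= 1
  slack  <- cbind(matrix(0, n, d + 1), diag(n)) # xi_i >= 0
  nonneg <- cbind(diag(d), matrix(0, d, 1 + n)) # w_j  >= 0
  A  <- rbind(margin, slack, nonneg)
  b0 <- c(rep(1, n), rep(0, n), rep(0, d))

  sol <- solve.QP(Dmat, dvec, Amat = t(A), bvec = b0)
  list(w = sol$solution[1:d], b = sol$solution[d + 1])
}
# usage: fit <- svm_nonneg(X, y, C = 1), with y coded as -1/+1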
SVM with non-negative weights
As @Ben Dai suggests, this looks like it will be a quadratic programming problem with inequality constraints. There are various general quadratic programming packages available which may be used to s
SVM with non-negative weights As @Ben Dai suggests, this looks like it will be a quadratic programming problem with inequality constraints. There are various general quadratic programming packages available which may be used to solve the problem, such as the quadprog routine in the MATLAB optimisation toolbox. This is only such routine I have used, but there are many other packages for this sort of problem.
SVM with non-negative weights As @Ben Dai suggests, this looks like it will be a quadratic programming problem with inequality constraints. There are various general quadratic programming packages available which may be used to s
51,000
Confidence interval for the odds ratio in a finite population
In general, there is no problem with forming a confidence interval when your sample is near the size of the (finite) population. You typically just need to use the finite population correction: \begin{align} {\rm finite}\ SE &= SE \times \sqrt{\frac{N-n}{N-1}} \\ \\ &= SE \times\sqrt{\frac{60-50}{60-1}} \\ \\ &= SE\times.41 \end{align} In your specific case (i.e., logistic regression), it is worth noting that this would be a Wald confidence interval, and Wald CIs rely on large-sample theory. (There is some relevant discussion here: Why do my p-values differ between logistic regression output, chi-squared test, and the confidence interval for the OR?) Your sample of 50 should probably not be considered large, so the CI will probably not really have the coverage it promises. Nonetheless, I think a confidence interval would add some legitimate information to help interpret your results, so I might still do it (with the appropriate caveats explicitly noted for readers).
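In R, assuming fit is the existing logistic regression (e.g. glm(y ~ x, family = binomial)) and "x" is the predictor of interest (both names illustrative), the corrected Wald interval could be computed as:

N <- 60; n <- 50
fpc <- sqrt((N - n) / (N - 1))                       # about 0.41

beta <- coef(summary(fit))["x", "Estimate"]          # log odds ratio
se   <- coef(summary(fit))["x", "Std. Error"] * fpc  # finite-population SE

exp(beta + c(-1.96, 1.96) * se)                      # Wald CI for the odds ratio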
Confidence interval for the odds ratio in a finite population
In general, there is no problem with forming a confidence intervals when your sample is near the size of the (finite) population. You typically just need to use the finite population correction: \begi
Confidence interval for the odds ratio in a finite population In general, there is no problem with forming a confidence intervals when your sample is near the size of the (finite) population. You typically just need to use the finite population correction: \begin{align} {\rm finite}\ SE &= SE \times \sqrt{\frac{N-n}{N-1}} \\ \\ &= SE \times\sqrt{\frac{60-50}{60-1}} \\ \\ &= SE\times.41 \end{align} In your specific case (i.e., logistic regression), it is worth noting that this would be a Wald confidence interval, and Wald CI's rely on large sample theory. (There is some relevant discussion here: Why do my p-values differ between logistic regression output, chi-squared test, and the confidence interval for the OR?) Your sample of 50 should probably not be considered large, so the CI will probably not really have the coverage it promises. Nonetheless, I think a confidence interval would add some legitimate information to help interpret your results, so I might still do it (with the appropriate caveats explicitly noted for readers).
Confidence interval for the odds ratio in a finite population In general, there is no problem with forming a confidence intervals when your sample is near the size of the (finite) population. You typically just need to use the finite population correction: \begi