53,601
Explanation for Additive Property of Variance?
The first thing to notice is that $\mathrm{Var}(A+B)$ equals $\mathrm{Var}(A) + \mathrm{Var}(B)$ only when $\mathrm{Cov}(A,B)=0$. To gain some intuition about the relationship between $\mathrm{sd}(A+B)$ and $\mathrm{sd}(A)+\mathrm{sd}(B)$, notice that in order to complete the square in this expression, $\mathrm{Cov}(A,B)$ would have to equal $\mathrm{sd}(A)\,\mathrm{sd}(B)$. The next question is whether that ever happens. Indeed it does. The Cauchy–Schwarz inequality gives us $$|\mathrm{Cov}(A,B)| \le \mathrm{sd}(A)\,\mathrm{sd}(B).$$ Whenever the equality $\mathrm{Cov}(A,B) = \mathrm{sd}(A)\,\mathrm{sd}(B)$ holds (i.e., when $A$ and $B$ are perfectly positively correlated), we can complete the square and obtain $\mathrm{sd}(A+B)=\mathrm{sd}(A)+\mathrm{sd}(B)$. In all other cases, $\mathrm{sd}(A+B)$ will be strictly smaller than $\mathrm{sd}(A)+\mathrm{sd}(B)$.
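To spell out the completion of the square: from the general identity
$$\mathrm{Var}(A+B) = \mathrm{Var}(A) + \mathrm{Var}(B) + 2\,\mathrm{Cov}(A,B),$$
substituting $\mathrm{Cov}(A,B) = \mathrm{sd}(A)\,\mathrm{sd}(B)$ gives
$$\mathrm{Var}(A+B) = \mathrm{sd}(A)^2 + 2\,\mathrm{sd}(A)\,\mathrm{sd}(B) + \mathrm{sd}(B)^2 = \big(\mathrm{sd}(A) + \mathrm{sd}(B)\big)^2,$$
and taking square roots yields $\mathrm{sd}(A+B) = \mathrm{sd}(A) + \mathrm{sd}(B)$.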
53,602
Moment Generating Function for Lognormal Random Variable
Other answers to this question claim that the moment generating function (mgf) of the lognormal distribution does not exist. That is a strange claim. The mgf is $$\DeclareMathOperator{\E}{\mathbb{E}} M_X(t) = \E e^{tX},$$ and for the lognormal this exists exactly for $t\le 0$. The claim is then that the mgf "only exists" when that expectation exists for $t$ in some open interval around zero. Well, some important theorems about mgf's depend on such an assumption, so the mgf of the lognormal distribution might lack some properties guaranteed by such theorems, but it can still be useful. The existence of papers about the lognormal mgf does suggest that there are some useful properties! These papers often talk about the Laplace transform rather than the mgf, but that is only a parameter change from $t$ to $-t$. I will come back here with a more complete answer, but for the moment I will just point to some papers: On the Laplace transform of the Lognormal distribution by Søren Asmussen, Jens Ledet Jensen and Leonardo Rojas-Nandayapa (there is no exact formula for the mgf, but that paper gives good approximations); Laplace Transforms of Probability Distributions and Their Inversions Are Easy on Logarithmic Scales by A. G. Rossberg; and Accurate Computation of the MGF of the Lognormal Distribution and its Application to Sum of Lognormals by C. Tellambura and D. Senaratne. The paper Uniform Saddlepoint Approximations and Log-Concave Densities by Jens Ledet Jensen uses saddlepoint approximations for the lognormal as an example. So much for the mgf not existing! And now I discovered that all this (and more) was stated earlier by Cardinal in Existence of the moment generating function and variance. For instance, from what Cardinal proves there, one can conclude that the lognormal does not have exponentially decaying tails (which is one of the properties that follows from existence of the mgf in an open interval containing zero).
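To see numerically that the expectation is finite on the negative half-line, here is a quick sketch in R for the standard lognormal:

mgf_lnorm <- function(t) {
  # E[exp(t*X)] for X ~ Lognormal(0, 1), by numerical integration
  integrate(function(x) exp(t * x) * dlnorm(x), lower = 0, upper = Inf)$value
}
sapply(c(-2, -1, -0.1, 0), mgf_lnorm)  # finite values, increasing towards 1 at t = 0

For any $t > 0$, the factor $e^{tx}$ eventually outgrows the lognormal tail, which decays only like $e^{-(\log x)^2/2}$, so the integral diverges.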
53,603
Moment Generating Function for Lognormal Random Variable
The lognormal doesn't have an MGF in the usual sense: the integral needs to converge for $t$ in a neighborhood of 0, but the integral for $E(e^{tX})$ diverges on the positive side. (Edit: see the correction in kjetil's answer; of course it does have an MGF, just for $t\le 0$; indeed I originally mentioned that above, but my claim that you need it in a neighborhood of 0 doesn't apply to everything you might want an mgf for.) For some details, see the Wikipedia article on the lognormal distribution. Interestingly, the lognormal is an example of a distribution with all moments finite that is not characterized by its sequence of moments (i.e., there are other distributions with the same sequence of moments). If the MGF existed in a neighborhood of 0, this could not occur.
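One classical construction behind that last point: perturbing the standard lognormal density by the factor $1 + a\sin(2\pi\ln x)$ (for $|a|\le 1$) leaves every integer moment unchanged. A quick numerical check in R, integrating on the log scale:

# n-th moment of the density f(x)*(1 + a*sin(2*pi*log(x))), f = standard lognormal;
# substituting y = log(x) reduces each moment to a Gaussian integral
moment <- function(n, a) {
  integrate(function(y) exp(n * y) * dnorm(y) * (1 + a * sin(2 * pi * y)),
            lower = -Inf, upper = Inf)$value
}
rbind(unperturbed = sapply(0:4, moment, a = 0),
      perturbed   = sapply(0:4, moment, a = 0.5))  # both rows equal exp(n^2/2)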
53,604
What is the Joint Density Function of a Three-Level Mixed-Effects Model?
Using the general mixed model formulation, the log-likelihood of the 3-level model will have the same form as that of the 2-level model. This is because the formulation of the linear mixed effects model as $\mathbf{y} = \mathbf{X\beta}+\mathbf{Z}\boldsymbol{b}+\mathbf{e}$ does not say anything about the number of grouping variables, nor, in the case of more than one, whether they are nested, crossed, or partially crossed. See here for a discussion of nested vs crossed random effects. The model matrix $\mathbf{Z}$ incorporates the structure of all the random effects, while the vector $\mathbf{b}$ contains the conditional modes of all the random effects. Note that I am using $\mathbf{b}$ rather than $\mathbf{u}$ for the random effects vector above. The reason for this should be evident from the following example in R, which may help to illustrate.

Let's simulate some nested 3-level data. Here we envisage each row of data to be an observation of a student's height. Students are nested within classes and classes within schools:

set.seed(15)
df1 <- expand.grid(pupil = c(1, 2, 3), class = c(1, 2, 3), school = c(1, 2))
df1$Height <- rnorm(nrow(df1), 1.5, 0.2) + df1$class + df1$school

Now we fit a linear mixed model, using lme4, with a single grouping variable (a 2-level model) of pupils nested in classes only:

library(lme4)
lmm0 <- lmer(Height ~ 1 + (1 | class), data = df1)
getME(lmm0, "Zt")
getME(lmm0, "b")

The last 2 lines extract the transpose of $\mathbf{Z}$ and the conditional modes of the random effects (lme4 uses $\mathbf{b}$ rather than $\mathbf{u}$):

3 x 18 sparse Matrix of class "dgCMatrix"
[[ suppressing 18 column names ‘1’, ‘2’, ‘3’ ... ]]
1 1 1 1 . . . . . . 1 1 1 . . . . . .
2 . . . 1 1 1 . . . . . . 1 1 1 . . .
3 . . . . . . 1 1 1 . . . . . . 1 1 1

3 x 1 Matrix of class "dgeMatrix"
           [,1]
[1,] -0.9421207
[2,] -0.0119456
[3,]  0.9540663

Now we fit the 3-level model:

lmm1 <- lmer(Height ~ 1 + (1 | school/class), data = df1)
getME(lmm1, "Zt")
getME(lmm1, "b")

8 x 18 sparse Matrix of class "dgCMatrix"
[[ suppressing 18 column names ‘1’, ‘2’, ‘3’ ... ]]
1:1 1 1 1 . . . . . . . . . . . . . . .
1:2 . . . . . . . . . 1 1 1 . . . . . .
2:1 . . . 1 1 1 . . . . . . . . . . . .
2:2 . . . . . . . . . . . . 1 1 1 . . .
3:1 . . . . . . 1 1 1 . . . . . . . . .
3:2 . . . . . . . . . . . . . . . 1 1 1
1   1 1 1 1 1 1 1 1 1 . . . . . . . . .
2   . . . . . . . . . 1 1 1 1 1 1 1 1 1

8 x 1 Matrix of class "dgeMatrix"
           [,1]
[1,] -1.2863150
[2,] -0.6696624
[3,] -0.4058639
[4,]  0.3810632
[5,]  0.6372665
[6,]  1.3435117
[7,] -0.1097978
[8,]  0.1097978

So we can see that the additional "level" is handled by the structure of $\mathbf{Z}$, and all the random effects are in $\mathbf{b}$. This means that the general form of the log-likelihood is the same for the 3-level model as it is for the 2-level model. For further information about the structure of $\mathbf{Z}$, I highly recommend this free book by the primary author of lme4 and this paper.

References:
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2014). Fitting linear mixed-effects models using lme4. arXiv preprint arXiv:1406.5823.
Bates, Douglas M. "lme4: Mixed-effects modeling with R." (2010): 470-474.
53,605
What is the Joint Density Function of a Three-Level Mixed-Effects Model?
Maximum likelihood estimation of mixed models typically works with the marginal likelihood of the observed response outcome data $y$. This marginal likelihood is obtained by integrating out the random effects from the joint density. That is, for the $i$-th sample unit we have $$\left \{ \begin{array}{l} y_i = X_i\beta + Z_ib_i + \varepsilon_i,\\\\ b_i \sim \mathcal N(0, D), \quad \varepsilon_i \sim \mathcal N(0, \Sigma). \end{array} \right.$$ Based on this model, the log-likelihood function of the linear mixed model is $$\begin{eqnarray} \ell(\theta) & = & \sum_{i = 1}^n \log p(y_i; \theta)\\ & = & \sum_{i = 1}^n \log \int p(y_i \mid b_i; \theta) \, p(b_i; \theta) \; db_i, \end{eqnarray}$$ where $\theta$ denotes the parameters of the model, namely the fixed effects $\beta$ and the unique elements of the covariance matrices $D$ and $\Sigma$. The first term under the integral is the multivariate normal density from the model $[y \mid b]$, and the second term is the multivariate normal density of the random effects. Now, in the case of linear mixed models, and because the random effects enter linearly in the mean of the model $[y \mid b]$, the two normal distributions 'work' together, and the integral has a closed-form solution. Namely, the marginal model for $[y_i]$ is $$y_i \sim \mathcal N(X_i \beta, \; Z_i D Z_i^\top + \Sigma).$$

A couple of notes:

Even if you have nested random effects, the formulation remains the same. The $Z$ matrix only gets another form (often being sparse).

Most software that fits mixed models under maximum likelihood actually works with the implied marginal model and gives you estimates of $\theta$; that is, it does not actually fit the mixed model. You could construct situations in which the same implied marginal model comes from two different mixed models, in which case you cannot tell which of the two is the correct one.

The random-effect estimates you obtain, for example using the function ranef() in R, typically come from a second, separate step using empirical Bayes methodology.

The integrals work out nicely in the case of linear mixed models (and a few other cases). If you have categorical outcome data and normal random effects, you need to approximate the integrals numerically.
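To make the closed form concrete, here is a small self-contained sketch in R for a random-intercept model with $\Sigma = \sigma^2 I$ (toy data, my own variable names): it assembles $V = ZDZ^\top + \sigma^2 I$ by hand and checks the resulting marginal log-likelihood against what lme4 reports for an ML fit.

library(lme4)
library(mvtnorm)

# toy data: 5 groups, 4 observations each, random intercept per group
set.seed(1)
df <- data.frame(g = gl(5, 4))
df$y <- 2 + rep(rnorm(5, sd = 1), each = 4) + rnorm(20, sd = 0.5)

fit <- lmer(y ~ 1 + (1 | g), data = df, REML = FALSE)

X <- getME(fit, "X")
Z <- as.matrix(getME(fit, "Z"))
vc <- as.data.frame(VarCorr(fit))
tau2 <- vc$vcov[1]                      # random-intercept variance (D = tau2 * I here)
s2   <- vc$vcov[2]                      # residual variance
V <- tau2 * tcrossprod(Z) + s2 * diag(nrow(X))

dmvnorm(df$y, mean = drop(X %*% fixef(fit)), sigma = V, log = TRUE)
logLik(fit)                             # the two values agree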
53,606
variance of conditional multivariate gaussian
The problem is the sampling, which is to say that you're trying to compare the conditional density with the distribution of slices of data that were sampled from a bivariate density. The theoretical variance of $x_1 \mid x_2$ doesn't depend on the value of $x_2$, but the observed variance does, because the further out you get from the means of $x_1$ and $x_2$, the fewer samples you will have. So what you're seeing is still 100% correct, but the quantity of points is creating the illusion of narrow and wide variances. Using the density of $x_1 \mid x_2$ from the same Wikipedia section you linked to: $$ (X_1\mid X_2=x_2) \ \sim\ \mathcal{N}\!\left(\mu_1+\frac{\sigma_1}{\sigma_2}\rho( x_2 - \mu_2),\, (1-\rho^2)\sigma_1^2\right) $$ we can compare data sampled bivariately with data sampled conditionally:

library(magrittr)
library(ggplot2)

# multivariate-sampled data:
mu <- c(0, 0)
S <- matrix(c(1, 0.9, 0.9, 1), nrow = 2, byrow = TRUE)
set.seed(0)
xs <- mvtnorm::rmvnorm(10000, mean = mu, sigma = S) %>%
  set_colnames(c("X1", "X2"))
df <- as.data.frame(xs)

# functions & data to overlay the conditional density:
conditional_dnorm <- function(x1, x2, m1, m2, s1, s2, r) {
  mean <- m1 + (s1 / s2) * r * (x2 - m2)
  var <- (1 - r^2) * s1^2
  sd <- sqrt(var)
  return(dnorm(x1, mean, sd))
}

conditional_probabilities <- purrr::map_dfr(-2:2, function(x2) {
  x1 <- seq(-4, 4, length.out = 1000)
  fx1 <- purrr::map_dbl(
    x1, conditional_dnorm,
    x2 = x2, m1 = 0, m2 = 0, s1 = 1, s2 = 1, r = 0.9
  )
  return(data.frame(X1 = x1, X2 = x2, FX1 = fx1))
})

# plot:
ggplot(df, aes(x = X1, y = X2)) +
  geom_point(alpha = 0.1) +
  geom_line(
    aes(x = X1, y = FX1 + X2, group = X2),
    data = conditional_probabilities,
    color = "red"
  ) +
  theme_minimal() +
  labs(title = "Sampled multivariately")

That's just to confirm your results: the distribution of $x_1$ at various values of $x_2$ does look different. But let's sample $x_1$ conditionally on $x_2$, ensuring that at each value of $x_2$ we get the same quantity of samples of $x_1$:

conditional_rnorm <- function(n, x2, m1, m2, s1, s2, r) {
  mean <- m1 + (s1 / s2) * r * (x2 - m2)
  var <- (1 - r^2) * s1^2
  sd <- sqrt(var)
  return(rnorm(n, mean, sd))
}

conditional_samples <- purrr::map_dfr(
  seq(-2, 2, length.out = 100),
  function(x2) {
    x1 <- conditional_rnorm(100, x2, 0, 0, 1, 1, 0.9)
    return(data.frame(X1 = x1, X2 = x2))
  }
)

ggplot(conditional_samples, aes(x = X1, y = X2)) +
  geom_point(alpha = 0.1) +
  geom_line(
    aes(x = X1, y = FX1 + X2, group = X2),
    data = conditional_probabilities,
    color = "red"
  ) +
  theme_minimal() +
  labs(title = "Sampled conditionally")

Looks better, doesn't it? Hope that helps!
53,607
variance of conditional multivariate gaussian
The situation is addressed in the following paragraphs of your Wikipedia link. It is also easier to look at the bivariate case there, since it directly gives the formulas you need: the conditional variance is $(1-\rho^2)\sigma_1^2$. Using the bivariate formula and assuming zero means for simplicity (wlog), we can factorize the joint Gaussian pdf to find the conditional Gaussian pdf of $x_1 \mid x_2$: $$p(x_1,x_2)=p(x_1\mid x_2)\,p(x_2)\propto \exp\left(-\frac{1}{2(1-\rho^2)}\left(\frac{x_1^2}{\sigma_1^2}-\frac{2\rho x_1x_2}{\sigma_1\sigma_2}+\frac{x^2_2}{\sigma^2_2}\right)\right)$$ $$\propto \exp\left(-\frac{1}{2(1-\rho^2)}\left(\frac{x_1^2}{\sigma_1^2}-\frac{2\rho x_1x_2}{\sigma_1\sigma_2}+\frac{x^2_2}{\sigma^2_2}-\frac{(1-\rho^2)x_2^2}{\sigma_2^2}\right)\right)\exp\left(-\frac{x_2^2}{2\sigma_2^2}\right)$$ The second factor is $p(x_2)$, the first factor is $p(x_1\mid x_2)$, and both are still in normal form. A normal density is proportional to $\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)=\exp\left(-\frac{x^2}{2\sigma^2}+\frac{x\mu}{\sigma^2}-\frac{\mu^2}{2\sigma^2}\right)$, so we just match the insides of the exponentials. The squared term in the exponent of $p(x_1\mid x_2)$ is $-\frac{x_1^2}{2(1-\rho^2)\sigma_1^2}$, which should equal $-\frac{x_1^2}{2\sigma^2}$; this yields $\sigma^2=(1-\rho^2)\sigma_1^2$, the same formula as above, and the variance doesn't depend on the particular value of $x_2$. Notice that I assumed zero means, but incorporating $\mu_1$ and $\mu_2$ into these equations would not change the coefficient of $x_1^2$.
53,608
What's the difference between a dense layer and an output layer in a CNN?
Short:

Dense layer = fully connected layer: a topology that describes how the neurons are connected to the next layer of neurons (every neuron is connected to every neuron in the next layer); it is an intermediate layer (also called a hidden layer).

Output layer = the last layer of a multilayer perceptron.

Long:

The convolutional part is used as a dimension-reduction technique to map the input vector X to a smaller one. For example, your input is an image with a size of (227*227) pixels, which is mapped to a vector of length 4096: $${\bf{X}} : \mathbb{R}^{51529} \mapsto \mathbb{R}^{4096}$$ This makes things easier for the second step, the classification/regression part, for which a classifier called a multilayer perceptron is used (invented by Frank Rosenblatt). If you stack multiple layers on top of each other, you may ask how the neurons are connected between layers (a neuron or perceptron is a single unit of an MLP). Indeed, there are more options than connecting every neuron to every neuron of the next layer (= dense or fully connected); other possible topologies are shortcuts, recurrent, lateral, and feedback connections. In most examples the intermediate layers are densely or fully connected. The last neuron stack, the output layer, returns your result. The output neurons are chosen according to your classes and return either a discrete vector or a distribution.
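As a rough illustration of what full connectivity costs in parameters (numbers are mine, for a hypothetical 1000-class output): a dense layer has one weight per input-output pair plus one bias per output neuron.

dense_params <- function(n_in, n_out) n_in * n_out + n_out  # weights + biases
dense_params(4096, 1000)  # 4,097,000 parameters for one fully connected layer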
53,609
How to calculate the derivative of crossentropy error function?
There is indeed a mistake:\begin{align} \frac{\partial E_x}{\partial o_j^x} &=\frac{\partial }{\partial o_j^x} \left( - \sum_{k}\left[t_k^x \log(o_k^x) + (1-t_k^x) \log(1-o_k^x)\right]\right) \\ &=-\frac{\partial }{\partial o_j^x} \sum_{k}\left[t_k^x \log(o_k^x) + (1-t_k^x) \log(1-o_k^x)\right] \\ &=-\frac{\partial }{\partial o_j^x} \left[t_j^x \log(o_j^x) + (1-t_j^x) \log(1-o_j^x)\right], \text{ since only the } k=j \text{ term depends on } o_j^x \\ &=- \left( \frac{t_j^x}{o_j^x} - \frac{1-t_j^x}{1-o_j^x}\right), \text{ chain rule} \\ &=- \frac{t_j^x}{o_j^x} + \frac{1-t_j^x}{1-o_j^x} \end{align}
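A quick numerical sanity check of this derivative in R (my own sketch; uses the numDeriv package):

# cross-entropy for one example x, summed over output units
E <- function(o, t) -sum(t * log(o) + (1 - t) * log(1 - o))
t <- c(1, 0, 0)
o <- c(0.7, 0.2, 0.1)
analytic <- -t / o + (1 - t) / (1 - o)    # the result derived above
numeric  <- numDeriv::grad(E, o, t = t)   # finite-difference gradient
all.equal(analytic, numeric)              # TRUE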
53,610
How to calculate the derivative of crossentropy error function?
An easy way to remember this is to internalize the gradient of the cross-entropy with respect to the network parameters, which is famously $t_i - o_i$. The last slide does this correctly. So it looks like the second slide has a mistake. If you follow the derivations, you'll notice the error: a minus sign appears for no reason in the middle of the right-hand side (before the last equation).
53,611
What is the conjugate prior distribution? [duplicate]
A conjugate prior is a probability distribution that, when multiplied by the likelihood and divided by the normalizing constant, yields a posterior distribution that is in the same family of distributions as the prior. In other words, in the formula $$p(\theta|x) = \frac{p(x|\theta)p(\theta)}{\int{p(x|\theta)p(\theta)d\theta}},$$ the prior $p(\theta)$ is conjugate for the likelihood $p(x\mid\theta)$ if the posterior $p(\theta \mid x)$ is in the same family of distributions as the prior. For example, the normal distribution is conjugate to itself: if the likelihood and prior are normal, then so is the posterior.
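Another standard example, for concreteness: the Beta prior is conjugate to the Binomial likelihood. With $s$ successes in $n$ trials,
$$p(\theta \mid x) \propto \underbrace{\theta^{s}(1-\theta)^{n-s}}_{\text{likelihood}}\;\underbrace{\theta^{\alpha-1}(1-\theta)^{\beta-1}}_{\text{Beta}(\alpha,\beta)\text{ prior}} = \theta^{\alpha+s-1}(1-\theta)^{\beta+n-s-1},$$
which is the kernel of a $\text{Beta}(\alpha+s,\,\beta+n-s)$ distribution, so the posterior stays in the same family as the prior and no new normalizing integral has to be worked out.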
53,612
Is a GMM-HMM equivalent to a no-mixture HMM enriched with more states?
It is not exactly equivalent: the 6-state HMM can model everything the GMM-HMM can, but not the other way around. Suppose you start with the GMM-HMM, with $s_5$ being the GMM state, and turn it into the 6-state HMM with states $s_6$ and $s_7$ instead of $s_5$. Let $p_6$ and $p_7$ be the prior probabilities of the two components of the GMM (which are then transformed into states $s_6$ and $s_7$). For every transition from a state $s_i$ to $s_5$ in the GMM-HMM (with probability $t$), create two transition probabilities in the 6-state HMM: $s_i$ to $s_6$ with probability $t \cdot p_6$, and $s_i$ to $s_7$ with probability $t \cdot p_7$. For every transition from $s_5$ to a state $s_i$ in the GMM-HMM (with probability $t$), create two transition probabilities, from $s_6$ and from $s_7$, going to $s_i$, both with the same probability $t$. If I am not mistaken, the resulting 6-state HMM is equivalent to the GMM-HMM. However, the other way around doesn't always work. Imagine you are starting with the 6-state HMM. Suppose that the transition probabilities for $s_i \rightarrow s_6$ and $s_i \rightarrow s_7$ do not have the same ratio as those for $s_j \rightarrow s_6$ and $s_j \rightarrow s_7$. You could not carry this information into the GMM-HMM, whose single pair of mixture weights is shared by all incoming transitions. In short, the 6-state HMM should be able to represent everything the GMM-HMM can, and more.
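Here is a toy sketch in R of the splitting construction above (made-up transition matrix and mixture weights, with $s_5$ as the GMM state):

set.seed(1)
A <- matrix(runif(25), 5, 5)
A <- A / rowSums(A)                   # toy 5-state transition matrix
p <- c(0.3, 0.7)                      # GMM component weights (p6, p7) for state s5

A6 <- matrix(0, 6, 6)
A6[1:4, 1:4] <- A[1:4, 1:4]
A6[1:4, 5] <- A[1:4, 5] * p[1]        # s_i -> s6 with probability t * p6
A6[1:4, 6] <- A[1:4, 5] * p[2]        # s_i -> s7 with probability t * p7
A6[5, ] <- c(A[5, 1:4], A[5, 5] * p)  # outgoing rows of s6 ...
A6[6, ] <- A6[5, ]                    # ... and s7 copy s5's outgoing probabilities
rowSums(A6)                           # all ones: still a valid stochastic matrix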
53,613
Is a GMM-HMM equivalent to a no-mixture HMM enriched with more states?
No, you are not wrong to think that. If $Y \mid X_1 \sim \alpha f_1(y) + (1-\alpha)f_2(y)$, then you can also let $X_2 \sim \text{Bernoulli}(\alpha)$ independently and say $$ Y \mid X_1, X_2 = 1 \sim f_1(y) $$ and $$ Y \mid X_1, X_2 = 0 \sim f_2(y). $$ This is because $$ f_{Y|X_1}(y \mid x_1) = \sum_{x_2 \in \{0,1\}} f_{Y|X_1,X_2}(y \mid x_1, x_2)\, f(x_2) = \alpha f_1(y) + (1-\alpha)f_2(y). $$ Keep in mind that the sequence through time $\{X_2^t\}_t$ is iid, and so the Markov structure is overkill (but still perfectly fine).
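A quick simulation sketch (toy parameters) showing that sampling through the latent Bernoulli reproduces the mixture density:

set.seed(42)
alpha <- 0.3
x2 <- rbinom(1e5, 1, alpha)                  # latent component indicator
y  <- ifelse(x2 == 1, rnorm(1e5, 0, 1),      # draw from f1 when x2 = 1
                      rnorm(1e5, 4, 2))      # draw from f2 when x2 = 0
hist(y, breaks = 100, freq = FALSE)
curve(alpha * dnorm(x, 0, 1) + (1 - alpha) * dnorm(x, 4, 2),
      add = TRUE, col = "red")               # matches the mixture density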
53,614
How does a fitted linear mixed effects model predict longitudinal output for a new subject?
Provided that you have at least one data point for the new patient, you can calculate individualized (dynamic) predictions. In particular, say that $y_j^o$ denotes the observed outcome data for the new patient $j$. You can first obtain an estimate, say $b_j^*$, of his/her random effects from the posterior distribution $[b_j \mid y_j^o, \theta]$, where $\theta$ denotes the model parameters; for example, $b_j^*$ could be the mean of this posterior distribution. Given this estimate of the random effects, you calculate predictions using $x_j(t) \beta + z_j(t) b_j^*$, where $x_j(t)$ and $z_j(t)$ denote the design matrices for the fixed and random effects at the (future) time points of interest, and $\beta$ denotes the fixed effects. Standard errors for these predictions can be obtained using a Monte Carlo scheme. For models fitted by lme() you can obtain these predictions using the function IndvPred_lme() from the package JMbayes. If you have categorical longitudinal data, you can obtain the same type of individualized predictions using the predict() method for models fitted by the mixed_model() function of the GLMMadaptive package; for more info on the latter, you can also check the vignette.
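To make the two steps concrete, here is a minimal sketch in R of the computation just described, assuming $\Sigma = \sigma^2 I$ (function and argument names are my own): under the linear mixed model, the posterior mean of $b_j$ has the closed form $D Z_j^\top V_j^{-1}(y_j^o - X_j\beta)$ with $V_j = Z_j D Z_j^\top + \sigma^2 I$.

predict_new_subject <- function(beta, D, sigma2,
                                X_obs, Z_obs, y_obs,  # observed rows for patient j
                                X_new, Z_new) {       # design at future time points
  V <- Z_obs %*% D %*% t(Z_obs) + sigma2 * diag(nrow(Z_obs))
  b_star <- D %*% t(Z_obs) %*% solve(V, y_obs - X_obs %*% beta)  # posterior mean b*
  drop(X_new %*% beta + Z_new %*% b_star)                        # x(t) beta + z(t) b*
}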
53,615
How does a fitted linear mixed effects model predict longitudinal output for a new subject?
The random intercepts for the subjects represent deviations from a mean population-level response. When predicting for a new subject, the fitted random effects are not helpful, because there is no way of knowing a priori how that subject's pattern deviates from the population-level response. Instead, the best prediction for any new subject is the population response itself. EDIT: There's no contradiction between the two answers here. @DimitrisRizopoulos makes the assumption that you have some information about the new patient, i.e., at least one measurement. In contrast, I am making the assumption that you have no measurements of the new patient; in this case, the population response is the best prediction for any new patient.
53,616
How to present results of time series forecasting
Standard forecasting papers unfortunately usually only show the averages of errors, so you would show the averages of your MAPEs. The authors often then start to discuss differences in the third significant digit; without a notion of the variation in errors, this makes no sense. Therefore, I very much recommend that you do indicate the variation in your errors, e.g., by giving standard deviations. In addition, it is common practice in (load and other) forecasting papers to present results on multiple error measures, e.g., the RMSE or the MAE in addition to the MAPE. I suggest you skim through a couple of load forecasting papers and be inspired by what you find there. For your specific data, a nice and useful visualization could be a dotchart like this (note how I jittered the dots horizontally to reduce overplotting):

mapes <- structure(c(12.4070736159999, 11.1222557214741, 18.1933345652622,
  10.9283795353769, 14.4304582955302, 12.4381016317022, 11.2011253786453,
  16.5824118552869, 10.8375790347786, 13.749822370597, 13.012084025233,
  11.0949104146992, 16.9662739986567, 10.9969285266692, 14.185836762341,
  12.8202279490414, 11.0204844071916, 22.0912790309511, 10.65564127531,
  14.1723784565888, 13.0173158393873, 10.9866043178404, 18.7201687363193,
  10.830705163829, 14.8015564381059), .Dim = c(5L, 5L),
  .Dimnames = list(c("NIC", "QLD", "SA", "TAS", "VIC"), NULL))

set.seed(1)
xx <- runif(nrow(mapes) * ncol(mapes), -0.3, 0.3) +
  rep(1:ncol(mapes), nrow(mapes))
plot(xx, as.vector(mapes), pch = 19, xaxt = "n", ylab = "", xlab = "", main = "MAPE")
axis(1, seq_along(rownames(mapes)), rownames(mapes))
53,617
Spearman's rank correlation coefficient
What does each measure?

Pearson's correlation coefficient is a measure of the strength of a linear relationship between x and y. It is swayed by outliers, much like a mean and standard deviation.

Spearman's correlation coefficient is a measure of the strength of a monotonic relationship between x and y. This includes, but is more general than, just linear relationships: it covers any strictly increasing or decreasing (one-to-one) relationship, but not many-to-one or many-to-many relationships. It is robust against outliers, much like a median and inter-quartile range.

Mutual information is a general measure of the strength of all of these types of relationship, and tends to work best with less noisy data.

Hauke et al.

In the case of the paper you refer to, in their graph of X12 (birth rate) vs X7 (population of working age), some outliers prevent Pearson's correlation from agreeing with Spearman's correlation for the same data. This is because Spearman's correlation coefficient, as a rank measure, is robust against a few outliers, much as a median is robust to outliers. Regarding their data, I'm not sure what strange normalisation they have performed to get a birth rate of -6000! In the case of X4 (Population density) vs. X5 (Arable land), we have a non-linear, non-monotonic relationship. There are four data points at high population density (in a city), and for these, roughly, the higher the density, the less space for farms, so we have a negative correlation. Down among the bulk of the data, at low population densities, there are fewer people and fewer farms in the less hospitable regions (mountains, deserts, etc.), and more people and more farms in the more lush regions, so we have a positive correlation. So I would replace their conclusion with:

Always plot your data.
Understand the actual meaning of the numbers.
Understand what each correlation measure can tell you about your data (as above).
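A short demonstration in R of the outlier behaviour (toy data of my own):

set.seed(1)
x <- rnorm(50); y <- x + rnorm(50, sd = 0.2)
c(pearson = cor(x, y), spearman = cor(x, y, method = "spearman"))  # both near 1
x[1] <- 20; y[1] <- -20                                            # one gross outlier
c(pearson = cor(x, y), spearman = cor(x, y, method = "spearman"))  # Pearson collapses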
53,618
Leave one out cross validation for LSTM
Leave one out cross validation for LSTM, or any other time-series model, doesn't really make much sense, because it would introduce missing values into the series and leak information from the future. Time series models learn from historical values to predict the future. In leave-one-out cross-validation you remove observations from the series, including past ones, which leads to missing data. It follows that such a cross-validation strategy would need the model to be able to deal somehow with missing data, or would need some external strategy for dealing with them, and it would basically test how well the model deals with missing data. Simple cross-validation can be used only if the samples can be assumed to be independent; this is never the case for time series (unless you assume it to be random noise).

With time series data, as described by Hyndman and Athanasopoulos (2018), we perform one-step-ahead cross-validation, where we use samples $1,\dots,k$ to predict the $(k+1)$-th value (or alternatively the values $k+1,\dots, k+m$), for different values of $k$, as illustrated in the image below (also by Hyndman and Athanasopoulos, 2018). Such a strategy tests what you are trying to test, i.e. how well your model predicts the future from the past. Moreover, it also tests how well the model deals with changing data, a scenario that is very common in such cases, since you usually don't make a single prediction from a static training sample, but want a model that will be re-trained with new data and will adapt.

Commenting on LSTM networks

Since you asked explicitly about LSTM,

"Input sequences (of fixed length) over which the LSTM learns are assumed independent. So my question was, can one sequence be left out for validation just like in the case of a cross sectional data problem."

let me comment on this. The above answer applies to LSTM networks as well. An LSTM network learns the time-dependence in your data by using a sliding window, so the sequences are not assumed to be independent; the whole idea of "long term memory" is a dependence assumption. Usually the sequences are generated as in this Keras code example (see also this tutorial for code with description):

sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i: i + maxlen])
    next_chars.append(text[i + maxlen])

As you can see, the loop iterates through your data and creates a sliding window that makes jumps of size step, where step is smaller than maxlen. So the sequences are dependent. Even if you used step equal to maxlen, you would still be assuming time-dependence by your choice of model. It follows that dropping one of the sequences introduces missing data. Using simple cross-validation would introduce bias, since you would be measuring how your model works in a missing-data scenario, rather than how well it makes predictions for the future. For time series, you should rather use one-step-ahead cross-validation.
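For completeness, here is a minimal R sketch of the one-step-ahead (rolling-origin) scheme using forecast::tsCV; the simulated series and the auto.arima forecaster are illustrative choices, not part of the answer above.

library(forecast)
set.seed(1)
y <- ts(cumsum(rnorm(120)), frequency = 12)         # toy non-stationary series
f <- function(x, h) forecast(auto.arima(x), h = h)  # refit on each expanding window
e <- tsCV(y, f, h = 1)                              # one-step-ahead forecast errors
sqrt(mean(e^2, na.rm = TRUE))                       # cross-validated RMSE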
53,619
Fitting ARIMA to time series with missing values
The results given by stats::arima in the first approach (ar1) are correct: they have taken into account the missing values. In the second one, they have not. You can fit ARIMA models with missing values easily because all ARIMA models are state space models and the Kalman filter, which is used to fit state space models, deals with missing values exactly by simply skipping the update phase. So, "putting together the likelihood with missing data" is absolutely possible, as is done by the Kalman filter. Any other state space model will allow you to do the same. Unless you are specifically interested in an estimate of those missing values, you do not need to impute them. If you do so incorrectly, you could distort the dynamics, which would cause problems when trying to fit your model afterwards. If you only want to forecast the series, you should probably not impute them. The question of why ar1 is correct but not ar2 is not exactly on topic here, but for the record: stats::arima expects your data as an object of class ts, not xts. If your data isn't ts, it will be converted by using as.ts, which discards the date information; this means that the explicit NA's in the first approach are retained, while the implicit ones in the second will not appear at all and it will indeed just glue the series together. The reason why stats::arima expects an object of class ts is because that class enforces regularly sampled data (at a certain frequency), whereas xts can carry arbitrarily sampled data, and classical ARIMA models are defined for regularly sampled data only.
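As a sketch (on simulated data), here is the behaviour described above: stats::arima accepts NA values directly, because its Kalman-filter likelihood simply skips the missing observations, so no imputation is needed.

set.seed(1)
x <- arima.sim(list(ar = 0.6), n = 100)
x[c(20, 45, 46, 80)] <- NA           # introduce missing values
fit <- arima(x, order = c(1, 0, 0))  # fits despite the NAs
fit$coef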
53,620
Distance metric for source code
You can check the two links below:
A comparison of code similarity analysers
Measuring Code Similarity in Large-scaled Code Corpora
At the third link below, a similarity measure is proposed which takes into account the amount of shared information between two sequences. This metric is based on Kolmogorov complexity, and it has been applied to measuring the amount of shared information between two computer programs, to enable plagiarism detection.
Shared Information and Program Plagiarism Detection
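Since Kolmogorov complexity is uncomputable, such measures are approximated in practice by real compressors. Here is a hedged base-R sketch of the normalized compression distance, with gzip standing in for an ideal compressor; the two code snippets are made up for illustration.

ncd <- function(a, b) {
  C <- function(s) length(memCompress(charToRaw(s), "gzip"))  # compressed size
  (C(paste0(a, b)) - min(C(a), C(b))) / max(C(a), C(b))
}
src1 <- "for (i in 1:n) total <- total + x[i]"
src2 <- "for (j in 1:n) s <- s + values[j]"  # renamed-variable copy
ncd(src1, src2)  # closer to 0 means more shared information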
53,621
Distance metric for source code
An Approach to Source-Code Plagiarism Detection and Investigation Using Latent Semantic Analysis reported success in this type of task with the general procedure:
Building a term frequency representation of the source code corpus
Performing dimensionality reduction using LSA (they found 30 dimensions to be sufficient)
Computing cosine similarities of the query documents against corpus documents
Perhaps not much different from what one might do with natural language processing, but they do report reasonably convincing results with this method.
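A minimal base-R sketch of that three-step pipeline on a toy term-frequency matrix; the tiny corpus and k = 2 dimensions stand in for the paper's real corpus and 30 dimensions.

tf <- matrix(c(4, 0, 1,
               3, 1, 0,
               0, 5, 2,
               1, 4, 3), nrow = 4, byrow = TRUE)  # 4 terms x 3 documents
s <- svd(tf)                                      # LSA via SVD
k <- 2
docs_k <- diag(s$d[1:k]) %*% t(s$v[, 1:k])        # documents in k-dim LSA space
cosine <- function(a, b) sum(a * b) / sqrt(sum(a^2) * sum(b^2))
cosine(docs_k[, 1], docs_k[, 2])                  # similarity of documents 1 and 2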
53,622
Do I need stationary time series for Bayesian structural time series (BSTS)?
No, you don't need to make the time series stationary. BSTS should be able to handle that. Stationarity is a requirement specifically for AR and ARMA models. BSTS is supposed to handle structural changes in the time series, which means that by definition it should be able to handle non-stationary data - since a structural change in the time series would imply changes to the mean and variance of the series.
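As a sketch, fitting a BSTS model with a local linear trend to a deliberately non-stationary (trending) series might look like this with the bsts package; the simulated series and the choice of state component are illustrative assumptions.

library(bsts)
set.seed(1)
y <- cumsum(rnorm(200, mean = 0.1))    # trending, clearly non-stationary
ss <- AddLocalLinearTrend(list(), y)   # structural trend component
fit <- bsts(y, state.specification = ss, niter = 500)
pred <- predict(fit, horizon = 12)     # forecasts with posterior uncertainty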
53,623
Why does the rank order of models differ for R squared and RMSE?
In caret, the calculation for results$RMSE and results$Rsquared is not as simple as what you've indicated. They are in fact the average of RMSE and $R^2$ over the ten holdout sets. To confirm this, run the summary:

> t1
glmnet

1000 samples
  20 predictors

No pre-processing
Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 900, 900, 900, 900, 900, 900, ...
Resampling results across tuning parameters:

  alpha  lambda      RMSE      Rsquared
  0.10   0.01065054  17.93931  0.1655746
  0.10   0.10650539  17.93720  0.1656599
  0.10   1.06505391  17.89291  0.1678166
  0.55   0.01065054  17.93838  0.1657046
  0.55   0.10650539  17.91755  0.1668356
  0.55   1.06505391  17.84962  0.1731936
  1.00   0.01065054  17.93824  0.1657245
  1.00   0.10650539  17.90045  0.1678998
  1.00   1.06505391  17.92535  0.1710923

RMSE was used to select the optimal model using the smallest value.
The final values used for the model were alpha = 0.55 and lambda = 1.065054.

For the optimal parameter combination alpha = 0.55 and lambda = 1.065054, the performance on each held-out set is seen in the object t1$resample:

> t1$resample
       RMSE   Rsquared Resample
1  18.42848 0.04479504   Fold05
2  21.17820 0.10500276   Fold08
3  18.27933 0.20858027   Fold04
4  17.31308 0.19080079   Fold07
5  16.60865 0.21812706   Fold10
6  20.07291 0.18737052   Fold02
7  16.48082 0.24041654   Fold03
8  17.18363 0.18379930   Fold06
9  17.29819 0.13669866   Fold09
10 15.65289 0.21634546   Fold01

(Needless to say, the RMSE and Rsquared seen above are evaluated on different CV folds, so they don't rank order the same.) If you average these columns, you'll get:

> mean(t1$resample$RMSE)
[1] 17.84962
> mean(t1$resample$Rsquared)
[1] 0.1731936

...which are the same as the RMSE and Rsquared numbers seen in row 6 of the summary.

EDIT: Why does averaging over folds disrupt the rank ordering?

Suppose we have split the data into $F$ folds, and we are considering $C$ tuning combinations. For each combo $c$ and held-out fold $f$, the relationship between the $R^2$ and MSE calculated on fold $f$ is:
$$\operatorname{Rsquared}(c,f)=1-\frac{\operatorname{MSE}(c,f)}{\operatorname{Var}(f)},\tag1$$
where $\operatorname{Var}(f)$ is shorthand for the variance of the observed responses in fold $f$. It is certainly true that for a given $f$, if we average over all $c$, then the monotonic relationship between $R^2$ and MSE is preserved, since by linearity:
$$\frac1C\sum_c\operatorname{Rsquared}(c,f)=1-\frac{\frac1C\sum_c\operatorname{MSE}(c,f)}{\operatorname{Var}(f)}.\tag2$$
However, if we average (1) over all $f$ we cannot assert a similar statement, since the denominator $\operatorname{Var}(f)$, which varies with the fold being held out, gets in the way:
$$\frac1F\sum_f\operatorname{Rsquared}(c,f)=1-\frac1F\sum_f\left(\frac{\operatorname{MSE}(c,f)}{\operatorname{Var}(f)}\right).\tag3$$
The RHS of (3) cannot be simplified further to reveal a monotonic relationship between the average $R^2$ over all folds and the average MSE over all folds. Since MSE is the square of RMSE, the relationship between fold-averaged $R^2$ and fold-averaged RMSE is even less direct. Indeed, for any given fold, there is not even an analog for (2) between combo-averaged $R^2$ and combo-averaged RMSE.
53,624
Why does the rank order of models differ for R squared and RMSE?
It's because caret calculates R-squared differently than you do. See the answer to this question: How caret calculates R Squared. To see it in your code:

library(caret)
set.seed(0)
d <- SLC14_1(n = 1000)
folds <- createMultiFolds(d$y, k = 10, times = 1)
tc <- trainControl(index = folds, returnResamp = "all",
                   savePredictions = TRUE)  # New option
t1 <- train(y ~ ., data = d, method = "glmnet", trControl = tc)
order(t1$results$RMSE) == order(-t1$results$Rsquared)

library(data.table)
preds <- data.table(t1$pred)
preds[, overall_mean := mean(obs), by = .(lambda, alpha, Resample)]
sum_sq <- preds[, .(SS_res = sum((obs - pred)^2),
                    SS_tot = sum((obs - overall_mean)^2),
                    n = .N,
                    var = var(obs),
                    Rsquared_corr = cor(obs, pred)^2),
                by = .(lambda, alpha, Resample)]
sum_sq <- sum_sq[, ':=' (RMSE_Julian = sqrt(SS_res / n),
                         Rsquared_Julian = 1 - (SS_res / SS_tot),
                         Rsquared_traditional = 1 - (SS_res / ((n - 1) * var)))]
sum_sq <- merge(sum_sq, t1$resample, by = c("lambda", "alpha", "Resample"))
head(sum_sq)

Note the savePredictions = TRUE in the call to trainControl(). In the final dataset, sum_sq, you can see that your result, Rsquared_Julian, matches Rsquared_traditional, but these don't match Rsquared_corr, which does match the R-squared from caret, Rsquared. Also, in your question you assume n and SS_tot are constant, but that only holds true within a fold, not across all the cross-validation folds.
53,625
Why does the rank order of models differ for R squared and RMSE?
@grand_chat has the correct maths; I'm just adding a comparative example to help illustrate the issue in different terms. We're working with ratio quantities here, similar to miles per gallon. If we average mpg over set units of time, we get very different results compared to averaging over set units of fuel or distance. Suppose we travel 10 minutes at 50 mph achieving 50 mpg, then 10 minutes at 60 mph achieving 30 mpg, and we then want to calculate the average fuel efficiency for the journey. The time-based average (with one minute representing a unit of time) is $(50 \cdot 10 + 30 \cdot 10)/20 = 40$ mpg. But the distance we travel is $50/6 + 60/6 = 18.33$ miles, given that ten minutes is 1/6th of an hour. The fuel we use is $(50/6)/50 + (60/6)/30 = 1/6 + 2/6 = 1/2$ gallon. This means our average mpg is in fact $18.33/(1/2) = 36.67$. Because the total variance is different in every fold, you would need to account for this in the averaging to maintain the monotonic relationship. Since it is present in the $R^2$ calculation but not in the RMSE, you can get rank switching by not accounting for the total variance in each fold.
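The arithmetic can be checked directly in R:

time_avg <- (50 * 10 + 30 * 10) / 20       # 40 mpg, the naive time-based average
miles    <- 50 / 6 + 60 / 6                # 18.33 miles in the 20 minutes
gallons  <- (50 / 6) / 50 + (60 / 6) / 30  # 0.5 gallons used
miles / gallons                            # 36.67 mpg, the true journey average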
53,626
Intuition on Independence of Random Vectors
Yes, $X_i$ is independent of $Y_j$. To see this, note that if $\mathbf{X}$ and $\mathbf{Y}$ are independent, then for functions $f$ and $g$, $f(\mathbf{X})$ and $g(\mathbf{Y})$ are independent. See discussion here for this statement. So let $f$ be the function that picks out the $i$th element of $\mathbf{X}$, that is, $f(\mathbf{X}) = X_i$ and similarly define $g(\mathbf{Y}) = Y_j$. Then $X_i$ is independent of $Y_j$.
53,627
Intuition on Independence of Random Vectors
It is actually even more general than that. For real-valued random vectors $X = (X_1,X_2,...,X_m)$ and $Y = (Y_1,Y_2,...,Y_n)$, independence implies:
$$F_{X_1,X_2,...X_m,Y_1,Y_2,...,Y_n} (x_1,x_2,...,x_m,y_1,y_2,...,y_n)=F_{X_1,X_2,...X_m}(x_1,x_2,...,x_m) F_{Y_1,Y_2,...,Y_n}(y_1,y_2,...,y_n),$$
where $F$ is the cumulative distribution function. Now plugging $\infty$ in for any variable in the above formula lets us take any subset of the variables and claim independence among subsets, not only between component pairs.
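For instance, to recover the independence of the single pair $(X_1, Y_2)$, send every other argument to $\infty$:
$$F_{X_1,Y_2}(x_1,y_2)=F_{X_1,\dots,X_m,Y_1,\dots,Y_n}(x_1,\infty,\dots,\infty,\,\infty,y_2,\infty,\dots,\infty)=F_{X_1}(x_1)\,F_{Y_2}(y_2),$$
using the factorization above and then marginalizing each factor the same way.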
53,628
Compare central distribution between two data sets
This answer is oblique to your question, because I am not clear that it's the best question to ask. Whether kurtosis is higher or lower doesn't bear directly on the main differences in level, spread and shape between empirical and simulated distributions. I can't comment on what is of most scientific interest here. I don't doubt that conventional tests for differences in mean and/or variance will show something: for example, $t$ tests comparing means give $P$-values of around 0.04. But either seems to miss a major point, which is a difference in distribution shape. Here are overlaid quantile plots, first with reference distribution uniform and second with reference distribution normal (Gaussian). What I see most prominently is that empirical and simulated distributions differ most in the left-hand tails. That is consistent with the density plots, but conversely the rather pronounced bump in the right tail of the latter looks a little like over-reaction to a small cluster of values (compare the rug). A strong merit of quantile plots here is that there are no arbitrary or capricious choices of how and how much to smooth (let alone how, and how much, to bin, where binning not only ignores detail within bins but also is sensitive to bin start and width). The data are plotted as they come, signal, fine structure and noise all together. A limitation of Kolmogorov-Smirnov and similar tests is that you still need to look to see where any differences occur. I'd rather start with looking at the data further.
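A sketch of such an overlaid normal quantile plot in base R; the two vectors here are simulated placeholders for the empirical and simulated data.

empirical <- rnorm(100)            # stand-in for the real data
simulated <- rnorm(100, sd = 1.2)
p <- ppoints(100)
plot(qnorm(p), quantile(empirical, p), pch = 16, col = "goldenrod",
     xlab = "Normal quantiles", ylab = "Data quantiles")
points(qnorm(p), quantile(simulated, p), pch = 16, col = "steelblue")
legend("topleft", c("empirical", "simulated"),
       col = c("goldenrod", "steelblue"), pch = 16)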
53,629
Compare central distribution between two data sets
At issue is what you mean by "clusters ... around 0." In practice you might not know for sure. Therefore, this answer proposes a flexible exploratory determination: namely, the degree of clustering around any point ought to depend on the scale at which you are viewing the clustering, so for insight, study how apparent clustering depends on the scale.

This suggests developing a quantitative measure of degree of clustering as a function of the scale. Given the data have been presented in terms of kernel density estimates (KDEs), and given that the amount of data near a given value is proportional to the estimated density at that value, a natural choice is the value of a KDE at 0. Any KDE depends on a scale factor, typically represented as a kernel half-width. Specifically, for a kernel function $k$ and scale factor $h,$ the KDE of a dataset $x=(x_1, x_2, \ldots, x_n)$ at $0$ is
$$\rho(h;x) = \frac{1}{n}\sum_{i=1}^n \frac{1}{h}\,k\left(\frac{0-x_i}{h}\right)= \frac{1}{nh}\sum_{i=1}^n k\left(-\frac{x_i}{h}\right).$$
This "peak density trace" is a fast, efficient calculation. You can therefore compare two datasets $x$ and $y=(y_1, y_2, \ldots, y_m)$ at all scales by plotting the graphs of $h\to \rho(h;x)$ and $h\to \rho(h;y)$ on the same axes, as shown by this figure for the data in the question:

The plots for $x$ (the empirical data) are shown in yellow and those for $y$ (the simulated data) in blue. At left are the graphs of $\rho$ on log-log axes (using a Gaussian kernel). At right are the two densities for a particular half-width (the one located by the vertical line in the left plot). In this case there is no question about the clustering: no matter what the scale is (over a wide range), the empirical data look more clustered at $0,$ because at each horizontal coordinate in the left-hand plot the yellow ($x$) graph is higher than the blue ($y$) graph.

To appreciate what this approach can offer, consider a circumstance where the answer does depend on the scale. I simulated mixtures of two Normal variables, both centered at zero. The $x$ data use standard deviations of $1$ and $1/2$ while the $y$ data use standard deviations of $5$ and $1/8.$ These mixture components represent clustering around zero at four different scales, with $y$ using the two most extreme scales. We should therefore expect the determination of "more clustered" to depend on scale, and indeed it does, as reflected by the next figure.

The traces of peak density at the left now cross: at half-widths of $1/2$ or less, the blue ($y$) curve is higher, indicating greater clustering by $y;$ at larger half-widths, the yellow ($x$) curve is higher, indicating greater clustering by $x.$ Kernel density estimates for three half-widths (as shown by the vertical lines in the left plot) illustrate this variable clustering.

To describe and compare the clustering around 0, you may use the peak density trace either to select an appropriate scale (and choose the dataset with greatest density at 0 as the most clustered) or to stop with the trace itself, letting it reveal how clustering varies with scale.

Finally, if you are concerned the result might depend on the choice of the kernel, know that it is unlikely to do so, but you can perform a sensitivity analysis by choosing some extreme kernels (such as a uniform and a bi-exponential) and viewing the density traces for them to see how much they might have changed.
As an example, here is a re-analysis for the data in the question carried out for a uniform ("rectangular") kernel: qualitatively it is the same as before and leads to the same conclusion about clustering: at all scales, $x$ is more clustered around $0$ than $y.$

Appendix: R code to create the figures

#
# Generate random data.
# Alternatively: let `x` and `y` be real datasets, as in the question.
#
# set.seed(17)
# x <- c(rnorm(100), rnorm(100, sd=0.5))
# y <- c(rnorm(100, sd=5), rnorm(100, sd=0.125))
#
# Figure out a reasonable range of kernel half-widths `h`.
#
s <- diff(range(c(x,y)))
n <- min(length(x), length(y))
d <- 2*s / (length(x) + length(y))
h <- exp(seq(log(d), log(s), length.out=101))
#
# Compute the peak traces.
# `kstring` is an argument to `density`.
#
rho <- function(h, x) mean(dnorm(-x, 0, h))  # Gaussian kernel
kstring <- "gaussian"
# rho <- function(h, x) mean(dunif(-x, -h*sqrt(3), h*sqrt(3)))  # Rectangular kernel
# kstring <- "rectangular"
x.plot <- sapply(h, rho, x=x)
y.plot <- sapply(h, rho, x=y)
#
# Create consistent plotting colors.
#
col.make <- function(col, h, s, v, a) {
  q <- do.call("rgb2hsv", as.list(col2rgb(col)))
  if (!missing(h)) q[1] <- h
  if (!missing(s)) q[2] <- s
  if (!missing(v)) q[3] <- v
  hsv(q[1], q[2], q[3], a)
}
x.col <- col.make("#F4E5AE", s=0.8); x.cola <- col.make(x.col, a=0.25)
y.col <- col.make("#B1E0E4", s=0.8); y.cola <- col.make(y.col, a=0.25)
#
# Quick and dirty: obtain the half-width estimated by `density` for reference.
#
x.kde <- density(x, kernel=kstring)
y.kde <- density(y, kernel=kstring)
bw <- signif(max(x.kde$bw, y.kde$bw), 1)
#
# Make the plots.
#
# mult <- c(1,2.5,10)  # Multipliers of `bw` to display
mult <- 1
par(mfrow=c(1, length(mult)+1))
#-- The peak trace plots
plot(h, pmax(y.plot, x.plot), log="xy", type="n", main="Densities at 0",
     xlab="Kernel half width", ylab="Peak density")
abline(v = bw*mult)
lines(h, x.plot, lwd=2, col=x.col)
lines(h, y.plot, lwd=2, col=y.col)
#-- The density plot(s)
for (bw in bw * mult) {
  x.kde <- density(x, bw=bw, kernel=kstring)
  y.kde <- density(y, bw=bw, kernel=kstring)
  plot(range(c(x.kde$x, y.kde$x)), range(c(x.kde$y, y.kde$y)), type="n",
       xlab="Value", ylab="Density",
       main=paste0("Kernel Densities at Half-width ", bw))
  abline(v = 0)
  polygon(x.kde$x, x.kde$y, border=NA, col=x.cola)
  lines(x.kde, lwd=2, col=x.col)
  polygon(y.kde$x, y.kde$y, border=NA, col=y.cola)
  lines(y.kde, lwd=2, col=y.col)
}
par(mfrow=c(1,1))
53,630
Compare central distribution between two data sets
It seems like what you're looking for is Levene's test of homogeneity of variance. It tests whether the variance in one distribution is significantly different from the variance in another distribution, which should get at your question about how the data cluster. It will not tell you whether the data are centered on 0, though; for that I would use a t-test. Between the combination of those two you should be able to answer your question. https://en.wikipedia.org/wiki/Levene%27s_test
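A minimal sketch of the suggested pair of tests in R, with simulated stand-ins for the two samples; leveneTest is from the car package.

library(car)
empirical <- rnorm(100, sd = 0.8)  # placeholders for the real data
simulated <- rnorm(100, sd = 1.2)
values <- c(empirical, simulated)
group  <- factor(rep(c("empirical", "simulated"), each = 100))
leveneTest(values ~ group)         # H0: equal variances (equal spread)
t.test(empirical, mu = 0)          # H0: empirical data centered on 0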
53,631
Compare central distribution between two data sets
The OP states, "I am interested in whether my empirical data (yellow) "clusters" around the midpoint of the plot more than the simulated data (blue)." This is not a question about kurtosis: kurtosis does not measure "clustering" around the midpoint. Rather, it measures tails of the distribution. (Rare, extreme potentially observable data). Here is a visual image to help understand the above comment. Compute the z-values for each sample. Plot the $z^4$ values for the observed data sample using a dot plot. Now, compute the average of the $z^4$ values for the simulated data; this is the kurtosis of the simulated data. Now, locate the kurtosis of the simulated data as a "fulcrum" on the horizontal axis of your dot plot of your observed data $z^4$ values. If the dot plot "falls to the right," then your observed data have higher kurtosis than the simulated data, and conversely. Now, what causes the "falling to the right"? Is it greater "clustering around the midpoint" of your actual data? Obviously, not, because it falls to the right, not to the left. So, higher kurtosis implies greater tail weight (rare, extreme value(s)), not greater "clustering around a midpoint." If you want to compare "clustering around the midpoint," you might instead consider comparing the difference between 10th and 90th percentiles (or other similar). You could use a bootstrap-type (perhaps smoothed) method to estimate standard error of the difference.
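A small R sketch of the two computations described above, with simulated stand-ins for the samples: mean $z^4$ (kurtosis, a tail measure) versus a 10th-90th percentile distance (a clustering measure).

empirical <- rnorm(100)               # placeholders for the real data
simulated <- rt(100, df = 5)          # heavier-tailed comparison sample
kurt <- function(x) { z <- (x - mean(x)) / sd(x); mean(z^4) }
kurt(empirical); kurt(simulated)      # measures tail weight, not central clustering
spread <- function(x) diff(quantile(x, c(0.1, 0.9)))
spread(empirical); spread(simulated)  # smaller value = tighter central clustering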
53,632
Suppose $X,Y,Z$ are random variables such that $Y,X$ are perfectly correlated. Does it hold that $P(Z|X,Y) = P(Z|Y)$?
In short the answer is that yes, this is true, but proving it requires some manipulation of the measure-theoretic definitions of probability: largely because the definition of conditional independence is heavily steeped in measure theory. Note that perfect correlation implies that almost surely $X = aY + b$ for some constants $a,b$ with $a \neq 0$. Now we turn to the measure theory; recall that by Bayes' formula
$$\mathbf P[ Z = z \, | X = ay + b , \, Y =y ] = \frac{ \mathbf P[Z = z, \, X = ay + b,\, Y = y ] }{\mathbf P[ X = ay + b,\, Y = y ]}.$$
Moreover, measure-theoretically, if the random variables $X,Y,Z \colon \Omega \rightarrow \mathbf R$, then events such as $\{X = x\}$ are defined as
$$ \{X = x\} = \{ \omega \in \Omega \, \colon \, X(\omega) = x\}.$$
Further, since $X$ is almost surely equal to $aY + b$, we have that up to a set of measure $0$
$$\{X = ay + b\} = \{Y = y\},$$
and hence their intersection is, up to a set of measure $0$,
$$\{X = ay + b\} \cap \{Y = y\} = \{Y = y\}.$$
This can now be substituted into Bayes' formula to derive the desired result:
\begin{align*}\mathbf P[ Z = z \, | X = ay + b , \, Y =y ] &= \frac{ \mathbf P[Z = z, \, X = ay + b,\, Y = y ] }{\mathbf P[ X = ay + b,\, Y = y ]}\\ &=\frac{ \mathbf P[Z = z, \, Y = y ] }{\mathbf P[ Y = y ]}\\ &=\mathbf P[ Z = z \, | \, Y =y ] .\end{align*}
Note that there is some level of detail omitted from this (largely around the sets-of-measure-$0$ argument), but the essence is there.
53,633
Linear regression for large dataset
For larger datasets, we use stochastic gradient descent or batch gradient descent. Using these may only give an optimum value that is close enough. I would suggest you use batch gradient descent, as it gives better optimum values than stochastic gradient descent.
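A minimal base-R sketch of mini-batch gradient descent for linear regression; the simulated data, learning rate, and batch size are illustrative choices.

set.seed(1)
n <- 10000; p <- 5
X <- cbind(1, matrix(rnorm(n * p), n, p))  # design matrix with intercept
beta_true <- rnorm(p + 1)
y <- X %*% beta_true + rnorm(n)

beta <- rep(0, p + 1); lr <- 0.05; batch <- 100
for (epoch in 1:50) {
  idx <- sample(n)                         # shuffle each epoch
  for (b in seq(1, n, by = batch)) {
    rows <- idx[b:min(b + batch - 1, n)]
    resid <- X[rows, ] %*% beta - y[rows]
    grad <- crossprod(X[rows, ], resid) / length(rows)  # gradient of MSE
    beta <- beta - lr * grad
  }
}
cbind(beta_true, beta)                     # estimates close to the truth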
53,634
Linear regression for large dataset
If the data is large, an iterative method is better than a direct method for solving the linear system. Details can be found in this post https://stats.stackexchange.com/a/278779/113777 In addition, stochastic gradient descent can be used to learn from very large data sets; I also discussed it in my answer linked above. The idea is to approximate the gradient from a subset of the data, which can be implemented in parallel.
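As one concrete iterative solver (my addition, not taken from the linked post; problem sizes are arbitrary), here is a plain conjugate gradient iteration on the normal equations $X^T X \beta = X^T y$:

set.seed(1)
n <- 1e4; p <- 20
X <- matrix(rnorm(n * p), n, p)
y <- X %*% rnorm(p) + rnorm(n)

A <- crossprod(X); b <- crossprod(X, y)        # p x p system X'X beta = X'y
beta <- rep(0, p); r <- b - A %*% beta; d <- r
for (k in 1:p) {                               # CG converges in at most p steps
  Ad <- A %*% d
  alpha <- drop(crossprod(r) / crossprod(d, Ad))
  beta <- beta + alpha * d
  r_new <- r - alpha * Ad
  bk <- drop(crossprod(r_new) / crossprod(r))
  d <- r_new + bk * d
  r <- r_new
}
max(abs(beta - solve(A, b)))                   # agrees with the direct solution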
53,635
Linear regression for large dataset
If your data is too tall, then a standard technique is batching, where you compute the update from, say, 1000 points at a time. This is how stochastic (mini-batch) gradient descent works. If your data is also too wide, then I would think a similar kind of batching procedure would work, where you also select a subset of features to update at any given time. This would be analogous to how dropout works in neural networks.
53,636
Are both ARIMA and Exponential Smoothing special cases of State Space models?
Yes indeed: both exponential smoothing and ARIMA are special cases of state space models. For ARIMA, see this talk by Rob Hyndman, and for Exponential Smoothing, see Forecasting with Exponential Smoothing - the State Space Approach. This underlies the fact that specific Exponential Smoothing methods can be shown to yield MSE-optimal point forecasts for certain ARIMA data generating processes, and vice versa. Rob Hyndman works in general in a state space framework, as do other forecasters. The ARIMA and Exponential Smoothing special cases will not die out, because they are well established forecasting paradigms that are more easily explained than the general state space formulation. However, state space formulations allow "natural" extensions of the "classical" approaches, e.g., to include causal effects or complex seasonality.
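A small illustration (my addition): simple exponential smoothing, i.e. ETS(A,N,N), and ARIMA(0,1,1) are equivalent models, and the forecast package yields nearly identical point forecasts for both:

library(forecast)
set.seed(1)
y <- ts(cumsum(rnorm(200)))                        # a random-walk-like series
f_ets   <- forecast(ets(y, model = "ANN"), h = 5)  # state space exponential smoothing
f_arima <- forecast(Arima(y, order = c(0, 1, 1)), h = 5)
cbind(ets = f_ets$mean, arima = f_arima$mean)      # nearly identical point forecasts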
53,637
auto.arima returns a non-seasonal model even though I am forcing seasonality
Rule number 1: when your code does not do what you want, start inspecting your objects.

library(forecast)
set.seed(1)
(Data <- as.ts(rnorm(116), frequency=52))

yields

Time Series:
Start = 1
End = 116
Frequency = 1
...snip...

Note that Frequency is 1, not 52, as we explicitly set above! The problem is that as.ts() silently ignores the frequency parameter and sets the frequency to 1. Here is the help page to stats::ts():

‘as.ts’ is generic. Its default method will use the ‘tsp’ attribute of the object if it has one to set the start and end times and frequency.

So, if you want something seasonal, either supply an x with a tsp attribute to as.ts()... or use ts() straight from the beginning.

set.seed(1)
(Data <- ts(rnorm(116), frequency=52))

which yields

Time Series:
Start = c(1, 1)
End = c(3, 12)
Frequency = 52
...snip...

This looks much better. Note that we now need to supply 2-vectors to window(), corresponding to the Start and End attributes of Data:

Train <- window(Data, start=c(1,1), end=c(2,52))
auto.arima(Train, D=1)

which yields a nice seasonal ARIMA model, as required:

Series: Train
ARIMA(0,0,0)(0,1,0)[52]
53,638
auto.arima returns a non-seasonal model even though I am forcing seasonality
Two things come to mind: 1) it is silly to try to fit a seasonal AR model of order 52 to 105 observations, as you only have 2 cycles of data, and 2) see "I have correlogram ACF and PACF below for a temperature time series. Can I say it is MA(2) from ACF? What about AR?", where ignoring the effect of anomalies is discussed as a cause of flawed model identification.
53,639
Bootstrap p-value
Actually, both are possible. The definition from Wikipedia in the other answer is somewhat imprecise. Here, it is necessary to point out that "greater magnitude" in the definition The probability for a given statistical model that, when the null hypothesis is true, the statistical summary would be the same as or of greater magnitude than the actual observed results. needs to be taken to mean "more extreme", in the sense of as or more unlikely when the null is true. Hence, when you conduct a right-tailed test, large values of $T$ provide evidence against the null (i.e., they are unlikely if it is true), while small (i.e., large negative) values are by no means surprising. Thus, if the bootstrap test statistics $T^{(i)}$ can be seen as draws from the null distribution (i.e., your bootstrap was successful), $\frac 1 B \sum_{i=1}^B\mathbb 1_{\{T^{(i)}> T\}}$ would be a useful bootstrap p-value. Conversely, if you test against left-tailed alternatives, $\frac 1 B \sum_{i=1}^B \mathbb 1_{\{T^{(i)}<T\}}$ would be appropriate, as it would give you the fraction of draws from your approximation to the null distribution that are more extreme (less compatible with the null) than the observed test statistic $T$.
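To make the two one-sided versions concrete, here is a minimal sketch (my addition; sample size and effect are arbitrary) for a bootstrap test of a zero mean, where the null is imposed by centering the sample before resampling:

set.seed(1)
x <- rnorm(50, mean = 0.3)
T_obs <- mean(x) / (sd(x) / sqrt(length(x)))   # observed t statistic

x0 <- x - mean(x)                              # impose the null: mean zero
B <- 5000
T_boot <- replicate(B, {
  xs <- sample(x0, replace = TRUE)
  mean(xs) / (sd(xs) / sqrt(length(xs)))
})
mean(T_boot > T_obs)   # right-tailed bootstrap p-value
mean(T_boot < T_obs)   # left-tailed bootstrap p-value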
53,640
Bootstrap p-value
Go back to the definition of the P-value (from Wikipedia): The probability for a given statistical model that, when the null hypothesis is true, the statistical summary would be the same as or of greater magnitude than the actual observed results. For this reason, its estimation by bootstrap is the proportion of the $T^{(i)}$ that exceed $T$.
53,641
How to compare logistic regression curves?
I'd suggest putting all three datasets into one. Include in this dataset an indicator variable for each of the three datasets. Then, fit a logistic regression using this complete dataset, including an interaction term of age and the indicator variable. Additionally, fit a logistic regression model that does not include the interaction term but additive terms for age and the indicator. Perform a likelihood ratio test to evaluate the evidence that the interaction term is significant. That is, compare the models $$ \mathrm{logit}(y)=\beta_0 + \beta_1\cdot \mathrm{age} + \beta_2\cdot \mathrm{g_2} + \beta_3\cdot \mathrm{g_3} + \beta_4\cdot \mathrm{age}\cdot \mathrm{g_2} + \beta_5\cdot \mathrm{age}\cdot \mathrm{g_3} $$ and $$ \mathrm{logit}(y)=\beta_0 + \beta_1\cdot \mathrm{age} + \beta_2\cdot g_2 + \beta_3\cdot g_3 $$ where $\mathrm{g_2}$ and $\mathrm{g_3}$ are indicator variables (dummy variables) for groups 2 and 3: $g_2$ is $1$ whenever the group is $2$ and $0$ otherwise, and $g_3$ is $1$ whenever the group is $3$ and $0$ otherwise. Group 1 serves as the reference group. You can change the reference group by omitting the corresponding dummy variable in the model; here, we omitted the indicator variable for group 1, which consequently serves as the reference group. The likelihood ratio test tests the following hypotheses \begin{align} \mathrm{H}_{0}&: \beta_4 = \beta_5 = 0 \\ \mathrm{H}_{1}&: \text{At least one}\> \beta_j \neq 0, j = 4, 5 \end{align} So it tests whether the differences between the age slope of the reference group (group 1) and the age slopes of groups 2 and 3 are significant. It doesn't tell you, however, whether the slopes of groups 2 and 3 differ from each other (see the pairwise comparisons below). If the likelihood ratio test is significant, you have evidence that at least two groups have different slopes. Here is an example using R. I'm using an available dataset that models the probability of admission into a graduate school. Instead of age, it uses GPA, and instead of the group indicator it uses ranks 1-4, which indicate the prestige of the institution. For your problem, just change GPA to age and rank to the group indicator.

dat <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
dat$rank <- factor(dat$rank)
glm_mod <- glm(admit~gpa*rank, family = binomial, data = dat)
glm_mod_noint <- glm(admit~gpa+rank, family = binomial, data = dat)
summary(glm_mod)
# Coefficients:
#             Estimate Std. Error z value Pr(>|z|)
# (Intercept)  -4.6298     2.4843  -1.864   0.0624 .
# gpa           1.3906     0.7171   1.939   0.0525 .
# rank2         0.9191     2.9618   0.310   0.7563
# rank3         0.1892     3.2431   0.058   0.9535
# rank4        -1.1121     4.1319  -0.269   0.7878
# gpa:rank2    -0.4661     0.8584  -0.543   0.5871
# gpa:rank3    -0.4567     0.9295  -0.491   0.6232
# gpa:rank4    -0.1378     1.1999  -0.115   0.9085
summary(glm_mod_noint)  # output omitted here

Now you see that the slope for the GPA is $1.39$ for institutions with rank 1. For institutions with rank 2, the slope is $1.391 - 0.466 = 0.925$; the difference in slopes is $-0.466$, and the corresponding $p$-value is $0.587$, providing little evidence for a difference in slopes. Accordingly, the difference in slopes compared with institutions of rank 1 is $-0.457$ for rank 3 and $-0.138$ for rank 4. All corresponding $p$-values are quite high. Let's do the likelihood ratio test now:

# Likelihood ratio test
anova(glm_mod_noint, glm_mod, test = "Chisq")
# Analysis of Deviance Table
#
# Model 1: admit ~ gpa + rank
# Model 2: admit ~ gpa * rank
#   Resid. Df Resid. Dev Df Deviance Pr(>Chi)
# 1       395     462.88
# 2       392     462.49  3  0.38446   0.9434

The high $p$-value of the likelihood ratio test indicates little evidence that the influence of the GPA on the probability of admission differs between institutions of different ranks. We would therefore assume that $\beta_5 = \beta_6 = \beta_7 = 0$ in this case (i.e. the coefficients of the interaction terms are all $0$). Let's visualize our results on the probability scale and then on the log-odds scale (the original figures are not reproduced here; a plotting sketch follows at the end of this answer). The lines differ slightly in their slope, but these differences are not statistically significant. Informally, the interaction term assesses the evidence for parallel lines; in this case, we can assume that the lines are parallel on the log-odds scale. Pairwise comparisons: The likelihood ratio test tests whether all differences in slopes between the reference group and the other groups are $0$ (see the hypotheses above). If we'd like to test all pairwise differences in slopes, we have to resort to a post-hoc test. If there are $n$ groups, there are $\frac{1}{2}(n - 1)n$ pairwise comparisons. In your case, there are $n = 3$ groups, which leads to a total of $3$ comparisons. In this example with four groups (ranks 1-4), there are $6$ comparisons. We will use the multcomp package to conduct the tests:

library(multcomp)
glht_mod <- glht(glm_mod, linfct = c(
    "gpa:rank2 = 0"              # rank 1 vs. rank 2
  , "gpa:rank3 = 0"              # rank 1 vs. rank 3
  , "gpa:rank4 = 0"              # rank 1 vs. rank 4
  , "gpa:rank2 - gpa:rank3 = 0"  # rank 2 vs. rank 3
  , "gpa:rank3 - gpa:rank4 = 0"  # rank 3 vs. rank 4
  , "gpa:rank2 - gpa:rank4 = 0"  # rank 2 vs. rank 4
))
summary(glht_mod)  # all pairwise tests
# Linear Hypotheses:
#                             Estimate Std. Error z value Pr(>|z|)
# gpa:rank2 == 0             -0.466105   0.858379  -0.543    0.947
# gpa:rank3 == 0             -0.456729   0.929507  -0.491    0.960
# gpa:rank4 == 0             -0.137839   1.199946  -0.115    0.999
# gpa:rank2 - gpa:rank3 == 0 -0.009376   0.756521  -0.012    1.000
# gpa:rank3 - gpa:rank4 == 0 -0.318890   1.129327  -0.282    0.992
# gpa:rank2 - gpa:rank4 == 0 -0.328266   1.071546  -0.306    0.990

These $p$-values are adjusted for multiple comparisons. There is very little evidence for any difference between slopes. Finally, let's test the global hypothesis that all pairwise slope comparisons are $0$:

summary(glht_mod, Chisqtest())  # global test
# Global Test:
#    Chisq DF Pr(>Chisq)
# 1 0.3793  3     0.9445

Again, very little evidence that any of the slopes differ from each other.
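A short plotting sketch (my addition; the original answer displayed the figures directly) that reproduces the fitted curves on the probability scale:

library(ggplot2)
newdat <- expand.grid(gpa = seq(2, 4, by = 0.05), rank = factor(1:4))
newdat$p <- predict(glm_mod, newdata = newdat, type = "response")
ggplot(newdat, aes(gpa, p, colour = rank)) + geom_line()
# use type = "link" in predict() for the log-odds scale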
53,642
sequential/recursive/online calculation of sample covariance matrix
It's easy if you write $$ \hat\Sigma_n= \frac{1}{n-1}\sum_{i=1}^nX_i X_i^T - \frac{n}{n-1}\hat{\mu}_n\hat{\mu}_n^T. $$ Split up the sum over $n$ elements into two parts. One will involve the first $n-1$ terms (which you can make look like the previous sample covariance), and the second will involve the most recent $n$th term. Finally, find a recursive formula for $\hat{\mu}_n$, and then plug that in. Edit: The correct formula should be $$ \hat{\Sigma}_n = \frac{n-2}{n-1}\hat{\Sigma}_{n-1} + \frac{1}{n}\Delta_n \Delta_n^T, $$ where $\Delta_n = x_n - \hat{\mu}_{n-1}$. I think you flipped that first fraction accidentally. Here's a proof: \begin{align*} \hat{\Sigma}_n &= \frac{1}{n-1}\sum_{i=1}^n(x_i - \hat{\mu}_n)(x_i - \hat{\mu}_n)^T\\ &=\frac{1}{n-1}\sum_{i=1}^n x_ix_i^T - \frac{n}{n-1}\hat{\mu}_n\hat{\mu}_n^T\\ &= \frac{1}{n-1}\sum_{i=1}^{n-1} x_ix_i^T + \frac{1}{n-1}x_nx_n^T - \frac{n}{n-1}\hat{\mu}_n\hat{\mu}_n^T \\ &= \frac{1}{n-1}\sum_{i=1}^{n-1} x_ix_i^T + \frac{1}{n-1}x_nx_n^T - \frac{n}{n-1} \left(\frac{n-1}{n} \hat{\mu}_{n-1} + \frac{1}{n}x_n \right)\left(\frac{n-1}{n} \hat{\mu}_{n-1} + \frac{1}{n}x_n \right)^T \\ &= \frac{1}{n-1}\sum_{i=1}^{n-1} x_ix_i^T + \frac{1}{n-1}x_nx_n^T - \left( \hat{\mu}_{n-1} + \frac{1}{n-1}x_n \right)\left(\frac{n-1}{n} \hat{\mu}_{n-1} + \frac{1}{n}x_n \right)^T \\ &= \frac{1}{n-1} \sum_{i=1}^{n-1} x_ix_i^T + \frac{1}{n-1}x_nx_n^T - \frac{n-1}{n}\hat{\mu}_{n-1}\hat{\mu}_{n-1}^T - \frac{1}{n} \hat{\mu}_{n-1} x_n^T - \frac{1}{n}x_n\hat{\mu}_{n-1}^T - \frac{1}{(n-1)n} x_n x_n^T \\ &= \frac{1}{n-1} \sum_{i=1}^{n-1} x_ix_i^T + \frac{1}{n-1}x_nx_n^T - \frac{n-1}{n}\hat{\mu}_{n-1}\hat{\mu}_{n-1}^T - \frac{1}{n} \hat{\mu}_{n-1} x_n^T - \frac{1}{n}x_n\hat{\mu}_{n-1}^T + \frac{1}{n}x_n x_n^T - \frac{1}{n-1} x_n x_n^T \\ &= \frac{1}{n-1} \sum_{i=1}^{n-1} x_ix_i^T + \frac{1}{n-1}x_nx_n^T - \hat{\mu}_{n-1}\hat{\mu}_{n-1}^T - \frac{1}{n-1} x_n x_n^T +\left( \frac{1}{n}x_n x_n^T - \frac{1}{n}x_n\hat{\mu}_{n-1}^T - \frac{1}{n} \hat{\mu}_{n-1} x_n^T + \frac{1}{n}\hat{\mu}_{n-1}\hat{\mu}_{n-1}^T \right) \\ &= \frac{1}{n-1} \sum_{i=1}^{n-1} x_ix_i^T - \hat{\mu}_{n-1}\hat{\mu}_{n-1}^T + \frac{1}{n}\Delta_n\Delta_n^T \\ &= \frac{n-2}{n-1}\left(\frac{1}{n-2} \sum_{i=1}^{n-1} x_ix_i^T - \frac{n-1}{n-2}\hat{\mu}_{n-1}\hat{\mu}_{n-1}^T\right) + \frac{1}{n}\Delta_n\Delta_n^T \\ &= \frac{n-2}{n-1}\hat{\Sigma}_{n-1} + \frac{1}{n}\Delta_n \Delta_n^T. \end{align*}
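A quick numerical check of the recursion (my addition), combined with the recursive mean update $\hat{\mu}_n = \hat{\mu}_{n-1} + \Delta_n / n$, against R's built-in cov():

set.seed(1)
p <- 3
X <- matrix(rnorm(100 * p), ncol = p)

mu <- X[1, ]
S  <- matrix(0, p, p)                  # the n = 2 step has coefficient 0, so S's start value is irrelevant
for (n in 2:nrow(X)) {
  d  <- X[n, ] - mu                    # Delta_n uses the previous mean
  S  <- (n - 2) / (n - 1) * S + tcrossprod(d) / n
  mu <- mu + d / n                     # recursive mean update
}
max(abs(S - cov(X)))                   # agrees up to floating point error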
53,643
(Practical) Applications of RNN
Image captioning
Sentiment analysis (this is an example of LSTMs in Theano)
Question answering
Speech recognition
Anomaly detection in time series

Wikipedia has a section on applications in the LSTM article.
53,644
(Practical) Applications of RNN
You have to understand that RNNs deal with "series" of data. It can be time series, or it can be sentences, which can be thought of as series of words. One very powerful thing about RNNs is that they allow you to deal with series of different lengths. Having said that, anything that looks like a series or time series can potentially be handled by RNNs. A few more examples of applications:

Log data analysis (web data)
Sensor data analysis (time series)
Translation https://www.tensorflow.org/tutorials/seq2seq
Named entity recognition (you'll have to use it with word2vec) https://github.com/monikkinom/ner-lstm
Video classification http://www.ijcsit.com/docs/Volume%206/vol6issue02/ijcsit20150602189.pdf

And many others that contain series/time series data.
53,645
Batch normalisation at the end of each layer and not the input?
According to Ioffe and Szegedy (2015), batch normalization is employed to stabilize the inputs to nonlinear activation functions. "Batch Normalization seeks a stable distribution of activation values throughout training, and normalizes the inputs of a nonlinearity since that is where matching the moments is more likely to stabilize the distribution" So normally, it is inserted after dense layers and before the nonlinearity, as illustrated in the CS231n lecture notes.
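A placement sketch (my addition, using the keras package for R; the layer sizes are arbitrary, and the API may differ slightly across keras versions):

library(keras)
model <- keras_model_sequential() %>%
  layer_dense(units = 64, input_shape = 100) %>%  # affine layer, no activation yet
  layer_batch_normalization() %>%                 # normalize the pre-activations
  layer_activation("relu") %>%                    # nonlinearity comes after BN
  layer_dense(units = 10, activation = "softmax")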
53,646
Calculating t-SNE gradient (a mistake in the original t-SNE paper)
I just signed up for this forum due to your question :) Nice question! It shows someone is indeed trying to follow and derive the nitty-gritty. Your question is totally valid: (28) is indeed missing the $d_{ij}$, but then (24) is missing a $d_{ij}^{-1}$; you can see that from (21) via $\frac{\partial d_{ij}}{\partial y_i}$, taking into account that $$\frac{\partial \lVert\mathbf x\rVert}{\partial x_i} = \frac{x_i}{\lVert\mathbf x\rVert}.$$ So at the end everything is correct again! :P
53,647
Calculating t-SNE gradient (a mistake in the original t-SNE paper)
Let our cost function be $C=KL(P||Q)=\sum_k\sum_{l\neq k}p_{kl}\log(p_{kl})-p_{kl}\log(q_{kl})$, so that we are trying to find the gradient $\frac{\partial C}{\partial y_i}$. We will define some intermediate terms to try and simplify the notation; let $d_{ij}=||y_i-y_j||=d_{ji}$, and let $W=\sum_{k\neq l}(1+d_{kl}^2)^{-1}$. With that, let's now think about what terms are important to $C$ when we change just $y_i$. Notice that the only terms in $C$ that depend on $y_i$ are $d_{ij}$ and $d_{ji}$ for all of the $j$'s. Using the chain rule, and the symmetry of $d_{ij}=d_{ji}$, this gives us: $$ \frac{\partial C}{\partial y_{i}} =2\sum_j\frac{\partial C}{\partial d_{ij}}\frac{\partial d_{ij}}{\partial y_i},\\\\ \text{where } \frac{\partial d_{ij}}{\partial y_{i}}=\frac{y_i-y_j}{||y_i-y_j||}=\frac{y_i-y_j}{d_{ij}},\\\\ \text{so } \frac{\partial C}{\partial y_{i}}=2\sum_j\frac{\partial C}{\partial d_{ij}}\frac{y_i-y_j}{d_{ij}}.\\\\ $$ The first step is to find $\frac{\partial C}{\partial d_{ij}}$; we get: $$ \frac{\partial C}{\partial d_{ij}}= \frac{\partial}{\partial d_{ij}}\sum_k\sum_{l\neq k}p_{kl}\log(p_{kl})-p_{kl}\log(q_{kl})\\\\ = -\sum_k\sum_{l\neq k}p_{kl}\frac{\partial}{\partial d_{ij}}\log\left(\frac{(1+d_{kl}^2)^{-1}}{W}\right)\\\\ = -\sum_k\sum_{l\neq k}p_{kl}\left(-\frac{\partial}{\partial d_{ij}}\log(W) -\frac{\partial}{\partial d_{ij}}\log(1+d_{kl}^2)\right)\\\\ = -\sum_k\sum_{l\neq k}p_{kl}\left(-\frac{1}{W}\frac{\partial}{\partial d_{ij}}(W) - \frac{1}{1+d_{kl}^2}\frac{\partial}{\partial d_{ij}}(1+d_{kl}^2)\right)\\\\ =\sum_k\sum_{l\neq k}\frac{p_{kl}}{1+d_{kl}^2}\frac{\partial}{\partial d_{ij}}(1+d_{kl}^2) + \sum_k\sum_{l\neq k}\frac{p_{kl}}{W}\frac{\partial}{\partial d_{ij}}W\\\\ $$ Let's start with the first summation, which has the term $\frac{\partial}{\partial d_{ij}}(1+d_{kl}^2)$. Notice that this is 0 except when $k=i$ and $l=j$. And when $k=i,l=j$, we get $\frac{\partial}{\partial d_{ij}}(1+d_{kl}^2) = 2d_{ij}$. This makes: $$\sum_k\sum_{l\neq k}\frac{p_{kl}}{1+d_{kl}^2}\frac{\partial}{\partial d_{ij}}(1+d_{kl}^2)=\frac{2d_{ij}p_{ij}}{1+d_{ij}^2} $$ Let's now look at the second double sum term, which has $\frac{1}{W}\frac{\partial}{\partial d_{ij}}W$. Recall that $W=\sum_{k\neq l}(1+d_{kl}^2)^{-1}$, which means $\frac{\partial W}{\partial d_{ij}}$ is 0 when $k\neq i$ or $l\neq j$. So the only non-zero term is when $k=i, l=j$, and we get $\frac{1}{W}\frac{\partial W}{\partial d_{ij}}=\frac{-2d_{ij}(1+d_{ij}^2)^{-2}}{W}$. Also notice that $q_{ij}=\frac{(1+d_{ij}^2)^{-1}}{W}$; this makes: $$\sum_k\sum_{l\neq k}\frac{p_{kl}}{W}\frac{\partial W}{\partial d_{ij}}=-2\sum_k\sum_{l\neq k}\frac{p_{kl}d_{ij}(1+d_{ij}^2)^{-2}}{W}\\\\ = -2d_{ij}q_{ij}(1+d_{ij}^2)^{-1}\sum_k\sum_{l\neq k}p_{kl}\\\\ \text{Notice that } \sum_k\sum_{l\neq k}p_{kl}=1\text{, so...}\\\\ =-2d_{ij}q_{ij}(1+d_{ij}^2)^{-1}\\ $$ Stitching this all back together, we get: $$\frac{\partial C}{\partial d_{ij}}=\sum_k\sum_{l\neq k}\frac{p_{kl}}{1+d_{kl}^2}\frac{\partial}{\partial d_{ij}}(1+d_{kl}^2) + \sum_k\sum_{l\neq k}\frac{p_{kl}}{W}\frac{\partial W}{\partial d_{ij}}\\\\ =2d_{ij}p_{ij}(1+d_{ij}^2)^{-1} - 2d_{ij}q_{ij}(1+d_{ij}^2)^{-1}\\\\ = 2d_{ij}(1+d_{ij}^2)^{-1}(p_{ij}-q_{ij})\\\\ $$ Now that we have $\frac{\partial C}{\partial d_{ij}}$, we can plug this back in to $\frac{\partial C}{\partial y_i}$, to get: $$\frac{\partial C}{\partial y_i}=2\sum_j\frac{\partial C}{\partial d_{ij}}\frac{(y_i-y_j)}{d_{ij}}\\\\ =2\sum_j2d_{ij}(1+d_{ij}^2)^{-1}(p_{ij}-q_{ij})\frac{(y_i-y_j)}{d_{ij}}\\\\ =4\sum_j(p_{ij}-q_{ij})(1+d_{ij}^2)^{-1}(y_i-y_j)\\\\ \frac{\partial C}{\partial y_i}= 4\sum_j(p_{ij}-q_{ij})(1+||y_i-y_j||^2)^{-1}(y_i-y_j) $$
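A finite-difference sanity check of the final formula (my addition; a small random configuration with a symmetric $P$ that sums to one over all $k \neq l$):

set.seed(1)
n <- 6
Y <- matrix(rnorm(n * 2), n, 2)
P <- matrix(runif(n * n), n, n); diag(P) <- 0
P <- P + t(P); P <- P / sum(P)                 # symmetric, sums to 1

kl_cost <- function(Y) {
  W <- 1 / (1 + as.matrix(dist(Y))^2); diag(W) <- 0
  Q <- W / sum(W)
  sum(P[P > 0] * log(P[P > 0] / Q[P > 0]))
}
grad1 <- function(Y) {                         # analytic gradient w.r.t. y_1
  W <- 1 / (1 + as.matrix(dist(Y))^2); diag(W) <- 0
  Q <- W / sum(W)
  4 * colSums((P[1, ] - Q[1, ]) * W[1, ] * sweep(-Y, 2, -Y[1, ]))
}
eps <- 1e-6
num <- sapply(1:2, function(k) {               # numerical gradient w.r.t. y_1
  Yp <- Y; Yp[1, k] <- Yp[1, k] + eps
  (kl_cost(Yp) - kl_cost(Y)) / eps
})
rbind(analytic = grad1(Y), numeric = num)      # the two rows agree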
53,648
What does it mean to have a probability as random variable?
A probability that's randomly generated is a perfectly legitimate thing to have in a model. Such a model is an example of a hierarchical model: a model in which some parameters are themselves treated as random variables. Hierarchical models come up most frequently in Bayesian statistics, but they see a lot of use in frequentist statistics, too, especially in the form of so-called random effects, which are regression coefficients that are modeled as random variables. In your example, the marginal probability of picking a white ball at any given time is the mean of $p_w$. In Bayesian terms, if the specified distribution of $p_w$ is its prior, then this marginal probability is the prior mean of $p_w$; once some draws have been observed, the predictive probability for the next draw becomes the posterior mean of $p_w$.
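A tiny simulation sketch (my addition; the Beta(2, 5) distribution for $p_w$ is an arbitrary choice):

set.seed(1)
n <- 1e6
pw <- rbeta(n, 2, 5)           # a fresh random probability for each trial
white <- rbinom(n, 1, pw)      # pick a ball given that probability
mean(white)                    # close to E[p_w] = 2/7, about 0.286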
53,649
What does it mean to have a probability as random variable?
Probabilities as random variables come up all the time in a wide variety of contexts. For one example, imagine you're trying to model the probability of making a claim on a third-party property damage policy in car insurance (insuring against things like crashing into someone else's possessions: house, car, etc.). Individuals will each have a different probability of generating a claim (affected by how good a driver they are, how much they drive, the condition of the car they have, etc.), so if you take a random policy, it corresponds to taking a random member of the population of insureds -- the probability of a claim on a one-year insurance policy is itself a random variable from some distribution. (I'm ignoring things like people driving other people's cars here. Still, it conveys the idea.)
53,650
Conditional probability of correlated gaussians
I'm going to switch your variables in the statement so that the discussion uses them in a more conventional sense of studying $Y$ conditional on the values of $X$. So, the objective is to show that $$\Pr(Y\in J\mid X\in I) \ge \Pr(Y\in J).\tag{1}$$ Think of this in terms of regression: We will exploit the fact that the distribution of $Y$ conditional on $X=x$ is a standard distribution $F$ (in this case Normal with mean zero and some given variance $\sigma^2$) whose location has been shifted to $f(x)$ for some function $f$ (the regression function). By letting $J-f(x)$ stand for all values $y-f(x)$ where $y\in J$, this means $$\Pr(Y\in J\mid X=x) = {\Pr}(Y-f(x)\in J-f(x)\mid X=x)=\int_{J-f(x)} \mathrm{d}F(y).\tag{2}$$ Let's suppose that intervals centered around $0$ are maximum-probability intervals for $F$: among all intervals of the same width, these have the largest probability. That's clearly the case for symmetric unimodal distributions like the Normal. This can be written $$\int_J \mathrm{d}F(y) \ge \int_{J-a} \mathrm{d}F(y)\tag{3}$$ for any number $a$. Assume $\Pr(X\in I) \gt 0$. (The other case is trivial to prove.) This allows us to write $$\Pr(Y\in J\mid X\in I) = \frac{\Pr(Y\in J\text{ and } X\in I)}{\Pr(X\in I)}.\tag{4}$$ Now, letting $G$ stand for the distribution of $X$, we can compare the two sides of $(1)$ by means of $(2)$, applying $(3)$ with $a=f(x)$, and simplifying the resulting double integral: $$\eqalign{ \Pr(Y\in J\text{ and } X\in I) &= \int_I \Pr(Y\in J\mid X=x) \mathrm{d}G(x) \\ &=\int_I\int_{J-f(x)}\mathrm{d}F(y)\ \mathrm{d}G(x) \\ &\le \int_I\int_J \mathrm{d}F(y)\mathrm{d}G(x) \\ &= \int_J \mathrm{d}F(y) \int_I\mathrm{d}G(x) \\ &=\Pr(Y\in J)\Pr(X\in I). }$$ In light of $(4)$, dividing both sides by $\Pr(X\in I)$ produces $(1)$, QED.
53,651
Conditional probability of correlated gaussians
You want to prove that for any $x^\star \geq 0$ and $y^\star \geq 0$ we have $$ \text{Pr}\{ |X| \leq x^\star,\, |Y| \leq y^\star \} \geq \text{Pr}\{ |X| \leq x^\star\} \text{Pr}\{ |Y| \leq y^\star \} $$ which is a famous result known as Sidak inequality. It holds for an arbitrary centered elliptically contoured bivariate distribution, including the Gaussian. For a quite general formulation, see this paper by das Gupta et al. where more useful results are given.
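A quick numerical check of this inequality (my addition; the correlation and thresholds are arbitrary choices, and the mvtnorm package provides the bivariate normal probability):

library(mvtnorm)
rho <- 0.6
S <- matrix(c(1, rho, rho, 1), 2, 2)
xs <- 1; ys <- 0.5
joint <- pmvnorm(lower = c(-xs, -ys), upper = c(xs, ys), sigma = S)
marg  <- (2 * pnorm(xs) - 1) * (2 * pnorm(ys) - 1)
c(joint = joint, product = marg)   # the joint probability exceeds the product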
53,652
Conditional probability of correlated gaussians
Your desired result is a special case of the recently-proven Gaussian correlation inequality. This may seem like overkill (and as the other answers demonstrate, it is), but the proof of the more general result is relatively short and available on the arXiv.
53,653
Calculating Confidence Intervals for Cross Validated Binary Classifiers
Summary: Whatever you do in order to calculate confidence intervals based on repeated CV, you need to take into account that there are several different sources of uncertainty. Long version: Let me add my 2 ct with regard to repeated cross validation: Repeated cross validation allows you to separate 2 sources of variance in the test results: variance due to model instability, i.e. variance in the predictions for the same case by different surrogate models (i.e. exchange a few training cases in the surrogate models' training sets), and variance due to the finite (limited) number of independent test cases. Now typically cross validation assumes that the models are stable, i.e. variance 1 is negligible. This assumption you can easily check. If you find it non-negligible, you'd typically go back to training and try to stabilize your models before doing anything else. Variance 2 depends heavily on the total number of independent cases = the total number of independent cases tested in each run of the cross validation (and is usually much worse if you insist on classification rather than staying with metric scores). I'm pointing this out because repetitions of the cross validation can help estimate and reduce variance 1, but will not mitigate variance 2 - and under the standard assumptions for cross validation, variance 2 should be dominating. I suspect that this is the underlying cause for VanWinckelen's finding that "Repeated cross-validation should not be assumed to give much more precise estimates of a model's predictive accuracy". If, instead of characterizing the model you get using the data at hand, you try to find out whether one or the other algorithm would be better for similar applications, you have more unknown sources of uncertainty, see Bengio and Grandvalet: No unbiased estimator of the variance of k-fold cross-validation. Disclaimer: I cannot say much on the area under those curves as for my applications the curves are often skewed and I rather need to take into account pairs of figures of merit such as sens and spec or PPV and NPV (or scoring rules that are analogous to them).
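If it helps to see the two variance sources side by side, here is a minimal sketch with scikit-learn (the data set, model, and sample sizes are placeholders, not anything from the question):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=200, random_state=0)

# repeated 5-fold CV: each repetition reshuffles the splits
accs = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X, y,
                    cv=StratifiedKFold(5, shuffle=True, random_state=rep)).mean()
    for rep in range(20)
])

p = accs.mean()
print("sd across repetitions (variance 1):", accs.std(ddof=1))
print("binomial sd for 200 test cases (variance 2):", np.sqrt(p * (1 - p) / 200))

Typically the second number dominates, which is the point made above: repeating the CV shrinks the first term but does nothing about the second.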
Calculating Confidence Intervals for Cross Validated Binary Classifiers
Summary: Whatever you do in order to calculate confidence intervals based on repeated CV, you need to take into account that there are several different sources of uncertainty. Long version: Let me a
Calculating Confidence Intervals for Cross Validated Binary Classifiers Summary: Whatever you do in order to calculate confidence intervals based on repeated CV, you need to take into account that there are several different sources of uncertainty. Long version: Let me add my 2 ct with regard to repeated cross validation: Repeated cross validation allows you to separate 2 sources of variance uncertainty in the test results. variance due to model instability, i.e. variance in the predictions for the same case by different surrogate models (i.e. exchange a few training cases in the surrogate models' training sets) and variance due to the finite (limited) number of independent test cases Now typically cross validation assumes that the models are stable, i.e. variance 1. is negligible. This assumption you can easily check. If you find it non-negligible, you'd typically go back to training and try to stabilize your models before doing anything else. Variance 2 depends heavily on the total number of independent cases = the total number of independent cases tested in each run of the cross validation (and is usually much worse if you insist on classification rather than staying with metric scores). I'm pointing this out because repetitions of the cross validation can help estimating and reducing variance 1, but will not mitigate variance 2 - but under the standard assumptions for cross validation, variance 2 should be dominating. I suspect that this the underlying cause for VanWinckelen's finding that "Repeated cross-validation should not be assumed to give much more precise estimates of a model’s predic- tive accuracy". If, instead of characterizing the model you get using the data at hand, you try to find out whether one or the other algorithm would be better for similar applications, you have more unknown sources of uncertainty, see Bengio and Grandvalet: No unbiased estimator of the variance of k-fold cross-validation. Disclaimer: I cannot say much on the area under those curves as for my applications the curves are often skewed and I rather need to take into account pairs of figures of merit such as sens and spec or PPV and NPV (or scoring rules that are analogous to them).
Calculating Confidence Intervals for Cross Validated Binary Classifiers Summary: Whatever you do in order to calculate confidence intervals based on repeated CV, you need to take into account that there are several different sources of uncertainty. Long version: Let me a
53,654
Calculating Confidence Intervals for Cross Validated Binary Classifiers
Since you are computing areas, note that the AUROC is just the concordance probability $c$ between predicted risks and observed binary outcomes, so you do not need to engage in classification at all: your outputs can simply be predicted risks. There are methods for getting confidence intervals for $c$-indexes by computing a standard error for $c$. But $c$ (AUROC) is not sensitive enough for there to be good statistical power for comparing two models. For that, use the gold-standard likelihood ratio $\chi^2$ test (if the models are nested) or compute the difference in two proper probability accuracy scores. You might also look into AIC for informal comparisons of non-nested models. Formal $\chi^2$ or $F$ tests are done using pre-specified models and do not involve any resampling such as cross-validation. When more complex situations are present and you use a resampling procedure to unbiasedly estimate an accuracy score, things are more complicated and we don't have all the theory worked out yet. One option is always available - the double bootstrap - though it is computationally intensive.
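For the confidence-interval part, here is a sketch of $c$ with the Hanley-McNeil standard error (a minimal hand-rolled implementation, not taken from any particular package; `risk` and `outcome` are assumed to be NumPy arrays):

import numpy as np

def c_index_ci(risk, outcome):
    # concordance (AUROC) with a 95% Hanley-McNeil confidence interval
    pos, neg = risk[outcome == 1], risk[outcome == 0]
    n1, n2 = len(pos), len(neg)
    diff = pos[:, None] - neg[None, :]
    c = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)  # ties count one half
    q1, q2 = c / (2 - c), 2 * c**2 / (1 + c)
    se = np.sqrt((c*(1 - c) + (n1 - 1)*(q1 - c**2) + (n2 - 1)*(q2 - c**2)) / (n1 * n2))
    return c, (c - 1.96 * se, c + 1.96 * se)

y = np.array([0, 0, 1, 1, 1, 0])
r = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])
print(c_index_ci(r, y))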
Calculating Confidence Intervals for Cross Validated Binary Classifiers
Since you are computing areas, note that the AUROC is just the concordance probability $c$ between predicted risks and observed binary outcomes. And so you are not needing to engage in classification
Calculating Confidence Intervals for Cross Validated Binary Classifiers Since you are computing areas, note that the AUROC is just the concordance probability $c$ between predicted risks and observed binary outcomes. And so you are not needing to engage in classification. Instead your outcomes can be just predicted risks. There are methods for getting confidence intervals for $c$-indexes by computing a standard error for $c$. But $c$ (AUROC) is not sensitive enough for there to be good statistical power for comparing two models. For that use the gold standard likelihood ratio $\chi^2$ test (if the models are nested) or compute the difference in two proper probability accuracy scores. You might also look into AIC for informal comparisons of non-nested models. Formal $\chi^2$ or $F$ tests are done using pre-specified models and do not involve any resampling such as cross-validation. When more complex situations are present and you use a resampling procedure to unbiasedly estimate an accuracy score, things are more complicated and we don't have all the theory worked out yet. An option is always there - to use the double bootstrap, which is computationally intensive.
Calculating Confidence Intervals for Cross Validated Binary Classifiers Since you are computing areas, note that the AUROC is just the concordance probability $c$ between predicted risks and observed binary outcomes. And so you are not needing to engage in classification
53,655
What is the rationale behind LARS-OLS hybrid, i.e. using OLS estimate on the variables chosen by LARS?
The coefficient estimates from LARS will be shrunk (biased) towards zero, and the intensity of shrinkage might be suboptimal (too harsh) for forecasting. However, some shrinkage should be good, as there is a trade-off between bias and variance. For example, if lasso happens to have selected the relevant regressors and only them (which of course is never guaranteed in practice), you could show that a positive (thus nonzero) amount of ridge-type shrinkage is optimal* -- just as you can show it in a basic linear model with no variable selection (see e.g. the answer by Andrew M in the thread "Under exactly what conditions is ridge regression able to provide an improvement over ordinary least squares regression?"). (I do not know if you can show this for LARS-type shrinkage, but intuitively I would not expect zero shrinkage to always be optimal.) This is what motivates (1) the relaxed lasso (Meinshausen, 2007), where there are two shrinkage parameters: a harsher one for variable selection and a softer one for the coefficients of the retained variables; or (2) LARS-OLS, where there is no shrinkage on the coefficients of the retained variables. *Optimal in the sense that it minimizes the mean squared error of the estimator. Meinshausen, Nicolai. "Relaxed lasso." Computational Statistics & Data Analysis 52.1 (2007): 374-393.
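A minimal sketch of the LARS-OLS hybrid in scikit-learn (synthetic data; the problem sizes and tuning choices are placeholders):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoLarsCV, LinearRegression

X, y = make_regression(n_samples=100, n_features=50, n_informative=5,
                       noise=5.0, random_state=0)

lars = LassoLarsCV(cv=5).fit(X, y)              # step 1: LARS/lasso selects variables
support = np.flatnonzero(lars.coef_)            # indices of retained variables
ols = LinearRegression().fit(X[:, support], y)  # step 2: unshrunk OLS refit
print(len(support), ols.coef_)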
What is the rationale behind LARS-OLS hybrid, i.e. using OLS estimate on the variables chosen by LAR
The coefficient estimates from LARS will be shrunk (biased) towards zero, and the intensity of shrinkage might be suboptimal (too harsh) for forecasting. However, some shrinkage should be good, as th
What is the rationale behind LARS-OLS hybrid, i.e. using OLS estimate on the variables chosen by LARS? The coefficient estimates from LARS will be shrunk (biased) towards zero, and the intensity of shrinkage might be suboptimal (too harsh) for forecasting. However, some shrinkage should be good, as there is a trade-off between bias and variance. For example, if lasso happens to have selected the relevant regressors and only them (which of course is never guaranteed in practice), you could show that a positive (thus nonzero) amount of ridge-type shrinkage is optimal* -- just as you can show it in a basic linear model with no variable selection (see e.g. the answer by Andrew M in the thread "Under exactly what conditions is ridge regression able to provide an improvement over ordinary least squares regression?"). (I do not know if you can show this for LARS-type shrinkage, but intuitively I would not expect zero shrinkage to always be optimal.) This is what motivates (1) relaxed lasso (Meinshausen, 2007) where there are two shrinkage parameters: a harsher one for variable selection and a softer one of the coefficients of the retained variables); or (2) LARS-OLS where there is no shrinkage on the coefficients of the retained variables. *Optimal in the sense that it minimizes the mean squared error of the estimator Meinshausen, Nicolai. "Relaxed lasso." Computational Statistics & Data Analysis 52.1 (2007): 374-393.
What is the rationale behind LARS-OLS hybrid, i.e. using OLS estimate on the variables chosen by LAR The coefficient estimates from LARS will be shrunk (biased) towards zero, and the intensity of shrinkage might be suboptimal (too harsh) for forecasting. However, some shrinkage should be good, as th
53,656
Concentration inequalities for gaussian variables
You've got a random variable $\bar X_n \sim \mathcal N(\mu, \sigma^2/n)$ and you're looking to quantify the probability that $\bar X_n$ is a certain distance from its mean. This means you'll want to make use of a concentration inequality. I'm going to prove a result that is very similar to your question, but with some modifications so that it is actually true. I will show that $$ \mathbb P\left( \big\vert \bar X_n - \mu \big\vert > \sigma \sqrt{\frac{-4\log(\delta / \sqrt 2)}{n}} \right) < \delta $$ for $0 < \delta < 1$. We know that $\bar X_n - \mu \sim \mathcal N(0, \sigma^2/n)$, so $\frac{n}{\sigma^2}\big\vert\bar X_n - \mu \big\vert^2 \sim \chi^2_1$. If $Y \sim \chi^2_\nu$ then the MGF of $Y$ is $$ M_Y(t) = E(e^{tY}) = (1-2t)^{-\nu/2}. $$ Now by Chernoff's bound we have $$ \mathbb P\left( \frac{n}{\sigma^2} \big\vert \bar X_n - \mu \big\vert^2 > \log \left(4 \delta^{-4}\right) \right) < \frac{(1-2t)^{-1/2}}{(4 \delta^{-4})^t} = \delta $$ if we let $t = \frac 14$.
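A quick simulation check of this bound (the parameter values are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, delta = 0.0, 2.0, 50, 0.05
radius = sigma * np.sqrt(-4 * np.log(delta / np.sqrt(2)) / n)

xbar = rng.normal(mu, sigma, size=(200_000, n)).mean(axis=1)
print(np.mean(np.abs(xbar - mu) > radius), "should be <", delta)

The empirical exceedance probability comes out far below delta, as expected: Chernoff-type bounds are valid but loose.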
Concentration inequalities for gaussian variables
You've got a random variable $\bar X_n \sim \mathcal N(\mu, \sigma^2/n)$ and you're looking to quantify the probability that $\bar X_n$ is a certain distance from its mean. This means you'll want to m
Concentration inequalities for gaussian variables You've got a random variable $\bar X_n \sim \mathcal N(\mu, \sigma^2/n)$ and you're looking to quantify the probability that $\bar X_n$ is a certain distance from its mean. This means you'll want to make use of a concentration inequality. I'm going to prove a result that is very similar to your question but with some modifications so that it is actually true. I will show that $$ \mathbb P\left( \big\vert \bar X_n - \mu \big\vert > \sigma \sqrt{\frac{-4\log(\delta / \sqrt 2)}{n}} \right) < \delta $$ for $0 < \delta < 1$. We know that $\bar X_n - \mu \sim \mathcal N(0, \sigma^2/n)$ so $\frac{n}{\sigma^2}|\bar X_n - \mu | ^2 \sim \chi^2_1$ If $Y \sim \chi^2_\nu$ then the MGF of $Y$ is $$ M_Y(t) = E(e^{tY}) = (1-2t)^{-\nu/2}. $$ Now by Chernoff's bound we have $$ \mathbb P\left( \frac{n}{\sigma^2} \big\vert \bar X_n - \mu \big\vert^2 > \log \left(4 \delta^{-4}\right) \right) < \frac{(1-2t)^{-1/2}}{(4 \delta^{-4})^t} = \delta $$ if we let $t = \frac 14$.
Concentration inequalities for gaussian variables You've got a random variable $\bar X_n \sim \mathcal N(\mu, \sigma^2/n)$ and you're looking to quantify the probability that $\bar X_n$ is a certain distance from its mean. This means you'll want to m
53,657
Concentration inequalities for gaussian variables
Let $X$ denote a standard normal random variable. As the comments and Chaconne's answer have noted, the question here is to bound $P\{|X| > x\} = 2Q(x)$ where $Q(x) = 1 - \Phi(x)$ is the complementary normal distribution function. Now, a well-known bound is $$ Q(x) < \frac 12e^{-x^2/2} ~~ \text{for } x > 0 \tag{1}$$ which immediately gives $$P\{|X| > x\} < e^{-x^2/2}, \tag{2}$$ or, writing $\delta$ for the right side of $(2)$, $$P\left\{|X| > \sqrt{-2\ln(\delta)}\right\} < \delta \tag{3}$$ Note that this is different from @Chaconne's result $\displaystyle P\left\{|X| > \sqrt{-4 \ln (\delta/\sqrt{2})}\right\} < \delta$. Proof of $(1)$: Let $Y \sim N(0,1)$ be independent of $X$. Then, for $x>0$, $$P\{|X| > x, |Y| > x\} = 4[Q(x)]^2 < P\{X^2+Y^2 > 2x^2\}.$$ But $X^2+Y^2$ is an exponential random variable with parameter $\frac 12$ and so $$P\{X^2+Y^2 > 2x^2\} = e^{-\frac 12\cdot 2x^2} = e^{-x^2}\implies Q(x) < \frac 12e^{-x^2/2}.$$ Chernoff bound on $Q(x)$: For every choice of $\lambda > 0$, $\mathbf 1_{\{t\colon t > x \}} < e^{\lambda (t-x)}$ and so $$P\{X > x\} = E[\mathbf 1_{\{X\colon X > x \}}] < E[e^{\lambda (X-x)}] = e^{-\lambda x}E[e^{\lambda X}] = e^{-\lambda x}\cdot e^{\lambda^2/2}$$ leading to $$P\{X > x\} \leq \min_{\lambda >0}\exp\left(\frac{\lambda^2}{2} - x \lambda\right) = e^{-x^2/2}$$ since the minimum occurs at $\lambda = x$. This gives $$P\left\{|X| > \sqrt{-2\ln (\delta/2)}\right\} < \delta. \tag{4}$$ This is weaker than $(3)$ but it too is different from Chaconne's result.
Concentration inequalities for gaussian variables
Let $X$ denote a standard normal random variable. As the comments and Chaconne's answer have noted, the question here is to bound $P\{|X| > x\} = 2Q(x)$ where $Q(x) = 1 - \Phi(x)$ is the complementary
Concentration inequalities for gaussian variables Let $X$ denote a standard normal random variable. As the comments and Chaconne's answer have noted, the question here is to bound $P\{|X| > x\} = 2Q(x)$ where $Q(x) = 1 - \Phi(x)$ is the complementary normal distribution function. Now, a well-known bound is $$ Q(x) < \frac 12e^{-x^2/2} ~~ \text{for } x > 0 \tag{1}$$ which immediately gives $$P\{|X| > x\} < e^{-x^2/2}, \tag{2}$$ or, writing $\delta$ for the right side of $(2)$, $$P\left\{|X| > \sqrt{-2\ln(\delta)}\right\} < \delta \tag{3}$$ Note that this is different from @Chaconne's result $\displaystyle P\left\{|X| > \sqrt{-4 \ln (\delta/\sqrt{2})}\right\} < \delta$. Proof of $(1)$: Let $Y \sim N(0,1)$ be independent of $X$. Then, for $x>0$, $$P\{|X| > x, |Y| > x\} = 4[Q(x)]^2 < P\{X^2+Y^2 > 2x^2\}.$$ But $X^2+Y^2$ is an exponential random variable with parameter $\frac 12$ and so $$P\{X^2+Y^2 > 2x^2\} = e^{-\frac 12\cdot 2x^2} = e^{-x^2}\implies Q(x) < \frac 12e^{-x^2/2}.$$ Chernoff bound on $Q(x)$: For every choice of $\lambda > 0$, $\mathbf 1_{\{t\colon t > x \}} < e^{\lambda (t-x)}$ and so $$P\{X > x\} = E[\mathbf 1_{\{X\colon X > x \}}] < E[e^{\lambda (X-x)}] = e^{-\lambda x}E[e^{\lambda X}] = e^{-\lambda x}\cdot e^{\lambda^2/2}$$ leading to $$P\{X > x\} \leq \min_{\lambda >0}\exp\left(\frac{\lambda^2}{2} - x \lambda\right) = e^{-x^2/2}$$ since the minimum occurs at $\lambda = x$. This gives $$P\left\{|X| > \sqrt{-2\ln (\delta/2)}\right\} < \delta. \tag{4}$$ This is weaker than $(3)$ but it too is different from Chaconne's result.
Concentration inequalities for gaussian variables Let $X$ denote a standard normal random variable. As the comments and Chaconne's answer have noted, the question here is to bound $P\{|X| > x\} = 2Q(x)$ where $Q(x) = 1 - \Phi(x)$ is the complementary
53,658
Time to event with no censoring - use survival or normal regression?
Survival methods are about modeling time-to-event data. There is no need for there to be censoring! The methods will work, and be more effective, without censoring. Time-to-event data will probably not be well fitted by normal distribution models, so usual linear regression is not indicated. I say you should go with survival methods.
Time to event with no censoring - use survival or normal regression?
Survival methods are about modeling some time to event data. There is no need for there to be censoring! the methods will work and be more effective without censoring. Time to event data will probab
Time to event with no censoring - use survival or normal regression? Survival methods are about modeling some time to event data. There is no need for there to be censoring! the methods will work and be more effective without censoring. Time to event data will probably not be well fitted by normal distribution models, so usual linear regression is not indicated. I say you should go with survival methods.
Time to event with no censoring - use survival or normal regression? Survival methods are about modeling some time to event data. There is no need for there to be censoring! the methods will work and be more effective without censoring. Time to event data will probab
53,659
Reference request for robust statistics
I would not start with Huber's book, even with its 2009 revision, unless you possess a strong mathematical background, i.e. measure theory and topology. The book by Maronna and Yohai entitled Robust Statistics: Theory and Methods is much more accessible for beginners and covers both univariate and multivariate theory, along with the computational aspects of the estimators (Chapter 9). So it is more modern in that respect. Alternatively, if you find their book too easy but Huber's book still difficult, there exists an intermediate alternative called Robust Statistics: The Approach Based on Influence Functions by Frank Hampel and co-authors (Hampel was the person who invented the influence function). The mathematical requirements are more modest and there is substantial motivation for the estimators. All these books may be found in digital libraries, but if you still have trouble obtaining pdf versions, you might want to try these notes, which are a fusion of all three books. You might also be pleased to know that all robust estimators exist in up-to-date R packages, e.g. robustbase.
Reference request for robust statistics
I would not start with Huber's book, even with its 2009 revision, unless you possess strong mathematical background, i.e. measure theory and topology. The book by Maronna and Yohai entitled Robust Sta
Reference request for robust statistics I would not start with Huber's book, even with its 2009 revision, unless you possess strong mathematical background, i.e. measure theory and topology. The book by Maronna and Yohai entitled Robust Statistics: Theory and Methods is much more accessible for beginners and covers both univariate and multivariate theory, along with the computational aspects of the estimators (Chapter 9). So it is more modern in that respect. Alternatively, if you find their book too easy but Huber's book still difficult there exists an intermediate alternative called Robust Statistics: The Approach Based on Influence Functions from Frank Hampel and co-authors (Hampel was the person who invented the influence function). The mathematical requirements are more modest and there is substantial motivation for the estimators. All these books may be found in digital libraries but if you still have trouble obtaining pdf versions, you might want to try these notes, which are a fusion of all three books. You might also be pleased to know that all robust estimators exist in up-to-date R-packages, e.g. robustbase.
Reference request for robust statistics I would not start with Huber's book, even with its 2009 revision, unless you possess strong mathematical background, i.e. measure theory and topology. The book by Maronna and Yohai entitled Robust Sta
53,660
Reference request for robust statistics
Maronna's book from 2006, titled "Robust Statistics: Theory and Methods", is a very good introduction to the topic that covers roughly the same ground as Huber's. Depending on what aspect you want to touch on, I would focus on articles written by both Huber and Maronna to better help you understand how the field was developed.
Reference request for robust statistics
Maronna's book from 2006, titled "Robust Statistics: Theory and Methods" is a very good introduction to the topic that covers roughly the same as Huber's. Depending on what aspect you want to touch on
Reference request for robust statistics Maronna's book from 2006, titled "Robust Statistics: Theory and Methods" is a very good introduction to the topic that covers roughly the same as Huber's. Depending on what aspect you want to touch on, I would focus on articles written by both Huber and Maronna to better help you understand how the field was developed.
Reference request for robust statistics Maronna's book from 2006, titled "Robust Statistics: Theory and Methods" is a very good introduction to the topic that covers roughly the same as Huber's. Depending on what aspect you want to touch on
53,661
How exactly does curse of dimensionality curse?
The idea of nearest neighbours is that, due to continuity, other points close to your point of interest have values close to the value of your point of interest. If you have to spread very far out to find the 100 (for example) closest points, then these points are not very close - not really neighbours - anymore, and thus there's no reason their values should be relevant at all for predicting the value at your point of interest!
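A small numerical illustration of this effect, using uniform points in the unit cube (the sample sizes and dimensions are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    pts = rng.uniform(size=(100, d))   # 100 candidate neighbours
    query = rng.uniform(size=d)
    dists = np.linalg.norm(pts - query, axis=1)
    print(d, dists.min() / dists.max())  # ratio approaches 1 as d grows

As the dimension grows, the nearest and farthest of the 100 points become nearly equidistant from the query, so "nearest" stops meaning "close".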
How exactly does curse of dimensionality curse?
The idea of nearest neighbours is that, due to continuity, other points close to your point of interest have values close to the value of your point of interest. If you have to spread very far out to
How exactly does curse of dimensionality curse? The idea of nearest neighbours is that, due to continuity, other points close to your point of interest have values close to the value of your point of interest. If you have to spread very far out to find the 100 (for example) closest points, well then these points are not very close or neighbours anymore, and thus there's no reason that their values are relevant at all to help you predict the value at your point of interest!
How exactly does curse of dimensionality curse? The idea of nearest neighbours is that, due to continuity, other points close to your point of interest have values close to the value of your point of interest. If you have to spread very far out to
53,662
How exactly does curse of dimensionality curse?
The curse is that a lot of things that work in low dimension don't scale well (i.e. grow/shrink too fast compared to linear) with dimension. E.g. a measure of data quality can be more than 2 times worse when you double the number of dimensions. The curse comes in many forms: the range of the data, the density of the data, the distribution, etc. Take the example from the ESL book. Say in each dimension the possible values are in $[0,1]$ and your data covers $[0.1,0.9]$, i.e. the range of the data is $r=0.8$. That's a pretty good range if your data is just 1D. The generalization of range to dimension $p$ is $0.8^p$. As $p$ increases, the range of your data shrinks really fast. In 2D, $p=2$, even though in each dimension your data spans 80% of the possible range, the 'volume' it captures is only 64%. In 10D, $p=10$, your data only cover about 10% of the space! And if you want to cover 80% of the space, in each dimension your data would need to span $0.8^{\frac{1}{10}} \approx 0.98$ of the possible range, i.e. about 98% - that's very expensive.
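The arithmetic as a short check (pure Python, nothing assumed beyond the numbers above):

r = 0.8
for p in (1, 2, 10):
    # fraction of the unit hypercube covered, and per-axis span needed for 80% volume
    print(p, r**p, 0.8 ** (1 / p))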
How exactly does curse of dimensionality curse?
The curse is that a lot of of things that work in lower dimension don't scale well (i.e. grow/shrink too fast compare to linear) with dimension. E.g. a measure of data quality is more than 2 times wor
How exactly does curse of dimensionality curse? The curse is that a lot of of things that work in lower dimension don't scale well (i.e. grow/shrink too fast compare to linear) with dimension. E.g. a measure of data quality is more than 2 times worse when you double the number of dimension. The curse comes in many form: the range of the data, the density of the data, the distribution, etc. Take the example from ESL book. Says in every 1 dimension the possible values are in $[0,1]$, your data covers $[0.1,0.9]$, i.e. the range of data $r=0.8$. That's pretty good of a range if your data is just 1D. The generalization of range to dimension $p$ is $0.8^p$. As $p$ increase, the range of your data shrinks really fast. In 2D, $p=2$, even though in each dimension, your data spans 80% of possible range, the 'volume' range that it capture is only 64%. In 10D, $p=10$, your data only cover 10% of the space! If you want to cover 80% of the space, in each dimension, your data need to span $0.8^{\frac{1}{10}} \approx 0.98$% of possible range - that's very expensive.
How exactly does curse of dimensionality curse? The curse is that a lot of of things that work in lower dimension don't scale well (i.e. grow/shrink too fast compare to linear) with dimension. E.g. a measure of data quality is more than 2 times wor
53,663
How exactly does curse of dimensionality curse?
It curses mainly in a computational sense. However, if you expand the dimensionality from small to large to infinite, asymptotically any point that's not this point becomes infinitely far away. If you're at point O, and are looking at points A and B, then both $OA=\infty$ and $OB=\infty$, and you can't distinguish which one is farther.
How exactly does curse of dimensionality curse?
It curses mainly in computational sense. However, if you expand the dimensionality from small to large to infinite, asymptotically any point that's not this point becomes infinitely far away. If you'
How exactly does curse of dimensionality curse? It curses mainly in computational sense. However, if you expand the dimensionality from small to large to infinite, asymptotically any point that's not this point becomes infinitely far away. If you're at point O, and are looking at points A and B, then both $OA=\infty$ and $OB=\infty$, and you can't distinguish which one is farther.
How exactly does curse of dimensionality curse? It curses mainly in computational sense. However, if you expand the dimensionality from small to large to infinite, asymptotically any point that's not this point becomes infinitely far away. If you'
53,664
What is the connection between binomial and poisson distribution? [duplicate]
They are strongly related to each other. For $n \rightarrow \infty,\ p \rightarrow 0$ such that $np \rightarrow \lambda$ we have $$P_{Bin(n,p)}(k) = \binom{n}{k}p^k(1-p)^{n-k} \approx \frac{\lambda^k}{k!}e^{-\lambda} = P_{Poiss(\lambda)}(k)$$ So the Poisson distribution is a limiting binomial distribution, with $\lambda$ being the average rate (that is, $np$) of a certain event occurring.
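A quick numerical check of this limit with SciPy (the values of n and p are arbitrary, chosen so that np = 5):

import numpy as np
from scipy.stats import binom, poisson

n, p = 1000, 0.005
k = np.arange(16)
print(np.max(np.abs(binom.pmf(k, n, p) - poisson.pmf(k, n * p))))

The maximum absolute difference between the two pmfs is already small at n = 1000 and shrinks further as n grows with np held fixed.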
What is the connection between binomial and poisson distribution? [duplicate]
They are strongly related to each other. For $n \rightarrow \infty,\ p \rightarrow 0$ such that $np \rightarrow \lambda\ $we have $$P_{Bin(n,p)}(k) = \binom{n}{k}p^k(1-p)^{n-k} \eqsim \frac{\lambda^k
What is the connection between binomial and poisson distribution? [duplicate] They are strongly related to each other. For $n \rightarrow \infty,\ p \rightarrow 0$ such that $np \rightarrow \lambda\ $we have $$P_{Bin(n,p)}(k) = \binom{n}{k}p^k(1-p)^{n-k} \eqsim \frac{\lambda^k}{k!}\exp^{-\lambda} = P_{Poiss(\lambda)}(k)$$ So Poisson distribution is a limiting binomial distribution with $\lambda$ being the average rate (that is $np$) of certain event occuring
What is the connection between binomial and poisson distribution? [duplicate] They are strongly related to each other. For $n \rightarrow \infty,\ p \rightarrow 0$ such that $np \rightarrow \lambda\ $we have $$P_{Bin(n,p)}(k) = \binom{n}{k}p^k(1-p)^{n-k} \eqsim \frac{\lambda^k
53,665
What is the connection between binomial and poisson distribution? [duplicate]
One can get the Poisson from the Binomial by taking a limit, and the Binomial from the Poisson by conditioning. More precisely, we have the following. If $X\sim\text{Pois}(\lambda_1)$, $Y\sim\text{Pois}(\lambda_2)$ are independent random variables, then the distribution of $X$ given $X+Y=n$ is $X_{\text{cond}}\sim\text{Bin}(n,\lambda_1/(\lambda_1+\lambda_2))$. If $X\sim\text{Bin}(n,p)$, and if $n\to\infty$, $p\to 0$ such that $np\to\lambda$, then $\mathbb{P}(X=k)\to\mathrm{e}^{-\lambda}\dfrac{\lambda^k}{k!}$.
What is the connection between binomial and poisson distribution? [duplicate]
One can get the Poisson from Binomial by taking limit, and the Binomial from Poisson by conditioning. More precisely, we have the following. If $X\sim\text{Pois}(\lambda_1)$, $Y\sim\text{Pois}(\lambd
What is the connection between binomial and poisson distribution? [duplicate] One can get the Poisson from Binomial by taking limit, and the Binomial from Poisson by conditioning. More precisely, we have the following. If $X\sim\text{Pois}(\lambda_1)$, $Y\sim\text{Pois}(\lambda_2)$ are independent random variables, then the distribution of $X$ given $X+Y=n$ is $X_{\text{cond}}\sim\text{Bin}(n,\lambda_1/(\lambda_1+\lambda_2))$ If $X\sim\text{Bin}(n,p)$, and if $n\to\infty$, $p\to 0$ such that $np\to\lambda$, then $\mathbb{P}(X=k)\to\mathrm{e}^{-\lambda}\dfrac{\lambda^k}{k!}$
What is the connection between binomial and poisson distribution? [duplicate] One can get the Poisson from Binomial by taking limit, and the Binomial from Poisson by conditioning. More precisely, we have the following. If $X\sim\text{Pois}(\lambda_1)$, $Y\sim\text{Pois}(\lambd
53,666
What is the name of this type of vector product?
Your direct question: In the two-dimensional case, you're looking for the outer product. The Euclidean inner product is $x^T y$ while the outer product is $xy^T$. The outer product generates a matrix. In the general case, you're asking for a tensor product. What you're asking for is also the Kronecker product of the vectors. (See Rodrigo de Azevedo's answer.) Some programming comments: Be aware of row-major vs. column-major storage, that is, how multi-dimensional arrays are laid out. This is standard linear algebra, and I would recommend using linear algebra libraries if you're doing any medium to heavy lifting or if performance matters. Don't reinvent the wheel. Some discussion is here. BLAS/LAPACK are standard, stable, and fast. But calling BLAS/LAPACK functions is a pain.
What is the name of this type of vector product?
Your direction question: In the two-dimensional case, you're looking for the outer product. The Euclidean inner product is $x^T y$ while the outer product is $xy^T$. The outer product generates a mat
What is the name of this type of vector product? Your direction question: In the two-dimensional case, you're looking for the outer product. The Euclidean inner product is $x^T y$ while the outer product is $xy^T$. The outer product generates a matrix. In the general case, you're asking for a tensor product. What you're asking for is also the Kronecker product of the vectors. (See Rodrigo de Azevedo's answer.) Some programming comments: Be aware of row major or column major, that is, how to handle multi-dimensional arrays. This is standard linear algebra, and I would recommend using linear algebra libraries if you're doing any medium to heavy lifting or if performance matters. Don't reinvent the wheel. Some discussion is here. BLAS/LAPACK are standard, stable, and fast. But calling BLAS/LAPACK functions is a pain.
What is the name of this type of vector product? Your direction question: In the two-dimensional case, you're looking for the outer product. The Euclidean inner product is $x^T y$ while the outer product is $xy^T$. The outer product generates a mat
53,667
What is the name of this type of vector product?
Let $\mathrm x, \mathrm y \in \mathbb R^d$. The Kronecker product of these two $d$-dimensional column vectors is the following $d^2$-dimensional column vector $$\mathrm x \otimes \mathrm y = \begin{bmatrix} x_1 \mathrm y\\ x_2 \mathrm y\\ \vdots\\ x_d \mathrm y\\\end{bmatrix} = \begin{bmatrix} x_1 y_1\\ x_1 y_2\\ \vdots\\ x_1 y_d\\ x_2 y_1\\ x_2 y_2\\ \vdots\\ x_2 y_d\\ \vdots\\ x_d y_1\\ x_d y_2\\ \vdots\\ x_d y_d\end{bmatrix}$$
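In NumPy the constructions from both answers are np.outer and np.kron; for 1-D inputs the Kronecker product is exactly the flattened (row-major) outer product:

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

outer = np.outer(x, y)  # d x d matrix with entries x_i * y_j
kron = np.kron(x, y)    # length d^2 vector: the rows of the outer product stacked
assert np.allclose(outer.ravel(), kron)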
What is the name of this type of vector product?
Let $\mathrm x, \mathrm y \in \mathbb R^d$. The Kronecker product of these two $d$-dimensional column vectors is the following $d^2$-dimensional column vector $$\mathrm x \otimes \mathrm y = \begin{bm
What is the name of this type of vector product? Let $\mathrm x, \mathrm y \in \mathbb R^d$. The Kronecker product of these two $d$-dimensional column vectors is the following $d^2$-dimensional column vector $$\mathrm x \otimes \mathrm y = \begin{bmatrix} x_1 \mathrm y\\ x_2 \mathrm y\\ \vdots\\ x_d \mathrm y\\\end{bmatrix} = \begin{bmatrix} x_1 y_1\\ x_1 y_2\\ \vdots\\ x_1 y_d\\ x_2 y_1\\ x_2 y_2\\ \vdots\\ x_2 y_d\\ \vdots\\ x_d y_1\\ x_d y_2\\ \vdots\\ x_d y_d\end{bmatrix}$$
What is the name of this type of vector product? Let $\mathrm x, \mathrm y \in \mathbb R^d$. The Kronecker product of these two $d$-dimensional column vectors is the following $d^2$-dimensional column vector $$\mathrm x \otimes \mathrm y = \begin{bm
53,668
Inferring alleles distribution from the blood types distribution
The probability of the blood types can be defined in terms of the alleles: $$O = o^2$$ $$A=a^2+2oa$$ $$B=b^2+2ob$$ $$AB=2ab$$ These are 4 equations with 3 variables, and thus a solution is not guaranteed. Solving the first 3 equations we get: $$o=\sqrt{O}$$ $$a=\sqrt{A+O}-\sqrt{O}$$ $$b=\sqrt{B+O}-\sqrt{O}$$ We get: o = 0.6708203932499369, a = 0.25113405247935183, b = 0.07751108410485141. Thus $AB = 2ab = 0.03893134532663842 \approx 3.9\%$, which is pretty close to the number you described.
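The same computation as a short script; the input frequencies O = 45%, A = 40%, B = 11% are inferred here as the ones the answer plugged in, since they reproduce the numbers above exactly:

import numpy as np

O, A, B = 0.45, 0.40, 0.11
o = np.sqrt(O)
a = np.sqrt(A + O) - np.sqrt(O)
b = np.sqrt(B + O) - np.sqrt(O)
print(o, a, b)
print("predicted AB:", 2 * a * b)  # ~0.039, close to the observed 4%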
Inferring alleles distribution from the blood types distribution
The probability of the blood types can be defined in terms of the alleles: $$O = o^2$$ $$A=a^2+2oa$$ $$B=b^2+2ob$$ $$AB=2ab$$ These are 4 equations with 3 variables, and thus a solution is not guarant
Inferring alleles distribution from the blood types distribution The probability of the blood types can be defined in terms of the alleles: $$O = o^2$$ $$A=a^2+2oa$$ $$B=b^2+2ob$$ $$AB=2ab$$ These are 4 equations with 3 variables, and thus a solution is not guaranteed. Solving for the first 3 equations we get: $$o=\sqrt{O}$$ $$a=\sqrt{A+O}-\sqrt{O}$$ $$b=\sqrt{B+O}-\sqrt{O}$$ We get that: o=0.6708203932499369 a=0.25113405247935183 b=0.07751108410485141 Thus AB=2ab=0.03893134532663842=3.9% Which is pretty close to the number you described
Inferring alleles distribution from the blood types distribution The probability of the blood types can be defined in terms of the alleles: $$O = o^2$$ $$A=a^2+2oa$$ $$B=b^2+2ob$$ $$AB=2ab$$ These are 4 equations with 3 variables, and thus a solution is not guarant
53,669
Inferring alleles distribution from the blood types distribution
EDIT: Wrong logic here; disregard this answer and focus on the accepted one. Keeping the original for reference of what not to do. How about simply reversing the probability? It sounds simple, but it might be what you are looking for? If aa, oa and ao make A and there are 40% of A, then the simplest assumption is that there are 40/3 % of each allele pair, and so on. This gives, in % aa -> 13.3 oa -> 13.3 ao -> 13.3 oo -> 45 ob -> 3.7 bo -> 3.7 bb -> 3.7 ba -> 2 ab -> 2 Then, you sum the probability of the pairs that contain an "a", weighted by its frequency in the pair (*0.5 in ao, *1.0 in aa, for example). Which gives, in % a = 13.33 + 13.33 + 2 = 28.66 b = 3.66 + 3.66 + 2 = 9.33 o = 13.33 + 45 + 3.66 = 62.00
Inferring alleles distribution from the blood types distribution
EDIT: Wrong logic here, disregard this answer and focus on the accepted one. Keeping the original one for reference of what no to do. How about simply reversing the probability ? It sounds simple but
Inferring alleles distribution from the blood types distribution EDIT: Wrong logic here, disregard this answer and focus on the accepted one. Keeping the original one for reference of what no to do. How about simply reversing the probability ? It sounds simple but it might be what you are looking for ? if aa, oa and ao makes A and there are 40% of A, then the simplest assumption is that there are 40/3 % of each alleles pair, and so on This gives, in % aa -> 13.3 oa -> 13.3 ao -> 13.3 oo -> 45 ob -> 3.7 bo -> 3.7 bb -> 3.7 ba -> 2 ab -> 2 Then, you sum the probability of the pairs that have an "a" pondered by their frequency in the pair (*0.5 in ao, *1.0 in aa for example) Which gives, in % a = 13.33 + 13.33 + 2 = 28.66 b = 3.66 + 3.66 + 2 = 9.33 o = 13.33 + 45 + 3.66 = 62.00
Inferring alleles distribution from the blood types distribution EDIT: Wrong logic here, disregard this answer and focus on the accepted one. Keeping the original one for reference of what no to do. How about simply reversing the probability ? It sounds simple but
53,670
How to assign new data to an existing clustering
Assigning new points to a clustering algorithm is always a bit perplexing because the results of a clustering algorithm are imperfect; they represent a snapshot of a (hopefully good) segmentation of the current data. How well they generalise to new data and what the actual definition of "good" is are open questions. Maybe we derived that clustering based on some cluster-stability approach, maybe we derived it using some information criterion, maybe we used some heuristic like the GAP statistic or the Davies-Bouldin index; there are many ways. Nevertheless, despite the ambiguity of a good clustering, all is not lost: as soon as we have clusters, we have classes. Given a particular clustering segmentation, we can train a robust classifier where we treat the labels we previously assigned during clustering as classes. In that way we can account more intuitively for the non-robustness of the clustering labels. Let me stress that this will not be perfect, exactly because our initial data were not perfect, but it will allow us to account naturally for a certain degree of uncertainty. In addition, as we expect the clustering we have to reflect "some structure", it is a cheap and straightforward way to encapsulate that structure. Following this rationale, when new points appear, we can use the classifier we trained on the originally clustered data to predict the classes of the new data (i.e. the cluster they are in). Not only that, but we will be able to explain, to some degree, why we picked that cluster, based on the intuition gained from the classifier. As mentioned in the comments, at some point we should redo this clustering-classification procedure, because the clustering will most probably evolve after we accumulate enough new data. This "evolution" will first be noticed on points that lie close to the border of two clusters. To that extent, a single new point might "pull" the centre of a cluster away from that border point enough to lead to a change of cluster membership, i.e. render our original cluster/class assignment obsolete. When we should retrain is again not well defined; I would suggest as soon as we have computational time, or as soon as we believe that the underlying structure of our data has changed substantially (i.e. we have concept drift).
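A minimal sketch of this cluster-then-classify idea in scikit-learn (synthetic data; the choice of K-means and a random forest is illustrative, not prescriptive):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
clf = RandomForestClassifier(random_state=0).fit(X, km.labels_)  # cluster labels as classes

X_new = rng.normal(size=(5, 4))
print(clf.predict(X_new))        # cluster assignment for the new points
print(clf.predict_proba(X_new))  # with a natural notion of membership uncertainty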
How to assign new data to an existing clustering
Assigning new points to a clustering algorithm is always a bit perplexing because the results of a clustering algorithm are imperfect; they represent a snapshot of a (hopefully good) segmentation of t
How to assign new data to an existing clustering Assigning new points to a clustering algorithm is always a bit perplexing because the results of a clustering algorithm are imperfect; they represent a snapshot of a (hopefully good) segmentation of the current data. How good they generalise to new data and what is the actual definition of good are open questions. Maybe we derive that clustering based on some cluster stability approach, maybe we derived it using some information criterion, maybe we use some heuristic like the GAP statistic or the Davies-Bouldin index; they are many ways. Nevertheless despite the ambiguity of a good clustering, all is not lost as soon as we have clusters, we have classes. Given a particular clustering segmentation, we can train a robust classifier where we treat the labels we previously assigned during clustering as classes. In that way we can account more intuitively for the non-robustness of the clustering labels. Let me stress, that this will not be perfect exactly because our initial data were not perfect but it will allow us to account naturally for a certain degree of uncertainty. In addition as we expect the clustering we have to reflect "some structure" it is a cheap and straightforward way to encapsulate that structure. Following this rationale, when new points appear, we can use the classifier we trained using the data originally clustered, to predict the classes of the new data have (ie. the cluster they are in). Not only that, but we will be able to explain, to some degree, why we picked that cluster based on the intuition gained from the classifier. As mentioned in the comments, at some point though we should redo this clustering-classification procedure because the clustering will most probably evolve after we accumulate enough new data. This "evolution" will be first noticed on points that lay close to the border of two clusters. To that extent, a single new point might "pull" the centre of a cluster away from that border-point enough to lead into a change of cluster membership, ie. render our original cluster/class assignment obsolete. When we should retrain is again not well defined; I would suggest as soon as we have computational time or we believe that the underlying structure of our data should have changed substantially (ie. we have concept-drift).
How to assign new data to an existing clustering Assigning new points to a clustering algorithm is always a bit perplexing because the results of a clustering algorithm are imperfect; they represent a snapshot of a (hopefully good) segmentation of t
53,671
How to assign new data to an existing clustering
I guess just 2 points is not gonna mess it up so badly, but for more than 10% (to name a number) it might be better to recalculate the centroids (just an opinion):

import numpy as np
import pandas as pd

def Labs(dataset, centroids):
    # assign each point the (1-based) label of its nearest centroid
    l = []
    for i in range(len(dataset)):
        dists = [np.linalg.norm(dataset[i, :] - centroids[j, :])
                 for j in range(len(centroids))]
        l.append(np.argmin(dists))
    return pd.DataFrame(np.array(l) + 1, columns=['Lab'])
How to assign new data to an existing clustering
I guess just 2 points is not gonna mess it so badly, but for more than 10% (to say a number) it might be better to recalculate the centroids (just an opinion) def Labs( dataset,centroids ): l = []
How to assign new data to an existing clustering I guess just 2 points is not gonna mess it so badly, but for more than 10% (to say a number) it might be better to recalculate the centroids (just an opinion) def Labs( dataset,centroids ): l = [] for i in range(len(dataset)): m = [] for j in range(n): p = np.linalg.norm(dataset[(i),:]-centroids[(j),:]) m.append(p) po = np.argmin(m) l.append(po) return pd.DataFrame(np.array(l) + 1,columns =['Lab'])
How to assign new data to an existing clustering I guess just 2 points is not gonna mess it so badly, but for more than 10% (to say a number) it might be better to recalculate the centroids (just an opinion) def Labs( dataset,centroids ): l = []
53,672
Basic confusion about Restricted Boltzmann Machines (RBM)
Have a look at section $\textbf{13.2}$ of Hinton's guide to training an RBM, at equation $\textbf{17}$, or a similar and better description in Salakhutdinov's Learning Deep Generative Models, section $\textbf{2.2}$. http://www.cs.toronto.edu/~hinton/absps/guideTR.pdf http://www.cs.cmu.edu/~rsalakhu/papers/annrev.pdf The Gaussian RBM assumes you have real-valued visible units in the interval $[0,1]$ (as is the case with normalized MNIST) and some variance $\sigma^2$. In reality you would have to infer $\sigma^2$, but for practical purposes this is chosen prior to training your model, in some cases as variance 0.01. The tutorial you mentioned uses the scikit-learn class BernoulliRBM, which inherently accepts floats as input and sets the variance to 0.01 (see the $\textbf{fit}$ function on github). So what they do is allowed and ok, but it's done behind the scenes:) Hope this helps! Patric
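A minimal sketch of what the tutorial effectively does, using scikit-learn's BernoulliRBM (random stand-in data instead of MNIST; the hyperparameters are placeholders):

import numpy as np
from sklearn.neural_network import BernoulliRBM

X = np.random.RandomState(0).randint(0, 256, size=(500, 64))  # stand-in for pixel data
X = X / 255.0  # rescale to [0, 1], as in the tutorial

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)
rbm.fit(X)  # floats in [0, 1] are accepted directly
print(rbm.components_.shape)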
Basic confusion about Restricted Boltzmann Machines (RBM)
Have a look at section $\textbf{13.2}$ of Hinton's guide to train an RBM, at equation $\textbf{17}$ or a similar and better description in Salakhutdinov's Learning Deep Generative Models, section $\te
Basic confusion about Restricted Boltzmann Machines (RBM) Have a look at section $\textbf{13.2}$ of Hinton's guide to train an RBM, at equation $\textbf{17}$ or a similar and better description in Salakhutdinov's Learning Deep Generative Models, section $\textbf{2.2}$. http://www.cs.toronto.edu/~hinton/absps/guideTR.pdf http://www.cs.cmu.edu/~rsalakhu/papers/annrev.pdf The Gaussian RBM assumes you have real-valued visible units between interval 0-1 (as is with normalized MNIST) and some variance $\textbf{$\sigma^2$}$. In reality you would have to infer $\textbf{$\sigma^2$}$, but for all purposes, this is chosen prior to training your model, in some cases, variance 0.01. The tutorial you mentioned uses the scikit learn package BernoulliRBM, which inherently accepts floats as input and sets the variance to 0.01(see the $\textbf{fit}$ function on github). So what they do is allowed and ok but it's done behind the scenes:) Hope this helps! Patric
Basic confusion about Restricted Boltzmann Machines (RBM) Have a look at section $\textbf{13.2}$ of Hinton's guide to train an RBM, at equation $\textbf{17}$ or a similar and better description in Salakhutdinov's Learning Deep Generative Models, section $\te
53,673
Basic confusion about Restricted Boltzmann Machines (RBM)
Yeah there are simple extensions to real numbers, but it's kind of an easy trick to scale your data between 0 and 1 (like a probability). Then you can learn using binary stochastic units. Scaling your data between 0 and 1 is one of many pre-data-filtering tricks to make learning faster - the ultimate goal. Converting pixels to grayscale is also common, and you can convert grayscale to a 0-to-1 value without losing information.
Basic confusion about Restricted Boltzmann Machines (RBM)
Yeah there are simple extensions to real numbers, but it's kind of an easy trick to scale your data between 0 and 1 (like a probability). Then you can learn using binary stochastic units. Scaling your
Basic confusion about Restricted Boltzmann Machines (RBM) Yeah there are simple extensions to real numbers, but it's kind of an easy trick to scale your data between 0 and 1 (like a probability). Then you can learn using binary stochastic units. Scaling your data between 0 and 1 is one of many pre-data-filtering tricks to make learning faster - the ultimate goal. Whitening pixels to greyscale is also common. You can convert grayscale to a 0 to 1 value without losing information.
Basic confusion about Restricted Boltzmann Machines (RBM) Yeah there are simple extensions to real numbers, but it's kind of an easy trick to scale your data between 0 and 1 (like a probability). Then you can learn using binary stochastic units. Scaling your
53,674
Basic confusion about Restricted Boltzmann Machines (RBM)
By definition, an RBM with binary visible units can only model binary observations. So in the case of MNIST with integer pixel values in [0, 255], some sort of thresholding can be done to binarize the input. Or, like you suggested, rescale the pixel values to real numbers and use Gaussian visible units to model them. Another alternative is to treat pixel values in [0,1] as probabilities of a binary event and use binomial units as in Hinton (2006) (A fast learning algorithm for deep belief nets). The author of the pyimagesearch post actually clarified this point in a comment: I think you might have been confused by my original comment. If [we] assume the MNIST digits are already thresholded, then we have two pixel values: 255 (white, the foreground) and 0 (black, the background). If you divide all pixel values by 255, then they are all in the range [0, 1].
Basic confusion about Restricted Boltzmann Machines (RBM)
By definition, an RBM with binary visible units can only model binary observations. So in the case of MNIST with integer pixel values in [0, 255], some sort of thresholding can be done to binarize the
Basic confusion about Restricted Boltzmann Machines (RBM) By definition, an RBM with binary visible units can only model binary observations. So in the case of MNIST with integer pixel values in [0, 255], some sort of thresholding can be done to binarize the input. Or, like you suggested, rescale the pixel values to real numbers and use Gaussian visible units to model them. Another alternative is to treat pixel values in [0,1] as probabilities of a binary event and use binomial units as in Hinton (2006) (A fast learning algorithm for deep belief nets). The author of the pyimagesearch post actually clarified this point in a comment: I think you might have been confused by my original comment. If assume the MNIST digits are already thresholded, then we have two pixels values: 255 (white, the foreground) and 0 (black, the background). If you divide all pixel values by 255, then they are all in the range [0, 1].
Basic confusion about Restricted Boltzmann Machines (RBM) By definition, an RBM with binary visible units can only model binary observations. So in the case of MNIST with integer pixel values in [0, 255], some sort of thresholding can be done to binarize the
53,675
How does procedure failure rate affect sample size in a randomized clinical trial?
From the point of view of intention to treat it is not necessary to take this into account, because everybody should be included in the analysis in the arm to which they are randomised. The failure rate is an inherent part of the treatment. If you want to compare the effect of actually receiving the treatment in the two arms that is a different matter, but from your description I assume that is unknowable. As a side issue, I do not see how you can maintain masking if you are catheterising in one group but not the other. You can, and should, mask the outcome assessors but surely everybody else: patients, investigators, clinical team, ... will know. Edited in response to comments My last paragraph is not relevant now, as the OP has kindly elaborated on their ingenious scheme for employing a sham catheter. Just for reference, for people coming upon this who may not know about it: intention to treat is described in this Wikipedia article, and the alternative, the complier average causal effect, is outlined in many articles including this open-access one. There does not seem to be a Wikipedia entry for it.
How does procedure failure rate affect sample size in a randomized clinical trial?
From the point of intention to treat it is not necessary to take this into account because everybody should be included in the analysis in the arm to which they are randomised. The failure rate is an
How does procedure failure rate affect sample size in a randomized clinical trial? From the point of intention to treat it is not necessary to take this into account because everybody should be included in the analysis in the arm to which they are randomised. The failure rate is an inherent part of the treatment. If you want to compare the effect of actually receiving the treatment in the two arms that is a different mater but from your description I assume that is unknowable. As a side issue I do not see how you can maintain masking if you are catheterising in one group but not the other. You can, and should, mask the outcome assessors but surely everybody else: patients, investigators, clinical team, ... will know. Edited in response to comments My last paragraph is not relevant now as the OP has kindly elaborated on their ingenious scheme for employing a sham catheter. Just for reference for people coming upon this who may not know about it intention to treat is described in this Wikipedia article and the alternative complier average causal effect is outlined in many articles including this open access one. There does not seem to be a Wikipedia entry for it.
How does procedure failure rate affect sample size in a randomized clinical trial? From the point of intention to treat it is not necessary to take this into account because everybody should be included in the analysis in the arm to which they are randomised. The failure rate is an
53,676
How does procedure failure rate affect sample size in a randomized clinical trial?
The power of your study depends on the magnitude of the difference you hope to detect between the two treatments, the variability among participants in terms of their responses to the treatments, and the number of participants. Insofar as the possibility of placement failure increases the variability of responses among participants, it will necessarily decrease the power for detecting a particular difference between the treatments, given the same number of participants. As an extreme example, if there always was placement failure you would be unable to detect any differences at all. That said, placement failure seems inherent in these treatments so this would seem to be a fair test of their differences in clinical practice (except for the caution raised by @mdewey about whether this is really masked, unless all get catheterized but some receive placebo instead of anesthetic, or unless you are willing to accept a possible placebo effect simply due to catheterization that doesn't correspond to the continued anesthetic infusion). With treatment failure you may not be able to detect as small a difference as you might like, so an estimate of treatment failure should be considered in your design.
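One way to quantify the cost: under intention to treat, if a fraction f of the treatment arm has placement failure and hence effectively receives the control condition, the observed standardized effect is diluted to roughly (1 - f) times the true effect, so the required sample size inflates by about 1/(1 - f)^2. A sketch with statsmodels (the effect size and failure rate below are hypothetical, not from the question):

from statsmodels.stats.power import TTestIndPower

effect, f = 0.5, 0.15  # hypothetical standardized effect and failure rate
solver = TTestIndPower()

n_ideal = solver.solve_power(effect_size=effect, alpha=0.05, power=0.8)
# intention to treat: the detectable effect shrinks to (1 - f) * effect
n_diluted = solver.solve_power(effect_size=(1 - f) * effect, alpha=0.05, power=0.8)
print(n_ideal, n_diluted, n_diluted / n_ideal)  # ratio ~ 1 / (1 - f)^2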
53,677
How does procedure failure rate affect sample size in a randomized clinical trial?
Random uncertainty decreases the precision of an experiment; systematic uncertainty decreases its accuracy. The standard deviation $s$ equals the square root of the sum of squared differences from the mean divided by $N$ (population form): $s=\sqrt{\sum_i (x_i-\bar{x})^2/N}$. The standard deviation of the mean value of a set of measurements, $\sigma_m$ ("sigma-em"), is $\sigma_m = s/\sqrt{N}$. When we speak of the uncertainty $\sigma$ of a set of measurements made under identical conditions, we mean the number $\sigma_m$ and not $s$. There are two common ways to state the uncertainty of a result: in terms of a $\sigma$, like the standard deviation of the mean $\sigma_m$, or in terms of a percent or fractional uncertainty, "epsilon", $\epsilon$. If the uncertainty of results does not change with quality or experience, then the rate of change of the standard deviation of the mean with growing data-set size will be small; this is both an indicator of process quality and of an adequate sample size. Referring to Shannon's law and digital communication, we know that the noise-to-signal ratio (a standard-deviation-like quantity) is related to the error rate, roughly proportionally on a log-log scale (although we usually quote the inverse, the signal-to-noise ratio, SNR). An SNR around 10 dB (10:1) to 15 dB (about 30:1) is usually the noise threshold at which you approach a 50/50 chance of error, depending on how you discriminate the results as "good" or "bad"; at 20 dB (1% noise) the error rate is quite small. (More astute statisticians are welcome to improve or correct this description.) You must decide your own error threshold, and hence the sample size needed for significant results, depending on the weight of the error (longer recovery period versus added cost, infection, or whatever). A typical calculation is a measure of the improvement in recovery time, with its standard deviation, with and without the additional treatment. Plotting the standard deviation over time for each medical team is then an indicator of their rate of improvement in treatment efficacy. IMHO, blinding the test is less important than accurately defining the measurement methods for observing recovery, and confidence in skill, if this can somehow be measured, with measurable thresholds, tolerances, and bias inputs (neglect or extra attention may bias the results in either direction). Blind-test results are somewhat meaningless without these details, because they apply only to the team that performed the process and not to all teams, unless a larger study takes place. Even with a large sample base, every procedure step must be a measurable parameter with a tolerance to obtain a reliable prediction of the outcome (e.g. placement, adhesion secured, patient motion, etc.). Reducing the deviation improves the success rate. Design of Experiments (DoE) can be used to optimize the process (beyond the scope of this answer).
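To make the distinction between $s$ and $\sigma_m$ concrete, here is a small R sketch of my own with simulated data (not part of the original answer): $s$ estimates a fixed population quantity, while $\sigma_m = s/\sqrt{N}$ shrinks as the data set grows.

set.seed(1)
x <- rnorm(400, mean = 10, sd = 2)   # simulated repeated measurements
s <- sd(x)                           # estimates the population SD (about 2)
sigma_m <- s / sqrt(length(x))       # uncertainty of the mean, s / sqrt(N)
c(s = s, sigma_m = sigma_m)          # sigma_m is 1/20 of s here, since N = 400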
53,678
What is the difference between the probabilistic and non-probabilistic learning methods? [duplicate]
The task of classification enables a simple comparison: a probabilistic approach (such as Random Forest) yields a probability distribution over a set of classes for each input sample. A deterministic approach (such as SVM) does not model the distribution of classes but rather partitions the feature space and returns the class associated with the region a sample falls into. Yet every probabilistic method can simply return the class with the highest probability and therefore appear deterministic. And the other way around: based on the distance to the separating hyperplane in an SVM, a probability can be computed and returned for each class. Generally, the probabilistic approaches are better suited to incorporating uncertainty about the answer and can provide information about how safe a prediction is. Further, probabilistic methods may be able to incorporate prior information about the class distribution. Example: if you tried to predict gender based on body height, a deterministic approach would simply choose a separating point and say (completely made-up numbers): height > 175 --> man height <= 175 --> woman so a sample with a height of 175 would yield woman, just as a sample with a height of 155 would. A probabilistic approach in this case could instead yield probabilities, such as, for a height of 155 --> man(0.1), woman(0.9), and for 175 --> man(0.49), woman(0.51).
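A minimal R sketch of the contrast, using simulated heights (all numbers invented): a logistic regression returns a class probability for each sample, and thresholding that probability recovers a deterministic rule.

set.seed(1)
height <- c(rnorm(500, 178, 7), rnorm(500, 165, 7))   # men, then women
gender <- factor(rep(c("man", "woman"), each = 500))
fit <- glm(gender ~ height, family = binomial)        # probabilistic model
p_woman <- predict(fit, data.frame(height = c(155, 175)), type = "response")
p_woman                                 # high at 155, near 0.5 at 175
ifelse(p_woman > 0.5, "woman", "man")   # hard threshold = deterministic rule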
53,679
Why Discriminator converges to 1/2 in Generative Adversarial Networks?
I'll try to skip the math and use intuition instead. You show the Discriminator a lot of examples. Some are real, some are fake (coming from the Generator). Intuitively, you want the Generator to be able to fool the Discriminator every time into thinking that a fake example is a real one: the Discriminator shouldn't be able to differentiate between real and fake examples. You don't show it real examples exclusively, which would teach it to always answer "real". You show it real examples 50% of the time; the other 50% of the time you show it fake examples. You want it to say "real" when the example really is real, and "fake" when it really is fake. But when the fakes are really good, it won't be able to tell the difference: it will classify some real examples as fake, and some fakes as real. There are two classes, and on average the two classes have probability 50% each if they have the same number of samples. If everything is perfect, the best D can do is guess whether it has been shown a fake or a real example, which gives a probability of 50%. What would you say if I showed you two twin brothers you don't know and asked you to tell me which one is which? The best you could do is guess. You could flip a coin, too, which amounts to the same thing. If the Discriminator cannot differentiate, then it can only guess, because it knows there are two classes, and that in turn means we have trained our Generator well. Over a large number of examples, if D can't differentiate between real and fake ones, your GAN has reached a Nash equilibrium. It's a game that G and D play against each other, and we hope they reach the equilibrium at some point. D is an ordinary classifier. If it always outputs '1' for real examples, then P('1') = 1 on real inputs; if it always outputs '0' for fake examples, then P('0') = 1, i.e. P('1') = 0, on fakes. If D separates the two that perfectly, either your G is really bad, or D is somehow broken. In the beginning, D will be able to differentiate between real and fake examples more easily. But you keep training your G to produce better examples, and at the same time you train your D to recognize fakes more accurately. You want your GAN to achieve the state of equilibrium, where neither player can improve.
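The guessing argument can be checked with a toy simulation outside the GAN setting (my own sketch, not from the original answer): when the "real" and "fake" samples come from the same distribution, even a fitted classifier cannot beat 50% accuracy.

set.seed(1)
x <- c(rnorm(5000), rnorm(5000))      # "real" and "fake": same distribution
y <- rep(0:1, each = 5000)            # labels
fit <- glm(y ~ x, family = binomial)  # try to tell them apart
mean((fitted(fit) > 0.5) == y)        # close to 0.5: it can only guess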
53,680
Why Discriminator converges to 1/2 in Generative Adversarial Networks?
{1} explains why the output of discriminator network $D$ converges to $\frac{1}{2}$: For $G$ fixed, the optimal discriminator $D$ is $D^*_G(\mathbb{x}) = \frac{p_\text{data}(\mathbb{x})}{p_\text{data}(\mathbb{x}) + p_g(\mathbb{x})}$. Therefore, if you have $p_g=p_\text{data}$, meaning that the neural network $G$ has learned the true distribution, then $D^*_G(\mathbb{x})=\frac{1}{2}$. {1} gives a proof of that claim, but intuitively you can consider Algorithm 1's weight update strategy: Sample minibatch of $m$ noise samples $\{ \mathbb{z}^{(1)}, \dots, \mathbb{z}^{(m)} \}$ from noise prior $p_g(\mathbb{z})$. Sample minibatch of $m$ examples $\{ \mathbb{x}^{(1)}, \dots, \mathbb{x}^{(m)} \}$ from data generating distribution $p_\text{data}(\mathbb{x})$. Update the discriminator by ascending its stochastic gradient: $$ \nabla_{\theta_d} \frac{1}{m} \sum_{i=1}^m \left[ \log D\left(\mathbb{x}^{(i)}\right) + \log \left(1-D\left(G\left(\mathbb{z}^{(i)}\right)\right)\right) \right]. $$ When $\mathbb{x}^{(i)}$ is indistinguishable from $G\left(\mathbb{z}^{(i)}\right)$, $D$ simply cuts in the middle, i.e. $D\left(\mathbb{x}^{(i)}\right) = D\left(G\left(\mathbb{z}^{(i)}\right)\right) = \frac{1}{2}$. References: {1} Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. "Generative adversarial nets." In Advances in Neural Information Processing Systems, pp. 2672-2680. 2014. https://arxiv.org/abs/1406.2661v1
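A quick numerical illustration of $D^*_G$ with arbitrary Gaussian densities (my own sketch): while the generator's density differs from the data density the optimal discriminator deviates from 1/2, and once they coincide it is exactly 1/2 everywhere.

x <- seq(-3, 3, by = 1)
p_data <- dnorm(x, 0, 1)
p_g    <- dnorm(x, 1, 1)      # generator not yet converged
p_data / (p_data + p_g)       # varies with x
p_g <- dnorm(x, 0, 1)         # generator has recovered p_data
p_data / (p_data + p_g)       # identically 0.5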
53,681
Why Discriminator converges to 1/2 in Generative Adversarial Networks?
If $p_g=p_d$, the input data $x_1,...,x_n\sim p_d$ and the generated data $G(z_1),...,G(z_n)\sim p_g$ come from the same distribution. It is thus impossible to distinguish between samples of real and generated data, because they are samples of the same distribution. The best the discriminator can do is thus to guess whether $x$ was generated or real. For the discriminator, guessing corresponds to $D(x)=1/2$, that is, the probability of $x$ being real is $1/2$. Note this also makes the probability of $x$ being generated $1/2$. More formally, this follows directly from [1]'s Proposition 1. It says that for any fixed generator $G$ the optimal discriminator is $$D^*_G(x)=\frac{p_d(x)}{p_d(x)+p_g(x)}$$ Inserting $p_d(x)=p_g(x)$ we get $$D^*_G(x)=\frac{p_g(x)}{p_g(x)+p_g(x)}=\frac{1}{2}$$ In other words: if the generator recovers the real data distribution, the optimal discriminator blindly guesses. The idea of Proposition 1's proof is to find the $D$ that maximizes the value of the game $$V(D, G)=\mathbb{E}_{x\sim p_d}\left[\log D(x)\right]+\mathbb{E}_{z\sim p_z}\left[\log(1-D(G(z)))\right]$$ for fixed $G$. This can be done by rewriting $V(D, G)$ as a single integral, then differentiating the integrand with respect to $D(x)$ and setting the derivative to zero. This is the rough idea; to make the details work one needs to be a little more careful. Check out [1]'s proof if you want the details. [1] https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
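For reference, the pointwise maximization at the heart of that proof can be written out (a sketch of the standard argument, not the full proof): for fixed $a = p_d(x) > 0$ and $b = p_g(x) > 0$,
$$
f(y) = a\log y + b\log(1-y), \qquad
f'(y) = \frac{a}{y} - \frac{b}{1-y} = 0
\;\Longrightarrow\; y^* = \frac{a}{a+b},
$$
so maximizing the integrand of $V(D,G)$ at each $x$ gives $D^*_G(x) = p_d(x)/(p_d(x)+p_g(x))$, and $p_d = p_g$ yields $y^* = 1/2$.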
53,682
Which model is better: One that overfits or one that underfits?
Compare the two models on the plot below. The first (red curve) heavily overfits the data, while the second one (blue line) underfits it. Can you tell which one is better? I'd say that both are bad. If your model overfits, then it fits your data perfectly, but it can be poor for making out-of-sample predictions. If it underfits, then it fits your in-sample data poorly, and it can work better or worse on external data. Both cases are bad. From a practical point of view you choose the model that best fits your purpose, or, saying it in two words: it depends. From a practical point of view you wouldn't be considering a model that is poor either way.
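The two failure modes are easy to reproduce (my own sketch with simulated data): a degree-10 polynomial overfits and an intercept-only model underfits the same quadratic signal.

set.seed(1)
x <- runif(50); y <- x^2 + rnorm(50, sd = 0.1)
x_new <- runif(50); y_new <- x_new^2 + rnorm(50, sd = 0.1)  # holdout data
over  <- lm(y ~ poly(x, 10))  # overfits: chases the noise
under <- lm(y ~ 1)            # underfits: ignores x entirely
c(mean(resid(over)^2), mean(resid(under)^2))  # training error favours overfitting
c(mean((y_new - predict(over,  data.frame(x = x_new)))^2),   # but out of sample
  mean((y_new - predict(under, data.frame(x = x_new)))^2))   # both can do poorly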
53,683
Which model is better: One that overfits or one that underfits?
It's hard to say in general whether overfitting or underfitting is less desirable; this probably depends on your application. In practice, though, it's probably better to start with a model that has enough capacity to overfit your training data, since there are lots of techniques for dealing with overfitting (e.g., regularization, dropout, dimensionality reduction, etc.).
53,684
Which model is better: One that overfits or one that underfits?
In ARIMA modelling, over-fitting often arises from model mis-specification: incorporating an unwarranted differencing operator and a self-cancelling MA operator leads to poor forecasts. Incorporating a self-cancelling ARMA structure has the same effect. In contrast, incorporating statistically non-significant (unnecessary, i.e. not different from 0.0) lag structures for causal variables generally has little or no effect. I suggest that over-modelling ARIMA structure can be a bad idea, while under-modelling causal models is a bad idea.
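The unwarranted-differencing point is easy to demonstrate (my own sketch): difference white noise and the fitted MA(1) coefficient sits near -1, a self-cancelling structure.

set.seed(1)
x <- rnorm(500)                      # white noise: no differencing is warranted
fit <- arima(x, order = c(0, 1, 1))  # over-differenced model with an MA(1) term
coef(fit)                            # ma1 close to -1, cancelling the differencing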
53,685
$\int_a^b \mathbb{P}(\mathrm{d}x) = \int_a^b \mathbb{P}(x)\mathrm{d}x$?
In general, the $\mathbb{P}$-integral of a measurable function $X$ on $\left(\Omega, \mathcal{A}, \mathbb{P} \right)$ is given by $$\int_{\Omega} X(\omega) \ d \mathbb{P}(\omega)$$ which is also written as $$\int_{\Omega} X(\omega) \ \mathbb{P}(d\omega) $$ In probability theory this is just the expectation of $X$ under the measure $\mathbb{P}$. The notation $\int_\Omega \mathbb{P}(x)\mathrm{d}x$, on the other hand, I have never seen anywhere; it is not even clear whether it denotes an abstract Lebesgue integral or a Riemann integral.
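For what it is worth, the bridge between the two notations is the Radon-Nikodym density (a standard fact, added here for completeness): when $\mathbb{P}$ has a density $f$ with respect to Lebesgue measure,
$$
\int_{\Omega} X(\omega)\,\mathbb{P}(\mathrm{d}\omega) = \int X(x)\,f(x)\,\mathrm{d}x,
\qquad\text{so in particular}\qquad
\int_a^b \mathbb{P}(\mathrm{d}x) = \int_a^b f(x)\,\mathrm{d}x .
$$
Writing $\mathbb{P}(x)$ for the density $f(x)$ is therefore an abuse of notation, though one sometimes encountered.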
53,686
$\int_a^b \mathbb{P}(\mathrm{d}x) = \int_a^b \mathbb{P}(x)\mathrm{d}x$?
I had just started to write a similar question before finding that it had already been asked on SO. Your suspicion is probably correct, at least as far as the notation $Pr(dx,dy)$ instead of $\mathbb P$ is concerned. See this SO answer to a question regarding an equation in the book The Elements of Statistical Learning. @Jerry wrote: For the notation of $Pr(dx,dy)$, it is equal to $g(x,y)dxdy$, where $g(x,y)$ is the joint pdf of $x$ and $y$.
53,687
Does the square of the minimum of two correlated Normal variables have a chi-squared distribution?
The result is not true. As a counterexample, let $(X,Y)$ have standard Normal margins with a Clayton copula, as illustrated at https://stats.stackexchange.com/a/30205. Generating 10,000 independent realizations of this bivariate distribution, as shown in the lower left of the figure, produces 10,000 realizations of $Z^2$ that clearly do not follow a $\Gamma(1/2,1/2)$ distribution (in a Chi-squared test of fit, $\chi^2=121, p \lt 10^{-16}$). The qq plots in the top of the figure confirm that the marginals look standard Normal, while the qq plot at the bottom right indicates the upper tail of $Z^2$ is too short. The result can be proven under the assumption that the distribution of $(X,Y)$ is centrally symmetric: that is, when it is invariant upon simultaneously negating both $X$ and $Y$. This includes all bivariate Normals (with mean $(0,0)$, of course). The key idea is that for any $z \ge 0$, the event $Z^2 \le z^2$ is the difference of the events $X \ge -z \cap Y \ge -z$ and $X \gt z \cap Y \gt z$. (The first is where the minimum is no less than $-z$, while the second rules out where the minimum exceeds $z$.) These events in turn can be broken down as follows: $$\Pr(Z^2 \le z^2) = \Pr(X\ge -z) - \Pr(Y \le -z) + \Pr(X,Y\lt -z) - \Pr(X,Y\gt z).$$ The central symmetry assumption assures the last two probabilities cancel. The first two probabilities are given by the standard Normal CDF $\Phi$, yielding $$\Pr(Z^2 \le z^2) =1 - 2\Phi(-z).$$ That exhibits $|Z|$ as having a half-normal distribution, whence $Z^2$ will have the same distribution as the square of a standard Normal, which by definition is a $\chi^2(1)$ distribution. This demonstration can be reversed to show $Z^2$ has a $\chi^2(1)$ distribution if and only if $\Pr(X,Y\le -z) = \Pr(X,Y\ge z)$ for all $z\ge 0$. Here is the R code that produced the figures.

library(copula)

n <- 1e4
set.seed(17)
xy <- qnorm(rCopula(n, claytonCopula(1)))  # standard Normal margins, Clayton copula
colnames(xy) <- c("X", "Y")
z2 <- pmin(xy[, 1], xy[, 2])^2             # Z^2 = min(X, Y)^2

# Chi-squared test of fit against the Gamma(1/2, 1/2) distribution
cutpoints <- c(0:10, Inf)
z2.obs <- table(cut(z2, cutpoints))
z2.exp <- diff(pgamma(cutpoints, 1/2, 1/2))
rbind(Observed = z2.obs, Expected = z2.exp * length(z2))
chisq.test(z2.obs, p = z2.exp)

# Diagnostic plots
par(mfrow = c(2, 2))
qqnorm(xy[, 1], ylab = "X"); abline(c(0, 1), col = "Red", lwd = 2)
qqnorm(xy[, 2], ylab = "Y"); abline(c(0, 1), col = "Red", lwd = 2)
plot(xy, pch = 19, cex = 0.75, col = "#00000003", main = "Data")
qqplot(qgamma(seq(0, 1, length.out = length(z2)), 1/2, 1/2), z2,
       xlab = "Theoretical Quantiles", ylab = "Z2", main = "Gamma(1/2,1/2) Q-Q Plot")
abline(c(0, 1), col = "Red", lwd = 2)
53,688
Does the square of the minimum of two correlated Normal variables have a chi-squared distribution?
Let $M=\min(X,Y)^2$. Then
$$
\begin{aligned}
P(M<m) &= P(M<m,\,X<Y) + P(M<m,\,X>Y) \\
&= P(M<m \mid X<Y)\,P(X<Y) + P(M<m \mid X>Y)\,P(X>Y) \\
&= P(X^2<m)\,P(X<Y) + P(Y^2<m)\,P(X>Y) \\
&= \tfrac{1}{2}P(X^2<m) + \tfrac{1}{2}P(Y^2<m) \\
&= P(X^2<m)
\end{aligned}
$$
Two steps deserve a caveat: $P(X<Y)=P(X>Y)=\tfrac12$ uses exchangeability of $(X,Y)$ together with $P(X=Y)=0$, and the third line replaces $P(X^2<m \mid X<Y)$ by $P(X^2<m)$, which does not hold for every joint distribution (see the Clayton-copula counterexample in the previous answer) but does hold under central symmetry, e.g. for a mean-zero bivariate normal.
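A quick Monte Carlo check of the conclusion in the bivariate normal case (my own sketch; the MASS package is assumed to be available):

library(MASS)
set.seed(1)
Sigma <- matrix(c(1, 0.6, 0.6, 1), 2)          # correlated standard normals
xy <- mvrnorm(1e5, mu = c(0, 0), Sigma = Sigma)
m <- pmin(xy[, 1], xy[, 2])^2
ks.test(m, pchisq, df = 1)                     # no evidence against chi-squared(1)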
53,689
Normality test before testing the difference between two groups. Is it necessary?
While it is possible to test for normality, it is often not very useful to do so. Very few datasets come from an exactly normal distribution, and many parametric statistical procedures work well even when the distribution is only "kind of normalish". (I will note that the unequal sample sizes may mean that procedures are not quite so robust to departures from normality as would be the case with equal samples.) When the sample is small it contains little information about its underlying distribution, so a normality test has low power and you get lots of false negatives. Conversely, when the sample is large and the test has high power, it starts to flag significant departures in cases where the distribution is close enough to normal that there is no real problem. Examine your data in a couple of normal quantile-quantile plots to get a feel for the shape of the distributions. If there is substantial deviation then you can either transform the data (log transformations are often appropriate) or use non-parametric methods. With sample sizes of 17 and 70, most non-parametric tests will have good power relative to the normal-distribution-based tests. For example, a permutation test will have power equal to that of a Student's t-test. Really you should provide a lot more information in your question, such as what the measurements are, what sort of tests you wish to perform, whether the research is exploratory or designed, what hypotheses you are interested in, and so on. That way the answers can be more specific and you will gain more assistance.
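The small-sample/large-sample point is easy to see in R (a sketch of mine, applying the Shapiro-Wilk test to mildly non-normal $t_5$ data):

set.seed(1)
shapiro.test(rt(17, df = 5))$p.value    # n = 17: typically fails to flag non-normality
shapiro.test(rt(5000, df = 5))$p.value  # n = 5000: flags even this mild departure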
53,690
Normalizing Features for use with KNN
RESCALING attribute data so that values fall in the range [0, 1] or [−1, 1] is useful for the optimization algorithms, such as gradient descent, that are used within machine learning algorithms that weight inputs (e.g. regression and neural networks). Rescaling is also used for algorithms that use distance measurements, for example K-Nearest-Neighbors (KNN). Rescaling like this is sometimes called "normalization". The MinMaxScaler class in Python's scikit-learn does this. NORMALIZING attribute data is used to rescale the components of a feature vector so that the complete vector has length 1. This is "scaling by unit length". It usually means dividing each component of the feature vector by the Euclidean length of the vector, but can also use Manhattan or other distance measurements. This pre-processing rescaling method is useful for sparse attribute features and for algorithms that use distance to learn, such as KNN. The Python scikit-learn Normalizer class can be used for this. STANDARDIZING attribute data is also a preprocessing method, but it assumes a Gaussian distribution of the input features. It "standardizes" to a mean of 0 and a standard deviation of 1. This works better with linear regression, logistic regression and linear discriminant analysis. The Python StandardScaler class in scikit-learn works for this.
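For readers working in R rather than Python, here is a minimal sketch of the three transformations (my own equivalents of the scikit-learn classes named above):

x <- c(2, 5, 9, 14, 20)           # one attribute
(x - min(x)) / (max(x) - min(x))  # rescaling to [0, 1], like MinMaxScaler
v <- c(3, 4)                      # one feature vector
v / sqrt(sum(v^2))                # unit-length normalization, like Normalizer
as.vector(scale(x))               # standardizing to mean 0, sd 1, like StandardScaler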
53,691
Practical meaning of expected value (mean value), variance and standard deviation?
I'll go with the cliche example - coin flipping. Note that I'm abandoning rigor and some important assumptions in this example, but that's just fine. Let's say I have a regular coin - that is, once I flip it, it has a 50% chance of landing heads and a 50% chance of landing tails. So if I flip it 10 times, I'd expect 5 tails and 5 heads. But I could very well get 6 heads and 4 tails. Or 7 heads and 3 tails. But wait a second - why would I expect 5 tails and 5 heads? Maybe it's obvious - because each flip has a 50% chance of landing heads - so $50\% \times 10 = 5$. In other words the expected value of my coin flipping exercise is 5 heads (and therefore 5 tails) . Let's make the example more interesting now. Let's flip the coin 100 times. But check it - nothing changes in terms of my expected value. I still expect half of the tosses to be heads - i.e. 50 heads. But in reality I might not get 50 heads. Let's say I got 45 heads. Is that far from my expected value of 50? Should I be surprised by that result? Would you be? Probably not. If I told you that I got only 20 heads, then you might think something's up. Why do you think that is? That's sort of the intuitive notion of variance. How likely is it for our results to deviate from the expected value? Some things (like coin flips) have a pretty good chance of deviating from their expected value. Other things don't. We can put a number on this. In some instances, for mathematical convenience and interpretability, we can take the square root of this number. That's the standard deviation. The definitions you refer to above are more technically accurate and have direct mathematical formulations, hence terms like probability weighted and random variable. But if you understand the coin flipping example, then you'll understand the spirit of the terms.
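The surprise calculus in this example can be made exact with the binomial distribution (a small R sketch of mine):

n <- 100; p <- 0.5
n * p                  # expected number of heads: 50
sqrt(n * p * (1 - p))  # standard deviation: 5, so 45 heads is only 1 SD away
pbinom(45, n, p)       # P(45 or fewer heads), about 0.18: unremarkable
pbinom(20, n, p)       # P(20 or fewer heads), about 1e-9: something is up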
53,692
Practical meaning of expected value (mean value), variance and standard deviation?
A random variable is a quantity whose value appears to be random when measured. As @ilan man describes, the observation of Heads or Tails in a coin flip experiment is one simple example. You could also define a random variable to be the average value of the last ten tosses of the coin (mapping H to 1 and T to 0, for example). The time that you get to work each day is also a random variable. Your weight every morning is a random variable. Etc. We characterize random variables by their distribution, which, broadly speaking, gives a description of how likely particular values are to be observed. The Expected Value of the random variable is a measure of the center of this distribution and the Variance is a measure of its spread. (The Standard Deviation is the square root of the variance, which is a nice measure since it has the same units as the variable; roughly speaking, it measures the "width" of the distribution.) Note that both of these measures are "non-robust", meaning that they are sensitive to outlying values. There are also "robust" statistics for the "center" and "width". The median, which is the value for which there are as many points larger and smaller, is a robust measure of the center. Since it only depends on the count of values larger or smaller, the details very far from the median do not have an effect on the median. There are a number of robust width measures, but the one most similar to the median is the "median absolute deviation" (MAD), which is $$ \operatorname{median}(|X - \operatorname{median}(X)|) $$ where $X$ is the random variable in question.
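A tiny R illustration of the robustness contrast (my own sketch): one wild value inflates the standard deviation but barely moves the median absolute deviation.

x <- c(9, 10, 10, 11, 12)
c(sd = sd(x), mad = mad(x, constant = 1))  # constant = 1 gives median(|x - median(x)|)
x_out <- c(x, 100)                         # add a single outlier
c(sd = sd(x_out), mad = mad(x_out, constant = 1))  # sd explodes, mad hardly moves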
53,693
Practical meaning of expected value (mean value), variance and standard deviation?
Maybe for someone it will be helpful to see how the expected value $\mu$, variance $\sigma^2$ and standard deviation $\sigma$ are related in the normal distribution of a random variable. In formula: $ f(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{(x-\mu)^2}{2 \sigma^2}} $ On the graph (standard normal density with the regions within 1, 2 and 3 standard deviations of the mean shaded; figure not reproduced here): for the normal distribution, the values less than one standard deviation away from the mean account for 68.27% of the set, while two standard deviations from the mean account for 95.45%, and three standard deviations account for 99.73%. Source: Normal distribution. And a real-world example with a random variable: the heights of NBA players approximately follow a normal distribution (figure not reproduced here).
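The three coverage figures can be verified directly from the normal CDF (a one-line R sketch of mine):

sapply(1:3, function(k) diff(pnorm(c(-k, k))))  # 0.6826895 0.9544997 0.9973002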
53,694
Practical meaning of expected value (mean value), variance and standard deviation?
I suspect that the term probability density function is named after density in physics. The mean then corresponds to the center of mass of a body.
53,695
Make prediction equation from logistic regression coefficients
Unfortunately, what you seem to have run was not a logistic regression model. Note that linear regression (i.e., with normally-distributed residuals) is a special case of the generalized linear model. By default, R assumes a call to glm() is requesting that. You can see that you got that at the bottom of your output where it reads "Dispersion parameter for gaussian family...". To get a logistic regression fit, you need to add the argument family=binomial. From what you have, the prediction equation would be: $$ \text{is_bad} = 0.05693 + 0.03428 \text{ is_rentTRUE} + 0.002879 \text{ dti} + \varepsilon \\ \text{where }\varepsilon \sim\mathcal N(0, 0.1065742^2) $$ Now, let's assume that you had included the above argument to the function call (i.e., glm(is_bad~is_rent+dti, data=df, family=binomial)). Then we can state how you would convert the same numbers in the pasted output. (Note that they will actually be different numbers when you go back and do this, and moreover, that the numbers / coefficients will have different interpretations!) In that case, we can start by recognizing that the coefficients are used to recreate what we call the 'linear predictor'. Using it, we can further construct the prediction equation: \begin{align} \text{linear predictor} &= 0.05693 + 0.03428 \text{ is_rentTRUE} + 0.002879 \text{ dti} \\[7pt] p(\text{is_bad}=\text{TRUE}) &= \frac{\exp(\text{linear predictor})}{1+\exp(\text{linear predictor})} \end{align} For a more general reference to interpreting R's output for a logistic regression (including interpretations of the coefficients), it may help to read my answer here: Interpretation of R's output for binomial regression. Update: We can now use your updated model fit to get the equation you want. \begin{align} \text{linear predictor} &= -2.9020574 + 0.3105513 \text{ is_rentTRUE } + 0.0241821 \text{ dti } + \\ &\quad\quad\, 0.0044706\text{ bc_util } + 0.0030552\text{ open_acc} + \\ &\quad\quad\, 0.1117733\text{ pub_rec_bankruptcies } + \\ &\quad -0.0268015\text{ chargeoff_within_12_mths} \\[7pt] p(\text{is_bad}=\text{TRUE}) &= \frac{\exp(\text{linear predictor})}{1+\exp(\text{linear predictor})} \end{align}
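If it helps, the prediction equation can be evaluated mechanically in R (a sketch using the updated coefficients above; the covariate values are invented for illustration):

b <- c(-2.9020574, 0.3105513, 0.0241821, 0.0044706,
       0.0030552, 0.1117733, -0.0268015)  # intercept first, then the slopes
x <- c(1, TRUE, 18, 60, 8, 0, 0)          # hypothetical applicant; 1 is the intercept
lp <- sum(b * x)                          # linear predictor
plogis(lp)                                # P(is_bad = TRUE) = exp(lp) / (1 + exp(lp))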
53,696
Practical applications of the Laplace and Cauchy distributions
One example is using them as robust priors for regression parameters: a Laplace prior corresponds to the LASSO (Tibshirani, 1996), while the $t$-distribution or the Cauchy are alternatives (Gelman et al., 2008). Moreover, you can have $L_1$-regularized regression with Laplace errors (i.e., minimizing the absolute error). Another example: Laplace noise is used in the currently trendy field of differential privacy.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1), 267-288.

Gelman, A., Jakulin, A., Pittau, M. G., and Su, Y.-S. (2008). A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics, 2(4), 1360-1383.
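As an illustration of the differential-privacy use, here is a minimal sketch of the Laplace mechanism in base R; the query answer, its sensitivity, and the privacy budget epsilon are all hypothetical choices for the example:

# Inverse-CDF sampler for the Laplace(0, scale) distribution.
rlaplace <- function(n, scale) {
  u <- runif(n, -0.5, 0.5)
  -scale * sign(u) * log(1 - 2 * abs(u))
}
true_count  <- 42    # hypothetical answer to a counting query
sensitivity <- 1     # one person changes a count by at most 1
epsilon     <- 0.5   # hypothetical privacy budget
noisy_count <- true_count + rlaplace(1, scale = sensitivity / epsilon)

Releasing noisy_count instead of true_count gives epsilon-differential privacy for this query; the Laplace scale sensitivity/epsilon is exactly what the mechanism requires.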
53,697
Practical applications of the Laplace and Cauchy distributions
I would like to add an interesting case where the Cauchy distribution can arise in biology. Imagine you are a shark roaming around a space $A \subset \mathbb R^3$ in search of food, which is represented mathematically by the neighborhoods $N_\epsilon(x_i)$ of several food sources $x_i$. Brownian motion refers to the random-walk process of swimming to a new location where the change in each dimension is a normally distributed random variable. If the normal distributions have mean zero, the shark will find a food source eventually (at least in one or two dimensions, where such a walk is recurrent).

When food sources are scarce, the shark may abandon Brownian motion in favor of a Lévy flight, in which the random-walk step distribution is a Cauchy distribution (or some other heavy-tailed distribution). In a Lévy flight, the shark explores the space in a "greedy" way, sacrificing depth of the search in order to cover the space more quickly. This is referred to as the Lévy flight foraging hypothesis. The movement of sharks, swordfish, ants, albatrosses, humans, and other species has been shown to be well modeled by Lévy flights (compared to traditional Brownian motion) in a wide variety of scenarios, although the evidence is somewhat mixed. In many mathematical settings, Lévy flight is demonstrably faster at blindly searching a space and is often used in robotics competitions.
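A quick way to see the qualitative difference is to simulate both walks; here is a small sketch in R (step counts and scales are arbitrary):

# 2-D random walks: Gaussian steps (Brownian motion) vs. Cauchy steps (Levy flight).
set.seed(1)
n <- 1000
brownian <- apply(matrix(rnorm(2 * n), ncol = 2), 2, cumsum)
levy     <- apply(matrix(rcauchy(2 * n), ncol = 2), 2, cumsum)
op <- par(mfrow = c(1, 2))
plot(brownian, type = "l", asp = 1, main = "Brownian motion")
plot(levy,     type = "l", asp = 1, main = "Levy flight")
par(op)

The Brownian path stays in a tight cluster, while the Cauchy-driven path mixes local clusters with occasional enormous jumps, which is the "greedy" exploration described above.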
53,698
Practical applications of the Laplace and Cauchy distributions
The Laplace distribution is also related to median linear regression models. For the model $$y_i = x_i^T\beta + \epsilon_i,$$ where the $\epsilon_i$ are iid Laplace with location $0$ and scale $\sigma$, the maximum likelihood estimator of $\beta$ coincides with the median regression estimator $$\hat{\beta} = \operatorname{argmin}_\beta \sum_i\vert y_i - x_i^T\beta\vert.$$ See: https://en.wikipedia.org/wiki/Quantile_regression

The half-Cauchy prior is very popular in Bayesian hierarchical models: Nicholas G. Polson and James G. Scott (2012). On the Half-Cauchy Prior for a Global Scale Parameter. Bayesian Analysis, 7(4), 887-902.
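For a concrete check, here is a small sketch using the quantreg package (the simulated data are arbitrary): median regression via rq() with tau = 0.5 is exactly the least-absolute-deviations fit that the Laplace likelihood implies.

library(quantreg)
set.seed(1)
x <- runif(100)
y <- 1 + 2 * x + rcauchy(100, scale = 0.1)  # heavy-tailed errors, for contrast
coef(rq(y ~ x, tau = 0.5))  # minimizes sum |y_i - x_i' b| (median regression)
coef(lm(y ~ x))             # least squares, dragged around by the outliers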
53,699
In linear regression, is there any meaning for the term $X^Ty$?
I'll try to explain it from the linear algebra point of view, though I'm not sure it's exactly what you need.

First, when the system is inconsistent, we know that $\hat y$ is the orthogonal projection of $y$ onto the column space of $X$; in other words, $\hat y = X \hat \beta$.

Second, when we subtract $y - \hat y$ we create the orthogonal component, which is orthogonal to the column space of $X$. Recall that orthogonality means the dot product vanishes: if a vector $a$ is orthogonal to a vector $b$, then $a^Tb = 0$.

Finally, to impose orthogonality to the columns of $X$ (not its rows), we multiply by the transpose $X^T$. So we have the equation $$X^T(y - X\hat \beta) = 0.$$ Opening the brackets and moving terms to opposite sides gives the same equation you've been talking about: $$\hat \beta = (X^TX)^{-1}X^Ty.$$
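A quick numerical sketch in R (simulated data chosen arbitrarily) shows both the normal-equations solution and the orthogonality of the residual:

set.seed(1)
X <- cbind(1, matrix(rnorm(100 * 2), ncol = 2))  # design matrix with intercept
y <- rnorm(100)
beta_hat <- solve(t(X) %*% X, t(X) %*% y)        # (X'X)^{-1} X'y
crossprod(X, y - X %*% beta_hat)                 # X'(y - X beta_hat): numerically zero
cbind(beta_hat, coef(lm(y ~ X - 1)))             # matches lm()'s estimates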
53,700
In linear regression, is there any meaning for the term $X^Ty$?
People sometimes break that quantity up a little differently and call $\mathbf{P} = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T$ the projection matrix, influence matrix, or hat matrix. You can think of the projection matrix as the map from the observed $y$ values to the predicted ones, $\hat y = \mathbf{P}y$. The projection matrix has a number of handy properties. In particular, the $k$th element of its main diagonal ($\mathbf{P}_{k,k}$) contains the leverage score for the $k$th observation, which can be a useful piece of diagnostic information.
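To make this tangible, here is a small R sketch (data simulated arbitrarily) that builds $\mathbf{P}$ directly and checks it against R's built-in diagnostics:

set.seed(1)
X <- cbind(1, rnorm(20))               # design matrix with intercept
y <- rnorm(20)
P <- X %*% solve(t(X) %*% X) %*% t(X)  # the projection ("hat") matrix
fit <- lm(y ~ X - 1)
max(abs(P %*% y - fitted(fit)))        # ~ 0: P maps y to the fitted values
max(abs(diag(P) - hatvalues(fit)))     # ~ 0: diag(P) gives the leverage scores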