How can residuals be iid and sum to zero at the same time?
I think you are confusing residuals and errors. Residuals, often denoted $\hat{\varepsilon}_i$ or $e_i$, are
$$\hat{\varepsilon}_i = y_i - \hat{y}_i = y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i$$
whereas errors are
$$\varepsilon_i = y_i - \beta_0 - \beta_1 x_i$$
The small (but critical!) difference is the hat over the betas. That's why residuals are often written with a hat: they are estimates of the errors. The residuals are not independent, since (in ordinary least squares with an intercept) they sum to 0, but the errors are independent (by assumption of the model).
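For a quick numerical check, here is a minimal R sketch with simulated data (so the true errors are known): the fitted residuals sum to zero by construction, while the simulated errors do not.

set.seed(1)
x   <- 1:50
eps <- rnorm(50)            # the (unobservable) errors
y   <- 2 + 0.5 * x + eps    # data generated from the true line

fit <- lm(y ~ x)

sum(resid(fit))  # essentially 0 (up to floating-point error): a constraint of OLS with an intercept
sum(eps)         # not 0: the errors are iid and unconstrained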
Why is my Fisher's test "significant" but odds ratio overlaps 1?
This is an interesting phenomenon. The difference is basically that the hypothesis test ends up being one-sided, and is therefore more powerful. The confidence interval is not based on the same, more powerful, test (but it could be).

Null hypothesis is tested with variable weight in left and right tails

Note that the observed value in cell (1,1) (which we call $x$) has a p-value of 0.028 based only on the upper tail (the sum of the probabilities for values 3200 and above). There is no lower tail in this example, because the value can't get lower than 3194.

Confidence interval assumes a two-sided test with equal tails

The confidence interval is computed based on Fisher's noncentral hypergeometric distribution. The lower interval boundary is based on those values of the odds ratio for which the probability of observing $x \geq 3200$ is 0.025 or less. This value happens to be below 1. This is not strange, because we computed that the probability is 0.028 for odds 1.

The difference

Thus we can say: the computation of the confidence interval is based on hypothesis testing with two tails of equal weight, but this is not the case for the hypothesis test that is used to compute the p-value. The significance test for the null hypothesis uses the set of values at least a certain distance from the maximum likelihood estimate, and these need not form two equal tails. In this example, the maximum likelihood estimate is at 3196, so the p-value is based on the probability that the observed $x \geq 3200$ or $x \leq 3192$. Due to the asymmetry this lower tail does not exist. The computation of the confidence interval, however, uses a test with equal weight in both tails.

R code

Below is some R code that may help to manually compute the confidence intervals and p-values, which may be helpful for gaining more insight into the fisher.test function. The code is a simplified version of the code that is under the hood of fisher.test.

### data
mat <- matrix(c(3200, 6, 885, 6), 2, byrow = TRUE)

### parameters describing the data
x <- c(3194:3206)   ### possible values for cell 1,1
m <- 3200 + 6       ### sum of row 1
n <- 885 + 6        ### sum of row 2
k <- 3200 + 885     ### sum of column 1

### fisher test
test <- fisher.test(mat)
test

### manual computation of p-values
f <- dhyper(x, m, n, k)
plot(x, f)
pvalue <- sum(f[x >= 3200])

### compare p-values (gives the same)
pvalue
test$p.value

### non-central hypergeometric distribution
### copied from the fisher.test function in R
### greatly simplified for easier overview
logdc <- dhyper(x, m, n, k, log = TRUE)

### PDF
dnhyper <- function(ncp) {
  d <- logdc + log(ncp) * x
  d <- exp(d - max(d))
  d / sum(d)
}

### CDF
pnhyper <- function(q, ncp = 1, uppertail = FALSE) {
  if (uppertail) {
    sum(dnhyper(ncp)[x >= q])
  } else {
    sum(dnhyper(ncp)[x <= q])
  }
}
pnhyper <- Vectorize(pnhyper)

### alpha level
alpha <- (1 - 0.95) / 2

### compute upper and lower boundaries
x1 <- uniroot(function(t) pnhyper(3200, t) - alpha, c(0.5, 20))$root
x2 <- uniroot(function(t) pnhyper(3200, t, uppertail = TRUE) - alpha, c(0.5, 20))$root

### plotting
t <- seq(0.2, 20, 0.001)
plot(t, pnhyper(3200, t, uppertail = TRUE), log = "x", type = "l",
     xlim = c(0.20, 20), ylab = "P(x >= 3200)", xlab = "odds")
lines(c(10^-3, 10^3), 0.025 * c(1, 1), col = 2)
lines(c(x2, x2), c(0, 1), lty = 2)
High-dimensional Bernoulli Factory?
Your problem is related to the problem of simulating Boolean circuits that take inputs with a separate probability of being 0 or 1. This is called the stochastic logic problem. In this sense, Qian and Riedel (2008) proved that a function can arise this way if and only if it's a polynomial whose Bernstein coefficients all lie in [0, 1]. In fact these are the same polynomials that are possible in the traditional Bernoulli factory problem, and although I have no proof of this, it perhaps follows that a "multivariate Bernoulli factory" function can be simulated this way if and only if two sequences of polynomials of the kind just given exist that converge from above and below to that function (Łatuszyński et al. 2009/2011). See also Qian et al. 2011.

Another related problem is the Dice Enterprise problem first given by Morina et al. 2019/2020, which involves simulating an m-faced die with an n-faced die, where the faces have separate probabilities of occurring. This is not the same as the problem in your question, though, as these probabilities are interrelated rather than independent.

REFERENCES:
Qian, W. and Riedel, M.D., 2008, June. The synthesis of robust polynomial arithmetic with stochastic logic. In 2008 45th ACM/IEEE Design Automation Conference (pp. 648-653). IEEE.
Weikang Qian, Marc D. Riedel, Ivo Rosenberg, "Uniform approximation and Bernstein polynomials with coefficients in the unit interval", European Journal of Combinatorics 32(3), 2011, https://doi.org/10.1016/j.ejc.2010.11.004 http://www.sciencedirect.com/science/article/pii/S0195669810001666
Łatuszyński, K., Kosmidis, I., Papaspiliopoulos, O., Roberts, G.O., "Simulating events of unknown probabilities via reverse time martingales", arXiv:0907.4018v2 [stat.CO], 2009/2011.
Morina, G., Łatuszyński, K., et al., "From the Bernoulli Factory to a Dice Enterprise via Perfect Sampling of Markov Chains", arXiv:1912.09229 [math.PR], 2019/2020.

A very relevant paper was just made available and came to my attention: Niazadeh, R., Leme, R.P., Schneider, J., "Combinatorial Bernoulli Factories: Matchings, Flows, and Polytopes", arXiv:2011.03865v1 [cs.DS], Nov. 7, 2020.

(Edited Apr. 4): Let $f:\mathcal{P}\to [0, 1]$ be a function. In the traditional Bernoulli factory problem, we have a coin with unknown probability of heads $\lambda$ and we seek to sample the probability $f(\lambda)$. In this case, the domain $\mathcal{P}$ is either $[0, 1]$ or a subset thereof. The paper cited above, however, studies Bernoulli factories when $f$ has a different domain: namely when $\mathcal{P}$ is a "polytope contained in the unit hypercube", that is, either $[0,1]^n$ or a subset thereof. Now the Bernoulli factory problem is as follows: Define a polytope $\mathcal{P}$ as above. We have $n$ coins with unknown probabilities of heads $\lambda_1, ..., \lambda_n$. These probabilities form a vector $x = (\lambda_1, ..., \lambda_n)$ lying inside the polytope. Assign a specially-designed coin to each vertex of $\mathcal{P}$. Sample a vertex (coin) of the polytope such that the expected value equals $x$. For example, given two coins the vector is $x = (\lambda_1, \lambda_2)$, and one possible choice of $\mathcal{P}$ is $[0, 1]\times[0, 1]$. In that case, there are four vertices, namely $(0, 0)$, $(0, 1)$, $(1, 0)$, and $(1, 1)$. When this procedure is done enough times, the sampled vertices average to $x$. The main question studied in this paper is whether and how a polytope of the kind just given admits a Bernoulli factory.
(They show a polytope that lies entirely in $[0, 1]^d$ admits a Bernoulli factory if and only if every point in the polytope is an affine combination of a constant vector.) Unfortunately, this is a different problem from the Bernoulli factory problem you give in your question, where the goal is to sample the probability $f((\lambda_1, \lambda_2, ..., \lambda_n))$ given $n$ coins each with unknown probability of heads $\lambda_1, ..., \lambda_n$, as the case may be. Moreover, the use of the term "Bernoulli factory" as studied in the paper may be misleading, because the output of the algorithm is not necessarily a Bernoulli random variable. (This is so even though the paper defines "one-bit Bernoulli factories" similarly to traditional Bernoulli factories.) As a result, the paper didn't study any conditions on $f$ that are necessary for a Bernoulli factory of the kind you give in your question, such as whether $f$ has to be continuous or bounded away from its domain.

Nacu & Peres found algorithms to solve the Bernoulli factory problem using approximation theory. Currently, polynomials and rational functions are the main kinds of multivariate functions I am aware of that have Bernoulli factory algorithms (Morina et al.). Many multivariate analytic functions are also taken care of by the composition rule (e.g., proposition 14(ii) of Nacu and Peres). However, although the function $g = \min(\lambda_0, \lambda_1)$ is continuous, it's not differentiable, which presents an apparent difficulty. But by the Stone--Weierstrass theorem, any continuous function on $[0, 1]^d$ can be approximated arbitrarily well by polynomials, so I suspect the following:

- The necessary conditions of Keane and O'Brien extend to multivariate functions: the function must be continuous on $[0, 1]^d$ and either constant or polynomially bounded, which seems to be the case for $g$.
- I also suspect that, similarly to the proof of Keane and O'Brien, any such function $f$ can be simulated by finding multivariate, continuous and polynomially bounded functions $f_k$ that approximate $f$ from below, although the algorithm in that proof is far from practical as it requires finding the degree of approximation for each function $f_k$.
- Since an approximate polynomial exists for any such continuous function, the algorithm for simulating polynomials given by Goyal and Sigman can be easily extended to the multivariate case: flip each coin n times, count the number of heads for each coin, then return 0 with probability equal to the chosen monomial's coefficient.
- There may be an algorithm that works similarly in the multivariate case to the general Bernoulli factory algorithms in the univariate case, including the one by Nacu & Peres. This will require looking at the research on multivariate polynomial approximation (especially research that gives bounds on the approximation error with polynomials, especially multivariate polynomials in Bernstein form).
- It's also possible that for the particular function $g$, a series expansion exists whose terms can be "tucked" under a discrete probability mass function, which enables an algorithm to simulate $g$ via convex combinations (Wästlund 1999, Theorem 2.7), as is the case for $\min(\lambda, 1/2)$, for example.

EDIT (Sep. 28, 2021): Also, see chapter 3 of G. Morina's doctoral dissertation (2021), which shows that multivariate Bernoulli factories require the function to be continuous and polynomially bounded.

EDIT (Feb. 18, 2022): A new paper by Leme and Schneider (2022), "Multiparameter Bernoulli Factories", deals with the problem you're asking about. Among other things, they show that a function $f(p_1, ..., p_n)$ admits a Bernoulli factory if and only if $f$ is continuous and meets a polynomial boundedness condition that reduces in the 1-dimensional case to that found in Keane and O'Brien.

REFERENCES:
Goyal, V. and Sigman, K., 2012. On simulating a class of Bernstein polynomials. ACM Transactions on Modeling and Computer Simulation (TOMACS), 22(2), pp.1-5.
Wästlund, J., "Functions arising by coin flipping", 1999.
Morina, Giulio (2021) Extending the Bernoulli Factory to a dice enterprise. PhD thesis, University of Warwick.
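Returning to the multivariate extension sketched above, here is a toy R illustration of the simplest case (my own sketch, not an algorithm taken from any of the papers cited): the Bernstein monomial $f(\lambda_1, \lambda_2) = \lambda_1 \lambda_2$ can be simulated by flipping each coin once and outputting 1 only if both come up heads. The heads probabilities l1 and l2 are used only to construct the coins; the factory itself never looks at them.

set.seed(1)
l1 <- 0.3; l2 <- 0.7
coin1 <- function() rbinom(1, 1, l1)
coin2 <- function() rbinom(1, 1, l2)

# one draw from a coin with heads probability l1 * l2
factory_prod <- function() as.integer(coin1() == 1 && coin2() == 1)

mean(replicate(1e5, factory_prod()))  # close to l1 * l2 = 0.21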
Proper Scoring Rule in Optical Character Recognition
First off, I wouldn't say it's CrossValidated that "likes to promote proper scoring rules". It's more a few very vociferous users. Present company not excepted.

I would agree that the role of scoring rules is much smaller in optical character recognition (OCR) than in many other domains, like medical diagnostics. The reason, IMO, is that the signal-to-noise ratio is much higher in OCR. We teach five-year-olds to read, after all. Nobody makes a conscious effort to obfuscate our classifiers. We rather make sure to display the signal in a standardized way (the address almost always goes in the same position on the envelope, pages are usually in portrait orientation, etc.), and incentives are aligned with making classifiers' lives easier. Finally, there is a very small number of target classes: 26 letters, 10 numerals.

In contrast, spammers have an incentive to obfuscate classifiers. In medical diagnostics, the true disease lurks somewhere deep in a highly complex human-shaped black box. Anything beyond the most trivial use cases (the common cold, which we can usually diagnose ourselves and don't visit the doctor with) is thus interpreted by highly trained professionals (either the meat or the silicon version). Image recognition, apart from toy examples, has a limitless number of possible classes to classify an image into.

In a high signal-to-noise situation like OCR on Western scripts, most instances will be probabilistically classified as one class with very high probability, and this classification will usually be correct. It's simply not very interesting to train a classifier to better probabilistically distinguish a lowercase g from a 9, because it's usually easy to do so well enough already, based on context.

So I would say that the emphasis on proper scoring rules is more important in low signal-to-noise situations. And conversely, I sometimes have the impression that people who rely on accuracy have learned classification in high signal-to-noise situations (like OCR), and may have difficulties with their toolset when this ratio changes in a new situation.
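A tiny numerical sketch of the point (a hedged illustration, not an OCR model): accuracy cannot tell a confident correct prediction from a barely-confident one, while a proper scoring rule such as the Brier score can. In high signal-to-noise settings most predictions sit near the confident end, so the two metrics mostly agree; at lower signal-to-noise ratios they diverge.

y <- 1                      # the true class (say, "this glyph really is a lowercase g")

p_confident <- 0.99         # classifier A: near-certain, as is typical in OCR
p_hesitant  <- 0.51         # classifier B: barely leaning the right way

# accuracy: both classifiers get full credit
(p_confident > 0.5) == y    # TRUE
(p_hesitant  > 0.5) == y    # TRUE

# Brier score (lower is better): only the proper scoring rule sees the difference
(p_confident - y)^2         # 0.0001
(p_hesitant  - y)^2         # 0.2401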
Interpreting nested random effects
Does mod2 specify that the same people for the same treatment should be more similar than others?

mod2 implies that you have repeated measures within every combination of treatment and id. From your description, this does not seem to be the case.

What kind of dependence does mod3 suggest? What's the difference from mod2?

mod3 is also fitting random intercepts for id, which implies that treatment is nested within id. Again, this isn't the case here.

Do we even need to specify dependence in the sense of (1|treatment:id) if we already account for treatment as a fixed effect?

Since you seem to be interested in the fixed effect for treatment, it does not make sense to also include it as a grouping factor for random intercepts as part of an interaction.

What do we gain additionally by specifying this as a nested random effect?

We gain nothing. Since we don't actually have nested random effects here, specifying the model that way means the standard errors for the fixed-effects estimates will be wrong.
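For reference, a hedged sketch of the kind of specification this points toward (the data and variable names are assumptions, since the original mod1-mod3 formulas are in the question): with repeated measures on each person and treatment as the fixed effect of interest, a random intercept per person is usually the natural starting point.

library(lme4)

# assumed column names: outcome y, factor treatment, subject identifier id
m <- lmer(y ~ treatment + (1 | id), data = dat)

# by contrast, (1 | treatment:id) or (1 | id/treatment) would only be sensible
# with several observations per person *within* each treatment level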
Visualizing the folly of fitting random slopes for variables that don't vary within groups
I think it makes sense here to step back and simplify things. For the purpose of this answer we can think about this model:

Y ~ X + (X | G)

...in two scenarios: where X varies at the individual / unit level, and where X varies at the group level.

The motivation for fitting random slopes often arises out of the following. We have a study where we measure individuals, and we are interested in some fixed effect, i.e. the slope of a variable. It could be the same variable measured over time, or it could be the response to different treatment levels of a variable, for example. If we had only one individual we would simply take measurements and think about a plot such as this:

library(lme4)      # for lFormula()
library(ggplot2)   # for ggplot()
library(magrittr)  # for %>%

set.seed(1)
X <- 1:20
Y <- 3 + X + rnorm(20, 0, 3)
ggplot(data.frame(Y, X), aes(y = Y, x = X)) +
  geom_point() +
  geom_smooth(method = 'lm', se = FALSE)

Our interest would then be in the slope of the fitted line, from the model:

> lm(Y ~ X) %>% coef()
(Intercept)           X
   3.062716    1.067789

Now, when we have multiple individuals, we don't want to fit separate models for each individual, as discussed here: Difference between t-test on betas from individual regressions vs linear mixed modeling

So we want random intercepts, where each individual will have the same fixed effect (slope) for X, but a different intercept. Moreover, we naturally would expect each individual to have their own slope, so we want random slopes for X:

set.seed(1)
n.group <- 10
dt <- expand.grid(G = 1:n.group, X = 1:20)
dt$Y <- 1
X <- model.matrix(~ X, dt)

myFormula <- "Y ~ X + (X | G)"
foo <- lFormula(eval(myFormula), dt)
Z <- t(as.matrix(foo$reTrms$Zt))

betas <- c(3, 1)
b1 <- rnorm(n.group, 0, 3)    # random intercepts
b2 <- rnorm(n.group, 0, 0.5)  # random slopes
b <- c(rbind(b1, b2))

dt$Y <- X %*% betas + Z %*% b + rnorm(nrow(dt), 1)
dt$G <- as.factor(dt$G)

ggplot(dt, aes(y = Y, x = X, colour = G)) +
  geom_point() +
  geom_smooth(method = 'lm', formula = y ~ x, se = FALSE)

All is good. This is a classical plot to illustrate random slopes and intercepts. Each line represents one individual / group and has its own intercept and slope. Note that this is not plotted from the output of a mixed model, but rather from the data itself. We fit a mixed model in order to estimate the parameters, in the case of the random effects, the variance and covariance of the random intercepts and slopes.

Now, if we let X be a group-level predictor:

dt$X <- as.numeric(dt$G) / 4
X <- model.matrix(~ X, dt)
dt$Y <- X %*% betas + Z %*% b + rnorm(nrow(dt), 1)

ggplot(dt, aes(y = Y, x = X, colour = G)) +
  geom_point() +
  geom_smooth(method = 'lm', formula = y ~ x, se = FALSE)

We can immediately see that each group is a vertical accumulation of points at a single X value. So there is no slope for each group / individual. This is why it does not make sense to fit random slopes for a variable that only varies at the group level.

If we try to fit a model with random slopes to such data, it will almost certainly not converge, or converge to a singular fit. I say almost certainly because, as noted in the OP, we do sometimes see such models that do converge. This is why it is necessary for analysts to think about what they are doing. Plotting the data is a very good first step in many analysis tasks and can help in avoiding mistakes, and generally guide the analysis in the right direction.
ARIMA Model Changing With New Data
The updated data is presumably more correct, so a model fitted to the updated data is likely closer to the true data-generating process as well. So I would use the new model.

Then again, large changes in the forecast (note that different models may give forecasts that are not very different, at least at short horizons) would be a cause for concern. So I would at least take a look at the differences between the forecasts from the two models.

If two (or more) models are so equally reasonable that small changes in the data may make auto.arima() jump from one model to the other, it may also be worthwhile to use both models, by averaging the forecasts. As long as the order of integration is the same, you can also compare AICs and potentially use the AICs in a weighting scheme (e.g., Kolassa, 2011, IJF - sorry for the self-promotion). Note, however, that investing a lot of time in finding "optimal" weights may not help a lot (Claeskens et al., 2016, IJF).

Finally, if you have the time, you could also disable some of the computational shortcuts that auto.arima() takes, which may give you yet other models to play with, by setting stepwise=FALSE and/or approximation=FALSE.
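A hedged sketch of what this could look like with the forecast package (y_old and y_new are assumed names for the original and the updated series):

library(forecast)

fit_old <- auto.arima(y_old)
fit_new <- auto.arima(y_new, stepwise = FALSE, approximation = FALSE)  # fuller, slower search

h <- 12
fc_old <- forecast(fit_old, h = h)
fc_new <- forecast(fit_new, h = h)

# compare the two sets of point forecasts, and/or combine them by simple averaging
cbind(old = fc_old$mean, new = fc_new$mean)
fc_avg <- (fc_old$mean + fc_new$mean) / 2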
Controlling for baseline in pre-post between design: using $\Delta(T_2-T_1)$ or controlling for T1 in the regression model (or both)? [duplicate]
The options under (d) are wrong, as a change score is associated with the baseline value. See this page, for example.

Otherwise, it depends on what you mean by "taking into account the baseline measurement." You already note that option (a) doesn't do that at all.

Option (b) looks only at the change from baseline as a function of Group. Based on your knowledge of the subject matter, do you think that is an adequate way to take the baseline into account? The advantage is that all you estimate is 3 parameter values.

Option (c) allows for a slope in the relationship between T2 and T1, with the same slope for all Groups. (One could think of option (b) as forcing that slope to be 1 for all Groups.) But adding the slope to the model means you're now up to 4 parameter values to estimate.

You could extend option (c) to include an interaction between Group and T1, allowing for different slopes among the Groups. That's a more complicated model, now with 6 parameter values to estimate by my count.

So there is no clear answer about which is "best." More complicated models can capture more details about what's going on. The extra number of parameter values estimated from the data, however, can diminish the power to document truly significant relationships. A more complicated model can also lead to overfitting, building a model that fits your data set well but doesn't generalize to the underlying population. That can be a particular problem with small data sets. In many linear regression studies you typically want to have 10-20 cases per parameter estimated by the model, so if you have few cases you might need to restrict yourself to simpler models.

Added in response to comments:

This page and its links extensively discuss change scores, Option (b), versus regression of final values against initial values and a group indicator, Option (c). Allison provides a thorough comparison. As he says (page 106):

It is unrealistic to expect either model to be the best in all situations; indeed, I shall argue that each of these models has its appropriate sphere of application.

You will note, however, that Allison's arguments in favor of the change score in some circumstances are based on Option (b), without including the baseline value T1 as a predictor as Option (d) envisions. Consistent with that, Glymour et al report:

... in many plausible situations, baseline adjustment induces a spurious statistical association between education and change in cognitive score... In some cases, change-score analyses without baseline adjustment provide unbiased causal effect estimates when baseline-adjusted estimates are biased.

Although Clifton & Clifton argue for including the baseline as a covariate when change scores are an outcome, they provide many cautions such as:

Using change score as outcome has undesirable implications... By contrast, using post scores is always valid and never misleading.

Both those arguments, for including baseline as a covariate and that "using post scores is always valid," seem to disagree with Allison's presentation in favor of change scores in some circumstances, as I understand it.

Alternate approaches

One might avoid some of these arguments with alternate modeling approaches.

In some fields of study, errors tend to be proportional to observed values and effects are multiplicative rather than additive. If that's the case in your field of study, working with log-transformed values of T1 and T2 with a model like Option (c) provides a coefficient for T1 that expresses the fractional change in T2 per fractional change in T1, which is maybe even easier to explain than what you would get from the corresponding analysis of untransformed values.

A mixed model that includes both T1 and T2 values as outcomes, with an indicator of the time of observation as a predictor, would have the advantage of putting T1 and T2 on an equal footing. The fixed-effects regression approach in Option (c) implicitly assumes that T1 is known precisely and that all error is associated with T2. A mixed model with a random intercept for each individual could provide a way to "[take] into account the baseline measurement" that shares information from both T1 and T2 to get a potentially more reliable estimate of the true baseline condition rather than the particular observed baseline value.

Looking over all of these different approaches, I think that this still comes down to what I said in the second paragraph: it depends on what you mean by "taking into account the baseline measurement." You have to use your knowledge of the subject matter to decide which accounting is most appropriate.
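In R notation, the modeling options discussed above might look roughly like this (a hedged sketch; dat, dat_long, and the column names are assumptions, not taken from the question):

# Option (b): change score as a function of Group
m_b <- lm(I(T2 - T1) ~ Group, data = dat)

# Option (c): final value with baseline as covariate (common slope across Groups)
m_c <- lm(T2 ~ T1 + Group, data = dat)

# Extension of (c): Group-specific slopes for the baseline
m_c_int <- lm(T2 ~ T1 * Group, data = dat)

# Mixed-model alternative: both time points as outcomes (long format with
# columns id, time, score, Group) and a random intercept per individual
library(lme4)
m_mixed <- lmer(score ~ time * Group + (1 | id), data = dat_long)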
How do you know the number of random effects in a mixed effects model?
This is expected behaviour whenever you try to fit a model with random slopes where the variable for the random slopes is categorical and there is only one observation per treatment/group combination. This is because the levels of a categorical variable are represented by dummy variables - essentially they are treated as different variables. So in your case, when you fit random slopes only, you are asking the software to estimate 5 random slopes for each group. When you fit random intercepts and random slopes there will be 407 random intercepts, but only 4 random slopes for each group (since one level will be treated as a reference group and included in the intercept), so either way you will have 5 x 407 random effects.

The only way to solve this is by either coding the variable as numeric, if that is plausible in your study/data, or not fitting random slopes, or having more than 1 observation per treatment per group.

It may be illustrative to see this with a toy dataset:

> library(lme4)
> library(magrittr)  # for %>%
> set.seed(1)
> dt <- expand.grid(G = LETTERS[1:4], a = LETTERS[1:2])
> dt$Y = rnorm(nrow(dt))
> dt
  G a          Y
1 A A -0.6264538
2 B A  0.1836433
3 C A -0.8356286
4 D A  1.5952808
5 A B  0.3295078
6 B B -0.8204684
7 C B  0.4874291
8 D B  0.7383247

Now we fit the models, both of which will not run, for the reasons explained above.

> lmer(Y ~ a + (0 + a | G), data = dt) %>% summary()
Error: number of observations (=8) <= number of random effects (=8) for term (0 + a | G); the random-effects parameters and the residual variance (or scale parameter) are probably unidentifiable

> lmer(Y ~ a + (1 + a | G), data = dt) %>% summary()
Error: number of observations (=8) <= number of random effects (=8) for term (1 + a | G); the random-effects parameters and the residual variance (or scale parameter) are probably unidentifiable

But now we add just 1 extra row to the dataset, and they run:

> (dt <- rbind(dt, dt[1, ]))
  G a          Y
1 A A -0.6264538
2 B A  0.1836433
3 C A -0.8356286
4 D A  1.5952808
5 A B  0.3295078
6 B B -0.8204684
7 C B  0.4874291
8 D B  0.7383247
9 A A -0.6264538

> lmer(Y ~ a + (0 + a | G), data = dt) %>% summary()
Random effects:
 Groups   Name Variance  Std.Dev.  Corr
 G        aA   1.451e+00 1.205e+00
          aB   3.224e-01 5.678e-01 -0.04
 Residual      4.239e-15 6.511e-08

> lmer(Y ~ a + (1 + a | G), data = dt) %>% summary()
Random effects:
 Groups   Name        Variance  Std.Dev.  Corr
 G        (Intercept) 9.776e-01 9.887e-01
          aB          1.222e+00 1.105e+00 -0.81
 Residual             1.159e-14 1.077e-07
Number of obs: 9, groups:  G, 4

In the model with random slopes only we have 2 random slopes in 4 groups (8 random effects), and in the model with both random intercepts and random slopes we have 4 random intercepts and 4 random slopes.
Interpreting the statistical model implied by an lmer formula for mixed effect modelling
It depends on the study design and on how the data are encoded. Generally speaking, in the first model we have an intercept varying within g1 and g2, while in the second model we have an intercept varying within g1, and g2 varying within g1. The second formulation is typically used for nested factors, where levels of g2 appear in 1 and only 1 level of g1. An example of this would be students nested within schools. Each student "belongs" to one and only one school. The first formulation is typically used when we have crossed factors, where individual observations are associated with all levels of both factors (fully crossed in that case). An example of this would be students and exam questions. All students answer all questions on the exam, and all questions are answered by all students. In terms of the data, for a nested study, when the lower-level factors are coded uniquely, the two formulations will be equivalent. For example, with students nested within schools, suppose students are not coded uniquely: consider two students in different schools who both have the ID student1. Then it is necessary to use the second formulation. But if the students are coded uniquely, say student1-1 and student1-2, then the two formulations are equivalent.
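A minimal sketch with made-up data (not from the question) may make the equivalence concrete; the point is the specification, not the estimates, so ignore any convergence or singularity warnings on such a tiny sample:

library(lme4)
set.seed(1)
d <- expand.grid(school = c("s1", "s2"), student = c("1", "2", "3"), rep = 1:4)
d$student_unique <- interaction(d$school, d$student)   # e.g. "s1.1" vs "s2.1"
d$y <- rnorm(nrow(d))

# nested form: safe with either coding of student
m1 <- lmer(y ~ 1 + (1 | school/student), data = d)
# additive form: equivalent to m1 only because student_unique is coded uniquely across schools
m2 <- lmer(y ~ 1 + (1 | school) + (1 | student_unique), data = d)
VarCorr(m1)
VarCorr(m2)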
55,611
What does it mean to say that a regression method is (not) "scale invariant"?
Scale invariance means that rescaling any or all of the columns will not change the results - that is, multiplying or dividing all the values of any variable will not affect the model predictions (ref). As @ericperkeson mentioned, rescaling in this manner is known as dilation (ref). Scale invariance for metrics about contingency tables refers to rescaling rows as well as columns, though I don't believe it applies here (see the scaling property section here). As to why PLSR is not scale invariant, I'm not completely certain, but I'll leave notes on what I've learned and possibly a better mathematician can clarify. Generally, regression with no regularisation (e.g. OLS) is scale invariant, while regularised regression (e.g. ridge regression) is not scale invariant, because the minimisers of the objective function change (ref). Now, I can't see an explicit penalty term in PLSR, but I think it's constrained in a similar way to PCA. PCA chooses the axes of maximal variance - so if you rescale a variable, its variance relative to the other variables can change (ref). PLSR tries to find the 'multidimensional direction in the X space that explains the maximum multidimensional variance direction in the Y space', hence rescaling an input can change the direction of maximum variance (ref).
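A quick illustration with simulated data (nothing here comes from the original question): rescaling one predictor leaves the OLS predictions untouched, but it changes which direction PCA picks out.

set.seed(1)
X <- data.frame(x1 = rnorm(100), x2 = rnorm(100, sd = 5))
y <- 2 * X$x1 - 0.3 * X$x2 + rnorm(100)

# OLS is scale invariant: the predictions do not change, only the coefficient rescales
f1 <- lm(y ~ x1 + x2, data = X)
X2 <- transform(X, x1 = 1000 * x1)          # e.g. converting km to m
f2 <- lm(y ~ x1 + x2, data = X2)
all.equal(fitted(f1), fitted(f2))            # TRUE

# PCA is not: the leading loading vector shifts toward whichever column has the larger numbers
prcomp(X,  scale. = FALSE)$rotation[, 1]
prcomp(X2, scale. = FALSE)$rotation[, 1]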
55,612
What does it mean to say that a regression method is (not) "scale invariant"?
Start with the technical meanings of "location" and "scale" with respect to a one-dimensional probability distribution. The NIST handbook says: A probability distribution is characterized by location and scale parameters ... a location parameter simply shifts the graph left or right on the horizontal axis ... The effect of the scale parameter [with a value greater than 1] is to stretch out the graph ... The standard form of any distribution is the form that has location parameter zero and scale parameter one. Think of a data sample as a collection of empirical probability distributions for each of the predictors and outcomes. For the example in a comment, temperatures expressed either as degrees F or degrees C, there is a transformation with respect to both location and scale. Transformation from degrees C to degrees F changes the numerical values of degrees by a factor of $\frac {9}{5}$ (along with a subsequent location change of 32 degrees F). The variance of temperature values thus also changes by a factor of $\frac{81}{25}$. By "stretching out the graph," a transformation of the scale of a predictor changes the numerical values for the predictor and for its variance. Nevertheless, the underlying physical reality is the same. With standard multiple regression, a change in the units of a predictor can be counterbalanced by a corresponding change in the units of the regression coefficients. If temperature in degrees C is a predictor in a model and you switch from degrees C to degrees F then (along with altering the intercept appropriately) you multiply the regression coefficient for temperature by a factor of $\frac{5}{9}$ and the model is the same. In that sense, the modeling process is "scale invariant." Similarly, correlation coefficients are scale invariant as the calculation corrects for the scales of the variables. Regression modeling processes that differentially penalize predictors, in contrast, fundamentally depend on comparisons among the numerical values of the various predictors. That includes approaches like LASSO, ridge regression, principal components regression (PCR), and partial least squares (PLS). Say that both temperature and distance are predictors in a penalized model. In building the model you need to have a way to decide whether temperature or distance is relatively more important to weight in the model, yet all you have to work with is their numerical values. Those numerical comparisons between the temperature and distance predictor values will differ depending on whether temperature is expressed in degrees F or C, and on whether distances are expressed in miles or in millimeters. Such a modeling process is not scale invariant. With respect to PCR and PLS, you can see this in the problems that they solve at each step, as expressed on page 81 of ESL, second edition: ... partial least squares seeks directions that have high variance [of predictors] and have high correlation with the response, in contrast to principal components regression which keys only on high variance... In particular, the $m$th principal component direction $v_m$ solves: $$ \operatorname{max}_\alpha \operatorname{Var}(\mathbf{X} \alpha) $$ $$ \text{subject to } \lVert \alpha \rVert =1,\: \alpha^T \mathbf{S} v_{\ell} =0, \: \ell =1,\dots,m−1,$$ where $\mathbf{S}$ is the sample covariance matrix of the [vectors of predictor values, indexed by $j$ for predictors] $\mathbf{x}_j$. 
The conditions $\alpha^T \mathbf{S} v_{\ell} =0$ ensure that $\mathbf{z}_m = \mathbf{X} \alpha$ is uncorrelated with all the previous linear combinations $\mathbf{z}_{\ell} = \mathbf{X} v_{\ell}$. The $m$th PLS direction $\hat{\varphi}_m$ solves: $$\operatorname{max}_{\alpha} \operatorname{Corr}^2(\mathbf{y},\mathbf{X}\alpha)\operatorname{Var}(\mathbf{X} \alpha) $$ $$\text{subject to } \lVert \alpha \rVert =1,\: \alpha^T \mathbf{S} \hat{\varphi}_{\ell} =0,\: \ell=1,\dots,m−1.$$ Here, the unit-norm vector $\alpha$ is the relative weighting of the predictors that will be added to the model at that step. $\operatorname{Var}(\mathbf{X} \alpha)$ is the variance among the observations of that weighted sum of predictor values. If the scales of the predictor values are transformed, that variance and thus the model itself is fundamentally transformed in a way that can't be undone by a simple change of units of the regression coefficients. So these are not scale-invariant modeling procedures. The usual procedure to maintain equivalence among continuous-valued predictors for such modeling approaches is to transform them to zero mean and unit standard deviation before anything that requires comparisons among predictors. Categorical predictors require some thought in terms of how to put them into "equivalent" scales with respect to each other or to continuous predictors, particularly if there are more than 2 categories. See this page and its links for some discussion.
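As a small sketch of that last standardization step (the column names are invented for illustration): center and scale the continuous predictors once, before anything that compares them.

set.seed(1)
X <- cbind(temp_C = rnorm(50, 20, 5), dist_km = rnorm(50, 100, 30))   # hypothetical predictors
X_std <- scale(X, center = TRUE, scale = TRUE)   # each column now has mean 0 and SD 1
round(colMeans(X_std), 10); apply(X_std, 2, sd)  # check: ~0 and 1
pc <- prcomp(X, center = TRUE, scale. = TRUE)    # prcomp can apply the same standardization internally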
55,613
What does it mean to say that a regression method is (not) "scale invariant"?
I think the comment by user "erikperkerson" was short and highly informative: I was under the impression that scale invariant usually means invariant with respect to a dilation (a proper linear mapping, like $f(x) = kx$ for some constant $k$), such as the unit conversion from miles to millimeters that EdM suggested. The example of converting C to F is not a dilation, because it is an affine linear mapping like $f(x) = kx + b$ instead of a proper linear mapping. Invariance under affine linear mappings would imply both scale and shift invariance.
55,614
How to evaluate logistic regression on continuous metric by having binary 0/1 data
Remember that a logistic regression outputs a probability, not a category. Your idea for using square loss is fine. In fact, that is known as the Brier score. If your label is $1$ and your predicted probability is $0.75$, your Brier score loss for that point is $(1-0.75)^2 = 0.0625$. If your next label is $0$ and your predicted probability is $0.6$, your Brier score loss for that point is $(0-0.6)^2=0.36$. Add them up to get $0.4225$, or average them to get $0.21125$, which is the conventional form of the Brier score for this two-point model: $$ \text{Brier Score} = \frac{1}{n}\sum_{i=1}^n (y_i - \hat{p}_i)^2 $$ The Brier score is one example of a strictly proper scoring rule. The other famous one, which might be preferred, is the log loss: $-\sum_i \left[ y_i \log\hat{p}_i + (1-y_i) \log(1-\hat{p}_i)\right]$, where the minus sign makes it a loss to be minimized. ($y_i$ is the true label; $\hat{p}_i$ is the predicted probability.) There are other strictly proper scoring rules, but these are the biggies. Notably, absolute loss is not a proper scoring rule: (Why) Is absolute loss not a proper scoring rule?.
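In code, both rules are one-liners; a short sketch reproducing the two-point example above:

y <- c(1, 0)        # true labels
p <- c(0.75, 0.60)  # predicted probabilities
mean((y - p)^2)                              # Brier score: 0.21125 (the sum is 0.4225)
-mean(y * log(p) + (1 - y) * log(1 - p))     # log loss: about 0.602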
55,615
what does it mean that there is leakage of information when one uses a test set?
Data leakage occurs when there is information in your test set's predictors that wouldn't be available when the model is "live." There are egregious and subtle cases of data leakage. Egregious case. Say the goal is predicting retention of an insurance policy during the first year. At month 3 there is a scheduled check-in with a company representative, and after the check-in the data element had_check_in flips from False to True. A junior modeler is working on a cross-sectional data set (no time dimension) with information from the last two years, and had_check_in is one of the variables. The modeler concludes that this variable is very important, because when it is True, the policy holder is more likely to keep the policy throughout the period of study. Clearly that contains information from the future, and in a live run of the model, all had_check_in values would be False for new cohorts! Subtle case. Suppose that now the junior modeler is approaching the above problem with a time dimension, having learned from the last mistake. He takes a holdout set of 2000 policy holders (across all time) and uses the remaining policy holders' retention values to build a model that, among other variables, uses month and year. Then he runs the predictions on this test set to get holdout metrics. While this is unlikely to be a disaster, there's information leakage in that aspects of the particular months and years can be learned from the members of the training set. In a prediction scenario, you couldn't estimate the properties of the future time period from actual policy holders, so the holdout metrics are likely to be optimistic. I've seen both cases in practice, but the subtle case is much more common. It makes me hesitant to use automatic cross validation routines from sklearn, etc., because I feel these situations need to be carefully thought out in general.
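A bare-bones sketch of the time-based alternative for the subtle case (all column names and data are hypothetical): hold out the most recent cohort instead of a random slice, so nothing about the test period leaks into training.

set.seed(1)
df <- data.frame(start_year = sample(2016:2019, 500, replace = TRUE),
                 retained   = rbinom(500, 1, 0.7))

# random holdout: training rows share calendar periods with test rows (possible leakage)
test_random_idx <- sample(nrow(df), 100)

# time-based holdout: train on earlier cohorts, evaluate on the latest cohort only
train <- df[df$start_year <  2019, ]
test  <- df[df$start_year == 2019, ]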
55,616
Are residuals random variables?
Let's say that your model is $$y=X\beta+\epsilon,\quad E[y]=X\beta,\quad \epsilon\sim N(0,\sigma^2 I).$$ You estimate the $\beta$ coefficients by $$\hat\beta=(X'X)^{-1}X'y$$ and you get $$\hat{y}=Hy,\quad H=X(X'X)^{-1}X'$$ where $H$ is a symmetric idempotent matrix, and $$\hat\epsilon=y-Hy=(I-H)y,\quad E[\hat\epsilon]=0,\quad \text{Cov}(\hat\epsilon)=(I-H)\sigma^2.$$ You can see that, while the errors are independent and homoscedastic, the residuals are neither independent ($I-H$ is not a diagonal matrix) nor homoscedastic (the diagonal elements of $I-H$ are not equal). Moreover, the residuals' variance and covariance depend on $H$, therefore on your data $X$. The residual vector is a transformation of $\epsilon$: \begin{align*} \hat\epsilon &= (I-H)y=(I-H)X\beta+(I-H)\epsilon\\ &=[X-X(X'X)^{-1}(X'X)]\beta+(I-H)\epsilon\\ &=(I-H)\epsilon \end{align*} so it is a random variable, but is not an estimator of $\epsilon$. EDIT In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data. For example, if $X_1,\dots,X_n$ is a random sample, you can calculate the sample mean, i.e. the mean of observed realizations of $X_1,\dots,X_n$, to estimate $E[X]$. Since the error term is unobserved and unobservable, the residuals are not and cannot be observed realizations of the error term, $\hat\epsilon$ is not and cannot be an estimator of $\epsilon$ (I'm using your phrasing here, look at whuber's enlightening comments.) However, since the residual random vector is a transformation of $\epsilon$, a transformation which depends on your model, you can use $\hat\epsilon$ as a proxy for the error term, where "proxy" means: an observed variable that is used in place of an unobserved variable (clearly, proxy variables are not estimators.) If your residuals behave as you would expect from the error term, then you can hope that your model is 'good'. If residuals are 'strange', you do not think that you have estimated a 'true' strange error term: you think that your model is wrong. For example, the error term in your model is not a 'true' error term, but depends on missing transformations of predictors or outcome, or on omitted predictors (you can find several examples in Weisberg, Applied Linear Regression, chap. 8.) Let me stress this point. You get some residuals, if you like them then you accept them, otherwise you change your model, i.e. you change $X$, therefore $H$, therefore $I-H$, therefore $(I-H)\epsilon$. If you don't like the residuals you get, then you change them. Rather a bizarre "estimator"! You keep it if you like it, otherwise you change it, and change it again, until you like it. If you were sure that your model is the 'true' model, you could think of your residuals as (improper) estimators of the error term, but you'll never know that your model is 'true'. Thinking that the residuals estimate the errors is wishful thinking. IMHO, of course. EDIT 2 We need an estimate of $\sigma^2$ to obtain an estimation of the covariance matrix of $\hat\beta$. And we actually use residuals. 
Let's recall that the residuals are not an estimator of the error term, because: an estimator is a function of observable random variables, and an estimate is a function of their observed realized values, but the error term is unobservable; the error term is a random variable, is not a distributional property (see whuber's comments); the $\hat\epsilon$ random variable is a transformation of $\epsilon$, a transformation which depends on the model; if the model is correctly specified, the consistency of $\hat\beta$ implies that $\hat\epsilon\rightarrow\epsilon$ as $n\rightarrow\infty$, but the finite-sample properties of $\hat\epsilon$ always differ from those of $\epsilon$ (residuals are correlated and heteroscedastic). Moreover, $\text{Var}(\hat\epsilon_i)=(1-h_{ii})\sigma^2$, where $h_{ii}$ is a diagonal element of $H$ and $1-h_{ii}<1$, so the variance of $\hat\epsilon_i$ is less than $\sigma^2$ for every $i$. However, if the model is correctly specified, then we can use the method of moments to get a biased estimator of $\sigma^2$: $$\hat\sigma^2=\frac{1}{n}\sum_i\hat\epsilon_i^2,\quad E[\hat\sigma^2]=\frac{n-k}{n}\sigma^2$$ and the unbiased estimator is $$s^2=\frac{1}{n-k}\sum_i\hat\epsilon_i^2$$ where $k$ is the number of columns of $X$, the number of elements in $\beta$. But this is a very strong assumption. For example, if the model is overspecified, if we include irrelevant predictors, the variance of $\hat\beta$ will increase. If the model is underspecified, if we omit relevant predictors, $\hat\beta$ will generally be biased and inconsistent, the covariance matrix for $\hat\beta$ will be incorrect (see Davidson & MacKinnon, Econometric Theory and Methods, chap. 3 for more details.) Therefore, we can't use residuals as proper estimators of the error term or of its distributional properties. At first, we must use residuals to "estimate" (loosely speaking) the "goodness" of our model, and eventually to change it, then we use residuals as a transformation of the error term, as observable quantities in place of unobservable realizations of the error term, hoping that the transformation is "good enough", that we can indirectly get a reasonable estimate for $\sigma^2$.
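A small numerical sketch of the algebra above (simulated data): the residual covariance is $(I-H)\sigma^2$, so each residual variance sits below $\sigma^2$ and the residuals are correlated, even though the errors are iid.

set.seed(1)
n <- 20
X <- cbind(1, rnorm(n))                  # design matrix with an intercept
beta <- c(1, 2); sigma <- 1.5
y <- X %*% beta + rnorm(n, sd = sigma)   # iid N(0, sigma^2) errors

H <- X %*% solve(t(X) %*% X) %*% t(X)    # hat matrix
e <- (diag(n) - H) %*% y                 # residuals: identical to resid(lm(y ~ X[, 2]))

round((1 - diag(H)) * sigma^2, 3)        # theoretical Var(e_i) = (1 - h_ii) sigma^2, all below sigma^2 = 2.25
sum(e^2) / (n - 2)                       # s^2, the unbiased estimate of sigma^2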
55,617
What is better: Cross validation or a validation set for hyperparameter optimization?
Cross validation is more robust, so in general it is better, but the marginal benefit you get decreases as the dataset size increases. For small datasets it is definitely recommended. On the other hand, it may not be the best choice due to computational complexity. For example, training might be very expensive, as in deep neural nets. In that case, a representative validation set is preferable to a statistical average over validation folds.
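For concreteness, a base-R sketch of both options on simulated data (in practice the loop would sit inside your hyperparameter search):

set.seed(1)
d <- data.frame(x = rnorm(200)); d$y <- 3 * d$x + rnorm(200)

# option 1: a single validation set - one fit per candidate configuration
idx <- sample(nrow(d), 50)
fit <- lm(y ~ x, data = d[-idx, ])
mse_val <- mean((d$y[idx] - predict(fit, d[idx, ]))^2)

# option 2: 5-fold cross validation - five fits per candidate, but a steadier estimate
folds <- sample(rep(1:5, length.out = nrow(d)))
mse_cv <- mean(sapply(1:5, function(k) {
  f <- lm(y ~ x, data = d[folds != k, ])
  mean((d$y[folds == k] - predict(f, d[folds == k, ]))^2)
}))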
55,618
multiple regression with continuous and binary regressors
Yes, that is exactly what you would do. The software will then estimate an intercept along with a coefficient estimate for each variable.
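For example, in R (the variable names are invented): the binary regressor is entered exactly like the continuous one.

set.seed(1)
d <- data.frame(age = rnorm(100, 50, 10), treated = rbinom(100, 1, 0.5))
d$y <- 2 + 0.1 * d$age + 1.5 * d$treated + rnorm(100)
fit <- lm(y ~ age + treated, data = d)   # one intercept plus one coefficient per regressor
summary(fit)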
55,619
Predicting if some variable is $\geq C$?
Interesting question. First of all, classic linear regression was developed for applications where the scatter is normally distributed. If you plot the residual distribution it should have the classic bell shape. When your data adhere to these model prerequisites, you can just as well use linear regression. The confidence bounds for linear regression predictions are also known; it is advisable that you use these. When your predictor variables come from discrete, or multimodal continuous, distributions then you can benefit from a nonparametric classifier. For example the histogram classifier or the K-nearest neighbor classifier can be used. I presuppose that training a neural network for the prediction of $y$ would be somewhat 'over-the-top'. So my advice is to let your choice be guided by the distributions of your data.
55,620
Predicting if some variable is $\geq C$?
As you write, either approach is feasible. I don't think you can give a general recommendation. In some situations, there may be more existing knowledge pertaining to one than to the other approach - for instance, if you are forecasting a time series, there is much more work on forecasting a continuous target variable (approach 1) than a binary one (approach 2). So I would recommend that you try both approaches and see which one works better for your problem at hand. Just be sure to use an appropriate measure of "better" - not accuracy, that is.
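A rough sketch of trying both routes on simulated data (the cutoff C and all names are arbitrary, and in practice you would compare the scores on held-out data rather than the training set):

set.seed(1)
C <- 1
d <- data.frame(x = rnorm(300)); d$y <- 1.2 * d$x + rnorm(300)
d$z <- as.numeric(d$y >= C)                          # the binary target

# approach 1: model the continuous y, then convert to P(y >= C) using the residual SD
f1 <- lm(y ~ x, data = d)
p1 <- 1 - pnorm(C, mean = fitted(f1), sd = summary(f1)$sigma)

# approach 2: model the indicator directly
f2 <- glm(z ~ x, family = binomial, data = d)
p2 <- fitted(f2)

# compare with a proper scoring rule (here the Brier score), not accuracy
c(brier1 = mean((d$z - p1)^2), brier2 = mean((d$z - p2)^2))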
55,621
How are eigenvalues/singular values related to variance (SVD/PCA)?
The variance of any $p$-vector $x$ is given by $$\operatorname{Var}(x) = x^\prime C x.\tag{1}$$ We may write $x^\prime$ as a linear combination of the rows of $V,$ $v_1,$ $v_2,\ldots,$ $v_p,$ because $$x^\prime = x^\prime\mathbb{I} = x^\prime V V^\prime = (x^\prime V)_1v_1 + (x^\prime V)_2v_2 + \cdots + (x^\prime V)_pv_p.$$ The coefficient of $v_i$ in this linear combination is $(x^\prime V)_i = (V^\prime x)_i.$ The diagonalization permits you to rewrite these relations more simply as $$\operatorname{Var}(x) = x^\prime(V\Lambda V^\prime) x = \sum_{i=1}^p \lambda_{ii} (V^\prime x)_i^2.$$ In other words, the variance of $x$ is found as the sum of $p$ terms, each obtained by (a) transforming to $y=V^\prime x,$ then (b) squaring each coefficient $y_i,$ and (c) multiplying the square by $\lambda_{ii}$. This enables us to understand the action of $C$ in simple terms: $y$ is just another way of expressing $x$ (it uses the row vectors of $V$ as a basis) and its terms contribute their squares to the variance, weighted by $\lambda_{ii}.$ The relationship to PCA is the following. It makes little sense to maximize the variance, because by scaling $x$ we can make the variance arbitrarily large. But if we think of $x$ solely as determining a linear subspace, (if you like, an unsigned direction) we may represent that direction by scaling $x$ to have unit length. Thus, assume $||x||^2=1.$ Because $V$ is an orthogonal matrix, $y$ also has unit length: $$||y||^2 = y^\prime y = (V^\prime x)^\prime(V^\prime x) = x^\prime(VV^\prime) x = x^\prime \mathbb{I}x = ||x||^2= 1.$$ To make the variance of $x$ as large as possible, you want to put as much weight as possible on the largest eigenvalue (the largest $\lambda_{ii}$). Without any loss of generality you can arrange the rows of $V$ so that this is $\lambda_{11}.$ A variance-maximizing vector therefore is $y^{(1)} = (1,0,\ldots,0)^\prime.$ The corresponding $x$ is $$x^{(1)} = V y^{(1)},$$ the first column of $V.$ This is the first principal component. Its variance is $\lambda_{11}.$ By construction, it is a unit vector with the largest possible variance. It represents a linear subspace. The rest of the principal components are obtained similarly from the other columns of $V$ because (by definition) those columns are mutually orthogonal. When all the $\lambda_{ii}$ are distinct, this method gives a unique set of solutions: The principal components of $C$ are the linear subspaces corresponding to the columns of $V.$ The variance of column $i$ is $\lambda_{ii}.$ More generally, there may be infinitely many ways to diagonalize $C$ (this is when there are one or more eigenspaces of dimension greater than $1,$ so-called "degenerate" eigenspaces). The columns of any particular such $V$ still enjoy the foregoing properties. $V$ is usually chosen so that $\lambda_{11}\ge\lambda_{22}\ge\cdots\ge\lambda_{pp}$ are the principal components in order.
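A quick numerical check of this relationship (simulated data): projecting the data onto each column of $V$ gives variables whose sample variances are exactly the eigenvalues.

set.seed(1)
X <- matrix(rnorm(200 * 3), 200, 3) %*% matrix(c(2, 1, 0, 0, 1, 0, 0, 0, 0.3), 3, 3)
C <- cov(X)                        # sample covariance matrix
ev <- eigen(C)                     # V = ev$vectors, Lambda = diag(ev$values)
scores <- X %*% ev$vectors         # data expressed in the eigenvector basis
round(apply(scores, 2, var), 4)    # variances of the projections ...
round(ev$values, 4)                # ... equal the eigenvalues, largest first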
55,622
Why would you want to fit/use a Poisson regression instead of Negative Binomial? [duplicate]
For many practical applications the negative binomial distribution is more appropriate and is often a reasonable default choice. This is the case whenever we assume that risk varies across observational units (such as patients, hospitals, ...). The Poisson distribution may be appropriate e.g. when it is very clear that units are truly identical (e.g. identical atoms) and should have the same event rate. The negative binomial is rather easy to interpret: each unit has a Poisson distribution, with the mean rate varying across units according to a Gamma distribution. Very reasonable alternatives include a Poisson where the logarithm of the mean rate varies across units according to a normal distribution (i.e. a Poisson generalized mixed effects model with normally distributed random effects on the log-mean rate). This approximates a negative binomial distribution reasonably well - a log-normal is pretty close to a gamma for suitable parameters, and let's be honest, we usually don't really know what distribution the event rate follows across units.
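A short sketch of how this looks in practice (simulated data with gamma-varying rates, using MASS::glm.nb): the Poisson fit shows clear overdispersion, which the negative binomial absorbs.

library(MASS)
set.seed(1)
n  <- 500
x  <- rnorm(n)
mu <- exp(0.5 + 0.8 * x) * rgamma(n, shape = 2, rate = 2)   # unit-specific rates vary around the regression mean
y  <- rpois(n, mu)                                          # marginally negative binomial counts

fp <- glm(y ~ x, family = poisson)
sum(residuals(fp, type = "pearson")^2) / fp$df.residual     # dispersion well above 1 -> overdispersed

fnb <- glm.nb(y ~ x)
fnb$theta                                                   # estimated negative binomial shape parameter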
55,623
Why would you want to fit/use a Poisson regression instead of Negative Binomial? [duplicate]
The Poisson distribution has a very simple heuristic for its single parameter: the rate of occurrence of a rare event, with events happening independently. Contrast that to the Wikipedia formulation of the negative binomial distribution: In probability theory and statistics, the negative binomial distribution is a discrete probability distribution that models the number of failures in a sequence of independent and identically distributed Bernoulli trials before a specified (non-random) number of successes (denoted $r$) occurs. Most scientists are well acquainted personally with situations involving many failures before a limited number of successes. Nevertheless, it can be hard to explain (for me, at least) what is going on with a certain set of observations that leads them to follow a negative binomial distribution. The rate in Poisson is much easier to interpret in physical terms, despite the sometimes counterintuitive appearance of a set of independent events. So in the spirit of "all models are wrong but some are useful" one might prefer to start with Poisson and only move on to a negative binomial when it's clear that the Poisson is inadequate.
55,624
Should I use highly skewed features in my model?
An appropriate question. The added value of preprocessing depends on the type of classifier you will train. If you use nonparametric classifiers like C4.5 (ID3), CART, the multinomial classifier, the webservice insight classifiers, random forests or the like, transformation of your skewed feature values is unnecessary: their algorithms use histogram-like criteria to choose the optimal classifier parameters. Classifiers like (deep) neural networks, discriminant analysis, support vector machines and logistic regression all use some sort of (local) distance measure. For such models a log-transformation or a power transformation (e.g. $\sqrt{x}$) is highly recommended for your use case.
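A small sketch of the effect on a skewed feature (simulated log-normal values; the moment-based skewness function is written out here to avoid extra packages):

set.seed(1)
x <- rlnorm(1000, meanlog = 0, sdlog = 1)          # heavily right-skewed feature
skewness <- function(v) mean((v - mean(v))^3) / sd(v)^3
c(raw = skewness(x), log = skewness(log(x)), sqrt = skewness(sqrt(x)))
# the log-transformed values are roughly symmetric; use log1p(x) if zeros can occur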
Should I use highly skewed features in my model?
Appropriate question. The added value of preprocessing depends on the type of classifier you will train. If you use nonparametric classifiers like C4.5 (ID3), CART, the multinomial classifier, the web
Should I use highly skewed features in my model? Appropriate question. The added value of preprocessing depends on the type of classifier you will train. If you use nonparametric classifiers like C4.5 (ID3), CART, the multinomial classifier, the webservice insight classifiers, random forests or the like - transformation of your skewed feature values is unnecessary. Their algorithms use histogram-like criteria to choose the optimal classifier parameters. Classifiers like (deep) neural networks, discriminant analysis, support vector machines, logistic regression - they all use some sort of (local) distance measure. For such models a log-transformation or a power transformation (e.g. $\sqrt{x}$) are highly recommended for your use case.
Should I use highly skewed features in my model? Appropriate question. The added value of preprocessing depends on the type of classifier you will train. If you use nonparametric classifiers like C4.5 (ID3), CART, the multinomial classifier, the web
55,625
How to find the 95% confidence interval when there is outliers?
Bootstrap might be one way to do this. In python... from sklearn.utils import resample import numpy as np x = np.array([8.5, 8.0, 16.0, 12.0, 2.5, 515.0, 5.0, 15.0, 13.0, 2.0, 950.0, 15.0, 9.0, 6.0, 12.0, 5.5, 19.5, 7.5, 37.5, 12.5]) xb = np.array([ resample(x).mean() for j in range(10000)]) low, high = np.quantile(xb, [0.025, 0.975]) This yields a bootstrap CI of (9.95, 200.72). However, I think there is something driving higher costs. Because your data are from older patients, I imagine some patients have more co-morbidities than others which may lead to complications and hence higher costs. In the absence of additional information, or strong assumptions on the data generating processes, I think this is going to be the best you can do.
How to find the 95% confidence interval when there is outliers?
Bootstrap might be one way to do this. In python... from sklearn.utils import resample import numpy as np x = np.array([8.5, 8.0, 16.0, 12.0, 2.5, 515.0, 5.0, 15.0, 13.0, 2.0, 950.0, 15.0, 9.0, 6.0
How to find the 95% confidence interval when there is outliers? Bootstrap might be one way to do this. In python... from sklearn.utils import resample import numpy as np x = np.array([8.5, 8.0, 16.0, 12.0, 2.5, 515.0, 5.0, 15.0, 13.0, 2.0, 950.0, 15.0, 9.0, 6.0, 12.0, 5.5, 19.5, 7.5, 37.5, 12.5]) xb = np.array([ resample(x).mean() for j in range(10000)]) low, high = np.quantile(xb, [0.025, 0.975]) This yields a bootstrap CI of (9.95 , 200.72). However, I think there is something driving higher costs. Because your data are from older patients, I imagine some patients have more co-morbidities than others which may lead to complications and hence higher costs. In the absence if additional information, or strong assumptions on the data generating processes, I think this is going to be the best you can do.
How to find the 95% confidence interval when there is outliers? Bootstrap might be one way to do this. In python... from sklearn.utils import resample import numpy as np x = np.array([8.5, 8.0, 16.0, 12.0, 2.5, 515.0, 5.0, 15.0, 13.0, 2.0, 950.0, 15.0, 9.0, 6.0
55,626
How to find the 95% confidence interval when there is outliers?
Quick preliminary results from R: x=c(8.5, 8.0, 16.0, 12.0, 2.5, 515.0, 5.0, 15.0, 13.0, 2.0, 950.0, 15.0, 9.0, 6.0, 12.0, 5.5, 19.5, 7.5, 37.5, 12.5) summary(x) Min. 1st Qu. Median Mean 3rd Qu. Max. 2.000 7.125 12.000 83.575 15.250 950.000 (1) t interval for mean. Assumes normal data, which seems a bad assumption here. t.test(x) ... 95 percent confidence interval: -25.47337 192.62337 (2) Nonparametric Wilcoxon CI for population median. May be slightly inaccurate because of ties in your data. wilcox.test(x, conf.int=T) ... 95 percent confidence interval: 8.500058 21.750047 (3) 95% nonparametric bootstrap quantile CI for population mean: $(10, 200).$ set.seed(2020) a.re=replicate(10^4, mean(sample(x, rep=T))) quantile(a.re, c(.025,.975)) 2.5% 97.5% 9.9750 200.5269 Leave comments/questions as appropriate. More later.
How to find the 95% confidence interval when there is outliers?
Quick preliminary results from R: x=c(8.5, 8.0, 16.0, 12.0, 2.5, 515.0, 5.0, 15.0, 13.0, 2.0, 950.0, 15.0, 9.0, 6.0, 12.0, 5.5, 19.5, 7.5, 37.5, 12.5) summary(x) Min. 1st Qu. Median Mean
How to find the 95% confidence interval when there is outliers? Quick preliminary results from R: x=c(8.5, 8.0, 16.0, 12.0, 2.5, 515.0, 5.0, 15.0, 13.0, 2.0, 950.0, 15.0, 9.0, 6.0, 12.0, 5.5, 19.5, 7.5, 37.5, 12.5) summary(x) Min. 1st Qu. Median Mean 3rd Qu. Max. 2.000 7.125 12.000 83.575 15.250 950.000 (1) t interval for mean. Assumes normal data, which seems a bad assumption here. t.test(x) ... 95 percent confidence interval: -25.47337 192.62337 (2) Nonparametric Wilcoxon CI for population median. May be slightly inaccurate because of ties in your data. wilcox.test(x, conf.int=T) ... 95 percent confidence interval: 8.500058 21.750047 (3) 95% nonparamatric bootstrap quantile CI for population mean: $(10, 200).$ set.seed(2020) a.re=replicate(10^4, mean(sample(x, rep=T))) quantile(a.re, c(.025,.975)) 2.5% 97.5% 9.9750 200.5269 Leave comments/questions as appropriate. More later.
How to find the 95% confidence interval when there is outliers? Quick preliminary results from R: x=c(8.5, 8.0, 16.0, 12.0, 2.5, 515.0, 5.0, 15.0, 13.0, 2.0, 950.0, 15.0, 9.0, 6.0, 12.0, 5.5, 19.5, 7.5, 37.5, 12.5) summary(x) Min. 1st Qu. Median Mean
55,627
covariance ,correlation, within subject and between subjects
The toy dataset provided isn't very useful for explaining these concepts so I will try my best to explain in an easy-to-understand way. The covariance of two variables is a measure of how much one variable goes up (or down) when the other goes up (or down). More technically, it is the average of the product of the differences of each variable from their expected values. It is calculated by first calculating the mean of each variable, then the difference between each measurement and the mean and multiplying the difference in one variable by that for the other variable. Then these are added up and the sum is divided by the number of observations. $$ \text{Cov}(X,Y) = \frac{1}{n} \sum_{i=1}^{n}(x_i- \mu_X)(y_i- \mu_Y) $$ Strictly speaking this formula is valid when calculating the covariance in a population. If we are calculating the covariance from a sample then we divide by $n-1$ not $n$. This is because in a sample we have used up 1 degree of freedom when we used it to calculate the mean of the sample. This is a rather non technical explanation. I hope the rigour police are off-duty today, or if not then I hope they forgive me ! Obviously in a large sample the difference will be tiny. Side note: A long time ago I was once taught that if you are in a situation where the difference between dividing by $n-1$ or $n$ is important then you probably have much more important things to worry about. Correlation is simply the covariance normalised by the variances of the two variables, so that it is bounded between -1 and +1. $$ \text{Cor}(X,Y) = \frac{\text{Cov}(X,Y)}{\sigma_X \sigma_Y}$$ Within-subject variance is simply the variance of a set of measures within the same subject. Between-subject variance doesn't really make sense. It could just be the covariance of measures between two subjects. However I am guessing that your question comes from the analysis of experiments involving repeated measures where variables are often described as "within subject" or "between subject" which gives rise to the terms "within subject variation" and "between subject variation" - note it is "variation" and not "variance". A good example of a "within subject" variable is blood pressure - it varies within each person. A good example of a "between subject" variable is blood type - this is fixed within each person, but varies between subjects.
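A small numerical sketch of the formulas above (the data are made up; note that NumPy's cov uses the n-1 denominator by default, while corrcoef is unaffected by that choice):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 1.9, 3.5, 4.2, 5.8])
n = len(x)

cov_pop = np.sum((x - x.mean()) * (y - y.mean())) / n           # divide by n (population form)
cov_sample = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)  # divide by n-1 (sample form)
corr = cov_pop / (x.std() * y.std())                            # population sds pair with the /n form

print(cov_sample, np.cov(x, y)[0, 1])    # should agree
print(corr, np.corrcoef(x, y)[0, 1])     # should agree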
covariance ,correlation, within subject and between subjects
The toy dataset provided isn't very useful for explaining these concepts so I will try my best to explain in an easy-to-understand way. The covariance of two variables is a measure of how much one var
covariance ,correlation, within subject and between subjects The toy dataset provided isn't very useful for explaining these concepts so I will try my best to explain in an easy-to-understand way. The covariance of two variables is a measure of how much one variable goes up (or down) when the other goes up (or down). More technically, it is the average of the product of the differences of each variable from their expected values. It is calculated by first calculating the mean of each variable, then the difference between each measurement and the mean and multiplying the difference in one variable by that for the other variable. Then these are added up and the sum is divided by the number of observations. $$ \text{Cov}(X,Y) = \frac{1}{n} \sum_{i=1}^{n}(x_i- \mu_X)(y_i- \mu_Y) $$ Strictly speaking this formula is valid when calculating the covariance in a population. If we are calculating the covariance from a sample then we divide by $n-1$ not $n$. This is because in a sample we have used up 1 degree of freedom when we used it to calculate the mean of the sample. This is a rather non technical explanation. I hope the rigour police are off-duty today, or if not then I hope they forgive me ! Obviously in a large sample the difference will be tiny. Side note: A long time ago I was once taught that if you are in a situation where the difference between dividing by $n-1$ or $n$ is important then you probably have much more important things to worry about. Correlation is simply the covariance normalised by the variances of the two variables, so that it is bounded between -1 and +1. $$ \text{Cor}(X,Y) = \frac{\text{Cov}(X,Y)}{\sigma_X \sigma_Y}$$ Within-subject variance is simply the variance of a set of measures within the same subject. Between-subject variance doesn't really make sense. It could just be the covariance of measures between two subjects. However I am guessing that your question comes from the analysis of experiments involving repeated measures where variables are often described as "within subject" or "between subject" which gives rise to the terms "within subject variation" and "between subject variation" - note it is "variation" and not "variance". A good example of a "within subject" variable is blood pressure - it varies within each person. A good example of a "between subject" variable is blood type - this is fixed within each person, but varies between subjects.
covariance ,correlation, within subject and between subjects The toy dataset provided isn't very useful for explaining these concepts so I will try my best to explain in an easy-to-understand way. The covariance of two variables is a measure of how much one var
55,628
Easy vs difficult distributions for sampling
Computers can only do pseudo-random sampling directly from a Uniform Distribution. Sampling from any other distribution requires some numerical transformation, such as Inverse Transform Sampling. This method, however, only allows sampling from distributions that have a defined Cumulative Distribution Function that can be inverted (at least numerically) - and this is the case for most common distributions such as Normal, Exponential, Beta, etc. - these are the "easy" ones. However, often we might need to sample from a distribution whose CDF we cannot (or do not want to) compute, and we only have a Probability Density Function. This is indeed pretty common, as you can often easily create a PDF with a desired shape, but do not have the means to compute its integral. These cases, where you cannot use inverse transform sampling, are the "hard" ones, where you need to use Rejection or Importance sampling, with the help of a known distribution that you can sample from directly (via ITS).
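A minimal sketch of both situations (the target densities here are assumptions chosen for illustration): inverse transform sampling for an "easy" distribution with an invertible CDF, and rejection sampling for a density known only up to its shape.

import numpy as np

rng = np.random.default_rng(0)

# "Easy": exponential(rate = 2) via inverse transform, F^{-1}(u) = -log(1 - u)/2
u = rng.uniform(size=100_000)
easy = -np.log(1.0 - u) / 2.0

# "Hard": density known only up to proportionality, f(x) proportional to exp(-x**4) on [-2, 2]
def f_unnorm(x):
    return np.exp(-x**4)

# Rejection sampling with a uniform proposal on [-2, 2]; M bounds f_unnorm there
M = 1.0
x_prop = rng.uniform(-2, 2, size=100_000)
accept = rng.uniform(0, M, size=100_000) < f_unnorm(x_prop)
hard = x_prop[accept]

print(easy.mean(), hard.mean())   # roughly 0.5 and roughly 0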
Easy vs difficult distributions for sampling
Computers can only do pseudo random sampling directly from a Uniform Distribution. Sampling from any other distribution requires some numerical transformations, such as Inverse Transform Sampling. Th
Easy vs difficult distributions for sampling Computers can only do pseudo random sampling directly from a Uniform Distribution. Sampling from any other distribution requires some numerical transformations, such as Inverse Transform Sampling. This method, however, only allows to sample from distribution that have a defined Cumulative Distribution Function that can be inverted - and this is the case for most common distributions such as Normal, Exponential, Beta, etc. - these are the "easy" ones. However, often times we might need to sample from a distribution whose CDF we cannot (or do not want to) compute, and we only have a Probability Density Function. This is indeed pretty standard, as you can often easily create a PDF with a desired shape, but do not have the means to compute its integral. These cases, where you cannot use inverse transform sampling, are the "hard" ones, where you need to use Rejection or Importance sampling, with the help of a known distribution that you can sample from directly (via ITS).
Easy vs difficult distributions for sampling Computers can only do pseudo random sampling directly from a Uniform Distribution. Sampling from any other distribution requires some numerical transformations, such as Inverse Transform Sampling. Th
55,629
Rubin's rule from scratch for multiple imputations
After multiple imputation of data sets (MI) and analyzing each of the imputed sets separately, Rubin's rules do have you take the mean over those imputations as the point estimate. For inference, confidence intervals and so forth, you then determine the overall variance of the point estimate as a combination of within-imputation and between-imputation variances. This paper by Marshall, Altman, et al. summarizes the rules nicely: "For a single population parameter of interest, $Q$, e.g. a regression coefficient, the MI overall point estimate is the average of the $m$ estimates of $Q$ from the imputed datasets, $\bar Q = \frac{1}{m} \sum_{i=1}^m \hat Q_i$. The associated total variance for this overall MI estimate is $T = \bar U + \left( 1+\frac{1}{m} \right) B$, where $\bar U = \frac{1}{m} \sum_{i=1}^m U_i$ is the estimated within imputation variance and $B=\frac{1}{m-1} \sum_{i=1}^m \left( \hat Q_i - \bar Q\right)^2$ is the between imputation variance. Inflating the between imputation variance by a factor $1/m$ reflects the extra variability as a consequence of imputing the missing data using a finite number of imputations instead of an infinite number of imputations. When $B$ dominates $\bar U$ greater efficiency, and hence more accurate estimates, can be obtained by increasing $m$. Conversely, when $\bar U$ dominates $B$, little is gained from increasing $m$." The paper then goes on to show how to apply these rules to multiple coefficients, situations in which coefficient estimates might not have normal distributions, hypothesis testing, and so on.
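A short sketch of applying these pooling rules, assuming you already have the m per-imputation estimates and their squared standard errors (the numbers below are hypothetical):

import numpy as np

# coefficient estimate and its variance (SE^2) from each of m imputed datasets
Q_hat = np.array([0.52, 0.47, 0.55, 0.50, 0.49])    # hypothetical estimates
U = np.array([0.010, 0.012, 0.011, 0.010, 0.013])   # hypothetical within-imputation variances
m = len(Q_hat)

Q_bar = Q_hat.mean()            # pooled point estimate
U_bar = U.mean()                # within-imputation variance
B = Q_hat.var(ddof=1)           # between-imputation variance
T = U_bar + (1 + 1/m) * B       # total variance

print(f"pooled estimate {Q_bar:.3f}, total SE {np.sqrt(T):.3f}")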
Rubin's rule from scratch for multiple imputations
After multiple imputation of data sets (MI) and analyzing each of the imputed sets separately, Rubin's rules do have you take the mean over those imputations as the point estimate. For inference, conf
Rubin's rule from scratch for multiple imputations After multiple imputation of data sets (MI) and analyzing each of the imputed sets separately, Rubin's rules do have you take the mean over those imputations as the point estimate. For inference, confidence intervals and so forth, you then determine the overall variance of the point estimate as a combination of within-imputation and between-imputation variances. This paper by Marshall, Altman, et al. summarizes the rules nicely: "For a single population parameter of interest, $Q$, e.g. a regression coefficient, the MI overall point estimate is the average of the $m$ estimates of $Q$ from the imputed datasets, $\bar Q = \frac{1}{m} \sum_{i=1}^m \hat Q_i$. The associated total variance for this overall MI estimate is $T = \bar U + \left( 1+\frac{1}{m} \right) B$, where $\bar U = \frac{1}{m} \sum_{i=1}^m U_i$ is the estimated within imputation variance and $B=\frac{1}{m-1} \sum_{i=1}^m \left( \hat Q_i - \bar Q\right)^2$ is the between imputation variance. Inflating the between imputation variance by a factor $1/m$ reflects the extra variability as a consequence of imputing the missing data using a finite number of imputations instead of an infinite number of imputations. When $B$ dominates $\bar U$ greater efficiency, and hence more accurate estimates, can be obtained by increasing $m$. Conversely, when $\bar U$ dominates $B$, little is gained from increasing $m$." The paper then goes on to show how to apply these rules to multiple coefficients, situations in which coefficient estimates might not have normal distributions, hypothesis testing, and so on.
Rubin's rule from scratch for multiple imputations After multiple imputation of data sets (MI) and analyzing each of the imputed sets separately, Rubin's rules do have you take the mean over those imputations as the point estimate. For inference, conf
55,630
Most powerful test of size zero for $\theta$ given random sample from $U(0, \theta)$
Yes, it is correct. You can derive this test from the likelihood ratio. The likelihood $L_\theta$ is $\theta^{-n}$ if all $Y_i\leq\theta$ and 0 otherwise, so the likelihood ratio $L_1/L_4$ is $4^n$ if all $y_i\leq 1$ and $0$ otherwise. The most powerful test must choose $\theta=1$ if $Y_{(n)}\leq 1$ and $\theta=4$ otherwise, since those correspond to the only possible values of the likelihood ratio and, as you show, this test has size zero and power 255/256.
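A quick Monte Carlo check of the size and power claims, assuming n = 4 observations (which is what the stated power 255/256 = 1 - (1/4)^4 implies):

import numpy as np

rng = np.random.default_rng(0)
n, reps = 4, 200_000

# Test: conclude theta = 4 (i.e. reject theta = 1) iff max(Y) > 1
y_null = rng.uniform(0, 1, size=(reps, n))   # data generated under theta = 1
y_alt = rng.uniform(0, 4, size=(reps, n))    # data generated under theta = 4

size = np.mean(y_null.max(axis=1) > 1)       # should be exactly 0
power = np.mean(y_alt.max(axis=1) > 1)       # should be close to 255/256

print(size, power, 255/256)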
Most powerful test of size zero for $\theta$ given random sample from $U(0, \theta)$
Yes, it is correct. You can derive this test from the likelihood ratio. The likelihood $L_\theta$ is $\theta^n$ if all $Y_i\leq\theta$ and 0 otherwise, so the likelihood ratio is $(1/4)^n$ if all $y_
Most powerful test of size zero for $\theta$ given random sample from $U(0, \theta)$ Yes, it is correct. You can derive this test from the likelihood ratio. The likelihood $L_\theta$ is $\theta^n$ if all $Y_i\leq\theta$ and 0 otherwise, so the likelihood ratio is $(1/4)^n$ if all $y_i\leq 1$ and $0$ otherwise. The most powerful test must choose $\theta=1$ if $Y_{(n)}\leq 1$ and $\theta=4$ otherwise, since those correspond to the only possible values of the likelihood ratio and as you show, this test has size zero and power 255/256.
Most powerful test of size zero for $\theta$ given random sample from $U(0, \theta)$ Yes, it is correct. You can derive this test from the likelihood ratio. The likelihood $L_\theta$ is $\theta^n$ if all $Y_i\leq\theta$ and 0 otherwise, so the likelihood ratio is $(1/4)^n$ if all $y_
55,631
At what point in analysis do you perform imputation for missing variables?
You should do all the imputations first, otherwise you may get biased results. I don't know what hotdeck in Stata does exactly, but if it is a single imputation method (i.e. you get one completed/imputed dataset) then I would advise against it. At the very least I would advise creating several completed datasets, if the algorithm allows a different seed to create different imputations. I don't know what your reasons for choosing hot decking are, but I have always found multiple imputation to be superior, and it has desirable statistical properties when certain assumptions hold, namely that the missingness is MAR (missing at random) or MCAR (missing completely at random) and not MNAR (missing not at random). Roughly, this means that, for any particular variable, if the missing data can be predicted from the other variables, or if the missing values are simply a random sample, multiple imputation will produce unbiased results.
At what point in analysis do you perform imputation for missing variables?
You should do all the imputations first, otherwise you may get biased results. I don't know what hotdeck in Stata does exactly, but if it is a single imputation method (ie you get one completed/impute
At what point in analysis do you perform imputation for missing variables? You should do all the imputations first, otherwise you may get biased results. I don't know what hotdeck in Stata does exactly, but if it is a single imputation method (ie you get one completed/imputed dataset) then I would advise against it. At the very least I would advise creating several completeted datasets, if the algorithm allows a different seed to create different imputations. I don't know what your reasons for choosing hot decking are, but I have always found multiple imputation to be superior and has desirable statistical properties, when certain assumptions hold, namely that the data missingness being MAR (missing at random) or MCAR (missing completely at random) and not MNAR (missing not at random). Roughly, this means that, for any particular variable, if the missing data can be predicted from the other variables, or if the missing values are simple a random sample, multiple imputation will produce unbiased results.
At what point in analysis do you perform imputation for missing variables? You should do all the imputations first, otherwise you may get biased results. I don't know what hotdeck in Stata does exactly, but if it is a single imputation method (ie you get one completed/impute
55,632
At what point in analysis do you perform imputation for missing variables?
Since you’ve decided on an imputation method relying on MCAR (missing completely at random) data, I infer that your data are indeed MCAR. In this case, you should impute the missing values after the exclusion criteria are applied, for two reasons: Speed (because there are fewer data points to process, downstream of exclusion criteria); Bespoke imputation for your data of interest. (Whereas, imputing all 30 variables before exclusion would tap into a larger, less specific population than the one under study.) The caveat in the above is that it’s based on my inference that because you've chosen hotdeck you have MCAR data. If I’m mistaken, then: Don’t impute any data using hotdeck; use something such as multiple imputation by chained equations (MICE), for which there are toolboxes. Impute the data before the exclusion criteria are applied. Basically, see the other answer here by Robert Long. Good luck! References: Missing Data Problems in Machine Learning by B. Marlin (2008) Section 9.6 of The Elements of Statistical Learning, arguing for multiple imputation when data are not MCAR
At what point in analysis do you perform imputation for missing variables?
Since you’ve decided on an imputation method relying on MCAR (missing completely at random) data, I infer that your data are indeed MCAR. In this case, you should impute the missing values after the e
At what point in analysis do you perform imputation for missing variables? Since you’ve decided on an imputation method relying on MCAR (missing completely at random) data, I infer that your data are indeed MCAR. In this case, you should impute the missing values after the exclusion criteria are applied, for two reasons: Speed (because there are fewer data points to process, downstream of exclusion criteria); Bespoke imputation for your data of interest. (Whereas, imputing all 30 variables before exclusion would tap into a larger, less specific population than the one under study.) The caveat in the above is that it’s based on my inference that because you've chosen hotdeck you have MCAR data. If I’m mistaken, then: Don’t impute any data using hotdeck; use something such as multiple imputation by chained equations (MICE), for which there are toolboxes. Impute the data before the exclusion criteria are applied. Basically, see the other answer here by Robert Long. Good luck! References: Missing Data Problems in Machine Learning by B. Marlin (2008) Section 9.6 of The Elements of Statistical Learning, arguing for multiple imputation when data are not MCAR
At what point in analysis do you perform imputation for missing variables? Since you’ve decided on an imputation method relying on MCAR (missing completely at random) data, I infer that your data are indeed MCAR. In this case, you should impute the missing values after the e
55,633
Is Anomaly Detection Supervised or Un-supervised?
Typically, it is unsupervised. But actually it can be either. Let's start with supervised anomaly detection. Supervised anomaly/outlier detection For supervised anomaly detection, you need labelled training data where for each row you know if it is an outlier/anomaly or not. Any modeling technique for binary responses will work here, e.g. logistic regression or gradient boosting. The typical application is fraud detection. Usually, one does not have labelled data, so one has to rely on unsupervised methods with their usual pros and cons. Unsupervised anomaly/outlier detection We have a "reference" training data at hand but unfortunately without knowing which rows are outliers or not. Here, it is tempting to let statistical algorithms do the guess work. Some of the typical approaches are: density based: local outlier factor (LOF), isolation forests. distance based: How far away is a row from the average e.g in terms of Mahalanobis distance? autoencoder: How bad can the row be reconstructed by an autoencoder neural network? model based: model each variable by the others and hunt for high residuals. ... Each of the techniques has its pros and cons. There is no approach that does somehow better than the rest for all types of problems. Note about dimensions and unsupervised detection algos For 1-2 dimensional data, you can plot the data and visually identify outliers/anomalies as points far away from the rest. For very high dimensional data, unsupervised anomaly detection is close to being a hopeless task due to the curse of dimensionality, which - in the sense of anomaly detection - means that every point eventually becomes an outlier.
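A brief sketch of the unsupervised side, using two of the approaches listed above via scikit-learn (the toy data and the contamination setting are assumptions you would tune for a real problem):

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(500, 2)),      # bulk of the data
               rng.uniform(-8, 8, size=(10, 2))])    # a few scattered anomalies

iso = IsolationForest(contamination=0.02, random_state=0).fit(X)
iso_labels = iso.predict(X)                           # -1 = anomaly, 1 = normal

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.02)
lof_labels = lof.fit_predict(X)

print("IsolationForest flags:", int(np.sum(iso_labels == -1)))
print("LOF flags            :", int(np.sum(lof_labels == -1)))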
Is Anomaly Detection Supervised or Un-supervised?
Typically, it is unsupervised. But actually it can be either. Let's start with supervised anomaly detection. Supervised anomaly/outlier detection For supervised anomaly detection, you need labelled tr
Is Anomaly Detection Supervised or Un-supervised? Typically, it is unsupervised. But actually it can be either. Let's start with supervised anomaly detection. Supervised anomaly/outlier detection For supervised anomaly detection, you need labelled training data where for each row you know if it is an outlier/anomaly or not. Any modeling technique for binary responses will work here, e.g. logistic regression or gradient boosting. The typical application is fraud detection. Usually, one does not have labelled data, so one has to rely on unsupervised methods with their usual pros and cons. Unsupervised anomaly/outlier detection We have a "reference" training data at hand but unfortunately without knowing which rows are outliers or not. Here, it is tempting to let statistical algorithms do the guess work. Some of the typical approaches are: density based: local outlier factor (LOF), isolation forests. distance based: How far away is a row from the average e.g in terms of Mahalanobis distance? autoencoder: How bad can the row be reconstructed by an autoencoder neural network? model based: model each variable by the others and hunt for high residuals. ... Each of the techniques has its pros and cons. There is no approach that does somehow better than the rest for all types of problems. Note about dimensions and unsupervised detection algos For 1-2 dimensional data, you can plot the data and visually identify outliers/anomalies as points far away from the rest. For very high dimensional data, unsupervised anomaly detection is close to being a hopeless task due to the curse of dimensionality, which - in the sense of anomaly detection - means that every point eventually becomes an outlier.
Is Anomaly Detection Supervised or Un-supervised? Typically, it is unsupervised. But actually it can be either. Let's start with supervised anomaly detection. Supervised anomaly/outlier detection For supervised anomaly detection, you need labelled tr
55,634
Point estimator for product of independent RVs
I'm assuming what you want to estimate is $E[XY]$ (you don't say, but the use of the sample mean suggests it). Intuitively, $\overline{XY}$ would work even if $X$ and $Y$ weren't independent, so under the additional assumption that they are independent we would expect it to be less efficient than an estimator that exploits that independence. Let's see how that goes. Let's look at the case where $X$ and $Y$ are Normal, to start off. The maximum likelihood estimators of the means $\mu_x$ and $\mu_y$ of $X$ and $Y$ are the sample averages $\bar X$ and $\bar Y$, and the invariance principle for MLEs says that the MLE of $\mu_x\mu_y$ is $\bar X\bar Y$. The mean of $\bar X\bar Y$ is $\mu_x\mu_y$ (by independence). Its variance is $\mu^2_x\sigma^2_y/n+\mu^2_y\sigma^2_x/n+\sigma^2_x\sigma^2_y/n^2$. The mean of $\overline{XY}$ is $\mu_x\mu_y$. The variance of $XY$ is $\mu^2_x\sigma^2_y+\mu^2_y\sigma^2_x+\sigma^2_x\sigma^2_y$, so the variance of $\overline{XY}$ is $(\mu^2_x\sigma^2_y+\mu^2_y\sigma^2_x+\sigma^2_x\sigma^2_y)/n$, which is larger than the variance of $\bar X\bar Y$. The mean and variance analysis still works when $X$ and $Y$ are not Normal, so it's still true that $\bar X\bar Y$ is more efficient. However, it's now possible that there are more efficient estimators, because the sample average is no longer the MLE. For example, if $X$ and $Y$ have a Laplace distribution, the sample medians are the MLEs of the means of $X$ and $Y$, so the product of the sample medians will be a more efficient estimator than $\bar X\bar Y$. In the nonparametric model where all you know about $X$ and $Y$ is that they have finite means, the sample average is efficient (because basically anything else is inconsistent) and $\bar X\bar Y$ will be optimal again.
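A simulation sketch comparing the two estimators' variability in the normal case (the parameter values are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(0)
mu_x, mu_y, sd_x, sd_y, n, reps = 2.0, 3.0, 1.0, 2.0, 50, 20_000

prod_of_means = np.empty(reps)
mean_of_prods = np.empty(reps)
for r in range(reps):
    x = rng.normal(mu_x, sd_x, n)
    y = rng.normal(mu_y, sd_y, n)
    prod_of_means[r] = x.mean() * y.mean()
    mean_of_prods[r] = np.mean(x * y)

print("var of Xbar*Ybar :", prod_of_means.var())
print("var of mean(X*Y) :", mean_of_prods.var())
# The second variance should come out larger, matching the algebra above.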
Point estimator for product of independent RVs
I'm assuming what you want to estimate is $E[XY]$ (you don't say, but the use of the sample mean suggests it) Intuitively, $\overline{XY}$ would work even if $X$ and $Y$ weren't independent, so it sh
Point estimator for product of independent RVs I'm assuming what you want to estimate is $E[XY]$ (you don't say, but the use of the sample mean suggests it) Intuitively, $\overline{XY}$ would work even if $X$ and $Y$ weren't independent, so it should be less efficient under the additional assumption that they are independent. Let's see how that goes Let's look at the case where $X$ and $Y$ are Normal, to start off. The maximum likelihood estimators of the means $\mu_x$ and $\mu_y$ of $X$ and $Y$ are the sample averages $\bar X$ and $\bar Y$, and the invariance principle for MLEs says that the MLE of $\mu_x\mu_y$ is $\bar X\bar Y$. The mean of $\bar X\bar Y$ is $\mu_x\mu_u$ (by independence). Its variance is $\mu^2_x\sigma^2_y/n+\mu^2_y\sigma^2_x/n+\sigma^2_x\sigma^2/n^2$ The mean of $\overline{XY}$ is $\mu_x\mu_y$. The variance of $XY$ is $\mu^2_x\sigma^2_y+\mu^2_y\sigma^2_x+\sigma^2_x\sigma^2$ so the variance of $\overline{XY}$ is $(\mu^2_x\sigma^2_y+\mu^2_y\sigma^2_x+\sigma^2_x\sigma^2)/n$ which is larger than the variance of $\bar X\bar Y$. The mean and variance analysis still works when $X$ and $Y$ are not Normal, so it's still true that $\bar X\bar Y$ is more efficient. However, it's now possible that there are more efficient estimators, because the sample average is no longer the MLE. For example, if $X$ and $Y$ have a Laplace distribution, the sample medians are the MLEs of the means of $X$ and $Y$, so the product of the sample medians will be a more efficient estimator than $\bar X\bar Y$. In the nonparametric model where all you know about $X$ and $Y$ is that they have finite means, the sample average is efficient (because basically anything else is inconsistent) and $\bar X\bar Y$ will be optimal again.
Point estimator for product of independent RVs I'm assuming what you want to estimate is $E[XY]$ (you don't say, but the use of the sample mean suggests it) Intuitively, $\overline{XY}$ would work even if $X$ and $Y$ weren't independent, so it sh
55,635
Can I use the Kolmogorov-Smirnov test with estimated parameters?
This is an invalid procedure. https://en.m.wikipedia.org/wiki/Kolmogorov–Smirnov_test Scroll down to “Test with estimated parameters”. Disappointingly, they do not give much of a reference, but the book referenced in that paragraph might explain. (Cross Validated has many posts on this topic, too, though it would be nice to see it discussed in some primary literature.) The gist is that you’re giving the reference distribution more similarity to the data than you should. Yes, it is tempting to ask if your data fit a distribution by estimating the parameters from the data, but this is invalid. That you’re using a maximum likelihood estimator of the parameter(s) is not pertinent; this applies for any estimator. The good news is that this kind of goodness of fit testing is quite unhelpful, as many posts on Cross Validated discuss. Invalid Observe data with a mean of $7$ and a variance of $4$, then use KS to test if the data came from $N(7,4)$. Valid...perhaps not helpful Speculate that your data come from an exponential distribution with rate parameter $2$, observe data with a sample mean (so estimated rate parameter) of $1$, and test the data against your speculated distribution of $exp(2)$. (Remember to be careful about the parameter in the exponential distribution, as there is disagreement about whether the exponent is $-\lambda x$ for a mean of $1/\lambda$ or $-x/\beta$ for a mean of $\beta$. In my example, the mean and parameter happen to be $1$ no matter which exponent you prefer.)
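A sketch demonstrating the problem by simulation: under the null, valid p-values should be roughly uniform, but plugging estimated parameters into the standard KS test pushes them toward 1 (the normal model and sample size here are arbitrary assumptions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pvals = []
for _ in range(2000):
    x = rng.normal(loc=7, scale=2, size=50)
    # invalid use: estimate mean/sd from the same data, then test against that fit
    p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue
    pvals.append(p)

print("fraction of p-values below 0.05:", np.mean(np.array(pvals) < 0.05))
# Far below the nominal 5%: the test almost never rejects, because the
# reference distribution was tuned to the very data being tested.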
Can I use the Kolmogorov-Smirnov test with estimated parameters?
This is an invalid procedure. https://en.m.wikipedia.org/wiki/Kolmogorov–Smirnov_test Scroll down to “Test with estimated parameters”. Disappointingly, they do not give much of a reference, but the bo
Can I use the Kolmogorov-Smirnov test with estimated parameters? This is an invalid procedure. https://en.m.wikipedia.org/wiki/Kolmogorov–Smirnov_test Scroll down to “Test with estimated parameters”. Disappointingly, they do not give much of a reference, but the book referenced in that paragraph might explain. (Cross Validated has many posts on this topic, too, though it would be nice to see it discussed in some primary literature.) The gist is that you’re giving the reference distribution more similarity to the data than you should. Yes, it is tempting to ask if your data fit a distribution by estimating the parameters from the data, but this is invalid. That you’re using a maximum likelihood estimator of the parameter(s) is not pertinent; this applies for any estimator. The good news is that this kind of goodness of fit testing is quite unhelpful, as many posts on Cross Validated discuss. Invalid Observe data with a mean of $7$ and a variance of $4$, then use KS to test if the data came from $N(7,4)$. Valid...perhaps not helpful Speculate that your data come from an exponential distribution with rate parameter $2$, observe data with a sample mean (so rate parameter) of $1$, and test the data against your speculated distribution of $exp(2)$. (Remember to be careful about the parameter in the exponential distribution, as there is disagreement about whether the exponent is $-\lambda x$ for a mean of $1/\lambda$ or $-x/\beta$ for a mean of $\beta$. In my example, the mean and parameter happen to be $1$ no matter which exponent you’re prefer.)
Can I use the Kolmogorov-Smirnov test with estimated parameters? This is an invalid procedure. https://en.m.wikipedia.org/wiki/Kolmogorov–Smirnov_test Scroll down to “Test with estimated parameters”. Disappointingly, they do not give much of a reference, but the bo
55,636
Can I use the Kolmogorov-Smirnov test with estimated parameters?
The standard critical values will be too aggressive as the K-S test doesn’t take into account the sample error in the MLE estimates. You therefore need to bootstrap the critical values to form a valid inference.
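One way to do this is a parametric bootstrap of the KS statistic, re-estimating the parameters inside every bootstrap replicate; below is a sketch assuming a normal model (the data are simulated for illustration):

import numpy as np
from scipy import stats

def ks_stat_with_mle(x):
    # KS statistic against a normal fitted to the same sample
    return stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=0))).statistic

rng = np.random.default_rng(0)
x = rng.normal(3, 2, size=80)          # stand-in for the observed sample
d_obs = ks_stat_with_mle(x)

B, n = 2000, len(x)
d_boot = np.empty(B)
for b in range(B):
    xb = rng.normal(x.mean(), x.std(ddof=0), size=n)   # simulate from the fitted model
    d_boot[b] = ks_stat_with_mle(xb)                   # refit and recompute each time

p_value = np.mean(d_boot >= d_obs)
print(d_obs, p_value)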
Can I use the Kolmogorov-Smirnov test with estimated parameters?
The standard critical values will be too aggressive as the K-S test doesn’t take into account the sample error in the MLE estimates. You therefore need to bootstrap the critical values to form a vali
Can I use the Kolmogorov-Smirnov test with estimated parameters? The standard critical values will be too aggressive as the K-S test doesn’t take into account the sample error in the MLE estimates. You therefore need to bootstrap the critical values to form a valid inference.
Can I use the Kolmogorov-Smirnov test with estimated parameters? The standard critical values will be too aggressive as the K-S test doesn’t take into account the sample error in the MLE estimates. You therefore need to bootstrap the critical values to form a vali
55,637
What is the relationship between Boltzmann / Gibbs sampling and the softmax function?
Different feedback signals and loss functions The difference lies in the interpretation of the values / logits. More precisely, how the values / logits are tied to different feedback signals. First, their similarity First, let's paraphrase the question. Let $\mathbf{z}\in\mathbb{R}^n$ be proper logits and let $\mathbb{q}\in\mathbb{R}^n$ be (temperature-scaled) values. Then, from their softmax's $$ p_i\ =\ \frac{e^{z_i}}{\sum_je^{z_j}}\ , \qquad \tilde{p}_i\ =\ \frac{e^{q_i}}{\sum_je^{q_j}}\ . $$ it looks like $\mathbf{p}$ and $\tilde{\mathbf{p}}$ are pretty much the same. For instance, both $\mathbf{p}$ and $\tilde{\mathbf{p}}$ live in the probability simplex $\Delta^n=\{x\in[0,1]^n\,|\,\sum_ix_i=1\}$. Now suppose that $\mathbf{z}\in\mathbb{R}^n$ and $\mathbb{q}\in\mathbb{R}^n$ are outputs of some neural net. In order to learn these quantities, you need to tie them to some sort of feedback signal. This is where they differ. Categorical signal A proper logit is usually tied to some MLE objective associated with a categorical distribution, e.g. tensorflow's softmax_cross_entropy_with_logits. $$ \text{loss}\ =\ -\sum_iy_i\,\ln p_i $$ where $\mathbf{y}$ is a one-hot encoded categorical variate. Choosing an objective like this gives $\mathbf{z}$ the interpretation of proper logits. Gaussian signal In contrast, the values $\mathbb{q}$ are tied to an MLE objective associated with (multi-variate) Gaussian distribution, i.e. mean squared-error loss. $$ \text{loss}\ =\ (y_i - \tau\,q_i)^2 $$ where now $\mathbf{y}$ is just a real-valued vector in $\mathbb{R}^n$ and $\tau>0$ is the Boltzmann temperature. Conclusion Thus, $\mathbf{z}$ and $\mathbf{q}$ differ because they are tied to completely different feedback signals. Applying the same softmax operation to both doesn't undo their differences. Finally, it should be noted that there is in fact a close relation between the interpretations of $\mathbf{z}$ and $\mathbf{q}$ in the context of reinforcement learning, see [arXiv:1704.06440]. The relation is subtle, but it requires only a small amount of additional structure to derive. Some practical considerations The reason why all of this theoretical stuff matters is that in practice the values $\mathbf{q}$ may really not be suited to be interpreted as logits. The problem might be that the values fluctuate too much (resulting in insufficient exploration) or the values be too similar (resulting in too much exploration). In most cases, however, this may be fixed by tuning your Boltzmann temperature $\tau$ to suit your specific environment.
What is the relationship between Boltzmann / Gibbs sampling and the softmax function?
Different feedback signals and loss functions The difference lies in the interpretation of the values / logits. More precisely, how the values / logits are tied to different feedback signals. First, t
What is the relationship between Boltzmann / Gibbs sampling and the softmax function? Different feedback signals and loss functions The difference lies in the interpretation of the values / logits. More precisely, how the values / logits are tied to different feedback signals. First, their similarity First, let's paraphrase the question. Let $\mathbf{z}\in\mathbb{R}^n$ be proper logits and let $\mathbb{q}\in\mathbb{R}^n$ be (temperature-scaled) values. Then, from their softmax's $$ p_i\ =\ \frac{e^{z_i}}{\sum_je^{z_j}}\ , \qquad \tilde{p}_i\ =\ \frac{e^{q_i}}{\sum_je^{q_j}}\ . $$ it looks like $\mathbf{p}$ and $\tilde{\mathbf{p}}$ are pretty much the same. For instance, both $\mathbf{p}$ and $\tilde{\mathbf{p}}$ live in the probability simplex $\Delta^n=\{x\in[0,1]^n\,|\,\sum_ix_i=1\}$. Now suppose that $\mathbf{z}\in\mathbb{R}^n$ and $\mathbb{q}\in\mathbb{R}^n$ are outputs of some neural net. In order to learn these quantities, you need to tie them to some sort of feedback signal. This is where they differ. Categorical signal A proper logit is usually tied to some MLE objective associated with a categorical distribution, e.g. tensorflow's softmax_cross_entropy_with_logits. $$ \text{loss}\ =\ -\sum_iy_i\,\ln p_i $$ where $\mathbf{y}$ is a one-hot encoded categorical variate. Choosing an objective like this gives $\mathbf{z}$ the interpretation of proper logits. Gaussian signal In contrast, the values $\mathbb{q}$ are tied to an MLE objective associated with (multi-variate) Gaussian distribution, i.e. mean squared-error loss. $$ \text{loss}\ =\ (y_i - \tau\,q_i)^2 $$ where now $\mathbf{y}$ is just a real-valued vector in $\mathbb{R}^n$ and $\tau>0$ is the Boltzmann temperature. Conclusion Thus, $\mathbf{z}$ and $\mathbf{q}$ differ because they are tied to completely different feedback signals. Applying the same softmax operation to both doesn't undo their differences. Finally, it should be noted that there is in fact a close relation between the interpretations of $\mathbf{z}$ and $\mathbf{q}$ in the context of reinforcement learning, see [arXiv:1704.06440]. The relation is subtle, but it requires only a small amount of additional structure to derive. Some practical considerations The reason why all of this theoretical stuff matters is that in practice the values $\mathbf{q}$ may really not be suited to be interpreted as logits. The problem might be that the values fluctuate too much (resulting in insufficient exploration) or the values be too similar (resulting in too much exploration). In most cases, however, this may be fixed by tuning your Boltzmann temperature $\tau$ to suit your specific environment.
What is the relationship between Boltzmann / Gibbs sampling and the softmax function? Different feedback signals and loss functions The difference lies in the interpretation of the values / logits. More precisely, how the values / logits are tied to different feedback signals. First, t
55,638
What is the relationship between Boltzmann / Gibbs sampling and the softmax function?
What is the relationship between this and Gibbs sampling / Boltzmann sampling? Mathematically, the two functions are very similar. Gibbs sampling adds a scaling "temperature" factor which is applied to the scores before using them in the softmax. The scenarios in which they are used are different: Softmax probabilities are used when the function's sole purpose is to generate probabilities, and you are free to adjust the input preferences (or logits) in order to converge on a target distribution. This is the case for policy functions in policy gradient methods. Gibbs sampling can be used when the inputs already represent some other relevant score function (e.g. an action value in reinforcement learning). The temperature parameter gives you some control over the impact of differences in that score between options, but not full control, because the scores are measuring something else. This can still be useful for generating policies - both as an on-policy method and as a behaviour policy in off-policy methods - and it has some nice properties for online learning in real systems (it quickly learns to avoid very bad action choices, for example), although adding a new important hyperparameter in the form of the temperature value is not great.
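A tiny sketch of the temperature-scaled softmax over action values discussed above (the values are made up):

import numpy as np

def boltzmann_probs(q, temperature=1.0):
    """Softmax over scores q, with temperature scaling applied first."""
    z = q / temperature
    z = z - z.max()            # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

q = np.array([1.0, 2.0, 0.5])  # e.g. estimated action values
for tau in (0.1, 1.0, 10.0):
    print(tau, boltzmann_probs(q, tau).round(3))
# Small temperature -> nearly greedy; large temperature -> nearly uniform.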
What is the relationship between Boltzmann / Gibbs sampling and the softmax function?
What is the relationship between this and Gibbs sampling / Blotzmann sampling? Mathematically, the two functions are very similar. Gibbs sampling adds a scaling "temperature" factor which is applied
What is the relationship between Boltzmann / Gibbs sampling and the softmax function? What is the relationship between this and Gibbs sampling / Blotzmann sampling? Mathematically, the two functions are very similar. Gibbs sampling adds a scaling "temperature" factor which is applied to scores before using them in the softmax. The scenarios in which they are used are different: Softmax probabilities are used when the function's sole purpose is to generate probabilities, and you are free to adjust the input preferences (or logits) in order to converge on a target distribution. This is the case for policy functions in policy gradient methods. Gibbs sampling can be used when the inputs already represent some other relevant score function (e.g. an action value in reinforcement learning). The temperature parameter gives you some control over the impact in differences of that score between options, but not full control because the scores are measuring something else. This can still be useful for generating policies - both on-policy and a behaviour in off-policy - and has some nice properties for online learning in real systems (it quickly learns to avoid very bad action choices for example), although adding a new important hyperparameter in the form of the temperature value is not great.
What is the relationship between Boltzmann / Gibbs sampling and the softmax function? What is the relationship between this and Gibbs sampling / Blotzmann sampling? Mathematically, the two functions are very similar. Gibbs sampling adds a scaling "temperature" factor which is applied
55,639
Average of the outside of a truncated normal distribution
A simple way is using the total expectation formula: $$\mu=E[X]=E[X|a<X<b]P(a<X<b)+E[X|X<a\cup X>b](1-P(a<X<b))$$ The expected value, $E[X|a<X<b]$, is given in your post. And, the probability $P(a<X<b)$ can be written in terms of the standard normal CDF quite easily (which is $Z$).
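A quick numerical check of this identity with scipy (the values of $\mu$, $\sigma$, $a$, $b$ are arbitrary assumptions):

import numpy as np
from scipy import stats

mu, sigma, a, b = 1.0, 2.0, -0.5, 2.5

p_in = stats.norm.cdf(b, mu, sigma) - stats.norm.cdf(a, mu, sigma)
# E[X | a < X < b] via scipy's truncnorm (bounds are standardised)
mean_in = stats.truncnorm.mean((a - mu) / sigma, (b - mu) / sigma, loc=mu, scale=sigma)

# solve the total-expectation formula for the mean outside (a, b)
mean_out = (mu - mean_in * p_in) / (1 - p_in)

# Monte Carlo check
x = stats.norm.rvs(mu, sigma, size=1_000_000, random_state=0)
print(mean_out, x[(x < a) | (x > b)].mean())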
Average of the outside of a truncated normal distribution
A simple way is using the total expectation formula: $$\mu=E[X]=E[X|a<X<b]P(a<X<b)+E[X|X<a\cup X>b](1-P(a<X<b))$$ The expected value, $E[X|a<X<b]$, is given in your post. And, the probability $P(a<X<b
Average of the outside of a truncated normal distribution A simple way is using the total expectation formula: $$\mu=E[X]=E[X|a<X<b]P(a<X<b)+E[X|X<a\cup X>b](1-P(a<X<b))$$ The expected value, $E[X|a<X<b]$, is given in your post. And, the probability $P(a<X<b)$ can be written in terms of the standard normal CDF quite easily (which is $Z$).
Average of the outside of a truncated normal distribution A simple way is using the total expectation formula: $$\mu=E[X]=E[X|a<X<b]P(a<X<b)+E[X|X<a\cup X>b](1-P(a<X<b))$$ The expected value, $E[X|a<X<b]$, is given in your post. And, the probability $P(a<X<b
55,640
Does this seems like a reasonable definition of a uniform distribution?
Let me answer the implicit question: what is a uniform distribution? Because $X$ is a random variable, $S$ really is the underlying set in a probability space $(S,\mathfrak F, \mathbb P).$ We say $X$ has a continuous uniform distribution when there exists a subset $A\subset \mathbb R$ such that, for every interval $(a,b]\subset \mathbb{R},$ $$\mathbb{P}(X\in (a,b]) \ \propto\ \lambda((a,b]\cap A)$$ where $\lambda$ is Lebesgue measure. $X$ has a discrete uniform distribution when there exists $A\subset \mathbb{R}$ such that for every interval $(a,b],$ $$\mathbb{P}(X\in (a,b])\ \propto\ |(a,b] \cap A|$$ where $|\cdot |$ is the cardinality of a set. The implicit normalizing constants (denominators) in these equations are $\lambda(A)$ in the first case and $|A|$ in the second, both of which (therefore) must be finite and nonzero. In particular, in the continuous case, $X$ has a probability density function equal to $$f_X(x) = \frac{1}{\lambda(A)} \mathcal{I}_A(x)$$ (where $\mathcal{I}_A$ is the indicator function of $A$); in the discrete case we may write $|A|=n$ (a nonzero natural number) and see that for any number $x\in \mathbb R,$ $\Pr(X=x) = 1/n$ when $x\in A$ and otherwise $\Pr(X=x)=0.$ This defines the probability mass function of $X.$
Does this seems like a reasonable definition of a uniform distribution?
Let me answer the implicit question: what is a uniform distribution? Because $X$ is a random variable, $S$ really is the underlying set in a probability space $(S,\mathfrak F, \mathbb P).$ We say $X$
Does this seems like a reasonable definition of a uniform distribution? Let me answer the implicit question: what is a uniform distribution? Because $X$ is a random variable, $S$ really is the underlying set in a probability space $(S,\mathfrak F, \mathbb P).$ We say $X$ has a continuous uniform distribution when there exists a subset $A\subset \mathbb R$ such that, for every interval $(a,b]\subset \mathbb{R},$ $$\mathbb{P}(X\in (a,b]) \ \propto\ \lambda((a,b]\cap A)$$ where $\lambda$ is Lebesgue measure. $X$ has a discrete uniform distribution when there exists $A\subset \mathbb{R}$ such that for every interval $(a,b],$ $$\mathbb{P}(X\in (a,b])\ \propto\ |(a,b] \cap A|$$ where $|\cdot |$ is the cardinality of a set. The implicit normalizing constants (denominators) in these equations are $\lambda(A)$ in the first case and $|A|$ in the second, both of which (therefore) must be finite and nonzero. In particular, in the continuous case, $X$ has a probability density function equal to $$f_X(x) = \frac{1}{\lambda(A)} \mathcal{I}_A(x)$$ (where $\mathcal{I}_A$ is the indicator function of $A$); in the discrete case we may write $|A|=n$ (a nonzero natural number) and see that for any number $x\in \mathbb R,$ $\Pr(X=x) = 1/n$ when $x\in A$ and otherwise $\Pr(X=x)=0.$ This defines the probability mass function of $X.$
Does this seems like a reasonable definition of a uniform distribution? Let me answer the implicit question: what is a uniform distribution? Because $X$ is a random variable, $S$ really is the underlying set in a probability space $(S,\mathfrak F, \mathbb P).$ We say $X$
55,641
Interpretation of Weibull Accelerated Failure Time Model Output
Many (including me) get confused by the different ways to define the parameters of a Weibull distribution, particularly since the standard R Weibull-related functions in the stats package and the survreg() parametric fitting function in the survival package use different parameterizations. The manual page for the R Weibull-related functions in stats says: The Weibull distribution with shape parameter $a$ and scale parameter $b$ has density given by $$\frac{a}{b}\left(\frac{x}{b}\right)^{a-1}e^{-(x/b)^{a}}$$ for $x$ > 0. That's called the "standard parameterization" on the Wikipedia page (where they use $k$ for shape and $\lambda$ for scale). The survreg() function uses a different parameterization, with differences explained on its manual page: There are multiple ways to parameterize a Weibull distribution. The survreg function embeds it in a general location-scale family, which is a different parameterization than the rweibull function, and often leads to confusion. survreg's scale = 1/(rweibull shape) survreg's intercept = log(rweibull scale). The WeibullReg() function effectively takes the result from survreg() and expresses the results in terms of the "standard parameterization." There is a potential confusion, however, as the $summary of the object produced by WeibullReg is "the summary table from the original survreg model." (Emphasis added.) So what you have displayed in the question includes results for both parameterizations. That dual representation of the results helps explain what's going on. Starting from the bottom, the survreg value of scale is the reciprocal of the "standard parameterization" value of shape. The "standard" shape parameter is called gamma in the WeibullReg $formula output near the top of your output. The value for gamma is 0.98434, with a reciprocal of 1.0159, rounding to the value of 1.02 shown as Scale in the last line of your output. The natural logarithm of 1.0159 is 0.01578, shown as Log(scale) in the next-to-last line. Those last lines of your output, remember, are based on the survreg definition of scale. The p-value for that Log(scale) is indeed very high. But that just means that the value of Log(scale) is not significantly different from 0, or that the scale itself (as defined in survreg) is not different from 1. That has nothing to do with the hazard ratios and so forth for the covariates. It just means that the baseline survival curve of your Weibull model can't be statistically distinguished from a simple exponential survival curve, which would have exactly a value of 1 for survreg scale or "standard" shape and a constant baseline hazard over time. So there is nothing to distrust about your results on that basis.
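The arithmetic in that conversion can be checked directly; here is a small sketch using the numbers quoted above (only gamma is taken from the output, everything else is computed):

import numpy as np

gamma = 0.98434                      # "standard" (rweibull-style) shape from WeibullReg

survreg_scale = 1.0 / gamma          # survreg scale = 1 / (rweibull shape)
log_scale = np.log(survreg_scale)    # the Log(scale) row in the survreg summary

print(round(survreg_scale, 4))       # ~1.0159, printed as Scale = 1.02
print(round(log_scale, 5))           # ~0.01578, the Log(scale) estimate
# Log(scale) near 0 (scale near 1) only says the baseline hazard looks
# exponential; it says nothing about the covariate hazard ratios.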
Interpretation of Weibull Accelerated Failure Time Model Output
Many (including me) get confused by the different ways to define the parameters of a Weibull distribution, particularly since the standard R Weibull-related functions in the stats package and the surv
Interpretation of Weibull Accelerated Failure Time Model Output Many (including me) get confused by the different ways to define the parameters of a Weibull distribution, particularly since the standard R Weibull-related functions in the stats package and the survreg() parametric fitting function in the survival package use different parameterizations. The manual page for the R Weibull-related functions in stats says: The Weibull distribution with shape parameter $a$ and scale parameter $b$ has density given by $$\frac{a}{b}\left(\frac{x}{b}\right)^{a-1}e^{-(x/b)^{a}}$$ for $x$ > 0. That's called the "standard parameterization" on the Wikipedia page (where they use $k$ for shape and $\lambda$ for scale). The survreg() function uses a different parameterization, with differences explained on its manual page: There are multiple ways to parameterize a Weibull distribution. The survreg function embeds it in a general location-scale family, which is a different parameterization than the rweibull function, and often leads to confusion. survreg's scale = 1/(rweibull shape) survreg's intercept = log(rweibull scale). The WeibullReg() function effectively takes the result from survreg() and expresses the results in terms of the "standard parameterization." There is a potential confusion, however, as the $summary of the object produced by WeibullReg is "the summary table from the original survreg model." (Emphasis added.) So what you have displayed in the question includes results for both parameterizations. That dual representation of the results helps explain what's going on. Starting from the bottom, the survreg value of scale is the reciprocal of the "standard parameterization" value of shape. The "standard" shape parameter is called gamma in the WeibullReg $formula output near the top of your output. The value for gamma is 0.98434, with a reciprocal of 1.0159, rounding to the value of 1.02 shown as Scale in the last line of your output. The natural logarithm of 1.0159 is 0.01578, shown as Log(scale) in the next-to-last line. Those last lines of your output, remember, are based on the survreg definition of scale. The p-value for that Log(scale) is indeed very high. But that just means that the value of Log(scale) is not significantly different from 0, or that the scale itself (as defined in survreg) is not different from 1. That has nothing to do with the hazard ratios and so forth for the covariates. It just means that the baseline survival curve of your Weibull model can't be statistically distinguished from a simple exponential survival curve, which would have exactly a value of 1 for survreg scale or "standard" shape and a constant baseline hazard over time. So there is nothing to distrust about your results on that basis.
Interpretation of Weibull Accelerated Failure Time Model Output Many (including me) get confused by the different ways to define the parameters of a Weibull distribution, particularly since the standard R Weibull-related functions in the stats package and the surv
55,642
When is multiple imputation useful for multilevel models?
In general, mixed-effects models will provide you with valid inferences under MAR, provided that the random-effects structure is appropriately specified. Therefore, no (multiple) imputation is required. Namely, the model specifies the distribution of the complete data outcome data $Y_i$ for all time points. Under MAR, we can predict/impute the missing outcome data $y_i^m$ using the observed data $y_i^o$. This is done by exploiting the correlation structure between $y_i^m$ and $y_i^o$ provided by the specification of the complete data distribution. Now, if you have doubts that the chosen random-effects structure captures the correlations in the outcome data sufficiently well, then you could consider applying multiple imputation. This is because in multiple imputation (assuming here that you do that with chained equations), you regress the outcome at each time point on the outcomes at all the other time points, specifying thus a more flexible correlation structure.
When is multiple imputation useful for multilevel models?
In general, mixed-effects models will provide you with valid inferences under MAR, provided that the random-effects structure is appropriately specified. Therefore, no (multiple) imputation is require
When is multiple imputation useful for multilevel models? In general, mixed-effects models will provide you with valid inferences under MAR, provided that the random-effects structure is appropriately specified. Therefore, no (multiple) imputation is required. Namely, the model specifies the distribution of the complete data outcome data $Y_i$ for all time points. Under MAR, we can predict/impute the missing outcome data $y_i^m$ using the observed data $y_i^o$. This is done by exploiting the correlation structure between $y_i^m$ and $y_i^o$ provided by the specification of the complete data distribution. Now, if you have doubts that the chosen random-effects structure captures the correlations in the outcome data sufficiently well, then you could consider applying multiple imputation. This is because in multiple imputation (assuming here that you do that with chained equations), you regress the outcome at each time point on the outcomes at all the other time points, specifying thus a more flexible correlation structure.
When is multiple imputation useful for multilevel models? In general, mixed-effects models will provide you with valid inferences under MAR, provided that the random-effects structure is appropriately specified. Therefore, no (multiple) imputation is require
55,643
Multiple Linear Regression Coefficient Estimators and Hypothesis Testing
You have already given the formula for the variance matrix for the coefficient estimator. The Gramian matrix of the design matrix for the regression ---which appears in that formula--- is: $$\begin{aligned} \mathbf{x}^\text{T} \mathbf{x} &= \begin{bmatrix} \mathbf{x}_1 \cdot \mathbf{x}_1 & \mathbf{x}_1 \cdot \mathbf{x}_2 & \mathbf{x}_1 \cdot \mathbf{x}_3 \\ \mathbf{x}_2 \cdot \mathbf{x}_1 & \mathbf{x}_2 \cdot \mathbf{x}_2 & \mathbf{x}_2 \cdot \mathbf{x}_3 \\ \mathbf{x}_3 \cdot \mathbf{x}_1 & \mathbf{x}_3 \cdot \mathbf{x}_2 & \mathbf{x}_3 \cdot \mathbf{x}_3 \\ \end{bmatrix} \\[6pt] &= \begin{bmatrix} \sum_{i=1}^{25} 2 \times 2 & 0 & 0 \\ 0 & \sum_{i=26}^{75} \sqrt{2} \times \sqrt{2} & 0 \\ 0 & 0 & \sum_{i=76}^{100} 2 \times 2 \\ \end{bmatrix} \\[6pt] &= \begin{bmatrix} 25 \times 4 & 0 & 0 \\ 0 & 50 \times 2 & 0 \\ 0 & 0 & 25 \times 4 \\ \end{bmatrix} \\[6pt] &= 100 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}. \\[6pt] \end{aligned}$$ This is proportionate to the identity matrix, which means that the coefficient estimators are uncorrelated with equal variance. You therefore have: $$\mathbb{V}(\hat{\boldsymbol{\beta}}) = \sigma^2 (\mathbf{x}^\text{T} \mathbf{x})^{-1} = \frac{\sigma^2}{100} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}. \\[6pt] $$ You should be able to derive the coefficient estimates from the standard regression formula and then use these to formulate the hypothesis tests. Each hypothesis test is testing a linear combination of the coefficients, so you can use the rules for linear combinations of normal random variables to derive the standard errors of the test statistics.
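The Gramian above can be verified numerically; the following Python sketch is my own illustration, with the design matrix built from the column values implied by the calculation (25 rows of $(2,0,0)$, 50 rows of $(0,\sqrt{2},0)$, 25 rows of $(0,0,2)$).

```python
import numpy as np

# Design matrix implied by the Gramian above.
X = np.zeros((100, 3))
X[:25, 0] = 2.0
X[25:75, 1] = np.sqrt(2.0)
X[75:, 2] = 2.0

XtX = X.T @ X
print(XtX)                     # 100 * identity
print(np.linalg.inv(XtX))      # identity / 100, so Var(beta_hat) = (sigma^2 / 100) * I
```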
Multiple Linear Regression Coefficient Estimators and Hypothesis Testing
You have already given the formula for the variance matrix for the coefficient estimator. The Gramian matrix of the design matrix for the regression ---which appears in that formula--- is: $$\begin{a
Multiple Linear Regression Coefficient Estimators and Hypothesis Testing You have already given the formula for the variance matrix for the coefficient estimator. The Gramian matrix of the design matrix for the regression ---which appears in that formula--- is: $$\begin{aligned} \mathbf{x}^\text{T} \mathbf{x} &= \begin{bmatrix} \mathbf{x}_1 \cdot \mathbf{x}_1 & \mathbf{x}_1 \cdot \mathbf{x}_2 & \mathbf{x}_1 \cdot \mathbf{x}_3 \\ \mathbf{x}_2 \cdot \mathbf{x}_1 & \mathbf{x}_2 \cdot \mathbf{x}_2 & \mathbf{x}_2 \cdot \mathbf{x}_3 \\ \mathbf{x}_3 \cdot \mathbf{x}_1 & \mathbf{x}_3 \cdot \mathbf{x}_2 & \mathbf{x}_3 \cdot \mathbf{x}_3 \\ \end{bmatrix} \\[6pt] &= \begin{bmatrix} \sum_{i=1}^{25} 2 \times 2 & 0 & 0 \\ 0 & \sum_{i=26}^{75} \sqrt{2} \times \sqrt{2} & 0 \\ 0 & 0 & \sum_{i=76}^{100} 2 \times 2 \\ \end{bmatrix} \\[6pt] &= \begin{bmatrix} 25 \times 4 & 0 & 0 \\ 0 & 50 \times 2 & 0 \\ 0 & 0 & 25 \times 4 \\ \end{bmatrix} \\[6pt] &= 100 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}. \\[6pt] \end{aligned}$$ This is proportionate to the identity matrix, which means that the coefficient estimators are uncorrelated with equal variance. You therefore have: $$\mathbb{V}(\hat{\boldsymbol{\beta}}) = \sigma^2 (\mathbf{x}^\text{T} \mathbf{x})^{-1} = \frac{\sigma^2}{100} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}. \\[6pt] $$ You should be able to derive the coefficient estimates from the standard regression formula and then use these to formulate the hypothesis tests. Each hypothesis test is testing a linear combination of the coefficients, so you can use the rules for linear combinations of normal random variables to derive the standard errors of the test statistics.
Multiple Linear Regression Coefficient Estimators and Hypothesis Testing You have already given the formula for the variance matrix for the coefficient estimator. The Gramian matrix of the design matrix for the regression ---which appears in that formula--- is: $$\begin{a
55,644
Multiple Linear Regression Coefficient Estimators and Hypothesis Testing
As you know, the OLS estimator is a linear function of $y$: $\hat{\beta} \sim N(\beta, \sigma^2(X’X)^{-1})$ since $\epsilon \sim N(0,\sigma^2)$. 1) All you need to show is that the matrix $(X’X)^{-1}$ is a scalar matrix. Just compute the inverse using the appropriate entries for $x_1,x_2,x_3$. 2) All you need to do is test the following hypothesis: $H_0 : l’{\beta} = 0$ vs $H_1 : l’{\beta} \neq 0$, where $l’$ is $(1,-2,1)$. Now use a $t$ test on $l’\hat{\beta}$, which follows a univariate normal distribution.
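A short Python sketch of how such a test of $l'\beta = 0$ is assembled in practice; the coefficient estimates and residual variance below are made-up placeholders, since the exercise asks you to derive the actual values yourself.

```python
import numpy as np
from scipy import stats

# Placeholder numbers purely for illustration.
beta_hat = np.array([1.9, 1.1, 0.4])   # assumed fitted coefficients
sigma2_hat = 2.5                       # assumed estimate of sigma^2
XtX_inv = np.eye(3) / 100              # from the design in this exercise
df_resid = 100 - 3

l = np.array([1.0, -2.0, 1.0])
t_stat = (l @ beta_hat) / np.sqrt(sigma2_hat * (l @ XtX_inv @ l))
p_val = 2 * stats.t.sf(abs(t_stat), df_resid)
print(t_stat, p_val)
```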
Multiple Linear Regression Coefficient Estimators and Hypothesis Testing
As You know that the OLS estimator is a linear function of $y$. $\hat{\beta} \sim N(\beta, \sigma^2(X’X)^{-1})$ since $\epsilon \sim N(0,\sigma^2)$. 1) All you need to show is that the matrix $(X’X)^{
Multiple Linear Regression Coefficient Estimators and Hypothesis Testing As You know that the OLS estimator is a linear function of $y$. $\hat{\beta} \sim N(\beta, \sigma^2(X’X)^{-1})$ since $\epsilon \sim N(0,\sigma^2)$. 1) All you need to show is that the matrix $(X’X)^{-1}$ is a scalar matrix. Just compute the inverse using appropriate entries for $x_1,x_2,x_3$ 2) All you need to do is test the following Hypothesis :- $H_0 : l’{\beta} = 0$ vs $H_1 : l’{\beta} \neq 0$ Where $l’$ is (1,-2,1). Now use a $t$ test on $l’\hat{\beta}$ which follows a univariate normal distribution.
Multiple Linear Regression Coefficient Estimators and Hypothesis Testing As You know that the OLS estimator is a linear function of $y$. $\hat{\beta} \sim N(\beta, \sigma^2(X’X)^{-1})$ since $\epsilon \sim N(0,\sigma^2)$. 1) All you need to show is that the matrix $(X’X)^{
55,645
Calculating Diagonal Elements of $(X^TX)^{-1}$ From R Output
Hint: Find the formula for the standard errors of the coefficient estimators. Notice also that these standard errors are given to you in the output.
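Spelling the hint out numerically (a sketch of mine with made-up numbers, since the actual R output lives in the question): because $SE(\hat\beta_j) = \hat\sigma\sqrt{[(X^TX)^{-1}]_{jj}}$, each diagonal element is simply $(SE_j/\hat\sigma)^2$.

```python
import numpy as np

se = np.array([0.31, 0.045, 0.012])   # assumed coefficient standard errors from a summary
sigma_hat = 1.24                      # assumed residual standard error from the same summary
diag_XtX_inv = (se / sigma_hat) ** 2  # diagonal of (X'X)^{-1}
print(diag_XtX_inv)
```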
Calculating Diagonal Elements of $(X^TX)^{-1}$ From R Output
Hint: Find the formula for the standard errors of the coefficient estimators. Notice also that these standard errors are given to you in the output.
Calculating Diagonal Elements of $(X^TX)^{-1}$ From R Output Hint: Find the formula for the standard errors of the coefficient estimators. Notice also that these standard errors are given to you in the output.
Calculating Diagonal Elements of $(X^TX)^{-1}$ From R Output Hint: Find the formula for the standard errors of the coefficient estimators. Notice also that these standard errors are given to you in the output.
55,646
Formula for difference in order statistics [closed]
Let $W_{i,j:n} = X_{j:n}-X_{i:n},\; 1\leq i<j\leq n$ be the difference between the $i$th and $j$th order statistics (aka the spacings). The pdf of $W_{i,j:n}$ is then given by: $$ f_{W_{i,j:n}}(w) = \frac{n!}{(i-1)!(j-i-1)!(n-j)!}\times \int_{-\infty}^{\infty}\left\{F(x_{i})\right\}^{i-1}\left\{F(x_{i} + w) - F(x_{i})\right\}^{j-i-1}\times \left\{1-F(x_{i}+w)\right\}^{n-j}f(x_{i})f(x_{i} + w)\;\mathrm{d}x_{i}, \quad 0<w<\infty $$ This formula is given in $[1]$. As far as I know, there is no simple formula for the standard normal. References $[1]$ Arnold BC, Balakrishnan N, Nagaraja HN (2008): A First Course in Order Statistics. Siam, Philadelphia.
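There is no closed form for the standard normal, but the density above is easy to evaluate numerically. The following Python sketch (my own illustration, not from the reference) integrates the formula with scipy and checks it against a simulation for one choice of $i, j, n$.

```python
import numpy as np
from math import factorial
from scipy import stats
from scipy.integrate import quad

def spacing_pdf(w, i, j, n, dist=stats.norm):
    """Density of X_{j:n} - X_{i:n}, by numerically integrating the formula above."""
    c = factorial(n) / (factorial(i - 1) * factorial(j - i - 1) * factorial(n - j))
    F, f = dist.cdf, dist.pdf
    integrand = lambda x: (F(x) ** (i - 1)
                           * (F(x + w) - F(x)) ** (j - i - 1)
                           * (1 - F(x + w)) ** (n - j)
                           * f(x) * f(x + w))
    val, _ = quad(integrand, -np.inf, np.inf)
    return c * val

# Rough Monte Carlo check for i=2, j=4, n=5 with standard normal margins.
i, j, n = 2, 4, 5
rng = np.random.default_rng(1)
samp = np.sort(rng.standard_normal((200_000, n)), axis=1)
w = samp[:, j - 1] - samp[:, i - 1]
print(spacing_pdf(1.0, i, j, n))                 # density from the formula at w = 1
print(np.mean((w > 0.95) & (w < 1.05)) / 0.1)    # histogram estimate near w = 1
```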
Formula for difference in order statistics [closed]
Let $W_{i,j:n} = X_{j:n}-X_{i:n},\; 1\leq i<j\leq n$ be the difference between the $i$th and $j$th order statistics (aka the spacings). The pdf of $W_{i,j:n}$ is then given by: $$ f_{W_{i,j:n}}(w) = \
Formula for difference in order statistics [closed] Let $W_{i,j:n} = X_{j:n}-X_{i:n},\; 1\leq i<j\leq n$ be the difference between the $i$th and $j$th order statistics (aka the spacings). The pdf of $W_{i,j:n}$ is then given by: $$ f_{W_{i,j:n}}(w) = \frac{n!}{(i-1)!(j-i-1)!(n-j)!}\times \int_{-\infty}^{\infty}\left\{F(x_{i})\right\}^{i-1}\left\{F(x_{i} + w) - F(x_{i})\right\}^{j-i-1}\times \left\{1-F(x_{i}+w)\right\}^{n-j}f(x_{i})f(x_{i} + w)\;\mathrm{d}x_{i}, \quad 0<w<\infty $$ This formula is given in $[1]$. As far as I know, there is no simple formula for the standard normal. References $[1]$ Arnold BC, Balakrishnan N, Nagaraja HN (2008): A First Course in Order Statistics. Siam, Philadelphia.
Formula for difference in order statistics [closed] Let $W_{i,j:n} = X_{j:n}-X_{i:n},\; 1\leq i<j\leq n$ be the difference between the $i$th and $j$th order statistics (aka the spacings). The pdf of $W_{i,j:n}$ is then given by: $$ f_{W_{i,j:n}}(w) = \
55,647
Can a ratio of random variables be normal? [duplicate]
A trivial case: let $Y$ be a normal RV and $Z$ a constant RV; then $X$ is going to be normally distributed. Another one: let $A,B$ be normal RVs, and let $C=A/B$ and $D=1/B$ be two other RVs, which follow the Cauchy and reciprocal-normal distributions respectively. Their ratio $C/D=A$ will be normally distributed.
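A quick simulation check of the second construction (my own Python sketch):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
A = rng.standard_normal(100_000)
B = rng.standard_normal(100_000)
C, D = A / B, 1 / B                        # Cauchy and reciprocal-normal variables

print(np.allclose(C / D, A))               # True: the ratio is exactly A
print(stats.kstest(C / D, "norm").pvalue)  # consistent with a standard normal
```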
Can a ratio of random variables be normal? [duplicate]
A trivial case: Let $Y$ be a normal RV, and $Z$ be a constant RV, then $X$ is going to be normally distributed. Another one: let $A,B$ normal RVs, and $C=A/B,D=1/B$ are two other RVs that belonging to
Can a ratio of random variables be normal? [duplicate] A trivial case: Let $Y$ be a normal RV, and $Z$ be a constant RV, then $X$ is going to be normally distributed. Another one: let $A,B$ normal RVs, and $C=A/B,D=1/B$ are two other RVs that belonging to Cauchy and Reciprocal Normal Distributions. Their ratio will be $C/D=A$ normally distributed.
Can a ratio of random variables be normal? [duplicate] A trivial case: Let $Y$ be a normal RV, and $Z$ be a constant RV, then $X$ is going to be normally distributed. Another one: let $A,B$ normal RVs, and $C=A/B,D=1/B$ are two other RVs that belonging to
55,648
What does dotted line mean in ResNet?
It's best to understand the model in terms of individual "Residual" blocks that stack up and result in the entire architecture. As you would have probably noticed, the dotted connections only come up at a few places where there is an increase in the depth (number of channels and not the spatial dimensions). In this case, the first dotted arrow of the network represents the case where the depth is increased from 64 to 128 channels by 1x1 convolution. Consider equation (2) of the ResNet paper: $$ y = F(\textbf{x}, \{W_i\}) + W_s \textbf{x} $$ This is used when the dimensions of the mapping function $F$ and the identity function $\textbf{x}$ do not match. The way this is solved is by introducing a linear projection $W_s$. Particularly, as described on page 4 of the Resnet paper, the projection approach means that 1x1 convolutions are performed such that the spatial dimensions remain the same size but the number of channels can be increased/decreased (thereby affecting the depth). See more about 1x1 convolutions and their use here. However, another method of matching the dimensions without having an increase in the number of parameters across the skip connections is to use what is called the padding approach. Here, the input is first downsampled by using 1x1 pooling with a stride 2 and then padded with zero channels to increase the depth. Here is what the paper precisely mentions: When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). Here are some more references, in case needed - a Reddit thread, another SE question along similar lines.
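To make option (B) concrete, here is a minimal Keras sketch of my own (an illustration of the idea, not the reference ResNet implementation); the layer sizes are chosen to match the 64-to-128 transition discussed above.

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 56, 56, 64))

# Option (B): projection shortcut. A 1x1 conv with stride 2 matches both the spatial
# size and the channel depth of the main path (64 -> 128 channels).
shortcut = layers.Conv2D(128, kernel_size=1, strides=2, padding="same")(x)

main = layers.Conv2D(128, kernel_size=3, strides=2, padding="same")(x)
main = layers.Conv2D(128, kernel_size=3, strides=1, padding="same")(main)

out = layers.Add()([main, shortcut])
print(out.shape)   # (1, 28, 28, 128)
```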
What does dotted line mean in ResNet?
It's best to understand the model in terms of individual "Residual" blocks that stack up and result in the entire architecture. As you would have probably noticed, the dotted connections only come up
What does dotted line mean in ResNet? It's best to understand the model in terms of individual "Residual" blocks that stack up and result in the entire architecture. As you would have probably noticed, the dotted connections only come up at a few places where there is an increase in the depth (number of channels and not the spatial dimensions). In this case, the first dotted arrow of the network presents the case where the depth is increased from 64 to 128 channels by 1x1 convolution. Consider equation (2) of the ResNet paper: $$ y = F(\textbf{x}, \{W_i\}) + W_s \textbf{x} $$ This is used when the dimensions of the mapping function $F$ and the identity function $\textbf{x}$ do not match. The way this is solved is by introducing a linear projection $W_s$. Particularly, as described in page 4 of the Resnet paper, the projection approach means that 1x1 convolutions are performed such that the spatial dimensions remain the size but the number of channels can be increased/decreased (thereby, affecting the depth). See more about 1x1 convolutions and their use here. However, another method of matching the dimensions without having an increase in the number of parameters across the skip connections is to use what is the padding approach. Here, the input is first downsampled by using 1x1 pooling with a stride 2 and then padded with zero channels to increase the depth. Here is what the paper precisely mentions: When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). Here are some more references, in case needed - a Reddit thread, another SE question on similar lines.
What does dotted line mean in ResNet? It's best to understand the model in terms of individual "Residual" blocks that stack up and result in the entire architecture. As you would have probably noticed, the dotted connections only come up
55,649
What does dotted line mean in ResNet?
For a better understanding of the architecture, I'd suggest taking a look at an implementation. There are two types of blocks in the ResNet architecture; keras refers to them as the conv_block and the identity_block. The identity_block is the one with the straight line. It consists of three convolution layers (with Batch Norm and a ReLU). The input of the block is added to the last one right before the final activation function. The connection from the input to the add operation is called a skip connection (or a shortcut as keras calls it). The conv_block is the one with the dotted line. It too consists of three convolution layers (+BN+ReLU); these are different from the previous ones, though. The difference is that this time the skip connection passes through an independent convolution layer before being added to the output of the third convolution layer. This layer has a kernel of $1 \times 1$ and uses strides of $2$, which has the effect of changing the dimensions of its input. If you read further down the post you linked in the question, the author does an amazing job of explaining these differences, while giving details about the shapes and dimensions of each layer. I'd definitely recommend reading it, as it will help you clear up the confusion. Visually, to illustrate the difference between the two types of blocks, the two types of lines are used: the straight one and the dotted one. The straight line is a connection from the input to the output, while the dotted line passes through a convolution layer (which changes its dimensions).
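For contrast, here is a sketch of the straight-line (identity) case, again my own illustration rather than keras's actual identity_block: the input is added back unchanged, which only works because the block preserves its input shape.

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 28, 28, 128))
main = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
main = layers.Conv2D(128, 3, padding="same")(main)
out = layers.ReLU()(layers.Add()([main, x]))   # add the unmodified input back
print(out.shape)   # (1, 28, 28, 128), same as the input
```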
What does dotted line mean in ResNet?
For a better understanding of the architecture, I'd suggest taking a look at an implementation. There are two types of blocks in the ResNet architecture, keras refers to them as the conv_block and the
What does dotted line mean in ResNet? For a better understanding of the architecture, I'd suggest taking a look at an implementation. There are two types of blocks in the ResNet architecture, keras refers to them as the conv_block and the identity_block. The identity_block is the one with the straight line. It consists of three convolution layers (with Batch Norm and a ReLU). The input of the block is added to the last one right before the final activation function. The connection from the input to the add operation is called a skip connection (or a shortcut as keras calls it). The conv_block is the one with the dotted line. It two consists the three convolution layers (+BN+ReLU); these are different than the previous, though. The difference is that this time the skip connection passes through an independent convolution layer before being added to the output of the third convolution layer. This layer has a kernel of $1 \times 1$ and uses strides of $2$, which has the effect that changing the dimension of its input. If you read further down the post you linked to the question, the author does an amazing job of explaining these differences, while giving details about the shapes and dimensions of each layer. I'd definitely recommend reading it, as it will help you clear up the confusion. Visually to illustrate the difference between the two types of blocks they use the two types of lines: the straight and the dotted one. The straight line is a connection from the input to the output, while the dotted line passes through a convolution layer (which changes its dimensions).
What does dotted line mean in ResNet? For a better understanding of the architecture, I'd suggest taking a look at an implementation. There are two types of blocks in the ResNet architecture, keras refers to them as the conv_block and the
55,650
What test exactly does lm.anova perform in R?
R has comprehensive documentation. For this specific case help(anova.lm) says: Details: Specifying a single object gives a sequential analysis of variance table for that fit. That is, the reductions in the residual sum of squares as each term of the formula is added in turn are given in as the rows of a table, plus the residual sum of squares. The table will contain F statistics (and P values) comparing the mean square for the row to the residual mean square. If more than one object is specified, the table has a row for the residual degrees of freedom and sum of squares for each model. For all but the first model, the change in degrees of freedom and sum of squares is also given. (This only make statistical sense if the models are nested.) It is conventional to list the models from smallest to largest, but this is up to the user. Optionally the table can include test statistics. Normally the F statistic is most appropriate, which compares the mean square for a row to the residual sum of squares for the largest model considered. If ‘scale’ is specified chi-squared tests can be used. Mallows' Cp statistic is the residual sum of squares plus twice the estimate of sigma^2 times the residual degrees of freedom. So, for the first case in your example, instead of comparing each covariate with the intercept-only model the covariates are added to the model one by one, hence the order matters. i.e. > anova(lm(Y ~ X1 + X2, data = data)) Analysis of Variance Table Response: Y Df Sum Sq Mean Sq F value Pr(>F) X1 2 1.883 0.9413 0.1677 0.8471 X2 1 3.265 3.2648 0.5817 0.4568 Residuals 16 89.803 5.6127 > anova(lm(Y ~ X2 + X1, data = data)) Analysis of Variance Table Response: Y Df Sum Sq Mean Sq F value Pr(>F) X2 1 2.042 2.0417 0.3638 0.5549 X1 2 3.106 1.5528 0.2767 0.7619 Residuals 16 89.803 5.6127 For the second case it compares the sequence of nested models with the first one: > anova(lm(Y ~ 1, data = data), lm(Y ~ X1, data = data), lm(Y ~ X1 + X2, data = data)) Analysis of Variance Table Model 1: Y ~ 1 Model 2: Y ~ X1 Model 3: Y ~ X1 + X2 Res.Df RSS Df Sum of Sq F Pr(>F) 1 19 94.950 2 17 93.067 2 1.8825 0.1677 0.8471 3 16 89.803 1 3.2648 0.5817 0.4568
What test exactly does lm.anova perform in R?
R has comprehensive documentation. For this specific case help(anova.lm) says: Details: Specifying a single object gives a sequential analysis of variance table for that fit. That is, the red
What test exactly does lm.anova perform in R? R has comprehensive documentation. For this specific case help(anova.lm) says: Details: Specifying a single object gives a sequential analysis of variance table for that fit. That is, the reductions in the residual sum of squares as each term of the formula is added in turn are given in as the rows of a table, plus the residual sum of squares. The table will contain F statistics (and P values) comparing the mean square for the row to the residual mean square. If more than one object is specified, the table has a row for the residual degrees of freedom and sum of squares for each model. For all but the first model, the change in degrees of freedom and sum of squares is also given. (This only make statistical sense if the models are nested.) It is conventional to list the models from smallest to largest, but this is up to the user. Optionally the table can include test statistics. Normally the F statistic is most appropriate, which compares the mean square for a row to the residual sum of squares for the largest model considered. If ‘scale’ is specified chi-squared tests can be used. Mallows' Cp statistic is the residual sum of squares plus twice the estimate of sigma^2 times the residual degrees of freedom. So, for the first case in your example, instead of comparing each covariate with the intercept-only model the covariates are added to the model one by one, hence the order matters. i.e. > anova(lm(Y ~ X1 + X2, data = data)) Analysis of Variance Table Response: Y Df Sum Sq Mean Sq F value Pr(>F) X1 2 1.883 0.9413 0.1677 0.8471 X2 1 3.265 3.2648 0.5817 0.4568 Residuals 16 89.803 5.6127 > anova(lm(Y ~ X2 + X1, data = data)) Analysis of Variance Table Response: Y Df Sum Sq Mean Sq F value Pr(>F) X2 1 2.042 2.0417 0.3638 0.5549 X1 2 3.106 1.5528 0.2767 0.7619 Residuals 16 89.803 5.6127 For the second case it compares the sequence of nested models with the first one: > anova(lm(Y ~ 1, data = data), lm(Y ~ X1, data = data), lm(Y ~ X1 + X2, data = data)) Analysis of Variance Table Model 1: Y ~ 1 Model 2: Y ~ X1 Model 3: Y ~ X1 + X2 Res.Df RSS Df Sum of Sq F Pr(>F) 1 19 94.950 2 17 93.067 2 1.8825 0.1677 0.8471 3 16 89.803 1 3.2648 0.5817 0.4568
What test exactly does lm.anova perform in R? R has comprehensive documentation. For this specific case help(anova.lm) says: Details: Specifying a single object gives a sequential analysis of variance table for that fit. That is, the red
55,651
Should I use GridSearchCV on all of my data? Or just the training set?
I think it's important to step back and consider the purpose of breaking your data into a training and test set in the first place. Ultimately, your goal is to build a model that will perform the best on a new set of data, given that it is trained on the data you have. One way to evaluate how well your model will perform on a new set of data is to break off some of your data into a "test" set, and only build your model on the remaining "training" set. Then, you can apply the model to your test set, and see how well it does in its prediction, with the belief that it will perform similarly to how it would perform on a new set of data. Technically speaking, there's nothing wrong with doing grid search to tune hyperparameters on all of your data; you're free to build a model however you want. But by using grid search on all of your data, you are defeating the purpose of doing a training/test split. That's because if you do the training/test split after doing grid search on all of your data to tune hyperparameters, applying your model to the test set no longer gives an estimate of how well your model will perform on new data, since your model has seen the test set, in the sense that the data in the test set was used to tune the hyperparameters. As a result, if you do grid search on all of your data, the error on your test set will be biased low, and when you go to apply your model to new data, the error could be much higher (and likely will, except for the effects of randomness). In summary, you should only use gridsearch on the training data after doing the train/test split, if you want to use the performance of the model on the test set as a metric for how your model will perform when it really does see new data.
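A minimal sklearn sketch of this workflow (my own illustration; the dataset, model and parameter grid are arbitrary): split first, let GridSearchCV do its internal cross-validation on the training portion only, and keep the test set for the final check.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"max_depth": [5, 10, 15]},
                    cv=5, scoring="recall")
grid.fit(X_tr, y_tr)              # hyperparameters tuned on the training data only
print(grid.best_params_)
print(grid.score(X_te, y_te))     # held-out estimate of performance on new data
```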
Should I use GridSearchCV on all of my data? Or just the training set?
I think it's important to step back and consider the purpose of breaking your data into a training and test set in the first place. Ultimately, your goal is to build a model that will perform the best
Should I use GridSearchCV on all of my data? Or just the training set? I think it's important to step back and consider the purpose of breaking your data into a training and test set in the first place. Ultimately, your goal is to build a model that will perform the best on a new set of data, given that it is trained on the data you have. One way to evaluate how well your model will perform on a new set of data is to break off some of your data into a "test" set, and only build your model on the remaining "training" set. Then, you can apply the model to your test set, and see how well it does in its prediction, with the belief that it will perform similarly to how it would perform on a new set of data. Technically speaking, there's nothing wrong with doing grid search to tune hyperparameters on all of your data; you're free to build a model however you want. But by using grid search on all of your data, you are defeating the purpose of doing a training/test split. That's because if you do the training/test split after doing grid search on all of your data to tune hyperparameters, applying your model to the test set no longer gives an estimate of how well your model will perform on new data, since your model has seen the test set, in the sense that the data in the test set was used to tune the hyperparameters. As a result, if you do grid search on all of your data, the error on your test set will be biased low, and when you go to apply your model to new data, the error could be much higher (and likely will, except for the effects of randomness). In summary, you should only use gridsearch on the training data after doing the train/test split, if you want to use the performance of the model on the test set as a metric for how your model will perform when it really does see new data.
Should I use GridSearchCV on all of my data? Or just the training set? I think it's important to step back and consider the purpose of breaking your data into a training and test set in the first place. Ultimately, your goal is to build a model that will perform the best
55,652
Should I use GridSearchCV on all of my data? Or just the training set?
GridSearchCV calculates the average out-of-fold recall for each combination of parameters, and the set of parameters with the best score is the one it chooses. It is fine to use the entire dataset, because with the CV method each candidate is scored on the out-of-fold set, so you are not evaluating performance on the training data (on which the model is trained) when selecting parameters. For example: say we want to tune the maximum depth of a tree, and the values we want to test are 5, 10, 15. GridSearchCV will calculate the recall score on the out-of-fold set for all three values, and the max depth value with the best out-of-fold score will be chosen. https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
Should I use GridSearchCV on all of my data? Or just the training set?
GridSearch Cv will calculate the average of out of fold recall for each combination of parameters, the set of parameters with best score, will be chosen by Grid search CV. It is fine to use the entire
Should I use GridSearchCV on all of my data? Or just the training set? GridSearch Cv will calculate the average of out of fold recall for each combination of parameters, the set of parameters with best score, will be chosen by Grid search CV. It is fine to use the entire dataset, as you are using Cv method, which will check the score on out of fold set, hence you are not evaluating performance on Training data (on which model is trained) for parameter selection. for example: we want to tune max depth of tree, let's say maximum depth parameter we want to test are 5,10,15. Grid Search Cv will calculate recall score on out of fold set for all three value. The max depth value corresponding to best score on out of fold set will be chosen. https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
Should I use GridSearchCV on all of my data? Or just the training set? GridSearch Cv will calculate the average of out of fold recall for each combination of parameters, the set of parameters with best score, will be chosen by Grid search CV. It is fine to use the entire
55,653
What is the mean and standard deviation of $5x^3 + 8$ when $X$ is the sum of numbers of two fair dice
Make a table and apply the definitions. The expectation is found by multiplying each possible value by its chance and adding up those results. The residuals are the differences between each possible value and the expectation. The variance is the expectation of the squared residuals. The standard deviation is the square root of the variance. To illustrate, imagine a die with three equally likely sides bearing the numbers $1,$ $2,$ and $3.$ You can work out the chances of the sum on two independent dice, which ranges from $2$ through $6.$ Tabulate these values $X,$ their chances $p(X),$ and then add a column for $5X^3+8.$ $$\begin{array}{rrr} X & p(X) & 5X^3 + 8 \\ \hline 2 & \frac{1}{9} & 48 \\ 3 & \frac{2}{9} & 143 \\ 4 & \frac{3}{9} & 328 \\ 5 & \frac{2}{9} & 633 \\ 6 & \frac{1}{9} & 1088 \end{array}$$ Thus $$\eqalign{ \mu &= E[5X^3+8] \\&= (5(2)^3+8) \Pr(X=2) + (5(3)^3+8) \Pr(X=3) + \cdots + (5(6)^3+8)\Pr(X=6) \\ &= 408}.$$ Use this to create a new column of the differences between each value of $5X^3+8$ and their expectation, and then a fifth column of their squares: $$\begin{array}{rrrcc} X & p(X) & 5X^3 + 8 & 5X^3 + 8 - \mu & (5X^3 + 8-\mu)^2\\ \hline 2 & \frac{1}{9} & 48 & -360& 129600 \\ 3 & \frac{2}{9} & 143 & -265 &70225 \\ 4 & \frac{3}{9} & 328 & \vdots&\vdots\\ 5 & \frac{2}{9} & 633 & \vdots&\vdots\\ 6 & \frac{1}{9} & 1088 & 680 & 462400 \end{array}$$ Now find the expectation of the last column using the same formula as before: multiply each value by its probability and add them up. This is the variance. It's pretty cumbersome, isn't it? You can get the correct answer this way for the six-sided dice (with about twice as much work involving much larger numbers), but if you know enough algebra you can identify the patterns and simplify the work. If you have some basic programming skills you can create a small function to do all the calculations for you. One conclusion is clear: this is not a good exercise for helping you understand expectations and variances; it's mainly a nuisance question that requires your arithmetical skills and patience. That's why this kind of question has largely disappeared from statistical pedagogy over the last 40 years. If you're self-learning from an old textbook, then consider adopting a more recent one. Finally, writing $E[X]=\bar{X}= 4,$ you will discover that the expectation of $5X^3+8$ (equal to about $408$) is not $5\bar{X}^3+8$ (which is only $328$). In this case, there is no rule for computing with expectations that relates these quantities, but there is an inequality (Jensen's Inequality) that implies the first value cannot be less than the second.
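Picking up the suggestion about a small function: here is a Python sketch of mine that does the whole tabulation. The first call reproduces the $408$ from the three-sided illustration above; the second answers the original six-sided question.

```python
import numpy as np
from itertools import product

def mean_sd_of_g(faces, g):
    """Mean and SD of g(X), where X is the sum of two independent fair dice."""
    sums = np.array([a + b for a, b in product(faces, repeat=2)], dtype=float)
    vals = g(sums)                          # every (equally likely) outcome
    mu = vals.mean()
    sd = np.sqrt(((vals - mu) ** 2).mean())
    return mu, sd

print(mean_sd_of_g([1, 2, 3], lambda x: 5 * x**3 + 8))    # three-sided example: mean 408
print(mean_sd_of_g(range(1, 7), lambda x: 5 * x**3 + 8))  # the original six-sided question
```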
What is the mean and standard deviation of $5x^3 + 8$ when $X$ is the sum of numbers of two fair dic
Make a table and apply the definitions. The expectation is found by multiplying each possible value by its chance and adding up those results. The residuals are the differences between each possible
What is the mean and standard deviation of $5x^3 + 8$ when $X$ is the sum of numbers of two fair dice Make a table and apply the definitions. The expectation is found by multiplying each possible value by its chance and adding up those results. The residuals are the differences between each possible value and the expectation. The variance is the expectation of the squared residuals. The standard deviation is the square root of the variance. To illustrate, imagine a die with three equally likely sides bearing the numbers $1,$ $2,$ and $3.$ You can work out the chances of the sum on two independent dice, which ranges from $1$ through $6.$ Tabulate these values $X,$ their chances $p(X),$ and then add a column for $5x^3+8.$ $$\begin{array}{rrr} X & p(X) & 5X^3 + 8 \\ \hline 2 & \frac{1}{9} & 48 \\ 3 & \frac{2}{9} & 143 \\ 4 & \frac{3}{9} & 328 \\ 5 & \frac{2}{9} & 633 \\ 6 & \frac{1}{9} & 1088 \end{array}$$ Thus $$\eqalign{ \mu &= E[5X^3+8] \\&= (5(2)^3+8) \Pr(X=2) + (5(3)^3+8) \Pr(X=3) + \cdots + (5(6)^3+8)\Pr(X=6) \\ &= 408}.$$ Use this to create a new column of the differences between each value of $5X^3+8$ and their expectation, and then a fifth column of their squares: $$\begin{array}{rrrcc} X & p(X) & 5X^3 + 8 & 5X^3 + 8 - \mu & (5X^3 + 8-\mu)^2\\ \hline 2 & \frac{1}{9} & 48 & -360& 129600 \\ 3 & \frac{2}{9} & 143 & -265 &70225 \\ 4 & \frac{3}{9} & 328 & \vdots&\vdots\\ 5 & \frac{2}{9} & 633 & \vdots&\vdots\\ 6 & \frac{1}{9} & 1088 & 680 & 462400 \end{array}$$ Now find the expectation of the last column using the same formula as before: multiply each value by its probability and add them up. This is the variance. It's pretty cumbersome, isn't it? You can get the correct answer this way for the six-sided dice (with about twice as work involving much larger numbers), but if you know enough algebra you can identify the patterns and simplify the work. If you have some basic programming skills you can create a small function to do all the calculations for you. One conclusion is clear: this is not a good exercise for helping you understand expectations and variances; it's mainly a nuisance question that requires your arithmetical skills and patience. That's why this kind of question has largely disappeared from statistical pedagogy over the last 40 years. If you're self-learning from an old textbook, then consider adopting a more recent one. Finally, writing $E[X]=\bar{X}= 4,$ you will discover that the expectation of $5X^3+8$ (equal to about $408$) is not $5\bar{X}^3+8$ (which is only $328$). In this case, There is no rule of computing with expectations that relates these quantities, but there is an inequality (Jensen's Inequality) that implies the first value cannot be less than the second.
What is the mean and standard deviation of $5x^3 + 8$ when $X$ is the sum of numbers of two fair dic Make a table and apply the definitions. The expectation is found by multiplying each possible value by its chance and adding up those results. The residuals are the differences between each possible
55,654
What is the mean and standard deviation of $5x^3 + 8$ when $X$ is the sum of numbers of two fair dice
$X^3=[(X-\mu)+\mu]^3=(X-\mu)^3+3(X-\mu)^2.\mu+3(X-\mu).\mu^2+\mu^3$ Hence $E(X^3) = E(X-\mu)^3+3\mu E(X-\mu)^2+3\mu^2 E(X-\mu)+\mu^3=3\mu \sigma^2+\mu^3$. (The 1st term of the four is $0$ because of symmetry + finite support, the 3rd because $E(X)=\mu$) Now $\mu=7$ and $\sigma^2=\frac{35}{6}$, as you have already found, so you can get $E(X^3)$ directly from them. From there you just use linearity of expectation to get $E(5X^3+8)$.
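A quick numerical check of this shortcut in Python (my own sketch):

```python
import numpy as np
from itertools import product

# Check E[X^3] = 3*mu*sigma^2 + mu^3 for X = sum of two fair six-sided dice.
sums = np.array([a + b for a, b in product(range(1, 7), repeat=2)], dtype=float)
mu, var = sums.mean(), sums.var()          # 7 and 35/6
print((sums ** 3).mean())                  # direct E[X^3]
print(3 * mu * var + mu ** 3)              # same value via the identity
print(5 * (sums ** 3).mean() + 8)          # E[5X^3 + 8] by linearity of expectation
```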
What is the mean and standard deviation of $5x^3 + 8$ when $X$ is the sum of numbers of two fair dic
$X^3=[(X-\mu)+\mu]^3=(X-\mu)^3+3(X-\mu)^2.\mu+3(X-\mu).\mu^2+\mu^3$ Hence $E(X^3) = E(X-\mu)^3+3\mu E(X-\mu)^2+3\mu^2 E(X-\mu)+\mu^3=3\mu \sigma^2+\mu^3$. (The 1st term of the four is $0$ because of s
What is the mean and standard deviation of $5x^3 + 8$ when $X$ is the sum of numbers of two fair dice $X^3=[(X-\mu)+\mu]^3=(X-\mu)^3+3(X-\mu)^2.\mu+3(X-\mu).\mu^2+\mu^3$ Hence $E(X^3) = E(X-\mu)^3+3\mu E(X-\mu)^2+3\mu^2 E(X-\mu)+\mu^3=3\mu \sigma^2+\mu^3$. (The 1st term of the four is $0$ because of symmetry + finite support, the 3rd because $E(X)=\mu$) Now $\mu=7$ and $\sigma^2=\frac{35}{6}$, as you have already found, so you can get $E(X^3)$ directly from them. From there you just use linearity of expectation to get $E(5X^3+8)$.
What is the mean and standard deviation of $5x^3 + 8$ when $X$ is the sum of numbers of two fair dic $X^3=[(X-\mu)+\mu]^3=(X-\mu)^3+3(X-\mu)^2.\mu+3(X-\mu).\mu^2+\mu^3$ Hence $E(X^3) = E(X-\mu)^3+3\mu E(X-\mu)^2+3\mu^2 E(X-\mu)+\mu^3=3\mu \sigma^2+\mu^3$. (The 1st term of the four is $0$ because of s
55,655
In Bayesian inference, why is p(D) sometimes called "the evidence"? [duplicate]
It is called the marginal likelihood because: this constant value $p(D)$ is what you obtain when you integrate over all possible values of $H$, leaving you with the probability of observing $D$ under your model. It is called the normalizing constant because: this constant value $p(D)$ normalizes $p(D|H)p(H)$, making it a proper distribution that integrates to one. It is called the (model) evidence because: this constant $p(D)$ can act as a measure of the quality of fit of your model. Informally, in some sense it maps your model (your choice of likelihood and prior) and observations to a single value that describes the probability of your observation. The higher it is, the more suitable your model is for the data; hence "model evidence". It acts as some evidence supporting your claim that the data is generated under your model.
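A toy numerical illustration (entirely made up, just to show the roles of the three names): two candidate hypotheses about a coin, some binomial data, and $p(D)$ computed as the marginalization that also serves as the normalizing constant.

```python
import numpy as np
from scipy.stats import binom

# Hypotheses: heads probability 0.5 vs 0.8; data D = "7 heads in 10 flips" (made up).
prior = np.array([0.5, 0.5])
lik = np.array([binom.pmf(7, 10, 0.5), binom.pmf(7, 10, 0.8)])

evidence = np.sum(lik * prior)       # p(D): marginal likelihood / normalizing constant
posterior = lik * prior / evidence   # a proper distribution summing to one
print(evidence, posterior)
```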
In Bayesian inference, why is p(D) sometimes called "the evidence"? [duplicate]
It is called the marginal likelihood because: this constant value $p(D)$ is what you obtain when you integrate over all possible values of $H$, leaving you with the probability of observing $D$ under
In Bayesian inference, why is p(D) sometimes called "the evidence"? [duplicate] It is called the marginal likelihood because: this constant value $p(D)$ is what you obtain when you integrate over all possible values of $H$, leaving you with the probability of observing $D$ under your model. It is called the normalizing constant because: this constant value $p(D)$ normalizes $p(D|H)p(H)$, making it a proper distribution that integrates to one. It is called the (model) evidence because: this constant $p(D)$ can act as a measure of quality of fit of your model. Informally, in some sense it maps your model (your choice of likelihood and prior) and observations to a single value that describes the probability of your observation. The higher the more suitable your model is for the data, hence "model evidence". It acts as some evidence supporting your claim that the data is generated under your model.
In Bayesian inference, why is p(D) sometimes called "the evidence"? [duplicate] It is called the marginal likelihood because: this constant value $p(D)$ is what you obtain when you integrate over all possible values of $H$, leaving you with the probability of observing $D$ under
55,656
In Bayesian inference, why is p(D) sometimes called "the evidence"? [duplicate]
Notice that you can multiply both sides by $p(D)$: $$p(H|D)p(D) = p(D|H)p(H)$$ This directly corresponds to the graphical proof of Bayes' Theorem - if all your variables are independent, you get the same slice of the pie (total probability) no matter in which order you make the cuts (conditioning on one variable). In other words, $p(D|H)$ is the space of events supported (with various probabilities) by the hypothesis, and the entire RHS just normalizes that fraction relative to how much of the space of all hypotheses under consideration is occupied by the hypothesis that can generate your data. Similarly, $p(H|D)p(D)$ is the space of all considered hypotheses supported by the data, normalized to the probability of observing that data. Calling $p(D)$ 'evidence' is therefore a bit of a mental shortcut; more accurately, it would be 'the probability of seeing the evidence', but that's a bit of a mouthful. $D$ would be a better match for the term - it's the set of events that supports or refutes the hypothesis, which is, more or less, the intuitive, real-world definition of evidence. For completeness' sake - please note the 'all's in the bolded parts - the formulation of the theorem you've used, while correct, can only be naively applied in a binary comparison. Otherwise, you need to sum/integrate over all the components - that's what had been holding large-scale Bayesian inference back for a long time.
In Bayesian inference, why is p(D) sometimes called "the evidence"? [duplicate]
Notice that you can multiply both sides by $p(D)$: $$p(H|D)p(D) = p(D|H)p(H)$$ This directly corresponds to the graphical proof of Bayes' Theorem - if all your variables are independent, you get the
In Bayesian inference, why is p(D) sometimes called "the evidence"? [duplicate] Notice that you can multiply both sides by $p(D)$: $$p(H|D)p(D) = p(D|H)p(H)$$ This directly corresponds to the graphical proof of Bayes' Theorem - if all your variables are independent, you get the same slice of the pie (total probability) no matter in which order you make the cuts (conditioning on one variable). In other words, $p(D|H)$ is the space of events supported (with various probabilities) by the hypothesis, and the entire RHS just normalizes that fraction relative to how much of the space of all hypotheses under consideration is occupied by the hypothesis that can generate your data. Similarly, $p(H|D)p(D)$ is the space of all considered hypotheses supported by the data, normalized to the probability of observing that data. Calling $p(D)$ 'evidence' is therefore a bit of a mental shortcut; more accurately, it would be 'the probability of seeing the evidence', but that's a bit of a mouthful. $D$ would be a better match for the term - it's the set of events that supports or refutes the hypothesis, which is, more or less, the intuitive, real-world definition of evidence. For completeness' sake - please note the 'all's in the bolded parts - the formulation of the theorem you've used, while correct, can only be naively applied in a binary comparison. Otherwise, you need to sum/integrate over all the components - that's what had been holding large-scale Bayesian inference back for a long time.
In Bayesian inference, why is p(D) sometimes called "the evidence"? [duplicate] Notice that you can multiply both sides by $p(D)$: $$p(H|D)p(D) = p(D|H)p(H)$$ This directly corresponds to the graphical proof of Bayes' Theorem - if all your variables are independent, you get the
55,657
In Bayesian inference, why is p(D) sometimes called "the evidence"? [duplicate]
I probably wouldn't call it "evidence"; however, I think it means "all the information in the data", which in Bayesian statistics is codified as $P(D)$, marginalised over all hypotheses/distributions deemed possible.
In Bayesian inference, why is p(D) sometimes called "the evidence"? [duplicate]
I probably wouldn't call it "evidence", however I think it means "all the information in the data", which in Bayesian statistics is codified as $P(D)$ marginalised over all hypotheses/distributions de
In Bayesian inference, why is p(D) sometimes called "the evidence"? [duplicate] I probably wouldn't call it "evidence", however I think it means "all the information in the data", which in Bayesian statistics is codified as $P(D)$ marginalised over all hypotheses/distributions deemed possible.
In Bayesian inference, why is p(D) sometimes called "the evidence"? [duplicate] I probably wouldn't call it "evidence", however I think it means "all the information in the data", which in Bayesian statistics is codified as $P(D)$ marginalised over all hypotheses/distributions de
55,658
For simple linear regression, is $\beta_1$ linear in $y_i$?
The second term is zero because: $$\sum_i (x_i - \bar x) = \sum_i x_i - n \bar x = n \bar x - n \bar x = 0$$ This is a trick that comes up pretty often, so it's nice to have an eye for spotting it.
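A one-line numerical confirmation of the trick (my own sketch):

```python
import numpy as np

x = np.array([1.2, 3.4, 2.2, 5.0, 0.7])   # any sample will do
print(np.sum(x - x.mean()))               # 0 up to floating-point rounding
```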
For simple linear regression, is $\beta_1$ linear in $y_i$?
The second term is zero because: $$\sum_i (x_i - \bar x) = \sum_i x_i - n \bar x = n \bar x - n \bar x = 0$$ This is a trick that comes up pretty often, so it's nice to have an eye for spotting it.
For simple linear regression, is $\beta_1$ linear in $y_i$? The second term is zero because: $$\sum_i (x_i - \bar x) = \sum_i x_i - n \bar x = n \bar x - n \bar x = 0$$ This is a trick that comes up pretty often, so it's nice to have an eye for spotting it.
For simple linear regression, is $\beta_1$ linear in $y_i$? The second term is zero because: $$\sum_i (x_i - \bar x) = \sum_i x_i - n \bar x = n \bar x - n \bar x = 0$$ This is a trick that comes up pretty often, so it's nice to have an eye for spotting it.
55,659
Conditional expectation of random variables defined off of each other
Your first question is key, so let's focus on it. You are concerned about a bivariate random variable $(X_{n-1},X_n)$ with a probability distribution somehow defined by giving $X_{n-1}$ a distribution and then defining the distribution function of $X_n$ in terms of the random variable $X_{n-1}.$ There are many subtleties involved, so I write the following in the hope that exploring the consequences of the definitions in detail and carrying out the calculations explicitly will reveal what actually is being done when the distribution of one random variable is defined in terms of another random variable. We will see that this makes sense, but by means of the first example in the question (the uniform distributions) we will see that it does not uniquely determine the random variables themselves. You are working with conditional expectations, so let's begin by expressing probabilities in terms of expectations. Let $x\in\mathbb{R}$ be a value at which we wish to compute the distribution function of $X_n,$ $$F_n(x) = \Pr(X_n \le x) = \phi(x;X_{n-1})$$ where $\phi$ defines the distribution of $X_n$ in terms of $X_{n-1}.$ For instance, assuming $\Pr(X_n\gt 0) = 1$ for simplicity and taking $x\gt 0,$ $\phi$ might be a uniform distribution function $$\phi(x;X_{n-1}) = \frac{\min(X_{n-1},x)}{X_{n-1}} = \min\left(1, \frac{x}{X_{n-1}}\right) .$$ The probability determined by $F_n$ can be expressed in terms of the indicator function $\mathcal{I}(X_n\le x)$ as $$\Pr(X_n \le x) = E\left[\mathcal{I}(X_n\le x)\right].$$ Because a property of $X_n$ has been expressed in terms of $X_{n-1},$ to be explicit we must consider this to be a conditional expectation, $$E\left[I(X_n\le x)\right] = E\left[\mathcal{I}(X_n\le x)\mid X_{n-1}\right].$$ We may assemble all the foregoing into the formula $$ E\left[ \mathcal{I}(X_n\le x)\mid X_{n-1}\right] = \phi(x;X_{n-1}).\tag{*}$$ Does this make sense? Let's check it against a definition of conditional expectation. $X_n$ is a random variable with respect to some sigma algebra $\mathfrak{F}_n$ defined on a probability space $(\Omega, \mathfrak{F}_n, \mathbb{P}).$ Let $\mathfrak{F}_{n-1}\subset \mathfrak{F}_n$ be the sigma algebra generated by the conditioning variable $X_{n-1}.$ Then the conditional expectation in $(*)$ is a $\mathfrak{F}_{n-1}$-measurable random variable, allowing us to write expressions like $$E\left[ \mathcal{I}(X_n\le x)\mid X_{n-1}\right](\omega)$$ to refer to its value on an outcome $\omega\in\Omega.$ Notice that the right hand side of $(*)$ is also $\mathfrak{F}_{n-1}$-measurable provided $y \to \phi(x;y)$ is a measurable function for every $x,$ because it's a function of $X_{n-1}.$ So far so good: at least $(*)$ is equating two comparable mathematical objects! 
The defining property of a conditional expectation is $$\int_A E\left[ \mathcal{I}(X_n\le x)\mid X_{n-1}\right](\omega)\,\mathrm{d}\mathbb{P}(\omega) = \int_A \mathcal{I}(X_n(\omega)\le x)\, \mathrm{d}\mathbb{P}(\omega)\tag{**}$$ for all $\mathfrak{F}_{n-1}$-measurable sets $A.$ Substituting $(*)$ gives $$\int_A \phi(x;X_{n-1}(\omega))\,\mathrm{d}\mathbb{P}(\omega) = \int_A \mathcal{I}(X_n(\omega)\le x)\, \mathrm{d}\mathbb{P}(\omega)$$ for all events $A\in\mathfrak{F}_{n-1}.$ Thus, specifying $\phi$ amounts to specifying all possible values of the integral on the right for all $X_{n-1}$-measurable sets $A.$ That's enough to determine $X_n$ up to distribution because if $Y$ is another variable with this property, then for all $A\in\mathfrak{F}_{n-1},$ $$0 = \int_A ( \mathcal{I}(X_n(\omega)\le x) - \mathcal{I}(Y(\omega)\le x))\, \mathrm{d}\mathbb{P}(\omega)$$ implies the two indicator functions are almost surely ($\mathfrak{F}_{n-1}$) equal. In particular, setting $A=\Omega$ gives $$0 = \Pr(X_n \le x) - \Pr(Y \le x),$$ showing that $X_n$ and $Y$ are identically distributed. A counterexample might help reinforce the fact that the random variable $X_n$ is usually not uniquely determined by $\phi$ and $X_{n-1}.$ Let $\Omega = [0,1]\times [0,1]$ with the usual Borel sigma algebra and Lebesgue measure $\mathbb P.$ I will exploit the obvious fact that the map $$\iota: \Omega\to\Omega;\quad \iota(\omega_1,\omega_2) = (\omega_1,1-\omega_2)$$ (which merely flips the square $\Omega$ upside down) preserves all the essential properties of this probability space. The random variable defined by $$X_{n-1}(\omega_1,\omega_2) = \omega_1$$ has a Uniform$(0,1)$ distribution. The random variables $$X_n(\omega_1,\omega_2) = \omega_1\omega_2$$ and $$X_n^{(2)}(\omega_1,\omega_2) = \omega_1(1-\omega_2)$$ are identically distributed because they are related by $\iota.$ To compute their common conditional distribution, note that the sigma algebra $\sigma(X_{n-1}) = \mathfrak{F}_{n-1}$ is generated by the sets of the form $$\Omega_{t} = \{(\omega_1,\omega_2)\mid \omega_1\le t\}.$$ Visualize these as slicing the square $[0,1]\times[0,1]$ vertically at the location $t;$ the two halves are measurable in the subalgebra and all $\mathfrak{F}_{n-1}$-measurable sets consist of collections built out of such slices. It suffices therefore to let $A = \Omega_{t}$ for arbitrary $t$ and compute $$\eqalign{\int_A \mathcal{I}(X_n(\omega)\le x)\, \mathrm{d}\mathbb{P}(\omega) &= \int_0^1\int_0^t \mathcal{I}(\omega_1\omega_2 \le x)\, \mathrm{d}\omega_1\,\mathrm{d}\omega_2\\ &= \int_0^1\int_0^t \min\left(1,\frac{x}{\omega_1}\right)\, \mathrm{d}\omega_1\,\mathrm{d}\omega_2\\ &= \int_0^1\int_0^t \phi(x, \omega_1)\, \mathrm{d}\omega_1\,\mathrm{d}\omega_2\\ &= \int_0^1\int_0^t \phi(x, X_{n-1}(\omega))\, \mathrm{d}\omega_1\,\mathrm{d}\omega_2\\ &= \int_A \phi(x, X_{n-1}(\omega))\, \mathrm{d}\mathbb{P}. }$$ Everything in this chain of equalities merely substitutes a definition or previous equality except for the first and last lines, which apply Fubini's Theorem, and the move from the first line to the second, which is simple arithmetic. Thus, the integrand at the end must be the conditional expectation because it has been seen to satisfy the defining property $(**):$ $$E\left[ \mathcal{I}(X_n\le x)\mid X_{n-1}\right] = \phi(x, X_{n-1}(\omega)).$$ Although this equation is true of both $X_{n}$ and $X_{n}^{(2)},$ these are distinct random variables. 
Indeed, $$X_{n}(\omega_1,\omega_2) - X_{n}^{(2)}(\omega_1,\omega_2) = \omega_1\omega_2 - \omega_1(1-\omega_2) = \omega_1(2\omega_2 - 1)$$ is almost surely nonzero.
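A small Monte Carlo illustration of the counterexample (my own Python sketch): the two candidate variables share the same unconditional distribution, and the answer shows they share the same conditional law given $X_{n-1}$, yet they almost never take the same value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
w1, w2 = rng.random(200_000), rng.random(200_000)   # omega uniform on the unit square

X_prev = w1                 # X_{n-1}
X_a = w1 * w2               # X_n
X_b = w1 * (1 - w2)         # X_n^(2)

print(stats.ks_2samp(X_a, X_b).pvalue)   # large: identical distributions
print(np.mean(X_a == X_b))               # ~0: the variables themselves almost never agree
```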
Conditional expectation of random variables defined off of each other
Your first question is key, so let's focus on it. You are concerned about a bivariate random variable $(X_{n-1},X_n)$ with a probability distribution somehow defined by giving $X_{n-1}$ a distributio
Conditional expectation of random variables defined off of each other Your first question is key, so let's focus on it. You are concerned about a bivariate random variable $(X_{n-1},X_n)$ with a probability distribution somehow defined by giving $X_{n-1}$ a distribution and then defining the distribution function of $X_n$ in terms of the random variable $X_{n-1}.$ There are many subtleties involved, so I write the following in the hope that exploring the consequences of the definitions in detail and carrying out the calculations explicitly will reveal what actually is being done when the distribution of one random variable is defined in terms of another random variable. We will see that this makes sense, but by means of the first example in the question (the uniform distributions) we will see that it does not uniquely determine the random variables themselves. You are working with conditional expectations, so let's begin by expressing probabilities in terms of expectations. Let $x\in\mathbb{R}$ be a value at which we wish to compute the distribution function of $X_n,$ $$F_n(x) = \Pr(X_n \le x) = \phi(x;X_{n-1})$$ where $\phi$ defines the distribution of $X_n$ in terms of $X_{n-1}.$ For instance, assuming $\Pr(X_n\gt 0) = 1$ for simplicity and taking $x\gt 0,$ $\phi$ might be a uniform distribution function $$\phi(x;X_{n-1}) = \frac{\min(X_{n-1},x)}{X_{n-1}} = \min\left(1, \frac{x}{X_{n-1}}\right) .$$ The probability determined by $F_n$ can be expressed in terms of the indicator function $\mathcal{I}(X_n\le x)$ as $$\Pr(X_n \le x) = E\left[\mathcal{I}(X_n\le x)\right].$$ Because a property of $X_n$ has been expressed in terms of $X_{n-1},$ to be explicit we must consider this to be a conditional expectation, $$E\left[I(X_n\le x)\right] = E\left[\mathcal{I}(X_n\le x)\mid X_{n-1}\right].$$ We may assemble all the foregoing into the formula $$ E\left[ \mathcal{I}(X_n\le x)\mid X_{n-1}\right] = \phi(x;X_{n-1}).\tag{*}$$ Does this make sense? Let's check it against a definition of conditional expectation. $X_n$ is a random variable with respect to some sigma algebra $\mathfrak{F}_n$ defined on a probability space $(\Omega, \mathfrak{F}_n, \mathbb{P}).$ Let $\mathfrak{F}_{n-1}\subset \mathfrak{F}_n$ be the sigma algebra generated by the conditioning variable $X_{n-1}.$ Then the conditional expectation in $(*)$ is a $\mathfrak{F}_{n-1}$-measurable random variable, allowing us to write expressions like $$E\left[ \mathcal{I}(X_n\le x)\mid X_{n-1}\right](\omega)$$ to refer to its value on an outcome $\omega\in\Omega.$ Notice that the right hand side of $(*)$ is also $\mathfrak{F}_{n-1}$-measurable provided $y \to \phi(x;y)$ is a measurable function for every $x,$ because it's a function of $X_{n-1}.$ So far so good: at least $(*)$ is equating two comparable mathematical objects! 
The defining property of a conditional expectation is $$\int_A E\left[ \mathcal{I}(X_n\le x)\mid X_{n-1}\right](\omega)\,\mathrm{d}\mathbb{P}(\omega) = \int_A \mathcal{I}(X_n(\omega)\le x)\, \mathrm{d}\mathbb{P}(\omega)\tag{**}$$ for all $\mathfrak{F}_{n-1}$-measurable sets $A.$ Substituting $(*)$ gives $$\int_A \phi(x;X_{n-1}(\omega))\,\mathrm{d}\mathbb{P}(\omega) = \int_A \mathcal{I}(X_n(\omega)\le x)\, \mathrm{d}\mathbb{P}(\omega)$$ for all events $A\in\mathfrak{F}_{n-1}.$ Thus, specifying $\phi$ amounts to specifying all possible values of the integral on the right for all $X_{n-1}$-measurable sets $A.$ That's enough to determine $X_n$ up to distribution because if $Y$ is another variable with this property, then for all $A\in\mathfrak{F}_{n-1},$ $$0 = \int_A ( \mathcal{I}(X_n(\omega)\le x) - \mathcal{I}(Y(\omega)\le x))\, \mathrm{d}\mathbb{P}(\omega)$$ implies the two indicator functions are almost surely ($\mathfrak{F}_{n-1}$) equal. In particular, setting $A=\Omega$ gives $$0 = \Pr(X_n \le x) - \Pr(Y \le x),$$ showing that $X_n$ and $Y$ are identically distributed. A counterexample might help reinforce the fact that the random variable $X_n$ is usually not uniquely determined by $\phi$ and $X_{n-1}.$ Let $\Omega = [0,1]\times [0,1]$ with the usual Borel sigma algebra and Lebesgue measure $\mathbb P.$ I will exploit the obvious fact that the map $$\iota: \Omega\to\Omega;\quad \iota(\omega_1,\omega_2) = (\omega_1,1-\omega_2)$$ (which merely flips the square $\Omega$ upside down) preserves all the essential properties of this probability space. The random variable defined by $$X_{n-1}(\omega_1,\omega_2) = \omega_1$$ has a Uniform$(0,1)$ distribution. The random variables $$X_n(\omega_1,\omega_2) = \omega_1\omega_2$$ and $$X_n^{(2)}(\omega_1,\omega_2) = \omega_1(1-\omega_2)$$ are identically distributed because they are related by $\iota.$ To compute their common conditional distribution, note that the sigma algebra $\sigma(X_{n-1}) = \mathfrak{F}_{n-1}$ is generated by the sets of the form $$\Omega_{t} = \{(\omega_1,\omega_2)\mid \omega_1\le t\}.$$ Visualize these as slicing the square $[0,1]\times[0,1]$ vertically at the location $t;$ the two halves are measurable in the subalgebra and all $\mathfrak{F}_{n-1}$-measurable sets consist of collections built out of such slices. It suffices therefore to let $A = \Omega_{t}$ for arbitrary $t$ and compute $$\eqalign{\int_A \mathcal{I}(X_n(\omega)\le x)\, \mathrm{d}\mathbb{P}(\omega) &= \int_0^1\int_0^t \mathcal{I}(\omega_1\omega_2 \le x)\, \mathrm{d}\omega_1\,\mathrm{d}\omega_2\\ &= \int_0^1\int_0^t \min\left(1,\frac{x}{\omega_1}\right)\, \mathrm{d}\omega_1\,\mathrm{d}\omega_2\\ &= \int_0^1\int_0^t \phi(x, \omega_1)\, \mathrm{d}\omega_1\,\mathrm{d}\omega_2\\ &= \int_0^1\int_0^t \phi(x, X_n(\omega))\, \mathrm{d}\omega_1\,\mathrm{d}\omega_2\\ &= \int_A \phi(x, X_n(\omega))\, \mathrm{d}\mathbb{P}. }$$ Everything in this chain of equalities merely substitutes a definition or previous equality except for the first and last lines, which apply Fubini's Theorem, and the move from the first line to the second, which is simple arithmetic. Thus, the integrand at the end must be the conditional expectation because it has been seen to satisfy the defining property $(**):$ $$E\left[ \mathcal{I}(X_n\le x)\mid X_{n-1}\right] = \phi(x, X_n(\omega)).$$ Although this equation is true of both $X_{n}$ and $X_{n}^{(2)},$ these are distinct random variables. 
Indeed, $$X_{n}(\omega_1,\omega_2) - X_{n}^{(2)}(\omega_1,\omega_2) = \omega_1\omega_2 - \omega_1(1-\omega_2) = \omega_1(2\omega_2 - 1)$$ is almost surely nonzero.
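A short simulation can make the counterexample concrete. The following R sketch (added here for illustration; the names w1, w2 and the sample size are my own choices) checks that $X_n = \omega_1\omega_2$ and $X_n^{(2)} = \omega_1(1-\omega_2)$ have matching quantiles while disagreeing at almost every outcome.
set.seed(1)
w1 <- runif(1e5)                      # the coordinate playing the role of X_{n-1}
w2 <- runif(1e5)
Xn  <- w1 * w2                        # X_n(omega)
Xn2 <- w1 * (1 - w2)                  # X_n^(2)(omega)
round(rbind(Xn = quantile(Xn, (1:9)/10), Xn2 = quantile(Xn2, (1:9)/10)), 3)  # nearly identical quantiles
mean(abs(Xn - Xn2) > 1e-6)            # close to 1: the two variables differ on almost every outcome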
Conditional expectation of random variables defined off of each other Your first question is key, so let's focus on it. You are concerned about a bivariate random variable $(X_{n-1},X_n)$ with a probability distribution somehow defined by giving $X_{n-1}$ a distributio
55,660
Conditional expectation of random variables defined off of each other
Definitions Definition 1: If $\mathcal F \subseteq \mathcal G$ are two $\sigma$-fields, and $X$ a $\mathcal G$-measurable integrable random variable, then $\mathbb E[X | \mathcal F]$ is defined as any $\mathcal F$-measurable random variable $Y$, such that $\mathbb E[Y;A]=\mathbb E[X;A]$ for every $A \in \mathcal F$. Here $\mathbb E[X;A]$ is a notation for $\int_AX\,d\mathbb P$. Definition 2: We define conditional probability as $\mathbb P(A | \mathcal F)= \mathbb E[1_A|\mathcal F]$. https://math.stackexchange.com/questions/2373097/condition-on-sigma-algebra/2373105#2373105 Writing $X_n\sim Unif(0,X_{n-1})$ means $X_n|X_{n-1}\sim Unif(0,X_{n-1})$, which in turn means $X_n|X_{n-1}=t\sim Unif(0,t)$; rigorously, this is a conditional probability. How is it defined? It is defined through conditional expectation (see http://www2.stat.duke.edu/courses/Fall17/sta711/lec/wk-10.pdf): $$P(X_n\leq a|X_{n-1})=E(1_{X_n\leq a}|X_{n-1})=E(1_{X_n\leq a}|\sigma(X_{n-1}))$$ This type of conditional probability is the unified definition of which the other definitions (continuous variables, discrete variables, events, mixed variables) are special cases. (It acts like a patch for the elementary definition $P(A|B)=\frac{P(AB)}{P(B)}$, which breaks down for continuous variables, as in this situation: $B=\{X_{n-1}=t\}$, so $P(B)=0$.) We have seen this device (defining one variable conditionally on another) before, for example in the Bayesian approach, $$X|\mu \sim N(\mu , 1)$$ with $\mu\sim N(0,1)$, and in the Metropolis-Hastings method, where each proposal is conditioned on the previous observation. "Does it mean that for every $\omega \in \Omega$ $X_n(\omega)∼Unif(0,X_{n−1}(\omega))$?" Only for almost every $\omega$, not necessarily for all of them. Finally, $$E(X_{n+1}|A_n) = E(X_{n+1}|\sigma(X_1,\cdots ,X_n)) = E(X_{n+1}|X_1,\cdots ,X_n)= E(X_{n+1}|X_n)= \frac{X_n}{2}$$ and all the other questions are solved similarly.
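As a quick numerical illustration (a sketch I am adding, with an arbitrary starting value $X_0=1$ and simulation sizes chosen for speed), the following R code simulates the chain $X_k\sim Unif(0,X_{k-1})$ and checks that, conditionally on $X_2$ being near a value $t$, the next value averages to about $t/2$:
set.seed(1)
N  <- 2e5
X2 <- replicate(N, { x <- 1; for (k in 1:2) x <- runif(1, 0, x); x })  # X_2 starting from X_0 = 1
X3 <- runif(N, 0, X2)                                                  # X_3 | X_2 ~ Unif(0, X_2)
t0 <- 0.4
mean(X3[abs(X2 - t0) < 0.02])   # close to t0/2 = 0.2
mean(X3 / X2)                   # close to 1/2, as E(X_3 | X_2) = X_2 / 2 suggests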
Conditional expectation of random variables defined off of each other
Definitions Definition 1: If $\mathcal F \subseteq \mathcal G$ are two $\sigma$-fields, and $X$ a $\mathcal G$-measurable integrable random variable, then $\mathbb E[X | \mathcal F]$ is defined as an
Conditional expectation of random variables defined off of each other Definitions Definition 1: If $\mathcal F \subseteq \mathcal G$ are two $\sigma$-fields, and $X$ a $\mathcal G$-measurable integrable random variable, then $\mathbb E[X | \mathcal F]$ is defined as any $\mathcal F$-measurable random variable $Y$, such that $\mathbb E[Y;A]=\mathbb E[X;A]$ for every $A \in \mathcal F$. Here $\mathbb E[X;A]$ is a notation for $\int_AX\,d\mathbb P$. Definition 2: We define conditional probability as $\mathbb P(A | \mathcal F)= \mathbb E[1_A|\mathcal F]$. https://math.stackexchange.com/questions/2373097/condition-on-sigma-algebra/2373105#2373105 $X_n\sim Unif(0,X_{n-1})$ it means $X_n|X_{n-1}\sim Unif(0,X_{n-1})$ it also means $X_n|X_{n-1}=t\sim Unif(0,t)$ rigorously it is a conditional probability. How it define? It define bayed on conditional Expectation , http://www2.stat.duke.edu/courses/Fall17/sta711/lec/wk-10.pdf , on the Other hand $$P(X_n\leq a|X_{n-1})=E(1_{X_n\leq a}|X_{n-1})=E(1_{X_n\leq a}|\sigma(X_{n-1}))$$ This type of conditional probability is unified definition that other definitions( continues variable, discrete variables, events,mixture variables) are a special case of this. (like a patch for old definition, $P(A|B)=\frac{P(AB)}{P(B)}$, problem with continues variable, like in this situation,$B=\{X_{n-1}=t\}$ so $P(B)=0$). we saw this (define a variable conditioned another variable) before in Bayesian approach, $$X|\mu \sim N(\mu , 1)$$, that conditioned based a variable $\mu\sim N(0,1)$, and in Metropolis-Hastings Method that conditioned based on previous observation. "Does it mean that for every $\omega \in \Omega$ $X_n(\omega)∼Unif(0,X_{n−1}(\omega))$?" Only enough for almost sure of them. $$E(X_{n+1}|A_n) = E(X_{n+1}|\sigma(X_1,\cdots ,X_n)) = E(X_{n+1}|X_1,\cdots ,X_n)= E(X_{n+1}|X_n)= \frac{X_n}{2}$$ all other question solve similar.
Conditional expectation of random variables defined off of each other Definitions Definition 1: If $\mathcal F \subseteq \mathcal G$ are two $\sigma$-fields, and $X$ a $\mathcal G$-measurable integrable random variable, then $\mathbb E[X | \mathcal F]$ is defined as an
55,661
Conditional expectation of random variables defined off of each other
So first: When you define a random variable $X_{n+1}$ to depend on the value of a different random variable $X_n$, you are effectively defining the conditional probability $P(X_{n+1}|X_n)$. As long as $P(X_n)$ is well defined, this defines a probability distribution $P(X_{n+1}, X_n)$ over the two variables, with the corresponding product $\Omega$ and $\sigma$-algebra. I think this is what you rigorously define when you write a distribution for a new random variable depending on an existing one. For your second question I would first comment on the writing: generally, I would expect you to condition on the other random variables directly, not on the $\sigma$-algebra generated by them. If you mean something else there, please clarify. The general point for your examples is that if you define $X_{n+1}$ only in terms of $X_n$, you can ignore all previous random variables when calculating the conditional expectation. This is indeed true. There are many ways of showing it. Maybe the simplest is to write out $P(X_0,\dots,X_n)=P(X_0)\prod_{i=1}^n P(X_i|X_{i-1})$ and observe that $P(X_{n+1}| X_n)$ is indeed the conditional probability we would expect from the naming. Then the raw definition of the conditional expectation $\mathbb{E}(X_{n+1}|X_0,\dots,X_n)$ is: \begin{eqnarray} \mathbb{E}(X_{n+1}|X_0,\dots,X_n) &=& \int x_{n+1} dP(X_{n+1}|X_0,\dots,X_n)\\ &=& \int x_{n+1} d\left[\frac{P(X_{n+1},X_0,\dots,X_n)}{P(X_0,\dots,X_n)}\right]\\ &=& \int x_{n+1} dP(X_{n+1}|X_n) \end{eqnarray} because all the other factors cancel in the ratio. Note that this expectation is not the expectation of the $(n+1)$th variable, $\mathbb{E}(X_{n+1})$, which is the expectation under the marginal $P(X_{n+1})$ and will not necessarily have a simple distribution. If you are interested in that expectation, you'll have to find a way to calculate $\mathbb{E}(X_{n+1})$ from $\mathbb{E}(X_{n})$ and do the induction.
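For the last point, here is a minimal R check (an illustrative sketch, assuming the $Unif(0,X_n)$ example with $X_0 = 1$): the induction $\mathbb{E}(X_{n+1}) = \mathbb{E}(X_n)/2$ gives $\mathbb{E}(X_n) = X_0/2^n$, which a simulation reproduces even though the marginal distribution of $X_n$ has no simple named form.
set.seed(1)
x0 <- 1; n <- 5
X  <- replicate(1e5, { x <- x0; for (k in 1:n) x <- runif(1, 0, x); x })
c(simulated = mean(X), induction = x0 / 2^n)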
Conditional expectation of random variables defined off of each other
So first: When you define a random variable $X_{n+1}$ to depend on the value of a different random variable $X_n$ you are effectively defining the conditional probability $P(X_{n+1}|X_n)$ as long as $
Conditional expectation of random variables defined off of each other So first: When you define a random variable $X_{n+1}$ to depend on the value of a different random variable $X_n$ you are effectively defining the conditional probability $P(X_{n+1}|X_n)$ as long as $P(X_n)$ was well defined this defines a probability distribution $P(X_{n+1}, X_n)$ over the two variables with the corresponding outer product $\Omega$ and $\sigma$-algebra. I think this is what you rigorously define when you write a distribution for a new random variable depending on an existing one. For your second question I would first comment on the writing: Generally, I would expect you to condition on the other random variables directly, not on the $\sigma$-algebra for them. If you mean something else for those, please clarify. Then the general part for your examples is that if you define $X_{n+1}$ only based on $X_n$ you can ignore all previous random variables for calculating the conditional expectation. This is indeed true. There are many ways of showing this. Maybe the simplest is to write out that $P(X_0,\dots,X_n)=P(X_0)\prod_{i=1}^n P(X_i|X_{i-1})$, and observing that $P(X_{n+1}| X_n)$ is indeed the conditional probability as we would assume based on the naming. Then the raw definition of the conditional expectation $\mathbb{E}(X_{n+1}|X_0,\dots,X_n)$ is: \begin{eqnarray} \mathbb{E}(X_{n+1}|X_0,\dots,X_n) &=& \int x_{n+1} dP(X_{n+1}|X_0,\dots,X_n)\\ &=& \int x_{n+1} d\left[\frac{P(X_{n+1},X_0,\dots,X_n)}{P(X_0,\dots,X_n)}\right]\\ &=& \int x_{n+1} dP(X_{n+1}|X_n) \end{eqnarray} as all other P cancel in the ratio. Note that this expectation is not the expectation for the (n+1)th variable $\mathbb{E}(X_{n+1})$, which is equal to the expectation of the marginal $P(X_{n+1})$, which will not necessarily have a simple distribution. If you are interested in this expectation, you'll have to find a way to calculate $\mathbb{E}(X_{n+1})$ from $\mathbb{E}(X_{n})$ and do the induction.
Conditional expectation of random variables defined off of each other So first: When you define a random variable $X_{n+1}$ to depend on the value of a different random variable $X_n$ you are effectively defining the conditional probability $P(X_{n+1}|X_n)$ as long as $
55,662
ELBO maximization with SGD
I think you are confusing the purposes of the two methods. Maximizing the ELBO selects, from a parameterized family of densities, the member that most closely approximates the true posterior distribution in terms of Kullback-Leibler divergence. If you instead just do SGD on the target, what you will achieve is just a (local) maximizer of the parameters, but no approximate probability distribution. In other words, working with variational inference allows full approximate posterior inference (calculation of probabilities, intervals, expectations, etc.), whereas SGD on the target just allows for point estimates of the parameters, with no uncertainty quantification of these.
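To make the contrast concrete, here is a minimal R sketch on a toy conjugate model where the exact posterior is known (everything below -- the model $y_i\sim N(\theta,1)$ with prior $\theta\sim N(0,1)$, the Gaussian variational family, learning rates, and iteration counts -- is my own illustrative choice): stochastic gradient ascent on the ELBO, via the reparameterization trick, recovers both the posterior mean and the posterior spread, whereas gradient ascent on the target alone returns only a point.
set.seed(1)
n <- 50
y <- rnorm(n, mean = 2)
S <- sum(y)
post_mean <- S / (n + 1); post_sd <- 1 / sqrt(n + 1)   # exact posterior is N(S/(n+1), 1/(n+1))
lr <- 0.002
## (a) plain gradient ascent on log p(y, theta): a point estimate only
theta <- 0
for (i in 1:2000) theta <- theta + lr * (S - (n + 1) * theta)
c(point_estimate = theta, post_mean = post_mean)
## (b) stochastic gradient ascent on the ELBO for q = N(m, exp(log_s)^2)
m <- 0; log_s <- 0
trace_m <- trace_s <- numeric(5000)
for (i in 1:5000) {
  s   <- exp(log_s)
  eps <- rnorm(1)
  th  <- m + s * eps                        # reparameterized draw from q
  g   <- S - (n + 1) * th                   # d/dtheta of log p(y, theta)
  m     <- m     + lr * g                   # single-sample estimate of dELBO/dm
  log_s <- log_s + lr * (g * s * eps + 1)   # dELBO/dlog_s; the +1 comes from the entropy term
  trace_m[i] <- m; trace_s[i] <- exp(log_s)
}
c(vb_mean = mean(tail(trace_m, 1000)), post_mean = post_mean)
c(vb_sd   = mean(tail(trace_s, 1000)), post_sd   = post_sd)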
ELBO maximization with SGD
I think you confuse the purpose of the two methods. Maximizing the ELBO leads to a parameterized class of densities that approximates closely the true distribution, in terms of Kullback-Leibler diverg
ELBO maximization with SGD I think you confuse the purpose of the two methods. Maximizing the ELBO leads to a parameterized class of densities that approximates closely the true distribution, in terms of Kullback-Leibler divergence. If you instead just do SGD on the target, what you will achieve is just a (local) maximum of parameters, but no approximate probability distribution. In other words, working with variational inference allows full approximate posterior inference (calculation of probabilities, intervals, expectations etc.), whereas SGD on the target just allows for point estimates of parameters, but no uncertainty quantification of these.
ELBO maximization with SGD I think you confuse the purpose of the two methods. Maximizing the ELBO leads to a parameterized class of densities that approximates closely the true distribution, in terms of Kullback-Leibler diverg
55,663
Power Analysis for glmer using simr
Indeed, the best way to estimate the power in mixed models is using simulation. The following generic code shows how this can be done in R using the GLMMadaptive package. You can suitably adapt it to fit your needs: simulate_binary <- function (n) { K <- 8 # number of measurements per subject t_max <- 15 # maximum follow-up time # we constuct a data frame with the design: # everyone has a baseline measurment, and then measurements at random follow-up times DF <- data.frame(id = rep(seq_len(n), each = K), time = c(replicate(n, c(0, sort(runif(K - 1, 0, t_max))))), sex = rep(gl(2, n/2, labels = c("male", "female")), each = K)) # design matrices for the fixed and random effects X <- model.matrix(~ sex * time, data = DF) Z <- model.matrix(~ time, data = DF) betas <- c(-2.13, -0.25, 0.24, -0.05) # fixed effects coefficients D11 <- 0.48 # variance of random intercepts D22 <- 0.1 # variance of random slopes # we simulate random effects b <- cbind(rnorm(n, sd = sqrt(D11)), rnorm(n, sd = sqrt(D22))) # linear predictor eta_y <- drop(X %*% betas + rowSums(Z * b[DF$id, ])) # we simulate binary longitudinal data DF$y <- rbinom(n * K, 1, plogis(eta_y)) DF } ################################################################### library("GLMMadaptive") M <- 1000 # number of simulations to estimate power p_values <- numeric(M) for (m in seq_len(M)) { DF_m <- simulate_binary(n = 100) fm_m <- mixed_model(y ~ sex * time, random = ~ time | id, data = DF_m, family = binomial()) p_values[m] <- coef(summary(fm_m))["sexfemale:time", "p-value"] } # assuming a significance level of 5%, the power will be mean(p_values < 0.05)
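One small addition I would suggest (not part of the code above, but using its p_values vector and M): the power itself is estimated from M simulated datasets, so it carries Monte Carlo error. An exact binomial confidence interval makes that error explicit and helps judge whether M is large enough.
## 95% confidence interval for the simulated power estimate
binom.test(sum(p_values < 0.05), M)$conf.int
## with M = 1000 and a true power near 0.8, the half-width is roughly +/- 0.025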
Power Analysis for glmer using simr
Indeed, the best way to estimate the power in mixed models is using simulation. The following generic code shows how this can be done in R using the GLMMadaptive package. You can suitably adapt it to
Power Analysis for glmer using simr Indeed, the best way to estimate the power in mixed models is using simulation. The following generic code shows how this can be done in R using the GLMMadaptive package. You can suitably adapt it to fit your needs: simulate_binary <- function (n) { K <- 8 # number of measurements per subject t_max <- 15 # maximum follow-up time # we constuct a data frame with the design: # everyone has a baseline measurment, and then measurements at random follow-up times DF <- data.frame(id = rep(seq_len(n), each = K), time = c(replicate(n, c(0, sort(runif(K - 1, 0, t_max))))), sex = rep(gl(2, n/2, labels = c("male", "female")), each = K)) # design matrices for the fixed and random effects X <- model.matrix(~ sex * time, data = DF) Z <- model.matrix(~ time, data = DF) betas <- c(-2.13, -0.25, 0.24, -0.05) # fixed effects coefficients D11 <- 0.48 # variance of random intercepts D22 <- 0.1 # variance of random slopes # we simulate random effects b <- cbind(rnorm(n, sd = sqrt(D11)), rnorm(n, sd = sqrt(D22))) # linear predictor eta_y <- drop(X %*% betas + rowSums(Z * b[DF$id, ])) # we simulate binary longitudinal data DF$y <- rbinom(n * K, 1, plogis(eta_y)) DF } ################################################################### library("GLMMadaptive") M <- 1000 # number of simulations to estimate power p_values <- numeric(M) for (m in seq_len(M)) { DF_m <- simulate_binary(n = 100) fm_m <- mixed_model(y ~ sex * time, random = ~ time | id, data = DF_m, family = binomial()) p_values[m] <- coef(summary(fm_m))["sexfemale:time", "p-value"] } # assuming a significance level of 5%, the power will be mean(p_values < 0.05)
Power Analysis for glmer using simr Indeed, the best way to estimate the power in mixed models is using simulation. The following generic code shows how this can be done in R using the GLMMadaptive package. You can suitably adapt it to
55,664
statistical significance for non linear data
Often with data that look like this, the goal is to determine the x value beyond which there is no further (statistical) increase in the y variable. And also to determine the plateau y value. This might be done with linear-plateau or quadratic-plateau models. But eyeballing your data, a Cate-Nelson approach might be more useful to determine these values. This approach basically separates the low-x-low-y values from the high-x-high-y values. As a first approach, I might look at the data for each temperature separately. I might estimate the critical x value and plateau value for each of these separately, and then determine if the temperatures have a statistical effect by comparing the 95% confidence intervals of these statistics. It looks to me that Temperature 6 has a higher plateau than Temperature 1. For the purposes of this answer, I'm just going to examine the data for Temperature 6. First, let's try a linear plateau model on the Temperature 6 data. temperature <- c(1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6) nutrient <- c(41.922282,41.922282,41.922282,37.23794,37.23794,37.23794,31.662541,31.662541,31.662541,279.720746,279.720746,279.720746,248.465109,248.465109,248.465109,211.264016,211.264016,211.264016,27.946784,27.946784,27.946784,24.824046,24.824046,24.824046,21.1073,21.1073,21.1073,55.899183,55.899183,55.899183,49.65308,49.65308,49.65308,42.218842,42.218842,42.218842,97.838316,97.838316,97.838316,86.905988,86.905988,86.905988,73.89411,73.89411,73.89411,10.479381,10.479381,10.479381,9.308428,9.308428,9.308428,7.914737,7.914737,7.914737,1404.251692,1404.251692,1404.251692,1247.342413,1247.342413,1247.342413,1060.585803,1060.585803,1060.585803,139.790093,139.790093,139.790093,124.170127,124.170127,124.170127,105.578928,105.578928,105.578928) replicate <- c("1C_41.922282","1C_41.922282","1C_41.922282","3C_37.23794","3C_37.23794","3C_37.23794","6C_31.662541","6C_31.662541","6C_31.662541","1C_279.720746","1C_279.720746","1C_279.720746","3C_248.465109","3C_248.465109","3C_248.465109","6C_211.264016","6C_211.264016","6C_211.264016","1C_27.946784","1C_27.946784","1C_27.946784","3C_24.824046","3C_24.824046","3C_24.824046","6C_21.1073","6C_21.1073","6C_21.1073","1C_55.899183","1C_55.899183","1C_55.899183","3C_49.65308","3C_49.65308","3C_49.65308","6C_42.218842","6C_42.218842","6C_42.218842","1C_97.838316","1C_97.838316","1C_97.838316","3C_86.905988","3C_86.905988","3C_86.905988","6C_73.89411","6C_73.89411","6C_73.89411","1C_10.479381","1C_10.479381","1C_10.479381","3C_9.308428","3C_9.308428","3C_9.308428","6C_7.914737","6C_7.914737","6C_7.914737","1C_1404.251692","1C_1404.251692","1C_1404.251692","3C_1247.342413","3C_1247.342413","3C_1247.342413","6C_1060.585803","6C_1060.585803","6C_1060.585803","1C_139.790093","1C_139.790093","1C_139.790093","3C_124.170127","3C_124.170127","3C_124.170127","6C_105.578928","6C_105.578928","6C_105.578928") length <- 
c(0.284222,0.271812,0.287842,0.266703,0.325212,0.323167,0.368914,0.307848,0.331279,0.349361,0.344158,0.379752,0.418207,0.398789,0.397851,0.481935,0.46838,0.447341,0.291471,0.38784,0.355353,0.353436,0.40762,0.321866,0.284687,0.26343,0.281308,0.361157,0.367518,0.328645,0.390822,0.372086,0.366396,0.357013,0.388808,0.440506,0.351289,0.348172,0.345575,0.35433,0.363403,0.332073,0.34037,0.315966,0.351829,0.207838,0.227385,0.183385,0.198436,0.217075,0.270751,0.28564,0.228815,0.212524,0.410496,0.415918,0.416817,0.406967,0.38017,0.417732,0.453175,0.502706,0.477136,0.371708,0.344421,0.366723,0.398991,0.393513,0.442445,0.414689,0.442346,0.446943) mydata <-data.frame(nutrient, temperature, replicate, length) T6 = mydata[mydata$temperature==6,] plot(length ~ nutrient, data=T6, pch=16, col="firebrick") ### Guess some reasonable initial values for parameters a.ini = 0 b.ini = 1000 clx.ini = 200 ### Define linear plateau function linplat = function(x, a, b, clx) {ifelse(x < clx, a + b * x, a + b * clx)} ### Find best fit parameters model = nls(length ~ linplat(nutrient, a, b, clx), data = T6, start = list(a = a.ini, b = b.ini, clx = clx.ini), trace = FALSE, nls.control(maxiter = 1000)) summary(model) ### Parameters: ### Estimate Std. Error t value Pr(>|t|) ### a 2.607e-01 1.711e-02 15.231 8.01e-13 *** ### b 1.618e-03 2.972e-04 5.446 2.11e-05 *** ### clx 1.305e+02 1.957e+01 6.665 1.35e-06 *** plateau = 2.607e-01 + 1.618e-03 * 1.305e+02 plateau ### 0.472 So the critical x value is about 130 and the plateau is 0.472. This suggests that above a nutrient value of 130, there is no further rise in length with increasing nutrient value (for the Temperature 6 data). With the caveat that I am the author of the page, additional code to determine confidence intervals for the parameters, p value for the overall model, and pseudo r-square for the model can be found at rcompanion.org/handbook/I_11.html. This model is easy enough to plot, but I'll use a convenience function from the rcompanion package. require(rcompanion) plotPredy(data = T6, x = nutrient, y = length, model = model, xlab = "Nutrient", ylab = "Length for T 6") This model might be okay for these data. But in my mind, the data below nutrient = 200 don't really support a linear rise in length in relation to nutrient. The residuals also reveal that the model doesn't quite fit the trend of the data in this region. plot(predict(model), residuals(model)) hist(residuals(model), col="darkgray") We could try a quadratic-plateau model. As it turns out, it's not much different. Another idea is to use a Cate-Nelson approach. The goal here is to find a critical x value and a critical y that separates the low-x-low-y values from the high-x-high-y values. There are statistical approaches that can be employed. But if there are cutoff values of x or y that may be meaningful, this analysis can be accomplished by simply eyeballing these values. (An example might be, we are interested in getting at least 85% of maximum yield of our crop, so we want to determine the nutrient level in the soil (x) that gives us (usually) at least 85% of maximum yield (y).) I won't get into the specifics of analysis, but there are some resources here: rcompanion.org/rcompanion/h_02.html, (with the caveat that I am the author of that page). The following plots one possible solution. [Note the points in this plot are slightly jittered from the original data.] It turns out that this solution perfectly separates the data. That is, all data points fall into Quadrants II and IV. 
The critical x value is 158, and the critical y value is 0.447. That is, a nutrient value greater than 158 is likely to yield a length greater than 0.447, and vice versa. sum((T6$nutrient < 158) & (T6$length < 0.447142)) ### 18 sum((T6$nutrient > 158) & (T6$length > 0.447142)) ### 6 sum((T6$nutrient < 158) & (T6$length > 0.447142)) ### 0 sum((T6$nutrient > 158) & (T6$length < 0.447142)) ### 0
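For completeness, here is a sketch of the quadratic-plateau model mentioned above, in the same style as the linear-plateau fit (the starting values are my guesses and may need adjustment for convergence):
### Define quadratic plateau function
quadplat = function(x, a, b, clx) {ifelse(x < clx,
                                          a + b * x - (0.5 * b / clx) * x * x,
                                          a + b * clx - (0.5 * b / clx) * clx * clx)}
### Find best fit parameters and compare with the linear-plateau model
model.q = nls(length ~ quadplat(nutrient, a, b, clx),
              data = T6,
              start = list(a = 0.2, b = 0.003, clx = 200),
              trace = FALSE,
              nls.control(maxiter = 1000))
summary(model.q)
AIC(model, model.q)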
statistical significance for non linear data
Often with data that look like this, the goal is to determine the x value beyond which there is no further (statistical) increase in the y variable. And also to determine the plateau y value. This mig
statistical significance for non linear data Often with data that look like this, the goal is to determine the x value beyond which there is no further (statistical) increase in the y variable. And also to determine the plateau y value. This might be done with linear-plateau or quadratic-plateau models. But eyeballing your data, a Cate-Nelson approach might be more useful to determine these values. This approach basically separates the low-x-low-y values from the high-x-high-y values. As a first approach, I might look at the data for each temperature separately. I might estimate the critical x value and plateau value for each of these separately, and then determine if the temperatures have a statistical effect by comparing the 95% confidence intervals of these statistics. It looks to me that Temperature 6 has a higher plateau than Temperature 1. For the purposes of this answer, I'm just going to examine the data for Temperature 6. First, let's try a linear plateau model on the Temperature 6 data. temperature <- c(1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6,1,1,1,3,3,3,6,6,6) nutrient <- c(41.922282,41.922282,41.922282,37.23794,37.23794,37.23794,31.662541,31.662541,31.662541,279.720746,279.720746,279.720746,248.465109,248.465109,248.465109,211.264016,211.264016,211.264016,27.946784,27.946784,27.946784,24.824046,24.824046,24.824046,21.1073,21.1073,21.1073,55.899183,55.899183,55.899183,49.65308,49.65308,49.65308,42.218842,42.218842,42.218842,97.838316,97.838316,97.838316,86.905988,86.905988,86.905988,73.89411,73.89411,73.89411,10.479381,10.479381,10.479381,9.308428,9.308428,9.308428,7.914737,7.914737,7.914737,1404.251692,1404.251692,1404.251692,1247.342413,1247.342413,1247.342413,1060.585803,1060.585803,1060.585803,139.790093,139.790093,139.790093,124.170127,124.170127,124.170127,105.578928,105.578928,105.578928) replicate <- c("1C_41.922282","1C_41.922282","1C_41.922282","3C_37.23794","3C_37.23794","3C_37.23794","6C_31.662541","6C_31.662541","6C_31.662541","1C_279.720746","1C_279.720746","1C_279.720746","3C_248.465109","3C_248.465109","3C_248.465109","6C_211.264016","6C_211.264016","6C_211.264016","1C_27.946784","1C_27.946784","1C_27.946784","3C_24.824046","3C_24.824046","3C_24.824046","6C_21.1073","6C_21.1073","6C_21.1073","1C_55.899183","1C_55.899183","1C_55.899183","3C_49.65308","3C_49.65308","3C_49.65308","6C_42.218842","6C_42.218842","6C_42.218842","1C_97.838316","1C_97.838316","1C_97.838316","3C_86.905988","3C_86.905988","3C_86.905988","6C_73.89411","6C_73.89411","6C_73.89411","1C_10.479381","1C_10.479381","1C_10.479381","3C_9.308428","3C_9.308428","3C_9.308428","6C_7.914737","6C_7.914737","6C_7.914737","1C_1404.251692","1C_1404.251692","1C_1404.251692","3C_1247.342413","3C_1247.342413","3C_1247.342413","6C_1060.585803","6C_1060.585803","6C_1060.585803","1C_139.790093","1C_139.790093","1C_139.790093","3C_124.170127","3C_124.170127","3C_124.170127","6C_105.578928","6C_105.578928","6C_105.578928") length <- 
c(0.284222,0.271812,0.287842,0.266703,0.325212,0.323167,0.368914,0.307848,0.331279,0.349361,0.344158,0.379752,0.418207,0.398789,0.397851,0.481935,0.46838,0.447341,0.291471,0.38784,0.355353,0.353436,0.40762,0.321866,0.284687,0.26343,0.281308,0.361157,0.367518,0.328645,0.390822,0.372086,0.366396,0.357013,0.388808,0.440506,0.351289,0.348172,0.345575,0.35433,0.363403,0.332073,0.34037,0.315966,0.351829,0.207838,0.227385,0.183385,0.198436,0.217075,0.270751,0.28564,0.228815,0.212524,0.410496,0.415918,0.416817,0.406967,0.38017,0.417732,0.453175,0.502706,0.477136,0.371708,0.344421,0.366723,0.398991,0.393513,0.442445,0.414689,0.442346,0.446943) mydata <-data.frame(nutrient, temperature, replicate, length) T6 = mydata[mydata$temperature==6,] plot(length ~ nutrient, data=T6, pch=16, col="firebrick") ### Guess some reasonable initial values for parameters a.ini = 0 b.ini = 1000 clx.ini = 200 ### Define linear plateau function linplat = function(x, a, b, clx) {ifelse(x < clx, a + b * x, a + b * clx)} ### Find best fit parameters model = nls(length ~ linplat(nutrient, a, b, clx), data = T6, start = list(a = a.ini, b = b.ini, clx = clx.ini), trace = FALSE, nls.control(maxiter = 1000)) summary(model) ### Parameters: ### Estimate Std. Error t value Pr(>|t|) ### a 2.607e-01 1.711e-02 15.231 8.01e-13 *** ### b 1.618e-03 2.972e-04 5.446 2.11e-05 *** ### clx 1.305e+02 1.957e+01 6.665 1.35e-06 *** plateau = 2.607e-01 + 1.618e-03 * 1.305e+02 plateau ### 0.472 So the critical x value is about 130 and the plateau is 0.472. This suggests that above a nutrient value of 130, there is no further rise in length with increasing nutrient value (for the Temperature 6 data). With the caveat that I am the author of the page, additional code to determine confidence intervals for the parameters, p value for the overall model, and pseudo r-square for the model can be found at rcompanion.org/handbook/I_11.html. This model is easy enough to plot, but I'll use a convenience function from the rcompanion package. require(rcompanion) plotPredy(data = T6, x = nutrient, y = length, model = model, xlab = "Nutrient", ylab = "Length for T 6") This model might be okay for these data. But in my mind, the data below nutrient = 200 don't really support a linear rise in length in relation to nutrient. The residuals also reveal that the model doesn't quite fit the trend of the data in this region. plot(predict(model), residuals(model)) hist(residuals(model), col="darkgray") We could try a quadratic-plateau model. As it turns out, it's not much different. Another idea is to use a Cate-Nelson approach. The goal here is to find a critical x value and a critical y that separates the low-x-low-y values from the high-x-high-y values. There are statistical approaches that can be employed. But if there are cutoff values of x or y that may be meaningful, this analysis can be accomplished by simply eyeballing these values. (An example might be, we are interested in getting at least 85% of maximum yield of our crop, so we want to determine the nutrient level in the soil (x) that gives us (usually) at least 85% of maximum yield (y).) I won't get into the specifics of analysis, but there are some resources here: rcompanion.org/rcompanion/h_02.html, (with the caveat that I am the author of that page). The following plots one possible solution. [Note the points in this plot are slightly jittered from the original data.] It turns out that this solution perfectly separates the data. That is, all data points fall into Quadrants II and IV. 
The Critical x value is 158, and the Critical y is 0.447. That is a nutrient value greater than 158 is likely to yield a length of greater than 0.447. And vice-versa. sum((T6$nutrient < 158) & (T6$length < 0.447142)) ### 18 sum((T6$nutrient > 158) & (T6$length > 0.447142)) ### 6 sum((T6$nutrient < 158) & (T6$length > 0.447142)) ### 0 sum((T6$nutrient > 158) & (T6$length < 0.447142)) ### 0
statistical significance for non linear data Often with data that look like this, the goal is to determine the x value beyond which there is no further (statistical) increase in the y variable. And also to determine the plateau y value. This mig
55,665
statistical significance for non linear data
If you do as was suggested in a comment by Michael Lew and use log(nutrient) in the plot, you see a reasonably linear trend for each temperature: lattice::xyplot(length ~ log(nutrient) | temperature, data = mydata) Accordingly, I tried the following model, which fits quite well: mylm = lm(length ~ factor(temperature) * log(nutrient), data = mydata) The fitted trends can be visualized as follows: library(emmeans) emmip(mylm, temperature ~ nutrient, at = list(nutrient = c(10,20,40,80,160,320,640,1280))) This appears as linear trends because the values I chose go up exponentially. If we look at the same plot with a linear scale... pdat = .Last.value$data lattice::xyplot(yvar ~ nutrient, groups = ~temperature, data = pdat, type="l") You can compare the slopes of the lines in the preceding plot as follows: > emtrends(mylm, pairwise ~ temperature, var = "log(nutrient)") $emtrends temperature log(nutrient).trend SE df lower.CL upper.CL 1 0.0350 0.00614 66 0.0228 0.0473 3 0.0317 0.00614 66 0.0194 0.0439 6 0.0523 0.00614 66 0.0400 0.0645 Confidence level used: 0.95 $contrasts contrast estimate SE df t.ratio p.value 1 - 3 0.00333 0.00868 66 0.383 0.9223 1 - 6 -0.01728 0.00868 66 -1.992 0.1222 3 - 6 -0.02061 0.00868 66 -2.375 0.0527 P value adjustment: tukey method for comparing a family of 3 estimates I think this model fits the data better than broken lines or polynomials. It is parsimonious, and the interpretation is simple.
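If a single overall test of whether the log(nutrient) slope differs among temperatures is wanted (rather than the three pairwise contrasts), a model comparison does it directly; a quick residual plot is also worth a look. This addition is mine, reusing the mydata and mylm objects above:
mylm0 = lm(length ~ factor(temperature) + log(nutrient), data = mydata)
anova(mylm0, mylm)   # F test of the temperature:log(nutrient) interaction
plot(fitted(mylm), resid(mylm)); abline(h = 0, lty = 2)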
statistical significance for non linear data
If you do as was suggested in a comment by Michael Lew and use log(nutrient) in the plot, you see a reasonably linear trend for each temperature: lattice::xyplot(length ~ log(nutrient) | temperature,
statistical significance for non linear data If you do as was suggested in a comment by Michael Lew and use log(nutrient) in the plot, you see a reasonably linear trend for each temperature: lattice::xyplot(length ~ log(nutrient) | temperature, data = mydata) Accordingly, I tried the following model, which fits quite well: mylm = lm(length ~ factor(temperature) * log(nutrient), data = mydata) The fitted trends can be visualized as follows: library(emmeans) emmip(mylm, temperature ~ nutrient, at = list(nutrient = c(10,20,40,80,160,320,640,1280))) This appears as linear trends because the values I chose go up exponentially. If we look at the same plot with a linear scale... pdat = .Last.value$data lattice::xyplot(yvar ~ nutrient, groups = ~temperature, data = pdat, type="l") You can compare the slopes of the lines in the preceding plot as follows: > emtrends(mylm, pairwise ~ temperature, var = "log(nutrient)") $emtrends temperature log(nutrient).trend SE df lower.CL upper.CL 1 0.0350 0.00614 66 0.0228 0.0473 3 0.0317 0.00614 66 0.0194 0.0439 6 0.0523 0.00614 66 0.0400 0.0645 Confidence level used: 0.95 $contrasts contrast estimate SE df t.ratio p.value 1 - 3 0.00333 0.00868 66 0.383 0.9223 1 - 6 -0.01728 0.00868 66 -1.992 0.1222 3 - 6 -0.02061 0.00868 66 -2.375 0.0527 P value adjustment: tukey method for comparing a family of 3 estimates I think this model fits the data better than broken lines or polynomials. It is parsimonious, and the interpretation is simple.
statistical significance for non linear data If you do as was suggested in a comment by Michael Lew and use log(nutrient) in the plot, you see a reasonably linear trend for each temperature: lattice::xyplot(length ~ log(nutrient) | temperature,
55,666
statistical significance for non linear data
The first step usually taken for non-linear relationships like this is by adding a polynomial term. A linear relationship is y ~ x, but you can use a quadratic approximation by doing y ~ x + x ^ 2. This is like x interacting with itself: the relationship between x and y depends on the value of x. It is still a linear model, since you are adding up coefficients, but it is a very common "trick" to approximate nonlinear relationships while still using all of the benefits of the linear model. Here's code how to do it with lme4 and then the plotted relationship: library(lme4) library(lmerTest) # linear nutrient and temperature main effects # treating temperature as numeric m1 <- lmer(length ~ nutrient + temperature + (1 | replicate), mydata) summary(m1) # significant main effects of nutrient, but not temeprature # looks like curvilinear relationship between nutrient and length # simplest way to do this is quadratic: x + x ^ 2 # you can think of x ^ 2 as "x times x" or an interaction between x and itself # poly() gives us polynomial, and we indicate 2 for squared m2 <- lmer(length ~ poly(nutrient, 2) + temperature + (1 | replicate), mydata) summary(m2) # significant quadratic relationship # does this quadratic relationship depend on temperature? m3 <- lmer(length ~ poly(nutrient, 2) * temperature + (1 | replicate), mydata) summary(m3) # yes, p = .033. we see the quadratic effect of nutrient and # linear of temperature depend on one another. what does this look like? # let's get predicted values for the data and plot it library(ggplot2) library(emmeans) pred <- emmeans( m3, # specify model c("nutrient", "temperature"), # specify predictors # specify at what points you want to get predictions: at = list( nutrient = seq(8, 1000, length.out = 20), temperature = c(1, 3, 6) ) ) # make data frame for plotting pred <- as.data.frame(pred) # and plot ggplot(pred, aes(x = nutrient, y = emmean, group = factor(temperature),color = factor(temperature))) + geom_line() + labs(y = "predicted length") m3 has the quadratic relationship between nutrient and length that depends on temperature, and you can see that coefficient is significant. Just the fixed effect coefficient table from summary(m3): Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 0.333554 0.017941 17.999958 18.591 3.39e-13 *** poly(nutrient, 2)1 0.472626 0.162941 17.999958 2.901 0.00953 ** poly(nutrient, 2)2 -0.077723 0.150528 17.999958 -0.516 0.61190 temperature 0.004344 0.004663 17.999958 0.932 0.36386 poly(nutrient, 2)1:temperature -0.103052 0.053974 17.999958 -1.909 0.07230 . poly(nutrient, 2)2:temperature -0.109066 0.047116 17.999958 -2.315 0.03263 * And here's what the call to ggplot returns: If you are interested in what the different predicted effects for replicate look like, you could also add (1 + replicate) to the call to emmeans and include the IDs that you want to look at.
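A follow-up you may find useful (not in the code above): instead of reading the two interaction coefficients separately, the whole nutrient-by-temperature interaction can be tested at once with a likelihood-ratio test; anova() refits both lmer models with ML before comparing them.
# overall test of whether the nutrient curve depends on temperature
anova(m2, m3)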
statistical significance for non linear data
The first step usually taken for non-linear relationships like this is by adding a polynomial term. A linear relationship is y ~ x, but you can use a quadratic approximation by doing y ~ x + x ^ 2. Th
statistical significance for non linear data The first step usually taken for non-linear relationships like this is by adding a polynomial term. A linear relationship is y ~ x, but you can use a quadratic approximation by doing y ~ x + x ^ 2. This is like x interacting with itself: the relationship between x and y depends on the value of x. It is still a linear model, since you are adding up coefficients, but it is a very common "trick" to approximate nonlinear relationships while still using all of the benefits of the linear model. Here's code how to do it with lme4 and then the plotted relationship: library(lme4) library(lmerTest) # linear nutrient and temperature main effects # treating temperature as numeric m1 <- lmer(length ~ nutrient + temperature + (1 | replicate), mydata) summary(m1) # significant main effects of nutrient, but not temeprature # looks like curvilinear relationship between nutrient and length # simplest way to do this is quadratic: x + x ^ 2 # you can think of x ^ 2 as "x times x" or an interaction between x and itself # poly() gives us polynomial, and we indicate 2 for squared m2 <- lmer(length ~ poly(nutrient, 2) + temperature + (1 | replicate), mydata) summary(m2) # significant quadratic relationship # does this quadratic relationship depend on temperature? m3 <- lmer(length ~ poly(nutrient, 2) * temperature + (1 | replicate), mydata) summary(m3) # yes, p = .033. we see the quadratic effect of nutrient and # linear of temperature depend on one another. what does this look like? # let's get predicted values for the data and plot it library(ggplot2) library(emmeans) pred <- emmeans( m3, # specify model c("nutrient", "temperature"), # specify predictors # specify at what points you want to get predictions: at = list( nutrient = seq(8, 1000, length.out = 20), temperature = c(1, 3, 6) ) ) # make data frame for plotting pred <- as.data.frame(pred) # and plot ggplot(pred, aes(x = nutrient, y = emmean, group = factor(temperature),color = factor(temperature))) + geom_line() + labs(y = "predicted length") m3 has the quadratic relationship between nutrient and length that depends on temperature, and you can see that coefficient is significant. Just the fixed effect coefficient table from summary(m3): Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 0.333554 0.017941 17.999958 18.591 3.39e-13 *** poly(nutrient, 2)1 0.472626 0.162941 17.999958 2.901 0.00953 ** poly(nutrient, 2)2 -0.077723 0.150528 17.999958 -0.516 0.61190 temperature 0.004344 0.004663 17.999958 0.932 0.36386 poly(nutrient, 2)1:temperature -0.103052 0.053974 17.999958 -1.909 0.07230 . poly(nutrient, 2)2:temperature -0.109066 0.047116 17.999958 -2.315 0.03263 * And here's what the call to ggplot returns: If you are interested in what the different predicted effects for replicate look like, you could also add (1 + replicate) to the call to emmeans and include the IDs that you want to look at.
statistical significance for non linear data The first step usually taken for non-linear relationships like this is by adding a polynomial term. A linear relationship is y ~ x, but you can use a quadratic approximation by doing y ~ x + x ^ 2. Th
55,667
Using R^2 in nonlinear regression
It depends on what is meant by $R^2$. In simple settings, multiple definitions give equal values. Squared correlation between the feature and outcome, $(\text{corr}(x,y))^2$, at least for simple linear regression with just one feature Squared correlation between the true and predicted outcomes, $(\text{corr}(y,\hat y))^2$ A comparison of model performance, in terms of square loss (sum or squares errors), to the performance of a model that predicts $\bar y$ every time The proportion of variance in $y$ that is explained by the regression In more complicated settings, these are not all equal. Thus, it is not clear what constitutes the calculation of $R^2$ in such a situation. I would say that #1 does not make sense unless we are interested in a linear model between two variables. However, that leaves the second option as viable. Unfortunately, this correlation need not have much to do with how close the predictions are to the true values. For instance, whether you predict the exactly correct values or always predict high (or low) by the same amount, this correlation will be perfect, such as $y = (1,2,3)$ yet $\hat y = (101, 102, 103)$. That such egregiously poor performance can be missed by this statistic makes it of questionable utility for model evaluation (though it might be useful to flag a model as having some kind of systemic bias that can be corrected). When we use a linear model fit with OLS (and use an intercept), such in-sample predictions cannot happen. When we deviate from such a setting, all bets are off. However, Minitab appears to take the stance that $R^2$ is calculated according to idea #3. $$ R^2=1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right) $$ (This could be argued to be the Efron pseudo $R^2$ mentioned in the comments.) This means that Minitab takes the stance, with which I agree, that $R^2$ is a function of the sum of squared errors, which is a typical optimization criterion for fitting the parameters of a nonlinear regression. Consequently, any criticism of $R^2$ is also a criticism of SSE, MSE, and RMSE. I totally disagree with the following Minitab comment. As you can see, the underlying assumptions for R-squared aren’t true for nonlinear regression. I assumed nothing to give the above formula except that we are interested in estimating conditional means and use square loss to measure the pain of missing. You can go through the decomposition of the total sum of squares (denominator) to give the "proportion of variance explained" interpretation in the linear OLS setting (with an intercept), sure, but you do not have to. Consequently, I totally disagree with Minitab on this.
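As a concrete illustration of definition #3 applied to a nonlinear fit, here is a small R sketch (the exponential-rise model and the simulated data are my own illustrative choices): the $1-\text{SSE}/\text{SST}$ version and the squared-correlation version are computed side by side and need not coincide.
set.seed(1)
x <- seq(0, 10, length.out = 50)
y <- 5 * (1 - exp(-0.4 * x)) + rnorm(50, sd = 0.3)
fit  <- nls(y ~ a * (1 - exp(-b * x)), start = list(a = 4, b = 0.3))
yhat <- fitted(fit)
1 - sum((y - yhat)^2) / sum((y - mean(y))^2)   # definition #3: 1 - SSE/SST
cor(y, yhat)^2                                 # definition #2: generally not identical here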
Using R^2 in nonlinear regression
It depends on what is meant by $R^2$. In simple settings, multiple definitions give equal values. Squared correlation between the feature and outcome, $(\text{corr}(x,y))^2$, at least for simple line
Using R^2 in nonlinear regression It depends on what is meant by $R^2$. In simple settings, multiple definitions give equal values. Squared correlation between the feature and outcome, $(\text{corr}(x,y))^2$, at least for simple linear regression with just one feature Squared correlation between the true and predicted outcomes, $(\text{corr}(y,\hat y))^2$ A comparison of model performance, in terms of square loss (sum or squares errors), to the performance of a model that predicts $\bar y$ every time The proportion of variance in $y$ that is explained by the regression In more complicated settings, these are not all equal. Thus, it is not clear what constitutes the calculation of $R^2$ in such a situation. I would say that #1 does not make sense unless we are interested in a linear model between two variables. However, that leaves the second option as viable. Unfortunately, this correlation need not have much to do with how close the predictions are to the true values. For instance, whether you predict the exactly correct values or always predict high (or low) by the same amount, this correlation will be perfect, such as $y = (1,2,3)$ yet $\hat y = (101, 102, 103)$. That such egregiously poor performance can be missed by this statistic makes it of questionable utility for model evaluation (though it might be useful to flag a model as having some kind of systemic bias that can be corrected). When we use a linear model fit with OLS (and use an intercept), such in-sample predictions cannot happen. When we deviate from such a setting, all bets are off. However, Minitab appears to take the stance that $R^2$ is calculated according to idea #3. $$ R^2=1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right) $$ (This could be argued to be the Efron pseudo $R^2$ mentioned in the comments.) This means that Minitab takes the stance, with which I agree, that $R^2$ is a function of the sum of squared errors, which is a typical optimization criterion for fitting the parameters of a nonlinear regression. Consequently, any criticism of $R^2$ is also a criticism of SSE, MSE, and RMSE. I totally disagree with the following Minitab comment. As you can see, the underlying assumptions for R-squared aren’t true for nonlinear regression. I assumed nothing to give the above formula except that we are interested in estimating conditional means and use square loss to measure the pain of missing. You can go through the decomposition of the total sum of squares (denominator) to give the "proportion of variance explained" interpretation in the linear OLS setting (with an intercept), sure, but you do not have to. Consequently, I totally disagree with Minitab on this.
Using R^2 in nonlinear regression It depends on what is meant by $R^2$. In simple settings, multiple definitions give equal values. Squared correlation between the feature and outcome, $(\text{corr}(x,y))^2$, at least for simple line
55,668
Using R^2 in nonlinear regression
Taking the other side: The $R^2$ in OLS has a number of definitions and interpretations that are endemic to OLS. For instance, a "perfect fit" has $R^2 = 1$ and, conversely, a "worthless" fit has $R^2 = 0$. In OLS the $R^2$ is interpreted as a "proportion of 'explained' variance" in the response. It also has the formula $1 - SSR/SST$. You say "non-linear regression" but I think you mean generalized linear models. These are heteroscedastic models that not only transform the mean of the response (through a link function), but also express the mean-variance relationship explicitly, such as in a Poisson regression where the variance of the response is proportional to its mean. Contrast this with non-linear least squares, where the $R^2$ continues to be a very useful metric. So if we consider GLMs, none of the interpretations we enjoy for the $R^2$ are valid. A "perfect" fit will not necessarily perfectly predict all observations at every observed level. So the theoretical upper bound may be some value less than 1. Adding a predictor to a model does not optimally improve the $R^2$ in terms of that predictor's contribution: non-linear least squares would do that. The probability model for a GLM does not invoke a "residual" per se (and methods that do define residuals do not treat them as normally distributed). So the formula neither makes sense nor can it be interpreted as a fraction of "explained" variance. While incremental increases in the $R^2$ indicate improved predictiveness, you have no guarantees about the scales or unit differences. For instance, suppose two candidate predictors $u,v$ each increase $R^2$ by 5% when added as separate regressors in separate models. The first, $u$, may predict variance really well in the tails but overall be a very lousy predictor with disappointingly non-significant results, whereas the second, $v$, may not appear to improve predictions much, but when accounting for areas with low variance, its overall contribution is substantially better and corroborates statistical significance. Applying $R^2$ to a GLM regardless is called a pseudo $R^2$. In that regard, the GLM has a much more useful statistic, the deviance, which R even reports as a default model summary statistic. The deviance generalizes the residual sum of squares of an OLS model, which has an identity link and Gaussian variance structure. But for models such as the Poisson the expression is: $$ D = 2 \sum_i \left( y_i \log \frac{y_i}{\hat{y}_i} - y_i + \hat{y}_i \right)$$
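A quick numerical check of that expression (an illustrative sketch with simulated data; terms with $y_i=0$ contribute nothing to the $y\log(y/\hat y)$ part):
set.seed(1)
x   <- runif(100)
y   <- rpois(100, exp(0.5 + 1.2 * x))
fit <- glm(y ~ x, family = poisson)
mu  <- fitted(fit)
D   <- 2 * sum(ifelse(y == 0, 0, y * log(y / mu)) - (y - mu))
c(manual = D, from_glm = deviance(fit))   # the two agree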
Using R^2 in nonlinear regression
Taking the other side: The $R^2$ in OLS has a number of definitions and interpretations that are endemic to OLS. For instance, a "perfect fit" has $R^2 = 1$ and, conversely, a "worthless" fit has $R^2
Using R^2 in nonlinear regression Taking the other side: The $R^2$ in OLS has a number of definitions and interpretations that are endemic to OLS. For instance, a "perfect fit" has $R^2 = 1$ and, conversely, a "worthless" fit has $R^2 = 0$. In OLS the $R^2$ is interpreted as a "proportion of 'explained' variance" in the response. It also has the formula $1 - SSR/SST$ You say "non-linear regression" but I think you mean generalized linear models. These are heteroscedastic models that not only transform the response variable, but also express the mean-variance relationship explicitly, such as in a Poisson regression where the variance of the response is proportional to the mean of the response. Contrast this with non-linear least squares where the $R^2$ continues to be a very useful metric. So if we consider GLMs, none of the interpretations we enjoy about the $R^2$ are valid. A "perfect" fit will not necessarily perfectly predict all observations at every observed level. So, the theoretical upper bound may be some value less than 1. Adding a predictor to a model does not optimally improve the $R^2$ in terms of that predictor's contribution: non-linear least squares would do that. The probability model for a GLM does not invoke a "residual" per se, (or methods that do do not treat the residual as normally distributed). So neither does the formula make any sense nor can it be interpreted as a fraction of "explained" variance. While incremental increases in the $R^2$ indicate improved predictiveness, you can't be guaranteed of the scales or unit differences. For instance, if two candidate predictors $u,v$ increase $R^2$ by 5% when added as separate regressors in separate models, the first, $u$ may predict variance really well in the tails but overall be a very lousy predictor and have disappointingly non-significant resluts, whereas the second, $v$, may not appear to improve predictions much, but when accounting for areas with low variance, the overall contributions are substantially better and corroborate statistical significance. Applying $R^2$ in a GLM regardless is called a pseudo $R^2$. In that regard, the GLM has a much more useful statistic, the deviance, which even R reports as a default model summary statistics. The deviance generalizes the residual for an OLS model, which has an identity link and gaussian variance structure. But for models such as Poisson the expression is: $$ D = 2 (y \log y \hat{y}^{-1} - y - \hat{y})$$
Using R^2 in nonlinear regression Taking the other side: The $R^2$ in OLS has a number of definitions and interpretations that are endemic to OLS. For instance, a "perfect fit" has $R^2 = 1$ and, conversely, a "worthless" fit has $R^2
55,669
What does it mean to save optimizer states in deep learning libraries?
The optimizer state is the optimizer's momentum vector or similar history-tracking properties. For example, the Adam optimizer tracks moving averages of the gradient and squared gradient. If you start training a model without restoring these data, the optimizer will operate differently. The updates will be different, so the optimizer will proceed along a different trajectory. More details about adam: How does the Adam method of stochastic gradient descent work?
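A self-contained illustration (a sketch in R, not tied to any particular deep-learning library's API): a hand-rolled Adam update depends on its state -- the step counter and the two moving averages -- so resuming from the same weights with a fresh state produces a different next step than resuming with the saved state.
adam_step <- function(w, grad, state, lr = 1e-3, b1 = 0.9, b2 = 0.999, eps = 1e-8) {
  state$t <- state$t + 1
  state$m <- b1 * state$m + (1 - b1) * grad      # moving average of the gradient
  state$v <- b2 * state$v + (1 - b2) * grad^2    # moving average of the squared gradient
  m_hat <- state$m / (1 - b1^state$t)            # bias corrections
  v_hat <- state$v / (1 - b2^state$t)
  list(w = w - lr * m_hat / (sqrt(v_hat) + eps), state = state)
}
set.seed(1)
grad_fn <- function(w) 2 * (w - 3) + rnorm(1, sd = 5)   # noisy (stochastic) gradient of (w - 3)^2
w <- 0; st <- list(t = 0, m = 0, v = 0)
for (i in 1:100) { out <- adam_step(w, grad_fn(w), st); w <- out$w; st <- out$state }
g_now <- grad_fn(w)                                      # one new stochastic gradient
c(resumed = adam_step(w, g_now, st)$w - w,               # next step using the saved state
  fresh   = adam_step(w, g_now, list(t = 0, m = 0, v = 0))$w - w)  # different step with a reset state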
What does it mean to save optimizer states in deep learning libraries?
The optimizer state is the optimizer's momentum vector or similar history-tracking properties. For example, the Adam optimizer tracks moving averages of the gradient and squared gradient. If you star
What does it mean to save optimizer states in deep learning libraries? The optimizer state is the optimizer's momentum vector or similar history-tracking properties. For example, the Adam optimizer tracks moving averages of the gradient and squared gradient. If you start training a model without restoring these data, the optimizer will operate differently. The updates will be different, so the optimizer will proceed along a different trajectory. More details about adam: How does the Adam method of stochastic gradient descent work?
What does it mean to save optimizer states in deep learning libraries? The optimizer state is the optimizer's momentum vector or similar history-tracking properties. For example, the Adam optimizer tracks moving averages of the gradient and squared gradient. If you star
55,670
Hellinger Distance between 2 vectors of data points using cumsum in R
There are many issues here: some statistical, some numerical. Statistical issues The chief statistical issue is that the Hellinger distance between two samples of random distributions is not defined. We have to decide whether the purpose is (a) to estimate the Hellinger distance of the underlying distributions or (b) to produce some Hellinger-distance like measure of discrepancy between the two empirical distributions. The problem with (a) is that just about anything we might try is going to be biased (high) due to the random deviations in the samples. Dealing with that will take us onto a different track, so instead I will address (b). Numerical issues The code in the problem replaces the empirical data with kernel density estimates (KDE) and performs a numerical integration of the KDEs which, because they represent continuous distributions, do have a well-defined Hellinger distance. The numerical issues this code encounters are The KDE replaces each data value with a multiple of a continuous distribution centered there (the "kernel"). Thus, the KDE extends beyond the original range of the data. This linear combination of densities is evaluated on a discrete set of bins, thereby creating some discretization error. The KDE depends strongly on the choice of kernel bandwidth. Default choices in software are based on different objectives and might not be suitable for the present purpose. In order to compare two KDEs quantitatively, they must (a) cover a common range of values and (b) discretize that range identically. In effect, each KDE is represented as a vector of estimated density values in a set of bins. Analysis In mathematical notation, given two data sets $x=(x_1,\ldots, x_m)$ and $y=(y_1, \ldots, y_n),$ we need to find (i) a suitable kernel bandwidth $h$ and (ii) an equally-spaced set of points $z_1, z_2=z_1+h, \ldots, z_N = z_1 + (N-1)h$ that covers not only the full range of values in $x$ and $y$ but also extends beyond that range by several multiples of $h.$ Each KDE is a parallel set of density estimates $f_x(i)$ and $f_y(i),$ $i=1,2,\ldots, N.$ Because these are density estimates, we therefore expect their integral to equal unity. The Riemann sum approximation to that integral is appropriate, given the discrete nature of these data structures, whence we expect that $$h \sum_{i=1}^N f_x(i) \approx 1.\tag{*}$$ For whatever reason, the density function in R creates densities that sum to $1+1/(2N),$ give or take some floating point error in the calculation. Although this is a small error, we can compensate by dividing the values returned by density by the value in $(*)$ before using them. Assuming this normalization has been done (and still calling the normalized densities $f_x$ and $f_y$), the squared Hellinger Distance can be estimated with the Riemann sum $$\operatorname{HD}(x,y)^2 = 1 - h \sum_{i=1}^n \sqrt{f_x(i)f_y(i)}.$$ Assuming the range of the bins $z_i$ has been extended sufficiently far, so that $f_x(1)\approx f_y(1)\approx f_x(N)\approx f_y(N)\approx 0,$ this is essentially the Trapezoidal Rule calculation (which is equivalent to integrating a linear spline of the data--thus, there should be no concern whatsoever about using such linear approximations). 
It therefore should be accurate to $O(h^2).$ Testing issues The datasets created in the problem are challenging because there is little overlap in their ranges and the extent of $y=x^2$ is hugely greater than the extent of $x.$ This makes it difficult to represent both datasets accurately with the bins $z_i.$ In this histogram of the combined data, the $y$ values are distinguished by the red ticks in the rug plot at the bottom; the $x$ values are shown with black ticks (all at the far left). Statistical issues redux Now that we have reviewed some of the numerical issues, let's return to the basic statistical one: what to do about the dependence of the KDE--and thence the Hellinger Distance--on the bandwidth? I have no answer, but do have a suggestion: study how the distance depends on the bandwidth so you can determine how sensitive the result is. Such a study is relatively simple to carry out: control the bandwidth in the calculation through an argument to the function. Systematically vary that argument around the default bandwidth. Plot the resulting distances against the amount of variation. If the plot is reasonably unvarying (horizontal), then bandwidth doesn't matter. But if the plot varies enough to matter, you will have to study this dependency more closely--and you won't want to rely on a single default result. Results The function f in the R code below handles all the issues raised here. As a test, I generated a dataset of 150 $x$ values and 50 $y$ values from the Standard Normal distribution and created four datasets of values $y+\mu,$ $\mu=0,1,2,3,$ to compare to those $x$ values by means of the Hellinger distance plot described in the preceding section. As a reference, I also plotted the Hellinger distances between the underlying Normal$(\mu,1)$ distributions (which appear as dotted horizontal lines). "Multiplier" is the multiple of a default bandwidth selected by density. We should expect distances to increase with smaller multiples, because they distinguish individual data points, thereby enlarging the distances. (In the limit of a zero multiple the distances will be $1$ unless some of the $x$ and $y$ values are identical.) For multiples larger than $1,$ the KDE is making both datasets look more and more Normal, so in the limit the distance will plateau at a value of $\sqrt{1-\exp(-d^2/8)}$ where $d-\bar y - \bar x$ is the difference in sample means. Evidently the empirical distance is fairly constant and close to the underlying Hellinger Distance for larger $\mu.$ For smaller $\mu,$ where $x$ and $y$ are not easy to distinguish, the empirical calculation depends noticeably on the bandwidth and appreciably overestimates the underlying Hellinger Distance. I expect these qualitative observations to hold generally. Answer to the question Let's study the data in the question. Here is the code: # Generate the datasets set.seed(17) x <- rchisq(n = 100, df = 8) y <- x^2 # Compute points in the HD plot extent <- exp(seq(log(1/10), log(10), length.out=51)) hd <- f(x, y, extent=extent, cut=20, n=2^14) # Graph those points plot(extent, hd, log="x", type="l", lwd=2, xlab="Multiplier", ylab="Hellinger Distance") abline(v=1, lwd=2, col="Gray", lty=3) abline(h=0.7955, col="Red", lwd=2) The plot is in black. The red line shows the value of $0.7955$ reported by HellingerD. It's near the high end of this plot, corresponding to a bandwidth only $0.2$ times as great as the default bandwidth. 
Over this range of multiples, the computed Hellinger distance ranges from over $0.8$ down to $0.5.$ If this amount of variation is important in your application, then you better watch out: the answer is sensitive to the choice of bandwidth. The value computed for the default R bandwidth (shown at a multiple of $1$) is f(x,y) [1] 0.7505968 # # Compare two empirical distributions. # f <- function(x, y, n=2^9, extent=1, tol=1e-2, cut=3, ...) { # Estimate bandwidths for KDEs z <- c(x,y) d <- density(z, n=n, cut=cut, ...) # Compute a default bandwidth d$bw r <- range(z) + cut*c(-1,1) # Range of future calculations # Compute Hellinger distances for various bandwidths. HD <- function(x, y, dx) { # Normalize the densities c.x <- sum(x)*dx * 2*length(x) / (2*length(x) + 1) c.y <- sum(y)*dx * 2*length(y) / (2*length(y) + 1) if (abs(c.x - 1) > tol || abs(c.y - 1) > tol) warning("Normalization factor errors are ", 1/(c.x-1), " and ", 1/(c.y-1)) x <- x / c.x y <- y / c.y # Return the HD between the KDEs sqrt(max(0.0, 1 - sum(sqrt(x*y)) * dx)) } # Apply `HD` to `extent` times the estimated bandwidth in `d$bw`. sapply(extent*d$bw, function(bw) { d.x <- density(x, from=r[1], to=r[2], n=n, width=bw, ...) d.y <- density(y, from=r[1], to=r[2], n=n, width=bw, ...) dx <- diff(d.x$x[1:2]) HD(d.x$y, d.y$y, dx) }) } # # Test the comparison. # set.seed(17) n <- 50 x <- rnorm(3*n) y <- rnorm(n) # Create a plotting region, axis labels, etc. extent <- seq(log(1/3), log(3), length.out=11) plot(range(exp(extent)), c(0,1), type="n", log="x", lwd=2, xlab="Multiplier", ylab="Hellinger Distance") abline(v=1, lty=3, lwd=2, col="Gray") # Study how HD varies with bandwidth multiplier for four shifted versions of `y`. invisible(lapply(0:3, function(mu) { n0 <- 3*(length(x)+length(y)) h <- f(x, y+mu, extent=exp(extent), cut=3, n=n0, tol=0.5/n0) lines(exp(extent), h, col=hsv(mu/5,.8,.8), lwd=2) abline(h = sqrt(1 - exp(-mu^2/8)), lty=3, col=hsv(mu/5,.8,.6), lwd=2) }))
Hellinger Distance between 2 vectors of data points using cumsum in R
There are many issues here: some statistical, some numerical. Statistical issues The chief statistical issue is that the Hellinger distance between two samples of random distributions is not defined.
Hellinger Distance between 2 vectors of data points using cumsum in R There are many issues here: some statistical, some numerical. Statistical issues The chief statistical issue is that the Hellinger distance between two samples of random distributions is not defined. We have to decide whether the purpose is (a) to estimate the Hellinger distance of the underlying distributions or (b) to produce some Hellinger-distance like measure of discrepancy between the two empirical distributions. The problem with (a) is that just about anything we might try is going to be biased (high) due to the random deviations in the samples. Dealing with that will take us onto a different track, so instead I will address (b). Numerical issues The code in the problem replaces the empirical data with kernel density estimates (KDE) and performs a numerical integration of the KDEs which, because they represent continuous distributions, do have a well-defined Hellinger distance. The numerical issues this code encounters are The KDE replaces each data value with a multiple of a continuous distribution centered there (the "kernel"). Thus, the KDE extends beyond the original range of the data. This linear combination of densities is evaluated on a discrete set of bins, thereby creating some discretization error. The KDE depends strongly on the choice of kernel bandwidth. Default choices in software are based on different objectives and might not be suitable for the present purpose. In order to compare two KDEs quantitatively, they must (a) cover a common range of values and (b) discretize that range identically. In effect, each KDE is represented as a vector of estimated density values in a set of bins. Analysis In mathematical notation, given two data sets $x=(x_1,\ldots, x_m)$ and $y=(y_1, \ldots, y_n),$ we need to find (i) a suitable kernel bandwidth $h$ and (ii) an equally-spaced set of points $z_1, z_2=z_1+h, \ldots, z_N = z_1 + (N-1)h$ that covers not only the full range of values in $x$ and $y$ but also extends beyond that range by several multiples of $h.$ Each KDE is a parallel set of density estimates $f_x(i)$ and $f_y(i),$ $i=1,2,\ldots, N.$ Because these are density estimates, we therefore expect their integral to equal unity. The Riemann sum approximation to that integral is appropriate, given the discrete nature of these data structures, whence we expect that $$h \sum_{i=1}^N f_x(i) \approx 1.\tag{*}$$ For whatever reason, the density function in R creates densities that sum to $1+1/(2N),$ give or take some floating point error in the calculation. Although this is a small error, we can compensate by dividing the values returned by density by the value in $(*)$ before using them. Assuming this normalization has been done (and still calling the normalized densities $f_x$ and $f_y$), the squared Hellinger Distance can be estimated with the Riemann sum $$\operatorname{HD}(x,y)^2 = 1 - h \sum_{i=1}^n \sqrt{f_x(i)f_y(i)}.$$ Assuming the range of the bins $z_i$ has been extended sufficiently far, so that $f_x(1)\approx f_y(1)\approx f_x(N)\approx f_y(N)\approx 0,$ this is essentially the Trapezoidal Rule calculation (which is equivalent to integrating a linear spline of the data--thus, there should be no concern whatsoever about using such linear approximations). 
It therefore should be accurate to $O(h^2).$ Testing issues The datasets created in the problem are challenging because there is little overlap in their ranges and the extent of $y=x^2$ is hugely greater than the extent of $x.$ This makes it difficult to represent both datasets accurately with the bins $z_i.$ In this histogram of the combined data, the $y$ values are distinguished by the red ticks in the rug plot at the bottom; the $x$ values are shown with black ticks (all at the far left). Statistical issues redux Now that we have reviewed some of the numerical issues, let's return to the basic statistical one: what to do about the dependence of the KDE--and thence the Hellinger Distance--on the bandwidth? I have no answer, but do have a suggestion: study how the distance depends on the bandwidth so you can determine how sensitive the result is. Such a study is relatively simple to carry out: control the bandwidth in the calculation through an argument to the function. Systematically vary that argument around the default bandwidth. Plot the resulting distances against the amount of variation. If the plot is reasonably unvarying (horizontal), then bandwidth doesn't matter. But if the plot varies enough to matter, you will have to study this dependency more closely--and you won't want to rely on a single default result. Results The function f in the R code below handles all the issues raised here. As a test, I generated a dataset of 150 $x$ values and 50 $y$ values from the Standard Normal distribution and created four datasets of values $y+\mu,$ $\mu=0,1,2,3,$ to compare to those $x$ values by means of the Hellinger distance plot described in the preceding section. As a reference, I also plotted the Hellinger distances between the underlying Normal$(\mu,1)$ distributions (which appear as dotted horizontal lines). "Multiplier" is the multiple of a default bandwidth selected by density. We should expect distances to increase with smaller multiples, because they distinguish individual data points, thereby enlarging the distances. (In the limit of a zero multiple the distances will be $1$ unless some of the $x$ and $y$ values are identical.) For multiples larger than $1,$ the KDE is making both datasets look more and more Normal, so in the limit the distance will plateau at a value of $\sqrt{1-\exp(-d^2/8)}$ where $d-\bar y - \bar x$ is the difference in sample means. Evidently the empirical distance is fairly constant and close to the underlying Hellinger Distance for larger $\mu.$ For smaller $\mu,$ where $x$ and $y$ are not easy to distinguish, the empirical calculation depends noticeably on the bandwidth and appreciably overestimates the underlying Hellinger Distance. I expect these qualitative observations to hold generally. Answer to the question Let's study the data in the question. Here is the code: # Generate the datasets set.seed(17) x <- rchisq(n = 100, df = 8) y <- x^2 # Compute points in the HD plot extent <- exp(seq(log(1/10), log(10), length.out=51)) hd <- f(x, y, extent=extent, cut=20, n=2^14) # Graph those points plot(extent, hd, log="x", type="l", lwd=2, xlab="Multiplier", ylab="Hellinger Distance") abline(v=1, lwd=2, col="Gray", lty=3) abline(h=0.7955, col="Red", lwd=2) The plot is in black. The red line shows the value of $0.7955$ reported by HellingerD. It's near the high end of this plot, corresponding to a bandwidth only $0.2$ times as great as the default bandwidth. 
Over this range of multiples, the computed Hellinger distance ranges from over $0.8$ down to $0.5.$ If this amount of variation is important in your application, then you better watch out: the answer is sensitive to the choice of bandwidth. The value computed for the default R bandwidth (shown at a multiple of $1$) is f(x,y) [1] 0.7505968 # # Compare two empirical distributions. # f <- function(x, y, n=2^9, extent=1, tol=1e-2, cut=3, ...) { # Estimate bandwidths for KDEs z <- c(x,y) d <- density(z, n=n, cut=cut, ...) # Compute a default bandwidth d$bw r <- range(z) + cut*c(-1,1) # Range of future calculations # Compute Hellinger distances for various bandwidths. HD <- function(x, y, dx) { # Normalize the densities c.x <- sum(x)*dx * 2*length(x) / (2*length(x) + 1) c.y <- sum(y)*dx * 2*length(y) / (2*length(y) + 1) if (abs(c.x - 1) > tol || abs(c.y - 1) > tol) warning("Normalization factor errors are ", 1/(c.x-1), " and ", 1/(c.y-1)) x <- x / c.x y <- y / c.y # Return the HD between the KDEs sqrt(max(0.0, 1 - sum(sqrt(x*y)) * dx)) } # Apply `HD` to `extent` times the estimated bandwidth in `d$bw`. sapply(extent*d$bw, function(bw) { d.x <- density(x, from=r[1], to=r[2], n=n, width=bw, ...) d.y <- density(y, from=r[1], to=r[2], n=n, width=bw, ...) dx <- diff(d.x$x[1:2]) HD(d.x$y, d.y$y, dx) }) } # # Test the comparison. # set.seed(17) n <- 50 x <- rnorm(3*n) y <- rnorm(n) # Create a plotting region, axis labels, etc. extent <- seq(log(1/3), log(3), length.out=11) plot(range(exp(extent)), c(0,1), type="n", log="x", lwd=2, xlab="Multiplier", ylab="Hellinger Distance") abline(v=1, lty=3, lwd=2, col="Gray") # Study how HD varies with bandwidth multiplier for four shifted versions of `y`. invisible(lapply(0:3, function(mu) { n0 <- 3*(length(x)+length(y)) h <- f(x, y+mu, extent=exp(extent), cut=3, n=n0, tol=0.5/n0) lines(exp(extent), h, col=hsv(mu/5,.8,.8), lwd=2) abline(h = sqrt(1 - exp(-mu^2/8)), lty=3, col=hsv(mu/5,.8,.6), lwd=2) }))
Hellinger Distance between 2 vectors of data points using cumsum in R There are many issues here: some statistical, some numerical. Statistical issues The chief statistical issue is that the Hellinger distance between two samples of random distributions is not defined.
55,671
What does "the denominator does not contain any theta dependence" mean in Bayes' Rule? [duplicate]
In the Bayesian formula: $$\text{posterior} = \,\frac{\text{likelihood} \cdot \text{prior}}{\text{normalizing constant}}$$ If we call the observations $y$ and the parameters $\theta$, then this equates: $$p(\theta | y) = \, \frac{p(y | \theta) \cdot p(\theta)}{p(y)}$$ Here, the normalizing constant $p(y)$ is calculated as: $$p(y) = \int p(y | \theta) \cdot p(\theta) \,\mathrm{d}\theta$$ Since you integrate out $\theta$ (the parameters), the denominator no longer depends on it.
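To make this concrete, here is a minimal R sketch (a toy Beta-Bernoulli model evaluated on a grid; all numbers are illustrative) showing that the normalizing constant is a single number obtained by integrating over $\theta$, so the resulting posterior no longer depends on $\theta$ through it:

theta  <- seq(0.001, 0.999, length.out = 999)                 # grid over the parameter
prior  <- dbeta(theta, 2, 2)                                  # p(theta)
y      <- c(1, 0, 1, 1, 0, 1, 1)                              # observed data
lik    <- sapply(theta, function(t) prod(dbinom(y, 1, t)))    # p(y | theta)
dtheta <- diff(theta[1:2])
p_y    <- sum(lik * prior) * dtheta    # p(y): theta integrated out, just a number
posterior <- lik * prior / p_y         # p(theta | y)
sum(posterior) * dtheta                # integrates to ~1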
What does "the denominator does not contain any theta dependence" mean in Bayes' Rule? [duplicate]
In the Bayesian formula: $$\text{posterior} = \,\frac{\text{likelihood} \cdot \text{prior}}{\text{normalizing constant}}$$ If we call the observations $y$ and the parameters $\theta$, then this equate
What does "the denominator does not contain any theta dependence" mean in Bayes' Rule? [duplicate] In the Bayesian formula: $$\text{posterior} = \,\frac{\text{likelihood} \cdot \text{prior}}{\text{normalizing constant}}$$ If we call the observations $y$ and the parameters $\theta$, then this equates: $$p(\theta | y) = \, \frac{p(y | \theta) \cdot p(\theta)}{p(y)}$$ Here, the normalizing constant $p(y)$ is calculated as: $$p(y) = \int p(y | \theta) \cdot p(\theta) \,\mathrm{d}\theta$$ Since you integrate out $\theta$ (the parameters), the denominator no longer depends on it.
What does "the denominator does not contain any theta dependence" mean in Bayes' Rule? [duplicate] In the Bayesian formula: $$\text{posterior} = \,\frac{\text{likelihood} \cdot \text{prior}}{\text{normalizing constant}}$$ If we call the observations $y$ and the parameters $\theta$, then this equate
55,672
Likelihood of linear mixed effects model
It's a little bit semantics. Namely, to do empirical Bayes you need to write down the posterior distribution of the random effects $b_i$ given the data $y_i$ and (the maximum likelihood) estimates of the parameters $\hat \theta$, i.e., $$p(b_i \mid y_i, \hat \theta) \propto p(y_i \mid b_i, \hat \theta) p(b_i \mid \hat \theta).$$ Now in this expression, the likelihood is the first term $p(y_i \mid b_i, \hat \theta)$ and the prior the second term $p(b_i \mid \hat \theta)$. Hence, to find the modes $\hat b_i$ of this posterior, you need to find the mode of the combined likelihood and prior terms, which is equivalent to finding the mode of $\log p(y_i, b; \hat \theta)$ wrt $b$. With that being said though, these two combined terms have been called a likelihood. An example of this is the H-likelihood approach for fitting mixed models.
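For what it is worth, this is also what standard software reports: in lme4 (assuming that package), ranef() returns exactly these conditional modes of the random effects given the data and the plugged-in parameter estimates. A minimal sketch using the package's built-in sleepstudy data:

library(lme4)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
# Conditional modes of b_i, i.e. the modes of p(b_i | y_i, theta_hat)
head(ranef(fit)$Subject)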
Likelihood of linear mixed effects model
It's a little bit semantics. Namely, to do empirical Bayes you need to write down the posterior distribution of the random effects $b_i$ given the data $y_i$ and (the maximum likelihood) estimates of
Likelihood of linear mixed effects model It's a little bit semantics. Namely, to do empirical Bayes you need to write down the posterior distribution of the random effects $b_i$ given the data $y_i$ and (the maximum likelihood) estimates of the parameters $\hat \theta$, i.e., $$p(b_i \mid y_i, \hat \theta) \propto p(y_i \mid b_i, \hat \theta) p(b_i \mid \hat \theta).$$ Now in this expression, the likelihood is the first term $p(y_i \mid b_i, \hat \theta)$ and the prior the second term $p(b_i \mid \hat \theta)$. Hence, to find the modes $\hat b_i$ of this posterior, you need to find the mode of the combined likelihood and prior terms, which is equivalent to finding the mode of $\log p(y_i, b; \hat \theta)$ wrt $b$. With that being said though, these two combined terms have been called a likelihood. An example of this is the H-likelihood approach for fitting mixed models.
Likelihood of linear mixed effects model It's a little bit semantics. Namely, to do empirical Bayes you need to write down the posterior distribution of the random effects $b_i$ given the data $y_i$ and (the maximum likelihood) estimates of
55,673
How can I show that $X_n=e^n I_{\{Y>n\}} \to 0$ in probability?
Consider the following inequality for $\varepsilon \leq 1$ $$P( |X_n| > \varepsilon ) = P(e^n I_{\{Y>n\}}>\varepsilon) \leq P( I_{\{Y>n\}} > \varepsilon) = P(Y > n) = e^{-n} \to 0.$$ Actually we do not need to know the distribution of $Y$. We can use the cumulative function properties $$ P( Y > n) = 1 - F_Y(n) \underset{n\to \infty}{\to} 1 -1 = 0$$ For $\varepsilon > 1$ we get $$ P( |X_n| > \varepsilon) \leq P(|X_n| > 1) \to 0$$
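A quick simulation illustrates the same conclusion (a minimal R sketch, using rate-1 exponentials as in the problem): the estimated $P(|X_n| > \varepsilon)$ tracks $e^{-n}$ and goes to zero.

set.seed(1)
Y   <- rexp(1e6)                      # Y ~ Exp(1)
eps <- 0.5
for (n in 1:10) {
  Xn <- exp(n) * (Y > n)              # X_n = e^n * I{Y > n}
  cat(n, mean(abs(Xn) > eps), exp(-n), "\n")   # empirical probability vs e^{-n}
}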
How can I show that $X_n=e^n I_{\{Y>n\}} \to 0$ in probability?
Consider the following inequality for $\varepsilon \leq 1$ $$P( |X_n| > \varepsilon ) = P(e^n I_{\{Y>n\}}>\varepsilon) \leq P( I_{\{Y>n\}} > \varepsilon) = P(Y > n) = e^{-n} \to 0.$$ Actually we do no
How can I show that $X_n=e^n I_{\{Y>n\}} \to 0$ in probability? Consider the following inequality for $\varepsilon \leq 1$ $$P( |X_n| > \varepsilon ) = P(e^n I_{\{Y>n\}}>\varepsilon) \leq P( I_{\{Y>n\}} > \varepsilon) = P(Y > n) = e^{-n} \to 0.$$ Actually we do not need to know the distribution of $Y$. We can use the cumulative function properties $$ P( Y > n) = 1 - F_Y(n) \underset{n\to \infty}{\to} 1 -1 = 0$$ For $\varepsilon > 1$ we get $$ P( |X_n| > \varepsilon) \leq P(|X_n| > 1) \to 0$$
How can I show that $X_n=e^n I_{\{Y>n\}} \to 0$ in probability? Consider the following inequality for $\varepsilon \leq 1$ $$P( |X_n| > \varepsilon ) = P(e^n I_{\{Y>n\}}>\varepsilon) \leq P( I_{\{Y>n\}} > \varepsilon) = P(Y > n) = e^{-n} \to 0.$$ Actually we do no
55,674
How can I show that $X_n=e^n I_{\{Y>n\}} \to 0$ in probability?
First, $\text{Prob}[Y>n]=e^{-n}$ since $Y$ has an exponential distribution. Therefore, $X_n$ can have only two values, namely $e^n$ with probability $e^{-n}$ and $0$ with probability $1-e^{-n}$. Let's fix some positive value $\epsilon$ and call $n'$ the smallest nonnegative integer bigger than $\ln \epsilon$. Then, for any $n \geqslant n'$, $\text{Prob}[X_n>\epsilon]=e^{-n}$, which clearly goes to 0 for large $n$.
How can I show that $X_n=e^n I_{\{Y>n\}} \to 0$ in probability?
First, $\text{Prob}[Y>n]=e^{-n}$ since $Y$ has an exponential distribution. Therefore, $X_n$ can have only two values, namely $e^n$ with probability $e^{-n}$ and $0$ with probability $1-e^{-n}$. Let's
How can I show that $X_n=e^n I_{\{Y>n\}} \to 0$ in probability? First, $\text{Prob}[Y>n]=e^{-n}$ since $Y$ has an exponential distribution. Therefore, $X_n$ can have only two values, namely $e^n$ with probability $e^{-n}$ and $0$ with probability $1-e^{-n}$. Let's fix some positive value $\epsilon$ and call $n'$ the smallest nonnegative integer bigger than $\ln \epsilon$. Then, for any $n \geqslant n'$, $\text{Prob}[X_n>\epsilon]=e^{-n}$, which clearly goes to 0 for large $n$.
How can I show that $X_n=e^n I_{\{Y>n\}} \to 0$ in probability? First, $\text{Prob}[Y>n]=e^{-n}$ since $Y$ has an exponential distribution. Therefore, $X_n$ can have only two values, namely $e^n$ with probability $e^{-n}$ and $0$ with probability $1-e^{-n}$. Let's
55,675
How can I show that $X_n=e^n I_{\{Y>n\}} \to 0$ in probability?
$X_n$ can have two values (i.e. either $0$ or $e^n$) with $p=P(X=e^n)=e^{-n}$. For $\epsilon\geq e^{n}$, $P(X_n>\epsilon)$ is always $0$. For $\epsilon<e^n$, $P(X_n>\epsilon)=p=e^{-n}$, and this goes to $0$ as $n$ goes to infinity.
How can I show that $X_n=e^n I_{\{Y>n\}} \to 0$ in probability?
$X_n$ can have two values (i.e. either $0$ or $e^n$) with $p=P(X=e^n)=e^{-n}$. For $\epsilon\geq e^{n}$, $P(X_n>\epsilon)$ is always $0$. For $\epsilon<e^n$, $P(X_n>\epsilon)=p=e^{-n}$, and this goes
How can I show that $X_n=e^n I_{\{Y>n\}} \to 0$ in probability? $X_n$ can have two values (i.e. either $0$ or $e^n$) with $p=P(X=e^n)=e^{-n}$. For $\epsilon\geq e^{n}$, $P(X_n>\epsilon)$ is always $0$. For $\epsilon<e^n$, $P(X_n>\epsilon)=p=e^{-n}$, and this goes to $0$ as $n$ goes to infinity.
How can I show that $X_n=e^n I_{\{Y>n\}} \to 0$ in probability? $X_n$ can have two values (i.e. either $0$ or $e^n$) with $p=P(X=e^n)=e^{-n}$. For $\epsilon\geq e^{n}$, $P(X_n>\epsilon)$ is always $0$. For $\epsilon<e^n$, $P(X_n>\epsilon)=p=e^{-n}$, and this goes
55,676
Variance of $Z = X_1 + X_1 X_2 + X_1 X_2 X_3 +\cdots$
Final update on 11/29/2019: I have worked on this a bit more, and wrote an article summarizing all the main findings. You can read it here. Surprisingly, there is a simple and general answer to this problem, despite the fact that all the terms in the infinite sum defining $Z$ are correlated. First, let us assume that $|E(X_i)| < 1$. This is required for convergence. Let us also assume that $E(X_i^2)<1$. This guarantees that the variance exists. We have the following formula for the $k$-th moment, for $k\geq 0$: $$E(Z^k) = E[(X_i(1+Z))^k]=E(X_i^k)E[(1+Z)^k].$$ It can be re-written as $$E(Z^k) =\frac{E(X_1^k)}{1-E(X_1^k)} \cdot\sum_{j=0}^{k-1} \frac{k!}{j!(k-j)!}E(Z^j).$$ I suspect much simpler recurrence formulas can be found for $E(Z^k)$. It follows immediately that $E(Z)=E(X_i)/(1-E(X_i))$. Moments of order 2, 3, and so on can be obtained iteratively. A little computation shows that $$Var(Z) = \frac{Var(X_i)}{(1-E(X_i^2))(1-E(X_i))^2}.$$ I checked the formula when $X_i$ is Bernoulli($p$), and it is exactly correct. I also checked empirically when $X_i$ is Uniform$[0,1]$, and it looks correct: $Var(Z) = 0.506$ based on 20,000 simulated $Z$ deviates, while the true value (according to my formula) should be $\frac{1}{2}$. Now let's look at $X_i = \sin(\pi Y_i)$ with $Y_i \sim$ Normal($0,1$). There is some simplification due to $E(X_i) = 0$ in this case: $Var(Z) = E(X_i^2) / (1 - E(X_i^2))$. To prove that $Var(Z)=1$ amounts to proving that $E(X_i^2) = 1/2$, that is: $$E(X_i^2) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} x^2 \sin^2(\pi x) e^{-x^2/2} dx = \frac{1}{2} .$$ I computed this integral using WolframAlpha, see here. The approximation is excellent, at least the first 7 digits are correct. However, the exact answer is not $1/2$, but instead $$E(X_i^2) = \frac{1+(4\pi^2-1)\exp(-2\pi^2)}{2} = 0.500000051...$$ Now, to answer the most challenging question - what kind of distribution is an attractor in this framework - we need to look at the formula that gives the moments of $Z$. Clearly, they can be pretty arbitrary, meaning that the class of attractors is very rich. Of course, not all sequences of numbers represent the moments of a distribution. In order to correspond to an actual distribution, moments must satisfy some conditions, see here. A less challenging question is to find a non-trivial distribution that cannot be an attractor, that is a distribution that can never be the distribution of the infinite sum $Z$, no matter what the $X_i$'s are. This is the object of the next section. Distributions that cannot be attractors The distribution for $Z$ is highly constrained. It must satisfy a number of conditions, and thus, few distributions are attractors (though far more than in the central limit theorem framework, where by far the normal distribution is the main attractor, and the only one with a finite variance.) I'll give just one example here, for $Z$ distributions whose support domain is the set of all natural numbers. Let us consider a very general discrete distribution for $X_i$, with $P(X_i = k) = p_k, k = 0, 1, 2$ and so on. In this case, $Z$'s distribution must also be discrete on the same support domain. This case covers all possible discrete distributions for $Z$, with support domain being the set of natural numbers. Let's use the notation $P(Z=k) = q_k$. 
Then we have: $P(Z=0) = p_0 = q_0 = P(X_1 =0)$, $P(Z=1) = p_1 p_0 = q_1 = P(X_1 = 1, X_2 =0)$, $P(Z=2) = (p_1^2 + p_2)p_0 = q_2 = P(X_1 = X_2 =1, X_3 =0)+P(X_1 = 2, X_2 =0)$, $P(Z=3) = (p_1^3 + 2 p_1 p_2 + p_3)p_0 = q_3$, $P(Z = 4) = (p_1^4 + 3 p_1^2 p_2 + 2 p_1 p_3+ p_2^2 + p_4 ) p_0 = q_4$. We don't even need to use the third, fourth or fifth equation. Let's focus on the first two. The second one implies that $p_1 = q_1 / p_0 = q_1 / q_0$. Thus we must have $q_1 \leq q_0$ for $Z$ to be an attractor. In short, any discrete distribution with $P(Z= 0) < P(Z = 1)$ is not an attractor. The geometric distribution is actually an attractor, the most obvious one, and possibly the only one with a simple representation. Another interesting question is the following: can two different $X_i$ distributions lead to the same attractor? In the case of the central limit theorem, this is true: whether you average exponential, Poisson, Bernoulli or uniform variables, you end up with a Gaussian variable - in this case the universal attractor; exceptions are few (the Lorenz distribution being one of them). The following section provides an answer for a specific attractor. If $Z$ is the geometric attractor, then $X_i$ must be Bernoulli Using the same notation as in the previous section, if $Z$ is geometric, then $P(Z = k) = q_k = q_0 (1-q_0)^k$. The equation $p_1 p_0 = q_1 = q_0(1-q_0)$ combined with $p_0 = q_0$ yields $p_1 = 1-q_0$. As a result, $p_0 + p_1 = q_0 + (1-q_0) =1$. Thus if $k> 1$ then $P(X_i = k) = p_k = 0$. This corresponds to a Bernoulli distribution for $X_i$. Interestingly, the Lorenz attractor in the central limit theorem framework can only be attained if the $X_i$'s themselves have a Lorenz distribution. Connection with the Fixed-Point theorem for distributions Consider $Z_k = X_k + X_{k} X_{k+1} + X_{k} X_{k+1} X_{k+2}+ \cdots$. We have $Z_k = X_k (1+ Z_{k+1})$. As $k\rightarrow \infty, Z_k \rightarrow Z$. The convergence is in distribution. So at the limit, $Z \sim X_i(1+Z)$, that is, the distributions on both sides are identical. Also, $X_k$ is independent of $Z_{k+1}$. In other words, $Z$ (specifically, its distribution) is a fixed-point of the backward stochastic recurrence $Z_k = X_k (1+ Z_{k+1})$. Solving for $Z$ amounts to solving a stochastic integral equation.
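As an additional empirical check of the variance formula, here is a minimal R sketch (the infinite series is truncated at a fixed number of terms, which is harmless because the partial products shrink geometrically in expectation). For $X_i \sim$ Uniform$[0,1]$ the formula gives $E(Z) = 1$ and $Var(Z) = (1/12)/\big((1-1/3)(1-1/2)^2\big) = 1/2$:

set.seed(1)
n_sim <- 1e5
K     <- 60                                    # truncation point of the infinite sum
Z <- replicate(n_sim, sum(cumprod(runif(K))))  # X1 + X1*X2 + X1*X2*X3 + ...
mean(Z)   # ~ 1   = E(X)/(1 - E(X))
var(Z)    # ~ 0.5 = Var(X) / ((1 - E(X^2)) (1 - E(X))^2)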
Variance of $Z = X_1 + X_1 X_2 + X_1 X_2 X_3 +\cdots$
Final update on 11/29/2019: I have worked on this a bit more, and wrote an article summarizing all the main findings. You can read it here. Surprisingly, there is a simple and general answer to this p
Variance of $Z = X_1 + X_1 X_2 + X_1 X_2 X_3 +\cdots$ Final update on 11/29/2019: I have worked on this a bit more, and wrote an article summarizing all the main findings. You can read it here. Surprisingly, there is a simple and general answer to this problem, despite the fact that all the terms in the infinite sum defining $Z$, are correlated. First, let us assume that $|E(X_i)| < 1$. This is required for convergence. Let us also assume that $E(X_i^2)<1$. This guarantees that the variance exists. We have the following formula for the $k$-th moment, for $k\geq 0$: $$E(Z^k) = E[(X_i(1+Z))^k]=E(X_i^k)E[(1+Z)^k].$$ It can be re-written as $$E(Z^k) =\frac{E(X_1^k)}{1-E(X_1^k)} \cdot\sum_{j=0}^{k-1} \frac{k!}{j!(k-j)}E(Z^j).$$ I suspect much simpler recurrence formulas can be found, for $E(Z^k)$. It follows immediately that $E(Z)=E(X_i)/(1-E(X_i))$. Moments of order 2, 3, and so on can be obtained iteratively. A little computation shows that $$Var(Z) = \frac{Var(X_i)}{(1-E(X_i^2))(1-E(X_i))^2}.$$ I checked the formula when $X_i$ is Bernouilli($p$), and it is exactly correct. I also checked empirically when $X_i$ is Uniform$[0,1]$, and it looks correct: $Var(Z) = 0.506$ based on 20,000 simulated $Z$ deviates, while the true value (according to my formula) should be $\frac{1}{2}$. Now let's look at $X_i = \sin(\pi Y_i)$ with $Y_i \sim$ Normal($0,1$). There is some simplification due to $E(X_i) = 0$ in this case: $Var(Z) = E(X_i^2) / (1 - E(X_i^2))$. To prove that $Var(Z)=1$ amounts to proving that $E(X_i^2) = 1/2$, that is: $$E(X_i^2) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} x^2 \sin^2(\pi x) e^{-x^2/2} dx = \frac{1}{2} .$$ I computed this integral using WolframAlpha, see here. The approximation is excellent, at least the first 7 digits are correct. However, the exact answer is not $1/2$, but instead $$E(X_i^2) = \frac{1+(4\pi^2-1)\exp(-2\pi^2)}{2} = 0.500000051...$$ Now, to answer the most challenging question - what kind of distribution is an attractor in this framework - we need to look at the formula that gives the moments of $Z$. Clearly, they can be pretty arbitrary, meaning that the class of attractors is very rich. Of course, not all sequences of numbers represent the moments of a distribution. In order to correspond to an actual distribution, moments most satisfy some conditions, see here. A less challenging question is to find a non-trivial distribution that can not be an attractor, that is a distribution that can never be the distribution of the infinite sum $Z$, no matter what the $X_i$'s are. This is the object of the next section. Distributions that can not be attractors The distribution for $Z$ is highly constrained. It must satisfy a number of conditions, and thus, few distributions are attractors (though far more than in the central limit theorem framework, where by far the normal distribution is the main attractor, and the only one with a finite variance.) I'll give just one example here, for $Z$ distributions whose support domain is the set of all natural numbers. Let us consider a very general discrete distribution for $X_i$, with $P(X_i = k) = p_k, k = 0, 1, 2$ and so on. In this case, $Z$'s distribution must also be discrete on the same support domain. This case covers all possible discrete distributions for $Z$, with support domain being the set of natural numbers. Let's use the notation $P(Z=k) = q_k$. 
Then we have: $P(Z=0) = p_0 = q_0 = P(X_1 =0)$, $P(Z=1) = p_1 p_0 = q_1 = P(X_1 = 1, X_2 =0)$, $P(Z=2) = (p_1^2 + p_2)p_0 = q_2 = P(X_1 = X_2 =1, X_3 =0)+P(X_1 = 2, X_2 =0)$, $P(Z=3) = (p_1^3 + 2 p_1 p_2 + p_3)p_0 = q_3$, $P(Z = 4) = (p_1^4 + 3 p_1^2 p_2 + 2 p_1 p_3+ p_2^2 + p_4 ) p_0 = q_4$. We don't even need to use the third, fourth or firth equation. Let's focus on the two first ones. The second one implies that $p_1 = q_1 / p_0 = q_1 / q_0$. Thus we must have $q_1 \leq q_0$ for $Z$ to be an attractor. In short any discrete distribution with $P(Z= 0) < P(Z = 1)$ is not an attractor. The geometric distribution is actually an attractor, the most obvious one, and possibly the only one with a simple representation. Another interesting question is the following: can two different $X_i$ distributions lead to the same attractor? In the case of the central limit theorem, this is true: whether you average exponential, Poisson, Bernoulli or uniform variables, you end up with a Gaussian variable - in this case the universal attractor; exceptions are few (the Lorenz distribution being one of them). The following section provides an answer for a specific attractor. If $Z$ is the geometric attractor, then $X_i$ must be Bernouilli Using the same notation as in the previous section, if $Z$ is geometric, then $P(Z = k) = q_k = q_0 (1-q_0)^k$. The equation $p_1 p_0 = q_1 = q_0(1-q_0)$ combined with $p_0 = q_0$ yields $p_1 = 1-q_0$. As a result, $p_0 + p_1 = q_0 + (1-q_0) =1$. Thus if $k> 1$ then $P(X_i = k) = p_k = 0$. This corresponds to a Bernouilli distribution for $X_i$. Interestingly, the Lorenz attractor in the central limit theorem framework can only be attained if the $X_i$'s themselves have a Lorenz distribution. Connection with the Fixed-Point theorem for distributions Consider $Z_k = X_k + X_{k} X_{k+1} + X_{k} X_{k+1} X_{k+2}+ \cdots$. We have $Z_k = X_k (1+ Z_{k+1})$ . As $k\rightarrow \infty, Z_k \rightarrow Z$. The convergence is in distribution. So at the limit, $Z \sim X_i(1+Z)$, that is, the distributions on both sides are identical. Also, $X_k$ is independent of $Z_{k+1}$. In other words, $Z$ (specifically, its distribution) is a fixed-point of the backward stochastic recurrence $Z_k = X_k (1+ Z_{k+1})$. Solving for $Z$ amounts to solving a stochastic integral equation.
Variance of $Z = X_1 + X_1 X_2 + X_1 X_2 X_3 +\cdots$ Final update on 11/29/2019: I have worked on this a bit more, and wrote an article summarizing all the main findings. You can read it here. Surprisingly, there is a simple and general answer to this p
55,677
Maximum likelihood as minimizing the dissimilarity between the empirical distribution and the model distribution
This is a late response but I hope it may help: Proof (this proof is basically a summary of the explanations from the author): To carry out the proof, we can first follow the procedure given by the author, which allows us to obtain a more convenient expression: $$\begin{aligned} \theta_{ML} &=\arg \max_\theta p_{model}(\mathbb{X};\theta)\\ &= \arg\max_\theta\prod_{i=1}^m p_{model}(x^{(i)};\theta) \\ &= \arg\max_\theta\sum_{i=1}^m \log(p_{model}(x^{(i)};\theta)) \\ &= \arg\max_\theta \frac{1}{m}\sum_{i=1}^m \log(p_{model}(x^{(i)};\theta))\\ &= \arg\max_\theta \sum_{i=1}^m \frac{1}{m} \log(p_{model}(x^{(i)};\theta)) \end{aligned}$$ Note this last expression is the expectation of the $\log(p_{model}(x;\theta))$ function with respect to the empirical distribution defined by the training data ($\hat{p}_{data}$), which puts a probability of $1/m$ on each of the $m$ points $x^{(1)},x^{(2)},...,x^{(m)}$. So we can equivalently write this last expression as: $$ \theta_{ML} = \arg\max_{\theta}\mathbb{E}_{x\sim \hat{p}_{data}} \log(p_{model}(x;\theta))$$ Hence, obtaining the value of $\theta$ which satisfies this expression will maximize the likelihood of $p_{model}(x;\theta)$ being the statistical model that best fits our set of data samples $\mathbb{X}$. But we can also get the parameter $\theta$ that maximizes the likelihood using the KL divergence of the probability distributions $\hat{p}_{data}$ (empirical distribution of $\mathbb{X}$) and $p_{model}$ (our statistical model that we are using to fit $\mathbb{X}$): $$ D_{\text{KL}}(\hat{p}_{data} \parallel p_{model}) = \mathbb{E}_{x\sim \hat{p}_{data}} [ \log(\hat{p}_{data}(x)) - \log(p_{model}(x;\theta))]$$ This is because of the explanation given by the author, which is that $\mathbb{E}_{x\sim \hat{p}_{data}} \log(\hat{p}_{data}(x))$ does not depend on $\theta$ (it only depends on the data generating process), so it can be treated as a constant. Hence we can address the same problem of finding the value of $\theta$ that maximizes the likelihood by minimizing this KL divergence, because this is the same as minimizing: $$ \mathbb{E}_{x\sim \hat{p}_{data}} [- \log(p_{model}(x;\theta))]$$ This is just the negative of the simplified expression for $\theta_{ML}$ that we wrote earlier from the perspective of the likelihood. So, to sum up, we can also calculate the parameter $\theta_{ML}$ by: $$ \theta_{ML} = \arg\min_{\theta} D_{\text{KL}}(\hat{p}_{data} \parallel p_{model})= \arg\min_{\theta} \mathbb{E}_{x\sim \hat{p}_{data}} [- \log(p_{model}(x;\theta))] $$ Intuition: With this said, I believe we can think of using the KL divergence for maximizing the likelihood as a way of making the predicted distribution $(p_{model}(\mathbb{X};\theta))$ as close as possible to the empirical distribution. That way, by sampling from our predicted distribution, we would be able to obtain a set of samples similar to the initial ones ($\mathbb{X}$). So this may mean that we have correctly estimated the true distribution of the data $\mathbb{X}$.
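A small numerical illustration may help (a minimal R sketch with a Bernoulli model; all numbers are illustrative). The negative log-likelihood and the KL divergence from the empirical distribution differ only by the entropy of $\hat{p}_{data}$, which is constant in $\theta$, so they are minimized at the same $\theta$:

set.seed(1)
x     <- rbinom(50, 1, 0.3)                         # observed data
p_hat <- c(mean(x == 0), mean(x == 1))              # empirical distribution on {0, 1}
theta <- seq(0.01, 0.99, by = 0.01)
nll <- sapply(theta, function(t) -mean(dbinom(x, 1, t, log = TRUE)))
kl  <- sapply(theta, function(t) sum(p_hat * log(p_hat / c(1 - t, t))))
range(nll - kl)                 # constant: the entropy of p_hat
theta[which.min(nll)]           # same minimizer...
theta[which.min(kl)]            # ...namely mean(x)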
Maximum likelihood as minimizing the dissimilarity between the empirical distriution and the model d
This is a late response but I hope it may help: Proof (this proof is basically a summary of the explanations from the author): To go for the proof, first we can follow the procedure given by the autho
Maximum likelihood as minimizing the dissimilarity between the empirical distriution and the model distribution This is a late response but I hope it may help: Proof (this proof is basically a summary of the explanations from the author): To go for the proof, first we can follow the procedure given by the author which allows us to have a more convenient expression: $$\begin{aligned} \theta_{ML} &=\arg \max_\theta p_{model}(\mathbb{X};\theta)\\ &= \arg\max_\theta\prod_{i=1}^m p_{model}(x^{(i)};\theta) \\ &= \arg\max_\theta\sum_{i=1}^m \log(p_{model}(x^{(i)};\theta)) \\ &= \arg\max_\theta \frac{1}{m}\sum_{i=1}^m \log(p_{model}(x^{(i)};\theta))\\ &= \arg\max_\theta \sum_{i=1}^m \frac{1}{m} \log(p_{model}(x^{(i)};\theta)) \end{aligned}$$ Note this last expression is the expectation of this $\log(p_{model}(x;\theta))$ function with respect to the empirical distribution defined by the training data ($\hat{p}_{data}$) which puts a probability of $1/m$ on each of the $m$ points $x^{(1)},x^{(2)},...,x^{(m)}$. So we can equivalently write this last expression as: $$ \theta_{ML} = \arg\max_{\theta}\mathbb{E}_{x\sim \hat{p}_{data}} \log(p_{model}(x;\theta))$$ Hence, obtaining the value of $\theta$ which satisfies this expresion will maximize the likelihood of $p_{model}(x,\theta)$ being the statistical model that best fits our set of data samples $\mathbb{X}$. But we can also get the parameter $\theta$ that maximizes the likelihood using the KL divergence of the probability distributions $\hat{p}_{data}$ (empirical distribution of $\mathbb{X}$) and $p_{model}$ (our statistical model that we are using to fit $\mathbb{X}$): $$ D_{\text{KL}}(\hat{p}_{data} \parallel p_{model}) = \mathbb{E}_{x\sim \hat{p}_{data}} [ \log(\hat{p}_{data}(x)) - \log(p_{model}(x;\theta))]$$ This is because of the explanation given by the author, which is that $\mathbb{E}_{x\sim \hat{p}_{data}} \log(\hat{p}_{data}(x))$ does not depend on $\theta$ (it only depends on the data generating process), so it can be trated as a constant. Hence we can adress the same problem of finding the value of $\theta$ that maximizes the likelihood by minimizing this KL divergence, because this is the same as minimizing: $$ \mathbb{E}_{x\sim \hat{p}_{data}} [- \log(p_{model}(x;\theta))]$$ Which is just the negative form of the simplified expression for $\theta_{ML}$ that we have written earlier from the perspective of the likelihood. So, to sum up, we can also calculate the parameter $\theta_{ML}$ by: $$ \theta_{ML} = \arg\min_{\theta} D_{\text{KL}}(\hat{p}_{data} \parallel p_{model})= \arg\min_{\theta} \mathbb{E}_{x\sim \hat{p}_{data}} [- \log(p_{model}(x;\theta))] $$ Intuition: With this said, I believe we can think of using the KL divergence for maximizing the likelihood as a way of making the predicted distribution $(p_{model}(\mathbb{X},\theta))$ as close as possible to the empirical distribution. Thereby with our predicted distribution and by sampling it, we would be able to obtain a set of samples similar to the initial ones ($\mathbb{X}$). So this may mean that we have correctly calculate the true distribution of the data $\mathbb{X}$.
Maximum likelihood as minimizing the dissimilarity between the empirical distriution and the model d This is a late response but I hope it may help: Proof (this proof is basically a summary of the explanations from the author): To go for the proof, first we can follow the procedure given by the autho
55,678
Given a pmf, how is it possible to calculate the cdf?
Given that you're talking about a discrete random variable over the integers, you should certainly know how the pmf behaves between those values - the probability of it taking any value in any interval strictly between (say) $1$ and $2$ is $0$. Consequently, you also know how the cdf $F(x)$ behaves, since it's just the sum of all the probabilities up to $x$. It doesn't matter how many zeroes you want to add in, they don't change anything. For further discussion of this point, see Wikipedia's Cumulative Distribution Function; Definition ... and the two sections immediately under that (Properties and Examples). You may find the drawing of a discrete cdf at the right hand side of the Properties section helpful (it's the top one). Here's an example for a slightly different distribution than the one in your question (though it's broadly similar).
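Concretely, for the pmf in the question the cdf is just the running sum of the probabilities, constant between the support points. A minimal R sketch:

x <- 1:4
p <- c(0.4, 0.3, 0.2, 0.1)
F <- cumsum(p)                          # 0.4 0.7 0.9 1.0
cdf <- stepfun(x, c(0, F))              # right-continuous step function
cdf(c(0.5, 1, 2.7, 4, 10))              # 0.0 0.4 0.7 1.0 1.0
plot(cdf, verticals = FALSE, pch = 16, main = "CDF of the pmf")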
Given a pmf, how is it possible to calculate the cdf?
Given that you're talking about a discrete random variable over the integers, you should certainly know how the pmf behaves between those values - the probability of it taking any value in any interva
Given a pmf, how is it possible to calculate the cdf? Given that you're talking about a discrete random variable over the integers, you should certainly know how the pmf behaves between those values - the probability of it taking any value in any interval strictly between (say) $1$ and $2$ is $0$. Consequently, you do also know how the cdf $F(x)$ behaves, since it's just the sum of all the probabilities up to $x$. It doesn't matter how many zeroes you want add in, they don't change anything. For further discussion of this point, see Wikipedia's Cumulative Distribution Function; Definition ... and the two sections immediately under that (Properties and Examples). You may find the drawing of a discrete cdf at the right hand side of the Properties section helpful (it's the top one). Here's an example for a slightly different distribution than the one in your question (though it's broadly similar).
Given a pmf, how is it possible to calculate the cdf? Given that you're talking about a discrete random variable over the integers, you should certainly know how the pmf behaves between those values - the probability of it taking any value in any interva
55,679
Given a pmf, how is it possible to calculate the cdf?
You can compute the CDF using delta-functions. Express the PMF as follows, $$ p(x) = (0.4) \delta(x-1) + (0.3) \delta(x-2) + (0.2) \delta(x-3) + (0.1) \delta(x-4) $$ The CDF is then given by integration: by definition, if $P(x)$ is the CDF then, $$ P(x) = \int_{-\infty}^x p(y) ~ dy $$ Observe that if $x<1$ then each of the delta functions vanishes and so $P(x) = 0$. If $1<x<2$ then the only delta function which contributes to the integral is $\delta(x-1)$, so we see that $P(x) = (0.4)$ on this interval. The same procedure can now be carried out on the other intervals.
Given a pmf, how is it possible to calculate the cdf?
You can compute the CDF using delta-functions. Express the PMF as follows, $$ p(x) = (0.4) \delta(x-1) + (0.3) \delta(x-2) + (0.2) \delta(x-3) + (0.1) \delta(x-4) $$ The CDF is then given by integrat
Given a pmf, how is it possible to calculate the cdf? You can compute the CDF using delta-functions. Express the PMF as follows, $$ p(x) = (0.4) \delta(x-1) + (0.3) \delta(x-2) + (0.2) \delta(x-3) + (0.1) \delta(x-4) $$ The CDF is then given by integration, by definition, if $P(x)$ is the CDF then, $$ P(x) = \int_{-\infty}^x p(y) ~ dy $$ Observe that if $x<1$ then each of the delta functions vanish and so $P(x) = 0$. If $1<x<2$ then the only delta function which contributes to the integral is $\delta(x-1)$, so we see that $P(x) = (0.4)$ on this interval. The same procedure can now be carried out on the other invervals.
Given a pmf, how is it possible to calculate the cdf? You can compute the CDF using delta-functions. Express the PMF as follows, $$ p(x) = (0.4) \delta(x-1) + (0.3) \delta(x-2) + (0.2) \delta(x-3) + (0.1) \delta(x-4) $$ The CDF is then given by integrat
55,680
Matching vs simple regression for causal inference?
Your question rightly acknowledges that throwing away cases can lose useful information and power. It doesn't, however, acknowledge the danger in using regression as the alternative: what if your regression model is incorrect? Are you sure that the log-odds of outcome are linearly related to treatment and to the covariate values as they are entered into your logistic regression model? Might some continuous predictors like age need to be modeled with logs/polynomials/splines instead of just with linear terms? Might the effects of treatment depend on some of those covariate values? Even if you account for that last possibility with treatment-covariate interaction terms, how do you know that you accounted for it properly with the linear interaction terms you included? A perfectly matched set of treatment and control cases would get around those potential problems with regression.* That leads to the next practical problem: exact matching is seldom possible, so you have to use some approximation. There are several approaches to inexact matching; see this page for some discussion. Matching based on propensity scores, the probability of being in a treatment group given the covariate values for a case, is one frequently used method. You can also combine matching with regression. You could include covariates in a regression model of matched cases; some argue that you should do this in any event, as noted on this page. You can go even further to potentially include all cases: weighting cases according to their treatment/control propensity scores (inversely) in your regression model. This page nicely outlines matching versus weighting; this page goes into more details. Both regression and matching have strengths and weaknesses. You need not think of them necessarily as alternatives; combining them intelligently can sometimes work better than either alone. *Even a data set perfectly matched on the known covariates can't rule out the problem posed by unknown covariates that might affect outcome directly or change the effect of treatment on outcome. That's why randomized trials, which in principle average out those unknown effects, can be so important.
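If it helps, here is a minimal R sketch of the combined approach (propensity-score matching followed by covariate-adjusted regression on the matched sample). It assumes the MatchIt package and a hypothetical data frame dat with a binary treat indicator, covariates age and sex, and a binary outcome; adapt the names to your data:

library(MatchIt)
m  <- matchit(treat ~ age + sex, data = dat, method = "nearest")  # 1:1 nearest-neighbor propensity-score matching
summary(m)                          # covariate balance before/after matching
md <- match.data(m)                 # matched sample (includes matching weights)
fit <- glm(outcome ~ treat + age + sex, family = binomial,
           data = md, weights = weights)
summary(fit)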
Matching vs simple regression for causal inference?
Your question rightly acknowledges that throwing away cases can lose useful information and power. It doesn't, however, acknowledge the danger in using regression as the alternative: what if your regr
Matching vs simple regression for causal inference? Your question rightly acknowledges that throwing away cases can lose useful information and power. It doesn't, however, acknowledge the danger in using regression as the alternative: what if your regression model is incorrect? Are you sure that the log-odds of outcome are linearly related to treatment and to the covariate values as they are entered into your logistic regression model? Might some continuous predictors like age need to modeled with logs/polynomials/splines instead of just with linear terms? Might the effects of treatment depend on some of those covariate values? Even if you account for that last possibility with treatment-covariate interaction terms, how do you know that you accounted for it properly with the linear interaction terms you included? A perfectly matched set of treatment and control cases would get around those potential problems with regression.* That leads to the next practical problem: exact matching is seldom possible, so you have to use some approximation. There are several approaches to inexact matching; see this page for some discussion. Matching based on propensity scores, the probability of being in a treatment group give the covariate values for a case, is one frequently used method. You can also combine matching with regression. You could include covariates in a regression model of matched cases; some argue that you should do this in any event, as noted on this page. You can go even further to potentially include all cases: weighting cases according to their treatment/control propensity scores (inversely) in your regression model. This page nicely outlines matching versus weighting; this page goes into more details. Both regression and matching have strengths and weaknesses. You need not think of them necessarily as alternatives; combining them intelligently can sometimes work better than either alone. *Even a data set perfectly matched on the known covariates can't rule out the problem posed by unknown covariates that might affect outcome directly or change the effect of treatment on outcome. That's why randomized trials, which in principle average out those unknown effects, can be so important.
Matching vs simple regression for causal inference? Your question rightly acknowledges that throwing away cases can lose useful information and power. It doesn't, however, acknowledge the danger in using regression as the alternative: what if your regr
55,681
Is appropriate to use empirical Bayes (EB) in this way?
Whether or not your approach is legitimate depends in large part on how you describe your approach when publishing or presenting your results. If you are completely open about your approach and your process then the reader is able to judge your approach for themselves. I state this because statistics so often involves subjective choices with no clear right or wrong answer, so the best approach is simply to make all of your choices open. What I mean is that an approach can be a perfectly legitimate use of Empirical Bayes but the reader might take issue with Empirical Bayes. For the purposes of your model though: you have picked one approach that is consistent with Empirical Bayes work. See for example here: http://varianceexplained.org/r/empirical_bayes_baseball/ Reference supporting your approach: https://www.jstor.org/stable/2669771?seq=1#page_scan_tab_contents Again, as long as you make it clear to the reader/consumer of your work how you chose each prior distribution you allow the reader to make the decision of how much they agree with your analysis. Another approach: If this approach is making you uncomfortable then there is an approach that I personally like better and that is to pick a prior distribution that generates reasonable data. This is nicely demonstrated here: https://rss.onlinelibrary.wiley.com/doi/pdf/10.1111/rssa.12378 (especially figure 4). Basically what you do is simulate your data from the prior and see if it reasonably approximates the real data. I also recommend reading this: https://arxiv.org/pdf/1708.07487.pdf to get an understanding of the entire thought process.
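For that last suggestion, a prior predictive check is easy to script. A minimal R sketch (the Beta prior and binomial likelihood below are placeholders - substitute your own prior and model): draw parameters from the prior, simulate data from the likelihood, and compare the simulated data to the real data:

set.seed(1)
n_rep <- 1000
n_obs <- 50                                   # same size as the real data set
theta_sim <- rbeta(n_rep, 2, 2)               # draws from the prior
y_sim <- rbinom(n_rep, size = n_obs, prob = theta_sim)   # data simulated from the prior
hist(y_sim / n_obs, breaks = 30,
     main = "Prior predictive distribution", xlab = "simulated proportion")
# abline(v = observed_proportion, col = "red")  # compare with the real data (hypothetical name)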
Is appropriate to use empirical Bayes (EB) in this way?
Whether or not your approach is legitimate depends in large part about how you describe your approach when publishing or presenting your results. If you are completely open about your approach and you
Is appropriate to use empirical Bayes (EB) in this way? Whether or not your approach is legitimate depends in large part about how you describe your approach when publishing or presenting your results. If you are completely open about your approach and your process then the reader is able to judge your approach for themselves. I state this because statistics so often involves subjective choices that have no clear right or wrong answer that the best approach is to simply make all of your choices open. What I mean is an approach can be a perfectly legitimate use of Empirical Bayes but the reader might take issue with Empirical Bayes. For the purposes of your model though: you have picked one approach that is consistent with Empirical Bayes work. See for example here: http://varianceexplained.org/r/empirical_bayes_baseball/ Reference supporting your approach: https://www.jstor.org/stable/2669771?seq=1#page_scan_tab_contents Again as long as you make it clear to the reader/consumer of your work how you chose each prior distribution you allow the reader to make the decision of how much they agree with your analysis. Another approach: If this approach is making you uncomfortable then there is an approach that I personally like better and that is to pick a prior distribution that generates reasonable data. This is nicely demonstrated here: https://rss.onlinelibrary.wiley.com/doi/pdf/10.1111/rssa.12378 (especially figure 4). Basically what you do is simulate your data from the prior and see if it reasonable approximates the real data. I also recommend reading this: https://arxiv.org/pdf/1708.07487.pdf to get an understanding of the entire thought process.
Is appropriate to use empirical Bayes (EB) in this way? Whether or not your approach is legitimate depends in large part about how you describe your approach when publishing or presenting your results. If you are completely open about your approach and you
55,682
Is appropriate to use empirical Bayes (EB) in this way?
Let me slightly diverge from your exact question. You are describing your intended model, where the probability of giving the answer $y=1$ is modeled as $$ p(y_i=1) = \lambda\,0.5 + (1 - \lambda) \, p^*_i $$ Notice that the proposed model can be described in a different form $$ \lambda\;0.5 + (1 - \lambda) \, p^*_i = \alpha + \gamma_i $$ in which case you could write $$ \lambda = \frac{\alpha}{0.5}, \qquad p^*_i = \frac{\gamma_i}{1-\lambda} $$ If you think about it, then it seems that you can re-define your model as a logistic regression with $\alpha$ and $\gamma$ being unbounded, real-valued parameters, that get turned into probabilities by passing them through the logistic function $\sigma(\cdot)$, $$ p(y_i=1) = \sigma(\alpha + \gamma\,d_i) $$ where $d$ is an indicator that is equal to $1$ for the regular trials, and $0$ for the "catch" trials. In that case, the probability of $y=1$ for the "catch" trials is simply $\sigma(\alpha)$. You should be able to learn $\alpha$ from your data in a single step, without any empirical Bayesian tricks. In that case, $\alpha$ would be the "base rate" for "catch" trials, and the added effect of non-random answers would be modeled by $\gamma$. Finally, this enables us to re-define your complete model as $$ p(y_i=1) = \sigma\big(\alpha + d \cdot [\beta_0 + u_{j0} + (\beta_1 + u_{j1}) \,x_i]\big) $$ I do not have any formal arguments to back my thesis, but for me, this formulation seems to be simpler and more flexible (e.g. $\lambda$ and $p^*$ are not constrained to the unit interval). You could argue with that, but for me, such a formulation is also cleaner in terms of interpretability, because you model the additive effect directly, without the weighting by $\lambda$. As for priors, for the above model you could choose something like $\alpha \sim \mathcal{N}(0, \tau)$, where $\tau$ would control the variability around $0.5$ assumed a priori for $p(y=0)$.
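A quick simulation check of this reformulation (a minimal R sketch without the random effects, i.e. a plain glm; all parameter values are made up): the fitted intercept recovers the catch-trial base rate on the logit scale, and the d and d:x terms recover the added effect on the regular trials:

set.seed(1)
N  <- 4000
d  <- rbinom(N, 1, 0.8)                 # 1 = regular trial, 0 = "catch" trial
x  <- rnorm(N)
b0 <- 0.5; b1 <- 1.2
p  <- plogis(0 + d * (b0 + b1 * x))     # catch trials: plogis(0) = 0.5
y  <- rbinom(N, 1, p)
fit <- glm(y ~ d + d:x, family = binomial)
coef(fit)                 # intercept ~ 0, d ~ b0, d:x ~ b1
plogis(coef(fit)[1])      # estimated P(y = 1) on catch trials, ~ 0.5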
Is appropriate to use empirical Bayes (EB) in this way?
Let me slightly diverge from your exact question. You are describing your intended model, where the probability of giving the answer $y=1$ is modeled as $$ p(y_i=1) = \lambda\,0.5 + (1 - \lambda) \, p
Is appropriate to use empirical Bayes (EB) in this way? Let me slightly diverge from your exact question. You are describing your intended model, where the probability of giving the answer $y=1$ is modeled as $$ p(y_i=1) = \lambda\,0.5 + (1 - \lambda) \, p^*_i $$ Notice that the proposed model can be described in different form $$ \lambda\;0.5 + (1 - \lambda) \, p^*_i = \alpha + \gamma_i $$ in such case, you could write $$ \lambda = \frac{\alpha}{0.5}, \qquad p^*_i = \frac{\gamma_i}{1-\lambda} $$ If you think about it, then it seems that you can re-define your model to logistic regression with $\alpha$ and $\gamma$ being unbounded, real-valued parameters, that get turned into probabilities by passing them through the logistic function $\sigma(\cdot)$, $$ p(y_i=1) = \sigma(\alpha + \gamma\,d_i) $$ where $d$ is an indicator that is equal to $1$ for the regular trials, and $0$ for the "catch" trials. In such case, probability of $y=1$ for the "catch" trial is simply $\sigma(\alpha)$. You should be able to learn $\alpha$ from your data in a single step, without any empirical Bayesian tricks. In such case, $\alpha$ would be the "base rate" for "catch" trials, and the added effect of non-random answers would be modeled by $\gamma$. Finally, this enables us to re-define your complete model as $$ p(y_i=1) = \sigma\big(\alpha + d \cdot [\beta_0 + u_{j0} + (\beta_1 + u_{j1}) \,x_i]\big) $$ I do not have any formal arguments to back my thesis, but for me, this formulation seems to be simpler and more flexible (e.g. $\lambda$ and $p^*$ are not constrained to unit interval). You could argue with that, but for me, such formulation is also cleaner in terms of interpretability, because you model the additive effect directly, withouth the weighting by $\lambda$. As about priors, for the above model you could choose something like $\alpha \sim \mathcal{N}(0, \tau)$, where $\tau$ would control the variability around $0.5$ assumed a priori for $p(y=0)$.
Is appropriate to use empirical Bayes (EB) in this way? Let me slightly diverge from your exact question. You are describing your intended model, where the probability of giving the answer $y=1$ is modeled as $$ p(y_i=1) = \lambda\,0.5 + (1 - \lambda) \, p
55,683
Showing t-distribution from multivariate standard normals
Continuing from my comment above: Let $$T=\frac{(\sqrt{n-1})W}{\sqrt{1 - W^2}}$$ Now, $$1-W^2 = 1-\frac{X'aa'X}{X'X}$$ $$\implies (1-W^2)X'X=X'X-X'AX=X'(I-A)X\,,\qquad A\equiv aa'$$ Therefore, \begin{align} T&=\frac{(\sqrt{n-1})a'X}{\sqrt{X'X}}\frac{\sqrt{X'X}}{\sqrt{X'(I-A)X}} \\&=\frac{(\sqrt{n-1})a'X}{\sqrt{X'(I-A)X}} \end{align} See that $A$ is a symmetric idempotent matrix with $\operatorname{tr}(A)=\operatorname{rank}(A)=1$. Similarly, $(I-A)$ is also symmetric idempotent so $\operatorname{rank}(I-A)=n-1$. Now let $$Z=a'X \quad\text{ and }\quad V=X'(I-A)X$$ We have, $$X'X=X'AX+X'(I-A)X = Z^2+V$$ Since $\operatorname{rank}(A)+\operatorname{rank}(I-A)=n$, from Cochran's theorem: $X'AX$ and $X'(I-A)X$ are independent. Therefore $Z$ and $V$ are independent and $Z^2\sim \chi^2_{(1)}$ and $V\sim \chi^2_{(n-1)}$. Finally, $$T=\frac{Z}{\sqrt{V/(n-1)}}\,,$$ which by definition now follows t-distribution with $(n-1)$ degrees of freedom.
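A Monte Carlo check of the result (a minimal R sketch; $a$ is taken to be a unit vector, which is the implicit assumption that makes $A = aa'$ idempotent):

set.seed(1)
n <- 5
a <- rep(1, n) / sqrt(n)                     # any unit vector works
Tstat <- replicate(2e4, {
  X <- rnorm(n)
  W <- sum(a * X) / sqrt(sum(X^2))
  sqrt(n - 1) * W / sqrt(1 - W^2)
})
ks.test(Tstat, pt, df = n - 1)               # consistent with t_{n-1}
qqplot(qt(ppoints(length(Tstat)), df = n - 1), Tstat); abline(0, 1)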
Showing t-distribution from multivariate standard normals
Continuing from my comment above: Let $$T=\frac{(\sqrt{n-1})W}{\sqrt{1 - W^2}}$$ Now, $$1-W^2 = 1-\frac{X'aa'X}{X'X}$$ $$\implies (1-W^2)X'X=X'X-X'AX=X'(I-A)X\,,\qquad A\equiv aa'$$ Therefore, \begin
Showing t-distribution from multivariate standard normals Continuing from my comment above: Let $$T=\frac{(\sqrt{n-1})W}{\sqrt{1 - W^2}}$$ Now, $$1-W^2 = 1-\frac{X'aa'X}{X'X}$$ $$\implies (1-W^2)X'X=X'X-X'AX=X'(I-A)X\,,\qquad A\equiv aa'$$ Therefore, \begin{align} T&=\frac{(\sqrt{n-1})a'X}{\sqrt{X'X}}\frac{\sqrt{X'X}}{\sqrt{X'(I-A)X}} \\&=\frac{(\sqrt{n-1})a'X}{\sqrt{X'(I-A)X}} \end{align} See that $A$ is a symmetric idempotent matrix with $\operatorname{tr}(A)=\operatorname{rank}(A)=1$. Similarly, $(I-A)$ is also symmetric idempotent so $\operatorname{rank}(I-A)=n-1$. Now let $$Z=a'X \quad\text{ and }\quad V=X'(I-A)X$$ We have, $$X'X=X'AX+X'(I-A)X = Z^2+V$$ Since $\operatorname{rank}(A)+\operatorname{rank}(I-A)=n$, from Cochran's theorem: $X'AX$ and $X'(I-A)X$ are independent. Therefore $Z$ and $V$ are independent and $Z^2\sim \chi^2_{(1)}$ and $V\sim \chi^2_{(n-1)}$. Finally, $$T=\frac{Z}{\sqrt{V/(n-1)}}\,,$$ which by definition now follows t-distribution with $(n-1)$ degrees of freedom.
Showing t-distribution from multivariate standard normals Continuing from my comment above: Let $$T=\frac{(\sqrt{n-1})W}{\sqrt{1 - W^2}}$$ Now, $$1-W^2 = 1-\frac{X'aa'X}{X'X}$$ $$\implies (1-W^2)X'X=X'X-X'AX=X'(I-A)X\,,\qquad A\equiv aa'$$ Therefore, \begin
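A quick simulation check of the derivation above, assuming $a$ is a fixed unit vector (which the idempotency of $A = aa'$ requires) and $X \sim N(0, I_n)$; the sample size and seed are arbitrary choices.
# Hedged sketch: verify that T = sqrt(n-1) * W / sqrt(1 - W^2), with
# W = a'X / sqrt(X'X), behaves like a t(n-1) random variable.
set.seed(42)
n <- 5
a <- rnorm(n); a <- a / sqrt(sum(a^2))       # unit vector so A = aa' is idempotent
Tstat <- replicate(1e5, {
  X <- rnorm(n)
  W <- sum(a * X) / sqrt(sum(X^2))
  sqrt(n - 1) * W / sqrt(1 - W^2)
})
# simulated quantiles should match the t(n-1) quantiles closely
round(rbind(simulated   = quantile(Tstat, c(0.05, 0.5, 0.95)),
            theoretical = qt(c(0.05, 0.5, 0.95), df = n - 1)), 3)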
55,684
Probability of One Random Variable Less than Another -- Why is this approach Wrong?
Your expression is correct: Since $X_1 \ \bot \ X_2$ we can use a Riemann-Stieltjes integral to write the probability of interest as: $$\begin{equation} \begin{aligned} \mathbb{P}(X_1 \geqslant X_2) &= \mathbb{P}(X_2 \leqslant X_1) \\[6pt] &= \int \mathbb{P}(X_2 \leqslant X_1 | X_1 = t) \ d F_{X_1}(t) \\[6pt] &= \int \mathbb{P}(X_2 \leqslant t) \ d F_{X_1}(t) \\[6pt] &= \int F_{X_2}(t) \ d F_{X_1}(t). \\[6pt] \end{aligned} \end{equation}$$
Probability of One Random Variable Less than Another -- Why is this approach Wrong?
Your expression is correct: Since $X_1 \ \bot \ X_2$ we can use a Riemann-Stieltjes integral to write the probability of interest as: $$\begin{equation} \begin{aligned} \mathbb{P}(X_1 \geqslant X_2)
Probability of One Random Variable Less than Another -- Why is this approach Wrong? Your expression is correct: Since $X_1 \ \bot \ X_2$ we can use a Riemann-Stieltjes integral to write the probability of interest as: $$\begin{equation} \begin{aligned} \mathbb{P}(X_1 \geqslant X_2) &= \mathbb{P}(X_2 \leqslant X_1) \\[6pt] &= \int \mathbb{P}(X_2 \leqslant X_1 | X_1 = t) \ d F_{X_1}(t) \\[6pt] &= \int \mathbb{P}(X_2 \leqslant t) \ d F_{X_1}(t) \\[6pt] &= \int F_{X_2}(t) \ d F_{X_1}(t). \\[6pt] \end{aligned} \end{equation}$$
Probability of One Random Variable Less than Another -- Why is this approach Wrong? Your expression is correct: Since $X_1 \ \bot \ X_2$ we can use a Riemann-Stieltjes integral to write the probability of interest as: $$\begin{equation} \begin{aligned} \mathbb{P}(X_1 \geqslant X_2)
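A numerical sanity check of the identity above; the two normal distributions standing in for $X_1$ and $X_2$ are arbitrary illustrative choices.
# Hedged sketch: compare the integral of F_X2(t) f_X1(t) dt with a Monte Carlo
# estimate of P(X1 >= X2), assuming X1 ~ N(1, 1) and X2 ~ N(0, 2).
set.seed(7)
analytic <- integrate(function(t) pnorm(t, 0, sqrt(2)) * dnorm(t, 1, 1),
                      lower = -Inf, upper = Inf)$value
mc <- mean(rnorm(1e6, 1, 1) >= rnorm(1e6, 0, sqrt(2)))
c(integral = analytic, monte_carlo = mc)   # both should be close (about 0.72 here)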
55,685
Probability of One Random Variable Less than Another -- Why is this approach Wrong?
It seems correct to me, because what you write is actually $\int F_{X_2}(t)f_{X_1}(t)dt$ and can be interpreted as sweeping $t$ values such that when $X_1$ is equal to $t$, we consider all $X_2$ smaller than $t$; applying the total probability law gives us $P(X_2\leq X_1)\approx\sum_t P(X_2\leq t)P(X_1=t)$, which makes sense. An example: let $X_1\sim\exp(\lambda),X_2\sim\exp(\mu)\rightarrow F_{X_2}(t)=1-e^{-\mu t}$, $dF_{X_1}(t)=\lambda e^{-\lambda t}dt$. Then, the integral becomes: $$P(X_2\leq X_1)=\int_0^\infty (1-e^{-\mu t})\lambda e^{-\lambda t}dt=1-\lambda\int_0^\infty e^{-(\lambda+\mu)t}dt=\frac{\mu}{\lambda+\mu}$$ which can be found via the joint PDF directly as in @MichaelChernick's comment: $$P(X_2\leq X_1)=\int_0^\infty\int_0^{x_1}\lambda\mu e^{-\mu x_2}e^{-\lambda x_1}dx_2dx_1=\underbrace{\int_0^\infty\lambda e^{-\lambda x_1}(1-e^{-\mu x_1})dx_{1}}_{\text{above integral}}$$
Probability of One Random Variable Less than Another -- Why is this approach Wrong?
It seems correct to me, because what you write is actually $\int F_{X_2}(t)f_{X_1}(t)dt$ and can be interpreted as swiping $t$ values such that when $X_1$ is equal to $t$, we consider all $X_2$ smalle
Probability of One Random Variable Less than Another -- Why is this approach Wrong? It seems correct to me, because what you write is actually $\int F_{X_2}(t)f_{X_1}(t)dt$ and can be interpreted as sweeping $t$ values such that when $X_1$ is equal to $t$, we consider all $X_2$ smaller than $t$; applying the total probability law gives us $P(X_2\leq X_1)\approx\sum_t P(X_2\leq t)P(X_1=t)$, which makes sense. An example: let $X_1\sim\exp(\lambda),X_2\sim\exp(\mu)\rightarrow F_{X_2}(t)=1-e^{-\mu t}$, $dF_{X_1}(t)=\lambda e^{-\lambda t}dt$. Then, the integral becomes: $$P(X_2\leq X_1)=\int_0^\infty (1-e^{-\mu t})\lambda e^{-\lambda t}dt=1-\lambda\int_0^\infty e^{-(\lambda+\mu)t}dt=\frac{\mu}{\lambda+\mu}$$ which can be found via the joint PDF directly as in @MichaelChernick's comment: $$P(X_2\leq X_1)=\int_0^\infty\int_0^{x_1}\lambda\mu e^{-\mu x_2}e^{-\lambda x_1}dx_2dx_1=\underbrace{\int_0^\infty\lambda e^{-\lambda x_1}(1-e^{-\mu x_1})dx_{1}}_{\text{above integral}}$$
Probability of One Random Variable Less than Another -- Why is this approach Wrong? It seems correct to me, because what you write is actually $\int F_{X_2}(t)f_{X_1}(t)dt$ and can be interpreted as swiping $t$ values such that when $X_1$ is equal to $t$, we consider all $X_2$ smalle
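A Monte Carlo check of the closed form $\mu/(\lambda+\mu)$ derived in the exponential example above; the rate values below are arbitrary.
# Hedged sketch: P(X2 <= X1) for X1 ~ Exp(lambda), X2 ~ Exp(mu).
set.seed(123)
lambda <- 2; mu <- 3
x1 <- rexp(1e6, rate = lambda)
x2 <- rexp(1e6, rate = mu)
c(simulated = mean(x2 <= x1), closed_form = mu / (lambda + mu))  # both around 0.6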
55,686
Dividing the MAE by the average of the values
Shameless piece of self-promotion: Kolassa & Schütz (2007, Foresight) call this quantity the "MAD/Mean" or "weighted MAPE" (because it is) and discuss it. As to drawbacks, the wMAPE, as a scaled MAD, will reward biased forecasts if your future distribution is asymmetrical, just like the "plain" MAD (Kolassa, 2020, IJF). What are the shortcomings of the Mean Absolute Percentage Error (MAPE)? is related. (Yes, I do have a thing for forecast accuracy measures.) Feel free to ping me for the papers on ResearchGate.
Dividing the MAE by the average of the values
Shameless piece of self-promotion: Kolassa & Schütz (2007, Foresight) call this quantity the "MAD/Mean" or "weighted MAPE" (because it is) and discuss it. As to drawbacks, the wMAPE, as a scaled MAD,
Dividing the MAE by the average of the values Shameless piece of self-promotion: Kolassa & Schütz (2007, Foresight) call this quantity the "MAD/Mean" or "weighted MAPE" (because it is) and discuss it. As to drawbacks, the wMAPE, as a scaled MAD, will reward biased forecasts if your future distribution is asymmetrical, just like the "plain" MAD (Kolassa, 2020, IJF). What are the shortcomings of the Mean Absolute Percentage Error (MAPE)? is related. (Yes, I do have a thing for forecast accuracy measures.) Feel free to ping me for the papers on ResearchGate.
Dividing the MAE by the average of the values Shameless piece of self-promotion: Kolassa & Schütz (2007, Foresight) call this quantity the "MAD/Mean" or "weighted MAPE" (because it is) and discuss it. As to drawbacks, the wMAPE, as a scaled MAD,
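A small R helper for the MAD/Mean ("weighted MAPE") quantity discussed above; the function name and the example numbers are my own illustration, not taken from the cited papers.
# Hedged sketch: MAD/Mean, i.e. mean absolute error scaled by the mean actual.
wmape <- function(actual, forecast) {
  sum(abs(actual - forecast)) / sum(actual)   # equals mean(|error|) / mean(actual)
}
actual   <- c(10, 0, 25, 5)    # note: zero actuals are fine here, unlike for the MAPE
forecast <- c( 8, 2, 20, 7)
wmape(actual, forecast)        # 11 / 40 = 0.275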
55,687
ROC-style curves for calculating sample size, power, alpha, and effect size
A classic way to proceed is to determine the difference that you wish to be able to detect at a given combination of Type I error and Type II error, and then design a study with a large enough sample size to meet your requirements, given the variability you expect in your measurements. All of the "playing" you are doing is legitimate in some sense. In particular, "playing games" with sample size is the classic application of power calculations for study design. You should nevertheless be aware of the following. First, the standard frequentist requirement for "statistical significance" is p < 0.05, so if you plan on publishing the results of your study it would be wise to stick with $\alpha = 0.05$. Second, the power represents your willingness to miss a truly "significant" (in the sense noted above) result. If you are willing to risk a 21% or 23% chance of such a miss then you can go ahead with 79% or 77% power. A funding agency or a supervisor, however, might not be willing to go so low. 80% power is commonly used; you might want to go even higher. Third, what is most important (if perhaps most difficult) is having a good handle on the difference you would like to detect and the variability in the measurements. If you have made an unreasonable choice for the difference you're trying to detect, then no playing with power calculations will help. If you don't have a reliable estimate of measurement variability, the power calculations will be correspondingly unreliable. Focus on those matters.
ROC-style curves for calculating sample size, power, alpha, and effect size
A classic way to proceed is to determine the difference that you wish to be able to detect at a given combination of Type I error and Type II error, and then design a study with a large enough sample
ROC-style curves for calculating sample size, power, alpha, and effect size A classic way to proceed is to determine the difference that you wish to be able to detect at a given combination of Type I error and Type II error, and then design a study with a large enough sample size to meet your requirements, given the variability you expect in your measurements. All of the "playing" you are doing is legitimate in some sense. In particular, "playing games" with sample size is the classic application of power calculations for study design. You should nevertheless be aware of the following. First, the standard frequentist requirement for "statistical significance" is p < 0.05, so if you plan on publishing the results of your study it would be wise to stick with $\alpha = 0.05$. Second, the power represents your willingness to miss a truly "significant" (in the sense noted above) result. If you are willing to risk a 21% or 23% chance of such a miss then you can go ahead with 79% or 77% power. A funding agency or a supervisor, however, might not be willing to go so low. 80% power is commonly used; you might want to go even higher. Third, what is most important (if perhaps most difficult) is having a good handle on the difference you would like to detect and the variability in the measurements. If you have made an unreasonable choice for the difference you're trying to detect, then no playing with power calculations will help. If you don't have a reliable estimate of measurement variability, the power calculations will be correspondingly unreliable. Focus on those matters.
ROC-style curves for calculating sample size, power, alpha, and effect size A classic way to proceed is to determine the difference that you wish to be able to detect at a given combination of Type I error and Type II error, and then design a study with a large enough sample
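A sketch of the kind of "playing" with power, alpha, and sample size described above, using base R's power.t.test; the effect size, standard deviation, and alpha values are made-up inputs you would replace with your own.
# Hedged sketch: classic power calculations for a two-sample t-test.
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80)  # solves for n
power.t.test(n = 50, delta = 0.5, sd = 1, sig.level = 0.05)        # solves for power
# required n per group as alpha varies, holding power at 80%
sapply(c(0.01, 0.05, 0.10),
       function(a) ceiling(power.t.test(delta = 0.5, sd = 1,
                                         sig.level = a, power = 0.80)$n))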
55,688
Do ordinal variables require one hot encoding?
The proper treatment of ordinal independent data in regression is tricky. The two most common approaches are: Treat it as continuous (but this ignores the fact that the differences in levels may not be similar). Treat it as categorical (but this ignores the ordered nature of the variable). The first method would not require one-hot encoding. The second would. Some new methods have been developed. One that I have sometimes found useful is optimal scaling.
Do ordinal variables require one hot encoding?
The proper treatment of ordinal independent data in regression is tricky. The two most common approaches are: Treat it as continuous (but this ignores the fact that the differences in levels may not
Do ordinal variables require one hot encoding? The proper treatment of ordinal independent data in regression is tricky. The two most common approaches are: Treat it as continuous (but this ignores the fact that the differences in levels may not be similar). Treat it as categorical (but this ignores the ordered nature of the variable). The first method would not require one-hot encoding. The second would. Some new methods have been developed. One that I have sometimes found useful is optimal scaling.
Do ordinal variables require one hot encoding? The proper treatment of ordinal independent data in regression is tricky. The two most common approaches are: Treat it as continuous (but this ignores the fact that the differences in levels may not
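A small R illustration of the two common treatments named above (continuous vs. categorical/one-hot); the data are simulated and purely illustrative, and optimal scaling is not shown.
# Hedged sketch: ordinal predictor entered as numeric vs. as a factor.
set.seed(5)
grade <- sample(1:4, 200, replace = TRUE)          # ordinal levels 1 < 2 < 3 < 4
y     <- 2 + 0.8 * grade + rnorm(200)
fit_continuous  <- lm(y ~ grade)                   # assumes equal spacing of levels
fit_categorical <- lm(y ~ factor(grade))           # dummy / one-hot coding, ignores order
coef(fit_continuous); coef(fit_categorical)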
55,689
Calculate the intercept from lm
All the coefficient estimators in the model (included the intercept estimator) are computed using the standard ordinary least squares (OLS) estimator used in linear regression. Before replicating the calculation manually, we can produce the coefficients from the lm function. #Input the data and model DATA <- data.frame(f1, f2, f3, r); MODEL <- lm(r ~ ., data = DATA); #Extract the coefficient estimates summary(MODEL)$coefficients Estimate Std. Error t value Pr(>|t|) (Intercept) 71.3172222 1.197402 59.5599462 0.0002817785 f12 -9.6166667 1.108579 -8.6747673 0.0130296029 f13 -10.8296667 1.108579 -9.7689607 0.0103167234 f22 -0.6963333 1.108579 -0.6281313 0.5940817326 f23 3.4840000 1.108579 3.1427615 0.0880753355 f32 13.2413333 1.108579 11.9444179 0.0069363819 f33 5.3413333 1.108579 4.8181793 0.0404784180 We can replicate these estimated coefficients manually using the OLS formula. Letting $\mathbf{x}$ be the design matrix and $\mathbf{r}$ the response vector, the estimated coefficient vector is $\hat{\boldsymbol{\beta}} = (\mathbf{x}^\text{T} \mathbf{x})^{-1} (\mathbf{x}^\text{T} \mathbf{r})$. This can be programmed in R using matrix operations. We see that this replicates the same numbers calculated in the lm function. #Compute the coefficient estimator manually DESIGN <- model.matrix(r ~ ., data = DATA); COEF <- solve(t(DESIGN) %*% DESIGN, t(DESIGN) %*% r); colnames(COEF) <- "Estimate"; #Extract the coefficient estimates COEF; Estimate (Intercept) 71.3172222 f12 -9.6166667 f13 -10.8296667 f22 -0.6963333 f23 3.4840000 f32 13.2413333 f33 5.3413333
Calculate the intercept from lm
All the coefficient estimators in the model (included the intercept estimator) are computed using the standard ordinary least squares (OLS) estimator used in linear regression. Before replicating the
Calculate the intercept from lm All the coefficient estimators in the model (included the intercept estimator) are computed using the standard ordinary least squares (OLS) estimator used in linear regression. Before replicating the calculation manually, we can produce the coefficients from the lm function. #Input the data and model DATA <- data.frame(f1, f2, f3, r); MODEL <- lm(r ~ ., data = DATA); #Extract the coefficient estimates summary(MODEL)$coefficients Estimate Std. Error t value Pr(>|t|) (Intercept) 71.3172222 1.197402 59.5599462 0.0002817785 f12 -9.6166667 1.108579 -8.6747673 0.0130296029 f13 -10.8296667 1.108579 -9.7689607 0.0103167234 f22 -0.6963333 1.108579 -0.6281313 0.5940817326 f23 3.4840000 1.108579 3.1427615 0.0880753355 f32 13.2413333 1.108579 11.9444179 0.0069363819 f33 5.3413333 1.108579 4.8181793 0.0404784180 We can replicate these estimated coefficients manually using the OLS formula. Letting $\mathbf{x}$ be the design matrix and $\mathbf{r}$ the response vector, the estimated coefficient vector is $\hat{\boldsymbol{\beta}} = (\mathbf{x}^\text{T} \mathbf{x})^{-1} (\mathbf{x}^\text{T} \mathbf{r})$. This can be programmed in R using matrix operations. We see that this replicates the same numbers calculated in the lm function. #Compute the coefficient estimator manually DESIGN <- model.matrix(r ~ ., data = DATA); COEF <- solve(t(DESIGN) %*% DESIGN, t(DESIGN) %*% r); colnames(COEF) <- "Estimate"; #Extract the coefficient estimates COEF; Estimate (Intercept) 71.3172222 f12 -9.6166667 f13 -10.8296667 f22 -0.6963333 f23 3.4840000 f32 13.2413333 f33 5.3413333
Calculate the intercept from lm All the coefficient estimators in the model (included the intercept estimator) are computed using the standard ordinary least squares (OLS) estimator used in linear regression. Before replicating the
55,690
Calculate the intercept from lm
The intercept is the baseline value excluding explanatory variables. Because your explanatory variables are all categorical, in practice your regression will simply calculate means per group. In fact, your data supposes 3 times 3 times 3 = 27 unique groups. R will generally choose one group as the baseline and then give the added value of the other groups. In your case, the output is: (Intercept) 71.317 1.197 59.56 0.00028 *** f12 -9.617 1.109 -8.67 0.01303 * f13 -10.830 1.109 -9.77 0.01032 * f22 -0.696 1.109 -0.63 0.59408 f23 3.484 1.109 3.14 0.08808 . f32 13.241 1.109 11.94 0.00694 ** f33 5.341 1.109 4.82 0.04048 * The intercept in this case is the value of the group (1,1,1) - which is not actually in your data, but this is the expected value of this group if it were. The coefficients should be read as follows: for coefficient fxy, x indicates for which variable the coefficient is given and y indicates what value the variable gets (1, 2 or 3 in your case). Since the baseline group is (1,1,1), a value for y of 1 never occurs, it is already the base value. Small example: f12 shows the expected value of changing the first variable to group 2, meaning a group characterised by (2,1,1), which is in your data (value = 60.8, estimate from model is 71.3-9.6 = 61.7). To calculate the expectation of group (2,2,1), add both the f12 coefficient and the f22 coefficient (71.3 - 9.6 - 0.7 = 61). Finally, the expected value for group (2,2,2), which is in your data with value 74.9, would be Intercept + f12 + f22 + f32 = 71.3-9.6-0.7+13.2 = 74.2 Thus, to summarise, the intercept is that group or those observations where all regressors have a value of zero, in your case meaning the reference group. EDIT: to calculate the expected value of (3,3,3), you add f13, f23 and f33 to the intercept. I just realised my examples above may not make clear that you do not add both f12 and f13 to the intercept to move the first factor variable to level 3.
Calculate the intercept from lm
The intercept is the baseline value excluding explanatory variables. Because your explanatory variables are all categorical, in practice your regression will simply calculate means per group. In fact,
Calculate the intercept from lm The intercept is the baseline value excluding explanatory variables. Because your explanatory variables are all categorical, in practice your regression will simply calculate means per group. In fact, your data supposes 3 times 3 times 3 = 27 unique groups. R will generally choose one group as the baseline and then give the added value of the other groups. In your case, the output is: (Intercept) 71.317 1.197 59.56 0.00028 *** f12 -9.617 1.109 -8.67 0.01303 * f13 -10.830 1.109 -9.77 0.01032 * f22 -0.696 1.109 -0.63 0.59408 f23 3.484 1.109 3.14 0.08808 . f32 13.241 1.109 11.94 0.00694 ** f33 5.341 1.109 4.82 0.04048 * The intercept in this case is the value of the group (1,1,1) - which is not actually in your data, but this is the expected value of this group if it were. The coefficients should be read as follows: for coefficient fxy, x indicates for which variable the coefficient is given and y indicates what value the variable gets (1, 2 or 3 in your case). Since the baseline group is (1,1,1), a value for y of 1 never occurs, it is already the base value. Small example: f12 shows the expected value of changing the first variable to group 2, meaning a group characterised by (2,1,1), which is in your data (value = 60.8, estimate from model is 71.3-9.6 = 61.7). To calculate the expectation of group (2,2,1), add both the f12 coefficient and the f22 coefficient (71.3 - 9.6 - 0.7 = 61). Finally, the expected value for group (2,2,2), which is in your data with value 74.9, would be Intercept + f12 + f22 + f32 = 71.3-9.6-0.7+13.2 = 74.2 Thus, to summarise, the intercept is that group or those observations where all regressors have a value of zero, in your case meaning the reference group. EDIT: to calculate the expected value of (3,3,3), you add f13, f23 and f33 to the intercept. I just realised my examples above may not make clear that you do not add both f12 and f13 to the intercept to move the first factor variable to level 3.
Calculate the intercept from lm The intercept is the baseline value excluding explanatory variables. Because your explanatory variables are all categorical, in practice your regression will simply calculate means per group. In fact,
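The hand arithmetic above can also be reproduced with predict(); this sketch assumes the fitted lm object (called MODEL in the previous answer) is available and that f1-f3 are factors with levels 1-3, which is an assumption about how the data were coded.
# Hedged sketch: let R add up the coefficients for each group instead of doing it by hand.
new_groups <- data.frame(f1 = factor(c(1, 2, 2, 2), levels = 1:3),
                         f2 = factor(c(1, 1, 2, 2), levels = 1:3),
                         f3 = factor(c(1, 1, 1, 2), levels = 1:3))
cbind(new_groups, fitted = predict(MODEL, newdata = new_groups))
# rows correspond to the groups (1,1,1), (2,1,1), (2,2,1) and (2,2,2) discussed above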
55,691
Is there a Continuous Conditional Variational Autoencoder?
Yes. CVAEs, as introduced in Sohn, et al (2015), make no assumptions on the conditioning variable. Letting $\mathbf{x}$ denote the conditioning/input variable, $\mathbf{y}$ the output variable, and $\mathbf{z}$ the latent variable, a CVAE consists of three components: the prior $p_\theta(\mathbf{z} \mid \mathbf{x})$, which generates the latent variable $\mathbf{z}$ using only the input $\mathbf{x}$ and the parameters $\theta$, the encoder (the estimated posterior) $q_\theta(\mathbf{z} \mid \mathbf{x}, \mathbf{y})$, which generates $\mathbf{z}$ using the input, parameters, and output $\mathbf{y}$, and the decoder $p_\theta(\mathbf{y} \mid \mathbf{x}, \mathbf{z})$, which generates the output variable $\mathbf{y}$ using the input and latent variables and the parameters. Finding the model parameters $\theta$ amounts to maximizing the evidence lower bound (ELBO): $$ \operatorname{ELBO}(\theta) = \mathbb{E}_{\mathbf{z} \sim q_\theta(\mathbf{z} \mid \mathbf{x}, \mathbf{y})} \left[\log p_\theta(\mathbf{y} \mid \mathbf{x}, \mathbf{z})\right] - \operatorname{KL}\left(q_\theta(\mathbf{z} \mid \mathbf{x}, \mathbf{y}) \,\middle\|\, p_\theta(\mathbf{z} \mid \mathbf{x})\right). $$ As you can see, nothing so far depends on the input variable $\mathbf{x}$ being discrete. In fact, Dupont (2018) proposes a slightly differently formulated model and gives explicit examples with continuous conditioning variables.
Is there a Continuous Conditional Variational Autoencoder?
Yes. CVAEs, as introduced in Sohn, et al (2015), make no assumptions on the conditioning variable. Letting $\mathbf{x}$ denote the conditioning/input variable, $\mathbf{y}$ the output variable, and $\
Is there a Continuous Conditional Variational Autoencoder? Yes. CVAEs, as introduced in Sohn, et al (2015), make no assumptions on the conditioning variable. Letting $\mathbf{x}$ denote the conditioning/input variable, $\mathbf{y}$ the output variable, and $\mathbf{z}$ the latent variable, a CVAE consists of three components: the prior $p_\theta(\mathbf{z} \mid \mathbf{x})$, which generates the latent variable $\mathbf{z}$ using only the input $\mathbf{x}$ and the parameters $\theta$, the encoder (the estimated posterior) $q_\theta(\mathbf{z} \mid \mathbf{x}, \mathbf{y})$, which generates $\mathbf{z}$ using the input, parameters, and output $\mathbf{y}$, and the decoder $p_\theta(\mathbf{y} \mid \mathbf{x}, \mathbf{z})$, which generates the output variable $\mathbf{y}$ using the input and latent variables and the parameters. Finding the model parameters $\theta$ amounts to maximizing the evidence lower bound (ELBO): $$ \operatorname{ELBO}(\theta) = \mathbb{E}_{\mathbf{z} \sim q_\theta(\mathbf{z} \mid \mathbf{x}, \mathbf{y})} \left[\log p_\theta(\mathbf{y} \mid \mathbf{x}, \mathbf{z})\right] - \operatorname{KL}\left(q_\theta(\mathbf{z} \mid \mathbf{x}, \mathbf{y}) \,\middle\|\, p_\theta(\mathbf{z} \mid \mathbf{x})\right). $$ As you can see, nothing so far depends on the input variable $\mathbf{x}$ being discrete. In fact, Dupont (2018) proposes a slightly differently formulated model and gives explicit examples with continuous conditioning variables.
Is there a Continuous Conditional Variational Autoencoder? Yes. CVAEs, as introduced in Sohn, et al (2015), make no assumptions on the conditioning variable. Letting $\mathbf{x}$ denote the conditioning/input variable, $\mathbf{y}$ the output variable, and $\
55,692
Check if data (N datapoints) originate from known distribution
Maybe Kolmogorov-Smirnov test with correction for small samples provided by Jan Vrbik in: Vrbik, Jan (2018). "Small-Sample Corrections to Kolmogorov–Smirnov Test Statistic". Pioneer Journal of Theoretical and Applied Statistics. 15 (1–2): 15–23. Correction itself is also described on Wikipedia site for Kolmogorov-Smirnov test: replace $D_N$ with $$ D_N+{\frac {1}{6{\sqrt {N}}}}+{\frac {D_N-1}{4N}}$$ where $D_N$ is standard Kolmogorov-Smirnov statistic.
Check if data (N datapoints) originate from known distribution
Maybe Kolmogorov-Smirnov test with correction for small samples provided by Jan Vrbik in: Vrbik, Jan (2018). "Small-Sample Corrections to Kolmogorov–Smirnov Test Statistic". Pioneer Journal of Theoret
Check if data (N datapoints) originate from known distribution Maybe Kolmogorov-Smirnov test with correction for small samples provided by Jan Vrbik in: Vrbik, Jan (2018). "Small-Sample Corrections to Kolmogorov–Smirnov Test Statistic". Pioneer Journal of Theoretical and Applied Statistics. 15 (1–2): 15–23. Correction itself is also described on Wikipedia site for Kolmogorov-Smirnov test: replace $D_N$ with $$ D_N+{\frac {1}{6{\sqrt {N}}}}+{\frac {D_N-1}{4N}}$$ where $D_N$ is standard Kolmogorov-Smirnov statistic.
Check if data (N datapoints) originate from known distribution Maybe Kolmogorov-Smirnov test with correction for small samples provided by Jan Vrbik in: Vrbik, Jan (2018). "Small-Sample Corrections to Kolmogorov–Smirnov Test Statistic". Pioneer Journal of Theoret
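A small R wrapper applying the small-sample adjustment quoted above to the D statistic returned by ks.test; the formula is the one given in the answer, while the wrapper name and example data are my own illustration.
# Hedged sketch: Vrbik's corrected Kolmogorov-Smirnov statistic for small N.
ks_corrected <- function(x, cdf, ...) {
  N <- length(x)
  D <- unname(ks.test(x, cdf, ...)$statistic)
  D + 1 / (6 * sqrt(N)) + (D - 1) / (4 * N)   # corrected statistic from the answer
}
set.seed(1)
x <- rnorm(12)                  # small sample, N = 12
ks_corrected(x, pnorm, 0, 1)    # compare against the usual, uncorrected D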
55,693
Check if data (N datapoints) originate from known distribution
Use R to generate 10 observations from a standard uniform distribution: set seed(722) # for reproducibility x = runif(10) summary(x); sd(x) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.1270 0.4940 0.7454 0.6627 0.9070 0.9477 [1] 0.293335 # SD Use the Kolmogorov-Smirnov test to see if the sample is consistent with standard uniform. Appropriately, the answer is Yes because data were sampled from a standard uniform distribution: Large P-value, no rejection. ks.test(x, punif) One-sample Kolmogorov-Smirnov test data: x D = 0.31507, p-value = 0.2217 alternative hypothesis: two-sided Is the sample also consistent with $\mathsf{Norm}(.5, \sqrt{1/12})?$ The mean and variance match, but shapes differ. Notice that the parameters mean and standard deviation are specified. Again consistent, but we know the normal distribution is not correct. ks.test(x, pnorm, .5, sqrt(1/12)) One-sample Kolmogorov-Smirnov test data: x D = 0.36246, p-value = 0.1104 alternative hypothesis: two-sided However, the K-S test easily rejects that this sample is from $\mathsf{Exp}(rate=2),$ which has mean $1/2 = 0.5,$ but the wrong SD. This exponential distribution has almost 14% of its probability above $1,$ but our sample has no observation above 0.948. ks.test(x, dexp, 2) One-sample Kolmogorov-Smirnov test data: x D = 1.5513, p-value < 2.2e-16 alternative hypothesis: two-sided Notes: (1) See other pages on this site and the Internet, including the relevant Wikipedia page, which has a brief explanation of the test and some remarks about cases in which parameters must be estimated from data. (2) Several well-known statistical software programs have procedures that check a sample against a list of often used distributions to estimate parameters and see if any distribution is a fit. Often these are called 'distribution ID' procedures and sometimes they are restricted to non-negative data. For example, when the distribution ID procedure in Minitab is asked to compare the small sample above to normal, lognormal, Weibull, and gamma families, here are the parameter estimates: ML Estimates of Distribution Parameters Distribution Location Shape Scale Normal* 0.66265 0.29334 Lognormal* -0.55937 0.66158 Weibull 2.62094 0.74268 Gamma 3.53947 0.18722 * Scale: Adjusted ML estimate And here are appropriate probability plots with P-values of Anderson-Darling goodness-of-fit tests in legends. The data are clearly inconsistent with distributions in the lognormal family. (2) For very large sample sizes, Kolmogorov-Smirnov, Anderson-Darling and other goodness-of-fit tests can reject some distributions as not fitting---even when the fit might be good enough for some practical applications.
Check if data (N datapoints) originate from known distribution
Use R to generate 10 observations from a standard uniform distribution: set seed(722) # for reproducibility x = runif(10) summary(x); sd(x) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.1270 0
Check if data (N datapoints) originate from known distribution Use R to generate 10 observations from a standard uniform distribution: set seed(722) # for reproducibility x = runif(10) summary(x); sd(x) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.1270 0.4940 0.7454 0.6627 0.9070 0.9477 [1] 0.293335 # SD Use the Kolmogorov-Smirnov test to see if the sample is consistent with standard uniform. Appropriately, the answer is Yes because data were sampled from a standard uniform distribution: Large P-value, no rejection. ks.test(x, punif) One-sample Kolmogorov-Smirnov test data: x D = 0.31507, p-value = 0.2217 alternative hypothesis: two-sided Is the sample also consistent with $\mathsf{Norm}(.5, \sqrt{1/12})?$ The mean and variance match, but shapes differ. Notice that the parameters mean and standard deviation are specified. Again consistent, but we know the normal distribution is not correct. ks.test(x, pnorm, .5, sqrt(1/12)) One-sample Kolmogorov-Smirnov test data: x D = 0.36246, p-value = 0.1104 alternative hypothesis: two-sided However, the K-S test easily rejects that this sample is from $\mathsf{Exp}(rate=2),$ which has mean $1/2 = 0.5,$ but the wrong SD. This exponential distribution has almost 14% of its probability above $1,$ but our sample has no observation above 0.948. ks.test(x, dexp, 2) One-sample Kolmogorov-Smirnov test data: x D = 1.5513, p-value < 2.2e-16 alternative hypothesis: two-sided Notes: (1) See other pages on this site and the Internet, including the relevant Wikipedia page, which has a brief explanation of the test and some remarks about cases in which parameters must be estimated from data. (2) Several well-known statistical software programs have procedures that check a sample against a list of often used distributions to estimate parameters and see if any distribution is a fit. Often these are called 'distribution ID' procedures and sometimes they are restricted to non-negative data. For example, when the distribution ID procedure in Minitab is asked to compare the small sample above to normal, lognormal, Weibull, and gamma families, here are the parameter estimates: ML Estimates of Distribution Parameters Distribution Location Shape Scale Normal* 0.66265 0.29334 Lognormal* -0.55937 0.66158 Weibull 2.62094 0.74268 Gamma 3.53947 0.18722 * Scale: Adjusted ML estimate And here are appropriate probability plots with P-values of Anderson-Darling goodness-of-fit tests in legends. The data are clearly inconsistent with distributions in the lognormal family. (2) For very large sample sizes, Kolmogorov-Smirnov, Anderson-Darling and other goodness-of-fit tests can reject some distributions as not fitting---even when the fit might be good enough for some practical applications.
Check if data (N datapoints) originate from known distribution Use R to generate 10 observations from a standard uniform distribution: set seed(722) # for reproducibility x = runif(10) summary(x); sd(x) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.1270 0
55,694
How to interpret bayesian posterior distributions
What can we say about the two versions? You are directly modelling the probability distributions for the two conversion rates. You can use the posterior to answer questions about the two conversion rates. Such questions might be... What is the probability the new version has a larger conversion rate? What is the probability the new version has a smaller conversion rate? etc. You can answer whatever question your posterior allows you to answer. Do we "reject" one curve? No. Like I said, you are modelling the posterior distribution. You can use the posterior to answer any hypotheses you've made.
How to interpret bayesian posterior distributions
What can we say about the two versions? You are directly modelling the probability distributions for the two conversion rates. You can use the posterior to answer questions about the two conversion
How to interpret bayesian posterior distributions What can we say about the two versions? You are directly modelling the probability distributions for the two conversion rates. You can use the posterior to answer questions about the two conversion rates. Such questions might be... What is the probability the new version has a larger conversion rate? What is the probability the new version has a smaller conversion rate? etc. You can answer whatever question your posterior allows you to answer. Do we "reject" one curve? No. Like I said, you are modelling the posterior distribution. You can use the posterior to answer any hypotheses you've made.
How to interpret bayesian posterior distributions What can we say about the two versions? You are directly modelling the probability distributions for the two conversion rates. You can use the posterior to answer questions about the two conversion
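A sketch of how to answer "what is the probability the new version has a larger conversion rate?" directly from posterior draws; the Beta posteriors (from an assumed Beta(1,1) prior) and the conversion counts below are illustrative assumptions.
# Hedged sketch: compare two conversion-rate posteriors by sampling.
set.seed(99)
draws_old <- rbeta(1e5, 1 + 120, 1 + 1000 - 120)   # 120 conversions out of 1000 visits
draws_new <- rbeta(1e5, 1 + 140, 1 + 1000 - 140)   # 140 conversions out of 1000 visits
mean(draws_new > draws_old)                        # P(new rate > old rate | data)
quantile(draws_new - draws_old, c(0.025, 0.975))   # credible interval for the lift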
55,695
Are there examples of covariance functions used in Gaussian processes with negative non-diagonal elements?
Actually your linked source gives already an example where the kernel matrix can have negative entries, the linear kernel: $$ k(x,y) = a + (x - c)(y - c).$$ Other examples are given by dot product kernels such as $$ k(x,y)= <x,y>^n.$$ Your impression was probably formed because many kernels used in practice are radial basis functions $$ k(x,y) = g(\|x - y\|)$$ based on euclidean distance. For those all entries must be positive indeed, because $g$ has to be completely monotonic, which in particular requires $g$ to be positive. (See Theorem 7.13 of Wendland)
Are there examples of covariance functions used in Gaussian processes with negative non-diagonal ele
Actually your linked source gives already an example where the kernel matrix can have negative entries, the linear kernel: $$ k(x,y) = a + (x - c)(y - c).$$ Other examples are given by dot product ker
Are there examples of covariance functions used in Gaussian processes with negative non-diagonal elements? Actually your linked source gives already an example where the kernel matrix can have negative entries, the linear kernel: $$ k(x,y) = a + (x - c)(y - c).$$ Other examples are given by dot product kernels such as $$ k(x,y)= <x,y>^n.$$ Your impression was probably formed because many kernels used in practice are radial basis functions $$ k(x,y) = g(\|x - y\|)$$ based on euclidean distance. For those all entries must be positive indeed, because $g$ has to be completely monotonic, which in particular requires $g$ to be positive. (See Theorem 7.13 of Wendland)
Are there examples of covariance functions used in Gaussian processes with negative non-diagonal ele Actually your linked source gives already an example where the kernel matrix can have negative entries, the linear kernel: $$ k(x,y) = a + (x - c)(y - c).$$ Other examples are given by dot product ker
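A short R demonstration of the claim above: the linear kernel can produce negative off-diagonal entries while the Gram matrix remains a valid (positive semi-definite) covariance; the constants a, c and the input points are arbitrary choices.
# Hedged sketch: Gram matrix of k(x, y) = a + (x - c)(y - c) on a few points.
x  <- c(-2, -1, 0, 1, 2)
a  <- 0; cc <- 0
K  <- a + outer(x - cc, x - cc)    # linear-kernel Gram matrix
K                                  # note the negative off-diagonal entries
eigen(K, symmetric = TRUE)$values  # all >= 0 (up to rounding), so K is a valid covariance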
55,696
Why is a Gelman-Rubin diagnostic of < 1.1 considered acceptable?
Are you wondering how GR works, or why 1.1 seems to be the accepted cut-off. If the latter, you're not alone: arXiv paper questioning 1.1 cutoff argues that 1.1 is too high. They also propose a revised version of GR that is improved and can even evaluate a single chain. The Stan folks are also working on a revised version of Stan's Rhat, which I believe is GR. So if you're questioning 1.1, your instincts seem to be good. If you're questioning GR, the proposed revisions may also support your instincts.
Why is a Gelman-Rubin diagnostic of < 1.1 considered acceptable?
Are you wondering how GR works, or why 1.1 seems to be the accepted cut-off. If the latter, you're not alone: arXiv paper questioning 1.1 cutoff argues that 1.1 is too high. They also propose a revise
Why is a Gelman-Rubin diagnostic of < 1.1 considered acceptable? Are you wondering how GR works, or why 1.1 seems to be the accepted cut-off. If the latter, you're not alone: arXiv paper questioning 1.1 cutoff argues that 1.1 is too high. They also propose a revised version of GR that is improved and can even evaluate a single chain. The Stan folks are also working on a revised version of Stan's Rhat, which I believe is GR. So if you're questioning 1.1, your instincts seem to be good. If you're questioning GR, the proposed revisions may also support your instincts.
Why is a Gelman-Rubin diagnostic of < 1.1 considered acceptable? Are you wondering how GR works, or why 1.1 seems to be the accepted cut-off. If the latter, you're not alone: arXiv paper questioning 1.1 cutoff argues that 1.1 is too high. They also propose a revise
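For reference, a minimal sketch of computing the Gelman-Rubin diagnostic discussed above with the coda package; the "chains" here are fake draws for illustration only, and in practice they would come from your MCMC sampler.
# Hedged sketch: potential scale reduction factor from two chains.
library(coda)
set.seed(3)
chain1 <- mcmc(rnorm(2000, mean = 0))
chain2 <- mcmc(rnorm(2000, mean = 0.05))   # slight offset to mimic imperfect mixing
gelman.diag(mcmc.list(chain1, chain2))     # point estimate and upper CI for R-hat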
55,697
Why is $\frac{\sum^n_{i=1}(X_i-\bar{X})^2}{\sigma^2}$chi-square distributed with $n-1$ degrees of freedom? [duplicate]
Consider samples of size $n= 5$ from a standard normal distribution. then $Q =(n-1)S^2 \sim \mathsf{Chisq}(\nu=4),$ not $\mathsf{Chisq}(\nu=5).$ set.seed(706); m = 10^6; n = 5 q = replicate( m, (n-1)*var(rnorm(n)) ) mean(q) [1] 4.002257 # aprx E(Q) = 4 hdr = "Simulated Dist'n of Q fits CHISQ(4)[blue], not CHISQ(5) [red]" hist(q, prob=T, br=50, col="skyblue2", main=hdr) curve(dchisq(x, 4), add=T, lwd=2, col="blue") curve(dchisq(x, 5), add=T, lwd=3, col="red", lty="dotted") Note: If you consider $n = 2,$ then it is easy to verify that $\bar X$ is a function of $X_1 + X_2$ and independently, $S^2$ is a function of $X_1 - X_2$ so that $S^2 \sim \mathsf{Chisq}(1).$ One proof for $n > 2$ shows that $\bar X$ is a function of a vector in one dimention, and that (orthogonally) $S^2$ is a function of vectors in $n-1$ dimensions.
Why is $\frac{\sum^n_{i=1}(X_i-\bar{X})^2}{\sigma^2}$chi-square distributed with $n-1$ degrees of fr
Consider samples of size $n= 5$ from a standard normal distribution. then $Q =(n-1)S^2 \sim \mathsf{Chisq}(\nu=4),$ not $\mathsf{Chisq}(\nu=5).$ set.seed(706); m = 10^6; n = 5 q = replicate( m, (n-1
Why is $\frac{\sum^n_{i=1}(X_i-\bar{X})^2}{\sigma^2}$chi-square distributed with $n-1$ degrees of freedom? [duplicate] Consider samples of size $n= 5$ from a standard normal distribution. then $Q =(n-1)S^2 \sim \mathsf{Chisq}(\nu=4),$ not $\mathsf{Chisq}(\nu=5).$ set.seed(706); m = 10^6; n = 5 q = replicate( m, (n-1)*var(rnorm(n)) ) mean(q) [1] 4.002257 # aprx E(Q) = 4 hdr = "Simulated Dist'n of Q fits CHISQ(4)[blue], not CHISQ(5) [red]" hist(q, prob=T, br=50, col="skyblue2", main=hdr) curve(dchisq(x, 4), add=T, lwd=2, col="blue") curve(dchisq(x, 5), add=T, lwd=3, col="red", lty="dotted") Note: If you consider $n = 2,$ then it is easy to verify that $\bar X$ is a function of $X_1 + X_2$ and independently, $S^2$ is a function of $X_1 - X_2$ so that $S^2 \sim \mathsf{Chisq}(1).$ One proof for $n > 2$ shows that $\bar X$ is a function of a vector in one dimention, and that (orthogonally) $S^2$ is a function of vectors in $n-1$ dimensions.
Why is $\frac{\sum^n_{i=1}(X_i-\bar{X})^2}{\sigma^2}$chi-square distributed with $n-1$ degrees of fr Consider samples of size $n= 5$ from a standard normal distribution. then $Q =(n-1)S^2 \sim \mathsf{Chisq}(\nu=4),$ not $\mathsf{Chisq}(\nu=5).$ set.seed(706); m = 10^6; n = 5 q = replicate( m, (n-1
55,698
Clear explanation of dummy variable trap [duplicate]
Let's say you have a binary variable, like sex. You create two dummy variables to reflect that in your model. Let's say you have six individuals $(M,F,F,M,M,F)$. Your dummy variables look like: $X_1=(0,1,1,0,0,1)$ $X_2=(1,0,0,1,1,0)$ But now $X_{i1}+X_{i2} = 1$ for every possible $i$ so you have a case of perfect multicollinearity. The model will not distinguish between an effect caused by a high $X_1$ or a low $X_2$ and vice-versa. The way to avoid this trap is to get rid of one of those variables, but this implies taking one of the groups as a "reference", which is kind of an arbitrary choice. More importantly, when considering multiple factors simultaneously, it may be the case that some of the dummy variables reach perfect multicollinearity due to the way your individuals are distributed among the groups. Imagine, for example, you also have data like "taller than 170 cm/shorter than 170 cm" and you get $(T,S,S,T,T,S)$ (which is not rare to expect), and you will be facing a similar problem to the one we had when considering $X_1$ and $X_2$.
Clear explanation of dummy variable trap [duplicate]
Let's say you have a binary variable, like sex. You create two dummy variables to reflect that in your model. Let's say you have six individuals $(M,F,F,M,M,F)$. Your dummy variables look like: $X_1=
Clear explanation of dummy variable trap [duplicate] Let's say you have a binary variable, like sex. You create two dummy variables to reflect that in your model. Let's say you have six individuals $(M,F,F,M,M,F)$. Your dummy variables look like: $X_1=(0,1,1,0,0,1)$ $X_2=(1,0,0,1,1,0)$ But now $X_{i1}+X_{i2} = 1$ for every possible $i$ so you have a case of perfect multicollinearity. The model will not distinguish between an effect caused by a high $X_1$ or a low $X_2$ and vice-versa. The way to avoid this trap is to get rid of one of those variables, but this implies taking one of the groups as a "reference", which is kind of an arbitrary choice. More importantly, when considering multiple factors simultaneously, it may be the case that some of the dummy variables reach perfect multicollinearity due to the way your individuals are distributed among the groups. Imagine, for example, you also have data like "taller than 170 cm/shorter than 170 cm" and you get $(T,S,S,T,T,S)$ (which is not rare to expect), and you will be facing a similar problem to the one we had when considering $X_1$ and $X_2$.
Clear explanation of dummy variable trap [duplicate] Let's say you have a binary variable, like sex. You create two dummy variables to reflect that in your model. Let's say you have six individuals $(M,F,F,M,M,F)$. Your dummy variables look like: $X_1=
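A quick R demonstration of the trap described above: with both sex dummies plus an intercept, the design matrix is rank-deficient and lm() returns NA for the redundant column. The response values are arbitrary; the dummies mirror the (M,F,F,M,M,F) example.
# Hedged sketch: perfect multicollinearity from two complementary dummies.
X1 <- c(0, 1, 1, 0, 0, 1)     # female indicator
X2 <- c(1, 0, 0, 1, 1, 0)     # male indicator, so X1 + X2 = 1 for every row
y  <- c(5, 7, 6, 4, 5, 8)
coef(lm(y ~ X1 + X2))         # X2 comes back NA: it is aliased with the intercept and X1
coef(lm(y ~ X1))              # dropping one dummy (choosing a reference group) fixes it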
55,699
When to prefer PCA over regularization methods in regression?
PCA considers only the variance of the features ($X$) but not the relationship between features and labels while doing this compression. Regularization, on the other hand, acts directly on the relationship between features and labels and hence develops models which are better at explaining the labels given the features. I'm not familiar with other fields but Finance literature, in particular Shrinking the Cross Section, has done this comparison (PCA of features vs regularized model) and found that regularized model does a better job of predicting portfolio returns (better out-of-sample $R^2$)
When to prefer PCA over regularization methods in regression?
PCA considers only the variance of the features ($X$) but not the relationship between features and labels while doing this compression. Regularization, on the other hand, acts directly on the relatio
When to prefer PCA over regularization methods in regression? PCA considers only the variance of the features ($X$) but not the relationship between features and labels while doing this compression. Regularization, on the other hand, acts directly on the relationship between features and labels and hence develops models which are better at explaining the labels given the features. I'm not familiar with other fields but Finance literature, in particular Shrinking the Cross Section, has done this comparison (PCA of features vs regularized model) and found that regularized model does a better job of predicting portfolio returns (better out-of-sample $R^2$)
When to prefer PCA over regularization methods in regression? PCA considers only the variance of the features ($X$) but not the relationship between features and labels while doing this compression. Regularization, on the other hand, acts directly on the relatio
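A rough comparison of the two approaches contrasted above on simulated data: principal-components regression (unsupervised compression of X) versus a cross-validated ridge fit (regularization acting on the X-y relationship). The data, the number of components, and the use of the glmnet package are illustrative assumptions.
# Hedged sketch: PCR via prcomp + lm, ridge via cv.glmnet.
library(glmnet)
set.seed(11)
n <- 200; p <- 20
X <- matrix(rnorm(n * p), n, p)
beta <- c(rep(1, 3), rep(0, p - 3))
y <- as.vector(X %*% beta + rnorm(n))
k <- 5                                         # arbitrary number of components
Z <- prcomp(X, scale. = TRUE)$x[, 1:k]         # components chosen without looking at y
pcr_fit   <- lm(y ~ Z)
ridge_fit <- cv.glmnet(X, y, alpha = 0)        # penalty chosen by cross-validation
c(pcr_R2 = summary(pcr_fit)$r.squared, ridge_lambda = ridge_fit$lambda.min)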
55,700
Random effects in repeated-measures design using lme
It appears that you have a case of a partially crossed, partially nested design, because if I understand correctly, day and cond are crossed (ie neither are nested in the other), while both appear to be nested within subject. measurement is an id variable that indexes the measurement occasion on each day and within each condition, and as such should not be treated as a random factor because there is only one observation of the dependent variable for each measurement occasion. Even though they are indexed as 1-4 for each day/condition, they are different measurements (that is, measurement 1 for day 1 condition 0 and measurement 1 for day 1 condition 1 are not the same measurement) and therefore there can be no random variation in it. If you specified it as random in the way you have coded the data above, it would be a mistake. If this is the case, then lme is unable to fit such a model, and you could use something like lme4 instead. You could specify the structure in lme4 as follows: DV ~ 1 + (1|subject) + (1|day) + (1|cond) + (1|subject:day) + (1|subject:cond) If measurement is a measurement of time within each day or cond and you expect some temporal effect, then you could include measurement as a fixed effect (and also potentially fit random slopes, if the data supported such a model) However, fitting a model with random intercepts for day and cond would not be a good idea because you have only 2 of each, so you would be asking the software to estimate a variance for a normally distributed variable having only 2 observations, which does not make any sense. So a better way forward is to treat day and cond as fixed effects, and simply fit random intercepts for subject: DV ~ day + cond + (1|subject) The fact that day and cond were randomly assigned is not relevant. The same comment as above applies for measurement again here. That is, you might want to fit DV ~ day + cond + measurement + (1|subject) and again, you could also have random slopes for day and/or cond and/or measurement if suggested by the domain theory and supported by the data. Of course, now that we have discarded day and cond as random, you can go back to the nlme package if you wish (athough lme4 is really the successor to nlme for most cases)
Random effects in repeated-measures design using lme
It appears that you have a case of a partially crossed, partially nested design, because if I understand correctly, day and cond are crossed (ie neither are nested in the other), while both appear to
Random effects in repeated-measures design using lme It appears that you have a case of a partially crossed, partially nested design, because if I understand correctly, day and cond are crossed (ie neither are nested in the other), while both appear to be nested within subject. measurement is an id variable that indexes the measurement occasion on each day and within each condition, and as such should not be treated as a random factor because there is only one observation of the dependent variable for each measurement occasion. Even though they are indexed as 1-4 for each day/condition, they are different measurements (that is, measurement 1 for day 1 condition 0 and measurement 1 for day 1 condition 1 are not the same measurement) and therefore there can be no random variation in it. If you specified it as random in the way you have coded the data above, it would be a mistake. If this is the case, then lme is unable to fit such a model, and you could use something like lme4 instead. You could specify the structure in lme4 as follows: DV ~ 1 + (1|subject) + (1|day) + (1|cond) + (1|subject:day) + (1|subject:cond) If measurement is a measurement of time within each day or cond and you expect some temporal effect, then you could include measurement as a fixed effect (and also potentially fit random slopes, if the data supported such a model) However, fitting a model with random intercepts for day and cond would not be a good idea because you have only 2 of each, so you would be asking the software to estimate a variance for a normally distributed variable having only 2 observations, which does not make any sense. So a better way forward is to treat day and cond as fixed effects, and simply fit random intercepts for subject: DV ~ day + cond + (1|subject) The fact that day and cond were randomly assigned is not relevant. The same comment as above applies for measurement again here. That is, you might want to fit DV ~ day + cond + measurement + (1|subject) and again, you could also have random slopes for day and/or cond and/or measurement if suggested by the domain theory and supported by the data. Of course, now that we have discarded day and cond as random, you can go back to the nlme package if you wish (athough lme4 is really the successor to nlme for most cases)
Random effects in repeated-measures design using lme It appears that you have a case of a partially crossed, partially nested design, because if I understand correctly, day and cond are crossed (ie neither are nested in the other), while both appear to
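A sketch of fitting the model recommended above with lme4, assuming a long-format data frame with one row per measurement occasion; all variable names, the simulated effect sizes, and the data layout are illustrative assumptions rather than the asker's actual data.
# Hedged sketch: DV ~ day + cond + measurement + (1 | subject).
library(lme4)
set.seed(8)
dat <- expand.grid(subject = factor(1:20), day = factor(1:2),
                   cond = factor(0:1), measurement = 1:4)
dat$DV <- 10 + 0.5 * (dat$day == "2") + 1.0 * (dat$cond == "1") +
          rnorm(20, sd = 2)[as.integer(dat$subject)] +   # subject-level shifts
          rnorm(nrow(dat))                               # residual noise
fit <- lmer(DV ~ day + cond + measurement + (1 | subject), data = dat)
summary(fit)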