Interpreting table 1 in clinical research papers
No, your table means that the mean of HbA1c is 7.7 (in %), and that the standard deviation (SD) is 1.5 (again in %). The range will usually be much wider than 6.2-9.2%. To be honest, the table is a little confusing. If they write "mean $\pm$ SD" in the caption, they should also write "$7.7 \pm 1.5$" in the table. Or conversely, they should write "mean (SD)" in the caption. Note that sometimes people do include the range, but from what I have seen, the mean and SD are more common.
Interpreting table 1 in clinical research papers
The first line in your picture says "mean +/- SD", so it seems reasonable to assume that the mean HbA1c was 7.7% with a standard deviation of 1.5%, and that the mean cholesterol was 4.9 mmol/L with a standard deviation of 1.0 mmol/L.

The standard deviation is the square root of the variance and thus a measure of the amount of variation or dispersion around the mean. A small value implies that most values were close to the mean; a large value means that the spread around the mean was large. First you compute the difference of each value from the mean and square it. Then you compute the mean of the squared differences and take the square root of that. Voilà, that is your standard deviation or SD.

If those values were normally distributed you could draw more conclusions from that, but biomarkers are usually not normally distributed. So take it as no more and no less than a way to give you an idea of whether values were generally close to the mean or not.
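For concreteness, here is a small R sketch of that recipe, using made-up HbA1c values (not the ones from the table being discussed):

hba1c <- c(6.1, 7.0, 7.5, 7.7, 8.2, 8.9, 9.5)   # made-up illustrative values (in %)
m <- mean(hba1c)
sd_manual <- sqrt(mean((hba1c - m)^2))   # root of the mean squared deviation, as described above
sd_sample <- sd(hba1c)                   # note: R's sd() divides by n - 1 instead of n
c(mean = m, sd_described = sd_manual, sd_r = sd_sample)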
Minimum expectation
When $p$ is increasing, its cumulative distribution function $$P(x) = \int_0^x p(t)\,\mathrm{d}t$$ is convex. Since $P(0)=0$ and $P(1)=1,$ convexity implies the graph of $P$ lies on or below the line segment connecting $(0,0)$ and $(1,1),$ a segment that bounds a triangular area of $1/2.$ (In the original figure, the graph of $P$ is the curve and the segment is the dotted line.) Thus, the area under the graph of $P$ cannot exceed $1/2$: $$\frac{1}{2} \ge \int_0^1 P(x)\,\mathrm{d}x.$$ This area corresponds to the shaded region in the original figure. Since $$E_p(X) = \int_0^1 (1-P(x))\,\mathrm{d}x = 1 - \int_0^1 P(x)\,\mathrm{d}x \ge 1 - \frac{1}{2},$$ it follows that $E_p(X) \ge 1/2.$ This minimum value is the best possible because it can be attained by the uniform distribution, where $p(x)=1$ for $0\le x \le 1.$
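A quick numerical check of this bound (my own sketch, not part of the original answer): the uniform density attains $E_p(X)=1/2$, while an increasing density such as $p(x)=2x$ gives a larger mean.

mean_of_density <- function(f) integrate(function(x) x * f(x), 0, 1)$value
mean_of_density(function(x) rep(1, length(x)))   # uniform density: 0.5, the bound is attained
mean_of_density(function(x) 2 * x)               # increasing density p(x) = 2x: 2/3 > 0.5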
Minimum expectation
First, change the problem to say that the density is non-decreasing and piecewise constant; solve that problem first and then return to this one. Suppose there is a density $f(x)$ that minimizes $EX$, where $n$ is a positive integer and, for the integers $1 \le i \le n$, the density is constant and equal to $a_i$ on each of the intervals $((i-1)/n,i/n)$. The constants are non-decreasing: $a_1 \le a_2 \le \dots \le a_n$. If there is any $j$ where the inequality is strict, $a_j<a_{j+1}$, then you could define a new density equal to $f(x)$ outside of the interval $((j-1)/n,(j+1)/n)$ but constant and equal to the average $\frac{a_j+a_{j+1}}{2}$ on the interval $((j-1)/n,(j+1)/n)$; this shifts mass to the left, so $EX$ would be strictly smaller. Thus, all $a_i$ must be equal to $1/n$. The uniform density has that property, and it is the only density that has that property for all $n$. If $X$ is uniform, $EX=\frac{1}{2}$.

That proof only works for piecewise constant densities, but for an arbitrary density, the integral is defined as the limit of these piecewise constant approximations. Thus, the minimum over arbitrary non-decreasing densities is also $\frac{1}{2}$.

For the original problem, where you want $f(x)$ to be increasing, for any small $\epsilon>0$ you can have $EX=\frac{1}{2}+\epsilon$ by taking $f(x)=1+6\epsilon(2 x-1)$. But you can never have $EX=\frac{1}{2}$ with an increasing density, because of the argument above.
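A short R check (my own sketch) that the family $f(x)=1+6\epsilon(2x-1)$ behaves as claimed, using $\epsilon = 0.05$ for illustration:

eps <- 0.05                                   # any 0 < eps <= 1/6 keeps f nonnegative
f <- function(x) 1 + 6 * eps * (2 * x - 1)    # strictly increasing on [0, 1]
integrate(f, 0, 1)$value                      # total mass: 1
integrate(function(x) x * f(x), 0, 1)$value   # mean: 0.55 = 1/2 + eps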
P value as a measure of effect size?
Your intuition is correct here --- although the p-value is not used as a measure of effect size, you are correct that in some tests, for a fixed sample size, the distribution of the p-value is monotonically related to the effect size, and thus is implicitly a transformed estimator of the effect size. Generally speaking, a larger effect size (further from the null hypothesis) manifests in a smaller p-value. In many cases it is possible to establish a stochastic dominance result to this effect.

Example - One sample two-sided Z-test: To illustrate this phenomenon, consider the simple case where we have IID normal data and we take a one-sample Z-test of the population mean $\mu \in \mathbb{R}$ with known population variance $\sigma^2 = 1$. (This is not a very realistic scenario, but it is the simplest version of the hypothesis test for a mean, so it is useful for illustrative purposes.) Taking a two-sided test with null hypothesis $H_0: \mu = \mu_0$ we have the test statistic: $$Z(\mathbf{x}_n) = \sqrt{n} \cdot (\bar{x}_n - \mu_0),$$ with the corresponding p-value function $p(\mathbf{x}_n) = 2 \cdot \Phi(-|Z(\mathbf{x}_n)|)$. If the true mean is $\mu$ then the absolute value of the test statistic has a folded normal distribution: $$|Z(\mathbf{X}_n)| \sim \text{FN} \Big( \sqrt{n} \cdot (\mu - \mu_0), 1 \Big).$$ Now we apply the standard rules for transformations of probability density functions to obtain the p-value density function. The transformation $p = 2 \Phi(-z)$ has inverse $z = - \Phi^{-1} (p/2)$, so we get: $$\begin{align} f(p) &= f(z(p)) \times \Bigg| \frac{dz}{dp} \Bigg| \\[6pt] &= \text{FN} \Big( - \Phi^{-1} (\tfrac{p}{2}) \Big| \sqrt{n} \cdot (\mu - \mu_0), 1 \Big) \times \Bigg( \frac{1}{2} \cdot \frac{1}{\text{N}(\Phi^{-1} (\tfrac{p}{2})|0,1)} \Bigg) \\[6pt] &= \frac{1}{2} \cdot \frac{\exp \big( -\frac{1}{2} \cdot (- \Phi^{-1} (\tfrac{p}{2}) - \sqrt{n} (\mu - \mu_0))^2 \big) + \exp \big( -\frac{1}{2} \cdot (- \Phi^{-1} (\tfrac{p}{2}) + \sqrt{n} (\mu - \mu_0))^2 \big)}{\exp \big( -\frac{1}{2} \cdot (-\Phi^{-1} (\tfrac{p}{2}))^2 \big)}. \\[6pt] \end{align}$$ As you can see, the distribution of the p-value depends on the population mean $\mu$. With some more algebra, it can be shown that the distribution of the p-value is stochastically decreasing in $|\mu - \mu_0|$ (i.e., the p-value tends to get smaller as the true mean moves further from $\mu_0$).
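As an illustration of this dominance, here is a small simulation sketch (my own, with an arbitrary sample size $n = 20$) showing that the p-values of this Z-test tend to shrink as the true mean moves away from $\mu_0 = 0$:

set.seed(1)
n <- 20; mu0 <- 0
p_for_mu <- function(mu, reps = 10000) {
  replicate(reps, {
    x <- rnorm(n, mean = mu, sd = 1)          # IID normal data with known unit variance
    z <- sqrt(n) * (mean(x) - mu0)            # Z test statistic
    2 * pnorm(-abs(z))                        # two-sided p-value
  })
}
sapply(c(0, 0.2, 0.5, 1), function(mu) median(p_for_mu(mu)))  # median p-value shrinks toward 0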
P value as a measure of effect size?
Let’s do two t-test examples. In the first situation, we take $25$ observations and get a mean of $0.59218$ and variance of $1.891$. Running through the one-sample t-test calculations, we get a t-statistic of $2.1532$ and a p-value of $0.04157$, significant at the legendary $0.05$ level. In the second situation, we take $250,000$ observations and get a mean of $0.0245$ and variance of $0.9948$. These result in a t-statistic of $12.283$ and a p-value of $\approx 0$. The p-value is much smaller for the situation with the smaller observed effect! What’s going on is that the p-value is sensitive to the sample size. This makes it do what it is supposed to do in order to contradict the null hypothesis, but it means it does not just measure the effect size.
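For reference, these two results can be reproduced in R from the quoted summary statistics (a small sketch assuming a standard one-sample t-test against $\mu_0 = 0$):

t_from_summary <- function(xbar, s2, n, mu0 = 0) {
  t <- (xbar - mu0) / sqrt(s2 / n)             # t-statistic from summary statistics
  c(t = t, p = 2 * pt(-abs(t), df = n - 1))    # two-sided p-value
}
t_from_summary(0.59218, 1.891, 25)       # t ~ 2.1532, p ~ 0.0416
t_from_summary(0.0245, 0.9948, 250000)   # t ~ 12.283, p ~ 0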
P value as a measure of effect size?
If the sample sizes are equal, t is a function of effect size, and p is a function of t. So a higher effect size is associated with a lower p. I wouldn't say "a lower p-value in the tests indicates a higher effect"; I might say larger effect sizes are associated with smaller p-values. But why would you take a measure that is fairly easy to interpret and turn it into one that is difficult to interpret (and very commonly misinterpreted)?
P value as a measure of effect size?
The p-value is the probability that the chosen test statistic would be as large or larger than you observed from the data, given the null hypothesis. Typically, our null hypothesis is something like "this population parameter is equal to this constant value" and our test statistic is generally chosen in a way that will provide evidence against the null hypothesis if it's false - i.e. something that, if the population parameter is not the given value, will give you results that are highly improbable under the null hypothesis. And of course many hypothesis tests are trying to detect whether there's some kind of underlying effect going on - whether it's "this coin is unfair" or "giving the COVID vaccine makes people less likely to catch the disease" or "every basketball team signs players of roughly equal height".

So, if we're performing something resembling a normal hypothesis test, a larger effect (i.e. a bigger gap between the truth and the "everything is normal" null hypothesis) will give you lower p-values (at least in expectation) if everything else remains the same. That said, the reasons we tend to say that the p-value is not a measure of effect size are:

- It assumes that there even is an "effect size" in the first place. If the test is comparing how people rank in some list before and after an intervention, then you check whether the order of the list changes, but there's no real numeric effect that you're measuring.

- The test statistic is a function of your observed data, i.e. your sample, which means that it's a random value, which means that it can't be a consistent measure of the effect size. As an easy example, consider the case where the null hypothesis is true - then the p-value is literally just a measure of how unusual your data happens to be by sheer chance, meaning that you will get a p-value of 0.05 or smaller roughly 1 time in 20. If you compare 20 basketball teams to the Warriors, you should expect to see at least one tiny p-value even if the teams were formed by getting ~1000 people in a room together and just drawing their names from a hat. (See the simulation sketch after this list.)

- P-values are probabilities, so their relationship to effect size tends to be highly non-linear. So even if you do take all of the above into account and deal with the probabilistic components and yada yada, turning that p-value into an actual measure of the effect size may be a massive pain.
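To illustrate the point about a true null (a minimal sketch, assuming a simple one-sample t-test with normal data):

set.seed(42)
pvals <- replicate(10000, t.test(rnorm(30), mu = 0)$p.value)  # the null hypothesis is true
mean(pvals < 0.05)   # roughly 0.05, i.e. about 1 in 20
hist(pvals)          # roughly flat: p-values are approximately uniform under the null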
P value as a measure of effect size?
P-values are used as a measure of effect size all the time. The simplest way to express an effect size is the raw or absolute effect size, like a difference between means. Some alternatives to express (relative) effect sizes are:

- Cohen's d, which expresses the effect size relative to the pooled population variances;
- the t-statistic, which expresses the effect size relative to the variance of the sample means;
- and also p-values, which are a way to express the effect size relative to the statistical probability when some null hypothesis is true.

Depending on the application, one or the other may be preferred. In publications you may often see that multiple values are reported. For instance you could see something like: "Our study found that participants who drank coffee had a statistically significant increase in productivity, with a mean effect size of $0.8$ ($t_{df=23}=3.2$, $p = 0.002$), proving once and for all that caffeine is the real workhorse in your cup of joe." (The content of the quote is fantasy and any resemblance to real studies is purely coincidental. I had ChatGPT make that quote for me.)

Note that these relative ways of expressing effect sizes do not always coincide. For a given absolute effect size you can have different statistics and p-values depending on the sample sizes and estimated variances: the same p-value may occur with large and small effect sizes, and the same effect size may occur with large and small p-values (see the sketch below). The p-value is not a measure of the absolute effect size.
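As a small sketch of that last point (my own illustrative numbers): holding the observed mean difference and standard deviation fixed and varying only the sample size changes the p-value dramatically.

p_one_sample <- function(xbar, s, n, mu0 = 0) {
  t <- (xbar - mu0) / (s / sqrt(n))
  2 * pt(-abs(t), df = n - 1)
}
# same absolute effect (mean 0.2, sd 1), four different sample sizes:
sapply(c(10, 30, 100, 1000), function(n) p_one_sample(xbar = 0.2, s = 1, n = n))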
Can I make economic decisions (e.g., profit maximisation) using inferences from multiple linear regression?
This is a cost-benefit problem (or, equivalently, a profit-optimisation problem) that is not fully answered by the regression model. This kind of problem requires you to combine your statistical inference about the product sales with economic analysis. Generally this entails writing the profit function for the firm as a function of the number of sales representatives, including the sales and the variable costs of employing those representatives. You use your regression model to estimate the "sales function" for the firm and you then get an estimate of the "profit function" for the firm. You can then estimate the optimal number of sales representatives by taking account of the estimated number of sales you will get and the known variable costs (e.g., salary, etc.) of employing them.

A secondary problem here is whether you can interpret your regression outcomes causally, such that if the firm changes the number of sales representatives, they will thereby change the number of product sales. Using passive observation, it is unlikely that your regression outputs represent purely causal effects, and so you may need to do some further work later to deal with this. This is complicated, so I will just leave it as a caveat, and in the analysis below I will assume you can interpret your regression output in a causal manner.

Setting up a profit-optimisation problem: Let $r$ denote the number of sales representatives that are employed by the firm and let $c$ denote the "variable cost" of employing an additional sales representative (e.g., salary costs, etc.). Moreover, let $\mathcal{P}$ be the set of products sold by the firm and let $\pi_i$ be the profit from the sale of a single item of product $i$ (i.e., the revenue for that item less the variable costs of that item) for each product $i \in \mathcal{P}$. Let $S(i, r)$ be the number of sales of product $i$ when the firm has $r$ sales representatives (and note that this part is stochastic --- hence the upper case). Then the profit function for the firm is: $$\pi(r) = \sum_{i \in \mathcal{P}} \pi_i \cdot S(i,r) - r \cdot c - \text{Other Fixed Costs}.$$ In econometric analysis for input factor optimisation by the firm, we generally wish to maximise the expected profit $\mathbb{E} (\pi(r))$ (which assumes that shareholders are risk-neutral at the margins). Your regression model is giving you a model for the stochastic part $S$, which lets you estimate this part, giving you some estimator $\hat{S}$ for the expected sales. Assuming you can estimate over all the products, you then have an estimator for the expected profit: $$\hat{\pi}(r) = \sum_{i \in \mathcal{P}} \pi_i \cdot \hat{S}(i,r) - r \cdot c - \text{Other Fixed Costs}.$$ The estimated optimal number of sales representatives is: $$\hat{r} = \underset{r \in \mathbb{N}}{\text{arg max}} \ \hat{\pi}(r).$$ Assuming some basic properties that ensure smoothness and quasi-concavity of the revenue function, the optimal (real) number of sales representatives will occur at the "critical point" where the marginal revenue of an additional sales rep is equal to the marginal cost (i.e., salary, etc.) --- i.e., we have: $$\text{Marginal Revenue}(\hat{r}) \equiv \sum_{i \in \mathcal{P}} \pi_i \cdot \frac{\partial \hat{S}}{\partial r} (i,\hat{r}) = c = \text{Marginal Cost}.$$ This is a standard result in the economics of firm behaviour --- under some basic regularity conditions, profit-maximising firms will employ factors of production up to the point where their marginal revenue equals their marginal cost (called the profit maximisation rule). (You might have to vary this slightly because the number of sales representatives must be a non-negative integer, so the optimum will occur near the exact solution to the above equation.)

This gives you a simple outline of how you might pose an econometric optimisation problem of this kind. As you can see, the answer here is affected not only by the inferences from the regression model, but also by the variable costs of employing the sales representatives. If the salary costs of employment increase then, ceteris paribus, we would expect the optimal number of sales reps to decrease, and vice versa.
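Below is a minimal sketch of the grid-search version of this optimisation in R. Everything here is hypothetical: the per-item profits, the salary cost, and the estimated sales function S_hat are made-up placeholders standing in for your own regression-based estimates.

profit_per_item <- c(A = 120, B = 250)    # hypothetical per-item profits pi_i
salary_cost    <- 60000                   # hypothetical variable cost c per representative

S_hat <- function(i, r) {                 # hypothetical estimated expected sales of product i
  base <- c(A = 2000, B = 800)[i]
  base * log1p(r)                         # diminishing returns in the number of reps
}

profit_hat <- function(r) {               # estimated expected profit (fixed costs omitted)
  sum(sapply(names(profit_per_item),
             function(i) profit_per_item[i] * S_hat(i, r))) - r * salary_cost
}

r_grid <- 0:50
r_hat  <- r_grid[which.max(sapply(r_grid, profit_hat))]
r_hat                                     # estimated optimal number of representatives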
Can I make economic decisions (e.g., profit maximisation) using inferences from multiple linear regression?
I believe what you are asking here is whether the linear regression coefficients can be interpreted causally, i.e. whether they can be used to calculate the effects of a (hypothetical) intervention, where you would change certain predictors by a value X, and use the regression to calculate the increase in sales etc. The basic answer to that question is (apart from the usual caveats of finding the correct model, etc.): yes, if you stick to certain rules about the selection of variables, i.e. if you avoid having colliders in your regression; you should probably also think about mediators. For a detailed explanation, see Lederer, D. J., Bell, S. C., Branson, R. D., Chalmers, J. D., Marshall, R., Maslove, D. M., … & Stewart, P. W. (2019). Control of confounding and reporting of results in causal inference studies: guidance for authors from editors of respiratory, sleep, and critical care journals. Annals of the American Thoracic Society, 16(1), 22-28.
Can I make economic decisions (e.g., profit maximisation) using inferences from multiple linear regression?
I agree with the sentiment, and I can also imagine there are dangers in assuming that a regression model with a few variables, applied to a different region, is an entirely appropriate basis for a decision to increase the number of sales representatives. Having at one time actually been in a sales position, I can say that there are at least two parts to completing a sale: a good sales representative and a buyer actually able to afford the product. The implication of this last statement is that a region undergoing an economic downturn, while the rest of the country is showing growth, may be an outlier with respect to a national model. As fundamental economic variables are not part of the model, regional differentiation in sales could be missed by the model. So even though a summary national regression relationship seems to imply that one can improve total sales by simply adding salespeople, this may not be the case if all other things are not equal, namely the economic capacity of people in the region to actually purchase. Understanding the potential limitations of a model for forecasting in practice is paramount.
Is there a statistical distribution whose values are bounded $[-1,1]$ and sum to 1?
Scaling a Dirichlet distribution: If you want a variable that is distributed like a Dirichlet distributed variable but with a different range, then you can scale and shift (transform) the variable. This is effectively rescaling the axes. To get from $[0,1]$ to $[-1,1]$ you can multiply by 2 and subtract 1. That is, your new variable $Y$ can be based on a regular Dirichlet distributed variable $X$ by the transformation $$Y = 2X -1$$ (where the transformation is done for each of the components, that is, for every $y_i$ you compute $y_i = 2x_i-1$).

The probability density function will scale similarly, but with an additional scaling factor (the density is lower when you spread out over a larger range). So the regular Dirichlet distributed variable $X$ has the density function $f_X$: $$f_X(\mathbf{x}) = \frac{1}{B(\boldsymbol{\alpha})} \prod_{i=1}^K x_i^{\alpha_i-1}$$ and the variable $Y = 2X-1$ has the density function $f_Y$: $$f_Y(\mathbf{y}) = \frac{1}{2^K} f_X \left(\frac{\mathbf{y}+1}{2}\right) = \frac{1}{B(\boldsymbol{\alpha})2^K} \prod_{i=1}^K \left(\frac{y_i+1}{2}\right)^{\alpha_i-1}$$ where $B(\boldsymbol{\alpha}) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)}{\Gamma\left(\sum_{i=1}^K \alpha_i\right)}$. So you do not need to change anything about $\alpha$. The transformation only requires scaling and shifting the axes (which also includes a scaling of the density by a factor $1/2^K$). Whatever $\alpha$ needs to be will depend on your application.

When there is a constraint ("Is there a statistical distribution whose values are bounded $[-1,1]$ and sum to 1?"): Note that this transformation by scaling the axes is not generally possible with your additional constraint. Your additional condition requires $$\sum_{i=1}^n y_i = \sum_{i=1}^n (a + b x_i) = an + b \sum_{i=1}^n x_i = 1$$ and this only holds when $n = \frac{1-b}{a}$. With our straightforward transformation, $a=-1$ and $b=2$, it does not hold. We need to use instead $a=1$ and $b=-2$, and then it will only work for a Dirichlet distribution with $n=3$. The original figure illustrates this: the red plane is the domain of the 'regular' Dirichlet distribution; the green plane is what you get when you apply the transformation $y_i = 2x_i -1$, but then the variables no longer sum to 1 (instead they sum to -1); the blue plane, $y_i = 1 - 2 x_i$, gives a transformation such that the sum is still 1.

A homogeneous distribution: Based on your Stack Overflow question, it seems that you are not looking for a distribution like the Dirichlet distribution, but for a homogeneous distribution (a special case of the Dirichlet distribution where all $\alpha_i =1$), where the pdf equals some constant $f(\mathbf{x}) = c$. You can do this by rejection sampling or by an iterative computation of the coordinates $x_i$, where the conditional/marginal distributions $f(x_i|x_1,x_2,\dots,x_{i-1})$ can be derived from rescaled and truncated versions of the Irwin-Hall distribution. It is explained in the answer to your Stack Overflow question.
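A small R sketch (my own, using the standard construction of a Dirichlet draw from normalised Gamma variables) of the transformation that preserves the sum when $K = 3$:

rdirichlet_one <- function(alpha) {       # one Dirichlet draw via normalised Gamma variables
  g <- rgamma(length(alpha), shape = alpha, rate = 1)
  g / sum(g)
}
set.seed(7)
x <- rdirichlet_one(c(2, 3, 4))           # x_i in [0, 1], sum(x) == 1
y <- 1 - 2 * x                            # y_i in [-1, 1]
sum(y)                                    # 3 - 2 * sum(x) = 1, only because K = 3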
Is there a statistical distribution whose values are bounded $[-1,1]$ and sum to 1?
If you really need the variables to sum to one, you could "force it" by dividing by the sum. That is, if $X_1, X_2, \cdots X_n$ are random variables, then the RVs $$Z_i = \frac{X_i}{\sum_{i=1}^n X_i}$$ have the property that $\sum_{i=1}^nZ_i = 1$ (so long as $\sum X_i \neq 0$). This is easy to show: $$\sum_{j=1}^n Z_j = \sum_{j=1}^n \frac{X_j}{\sum_{i=1}^n X_i} = \frac{1}{\sum_{i=1}^n X_i}\sum_{j=1}^n X_j = 1$$

N <- 10000
x <- 1 - 2*rbeta(N, 3, 3)                     # values in [-1, 1]
z <- x/sum(x)                                 # rescaled so that sum(z) == 1
w <- -1 + 2*(z - min(z))/(max(z) - min(z))    # maps z back onto [-1, 1]
par(mfrow = c(1, 2))
hist(x)
hist(z)
Can someone please explain how to read the unconfoundedness assumption? I'm unable to understand the symbols, specifically the upside-down "T"
Generally, that "upside down T" indicates orthogonality or independence (or unconfoundedness?) and I would interpret that notation as saying that the random vector $(Y_i(0), Y_i(1))$ is (conditionally) independent of the random variable $W$ given $X_i$, or more verbosely, conditioned on knowledge of $X_i$.

If the random variables are jointly continuous and thus have joint densities, we can write $$f_{Y_i(0), Y_i(1), W \mid X_i = x}(y_0,y_1, w \mid X_i = x) = f_{Y_i(0), Y_i(1) \mid X_i = x}(y_0,y_1 \mid X_i = x)\,f_{W \mid X_i = x}(w \mid X_i = x)$$ telling us that the conditional pdf of $Y_i(0), Y_i(1), W$ given $X_i=x$ factors into the product of the conditional pdf of $Y_i(0), Y_i(1)$ given $X_i=x$ and the conditional pdf of $W$ given $X_i=x$, and this factorization holds for all values $x$ that $X_i$ might take on.

It is worth keeping in mind that conditional independence does not imply unconditional independence: it is not necessarily the case that $f_{Y_i(0), Y_i(1), W}(y_0,y_1, w)$ factors into the product $f_{Y_i(0), Y_i(1)}(y_0,y_1)f_{W}(w)$.
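A simulation sketch of that last caveat (my own construction, not tied to the potential-outcomes notation above): two variables that are independent given a common cause x are still marginally dependent.

set.seed(123)
n <- 1e5
x <- rnorm(n)
w <- x + rnorm(n)   # depends only on x plus its own independent noise
y <- x + rnorm(n)   # depends only on x plus its own independent noise
cor(w, y)                                   # ~0.5: marginally dependent
cor(resid(lm(w ~ x)), resid(lm(y ~ x)))     # ~0: independent once we condition on x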
Random variables $(X,Y)$ with $\text{Var}(X)<\text{Var}(Y)$ and $\mathbb{E}(|X-\mu_X|)>\mathbb{E}(|Y-\mu_Y|)$
Consider a Laplace distribution with scale parameter equal to $1$. One can compute that the MAD equals $1$, and the variance equals $2$. Now consider a Normal distribution with arbitrary mean, and with variance equal to $1.9$. One can compute the MAD to be $$\sqrt{\frac{1.9 \cdot 2}{\pi}} = \sqrt{\frac{3.8}{\pi}} > \sqrt{\frac{3.8}{3.2}} > 1,$$ using that $0 < \pi < 3.2$. As such, the variance is less than that of our Laplace distribution, but the MAD is greater.
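A quick numerical check of these values in R (my own sketch, using symmetry to integrate over the positive half-line):

laplace_pdf <- function(x) 0.5 * exp(-abs(x))                          # Laplace(0, scale = 1) density
2 * integrate(function(x) x^2 * laplace_pdf(x), 0, Inf)$value          # variance of the Laplace: 2
2 * integrate(function(x) x   * laplace_pdf(x), 0, Inf)$value          # MAD of the Laplace: 1
2 * integrate(function(x) x * dnorm(x, sd = sqrt(1.9)), 0, Inf)$value  # MAD of N(., 1.9): ~1.10 > 1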
Random variables $(X,Y)$ with $\text{Var}(X)<\text{Var}(Y)$ and $\mathbb{E}(|X-\mu_X|)>\mathbb{E}(|Y-\mu_Y|)$
Consider a discrete random variable $Y$ that, for some constants $M\in\mathbb{R}$ and $\epsilon\in [0, 1]$, takes the following values: $$ Y = \begin{cases} -M+\mu_Y & \text{w.p. } \epsilon/2 \\ \mu_Y & \text{w.p. } 1-\epsilon \\ M+\mu_Y & \text{w.p. } \epsilon/2 \end{cases} $$ Simple calculations show that $Y$ has the desired mean $\mu_Y$, $Var(Y) = M^2\epsilon$, and $E[|Y-\mu_Y|] = M\epsilon$. Taking $M = \epsilon^{-0.75}$ and $\epsilon\rightarrow 0$, we have $Var(Y)\rightarrow\infty$ and $E[|Y-\mu_Y|]\rightarrow 0$. So by selecting a small enough $\epsilon$, we can obtain a $Y$ with the desired mean, arbitrarily large $Var(Y)$, and arbitrarily small $E[|Y-\mu_Y|]$. For instance, with $\epsilon=10^{-12}$ and $M = 10^9$, we have $Var(Y) = 10^6$ and $E[|Y-\mu_Y|] = 0.001$. So now we have a wealth of options for $X$ that will cause the desired inequalities to hold; an example would be a normal distribution with mean $\mu_X$ and variance $1$.
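A direct computation in R with the quoted numbers (my own sketch), together with the normal choice for $X$, for which $E[|X-\mu_X|] = \sqrt{2/\pi}$ when the variance is 1:

eps <- 1e-12; M <- 1e9
c(var_Y = M^2 * eps, mad_Y = M * eps)   # 1e6 and 0.001, as claimed
c(var_X = 1, mad_X = sqrt(2 / pi))      # ~0.798, so Var(X) < Var(Y) but E|X - mu_X| > E|Y - mu_Y|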
Random variables $(X,Y)$ with $\text{Var}(X)<\text{Var}(Y)$ and $\mathbb{E}(|X-\mu_X|)>\mathbb{E}(|Y
Consider a discrete random variable $Y$ that, for some constants $M\in\mathbb{R}$ and $\epsilon\in [0, 1]$, takes the following values: $$ Y = \left\{\begin{array} ~-M+\mu_Y & \text{w.p.}~\epsilon/2 \
Random variables $(X,Y)$ with $\text{Var}(X)<\text{Var}(Y)$ and $\mathbb{E}(|X-\mu_X|)>\mathbb{E}(|Y-\mu_Y|)$ Consider a discrete random variable $Y$ that, for some constants $M\in\mathbb{R}$ and $\epsilon\in [0, 1]$, takes the following values: $$ Y = \left\{\begin{array} ~-M+\mu_Y & \text{w.p.}~\epsilon/2 \\ \mu_Y & \text{w.p.}~1-\epsilon \\ M+\mu_Y & \text{w.p.}~\epsilon/2\end{array}\right. $$ Simple calculations show that $Y$ has the desired mean $\mu_Y$, $Var(Y) = M^2\epsilon$, and $E[|Y-\mu_Y|] = M\epsilon$. Taking $M = \epsilon^{-0.75}$ and $\epsilon\rightarrow 0$, we have $Var(Y)\rightarrow\infty$ and $E[|Y-\mu_Y|]\rightarrow 0$. So by selecting small enough $\epsilon$, we can obtain a $Y$ with the desired mean, arbitrarily large $Var(Y)$, and arbitrarily small $[|Y-\mu_Y|]$. For instance, with $\epsilon=10^{-12}$ and $M = 10^9$, we have $Var(Y) = 10^6$ and $E[|Y-\mu_Y|] = 0.001$. So now we have a wealth of options for $X$ that will cause the desired inequalities to hold. An example would be, a normal distribution with mean $\mu_X$ and variance 1.
Random variables $(X,Y)$ with $\text{Var}(X)<\text{Var}(Y)$ and $\mathbb{E}(|X-\mu_X|)>\mathbb{E}(|Y Consider a discrete random variable $Y$ that, for some constants $M\in\mathbb{R}$ and $\epsilon\in [0, 1]$, takes the following values: $$ Y = \left\{\begin{array} ~-M+\mu_Y & \text{w.p.}~\epsilon/2 \
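A short exact computation (a sketch, using the numbers quoted in the answer above) confirming that the three-point distribution has a huge variance but a tiny mean absolute deviation.

eps, M, mu = 1e-12, 1e9, 0.0
values = [mu - M, mu, mu + M]
probs = [eps / 2, 1 - eps, eps / 2]

mean = sum(p * v for p, v in zip(probs, values))
var = sum(p * (v - mean) ** 2 for p, v in zip(probs, values))
mad = sum(p * abs(v - mean) for p, v in zip(probs, values))
print(mean, var, mad)   # 0.0, 1e6 (= M^2 * eps), 1e-3 (= M * eps)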
52,518
What is the intuition behind the Metropolis-Hastings Algorithm? [duplicate]
How is the $q$ distribution (the proposal) related to the intractable posterior? I don't see how $q$ popped out of nowhere. The posterior is not intractable: $f(x)$ must be available (in a numerical sense) up to a multiplicative constant for the Metropolis-Hastings algorithm to apply. Otherwise, latent or auxiliary variables must be called in. Or else approximate methods such as ABC are needed. The density $q(\cdot|\cdot)$ is essentially arbitrary, provided it operates on the same "space" as $f$. Meaning events measurable under $f$ should also be measurable under $q$. It is chosen when running the algorithm with competing goals of (i) a manageable enough simulation of $y\sim q(y|x)$ (ii) a computable density function $q(y|x)$ [up to a multiplicative constant] (iii) a sufficient coverage of the neighbourhood of $x$ towards ensuring eventually (in the number of iterations) a likely exploration of the whole support of the density $f$ (which leads to irreducibility for the associated Markov chain). Why is the acceptance ratio calculated the way it is? It doesn't make intuitive sense to me. The acceptance probability$$\alpha(x,y)=1 \wedge \frac{f(y)q(x|y)}{f(x)q(y|x)}$$is one of several choices that ensures $f$ is the stationary distribution density of the associated Markov chain. This means that (i) if $X_t\sim f(x)$, then after one iteration of the algorithm $X_{t+1}\sim f(x)$ (invariance of $f$) (ii) whatever the choice (or distribution) of $X_1$, the limiting distribution of $X_t$ as $t$ grows to $\infty$ is $f$. One direct explanation for this stationarity is that $$f(x)q(y|x)\alpha(x,y) = f(y)q(x|y)\alpha(y,x)$$ which is called detailed balance. It shows that the flow of the Markov chain is the same looking toward the future and looking toward the past (this is called reversibility). Any other function $\alpha$ that satisfies detailed balance works as well. Take for instance Barker's ratio $$\alpha(x,y)=\dfrac{1}{1+\frac{f(x)q(y|x)}{f(y)q(x|y)}}=\dfrac{f(y)q(x|y)}{f(x)q(y|x)+f(y)q(x|y)}$$ Another intuitive if informal explanation for this property is that, if $X_t\sim f(x)$ and $Y|X_t=x_t\sim q(y|x_t)$, then accepting the value $Y=y$ with probability $\alpha(x_t,y)$ turns the distribution of the pair into $Y\sim f(y)$ and $X_t|Y=y\sim q(x_t|y)$. In Step 3, we accept the X we sampled from the q distribution with some probability - why is that? How does that get me something closer to the intractable posterior, which is our goal? The algorithm produces a Markov chain $(X_t)_{t\ge 1}$ that is converging in distribution to the target distribution $f$. Hence the distribution of $X_t$ is eventually getting close to $f$ and hence for $t$ large enough the marginal distribution of $X_t$ is approximately $f$. But it does not make sense to consider that after each iteration $X_t$ is closer to the posterior distribution $f$.
What is the intuition behind the Metropolis-Hastings Algorithm? [duplicate]
How is the $q$ distribution (the proposal) related to the intractable posterior? I don't see how $q$ popped out of nowhere. The posterior is not intractable: $f(x)$ must be available (in a numerica
What is the intuition behind the Metropolis-Hastings Algorithm? [duplicate] How is the $q$ distribution (the proposal) related to the intractable posterior? I don't see how $q$ popped out of nowhere. The posterior is not intractable: $f(x)$ must be available (in a numerical sense) up to a multiplicative constant for the Metropolis-Hastings algorithm to apply. Otherwise, latent or auxiliary variables must be called in. Or else approximate methods such as ABC are needed. The density $q(\cdot|\cdot)$ is essentially arbitrary, provided it operates on the same "space" as $f$. Meaning events measurable under $f$ should also be measurable under $q$. It is chosen when running the algorithm with competing goals of (i) a manageable enough simulation of $y\sim q(y|x)$ (ii) a computable density function $q(y|x)$ [up to a multiplicative constant] (iii) a sufficient coverage of the neighbourhood of $x$ towards ensuring eventually (in the number of iterations) a likely exploration of the whole support of the density $f$ (which leads to irreducibility for the associated Markov chain). Why is the acceptance ratio calculated the way it is? It doesn't make intuitive sense to me. The acceptance probability$$\alpha(x,y)=1 \wedge \frac{f(y)q(x|y)}{f(x)q(y|x)}$$is one of several choices that ensures $f$ is the stationary distribution density of the associated Markov chain. This means that (i) if $X_t\sim f(x)$, then after one iteration of the algorithm $X_{t+1}\sim f(x)$ (invariance of $f$) (ii) whatever the choice (or distribution) of $X_1$, the limiting distribution of $X_t$ as $t$ grows to $\infty$ is $f$. One direct explanation for this stationarity is that $$f(x)q(y|x)\alpha(x,y) = f(y)q(x|y)\alpha(y,x)$$ which is called detailed balance. It shows that the flow of the Markov chain is the same looking toward the future and looking toward the past (this is called reversibility). Any other function $\alpha$ that satisfies detailed balance works as well. Take for instance Barker's ratio $$\alpha(x,y)=\dfrac{1}{1+\frac{f(x)q(y|x)}{f(y)q(x|y)}}=\dfrac{f(y)q(x|y)}{f(x)q(y|x)+f(y)q(x|y)}$$ Another intuitive if informal explanation for this property is that, if $X_t\sim f(x)$ and $Y|X_t=x_t\sim q(y|x_t)$, then accepting the value $Y=y$ with probability $\alpha(x_t,y)$ turns the distribution of the pair into $Y\sim f(y)$ and $X_t|Y=y\sim q(x_t|y)$. In Step 3, we accept the X we sampled from the q distribution with some probability - why is that? How does that get me something closer to the intractable posterior, which is our goal? The algorithm produces a Markov chain $(X_t)_{t\ge 1}$ that is converging in distribution to the target distribution $f$. Hence the distribution of $X_t$ is eventually getting close to $f$ and hence for $t$ large enough the marginal distribution of $X_t$ is approximately $f$. But it does not make sense to consider that after each iteration $X_t$ is closer to the posterior distribution $f$.
What is the intuition behind the Metropolis-Hastings Algorithm? [duplicate] How is the $q$ distribution (the proposal) related to the intractable posterior? I don't see how $q$ popped out of nowhere. The posterior is not intractable: $f(x)$ must be available (in a numerica
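A minimal random-walk Metropolis-Hastings sketch to accompany the answer above. The target below is an arbitrary illustrative density known only up to a constant, and the proposal q is a symmetric Gaussian random walk, so the ratio q(x|y)/q(y|x) cancels from the acceptance probability; none of these specific choices come from the original post.

import numpy as np

rng = np.random.default_rng(1)

def f_unnorm(x):
    # Unnormalised target density: a two-component Gaussian mixture.
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

def metropolis_hastings(n_iter=50_000, x0=0.0, step=1.0):
    x = x0
    samples = np.empty(n_iter)
    for t in range(n_iter):
        y = x + step * rng.normal()                   # propose y ~ q(. | x)
        alpha = min(1.0, f_unnorm(y) / f_unnorm(x))   # acceptance probability
        if rng.uniform() < alpha:
            x = y                                     # accept the proposal
        samples[t] = x                                # otherwise keep the current state
    return samples

draws = metropolis_hastings()
print("estimated mean of the target:", draws[5_000:].mean())   # discard a burn-in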
52,519
What is the intuition behind the Metropolis-Hastings Algorithm? [duplicate]
Ok. Let's start by addressing your question piecemeal. First, how is $q$, called the jumping distribution, chosen? It's up to you, the modeler. A reasonable assumption, as always, would be a Gaussian, but this may change according to the problem at hand. The choice of the jumping distribution will change how you walk, of course, but it is an arbitrary choice. Now, the core of Metropolis-Hastings is the choice of $\alpha$. You can think of $\alpha$ as the way you control the sampling procedure. The main idea behind MCMC is that in order to estimate an unknown distribution, you 'walk around' the distribution such that the amount of time spent in each location is proportional to the height of the distribution. What $\alpha$ does is ask, 'compared to our previous location, how much higher/lower are we?' If we are higher, then the chance that we move to the proposed point is higher, and if we are lower, then it's more likely that we stay where we are (this refers to Step 3 from the algorithm you reference). The precise functional form of $\alpha$ can be derived; fundamentally, it comes from the condition that we want our final distribution to be stationary. Next, let's discuss your final question. Generally speaking, this notion goes beyond Metropolis-Hastings; you should google 'rejection sampling.' If you've heard of that, that's essentially what this is. This is to ensure that you've fully explored the distribution, and don't get 'stuck' in one place. Hopefully this has given you some greater intuition behind the algorithm. I do recommend spending some time delving into the math; my approach is very casual, focused on interpretability. Though the math can be intimidating, it's the best way to build intuition. Perhaps looking at a software implementation may help. As always, The Elements of Stat. Learning and Bishop are great references, and there are a plethora of online resources you could find to further your understanding. Cheers!
What is the intuition behind the Metropolis-Hastings Algorithm? [duplicate]
Ok. Let's starting by addressing your question piecemeal. First, how is $q$, called the jumping distribution chosen? It's up to you, the model-er. A reasonable assumption, as always, would be a Gaussi
What is the intuition behind the Metropolis-Hastings Algorithm? [duplicate] Ok. Let's starting by addressing your question piecemeal. First, how is $q$, called the jumping distribution chosen? It's up to you, the model-er. A reasonable assumption, as always, would be a Gaussian, but this may change according to the problem at hand. The choice of the jumping distribution will change how you walk, of course, but it an arbitrary choice. Now, the core of Metropolis-Hastings is the choice of $\alpha$. You can think of $\alpha$ as the way you control the sampling procedure. The main idea behind MCMC is that in order to estimation an unknown distribution, you 'walk around' the distribution such that the amount of time spent in each location is proportional to the height of the distribution. What $\alpha$ does is ask, 'compared to our previous location, how much higher/lower are we?' If we are higher, then the chance that we pick to move to the next point is higher, and if we are lower, then it's more likely that we stay where we are (this refers to Step 3 from the algorithm you reference). The precise functional form of $\alpha$ can be derived, fundamentally, it comes from the condition that we want our final distribution to be stationary. Next, let's discuss your final question. Generally speaking, this notion goes beyond Metropolis-Hastings, you should google 'rejection sampling.' If you've heard of that, that's all this is. This is to ensure that you've fully explored the distribution, and don't get 'stuck' in one place. Hopefully this has given you some greater intuition behind the algorithm. I do recommend spending some time delving into the math, my approach is very casual, focused on interpretability. Though the math can be intimidating, it's the best way to build intuition. Perhaps looking at a software implementation may help. As always, The Elements of Stat. Learning and Bishop are great references, and there are a plethora of online resources you could fine to further your understanding. cheers!
What is the intuition behind the Metropolis-Hastings Algorithm? [duplicate] Ok. Let's starting by addressing your question piecemeal. First, how is $q$, called the jumping distribution chosen? It's up to you, the model-er. A reasonable assumption, as always, would be a Gaussi
52,520
the rules of "approximately independent"
The rule you outlined is the rule used for something being approximately independent. It applies when the sample is taken without replacement. Consider that if you sample a population randomly with replacement (so that it's possible to pick the same person twice), the sample is taken completely randomly and fairly, and each measurement does not depend on the previous ones (they are independent). But if you take samples without replacement, each draw affects the next; thus, to call a sample 'approximately independent', the change to the probability of selecting for a trait in the population needs to be negligible. This is done by making sure the percentage of the population represented in your sample is as small as possible. If you pick a red card from a single deck of playing cards and don't put it back, the probability of picking another red is now 25 in 51. But if you combine three decks of cards and take a red one, the new probability is 77 out of 155, which is closer to 50%.
the rules of "approximately independent"
The rule you outlined is the rule used for something being approximately independent. This is for when the sample is taken without replacement. Consider that if you sample a population randomly with r
the rules of "approximately independent" The rule you outlined is the rule used for something being approximately independent. This is for when the sample is taken without replacement. Consider that if you sample a population randomly with replacement (so that it's possible you can pick the same person twice) this sample is taken completely randomly and fairly, and each measurement is not dependent on the previous (they are independent). But if you take samples without replacement, each sample effects the next, thus to call a sample 'approximately independent' the change to the probability of selecting for a trait in a population needs to be as minimal as possible. This is done by making sure the percentage of the population represented in your sample is as small as possible. If you pick a red card from a deck of playing cards and don't put it back the probability of picking another red is now 25 in 51. But if you combine three decks of cards and take a red one now, the new probability is 77 out of 155, closer to 50%.
the rules of "approximately independent" The rule you outlined is the rule used for something being approximately independent. This is for when the sample is taken without replacement. Consider that if you sample a population randomly with r
52,521
the rules of "approximately independent"
It's a widely recognized rule of thumb for sampling without replacement. The intuition behind this rule is that in the 10% case, even after drawing an element of the population, the probability for the next element changes so little that we simply neglect the change. E.g.: let's say we have 100 persons; then on the first draw each person has a probability of $\frac{1}{100} = 0.01$ of being selected. On the fifth draw, each remaining person has a probability of $\frac{1}{96} = 0.01041667$, and on the tenth draw a probability of $\frac{1}{91} = 0.01098901$. So our probabilities are approximately identical. Like many other rules of thumb, the value of 10% is not derived from theory but rather from experience. If you conduct experiments, you could also argue that it should be 5% of the population size, or 7.66%, or 11.01%.
the rules of "approximately independent"
It's a widely recognized rule for sampling without replacement. The intuition behind this rule is that in the 10% case even after drawing an element of the population the probability for the next elem
the rules of "approximately independent" It's a widely recognized rule for sampling without replacement. The intuition behind this rule is that in the 10% case even after drawing an element of the population the probability for the next element does not change so much that we simply neglect it. E.g.: Let's say we have 100 Persons, then the first we ask has a probability to be drawn of $\frac{1}{100} = 0.01$. Person 5 has a probability of $\frac{1}{96} = 0.01041667$ and Person 10 has a probability of $\frac{1}{91} = 0.01098901$. So, our probabilites are approximately identical. Like many other rules, the value of 10% is not derived from theory but rather from experience. If you conduct experiments you could also say it has to be 5% of the population size or 7.66% or 11.01%.
the rules of "approximately independent" It's a widely recognized rule for sampling without replacement. The intuition behind this rule is that in the 10% case even after drawing an element of the population the probability for the next elem
52,522
the rules of "approximately independent"
$\newcommand{\Z}{\mathbf Z}$$\newcommand{\y}{\mathbf y}$This sounds like this is a question about finite populations. Suppose we have a population $\mathscr U = \{1, \dots, N\}$ and each unit has a value $y_i$ associated with it. If we are taking a sample and using design-based inference we will need to consider $$ \pi_{ij} := P(i\in S, j \in S). $$ If sampling was done independently, which here is with replacement, we'll have $\pi_{ij} = \pi_i \pi_j$. But when it isn't then the discrepancy between $\pi_{ij}$ and $\pi_i\pi_j$ can affect estimators of interest. I'll go through an example with estimating the mean of this finite population that shows where terms like that 10% would appear and what their effect is. For instance, suppose $S\subset\mathscr U$ is our sample of size $n$ and we are estimating the population mean $\mu$ via $$ \hat y = \frac 1n \sum_{i \in S}y_i = \frac 1n \sum_{i=1}^N Z_iy_i $$ for $Z_i = \mathbf 1_{i \in S}$. To get the expected value of this estimator we don't have to worry about independence of sampling due to the linearity of expectation: $$ \text{E}(\hat y) = \frac 1n \sum_{i=1}^N y_i \pi_i. $$ If this is a simple random sample we'll have $\pi_i = \frac nN$ so $\hat y$ is unbiased, but even if $\hat y$ is biased it isn't due to the lack of independence between the $Z_i$ but rather due to some units having too much or too little probability associated with them (this could be fixed with weighting, like with Horvitz-Thompson estimation). To get the variance of this estimator I'll use $\y = (y_1,\dots,y_N)^T$ and $\Z = (Z_1,\dots,Z_N)^T$ so $\hat y = \frac 1n \y^T\Z$ and $$ \text{Var}(\hat y) = \frac 1{n^2}\y^T\text{Var}(\Z)\y. $$ $$ \text{Var}(Z_i) = \text{E}(Z_i^2)- \text{E}(Z_i)^2 = \pi_i(1-\pi_i) $$ and $$ \text{Cov}(Z_i,Z_j) = \pi_{ij} - \pi_i\pi_j $$ so now the dependence matters. If the samples are taken independently then $\pi_{ij} - \pi_i\pi_j = 0$ so the covariance matrix of $\Z$ is diagonal and $$ \text{Var}(\hat y) = \frac 1{n^2}\sum_i y_i^2 \pi_i(1-\pi_i). $$ If this was a simple random sample (SRS) with replacement then we'd be in this situation with $\pi_i = \frac nN$ so $$ \text{Var}(\hat y) = \frac 1{nN}\left(1-\frac nN\right)\y^T\y. $$ Suppose now that this is a simple random sample without replacement. That means we'll still have $\pi_i = \frac nN$ but now $$ \pi_{ij} = \frac{n-1}{N-1}\frac nN. $$ Note that the larger $N$ is the closer $\pi_{ij} - \pi_i\pi_j$ is to $0$, and as $N\to\infty$ the samples are closer and closer to independence too. Plugging these values in and rearranging a little, $$ \text{Var}(\Z)_{ij} = \begin{cases} \frac nN\left(1 - \frac nN\right) & i = j \\ -\frac{1}{N-1}\frac nN\left(1 - \frac nN\right) & i\neq j\end{cases}. $$ This means $$ \text{Var}(\hat y) = \frac 1{nN}\left(1 - \frac nN\right) \y^T\left(\frac{N}{N-1}I - \frac 1{N-1}\mathbf 1 \mathbf 1^T\right)\y \\ = \frac 1n\left(1 - \frac nN\right) \cdot \frac 1{N-1}\y^T\left(I - \frac 1{N}\mathbf 1 \mathbf 1^T\right)\y \\ = \frac 1n\left(1 - \frac nN\right) S^2 $$ since $\frac 1{N-1}\y^T\left(I - \frac 1N\mathbf 1\mathbf 1^T\right)\y$ can be recognized as the quadratic form giving the variance $S^2$ of the $N$ population values. One key feature of this is that the closer $1 - \frac nN$ is to $1$ (i.e. the smaller the sampling fraction $\frac nN$), the closer this will be to what we'd have if this was an infinite population. This reflects the dependence decreasing as there are more and more units relative to the sample size. 
$1 - \frac nN$ shrinks the variance due to exhausting the population, but the dependence incorporated in the $\pi_{ij}$ further shrinks the variance, which makes sense since the more coupled the observations are the less variability there will be. The term $1 - \frac nN$ is often called the finite population correction (FPC) and this also shows why. This equips you to answer your question: the 10% is about the FPC so it's getting at this idea, that if we aren't exhausting too much of the population then we'll be closer to independent.
the rules of "approximately independent"
$\newcommand{\Z}{\mathbf Z}$$\newcommand{\y}{\mathbf y}$This sounds like this is a question about finite populations. Suppose we have a population $\mathscr U = \{1, \dots, N\}$ and each unit has a va
the rules of "approximately independent" $\newcommand{\Z}{\mathbf Z}$$\newcommand{\y}{\mathbf y}$This sounds like this is a question about finite populations. Suppose we have a population $\mathscr U = \{1, \dots, N\}$ and each unit has a value $y_i$ associated with it. If we are taking a sample and using design-based inference we will need to consider $$ \pi_{ij} := P(i\in S, j \in S). $$ If sampling was done independently, which here is with replacement, we'll have $\pi_{ij} = \pi_i \pi_j$. But when it isn't then the discrepancy between $\pi_{ij}$ and $\pi_i\pi_j$ can affect estimators of interest. I'll go through an example with estimating the mean of this finite population that shows where terms like that 10% would appear and what their effect is. For instance, suppose $S\subset\mathscr U$ is our sample of size $n$ and we are estimating the population mean $\mu$ via $$ \bar y = \frac 1n \sum_{i \in S}y_i = \frac 1n \sum_{i=1}^N Z_iy_i $$ for $Z_i = \mathbf 1_{i \in S}$. To get the expected value of this estimator we don't have to worry about independence of sampling due to the linearity of expectation: $$ \text{E}(\hat y) = \frac 1n \sum_{i=1}^N y_i \pi_i. $$ If this is a simple random sample we'll have $\pi_i = \frac nN$ so $\hat y$ is unbiased, but even if $\hat y$ is biased it isn't due to the lack of independence between the $Z_i$ but rather due to some units having too much or too little probability associated with them (this could be fixed with weighting, like with Horvitz-Thompson estimation). To get the variance of this estimator I'll use $\y = (y_1,\dots,y_N)^T$ and $\Z = (Z_1,\dots,Z_N)^T$ so $\hat y = \frac 1n \y^T\Z$ and $$ \text{Var}(\hat y) = \frac 1{n^2}\y^T\text{Var}(Z)\y. $$ $$ \text{Var}(Z_i) = \text{E}(Z_i^2)- \text{E}(Z_i)^2 = \pi_i(1-\pi_i) $$ and $$ \text{Cov}(Z_i,Z_j) = \pi_{ij} - \pi_i\pi_j $$ so now the dependence matters. If the samples are taken independently then $\pi_{ij} - \pi_i\pi_j = 0$ so the covariance matrix of $\Z$ is diagonal and $$ \text{Var}(\hat y) = \frac 1{n^2}\sum_i y_i^2 \pi_i(1-\pi_i). $$ If this was a simple random sample (SRS) with replacement then we'd be in this situation with $\pi_i = \frac nN$ so $$ \text{Var}(\hat y) = \frac 1n\left(1-\frac nN\right)\y^T\y. $$ Suppose now that this is a simple random sample without replacement. That means we'll still have $\pi_i = \frac nN$ but now $$ \pi_{ij} = \frac{n-1}{N-1}\frac nN. $$ Note that the larger $n$ and $N$ are the closer this is to $0$, and as $N\to\infty$ the samples are closer and closer to independence too. Plugging these values in and rearranging a little, $$ \text{Var}(\Z)_{ij} = \begin{cases} \frac nN\left(1 - \frac nN\right) & i = j \\ -\frac{1}{N-1}\frac nN\left(1 - \frac nN\right) & i\neq j\end{cases}. $$ This means $$ \text{Var}(\hat y) = \frac 1{nN}\left(1 - \frac nN\right) \y^T\left(\frac{N}{N-1}I - \frac 1{N-1}\mathbf 1 \mathbf 1^T\right)\y \\ = \frac 1n\left(1 - \frac nN\right) \cdot \frac 1{N-1}\y^T\left(I - \frac 1{N}\mathbf 1 \mathbf 1^T\right)\y \\ = \frac 1n\left(1 - \frac nN\right) S^2 $$ since $\frac 1{N-1}\y^T\left(I - \frac 1N\mathbf 1\mathbf 1^T\right)\y$ can be recognized as the quadratic form of the sample variance. One key feature of this is that the smaller $1 - \frac nN$ is, the closer this will be to what we'd have if this was an infinite population. This reflects the independence decreasing as there are more and more units. 
$1 - \frac nN$ shrinks the variance due to exhausting the population, but the dependence incorporated in the $\pi_{ij}$ further shrinks the variance, which makes sense since the more coupled the observations are the less variability there will be. The term $1 - \frac nN$ is often called the finite population correction (FPC) and this also shows why. This equips you to answer your question: the 10% is about the FPC so it's getting at this idea, that if we aren't exhausting too much of the population then we'll be closer to independent.
the rules of "approximately independent" $\newcommand{\Z}{\mathbf Z}$$\newcommand{\y}{\mathbf y}$This sounds like this is a question about finite populations. Suppose we have a population $\mathscr U = \{1, \dots, N\}$ and each unit has a va
52,523
What methods can be used for distribution generation other than GANs?
There are many popular classes of generative models. Autoregressive models: Here we model $\log p(x)$ as a sum of conditional terms $\sum_i \log p(x_i | x_{j < i})$. This group includes most natural language models, Transformers, PixelRNN, PixelCNN, and Wavenet. Can be used for image, text, sound, and almost any other domain. Provides an explicit $p(x)$. Not vulnerable to mode dropping like GANs are. Downside is that it's moderately difficult to train and sample from efficiently. Variational Autoencoders: Here we model $p(x) = \int p(x|z)p(z) dz$. The integral is intractable so we resort to computing a lower bound (called the ELBO). Can be used to model pretty much any modality of data. Upsides are: it's fast, provides an interpretable latent space, and doesn't drop modes. Downsides are: cannot tractably compute $p(x)$, can suffer from posterior collapse. The most powerful VAEs (see VQ-VAE2) are competitive with the best GANs for image generation. Flow-based models: Here we model $\log p(x) = \log p_z(g^{-1}(x)) + \log \left|J_{g^{-1}}(x) \right|$, where $g$ is an invertible mapping from $x$ to some latent space $z$ and $J_{g^{-1}}$ is the Jacobian of its inverse. Upsides are: provides an explicit form of $p(x)$, doesn't drop modes, interpretable latent space, and they are also theoretically appealing. Downsides are: these models tend to be very computationally expensive. Popular models include NICE, Real-NVP, GLOW, and invertible ResNets. Some other interesting lines of research I've seen include Implicit MLE (one sentence summary: minimizing expected distance to the nearest ground-truth point is equivalent to maximizing likelihood) and Generative Latent Optimization (one sentence summary: learn $p(x|z)$ first, decide on $p(z)$ later).
What methods can be used for distribution generation other than GANs?
There are many popular classes of generative models. Autoregressive models: Here we model $\log p(x)$ as a sum of conditional terms $\sum_i \log p(x_i | x_{j < i})$. This group includes most natural l
What methods can be used for distribution generation other than GANs? There are many popular classes of generative models. Autoregressive models: Here we model $\log p(x)$ as a sum of conditional terms $\sum_i \log p(x_i | x_{j < i})$. This group includes most natural language models, Transformers, PixelRNN, PixelCNN, and Wavenet. Can be used for image, text, sound, and almost any other domain. Provides an explicit $p(x)$. Not vulnerable to mode dropping like GANs are. Downside is that it's moderately difficult to train and sample efficiently. Variational Autoencoders: Here we model $p(x) = \int p(x|z)p(z) dz$. The integral is intractable so we resort to computing a lower bound (called the ELBO). Can be used to model pretty much any mode of data. Upsides are: it's fast, provides an interpretable latent space, and doesn't drop modes. Downsides are: cannot tractably compute $p(x)$, can suffer from posterior collapse. The most powerful VAEs (see VQ-VAE2) are competitive with the best GANs for image generation. Flow-based models: Here we model $\log p(x) = \log p_z(g^{-1}(x)) + \log \left|J_{g^{-1}}(x) \right|$, where $g$ is an invertible mapping from $x$ to some latent space $z$ and $J_{g^{-1}}$ is the jacobian of it's inverse. Upsides are: provides an explicit form of $p(x)$, doesn't drop modes, interpretable latent space, and they are also theoretically appealing. Downsides are: these models tend to be very computationally expensive. Popular models include NICE, Real-NVP, GLOW, and invertible ResNets. Some other interesting lines of research I've seen include Implicit MLE (one sentence summary: minimizing expected distance to the nearest ground-truth point is equivalent to maximizing likelihood) and Generative Latent Optimization (one sentence summary: learn $p(x|z)$ first, decide on $p(z)$ later).
What methods can be used for distribution generation other than GANs? There are many popular classes of generative models. Autoregressive models: Here we model $\log p(x)$ as a sum of conditional terms $\sum_i \log p(x_i | x_{j < i})$. This group includes most natural l
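A tiny change-of-variables sketch for the flow-based family described above, using a one-dimensional affine map as the "flow" (an assumption for illustration, not a model from the answer): log p(x) = log p_z(g^{-1}(x)) + log |d g^{-1}/dx|.

import numpy as np
from scipy import stats

a, b = 2.0, 1.0                       # parameters of the invertible map g(z) = a*z + b

def log_p_flow(x):
    z = (x - b) / a                   # g^{-1}(x)
    log_det = -np.log(abs(a))         # log |d g^{-1}/dx|
    return stats.norm.logpdf(z) + log_det

# g pushes N(0,1) forward to N(b, a^2), so the flow density should match it exactly:
x = 0.7
print(log_p_flow(x), stats.norm(loc=b, scale=abs(a)).logpdf(x))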
52,524
What methods can be used for distribution generation other than GANs?
The other large class of models that can do that is Variational Autoencoders. Basically, they explicitly try to parameterize some specified probability distribution (up to you what it is) with one half of your neural network, so that the parameters match your specification AND the samples from that distribution can be decoded by the other half to reconstruct your inputs. In a successfully trained model, you can throw out the encoder and feed the decoder some arbitrary latent encodings; unlike a standard autoencoder, the latent space is continuous. Note that this method is not mutually exclusive with a GAN, and indeed such a hybrid often generates nicer-looking synthetic decodings.
What methods can be used for distribution generation other than GANs?
The other large class of models that can do that are Variational Autoencoders. Basically, they explicitly try to parameterize some specified probability distribution (up to you what it is) with one h
What methods can be used for distribution generation other than GANs? The other large class of models that can do that are Variational Autoencoders. Basically, they explicitly try to parameterize some specified probability distribution (up to you what it is) with one half of your neural network so that the parameters match your specification AND that the samples from that distribution can be decoded by the other half to reconstruct your inputs. In a successfully trained model, you can throw out the encoder and feed it some arbitrary latent encodings; unlike a standard Autoencoder, the latent space is contiguous. Note that this method is not mutually exclusive with a GAN, and indeed such a hybrid often generates nicer-looking synthetic decodings.
What methods can be used for distribution generation other than GANs? The other large class of models that can do that are Variational Autoencoders. Basically, they explicitly try to parameterize some specified probability distribution (up to you what it is) with one h
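A compact sketch of the quantity a VAE maximises, the ELBO E_q[log p(x|z)] - KL(q(z|x) || p(z)), with a Gaussian encoder and a standard normal prior; the single-sample estimate and the toy linear "decoder" are illustrative assumptions, not the architecture from the answer.

import numpy as np

rng = np.random.default_rng(3)

def elbo_single_sample(x, mu, log_var, decode):
    # One-sample ELBO estimate given encoder outputs (mu, log_var) for input x.
    std = np.exp(0.5 * log_var)
    z = mu + std * rng.normal(size=mu.shape)        # reparameterisation trick
    x_hat = decode(z)
    recon = -0.5 * np.sum((x - x_hat) ** 2)         # log p(x|z) up to a constant (unit-variance Gaussian)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)  # analytic KL to N(0, I)
    return recon - kl

W = rng.normal(size=(4, 2))                         # made-up linear decoder weights
print(elbo_single_sample(x=rng.normal(size=4), mu=np.zeros(2),
                         log_var=np.zeros(2), decode=lambda z: W @ z))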
52,525
What methods can be used for distribution generation other than GANs?
Yes, there are. Goodfellow's GAN tutorial lists a taxonomy of such methods. Some example approaches that fall into the maximum likelihood category: autoregressive methods, which use $p_{\theta}(x) = \prod_{i} p(x_i|(x_j)_{j < i})$ (example: PixelRNN); variational methods, which use $p_{\theta}(x) = \int p(x|z) p(z)\, dz$, where $p(x|z)$ is learnable and the prior distribution on the latent variables $p(z)$ is something simple (a Gaussian, for example) (example: the variational autoencoder); and flow models, which are like nonlinear independent component analysis. If you want more, just google 'deep generative models' - there are several courses on these methods, for example from Stanford.
What methods can be used for distribution generation other than GANs?
Yes, there are. Goodfellow's GAN tutorial lists a taxonomy of such methods. Some example approaches that fall into Maximum Likelihood category: autoregressive methods. Use $p_{\theta}(x) = \prod_{i}
What methods can be used for distribution generation other than GANs? Yes, there are. Goodfellow's GAN tutorial lists a taxonomy of such methods. Some example approaches that fall into Maximum Likelihood category: autoregressive methods. Use $p_{\theta}(x) = \prod_{i} p(x_i|(x_j)_{j<1:i})$. Example: Pixel-Recurrent NN. variational methods. Use $p_{\theta}(x) = \int p(x|z) p(z) dz$ Where $p(x|z)$ is learnable, prior distribution on latent variables $p(z)$ is something simple (Gaussian for example) Example: variational autoencoder. flow models. These are like nonlinear Independent component analysis. If you want more, just google 'deep generative models' - there are several courses on these methods, for example from Stanford.
What methods can be used for distribution generation other than GANs? Yes, there are. Goodfellow's GAN tutorial lists a taxonomy of such methods. Some example approaches that fall into Maximum Likelihood category: autoregressive methods. Use $p_{\theta}(x) = \prod_{i}
52,526
Polynomial Chebyshev Regression versus multi-linear regression
I think you have misunderstood the motivation for Chebyshev polynomials. Chebyshev polynomials are not used for statistical modelling at all --- their purpose is quite different. Chebyshev polynomials are a good and convenient solution to the classic numerical interpolation problem. Suppose I want to approximate a general smooth function $f(x)$ on a bounded interval with a polynomial of a given order. How can I choose the polynomial $g(x)$ to minimize the maximum difference $|f(x)-g(x)|$ on the interval? Then the Chebyshev polynomial on the interval is a good approximation to the optimal solution for $g(x)$. I gave a simple example of this in Figure 1 of Smyth (1998). The Chebyshev polynomial is defined by an interpolation problem. It interpolates the function $f(x)$ exactly at $n+1$ points, where $n$ is the order of the polynomial. The Chebyshev idea is to choose the $n+1$ points so as to minimize the maximum discrepancy between $f(x)$ and $g(x)$ on the interval. Chebyshev polynomials have little to do with statistics. Their purpose in numerical analysis is to derive well-behaved and efficiently computed approximations to smooth but complex mathematical functions. They are used to approximate frequently used mathematical functions that don't have algebraic closed forms. If you dig down into the numerical implementation of functions like pnorm or qnorm in R, you may well find a Chebyshev polynomial lurking somewhere deep down in the low-level C or Fortran code. To everyday users of the functions, the Chebyshev approximation will be invisible, and so it should be. The linear regression problem is quite different: different data, different scientific purpose, and different optimality criterion. In linear regression, we have response values $y_i$ and one or more covariates $x_i$. We have data values, not a smooth function. The covariate values $x_i$ are given, we do not get to choose them. We seek to fit the $y_i$ in a least squares sense rather than a mini-max sense. We are seeking to separate signal from noise rather than to reproduce every decimal point in $y$. We are looking for a parsimonious and interpretable representation of the signal that has a scientific meaning. In the linear regression context, there is no interpolating Chebyshev polynomial because we don't observe $y$ at the $x$-values necessary to define it. It is possible to use standardized Chebyshev polynomials as basis functions for polynomial regression, but, in the least squares regression context, any polynomial basis gives rise to the same fitted values and residuals, just with different coefficients. Chebyshev polynomials have no advantage here --- defining instead polynomials that are orthogonal with respect to the observed $x_i$ is more usual and more useful (see my article referenced below). All this has nothing to do with dataset size or with computing power. There are very efficient numerical implementations of multiple linear regression. Like many statisticians, I find myself fitting linear regressions every day for problems as small as $n=3$ or as large as $n=10^9$. And that's just on my PC. References Smyth, G. K. (1998). Polynomial approximation. In: Encyclopedia of Biostatistics, P. Armitage and T. Colton (eds.), Wiley, London, pp. 3425-3429. http://www.statsci.org/smyth/pubs/PolyApprox-Preprint.pdf [Very brief treatment of Chebyshev polynomials, but has the advantage of treating polynomial interpolation and polynomial regression in one article.] Stewart, G. W. (1996). Afternotes on Numerical Analysis. 
SIAM, Philadelphia. [My favourite reference on polynomial interpolation!]
Polynomial Chebyshev Regression versus multi-linear regression
I think you have misunderstood the motivation for Chebyshev polynomials. Chebyshev polynomials are not used for statistical modelling at all --- their purpose is quite different. Chebyshev polynomials
Polynomial Chebyshev Regression versus multi-linear regression I think you have misunderstood the motivation for Chebyshev polynomials. Chebyshev polynomials are not used for statistical modelling at all --- their purpose is quite different. Chebyshev polynomials are a good and convenient solution to the classic numerical interpolation problem. Suppose I want to approximate a general function smooth $f(x)$ on a bounded interval with a polynomial of a given order. How can I choose the polynomial $g(x)$ to minimize the maximum difference $|f(x)-g(x)|$ on the interval? Then the Chebyshev polynomial on the interval is a good approximation to the optimal solution for $g(x)$. I gave a simple example of this in Figure 1 of Smyth (1998). The Chebyshev polynomial is defined by an interpolation problem. It interpolates the function $f(x)$ exactly at $n+1$ points, where $n$ is the order of the polynomial. The Chebyshev idea is to choose the $n+1$ points so to minimize the maximum discrepancy between $f(x)$ and $g(x)$ on the interval. Chebyshev polynomials have little to do with statistics. Their purpose in numerical analysis is to derive well-behaved and efficiently computed approximations to smooth but complex mathematical functions. They are used to approximate frequently used mathematical functions that don't have algebraic closed forms. If you dig down into the numerical implement of functions like pnorm or qnorm in R, you may well find a Chebyshev polynomial lurking somewhere deep down in the low-level C or Fortran code. To everyday users of the functions, the Chebyshev approximation will be invisible, and so it should be. The linear regression problem is quite different: different data, different scientific purpose, and different optimality function. In linear regression, we have response values $y_i$ and one or more covariates $x_i$. We have data values, not a smooth function. The covariate values $x_i$ are given, we do not get to choose them. We seek to fit the $y_i$ in a least squares sense rather than a mini-max sense. We are seeking to separate signal from noise rather than to reproduce every decimal point in $y$. We are looking for a parsimonious and interpretable representation of the signal that has a scientific meaning. In the linear regression context, there is no interpolating Chebyshev polynomial because we don't observe $y$ at the $x$-values necessary to define it. It is possible to use standardized Chebyshev polynomials as basis functions for polynomial regression, but, in the least squares regression context, any polynomial basis gives rise to the same fitted values and residuals, just with different coefficients. Chebyshev polynomials have no advantage here --- defining instead polynomials that are orthogonal with respect to the observed $x_i$ is more usual and more useful (see my article referenced below). All this has nothing to do with dataset size or with computing power. There are very efficient numerical implementations of multiple linear regression. Like many statisticians, I find myself fitting linear regressions every day for problems as small as $n=3$ or as large as $n=10^9$. And that's just on my PC. References Smyth, G. K. (1998). Polynomial approximation. In: Encyclopedia of Biostatistics, P. Armitage and T. Colton (eds.), Wiley, London, pp. 3425-3429. http://www.statsci.org/smyth/pubs/PolyApprox-Preprint.pdf [Very brief treatment of Chebyshev polynomials, but has the advantage of treating polynomial interpolation and polynomial regression in one article.] 
Stewart, G. W. (1996). Afternotes on Numerical Analysis. SIAM, Philadelphia. [My favourite reference on polynomial interpolation!]
Polynomial Chebyshev Regression versus multi-linear regression I think you have misunderstood the motivation for Chebyshev polynomials. Chebyshev polynomials are not used for statistical modelling at all --- their purpose is quite different. Chebyshev polynomials
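A small sketch (with made-up data) of the point in the answer above: in least-squares regression a Chebyshev polynomial basis and a raw power basis of the same degree give identical fitted values; only the coefficients differ.

import numpy as np
from numpy.polynomial import chebyshev, polynomial

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, size=60)
y = np.sin(3 * x) + 0.2 * rng.normal(size=60)
deg = 4

X_cheb = chebyshev.chebvander(x, deg)       # Chebyshev basis T_0, ..., T_4
X_pow = polynomial.polyvander(x, deg)       # power basis 1, x, ..., x^4

beta_c, *_ = np.linalg.lstsq(X_cheb, y, rcond=None)
beta_p, *_ = np.linalg.lstsq(X_pow, y, rcond=None)

print(np.allclose(X_cheb @ beta_c, X_pow @ beta_p))   # True: identical fitted values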
52,527
What is the difference between first order difference and moving average?
Suppose you have a set of time-series data values $x_1,...,x_n$. For some value $k<n$ in the series, the corresponding moving average over $k$ periods up to time $t$ is: $$\bar{x}_{t}^{(k)} = \frac{1}{k} \sum_{i=0}^{k-1} x_{t-i}.$$ As you change the last time period $t$ you move the average to be an average over different values (but always over $k$ consecutive values). This means that you get a running sequence of moving averages $\bar{x}_{k}^{(k)}, ..., \bar{x}_{n}^{(k)}$ where each average is over $k$ consecutive values. This "moving average" generally gives you a sense of how the series is changing, but in a "smoothed" fashion. Now, first-order differences are a different thing to this. The (backwards) first-order differences are the values: $$\nabla x_t = x_t - x_{t-1}.$$ By taking the first-order differences at different time points you get a sequence of differences $\nabla x_2, ..., \nabla x_n$ that show how the time-series is changing each period. The moving average and the first-order difference are two different things, but it is useful to note that they can be interacted with each other in useful ways. For example, if we take the first-order difference of the moving average, we get: $$\begin{equation} \begin{aligned} \nabla \bar{x}_{t}^{(k)} &= \bar{x}_{t}^{(k)} - \bar{x}_{t-1}^{(k)} \\[6pt] &= \frac{1}{k} \sum_{i=0}^{k-1} x_{t-i} - \frac{1}{k} \sum_{i=0}^{k-1} x_{t-i-1} \\[6pt] &= \frac{1}{k} \Bigg( \sum_{i=0}^{k-1} x_{t-i} - \sum_{i=1}^{k} x_{t-i} \Bigg) \\[6pt] &= \frac{1}{k} ( x_{t} - x_{t-k} ) \\[6pt] &= \frac{1}{k} \nabla_k x_{t}. \\[6pt] \end{aligned} \end{equation}$$ We can see that this is $1/k$ times the $k$-level difference of the series values (which is a useful demonstration of the fact that using the moving average causes the changes in the values to be "smoothed").
What is the difference between first order difference and moving average?
Suppose you have a set of time-series data values $x_1,...,x_n$. For some value $k<n$ in the series, the corresponding moving average over $k$ periods up to time $t$ is: $$\bar{x}_{t}^{(k)} = \frac{1
What is the difference between first order difference and moving average? Suppose you have a set of time-series data values $x_1,...,x_n$. For some value $k<n$ in the series, the corresponding moving average over $k$ periods up to time $t$ is: $$\bar{x}_{t}^{(k)} = \frac{1}{k} \sum_{i=0}^{k-1} x_{t-i}.$$ As you change the last time period $t$ you move the average to be an average over different values (but always over $k$ consecutive values). This means that you get a running sequence of moving averages $\bar{x}_{k}^{(k)}, ..., \bar{x}_{n}^{(k)}$ where each average is over $k$ consecutive values. This "moving average" generally gives you a sense of how the series is changing, but in a "smoothed" fashion. Now, first-order differences are a different thing to this. The (backwards) first-order differences are the values: $$\nabla x_t = x_t - x_{t-1}.$$ By taking the first-order differences at different time points you get a sequence of differences $\nabla x_2, ..., \nabla x_n$ that show how the time-series is changing each period. The moving average and the first-order difference are two different things, but it is useful to note that they can be interacted with each other in useful ways. For example, if we take the first-order difference of the moving average, we get: $$\begin{equation} \begin{aligned} \nabla \bar{x}_{t}^{(k)} &= \bar{x}_{t,k} - \bar{x}_{t-1,k} \\[6pt] &= \frac{1}{k} \sum_{i=0}^{k-1} x_{t-i} - \frac{1}{k} \sum_{i=0}^{k-1} x_{t-i-1} \\[6pt] &= \frac{1}{k} \Bigg( \sum_{i=0}^{k-1} x_{t-i} - \sum_{i=1}^{k} x_{t-i} \Bigg) \\[6pt] &= \frac{1}{k} ( x_{t} - x_{t-k} ) \\[6pt] &= \frac{\nabla_k}{k} x_{t}. \\[6pt] \end{aligned} \end{equation}$$ We can see that this is $1/k$ times the $k$-level difference of the series values (which is a useful demonstration of the fact that using the moving average causes the changes in the values to be "smoothed").
What is the difference between first order difference and moving average? Suppose you have a set of time-series data values $x_1,...,x_n$. For some value $k<n$ in the series, the corresponding moving average over $k$ periods up to time $t$ is: $$\bar{x}_{t}^{(k)} = \frac{1
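A short numpy sketch (with an arbitrary simulated series) verifying the identity derived above: the first difference of a k-period moving average equals (x_t - x_{t-k}) / k.

import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=50).cumsum()       # a random-walk "time series"
k = 5

ma = np.convolve(x, np.ones(k) / k, mode="valid")   # moving averages over k consecutive values
diff_of_ma = np.diff(ma)                            # first differences of the moving average
k_lag_diff = (x[k:] - x[:-k]) / k                   # (x_t - x_{t-k}) / k

print(np.allclose(diff_of_ma, k_lag_diff))          # True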
52,528
What is the difference between first order difference and moving average?
First I talk about the first-order difference. First-order difference: to run most time series regressions, stationarity is an essential condition. If your data are not stationary, then we use differencing. When we subtract the previous observation (the lag) from the present observation, the result is called the first-order difference. Whether you run an MA, AR, or ARMA model, you should first ensure stationarity. Moving average: a moving-average (MA) model is used when your time series suffers from autocorrelation. It is basically a regression technique in which the series is modeled using the lagged error terms. If it uses the first lag of the error term we call it MA(1), if the second we call it MA(2), and so on.
What is the difference between first order difference and moving average?
First I talk about first order difference First order difference: To run most time series regressions stationary is essential condition. If your data is not stationary then we use differencing.When w
What is the difference between first order difference and moving average? First I talk about first order difference First order difference: To run most time series regressions stationary is essential condition. If your data is not stationary then we use differencing.When we deduct present observation from it's lag it's called first order difference. To run whether MA or AR or ARMA you should first ensure stationary. Moving Average: Moving average is used when your time series suffers from auto-correlation. It is basically a regression technique in which you regress the error term of the model with its lag. If you regress it with first lag we call it MA(1). If second we call MA(2) and so on.
What is the difference between first order difference and moving average? First I talk about first order difference First order difference: To run most time series regressions stationary is essential condition. If your data is not stationary then we use differencing.When w
52,529
Minimum sample size for Spearman's correlation and Kendall's Tau b
For the purposes of a hypothesis test, there are two related approaches to finding an optimal sample size that are viable if you're willing to assume bivariate normality. Power To estimate the minimal sample size at a given confidence level ($1-\alpha$) and power ($1-\beta$), we can use a modification of the equation for calculating the power of a Pearson correlation ($r$): $$n=3+\bigg(\frac{z_{\alpha/2}+z_{\beta}}{z(r_{1})-z(r_{0})}\bigg)^2$$ Where the numerator contains the standard normal quantiles corresponding to the specified $\alpha$ and $\beta$, respectively. The denominator takes the Fisher Z transformed estimated values of the expected ($r_1$) and null ($r_0$) correlation (Bonett, 2016). For a null hypothesis of no correlation, $r_0 = 0$, though this need not be the case, as the formula accommodates different values for a more specific null. For the Kendall coefficient ($\tau$), we use a monotonic transform as per Fieller, Hartley, & Pearson (1957) to modify the formula slightly and solve for $n$: $$n=4+.437\bigg(\frac{z_{\alpha/2}+z_{\beta}}{z(\tau_{b1})-z(\tau_{b0})}\bigg)^2$$ For the Spearman coefficient ($\rho$), following the transform in Bonett & Wright (2000), the formula is: $$n=3+\bigg(1+\frac{\rho^2_{s1}}{2}\bigg)\bigg(\frac{z_{\alpha/2}+z_{\beta}}{z(\rho_{s1})-z(\rho_{s0})}\bigg)^2$$ The minimum sample size will therefore depend upon the expected level of $\tau$ or $\rho$ to reject the null, at a specified confidence level and power. More details on power can be found in Looney (2018). Precision Say you have a reasonable idea what range of values your $\tau$ or $\rho$ 'should' be in. In this case, you may want to estimate the sample size to achieve a particular confidence interval (CI) width (e.g. $\rho=.3\pm.1$) for precision. Provided the specified CI (of width $w$) does not cross zero, you've effectively guaranteed sufficient power for a standard null hypothesis significance test as well. Bonett & Wright (2000) established a two-step method to achieve this, again using a monotonic transform assuming bivariate normality. They provide a handy table of minimum sample sizes required across a range of different correlation values, CI widths, and alphas on page 26. But I'll outline the rough approach below. First, calculate an initial imprecise approximation of the required sample size ($n_0$) at a given estimate of the Pearson, Spearman, or Kendall correlation (denoted more generally below as $\hat{r}$) and a desired confidence interval width (set at $w$). This is given by: $$n_0 = b+4c^2 (1-\hat{r}^2)^2 \bigg( \frac{z_{\alpha/2}}{w}\bigg)^2$$ Where $b$ is equal to 3 for the Pearson and Spearman correlations and 4 for the Kendall; and $c$ is equal to $1$ for the Pearson, $\sqrt{1+\hat{r}^2/2}$ for the Spearman, and $\sqrt{.437}$ for the Kendall. Second, use a Fisher Z type transform to find the confidence intervals of $\hat{r}$ given the initial sample size approximation of $n_0$ $$\text{Lower Limit}=\frac{[\exp(2L_1)-1]}{[\exp(2L_1)+1]}$$ $$\text{Upper Limit}=\frac{[\exp(2L_2)-1]}{[\exp(2L_2)+1]}$$ Where $$L_1=.5[\ln(1+\hat{r})-\ln(1-\hat{r})]-\frac{c(z_{\alpha/2})}{\sqrt{n_0-b}}$$ $$L_2=.5[\ln(1+\hat{r})-\ln(1-\hat{r})]+\frac{c(z_{\alpha/2})}{\sqrt{n_0-b}}$$ The idea is to now subtract the lower limit from the upper limit to find the estimated width ($w_0$) of the confidence interval of $\hat{r}$ given the initial sample size estimate $n_0$. 
This $n_0$ sample size will typically not be exactly correct, but we can use this information to adjust to find a better estimate by: $$n = (n_0-b)\bigg(\frac{w_0}{w}\bigg)^2+b$$ Note however that this estimate may perform poorly at the upper ends of the correlation (say, $\hat{r}>.8$). In this case, a more conservative estimate is given in Doug Bonett's lecture notes as: $$n = n_0\bigg(\frac{w_0^2}{w^2}\bigg)$$ In practice, I've found this gives approximately the same values at the lower ends of possible $\hat{r}$ values too, so maybe just default to that. Problems As stated a couple of times here - the critical assumption to all of these theoretical estimates is that both variables are normally distributed. This may be problematic. Frequently researchers will want to use $\tau$ and $\rho$ specifically because their variables are not normally distributed (but are instead skewed, ordinal, etc). The extent to which these methods will work in those circumstances is uncertain. Simulating data that resemble your own to empirically estimate power or precision across a range of possible sample sizes may be a reasonable work-around, but I'm not sure. References Bonett, D.G. (2016).Sample Size Planning for Behavioral Science Research. Retrieved from http://people.ucsc.edu/~dgbonett/sample.html. Bonett, D. G., & Wright, T. A. (2000). Sample size requirements for estimating Pearson, Kendall and Spearman correlations. Psychometrika, 65(1), 23-28. Fieller, E. C., Hartley, H. O., & Pearson, E. S. (1957). Tests for rank correlation coefficients. I. Biometrika, 44(3/4), 470-481. Looney, S. W. (2018). Practical Issues in Sample Size Determination for Correlation Coefficient Inference. SM Journal of Biometrics & Biostatistics, 3(1), 1027.
Minimum sample size for Spearman's correlation and Kendall's Tau b
For the purposes of a hypothesis test, there are two related approaches to finding an optimal sample size that are viable if you're willing to assume bivariate normality. Power To estimate minimal sa
Minimum sample size for Spearman's correlation and Kendall's Tau b For the purposes of a hypothesis test, there are two related approaches to finding an optimal sample size that are viable if you're willing to assume bivariate normality. Power To estimate minimal sample size at a given confidence level ($1-\alpha$) and power ($1-\beta$), we can use a modification of the equation for calculating the power of a Pearson correlation ($r$): $$n=3+\bigg(\frac{z_{\alpha/2}+z_{\beta}}{z(r_{1})-z(r_{0})}\bigg)^2$$ Where the numerator represents the boundaries of a normal distribution at a specified $\alpha$ and $\beta$, respectively. The denominator takes the Fisher Z transformed estimated values of the expected ($r_1$) and null ($r_0$) correlation (Bonett, 2016). For a null hypothesis of no correlation $r_0 = 0$, though this need not be the case as the formula accommodates different values for a more specific null. For the Kendall coefficient ($\tau$), we use a monotonic transform as per Fieller, Hartley, & Pearson (1957) to modify the formula slightly and solve for n: $$n=4+.437\bigg(\frac{z_{\alpha/2}+z_{\beta}}{z(\tau_{b1})-z(\tau_{b0})}\bigg)^2$$ For the Spearman coefficient ($\rho$), following the transform in Bonett & Wright (2000), the formula is: $$n=3+\bigg(1+\frac{\rho^2_{s1}}{2}\bigg)\bigg(\frac{z_{\alpha/2}+z_{\beta}}{z(\rho_{s1})-z(\rho_{s0})}\bigg)^2$$ The minimum sample size will therefore depend upon the expected level of $\tau$ or $\rho$ to reject the null, at a specified confidence level and power. More details on power can be found in Looney (2018). Precision Say you have a reasonable idea what range of values your $\tau$ or $\rho$ 'should' be in. In this case, you may want to estimate the sample size to achieve a particular confidence interval (CI) width (e.g. $\rho=.3\pm.1$) for precision. Provided the CI width ($w$) specified does not cross zero, you've effectively guaranteed sufficient power for a standard null hypothesis significance test as well. Bonett & Wright (2000) established a two-step method to achieve this, again using a monotonic transform assuming bivariate normality. They provide a handy table of minimum sample sizes required across a range of different correlation values, CI widths, and alphas on page 26. But I'll outline the rough approach below. First, calculate an initial imprecise approximation of the required sample size ($n_0$) at a given estimate of the Pearson, Spearman, or Kendall correlation (denoted more generally below as $\hat{r}$) and a desired confidence interval width (set at $w$). This given by: $$n_0 = b+4c^2 (1-\hat{r}^2)^2 \bigg( \frac{z_{\alpha/2}}{w}\bigg)^2$$ Where $b$ is equal to 3 for the Pearson and Spearman correlations and 4 for the Kendall; and $c$ is equal to $1$ for the Pearson, $\sqrt{(1+\hat{r}^2)/2}$ for the Spearman, and $\sqrt{.437}$ for the Kendall. Second, use a Fisher Z type transform to find the confidence intervals of $\hat{r}$ given the initial sample size approximation of $n_0$ $$\text{Lower Limit}=\frac{[\exp(2L_1)-1]}{[\exp(2L_1)+1]}$$ $$\text{Upper Limit}=\frac{[\exp(2L_2)-1]}{[\exp(2L_2)+1]}$$ Where $$L_1=.5[\ln(1+\hat{r})-\ln(1-\hat{r})]-\frac{c(z_{\alpha/2})}{\sqrt{n_0-b}}$$ $$L_2=.5[\ln(1+\hat{r})-\ln(1-\hat{r})]+\frac{c(z_{\alpha/2})}{\sqrt{n_0-b}}$$ The idea is to now subtract the lower limit from the upper limit to find the estimated width ($w_0$) of the confidence interval of $\hat{r}$ given the initial sample size estimate $n_0$. 
This $n_0$ sample size will typically not be exactly correct, but we can use this information to adjust to find a better estimate by: $$n = (n_0-b)\bigg(\frac{w_0}{w}\bigg)^2+b$$ Note however that this estimate may perform poorly at the upper ends of the correlation (say, $\hat{r}>.8$). In this case, a more conservative estimate is given in Doug Bonett's lecture notes as: $$n = n_0\bigg(\frac{w_0^2}{w^2}\bigg)$$ In practice, I've found this gives approximately the same values at the lower ends of possible $\hat{r}$ values too, so maybe just default to that. Problems As stated a couple of times here - the critical assumption to all of these theoretical estimates is that both variables are normally distributed. This may be problematic. Frequently researchers will want to use $\tau$ and $\rho$ specifically because their variables are not normally distributed (but are instead skewed, ordinal, etc). The extent to which these methods will work in those circumstances is uncertain. Simulating data that resemble your own to empirically estimate power or precision across a range of possible sample sizes may be a reasonable work-around, but I'm not sure. References Bonett, D.G. (2016).Sample Size Planning for Behavioral Science Research. Retrieved from http://people.ucsc.edu/~dgbonett/sample.html. Bonett, D. G., & Wright, T. A. (2000). Sample size requirements for estimating Pearson, Kendall and Spearman correlations. Psychometrika, 65(1), 23-28. Fieller, E. C., Hartley, H. O., & Pearson, E. S. (1957). Tests for rank correlation coefficients. I. Biometrika, 44(3/4), 470-481. Looney, S. W. (2018). Practical Issues in Sample Size Determination for Correlation Coefficient Inference. SM Journal of Biometrics & Biostatistics, 3(1), 1027.
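To make the power formulas above concrete, here is a rough R sketch (the helper functions and default settings are my own, not from the cited papers) that plugs a chosen alpha, power, and expected correlation into the Spearman and Kendall versions:

fisher_z <- function(r) 0.5 * log((1 + r) / (1 - r))

# minimum n for Spearman's rho at a given alpha and power
n_spearman <- function(rho1, rho0 = 0, alpha = 0.05, power = 0.80) {
  za <- qnorm(1 - alpha / 2)   # z_{alpha/2}
  zb <- qnorm(power)           # z_{beta}
  ceiling(3 + (1 + rho1^2 / 2) * ((za + zb) / (fisher_z(rho1) - fisher_z(rho0)))^2)
}

# minimum n for Kendall's tau-b at a given alpha and power
n_kendall <- function(tau1, tau0 = 0, alpha = 0.05, power = 0.80) {
  za <- qnorm(1 - alpha / 2)
  zb <- qnorm(power)
  ceiling(4 + 0.437 * ((za + zb) / (fisher_z(tau1) - fisher_z(tau0)))^2)
}

n_spearman(0.3)   # expected rho of .3 against a null of 0
n_kendall(0.2)    # expected tau of .2 against a null of 0

The precision-based (confidence interval width) approach works the same way: code the two-step formulas above and iterate once from the initial $n_0$.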
Minimum sample size for Spearman's correlation and Kendall's Tau b For the purposes of a hypothesis test, there are two related approaches to finding an optimal sample size that are viable if you're willing to assume bivariate normality. Power To estimate minimal sa
52,530
Is age categorical or quantitative or both?
Generally speaking, you should treat age as a quantitative variable, assuming you have the actual ages and not age brackets. There are several reasons for this. Perhaps most importantly, if you use age as a categorical variable, you typically would need $c-1$ variables to represent the age categories, $c$, in a regression model, and would lose degrees of freedom for each of these categories. This results in less powerful tests. On the other hand, using a single quantitative/numeric variable age requires only a single variable and a single degree of freedom. Age as a quantitative variable contains more information than as a categorical variable. If you were to represent age as a categorical variable, then you are doing away with the natural ordering of the ages you'd have by leaving it as a quantitative variable. In other words, a model with categorical ages is unable to tell that 70 years old is closer to 80 years old than 5 years old (because 70 comes 10 before 80, but if you modeled age as a category, there is no information that indicates to your model that category A -- which might represent your first age category -- comes before category C, which might represent another age category). You will get different coefficient estimates for a quantitative age model than a qualitative age model because the models make different assumptions. There are several other important reasons why it's generally not a good idea to treat your quantitative measures as categories. Frank Harrell has a good list of additional reasons here. There is no need to treat age as a category to estimate risks associated with a subject's age. You can simply estimate the risk at any given age by multiplying the estimated coefficient for age by the subject's age (in years) and exponentiating. Update to your "EDIT 1" What I don't understand, ..., is why would I add a quadratic or cubic term to my model? What will that achieve? First, let me say that I'm glad to see you've decided to use age as a quantitative variable. I think all of us applaud you on that decision. The reason we've suggested the possibility of trying quadratic or cubic terms to your model is that age may not have a strictly linear relationship to your outcome. Without loss of generality to your exact problem and model, I think it's easier and more instructive to think about this in terms of a simple linear regression model. Let's assume you had a simple linear regression model in which you regressed weight on age. If you included age as a linear term (e.g. $\beta_{age} \times X_{age}$) in your model, then you are assuming that weight increases steadily as as person ages, no matter how old a person gets (see Linear Model graph below). This may not be the appropriate functional relationship. Instead, in reality, what typically happens is that a person gains weight as they age, but as they enter old age they begin to lose weight. This would not be well modeled as a linear function, because of this drop-off in weight in old-age. Instead, if one included a quadratic term (e.g. $\beta_{age} \times X_{age}^2$) in addition to the linear term, this could better capture the non-linearity and quadratic nature of the relationship between weight and age. Using an $X_{age}^2$ term allows the regression model to predict an increasing weight as one ages up to a point, and then the model will start to predict a decrease in weight as one ages (see Quadratic Model graph below). 
If you simply included a linear term, your model would not be able to capture this drop-off in weight in old age. Instead, your model would simply continue predicting increases in weight well into old age. The same concept applies to your model. Besides including quadratic or cubic terms in your model, you may also want to explore the use of splines or generalized additive models (GAMs) to model these types of non-linear relationships. (Graphs: Linear Model; Quadratic Model.) PS. Keep in mind I just made up these graphs. They don't really represent real weights and ages in real life. For example, in real life someone doesn't weigh only 8 pounds upon reaching 100, but I think you get the general idea.
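As a small illustration of the quadratic-term idea in R (the weights are simulated, not real data), compare a linear and a quadratic fit of weight on age:

set.seed(123)
age <- 1:90
# made-up weights that rise with age and then decline in old age
weight <- 30 + 3.5 * age - 0.03 * age^2 + rnorm(length(age), sd = 8)

fit_linear    <- lm(weight ~ age)
fit_quadratic <- lm(weight ~ age + I(age^2))

anova(fit_linear, fit_quadratic)           # the quadratic term clearly improves the fit
plot(age, weight)
lines(age, fitted(fit_linear), lty = 2)    # keeps increasing forever
lines(age, fitted(fit_quadratic), lty = 1) # captures the drop-off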
Is age categorical or quantitative or both?
Generally speaking, you should treat age as a quantitative variable, assuming you have the actual ages and not age brackets. There are several reasons for this. Perhaps most importantly, if you use
Is age categorical or quantitative or both? Generally speaking, you should treat age as a quantitative variable, assuming you have the actual ages and not age brackets. There are several reasons for this. Perhaps most importantly, if you use age as a categorical variable, you typically would need $c-1$ variables to represent the age categories, $c$, in a regression model, and would lose degrees of freedom for each of these categories. This results in less powerful tests. On the other hand, using a single quantitative/numeric variable age requires only a single variable and a single degree of freedom. Age as a quantitative variable contains more information than as a categorical variable. If you were to represent age as a categorical variable, then you are doing away with the natural ordering of the ages you'd have by leaving it as a quantitative variable. In other words, a model with categorical ages is unable to tell that 70 years old is closer to 80 years old than 5 years old (because 70 comes 10 before 80, but if you modeled age as a category, there is no information that indicates to your model that category A -- which might represent your first age category -- comes before category C, which might represent another age category). You will get different coefficient estimates for a quantitative age model than a qualitative age model because the models make different assumptions. There are several other important reasons why it's generally not a good idea to treat your quantitative measures as categories. Frank Harrell has a good list of additional reasons here. There is no need to treat age as a category to estimate risks associated with a subject's age. You can simply estimate the risk at any given age by multiplying the estimated coefficient for age by the subject's age (in years) and exponentiating. Update to your "EDIT 1" What I don't understand, ..., is why would I add a quadratic or cubic term to my model? What will that achieve? First, let me say that I'm glad to see you've decided to use age as a quantitative variable. I think all of us applaud you on that decision. The reason we've suggested the possibility of trying quadratic or cubic terms to your model is that age may not have a strictly linear relationship to your outcome. Without loss of generality to your exact problem and model, I think it's easier and more instructive to think about this in terms of a simple linear regression model. Let's assume you had a simple linear regression model in which you regressed weight on age. If you included age as a linear term (e.g. $\beta_{age} \times X_{age}$) in your model, then you are assuming that weight increases steadily as as person ages, no matter how old a person gets (see Linear Model graph below). This may not be the appropriate functional relationship. Instead, in reality, what typically happens is that a person gains weight as they age, but as they enter old age they begin to lose weight. This would not be well modeled as a linear function, because of this drop-off in weight in old-age. Instead, if one included a quadratic term (e.g. $\beta_{age} \times X_{age}^2$) in addition to the linear term, this could better capture the non-linearity and quadratic nature of the relationship between weight and age. Using an $X_{age}^2$ term allows the regression model to predict an increasing weight as one ages up to a point, and then the model will start to predict a decrease in weight as one ages (see Quadratic Model graph below). 
If you simply included a linear term, your model would not be able to capture this drop-off in weight in old age. Instead, your model will simply continue predicting increases in weight well into old age. The same concept applies to your model. Besides including quadratic or linear terms in your model, you may also want to explore the use of splines or generalized additive models (GAMs) to model these types of non-linear relationships. Linear Model: Quadratic Model: PS. Keep in mind I just made up these graphs. They don't really represent real weights and ages in real life. For example, in real life someone doesn't only weigh 8 pounds upon reaching 100, but I think you get the general idea.
Is age categorical or quantitative or both? Generally speaking, you should treat age as a quantitative variable, assuming you have the actual ages and not age brackets. There are several reasons for this. Perhaps most importantly, if you use
52,531
Is age categorical or quantitative or both?
I do not think it's a very good idea to categorize age in this way. This is not only for statistical reasons, but also because the outcome has only limited value: your output would state that a 39-year-old person has a different risk than a 40-year-old person simply because they fall on different sides of a category boundary. I suspect the general interpretation - that age lowers the risk - might get lost this way. There are other ways to get more information about the variable 'age'. You could add a quadratic or cubic term to test whether the effect of age is linear. 2) If I want to control for age when measuring the impact a particular drug has on patient time-outcome, should this be quantitative or categorical? You can estimate more than one model. First, fit a model in which you only include the drugs as explanatory variables. Then fit another model where you add variables that might also have an effect, and check whether the effects of the drugs have changed. Are they still the same? In that case, the drug itself is the likely causal reason. If they become a lot smaller, the drug effect was largely correlated with the effects of the other variables.
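A sketch of that two-model comparison in R, assuming for illustration a Cox model for the time-to-event outcome (the data and variable names here are simulated and hypothetical):

library(survival)
set.seed(42)
n <- 200
d <- data.frame(age = round(runif(n, 20, 80)), drug = rbinom(n, 1, 0.5))
d$time   <- rexp(n, rate = exp(-3 + 0.5 * d$drug + 0.02 * d$age))  # made-up hazard
d$status <- rbinom(n, 1, 0.8)                                      # some censoring

fit_drug     <- coxph(Surv(time, status) ~ drug, data = d)
fit_drug_age <- coxph(Surv(time, status) ~ drug + age, data = d)

# compare the drug coefficient before and after adjusting for age
coef(fit_drug)["drug"]
coef(fit_drug_age)["drug"]

If the drug coefficient barely moves after adjustment, age is not driving the apparent drug effect; if it shrinks a lot, much of the apparent effect was shared with age.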
Is age categorical or quantitative or both?
I do not think it's a very good idea to categorize age in this way. This is not only for statistical reasons, but also because the outcome has only limited value: your output states, that a 39 year ol
Is age categorical or quantitative or both? I do not think it's a very good idea to categorize age in this way. This is not only for statistical reasons, but also because the outcome has only limited value: your output states, that a 39 year old person has a lesser risk than a 38 or 40 year old person. I guess the general interpretation - that age lowers the risk - might get lost this way. There are other ways to get more information about the variable 'age'. You could add a quadratic or cubic term to test if the effect of age is linear. 2) If I want to control for age when measuring the impact a particular drug has on patient time-outcome, should this be quantitative or categorical? You can estimate more then one model. First, fit a model in which you only include the drugs as explanatory variables. Then you fit another model where you add variables that might also have an effect, and check if the effects of the drugs have changed. Are they still the same? In that case, then the drug is the causal reason. If they get a lot lower, it just correlated with the effect of other variables.
Is age categorical or quantitative or both? I do not think it's a very good idea to categorize age in this way. This is not only for statistical reasons, but also because the outcome has only limited value: your output states, that a 39 year ol
52,532
How to use press statistic for model selection
You calculate PRESS on a model trained on $n$ values to get an idea of its out-of-sample performance, by leaving out one sample at a time. So while you indeed end up with $n$ models to determine the statistic, you eventually use the original model trained on all $n$ values. Since you are only leaving out a single observation at a time (like in LOOCV), the addition of the 'last' sample has minimal influence on the final model. Because of this you can safely use PRESS to compare models, even though the actual models you are comparing were not used to calculate it. If you have a larger sample size, you could consider a form of nested cross-validation, comparing models with the inner cross-validation and evaluating the performance of the 'winning' model on the outer cross-validation loop.
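A minimal R sketch of what PRESS computes, done the slow way by actually refitting the model $n$ times (for illustration only; software normally uses a shortcut):

set.seed(1)
d <- data.frame(x1 = rnorm(30), x2 = rnorm(30))
d$y <- 1 + 2 * d$x1 - d$x2 + rnorm(30)

press <- 0
for (i in seq_len(nrow(d))) {
  fit_i  <- lm(y ~ x1 + x2, data = d[-i, ])    # refit without observation i
  pred_i <- predict(fit_i, newdata = d[i, ])   # predict the held-out point
  press  <- press + (d$y[i] - pred_i)^2
}
press

# the statistic is then used to judge the model fitted on ALL n observations
fit_full <- lm(y ~ x1 + x2, data = d)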
How to use press statistic for model selection
You calculate PRESS on a model trained on $n$ values to get an idea of its out-of-sample performance, by leaving out one sample at a time. So while you indeed end up with $n$ models to determine the s
How to use press statistic for model selection You calculate PRESS on a model trained on $n$ values to get an idea of its out-of-sample performance, by leaving out one sample at a time. So while you indeed end up with $n$ models to determine the statistic, you eventually use the original model trained on all $n$ values. Since you are only leaving out a single observation at a time (like in LOOCV), the addition of the 'last' sample has minimal influence on the final model. Because of this you can safely use PRESS to compare models, even though the actual models you are comparing were not used to calculate it. If you have a larger sample size, you could consider a form of nested cross-validation, comparing models with the inner cross-validation and evaluating the performance of the 'winning' model on the outer cross-validation loop.
How to use press statistic for model selection You calculate PRESS on a model trained on $n$ values to get an idea of its out-of-sample performance, by leaving out one sample at a time. So while you indeed end up with $n$ models to determine the s
52,533
How to use press statistic for model selection
"My confusion lies in the fact that a new regression equation (hence a new model) is estimated each time a data point is dropped" This usually isn't the case, at least for standard [ridge-] regression models; you don't actually create a new model each time, but instead can work out what the output of that model would be by looking at the "hat matrix" $\matrix{H} = \matrix{X}[\matrix{X}^T\matrix{X}]^{-1}\matrix{X}^T$ You can think of the hat matrix as providing information about the stability of the model to each datapoint ("leverage"), so you can view this as being a representation of the quality of the model. If the model is highly sensitive to the particular sample of data to which it has been fitted (points have high leverage), it's PRESS statistic will be much higher than the resubstitution MSE (the MSE on the training data). However, cross-validation (including PRESS) is best thought of as a means of evaluating the performance of a method of constructing a model, rather than directly of the model itself. So PRESS is an estimate of the generalisation performance of a model trained using that procedure on the entire dataset.
How to use press statistic for model selection
"My confusion lies in the fact that a new regression equation (hence a new model) is estimated each time a data point is dropped" This usually isn't the case, at least for standard [ridge-] regression
How to use press statistic for model selection "My confusion lies in the fact that a new regression equation (hence a new model) is estimated each time a data point is dropped" This usually isn't the case, at least for standard [ridge-] regression models; you don't actually create a new model each time, but instead can work out what the output of that model would be by looking at the "hat matrix" $\matrix{H} = \matrix{X}[\matrix{X}^T\matrix{X}]^{-1}\matrix{X}^T$ You can think of the hat matrix as providing information about the stability of the model to each datapoint ("leverage"), so you can view this as being a representation of the quality of the model. If the model is highly sensitive to the particular sample of data to which it has been fitted (points have high leverage), it's PRESS statistic will be much higher than the resubstitution MSE (the MSE on the training data). However, cross-validation (including PRESS) is best thought of as a means of evaluating the performance of a method of constructing a model, rather than directly of the model itself. So PRESS is an estimate of the generalisation performance of a model trained using that procedure on the entire dataset.
How to use press statistic for model selection "My confusion lies in the fact that a new regression equation (hence a new model) is estimated each time a data point is dropped" This usually isn't the case, at least for standard [ridge-] regression
52,534
Why is my p-value correlated to difference between means in two sample tests?
As you said, the p-value is uniformly distributed under the null hypothesis. That is, if the null hypothesis is really true, then upon repeated experiments we expect to find a fully random, flat distribution of p-values between [0, 1]. Consequently, a frequentist p-value says nothing about how likely the null hypothesis is to be true, since any p-value is equally probable under the null. What you're looking at is the distribution of p-values under an alternative hypothesis. Depending on the formulation of this hypothesis, the resulting p-values can have any non-Uniform, positively skewed distribution between [0, 1]. But this doesn't tell you anything about the probability of the null. The reason is that the p-value expresses the probability of the evidence under the null hypothesis, i.e. $p(D|H_0)$, whereas you want to know $p(H_0|D)$. These two are related by Bayes' rule: $$ p(H_0|D) = \frac{p(D|H_0)p(H_0)}{p(D|H_0)p(H_0)+p(D|\neg H_0)p(\neg H_0)} $$ This means that in order to calculate the probability you're interested in, you need to know and take into account the prior probability of the null being true ($p(H_0)$), the prior probability of the null being false ($p(\neg H_0)$) and the probability of the data given that the null is false ($p(D|\neg H_0)$). This is the purview of Bayesian, rather than frequentist statistics. As for the correlation you observed: as I said above the p-values will be positively skewed under the alternative hypothesis. How skewed depends what that alternative hypothesis is. In the case of a two-sample t-test, the more you increase the difference between your population means, the more skewed the p-values will become. This reflects the fact that you're making your samples increasingly more different from what is plausible under the null, and so by definition the resulting p-values (reflecting the probability of the data under the null) must decrease.
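A quick simulation makes both distributions visible: roughly uniform p-values when the null is true, and p-values piled up near zero under an alternative (R sketch, two-sample t-tests on made-up normal data):

set.seed(7)
nsim <- 5000
p_null <- replicate(nsim, t.test(rnorm(30), rnorm(30))$p.value)             # equal means
p_alt  <- replicate(nsim, t.test(rnorm(30), rnorm(30, mean = 0.5))$p.value) # shifted mean

par(mfrow = c(1, 2))
hist(p_null, breaks = 20, main = "Null true: uniform")
hist(p_alt,  breaks = 20, main = "Alternative true: piled up near 0")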
Why is my p-value correlated to difference between means in two sample tests?
As you said, the p-value is uniformly distributed under the null hypothesis. That is, if the null hypothesis is really true, then upon repeated experiments we expect to find a fully random, flat distr
Why is my p-value correlated to difference between means in two sample tests? As you said, the p-value is uniformly distributed under the null hypothesis. That is, if the null hypothesis is really true, then upon repeated experiments we expect to find a fully random, flat distribution of p-values between [0, 1]. Consequently, a frequentist p-value says nothing about how likely the null hypothesis is to be true, since any p-value is equally probable under the null. What you're looking at is the distribution of p-values under an alternative hypothesis. Depending on the formulation of this hypothesis, the resulting p-values can have any non-Uniform, positively skewed distribution between [0, 1]. But this doesn't tell you anything about the probability of the null. The reason is that the p-value expresses the probability of the evidence under the null hypothesis, i.e. $p(D|H_0)$, whereas you want to know $p(H_0|D)$. These two are related by Bayes' rule: $$ p(H_0|D) = \frac{p(D|H_0)p(H_0)}{p(D|H_0)p(H_0)+p(D|\neg H_0)p(\neg H_0)} $$ This means that in order to calculate the probability you're interested in, you need to know and take into account the prior probability of the null being true ($p(H_0)$), the prior probability of the null being false ($p(\neg H_0)$) and the probability of the data given that the null is false ($p(D|\neg H_0)$). This is the purview of Bayesian, rather than frequentist statistics. As for the correlation you observed: as I said above the p-values will be positively skewed under the alternative hypothesis. How skewed depends what that alternative hypothesis is. In the case of a two-sample t-test, the more you increase the difference between your population means, the more skewed the p-values will become. This reflects the fact that you're making your samples increasingly more different from what is plausible under the null, and so by definition the resulting p-values (reflecting the probability of the data under the null) must decrease.
Why is my p-value correlated to difference between means in two sample tests? As you said, the p-value is uniformly distributed under the null hypothesis. That is, if the null hypothesis is really true, then upon repeated experiments we expect to find a fully random, flat distr
52,535
Why is my p-value correlated to difference between means in two sample tests?
Why would you expect anything else? You don't need a simulation to know this is going to happen. Look at the formula for the t-statistic: $t = \frac{\bar{x_1} - \bar{x_2} }{\sqrt{ \frac{s^2_1}{n_1} + \frac{s^2_2}{n_2} }}$ Obviously if you increase the true difference of means you expect $\bar{x_1} - \bar{x_2}$ will be larger. You are holding the variance and sample size constant, so the t-statistic must be larger and thus the p-value smaller. I think you are confusing a philosophical rule about hypothesis testing with a mathematical fact. If the null hypothesis is true, you would expect a higher p-value. This has to be true in order for hypothesis testing to make any sense.
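Plugging a few made-up numbers into that formula shows the point directly:

t_stat <- function(diff, s1 = 1, s2 = 1, n1 = 30, n2 = 30) {
  diff / sqrt(s1^2 / n1 + s2^2 / n2)
}
diffs  <- c(0.2, 0.5, 1.0)        # increasing observed mean differences
t_vals <- sapply(diffs, t_stat)
t_vals                            # t grows with the difference
2 * pt(-abs(t_vals), df = 58)     # so the two-sided p-value shrinks (df = n1 + n2 - 2)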
Why is my p-value correlated to difference between means in two sample tests?
Why would you expect anything else? You don't need a simulation to know this is going to happen. Look at the formula for the t-statistic: $t = \frac{\bar{x_1} - \bar{x_2} }{\sqrt{ \frac{s^2_1}{n_1} +
Why is my p-value correlated to difference between means in two sample tests? Why would you expect anything else? You don't need a simulation to know this is going to happen. Look at the formula for the t-statistic: $t = \frac{\bar{x_1} - \bar{x_2} }{\sqrt{ \frac{s^2_1}{n_1} + \frac{s^2_2}{n_2} }}$ Obviously if you increase the true difference of means you expect $\bar{x_1} - \bar{x_2}$ will be larger. You are holding the variance and sample size constant, so the t-statistic must be larger and thus the p-value smaller. I think you are confusing a philosophical rule about hypothesis testing with a mathematical fact. If the null hypothesis is true, you would expect a higher p-value. This has to be true in order for hypothesis testing to make any sense.
Why is my p-value correlated to difference between means in two sample tests? Why would you expect anything else? You don't need a simulation to know this is going to happen. Look at the formula for the t-statistic: $t = \frac{\bar{x_1} - \bar{x_2} }{\sqrt{ \frac{s^2_1}{n_1} +
52,536
Why is my p-value correlated to difference between means in two sample tests?
You should indeed not interpret the p-value as a probability that the null hypothesis is true. However, a higher p-value does relate to stronger support for the null hypothesis. Considering p-values as a random variable You could consider p-values as a transformation of your statistic. See for instance the secondary x-axis in the graph below in which the t-distribution is plotted with $\nu=99$. Here you see that a larger p-value corresponds to a smaller t-statistic (and also, for a two-sided test, there are two t-statistic associated with one p-value). Distribution of p-values $P(\text{p-value}|\mu_1-\mu_2)$ When we plot the distribution density of the p-values, parameterized by $\mu_1-\mu_2$, you see that higher p-values are less likely for $\mu_1-\mu_2 \neq 0$. # compute CDF for a given observed p-value and parameter ncp=mu_1-mu_2 qp <- function(p,ncp) { from_p_to_t <- qt(1-p/2,99) # transform from p-value to t-statistic 1-pt(from_p_to_t,99,ncp=ncp) + pt(-from_p_to_t,99,ncp=ncp) # compute CDF for t-statistic (two-sided) } qp <- Vectorize(qp) # plotting density function p <- seq(0,1,0.001) plot(-1,-1, xlim=c(0,1), ylim=c(0,9), xlab = "p-value", ylab = "probability density") # use difference between CDF to plot PDF lines(p[-1]-0.001/2,(qp(p,0)[-1]-qp(p,0)[-1001])/0.001,type="l") lines(p[-1]-0.001/2,(qp(p,1)[-1]-qp(p,1)[-1001])/0.001,type="l", lty=2) lines(p[-1]-0.001/2,(qp(p,2)[-1]-qp(p,2)[-1001])/0.001,type="l", lty=3) The bayes factor, the ratio of the likelihood for different hypotheses is larger for larger p-values. And you could consider higher p-values as stronger support. Depending on the alternative hypothesis this strong support is reached at different p-values. The more extreme the alternative hypothesis, or the larger the sample of the test, the smaller the p-value needs to be in order to be strong support. Illustration See below an example with simulations for two different situations. You sample $X \sim N(\mu_1,2)$ and $X \sim N(\mu_2,2)$ Let in one case $\mu_i \sim N(i,1)$ such that $\mu_2-\mu_1 \sim N(1,\sqrt{2})$ the other case $\mu_i \sim N(0,1)$ such that $\mu_2-\mu_1 \sim N(0,\sqrt{2})$. In the first case you can see that the probability for $\mu_1-\mu_2$ is most likely to be around 1, also for higher p-values. This is because the marginal probability $\mu_1-\mu_2 \sim N(1,\sqrt{2})$ is already close to 1 to start with. So a high p-value will be support for the hypothesis $\mu_1-\mu_2$ but is is not strong enough. In the second case you can see that $\mu_1-\mu_2$ is indeed most likely to be around zero when the p-value is large. So, you could consider it as some sort of support for the null hypothesis. So in any of the cases a high p-value is support for the null hypothesis. But, it should not be considered as the probability that the hypothesis is true. This probability needs to be considered case by case. You can evaluate it when you know the joint distribution of the mean and the p-value (that is, you know something like a prior probability for the distribution of the mean). Sidenote: When you use the p-value in this way, to indicate support for the null hypothesis, then you are actually not using this value in the way that is was intended for. Then you may better just report the t-statistic and present something like a plot of a likelihood function (or bayes factor).
Why is my p-value correlated to difference between means in two sample tests?
You should indeed not interpret the p-value as a probability that the null hypothesis is true. However, a higher p-value does relate to stronger support for the null hypothesis. Considering p-values
Why is my p-value correlated to difference between means in two sample tests? You should indeed not interpret the p-value as a probability that the null hypothesis is true. However, a higher p-value does relate to stronger support for the null hypothesis. Considering p-values as a random variable You could consider p-values as a transformation of your statistic. See for instance the secondary x-axis in the graph below in which the t-distribution is plotted with $\nu=99$. Here you see that a larger p-value corresponds to a smaller t-statistic (and also, for a two-sided test, there are two t-statistic associated with one p-value). Distribution of p-values $P(\text{p-value}|\mu_1-\mu_2)$ When we plot the distribution density of the p-values, parameterized by $\mu_1-\mu_2$, you see that higher p-values are less likely for $\mu_1-\mu_2 \neq 0$. # compute CDF for a given observed p-value and parameter ncp=mu_1-mu_2 qp <- function(p,ncp) { from_p_to_t <- qt(1-p/2,99) # transform from p-value to t-statistic 1-pt(from_p_to_t,99,ncp=ncp) + pt(-from_p_to_t,99,ncp=ncp) # compute CDF for t-statistic (two-sided) } qp <- Vectorize(qp) # plotting density function p <- seq(0,1,0.001) plot(-1,-1, xlim=c(0,1), ylim=c(0,9), xlab = "p-value", ylab = "probability density") # use difference between CDF to plot PDF lines(p[-1]-0.001/2,(qp(p,0)[-1]-qp(p,0)[-1001])/0.001,type="l") lines(p[-1]-0.001/2,(qp(p,1)[-1]-qp(p,1)[-1001])/0.001,type="l", lty=2) lines(p[-1]-0.001/2,(qp(p,2)[-1]-qp(p,2)[-1001])/0.001,type="l", lty=3) The bayes factor, the ratio of the likelihood for different hypotheses is larger for larger p-values. And you could consider higher p-values as stronger support. Depending on the alternative hypothesis this strong support is reached at different p-values. The more extreme the alternative hypothesis, or the larger the sample of the test, the smaller the p-value needs to be in order to be strong support. Illustration See below an example with simulations for two different situations. You sample $X \sim N(\mu_1,2)$ and $X \sim N(\mu_2,2)$ Let in one case $\mu_i \sim N(i,1)$ such that $\mu_2-\mu_1 \sim N(1,\sqrt{2})$ the other case $\mu_i \sim N(0,1)$ such that $\mu_2-\mu_1 \sim N(0,\sqrt{2})$. In the first case you can see that the probability for $\mu_1-\mu_2$ is most likely to be around 1, also for higher p-values. This is because the marginal probability $\mu_1-\mu_2 \sim N(1,\sqrt{2})$ is already close to 1 to start with. So a high p-value will be support for the hypothesis $\mu_1-\mu_2$ but is is not strong enough. In the second case you can see that $\mu_1-\mu_2$ is indeed most likely to be around zero when the p-value is large. So, you could consider it as some sort of support for the null hypothesis. So in any of the cases a high p-value is support for the null hypothesis. But, it should not be considered as the probability that the hypothesis is true. This probability needs to be considered case by case. You can evaluate it when you know the joint distribution of the mean and the p-value (that is, you know something like a prior probability for the distribution of the mean). Sidenote: When you use the p-value in this way, to indicate support for the null hypothesis, then you are actually not using this value in the way that is was intended for. Then you may better just report the t-statistic and present something like a plot of a likelihood function (or bayes factor).
Why is my p-value correlated to difference between means in two sample tests? You should indeed not interpret the p-value as a probability that the null hypothesis is true. However, a higher p-value does relate to stronger support for the null hypothesis. Considering p-values
52,537
Transforming data with positive, negative, and zero values
Yes, you can add a constant and then take logs. There are many ways to transform data. There is nothing inherently invalid about doing this, but very often such transformations are misguided. It is not necessary for the dependent variable to be normally distributed. The assumption about normality concerns the residuals, not the response variable itself. If the residuals are not plausibly normally distributed then of course some transformation may be warranted. One major downside of such transformations is that they make sensible model interpretation much more difficult.
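If you do go this route, the shifted log is a one-liner in R; note that the shift constant is arbitrary, which is part of why interpretation becomes harder:

x <- c(-12, -3, 0, 4, 250)    # made-up data with negatives and a zero
shift <- abs(min(x)) + 1      # smallest integer shift making all values positive
log(x + shift)

# whether any transformation of the response is warranted is judged from the
# residuals of the fitted model (e.g. plot() on the lm object), not from the raw response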
Transforming data with positive, negative, and zero values
Yes, you can add a constant and then take a logs. There are many ways to transform data. There is nothing inherently invalid about doing this, but very often such transformations are misguided. It is
Transforming data with positive, negative, and zero values Yes, you can add a constant and then take a logs. There are many ways to transform data. There is nothing inherently invalid about doing this, but very often such transformations are misguided. It is not necessary for the dependent variable to be normally distributed. The assumption about normality concerns the residuals, not the response variable itself. If the residuals are not plausibly normally distributed then of course some transformation may be warranted. One major downside of such transformations is that it makes sensible model interpretation much more difficult.
Transforming data with positive, negative, and zero values Yes, you can add a constant and then take a logs. There are many ways to transform data. There is nothing inherently invalid about doing this, but very often such transformations are misguided. It is
52,538
Transforming data with positive, negative, and zero values
In the case of negative values, you can use the PowerTransformer(method='yeo-johnson') method from sklearn. It is capable of handling positive and negative values, as well as values of zero. The skewness (a measure of asymmetry) of the data should decrease substantially. As with any transform, you should fit on your training data only, then apply the fitted transform to the test and validation data:
pt = PowerTransformer(method='yeo-johnson')
pt.fit(X_train)                  ## Fit the PT on training data only
X_train = pt.transform(X_train)  ## Then apply it to all splits
X_test = pt.transform(X_test)
X_val = pt.transform(X_val)
Example showing the skewness of the data decreases using PowerTransformer:
import pandas as pd
from scipy.stats import skew
from sklearn.preprocessing import PowerTransformer
...
# find numeric features in your dataset to transform
numeric_feats = X.dtypes[X.dtypes != "object"].index
# calculate skew of all numeric features
skewed_feats = X[numeric_feats].apply(lambda x: skew(x.dropna())).sort_values(ascending=False)
# convert to dataframe for easier processing
skewness = pd.DataFrame({'Skew': skewed_feats})
# print performance before transform
print("Pre: There are {} skewed numerical features to Box Cox transform".format(skewness.shape[0]))
print("Pre", abs(skewness.Skew).mean())
# transform data
pt = PowerTransformer(method='yeo-johnson').fit(X)
X = pd.DataFrame(pt.transform(X), index=X.index, columns=X.columns)
numeric_feats = X.dtypes[X.dtypes != "object"].index
skewed_feats = X[numeric_feats].apply(lambda x: skew(x.dropna())).sort_values(ascending=False)
skewness = pd.DataFrame({'Skew': skewed_feats})
# print performance after transform
print("Post: There are {} skewed numerical features to Box Cox transform".format(skewness.shape[0]))
print("Post", abs(skewness.Skew).mean())
Example results:
Pre: There are 17 skewed numerical features to Box Cox transform
Pre 3.514581911418132
Post: There are 8 skewed numerical features to Box Cox transform
Post 1.977383868458546
This method was also designed to handle heteroscedastic data, which is data that has a non-uniform variance across x/y values. Reference: PowerTransformer from sklearn
Transforming data with positive, negative, and zero values
In the case of negative values, you can use the PowerTransformer(method='yeo-johnson') method from sklearn. It is capable of handling positive and negative values, also values of zero. The skewness (m
Transforming data with positive, negative, and zero values In the case of negative values, you can use the PowerTransformer(method='yeo-johnson') method from sklearn. It is capable of handling positive and negative values, also values of zero. The skewness (measure of normality) of the data should decrease substantially. As with any transform, you should use fit and transform on your training data, then use transform only on the test and validation dataset. pt.fit(X_train) ## Fit the PT on training data X_train = pt.transform(X_train) ## Then apply on all data X_test = pt.transform(X_test) X_val = pt.transform(X_val) Example showing skewness of data decreases using PowerTransformer: from sklearn.preprocessing import PowerTransformer ... # find numeric features in your dataset to transform numeric_feats = X.dtypes[X.dtypes != "object"].index # calculate skew of all numeric features skewed_feats = X[numeric_feats].apply(lambda x: skew(x.dropna())).sort_values(ascending=False) # convert to dataframe for easier processing skewness = pd.DataFrame({'Skew' :skewed_feats}) # print performance before transform print("Pre: There are {} skewed numerical features to Box Cox transform".format(skewness.shape[0])) print("Pre", abs(skewness.Skew).mean()) # transform data pt = PowerTransformer(method='yeo-johnson').fit(X) X = pd.DataFrame(pt.transform(X), index=X.index, columns=X.columns) numeric_feats = X.dtypes[X.dtypes != "object"].index skewed_feats = X[numeric_feats].apply(lambda x: skew(x.dropna())).sort_values(ascending=False) skewness = pd.DataFrame({'Skew' :skewed_feats}) # print performance after transform print("Post: There are {} skewed numerical features to Box Cox transform".format(skewness.shape[0])) print("Post", abs(skewness.Skew).mean()) Example results: Pre: There are 17 skewed numerical features to Box Cox transform Pre 3.514581911418132 Post: There are 8 skewed numerical features to Box Cox transform Post 1.977383868458546 This method was also designed to handle heteroscedastic data, which is data that has a non-uniform variance across x/y values. Example: Reference: PowerTransformer from sklearn
Transforming data with positive, negative, and zero values In the case of negative values, you can use the PowerTransformer(method='yeo-johnson') method from sklearn. It is capable of handling positive and negative values, also values of zero. The skewness (m
52,539
Transforming data with positive, negative, and zero values
In principle, transformations possible with variables that may be negative, zero or positive include: (1) $\text{sign}(x) \log(1 + |x|)$, which conveniently preserves the sign of its argument (including mapping $0$ to $0$) while behaving like $\log x$ for $x \gg 0$ and like $-\log(- x)$ for $x \ll 0$; (2) the inverse hyperbolic sine $\text{asinh}(x)$, or more generally $\text{asinh}(kx)$ for some $k > 0$, which also preserves the sign of its argument; (3) cube roots $x^{1/3}$, or more generally odd integer roots (fifth roots, seventh roots, and so on), which also preserve the sign of their arguments. In practice, you may need to compute the root from the product of $\text{sign}(x)$ and the chosen root of the absolute value $|x|$. Depending on your software, the root of a negative or even a zero value may be returned as missing, NA, or NaN if the routine for general roots works by first pushing the argument through code for a logarithm. As already pointed out, whether any variable is normally distributed is not itself important for regression, or even an ideal condition (often described as an assumption). However, pulling in outliers may be helpful for analysis of highly skewed or heavy-tailed distributions, at least for exploratory data analysis. Practical examples may include profit (negative values meaning loss) of many firms, or any variable that is a change or difference. The psychology and sociology of what a readership knows about can be important in practice in choosing one of these transformations, assuming that one such is a good idea. Thus many readers will have learned about cube roots in secondary school, but inverse hyperbolic sines are more likely to seem exotic or mysterious.
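All three transformations are one-liners in R; note the sign/absolute-value trick for the cube root, since raising a negative number to the power 1/3 returns NaN:

x <- c(-100, -1, 0, 1, 100)

sign(x) * log1p(abs(x))     # signed log: maps 0 to 0 and preserves sign
asinh(x)                    # inverse hyperbolic sine (base R)
sign(x) * abs(x)^(1/3)      # cube root computed safely for negative values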
Transforming data with positive, negative, and zero values
In principle, transformations possible with variables that may be negative, zero or positive include $\text{sign}(x) \log(1 + |x|)$, which conveniently preserves the sign of its argument (including
Transforming data with positive, negative, and zero values In principle, transformations possible with variables that may be negative, zero or positive include $\text{sign}(x) \log(1 + |x|)$, which conveniently preserves the sign of its argument (including mapping $0$ to $0$) while behaving like $\log x$ for $x \gg 0$ and like $-\log(- x)$ for $x \ll 0$. Inverse hyperbolic sine $\text{asinh}(x)$ or more generally $\text{asinh}(kx)$ for some $k > 0$, which also preserves the sign of its argument cube roots $x^{1/3}$ or more generally odd integer roots, such as also fifth roots, seventh roots, and so on, which also preserve the sign of their arguments. In practice, you may need to compute this from the product of $\text{sign}(x)$ and the chosen root of its absolute value $|x|$. Depending on your software, the root of a negative or even a zero value may be returned as missing, NA, or NaN if in practice the routine for general roots depends on first pushing the argument through code for a logarithm. As already pointed out, whether any variable is normally distributed it is not itself important for regression, or even an ideal condition (often described as an assumption). However, pulling in outliers may be helpful for analysis of highly skewed or heavy-tailed distributions, at least for exploratory data analysis. Practical examples may include profit (negative values meaning loss) of many firms or any variable that is a change or difference. The psychology and sociology of what a readership knows about can be important in practice in choosing one of these transformations, assuming that one such is a good idea. Thus many readers should have learned about cube roots in secondary school but inverse hyperbolic sines are more likely to seem exotic or mysterious.
Transforming data with positive, negative, and zero values In principle, transformations possible with variables that may be negative, zero or positive include $\text{sign}(x) \log(1 + |x|)$, which conveniently preserves the sign of its argument (including
52,540
Regression predictions show far less variance than expected
Your training data - just as any other data - is a mixture of signal and noise. In modeling, we try to capture the signal, since the noise is by definition not predictable, except in a probabilistic sense. predict.zeroinfl() by default predicts the expected response, i.e., the signal. Since the noise is mainly variation, stripping this out means that the signal is far less variable. You may be interested in a prediction-interval, i.e., intervals that contain future observations with a prespecified probability. These of course (need to) account for noise. You can use predict.zeroinfl(..., type="prob") for this, which should give you predicted probabilities for various possible outcomes.
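A short sketch, assuming the model was fitted with zeroinfl() from the pscl package (shown here on the example data shipped with that package rather than your own):

library(pscl)
data("bioChemists", package = "pscl")

m <- zeroinfl(art ~ fem + ment | fem, data = bioChemists)

# expected counts (the 'signal'): these vary far less than the raw outcome
head(predict(m, type = "response"))

# full predicted distribution over counts 0, 1, 2, ... for each observation,
# which is what you need to build prediction intervals
pr <- predict(m, type = "prob")
round(pr[1, 1:6], 3)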
Regression predictions show far less variance than expected
Your training data - just as any other data - is a mixture of signal and noise. In modeling, we try to capture the signal, since the noise is by definition not predictable, except in a probabilistic s
Regression predictions show far less variance than expected Your training data - just as any other data - is a mixture of signal and noise. In modeling, we try to capture the signal, since the noise is by definition not predictable, except in a probabilistic sense. predict.zeroinfl() by default predicts the expected response, i.e., the signal. Since the noise is mainly variation, stripping this out means that the signal is far less variable. You may be interested in a prediction-interval, i.e., intervals that contain future observations with a prespecified probability. These of course (need to) account for noise. You can use predict.zeroinfl(..., type="prob") for this, which should give you predicted probabilities for various possible outcomes.
Regression predictions show far less variance than expected Your training data - just as any other data - is a mixture of signal and noise. In modeling, we try to capture the signal, since the noise is by definition not predictable, except in a probabilistic s
52,541
Is a kernel function basically just a mapping?
My initial understanding is that a kernel is essentially just a mapping into a higher dimension. No. A kernel is a function that calculates the dot product in the image of this mapping. It can be thought of as defining a dot product by using the dot product from another space, where the mapping into this (often higher-dimensional) space is implicit.
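A tiny numerical illustration in R, using the degree-2 polynomial kernel in two dimensions, where the implicit feature map happens to be known in closed form:

x <- c(1, 2); y <- c(3, 4)

k   <- function(a, b) (sum(a * b))^2                           # kernel: squared dot product
phi <- function(a) c(a[1]^2, a[2]^2, sqrt(2) * a[1] * a[2])    # the corresponding feature map

k(x, y)               # 121
sum(phi(x) * phi(y))  # 121 as well: the kernel IS the dot product in the mapped space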
Is a kernel function basically just a mapping?
My initial understanding is that a kernel is essentially just a mapping into a higher dimension. No. Kernel is a function that calculates dot product in the image of this mapping. It can be thought
Is a kernel function basically just a mapping? My initial understanding is that a kernel is essentially just a mapping into a higher dimension. No. Kernel is a function that calculates dot product in the image of this mapping. It can be thought of as defining dot product, using dot product from another space, where the mapping into this (often higher-dimensional) space is implicit.
Is a kernel function basically just a mapping? My initial understanding is that a kernel is essentially just a mapping into a higher dimension. No. Kernel is a function that calculates dot product in the image of this mapping. It can be thought
52,542
Is a kernel function basically just a mapping?
My answer may be paraphrasing what you already gathered in the other threads, but here's my way of seeing this. Technically, the kernel trick can indeed be seen as a mapping to a higher dimension (possibly infinite) where a linear method works. But it is much more than that, because you use an implicit definition of the transformation, through the definition of the inner product in the target space. $$k(x_i, x_j) = \left< \phi(x_i), \phi(x_j) \right>$$ where $\phi$ is the transformation and $k$ is your kernel function. The kernel function itself can be seen as a measure of similarity between vectors, or (in my interpretation) as a representation of the local shape of the estimator. Check out the comparison of how kernels determine the shape of a 1-D Gaussian process regressor on this page; I found it very useful on the way to understanding what the kernel choice implies.
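To see the similarity reading, here is a short R sketch computing a Gaussian (RBF) kernel matrix: entries near 1 mean two points are very similar, entries near 0 mean they are far apart (the bandwidth sigma is a choice you make):

rbf <- function(a, b, sigma = 1) exp(-sum((a - b)^2) / (2 * sigma^2))

X <- rbind(c(0, 0), c(0.1, 0), c(5, 5))   # two nearby points and one far away
K <- outer(seq_len(nrow(X)), seq_len(nrow(X)),
           Vectorize(function(i, j) rbf(X[i, ], X[j, ])))
round(K, 3)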
Is a kernel function basically just a mapping?
My answer may be paraphrasing what you already gathered in the other threads, but here's my way of seeing this. Technically, the kernel trick can indeed be seen as a mapping to a higher dimension (pos
Is a kernel function basically just a mapping? My answer may be paraphrasing what you already gathered in the other threads, but here's my way of seeing this. Technically, the kernel trick can indeed be seen as a mapping to a higher dimension (possibly infinite) where a linear method works. But it is much more than that, because you use an implicit definition of the transformation, through the definition of the inner product in the target space. $$k(x_i, x_j) = \left< \phi(x_i), \phi(x_j) \right>$$ where $\phi$ is the transformation and $k$ your kernel function. The kernel function itself can be seen as a measure of similarity between vectors, or (in my interpretation) as a representant for the local behaviour shape. Check out the comparison of how kernels determine the shape of a 1-D gaussian process regressor estimator on this page, I found it very useful on the way to understanding what the kernel choice implies.
Is a kernel function basically just a mapping? My answer may be paraphrasing what you already gathered in the other threads, but here's my way of seeing this. Technically, the kernel trick can indeed be seen as a mapping to a higher dimension (pos
52,543
Why does changing random seeds alter results?
tl;dr practically speaking, you can probably set the seed to anything you want (e.g. your birthday or phone number [although there are obvious privacy issues there :-)] or your lucky number); with some interesting caveats, you can use the same random number seed for most of your analyses (I often use 1001). In order to be useful, stochastic algorithms are generally insensitive to the random number seed. the long answer Classical statistical methods (t-test, ANOVA, regression etc.) are deterministic algorithms, but many modern algorithmic approaches include a stochastic component. (In between are methods like k-means clustering or expectation-maximization, which are intrinsically deterministic but are usually run from multiple randomly chosen starting points to mitigate their sensitivity to starting conditions.) SVM need not be stochastic (e.g. the implementation in the e1071 package for R appears to be deterministic), but it is often implemented using stochastic gradient descent (SGD: e.g. see here) for computational reasons. Methods that are using large ensembles of random samples from the data (e.g. bootstrapping, bagging, as well as SGD, which picks a different sample of the data at each update step) are effectively averaging across many samples, and are likely to be relatively insensitive to the random-number seed. Methods that are likely to be unstable with respect to the random-number seed (e.g. EM, k-means clustering) will generally have mechanisms built into the software that will automatically run several realizations and do something sensible with the results (i.e. average them), to make the method less sensitive. This sensitivity is part of the information that you should know about a method before using it (along with some idea of its strengths and weaknesses, what meta-parameters it has that need to be tuned, etc.). The best thing to do in the course of learning is to try some experiments - for a particular data set and model, try the same method with a handful of different random-number seeds and see how much the results vary!
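A small experiment of exactly that kind in R, using k-means as an example of a seed-sensitive method (toy data; the particular seed values are arbitrary):

set.seed(1)
x <- matrix(rnorm(200), ncol = 2)

# same method, different seeds: results can differ because of local optima
sapply(c(1, 42, 1001), function(s) {
  set.seed(s)
  kmeans(x, centers = 3, nstart = 1)$tot.withinss
})

# same seed twice: identical results
set.seed(1001); a <- kmeans(x, centers = 3, nstart = 1)$tot.withinss
set.seed(1001); b <- kmeans(x, centers = 3, nstart = 1)$tot.withinss
identical(a, b)

# nstart = 25 averages over many random starts and is far less seed-sensitive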
Why does changing random seeds alter results?
tl;dr practically speaking, you can probably set the seed to anything you want (e.g. your birthday or phone number [although there are obvious privacy issues there :-)] or your lucky number); with som
Why does changing random seeds alter results? tl;dr practically speaking, you can probably set the seed to anything you want (e.g. your birthday or phone number [although there are obvious privacy issues there :-)] or your lucky number); with some interesting caveats, you can use the same random number seed for most of your analyses (I often use 1001). In order to be useful, stochastic algorithms are generally insensitive to the random number seed. the long answer Classical statistical methods (t-test, ANOVA, regression etc.) are deterministic algorithms, but many modern algorithmic approaches include a stochastic component. (In between are methods like k-means clustering or expectation-maximization, which are intrinsically deterministic but are usually run from multiple randomly chosen starting points to mitigate their sensitivity to starting conditions.) SVM need not be stochastic (e.g. the implementation in the e1071 package for R appears to be deterministic), but it is often implemented using stochastic gradient descent (SGD: e.g. see here) for computational reasons. Methods that are using large ensembles of random samples from the data (e.g. bootstrapping, bagging, as well as SGD, which picks a different sample of the data at each update step) are effectively averaging across many samples, and are likely to be relatively insensitive to the random-number seed. Methods that are likely to be unstable with respect to the random-number seed (e.g. EM, k-means clustering) will generally have mechanisms built into the software that will automatically run several realizations and do something sensible with the results (i.e. average them), to make the method less sensitive. This sensitivity is part of the information that you should know about a method before using it (along with some idea of its strengths and weaknesses, what meta-parameters it has that need to be tuned, etc.). The best thing to do in the course of learning is to try some experiments - for a particular data set and model, try the same method with a handful of different random-number seeds and see how much the results vary!
Why does changing random seeds alter results? tl;dr practically speaking, you can probably set the seed to anything you want (e.g. your birthday or phone number [although there are obvious privacy issues there :-)] or your lucky number); with som
52,544
Why bootstrapping?
Welcome to CV! In bootstrapping, you repeatedly take samples with replacement from the original sample. The general idea behind this is that you can estimate the uncertainty in your sample by asking the question: what if I hadn't observed this observation, or that one, or what if I had observed this observation more than once? You do this, say, $B = 1,000$ times, and end up with $1,000$ slightly different estimates of your statistic of interest. The more strongly the calculated statistic is affected by this resampling, the larger the variance of your bootstrapped statistic will be. In fact, it turns out that the standard deviation of the bootstrapped statistic can be a really good estimator of the standard error of your statistic. And so, by simply randomly resampling our original sample with replacement, over and over, we have obtained an idea of how precise the estimate is, given that we only have a sample of the population. Of course, if you can measure the entire population, then there is no point in bootstrapping.
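The whole idea fits in a few lines of R, here for the standard error of the median of a made-up sample:

set.seed(2024)
x <- rexp(50)                    # the one sample we actually have

B <- 1000
boot_medians <- replicate(B, median(sample(x, replace = TRUE)))

sd(boot_medians)                 # bootstrap estimate of the standard error of the median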
Why bootstrapping?
Welcome to CV! In bootstrapping, you repeatedly take samples with replacement from the original sample. The general idea behind this is that if you can estimate the uncertainty in your sample by aski
Why bootstrapping? Welcome to CV! In bootstrapping, you repeatedly take samples with replacement from the original sample. The general idea behind this is that if you can estimate the uncertainty in your sample by asking the question: What if I didn't observe this observation, or that one, or if I observed this observation more than once? You do this, say, $B = 1,000$ times, and end up with $1,000$ slightly different estimates of your statistic of interest. Depending on how strongly the calculated statistic is affected by this, the variance of your bootstrapped statistic will be larger. In fact, it turns out that the standard deviation of the bootstrapped statistic can be a really good estimator of the standard error of your statistic. And so, by simply randomly resampling our original sample with replacement, over and over, we have obtained an idea of how precise the estimate is, given that we only have a sample of the population. Of course, if you can measure the entire population, then there is no point in bootstrapping.
Why bootstrapping? Welcome to CV! In bootstrapping, you repeatedly take samples with replacement from the original sample. The general idea behind this is that if you can estimate the uncertainty in your sample by aski
52,545
Why bootstrapping?
I understood that bootstrapping is a technique used to estimate statistics of a population. It is a technique mainly used to estimate the standard error of an estimator of a population parameter $\theta$ and/or to derive confidence intervals for $\theta$ in situations where these figures are too difficult to derive by mathematical statistics. In bootstrapping we take many samples of chosen size, estimate statistics and obtain the mean of these statistics. This mean is representative of the whole population. It is indeed a resampling technique, and it works by sampling $n$ observations with replacement from the original $n$ observations. The key statement is that such a resampled bootstrap sample is to the original sample as the original sample is to the population. It is usually not the mean of the bootstrapped statistic that is of interest, but rather its variation, as this is what allows us to find confidence intervals. My doubt is: why take the samples in the first place? If you have the whole population with you, calculate the statistics on the whole, for which you get 100% accurate statistics. Normally, we do not have access to the whole population. But if we do, then indeed there is usually no need to do any inferential statistics (including the bootstrap).
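And turning that variation into a (percentile) confidence interval is equally short; an R sketch with made-up data:

set.seed(99)
x <- rgamma(40, shape = 2)       # a skewed sample; the population is 'unknown'

boot_means <- replicate(2000, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))   # 95% percentile bootstrap interval for the mean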
52,546
How to test for difference in means between 5 groups? The variances of the groups are not equal. If they were equal, then I could use ANOVA.
I would like to recommend generalized least squares (GLS) as an option. In the same way that you can create a model for the mean of the data given your predictors, you can also create a model for the variance. The gls() function in the nlme package permits us to do this. I will demonstrate how below:

set.seed(1984)
n <- 25
# Create heteroskedastic data, balanced design
dat <- data.frame(
  y = c(
    rnorm(n, 1, 1), rnorm(n, 0, 1), rnorm(n, .5, 1),
    rnorm(n, 0, 3), rnorm(n, -.5, 4)),
  groups = sort(rep(LETTERS[1:5], n))
)
boxplot(y ~ groups, dat)

# Assuming all cases have identical variances
summary(aov(y ~ groups, dat))

             Df Sum Sq Mean Sq F value Pr(>F)
groups        4   69.0  17.255   3.475 0.0101 *
Residuals   120  595.9   4.966
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

# Modeling the error variances
library(nlme)

# This is your standard linear model
coef(summary(fit.gls.baseline <- gls(y ~ groups, dat)))

                 Value Std.Error   t-value      p-value
(Intercept)  1.1784414 0.4457005  2.644021 0.0092875812
groupsB     -0.9748277 0.6303157 -1.546571 0.1246000669
groupsC     -0.8980932 0.6303157 -1.424831 0.1568017302
groupsD     -1.2903790 0.6303157 -2.047195 0.0428223361
groupsE     -2.3076197 0.6303157 -3.661054 0.0003747902

sigma(fit.gls.baseline) # get residual standard deviation
[1] 2.228502

# Next we fit a heteroskedastic model, some output hidden
# varIdent says the variances are identical for cases that have the same value of group
summary(fit.gls <- gls(
  y ~ groups, dat, weights = varIdent(form = ~ 1 | groups)))

Generalized least squares fit by REML
  Model: y ~ groups
  Data: dat
       AIC      BIC    logLik
  479.5354 507.4103 -229.7677

Variance function:
 Structure: Different standard deviations per stratum
 Formula: ~1 | groups
 Parameter estimates:
        A         B         C         D         E
1.0000000 0.9741804 0.8698579 3.0062743 3.7867899

Coefficients:
                 Value Std.Error   t-value p-value
(Intercept)  1.1784414 0.1951410  6.038923  0.0000
groupsB     -0.9748277 0.2724316 -3.578248  0.0005
groupsC     -0.8980932 0.2586375 -3.472401  0.0007
groupsD     -1.2903790 0.6182517 -2.087142  0.0390
groupsE     -2.3076197 0.7642898 -3.019299  0.0031

Residual standard error: 0.975705

A few notes about this model. You will find the residual standard deviation (error) is much smaller for the heteroskedastic model compared to the same estimate for the earlier model. Now take a look at the Variance function section, and you will see that group A had a standard deviation of 1. These values are relative standard deviations. So the $.976$ reported for the model is the residual SD of group A. $.974 \times .976$ is the value for group B. Groups D $(3.01 \times .976)$ and E $(3.79 \times .976)$ have much larger residual standard deviations. Also, let's obtain the heteroskedastic version of the F-test:

# marginal for type III sum of squares, only makes a difference for the
# test of the grouping variable if we have more than one grouping variable
anova(fit.gls, type = "marginal")

Denom. DF: 120
            numDF  F-value p-value
(Intercept)     1 36.46859  <.0001
groups          4  5.51477   4e-04

There is sufficient evidence to suggest that not all groups share the same mean, $F(4, 120)=5.51, p < .001$. Comparing the heteroskedastic model to the standard model, note that the coefficients are about the same; however, the standard errors are much smaller for the heteroskedastic model, except for groups D and E. So our p-values are also smaller.

A global approach to model comparison would be:

anova(fit.gls, fit.gls.baseline) # Model fit improved

                 Model df      AIC      BIC    logLik   Test  L.Ratio p-value
fit.gls              1 10 479.5354 507.4103 -229.7677
fit.gls.baseline     2  6 560.9588 577.6837 -274.4794 1 vs 2 89.42343  <.0001

These results suggest the heteroskedastic model was better: lower AIC and BIC and a statistically significant likelihood ratio test. Now, we do not have access to the data generating process in reality, but we can always look at the boxplot above. Assuming we want to opt for a simpler model, we can assume groups A, B and C have the same variances and groups D and E are different. We can then try:

# Create a variable that classifies the groups according to variance
dat$var.groups <- (dat$groups %in% c("D", "E")) + 0
# then run:
summary(fit.gls.parsim <- gls(
  y ~ groups, dat, weights = varIdent(form = ~ 1 | var.groups)))

Generalized least squares fit by REML
  Model: y ~ groups
  Data: dat
       AIC      BIC    logLik
  475.3162 494.8287 -230.6581

Variance function:
 Structure: Different standard deviations per stratum
 Formula: ~1 | var.groups
 Parameter estimates:
       0        1
1.000000 3.600023

Coefficients:
                 Value Std.Error   t-value p-value
(Intercept)  1.1784414 0.1853218  6.358893  0.0000
groupsB     -0.9748277 0.2620846 -3.719516  0.0003
groupsC     -0.8980932 0.2620846 -3.426730  0.0008
groupsD     -1.2903790 0.6924235 -1.863569  0.0648
groupsE     -2.3076197 0.6924235 -3.332671  0.0011

# A model comparison between our previous best and this new model
anova(fit.gls, fit.gls.parsim)

               Model df      AIC      BIC    logLik   Test  L.Ratio p-value
fit.gls            1 10 479.5354 507.4103 -229.7677
fit.gls.parsim     2  7 475.3162 494.8287 -230.6581 1 vs 2 1.780859  0.6191

The likelihood ratio test cannot distinguish between this simpler model, which has three fewer parameters, and our earlier heteroskedastic model. AIC and BIC also lean towards this model. So we may opt for this model going forward, since we do not know the truth. Theory may also suggest certain things about the differences in group variances; one can see how the particular issue of heteroskedasticity can be of substantive interest in itself. The F-test again would be:

anova(fit.gls.parsim, type = "marginal")

Denom. DF: 120
            numDF  F-value p-value
(Intercept)     1 40.43552  <.0001
groups          4  6.04201   2e-04

Now, the next thing may be contrasts. I do not care much for this in my work, so I do the simplest things here. There are probably better ways to deal with contrasts; see the emmeans package documentation for more options. I just use the defaults.

library(emmeans)
pairs(emmeans(fit.gls.parsim, "groups"))

 contrast   estimate        SE  df t.ratio p.value
 A - B     0.9748277 0.2620846 120   3.720  0.0028
 A - C     0.8980932 0.2620846 120   3.427  0.0073
 A - D     1.2903790 0.6924235 120   1.864  0.3427
 A - E     2.3076197 0.6924235 120   3.333  0.0099
 B - C    -0.0767345 0.2620846 120  -0.293  0.9984
 B - D     0.3155513 0.6924235 120   0.456  0.9910
 B - E     1.3327920 0.6924235 120   1.925  0.3099
 C - D     0.3922858 0.6924235 120   0.567  0.9796
 C - E     1.4095265 0.6924235 120   2.036  0.2555
 D - E     1.0172407 0.9435106 120   1.078  0.8174

P value adjustment: tukey method for comparing a family of 5 estimates

These contrasts use information from the heteroskedastic model; the results would be different if you used the fit.gls.baseline model. I hope the demonstration above shows how you can account for heteroskedasticity using generalized least squares when performing analysis of variance. Compared to many other approaches, this approach does not simply treat heteroskedasticity as a nuisance. One can also include control variables, as in ANCOVA.

As an aside, if one also suspects heteroskedasticity with respect to the control variables in ANCOVA, then the glmmTMB() function in the glmmTMB package can handle this situation. You simply specify a regression equation for the variance using the dispformula = argument.
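To illustrate that aside, here is a hedged sketch of what such a glmmTMB() call might look like, assuming a hypothetical continuous covariate x had been added to the simulated data; the covariate and the dispersion formula are illustrative choices, not part of the demonstration above.

library(glmmTMB)
# ANCOVA-style model: the mean depends on groups and x, and the residual
# dispersion is also allowed to depend on groups and x (modeled on a log scale)
fit.tmb <- glmmTMB(y ~ groups + x,
                   dispformula = ~ groups + x,
                   data = dat)
summary(fit.tmb)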
52,547
How to test for difference in means between 5 groups? The variance between the groups are not equal. If they were equal, then I could use ANOVA
One of several possible methods is to use oneway.test in R. Here is an example with three groups, each with ten observations: x1 = rnorm(10, 100, 10); x2 = rnorm(10, 95, 15); x3 = rnorm(10, 90, 5) x = c(x1, x2, x3); group = rep(1:3, each=10) boxplot(x ~ group) You can see that my fake data were generated with different standard deviations in each of the three groups. The boxplots show this heteroscedasticity. A Bartlett test confirms significance (P-value 0.004). sd(x1); sd(x2); sd(x3) [1] 10.88923 [1] 16.35099 [1] 4.656118 bartlett.test(x, group) Bartlett test of homogeneity of variances data: x and group Bartlett's K-squared = 11.103, df = 2, p-value = 0.003881 The oneway.test procedure allows for different variances in somewhat the same way as does the Welch 2-sample t test. It indicates that not all group population means are equal (P-value < 5%). Notice that the denominator DF $\approx 15;$ a standard ANOVA assuming equal variances would have denominator DF $= 27.$ [I believe this is the 'Welch ANOVA' suggested by @SalMangiafico.] mean(x1); mean(x2); mean(x3) [1] 98.00458 [1] 105.3806 [1] 92.0077 oneway.test(x ~ group) One-way analysis of means (not assuming equal variances) data: x and group F = 3.818, num df = 2.000, denom df = 14.536, p-value = 0.04649 You could use Welch 2-sample t tests to explore paired comparisons, perhaps with a Bonferroni family error rate. Reference: This Q & A mentions a variety of alternative methods.
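As a follow-up to the suggestion of Welch 2-sample t tests for the paired comparisons, one possible way to do this in R, continuing with the simulated x and group above, is pairwise.t.test with unpooled SDs and a Bonferroni adjustment; as far as I know, pool.sd = FALSE makes each comparison an ordinary (Welch) t.test.

# pairwise comparisons without pooling SDs, Bonferroni-adjusted p-values
pairwise.t.test(x, group, pool.sd = FALSE, p.adjust.method = "bonferroni")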
52,548
How to estimate the scale factor for MAD for a non-normal distribution?
Definition: In R, the MAD of a vector x of observations is median(abs(x - median(x))) multiplied by the default constant you mention in your Question.

set.seed(726)
x = round(rnorm(10, 100, 15))   # rounded-normal data
x
 [1]  95  80 108  84 115  76  82  93 121 117
mad(x)
[1] 20.7564        # default MAD in R
mad(x, const=1)
[1] 14             # MAD with constant set to 1
median(abs(x-median(x)))
[1] 14             # MAD using definition

The rationale for the constant $c = 1.4826$ is to put MAD on the same 'scale' as the sample standard deviation $S$ for large normal samples, in the sense that $E(S) \approx \sigma$ and $E(cD) \approx \sigma,$ where $S$ is the sample SD, $D$ is my notation for the sample MAD (without constant), and $\sigma$ is the SD of the normal population from which a large sample has been taken. Illustrating with $n = 1000$ observations from $\mathsf{Norm}(\mu=100,\sigma=15):$

set.seed(725)
y = rnorm(1000, 100, 15)
sd(y);  mad(y)
[1] 14.64436       # sample SD, aprx pop SD 15
[1] 14.54209       # sample MAD, aprx same as sample and pop SD
mad(y, const=1)
[1] 9.808504       # MAD without constant multiple
sd(y)/mad(y, const=1)
[1] 1.493026       # Shows ratio aprx 1.4826

So with one normal sample of size $n = 1000$ we have seen that a constant multiple of roughly 1.5 converts sample MAD $D$ to about the same scale as sample SD $S.$ We could get a more precise value for the constant by looking at many large normal samples. [Note: If you use the same seed as shown, you will get exactly the same example. If you choose a different seed, or use no set.seed statement, you will get a fresh example.]

Uniform Data: If $U \sim \mathsf{Unif}(100-15\sqrt{3}, 100+15\sqrt{3}),$ then $E(U) = 100, SD(U) = 15.$

set.seed(1234)
u = runif(1000, 100-15*sqrt(3), 100+15*sqrt(3))
sd(u);  mad(u, const=1);  mad(u)
[1] 15.13162       # S
[1] 13.01111       # D
[1] 19.29028       # MAD with NORMAL constant
sd(u)/mad(u, const=1)
[1] 1.162977       # suggests UNIFORM const is aprx 1.16

So from one large uniform sample, we see that the constant for uniform data may be about $c = 1.16.$ Intuitively, it seems the same constant ought to work for all uniform populations. Here is a simulation using 100,000 samples of size $n = 1000$ from a standard uniform distribution $\mathsf{Unif}(0,1).$ It shows that the constant for uniform data is about $c = 1.16.$ The 95% margin of simulation error for samples of size $n = 1000$ is about $\pm 0.0002.$ Larger samples might give a slightly different value.

m = 10^5;  n = 1000;  c = numeric(m)
for(i in 1:m) {
  u = runif(n);  s = sd(u);  d = mad(u, const=1)
  c[i] = s/d
}
mean(c)
[1] 1.157575
2*sd(c)/sqrt(m)
[1] 0.0001560186

Exponential Data: An analogous simulation for exponential data gives $c \approx 2.08.$

Laplace Data: For random samples from a Laplace distribution the sample MAD is preferred to the sample SD as an estimate of the Laplace scale parameter. For Laplace data my simulation showed that $c \approx 2.04.$
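For completeness, here is a sketch of the analogous simulation for exponential data that presumably produced the $c \approx 2.08$ figure; the loop below is my reconstruction, not the author's original code.

m = 10^5;  n = 1000;  c.exp = numeric(m)
for(i in 1:m) {
  w = rexp(n)                      # standard exponential sample
  c.exp[i] = sd(w)/mad(w, const=1)
}
mean(c.exp)                        # should come out near 2.08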
52,549
Paired data comparison: regression or paired t-test?
Your t-test is answering the question you want, which is (in your own words) to "check whether two methods (on average) yielded the same results", and on that side your analysis looks correct. This is simple, correct, and appropriate given your small sample size n=7. Your regression model, however, is not set up to answer the same question, so it doesn't make sense to compare these two methods head-to-head.

Let's first look at what the model you specify is actually doing. The model given is just predicting methodx from methody. This model is always of the form methodx = slope * methody + Intercept. If H0 is true we'd expect slope to be 1 and Intercept to be 0. In fact, especially under H0, we would expect a very high correlation, so the fact that you're seeing it tells us very little. Not only that, but correlation actually doesn't change at all if the Intercept changes! So I would completely ignore the $R^2$ numbers on the default regression analysis: they tell us nothing interesting about this specific problem.

To falsify H0, we have to make and defend one of the following statements:
1. the intercept is significantly different than 0.
2. the slope is significantly different than 1.
3. the combination of slope and intercept is significantly different than 1/0.

Statement #1 can (almost) be read off the regression diagnosis: the t-value and p-value are for the hypothesis test that Intercept is different from zero. But notice that the t-value is much lower and the p-value is much higher than when you did the direct t-test. In fact, we have to understand the t-test of a single coefficient as controlling for all other variables in the model: in this case, the slope. That's a fundamentally different question than the one we originally asked.

Statement #2 cannot be read directly off the regression diagnosis, because by default it's comparing the slope to 0, whereas you want to compare it to a slope of 1, because that's what the slope would be under your null hypothesis. You can do this yourself as an exercise by subtracting one from the fitted slope coefficient, dividing by the given "Std. Error" for the slope parameter in the second column, and applying a t-test to the resulting t-statistic. However, we still have the problem of co-mingling both the slope and intercept, and it's not the "difference in means" you're looking for, so I don't recommend this either.

For Statement #3, we want to test both together. We can do this with ANOVA: First, we build a (somewhat trivial) model with a forced slope of 1 and intercept 0. This model in fact has no free parameters, but R is happy to build an object of class lm for us, which is what we want.

null_model <- lm(methodx ~ offset(1*methody) -1, data=data)

Then you can compare this to the model you fit above:

anova(model, null_model)

When I do this for your data, I get:

Analysis of Variance Table

Model 1: methodx ~ methody
Model 2: methodx ~ offset(1 * methody) - 1
  Res.Df        RSS Df   Sum of Sq      F  Pr(>F)
1      6 0.00032622
2      8 0.00120000 -2 -0.00087378 8.0355 0.02009 *

So the omnibus p-value is 0.02. This is basically using an F-test under the hood and asking the question, "do I explain more additional RSS when using a more complicated model than would be explained by pure chance?" You might want to use this kind of omnibus test if you're not sure whether the true slope relating methodx and methody is exactly 1. That's the only thing we gain with respect to simply applying a t-test to the difference of means between groups.

Note that this means we're "taking credit" (in the sense of reporting a lower, more significant p-value) for any difference in slopes we observe. Depending on the experiment, this may actually be completely backwards: a slope different than 1 might indicate inconsistent methodology between the two measurements, for example.
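For completeness, here is a small sketch of the Statement #2 calculation described above, testing the fitted slope against 1 rather than 0; it assumes model is the lm(methodx ~ methody) fit from the question.

est <- coef(summary(model))["methody", "Estimate"]    # fitted slope
se  <- coef(summary(model))["methody", "Std. Error"]  # its standard error
t_stat <- (est - 1) / se                              # test against slope = 1
p_val  <- 2 * pt(-abs(t_stat), df = df.residual(model))
c(t = t_stat, p = p_val)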
52,550
Paired data comparison: regression or paired t-test?
The way you set up the regression in a "correlation style" does answer a different question (which is explained nicely by @olooney). However, you can use regressions to test for mean differences; in the end this is what we call AN(C)OVAs. Similarly, the t-test for mean differences can be seen as a special case of the test of regression slopes. To estimate the mean difference between methodx and methody with lm() we first convert the data to the long format.

library(reshape2)
d_long <- melt(data, id.vars = "sample", variable.name = "method")
d_long$method <- relevel(d_long$method, ref = "methody")

Performing the following regression yields the correct mean difference (-0.01), which is represented by the slope estimate. The estimate of the intercept corresponds to the mean of the reference group, i.e. methody.

summary(lm(value ~ method, d_long))
[...]
Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)    0.40000    0.03485  11.479 1.65e-08 ***
methodmethodx -0.01000    0.04928  -0.203    0.842    
[...]

However, the standard error/degrees of freedom are wrong, as this model treats every sample as an independent observation. Thus, this model gives you a p-value corresponding to an unpaired t-test.

t.test(data$methodx, data$methody, var.equal = TRUE)

        Two Sample t-test

data:  data$methodx and data$methody
t = -0.20292, df = 14, p-value = 0.8421
[...]
sample estimates:
mean of x mean of y 
     0.39      0.40 

To incorporate the dependency structure we can use a multilevel regression / linear mixed model and approximate the degrees of freedom with the Kenward-Roger method. This analysis and the paired t-test are closely related (see e.g. Paired t-test as a special case of linear mixed-effect modeling) and in this case it gives you the exact same result you obtained with the paired t-test:

library(lmerTest)
summary(lmer(value ~ method + (1|sample), d_long), ddf = "Kenward-Roger")
[...]
Fixed effects:
               Estimate Std. Error       df t value Pr(>|t|)    
(Intercept)    0.400000   0.034847 7.020619  11.479 8.37e-06 ***
methodmethodx -0.010000   0.002673 7.000000  -3.742  0.00725 ** 
[...]
52,551
Paired data comparison: regression or paired t-test?
Both methods answer slightly different questions. The t-test is all about means; the regression, as you used it, is about finding an optimal linear relationship. Of course, if the intercept is 0 and the slope is 1, then everything is easy. But what if they are not?

x = runif(30, -5, 10)
y = jitter(1.2*x)
summary(lm(y~x))

The intercept is 0, the slope is not. Can you tell from the regression result whether the means are the same or not? You can't. There will be different means depending on whether you start with

x1 = runif(30, -5, 10)
x2 = runif(30, -5, 5)
x3 = runif(30, -5, 0)

And you can't tell that from the result of the regression. So no: the two methods do not report the same thing.
52,552
Bias induced from model selection
I believe what they are saying is that if you use hold-out data for cross validation to estimate the generalization error of your model, then you end up with an unbiased estimate of the generalization error of the model. But once you use that data set in the selection process for a model (which I believe means tuning the model and minimizing the error on that particular data set, and then choosing one model out of numerous others), then the error estimate you get is no longer an unbiased generalization error estimate. This is because the model has been optimized to perform well on the data set used for cross validation, and in a way the model has 'seen' the data. So there is a good chance that the estimate you get underestimates the true error. To avoid this and get an unbiased generalization error estimate, one must evaluate the model on a 'test' set that has not been used for training or validation (model selection).
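A small hypothetical simulation in R illustrates the point: many equally useless 'models' (random predictions) are scored on a validation set, the best one is selected, and its inflated validation score is contrasted with its performance on a fresh test set. The setup is purely illustrative and not from the original discussion.

set.seed(7)
n <- 200;  K <- 50
y_val  <- rbinom(n, 1, 0.5)   # validation labels
y_test <- rbinom(n, 1, 0.5)   # untouched test labels

# K 'models' that just guess at random; the true accuracy of each is 0.5
val_acc <- replicate(K, mean(rbinom(n, 1, 0.5) == y_val))

max(val_acc)                        # selected model's validation accuracy: optimistic
mean(rbinom(n, 1, 0.5) == y_test)   # its predictions on fresh test data: about 0.5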
52,553
Bias induced from model selection
When you evaluate a model, it is assumed that your model is given a priori, before any of the data was seen. In that case model evaluation is fine, provided the sample is representative. However, when you choose a model based on the data, the fitted model itself becomes a random estimate. Assessing its performance on the same data is then no longer unbiased; it is usually more optimistic than is really the case. So to avoid this bias in evaluation you need fresh data. [Later edit] A really well written and involved discussion of this topic can be found in Advanced Data Analysis from an Elementary Point of View by Cosma Rohilla Shalizi. It's free and it is one of the best books for a deeper understanding of these kinds of intricacies. For this discussion see section 3.5.1, Inference after selection. Regarding your comment, I do not completely understand what you mean by 'relative' fit. If relative performance means the evaluated performance of one model against another, for model selection purposes, and you care about prediction, then I think you can go without an additional data set. It's like choosing the best you have at hand, but without really knowing how well your best model will perform.
52,554
Bias induced from model selection
Below is my understanding of the paragraph (without having the context):

Start: Cross-validation and information criteria make a correction for using the data twice (in constructing the posterior and in model assessment).

Cross-validation helps to avoid the so-called "double-dipping" problem: using the same data to estimate the model and to check how well it does.

Continues: and obtain asymptotically unbiased estimates of predictive performance for a given model.

When the sample size grows to infinity, cross-validation will generate unbiased estimates of predictive performance. With small sample sizes it will be more biased, because in each cross-validation fold we will not be using all of the training data, but only a part of it (like 9/10 in 10-fold CV).

Continues: However, when these methods are used to choose a model selection

That is, when cross-validation is used multiple times, not only once. For example, when selecting between multiple models we might have the idea to run all of those models through cross-validation and select the best one.

Continues: the predictive performance estimate of the selected model is biased due to the selection process.

Then the final predictive performance (the very thing we used in selecting our model) will be biased. This is because each model has some variability or randomness in it. When estimated on finite data (with cross-validation), performance estimates do not give us the exact true performance of those models on the population. Instead there is always some error associated with this performance measure. And because of this error, we will end up picking the model whose error (the difference between true performance and estimated cross-validation performance) is optimistically biased. And the more models we try on cross-validation, the worse it will be.

Bottom line: Cross-validation is unbiased (asymptotically) when used once. But it is not a panacea for overfitting, and it will become biased once we start comparing different models on the same cross-validation data.
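The "the more models we try, the worse it will be" point can be illustrated with a tiny hypothetical simulation: suppose K candidate models all have the same true accuracy of 0.70 and each cross-validation estimate is that truth plus independent noise; the selected (maximum) estimate becomes increasingly optimistic as K grows. The numbers are made up purely for illustration.

set.seed(123)
true_acc <- 0.70
sd_cv    <- 0.03   # noise of a single CV performance estimate
for (K in c(1, 5, 20, 100)) {
  # average, over many repeats, of the best observed CV score among K models
  best_est <- replicate(10000, max(rnorm(K, true_acc, sd_cv)))
  cat(K, "models: mean selected CV estimate =", round(mean(best_est), 3), "\n")
}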
52,555
Bias induced from model selection
Cross validation (applied solely for validation/verification purposes) avoids the double use of cases being in the training set and in the test set for the same model. However, performance estimates that are used to select the apparently best of a variety of models are in fact part of model training. So cross validation used for model selection or optimization is part of the training, and then a separate test using data that is unknown to the whole training process (including model selection) is needed. The reason for this is that the cross validation results are only estimates (or measurements) of model performance: they are subject to bias and variance, i.e. systematic and random errors.

because the sample may not be representative, the methods above may over/under-estimate the true model performance,

That's another way of saying there's variance on the performance estimate, in addition to possible bias. This is true for any kind of performance measurement (estimate based on testing cases): the test sample (whether held back by cross validation or obtained in any other way) may accidentally contain more easy cases or more difficult cases. So you'll have to expect some variance when testing the same model with different test sets. In resampling validation (including cross validation) you have an additional source of variance: you are actually testing surrogate models that are assumed to be sufficiently similar to the model trained on the whole data (for which the performance estimate is used) to be considered equivalent for practical purposes. However, if your training procedure is not stable, you'll see variance among the surrogate models, which will also add to the variance of your cross validation estimate. So we end up with an almost unbiased but somewhat noisy estimate of performance.

... and so we will tend to choose those who perform better under the selected data,

So yes, when picking the apparently best performing model, we'll "skim off the noise", i.e. models that accidentally look good with the cross validation split we did will be favored. The risk of skimming variance (= overfitting, selecting the wrong model) increases with
- an increasing number of compared models,
- increasing variance (uncertainty) of the performance estimates, and
- a decreasing true difference in performance among the considered models (although one may argue that this is less of a problem, as the mistake here is only to select a not totally perfect model from a number of almost equally good models).

but not for out-of-sample data?

While the out-of-sample test set may accidentally contain more easy cases than the population, it is unlikely that we're (un*)lucky here as well. Side note: this may happen, of course. But we can estimate the likelihood/extent of such random (bad) luck with the usual tools for estimating the uncertainty of a point estimate. My impression, however, is that in practice overoptimistic assessment of models more frequently happens due to biased sampling, such as cases being excluded for which no labels can be obtained (possibly because they are hard/borderline cases).

* I consider it bad luck if a model appears better than it actually is, as I've had to deal a lot with data where unwarranted overoptimism may lead to harm.
Bias induced from model selection
Cross validation (applied solely for validation/verification purposes) avoids the double use of cases being in the training set and in the test set for the same model. However, performance estimates
Bias induced from model selection Cross validation (applied solely for validation/verification purposes) avoids the double use of cases being in the training set and in the test set for the same model. However, performance estimates that are used to select the apparently best of a variety of models are in fact part of model training. So cross validation used for model selection or optimization is part of the training and then a separate test using data that is unknown to the whole training process (including model selection) is needed. The reason for this is that the cross validation results are only estimates (or measurements) of model performance: they are subject to bias and variance, i.e. systematic and random errors. because the sample may not be representative, the methods above may over/under-estimate the true model performance, That's another way of saying, there's variance on the performance estimate in addition to possible bias. This is true for any kind of performance measurement (estimate based on testing cases): the test sample (whether held back by cross validation or obtained in any other way) may accidentally contain more easy cases or more difficult cases. So you'll have to expect some variance when testing the same model with different test sets. In resampling validation (including cross validation) you have an additional source of variance: you are actually testing surrogate models that are assumed to be sufficiently similar to the model trained on the whole data (for which the performance estimate is used) to be considered equivalent for practical purposes. However, if your training procedure is not stable, you'll see variance among the surrogate models, which will also add to the variance of your cross validation estimate. So we end up with an almost unbiased but somewhat noisy estimate of performance ... and so we will tend to choose those who perform better under the selected data, So yes, when picking the apparently best performing model, we'll "skim off the noise", i.e. models that accidentally look good with the cross validation split we did will be favored. The risk of skimming variance (= overfitting, selecting the wrong model) increases with increasing number of compared models increasing variance uncertainty on the performance estimates, and decreasing true difference in performance among the considered models (although one may argue that this is less of a problem as the mistake here is only to select a not totally perfect model from a number of almost equally good models) but not for out-of-sample data? While the out-of-sample test set will may accidentally contain more easy cases than the population, it is unlikely that we're (un*)lucky here as well. Side note: this may happen of course. But we can estimate the likelihood/extent of such random (bad) luck with the usual tools to estimate uncertainty on our point estimate. My impression, however, is that in practice overoptimistic assessment of models more frequently happens due to biased sampling, such as cases being excluded for which no labels can be obtained (possibly because they are hard/borderline cases). * I consider it bad luck if a model appears better than it actually is as I've had to deal a lot with data where unwarranted overoptimism may lead to harm.
Bias induced from model selection Cross validation (applied solely for validation/verification purposes) avoids the double use of cases being in the training set and in the test set for the same model. However, performance estimates
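A minimal simulation sketch of the "skimming off the noise" effect described above (my own illustration, not from the original answer; the numbers -- 50 candidate models, a common true accuracy of 0.70, and a CV-estimate standard deviation of 0.03 -- are made up):
set.seed(1)
n_models <- 50      # number of models compared via cross validation
true_acc <- 0.70    # every candidate has the same true performance
cv_sd    <- 0.03    # random error (variance) of each CV performance estimate
picked <- replicate(2000, {
  cv_est <- true_acc + rnorm(n_models, sd = cv_sd)  # noisy CV estimates
  max(cv_est)                                       # performance of the "apparently best" model
})
mean(picked)   # noticeably above 0.70: model selection has skimmed off the noise
true_acc       # while the selected model's true performance is still just 0.70
An independent test set that played no part in the selection would, on average, report the honest 0.70 for the selected model.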
52,556
Yeo-Johnson and Logarithmic transformation
Almost. For a vector $U$ with non-negative entries, the Yeo-Johnson transformation with $\lambda=0$ is exactly the natural logarithm of $(U + 1)$; negative entries are handled by the other branch of the transformation, so there the equivalence breaks down.
Yeo-Johnson and Logarithmic transformation
Almost. For a vector $U,$ the Yeo-Johnson with $\lambda=0$ is equivalent to the natural logarithm of $( U + 1 ).$
Yeo-Johnson and Logarithmic transformation Almost. For a vector $U,$ the Yeo-Johnson with $\lambda=0$ is equivalent to the natural logarithm of $( U + 1 ).$
Yeo-Johnson and Logarithmic transformation Almost. For a vector $U,$ the Yeo-Johnson with $\lambda=0$ is equivalent to the natural logarithm of $( U + 1 ).$
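A small sketch of the point above (my own illustration; it assumes the standard two-branch definition of the Yeo-Johnson transformation):
# Yeo-Johnson at lambda = 0: log(u + 1) for u >= 0, and -((1 - u)^2 - 1) / 2 for u < 0
yj_lambda0 <- function(u) ifelse(u >= 0, log(u + 1), -((1 - u)^2 - 1) / 2)
U <- c(0, 0.5, 2, 10)
all.equal(yj_lambda0(U), log1p(U))   # TRUE: identical to log(U + 1) for non-negative U
yj_lambda0(-1)                       # -1.5 ...
log1p(-1)                            # ... while log(U + 1) is -Inf: the two differ for negative entries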
52,557
how to detect the exact size of an object in an image using machine learning?
This task of depth estimation is part of a hard and fundamental problem in computer vision called 3D reconstruction. Recovering metric information from images is sometimes called photogrammetry. It's hard because when you move from the real world to an image you lose information. Specifically, the projective transformation $T$ that takes your 3D point $p$ to your 2D point $x$ via $x = Tp$ does not preserve distance. Since $T$ maps three dimensions down to two (in homogeneous coordinates it is a $3\times 4$ camera matrix), it has no inverse: solving $Tp = x$ for $p$ is an underdetermined inverse problem. A consequence of this is that pixel lengths are not generally going to be meaningful in terms of real-world distances. You can see a simple example of why doing 3D reconstruction is tricky by considering the forced perspective from the Ames room optical illusion: (Source: Ian Stannard https://flic.kr/p/8Pw5Rd) Your visual processing system and many algorithms use cues such as shading and parallel lines to estimate depth, but these can be tricked. Generally you need to know the camera location, and something of a known size observable in the image. If you want really accurate length measurements from photography you have to plan for it in the data collection process (it's very helpful to include chessboard calibration targets in the camera's field of view). Here are a bunch of well-studied subproblems: If you have one image you need to estimate everything from the image cues mentioned before. This is called monocular reconstruction or depth estimation. If you have two overlapping images taken at the same time from different cameras then you can estimate the discrepancy between the images and triangulate using that. That's called stereo reconstruction. If you have multiple images taken from a single camera which is moving around you can estimate the camera location and then triangulate. This is called monocular simultaneous localization and mapping (monoSLAM). If you have many overlapping images then you can identify common points, estimate camera locations, and triangulate as with stereo reconstruction or monoSLAM, but you do an extra step called bundle adjustment to correct for error propagation. This is called 3D reconstruction from multiple images. There is some variation as to whether these methods recover the scene geometry up to a projective transformation, up to an affine transformation or up to a Euclidean transformation. There is a nice list of papers and software on the whole topic of 3D reconstruction here. A classic reference textbook is: Hartley, Richard, and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003. This paper gives an example of doing depth estimation from a single RGB image using a CNN (the code is also available): Laina, Iro, et al. "Deeper depth prediction with fully convolutional residual networks." 3D Vision (3DV), 2016 Fourth International Conference on. IEEE, 2016.
how to detect the exact size of an object in an image using machine learning?
This task of depth estimation is part of a hard and fundamental problem in computer vision called 3D reconstruction. Recovering metric information from images is sometimes called photogrammetry. It's
how to detect the exact size of an object in an image using machine learning? This task of depth estimation is part of a hard and fundamental problem in computer vision called 3D reconstruction. Recovering metric information from images is sometimes called photogrammetry. It's hard because when you move from the real world to an image you lose information. Specifically, the projective transformation $T$ that takes your 3D point $p$ to your 2D point $x$ via $x = Tp$ does not preserve distance. Since $T$ is a $2\times 3$ matrix, calculating $T^{-1}$ to solve $T^{-1}x= p$ is an underdetermined inverse problem. A consequence of this is that pixel lengths are not generally going to be meaningful in terms of real world distances. You can see a simple example of why doing 3D reconstruction is tricky by considering the forced perspective from the Ames room optical illusion: (Source: Ian Stannard https://flic.kr/p/8Pw5Rd) Your visual processing system and many algorithms use cues such as shading and parallel lines to estimate depth but these can be tricked. Generally you need to know the camera location, and something of a known size observable in the image. If you want really accurate length measurements from photography you have to plan for it in the data collection process (it's very helpful to include these chess boards in the camera field of view). Here are a bunch of well studied subproblems: If you have one image you need to estimate everything from the image cues mentioned before. This is called monocular reconstruction or depth estimation. If you have two overlapping images taken at the same time from different cameras then you can estimate the discrepancy between the images and triangulate using that. That's called stereo-reconstruction. If you have multiple images taken from a single camera which is moving around you can estimate the camera location and then triangulate. This is called monocular simultaneous location and mapping (monoSLAM). If you have many overlapping image than you can identify common points, estimate camera location, and triangulate as with stereo-reconstruction or monoSLAM but you do an extra step called bundle adjustment to correct for error propagation. This is called 3D reconstruction from multiple images. There is some variation as to whether they recover the scene geometry up to a projective transformation, up to an affine transformation or up to a Euclidean transformation. Here is a nice list of papers and software on the whole topic of 3D reconstruction here. A classic reference textbook is: Hartley, Richard, and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003. This paper gives an example of doing depth estimate from a single RGB image using a CNN (the code is also available): Laina, Iro, et al. "Deeper depth prediction with fully convolutional residual networks." 3D Vision (3DV), 2016 Fourth International Conference on. IEEE, 2016.
how to detect the exact size of an object in an image using machine learning? This task of depth estimation is part of a hard and fundamental problem in computer vision called 3D reconstruction. Recovering metric information from images is sometimes called photogrammetry. It's
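A toy numerical illustration of why pixel lengths alone cannot give metric size (my own sketch; it assumes an idealized pinhole camera with focal length 1, not any of the methods cited above):
project <- function(p, f = 1) f * p[1:2] / p[3]   # pinhole projection of (X, Y, Z) to image coordinates
p_small_near <- c(1, 1, 2)   # a small object close to the camera
p_large_far  <- c(2, 2, 4)   # an object twice as large, twice as far away
project(p_small_near)        # (0.5, 0.5)
project(p_large_far)         # (0.5, 0.5): same pixels, so size is unrecoverable without extra information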
52,558
how to detect the exact size of an object in an image using machine learning?
If you have enough labeled data, go for supervised learning with CNNs. Try object detectors such as YOLO. They give you the bounding box of the object, as well as the predicted class. Some pre-trained models are also available, and the latest version (YOLO9000) supports around 9k classes. I'm not sure whether your classes can be found there, but you could use the pre-trained models and then fine-tune them with your data. Another approach would be semantic segmentation, something like a U-net. This one gives you a pixel-level segmentation, but it requires masks as labels. As I said, if you have the labels, there are multiple options depending on the amount and kind of labeled data you have. If you have no labeled data, there are some unsupervised image segmentation methods as well. Have a look here, for example.
how to detect the exact size of an object in an image using machine learning?
If you have enough labeled data, go for supervised learning with CNNs. Try object detections such as YOLO. They give you the bounding box of the object, as well as the predicted class. Some pre-train
how to detect the exact size of an object in an image using machine learning? If you have enough labeled data, go for supervised learning with CNNs. Try object detections such as YOLO. They give you the bounding box of the object, as well as the predicted class. Some pre-trained models are also available, which in the latest version (YOLO9000!) supports around 9k classes. But not sure if your classes can be found there. Maybe use the pre-trained models and then fine-tune it with your data. Another approach would be semantic segmentation, something like a U-net. This one will give you segmentation but it requires masks as labels. As I said, if you have the labels, there are multiple options depending on the amount of the labeled data you have, or the kind of data available. If you have no labeled data, there are some unsupervised image segmentation methods as well. Have a look at here for example.
how to detect the exact size of an object in an image using machine learning? If you have enough labeled data, go for supervised learning with CNNs. Try object detections such as YOLO. They give you the bounding box of the object, as well as the predicted class. Some pre-train
52,559
Can sub-Gaussian distributions have non-zero mean?
It is standard in the bandit literature to abuse notation by considering a random variable $X$ to be $\sigma$-subgaussian if the noise $X - \mathbb{E}[X]$ is $\sigma$-subgaussian. See the note on page 78 of Tor Lattimore and Csaba Szepesvari's book
Can sub-Gaussian distributions have non-zero mean?
It is standard in the bandit literature to abuse notation by considering a random variable $X$ to be $\sigma$-subgaussian if the noise $X - \mathbb{E}[X]$ is $\sigma$-subgaussian. See the note on pag
Can sub-Gaussian distributions have non-zero mean? It is standard in the bandit literature to abuse notation by considering a random variable $X$ to be $\sigma$-subgaussian if the noise $X - \mathbb{E}[X]$ is $\sigma$-subgaussian. See the note on page 78 of Tor Lattimore and Csaba Szepesvari's book
Can sub-Gaussian distributions have non-zero mean? It is standard in the bandit literature to abuse notation by considering a random variable $X$ to be $\sigma$-subgaussian if the noise $X - \mathbb{E}[X]$ is $\sigma$-subgaussian. See the note on pag
52,560
Can sub-Gaussian distributions have non-zero mean?
The first comment on this question is incorrect strictly speaking; the proposition you link to shows that having zero mean is a necessary condition for being sub-Gaussian. In some sense, you can still get a useful notion for non-centered random variables to be sub-Gaussian, but only if you change the definition. If you look at one of the other sources you linked to, if you look at the "converse proposition", which shows that the tail bounds imply the MGF bound up to a multiplicative constant in the exponent, you will see that the assumption that $\mathbb{E}Y=0$ is used implicitly. (It appears to be a typo that Lemma 1.5 omits the assumption in its statement, but it is clearly used in the proof, and when my professor assigned it as a homework assignment, he specifically made sure to add the assumption to the problem.) In order for this Taylor expansion of the MGF to be correct: $$\mathbb{E}e^{\lambda Y} = 1 + \sum\limits_{k=2}^{\infty} \frac{\lambda^k \mathbb{E}Y^k}{k!} \,, $$ for all $\lambda \in \mathbb{R}$, we need that $\mathbb{E}Y=0$. Admittedly the assumption does not appear to be used in an essential way in that lemma, since we can still bound $\mathbb{E}Y$ by $\mathbb{E}|Y| \le \sigma \sqrt{2\pi}$, and this doesn't appear large enough to prevent the upper bounds used later in the proof from working. To show more convincingly that the definition of sub-Gaussian as stated just doesn't work when $\mathbb{E}Y\not=0$, let's try to show that if $\mathbb{E}Y \not=0$ and $Y \in SG(\sigma^2)$, then $(Y - \mathbb{E}Y) \in SG(\sigma^2)$, which we would need to be true for it to be the case that, for any mean-zero sub-Gaussian $X$, $X+b$ is also sub-Gaussian for any constant $b \not=0$. The proof founders on an irresolvable problem. Now we definitionally have that $(Y - \mathbb{E}Y) \in SG(\sigma^2) \iff \forall \lambda \in \mathbb{R}, \mathbb{E}e^{\lambda(Y-\mathbb{E}Y)} \le \exp(\frac{1}{2} \sigma^2 \lambda^2)$. Before going any further, let's just note that we shouldn't expect this to work out well. $\exp(\frac{1}{2} \sigma^2 \lambda^2)$ is the MGF bound for a Gaussian centered at 0. So even if we had a "sub-Gaussian" random variable $Y$ which wasn't centered at $0$, we still wouldn't expect it to satisfy this MGF bound (thus it couldn't satisfy the conventional definition of sub-Gaussian); we would expect it to satisfy an MGF bound corresponding to $\mathscr{N}(\mathbb{E}Y, \sigma^2)$, which is different from the one above for $\mathscr{N}(0,\sigma^2)$. Anyway, $$\mathbb{E}e^{\lambda(Y - \mathbb{E}Y)} = e^{-\lambda \mathbb{E}Y} \mathbb{E}e^{\lambda Y} \le e^{-\lambda \mathbb{E}Y} e^{\frac{1}{2}\lambda^2 \sigma^2} = \exp\left( \frac{1}{2}\lambda^2\left(\sigma^2 - \frac{2 \mathbb{E}Y}{\lambda}\right) \right)\,.$$ So if we want to show that this is sub-Gaussian, there needs to exist a constant $\tau >0$, such that for all $\lambda \in \mathbb{R}$, one has $$ \tau^2 \ge \sigma^2 - \frac{2 \mathbb{E}Y}{\lambda} \,,$$ which is clearly impossible, since as one "ranges over" all possible values of $\lambda \in \mathbb{R}$, $-\frac{1}{\lambda}$ is a function of $\lambda$ which is unbounded from above. Note also that the tail bound definitions are in terms of $t \ge 0$, i.e. they implicitly assume/require that the random variable is centered around $0$, i.e. has zero expectation. Thus heuristically we should also expect the tail bound definitions to fail when $\mathbb{E}Y \not=0$, or at least become very messy. 
One consequence of the tail bound definition is that $\mathbb{P}(|Y| > t) \le \exp\left( \frac{-t^2}{2\sigma^2} \right)$ for all $t \ge 0$. If $Y - \mathbb{E}Y$ is also to be sub-Gaussian, then we would need that $$\mathbb{P}(|Y - \mathbb{E}Y| > t) \le \exp\left( \frac{-t^2}{2\sigma^2} \right) \quad \text{for all }t \ge 0 \,. $$ Now the event $\{ | Y - \mathbb{E}Y| > t\}$ is equal to the event $$ \{ Y > t + \mathbb{E}Y \} \cup \{ Y < -(t - \mathbb{E}Y) \} \,. $$ Even if it is possible for the required bound on the probability of these events, for all $t \ge 0$, to hold if $Y$ were sub-Gaussian but $\mathbb{E}Y \not=0$ (I'm not sure how to prove or disprove that it is or isn't though - intuitively, since the events aren't centered at $0$, but range over all possible values of $t \ge 0$, conceivably we might get two contradictory bounds from two different values of $t$), the fact that it holds would not follow from the assumption that $Y$ was sub-Gaussian. This also means (or at least suggests) that in general we cannot conclude that translates of sub-Gaussian random variables will still be sub-Gaussian. This is all perhaps somewhat disingenuous though, since even to the extent that any of it is true, I have yet to acknowledge how these arguments all suggest the same way to generalize the definition of sub-Gaussian. Namely, I claim that the following three definitions are equivalent: $Y$ is sub-Gaussian sensu lato if and only if $(Y - \mathbb{E}Y)$ is sub-Gaussian sensu stricto. $Y$ is sub-Gaussian sensu lato if and only if its moment generating function is bounded by the moment generating function of $\mathscr{N}(\mathbb{E}Y, \sigma^2)$ for some $\sigma > 0$. $Y$ is sub-Gaussian sensu lato if and only if for some $\sigma >0$, for all $t \ge 0$, $$\mathbb{P}(Y > \mathbb{E}Y + t) \le \exp\left( \frac{-t^2}{2\sigma^2} \right) \quad \text{and} \quad \mathbb{P}(Y < \mathbb{E}Y - t) \le \exp\left( \frac{-t^2}{2\sigma^2} \right) \,. $$
Can sub-Gaussian distributions have non-zero mean?
The first comment on this question is incorrect strictly speaking; the proposition you link to shows that having zero mean is a necessary condition for being sub-Gaussian. In some sense, you can still
Can sub-Gaussian distributions have non-zero mean? The first comment on this question is incorrect strictly speaking; the proposition you link to shows that having zero mean is a necessary condition for being sub-Gaussian. In some sense, you can still get a useful notion for non-centered random variables to be sub-Gaussian, but only if you change the definition. If you look at one of the other sources you linked to, if you look at the "converse proposition", which shows that the tail bounds imply the MGF bound up to a multiplicative constant in the exponent, you will see that the assumption that $\mathbb{E}Y=0$ is used implicitly. (It appears to be a typo that Lemma 1.5 omits the assumption in its statement, but it is clearly used in the proof, and when my professor assigned it as a homework assignment, he specifically made sure to add the assumption to the problem.) In order for this Taylor expansion of the MGF to be correct: $$\mathbb{E}e^{\lambda Y} = 1 + \sum\limits_{k=2}^{\infty} \frac{\lambda^k \mathbb{E}Y^k}{k!} \,, $$ for all $\lambda \in \mathbb{R}$, we need that $\mathbb{E}Y=0$. Admittedly the assumption does not appear to be used in an essential way in that lemma, since we can still bound $\mathbb{E}Y$ by $\mathbb{E}|Y| \le \sigma \sqrt{2\pi}$, and this doesn't appear large enough to prevent the upper bounds user later in the proof from working. To show more convincingly that the definition of sub-Gaussian as stated just doesn't work when $\mathbb{E}Y\not=0$, let's try to show that if $\mathbb{E}Y \not=0$ and $Y \in SG(\sigma^2)$, then $(Y - \mathbb{E}Y) \in SG(\sigma^2)$, which we would need to be true for it to be the case that , for any mean-zero sub-Gaussian $X$, $X+b$ is also sub-Gaussian for any constant $b \not=0$. The proof founders on an irresolvable problem. Now we definitionally have that $(Y - \mathbb{E}Y) \in SG(\sigma^2) \iff \forall \lambda \in \mathbb{R}, \mathbb{E}e^{\lambda(Y-\mathbb{E})} \le \exp(\frac{1}{2} \sigma^2 \lambda^2)$. Before going any further, let's just note that we shouldn't expect this to work out well. $\exp(\frac{1}{2} \sigma^2 \lambda^2)$ is the MGF bound for a Gaussian centered at 0. So even if we had a "sub-Gaussian" random variable $Y$ which wasn't centered at $0$, we still wouldn't expect it to satisfy this MGF bound (thus it couldn't satisfy the conventional definition of sub-Gaussian), we would expect it to an MGF bound corresponding to $\mathscr{N}(\mathbb{E}Y, \sigma^2)$, which is different than the one above, for $\mathscr{N}(0,\sigma^2)$. Anyway, $$\mathbb{E}e^{\lambda(Y - \mathbb{E}Y)} = e^{-\lambda \mathbb{E}Y} \mathbb{E}e^{\lambda Y} \le e^{-\lambda \mathbb{E}Y} e^{\frac{1}{2}\lambda^2 \sigma^2} = \exp\left( \frac{1}{2}\lambda^2\left(\sigma^2 - \frac{2 \mathbb{E}Y}{\lambda}\right) \right)\,.$$ So if we want to show that this is sub-Gaussian, there needs to exists a constant $\tau >0$, such that for all $\lambda \in \mathbb{R}$, one has $$ \tau^2 \ge \sigma^2 - \frac{2 \mathbb{E}Y}{\lambda} \,,$$ which is clearly impossible, since as one "ranges over" all possible values of $\lambda \in \mathbb{R}$, $-\frac{1}{\lambda}$ is a function of $\lambda$ which is unbounded from above. Note also that the tail bound definitions are in terms of $t \ge 0$, i.e. they implicitly assume/require that the random variable is centered around $0$, i.e. has zero expectation. Thus heuristically we should also expect the tail bound definitions to fail when $\mathbb{E}Y \not=0$, or at least become very messy. 
One consequence of the tail bound definition is that $\mathbb{P}(|Y| > t) \le \exp\left( \frac{-t^2}{2\sigma^2} \right)$ for all $t \ge 0$. If $Y - \mathbb{E}Y$ is also to be sub-Gaussian, then we would need that $$\mathbb{P}(|Y - \mathbb{E}Y| > t) \le \exp\left( \frac{-t^2}{2\sigma^2} \right) \quad \text{for all }t \ge 0 \,. $$ Now the event $\{ | Y - \mathbb{E}Y| > t\}$ is equal to the event $$ \{ Y > t + \mathbb{E}Y \} \cup \{ Y < -(t - \mathbb{E}Y) \} \,. $$ Even if it is possible for the required bound on the probability of these events, for all $t \ge 0$, to hold if $Y$ were sub-Gaussian but $\mathbb{E}Y \not=0$ (I'm not sure how to prove or disprove that it is or isn't though - intuitively, since the events aren't centered at $0$, but range over all possible values of $t \ge 0$, conceivably we might get two contradictory bounds from two different values of $t$), the fact that it holds would not follow from the assumption that $Y$ was sub-Gaussian. This also means (or at least suggests) that in general we cannot conclude that translates of sub-Gaussian random variables will still be sub-Gaussian. This is all perhaps somewhat disingenuous though, since even to the extent that any of it is true, I have yet to acknowledge how these arguments all suggest the same way to generalize the definition of sub-Gaussian. Namely, I claim that the following three definitions are equivalent: $Y$ is sub-Gaussian sensu lato if and only if $(Y - \mathbb{E}Y)$ is sub-Gaussian sensu stricto. $Y$ is sub-Gaussian sensu lato if and only if its moment generating function is bounded by the moment generating function of $\mathscr{N}(\mathbb{E}Y, \sigma^2)$ for some $\sigma > 0$. $Y$ is sub-Gaussian sensu lato if and only if for some $\sigma >0$, for all $t \ge 0$, $$\mathbb{P}(Y > \mathbb{E}Y + t) \le \exp\left( \frac{-t^2}{2\sigma^2} \right) \quad \text{and} \quad \mathbb{P}(Y < \mathbb{E}Y - t) \le \exp\left( \frac{-t^2}{2\sigma^2} \right) \,. $$
Can sub-Gaussian distributions have non-zero mean? The first comment on this question is incorrect strictly speaking; the proposition you link to shows that having zero mean is a necessary condition for being sub-Gaussian. In some sense, you can still
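As a quick check (my own addition, the standard Chernoff argument), definition 2 above does imply the upper-tail half of definition 3: if $\mathbb{E}e^{\lambda Y} \le \exp(\lambda \mathbb{E}Y + \tfrac{1}{2}\lambda^2\sigma^2)$ for all $\lambda$, then for any $\lambda > 0$ and $t \ge 0$, $$\mathbb{P}(Y > \mathbb{E}Y + t) \le e^{-\lambda(\mathbb{E}Y + t)}\,\mathbb{E}e^{\lambda Y} \le \exp\left(-\lambda t + \tfrac{1}{2}\lambda^2\sigma^2\right),$$ and taking $\lambda = t/\sigma^2$ gives the bound $\exp(-t^2/(2\sigma^2))$; the lower-tail bound follows by applying the same argument to $-Y$.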
52,561
Average absolute value of a coordinate of a random unit vector?
This problem has a very nice geometrical interpretation if we can assume that the distribution $f(\vec{x})$ is constant over the surface of the $(n-1)$-unit-sphere in $n$-dimensional space. Then $f(\vec{x} \wedge (\vert x_i \vert=a))$ relates to the surface of two slices (at the negative and the positive coordinate) of the $(n-1)$-sphere in $n$-dimensional space; each slice is an $(n-2)$-dimensional sphere with radius $r = \sqrt{1-a^2}$. The area of such a slice is related to the area $A_{n-2}$ of the $(n-2)$-sphere (a lower-dimensional "area", e.g. a curve length when $n=3$) multiplied with the distance $ds$ which is perpendicular to $\vec{r}$. The fraction of this slice must be related to the total, thus we take the ratio with the area $A_{n-1}$ of the full unit sphere. We have the area of the slice: $$A_{slice} = A_{n-2}(r) ds$$ And the relative area of the slice is: $$\frac{A_{slice}}{A_{total}} = \frac{A_{n-2}(r)}{A_{n-1}(1)} ds$$ with $$A_{n-2}(r) = \frac{2\pi^{\frac{n-1}{2}}}{\Gamma\left(\frac{n-1}{2}\right)}r^{n-2}$$ $$A_{n-1}(1) = \frac{2\pi^{\frac{n}{2}}}{\Gamma\left(\frac{n}{2}\right)}$$ $$r = \sqrt{1- x_i ^2} $$ $$\begin{array} \\ds &= \sqrt{(dr)^2+(dx_i)^2}\\ %% &= \sqrt{\left(dx_i \frac{-x_i}{\sqrt{1-x_i^2}}\right)^2+(dx_i)^2}\\ %% &= \sqrt{(dx_i)^2 \frac{x_i^2}{{1-x_i^2}}+(dx_i)^2}\\ %% &= \sqrt{(dx_i)^2 \frac{1}{{1-x_i^2}}}\\ &= dx_i {\frac{1}{\sqrt{1-x_i^2}}} \end{array}$$ Thus: $$\begin{array}\\ f(\vert x_i \vert) &= \frac{2 \Gamma\left(\frac{n}{2}\right)/\Gamma\left(\frac{n-1}{2}\right)}{\pi^{1/2}}\left(1-x_i^2\right)^{\frac{n-3}{2}}\end{array}$$ or, using the Beta function, $$\begin{array}\\ f(\vert x_i \vert) &= \frac{2\left(1-x_i^2\right)^{\frac{n-3}{2}}}{B(\frac{1}{2},\frac{n-1}{2})} \end{array}$$ which becomes for $n=3$: $f(\vert x_i \vert) = 1$. This makes sense, intuitively. At the pole the slice has smaller radius but is thicker, and at the equator the slice has larger radius but is thinner. This results in equal probability whether $x_i$ is at the pole, at the equator, or in between. Note, some help with the intuition behind the integration step: the $(n-1)$-sphere in $n$-dimensional space is a hypersurface and its intersection with $\vert x_i\vert=a$ is a hypercurve. By integrating the hypercurve along the direction $\vec{s}$, you get the hypersurface.
Average absolute value of a coordinate of a random unit vector?
This problem has a very nice geometrical interpretation if we can assume that the distribution $f(\vec{x})$ is constant over the surface of the $n-1$-unit-sphere in $n$-dimensional space. Then $f(\vec
Average absolute value of a coordinate of a random unit vector? This problem has a very nice geometrical interpretation if we can assume that the distribution $f(\vec{x})$ is constant over the surface of the $n-1$-unit-sphere in $n$-dimensional space. Then $f(\vec{x} \wedge (\vert x_i \vert=a))$ relates to the surface of two slices (at negative and positive coordinate) of the n-1 sphere in n-dimensional space, which is a (n-2)-dimensional sphere with radius $r = \sqrt{1-a^2}$. The area of this slice is related to the area (or more like a length, since it is a curve) $A_{n-2}$ of the n-2 sphere multiplied with the distance $ds$ which is perpendicular to $\vec{r}$. The fraction of this slice must be related to the total, thus we use the ratio with the area of a $A_{n-1}$ sphere. We have the area of the slice: $$A_{slice} = A_{n-2}(r) ds$$ And the relative area of the slice is: $$\frac{A_{slice}}{A_{total}} = \frac{A_{n-2}(r)}{A_{n-1}(1)} ds$$ with $$A_{n-2}(r) = \frac{2\pi^{\frac{n-1}{2}}}{\Gamma\left(\frac{n-1}{2}\right)}r^{n-2}$$ $$A_{n-1}(1) = \frac{2\pi^{\frac{n}{2}}}{\Gamma\left(\frac{n}{2}\right)}$$ $$r = \sqrt{1- x_i ^2} $$ $$\begin{array} \\ds &= \sqrt{(dr)^2+(dx_i)^2}\\ %% &= \sqrt{\left(dx_i \frac{-x_i}{\sqrt{1-x_i^2}}\right)^2+(dx_i)^2}\\ %% &= \sqrt{(dx_i)^2 \frac{x_i^2}{{1-x_i^2}}+(dx_i)^2}\\ %% &= \sqrt{(dx_i)^2 \frac{1}{{1-x_i^2}}}\\ &= dx_i {\frac{1}{\sqrt{1-x_i^2}}} \end{array}$$ Thus: $$\begin{array}\\ f(\vert x_i \vert) &= \frac{2 \Gamma\left(\frac{n}{2}\right)/\Gamma\left(\frac{n-1}{2}\right)}{\pi^{1/2}}\left(1-x_i^2\right)^{\frac{n-3}{2}}\end{array}$$ using the Beta function $$\begin{array}\\ f(\vert x_i \vert) &= \frac{\left(1-x_i^2\right)^{\frac{n-3}{2}}}{B(\frac{1}{2},\frac{n-1}{2})} \end{array}$$ which becomes for n=3: $f(\vert x_i \vert) = 1$ Which makes sense, intuitively. At the pole the slice has smaller radius but is thicker and at the equator the slice has larger radius but is more thin. This results in equal probability, whatever $x_i$ at the pole, the equator, or in between. Note, some help about the intuition behind the integration step: The n-1 sphere in n-dimensional space is a hypersurface and intersection with $\vert x_i\vert=a$ is a hypercurve. By integrating the hypercurve, along the direction $\vec{s}$, you get the hypersurface
Average absolute value of a coordinate of a random unit vector? This problem has a very nice geometrical interpretation if we can assume that the distribution $f(\vec{x})$ is constant over the surface of the $n-1$-unit-sphere in $n$-dimensional space. Then $f(\vec
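A quick Monte Carlo check of the $n=3$ case (my own sketch; it assumes the usual way of sampling uniformly on the sphere, by normalising a standard Gaussian vector):
set.seed(1)
x <- matrix(rnorm(3e5), ncol = 3)
x <- x / sqrt(rowSums(x^2))     # points uniformly distributed on the unit sphere in 3 dimensions
ks.test(abs(x[, 1]), "punif")   # compare |x_i| with Uniform(0, 1); under the result above, f(|x_i|) = 1, so the fit should be good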
52,562
Average absolute value of a coordinate of a random unit vector?
The answer is $1/2$. This paper has the probability density $f_n(x_i)$ of $x_i$ for a vector inside an n-dimensional hypersphere. You're interested in the vector from the origin to a random point on the surface of a hypersphere. To get the surface of a unit hypersphere you simply take a derivative along the radius then fix the radius at 1. Therefore, you can use $2f_{n-2}(x_i)$ from the paper as a density for your problem and take an integral from 0 to 1: $$\int_0^1 2f_{n-2}x dx=\int_0^1 f_{n-2} dx^2$$ The equation for $f_n$ from the paper: $$\frac{\Gamma(n/2+1)}{\Gamma((n+1)/2)}\frac{(\sqrt{1-x^2})^{n-1}}{\sqrt\pi}$$ Hence, the density of $|x_i|$ is given by: $$\tilde f_n(|x|)=2\frac{\Gamma((n-2)/2+1)}{\Gamma((n-1)/2)}\frac{(\sqrt{1-x^2})^{n-3}}{\sqrt\pi}$$ For the three-dimensional case you get: $$\tilde f_3(|x|)=\frac{2\Gamma(3/2)}{\sqrt\pi}=1$$ You get the mean absolute value by integrating: $$\int_0^1\tilde f_3(|x|)xdx=\int_0^1xdx=1/2$$ Two points are worth noting. First, it's easy to calculate the integral for any $n$ by the substitution $z=x^2$: $$\frac{2 \Gamma(\frac n 2)}{(n-1)\Gamma(\frac{n-1}{2})\sqrt\pi}$$ You can see that the average absolute value of the coordinate converges to zero as dimensionality increases, as shown in the plot of the average vs. the number of dimensions $n$. Second, the appropriately rescaled densities converge to the standard normal density: $$\frac{f_n(v/\sqrt{n+2})}{\sqrt{n+2}}\to_{n\to\infty}\mathcal{N}(0,1)$$
Average absolute value of a coordinate of a random unit vector?
The answer is $1/2$ This paper has the probability density $f_n(x_i)$ of $x_i$ for the vector inside an n-dimensional hypersphere. You're interested in the vector from origin to the random point on a
Average absolute value of a coordinate of a random unit vector? The answer is $1/2$ This paper has the probability density $f_n(x_i)$ of $x_i$ for the vector inside an n-dimensional hypersphere. You're interested in the vector from origin to the random point on a surface of a hypersphere. To get the surface of a unit hypersphere you simply take a derivative along the radius then fix the radius at 1. Therefore, you can use $2f_{n-2}(x_i)$ from the paper as a density for your problem and take an integral from 0 to 1: $$\int_0^1 2f_{n-2}x dx=\int_0^1 f_{n-2} dx^2$$ The equations for $f_n$ from the paper: $$\frac{\Gamma(n/2+1)}{\Gamma((n+1)/2)}\frac{(\sqrt{1-x^2})^{n-1}}{\sqrt\pi}$$ Hence, the density of the $|x_i|$ is given by: $$\tilde f_n(|x|)=2\frac{\Gamma((n-2)/2+1)}{\Gamma((n-1)/2)}\frac{(\sqrt{1-x^2})^{n-3}}{\sqrt\pi}$$ For three dimensional case you get: $$\tilde f_3(|x|)=\frac{2\Gamma(3/2)}{\sqrt\pi}=1$$ You get the mean absolute value by integrating: $$\int_0^1\tilde f_3(|x|)xdx=\int_0^1xdx=1/2$$ Two points worth noting. First, it's easy to calculate the integral for any $n$ by substitution $z=x^2$ : $$\frac{2 \Gamma(\frac n 2)}{(n-1)\Gamma(\frac{n-1}{2})\sqrt\pi}$$ You can see now that the average absolute value of the coordinate converges to zero as dimensionality increases as shown on the plot of the average vs. $n$ number of dimensions: Second, the sequence of functions on these densities converges to the normal distribution: $$\frac{f_n(v/\sqrt{n+2})}{\sqrt{n+2}}\to_{n\to\infty}\mathcal{N}(0,1)$$
Average absolute value of a coordinate of a random unit vector? The answer is $1/2$ This paper has the probability density $f_n(x_i)$ of $x_i$ for the vector inside an n-dimensional hypersphere. You're interested in the vector from origin to the random point on a
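A short numerical check of the general formula above (my own sketch, again sampling the sphere by normalising Gaussian vectors; $n = 10$ is an arbitrary choice):
mean_abs_coord <- function(n) 2 * gamma(n / 2) / ((n - 1) * gamma((n - 1) / 2) * sqrt(pi))
mean_abs_coord(3)               # 0.5, the answer above
set.seed(2)
n <- 10
x <- matrix(rnorm(2e5 * n), ncol = n)
x <- x / sqrt(rowSums(x^2))     # uniform points on the unit sphere in 10 dimensions
c(formula = mean_abs_coord(n), simulated = mean(abs(x[, 1])))   # the two agree closely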
52,563
In model selection with $Y$ as the outcome and $ X = (X_1, \ldots, X_p)$ the explanatory variables, why do we care about $F_{Y|X}$?
A regression model or a GLM (or indeed any number of other models) is at heart a model of the conditional distribution, generally including an explicit description of the conditional mean in terms of the parameters. When you're estimating parameters in (say) a linear regression, you're actually working out how (in your model) the conditional mean of $Y$ changes as the things you're conditioning on (the $x$'s, the independent variables, the predictors) change. In a regression model other aspects of the conditional distribution are also defined -- the conditional variance, for example, is often assumed to be constant. Similarly, with a Poisson GLM, when I say that $Y|x$ is Poisson, with $$\mu(x) = E(Y|x) = \exp(\beta_0+\beta_1x)\,,$$ I am explicitly modelling the conditional distribution by writing a model for the conditional mean of $Y$ -- that's what the parameters $\beta_0$ and $\beta_1$ define; then the conditional distribution follows from the fact that once you have the mean of the Poisson, you have its distribution: Here the model is that at any given value of $x$ (i.e. conditional on $x$) the $Y$ value has a Poisson distribution. Note that the marginal distribution of $Y$ is not Poisson. The blue points (some lying one on another) are - according to the model - generated from a Poisson distribution at each value of $x$ (the conditional means of which are related by the model). The "+" symbols mark the estimates of these conditional means at each $x$ value for which we had data.
In model selection with $Y$ as the outcome and $ X = (X_1, \ldots, X_p)$ the explanatory variables,
A regression model or a GLM (or indeed nay number of other models) are at heart a model of the conditional distribution, generally including an explicit description of the conditional mean in terms of
In model selection with $Y$ as the outcome and $ X = (X_1, \ldots, X_p)$ the explanatory variables, why do we care about $F_{Y|X}$? A regression model or a GLM (or indeed nay number of other models) are at heart a model of the conditional distribution, generally including an explicit description of the conditional mean in terms of the parameters. When you're estimating parameters in (say) a linear regression, you're actually working out how (in your model) the conditional mean of $Y$ changes as the things you're conditioning on (the $x$'s, the independent variables, the predictors) change. In a regression model other aspects of the conditional distribution are also defined -- the conditional variance for example, may often be assumed to be constant. Similarly, with a Poisson GLM, when I say that $Y|x$ is Poisson, with $$\mu(x) = E(Y|x) = \exp(\beta_0+\beta_1x)\,,$$ I am explicitly modelling the conditional distribution by writing a model for the conditional mean of $Y$ -- that's what the parameters $\beta_0$ and $\beta_1$ define; then the conditional distribution follows from the fact that once you have the mean of the Poisson, you have its distribution: Here the model is that at any given value of $x$ (i.e. conditional on $x$) the $Y$ value has a Poisson distribution. Note that the marginal distribution of $Y$ is not Poisson. The blue points (some laying one on another) are - according to the model - generated from a Poisson distribution at each value of $x$ (the conditional means of which are related by a model). The "+" symbols mark the estimates of these conditional means at each $x$ value for which we had data.
In model selection with $Y$ as the outcome and $ X = (X_1, \ldots, X_p)$ the explanatory variables, A regression model or a GLM (or indeed nay number of other models) are at heart a model of the conditional distribution, generally including an explicit description of the conditional mean in terms of
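A minimal worked version of the Poisson GLM described above (my own sketch with simulated data; the coefficient values 0.5 and 0.8 are made up for illustration):
set.seed(3)
x  <- runif(200, 0, 2)
mu <- exp(0.5 + 0.8 * x)          # the modelled conditional mean E(Y | x)
y  <- rpois(200, mu)              # conditional on x, Y is Poisson; the marginal of y is not
fit <- glm(y ~ x, family = poisson)
coef(fit)                         # estimates of beta_0 and beta_1
head(fitted(fit))                 # estimated conditional means, one per observed x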
52,564
In model selection with $Y$ as the outcome and $ X = (X_1, \ldots, X_p)$ the explanatory variables, why do we care about $F_{Y|X}$?
Suppose you want to build a predictive model. That is, you have some data $(X, Y)$, and you want to use knowledge of $X$ to make a prediction for the value of $Y$. Practically, you want to construct a function of $X$, so that $f(X)$ can be in some way construed as a "prediction for $Y$". The thing you would most hope for is that you can build an $f$ that satisfies $f(X) = Y$ for all possibilities of $X$ and $Y$, a perfect link between the two phenomena. In practice, this is just not possible. Whether it is because of limits in our knowledge, deficiencies in our measurement, or some actual randomness in the process, we do not often believe that we can build a perfect link between $X$ and $Y$. Given this reality, the conditional distribution $Y \mid X$ is simply a mathematical tool that is used to describe our state of knowledge about $Y$ once we have complete knowledge of $X$. So if we construe $f(X)$ as a prediction of $Y$, then $f(X)$ must tell us something about the conditional distribution $Y | X$. Sometimes we try to predict the mean of the conditional distribution, sometimes the median, sometimes something else, but a predictive model is, at the end of the day, always trying to tell us something about the conditional distribution.
In model selection with $Y$ as the outcome and $ X = (X_1, \ldots, X_p)$ the explanatory variables,
Suppose you want to build a predictive model. That is, you have some data $(X, Y)$, and you want to use knowledge of $X$ to make a prediction for the value of $Y$. Practically, you want to construct
In model selection with $Y$ as the outcome and $ X = (X_1, \ldots, X_p)$ the explanatory variables, why do we care about $F_{Y|X}$? Suppose you want to build a predictive model. That is, you have some data $(X, Y)$, and you want to use knowledge of $X$ to make a prediction for the value of $Y$. Practically, you want to construct a function of $X$, so that $f(X)$ can be in some way construed as a "prediction for $Y$". The thing you would most hope for is that you can build an $f$ that satisfies $f(X) = Y$ for all possibilities of $X$ and $Y$, a perfect link between two phenomina. In practice, this is just not possible. Whether it because of boundaries in our knowledge, deficiencies in our measurement, or some actual randomness in a process, we do not often believe that we can build a perfect link between $X$ and $Y$. Given this reality, the conditional distribution $Y \mid X$ is simply a mathematical tool that is used to describe our state of knowledge about $Y$ once we have complete knowledge of $X$. So if we construe $f(X)$ as a prediction of $Y$, then $f(X)$ must tell us something about the conditional distribution $Y | X$. Sometimes we try to predict the mean of the conditional distribution, sometimes the median, sometimes something else, but a predictive model is, at the end of the day, always trying to tell us something about the conditional distribution.
In model selection with $Y$ as the outcome and $ X = (X_1, \ldots, X_p)$ the explanatory variables, Suppose you want to build a predictive model. That is, you have some data $(X, Y)$, and you want to use knowledge of $X$ to make a prediction for the value of $Y$. Practically, you want to construct
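A small sketch of the last point (my own illustration): which feature of the conditional distribution the prediction targets depends on the loss you minimise. Here the conditional distribution at a fixed $X$ is taken, arbitrarily, to be a skewed Exponential(1):
set.seed(4)
y_given_x <- rexp(1e5, rate = 1)                                   # draws from Y | X = x
optimise(function(f) mean((y_given_x - f)^2), c(0, 10))$minimum    # ~1: squared loss targets the conditional mean
optimise(function(f) mean(abs(y_given_x - f)), c(0, 10))$minimum   # ~log(2): absolute loss targets the conditional median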
52,565
If the Pearson r is .1, is there a weak relationship between the two variables?
Let me again post the same quote from the web: I once asked a chemist who was calibrating a laboratory instrument to a standard what value of the correlation coefficient she was looking for. “0.9 is too low. You need at least 0.98 or 0.99.” She got the number from a government guidance document. I once asked an engineer who was conducting a regression analysis of a treatment process what value of the correlation coefficient he was looking for. “Anything between 0.6 and 0.8 is acceptable.” His college professor told him this. I once asked a biologist who was conducting an ANOVA of the size of field mice living in contaminated versus pristine soils what value of the correlation coefficient he was looking for. He didn’t know, but his cutoff was 0.2 based on the smallest size difference his model could detect with the number of samples he had. It is true that correlation is a value between $-1$ and $+1$, where $0$ is "no correlation", $-1$ is perfect negative correlation and $+1$ is perfect positive correlation. However, besides that, there is no such thing as an objectively "strong" or "weak" correlation. It depends on the kind of data you are dealing with what magnitudes of correlation you can expect.
If the Pearson r is .1, is there a weak relationship between the two variables?
Let me again post the same quote from the web: I once asked a chemist who was calibrating a laboratory instrument to a standard what value of the correlation coefficient she was looking for. “0.
If the Pearson r is .1, is there a weak relationship between the two variables? Let me again post the same quote from the web: I once asked a chemist who was calibrating a laboratory instrument to a standard what value of the correlation coefficient she was looking for. “0.9 is too low. You need at least 0.98 or 0.99.” She got the number from a government guidance document. I once asked an engineer who was conducting a regression analysis of a treatment process what value of the correlation coefficient he was looking for. “Anything between 0.6 and 0.8 is acceptable.” His college professor told him this. I once asked a biologist who was conducting an ANOVA of the size of field mice living in contaminated versus pristine soils what value of the correlation coefficient he was looking for. He didn’t know, but his cutoff was 0.2 based on the smallest size difference his model could detect with the number of samples he had. It is true that correlation is a value between $-1$ and $+1$, where $0$ is "no correlation", $-1$ is perfect, negative correlation and $+1$ is perfect, positive correlation. However besides that, there is no such a thing as objectively "strong", or "week" correlation. It depends on the kind of data you are dealing with what magnitudes of correlation can you expect.
If the Pearson r is .1, is there a weak relationship between the two variables? Let me again post the same quote from the web: I once asked a chemist who was calibrating a laboratory instrument to a standard what value of the correlation coefficient she was looking for. “0.
52,566
If the Pearson r is .1, is there a weak relationship between the two variables?
It depends on the sample size. If the sample size is small the estimate may not be significantly different from 0. On the other hand, if the sample size is large it may be statistically different from 0. In the latter case you might say that it is significant but to call it strong is a matter of judgement and depends on the subject matter.
If the Pearson r is .1, is there a weak relationship between the two variables?
It depends on the sample size. If the sample size is small the estimate may not be significantly different from 0. On the other hand, if the sample size is large it may be statistically different fro
If the Pearson r is .1, is there a weak relationship between the two variables? It depends on the sample size. If the sample size is small the estimate may not be significantly different from 0. On the other hand, if the sample size is large it may be statistically different from 0. In the latter case you might say that it is significant but to call it strong is a matter of judgement and depends on the subject matter.
If the Pearson r is .1, is there a weak relationship between the two variables? It depends on the sample size. If the sample size is small the estimate may not be significantly different from 0. On the other hand, if the sample size is large it may be statistically different fro
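A small sketch of how the same $r = 0.1$ plays out at different sample sizes (my own illustration, using the usual $t$-statistic for a Pearson correlation):
r <- 0.1
for (n in c(50, 5000)) {
  t_stat <- r * sqrt((n - 2) / (1 - r^2))    # t = r * sqrt((n - 2) / (1 - r^2))
  p_val  <- 2 * pt(-abs(t_stat), df = n - 2)
  cat("n =", n, " t =", round(t_stat, 2), " p =", signif(p_val, 3), "\n")
}
# at n = 50 the correlation is nowhere near significant; at n = 5000 the p-value is tiny,
# yet r = 0.1 still describes the same weak association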
52,567
If the Pearson r is .1, is there a weak relationship between the two variables?
To add to the other answers, you might also have a nonlinear dependency between the two variables that Pearson's $r$ does not capture. In that case you might have a look at the Maximal Information Coefficient (MIC): here. In any case, the best thing you can probably do with two variables is to look at their scatter plot, so that you can judge for yourself whether the relationship is weak or strong.
If the Pearson r is .1, is there a weak relationship between the two variables?
To add on the other answers, you might well also have a nonlinear dependency between the two variables that the Pearson's $r$ does not capture. Then you might have a look at the Maximal Information C
If the Pearson r is .1, is there a weak relationship between the two variables? To add on the other answers, you might well also have a nonlinear dependency between the two variables that the Pearson's $r$ does not capture. Then you might have a look at the Maximal Information Coefficient MIC: here. Anyway, the best thing you can probably do with two variables is to have a look at their scatter plot. So you can judge if you either have a weak or strong relationship yourself.
If the Pearson r is .1, is there a weak relationship between the two variables? To add on the other answers, you might well also have a nonlinear dependency between the two variables that the Pearson's $r$ does not capture. Then you might have a look at the Maximal Information C
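A quick sketch of a dependence that Pearson's $r$ misses (my own illustration; computing the MIC itself needs an external package, so this only shows the failure of $r$):
set.seed(5)
x <- runif(1000, -1, 1)
y <- x^2                   # y is a deterministic (but nonlinear, non-monotone) function of x
cor(x, y)                  # close to 0: Pearson's r sees almost nothing
cor(abs(x), y)             # large: the relationship reappears once the nonlinearity is accounted for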
52,568
Why would someone plot variance normalized by the mean?
Variance over mean is known as the index of dispersion. This can be useful when comparing two random variables with different means, in order to account for the larger variance that comes with larger means. Example: suppose two fields have some sheep, and I hire 2 "counters" to count the sheep in each field. The counters count the sheep 10 times for their field and report the number of sheep counted. If the first counter gets a field with 50 sheep, then for that counter the variability across their counts will naturally be low. On the other hand, if the second counter's field has 1000 sheep, then the variability of their counting will almost certainly be higher than the first counter's. So in this case, we can't just study the variance for the two counters and declare the first counter to be better since he/she had a lower variance. We have to account for the number of sheep in their respective fields. This can be done by scaling the variance by the mean, i.e., the index of dispersion.
Why would someone plot variance normalized by the mean?
Variance over mean is known as the Index of dispersion. This can be useful when comparing two random variables with different means, in order to account for larger variance for larges means. Example:
Why would someone plot variance normalized by the mean? Variance over mean is known as the Index of dispersion. This can be useful when comparing two random variables with different means, in order to account for larger variance for larges means. Example: suppose two fields have some sheep, and I hire 2 "counters" to count the sheep in each field. The counters count the sheep 10 times for their field and report the number of sheep counted. If the first counter gets a field with 50 sheep, then for that counter the variability across their counts will be naturally low. On the other hand, for the second counter if their field has 1000 sheep, then the variability of their counting will most definitely be higher than the first counter. So in this case, we can't just study the variance for the two counters and declare the first counter to be better since he/she had a lower variance. We have to account for the number of sheep in their respective fields. This can be done by weighting the variance by the mean, i.e., index of dispersion.
Why would someone plot variance normalized by the mean? Variance over mean is known as the Index of dispersion. This can be useful when comparing two random variables with different means, in order to account for larger variance for larges means. Example:
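A small sketch of the sheep-counting example (my own illustration, treating each counter's repeated counts as roughly Poisson):
set.seed(6)
counts_small <- rpois(10, 50)     # counter 1: field with about 50 sheep
counts_large <- rpois(10, 1000)   # counter 2: field with about 1000 sheep
c(var(counts_small), var(counts_large))       # raw variances differ hugely
c(var(counts_small) / mean(counts_small),     # indices of dispersion are on the same
  var(counts_large) / mean(counts_large))     # scale, so the counters can be compared fairly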
52,569
t test with drastically different sample sizes [duplicate]
The Welch approximation t-test is designed to do the same thing as an independent-samples t-test, but without relying on the assumption that the variances are equal. It is readily available in most standard statistical software. In R, it's actually the default for the t.test function (the argument var.equal controls whether the pooled-variance independent-samples t-test or the Welch approximation is performed). Here's an interesting blog post advocating for the use of the Welch approximation regardless of whether or not you think you might be violating the assumption of equal variances. Especially if you have very unequal sample sizes, you may wish to use bootstrapping instead, since it makes much weaker assumptions about the sampling distribution of the test statistic. Here are instructions for a bootstrap t-test from another SE post.
t test with drastically different sample sizes [duplicate]
The Welch approximation t-test is designed to do the same thing as an independent samples t-test, but without relying on the assumption that the variances are equal. It is readily available in most st
t test with drastically different sample sizes [duplicate] The Welch approximation t-test is designed to do the same thing as an independent samples t-test, but without relying on the assumption that the variances are equal. It is readily available in most standard statistical software. In R, it's actually the default for the t.test function (the argument var.equal controls whether a IST or Welch approximation is performed). Here's an interesting blog post advocating for the use of a Welch approximation regardless of whether or not you think you might be violating the assumption of equal variances. Especially if you have very unequal sample sizes, you may wish to use bootstrapping instead since it doesn't make any assumptions at all about the distribution of the sample statistic. Here are instructions for a bootstrap t-test from another SE post.
t test with drastically different sample sizes [duplicate] The Welch approximation t-test is designed to do the same thing as an independent samples t-test, but without relying on the assumption that the variances are equal. It is readily available in most st
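A minimal sketch of the two calls in R (my own example with simulated, unequal-sized and unequal-variance groups):
set.seed(7)
x <- rnorm(20,  mean = 0,   sd = 1)
y <- rnorm(400, mean = 0.3, sd = 3)
t.test(x, y)                      # Welch approximation: R's default
t.test(x, y, var.equal = TRUE)    # classical pooled-variance t-test, for comparison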
52,570
t test with drastically different sample sizes [duplicate]
The t-test is not dependent on equal, similar, or even close sample sizes. A t-test can be done with any sample sizes. Go ahead and use the t-test you have. I wish I knew where people got the idea that a t-test requires equal sample sizes.
t test with drastically different sample sizes [duplicate]
The t-test is not dependent on equal, similar, or even close sample sizes. A t-test can be done with any sample sizes. Go ahead and use the t-test you have. I wish I knew where people got the idea tha
t test with drastically different sample sizes [duplicate] The t-test is not dependent on equal, similar, or even close sample sizes. A t-test can be done with any sample sizes. Go ahead and use the t-test you have. I wish I knew where people got the idea that a t-test requires equal sample sizes.
t test with drastically different sample sizes [duplicate] The t-test is not dependent on equal, similar, or even close sample sizes. A t-test can be done with any sample sizes. Go ahead and use the t-test you have. I wish I knew where people got the idea tha
52,571
What is the probability of selecting all the blue balls?
This can be reduced to a combinatorics problem. Let's make it a little more general. Suppose you have $n$ balls, of which $r$ are blue. You select $k,$ where $k > r.$ What is the total number of ways you can select $k$ balls out of $n?$ Then, what is the total number of ways you can select $k$ balls out of $n,$ where $r$ of the $k$ balls are necessarily the blue balls? Divide the latter answer by the former, and you have your answer. CLARIFICATION: The answer is easy if in the second part you think of all blue balls as a single package. Suppose we have already selected this blue package. How many ways are there to choose the remaining balls? The answer is choosing $k-r$ (not $k$, because we have already selected all $r$ blue balls and put them aside) from the $n-r$ remaining balls. In summary, the probability that we have selected all of the blue balls in our sample of $k$ is equal to $${^{n-r}C_{k-r}\over ^n C_k} = {(n-r)! k! \over (k-r)! n!}$$. For the special case of $k=n/2$ we have $${^{n-r}C_{{n\over 2}-r}\over ^n C_{n\over 2}} = {(n-r)!{n\over 2}!\over ({n\over 2} -r)! n!}$$
What is the probability of selecting all the blue balls?
This can be reduced to a combinatorics problem. Let's make it a little more general. Suppose you have $n$ balls, of which $r$ are blue. You select $k,$ where $k > r.$ What are the total number of way
What is the probability of selecting all the blue balls? This can be reduced to a combinatorics problem. Let's make it a little more general. Suppose you have $n$ balls, of which $r$ are blue. You select $k,$ where $k > r.$ What are the total number of ways you can select $k$ balls out of $n?$ Then, what are the total number of ways you can select $k$ balls out of $n,$ where $r$ of the $k$ balls are necessarily the blue balls? Divide the latter answer by the former, and you have your answer. CLARIFICATION: The answer is easy if in the second part you think of all blue balls as a single package. Then to the latter point, suppose we have already selected this blue package. What is the other possible combinations? The answer would be choosing k-r (not k, because we have already selected all r blue balls and put it aside) from n-r remaining balls. In summary the probability that we have selected all of the blue balls in our k sample is equal to $${^{n-r}C_{k-r}\over ^n C_k} = {(n-r)! k! \over (k-r)! n!}$$. For the special case of k=n/2 we have $${^{n-r}C_{{n\over 2}-r}\over ^n C_{n\over 2}} = {(n-r)!{n\over 2}!\over ({n\over 2} -r)! n!}$$
What is the probability of selecting all the blue balls? This can be reduced to a combinatorics problem. Let's make it a little more general. Suppose you have $n$ balls, of which $r$ are blue. You select $k,$ where $k > r.$ What are the total number of way
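A short sketch that evaluates the formula and checks it by brute force (my own illustration; the values $n = 10$, $r = 3$, $k = 5$ are arbitrary):
p_all_blue <- function(n, r, k) choose(n - r, k - r) / choose(n, k)
n <- 10; r <- 3; k <- 5
p_all_blue(n, r, k)                                 # 1/12
set.seed(8)
mean(replicate(1e5, all(1:r %in% sample(n, k))))    # labelling balls 1..r as blue; close to 1/12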
52,572
What is the probability of selecting all the blue balls?
It's a typical brain teaser. Invert the question: what is the probability that none of the blue balls are scooped out of the bag? All of a sudden it's easy to answer! $$\frac{n-r}{n}\frac{n-r-1}{n-1}\dots\frac{n-m+1-r}{n-m+1}=\frac{(n-m)!(n-r)!}{n!(n-r-m)!}$$ You pull the first ball, and it's not blue. How likely is that? $\frac{n-r}{n}$. So, you keep pulling balls until you have $m$ balls out, and none of the blue balls has shown up yet. That gives the answer above, which is also the answer to the original question, where $m=n/2$ (thanks to @Bridgeburners comment). Here's why this works. The original problem is formulated in terms of the balls that were pulled (chosen) from the bag: we want all the blue balls out. It's shown in the picture below. However, the next picture shows that the answer has to be the same in terms of the inverted problem about all remaining balls being red, i.e. none of them being blue. So, if the question is about the probability of all $r$ blue balls being chosen in the group of $k$ balls, then it's equivalent to a question about the probability that the $m=n-k$ balls left in the bag after $k$ balls are removed contain no blue balls. Hence, in terms of the $k$ balls taken out, the same answer can be written as: $$\frac{k!(n-r)!}{n!(k-r)!}$$ To summarize, the solution is very simple and intuitive if you re-formulate the problem in terms of the unchosen balls being all red.
What is the probability of selecting all the blue balls?
It's a typical brain teaser. Invert the question: what is the probability that none of the blue balls are scooped out of the bag? All of a sudden it's easy to answer! $$\frac{n-r}{n}\frac{n-r-1}{n-1}\
What is the probability of selecting all the blue balls? It's a typical brain teaser. Invert the question: what is the probability that none of the blue balls are scooped out of the bag? All of a sudden it's easy to answer! $$\frac{n-r}{n}\frac{n-r-1}{n-1}\dots\frac{n-m+1-r}{n-m+1}=\frac{(n-m)!(n-r)!}{n!(n-r-m)!}$$ You pull the first ball, and it's not blue. How likely it is? $\frac{n-r}{n}$ So, you keep pulling them until you got $m$ balls out, and none of the blue balls showed up yet. And you get the answer above, which is also the answer to the original question where $m=n/2$. thanks to @Bridgeburners comment Here's why this works. The original problem is formulated in terms of the balls that were pulled (chosen) from the bag: we want all the blue balls out. It's shown in the picture below. However, the next picture shows that the answer got to be the same in terms of the inverted problem about all balls being red, i.e. none of the balls being blue. So, if the question is about probability of all $r$ blue balls being chosen in the group of $k$ balls, then it's equivalent to a question about probability of $m=n-k$ balls being left in the bag after $k$ balls are removed from it. Hence, in terms of the $k$ balls out the same answer can be written as: $$\frac{k!(n-r)!}{n!(k-r)!}$$ To summarize, the solution is very simple and intuitive if you re-formulate the problem in terms of the unchosen balls being all red.
What is the probability of selecting all the blue balls? It's a typical brain teaser. Invert the question: what is the probability that none of the blue balls are scooped out of the bag? All of a sudden it's easy to answer! $$\frac{n-r}{n}\frac{n-r-1}{n-1}\
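A quick check (my own sketch) that this inverted formula agrees with the direct formula from the other answer when $m = n - k$:
none_left_blue <- function(n, r, m) choose(n - m, r) / choose(n, r)       # equivalent closed form of the product above
all_blue_drawn <- function(n, r, k) choose(n - r, k - r) / choose(n, k)   # formula from the other answer
n <- 12; r <- 4; k <- 7
c(none_left_blue(n, r, m = n - k), all_blue_drawn(n, r, k))               # identical values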
52,573
Notation for two expressions with the same distribution
Notation : $\cos(U) \stackrel d=\sin(U)$ If $X$ and $Y$ follow same distribution then mathematically you can write $X \stackrel d=Y$.
Notation for two expressions with the same distribution
Notation : $\cos(U) \stackrel d=\sin(U)$ If $X$ and $Y$ follow same distribution then mathematically you can write $X \stackrel d=Y$.
Notation for two expressions with the same distribution Notation : $\cos(U) \stackrel d=\sin(U)$ If $X$ and $Y$ follow same distribution then mathematically you can write $X \stackrel d=Y$.
Notation for two expressions with the same distribution Notation : $\cos(U) \stackrel d=\sin(U)$ If $X$ and $Y$ follow same distribution then mathematically you can write $X \stackrel d=Y$.
52,574
Is it valid to do a test only on most extreme subjects, instead of everybody?
Since you raise pre-registering a procedure in comments I thought I'd post a brief discussion related to it. Let's imagine you can avoid all of the pitfalls in the Garden of forking paths paper linked in another answer. So we're at the point of choosing which procedure to use. In what follows I'll make a number of simplifying assumptions (though generally speaking the conclusions will carry much more broadly). First, for simplicity, I'll restrict consideration to the case of a single predictor (independent variable). Let's consider two possible situations. In the first situation you're able to choose your design -- that is, what values the predictor variable (independent variable) will take. If you believe before getting the data that the relationship is weak (so power matters) and linear, the choice is between linear regression and a t-test after choosing how to place the values taken by the independent variable (predictor). If those values are based on placing the data into two extreme groups in such a way as to give the two-group t-test the best possible power, then the $x$'s will be at some extreme possible low value $x_L$ and extreme possible high value $x_U$ with nothing in between (and no ability to assess that linearity assumption). The resulting choice between tests is simple enough -- the two-group t-test and the test of the regression slope are exactly the same: they will have the same t-statistic. So nothing is gained here by doing anything but regression. In the second situation you don't choose what values are taken by your predictor, or for other reasons you don't get to place them in a way that maximizes power for the t-test. If you believe before getting the data that the relationship is weak and linear ($y=\alpha+\beta x +\epsilon$) and the usual regression assumptions hold, the choice between linear regression and a t-test based on splitting the data into three groups (not necessarily all equal sized, but with their sizes specified before seeing the data), leaving out the middle group and doing a t-test on the outer pair of groups, is simple: the linear regression has more power. Let us investigate why this is obvious. Note that instead of doing the t-test on the difference in two group means ($\bar y_U-\bar y_L$), if we have the $x$ values we can compute the expected difference under the linear assumption. It is some multiple of $\beta$ (which multiple depends on the $x$'s we have). As a result we can scale the difference (as a function of the $x$-values in the two groups) so that the scaled difference has an expected value of $\beta$, the population slope of the regression line. This won't change the t-ratio at all, since it will scale both the numerator and the denominator of the t-statistic by the same scale-factor. [I will assume for simplicity that the placement of the $x$'s is symmetric, either by design or because the $x$'s are drawn at random from some symmetric distribution (though we won't know the x-values until we draw the sample). In this situation it will be best for the two groups to contain the same number of observations. I'll also assume that $n$ is even. Neither is necessary but the discussion is simplified.] So now we're considering two different estimators of $\beta$, both linear. One is the usual least squares estimate ($\hat \beta$) and the other has $i$ observations in each of the two groups. 
[If the IV is uniformly placed in some range, the choice of how many points to include in each group is an old solved problem; it turns out it's roughly $\frac13$. If they're normally distributed it's smaller - I believe about 27%. If the design-points are mostly near the ends it's higher, until in the extreme situation we're back in case 1 above, with 50% in each group. The peaks in power are pretty flat, so it's not critical exactly which value you use.] Immediately we can apply the Gauss-Markov theorem and know that the linear regression will outperform the two-group one -- you'll have two estimators with the same expected value but the linear regression one will have the smaller expected standard error (and so have more power). [With proportions close to the optimal one, the power gets pretty close to that from linear regression, but not so close that you'd regard it as basically a toss-up.] If you see that someone chooses to go the t-test route and gets a significant result that would not have been obtained with a full linear regression then either they were just really lucky with the way their data turned out or you have to wonder whether the procedure was really so "fully pre-registered" after all. There's another situation that modifies this discussion somewhat - the errors-in-variables case ($x$'s observed with error, sometimes called Model II regression). In this case, ordinary regression is neither optimal nor unbiased and should not be used. That would be a comparison for a different question, though.
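A small simulation sketch of the power comparison in case 2 (the slope, sample size and the one-third group fraction are arbitrary illustrative choices):

set.seed(1)
one_run <- function(n = 90, beta = 0.3) {
  x <- rnorm(n)
  y <- beta * x + rnorm(n)
  p_reg <- summary(lm(y ~ x))$coefficients["x", 4]   # p-value for the regression slope
  idx <- order(x); g <- n %/% 3                      # outer thirds of the x-distribution
  p_t <- t.test(y[idx[(n - g + 1):n]], y[idx[1:g]])$p.value
  c(regression = p_reg < 0.05, extreme_groups = p_t < 0.05)
}
rowMeans(replicate(2000, one_run()))                 # empirical power of each procedure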
Is it valid to do a test only on most extreme subjects, instead of everybody?
Since you raise pre-registering a procedure in comments I thought I'd post a brief discussion related to it. Let's imagine you can avoid all of the pitfalls in the Garden of forking paths paper linked
Is it valid to do a test only on most extreme subjects, instead of everybody? Since you raise pre-registering a procedure in comments I thought I'd post a brief discussion related to it. Let's imagine you can avoid all of the pitfalls in the Garden of forking paths paper linked in another answer. So we're at the point of that choices of what procedures to use. In what follows I'll make a number of simplifying assumptions (though generally speaking the conclusions will carry much more broadly). First, for simplicity, I'll restrict consideration to the case of a single predictor (independent variable). Let's consider two possible situations: In this situation you're able to choose your design -- that is what values the predictor variable (independent variable) will take. If you believe before getting the data that the relationship is weak (so power matters) and linear, the choice is between linear regression and a t-test after choosing how to place the values taken by the independent variable (predictor). If those values are based on placing the data into two extreme groups in such a ways as to give the two-group t-test the best possible power then the $x$'s will be at some extreme possible low value $x_L$ and extreme possible high value $x_U$ with nothing in between (and no ability to assess that linearity assumption). The resulting choice between tests is simple enough -- the two group t-test and test of regression slope are exactly the same - they will have the same t-statistic. So nothing is gained here by doing anything but regression. In this second situation you don't choose what values are taken by your predictor or for other reasons you don't get to just place them in a way that maximizes power for the t-test. If you believe before getting the data that the relationship is weak and linear ($y=\alpha+\beta x +\epsilon$) and the usual regression assumptions hold, the choice between linear regression and a t-test based on splitting the data into three groups (not necessarily all equal sized, but where their sizes are specified before seeing the data), leaving out the middle group and doing a t-test on the outer pair of groups is simple: the linear regression has more power. Let us investigate why this is obvious. Note that instead of doing the t-test on the difference in two group means ($\bar y_U-\bar y_L$), if we have the $x$ values we can compute the expected difference under the linear assumption. It is some multiple of $\beta$ (which multiple depends on the $x$'s we have). As a result we can scale the difference (as a function of the $x$-values in the two groups) so that the scaled difference has an expected value of $\beta$, the population slope of the regression line. This won't change the t-ratio at all, since it will scale both the numerator and the denominator of the t-statistic by the same scale-factor. [I will assume for simplicity that the placements of the $x$'s is symmetric either by design, or if the $x$'s are drawn at random from some symmetric distribution but we won't know the x-values until we draw the sample. In this situation it will be best for the two groups to contain the same number of observations. I'll also assume that $n$ is even. Neither is necessary but the discussion is simplified.] So now we're considering two different estimators of $\beta$, both linear. One is the usual least squares estimate ($\hat \beta$) and the other has $i$ observations in each of the two groups. 
[If the IV is uniformly placed in some range, the choice of how many points to include in each group is an old solved problem; it turns out it's roughly $\frac13$. If they're normally distributed it's smaller - I believe about 27%. If the design-points are mostly near the ends it's higher, until in the extreme situation we're back in case 1 above, with 50% in each group. The peaks in power are pretty flat, so it's not critical exactly which value you use.] Immediately we can apply the Gauss-Markov theorem and know that the linear regression will outperform the two-group one -- you'll have two estimators with the same expected value but the linear regression one will have the smaller expected standard error (and so have more power). [With proportions close to the optimal one, the power gets pretty close to that from linear regression, but not so close that you'd regard it as basically a toss-up.] If you see that someone chooses to go the t-test route and gets a significant result that would not have been obtained with a full linear regression then either they were just really lucky with the way their data turned out or you have to wonder whether the procedure was really so "fully pre-registered" after all. There's another situation that modifies this discussion somewhat - the errors-in-variables case ($x$'s observed with error, sometimes called Model II regression). In this case, ordinary regression is neither optimal nor unbiased and should not be used. That would be a comparison for a different question, though.
Is it valid to do a test only on most extreme subjects, instead of everybody? Since you raise pre-registering a procedure in comments I thought I'd post a brief discussion related to it. Let's imagine you can avoid all of the pitfalls in the Garden of forking paths paper linked
52,575
Is it valid to do a test only on most extreme subjects, instead of everybody?
This is not valid. People are only moving on to the t-tests because the regression failed to yield a significant result. Andrew Gelman refers to these choices as the "garden of forking paths," and if a researcher does enough things to the data in search of p < .05, the Type I error rate can be greatly inflated. Dichotomizing a continuous variable is likely not a good idea, but that is not the main problem here. It sounds like a researcher is using only the subjects that will confirm their hypothesis. They are not making the decision to select these people a priori, but on a post hoc basis, after looking at what the data will give them. This is p-hacking and must be avoided.
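A sketch of the Type I error inflation under a null effect, when a result is counted as significant if either the full regression or an extreme-groups t-test reaches p < .05 (the sample size and group fraction are arbitrary choices):

set.seed(1)
either_significant <- function(n = 90) {
  x <- rnorm(n); y <- rnorm(n)                       # no true relationship
  p_reg <- summary(lm(y ~ x))$coefficients["x", 4]
  idx <- order(x); g <- n %/% 3
  p_t <- t.test(y[idx[(n - g + 1):n]], y[idx[1:g]])$p.value
  min(p_reg, p_t) < 0.05                             # "significant" by either route
}
mean(replicate(5000, either_significant()))          # noticeably above the nominal 0.05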
Is it valid to do a test only on most extreme subjects, instead of everybody?
This is not valid. People are only going onto the t-tests because the regression failed to yield a significant result. Andrew Gelman refers to these choices as the "garden of forking paths," and if a
Is it valid to do a test only on most extreme subjects, instead of everybody? This is not valid. People are only going onto the t-tests because the regression failed to yield a significant result. Andrew Gelman refers to these choices as the "garden of forking paths," and if a researcher does enough things to the data in search of p < .05, the Type I error rate can be greatly inflated. Dichotomizing a continuous variable is likely not a good idea, but this does not sound like that. It seems like a researcher using only the subjects that will confirm their hypothesis. They are not making the decision for choosing these people a priori, but on a post hoc basis after looking at what the data will give them. This is p-hacking and must be avoided.
Is it valid to do a test only on most extreme subjects, instead of everybody? This is not valid. People are only going onto the t-tests because the regression failed to yield a significant result. Andrew Gelman refers to these choices as the "garden of forking paths," and if a
52,576
Is it valid to do a test only on most extreme subjects, instead of everybody?
I would comment if I could, but you may find this What is the benefit of breaking up a continuous predictor variable? useful -- Scorthi's answer in particular. Also this http://biostat.mc.vanderbilt.edu/wiki/Main/CatContinuous for a list of problems caused by categorising a continuous variable. To me what you describe sounds like "p-hacking", and results in a loss of information.
Is it valid to do a test only on most extreme subjects, instead of everybody?
I would comment if I could, but you may find this What is the benefit of breaking up a continuous predictor variable? useful -- Scorthi's answer in particular. Also this http://biostat.mc.vanderbilt.e
Is it valid to do a test only on most extreme subjects, instead of everybody? I would comment if I could, but you may find this What is the benefit of breaking up a continuous predictor variable? useful -- Scorthi's answer in particular. Also this http://biostat.mc.vanderbilt.edu/wiki/Main/CatContinuous for a list of problems caused by categorising a continuous variable. To me what you describe sounds like "p-hacking", and results in a loss of information.
Is it valid to do a test only on most extreme subjects, instead of everybody? I would comment if I could, but you may find this What is the benefit of breaking up a continuous predictor variable? useful -- Scorthi's answer in particular. Also this http://biostat.mc.vanderbilt.e
52,577
Logistic regression with poor goodness of fit (hosmer lemeshow)?
The Hosmer and Lemeshow test is obsolete, as has been discussed elsewhere on this site. See also Goodness-of-fit test in Logistic regression; which 'fit' do we want to test?. My Regression Modeling Strategies course notes at https://hbiostat.org/rms present a hopefully coherent strategy for logistic regression model specification and validation. We almost never want to trade off predictive discrimination just to make a model have a better calibration curve. Note that if the model contains only categorical variables and interactions among the variables are not needed, the model must fit the data and no calibration assessment is needed. To check for collinearity I suggest variable clustering, e.g., the R Hmisc package varclus function.
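As a sketch of what this can look like in R (assuming the Hmisc and rms packages, a data frame d with a binary outcome y, and hypothetical predictors x1–x3):

library(Hmisc); library(rms)
plot(varclus(~ x1 + x2 + x3, data = d))          # variable clustering to inspect collinearity
f <- lrm(y ~ x1 + x2 + x3, data = d, x = TRUE, y = TRUE)
plot(calibrate(f, B = 200))                      # bootstrap overfitting-corrected calibration curve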
Logistic regression with poor goodness of fit (hosmer lemeshow)?
The Hosmer and Lemeshow test is obsolete as has been discussed elsewhere on this site. See also Goodness-of-fit test in Logistic regression; which 'fit' do we want to test?. My Regression Modeling St
Logistic regression with poor goodness of fit (hosmer lemeshow)? The Hosmer and Lemeshow test is obsolete as has been discussed elsewhere on this site. See also Goodness-of-fit test in Logistic regression; which 'fit' do we want to test?. My Regression Modeling Strategies course notes at https://hbiostat.org/rms present a hopefully coherent strategy for logistic regression model specification and validation. We almost never want to trade off predictive discrimination just to make a model have a better calibration curve. Note that if the model contains only categorical variables and interactions among the variables are not needed, the model must fit the data and no calibration assessment is needed. To check co-linearities I suggest variable clustering, e.g., the R Hmisc package varclus function.
Logistic regression with poor goodness of fit (hosmer lemeshow)? The Hosmer and Lemeshow test is obsolete as has been discussed elsewhere on this site. See also Goodness-of-fit test in Logistic regression; which 'fit' do we want to test?. My Regression Modeling St
52,578
Motivation for average log-likelihood
TLDR version: If you are using a first order optimization algorithm, such as gradient ascent, using the average likelihood as your objective function stabilizes the behavior of the algorithm as the sample size changes. On the other hand, if you are using a second order optimization algorithm, such as Newton's Method, whether you use the likelihood or the normalized likelihood is irrelevant. More Details In many problems, the log likelihood can be written as $L(\theta|x) = \sum_{i = 1}^n L(\theta|x_i)$ i.e. the log likelihood is the sum of contributions from each observation. In standard gradient ascent, we update $\theta$ with $\theta^{(t+1)} = \theta^{(t)} + \alpha \frac{\partial L}{\partial \theta^{(t)}}$ Here, $\alpha$ is often referred to as the "learning rate", and in many problems it is selected without strong justification (i.e. it empirically seems to work well, but there is no proof that it is optimal). Choosing a poor learning rate can lead to very poor performance. Now, the tricky thing is that we need to realize that $\frac{\partial L}{\partial \theta} = \sum_{i = 1}^n \frac{\partial L(\theta | x_i)}{\partial \theta}$ This means that as our sample size gets larger, we take larger steps when using the unnormalized log-likelihood, even when we are the same distance away from the optimal solution. With that in mind, we can see that it should be impossible to pick an $\alpha$ that is optimal for all sample sizes. However, if we use the normalized log-likelihood, then our step size stays approximately the same as the sample size grows. Thus, $\alpha$ is likely to be less sensitive to sample size. When using Newton's Method, this is not an issue: the step is $\theta^{(t+1)} = \theta^{(t)} - H^{-1} \frac{\partial L}{\partial \theta^{(t)}}$ where $H$ is the Hessian. Because the Hessian is affected by the multiplication of normalizing constants in the same way, multiplying the objective function by $1/n$ does not change the step size.
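A small numerical illustration of the scaling argument, using a Gaussian mean model with unit variance so that the gradient of the log-likelihood at a fixed $\theta$ is just $\sum_i (y_i - \theta)$ (the sample sizes and true mean are arbitrary):

set.seed(1)
theta <- 0                                        # current parameter value; true mean is 1
grad <- function(y, theta) sum(y - theta)         # gradient of the (unnormalized) log-likelihood
for (n in c(100, 1000, 10000)) {
  y <- rnorm(n, mean = 1)
  cat(sprintf("n = %5d   unnormalized gradient = %9.1f   averaged gradient = %5.2f\n",
              n, grad(y, theta), grad(y, theta) / n))
}
# the unnormalized gradient (and hence the gradient-ascent step) grows with n,
# while the averaged gradient stays of the same order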
Motivation for average log-likelihood
TLDR version: If you are using a first order optimization algorithm, such as gradient ascent, using the average likelihood as your objective function stabilizes the behavior of algorithm as the sample
Motivation for average log-likelihood TLDR version: If you are using a first order optimization algorithm, such as gradient ascent, using the average likelihood as your objective function stabilizes the behavior of algorithm as the sample size changes. On the other hand, if you are using a second order optimization algorithm, such as Newton's Method, whether you use likelihood or normalized likelihood is irrelevant. More Details In many problems, the log likelihood can be written as $L(\theta|x) = \sum_{i = 1}^n L(\theta|x_i)$ i.e. the log likelihood is the sum of contributions from each observation. In standard gradient ascent, we update $\theta$ with $\theta^{(t+1)} = \theta^{(t)} + \alpha \frac{\partial L}{\partial \theta^{(t)}}$ Here, $\alpha$ is often referred to as the "learning rate", and in many problems, is selected without strong justification (i.e. empirically seems to work well, but no proof that is optimal). Choosing a poor learning rate can lead to very poor performance. Now, the tricky thing is that we need to realize that $\frac{\partial L}{\partial \theta} = \sum_{i = 1}^n \frac{\partial L(\theta | x_i)}{\partial \theta}$ This means that as our sample size gets larger, we take larger steps when using the unnormalized log-likelihood, even when we are the same distance away from the optimal solution. With that in mind, we can see that it should be impossible to pick an $\alpha$ that is optimal for all sample sizes. However, if we use the normalized log-likelihood, then our step size should be approximately the same size as the sample size grows. Thus, $\alpha$ is likely to be less sensitive to sample size. When using Newton's Method, this is not an issue: the step is $\theta^{(t+1)} = \theta - H^{-1} \frac{\partial L}{\partial \theta^{(t)}}$ where $H$ is the Hessian. Because the Hessian is affected by the multiplication of normalizing constants in the same way, multiplying the objective function by $1/n$ does not change the step size.
Motivation for average log-likelihood TLDR version: If you are using a first order optimization algorithm, such as gradient ascent, using the average likelihood as your objective function stabilizes the behavior of algorithm as the sample
52,579
Motivation for average log-likelihood
One reason to divide by $N$ would be to make numbers comparable over data sets with different sample sizes. Otherwise it does not seem a very important issue.
Motivation for average log-likelihood
One reason to divide by $N$ would be to make numbers comparable over data sets with different sample sizes. Otherwise it does not seem a very important issue.
Motivation for average log-likelihood One reason to divide by $N$ would be to make numbers comparable over data sets with different sample sizes. Otherwise it does not seem a very important issue.
Motivation for average log-likelihood One reason to divide by $N$ would be to make numbers comparable over data sets with different sample sizes. Otherwise it does not seem a very important issue.
52,580
How to test the statistical hypothesis that data was generated from a multinomial distribution?
This looks like a goodness-of-fit application to me. You should use the chi-square test. So if you have 3 sides, your probability vector is (0.1, 0.2, 0.7), and you have 100 trials, you would expect the outcomes to be (10, 20, 70). Use these theoretical counts to compare with your observed counts in the chi-square test. Look at https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test#Two-by-two_contingency_tables
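In R this is a single call (the observed counts are made up for illustration):

observed <- c(8, 24, 68)                          # hypothetical counts from 100 rolls
chisq.test(observed, p = c(0.1, 0.2, 0.7))        # compares against the expected (10, 20, 70)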
How to test the statistical hypothesis that data was generated from a multinomial distribution?
This looks like a goodness-of-fit application to me. You should use the chi-square test. So if you have 3 sides and your probability vector is (0.1, 0.2, 0.7), and you have 100 trials. You would expec
How to test the statistical hypothesis that data was generated from a multinomial distribution? This looks like a goodness-of-fit application to me. You should use the chi-square test. So if you have 3 sides and your probability vector is (0.1, 0.2, 0.7), and you have 100 trials. You would expect the outcomes be (10, 20, 70). Use the theoretical counts to compare with your observed counts in the chi-square test. Look at https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test#Two-by-two_contingency_tables
How to test the statistical hypothesis that data was generated from a multinomial distribution? This looks like a goodness-of-fit application to me. You should use the chi-square test. So if you have 3 sides and your probability vector is (0.1, 0.2, 0.7), and you have 100 trials. You would expec
52,581
How to test the statistical hypothesis that data was generated from a multinomial distribution?
The existing answers point out Pearson's chi-squared test. Though this is usually a fine solution, it does rely on an approximation. Pearson derived that the chi-squared statistic approximately follows a chi-squared distribution, which he used to calculate p values. He did this derivation by assuming that the multinomial random vector is actually multivariate normal. So this works well when all the counts are pretty high, but not necessarily when they are low. I want to point out four alternatives that do not rely on this approximation. 1. Enumeration One is an exact multinomial test, with the p value calculated by enumeration. I mean, you calculate a badness-of-fit statistic from your data, either the chi-squared statistic or the likelihood ratio statistic. Then, you actually go through every possible multinomial random vector, check whether the test statistic for that vector exceeds the observed one, and add up the probabilities. The total probability of equaling or exceeding the observed value is your p-value. This method is implemented in the R package XNomial, as xmulti(). 2. Monte Carlo simulation Choose a measure of badness-of-fit to use as a test statistic. Again, probably either chi-squared or likelihood ratio statistic. Simulate samples from your theoretical distribution, and count what fraction of the time the statistic equals or exceeds the observed one. That fraction is your p-value. This method is also implemented in XNomial, as xmonte(). It's also really easy to code yourself, using rmultinom in R or the equivalent in another language. 3. Plug the highest observed value into the multinomial "CDF" This only really works for a special case, where the null distribution is kind of "even", and you're testing whether the data is more "uneven" than it should be. For example, the astronomer Rudolph Wolf did an experiment where he rolled two dice 20,000 times. One die came up on side 6 most often--3932 times. The other came up on side 2 most often--3631 times. Just this information is sufficient to reject the null hypothesis that the dice are fair. Your test statistic is the number of observations of the most frequently observed side. Focusing on the second die, the probability that this number is less than observed is this, which looks kind of like a CDF: $$P(X_1 \le 3630, X_2 \le 3630, X_3 \le 3630, X_4 \le 3630, X_5 \le 3630, X_6 \le 3630)$$ The probability that all sides were observed 3630 times or fewer, see? $X_i$ here means the number of times $i$ was observed. So this is the probability that the maximum of the $X$'s is 3630 or less. So, our p value--the probability that the maximum is 3631 or more--is one minus this probability. This probability can be calculated in R using my package pmultinom. In fact it's the example code in the documentation for the "pmultinom" function. There's also another package called pmultinom. Theoretically, the other package is using an approximation whereas mine is an exact calculation, but I haven't gotten a chance to compare and see if the difference is large enough to matter. So, this is a special case, and it can also be less powerful since you're only using the information from one possible outcome, but sometimes it's enough. 4. Approximate likelihood ratio test And I guess there's a fourth possibility, which is to use a likelihood ratio test assuming 2 * log(likelihood ratio) has a chi-squared distribution. 
(where the likelihood ratio is the probability of the data given that the true probabilities are the observed frequencies, divided by the probability of the data given your hypothesized ones. This is upside-down relative to the definition on Wikipedia, but it makes more sense to me in this context, since it's high when the fit is bad, just like the chi-squared statistic.) This is also approximate, but maybe it works in different situations. Surprisingly, I can't find an R function which does this. It should be very easy to code yourself. I'm not totally sure how many degrees of freedom to use for the chi-squared distribution, though.
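As a sketch of option 2, here is a self-coded Monte Carlo version using rmultinom, with the chi-squared statistic as the badness-of-fit measure (the observed counts and null probabilities are made up for illustration):

set.seed(1)
obs <- c(3, 1, 16)                                # observed counts
p0  <- c(0.1, 0.2, 0.7)                           # hypothesized probabilities
n   <- sum(obs)
chisq_stat <- function(x) sum((x - n * p0)^2 / (n * p0))
t_obs <- chisq_stat(obs)
sims  <- rmultinom(1e5, size = n, prob = p0)      # samples from the null distribution
mean(apply(sims, 2, chisq_stat) >= t_obs)         # Monte Carlo p-value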
How to test the statistical hypothesis that data was generated from a multinomial distribution?
The existing answers point out Pearson's chi-squared test. Though this is usually a fine solution, it does rely on an approximation. Pearson derived that the chi-squared statistic approximately follow
How to test the statistical hypothesis that data was generated from a multinomial distribution? The existing answers point out Pearson's chi-squared test. Though this is usually a fine solution, it does rely on an approximation. Pearson derived that the chi-squared statistic approximately follows a chi-squared distribution, which he used to calculate p values. He did this derivation by assuming that the multinomial random vector is actually multivariate normal. So this works well when all the counts are pretty high, but not necessarily when they are low. I want to point out four alternatives that do not rely on this approximation. 1. Enumeration One is an exact multinomial test, with the p value calculated by enumeration. I mean, you calculate a badness-of-fit statistic from your data, either the chi-squared statistic or the likelihood ratio statistic. Then, you actually go through every possible multinomial random vector, check whether the test statistic for that vector exceeds the observed one, and add up the probabilities. The total probability of equaling or exceeding the observed value is your p-value. This method is implemented in the R package XNomial, as xmulti(). 2. Monte Carlo simulation Choose a measure of badness-of-fit to use as a test statistic. Again, probably either chi-squared or likelihood ratio statistic. Simulate samples from your theoretical distribution, and count what fraction of the time the statistic equals or exceeds the observed one. That fraction is your p-value. This method is also implemented in XNomial, as xmonte(). It's also really easy to code yourself, using rmultinom in R or the equivalent in another language. 3. Plug the highest observed value into the multinomial "CDF" This only really works for a special case, where the null distribution is kind of "even", and you're testing whether the data is more "uneven" than it should be. For example, the astronomer Rudolph Wolf did an experiment where he rolled two dice 20,000 times. One die came up on side 6 most often--3932 times. The other came up on side 2 most often--3631 times. Just this information is sufficient to reject the null hypothesis that the dice are fair. Your test statistic is the number of observations of the most frequently observed side. Focusing on the second die, the probability that this number is less than observed is this, which looks kind of like a CDF: $$P(X_1 \le 3630, X_2 \le 3630, X_3 \le 3630, X_4 \le 3630, X_5 \le 3630, X_6 \le 3630)$$ The probability that all sides were observed 3630 times or fewer, see? $X_i$ here means the number of times $i$ was observed. So this is the probability that the maximum of the $X$'s is 3630 or less. So, our p value--the probability that the maximum is 3631 or more--is one minus this probability. This probability can be calculated in R using my package pmultinom. In fact it's the example code in the documentation for the "pmultinom" function. There's also another package called pmultinom. Theoretically, the other package is using an approximation whereas mine is an exact calculation, but I haven't gotten a chance to compare and see if the difference is large enough to matter. So, this is a special case, and it can also be less powerful since you're only using the information from one possible outcome, but sometimes it's enough. 4. Approximate likelihood ratio test And I guess there's a fourth possibility, which is to use a likelihood ratio test assuming 2 * log(likelihood ratio) has a chi-squared distribution. 
(where likelihood ratio is probability of the data given that the true probabilities are the observed frequencies, divided by the probability of the data given your hypothesized ones. This is upside-down relative to the definition on Wikipedia, but it makes more sense to me in this context, since it's high when the fit is bad, just like the chi-squared statistic.) This is also approximate, but maybe it works in different situations. Surprisingly I can't find an R function which does this. It should be very easy to code yourself. I'm not totally sure how many degrees of freedom to use for the chi-squared distribution though.
How to test the statistical hypothesis that data was generated from a multinomial distribution? The existing answers point out Pearson's chi-squared test. Though this is usually a fine solution, it does rely on an approximation. Pearson derived that the chi-squared statistic approximately follow
52,582
How to test the statistical hypothesis that data was generated from a multinomial distribution?
Yes, it's called a Pearson's $\chi^2$ test. The expected frequencies are $E_i = N \times p_i$ (where $N$ is the total sample size) and the observed frequencies are what you called $n_1, ..., n_k$.
How to test the statistical hypothesis that data was generated from a multinomial distribution?
Yes, it's called a Pearson's $\chi^2$ test. The expected frequencies are $E_i = N \times p_i$ (where $N$ is the total sample size) and the observed frequencies are what you called $n_1, ..., n_k$.
How to test the statistical hypothesis that data was generated from a multinomial distribution? Yes, it's called a Pearson's $\chi^2$ test. The expected frequencies are $E_i = N \times p_i$ (where $N$ is the total sample size) and the observed frequencies are what you called $n_1, ..., n_k$.
How to test the statistical hypothesis that data was generated from a multinomial distribution? Yes, it's called a Pearson's $\chi^2$ test. The expected frequencies are $E_i = N \times p_i$ (where $N$ is the total sample size) and the observed frequencies are what you called $n_1, ..., n_k$.
52,583
How to test the statistical hypothesis that data was generated from a multinomial distribution?
Applying a goodness-of-fit test is the wrong solution here, because such tests can only reject the hypothesis that you would like to confirm. The right technique is equivalence testing. Please look at my papers https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2907258 and https://www.researchgate.net/publication/312481284_Testing_equivalence_of_multinomial_distributions. The implementation is also available on github. Do not hesitate to ask me if you have any questions.
How to test the statistical hypothesis that data was generated from a multinomial distribution?
The application of the goodness of fit tests is the wrong solutions because they reject the hypothesis, which you like to confirm. The right technique the equivalence testing. Please, look at my paper
How to test the statistical hypothesis that data was generated from a multinomial distribution? The application of the goodness of fit tests is the wrong solutions because they reject the hypothesis, which you like to confirm. The right technique the equivalence testing. Please, look at my papers https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2907258 and https://www.researchgate.net/publication/312481284_Testing_equivalence_of_multinomial_distributions. The implementation is also available on github. Do not hesitate to ask me if any questions occur.
How to test the statistical hypothesis that data was generated from a multinomial distribution? The application of the goodness of fit tests is the wrong solutions because they reject the hypothesis, which you like to confirm. The right technique the equivalence testing. Please, look at my paper
52,584
Things that I am not sure about "LASSO" regression method
For the first question, recall that in centering we replace each value $y_i$ with $y_i - \bar y$, where $\bar y$ is the mean of the $y$ vector. Then $$ \sum_i (y_i - \bar y) = \sum_i y_i - n \bar y = \sum_i y_i - \sum_i y_i = 0 $$ I would like to make sure: is the reason that $ \bar y $ equals zero because $ \bar y = \frac{1}{n} \sum_i (y_i - \bar y) $? Well, no. First, $\bar y = 0$ is only true if $y$ is the centered response, like this $$ \overline{y - \bar y} = 0 $$ i.e. the mean of the centered response is zero. This relies on two facts: the definition of the mean of $y$ is $\frac{1}{n} \sum_i y_i$, and the mean of $y$ is a constant, so it does not depend on $i$. These two together mean that $\sum_i \bar y = n \bar y$, as we have only added a constant to itself $n$ times. Using the definition of the mean, $n \bar y = \sum_i y_i$. All this together gives the cancellation discussed above. Now, if $y$ is centered, then the intercept in LASSO is always zero. To see why, recall that LASSO is attempting to minimize $$ L(\beta) = \sum_i (y_i - \sum_j \beta_j x_{ij})^2 + \lambda \sum_{j > 0} \left| \beta_j \right| $$ The intercept parameter does not appear in the penalty term, which is very important here. Since the penalty term is the only place the absolute values appear, the loss function is differentiable with respect to the intercept parameter. Therefore this partial derivative must be zero at any extremum (maximum or minimum) of the loss function. So let's compute this partial derivative $$ \frac{\partial L}{\partial \beta_0} = -2 \sum_i(y_i - \sum_j \beta_j x_{ij}) x_{i0} $$ Now $x_{i0} = 1$ for all $i$, because this is the intercept column of $X$. Furthermore, $\sum_i x_{ij} = 0$ for all other $j$, because we always center all of the other (non-intercept) predictors. With these two relations we can simplify the above to $$ \frac{\partial L}{\partial \beta_0} = -2 \sum_i y_i + 2 \sum_{j > 0} \beta_j \sum_i x_{ij} + 2 n \beta_0 = - 0 + 0 + 2 n \beta_0 $$ Setting this to zero gives $$ \beta_0 = 0 $$ Notice that this happens regardless of $\lambda$. Could you please show me where $-2 \beta_0$ comes from? Well, I think I broke down the general case as far as I am capable of, so maybe simplifying the problem itself a bit will illuminate what's going on. Consider the one-variable case of the lasso: we have one predictor $x$, and two coefficients to estimate, the intercept $\beta_0$ and the coefficient of $x$, $\beta_1$. The loss function in this simplified case is easier to write out in explicit detail $$ L = \sum_i (y_i - \beta_0 - \beta_1 x_i)^2 + \lambda | \beta_1 | $$ The important feature here is that $\beta_0$ only appears in the sum of squares term, not in the penalty. So $L$ is a differentiable function with respect to the variable $\beta_0$, but is not differentiable with respect to the variable $\beta_1$. So, at a minimum, we can be guaranteed that the partial derivative with respect to $\beta_0$ is zero, but we can make no such claim about $\beta_1$. 
Actually computing this partial derivative is pretty simple using the standard rules of differential calculus: the derivative of the penalty term is zero because $\beta_0$ does not appear in it, and then it is one application of the chain rule to take the derivative of the sum of squares term $$ \frac{\partial L}{\partial \beta_0} = -2 \sum_i (y_i - \beta_0 - \beta_1 x_i) $$ Using the fact that the response and predictors are centered, this expression reduces to $$ 2 n \beta_0 $$ How can you take the derivative with respect to $\beta_0$ when it is not found in the function that you take the derivative of? It is in there. There are two terms in the loss function being minimized, the residual sum of squares and the penalty. It is true that $\beta_0$ does not appear in the penalty term, but it does appear in the residual sum of squares. Notice that the sum in the residual sum of squares is over all $j$, while in the penalty term it is only over those $j > 0$. For the second question, the point the books are trying to make is that there is no closed form solution for the LASSO coefficients. That is, there is no explicit, purely algebraic procedure for determining the coefficients. For contrast, there is an algebraic procedure for solving linear regression: you just solve the equation $$ X^t X \beta = X^t y $$ Even if there is no algebraic solution, that does not mean that there is no algorithmic solution. This is why iterative methods are used to solve the LASSO problem. In addition, can I find $ \hat \beta_\text{lasso} $ "the coefficients" which is equal to $\sum_i (y_i - \beta x_i)^2 + \lambda \sum_j |\beta_j|$ mathematically? Be careful: the estimated coefficients are equal to the minimizer of the expression, not the expression itself. As for the question, it depends on what you mean by "mathematically". You cannot find them exactly by mathematical manipulation, moving around symbols and taking square roots and such. You can find them (to any desired accuracy) using iterative optimization methods, which is what is generally done in software packages. Also, somewhat miraculously, you can find them exactly using some very clever geometrical arguments, which is explained in the LARS paper. Unfortunately, this is only possible in very nice cases, and fails for more general models, like logistic or Poisson regressions.
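A quick empirical check of the zero-intercept result (a sketch assuming the glmnet package is available; the data are simulated and the lambda value is arbitrary):

library(glmnet)
set.seed(1)
n <- 100
x <- scale(matrix(rnorm(n * 3), n, 3), center = TRUE, scale = FALSE)  # centered predictors
y <- drop(x %*% c(1, 0, -2)) + rnorm(n)
y <- y - mean(y)                                                      # centered response
fit <- glmnet(x, y, lambda = 0.5)
coef(fit)                       # the fitted intercept is (numerically) zero, whatever lambda is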
Things that I am not sure about "LASSO" regression method
For the first question, recall that in centering we replace each value $y_i$ with $y_i - \bar y$, where $\bar y$ is the mean of the $y$ vector. Then $$ \sum_i (y_i - \bar y) = \sum_i y_i - n \bar y =
Things that I am not sure about "LASSO" regression method For the first question, recall that in centering we replace each value $y_i$ with $y_i - \bar y$, where $\bar y$ is the mean of the $y$ vector. Then $$ \sum_i (y_i - \bar y) = \sum_i y_i - n \bar y = \sum_i y_i - \sum_i y_i = 0 $$ I would like to make sure, is the reason of $ \bar y $ equals zero because $ \bar y = \frac{1}{n} \sum_i (y_i − \bar y) $ Well, no. First $\bar y = 0$ is only true is $y$ is the centered response, like this $$ \overline{y - \bar y} = 0 $$ the mean of the centered response is zero. This relies on two facts: The definition of the mean of y is $\frac{1}{n} \sum_i y_i$. The mean of y is a constant, it does not depend on $i$. These two together mean that $\sum_i \bar y = n \bar y$, as we have only added a constant to itself $n$ times. Using the definition of the mean $n \bar y = \sum_i y_i$. All this together gives the cancellation I discussed in above. Now, if $y$ is centered, then the intercept in LASSO is always zero. To see why, recall that LASSO is attempting to minimize $$ L(\beta) = \sum_i (y_i - \sum_j \beta_j x_{ij})^2 + \lambda \sum_{j > 0} \left| \beta_j \right| $$ The intercept parameter does not appear in the penalty term, which is very important here. Since the penalty term is the only place the absolute values appear, the loss function is differentiable with respect to the intercept parameter. Therefore this partial derivative must be zero at any extrema (maximum or minimum) of the loss function. So let's compute this partial derivative $$ \frac{\partial L}{\partial \beta_0} = -2 \sum_i(y_i - \sum_j \beta_j x_{ij}) x_{i0} $$ Now $x_{i0} = 1$ for all $i$, because this is the intercept column of $X$. Furthermore, $\sum_i x_{ij} = 0$ for all other $j$, because we always center all of the other (non-intercept) predictors. With these two relations we can simplify the above to $$ \frac{\partial L}{\partial \beta_0} = -2 \sum_i y_i + 2 \sum_{j > 0} \beta_j \sum_i x_{ij} + 2 n \beta_0 = - 0 + 0 + 2 n \beta_0 $$ Setting to zero $$ \beta_0 = 0 $$ Notice that this happens regardless of lambda. could you show me please where $-2 \beta_0$ comes from? Well, I think I broke down the general case as far as I am capable of, so maybe simplifying the problem itself a bit will illuminate whats going on. Consider the one variable case of lasso, we have one predictor $x$, and two coefficients to estimate, the intercept $\beta_0$ and the coefficient of $x$, $\beta_1$. The loss function in this simplified case is easier to write out in explicit detail $$ L = \sum_i (y_i - \beta_0 - \beta_1 x_i)^2 - \lambda | \beta_1 | $$ The important feature here is that $\beta_0$ only appears in the sum of squares term, not in the penalty. So $L$ is a differentiable function with respect to the variable $\beta_0$, but is not differentiable with respect to the variable $\beta_1$. So, at a minimum, we can be guaranteed that the partial derivative with respect to $\beta_0$ is zero, but we can make no such claim about $\beta_1$. 
To actually compute this partial is pretty simple using the standard rules of differential calculus: the differential of the penalty term is zero because $\beta_0$ does not appear, and then it is one application of the chain rule to take the derivative of the sum of squares term $$ \frac{\partial L}{\partial \beta_0} = -2 \sum_i (y_i - \beta_0 - \beta_1 x_i) $$ Using the fact that there response and predictors are centered, this expression reduces to $$ 2 n \beta_0 $$ how to make derivative with respect to $\beta_0$ while it is not found in the function that you the derivative for? It is in there. There are two terms to the loss function being minimized, the residual sum of squares and and the penalty. It is true that $\beta_0$ does not appear in the penalty term, but it does appear in the residual sum of squares. Notice that the sum in the residual sum of squares is over all $j$, while in the penalty term it is only over those $j > 0$. For the second question, the point the books are trying to make is that there is no closed form solution for the LASSO coefficients. That is, there is no explicit, purely algebraic, procedure for determining the coefficients. For contrast, there is an algebraic procedure for solving linear regression, you just solve the equation $$ X^t X \beta = X^t y $$ Even if there is no algebraic solution, that does not mean that there is no algorithmic solution. This is why iterative methods are used to solve the LASSO problem. In addition, can I find $ \hat \beta_\text{lasso} $ "the coefficients" which is equal to $\sum_i (y_i - \beta x_i)^2 + \lambda \sum_j |\beta_j|$ mathematically ? Be careful, the estimated coefficients are equal to the minimizer of the expression, not the expression itself. As for the question, it depends on what you mean by "mathematically". You can not find them exactly by mathematical manipulation, moving around symbols and taking square roots and such. You can find them (to any desired accuracy) using iterative optimization methods, which is what is generally done is software pacakges. Also, somewhat miraculously, you can find them exactly using some very clever geometrical arguments, which is explained in the LARS paper. Unfortunately, this is only possible in very nice cases, and fails for more general models, like logistic or poisson regressions.
Things that I am not sure about "LASSO" regression method For the first question, recall that in centering we replace each value $y_i$ with $y_i - \bar y$, where $\bar y$ is the mean of the $y$ vector. Then $$ \sum_i (y_i - \bar y) = \sum_i y_i - n \bar y =
52,585
Can you test for normality for a (0,1) bounded distribution?
It makes about as much sense to test these data for Normality (specifically, to compute some test statistic and compare it against the distribution of the test statistic expected for samples that truly came from a Normal distribution) as it ever does. If this kind of test is a usual part of your workflow for other data sets, and you think after reading below that it should be, then go ahead and use this test on your current data set. Expanding on that answer ... There is a strong opinion (that I share) that many of the common applications of normality testing are silly. We know that the null hypothesis is false: any real data set is very, very, very unlikely to actually be Normal (is there some theoretical min or max value to the data? Are the data measured to infinite precision, or only recorded up to some finite precision? Is there any way in which the data are not independent and identically distributed?). The real question is whether the data are close enough to Normality for your current practical purposes, e.g. whether linear models will give a sufficiently accurate answer. While testing the $p$-value might be a silly way to try to answer this question (the linked question emphasizes that the $p$-value will always be small for large data sets), for small to moderate data sets it may be true that $p>0.05$ is approximately equivalent to "the data are close enough to Normality for procedures that assume Normality to be useful". (It would probably be better to use a fixed $W$ value, rather than a fixed $p$ value, for these purposes.) You could do some numerical experiments to see how big the effects of truncation are (the mean should converge to 0.5 for large cutoff values, since the p-value distribution will be $U(0,1)$, but the plot can be noisy because the distribution of p-values is highly variable ...):

set.seed(101)
cutvec <- seq(3, 1, by = -0.1)
n <- 100
res <- sapply(cutvec, function(cutoff) {
  # draw a fresh Normal sample for each replicate, truncate it at +/- cutoff,
  # and average the Shapiro-Wilk p-values over the replicates
  mean(replicate(100000, {
    x <- rnorm(n)
    shapiro.test(x[abs(x) < cutoff])$p.value
  }))
})
library(ggplot2); theme_set(theme_bw())
ggplot(data.frame(cutvec, res), aes(cutvec, res)) +
  geom_point() +
  scale_y_log10() +
  geom_smooth() +
  geom_hline(yintercept = 0.5, lty = 2)
Can you test for normality for a (0,1) bounded distribution?
It makes about as much sense to test these data for Normality (specifically, to compute some test statistic and compare it against the distribution of the test statistic expected for samples that trul
Can you test for normality for a (0,1) bounded distribution? It makes about as much sense to test these data for Normality (specifically, to compute some test statistic and compare it against the distribution of the test statistic expected for samples that truly came from a Normal distribution) as it ever does. If this kind of test is a usual part of your workflow for other data sets, and you think after reading below that it should be, then go ahead and use this test on your current data set. Expanding on that answer ... There is a strong opinion (that I share) that many of the common applications of normality testing are silly. We know that the null hypothesis is false/any real data set is very, very, very unlikely to actually be Normal (is there some theoretical min or max value to the data? Are the data measured to infinite precision, or only recorded up to some finite precision? Is there any way in which the data are not independent and identically distributed?) The real question is whether the data are close enough to Normality for your current practical purposes, e.g. whether linear models will give a sufficiently accurate answer. While testing the $p$-value might be a silly way to try to answer this question (the linked question emphasizes that the $p$-value will always be small for large data sets), for small to moderate data sets it may be true that $p>0.05$ is approximately equivalent to "the data are close enough to Normality for procedures that assume Normality to be useful". (It would probably be better to use a fixed $W$ value, rather than a fixed $p$ value, for these purposes.) You could do some numerical experiments to see how big the effects of truncation are (the mean should converge to 0.5 for large cutoff values, since the p-value distribution will be $U(0,1)$, but this plot is quite noisy because the distribution of p-values is highly variable ...) set.seed(101) cutvec <- seq(3,1,by=-0.1) n <- 100 res <- sapply(cutvec, function(c) { x <- rnorm(n) mean(replicate(100000,shapiro.test(x[abs(x)<c])$p.value)) }) library(ggplot2); theme_set(theme_bw()) ggplot(data.frame(cutvec,res), aes(cutvec,res))+ geom_point()+ scale_y_log10()+ geom_smooth()+ geom_hline(yintercept=0.5,lty=2)
Can you test for normality for a (0,1) bounded distribution? It makes about as much sense to test these data for Normality (specifically, to compute some test statistic and compare it against the distribution of the test statistic expected for samples that trul
52,586
Understanding the solution of this integral
Integrals are linear: $$\int_{z^*}^\infty \left(S_t\, e^{\mu\tau-\sigma^2\tau/2+\sigma\sqrt{\tau}z}\right)\frac{1}{\sqrt{2\pi}}e^{-\frac{z^2}{2}}dz =\color{blue}{\left( S_t\, e^{\mu\tau-\sigma^2\tau/2}\right)}\frac{1}{\sqrt{2\pi}}\int_{z^*}^\infty e^{\sigma\sqrt{\tau}z-\frac{z^2}{2}}dz.$$ The exponent in the integrand looks very much like a Normal density, but shifted. Let's force it to look like that: $$\sigma\sqrt{\tau}z - z^2/2 = -\frac{1}{2}\left(z - \sigma\sqrt{\tau}\right)^2 + \sigma^2\tau/2.$$ Let us therefore change to the variable $y = z - \sigma\sqrt{\tau}$. It presents no difficulty because $dy = dz$. Once again, linearity allows us to factor out the part that does not depend on $z$: $$\frac{1}{\sqrt{2\pi}}\int_{z^*}^\infty e^{\sigma\sqrt{\tau}z-\frac{z^2}{2}}dz =\color{red}{e^{\sigma^2\tau/2}}\frac{1}{\sqrt{2\pi}}\int_{z^*-\sigma\sqrt{\tau}}^\infty e^{-\frac{y^2}{2}}dy.$$ The right hand side obviously is a multiple of the right tail of a Standard Normal distribution. By virtue of its symmetry, this integral gives the same value as integrating from $-\infty$ to $-\left(z^{*}-\sigma\sqrt{\tau}\right)$, which is (by definition) given by the CDF $\Phi$: $$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{-z^*+\sigma\sqrt{\tau}} e^{-\frac{y^2}{2}}dy = \Phi\left(-z^*+\sigma\sqrt{\tau}\right) = \Phi(d_1^*).$$ (The algebraic equivalence of $-z^*+\sigma\sqrt{\tau}$ and $d_1^*$ assumes $\sigma \gt 0$.) Plugging everything back where it belongs shows us the original integral equals $$\color{blue}{\left(S_t\, e^{\mu\tau-\sigma^2\tau/2}\right)} \color{red}{e^{\sigma^2\tau/2}} \Phi(d_1^*) = S_t\, e^{\mu\tau}\, \Phi(d_1^*).$$
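If you want to double-check the algebra numerically, you can compare the original integral with the closed form in R (the parameter values are arbitrary):

S_t <- 100; mu <- 0.05; sigma <- 0.2; tau <- 1; z_star <- 0.3
integrand <- function(z)
  S_t * exp(mu * tau - sigma^2 * tau / 2 + sigma * sqrt(tau) * z) * dnorm(z)
integrate(integrand, lower = z_star, upper = Inf)$value    # numerical integration
S_t * exp(mu * tau) * pnorm(-z_star + sigma * sqrt(tau))   # closed form S_t e^{mu*tau} Phi(d1*)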
Understanding the solution of this integral
Integrals are linear: $$\int_{z^*}^\infty \left(S_t\, e^{\mu\tau-\sigma^2\tau/2+\sigma\sqrt{\tau}z}\right)\frac{1}{\sqrt{2\pi}}e^{-\frac{z^2}{2}}dz =\color{blue}{\left( S_t\, e^{\mu\tau-\sigma^2\tau/
Understanding the solution of this integral Integrals are linear: $$\int_{z^*}^\infty \left(S_t\, e^{\mu\tau-\sigma^2\tau/2+\sigma\sqrt{\tau}z}\right)\frac{1}{\sqrt{2\pi}}e^{-\frac{z^2}{2}}dz =\color{blue}{\left( S_t\, e^{\mu\tau-\sigma^2\tau/2}\right)}\frac{1}{\sqrt{2\pi}}\int_{z^*}^\infty e^{\sigma\sqrt{\tau}z-\frac{z^2}{2}}dz.$$ The exponent in the integrand looks very much like a Normal density, but shifted. Let's force it to look like that: $$\sigma\sqrt{\tau}z - z^2/2 = -\frac{1}{2}\left(z - \sigma\sqrt{\tau}\right)^2 + \sigma^2\tau/2.$$ Let us therefore change to the variable $y = z - \sigma\sqrt{\tau}$. It presents no difficulty because $dy = dz$. Once again, linearity allows us to factor out the part that does not depend on $z$: $$\frac{1}{\sqrt{2\pi}}\int_{z^*}^\infty e^{\sigma\sqrt{\tau}z-\frac{z^2}{2}}dz =\color{red}{e^{\sigma^2\tau/2}}\frac{1}{\sqrt{2\pi}}\int_{z^*-\sigma\sqrt{\tau}}^\infty e^{-\frac{y^2}{2}}dy.$$ The right hand side obviously is a multiple of the right tail of a Standard Normal distribution. By virtue of its symmetry, this integral gives the same value as integrating from $-\infty$ to $-\left(z^{*}-\sigma\sqrt{\tau}\right)$, which is (by definition) given by the CDF $\Phi$: $$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{-z^*+\sigma\sqrt{\tau}} e^{-\frac{y^2}{2}}dy = \Phi\left(-z^*+\sigma\sqrt{\tau}\right) = \Phi(d_1^*).$$ (The algebraic equivalence of $-z^*+\sigma\sqrt{\tau}$ and $d_1^*$ assumes $\sigma \gt 0$.) Plugging everything back where it belongs shows us the original integral equals $$\color{blue}{\left(S_t\, e^{\mu\tau-\sigma^2\tau/2}\right)} \color{red}{e^{\sigma^2\tau/2}} \Phi(d_1^*) = S_t\, e^{\mu\tau}\, \Phi(d_1^*).$$
Understanding the solution of this integral Integrals are linear: $$\int_{z^*}^\infty \left(S_t\, e^{\mu\tau-\sigma^2\tau/2+\sigma\sqrt{\tau}z}\right)\frac{1}{\sqrt{2\pi}}e^{-\frac{z^2}{2}}dz =\color{blue}{\left( S_t\, e^{\mu\tau-\sigma^2\tau/
52,587
Granger Causality vs. Forecasting
Granger causality tells you that variable $X$ provides helpful information about future values of $Y$ <...> The devil is in the details. Granger causality considers the incremental benefit for forecasting $Y$ of using the history of $X$ in addition to the history of $Y$ alone. That is, the benchmark forecast is based on past values of $Y$ alone, while the challenger forecast uses the past values of $X$ in addition. Meanwhile, if past values of $Y$ are excluded when making the prediction (which may or may not be the case based on the excerpt cited above) and $X$ helps predict $Y$, we cannot say much about Granger causality. Here is Definition 1 from p. 428 of Granger's original paper "Investigating causal relations by econometric models and cross-spectral methods" (1969): Let $U_t$ be all the information in the universe accumulated since time $t-1$ and let $U_t-Y_t$ denote all this information apart from the specified series $Y_t$. Definition 1: Causality. If $\sigma^2(X|U)<\sigma^2(X|\overline{U-Y})$, we say that $Y$ is causing $X$, denoted by $Y_t \rightarrow X_t$. We say that $Y_t$ is causing $X_t$ if we are better able to predict $X_t$ using all available information than if the information apart from $Y_t$ had been used.
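In R, this incremental comparison is what, for example, lmtest::grangertest implements: it compares a model of y on its own lags against a model that also includes lags of x (a sketch with simulated series and an arbitrary lag order):

library(lmtest)
set.seed(1)
n <- 200
x <- as.numeric(arima.sim(list(ar = 0.5), n))
y <- c(0, 0.8 * head(x, -1)) + rnorm(n)        # y depends on x lagged by one period
grangertest(y ~ x, order = 2)                  # does x's history help predict y beyond y's own past?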
Granger Causality vs. Forecasting
Granger causality tells you that variable $X$ provides helpful information about future values of $Y$ <...> The devil is in the details. Granger causality considers the incremental benefits on foreca
Granger Causality vs. Forecasting Granger causality tells you that variable $X$ provides helpful information about future values of $Y$ <...> The devil is in the details. Granger causality considers the incremental benefits on forecasting $Y$ due to using the history of $X$ extra to using the history of $Y$ alone. That is, the benchmark forecast is based on past values of $Y$ alone while the challenger forecast uses the past values of $X$ in addition. Meanwhile, if past values of $Y$ are excluded when making the prediction (which may or may not be the case based on the excerpt cited above) and $X$ helps predict $Y$, we cannot say much about Granger causality. Here is Definition 1 from p. 428 of Granger's original paper "Investigating causal relations by econometric models and cross-spectral methods" (1969): Let $U_t$ be all the information in the universe accumulated since time $t-1$ and let $U_t-Y_t$ denote all this information apart from the specified series $Y_t$. Definition 1: Causality. If $\sigma^2(X|U)<\sigma^2(X|\overline{U-Y})$, we say that $Y$ is causing $X$, denoted by $Y_t \rightarrow X_t$. We say that $Y_t$ is causing $X_t$ if we are better able to predict $X_t$ using all available information than if the information apart from $Y_t$ had been used.
Granger Causality vs. Forecasting Granger causality tells you that variable $X$ provides helpful information about future values of $Y$ <...> The devil is in the details. Granger causality considers the incremental benefits on foreca
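A minimal base-R sketch of the benchmark-versus-challenger comparison described in the entry above, restricted to a single lag; the data-generating process and all variable names are invented for illustration.

set.seed(42)
n <- 301
x <- as.numeric(arima.sim(list(ar = 0.5), n))    # a driver series
y <- numeric(n)
for (t in 2:n) y[t] <- 0.3 * y[t - 1] + 0.5 * x[t - 1] + rnorm(1)
d <- data.frame(y = y[-1], y_lag = y[-n], x_lag = x[-n])
benchmark  <- lm(y ~ y_lag, data = d)            # past of y only
challenger <- lm(y ~ y_lag + x_lag, data = d)    # past of y plus past of x
anova(benchmark, challenger)                     # a significant F is evidence that x Granger-causes y

Packages such as lmtest wrap essentially this nested-model comparison in grangertest(), but the two explicit lm() fits make the definition transparent.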
52,588
Granger Causality vs. Forecasting
Granger causality is a weak form of causal inference, particularly when compared with Pearl-type causality and Bayesian networks. Forecasts, on the other hand, make no assumptions about possible causal relationships and, given that, simply exploit association for the purposes of prediction.
Granger Causality vs. Forecasting
Granger causality is a weak form of causal inference, particularly when compared with Pearl-type causality and Bayesian networks. Forecasts, on the other hand, make no assumptions about possible causa
Granger Causality vs. Forecasting Granger causality is a weak form of causal inference, particularly when compared with Pearl-type causality and Bayesian networks. Forecasts, on the other hand, make no assumptions about possible causal relationships and, given that, demonstrate association for the purposes of prediction.
Granger Causality vs. Forecasting Granger causality is a weak form of causal inference, particularly when compared with Pearl-type causality and Bayesian networks. Forecasts, on the other hand, make no assumptions about possible causa
52,589
Why 0 for failure and 1 for success in a Bernoulli distribution?
As already noted by Mark L. Stone, it is used because of tradition and convenience. It could have been $-1$ and $+1$ as with the Rademacher distribution, or any other values. However, there are also other reasons that make $0$ and $1$ a convenient choice. First, the expected value of a Bernoulli-distributed random variable is $$ E[X] = 0\times(1-p)+ 1\times p = p $$ ...so it is instantly obvious from the distribution's definition. The sample mean is the maximum likelihood estimator of $p$, since taking the arithmetic mean of zeros and ones gives the proportion of ones in the whole sample. Moreover, we can easily extend the Bernoulli distribution to the binomial distribution, i.e. move from modelling a single success in a single draw to modelling $k$ successes in $n$ draws -- it is quite helpful that you can just sum the $1$'s to get the number of successes. There are also multiple other reasons why such coding is convenient for computations; to learn more see Why is gender typically coded 0/1 rather than 1/2, for example?
Why 0 for failure and 1 for success in a Bernoulli distribution?
As already noted by Mark L. Stone, it is used because of tradition and convenience. It could have been $-1$ and $+1$ as with Rademacher distribution, or any other values. However there are also other
Why 0 for failure and 1 for success in a Bernoulli distribution? As already noted by Mark L. Stone, it is used because of tradition and convenience. It could have been $-1$ and $+1$ as with Rademacher distribution, or any other values. However there are also other reasons that make $0$ and $1$ a convenient choice. First, expected value of Bernoulli distributed random variable is $$ E[X] = 0\times(1-p)+ 1\times p = p $$ ...so it is instantly obvious from the distributions definition. Sample mean is being maximum likelihood estimator of $p$ since taking the arithmetic mean of zeros and ones leads to proportion of ones in the whole sample. Moreover, we can easily extend Bernoulli distribution to binomial distribution, i.e. from modelling single success in single draw, change to modelling $k$ successes in $n$ draws -- it is quite helpful that you can just sum $1$'s to get number of successes. There are also multiple other reasons why such coding is convenient for computations, to learn more see Why is gender typically coded 0/1 rather than 1/2, for example?
Why 0 for failure and 1 for success in a Bernoulli distribution? As already noted by Mark L. Stone, it is used because of tradition and convenience. It could have been $-1$ and $+1$ as with Rademacher distribution, or any other values. However there are also other
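A small R illustration of the points in the answer above; the sample size and the value of p are arbitrary choices.

set.seed(1)
p <- 0.3
x <- rbinom(1e5, size = 1, prob = p)    # Bernoulli(p) draws coded as 0/1
mean(x)                                 # the sample mean estimates p directly (MLE = proportion of 1s)
sum(x[1:10])                            # the number of successes in 10 trials, a Binomial(10, p) draw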
52,590
Why 0 for failure and 1 for success in a Bernoulli distribution?
Mathematical convenience. Here is an example: if you set the success case to be $1$ and the failure case to be $0$, then you have that the binomial variable $X \sim B(n, p)$, which counts the number of successes in $n$ independent Bernoulli trials $X_i \sim B(1, p)$, with $i = 1, 2, ..., n$, can be written as $$ X = \sum_i X_i $$ Therefore, $$ E[X] = \sum_i E[X_i] = np$$ In addition, $$ V[X] = \sum_i V[X_i] = np(1 - p)$$
Why 0 for failure and 1 for success in a Bernoulli distribution?
Mathematical convenience. Here is an example: if you set the success case to be $1$ and the failure case to be $0$, then you have that the binomial variable $X \sim B(n, p)$, which counts the number o
Why 0 for failure and 1 for success in a Bernoulli distribution? Mathematical convenience. Here is an example: if you set the success case to be $1$ and the failure case to be $0$, then you have that the binomial variable $X \sim B(n, p)$, which counts the number of successes in $n$ independent Bernoulli trials $X_i \sim B(1, p)$, with $i = 1, 2, ..., n$, can be written as $$ X = \sum_i X_i $$ Therefore, $$ E[X] = \sum_i E[X_i] = np$$ In addition, $$ V[X] = \sum_i V[X_i] = np(1 - p)$$
Why 0 for failure and 1 for success in a Bernoulli distribution? Mathematical convenience. Here is an example: if you set the success case to be $1$ and the failure case to be $0$, then you have that the binomial variable $X \sim B(n, p)$, which counts the number o
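One small supporting step, not spelled out in the answer above: the per-trial variance also drops straight out of the 0/1 coding, because squaring a 0/1 variable leaves it unchanged.

$$V[X_i] = E[X_i^2] - E[X_i]^2 = E[X_i] - p^2 = p(1-p), \qquad \text{since } X_i^2 = X_i \text{ when } X_i \in \{0,1\}.$$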
52,591
Two-way ANOVA vs ANCOVA in R
Order matters whenever the predictors aren't independent. They're correlated in your first example, as they're continuous measurements on each of the diamonds, but not in the second, as those are assigned in a balanced way. ANOVA/ANCOVA/regression are all names for linear models; they do exactly the same thing mathematically. The name ANOVA is usually used when the predictors are categorical, and the name regression is usually used when the predictors are continuous. ANCOVA and "regression with different slopes" are usually used when there are both continuous and categorical predictors. There is a way to force the interaction to be first using terms, but this almost never makes sense.
Two-way ANOVA vs ANCOVA in R
Order matters whenever the predictors aren't independent. They're correlated in your first example, as they're continuous measurements on each of the diamonds, but not in the second, as those are assi
Two-way ANOVA vs ANCOVA in R Order matters whenever the predictors aren't independent. They're correlated in your first example, as they're continuous measurements on each of the diamonds, but not in the second, as those are assigned in a balanced way. ANOVA/ANCOVA/regression are all names for linear models; they do exactly the same thing mathematically. The name ANOVA is usually used when the predictors are categorical, and the name regression is usually used when the predictors are continuous. ANCOVA and "regression with different slopes" are usually used when there are both continuous and categorical predictors. There is a way to force the interaction to be first using terms, but this almost never makes sense.
Two-way ANOVA vs ANCOVA in R Order matters whenever the predictors aren't independent. They're correlated in your first example, as they're continuous measurements on each of the diamonds, but not in the second, as those are assi
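A short simulated sketch of the order dependence described in the answer above (all numbers are arbitrary): the sequential (type I) sums of squares that aov() reports by default change when correlated predictors are swapped, whereas balanced, orthogonal factors would give the same table either way.

set.seed(123)
x1 <- rnorm(100)
x2 <- 0.7 * x1 + rnorm(100, sd = 0.5)   # x2 is correlated with x1
y  <- 1 + 2 * x1 + 3 * x2 + rnorm(100)
summary(aov(y ~ x1 + x2))   # x1 enters first
summary(aov(y ~ x2 + x1))   # x2 enters first: different SS and F for each term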
52,592
Two-way ANOVA vs ANCOVA in R
Stumbled on this question a couple of years after it was posted while looking for some info on ANCOVA with R. I wound up writing a much longer example using the diamonds dataset; I won't try to cram it in here, but for future searchers it is located here: https://ibecav.github.io/ancova_example/ . I would answer your 3 original questions this way: (1) Technically, your first example is neither an ANOVA nor an ANCOVA. price, table and depth are all numeric variables; you're just regressing. ANOVA and ANCOVA always involve at least one predictor that is ordinal or nominal in nature. (2) As noted by others, you got different answers because by default aov uses type I sums of squares, and with that type order matters. (3) As others have noted, you have to attend to the difference between * and + in your formula whether you are using lm or aov. As I said, I don't want to try to reproduce my example in its entirety, but here is a simple example of price by cut and color noCOVmodel <- aov(price ~ cut * color, diamonds2) car::Anova(noCOVmodel, type = 2) versus price by cut and color with carat as a covariate COVmodel <- aov(price ~ cut * color + carat, diamonds2) car::Anova(COVmodel, type = 2)
Two-way ANOVA vs ANCOVA in R
Stumbled on this question a couple of years after it was posted while looking for some info on ANCOVA with R. Wound up writing a much longer example using the diamonds dataset I won't try and cram it
Two-way ANOVA vs ANCOVA in R Stumbled on this question a couple of years after it was posted while looking for some info on ANCOVA with R. Wound up writing a much longer example using the diamonds dataset I won't try and cram it in here but for future searchers it is located here: https://ibecav.github.io/ancova_example/ . I would answer your 3 original questions this way: Technically, your first example is neither an ANOVA or an ANCOVA. price, table and depth are all numeric variables. You're just regressing. ANOVA and ANCOVA always involve at least one predictor that is ordinal or nominal in nature. As noted by others you got different answers because by default aov uses type I sums of squares and with that type order matters. As others have noted you have to attend to the difference between * and + in your formula whether you are using lm or aov As I said I don't want to try and reproduce my example in it's entirety but here is a simple example of price by cut and color noCOVmodel <- aov(price ~ cut * color, diamonds2) car::Anova(noCOVmodel, type = 2) versus price by cut and color with carat as a covariate COVmodel <- aov(price ~ cut * color + carat, diamonds2) car::Anova(COVmodel, type = 2)
Two-way ANOVA vs ANCOVA in R Stumbled on this question a couple of years after it was posted while looking for some info on ANCOVA with R. Wound up writing a much longer example using the diamonds dataset I won't try and cram it
52,593
Can lavaan (SEM/CFA) be used to do factor analysis like factanal (EFA)
It is possible to do EFA in a CFA framework. This is sometimes called "E/CFA". A nice discussion of this can be found in: Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York: Guilford Press. For this to work, you need to have an "anchor item" for each factor, for which there are no cross-loadings. Looking at the results from factanal, it would make sense to make item x5 the anchor item for the first factor and x3 the anchor item for the second factor. You also need to constrain the variances of the latent factors to 1. And since factanal standardizes the variables, you'll want to do the same here. At the same time, to make things more comparable, I would use an oblique rotation method for EFA (since the E/CFA model will allow the factors to be correlated). So, putting this all together, you can compare: model <- 'f1 =~ x1 + x2 + 0*x3 + x4 + x5 + x6 f2 =~ x1 + x2 + x3 + x4 + 0*x5 + x6' fit <- cfa(model, data=HolzingerSwineford1939, std.lv=T, std.ov=T) summary(fit) factanal(~x1+x2+x3+x4+x5+x6, factors=2, data=HolzingerSwineford1939, rotation="promax") Here are the results: lavaan (0.5-18) converged normally after 26 iterations Number of observations 301 Estimator ML Minimum Function Test Statistic 2.109 Degrees of freedom 4 P-value (Chi-square) 0.716 Parameter estimates: Information Expected Standard Errors Standard Estimate Std.err Z-value P(>|z|) Latent variables: f1 =~ x1 0.275 0.061 4.499 0.000 x2 0.092 0.063 1.469 0.142 x3 0.000 x4 0.822 0.050 16.550 0.000 x5 0.875 0.049 17.964 0.000 x6 0.798 0.050 16.013 0.000 f2 =~ x1 0.559 0.074 7.520 0.000 x2 0.441 0.071 6.183 0.000 x3 0.746 0.086 8.644 0.000 x4 0.120 0.052 2.321 0.020 x5 0.000 x6 0.161 0.052 3.085 0.002 Covariances: f1 ~~ f2 0.119 0.088 1.348 0.178 Variances: x1 0.572 0.072 x2 0.784 0.074 x3 0.440 0.112 x4 0.283 0.035 x5 0.231 0.037 x6 0.303 0.035 f1 1.000 f2 1.000 > > factanal(~x1+x2+x3+x4+x5 + x6, factors = 2, data = HolzingerSwineford1939, rotation="promax") Call: factanal(x = ~x1 + x2 + x3 + x4 + x5 + x6, factors = 2, data = HolzingerSwineford1939, rotation = "promax") Uniquenesses: x1 x2 x3 x4 x5 x6 0.574 0.787 0.441 0.284 0.232 0.304 Loadings: Factor1 Factor2 x1 0.180 0.557 x2 0.457 x3 -0.147 0.797 x4 0.842 x5 0.922 -0.126 x6 0.809 Factor1 Factor2 SS loadings 2.268 1.173 Proportion Var 0.378 0.196 Cumulative Var 0.378 0.574 Factor Correlations: Factor1 Factor2 Factor1 1.000 0.417 Factor2 0.417 1.000 Test of the hypothesis that 2 factors are sufficient. The chi square statistic is 2.07 on 4 degrees of freedom. The p-value is 0.722 According to Brown (2006), the chi-square statistic should be the same for both approaches. For the example shown in the book, this is indeed the case (but the author uses Mplus for the EFA and E/CFA analyses). In this particular example, the values are close (2.109 and 2.07), but not identical. There seems to be some minor difference in how lavaan and factanal are working here, but ultimately the point is that one can indeed do an exploratory factor analysis using CFA software.
Can lavaan (SEM/CFA) be used to do factor analysis like factanal (EFA)
It is possible to do EFA in a CFA framework. This is sometimes called "E/CFA". A nice discussion of this can be found in: Brown, T. A. (2006). Confirmatory factor analysis for applied research. New Yo
Can lavaan (SEM/CFA) be used to do factor analysis like factanal (EFA) It is possible to do EFA in a CFA framework. This is sometimes called "E/CFA". A nice discussion of this can be found in: Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York: Guilford Press. For this to work, you need to have an "anchor item" for each factor, for which there are no cross-loadings. Looking at the results from factanal, it would make sense to make item x5 the anchor item for the first factor and x3 the anchor item for the second factor. You also need to constrain the variances of the latent factors to 1. And since factanal standardizes the variables, you'll want to do the same here. At the same time, to make things more comparable, I would use an oblique rotation method for EFA (since the E/CFA model will allow the factors to be correlated). So, putting this all together, you can compare: model <- 'f1 =~ x1 + x2 + 0*x3 + x4 + x5 + x6 f2 =~ x1 + x2 + x3 + x4 + 0*x5 + x6' fit <- cfa(model, data=HolzingerSwineford1939, std.lv=T, std.ov=T) summary(fit) factanal(~x1+x2+x3+x4+x5+x6, factors=2, data=HolzingerSwineford1939, rotation="promax") Here are the results: lavaan (0.5-18) converged normally after 26 iterations Number of observations 301 Estimator ML Minimum Function Test Statistic 2.109 Degrees of freedom 4 P-value (Chi-square) 0.716 Parameter estimates: Information Expected Standard Errors Standard Estimate Std.err Z-value P(>|z|) Latent variables: f1 =~ x1 0.275 0.061 4.499 0.000 x2 0.092 0.063 1.469 0.142 x3 0.000 x4 0.822 0.050 16.550 0.000 x5 0.875 0.049 17.964 0.000 x6 0.798 0.050 16.013 0.000 f2 =~ x1 0.559 0.074 7.520 0.000 x2 0.441 0.071 6.183 0.000 x3 0.746 0.086 8.644 0.000 x4 0.120 0.052 2.321 0.020 x5 0.000 x6 0.161 0.052 3.085 0.002 Covariances: f1 ~~ f2 0.119 0.088 1.348 0.178 Variances: x1 0.572 0.072 x2 0.784 0.074 x3 0.440 0.112 x4 0.283 0.035 x5 0.231 0.037 x6 0.303 0.035 f1 1.000 f2 1.000 > > factanal(~x1+x2+x3+x4+x5 + x6, factors = 2, data = HolzingerSwineford1939, rotation="promax") Call: factanal(x = ~x1 + x2 + x3 + x4 + x5 + x6, factors = 2, data = HolzingerSwineford1939, rotation = "promax") Uniquenesses: x1 x2 x3 x4 x5 x6 0.574 0.787 0.441 0.284 0.232 0.304 Loadings: Factor1 Factor2 x1 0.180 0.557 x2 0.457 x3 -0.147 0.797 x4 0.842 x5 0.922 -0.126 x6 0.809 Factor1 Factor2 SS loadings 2.268 1.173 Proportion Var 0.378 0.196 Cumulative Var 0.378 0.574 Factor Correlations: Factor1 Factor2 Factor1 1.000 0.417 Factor2 0.417 1.000 Test of the hypothesis that 2 factors are sufficient. The chi square statistic is 2.07 on 4 degrees of freedom. The p-value is 0.722 According to Brown (2006), the chi-square statistic should be the same for both approaches. For the example shown in the book, this is indeed the case (but the author uses Mplus for the EFA and E/CFA analyses). In this particular example, the values are close (2.109 and 2.07), but not identical. There seems to be some minor difference in how lavaan and factanal are working here, but ultimately the point is that one can indeed do an exploratory factor analysis using CFA software.
Can lavaan (SEM/CFA) be used to do factor analysis like factanal (EFA) It is possible to do EFA in a CFA framework. This is sometimes called "E/CFA". A nice discussion of this can be found in: Brown, T. A. (2006). Confirmatory factor analysis for applied research. New Yo
52,594
How to read a boxplot in R? [duplicate]
The documentation seems fairly clear to me, although it certainly helps to be familiar with how to read R documentation and with boxplots more generally. Towards the bottom of the page it says: See Also boxplot.stats which does the computation... So we can navigate there. It reads: Details The two ‘hinges’ are versions of the first and third quartile, i.e., close to quantile(x, c(1,3)/4). The hinges equal the quartiles for odd n (where n <- length(x)) and differ for even n. Whereas the quartiles only equal observations for n %% 4 == 1 (n = 1 mod 4), the hinges do so additionally for n %% 4 == 2 (n = 2 mod 4), and are in the middle of two observations otherwise. And the Values section includes: stats a vector of length 5, containing the extreme of the lower whisker, the lower ‘hinge’, the median, the upper ‘hinge’ and the extreme of the upper whisker. Moreover, above that we see that the argument coef is set to 1.5 by default (so that is what you would get unless you had changed the default for range in the original boxplot call). The coef argument is documented: coef this determines how far the plot ‘whiskers’ extend out from the box. If coef is positive, the whiskers extend to the most extreme data point which is no more than coef times the length of the box away from the box. A value of zero causes the whiskers to extend to the data extremes (and no outliers be returned). From these, we learn that the midline is the median of your data, with the upper and lower limits of the box being the third and first quartile1 (75th and 25th percentile) respectively. By default, the whiskers will extend up to 1.5 times the interquartile range from the top (bottom) of the box to the furthest datum within that distance. If there are any data beyond that distance, they are represented individually as points ('outliers'). To be explicit, they do not show standard deviations. 1. Note that determining the value for a quantile (e.g., the 25th percentile) is potentially more complicated than people realize. There are at least nine different methods that have been discussed. For a nice overview, see @Glen_b's excellent answer here: Relation between Quintiles and the Arithmetic Mean.
How to read a boxplot in R? [duplicate]
The documentation seems fairly clear to me, although it certainly helps to be familiar with how to read R documentation and with boxplots more generally. Towards the bottom of the page it says: See
How to read a boxplot in R? [duplicate] The documentation seems fairly clear to me, although it certainly helps to be familiar with how to read R documentation and with boxplots more generally. Towards the bottom of the page it says: See Also boxplot.stats which does the computation... So we can navigate there. It reads: Details The two ‘hinges’ are versions of the first and third quartile, i.e., close to quantile(x, c(1,3)/4). The hinges equal the quartiles for odd n (where n <- length(x)) and differ for even n. Whereas the quartiles only equal observations for n %% 4 == 1 (n = 1 mod 4), the hinges do so additionally for n %% 4 == 2 (n = 2 mod 4), and are in the middle of two observations otherwise. And the Values section includes: stats a vector of length 5, containing the extreme of the lower whisker, the lower ‘hinge’, the median, the upper ‘hinge’ and the extreme of the upper whisker. Moreover, above that we see that the argument coef is set to 1.5 by default (so that is what you would get unless you had changed the default for range in the original boxplot call). The coef argument is documented: coef this determines how far the plot ‘whiskers’ extend out from the box. If coef is positive, the whiskers extend to the most extreme data point which is no more than coef times the length of the box away from the box. A value of zero causes the whiskers to extend to the data extremes (and no outliers be returned). From these, we learn that the midline is the median of your data, with the upper and lower limits of the box being the third and first quartile1 (75th and 25th percentile) respectively. By default, the whiskers will extend up to 1.5 times the interquartile range from the top (bottom) of the box to the furthest datum within that distance. If there are any data beyond that distance, they are represented individually as points ('outliers'). To be explicit, they do not show standard deviations. 1. Note that determining the value for a quantile (e.g., the 25th percentile is potentially more complicated than people realize. There are at least nine different methods that have been discussed. For a nice overview, see @Glen_b's excellent answer here: Relation between Quintiles and the Arithmetic Mean.
How to read a boxplot in R? [duplicate] The documentation seems fairly clear to me, although it certainly helps to be familiar with how to read R documentation and with boxplots more generally. Towards the bottom of the page it says: See
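A quick way to see the five numbers and the outlier rule described in the answer above, using arbitrary simulated data.

set.seed(7)
x <- c(rnorm(50), 4, -5)                 # include a couple of likely outliers
b <- boxplot.stats(x)                    # coef defaults to 1.5
b$stats                                  # lower whisker end, lower hinge, median, upper hinge, upper whisker end
b$out                                    # points beyond 1.5 * IQR from the box, drawn individually by boxplot()
quantile(x, c(0.25, 0.5, 0.75))          # close to (but not always identical to) the hinges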
52,595
How to read a boxplot in R? [duplicate]
This sums up the box plot and what each line represents. Source: http://www.physics.csbsju.edu/stats/box2.html
How to read a boxplot in R? [duplicate]
This sums up the box plot and what each line represents. Source: http://www.physics.csbsju.edu/stats/box2.html
How to read a boxplot in R? [duplicate] This sums up the box plot and what each line represents. Source: http://www.physics.csbsju.edu/stats/box2.html
How to read a boxplot in R? [duplicate] This sums up the box plot and what each line represents. Source: http://www.physics.csbsju.edu/stats/box2.html
52,596
What does "linear-by-linear association" in SPSS mean?
It is. You might have a look at IBM's support page for SPSS, where it is stated in a technote on the Chi² test: 'The Crosstabs procedure includes the Mantel-Haenszel test of trend among its chi-square test statistics. ... The MH test for trend will be printed in the "Chi-Square Tests" table and labelled "Linear-by-Linear Association".' see: https://www-304.ibm.com/support/docview.wss?uid=swg21477269
What does "linear-by-linear association" in SPSS mean?
It is. You might have a look at IBM's support page for SPSS, where it is stated in a technote on the Chi² test: 'The Crosstabs procedure includes the Mantel-Haenszel test of trend among its chi-square
What does "linear-by-linear association" in SPSS mean? It is. You might have a look at IBM's support page for SPSS, where it is stated in a technote on the Chi² test: 'The Crosstabs procedure includes the Mantel-Haenszel test of trend among its chi-square test statistics. ... The MH test for trend will be printed in the "Chi-Square Tests" table and labelled "Linear-by-Linear Association".' see: https://www-304.ibm.com/support/docview.wss?uid=swg21477269
What does "linear-by-linear association" in SPSS mean? It is. You might have a look at IBM's support page for SPSS, where it is stated in a technote on the Chi² test: 'The Crosstabs procedure includes the Mantel-Haenszel test of trend among its chi-square
52,597
What does "linear-by-linear association" in SPSS mean?
As a previous reply mentioned, yes it is and the technical description is at SPSS's support page: https://www-304.ibm.com/support/docview.wss?uid=swg21477269 This is a useful statistic for those who understand it. Suppose we investigate whether 78 employees' promotion (yes/no) is related to their performance ranking in the previous year (1-4, 1=low), as follows: Ranking 1: Not promoted 17, Promoted 2, Total 19. Ranking 2: Not promoted 16, Promoted 4, Total 20. Ranking 3: Not promoted 14, Promoted 6, Total 20. Ranking 4: Not promoted 10, Promoted 9, Total 19. SPSS shows a significant linear-by-linear association (p=.008) showing that there is a significant association between the ranking and being promoted. Some useful details of how this works are: 1. The test relates to the odds. Odds are used for their statistical properties, and are not quite the same as probabilities. For ranking 1, the odds of being promoted are 2:17, as opposed to the probability which is 2:19. 2. Then, the test is on the odds ratios; e.g. if you move from rank 1 to rank 2, the odds ratio is 4:16/2:17 = 0.250/0.118 = 2.12. (The null hypothesis is that the odds ratio is 1, i.e. a change in ranks makes no difference to the odds.) 3. The procedure presumes that the odds ratios (in the population) are the same for all steps (i.e. if moving from rank 1 to rank 2 doubles the odds of promotion, moving from rank 2 to rank 3 would also double the odds of promotion). That is why there is only 1 degree of freedom. (This assumption is known as "linearity in the logit".) 4. The test is therefore conceptually the same (and gives a similar answer) to doing logistic regression with just one covariate. (In logistic regression, "covariate" means a variable like this one). In this case the covariate would be ranking, and the DV would be promotion decision.
What does "linear-by-linear association" in SPSS mean?
As a previous reply mentioned, yes it is and the technical description is at SPSS's support page: https://www-304.ibm.com/support/docview.wss?uid=swg21477269 This is a useful statistic for those who u
What does "linear-by-linear association" in SPSS mean? As a previous reply mentioned, yes it is and the technical description is at SPSS's support page: https://www-304.ibm.com/support/docview.wss?uid=swg21477269 This is a useful statistic for those who understand it. Suppose we investigate whether 78 employees' promotion (yes/no) is related to their performance ranking in the previous year (1-4, 1=low), as follows: Ranking 1: Not promoted 17, Promoted 2, Total 19. Ranking 2: Not promoted 16, Promoted 4, Total 20. Ranking 3: Not promoted 14, Promoted 6, Total 20. Ranking 4: Not promoted 10, Promoted 9, Total 19. SPSS shows a significant linear-by-linear association (p=.008) showing that there is a significant association between the ranking and being promoted. Some useful details of how this works are: 1. The test relates to the odds. Odds are used for their statistical properties, and are not quite the same as probabilities. For ranking 1, the odds of being promoted are 2:17, as opposed to the probability which is 2:19. 2. Then, the test is on the odds ratios; e.g. if you move from rank 1 to rank 2, the odds ratio is 4:16/2:17 = 0.250/0.118 = 2.12. (The null hypothesis is that the odds ratio is 1, i.e. a change in ranks makes no difference to the odds.) 3. The procedure presumes that the odds ratios (in the population) are the same for all steps (i.e. if moving from rank 1 to rank 2 doubles the odds of promotion, moving from rank 2 to rank 3 would also double the odds of promotion). That is why there is only 1 degree of freedom. (This assumption is known as "linearity in the logit".) 4. The test is therefore conceptually the same (and gives a similar answer) to doing logistic regression with just one covariate. (In logistic regression, "covariate" means a variable like this one). In this case the covariate would be ranking, and the DV would be promotion decision.
What does "linear-by-linear association" in SPSS mean? As a previous reply mentioned, yes it is and the technical description is at SPSS's support page: https://www-304.ibm.com/support/docview.wss?uid=swg21477269 This is a useful statistic for those who u
52,598
How to write a piecewise regression model as a linear model?
The overall model has four parameters: $\alpha_0,$ $\alpha_1,$ $\beta_0,$ and $\beta_1.$ Therefore, if a solution is at all possible, we must be able to construct four corresponding variables $z_1, z_2,$ $z_3,$ and $z_4.$ One solution dedicates the first two parameters to the first model and the second two parameters to the second model. Thus, when $x\le x_0,$ we may set $\mathbf z = (z_1,z_2,z_3,z_4)=(1,x,0,0)$ and when $x \gt x_0$ set $\mathbf z = (z_1,z_2,z_3,z_4)=(0,0,1,x).$ The zeros ensure the parameters for one part of the model do not affect the other part of the model. To put it another way, for any parameter vector $\gamma = (\gamma_1,\gamma_2,\gamma_3,\gamma_4)^\prime$ notice that $$\begin{aligned} \mathbf z \gamma &= (1,x,0,0)(\gamma_1,\gamma_2,\gamma_3,\gamma_4)^\prime = \gamma_1 + \gamma_2 x,& \ x \le x_0\\ \mathbf z \gamma &= (0,0,1,x)(\gamma_1,\gamma_2,\gamma_3,\gamma_4)^\prime = \gamma_3 + \gamma_4 x,& \ x \gt x_0. \end{aligned}$$ This is precisely the desired model with $\gamma_1=\alpha_0, \gamma_2=\alpha_1,$ $\gamma_3=\beta_0,$ and $\gamma_4=\beta_1.$ As is usual, when you assemble the observations as rows in a "model matrix" $Z,$ the model can be written $y = Z\gamma + E$ where $E$ is the vector of errors. The $x$ values in the plot are $0, 1, 2, \ldots, 20$ and $x_0=11.$ The model matrix $Z$ is [,1] [,2] [,3] [,4] [1,] 1 0 0 0 [2,] 1 1 0 0 [3,] 1 2 0 0 [4,] 1 3 0 0 [5,] 1 4 0 0 [6,] 1 5 0 0 [7,] 1 6 0 0 [8,] 1 7 0 0 [9,] 1 8 0 0 [10,] 1 9 0 0 [11,] 1 10 0 0 [12,] 1 11 0 0 [13,] 0 0 1 12 [14,] 0 0 1 13 [15,] 0 0 1 14 [16,] 0 0 1 15 [17,] 0 0 1 16 [18,] 0 0 1 17 [19,] 0 0 1 18 [20,] 0 0 1 19 [21,] 0 0 1 20 The pattern is clear. These data and fits were produced by the following R implementation of this procedure. x <- seq(0, 20) gamma <- c(12, -1, 0, 1/2) x0 <- 11 sigma <- 1 # # Create the model matrix `Z` and true responses `y.` # Z <- cbind(x <= x0, x*(x <= x0), x > x0, x*(x > x0)) y. <- Z %*% gamma # # Create random responses according to the model. # set.seed(17) y <- y. + rnorm(length(y.), 0, sigma) # # Find the least-squares fit. # fit <- lm.fit(Z, y) # # Plot the data, model, and fit. # plot(x, y, type="n", ylim=c(0,15), main="Data, True Model (Red), and Fit (Dashed Blue)") f <- function(x, gamma) cbind(x <= x0, x*(x <= x0), x > x0, x*(x > x0)) %*% gamma curve(f(x, gamma), add=TRUE, n=2001, col="Red", lwd=2) curve(f(x, fit$coefficients), add=TRUE, n=2001, lty=3, col="Blue", lwd=2) abline(v = x0, lwd=2, col="Gray") text(x0, 14, expression(x[0]), pos=4) text(1,1, expression(y==gamma[1]+gamma[2]*x+epsilon), pos=4) text(19,1, expression(y==gamma[3]+gamma[4]*x+epsilon), pos=2) points(x, y, pch=19)
How to write a piecewise regression model as a linear model?
The overall model has four parameters: $\alpha_0,$ $\alpha_1,$ $\beta_0,$ and $\beta_1.$ Therefore, if a solution is at all possible, we must be able to construct four corresponding variables $z_1, z
How to write a piecewise regression model as a linear model? The overall model has four parameters: $\alpha_0,$ $\alpha_1,$ $\beta_0,$ and $\beta_1.$ Therefore, if a solution is at all possible, we must be able to construct four corresponding variables $z_1, z_2,$ $z_3,$ and $z_4.$ One solution dedicates the first two parameters to the first model and the second two parameters to the second model. Thus, when $x\le x_0,$ we may set $\mathbf z = (z_1,z_2,z_3,z_4)=(1,x,0,0)$ and when $x \gt x_0$ set $\mathbf z = (z_1,z_2,z_3,z_4)=(0,0,1,x).$ The zeros ensure the parameters for one part of the model do not affect the other part of the model. To put it another way, for any parameter vector $\gamma = (\gamma_1,\gamma_2,\gamma_3,\gamma_4)^\prime$ notice that $$\begin{aligned} \mathbf z \gamma &= (1,x,0,0)(\gamma_1,\gamma_2,\gamma_3,\gamma_4)^\prime = \gamma_1 + \gamma_2 x,& \ x \le x_0\\ \mathbf z \gamma &= (0,0,1,x)(\gamma_1,\gamma_2,\gamma_3,\gamma_4)^\prime = \gamma_3 + \gamma_4 x,& \ x \gt x_0. \end{aligned}$$ This is precisely the desired model with $\gamma_1=\alpha_0, \gamma_2=\alpha_1,$ $\gamma_3=\beta_0,$ and $\gamma_4=\beta_1.$ As is usual, when you assemble the observations as rows in a "model matrix" $Z,$ the model can be written $y = Z\gamma + E$ where $E$ is the vector of errors. The $x$ values in the plot are $0, 1, 2, \ldots, 20$ and $x_0=11.$ The model matrix $Z$ is [,1] [,2] [,3] [,4] [1,] 1 0 0 0 [2,] 1 1 0 0 [3,] 1 2 0 0 [4,] 1 3 0 0 [5,] 1 4 0 0 [6,] 1 5 0 0 [7,] 1 6 0 0 [8,] 1 7 0 0 [9,] 1 8 0 0 [10,] 1 9 0 0 [11,] 1 10 0 0 [12,] 1 11 0 0 [13,] 0 0 1 12 [14,] 0 0 1 13 [15,] 0 0 1 14 [16,] 0 0 1 15 [17,] 0 0 1 16 [18,] 0 0 1 17 [19,] 0 0 1 18 [20,] 0 0 1 19 [21,] 0 0 1 20 The pattern is clear. These data and fits were produced by the following R implementation of this procedure. x <- seq(0, 20) gamma <- c(12, -1, 0, 1/2) x0 <- 11 sigma <- 1 # # Create the model matrix `Z` and true responses `y.` # Z <- cbind(x <= x0, x*(x <= x0), x > x0, x*(x > x0)) y. <- Z %*% gamma # # Create random responses according to the model. # set.seed(17) y <- y. + rnorm(length(y.), 0, sigma) # # Find the least-squares fit. # fit <- lm.fit(Z, y) # # Plot the data, model, and fit. # plot(x, y, type="n", ylim=c(0,15), main="Data, True Model (Red), and Fit (Dashed Blue)") f <- function(x, gamma) cbind(x <= x0, x*(x <= x0), x > x0, x*(x > x0)) %*% gamma curve(f(x, gamma), add=TRUE, n=2001, col="Red", lwd=2) curve(f(x, fit$coefficients), add=TRUE, n=2001, lty=3, col="Blue", lwd=2) abline(v = x0, lwd=2, col="Gray") text(x0, 14, expression(x[0]), pos=4) text(1,1, expression(y==gamma[1]+gamma[2]*x+epsilon), pos=4) text(19,1, expression(y==gamma[3]+gamma[4]*x+epsilon), pos=2) points(x, y, pch=19)
How to write a piecewise regression model as a linear model? The overall model has four parameters: $\alpha_0,$ $\alpha_1,$ $\beta_0,$ and $\beta_1.$ Therefore, if a solution is at all possible, we must be able to construct four corresponding variables $z_1, z
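As a side note (a sketch, assuming the x, y, and x0 objects created in the answer above), the same four-parameter fit can be obtained from lm() with a grouping factor instead of building the model matrix Z by hand:

g <- factor(x > x0, labels = c("left", "right"))   # which side of the break each point is on
fit2 <- lm(as.numeric(y) ~ 0 + g + g:x)            # separate intercept and slope on each side
coef(fit2)                                          # should reproduce fit$coefficients up to ordering and naming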
52,599
How to write a piecewise regression model as a linear model?
For a known discontinuous break point $x_0$, the following piecewise regression model: $$y= \alpha_0 + \alpha_1 x +\epsilon ;\ \ x\le x_0 $$ $$y=\beta_0 +\beta_1 x + \epsilon;\ \ x\gt x_0$$ can instead be expressed using an indicator function of $x$ and $x_0$ for the change in intercept, and a hinge function of $x_i$ and $x_0$ for the change in slope: $$y_{i} = \beta_{0} + \beta_{x}x_{i} + \beta_{0\text{c}}I(x_{i} \ge x_0) + \beta_{x\text{c}} \max(x_i - x_0,0) + \varepsilon_{i}$$ Explanation Change in intercept: If $x_i < x_0$, then $I(x_i \ge x_0) = 0$, so the $\beta_{0\text{c}}I(x_{i} \ge x_0)$ term also equals $0$. However, if $x_i \ge x_0$, then $I(x_i \ge x_0) = 1$, and the $\beta_{0\text{c}}I(x_{i} \ge x_0)$ term equals $\beta_{0\text{c}}$… which is just a constant, so the intercept for values at $x_0$ and higher equals $\beta_0 + \beta_{0\text{c}}$. Change in slope: If $x_i < x_0$, then $\max(x_i - x_0,0) = 0$, so the $\beta_{x\text{c}} \max(x_i - x_0,0)$ term also equals $0$. However, if $x_i \ge x_0$, then the $\max(x_i - x_0,0)$ term increases at exactly the same rate as $x_i$: a one-unit increase in $x_i$ corresponds to a one-unit increase in $\max(x_i-x_0,0)$. For example, if $x_i = x_0 + 2$, then $\max(x_i - x_0,0)=2$. The effect of $x$ on $y$ when $x_i < x_0$ is just $\beta_x$, but the effect of $x$ on $y$ changes from $\beta_{x}$ to $\beta_x + \beta_{x\text{c}}$ for values of $x_i \ge x_0$. A single linear model You can thus create a single linear model with two new variables, where $x_{1i} = x_i$ and, say, $x_{2i} = I(x_{1i} \ge x_0)$, and $x_{3i} = \max(x_{1i}-x_0,0)$ and estimate (e.g., using OLS): $$\boldsymbol{y_i = \beta_0 + \beta_x x_{1i} + \beta_{0\text{c}}x_{2i} + \beta_{x\text{c}}x_{3i} + \varepsilon_{i}} = BX_{i} + \varepsilon_{i}$$
How to write a piecewise regression model as a linear model?
For a known discontinuous break point $x_0$, the following piecewise regression model: $$y= \alpha_0 + \alpha_1 x +\epsilon ;\ \ x\le x_0 $$ $$y=\beta_0 +\beta_1 x + \epsilon;\ \ x\gt x_0$$ can inste
How to write a piecewise regression model as a linear model? For a known discontinuous break point $x_0$, the following piecewise regression model: $$y= \alpha_0 + \alpha_1 x +\epsilon ;\ \ x\le x_0 $$ $$y=\beta_0 +\beta_1 x + \epsilon;\ \ x\gt x_0$$ can instead be expressed using an indicator function of $x$ and $x_0$ for the change in intercept, and a hinge function of $x_i$ and $x_0$ for the change in slope: $$y_{i} = \beta_{0} + \beta_{x}x_{i} + \beta_{0\text{c}}I(x_{i} \ge x_0) + \beta_{x\text{c}} \max(x_i - x_0,0) + \varepsilon_{i}$$ Explanation Change in intercept: If $x_i < x_0$, then $I(x_i \ge x_0) = 0$, so the $\beta_{0\text{c}}I(x_{i} \ge x_0)$ term also equals $0$. However, if $x_i \ge x_0$, then $I(x_i \ge x_0) = 1$, and the $\beta_{0\text{c}}I(x_{i} \ge x_0)$ term equals $\beta_{0\text{c}}$… which is just a constant, so the intercept for values at $x_0$ and higher equals $\beta_0 + \beta_{0\text{c}}$. Change in slope: If $x_i < x_0$, then $\max(x_i - x_0,0) = 0$, so the $\beta_{x\text{c}} \max(x_i - x_0,0)$ term also equals $0$. However, if $x_i \ge x_0$, then the $\max(x_i - x_0,0)$ term increases at exactly the same rate as $x_i$: a one-unit increase in $x_i$ corresponds to a one-unit increase in $\max(x_i-x_0,0)$. For example, if $x_i = x_0 + 2$, then $\max(x_i - x_0,0)=2$. The effect of $x$ on $y$ when $x_i < x_0$ is just $\beta_x$, but the effect of $x$ on $y$ changes from $\beta_{x}$ to $\beta_x + \beta_{x\text{c}}$ for values of $x_i \ge x_0$. A single linear model You can thus create a single linear model with two new variables, where $x_{1i} = x_i$ and, say, $x_{2i} = I(x_{1i} \ge x_0)$, and $x_{3i} = \max(x_{1i}-x_0,0)$ and estimate (e.g., using OLS): $$\boldsymbol{y_i = \beta_0 + \beta_x x_{1i} + \beta_{0\text{c}}x_{2i} + \beta_{x\text{c}}x_{3i} + \varepsilon_{i}} = BX_{i} + \varepsilon_{i}$$
How to write a piecewise regression model as a linear model? For a known discontinuous break point $x_0$, the following piecewise regression model: $$y= \alpha_0 + \alpha_1 x +\epsilon ;\ \ x\le x_0 $$ $$y=\beta_0 +\beta_1 x + \epsilon;\ \ x\gt x_0$$ can inste
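Assuming vectors x and y and a known break point x0 exist in the workspace, this parameterization is essentially a one-liner in R (a sketch, not part of the original answer):

fit <- lm(y ~ x + I(x >= x0) + pmax(x - x0, 0))
# coefficients, in order: beta_0, beta_x, beta_0c (intercept jump at x0), beta_xc (slope change after x0)
summary(fit)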
52,600
Linear regression performing better than random forest in Caret
Definitely check on how $R^2$ is being evaluated and whether it is uniform across the different algorithms. Beyond that, a few thoughts occur to me: If your features have a smooth, nearly linear dependence on the covariates, then linear regression will model the dependence better than random forests, which will basically approximate a linear curve with an ugly irregular step function. If the dependence is multivariate linear and smooth, with $v$ significant covariates producing the dependence, the fit performance of random forests can be expected to get worse and worse for larger and larger $v$. RF has a much greater ability than a single decision tree to model linearity, since we are adding tree predictions together - but still, it's just not very efficient to approximate a high-dimensional linear relationship with a series of step functions. I think this is the most likely theoretical explanation for RF underperforming linear regression. ...that said, the step functions will get nicer as you add more trees to the random forest. So you may want to consider increasing the number of trees a lot - as high as you are willing to wait for. Maybe see if caret will allow you to track the $R^2$ as the number of trees increases? I would also expect boosted trees to potentially work well here, but you may have to fit a ton of trees and fiddle a lot with the parameters. There's no guarantee though - if this is a really high-dimensional smooth relationship, it could take an unfathomably large number of trees to approximate it well. Preprocessing with PCA is not ideal for high-dimensional learners like random forests. For best results, they should generally get direct access to the full dataset. If the raw dataset has sparsity of main effects, this nice property could easily be destroyed by PCA. This would make even worse the problem of my first bullet, where RF is trying to approximate a high-dimensional linear relationship by step functions. Advanced machine learning algorithms are favored for their generalization performance, not their ability to closely fit training data and produce a high $R^2$. You are performing an interesting diagnostic and that is worth doing, but be aware that this diagnostic is not a good measure of final performance. Random Forests are not the only useful algorithm out there for high-dimensional data. You could also try elastic net regression either on your PCA processed data or on the raw dataset. Elastic net gives you the ability to model smooth relationships, as well as the ability to perform high-dimensional feature selection using the $\ell_1$ regression coefficient penalty.
Linear regression performing better than random forest in Caret
Definitely check on how $R^2$ is being evaluated and whether it is uniform across the different algorithms. Beyond that, a few thoughts occur to me: If your features have a smooth, nearly linear depe
Linear regression performing better than random forest in Caret Definitely check on how $R^2$ is being evaluated and whether it is uniform across the different algorithms. Beyond that, a few thoughts occur to me: If your features have a smooth, nearly linear dependence on the covariates, then linear regression will model the dependence better than random forests, which will basically approximate a linear curve with an ugly irregular step function. If the dependence is multivariate linear and smooth, with $v$ significant covariates producing the dependence, the fit performance of random forests can be expected to get worse and worse for larger and larger $v$. RF has a much greater ability than a single decision tree to model linearity, since we are adding tree predictions together - but still, it's just not very efficient to approximate a high-dimensional linear relationship with a series of step functions. I think this is the most likely theoretical explanation for RF underperforming linear regression. ...that said, the step functions will get nicer as you add more trees to the random forest. So you may want to consider increasing the number of trees a lot - as high as you are willing to wait for. Maybe see if caret will allow you to track the $R^2$ as the number of trees increases? I would also expect boosted trees to potentially work well here, but you may have to fit a ton of trees and fiddle a lot with the parameters. There's no guarantee though - if this is a really high-dimensional smooth relationship, it could take an unfathomably large number of trees to approximate it well. Preprocessing with PCA is not ideal for high-dimensional learners like random forests. For best results, they should generally get direct access to the full dataset. If the raw dataset has sparsity of main effects, this nice property could easily be destroyed by PCA. This would make even worse the problem of my first bullet, where RF is trying to approximate a high-dimensional linear relationship by step functions. Advanced machine learning algorithms are favored for their generalization performance, not their ability to closely fit training data and produce a high $R^2$. You are performing an interesting diagnostic and that is worth doing, but be aware that this diagnostic is not a good measure of final performance. Random Forests are not the only useful algorithm out there for high-dimensional data. You could also try elastic net regression either on your PCA processed data or on the raw dataset. Elastic net gives you the ability to model smooth relationships, as well as the ability to perform high-dimensional feature selection using the $\ell_1$ regression coefficient penalty.
Linear regression performing better than random forest in Caret Definitely check on how $R^2$ is being evaluated and whether it is uniform across the different algorithms. Beyond that, a few thoughts occur to me: If your features have a smooth, nearly linear depe
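A rough, hedged sketch of the first bullet in the answer above (assuming the caret and randomForest packages are installed; all simulation settings are arbitrary): on purely linear, smooth, moderately high-dimensional data, the linear model should post a higher cross-validated R^2 than the random forest.

library(caret)
set.seed(1)
n <- 200; p <- 20
X <- as.data.frame(matrix(rnorm(n * p), n, p))
dat <- data.frame(X, y = as.numeric(as.matrix(X) %*% rnorm(p) + rnorm(n)))   # purely linear signal plus noise
ctrl <- trainControl(method = "cv", number = 5)
fit_lm <- train(y ~ ., data = dat, method = "lm", trControl = ctrl)
fit_rf <- train(y ~ ., data = dat, method = "rf", trControl = ctrl, tuneLength = 2)
fit_lm$results$Rsquared        # typically close to the signal-to-noise ceiling
max(fit_rf$results$Rsquared)   # typically noticeably lower on data like these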