Dataset columns: idx (int64, 1 to 56k); question (stringlengths 15 to 155); answer (stringlengths 2 to 29.2k); question_cut (stringlengths 15 to 100); answer_cut (stringlengths 2 to 200); conversation (stringlengths 47 to 29.3k); conversation_cut (stringlengths 47 to 301).
53,001
Skewed but bell-shaped still considered as normal distribution for ANOVA?
If the distributions are similar (in particular have the same variance) and the group sizes are identical (balanced design), you probably have no reason to worry. Formally, the normality assumption is violated and it can matter, but it is less important than the equality-of-variance assumption, and simulation studies have shown ANOVA to be quite robust to such violations as long as the sample size and the variance are the same across all cells of the design. If you combine several violations (say non-normality and heteroscedasticity) or have an unbalanced design, you cannot trust the F test anymore.

That said, the distribution will also have an impact on the error variance, and even if the nominal error level is preserved, non-normal data can severely reduce the power to detect a given difference. Also, when you are looking at skewed distributions, a few large values can have a big influence on the mean. Consequently, it's possible that two groups really have different means (in the sample and in the population) but that most of the observations (i.e. most of the test runs in your case) are in fact very similar. The mean therefore might not be what you are interested in (or at least not all you are interested in).

In a nutshell, you could probably still use ANOVA, as inference will not necessarily be threatened, but you might also want to consider alternatives to increase power or to learn more about your data. Also note that, strictly speaking, the normality assumption applies to the distribution of the residuals; you should therefore look at residual plots, or at least at the distribution in each cell, not at the whole data set at once.
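As a rough illustration of that last point, here is a hedged R sketch of inspecting ANOVA residuals rather than the pooled raw data; the data frame d, with a numeric time and a grouping factor group, is entirely made up.

# hypothetical, skewed run-time data; replace with your own data frame
set.seed(42)
d <- data.frame(group = gl(3, 30, labels = c("A", "B", "C")),
                time  = rexp(90, rate = 1/10))

fit <- aov(time ~ group, data = d)   # one-way ANOVA
res <- residuals(fit)

qqnorm(res); qqline(res)             # normal QQ plot of the residuals
boxplot(res ~ d$group)               # spread of the residuals in each cell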
53,002
Skewed but bell-shaped still considered as normal distribution for ANOVA?
There is a degree of sensitivity to heavy skewness in ANOVA. Times, in particular, tend to be much more skewed than speeds (inverse times) or log-times. If your question of interest could be stated in terms of one of those (and they are less skewed), then you may not have to rely on an assumption that doesn't hold. Further, and even more critically, you also tend to get strongly different spreads with times (rather than with either log-times or speeds). What does your data look like? Have you got some by-group displays (e.g. QQ plots, box plots) and summaries (e.g. mean, sd, median, quartiles)?
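A hedged R sketch of the kind of by-group displays and summaries suggested here; the data frame d with columns time and group is hypothetical.

# hypothetical run-time data
set.seed(1)
d <- data.frame(group = gl(3, 40, labels = c("A", "B", "C")),
                time  = rlnorm(120, meanlog = 2, sdlog = 0.6))

# compare raw times, log-times and speeds (inverse times) by group
par(mfrow = c(1, 3))
boxplot(time ~ group, data = d, main = "time")
boxplot(log(time) ~ group, data = d, main = "log(time)")
boxplot(1/time ~ group, data = d, main = "speed = 1/time")

# per-group summaries
aggregate(time ~ group, data = d,
          FUN = function(x) c(mean = mean(x), sd = sd(x), median = median(x)))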
53,003
R: How to interpret the QQplot's outlier numbers?
The numbers in the plot correspond to the indices of the standardized residuals and of the original data. By default, R labels the three most extreme residuals, even if they don't deviate much from the QQ-line. So the fact that the points are labelled doesn't mean that the fit is bad or anything. This behaviour can be changed by specifying the option id.n. Let me illustrate this with your example:

set.seed(1)
y <- rnorm(100)
x <- rnorm(100)
lm.mod <- lm(y ~ x)                    # linear regression model
plot(lm.mod, which=2)                  # QQ-plot

lm.resid <- residuals(lm(y ~ x))       # save the residuals
sort(abs(lm.resid), decreasing=TRUE)   # sort the absolute values of the residuals

        14         61         24
2.32415869 2.29316200 2.09837122

The three most extreme residuals are numbers 14, 61 and 24. These are the numbers in the plot, and these indices correspond to the indices of the original data. So the data points 14, 24 and 61 are the ones that cause the most extreme residuals. We can also mark them in a scatterplot (the blue points). Note that because you generated your y and x independently, the regression line is simply the mean of y without any slope:

# The original data points corresponding to the 3 most extreme residuals
cbind(x, y)[c(14, 24, 61), ]
              x         y
[1,] -0.6506964 -2.214700
[2,] -0.1795565 -1.989352
[3,]  0.4251004  2.401618

# Make a scatterplot of the original data, mark the three points
# and add the residuals
par(bg="white", cex=1.6)
plot(y ~ x, pch=16, las=1)
abline(lm.mod, lwd=2)                  # add regression line
pre <- predict(lm.mod)

# Add the residual lines
segments(x[c(14, 24, 61)], y[c(14, 24, 61)],
         x[c(14, 24, 61)], pre[c(14, 24, 61)], col="red", lwd=2)

# Add the points
points(x[c(14, 24, 61)], y[c(14, 24, 61)], pch=16, cex=1.1, col="steelblue")
53,004
Standard Deviation of Random effect is 0?
Sometimes the maximum likelihood estimate for a variance component is zero. Or, more generally, sometimes a given algorithm for getting approximate MLEs will return zero. This happens when, for example, your fixed effects happen to be able to fit all the members of a group perfectly, and no variance is left over. This is mainly an issue when the sample size is small relative to the number of groups. Doug Bates, the author of the lme4 package, discusses this on page 11 of his book/manual on R-Forge. He calls it a "degenerate" model, and says it's equivalent to not including that random effect at all. The model is still valid, but you may have reasons not to trust its estimates, as discussed below. Andrew Gelman and a bunch of collaborators discuss the issue in more depth here. They think that pure maximum likelihood's tendency to return zeros in this case can cause a number of problems (discussed on page 2). They suggest weakly bumping the expected variance of the random effects up slightly to make it return nonzero estimates. These adjusted estimates can be more stable, and may be more consistent with a researcher's prior knowledge about the problem. This approach will also tend to give better standard errors. While the approach they suggest is Bayesian, they show that it has good frequency properties as well. One of the authors, Vincent Dorie, has an R package that extends lme4 using their suggested method here.
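For intuition, here is a hedged R sketch with simulated data and lme4 (which the answer references) showing how a variance component can come out as exactly zero when there is little between-group variation relative to the noise; the result may vary with the seed, so treat it as an illustration rather than a guarantee.

library(lme4)

set.seed(123)
g <- gl(5, 4)      # only 5 groups, 4 observations each
y <- rnorm(20)     # no true group effect at all

fit <- lmer(y ~ 1 + (1 | g))   # lme4 may warn about a singular ("boundary") fit
VarCorr(fit)                   # the SD of the (1 | g) term will often be reported as 0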
53,005
Adding noise to a column of data
It depends on the kind of noise you want to add. Here's an example:

x <- runif(100, 100, 150)             # this is our original vector, which I'm just making up

corrupt <- rbinom(length(x), 1, 0.1)  # choose an average of 10% to corrupt at random
corrupt <- as.logical(corrupt)
noise <- rnorm(sum(corrupt), 1000, 200)  # generate the noise to add
x[corrupt] <- x[corrupt] + noise         # about 10% of x has been corrupted

Here's a plot of corrupted values against the original ones. You should substitute your own vector and the kind of noise.

Edit: I've just seen your response to my question. To get a random value from 0 to 9, you use sample like so:

noise <- sample(0:9, sum(corrupt), replace=TRUE)

and because you're replacing rather than adding, you then do:

x[corrupt] <- noise

giving this on the same data as before, and with the same values replaced. If you wanted to replace a precise number of values (exactly 100 of 1000 values, say, rather than an average of 100), you could use sample to choose a set of indices indicating which values to replace.
53,006
Algorithm to produce autocorrelated uniformly distributed number
The first abstract might as well be the entire paper, because it perfectly describes an algorithm. :-) The following is my interpretation of that abstract. The idea emulates that of generating a Gaussian process with specified autocorrelation $\rho = \cos(\theta)$ (re-expressing $\rho$ as the cosine of an angle between $0$ and $\pi$). By rescaling and shifting the mean, it suffices to generate a process $X_n$ with Standard Normal marginals (that is, with zero mean and unit variance). If we generate an iid sequence of Standard Normals $\varepsilon_n$, then the desired process is obtained recursively via $$\eqalign{ X_0 &= \varepsilon_0 \\ X_{n+1} &= \cos(\theta)X_n + \sin(\theta)\varepsilon_{n+1}, \quad n \ge 0. }$$ This works because (1) $X_0$ is Standard Normal by construction; (2) all the $X_n$ have Normal distributions because they are (inductively) linear combinations of independent Normals; (3) $\text{Var}(X_{n+1}) = \cos(\theta)^2 \text{Var}(X_n) + 2\cos(\theta)\sin(\theta) \text{Cov}(X_n, \varepsilon_{n+1}) + \sin(\theta)^2 \text{Var}(\varepsilon_{n+1}) = \cos(\theta)^2 \cdot 1 + \sin(\theta)^2 \cdot 1 = 1$ by induction on $n$; and (4) $\text{Cov}(X_n, X_{n+1}) = \text{Cov}(X_n, \cos(\theta)X_n + \sin(\theta)\varepsilon_{n+1}) = \cos(\theta)\text{Var}(X_n) = \cos(\theta)$, as intended. The analog of a Standard Normal distribution might be called the "standard uniform" distribution; an easy calculation establishes that it is the uniform distribution on $[-\sqrt{3}, \sqrt{3}]$. If we try the same construction, beginning with a sequence $(\varepsilon_n)$ of iid standard uniforms, then all the variance and covariance calculations remain correct, but unfortunately $Y_{n+1} = \cos(\theta)X_n + \sin(\theta)\varepsilon_{n+1}$ is not uniform. We can make it uniform through the probability integral transform: simply apply the CDF of $Y_{n+1}$ to the value obtained--which produces a uniform distribution on $[0,1]$--and then standardize that to make it standard uniform. Although the correlation is then no longer exactly $\cos(\theta)$, it turns out it will be extremely close to $\cos(\theta)$. A simple way to compute the CDF of $Y_{n+1}$, which by definition is the CDF of a linear combination of two standard uniforms with coefficients $\cos(\theta)$ and $\sin(\theta)$, begins with the case $0 \le \theta \le \pi/4$. Computing the integrals (which involve areas of triangles and parallelograms) gives $$ \eqalign { 1,\quad & z\geq \sqrt{3} (c+s) \\ \frac{1}{12} \csc(2 \theta) \left(-3-z^2+2 \sqrt{3} z (c+s)+9 \sin(2 \theta)\right),\quad & \sqrt{3} (c-s)<z<\sqrt{3} (c+s) \\ \frac{1}{6} \left(3+\sqrt{3} z /c\right),\quad & \sqrt{3} (-c+s)<z\leq \sqrt{3} (c-s) \\ \frac{1}{12} \csc(2 \theta) \left(z^2+2 \sqrt{3} z (c+s)+3 (1+\sin(2 \theta))\right),\quad & -\sqrt{3} (c+s)<z\leq \sqrt{3} (-c+s) \\ 0, \quad &\text{otherwise} } $$ where $c=\cos(\theta)$ and $s=\sin(\theta)$. (Computations are made a little easier by noting the distribution is symmetric.) Let's call this function $F(z;\theta)$. Here is a plot of $F$ and its derivative (the PDF) for $\theta=1/2$: I have broken the graph of $F$ (in blue) at the points where its definition changes; these correspond to changes in slope of its derivative (in red, shaded).
Now exploit the symmetries available via $x\to -x$, $y \to -y$, and $(x,y)\to (y,x)$ to reduce all computations to the case $0 \le \theta \lt \pi/4$: define $$\psi = \left|\text{mod}(\text{mod}(\theta, \pi),\ \pi/2,\ -\pi/4)\right|$$ to be the absolute difference between $\theta$ (reduced modulo $\pi$) and the nearest multiple of $\pi/2$, and set $$h(x,y;\theta) = 2\sqrt{3}\, (F(\cos(\theta)x + \sin(\theta)y;\ \psi) - 1/2).$$ The method of obtaining $X_{n+1}$ is to compute $h(X_n, \varepsilon_{n+1}; \theta)$. As a check, here are results obtained by simulating $10^5$ iid standard uniform $\varepsilon_n$ and using them to generate realizations of four processes with a wide range of intended correlations, $\rho=0.9$, $0.7$, $0$, and $-0.99$, respectively. Their means are $0.0146058, 0.00813056, 0.0040066, 0.000269295$. These are close to zero and can be attributed to random variation; the large serial correlation in the first accounts for the relatively large departure of its mean from $0$. Likewise, their variances of $1.00962, 1.00654, 0.99683, 0.996235$ are only insignificantly different from $1$. We can visually check that the distributions are uniform by plotting their histograms: they look close to uniform, as intended. (This really wasn't in doubt, because the construction--assuming the calculation is carried out correctly--forces them to be uniform; the main concern is whether the serial correlation coefficients are close to the intended values.) With $10^5$ iterations we should be able to pin down the correlations to better than two decimal places. The lag-one serial correlations are $$0.906885, 0.696603, 0.00768022, -0.990412,$$ comfortably close to the targets. For further confirmation, here are plots of the first $300$ values in each realization.
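Below is a hedged R transcription of this recipe; the piecewise CDF $F$, the folding of $\theta$ to $\psi$, and the map $h$ are copied directly from the formulas above, so treat it as a sketch rather than a verified implementation. The function names (F_lin, h_step, r_acu) are mine.

# CDF of Z = cos(th)*U1 + sin(th)*U2 for two independent "standard uniforms"
# (uniform on [-sqrt(3), sqrt(3)]) and 0 <= th <= pi/4, as derived above
F_lin <- function(z, th) {
  cc <- cos(th); ss <- sin(th); r3 <- sqrt(3)
  if (z >= r3*(cc + ss)) return(1)
  if (z >  r3*(cc - ss))
    return((1/12)/sin(2*th) * (-3 - z^2 + 2*r3*z*(cc + ss) + 9*sin(2*th)))
  if (z >  r3*(ss - cc)) return((1/6) * (3 + r3*z/cc))
  if (z > -r3*(cc + ss))
    return((1/12)/sin(2*th) * (z^2 + 2*r3*z*(cc + ss) + 3*(1 + sin(2*th))))
  0
}

# one step of the process: h(x, y; theta), with theta folded to psi in [0, pi/4]
h_step <- function(x, y, theta) {
  psi <- abs(((theta %% pi + pi/4) %% (pi/2)) - pi/4)
  2*sqrt(3) * (F_lin(cos(theta)*x + sin(theta)*y, psi) - 1/2)
}

# generate n values with target lag-1 correlation rho
r_acu <- function(n, rho) {
  theta <- acos(rho)
  eps <- runif(n, -sqrt(3), sqrt(3))   # iid standard uniforms: mean 0, variance 1
  x <- numeric(n)
  x[1] <- eps[1]
  for (i in 2:n) x[i] <- h_step(x[i-1], eps[i], theta)
  x
}

# quick check
z <- r_acu(1e5, 0.7)
c(mean(z), var(z), cor(z[-1], z[-length(z)]))   # mean ~ 0, variance ~ 1, lag-1 corr ~ 0.7
hist(z)                                         # approximately uniform on (-sqrt(3), sqrt(3))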
53,007
How to compute the distribution of sums when rolling 'N' dice with 'M' faces?
For many purposes, you can use a normal approximation with continuity correction. You can also use recursion for the probability that the total equals $y$, and add these from $y=x+1$ through $y=mn$. There is also an exact formula for the probability that the sum equals $y$ involving a single summation, which gives you a double sum for the probability the sum is greater than $x$. Be careful that in the linked answer, the "dice" had values from $0$ through $m$ instead of from $1$ through $m$. You can convert these by subtracting $1$ from each die and $n$ from the total, and using $m-1$ in place of $m$. Also, as whuber pointed out, there can be numerical instability if you are not careful in the order in which you add the terms of an alternating sum.
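As an illustration of the recursion/convolution idea mentioned here, a hedged R sketch (the function name dice_pmf is mine) that builds the exact distribution of the sum of n fair m-sided dice and then sums the upper tail:

# exact pmf of the sum of n fair m-sided dice, built one die at a time
dice_pmf <- function(n, m) {
  pmf <- rep(1/m, m)                      # distribution of a single die on faces 1..m
  if (n > 1) for (k in 2:n) {
    new <- numeric(length(pmf) + m - 1)
    for (face in 1:m) {
      idx <- face:(face + length(pmf) - 1)
      new[idx] <- new[idx] + pmf / m      # add this face with probability 1/m
    }
    pmf <- new
  }
  names(pmf) <- n:(n * m)                 # possible totals
  pmf
}

# example: P(sum of 3 six-sided dice > 12), which should be 56/216
p <- dice_pmf(3, 6)
sum(p[as.numeric(names(p)) > 12])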
53,008
How to compute the distribution of sums when rolling 'N' dice with 'M' faces?
Why don't you compute $P(\text{sum} > x)$ using a recursive formula with respect to the number of dice? Something like $P(\text{sum}_n > x) = P(\text{sum}_{n-1} + \text{outcome}_n > x) = \sum_{i=1}^{m} p_i\, P(\text{sum}_{n-1} > x - i)$, where $p_i$ is the probability of face $i$.
53,009
How to compute the distribution of sums when rolling 'N' dice with 'M' faces?
Here's an implementation of @ThePawn's answer in Julia. This computes the probability that the dice sum to exactly n.

using Memoize   # using Pkg; Pkg.add("Memoize")

@memoize function p_dice(dice, sides, n)
    # Probability that `dice` dice with `sides` faces sum to exactly n,
    # where each face ∈ 1:sides
    if dice == 1
        if 1 <= n <= sides
            return 1/sides
        else
            return 0
        end
    end
    return sum(1/sides * p_dice(dice-1, sides, n-outcome) for outcome in 1:sides)
end

and one in Python:

import functools

@functools.lru_cache(maxsize=None)
def p_dice(dice, sides, n):
    # Probability that `dice` dice with `sides` faces sum to exactly n,
    # where each face is in range(1, sides+1)
    if dice == 1:
        if 1 <= n <= sides:
            return 1/sides  # equal probability for each outcome
        else:
            return 0
    return sum(1/sides * p_dice(dice-1, sides, n-outcome)
               for outcome in range(1, sides+1))

Note that the memoization may very well be suboptimal for a larger number of dice.
53,010
How to compute the distribution of sums when rolling 'N' dice with 'M' faces?
If the faces are uniform, then the distribution will resemble a discrete form of the Irwin–Hall distribution. You could work out the equations for the distribution, which will be some piecewise polynomial. Possibly easier is to compute it by convolution (as given in another answer using a recursive formula) or to use the normal approximation.
53,011
Can confidence interval be equal to zero?
In a remarkable paper, Gleser and Hwang showed that, for some models, confidence intervals must have infinite expected length for any positive confidence level to be attained. More precisely, there is a subset of observations with positive measure for which the length of the confidence interval is infinite. Examples include errors-in-variables models and the Fieller-Creasy problem, where one estimates the ratio of two normal means. Conversely, a confidence interval may have zero length for a subset of observations with positive measure without jeopardising the (overall) coverage probability. For instance, in the normal $X\sim\mathcal{N}(\theta,1)$ case, if we pick an empty confidence interval with probability .01 and the interval $(x-2.054,\,x+2.054)$ with probability .99, we end up with an overall coverage probability of $$ \mathbb{P}_\theta(|X-\theta|<2.054)\times .99 = .96\times .99 \approx .95. $$
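A quick hedged R simulation of this second construction (randomizing between an empty interval and $(x-2.054,\,x+2.054)$, exactly as described above; the true mean theta is an arbitrary choice):

set.seed(1)
theta <- 3                               # arbitrary true mean
n.sim <- 1e5
x <- rnorm(n.sim, mean = theta, sd = 1)

use.interval <- runif(n.sim) < 0.99      # with probability .01, report an empty interval
covered <- use.interval & (abs(x - theta) < 2.054)

mean(covered)                            # should be close to 0.95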
53,012
Can confidence interval be equal to zero?
An odds ratio of 2.97E006 (if you mean 2,970,000) seems odd to me, since it is way too high. Given your sample size, it could be that some categories of the independent variables have a low frequency (e.g. a 1). Based on the information you provided, it is indeed not safe to report the CI. I think no journal will expect a CI of zero; if it is zero, there is no interval! The CI for a logit is calculated as $\ln(\text{OR}) - z\,SE \leq \theta \leq \ln(\text{OR}) + z\,SE$. If this results in zero, something is wrong.
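For reference, a hedged R sketch of that calculation, starting from a hypothetical estimated log odds ratio and its standard error:

log.or <- log(2.5)       # hypothetical estimated log odds ratio
se     <- 0.4            # hypothetical standard error of the log odds ratio
z      <- qnorm(0.975)   # 1.96 for a 95% interval

ci.logit <- log.or + c(-1, 1) * z * se   # CI on the log-odds-ratio scale
exp(ci.logit)                            # back-transformed CI for the odds ratio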
53,013
Experiment or simulation to understand type I and type II errors
This would be the most basic procedure behind any such simulation.

Type I errors:
1. Have the computer generate a set (of size $n$) of pseudorandom numbers that conform to a particular distribution (the normal would be most typical).
2. Generate a second identical set (i.e., same distribution, parameters, and size).
3. Conduct a statistical test on these data (as I have described this, a t-test would be appropriate).
4. Store the resulting p-value.
5. Iterate over (repeat) the above procedure many times (e.g., 10k is popular).
6. Determine the proportion of observed p-values that fall below your chosen $\alpha$ level (typically .05). (Note that this observed proportion ought to be very close to $\alpha$.)

Type II errors (modify the above procedure as follows):
- For the second step: generate a second set of pseudorandom numbers that differ from the first set in a pre-specified way (typically the mean would differ by some amount).
- On the sixth step: the proportion of observed p-values below $\alpha$ will almost certainly differ from $\alpha$ by a large amount. The observed proportion is an estimate of the statistical power of your test for that exact situation (i.e., data from those distributions, with those parameters, with those $n$'s).

Using simulations in a manner like this to explore properties of tests or situations, or to conduct power analyses, is very common. Moreover, they have been commonly used on this site to demonstrate / explain statistical concepts. Here are some threads you can explore if you want:
- An example where I used such a simulation to show that power decreases as the two group sizes become increasingly unequal: How should one interpret the comparison of means from different sample sizes?
- A very extensive discussion of using simulations to assess power: Simulation of logistic regression power analysis - designed experiments.
- A broader discussion of using simulations to understand statistical concepts: Using computer simulations to better understand statistical concepts at the graduate level.
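A hedged R sketch of the procedure described above, using two-sample t-tests and 10k iterations; the per-group sample size of 30 and the 0.5 SD mean shift in the power part are arbitrary choices of mine.

set.seed(7)
n.iter <- 10000
n <- 30                       # per-group sample size
alpha <- 0.05

# Type I error: both groups drawn from the same distribution
p.null <- replicate(n.iter, t.test(rnorm(n), rnorm(n))$p.value)
mean(p.null < alpha)          # should be close to alpha

# Type II error / power: the means differ by 0.5 SD
p.alt <- replicate(n.iter, t.test(rnorm(n), rnorm(n, mean = 0.5))$p.value)
mean(p.alt < alpha)           # estimated power
1 - mean(p.alt < alpha)       # estimated type II error rate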
53,014
Experiment or simulation to understand type I and type II errors
The most common example is of this kind: take a normal variate, $X_1$, which can be either $\mathcal{N}(0,1)$ or $\mathcal{N}(2,1)$. If you build a test accepting $\mathcal{N}(0,1)$ when $x_1<1.68$ and rejecting $\mathcal{N}(0,1)$ when $x_1>1.68$, it is rather simple to check by simulation that the type I error is $0.05$ and the type II error is $0.37$. For instance,

# type I error
x = rnorm(10^3)
sum(x > 1.68) / 10^3

and

# type II error
x = rnorm(10^3, 2)
sum(x < 1.68) / 10^3

Of course, simulation is not very helpful in this case, where everything can be computed analytically.
53,015
Experiment or simulation to understand type I and type II errors
Check out Geoff Cumming's "dancing p-values": http://www.youtube.com/watch?v=ez4DgdurRPg&feature=plcp Cumming is the author of "Understanding the New Statistics."
53,016
Relations between distributions
How about this: http://www.johndcook.com/distribution_chart.html
53,017
Relations between distributions
A fairly complete graph is given in "Univariate Distribution Relationships", Lawrence M. Leemis and Jacquelyn T. McQueston, The American Statistician, 2008, vol. 62, no. 1, pp. 45-53.
53,018
Singular covariance matrix in Mahalanobis distance in Matlab
To add to BGreene's answer: you can use the Moore-Penrose inverse. When you use the Mahalanobis distance modified in this way on the same data used to estimate the covariance matrix, you lose nothing. The covariance matrix is singular because your data happen to live in a linear subspace, and your modified Mahalanobis distance is identical to the Mahalanobis distance you could compute by first transforming your feature set to get a smaller set spanning the same space. If you later use it on new data that happens to have components outside that linear subspace, those components will simply be zeroed out in the distance. But if your data are reasonably representative of what you will see later, that should not be a problem.
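The question is about Matlab (where pinv plays this role), but here is a hedged sketch of the same idea in R, using MASS::ginv as the pseudoinverse and a deliberately rank-deficient covariance matrix:

library(MASS)   # for ginv(), the Moore-Penrose pseudoinverse

set.seed(10)
X <- matrix(rnorm(50 * 3), ncol = 3)
X <- cbind(X, X[, 1] + X[, 2])   # 4th column is a linear combination -> singular covariance

S  <- cov(X)                     # singular, so solve(S) would fail
Si <- ginv(S)                    # pseudoinverse instead

mu <- colMeans(X)
d2 <- apply(X, 1, function(row) t(row - mu) %*% Si %*% (row - mu))  # squared distances
head(sqrt(d2))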
53,019
Singular covariance matrix in Mahalanobis distance in Matlab
You could try using the Moore-Penrose pseudoinverse (the pinv function in Matlab) to invert your covariance matrix.
53,020
Singular covariance matrix in Mahalanobis distance in Matlab
The typical handwriting dataset everybody seems to use is quite ill-suited to PCA and many other mathematical methods. It is pixels, and obviously some of these (the bottom-right corner, say) are never really painted in any of the samples you have. Even worse, the values are discrete. This can cause all kinds of artifacts. PCA comes from a perfect world where all dimensions are continuous and just have different amounts of variance, and it assumes the variance is caused by importance, not by the natural scale of the axes. Now throw in that we are (probably) talking about pixel data here. For obvious reasons, we can expect neighboring pixels to be strongly correlated. Guess what PCA will do: you might as well just downsample to a lower-resolution image anyway. Consider using something else. Just working around the matrix inversion will probably not save your analysis. And maybe you can do without the dimensionality reduction, too?
53,021
Singular covariance matrix in Mahalanobis distance in Matlab
This is really a comment but too long: thanks Michael! The way to think about the Moore-Penrose inverse is as follows: any matrix represents a linear operator. First some notation: let $A$ be an $n \times m$ matrix, where we assume for simplicity that $n \ge m$. This matrix really represents a linear transformation $A \colon \mathbb{R}^m \to \mathbb{R}^n$. Let ${\mathcal N}(A)$ be the nullspace of $A$. Then we can decompose $A$ as a direct sum of two operators, one acting on the nullspace (sending it to the zero vector in $\mathbb{R}^n$), the other acting on the orthogonal complement of the nullspace. Now the singular value decomposition gives us a coordinate system adapted to this situation. Suppose the rank of $A$ is $r \le \min(n,m)$. Then we can write the SVD as $$ A = U \Lambda V^T = [U_1 \colon U_0] \begin{pmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_0 \end{pmatrix} [V_1 \colon V_0]^T = U_1 \Lambda_1 V_1^T, $$ where $\Lambda_0 = 0$, $\Lambda_1$ is $r \times r$, $U_1$ has $r$ columns, and $V_1$ is $m \times r$. Note that the columns of $V$ give an orthogonal basis for $\mathbb{R}^m$, the first $r$ of which (that is, the columns of $V_1$) give a basis for ${\mathcal N}(A)^{\perp}$. Now, writing a general point in this basis, using $x$ for the coordinates, we get $$ x_1 v_1 + \dots + x_r v_r + x_{r+1} v_{r+1} + \dots + x_m v_m, $$ which we can write as $V_1 x_1 + V_0 x_0$, where now $x_1, x_0$ are subvectors of the vector $x$. Now, letting $A$ act, we find that $$ A (V_1 x_1 + V_0 x_0) = U_1 \Lambda_1 V_1^T V_1 x_1 + 0 = U_1 \begin{pmatrix} \lambda_1 x_1 \\ \vdots \\ \lambda_r x_r \end{pmatrix}. $$ Observe that the $r$ columns of $U_1$ form an orthogonal basis of the image space of $A$ in $\mathbb{R}^n$. Call $A$ restricted to ${\mathcal N}(A)^{\perp}$ the nonsingular part of $A$; $A$ then consists of the direct sum of its nonsingular part and the zero operator acting on the nullspace. We get the Moore-Penrose inverse by taking the direct sum of the usual inverse of the nonsingular part and a zero operator. All other generalized inverses can be obtained in this way, as a direct sum of the usual inverse of the nonsingular part and some arbitrary operator $B$ replacing the zero operator in the case of Moore-Penrose. This explains the special role of the Moore-Penrose generalized inverse.
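A small hedged R sketch of this construction: compute the SVD, invert only the nonzero singular values, and compare with MASS::ginv. The tolerance tol for deciding which singular values count as zero is an arbitrary choice here.

library(MASS)

set.seed(3)
A <- matrix(rnorm(5 * 3), 5, 3)
A <- cbind(A, A[, 1] - A[, 2])        # rank-deficient 5 x 4 matrix

sv  <- svd(A)
tol <- 1e-10
d.inv <- ifelse(sv$d > tol, 1 / sv$d, 0)      # invert the nonsingular part, zero elsewhere
A.pinv <- sv$v %*% diag(d.inv) %*% t(sv$u)

max(abs(A.pinv - ginv(A)))            # should be numerically tiny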
53,022
Algorithm to determine if point is "too far from the average"
As Richard pointed out, the three-sigma "rule" only applies to the normal distribution (bell curve), which definitely doesn't apply here since your runtimes can't be negative. The log-normal distribution might possibly be more accurate, since it at least only allows for positive runtimes and might sort of make sense anyway. But rather than making a different "parametric" assumption about your runtimes (assuming some functional form for their distribution), you could use a non-parametric method instead. In particular, if you have a lot of previous runs, why not just look at the empirical CDF? This is just a function that, for a given x, gives you f(x) = the proportion of previously seen samples that are less than the given value. If you want 1% of runs to be notified, just look for runs that took longer than 99% of previous runs (the 99th percentile of the empirical distribution).
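A minimal R sketch of this rule, assuming a vector runtimes of previously observed run times and a new observation new.time (both names and values are illustrative, not from the original answer):

runtimes <- c(12.1, 9.8, 15.3, 11.0, 40.2, 10.5, 13.7)   # made-up historical run times (seconds)
new.time <- 38.0                                          # latest run to check

F.hat <- ecdf(runtimes)              # empirical CDF of the historical runs
F.hat(new.time)                      # proportion of past runs at or below this value

cutoff <- quantile(runtimes, 0.99)   # 99th percentile of the empirical distribution
new.time > cutoff                    # TRUE means: notify, slower than 99% of previous runs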
53,023
Algorithm to determine if point is "too far from the average"
The "rule" that only 1% of a sample is outside 3 standard deviations works when the distribution of values is a Gaussian (also known as "Normal") Distribution. I don't know what distribution you will be getting for your run times. But they are likely not to be Normal (they can not be less than zero). Also the different scripts may have different distributions -- and a mixture truly fouls up the "Normality". I will say nothing about the way run times tend to vary from time to time. You should get a look at the distributions of each script's run-time -- get a histogram of a large sample. And only then start thinking about means and standard deviations. I hope that someone on this mailing list can suggest a transformation of run-time data that is likely to give a normal distribution. Meanwhile: how about looking at the longest run time in each batch? Or a simple (ASCII style) graphic can be very helpful for spotting a problem. I used to use a dumb UNIX shell script like the following when I was monitoring the performance of our networks in the 1990's. : Display flipped histogram on ASCII display sort -n|uniq -c|awk '{for(i=2;i<=NF;i++)printf("%s\t",$i); mpl=85-NF*8; if($1>=mpl)plot=mpl; else plot=$1; for(i=1;i<=plot;i++)printf("="); print ""}'
53,024
Generation of random numbers from the multivariate normal distribution in spherical coordinates
This book has this reference to this book here. Not exactly cheap, but I think it's the one you are looking for. ;)
53,025
Generation of random numbers from the multivariate normal distribution in spherical coordinates
Given how easy it is to generate the normal variates, I would do just that, and then convert to spherical coordinates directly. If you need $N_p(0,I_p)$, where $p$ is the dimension, then in spherical coordinates the squared length satisfies $r^2 \sim \chi^2_p = \Gamma(p/2,1/2)$, and the direction is uniform on the unit sphere and independent of the length. In terms of the angles, this means the azimuthal angle (the one with range $[0,2\pi)$) is uniform, while the remaining angles on $[0,\pi)$ are independent of each other and of the length but have densities proportional to powers of the sine, so they are not individually uniform. If $p$ is even, $p=2m$, you can generate $r^2_p = \sum_{k=1}^{m} (-2 \ln U_k)$, $U_k \sim \mbox{i.i.d. } U[0,1]$. If $p$ is odd, $p=2m+1$, then you need to add another square of a normal to this (see above how to generate them), $r^2_p = r^2_{p-1} + z^2$, $z\sim N(0,1)$.
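Here is a short R sketch of the approach recommended at the start of this answer: generate the normals and then convert to a length and angles. The dimension, sample size and angle convention (all angles but the last on $[0,\pi)$, the last on $[0,2\pi)$) are illustrative choices, not taken from the original answer.

p <- 4        # dimension (illustrative, must be at least 3 for this loop)
n <- 1000     # number of draws (illustrative)

X <- matrix(rnorm(n * p), n, p)   # n draws from N_p(0, I_p)

to.spherical <- function(x) {
  p <- length(x)
  r <- sqrt(sum(x^2))
  theta <- numeric(p - 1)
  for (k in 1:(p - 2))            # angles theta_1, ..., theta_{p-2} in [0, pi)
    theta[k] <- atan2(sqrt(sum(x[(k + 1):p]^2)), x[k])
  theta[p - 1] <- atan2(x[p], x[p - 1]) %% (2 * pi)   # last angle in [0, 2*pi)
  c(r = r, theta)
}

S <- t(apply(X, 1, to.spherical))
mean(S[, 1]^2)   # close to p, since the squared length is chi-square with p degrees of freedom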
53,026
How does R's auto.arima() function determine the order of differencing when estimating a regression with seasonal ARIMA errors?
OCSB test: Osborn DR, Chui APL, Smith J, and Birchenhall CR (1988) "Seasonality and the order of integration for consumption", Oxford Bulletin of Economics and Statistics 50(4):361-377. The change in the algorithm is explained at http://robjhyndman.com/researchtips/forecast3/ When regressors are used, auto.arima() fits the linear model using lm() and applies the unit root tests to the residuals.
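A minimal usage sketch, assuming the forecast package and a hypothetical series y with a single external regressor X (the simulated data below are purely illustrative); it shows the regression-with-ARIMA-errors setup the answer refers to:

library(forecast)

set.seed(42)
X <- matrix(rnorm(80), ncol = 1, dimnames = list(NULL, "x1"))   # hypothetical regressor
y <- ts(10 + 2 * X[, 1] + arima.sim(list(ar = 0.6), n = 80), frequency = 4)

fit <- auto.arima(y, xreg = X)   # internally: lm() fit first, then unit root tests on its residuals
summary(fit)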
53,027
From confidence interval to standard deviation - what am I missing?
Analysis

The calculations in the question support the assumption that a binomial model is being used. This treats the incidents as if 211,101 slips of paper were drawn randomly from a hat having millions of slips (far more than the number observed). On each slip is drawn either a $1$ (an "incident") or a $0$. Evidently, a total of $164/100000 \times 211101 \approx 346$ incidents (ones) were observed. This total is large enough to let us estimate that a fraction $164/100000$ of all the slips in the hat are marked with ones. This is an estimate $\hat{p}$ of the hat's expectation, $p$. Probability theory tells us that the variance of the total of the numbers observed on $N$ draws from this hat is approximately $p(1-p)N$. Its square root is the standard deviation for the total of $N$ draws. ("Approximately" weasels around a potential, but likely small, "finite population" adjustment when the slips are not drawn with replacement.) As a proxy for the unknown value of $p$, we have the estimate $\hat{p}$. Using $\hat{p} = 164/100000$, the standard deviation of a single draw is $\sqrt{\hat{p}(1-\hat{p})} \approx 0.04046$; expressed on the per-$100000$ scale used for the incidence rate, this is an (estimated) standard deviation ($\hat{s}$) equal to $4046$. It seems high compared to the expected total of only $164$ incidents per $100000$, but it's correct: binomial distributions with rare outcomes are highly skewed.

Interpretations

This standard deviation itself isn't usually interpreted, but it is useful for constructing confidence intervals and other quantities related to the chance outcomes. For instance, the standard error of the sample mean is obtained by dividing $\hat{s}$ by the root of $N$. Taking $N=211101$ gives a standard error of $4046 / \sqrt{211101} = 8.81$. We see a close connection between these results and the calculations in the question itself, which deduced (from the confidence interval) that the standard error is $8.93$. If we were able to peer into the hat and total the values on all its tickets, we would anticipate the result to lie between $147/100000$ and $182/100000$ times the number of tickets. In deducing this, I have used a procedure (the 95% CI) that will fool me at most 5% of the time (due to the chance behavior of the 211,101 random draws). Similarly, the standard error for $N = 100000$ draws equals $12.80$. This is what has been calculated at the end of the question. It means that based on what we have seen so far, we would anticipate the number of incidents observed in another $100000$ draws from this hat would differ from $164$ (due to chance alone), but only by $12.8$ or so. A difference much larger than this--say, less than $125$ or more than $205$--would be surprising. (This is a prediction interval. It is wider than the confidence interval because it needs to account not only for the element of chance in the 211,101 outcomes already observed, which makes us somewhat uncertain about the true value of $p$, but also for the element of chance in the 100,000 future outcomes.)
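A short R sketch reproducing these calculations (all numbers are taken from the answer itself; nothing new is assumed):

n     <- 211101          # number of draws observed
p.hat <- 164 / 100000    # estimated incidence rate

s.hat <- sqrt(p.hat * (1 - p.hat)) * 100000   # SD of one draw on the per-100,000 scale: about 4046
se    <- s.hat / sqrt(n)                      # standard error of the rate: about 8.81
164 + c(-1.96, 1.96) * se                     # approximate 95% CI: roughly 147 to 181 per 100,000

sqrt(p.hat * (1 - p.hat) * 100000)            # SD of the count in 100,000 further draws: about 12.8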
53,028
From confidence interval to standard deviation - what am I missing?
I'd answer differently. If you are talking about measured (ratio or interval) data, then the standard deviation of the data measures scatter, and the standard error of the mean quantifies how precisely that mean is known (from its sample size and SD). The two are very different. Converting from one to the other is straightforward using the equation the original poster used. But these data are binomial. There are two outcomes (new case of disease or not), and the incidence rate is the proportion of people who get a case this year (or whatever unit of time they used). It is possible to compute a standard error of almost any value computed from a sample of data assumed to be drawn from (or representative of) a larger population. In this case, it makes perfect sense to compute the standard error of the proportion, which the OP did. The OP and WHuber then computed the SD of the number of expected cases, which is either 4103 or 4046 (I didn't try to figure out why those two calculations aren't identical). That is the SD for the number of cases you'd expect to see in a group of 211,101 people. It is not the SD of the incidence rate. There really is no standard deviation of the incidence rate. Or, more accurately, the standard deviation and standard error are really the same thing when the goal is to quantify the precision of a computed parameter like the rate. The standard error of a mean, for example, can be thought of as the standard deviation of the mean. Two terms for the same value. That value is very very different than the standard deviation of the data. Similarly the standard error of a proportion is the same as the standard deviation of a proportion. The difference is that the term standard deviation is rarely used in that context. My answer. The standard deviation of the incidence rate is 8.93 per 100,000.
53,029
Binary classification vs. continuous output with neural networks
It is a bad idea. It increases both type I and type II error. It also invokes "magical thinking" - that is, that something magical happens at the cutoff value. For example, with newborns, it is common to say babies under 2.5 kg are "low birth weight" and those above 2.5 kg are not. This treats a baby of 2.49 kg as being the same as one of 1.4 kg, but vastly different from a baby of 2.51 kg. Similarly, the 2.51 kg baby is treated just like a baby of 4.5 kg. It is true that people sometimes need to make "yes/no" decisions based on the output of a statistical model. But the statistical model and its results should be a guide and a tool, not a straitjacket.
53,030
Binary classification vs. continuous output with neural networks
If you convert a continuous variable to binary you throw out a lot of detailed information, so in my opinion it is advisable not to do it. For tree classifiers binary splits are used, but the information in the continuous variable is used to get the first split, and the variable can be split again if it is very important to the classification. I don't think it works that way with neural networks.
53,031
Binary classification vs. continuous output with neural networks
Peter Flom's answer here suggests that discretizing your output variable is going to hurt. If so, you obviously shouldn't do it! This is definitely the conventional wisdom for predictor variables. However, having thought about it for a little while, I'm not even sure how one could fairly compare the two situations for outputs. I suppose you could replace each discretized class with its mean value and compare the mean-squared errors, but that seems a little biased. If Peter Flom (or anyone else) has references or suggestions, I'd be very interested in seeing them! Your specific application might determine whether one or the other is more appropriate. An automatic defibrillator needs to decide whether or not to shock the patient; it can't show estimates of what the pulse is/should be, then tell you to ask a doctor. On the other hand, a cardiologist or sports coach might be interested in having those numbers. From your question, it sounds like continuous output might be preferable. That's certainly a more flexible option. If you were soliciting customers, knowing which ones will buy your product is nice, but being able to predict how much each will spend is even better. For example, you might offer Bill Gates lots of free samples, whereas I get a leaky pen. There are ways to rank discretized outputs (e.g., using the activation function), but that might not be exactly what you want. However, you might be better off modeling some situations as discrete, even if your actual measurements are continuous. Suppose you were trying to predict a person's electricity usage from their activities. Power is definitely a continuous variable. However, I suspect many appliances use a lot of power (when on) or very little (when off). There might be some variance, either due to measurement error or other factors (e.g., it's especially hot, so the computer's fan runs a little bit harder). Depending on your predictors, you might do better predicting COMPUTER=ON, TV=OFF, STEREO=ON and then substituting known values for each of those. You'd have to look at a histogram to see if your output variable has many narrow, isolated peaks or if it's more continuous. Finally, it may depend on your audience. Peter Flom makes an excellent point about how people treat values that are on opposite sides of a threshold as drastically different. A 2.501 kg baby probably has a very similar prognosis to a 2.499 kg baby, even if only one officially has a "Low Birth Weight" diagnosis. Sometimes one needs a threshold or it's just convenient shorthand. One alternative, frequently used with children, is to report percentiles (e.g., "he's a little more than 2.5 kg, but still in the 5th percentile so we ought to keep an eye on him"). On the other hand, some sharp thresholds are reasonable: $H_2O$ behaves very differently at -1 and +1 degrees. You'll have to decide based on the nature of your audience and your data.
53,032
Poisson likelihood and zero counts in expected value
I think that the sentence is poorly worded; one should not use the word "expect" in that context, as it can induce confusion. To "expect either 0 or 1 count per bin" does NOT mean that "the expected value per bin is either 0 or 1 ($m_i=0$ or $m_i=1$)". It rather means that the expected value per bin is quite small ($m_i$ is well below 1 - but not zero!), so that the probability of $d_i$ being 0 or 1 is high (or, put another way, the probability of the count being greater than 1 is small). For example, if $m_i=0.2$, we'd have a Poisson with probabilities $L_0 = 0.8187$, $L_1=0.1638$, so the probability that we have "either 0 or 1 count per bin" is high (0.9825); and that's what the (confusing) expression "we expect either 0 or 1 count per bin" means.
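A one-line R check of the numbers quoted here (only the answer's own $m_i = 0.2$ is assumed):

dpois(0:1, lambda = 0.2)        # approximately 0.8187 and 0.1637, the values quoted above
sum(dpois(0:1, lambda = 0.2))   # 0.9825: probability of seeing either 0 or 1 count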
53,033
Poisson likelihood and zero counts in expected value
If $m_i=0$ then the likelihood $L_i$ of observing $0$ is $1$ and of observing anything else is $0$. This is an example of the wider convention where $0^0$ is taken to be $1$ so $\frac{0^0}{0!}\exp(-0)=1$
53,034
Poisson likelihood and zero counts in expected value
The statement "Suppose that in a particular model $m$, you expect there to be $m_i$ counts in bin $i$. Then if the model is correct the likelihood of actually observing $d_i$ counts in bin $i$ of the data is, from the Poisson distribution,* $L_i = m_i^{d_i}/d_i! \; exp(-m_i)$" is incorrect in a couple of ways. First, $L$ is not the formula for the Poisson distribution. The Poisson has one parameter, let us label it $\lambda$, not one parameter $m_i$ for each bin $i$. The parameter $\lambda$ is the expected value of a random variable from the distribution, not the expected value of the number of counts in a specific bin. Also, the use of $d_i$ is confusing. Here's the correct probability for a single draw from a Poisson to equal $i$: $L_i = \frac{\lambda^{i}}{i!} \exp(-\lambda)$ Note that $L_i > 0 \space \forall i \ge 0$. If we make $N$ draws from a Poisson, $NL_i$ can be interpreted as the expected number of draws which will have value $i$. This is the same as $m_i$ in the initial statement. As $L_i > 0$, $m_i > 0$ as well. $m_i$ can be any positive real number, as in the initial statement. The initial statement would seem to lead more naturally to a multinomial distribution, where the expected number of counts in a bin divided by the total number of counts is indeed a parameter, and there is one such for each bin. When reduced to just "counts in bin $i$ vs. counts not in bin $i$" the multinomial simplifies to a binomial; the probability of seeing $d_i$ counts in bin $i$ given $N$ total counts and an expected number of counts in bin $i$ of $m_i$ is: $L_i = \binom {N}{d_i}(m_i/N)^{d_i}(1-m_i/N)^{N-d_i}$ I'm a little hesitant to post this, because one doesn't expect lecture notes to be this wrong, which implies a significant chance that my brain has malfunctioned and I've just missed something myself.
53,035
Poisson likelihood and zero counts in expected value
I think you are using Bayesian statistics here. The posterior probability is proportional to the likelihood function, so you have $P(\text{model parameters} \mid \text{data}) \propto L(\text{data} \mid \text{model parameters})$. Therefore, you can run into this problem. Actually, I have a similar problem. Nonetheless, I use wider bins; that is one possible way to avoid the problem for bins with zero expected value in the model.
53,036
Significance tests
This reply describes two good solutions, a permutation test and a Student t-test, and compares and contrasts them. Michael Lew recommends a permutation test. This is good advice: such a test is conceptually simple and makes few assumptions. It interprets the null hypothesis as meaning it makes no difference which sample a value is from, because both samples are drawn from the same distribution. (Notice that this adds an unstated but common assumption; namely, that the distribution with mean $\mu_1$ has exactly the same shape as that with mean $\mu_2$.)

Because this dataset is so small--only 20 numbers are involved in two samples of 10 each--no simulation is needed to carry out the permutation test: we can directly obtain all $\binom{20}{10} = 184756$ distinct ways in which $10$ of the values can be drawn from all $20$ numbers. In each case we can compare the mean of the $10$ values (taken to represent possible values of $x$ under the null hypothesis) to the mean of the $10$ values that remain (i.e., the values of $y$): this is a natural statistic for comparing two means. Here is a working R example:

x <- c(5,3,2,2,3,3,1,4,5,5) # One sample
y <- c(4,3,1,3,5,2,2,3,5,3) # The other sample
# Construct a test statistic
sum.all <- sum(c(x,y))
n.y <- length(y)
test.statistic <- function(u) mean(u) - (sum.all - sum(u)) / n.y
# Apply it to all possible ways in which x could have occurred.
perms <- combn(c(x,y), length(x))
p <- apply(perms, 2, test.statistic)
# Display the sample distribution of the test statistic.
hist(p)

To use this histogram, note that the value of the test statistic for the actual observations is 0.2:

> test.statistic(x)
[1] 0.2

It is apparent in the histogram that many of the permutation results are larger in size than 0.2. We will quantify this in a moment, but at this point it is clear that the difference is relatively small. It is worth noticing that the test statistic can only have a value in the range $[-2,2]$ in multiples of $0.2$: its sampling distribution is discrete.

> table(p)
p
   -2  -1.8  -1.6  -1.4  -1.2    -1  -0.8  -0.6  -0.4  -0.2     0
   35   154   560  1502  3316  6320 10356 15192 19679 23164 24200
  0.2   0.4   0.6   0.8     1   1.2   1.4   1.6   1.8     2
23164 19679 15192 10356  6320  3316  1502   560   154    35

(The numbers running -2 -1.8 ... 2 are the values of $p$ and beneath them are the numbers of times each occurs.) We find, easily enough, that (a) 86.9% of the values are equal to or exceed the observed test statistic in size:

> length(p[abs(p) >= abs(test.statistic(x))]) / length(p)
[1] 0.8690164

and (b) 61.8% of the values strictly exceed the observed test statistic in size:

> length(p[abs(p) > abs(test.statistic(x))]) / length(p)
[1] 0.6182641

There is little basis to choose one of these figures over the other; we might indeed just split the difference and take their average, equal to 0.744. This tells us that randomly dividing the 20 data values into two groups of 10 each, to simulate conditions under the null hypothesis, produces a greater mean difference either 87%, 62%, or 74% of the time, depending on how you wish to interpret "greater." These large results indicate the difference that has been observed could be attributed to chance alone: there is no basis for inferring the null hypothesis is false. Anyone carrying out the calculations shown here would likely wait a few seconds for them to complete. They would not be practicable for larger datasets: in such cases there are just too many possible ways that sample $x$ could have occurred among all the numbers.
That's why, when the two groups look similar and do not present a terribly skewed distribution, we often look first to a Student T test. This test is an approximation to the permutation test. It is intended to produce a comparable result while circumventing the large number of calculations needed to run the permutation test. First we check that the t-test results may be applicable to these data:

> require("moments") # For skewness()
> sd(x)
[1] 1.418136
> sd(y)
[1] 1.286684
> skewness(x)
[1] -0.06406292
> skewness(y)
[1] 0.1385547

The two groups have comparable standard deviations and low skewnesses. Although they are small in size (10 numbers each), they are not too small. The t-test should therefore work well. Let's apply it:

> t.test(x,y, var.equal=TRUE, alternative="two.sided")

        Two Sample t-test

data:  x and y
t = 0.3303, df = 18, p-value = 0.745
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -1.072171  1.472171
sample estimates:
mean of x mean of y
      3.3       3.1

The output is instantaneous, because little calculation is needed. As we saw before, the means differ by $3.3-3.1 = 0.2$. The p-value of 0.745 is remarkably close to the permutation test's result of 0.744 (q.v.).
53,037
Significance tests
You might consider a permutations test. A permutations test assumes that the observations are drawn from one population and then treatments are randomly allocated. Thus in the context of a permutations test for a difference in the means the null hypothesis (no treatment effect) becomes equivalent to a statement that any difference between the groups under the null hypothesis is a consequence of only the random allocation of the values into the groups. The significance of the observed differences between the treatment groups is thus just a measure of how unusual the observed allocation is relative to all possible random allocations. The significance is thus calculable by enumerating all possible allocations and finding from that list the distribution of random differences between group means. The probability under the null hypothesis of obtaining a difference as great as that observed or greater is equal to the proportion of the population of differences that is as great as that observed or greater. More detail, some references and free software (a bit archaic...) can be had from my webpage: http://www.pharmacology.unimelb.edu.au/statboss/permutations%20test.html
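When full enumeration of all allocations is impractical, a Monte Carlo version of the same idea works well. Here is a small R sketch (not part of the original answer) that approximates the permutation p-value by randomly re-allocating the pooled values many times; the data vectors and the number of reshuffles are illustrative choices.

x <- c(5,3,2,2,3,3,1,4,5,5)
y <- c(4,3,1,3,5,2,2,3,5,3)

pooled   <- c(x, y)
obs.diff <- mean(x) - mean(y)

set.seed(123)
B <- 10000
perm.diff <- replicate(B, {
  idx <- sample(length(pooled), length(x))   # random re-allocation into "group x"
  mean(pooled[idx]) - mean(pooled[-idx])
})

mean(abs(perm.diff) >= abs(obs.diff))        # approximate two-sided permutation p-value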
53,038
Calculating the $p$-value of an $F$- statistic
More details than you probably want on the mathematics of the F distribution can be found here (and other places). The formula for the pdf is there, and the p-value is just the integral from your F statistic to infinity of that function. It also has the cumulative distribution function for the F distribution, and the p-value will just be one minus the cumulative up to your F statistic. The cumulative function is expressed in terms of the (regularized) incomplete beta function (with link if you want the details). There may be implementations of the incomplete beta function already in Scheme that you could use (this would be fairly simple then); if not, there are implementations in other languages (here is one location) that you could probably link to your Scheme program (I don't know Scheme enough to know about linking, but you should be able to find a way).
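As a numerical illustration of the incomplete-beta relationship described here (shown in R rather than Scheme, since R appears elsewhere in this document; the F statistic and degrees of freedom are arbitrary):

f <- 4.2; d1 <- 2; d2 <- 10      # illustrative F statistic and degrees of freedom

1 - pf(f, d1, d2)                # p-value directly from the F CDF: about 0.047

# The same p-value via the regularized incomplete beta function, using
# P(F <= f) = I_x(d1/2, d2/2) with x = d1*f / (d1*f + d2), i.e. pbeta() in R.
1 - pbeta(d1 * f / (d1 * f + d2), d1 / 2, d2 / 2)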
53,039
Calculating the $p$-value of an $F$- statistic
About Scheme libraries specifically, here are two GSL bindings that you might be interested in: Noel Welsh's fork of mzgsl, and the Science collection by Doug Williams, which provides a collection of modules for numerical computing and includes random number distributions, among others. The second project is readily available on PLaneT if you use Racket. Here is an example that returns the $p$-value for the quantile $x=4.2$ of an ${\cal F}(2, 10)$ distribution ($p=0.047$):

(require (planet williams/science/random-distributions/f-distribution))
(- 1 (f-distribution-cdf 4.2 2 10))

with the corresponding CDF:

(require (planet williams/science/random-distributions/f-distribution-graphics))
(f-distribution-plot 2 10)

There are also some statistical functions available for Chicken Scheme (release branch 4). After having installed the required dependencies, e.g.

$ sudo chicken-install statistics

you will be able to do something like

(use statistics)
(f-significance 4.2 2 10 #:one-tailed? #t)

in the interactive Chicken shell (csi). As pointed out by @cardinal, the Guile-numeric bindings look promising, as they seem to provide a more integrated framework (with libfft and lapack support) for statistical computing. I have no time to test them presently, but they are worth trying.
53,040
Calculating the $p$-value of an $F$-statistic
Calculating the p-value for an F-test is quite straightforward in any statistical program. Consider for example R (r-project.org). Note that pf gives the cumulative distribution function, so for the usual upper-tail p-value you need pf(F, df1 = dfa, df2 = dfb, lower.tail = FALSE), or equivalently 1 - pf(F, dfa, dfb), where F is the value of the statistic and dfa and dfb are the numerator and denominator degrees of freedom. Hope this helps you!
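As a concrete check against the Scheme example above (the values F = 4.2 with 2 and 10 degrees of freedom are taken from that example), both forms below should return roughly 0.047:

    pf(4.2, df1 = 2, df2 = 10, lower.tail = FALSE)
    1 - pf(4.2, df1 = 2, df2 = 10)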
53,041
Adjusting kappa inter-rater agreement for prevalence
Kappa is regarded as a measure of chance-adjusted agreement, calculated as $\frac{{{p_{obs}} - {p_{exp}}}}{{1 - {p_{exp}}}}$ where ${p_{obs}} = \sum\limits_{i = 1}^k {{p_{ii}}} $ and ${p_{exp}} = \sum\limits_{i = 1}^k {{p_{i + }}{p_{ + i}}} $ ($p_{i+}$ and $p_{+i}$ are the marginal totals). Essentially, it is a measure of the agreement that is greater than expected by chance. Where the prevalence of one of the categories is high, chance agreement will be high and kappa can have unexpectedly low values. To adjust for this, and for the bias of kappa (I don't have sources nearby to refresh my memory to be able to write about the bias), the prevalence and bias adjusted kappa (PABAK) can be used. This can be calculated as $\frac{{k{p_{obs}} - 1}}{{k - 1}}$ where $k$ is the number of categories. There is some difference of opinion on whether to use PABAK. Some commentators believe the prevalence and bias effects are important in themselves so kappa should be reported, together with measures of prevalence and bias. Others like the convenience of a single number. I chose to report both PABAK and kappa when I needed to use kappa.
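A minimal sketch of both statistics in R, assuming a hypothetical 2x2 agreement table between two raters (the counts below are made up purely for illustration; with this skewed prevalence, kappa comes out around 0.50 while PABAK is 0.76):

    tab <- matrix(c(80, 5,
                     7, 8), nrow = 2, byrow = TRUE)   # hypothetical rater-by-rater counts
    p     <- tab / sum(tab)                  # cell proportions
    p_obs <- sum(diag(p))                    # observed agreement
    p_exp <- sum(rowSums(p) * colSums(p))    # chance agreement from the marginals
    k     <- nrow(tab)                       # number of categories
    kappa <- (p_obs - p_exp) / (1 - p_exp)
    pabak <- (k * p_obs - 1) / (k - 1)
    c(kappa = kappa, PABAK = pabak)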
53,042
Learning probability and statistics together
I have the 8th edition of Modern Elementary Statistics, which I see has a companion book of answers to the odd-numbered questions. I also have an older version of Ott's Introduction to Statistical Methods and Data Analysis, which I find incredibly useful for pointing out to colleagues how I would love to see GLM models specified in journal articles. When I was studying, I found Harraway's book incredibly useful, but sadly it never went to another edition. He's a really good writer with good social science examples.
53,043
Learning probability and statistics together
The first book "All of Statistics" is nice book to read. I dont know about second book. However, I would recommend the following book. It is really nice book to get an idea whats going on in statistics and probability. A Modern Introduction to Probability and Statistics: Understanding Why and How (Springer Texts in Statistics) by: F.M. Dekking, C. Kraaikamp, H.P. Lopuhaä, L.E. Meester
53,044
Learning probability and statistics together
I still like DeGroot / Schervish. Please take a look: http://www.amazon.com/Probability-Statistics-Edition-Morris-DeGroot/dp/0321500466 It is highly self-contained, starting with basic set theory and covering all the standard material on probability (without measure theory). After that, the inference part shines, with excellent explanations of sufficiency, estimation, the Neyman-Pearson lemma, and hypothesis testing. He goes through nonparametric methods and ANOVA at the end of the book. DeGroot's writing is not fast-paced and is extremely clear. Since he was a Bayesian statistician (as is Schervish), the discussion of Bayesian topics is more detailed than we usually see in other books at the same level. The third edition covers simulation at the end, including MCMC tools.
53,045
Learning probability and statistics together
I worked through the entire book about a year ago (2019) and put most of my solutions to Chapters 8–16 on github. Please comment and correct me if I’m wrong, but I believe the free pdf released is a preprint, and the printed book is the official version. I used the free pdf (and included a copy of it in my git repo) but there will be some differences (I think some chapters are in a different order, some problems are different, etc), so keep that in mind if you decide to follow the physical text. In my git repo I include a copy of the free pdf, and have some links pointing to old class versions that Larry taught at CMU. The homeworks, and included solutions, follow the text pretty closely, so they are helpful to look at when working through the text.
53,046
PCA to decorrelate variables
With just two variables $X$ and $Y$, there are two sample variances $s^2$ and $t^2$, respectively, and the sample correlation coefficient, $r$. If you standardize the variables in the usual way to have unit variances, so that $\xi = X/s$ and $\eta = Y/t$, then the two principal components are $$PC_1 = \xi+\eta = X/s + Y/t, \quad PC_2 = \xi-\eta = X/s - Y/t.$$ As a check, note that Covar($PC_1, PC_2$) = Var($X/s$) - Var($Y/t$) = $1-1=0$, proving the components are orthogonal (uncorrelated). Visually: when you plot a scatterplot of $X$ and $Y$ in which the coordinate axes are expressed in standard units and have an aspect ratio of 1:1, then the axes of the point cloud fall along diagonal lines parallel to $X=Y$ and $X=-Y$. In this example the variances are $s^2 = 0.98$, $t^2 = 7.90$ and the correlation is $r=-0.67$. Because $X$ and $Y$ are plotted on standardized scales with unit aspect ratio, the major axis of the cloud is diagonal (downward, due to negative correlation). This is the first principal component, $X/s-Y/t$. The minor axis of the cloud is also diagonal (upward) and forms the second principal component, $X/s+Y/t$.
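A small numerical sketch of this in R (the simulated data below are assumed purely for illustration). Note that which of $\xi+\eta$ and $\xi-\eta$ comes out as the first, larger-variance component depends on the sign of the correlation; with $r<0$, as in the example above, it is $\xi-\eta$:

    set.seed(1)
    X <- rnorm(200)
    Y <- -0.7 * X + rnorm(200, sd = 0.5)   # negatively correlated pair (made up)
    xi  <- X / sd(X)                       # standardize to unit variance
    eta <- Y / sd(Y)
    cov(xi + eta, xi - eta)                # zero up to rounding: the two are uncorrelated
    var(xi + eta); var(xi - eta)           # here r < 0, so xi - eta has the larger variance
    prcomp(cbind(xi, eta))$rotation        # same directions, up to sign and scaling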
53,047
PCA to decorrelate variables
It varies with your software, but you should have something like a component score matrix. Multiply that with your original variables to get the new set of transformed data.
53,048
PCA to decorrelate variables
You have a set of loadings for each component ($P_1, P_2, \dots, P_i$): $$P_1=l_{11}x_1+l_{12}x_2+\dots+l_{1j}x_j$$ $$P_2=l_{21}x_1+l_{22}x_2+\dots+l_{2j}x_j$$ $$\vdots$$ $$P_i=l_{i1}x_1+l_{i2}x_2+\dots+l_{ij}x_j$$ where the $x$'s are the original variables and $P_i$ is the $i$th rotated component. The important part is the loadings ($l_{i1},l_{i2},\dots,l_{ij}$, one set per component). If you multiply them with the original data, you get the rotated principal components. For more information check this URL.
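A minimal R sketch of that multiplication, using prcomp on made-up data; the rotation matrix it returns plays the role of the $l_{ij}$:

    set.seed(42)
    X <- matrix(rnorm(100 * 3), ncol = 3)              # assumed example data
    fit <- prcomp(X, center = TRUE, scale. = FALSE)
    L   <- fit$rotation                                # columns are the loading vectors
    Xc  <- sweep(X, 2, colMeans(X))                    # center the original variables
    scores_by_hand <- Xc %*% L                         # multiply data by the loadings
    all.equal(unname(scores_by_hand), unname(fit$x))   # TRUE (up to numerical precision)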
53,049
How to calculate an overall job satisfaction score and test group differences?
You have two questions: How to form an overall measure of job satisfaction? How to examine group differences on the score that you create?

Forming the overall measure of job satisfaction

If you are using an established measure of job satisfaction, then the test manual should tell you how you should calculate the overall job satisfaction score. If the measure of job satisfaction is novel, there are multiple ways of forming an overall job satisfaction score where the individual items ask participants about facets of job satisfaction. In my experience, when you perform a factor analysis on a job satisfaction measure, the first unrotated factor explains a massive proportion of variance relative to any subsequent factors. As such, whether you run a factor analysis and save the first factor or whether you just take the mean of the items all measuring facet satisfaction, you are likely to be left with a very similar measure of overall job satisfaction (I'd expect correlations between the two forms to be in the r > .95 range). Of course you could and should test this idea in your data. More importantly, there are general issues of validity. If you don't care too much about precision in measurement, then I would think that the first-factor saved score or a mean of job satisfaction items would be a reasonable approximation to a measure of overall job satisfaction. However, if you care about precision, you would want to engage with debates in the literature about whether overall job satisfaction should be asked directly rather than extracted from facet-level measures. I discuss this a little more here.

Job satisfaction by group

Once you have your overall measure of job satisfaction, the task of comparing groups might look like this (a rough sketch in R follows below): for type of organisation, job status, and gender, independent-groups t tests would work; for age group, you could do an ANOVA with polynomial contrasts. In particular, if there is an effect of age, it often has both linear and quadratic components. It would be better if you had a more granular measure of age.

Update

I received the following comment on my blog, where you wrote: "However, I am still confused of how my supervisor told me to use factor analysis but you seem to say that using the t test is enough. Can you please advise me further?" I am saying that you have two questions. The factor analysis pertains only to the first question of how to construct the overall measure of job satisfaction. After you have created that overall measure, whether it is informed by factor analysis or not, the tests of group differences are straightforward.
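The rough sketch of those group comparisons in R, assuming a data frame d with facet items sat1 through sat10 and the grouping variables named below (all of these names are hypothetical, not taken from the question):

    d$js_overall <- rowMeans(d[, paste0("sat", 1:10)], na.rm = TRUE)   # mean of facet items
    t.test(js_overall ~ gender, data = d)                # independent-groups comparisons
    t.test(js_overall ~ job_status, data = d)
    d$age_group <- factor(d$age_group, ordered = TRUE)   # ordered factor gets polynomial contrasts
    fit <- aov(js_overall ~ age_group, data = d)
    summary(fit)
    summary.lm(fit)                                      # .L and .Q rows give linear and quadratic trends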
53,050
How to calculate an overall job satisfaction score and test group differences?
Your ultimate job satisfaction construct sounds suspect to me. When you summed up the different facets of job satisfaction to create your ultimate score, you implicitly assumed that each facet has the same weight - they really don't, as you'll see in the literature. You'd have a much easier time with an explicit measure of overall satisfaction, which would let you model how much each facet contributes to overall satisfaction (plenty of examples of this in the literature). Without that, I think the best you could do would be to model individual facets of job satisfaction separately. I think that's where your supervisor's factor analysis suggestion comes in: if you've measured several facets of job satisfaction, some of them probably hang together. Factor analysis might let you create scales from those variables (e.g., "material rewards", "working environment"). I'm not the right person to give you advice on that, but I think that's probably your supervisor's general idea. You sound like a university student, so find out if there's statistical support available for students and milk it for all it's worth. This stuff is not easy. That doesn't stop social scientists from doing it anyway, but just because your software will compute results and give you significant p-values for your model doesn't mean that model is worth a damn.
53,051
How to calculate an overall job satisfaction score and test group differences?
I am just learning statistical analysis and I am unsure if my response will be helpful, but I just want you to stop for a while and think about what you are trying to achieve through your analysis. (Often, in a state of panic, one ends up doing all sorts of analyses.) One of the most valuable pieces of advice I have received is that sometimes you don't need complex statistical analysis to prove a point. It is fine if you are statistically trained, but at the end of the day you must be able to understand and, more importantly, explain your methods and findings to others. I would take a simple approach as follows: (1) give a weight to the responses based on their relative importance/unimportance, e.g. 1 point for satisfied, 0 for ambivalent and -1 for dissatisfied (a Likert-scale response is better); (2) ensure that your rating in (1) is equivalent across items, i.e. use all three-point or all five-point ratings, which keeps the final scores on a consistent overall scale; (3) add the scores together. You now have an overall score for job satisfaction. You can run a correlation analysis and then compare the coefficients to see if there is a difference.
53,052
How to calculate an overall job satisfaction score and test group differences?
The Job Satisfaction Survey, or JSS, has some of its items written in each direction--positive and negative. Scores on each of nine facet subscales, based on 4 items each, can range from 4 to 24, while scores for total job satisfaction, based on the sum of all 36 items, can range from 36 to 216. Each item is scored from 1 to 6 if the original response choices are used. High scores on the scale represent job satisfaction, so the scores on the negatively worded items must be reversed before summing with the positively worded items into facet or total scores. A score of 6 representing strongest agreement with a negatively worded item is considered equivalent to a score of 1 representing strongest disagreement on a positively worded item, allowing them to be combined meaningfully. Below is the step-by-step procedure for scoring (a minimal R sketch follows after the steps).

1. Responses to the items should be numbered from 1, representing strongest disagreement, to 6, representing strongest agreement with each. This assumes that the scale has not been modified and the original agree-disagree response choices are used.

2. The negatively worded items should be reverse scored. Below are the reversals, with the original item score on the left and the reversed item score on the right; the right-hand value should be substituted for the left-hand one. This can also be accomplished by subtracting the original values of those items from 7.

   1 = 6
   2 = 5
   3 = 4
   4 = 3
   5 = 2
   6 = 1

3. Negatively worded items are 2, 4, 6, 8, 10, 12, 14, 16, 18, 19, 21, 23, 24, 26, 29, 31, 32, 34, 36. Note the reversals are NOT every other one.

4. Sum responses to the 4 items for each facet score, and to all 36 items for the total score, after the reversals from step 2. Items go into the subscales as shown in the table.

   Subscale               Item numbers
   Pay                    1, 10, 19, 28
   Promotion              2, 11, 20, 33
   Supervision            3, 12, 21, 30
   Fringe Benefits        4, 13, 22, 29
   Contingent rewards     5, 14, 23, 32
   Operating conditions   6, 15, 24, 31
   Coworkers              7, 16, 25, 34
   Nature of work         8, 17, 27, 35
   Communication          9, 18, 26, 36
   Total satisfaction     1-36

5. If some items are missing, you must make an adjustment; otherwise the score will be too low. The best procedure is to compute the mean score per item for the individual and substitute that mean for the missing items. For example, if a person does not respond to 1 item, take the total from step 4, divide by the number of items answered (3 for a facet or 35 for the total), and substitute this number for the missing item by adding it to the total from step 4. An easier but less accurate procedure is to substitute a middle response for each of the missing items. Since the center of the scale is between 3 and 4, either number could be used; one should alternate the two numbers as missing items occur.
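A minimal sketch of steps 2-4 in R, assuming a hypothetical data frame jss with item columns named q1 through q36 scored 1 to 6 (only three of the nine subscales are spelled out; the rest follow the same pattern):

    neg_items <- c(2, 4, 6, 8, 10, 12, 14, 16, 18, 19, 21, 23, 24, 26, 29, 31, 32, 34, 36)
    jss_rev <- jss
    jss_rev[paste0("q", neg_items)] <- 7 - jss_rev[paste0("q", neg_items)]   # step 2: reverse score
    subscales <- list(
      pay         = c(1, 10, 19, 28),
      promotion   = c(2, 11, 20, 33),
      supervision = c(3, 12, 21, 30))   # ...remaining facets follow the same pattern
    facet_scores <- sapply(subscales, function(items) rowSums(jss_rev[paste0("q", items)]))
    total_score  <- rowSums(jss_rev[paste0("q", 1:36)])   # step 4: total satisfaction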
53,053
Grouping data in ranges in R by summing them
One option is to create a new variable for your bins with cut, or cut2 in package Hmisc:

    dat <- data.frame(year = 1901:2000, value = runif(100))
    dat <- transform(dat, bin = cut(year, 10))

I would then probably use plyr to do the group-by summary:

    library(plyr)
    ddply(dat, "bin", summarize, totVal = sum(value))

The help page for cut should be illustrative in defining labels, what to do with edge cases (include/exclude min or max values), etc.
53,054
Grouping data in ranges in R by summing them
Interesting, Chase. I hadn't seen transform and would likely have done it this (second) way:

    set.seed(1234)
    dat <- data.frame(year = 1901:2000, value = runif(100))
    dat <- transform(dat, bin = cut(year, 10))

    set.seed(1234)
    dat2 <- data.frame(year = 1901:2000, value = runif(100))
    dat2$bin <- cut(dat$year, 10)

    identical(dat, dat2)  # TRUE

Following on from that, I would look to:

    dat2$bin <- cut(dat$year, 10, labels = FALSE)  # this gives you 1:10 as labels rather than the very messy 'intervals'
    aggregate(value ~ bin, data = dat2, sum)

    > aggregate(value ~ bin, data = dat2, sum)
       bin    value
    1    1 4.892264
    2    2 4.546337
    3    3 4.165217
    4    4 4.733585
    5    5 5.136625
    6    6 4.530420
    7    7 3.616002
    8    8 3.864675
    9    9 4.936536
    10  10 3.328065
53,055
Logistic regression and Wilcoxon test
Did you just fit one big glm model and then look at the individual p-values? Remember that those p-values are measuring the effect of that variable above and beyond all other variables in the model. It is possible that more of your covariates are really contributing, but there is redundant information, so they don't show significance. It could be the combination of A along with another covariate or two that shows the real difference, while A by itself is not meaningful. Also look at the effect sizes and standard errors for all terms. There is a paradox associated with the Wald tests/estimates (the individual p-values in the standard summary) whereby a very important variable can appear non-significant because the standard error is overestimated. Likelihood ratio tests are much better because of this. You can assess both of the above by fitting a reduced model using only A, then using the anova function to compare the two models; if that comparison is significant, it indicates that there is something important in your model beyond the A variable.
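A sketch of that comparison in R; the outcome and covariate names below are assumed, not taken from the question:

    # Assumed: binary outcome y and covariates A, B, C in a data frame d
    full    <- glm(y ~ A + B + C, family = binomial, data = d)
    reduced <- glm(y ~ A,         family = binomial, data = d)
    anova(reduced, full, test = "Chisq")      # likelihood ratio test for the extra terms
    null_mod <- glm(y ~ 1, family = binomial, data = d)
    anova(null_mod, reduced, test = "Chisq")  # and A's own contribution versus an intercept-only model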
53,056
Logistic regression and Wilcoxon test
The tests make different assumptions, and so do not give exactly the same result. The bigger problem is the (incorrect) assumption that failure to reject the null "indicates that there is no difference". It does not. It just means that you don't have enough evidence to reject the null of no difference.
53,057
Logistic regression and Wilcoxon test
The more appropriate comparator is the proportional odds model, which contains the Wilcoxon test as a special case. And note that 'multivariate' refers to the simultaneous analysis of more than one dependent variable. I think you meant to say 'multivariable'.
53,058
Logistic regression and Wilcoxon test
If I understand things correctly, you're testing whether the mean value of predictor $A$ associated with outcome $1$ differs from the mean value of predictor $A$ associated with outcome $2$. Even if they don't differ, this result says nothing about your research question. What it says is only that, in your sample, the average value of the predictor $A$ isn't different between successes and failures of the dependent variable. I'll give an example so you can see why. Imagine that I'm interested in knowing whether education affects the probability of an employee being a manager in a given firm. I collect a random sample of employees, with $200$ employees that are not managers and $70$ who actually are managers. I run a logistic regression and the effect of education is significant (the usual caveats about p-values apply here as well). Now, you may wonder if the sample is balanced, that is, whether there is the same variation in education for managers and non-managers. One first check is to test whether their means differ. If their means don't differ, then you know that, on average in your sample, managers and non-managers have the same level of education. You could also check whether there is roughly the same variation in educational level among managers and non-managers. But I think it's pretty clear that the Wilcoxon test (or the Mann-Whitney) has nothing to say about the effect of your predictor on the probability of success (in my example, success means being a manager).
53,059
Logistic regression and Wilcoxon test
The p-value that you got for the parametric test (from glm) depends on the effect size, i.e., on how much the predicted values can change depending on the value of A. Now if a small proportion of predicted values (a proportion that may depend on other predictors) changes by a large amount due to the variable A, you have a large effect size, a significant association, and a significant p-value. The rank-sum test, however, only cares about the effect size insofar as it changes the ranking. If you were to push all of the overall highest 10 values (regardless of their class) up by a large amount on the absolute scale, so that the rankings remain the same, the rank-sum statistic would not change while parametric tests would. If you have reason to believe the effect size and the distribution you assumed (probably normal), then use the p-value from the parametric test; otherwise use the rank-sum test.
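A quick simulated illustration of that rank invariance in R (the data below are made up):

    set.seed(7)
    A     <- c(rnorm(30, mean = 0), rnorm(30, mean = 0.8))   # made-up predictor values
    group <- factor(rep(c("fail", "success"), each = 30))
    A2 <- A
    top10 <- order(A, decreasing = TRUE)[1:10]
    A2[top10] <- A2[top10] + 100          # push the overall top 10 values up; rankings are unchanged
    wilcox.test(A  ~ group)$statistic     # identical W before and after
    wilcox.test(A2 ~ group)$statistic
    t.test(A  ~ group)$p.value            # the parametric comparison changes
    t.test(A2 ~ group)$p.value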
53,060
Interpreting Gaussian probabilities greater than 1 [duplicate]
The code in the question returns the values of the probability density function. The values of a probability density function can be greater than one. The actual probability $P(X<x)$ for a random variable $X$ with probability density function $p(x)$ is the integral $\int_{-\infty}^xp(t)\,dt$. The values of this integral are of course restricted to the interval $[0,1]$.
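A quick illustration in R: a normal density with a small standard deviation exceeds 1 at its mode, yet every probability computed from it stays in $[0,1]$ and the density still integrates to 1.

    dnorm(0, mean = 0, sd = 0.1)                              # ~3.99: a density value, not a probability
    pnorm(0.05, 0, 0.1) - pnorm(-0.05, 0, 0.1)                # P(-0.05 < X < 0.05), well inside [0, 1]
    integrate(dnorm, -Inf, Inf, mean = 0, sd = 0.1)$value     # total area under the density is 1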
53,061
Excel xy chart with unequal x values in series
Use a Scatter Plot where the horizontal axis is time. Below is a screenshot of an Excel sheet where two temperatures are shown. Temp 1 starts at 9:00:00 AM and increments by 1 minute and 26 seconds. Temp 2 starts at 9:30:00 AM and increments by 2 minutes and 53 seconds. They are both plotted in the graph. As shown in the plot, the blue Temp 1 has been "selected" in the graph and the corresponding graph formula is:

    =SERIES(Sheet1!$B$1,Sheet1!$A$2:$A$27,Sheet1!$B$2:$B$27,1)

When I select Temp 2, the corresponding graph formula is:

    =SERIES(Sheet1!$E$1,Sheet1!$D$2:$D$27,Sheet1!$E$2:$E$27,2)

Both Temp 1 and Temp 2 use the same x-axis and y-axis. There is a second way to do this where the blue series has its own x-axis and y-axis, and the red series has a different x-axis and y-axis. However, I wouldn't recommend this scheme because it is typically too confusing for others to interpret the results. It requires that you assign one (or more) of the lines to a secondary y-axis and then a secondary x-axis. Here are the steps:

1) Right-click on the series that you want to assign to the secondary axes. Select "Format Data Series". Under "Series Options" select "Secondary Axis". That will assign the selected series to a secondary y-axis.

2) With the series still selected, go to the "Chart Tools" > "Layout" > "Axes" selection on the ribbon. Click "Secondary Horizontal Axis" and pick whatever options you want. That will assign the selected series to a secondary x-axis.

Below is an example. I purposely changed the color of the text of the lower x-axis and the left y-axis to blue to signify that these are the axes to use for the blue series. I changed the color of the text of the upper x-axis and the right y-axis to red to signify their use for the red series. Notice that the two y-axes have different scales: the blue y-axis runs from 0 to 90, while the red y-axis runs from 0 to 100. Notice also that I set both x-axes to start, stop, and increment at the same values. If you set the increments of the two x-axes to two different values, this graph can be even more confusing.
53,062
Excel xy chart with unequal x values in series
Simple answer: Select one set of X and Y values, and create your XY chart. Copy the second set of X and Y values, select the chart, and use paste special to add the data as a new series.
Excel xy chart with unequal x values in series
Simple answer: Select one set of X and Y values, and create your XY chart. Copy the second set of X and Y values, select the chart, and use paste special to add the data as a new series.
Excel xy chart with unequal x values in series Simple answer: Select one set of X and Y values, and create your XY chart. Copy the second set of X and Y values, select the chart, and use paste special to add the data as a new series.
Excel xy chart with unequal x values in series Simple answer: Select one set of X and Y values, and create your XY chart. Copy the second set of X and Y values, select the chart, and use paste special to add the data as a new series.
53,063
Determinant perturbation approximation
I'll assume that you already know the eigenvalues of $B$. Since $B$ is symmetric positive semidefinite, it can be decomposed as $$ B = U D U^T $$ where $U$ is an orthogonal matrix and $D$ is the diagonal matrix of nonnegative eigenvalues (some of which may be exactly zero). Now $$ B+xI = U D U^T + x U U^T = U (D + x I) U^T $$ and since the determinant of a matrix is the product of its eigenvalues and the determinant is multiplicative over matrix products, then $$ |B+xI| = |D+xI| = \prod_n (d_n + x) $$ where $d_n$ is the $n$th diagonal entry of $D$.
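As a quick numerical sanity check of this identity, here is a small R sketch; the matrix, its size, and the shift x are arbitrary illustrative choices, not taken from the original question.
# Sketch: check |B + xI| = prod(d_n + x) for a random symmetric PSD matrix B.
set.seed(1)
A <- matrix(rnorm(16), 4, 4)
B <- crossprod(A)                 # B = A'A is symmetric positive semidefinite
x <- 0.7
d <- eigen(B, symmetric = TRUE, only.values = TRUE)$values
det(B + x * diag(4))              # direct determinant of the perturbed matrix
prod(d + x)                       # product of shifted eigenvalues; should agree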
Determinant perturbation approximation
I'll assume that you already know the eigenvalues of $B$. Since $B$ is symmetric positive semidefinite, it can be decomposed as $$ B = U D U^T $$ where $U$ is an orthogonal matrix and $D$ is the dia
Determinant perturbation approximation I'll assume that you already know the eigenvalues of $B$. Since $B$ is symmetric positive semidefinite, it can be decomposed as $$ B = U D U^T $$ where $U$ is an orthogonal matrix and $D$ is the diagonal matrix of nonnegative eigenvalues (some of which may be exactly zero). Now $$ B+xI = U D U^T + x U U^T = U (D + x I) U^T $$ and since the determinant of a matrix is the product of its eigenvalues and the determinant is multiplicative over matrix products, then $$ |B+xI| = |D+xI| = \prod_n (d_n + x) $$ where $d_n$ is the $n$th diagonal entry of $D$.
Determinant perturbation approximation I'll assume that you already know the eigenvalues of $B$. Since $B$ is symmetric positive semidefinite, it can be decomposed as $$ B = U D U^T $$ where $U$ is an orthogonal matrix and $D$ is the dia
53,064
Determinant perturbation approximation
I second @cardinal's answer, but provide a simple trick: If $p(z)$ is a polynomial (with integer powers), and $\mathbf{v}, \lambda$ are eigenvector and corresponding eigenvalue of matrix $M$, then $\mathbf{v}, p(\lambda)$ are eigenvector and corresponding eigenvalue of $p(M)$. The proof is a simple exercise. The polynomial $p$ may contain negative powers of $z$ and a constant term, which in the case of $p(M)$ corresponds to adding the constant times the identity matrix. Since the determinant is the product of the eigenvalues, the determinant of $A = p(B)$, where $p(z) = z^1 + x$ is the product $\prod_i \left(\lambda_i + x\right)$, where $\lambda_i$ are the eigenvalues of $B$. You can also use this trick to find the trace of $p(B)$, of course, but it is overkill! This polynomial trick is a classic in numerical analysis, used, for example, to prove convergence of the Gauss-Seidel method. See Cheney & Kincaid, or my answer to another question involving this trick.
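A minimal R sketch of the polynomial trick, using an arbitrary symmetric matrix and the illustrative polynomial p(z) = z^2 + 3z + 2 (neither comes from the original post):
# Sketch: eigenvalues of p(B) are p(lambda_i) for the eigenvalues lambda_i of B.
set.seed(2)
B <- crossprod(matrix(rnorm(9), 3, 3))      # symmetric PSD example matrix
lam <- eigen(B, symmetric = TRUE)$values
pB <- B %*% B + 3 * B + 2 * diag(3)         # p(B) with p(z) = z^2 + 3z + 2
sort(eigen(pB, symmetric = TRUE)$values)    # eigenvalues of p(B) ...
sort(lam^2 + 3 * lam + 2)                   # ... match p applied to eigenvalues of B
det(pB); prod(lam^2 + 3 * lam + 2)          # hence det(p(B)) = prod p(lambda_i)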
Determinant perturbation approximation
I second @cardinal's answer, but provide a simple trick: If $p(z)$ is a polynomial (with integer powers), and $\mathbf{v}, \lambda$ are eigenvector and corresponding eigenvalue of matrix $M$, then $\m
Determinant perturbation approximation I second @cardinal's answer, but provide a simple trick: If $p(z)$ is a polynomial (with integer powers), and $\mathbf{v}, \lambda$ are eigenvector and corresponding eigenvalue of matrix $M$, then $\mathbf{v}, p(\lambda)$ are eigenvector and corresponding eigenvalue of $p(M)$. The proof is a simple exercise. The polynomial $p$ may contain negative powers of $z$ and a constant term, which in the case of $p(M)$ corresponds to adding the constant times the identity matrix. Since the determinant is the product of the eigenvalues, the determinant of $A = p(B)$, where $p(z) = z^1 + x$ is the product $\prod_i \left(\lambda_i + x\right)$, where $\lambda_i$ are the eigenvalues of $B$. You can also use this trick to find the trace of $p(B)$, of course, but it is overkill! This polynomial trick is a classic in numerical analysis, used, for example, to prove convergence of the Gauss-Seidel method. See Cheney & Kincaid, or my answer to another question involving this trick.
Determinant perturbation approximation I second @cardinal's answer, but provide a simple trick: If $p(z)$ is a polynomial (with integer powers), and $\mathbf{v}, \lambda$ are eigenvector and corresponding eigenvalue of matrix $M$, then $\m
53,065
Derivative of a linear model
@onestop points in the right direction. Belsley, Kuh, and Welsch describe this approach on pp. 24-26 of their book. To differentiate with respect to an observation (and not just one of its attributes), they introduce a weight, perform weighted least squares, and differentiate with respect to the weight. Specifically, let $\mathbb{X} = X_{ij}$ be the design matrix, let $\mathbf{x}_i$ be the $i$th observation, let $e_i$ be its residual, let $w_i$ be the weight, and define $h_i$ (the $i$th diagonal entry in the hat matrix) to be $\mathbf{x}_i (\mathbb{X}^T \mathbb{X})^{-1} \mathbf{x}_i^T$. They compute $$\frac{\partial b(w_i)}{\partial w_i} = \frac{(\mathbb{X}^T\mathbb{X})^{-1} \mathbf{x}_i^T e_i}{\left[1 - (1 - w_i)h_i\right]^2},$$ whence $$\frac{\partial b(w_i)}{\partial w_i}\Bigg|_{w_i=1} = (\mathbb{X}^T\mathbb{X})^{-1} \mathbf{x}_i^T e_i.$$ This is interpreted as a way to "identify influential observations, ... provid[ing] a means for examining the sensitivity of the regression coefficients to a slight change in the weight given to the ith observation. Large values of this derivative indicate observations that have large influence on the calculated coefficients." They suggest it can be used as an alternative to the DFBETA diagnostic. (DFBETA measures the change in $b$ when observation $i$ is completely deleted.) The relationship between the influence and DFBETA is that DFBETA equals the influence divided by $1 - h_i$ [equation 2.1 p. 13].
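The derivative at w_i = 1 and its stated relation to DFBETA are easy to check numerically. The sketch below uses the built-in mtcars data and a model chosen purely for illustration; the comparison with dfbeta() holds up to that function's sign convention.
# Sketch: compute (X'X)^{-1} x_i' e_i for every observation and compare with
# DFBETA * (1 - h_i), which should match the relation quoted above.
fit <- lm(mpg ~ wt + hp, data = mtcars)     # illustrative model
X <- model.matrix(fit)
e <- residuals(fit)
h <- hatvalues(fit)
infl <- (X %*% solve(crossprod(X))) * e     # row i is (X'X)^{-1} x_i' e_i
head(infl)
head(dfbeta(fit) * (1 - h))                 # should agree, up to dfbeta()'s sign convention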
Derivative of a linear model
@onestop points in the right direction. Belsley, Kuh, and Welsch describe this approach on pp. 24-26 of their book. To differentiate with respect to an observation (and not just one of its attribute
Derivative of a linear model @onestop points in the right direction. Belsley, Kuh, and Welsch describe this approach on pp. 24-26 of their book. To differentiate with respect to an observation (and not just one of its attributes), they introduce a weight, perform weighted least squares, and differentiate with respect to the weight. Specifically, let $\mathbb{X} = X_{ij}$ be the design matrix, let $\mathbf{x}_i$ be the $i$th observation, let $e_i$ be its residual, let $w_i$ be the weight, and define $h_i$ (the $i$th diagonal entry in the hat matrix) to be $\mathbf{x}_i (\mathbb{X}^T \mathbb{X})^{-1} \mathbf{x}_i^T$. They compute $$\frac{\partial b(w_i)}{\partial w_i} = \frac{(\mathbb{X}^T\mathbb{X})^{-1} \mathbf{x}_i^T e_i}{\left[1 - (1 - w_i)h_i\right]^2},$$ whence $$\frac{\partial b(w_i)}{\partial w_i}\Bigg|_{w_i=1} = (\mathbb{X}^T\mathbb{X})^{-1} \mathbf{x}_i^T e_i.$$ This is interpreted as a way to "identify influential observations, ... provid[ing] a means for examining the sensitivity of the regression coefficients to a slight change in the weight given to the ith observation. Large values of this derivative indicate observations that have large influence on the calculated coefficients." They suggest it can be used as an alternative to the DFBETA diagnostic. (DFBETA measures the change in $b$ when observation $i$ is completely deleted.) The relationship between the influence and DFBETA is that DFBETA equals the influence divided by $1 - h_i$ [equation 2.1 p. 13].
Derivative of a linear model @onestop points in the right direction. Belsley, Kuh, and Welsch describe this approach on pp. 24-26 of their book. To differentiate with respect to an observation (and not just one of its attribute
53,066
Derivative of a linear model
I guess this would come under the heading of regression diagnostics. I haven't seen this precise statistic before, but something that comes fairly close is DFBETAij, which is the change in regression coefficient i when the jth observation is omitted, divided by the estimated standard error of coefficient i. The book that defined this and many other regression diagnostics (perhaps too many) is: Belsley, D. A., E. Kuh, and R. E. Welsch. (1980). Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: Wiley. ISBN 0471691178
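For reference, base R ships the scaled version (DFBETAS) described here. Below is a minimal sketch on the built-in mtcars data; the model and the 2/sqrt(n) cut-off are common illustrative conventions, not taken from the book.
# Sketch: DFBETAS = change in each coefficient, in standard-error units, when the
# corresponding observation is omitted.
fit <- lm(mpg ~ wt + hp, data = mtcars)
head(dfbetas(fit))
# flag observations exceeding a common size-adjusted threshold
which(abs(dfbetas(fit)) > 2 / sqrt(nrow(mtcars)), arr.ind = TRUE)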
Derivative of a linear model
I guess this would come under the heading of regression diagnostics. I haven't seen this precise statistic before, but something that comes fairly close is DFBETAij, which is the the change in regress
Derivative of a linear model I guess this would come under the heading of regression diagnostics. I haven't seen this precise statistic before, but something that comes fairly close is DFBETAij, which is the change in regression coefficient i when the jth observation is omitted, divided by the estimated standard error of coefficient i. The book that defined this and many other regression diagnostics (perhaps too many) is: Belsley, D. A., E. Kuh, and R. E. Welsch. (1980). Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: Wiley. ISBN 0471691178
Derivative of a linear model I guess this would come under the heading of regression diagnostics. I haven't seen this precise statistic before, but something that comes fairly close is DFBETAij, which is the the change in regress
53,067
Java implementations of the lasso [closed]
About clean implementation in Python, there is the scikit.learn toolkit. The L1/L2 regularization scheme (incl. elasticnet) works great with GLM (LARS and coordinate descent algorithms available). Don't know about Java implementation.
Java implementations of the lasso [closed]
About clean implementation in Python, there is the scikit.learn toolkit. The L1/L2 regularization scheme (incl. elasticnet) works great with GLM (LARS and coordinate descent algorithms available). Don
Java implementations of the lasso [closed] About clean implementation in Python, there is the scikit.learn toolkit. The L1/L2 regularization scheme (incl. elasticnet) works great with GLM (LARS and coordinate descent algorithms available). Don't know about Java implementation.
Java implementations of the lasso [closed] About clean implementation in Python, there is the scikit.learn toolkit. The L1/L2 regularization scheme (incl. elasticnet) works great with GLM (LARS and coordinate descent algorithms available). Don
53,068
Java implementations of the lasso [closed]
lasso4j is an open source Java implementation of Lasso for linear regression.
Java implementations of the lasso [closed]
lasso4j is an open source Java implementation of Lasso for linear regression.
Java implementations of the lasso [closed] lasso4j is an open source Java implementation of Lasso for linear regression.
Java implementations of the lasso [closed] lasso4j is an open source Java implementation of Lasso for linear regression.
53,069
Java implementations of the lasso [closed]
I've just come across mlpy, which also has an implementation of the lasso (in Python.)
Java implementations of the lasso [closed]
I've just come across mlpy, which also has an implementation of the lasso (in Python.)
Java implementations of the lasso [closed] I've just come across mlpy, which also has an implementation of the lasso (in Python.)
Java implementations of the lasso [closed] I've just come across mlpy, which also has an implementation of the lasso (in Python.)
53,070
How to deduce $ \mathbb{E}(\sqrt{X}) < \infty \implies\int_{\mathbb{R}^+} (1 - F(x))^2 dx < \infty,~X$ being a non-negative integrable rv?
Interesting problem. The proof goes as follows: let $X, X_1, X_2$ i.i.d. $\sim F$, then \begin{align} \int_0^\infty (1 - F(x))^2dx = E[\min(X_1, X_2)] \leq E[\sqrt{X_1}\sqrt{X_2}] = (E[\sqrt{X}])^2 < \infty. \end{align} By the way, the proof to the first implication is a straightforward application of Cauchy-Schwarz inequality: \begin{align} E[\sqrt{X}] = E[1 \cdot \sqrt{X}] \leq \sqrt{E[1]E[X]} = \sqrt{E[X]} < \infty. \end{align}
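For completeness, here is the small step behind the first equality above, which the answer does not spell out: for i.i.d. nonnegative $X_1, X_2$ with CDF $F$, $$P(\min(X_1,X_2) > x) = P(X_1 > x)\,P(X_2 > x) = (1-F(x))^2,$$ and for any nonnegative random variable $Z$ one has $E[Z] = \int_0^\infty P(Z > x)\,dx$, so $$E[\min(X_1,X_2)] = \int_0^\infty P(\min(X_1,X_2) > x)\,dx = \int_0^\infty (1-F(x))^2\,dx.$$ The subsequent inequality $\min(X_1,X_2) \leq \sqrt{X_1 X_2}$ holds because both variables are nonnegative.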
How to deduce $ \mathbb{E}(\sqrt{X}) < \infty \implies\int_{\mathbb{R}^+} (1 - F(x))^2 dx < \infty,~
Interesting problem. The proof goes as follows: let $X, X_1, X_2$ i.i.d. $\sim F$, then \begin{align} \int_0^\infty (1 - F(x))^2dx = E[\min(X_1, X_2)] \leq E[\sqrt{X_1}\sqrt{X_2}] = (E[\sqrt{X}])^2
How to deduce $ \mathbb{E}(\sqrt{X}) < \infty \implies\int_{\mathbb{R}^+} (1 - F(x))^2 dx < \infty,~X$ being a non-negative integrable rv? Interesting problem. The proof goes as follows: let $X, X_1, X_2$ i.i.d. $\sim F$, then \begin{align} \int_0^\infty (1 - F(x))^2dx = E[\min(X_1, X_2)] \leq E[\sqrt{X_1}\sqrt{X_2}] = (E[\sqrt{X}])^2 < \infty. \end{align} By the way, the proof to the first implication is a straightforward application of Cauchy-Schwarz inequality: \begin{align} E[\sqrt{X}] = E[1 \cdot \sqrt{X}] \leq \sqrt{E[1]E[X]} = \sqrt{E[X]} < \infty. \end{align}
How to deduce $ \mathbb{E}(\sqrt{X}) < \infty \implies\int_{\mathbb{R}^+} (1 - F(x))^2 dx < \infty,~ Interesting problem. The proof goes as follows: let $X, X_1, X_2$ i.i.d. $\sim F$, then \begin{align} \int_0^\infty (1 - F(x))^2dx = E[\min(X_1, X_2)] \leq E[\sqrt{X_1}\sqrt{X_2}] = (E[\sqrt{X}])^2
53,071
Using the Chebyshev inequality to uncover saturating distribution
I don't think Chebyshev's inequality helps in proving this reversed problem (Chebyshev's inequality only tells you, with the given condition, that $P[|X - 1/2| > \epsilon] \leq \frac{1}{4\epsilon^2}$. When $\epsilon \leq 1/2$, this is weaker than the trivial statement that $P[|X - 1/2| > \epsilon] \leq 1$. When $\epsilon > 1/2$, this amounts to saying that the probability of $X > 1/2 + \epsilon$ or $X < 1/2 - \epsilon$ is less than $\frac{1}{4\epsilon^2} < 1$, but this is already implied by the other condition "$0 \leq X \leq 1$". Hence Chebyshev's inequality does not provide any additional insight for proving the goal.), but the claim may be proved as follows (the key to this proof is that if $E[Y] = 0$ and $Y$ is nonnegative, then $Y = 0$ with probability $1$): The conditions $E[X] = 1/2$ and $\operatorname{Var}(X) = 1/4$ imply that $E[X^2] = 1/4 + (1/2)^2 = 1/2$. Therefore, $E[X - X^2] = 0$. By the condition $0 \leq X \leq 1$, the random variable $X - X^2$ is nonnegative, hence $E[X - X^2] = 0$ implies that $X - X^2 = 0$ with probability $1$, i.e., $P[X = 0] + P[X = 1] = 1$. This shows that $X$ must be a binary discrete random variable, and if $P[X = 0] = p$, then $P[X = 1] = 1 - p$. Hence $E[X] = 1/2 = 1 - p$ gives $p = 1/2$. In other words, $X \sim \text{Bernoulli}(1, 1/2)$.
Using the Chebyshev inequality to uncover saturating distribution
I don't think Chebyshev's inequality helps in proving this reversed problem (Chebyshev's inequality only tells you, with the given condition, that $P[|X - 1/2| > \epsilon] \leq \frac{1}{4\epsilon^2}$.
Using the Chebyshev inequality to uncover saturating distribution I don't think Chebyshev's inequality helps in proving this reversed problem (Chebyshev's inequality only tells you, with the given condition, that $P[|X - 1/2| > \epsilon] \leq \frac{1}{4\epsilon^2}$. When $\epsilon \leq 1/2$, this is weaker than the trivial statement that $P[|X - 1/2| > \epsilon] \leq 1$. When $\epsilon > 1/2$, this amounts to saying that the probability of $X > 1/2 + \epsilon$ or $X < 1/2 - \epsilon$ is less than $\frac{1}{4\epsilon^2} < 1$, but this is already implied by the other condition "$0 \leq X \leq 1$". Hence Chebyshev's inequality does not provide any additional insight for proving the goal.), but the claim may be proved as follows (the key to this proof is that if $E[Y] = 0$ and $Y$ is nonnegative, then $Y = 0$ with probability $1$): The conditions $E[X] = 1/2$ and $\operatorname{Var}(X) = 1/4$ imply that $E[X^2] = 1/4 + (1/2)^2 = 1/2$. Therefore, $E[X - X^2] = 0$. By the condition $0 \leq X \leq 1$, the random variable $X - X^2$ is nonnegative, hence $E[X - X^2] = 0$ implies that $X - X^2 = 0$ with probability $1$, i.e., $P[X = 0] + P[X = 1] = 1$. This shows that $X$ must be a binary discrete random variable, and if $P[X = 0] = p$, then $P[X = 1] = 1 - p$. Hence $E[X] = 1/2 = 1 - p$ gives $p = 1/2$. In other words, $X \sim \text{Bernoulli}(1, 1/2)$.
Using the Chebyshev inequality to uncover saturating distribution I don't think Chebyshev's inequality helps in proving this reversed problem (Chebyshev's inequality only tells you, with the given condition, that $P[|X - 1/2| > \epsilon] \leq \frac{1}{4\epsilon^2}$.
53,072
Using the Chebyshev inequality to uncover saturating distribution
Yes, the logic of Chebyshev's inequality can be reversed. You could say that $$\mathrm{P}(|X - \mathrm{E}[X]| \leq \sqrt{\mathrm{Var}[X]}) = 1$$ if and only if $X$ is a Bernoulli variable with parameter $p = 0.5$, shifted and scaled to match the specified mean and variance. The condition "If $X$ is a random variable with $0 \leq X \leq 1$" is unnecessary for reversing the logic of the inequality: the mean and variance, together with the condition that Chebyshev's inequality holds with equality, are enough. Proof: consider the quantile function of $Y = \frac{X-\mu}{\sigma}$. It must be constrained between -1 and +1, have mean 0, and at the same time its square has to integrate to 1.
Using the Chebyshev inequality to uncover saturating distribution
Yes, the logic of the Chebyshev's inequality can be reversed. You could say that $$\mathrm{P}(|X - \mathrm{E}[X]| \leq \sqrt{\mathrm{Var}[X]}) = 1$$ if and only if $X$ is a Bernoulli variable with pa
Using the Chebyshev inequality to uncover saturating distribution Yes, the logic of Chebyshev's inequality can be reversed. You could say that $$\mathrm{P}(|X - \mathrm{E}[X]| \leq \sqrt{\mathrm{Var}[X]}) = 1$$ if and only if $X$ is a Bernoulli variable with parameter $p = 0.5$, shifted and scaled to match the specified mean and variance. The condition "If $X$ is a random variable with $0 \leq X \leq 1$" is unnecessary for reversing the logic of the inequality: the mean and variance, together with the condition that Chebyshev's inequality holds with equality, are enough. Proof: consider the quantile function of $Y = \frac{X-\mu}{\sigma}$. It must be constrained between -1 and +1, have mean 0, and at the same time its square has to integrate to 1.
Using the Chebyshev inequality to uncover saturating distribution Yes, the logic of the Chebyshev's inequality can be reversed. You could say that $$\mathrm{P}(|X - \mathrm{E}[X]| \leq \sqrt{\mathrm{Var}[X]}) = 1$$ if and only if $X$ is a Bernoulli variable with pa
53,073
Are the coefficients in `lme4` correct with nested effects?
coef() gives you the adjusted coefficients (fixed + random effects) for each random-effect term separately; the key thing to know here is that Cohort/Subject is counted as two separate random effect terms by the model. That is, (Days | Cohort/Subject) is internally expanded to (Days | Cohort) + (Days | Cohort:Subject), and dealt with thereafter (for ranef()/coef() purposes) as two separate terms, not as a combined term. So what you are getting when you ask for coef(sleep_model)[["Subject:Cohort"]]["308:Cohort1", "Days"] is indeed just the adjustment for the subject-by-cohort combination, not the combination of the cohort and subject effects. I realize that may be a nuisance. It would be possible to augment/expand the coef method so that it collapsed these terms together (although a bit of a pain, as the fitted model object doesn't keep track of which RE terms were expanded from nested components in the original formula ...) I think this should do what you want, provided the grouping variable names/levels within them are not pathologically set up so that grep() gets weird results. object <- sleep_model ## nested terms to evaluate nests <- c("Cohort", "Subject:Cohort") ref <- ranef(object) maxnest <- ref[nests][[length(nests)]] ## innermost nested RE ## start with FE, shape matching innermost RE vals <- lapply(fixef(object), rep, nrow(maxnest)) vals <- data.frame(vals, check.names = FALSE) rownames(vals) <- rownames(maxnest) ## for (n in nests) { for (nm in rownames(ref[[n]])) { ## add relevant terms to running total (yikes) matches <- grep(nm, rownames(vals)) vals[matches,] <- sweep(vals[matches,], MARGIN = 2, ## columnwise STATS = unlist(ref[[n]][nm,]), ## cond modes for this group FUN = "+") } }
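Here is a small, self-contained sketch of the same bookkeeping using the sleepstudy data; the Cohort grouping is invented for illustration (the original post presumably has its own), so a boundary/singular-fit message from lmer() is likely, and the nested term's name and level labels are picked up programmatically rather than hard-coded.
# Sketch: (Days | Cohort/Subject) is expanded into two random-effect terms, and the
# fully adjusted slope for one subject adds the fixed effect and both conditional modes.
library(lme4)
data(sleepstudy)
sleepstudy$Cohort <- factor(ifelse(as.integer(sleepstudy$Subject) %% 2 == 0,
                                   "Cohort1", "Cohort2"))   # made-up grouping
m  <- lmer(Reaction ~ Days + (Days | Cohort/Subject), data = sleepstudy)
re <- ranef(m)
names(re)                                   # the cohort term plus the nested term
nm  <- grep(":", names(re), value = TRUE)   # name of the subject-within-cohort term
lev <- rownames(re[[nm]])[1]                # one of its levels
coh <- intersect(strsplit(lev, ":")[[1]], levels(sleepstudy$Cohort))
fixef(m)["Days"] + re$Cohort[coh, "Days"] + re[[nm]][lev, "Days"]   # adjusted slope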
Are the coefficients in `lme4` correct with nested effects?
coef() gives you the adjusted coefficients (fixed + random effects) for each random-effect term separately; the key thing to know here is that Cohort/Subject is counted as two separate random effect t
Are the coefficients in `lme4` correct with nested effects? coef() gives you the adjusted coefficients (fixed + random effects) for each random-effect term separately; the key thing to know here is that Cohort/Subject is counted as two separate random effect terms by the model. That is, (Days | Cohort/Subject) is internally expanded to (Days | Cohort) + (Days | Cohort:Subject), and dealt with thereafter (for ranef()/coef() purposes) as two separate terms, not as a combined term. So what you are getting when you ask for coef(sleep_model)[["Subject:Cohort"]]["308:Cohort1", "Days"] is indeed just the adjustment for the subject-by-cohort combination, not the combination of the cohort and subject effects. I realize that may be a nuisance. It would be possible to augment/expand the coef method so that it collapsed these terms together (although a bit of a pain, as the fitted model object doesn't keep track of which RE terms were expanded from nested components in the original formula ...) I think this should do what you want, provided the grouping variable names/levels within them are not pathologically set up so that grep() gets weird results. object <- sleep_model ## nested terms to evaluate nests <- c("Cohort", "Subject:Cohort") ref <- ranef(object) maxnest <- ref[nests][[length(nests)]] ## innermost nested RE ## start with FE, shape matching innermost RE vals <- lapply(fixef(object), rep, nrow(maxnest)) vals <- data.frame(vals, check.names = FALSE) rownames(vals) <- rownames(maxnest) ## for (n in nests) { for (nm in rownames(ref[[n]])) { ## add relevant terms to running total (yikes) matches <- grep(nm, rownames(vals)) vals[matches,] <- sweep(vals[matches,], MARGIN = 2, ## columnwise STATS = unlist(ref[[n]][nm,]), ## cond modes for this group FUN = "+") } }
Are the coefficients in `lme4` correct with nested effects? coef() gives you the adjusted coefficients (fixed + random effects) for each random-effect term separately; the key thing to know here is that Cohort/Subject is counted as two separate random effect t
53,074
Interpretability for chi-squared test?
The chi-square test of a 2x2 contingency table such as this basically tests the following null hypothesis: gender should produce no difference in diabetes rates. Essentially, your chi-square test poses the following question: "Is the difference in diabetes rates by gender more than we would expect?". In this case, you have a lot of males that do not have diabetes. Because this number is disproportionate, your chi-square statistic is consequently 6.78, giving a significant value. However, you don't know the strength of this association yet, so it may help to also obtain Yule's Q coefficient. You can get this by using the psych package in R, using the Yule function. I demonstrate with your data below: #### Construct Contingency Table #### diabetes <- matrix(c(50,55,124,70), ncol=2, byrow=TRUE) rownames(diabetes) <- c("Female","Male") colnames(diabetes) <- c("No Diabetes","Diabetes") diabetes #### Test Table #### chisq.test(diabetes) # chi square psych::Yule(diabetes) # Yule's Q coefficient The association is moderate, as shown by the result: [1] -0.3217054 So to summarize, your test is significant, indicating that you can say with some level of certainty that we can reject the null hypothesis: gender seems to be associated with diabetes rates. Your Yule coefficient shows that this association is moderate. To answer some of your additional questions, this test only relates to the sample size you have, so it is not generalizable to all hospitals. This should make sense intuitively, as 1) your sample size isn't extremely large and 2) hospitals can vary a lot, such as how doctors are trained, access to resources, etc. Is this helpful? Certainly. While we would want more people tested and more sophisticated ways of tackling this question, this at least informs us that at the very minimum there is in fact a trend at this hospital, and it may (with caveats) indicate that this could be the case elsewhere. To your question about proportionality, there is no way of knowing whether or not this would hold for larger samples, but theoretically if the effect is consistent, the underlying assertion of the test would still hold. Only more testing in more settings can answer how generalizable your findings actually are. To see what is going on under the hood, this video shows how to calculate both chi-square and Yule's coefficient by hand. This video is also an accessible summary of what chi-square tests do.
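To see where the Yule coefficient comes from, it can also be computed directly from the four cell counts with the usual formula Q = (ad - bc)/(ad + bc); the short sketch below simply reproduces the value reported above.
# Sketch: Yule's Q from the 2x2 cell counts (female row: n11, n12; male row: n21, n22).
n11 <- 50; n12 <- 55; n21 <- 124; n22 <- 70
(n11 * n22 - n12 * n21) / (n11 * n22 + n12 * n21)   # -0.3217..., matching psych::Yule()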
Interpretability for chi-squared test?
The chi-square test of a 2x2 contingency table such as this basically tests the following null hypothesis: gender should produce no difference in diabetes rates. Essentially, your chi-square test pose
Interpretability for chi-squared test? The chi-square test of a 2x2 contingency table such as this basically tests the following null hypothesis: gender should produce no difference in diabetes rates. Essentially, your chi-square test poses the following question: "Is the difference in diabetes rates by gender more than we would expect?". In this case, you have a lot of males that do not have diabetes. Because this number is disproportionate, your chi-square statistic is consequently 6.78, giving a significant value. However, you don't know the strength of this association yet, so it may help to also obtain Yule's Q coefficient. You can get this by using the psych package in R, using the Yule function. I demonstrate with your data below: #### Construct Contingency Table #### diabetes <- matrix(c(50,55,124,70), ncol=2, byrow=TRUE) rownames(diabetes) <- c("Female","Male") colnames(diabetes) <- c("No Diabetes","Diabetes") diabetes #### Test Table #### chisq.test(diabetes) # chi square psych::Yule(diabetes) # Yule's Q coefficient The association is moderate, as shown by the result: [1] -0.3217054 So to summarize, your test is significant, indicating that you can say with some level of certainty that we can reject the null hypothesis: gender seems to be associated with diabetes rates. Your Yule coefficient shows that this association is moderate. To answer some of your additional questions, this test only relates to the sample size you have, so it is not generalizable to all hospitals. This should make sense intuitively, as 1) your sample size isn't extremely large and 2) hospitals can vary a lot, such as how doctors are trained, access to resources, etc. Is this helpful? Certainly. While we would want more people tested and more sophisticated ways of tackling this question, this at least informs us that at the very minimum there is in fact a trend at this hospital, and it may (with caveats) indicate that this could be the case elsewhere. To your question about proportionality, there is no way of knowing whether or not this would hold for larger samples, but theoretically if the effect is consistent, the underlying assertion of the test would still hold. Only more testing in more settings can answer how generalizable your findings actually are. To see what is going on under the hood, this video shows how to calculate both chi-square and Yule's coefficient by hand. This video is also an accessible summary of what chi-square tests do.
Interpretability for chi-squared test? The chi-square test of a 2x2 contingency table such as this basically tests the following null hypothesis: gender should produce no difference in diabetes rates. Essentially, your chi-square test pose
53,075
Plotting a cdf in r
Note that yours is not a valid probability density function. Indeed, you have $f(x)< 0$, for all $x< -1$. But, by definition, a PDF is always non-negative, ie. $$f(x)\geq 0, \text{ for all } x.$$ In addition, $$ \int_{-\infty}^{-1} 1/x^3\,dx + \int_{1}^{\infty} 1/x^3\, dx= -\frac{1}{2}+\frac{1}{2}=0. $$ Thus the CDF is not valid either. Update Note: The above post refers to the original question. Below is the answer to the updated question. For this pdf the CDF is $$ F(x) = \begin{cases} \frac{1}{2x^2}& \text{ if } x\leq -1\\ \frac{1}{2} & \text{ if } x\in[-1,1]\\ 1-\frac{1}{2x^2}& \text{ if } x>1 \end{cases}. $$ Below you can see the PDF, the CDF and the corresponding R code. ff_pdf <- function(x) abs(1/x^3) ff_cdf_l <- function(x) (1/(2*x^2)) ff_cdf_r <- function(x) 1/2 + 1/2 - 1/(2*x^2) x_l <- seq(-7,-1,len=50) x_r <- seq(1,7,len=50) pdfv_l <- sapply(x_l, ff_pdf) pdfv_r <- sapply(x_r, ff_pdf) plot(x_l, pdfv_l, type = "l", xlim=c(-7,7), lwd=2, ylim=c(0,1.3), ylab = "Density", xlab="x") points(x_r, pdfv_r, type = "l", xlim=c(-7,7), lwd=2) segments(x0=-1, y0=0, x1=1, y1=0, lwd=2) cdfv_l <- sapply(x_l, ff_cdf_l) cdfv_r <- sapply(x_r, ff_cdf_r) plot(x_l, cdfv_l, type = "l", xlim=c(-7,7), lwd=2, ylim=c(0,1.1), ylab = "Distribution function", xlab="x") segments(x0=-1, y0=1/2, x1=1, y1=1/2, lwd=2) points(x_r, cdfv_r, type = "l",lwd=2) abline(h = c(0,1), lwd=2, lty=2, col="gray")
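A quick numerical cross-check of the piecewise CDF is possible with integrate(); the evaluation points below are arbitrary, and minor accuracy warnings near the jump in the density at |x| = 1 are possible.
# Sketch: integrate the pdf numerically and compare with the closed-form CDF above.
ff_pdf <- function(x) ifelse(abs(x) >= 1, abs(1 / x^3), 0)
integrate(ff_pdf, -Inf, Inf)$value            # ~ 1, so the density is proper
integrate(ff_pdf, -Inf, -2)$value             # ~ F(-2) = 1/(2*4)     = 0.125
integrate(ff_pdf, -Inf,  2)$value             # ~ F(2)  = 1 - 1/(2*4) = 0.875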
Plotting a cdf in r
Note that yours is not a valid probability density function. Indeed, you have $f(x)< 0$, for all $x< -1$. But, by definition, a PDF is always non-negative, ie. $$f(x)\geq 0, \text{ for all } x.$$ In a
Plotting a cdf in r Note that yours is not a valid probability density function. Indeed, you have $f(x)< 0$, for all $x< -1$. But, by definition, a PDF is always non-negative, ie. $$f(x)\geq 0, \text{ for all } x.$$ In addition, $$ \int_{-\infty}^{-1} 1/x^3\,dx + \int_{1}^{\infty} 1/x^3\, dx= -\frac{1}{2}+\frac{1}{2}=0. $$ Thus the CDF is not valid either. Update Note: The above post refers to the original question. Below is the answer to the updated question. For this pdf the CDF is $$ F(x) = \begin{cases} \frac{1}{2x^2}& \text{ if } x\leq -1\\ \frac{1}{2} & \text{ if } x\in[-1,1]\\ 1-\frac{1}{2x^2}& \text{ if } x>1 \end{cases}. $$ Below you can see the PDF, the CDF and the corresponding R code. ff_pdf <- function(x) abs(1/x^3) ff_cdf_l <- function(x) (1/(2*x^2)) ff_cdf_r <- function(x) 1/2 + 1/2 - 1/(2*x^2) x_l <- seq(-7,-1,len=50) x_r <- seq(1,7,len=50) pdfv_l <- sapply(x_l, ff_pdf) pdfv_r <- sapply(x_r, ff_pdf) plot(x_l, pdfv_l, type = "l", xlim=c(-7,7), lwd=2, ylim=c(0,1.3), ylab = "Density", xlab="x") points(x_r, pdfv_r, type = "l", xlim=c(-7,7), lwd=2) segments(x0=-1, y0=0, x1=1, y1=0, lwd=2) cdfv_l <- sapply(x_l, ff_cdf_l) cdfv_r <- sapply(x_r, ff_cdf_r) plot(x_l, cdfv_l, type = "l", xlim=c(-7,7), lwd=2, ylim=c(0,1.1), ylab = "Distribution function", xlab="x") segments(x0=-1, y0=1/2, x1=1, y1=1/2, lwd=2) points(x_r, cdfv_r, type = "l",lwd=2) abline(h = c(0,1), lwd=2, lty=2, col="gray")
Plotting a cdf in r Note that yours is not a valid probability density function. Indeed, you have $f(x)< 0$, for all $x< -1$. But, by definition, a PDF is always non-negative, ie. $$f(x)\geq 0, \text{ for all } x.$$ In a
53,076
How can I compute variance of X^2 + XY where (X,Y) are normal distributed random variables?
By definition \begin{align} \operatorname{Var}(X(X+Y)) &=E(X^2(X+Y)^2)-(E(X(X+Y)))^2 \\&=E(X^4)+2E(X^3Y)+E(X^2Y^2)-(E(X^2)+E(XY))^2. \tag{1} \end{align} Using Isserlis' theorem (which applies to zero-mean jointly normal variables), in terms of the variances and covariance $\sigma_{11},\sigma_{22},\sigma_{12}$ of $X$ and $Y$, \begin{align} E(X^4)&=3\sigma_{11}^2 \\E(X^3Y)&=3\sigma_{11}\sigma_{12} \tag{2} \\E(X^2Y^2)&=\sigma_{11}\sigma_{22}+2\sigma_{12}^2. \end{align} Substituting (2) into (1) and simplifying, \begin{align} \operatorname{Var}(X(X+Y))=2\sigma_{11}^2+4\sigma_{11}\sigma_{12}+\sigma_{11}\sigma_{22}+\sigma_{12}^2. \end{align}
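A Monte Carlo check of the final expression (for the zero-mean case the derivation assumes); the particular values of the variances and covariance below are arbitrary illustrative choices.
# Sketch: simulate a zero-mean bivariate normal and compare var(X*(X+Y)) with the formula.
set.seed(3)
s11 <- 2; s22 <- 3; s12 <- 0.8
S <- matrix(c(s11, s12, s12, s22), 2, 2)
Z <- MASS::mvrnorm(1e6, mu = c(0, 0), Sigma = S)
X <- Z[, 1]; Y <- Z[, 2]
var(X * (X + Y))                                # simulated value
2 * s11^2 + 4 * s11 * s12 + s11 * s22 + s12^2   # closed form (about 21.04 here)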
How can I compute variance of X^2 + XY where (X,Y) are normal distributed random variables?
By definition \begin{align} \operatorname{Var}(X(X+Y)) &=E(X^2(X+Y)^2)-(E(X(X+Y)))^2 \\&=E(X^4)+2E(X^3Y)+E(X^2Y^2)-(E(X^2)+E(XY))^2. \tag{1} \end{align} Using Isserlis' theorem, in terms of the vari
How can I compute variance of X^2 + XY where (X,Y) are normal distributed random variables? By definition \begin{align} \operatorname{Var}(X(X+Y)) &=E(X^2(X+Y)^2)-(E(X(X+Y)))^2 \\&=E(X^4)+2E(X^3Y)+E(X^2Y^2)-(E(X^2)+E(XY))^2. \tag{1} \end{align} Using Isserlis' theorem (which applies to zero-mean jointly normal variables), in terms of the variances and covariance $\sigma_{11},\sigma_{22},\sigma_{12}$ of $X$ and $Y$, \begin{align} E(X^4)&=3\sigma_{11}^2 \\E(X^3Y)&=3\sigma_{11}\sigma_{12} \tag{2} \\E(X^2Y^2)&=\sigma_{11}\sigma_{22}+2\sigma_{12}^2. \end{align} Substituting (2) into (1) and simplifying, \begin{align} \operatorname{Var}(X(X+Y))=2\sigma_{11}^2+4\sigma_{11}\sigma_{12}+\sigma_{11}\sigma_{22}+\sigma_{12}^2. \end{align}
How can I compute variance of X^2 + XY where (X,Y) are normal distributed random variables? By definition \begin{align} \operatorname{Var}(X(X+Y)) &=E(X^2(X+Y)^2)-(E(X(X+Y)))^2 \\&=E(X^4)+2E(X^3Y)+E(X^2Y^2)-(E(X^2)+E(XY))^2. \tag{1} \end{align} Using Isserlis' theorem, in terms of the vari
53,077
Derivative of the Backshift Operator?
The backshift operator is a mapping (an "operator") between vector spaces, namely spaces of time series or sequences, $$ B\colon \mathbb{R}^\mathbb{N} \to \mathbb{R}^\mathbb{N}, (y_i)\mapsto (y_{i-1}). $$ Here, $\mathbb{R}^\mathbb{N}$ is the space of mappings from $\mathbb{N}$ to $\mathbb{R}$, i.e., of real-valued sequences indexed by natural numbers. Time series that are not infinite can be accommodated by having only finitely many observations nonzero. The space $\mathbb{R}^\mathbb{N}$ is naturally a real vector space: we can add time series and multiply them by real scalars. In functional analysis, there is a notion of differential of an operator between normed vector spaces. If an operator $f$ admits a "local linear approximation" $\varphi$ near a point $x$, then we say that $f$ is differentiable and that $\varphi$ is its differential at $x$ (see, e.g., Coleman, 2012, section 2.2). Note that this generalizes the familiar notion of differentiability of mappings between finite dimensional spaces: a function $f\colon\mathbb{R}^n\to\mathbb{R}^m$ is differentiable at a point $x$ if and only if it admits a well-defined tangential subspace (tangent line in the most familiar case of $n=m=1$) near $x$. Now, we observe that $B$ is already linear: $$ B\big(\lambda(y_i)\big) = \lambda B\big((y_i)\big)\quad\text{and}\quad B\big((y_i)+(z_i)\big) = B\big((y_i)\big)+B\big((z_i)\big). $$ Thus, a (unique!) "best approximation" to $B$ is $B$ itself (see here at Math.SE). Therefore, the differential of $B$ at all "points" (i.e., sequences) is $B$ itself. Note that we don't even need to think about what norm we put on our space $\mathbb{R}^\mathbb{N}$.
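The linearity on which the whole argument rests can be seen in a couple of lines of R; the zero-padding convention simply reflects the "finitely many nonzero observations" remark above, and the series and scalar are arbitrary.
# Sketch: the backshift operator is linear: B(a*x + y) = a*B(x) + B(y).
B <- function(y) c(0, head(y, -1))   # one-step backshift with zero padding
set.seed(7)
x <- rnorm(5); y <- rnorm(5); a <- 2.5
all.equal(B(a * x + y), a * B(x) + B(y))   # TRUE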
Derivative of the Backshift Operator?
The backshift operator is a mapping (an "operator") between vector spaces, namely spaces of time series or sequences, $$ B\colon \mathbb{R}^\mathbb{N} \to \mathbb{R}^\mathbb{N}, (y_i)\mapsto (y_{i-1})
Derivative of the Backshift Operator? The backshift operator is a mapping (an "operator") between vector spaces, namely spaces of time series or sequences, $$ B\colon \mathbb{R}^\mathbb{N} \to \mathbb{R}^\mathbb{N}, (y_i)\mapsto (y_{i-1}). $$ Here, $\mathbb{R}^\mathbb{N}$ is the space of mappings from $\mathbb{N}$ to $\mathbb{R}$, i.e., of real-valued sequences indexed by natural numbers. Time series that are not infinite can be accommodated by having only finitely many observations nonzero. The space $\mathbb{R}^\mathbb{N}$ is naturally a real vector space: we can add time series and multiply them by real scalars. In functional analysis, there is a notion of differential of an operator between normed vector spaces. If an operator $f$ admits a "local linear approximation" $\varphi$ near a point $x$, then we say that $f$ is differentiable and that $\varphi$ is its differential at $x$ (see, e.g., Coleman, 2012, section 2.2). Note that this generalizes the familiar notion of differentiability of mappings between finite dimensional spaces: a function $f\colon\mathbb{R}^n\to\mathbb{R}^m$ is differentiable at a point $x$ if and only if it admits a well-defined tangential subspace (tangent line in the most familiar case of $n=m=1$) near $x$. Now, we observe that $B$ is already linear: $$ B\big(\lambda(y_i)\big) = \lambda B\big((y_i)\big)\quad\text{and}\quad B\big((y_i)+(z_i)\big) = B\big((y_i)\big)+B\big((z_i)\big). $$ Thus, a (unique!) "best approximation" to $B$ is $B$ itself (see here at Math.SE). Therefore, the differential of $B$ at all "points" (i.e., sequences) is $B$ itself. Note that we don't even need to think about what norm we put on our space $\mathbb{R}^\mathbb{N}$.
Derivative of the Backshift Operator? The backshift operator is a mapping (an "operator") between vector spaces, namely spaces of time series or sequences, $$ B\colon \mathbb{R}^\mathbb{N} \to \mathbb{R}^\mathbb{N}, (y_i)\mapsto (y_{i-1})
53,078
Log of a log-transformed variable
One thing that comes to mind for exploratory data analysis (EDA), applicable for 1D distributions, is to plot the quantiles (related to the empirical cumulative distribution function) in various transformed axes: linear-vs-linear, log-vs-linear, linear-vs-log, log-vs-log, etc. Having done so, some hypotheses may be formed and/or tested. A specific example where “log of log” may appear is when the 1D data is distributed according to a Weibull distribution: The cumulative distribution function (CDF) of a Weibull distributed variable $x$ is: $ F(x) = 1 - \exp(-(x/\lambda)^k).$ Therefore $$-\log(1 - F(x)) = (x/\lambda)^k$$ $$\log(-\log(1 - F(x))) = k \log(x) - k \log(\lambda).$$ Note: $\log$ is the natural logarithm, sometimes denoted $\operatorname{ln}(x)$. Now, we obtain a straight line in the transformed axes: the left-hand side is the “dependent variable” in a kind of strange double-logarithmic transformation, $k$ is the slope of a log transformation of $x$, and the intercept is $-k\log(\lambda)$. In the example above, you approximate $F(x)$ empirically using the rank $r$ of each $x$ data value (requires sorting your data), and use a common method guaranteeing $0 < \hat{F} < 1$ required for the formula above such as $$\hat{F}(r) = \frac{r - 0.3}{n + 0.4},$$ where $r$ is the rank of the data value and $n$ is the total number of data points. Since the OP has a tag "least squares", please be very careful with applying it due to its fragility; better use a robust fitting procedure, such as Theil-Sen. Specifically in the example above, although the result should be a straight line after transformation, any small deviation of the actual data from a straight line may be magnified by the nonlinear transformation, ruining any least squares fit, but probably not a robust method.
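A sketch of the Weibull probability plot described above, on simulated data (the true shape 1.7 and scale 3 are arbitrary); the final lm() fit is shown only as a simple baseline, since a robust line such as Theil-Sen would be preferable for the reasons given.
# Sketch: log(-log(1 - Fhat)) versus log(x) should be roughly linear for Weibull data,
# with slope ~ k and intercept ~ -k*log(lambda).
set.seed(4)
x <- sort(rweibull(200, shape = 1.7, scale = 3))
n <- length(x)
Fhat <- (seq_len(n) - 0.3) / (n + 0.4)        # rank-based plotting positions
lx <- log(x); ly <- log(-log(1 - Fhat))
plot(lx, ly, xlab = "log(x)", ylab = "log(-log(1 - F))")
fit <- lm(ly ~ lx)                            # fragile least squares; illustration only
abline(fit)
k_hat <- unname(coef(fit)[2])
lambda_hat <- exp(-unname(coef(fit)[1]) / k_hat)
c(k = k_hat, lambda = lambda_hat)             # should be near 1.7 and 3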
Log of a log-transformed variable
One thing that comes to mind for exploratory data analysis (EDA), applicable for 1D distributions, is to plot the quantiles (related to the empirical cumulative distribution function) in various trans
Log of a log-transformed variable One thing that comes to mind for exploratory data analysis (EDA), applicable for 1D distributions, is to plot the quantiles (related to the empirical cumulative distribution function) in various transformed axes: linear-vs-linear, log-vs-linear, linear-vs-log, log-vs-log, etc. Having done so, some hypotheses may be formed and/or tested. A specific example where “log of log” may appear is when the 1D data is distributed according to a Weibull distribution: The cumulative distribution function (CDF) of a Weibull distributed variable $x$ is: $ F(x) = 1 - \exp(-(x/\lambda)^k).$ Therefore $$-\log(1 - F(x)) = (x/\lambda)^k$$ $$\log(-\log(1 - F(x))) = k \log(x) - k \log(\lambda).$$ Note: $\log$ is the natural logarithm, sometimes denoted $\operatorname{ln}(x)$. Now, we obtain a straight line in the transformed axes: the left-hand side is the “dependent variable” in a kind of strange double-logarithmic transformation, $k$ is the slope of a log transformation of $x$, and the intercept is $-k\log(\lambda)$. In the example above, you approximate $F(x)$ empirically using the rank $r$ of each $x$ data value (requires sorting your data), and use a common method guaranteeing $0 < \hat{F} < 1$ required for the formula above such as $$\hat{F}(r) = \frac{r - 0.3}{n + 0.4},$$ where $r$ is the rank of the data value and $n$ is the total number of data points. Since the OP has a tag "least squares", please be very careful with applying it due to its fragility; better use a robust fitting procedure, such as Theil-Sen. Specifically in the example above, although the result should be a straight line after transformation, any small deviation of the actual data from a straight line may be magnified by the nonlinear transformation, ruining any least squares fit, but probably not a robust method.
Log of a log-transformed variable One thing that comes to mind for exploratory data analysis (EDA), applicable for 1D distributions, is to plot the quantiles (related to the empirical cumulative distribution function) in various trans
53,079
Is the sample proportion ($\hat p$) a random variable?
In the frequentist tradition (which is what you are using here) the random variable is the data. The population parameters are mathematically treated as constant. This is what leads to the somewhat counterintuitive "null hypothesis" setup we use in intro statistics, because the probability we return (usually in the form of a p-value) is a probability on the sample given population parameters held constant at the null-hypothesis values. I would imagine this is why you see the notation you do in many introductory textbooks.
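A small simulation makes the point concrete: repeating the sampling step produces a different sample proportion each time, i.e. the statistic has a sampling distribution. The choices p = 0.3 and n = 50 below are arbitrary illustrative values.
# Sketch: the sample proportion varies across repeated samples from a fixed population.
set.seed(5)
p <- 0.3; n <- 50
phat <- replicate(1e4, mean(rbinom(n, 1, p)))
hist(phat, main = "Sampling distribution of the sample proportion", xlab = expression(hat(p)))
c(mean(phat), sd(phat), sqrt(p * (1 - p) / n))  # sd(phat) is close to sqrt(p(1-p)/n)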
Is the sample proportion ($\hat p$) a random variable?
In the frequentist tradition (which is what you are using here) the random variable is the data. The population parameters are mathematically treated as constant. This is what leads to the somewhat co
Is the sample proportion ($\hat p$) a random variable? In the frequentist tradition (which is what you are using here) the random variable is the data. The population parameters are mathematically treated as constant. This is what leads to the somewhat counterintuitive "null hypothesis" setup we use in intro statistics, because the probability we return (usually in the form of a p-value) is a probability on the sample given population parameters held constant at the null-hypothesis values. I would imagine this is why you see the notation you do in many introductory textbooks.
Is the sample proportion ($\hat p$) a random variable? In the frequentist tradition (which is what you are using here) the random variable is the data. The population parameters are mathematically treated as constant. This is what leads to the somewhat co
53,080
Is the sample proportion ($\hat p$) a random variable?
I don't see a $\hat{p}$ in the figure you posted, but from the formula in the figure, $p_1$ and $p_2$ are statistics. Once you calculate a statistic, it becomes a realization of the random variable (Be aware that I am not saying that your statistic is the true population parameter). Above all, remember that in most cases upper/lower cases are conventions. They might be widespread, which can be helpful in many cases, but there is no law that forces you to write a random variable's "name" in uppercase. It's common for introductory (and even advanced) books to have a discussion on symbols and style. That section will help you understand the notation the author(s) has adopted.
Is the sample proportion ($\hat p$) a random variable?
I don't see a $\hat{p}$ in the figure you posted, but from the formula in the figure, $p_1$ and $p_2$ are statistics. Once you calculate a statistic, it becomes a realization of the random variable (B
Is the sample proportion ($\hat p$) a random variable? I don't see a $\hat{p}$ in the figure you posted, but from the formula in the figure, $p_1$ and $p_2$ are statistics. Once you calculate a statistic, it becomes a realization of the random variable (Be aware that I am not saying that your statistic is the true population parameter). Above all, remember that in most cases upper/lower cases are conventions. They might be widespread, which can be helpful in many cases, but there is no law that forces you to write a random variable's "name" in uppercase. It's common for introductory (and even advanced) books to have a discussion on symbols and style. That section will help you understand the notation the author(s) has adopted.
Is the sample proportion ($\hat p$) a random variable? I don't see a $\hat{p}$ in the figure you posted, but from the formula in the figure, $p_1$ and $p_2$ are statistics. Once you calculate a statistic, it becomes a realization of the random variable (B
53,081
Is the sample proportion ($\hat p$) a random variable?
A statistic is a function of a random sample, therefore it is also a random variable. Like Tanner Phillips says in his answer, the frequentist school of statistics establishes a difference between a population and a sample taken from the population. Population parameters are [always] constants and to estimate those parameters we sample from the population and compute a statistic or estimate. Unless the entire population is sampled, the statistic is not a constant, it's a variable as random as the sample. In the case of proportions, the population proportion is usually represented by a lower-case $p$ and the sample proportion by the very same symbol plus a marker. The sample proportion is often written as $\hat p$ but the hat-p is not a standard notation for the proportion estimator, only a frequent one. I cannot find examples right now but I think that the convention of writing the sample proportion as $\hat p$ is not universal and is even a relatively recent one. I have also seen $p^{\star}$ and $\widetilde p$. And sometimes, though not frequently, when the population proportion is written with an upper-case $P$, so is the sample proportion.
Is the sample proportion ($\hat p$) a random variable?
A statistic is a function of a random sample, therefore it is also random variable. Like Tanner Phillips says in his answer, the frequentist school of statistics establishes a difference between a pop
Is the sample proportion ($\hat p$) a random variable? A statistic is a function of a random sample, therefore it is also a random variable. Like Tanner Phillips says in his answer, the frequentist school of statistics establishes a difference between a population and a sample taken from the population. Population parameters are [always] constants and to estimate those parameters we sample from the population and compute a statistic or estimate. Unless the entire population is sampled, the statistic is not a constant, it's a variable as random as the sample. In the case of proportions, the population proportion is usually represented by a lower-case $p$ and the sample proportion by the very same symbol plus a marker. The sample proportion is often written as $\hat p$ but the hat-p is not a standard notation for the proportion estimator, only a frequent one. I cannot find examples right now but I think that the convention of writing the sample proportion as $\hat p$ is not universal and is even a relatively recent one. I have also seen $p^{\star}$ and $\widetilde p$. And sometimes, though not frequently, when the population proportion is written with an upper-case $P$, so is the sample proportion.
Is the sample proportion ($\hat p$) a random variable? A statistic is a function of a random sample, therefore it is also random variable. Like Tanner Phillips says in his answer, the frequentist school of statistics establishes a difference between a pop
53,082
What is the correct interpretation of the $\beta_1$ coefficient in a linear regression model?
Here is how $E[Y|X]$ changes when $X$ changes by one unit (using that the expectation operator is linear and that $\epsilon$ is independent of $X$ with $E[\epsilon] = 0$): $$ \begin{align} E[Y|X = x_0+1] - E[Y|X=x_0] &= E[\beta_0 + \beta_1X + \epsilon|X=x_0+1] - E[\beta_0 + \beta_1X + \epsilon|X=x_0]\\ &= \beta_0 + \beta_1 E[X|X=x_0+1] + E[\epsilon|X=x_0+1] - (\beta_0 + \beta_1 E[X|X=x_0] + E[\epsilon|X=x_0])\\ &= \beta_0 + \beta_1(x_0+1)+ 0 - (\beta_0 + \beta_1x_0 + 0)\\ &= \beta_1, \end{align} $$ so the second interpretation is correct. However, when people use the first version, they mean the second; it is just a slightly lax way of expressing it. In the comments, @RichardHardy makes the point that one should distinguish between a purely probabilistic and causal interpretation of those two statements. In the answer above, I presumed those statements to be meant probabilistically, noncausally. Of course, there is also a causal interpretation along the lines of: "If $X$ is intervened upon and set to $x_0$ and to $x_0+1$, how would $Y$ change as a result? I.e., what is the average causal effect (ACE) of $X$ on $Y$?" Of course, there are many situations where $\beta_1$ cannot be interpreted as the ACE, e.g. if $Y$ were causing $X$.
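The same statement holds for a fitted model: the difference in predicted mean response between x0 + 1 and x0 is exactly the slope estimate. A small R sketch on simulated data (all numbers illustrative):
# Sketch: difference in fitted E[Y | X] over a one-unit change in X equals the slope.
set.seed(6)
x <- runif(500, 0, 10)
y <- 1 + 2 * x + rnorm(500)
fit <- lm(y ~ x)
x0 <- 4
diff(predict(fit, newdata = data.frame(x = c(x0, x0 + 1))))  # equals coef(fit)["x"]
coef(fit)["x"]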
What is the correct interpretation of the $\beta_1$ coefficient in a linear regression model?
Here is what $E[Y|X]$ changes when $X$ changes by one unit (using that the expectation operator is linear and that $\epsilon$ is independent of $X$ with $E[\epsilon] = 0$): $$ \begin{align} E[Y|X = x_
What is the correct interpretation of the $\beta_1$ coefficient in a linear regression model? Here is how $E[Y|X]$ changes when $X$ changes by one unit (using that the expectation operator is linear and that $\epsilon$ is independent of $X$ with $E[\epsilon] = 0$): $$ \begin{align} E[Y|X = x_0+1] - E[Y|X=x_0] &= E[\beta_0 + \beta_1X + \epsilon|X=x_0+1] - E[\beta_0 + \beta_1X + \epsilon|X=x_0]\\ &= \beta_0 + \beta_1 E[X|X=x_0+1] + E[\epsilon|X=x_0+1] - (\beta_0 + \beta_1 E[X|X=x_0] + E[\epsilon|X=x_0])\\ &= \beta_0 + \beta_1(x_0+1)+ 0 - (\beta_0 + \beta_1x_0 + 0)\\ &= \beta_1, \end{align} $$ so the second interpretation is correct. However, when people use the first version, they mean the second; it is just a slightly lax way of expressing it. In the comments, @RichardHardy makes the point that one should distinguish between a purely probabilistic and causal interpretation of those two statements. In the answer above, I presumed those statements to be meant probabilistically, noncausally. Of course, there is also a causal interpretation along the lines of: "If $X$ is intervened upon and set to $x_0$ and to $x_0+1$, how would $Y$ change as a result? I.e., what is the average causal effect (ACE) of $X$ on $Y$?" Of course, there are many situations where $\beta_1$ cannot be interpreted as the ACE, e.g. if $Y$ were causing $X$.
What is the correct interpretation of the $\beta_1$ coefficient in a linear regression model? Here is what $E[Y|X]$ changes when $X$ changes by one unit (using that the expectation operator is linear and that $\epsilon$ is independent of $X$ with $E[\epsilon] = 0$): $$ \begin{align} E[Y|X = x_
53,083
How are sums and differences of independent Exponential random variables distributed?
Because the characteristic function of an exponential distribution is $$\phi(t) = \frac{1}{1 - it},$$ when $X_1,\ldots, X_4$ are independent exponentially distributed variables (all with the same rate, which with no loss of generality may be taken as unity), the characteristic function of $X = X_1+X_2+X_3-X_4$ is $$\phi_X(t) = \phi(t)^3\phi(-t) = \frac{1}{(1-it)^3(1+it)}.$$ Using partial fractions this is easily decomposed and inverted (by inspection or direct calculation of the inverse Fourier transform) to give the density $$f_X(x) = \frac{1}{8} \left(e^{x} \mathcal{I}(x\le 0) + e^{-x}(1 + 2x + 2x^2)\mathcal{I}(x\gt 0)\right).$$ This is recognizable as a mixture of a Laplace distribution, a Gamma$(2)$ distribution, and a Gamma$(3)$ distribution. As a check, in R I generated a million independent $(X_1,X_2,X_3,X_4)$ tuples and plotted this histogram of $X_1+X_2+X_3-X_4$ to compare it to the graph of $f_X,$ as shown. hist(c(1,1,1,-1) %*% matrix(rexp(1e6*4), 4), freq=FALSE, breaks=200, xlab="x", main="") curve(1/8 * (dgamma(-x, 1) + dgamma(x, 1) + 2*dgamma(x, 2) + 4*dgamma(x, 3))) The agreement is excellent. Generalization Because the question refers to "sums and differences" quite generally, I thought it might be interesting to obtain a general answer. To that end, note that the characteristic function (cf) of the sum of $k$ iid minus the sum of $j$ iid exponential variables will be $$\phi_{k,j}(t) = (1 - it)^{-k}(1 + it)^{-j}.$$ The aim is to express this, if possible, as a mixture (linear combination) of Gamma cfs. Recall that a Gamma distribution with shape parameter $\alpha$ has $(1-it)^{-\alpha}$ for its cf. It is immediately apparent that $\phi_{k,j}$ is the cf for the difference of a Gamma$(k)$ and (independent) Gamma$(j)$ variable. But that doesn't get us any further towards a solution. This problem turns out to be purely algebraic. Upon writing $$a = \frac{2}{1-it},\quad b = \frac{2}{1+it},$$ observe that $1/a + 1/b = 1,$ whence $a + b = ab.$ Our objective, then, can be attained if we can express $a^k b^j$ as a linear combination of pure (positive) powers of $a$ and $b.$ (Formally, the calculations are done in the polynomial quotient ring $\mathbb{Q}(a,b)/(a+b-ab).$ No analytic considerations of convergence, etc., will be needed.) The idea is to find $a^kb^j$ recursively. The base case is $a^k=a^k:$ we already have a pure power of $a.$ The next case is $$a^kb = a^{k-1}(ab) = a^{k-1}(a+b) = a^k + a^{k-1}b.$$ So, if $k\gt 1,$ repeat this process, ultimately finding $$a^kb = a^k + a^{k-1} + \cdots + a^2 + a + b.$$ The next step shows us what will happen generally, using the preceding result to expand each of the terms that appears in the final multiplication by $b$: $$\begin{aligned} a^k b^2 &= (a^k b)b = \left(a^k + a^{k-1} + \cdots + a^2 + a + b\right)b\\ &= a^kb + a^{k-1}b + \cdots + a^2b + ab + b^2\\ &= a^k + a^{k-1} + \cdots + a^2 + a + b\\ &+ \quad a^{k-1} + \cdots + a^2 + a + b\\ &\cdots\\ &+ \quad\quad\quad (a+b) + b^2\\ &= \left(a^k + 2a^{k-1} + \cdots + ka\right) + \left(k b + b^2\right). \end{aligned}$$ Proceeding in this vein, we arrive at a general formula (which is easily proven inductively): $$a^k b^j = \sum_{i=1}^k \binom{k+j-1-i}{j-1} a^{i}\ +\ \sum_{i=1}^j\binom{k+j-1-i}{k-1} b^{i}.$$ (That's rather a pretty relation in its own right...) 
Upon dividing both sides by $2^{k+j}$ and re-expressing $a$ and $b$ in terms of $t,$ we obtain the desired decomposition, from which we may read off the Gamma shape parameters and mixture weights directly: $$\phi_{k,j}(t)=\sum_{i=1}^k \binom{k+j-1-i}{j-1}2^{i-j-k} \phi(t)^i\ +\ \sum_{i=1}^j\binom{k+j-1-i}{k-1} 2^{i-j-k} \phi(-t)^i.$$ For example, the combination in the question corresponds to a sum of $k=3$ exponentials minus $j=1$ exponential, for which $$\begin{aligned} \phi_{3,1}(t) &= \binom{3-1}{0}2^{1-4}\phi(t)+\binom{3-2}{0}2^{2-4}\phi(t)^2+\binom{3-3}{0}2^{3-4}\phi(t)^3\\&\quad+ \binom{1-1}{0}2^{1-4}\phi(-t)\\ &= \frac{1}{8}\phi(t)+\frac{1}{4}\phi(t)^2+\frac{1}{2}\phi(t)^3+\frac{1}{8}\phi(-t), \end{aligned}$$ telling us the distribution is a mixture of $1/8$ exponential, $1/4$ Gamma$(2),$ $1/2$ Gamma$(3),$ and $1/8$ reversed exponential distributions. Here is an R implementation modeled on the previous code. It also includes a common rate parameter for all the exponentials. (It played no role in the calculations because it merely establishes a common unit of measurement for all the distributions.) # # Density of the sum of `k` and difference of `j` *iid* exponentials. # (Equivalently, the difference of a Gamma(k) and Gamma(j) distribution.) # f <- Vectorize(function(x, k, j, ...) { g <- function(x, k, j) { i <- seq_len(k) sum(choose(k+j-1-i, j-1) * 2^(i-j-k) * dgamma(x, i, ...)) } g(x, k, j) + g(-x, j, k) }, "x") # # Simulated data. # k <- 9 j <- 4 rate <- 10 hist(c(rep(1,k), rep(-1,j)) %*% matrix(rexp(1e6 * (k+j), rate), k+j), freq=FALSE, breaks=200, xlab="x", main=bquote(paste(k==.(k), ", ", j==.(j), ", and rate"==.(rate)))) # # Check by overplotting the density function. # curve(f(x, k, j, rate=rate), add=TRUE, lwd=2) The positive components will have Gamma distributions with shape parameters $1, 2, \ldots, k$ while the negative components, when reversed, will have Gamma distributions with shape parameters $1,2,\ldots, j.$ As an illustration the next histogram, produced by the preceding code, also displays the components, suitably weighted and color-shaded to differentiate them. Their sum is plotted in black, nicely tracing out the histogram.
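The general weight formula is easy to evaluate numerically; for the k = 3, j = 1 case it reproduces the 1/8, 1/4, 1/2, 1/8 mixture read off above, and the weights sum to one. This is a quick sanity check added here, not part of the original answer.
# Sketch: mixture weights in phi_{k,j}; positive-part Gamma shapes are 1..k,
# reversed-part shapes are 1..j.
k <- 3; j <- 1
i <- seq_len(k); w_pos <- choose(k + j - 1 - i, j - 1) * 2^(i - j - k)
i <- seq_len(j); w_neg <- choose(k + j - 1 - i, k - 1) * 2^(i - j - k)
w_pos                      # 0.125 0.250 0.500  (Gamma(1), Gamma(2), Gamma(3))
w_neg                      # 0.125               (reversed exponential)
sum(w_pos) + sum(w_neg)    # 1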
How are sums and differences of independent Exponential random variables distributed?
Because the characteristic function of an exponential distribution is $$\phi(t) = \frac{1}{1 - it},$$ when $X_1,\ldots, X_4$ are independent exponentially distributed variables (all with the same rate
How are sums and differences of independent Exponential random variables distributed? Because the characteristic function of an exponential distribution is $$\phi(t) = \frac{1}{1 - it},$$ when $X_1,\ldots, X_4$ are independent exponentially distributed variables (all with the same rate, which with no loss of generality may be taken as unity), the characteristic function of $X = X_1+X_2+X_3-X_4$ is $$\phi_X(t) = \phi(t)^3\phi(-t) = \frac{1}{(1-it)^3(1+it)}.$$ Using partial fractions this is easily decomposed and inverted (by inspection or direct calculation of the inverse Fourier transform) to give the density $$f_X(x) = \frac{1}{8} \left(e^{x} \mathcal{I}(x\le 0) + e^{-x}(1 + 2x + 2x^2)\mathcal{I}(x\gt 0)\right).$$ This is recognizable as a mixture of a Laplace distribution, a Gamma$(2)$ distribution, and a Gamma$(3)$ distribution. As a check, in R I generated a million independent $(X_1,X_2,X_3,X_4)$ tuples and plotted this histogram of $X_1+X_2+X_3-X_4$ to compare it to the graph of $f_X,$ as shown. hist(c(1,1,1,-1) %*% matrix(rexp(1e6*4), 4), freq=FALSE, breaks=200, xlab="x", main="") curve(1/8 * (dgamma(-x, 1) + dgamma(x, 1) + 2*dgamma(x, 2) + 4*dgamma(x, 3))) The agreement is excellent. Generalization Because the question refers to "sums and differences" quite generally, I thought it might be interesting to obtain a general answer. To that end, note that the characteristic function (cf) of the sum of $k$ iid minus the sum of $j$ iid exponential variables will be $$\phi_{k,j}(t) = (1 - it)^{-k}(1 + it)^{-j}.$$ The aim is to express this, if possible, as a mixture (linear combination) of Gamma cfs. Recall that a Gamma distribution with shape parameter $\alpha$ has $(1-it)^{-\alpha}$ for its cf. It is immediately apparent that $\phi_{k,j}$ is the cf for the difference of a Gamma$(k)$ and (independent) Gamma$(j)$ variable. But that doesn't get us any further towards a solution. This problem turns out to be purely algebraic. Upon writing $$a = \frac{2}{1-it},\quad b = \frac{2}{1+it},$$ observe that $1/a + 1/b = 1,$ whence $a + b = ab.$ Our objective, then, can be attained if we can express $a^k b^j$ as a linear combination of pure (positive) powers of $a$ and $b.$ (Formally, the calculations are done in the polynomial quotient ring $\mathbb{Q}(a,b)/(a+b-ab).$ No analytic considerations of convergence, etc., will be needed.) The idea is to find $a^kb^j$ recursively. The base case is $a^k=a^k:$ we already have a pure power of $a.$ The next case is $$a^kb = a^{k-1}(ab) = a^{k-1}(a+b) = a^k + a^{k-1}b.$$ So, if $k\gt 1,$ repeat this process, ultimately finding $$a^kb = a^k + a^{k-1} + \cdots + a^2 + a + b.$$ The next step shows us what will happen generally, using the preceding result to expand each of the terms that appears in the final multiplication by $b$: $$\begin{aligned} a^k b^2 &= (a^k b)b = \left(a^k + a^{k-1} + \cdots + a^2 + a + b\right)b\\ &= a^kb + a^{k-1}b + \cdots + a^2b + ab + b^2\\ &= a^k + a^{k-1} + \cdots + a^2 + a + b\\ &+ \quad a^{k-1} + \cdots + a^2 + a + b\\ &\cdots\\ &+ \quad\quad\quad (a+b) + b^2\\ &= \left(a^k + 2a^{k-1} + \cdots + ka\right) + \left(k b + b^2\right). \end{aligned}$$ Proceeding in this vein, we arrive at a general formula (which is easily proven inductively): $$a^k b^j = \sum_{i=1}^k \binom{k+j-1-i}{j-1} a^{i}\ +\ \sum_{i=1}^j\binom{k+j-1-i}{k-1} b^{i}.$$ (That's rather a pretty relation in its own right...) 
Upon dividing both sides by $2^{k+j}$ and re-expressing $a$ and $b$ in terms of $t,$ we obtain the desired decomposition, from which we may read off the Gamma shape parameters and mixture weights directly: $$\phi_{k,j}(t)=\sum_{i=1}^k \binom{k+j-1-i}{j-1}2^{i-j-k} \phi(t)^i\ +\ \sum_{i=1}^j\binom{k+j-1-i}{k-1} 2^{i-j-k} \phi(-t)^i.$$ For example, the combination in the question corresponds to a sum of $k=3$ exponentials minus $j=1$ exponential, for which $$\begin{aligned} \phi_{3,1}(t) &= \binom{3-1}{0}2^{1-4}\phi(t)+\binom{3-2}{0}2^{2-4}\phi(t)^2+\binom{3-3}{0}2^{3-4}\phi(t)^3\\&\quad+ \binom{1-1}{0}2^{1-4}\phi(-t)\\ &= \frac{1}{8}\phi(t)+\frac{1}{4}\phi(t)^2+\frac{1}{2}\phi(t)^3+\frac{1}{8}\phi(-t), \end{aligned}$$ telling us the distribution is a mixture of $1/8$ exponential, $1/4$ Gamma$(2),$ $1/2$ Gamma$(3),$ and $1/8$ reversed exponential distributions. Here is an R implementation modeled on the previous code. It also includes a common rate parameter for all the exponentials. (It played no role in the calculations because it merely establishes a common unit of measurement for all the distributions.) # # Density of the sum of `k` and difference of `j` *iid* exponentials. # (Equivalently, the difference of a Gamma(k) and Gamma(j) distribution.) # f <- Vectorize(function(x, k, j, ...) { g <- function(x, k, j) { i <- seq_len(k) sum(choose(k+j-1-i, j-1) * 2^(i-j-k) * dgamma(x, i, ...)) } g(x, k, j) + g(-x, j, k) }, "x") # # Simulated data. # k <- 9 j <- 4 rate <- 10 hist(c(rep(1,k), rep(-1,j)) %*% matrix(rexp(1e6 * (k+j), rate), k+j), freq=FALSE, breaks=200, xlab="x", main=bquote(paste(k==.(k), ", ", j==.(j), ", and rate"==.(rate)))) # # Check by overplotting the density function. # curve(f(x, k, j, rate=rate), add=TRUE, lwd=2) The positive components will have Gamma distributions with shape parameters $1, 2, \ldots, k$ while the negative components, when reversed, will have Gamma distributions with shape parameters $1,2,\ldots, j.$ As an illustration the next histogram, produced by the preceding code, also displays the components, suitably weighted and color-shaded to differentiate them. Their sum is plotted in black, nicely tracing out the histogram.
How are sums and differences of independent Exponential random variables distributed? Because the characteristic function of an exponential distribution is $$\phi(t) = \frac{1}{1 - it},$$ when $X_1,\ldots, X_4$ are independent exponentially distributed variables (all with the same rate
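As a minimal base-R sketch of the mixture decomposition in the answer above for the $k=3,$ $j=1$ case (the object names w.pos, w.neg, f3, and fX are introduced here purely for illustration): the weights produced by the general formula sum to 1, and the weighted Gamma densities reproduce the closed-form density $f_X.$
# Mixture weights from the general formula, for k = 3 positive and j = 1 negative exponentials
k <- 3; j <- 1
i.pos <- seq_len(k); w.pos <- choose(k + j - 1 - i.pos, j - 1) * 2^(i.pos - j - k)
i.neg <- seq_len(j); w.neg <- choose(k + j - 1 - i.neg, k - 1) * 2^(i.neg - j - k)
c(w.pos, w.neg)                  # 1/8, 1/4, 1/2 and 1/8; setting t = 0 shows the weights must sum to 1
sum(w.pos) + sum(w.neg)          # 1
# Weighted Gamma mixture versus the closed-form density f_X derived above
f3 <- function(x) sum(w.pos * dgamma(x, i.pos)) + sum(w.neg * dgamma(-x, i.neg))
fX <- function(x) ifelse(x <= 0, exp(x), exp(-x) * (1 + 2*x + 2*x^2)) / 8
x <- seq(-3, 6, by = 0.5); x <- x[x != 0]   # skip the (measure-zero) boundary point x = 0
max(abs(sapply(x, f3) - fX(x)))  # agreement to machine precision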
53,084
When does the sum of two $t$-distributed random variables follow a $t$ distribution?
Characteristic functions are handy for studying sums of independent variables, because the c.f. of the sum is the product of the c.f.s of the addends and c.f.s determine the distribution. The c.f. of a Student t variable with parameter ("degrees of freedom") $\nu \ge 1$ is, up to a constant of proportionality $C(\nu)$ we don't need to consider, given by $$\phi_\nu(t) = C(\nu) K_{\nu/2}(|t|\sqrt{\nu})\,|t|^{\nu/2}\tag{*}$$ where $K$ is the Bessel Function of the Second Kind. I promise we won't need to learn much about this function in the following analysis. Interpreting the question a little more broadly to accommodate a possible change of scale in the result, it asks for all solutions $\nu,\kappa,\mu,\sigma$ to the equation $$\phi_\nu\,\phi_\kappa= \phi_\mu\circ \lambda_\sigma$$ where all the subscripts are Student t parameters (necessarily positive) and the new scale $\sigma$ is positive; $\lambda_\sigma (t) = \sigma t$ is the change of scale. According to $(*),$ this equation is equivalent to $$K_{\nu/2}(|t|\sqrt{\nu})\, K_{\kappa/2}(|t|\sqrt{\kappa}) |t|^{(\nu + \kappa)/2}\ \propto\ K_{\mu/2}(|t|\sigma \sqrt{\mu}) |t|^{\mu/2}.\tag{**}$$ As $|t|$ grows large, $K_\alpha(|t|)$ is asymptotically proportional to $$K_\alpha(|t|) \ \sim\ C(\alpha)e^{-|t|}|t|^{-1/2}\left(1 + \frac{4\alpha^2-1}{8|t|} + O\left(|t|^{-2}\right)\right).$$ Comparing the leading terms of $(**)$ shows us that $$e^{-(\sqrt\nu + \sqrt\kappa)|t|}|t|^{(\nu + \kappa)/2-1}\ \propto\ e^{-\sigma\sqrt\mu|t|}|t|^{(\mu-1)/2},$$ equivalent to $\nu+\kappa-1=\mu$ and $\sqrt{\nu}+\sqrt{\kappa} = \sigma\sqrt{\mu}.$ Consequently, one or both of the original random variables must have a Student $t(1)$ distribution. Moreover, taking $\kappa=1$ in that case, whence $\mu=\nu,$ we find $\sigma\sqrt{\nu} = \sqrt{\nu}+1,$ entailing $$\sigma = 1 + \frac{1}{\sqrt{\nu}}.$$ Comparing the $|t|^{-1}$ terms in the asymptotic expansion shows $$\frac{4\nu^2-1}{8|t|} =\frac{4\nu^2-1}{8\sigma|t|} + O\left(|t|^{-2}\right)$$ which is possible only when $1 = \sigma = 1 + 1/\sqrt{\nu}$ or $4\nu^2-1=0:$ the only solution is $\nu=1.$ The sum of two independent Student t variables has a Student t distribution (up to scale) only when both variables have one degree of freedom; and in that case, the resulting distribution has one degree of freedom and a scale factor of $2.$ As a check, here is a QQ plot of $|X+Y|$ for a million independent realizations of a Student $t(2)$ variable $Y$ and Student $t(1)$ variable $X$ (shown on a log-log scale due to the heavy tail). (To avoid plotting all million values, only the 0.5%, 1%, ..., 99.5% quantiles of the data are shown.) The agreement with a $\sigma$ - scaled Student $t(2)$ distribution is good in the middle, but grows increasingly worse out in the tail: adding the heavy-tailed $X$ to $Y$ has given the sum a heavier tail, even after scaling. It is this deviation at the upper right that the analysis of the $O(|t|^{-1})$ term in the c.f. revealed.
When does the sum of two $t$-distributed random variables follow a $t$ distribution?
Characteristic functions are handy for studying sums of independent variables, because the c.f. of the sum is the product of the c.f.s of the addends and c.f.s determine the distribution. The c.f. of
When does the sum of two $t$-distributed random variables follow a $t$ distribution? Characteristic functions are handy for studying sums of independent variables, because the c.f. of the sum is the product of the c.f.s of the addends and c.f.s determine the distribution. The c.f. of a Student t variable with parameter ("degrees of freedom") $\nu \ge 1$ is, up to a constant of proportionality $C(\nu)$ we don't need to consider, given by $$\phi_\nu(t) = C(\nu) K_{\nu/2}(|t|\sqrt{\nu})\,|t|^{\nu/2}\tag{*}$$ where $K$ is the Bessel Function of the Second Kind. I promise we won't need to learn much about this function in the following analysis. Interpreting the question a little more broadly to accommodate a possible change of scale in the result, it asks for all solutions $\nu,\kappa,\mu,\sigma$ to the equation $$\phi_\nu\,\phi_\kappa= \phi_\mu\circ \lambda_\sigma$$ where all the subscripts are Student t parameters (necessarily positive) and the new scale $\sigma$ is positive; $\lambda_\sigma (t) = \sigma t$ is the change of scale. According to $(*),$ this equation is equivalent to $$K_{\nu/2}(|t|\sqrt{\nu})\, K_{\kappa/2}(|t|\sqrt{\kappa}) |t|^{(\nu + \kappa)/2}\ \propto\ K_{\mu/2}(|t|\sigma \sqrt{\mu}) |t|^{\mu/2}.\tag{**}$$ As $|t|$ grows large, $K_\alpha(|t|)$ is asymptotically proportional to $$K_\alpha(|t|) \ \sim\ C(\alpha)e^{-|t|}|t|^{-1/2}\left(1 + \frac{4\alpha^2-1}{8|t|} + O\left(|t|^{-2}\right)\right).$$ Comparing the leading terms of $(**)$ shows us that $$e^{-(\sqrt\nu + \sqrt\kappa)|t|}|t|^{(\nu + \kappa)/2-1}\ \propto\ e^{-\sigma\sqrt\mu|t|}|t|^{(\mu-1)/2},$$ equivalent to $\nu+\kappa-1=\mu$ and $\sqrt{\nu}+\sqrt{\kappa} = \sigma\sqrt{\mu}.$ Consequently, one or both of the original random variables must have a Student $t(1)$ distribution. Moreover, taking $\kappa=1$ in that case, whence $\mu=\nu,$ we find $\sigma\sqrt{\nu} = \sqrt{\nu}+1,$ entailing $$\sigma = 1 + \frac{1}{\sqrt{\nu}}.$$ Comparing the $|t|^{-1}$ terms in the asymptotic expansion shows $$\frac{4\nu^2-1}{8|t|} =\frac{4\nu^2-1}{8\sigma|t|} + O\left(|t|^{-2}\right)$$ which is possible only when $1 = \sigma = 1 + 1/\sqrt{\nu}$ or $4\nu^2-1=0:$ the only solution is $\nu=1.$ The sum of two independent Student t variables has a Student t distribution (up to scale) only when both variables have one degree of freedom; and in that case, the resulting distribution has one degree of freedom and a scale factor of $2.$ As a check, here is a QQ plot of $|X+Y|$ for a million independent realizations of a Student $t(2)$ variable $Y$ and Student $t(1)$ variable $X$ (shown on a log-log scale due to the heavy tail). (To avoid plotting all million values, only the 0.5%, 1%, ..., 99.5% quantiles of the data are shown.) The agreement with a $\sigma$ - scaled Student $t(2)$ distribution is good in the middle, but grows increasingly worse out in the tail: adding the heavy-tailed $X$ to $Y$ has given the sum a heavier tail, even after scaling. It is this deviation at the upper right that the analysis of the $O(|t|^{-1})$ term in the c.f. revealed.
When does the sum of two $t$-distributed random variables follow a $t$ distribution? Characteristic functions are handy for studying sums of independent variables, because the c.f. of the sum is the product of the c.f.s of the addends and c.f.s determine the distribution. The c.f. of
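A quick simulation sketch of that conclusion (base R; the seed and sample size are arbitrary choices): dividing the sum of two independent Student $t(1)$ (Cauchy) variables by the scale factor $2$ should reproduce $t(1)$ quantiles.
set.seed(1)
n <- 1e6
s <- rt(n, df = 1) + rt(n, df = 1)   # sum of two independent t(1) = standard Cauchy variables
p <- c(0.75, 0.90, 0.95, 0.99)
round(cbind(empirical = quantile(s / 2, p), theoretical = qt(p, df = 1)), 2)
# the two columns agree up to simulation error, consistent with (X + Y)/2 ~ t(1)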
53,085
When does the sum of two $t$-distributed random variables follow a $t$ distribution?
Here is a much less formal approach that (mainly) relies on the fact that moments of order $\nu$ or larger do not exist for Student's t-distributions with $\nu$ degrees of freedom. If $X_\nu$ and $Y_\nu$ have independent Student's t-distributions with $\nu$ degrees of freedom, where the moments of order $\nu$ or higher don't exist, then the sum ($X_\nu +Y_\nu$) will have the same moments that exist and the same moments that don't exist. That implies that if the sum were to have a Student's t-distribution, then that distribution would also need to have exactly $\nu$ degrees of freedom. But for $\nu >2$ the variance of a Student's t-distribution is $\nu/(\nu-2)$, while the sum would have variance $2\nu/(\nu-2)$, and those two variances can't be the same for $\nu>2$. So for $\nu>2$, there is no sum of two independent Student's t-distributions with $\nu$ degrees of freedom that results in a Student's t-distribution. That leaves only $\nu=1$ and $\nu=2$ to check.
When does the sum of two $t$-distributed random variables follow a $t$ distribution?
Here is a much less formal approach that (mainly) relies on the fact that moments of order $\nu$ or larger do not exist for Student's t-distributions with $\nu$ degrees of freedom. If $X_\nu$ and $Y_\
When does the sum of two $t$-distributed random variables follow a $t$ distribution? Here is a much less formal approach that (mainly) relies on the fact that moments of order $\nu$ or larger do not exist for Student's t-distributions with $\nu$ degrees of freedom. If $X_\nu$ and $Y_\nu$ have independent Student's t-distributions with $\nu$ degrees of freedom, where the moments of order $\nu$ or higher don't exist, then the sum ($X_\nu +Y_\nu$) will have the same moments that exist and the same moments that don't exist. That implies that if the sum were to have a Student's t-distribution, then that distribution would also need to have exactly $\nu$ degrees of freedom. But for $\nu >2$ the variance of a Student's t-distribution is $\nu/(\nu-2)$, while the sum would have variance $2\nu/(\nu-2)$, and those two variances can't be the same for $\nu>2$. So for $\nu>2$, there is no sum of two independent Student's t-distributions with $\nu$ degrees of freedom that results in a Student's t-distribution. That leaves only $\nu=1$ and $\nu=2$ to check.
When does the sum of two $t$-distributed random variables follow a $t$ distribution? Here is a much less formal approach that (mainly) relies on the fact that moments of order $\nu$ or larger do not exist for Student's t-distributions with $\nu$ degrees of freedom. If $X_\nu$ and $Y_\
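A small numeric illustration of the variance mismatch invoked above (base R; the column names are introduced here for illustration):
nu <- 3:8
cbind(nu, var.single = nu / (nu - 2), var.sum = 2 * nu / (nu - 2))
# the sum would need nu degrees of freedom (same moment behaviour) yet has twice the t(nu) variance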
53,086
Setting null hypothesis for Binomial test
No, it is not correct. In this example you have two samples so you should be using a two-sample hypothesis test. There are a few different two-sample binomial tests available, so you will need to choose one.
Setting null hypothesis for Binomial test
No, it is not correct. In this example you have two samples so you should be using a two-sample hypothesis test. There are a few different two-sample binomial tests available, so you will need to ch
Setting null hypothesis for Binomial test No, it is not correct. In this example you have two samples so you should be using a two-sample hypothesis test. There are a few different two-sample binomial tests available, so you will need to choose one.
Setting null hypothesis for Binomial test No, it is not correct. In this example you have two samples so you should be using a two-sample hypothesis test. There are a few different two-sample binomial tests available, so you will need to ch
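One possible sketch with the counts from this example, 68/100 before bending versus 9/10 after (base R; prop.test uses a chi-squared approximation that is rough with only 10 trials, so an exact test is shown as an alternative):
# Two-sample test of equal proportions
prop.test(x = c(68, 9), n = c(100, 10))
# Exact alternative on the same 2 x 2 table
fisher.test(matrix(c(68, 32, 9, 1), nrow = 2, byrow = TRUE))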
53,087
Setting null hypothesis for Binomial test
Is it correct to use one sample binomial test to check if 9 out of 10 (90%) is significantly different using 68% as expected probability? I don't think this is correct, because you would be assuming that 68% was measured without uncertainty. Intuitively, 68% obtained from 10000 flips is "better", more certain, than 68% from just 100 flips. In your case you could use Fisher's exact test for the null hypothesis that 68/100 and 9/10 can originate from the same coin. In R: mat <- matrix(c(68, 9, 100-68, 10-9), nrow= 2) mat [,1] [,2] [1,] 68 32 [2,] 9 1 fisher.test(mat) Fisher's Exact Test for Count Data data: mat p-value = 0.2765 alternative hypothesis: true odds ratio is not equal to 1 95 percent confidence interval: 0.005230503 1.851113305 sample estimates: odds ratio 0.2384069
Setting null hypothesis for Binomial test
Is it correct to use one sample binomial test to check if 9 out of 10 (90%) is significantly different using 68% as expected probability? I don't think this is correct because you would assume that 6
Setting null hypothesis for Binomial test Is it correct to use one sample binomial test to check if 9 out of 10 (90%) is significantly different using 68% as expected probability? I don't think this is correct, because you would be assuming that 68% was measured without uncertainty. Intuitively, 68% obtained from 10000 flips is "better", more certain, than 68% from just 100 flips. In your case you could use Fisher's exact test for the null hypothesis that 68/100 and 9/10 can originate from the same coin. In R: mat <- matrix(c(68, 9, 100-68, 10-9), nrow= 2) mat [,1] [,2] [1,] 68 32 [2,] 9 1 fisher.test(mat) Fisher's Exact Test for Count Data data: mat p-value = 0.2765 alternative hypothesis: true odds ratio is not equal to 1 95 percent confidence interval: 0.005230503 1.851113305 sample estimates: odds ratio 0.2384069
Setting null hypothesis for Binomial test Is it correct to use one sample binomial test to check if 9 out of 10 (90%) is significantly different using 68% as expected probability? I don't think this is correct because you would assume that 6
53,088
Setting null hypothesis for Binomial test
No, you treat the result from your sample of 100 ($\hat{p}=68/100$) as a sample statistic not a population proportion. If you repeated that trial of $100$ before the second bending, you would be very likely not to get $68$ on the second attempt; you'd see $65$ or $74$ or $70$ or $63$, or some other number, and a third trial would be likely different again from the first two. Which is to say $0.68$ is just a noisy estimate of the true proportion. You should not ignore that uncertainty, treating it as if it were the true proportion.
Setting null hypothesis for Binomial test
No, you treat the result from your sample of 100 ($\hat{p}=68/100$) as a sample statistic not a population proportion. If you repeated that trial of $100$ before the second bending, you would be very
Setting null hypothesis for Binomial test No, you treat the result from your sample of 100 ($\hat{p}=68/100$) as a sample statistic not a population proportion. If you repeated that trial of $100$ before the second bending, you would be very likely not to get $68$ on the second attempt; you'd see $65$ or $74$ or $70$ or $63$, or some other number, and a third trial would be likely different again from the first two. Which is to say $0.68$ is just a noisy estimate of the true proportion. You should not ignore that uncertainty, treating it as if it were the true proportion.
Setting null hypothesis for Binomial test No, you treat the result from your sample of 100 ($\hat{p}=68/100$) as a sample statistic not a population proportion. If you repeated that trial of $100$ before the second bending, you would be very
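A base-R sketch of just how noisy that estimate is (the exact Clopper-Pearson interval; the seed is arbitrary):
binom.test(68, 100)$conf.int          # roughly 0.58 to 0.77: 0.68 is far from a known constant
set.seed(1)
rbinom(10, size = 100, prob = 0.68)   # ten hypothetical repeats of the first 100 flips scatter around 68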
53,089
Linear regression: highly-correlated features but not redundant
The correlation affects the standard errors on coefficients, but, at least under nice conditions that are typical to assume, the OLS solution is unbiased. Regress away, knowing that there is uncertainty in your estimates. If that uncertainty is small enough (that's a decision for you to make with your knowledge of the material being studied), then you're done. If you are concerned about lacking the sample size to have small standard errors that result in less uncertainty, then you can do a sample size calculation to determine how many observations you need to get acceptable results. An alternative could be to design your experiment to keep the features from getting correlated. How to execute such an experiment will come down to your knowledge of the subject matter. Alternatives like regularized regression might be problematic in your case where interpreting the coefficients is of paramount importance. In particular, there is not a straightforward way to estimate uncertainty in the coefficient estimates, especially if you use cross validation to tune the regularization amount.
Linear regression: highly-correlated features but not redundant
The correlation affects the standard errors on coefficients, but, at least under nice conditions that are typical to assume, the OLS solution is unbiased. Regress away, knowing that there is uncertain
Linear regression: highly-correlated features but not redundant The correlation affects the standard errors on coefficients, but, at least under nice conditions that are typical to assume, the OLS solution is unbiased. Regress away, knowing that there is uncertainty in your estimates. If that uncertainty is small enough (that's a decision for you to make with your knowledge of the material being studied), then you're done. If you are concerned about lacking the sample size to have small standard errors that result in less uncertainty, then you can do a sample size calculation to determine how many observations you need to get acceptable results. An alternative could be to design your experiment to keep the features from getting correlated. How to execute such an experiment will come down to your knowledge of the subject matter. Alternatives like regularized regression might be problematic in your case where interpreting the coefficients is of paramount importance. In particular, there is not a straightforward way to estimate uncertainty in the coefficient estimates, especially if you use cross validation to tune the regularization amount.
Linear regression: highly-correlated features but not redundant The correlation affects the standard errors on coefficients, but, at least under nice conditions that are typical to assume, the OLS solution is unbiased. Regress away, knowing that there is uncertain
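A simulation sketch of the unbiased-but-noisy point (base R; the true coefficients 2 and -1 and the correlation 0.95 are arbitrary choices made for illustration):
set.seed(1)
n <- 200; rho <- 0.95
x1 <- rnorm(n)
x2 <- rho * x1 + sqrt(1 - rho^2) * rnorm(n)    # cor(x1, x2) is about 0.95
y  <- 1 + 2 * x1 - 1 * x2 + rnorm(n)
summary(lm(y ~ x1 + x2))$coefficients
# The estimates center on (2, -1), but their standard errors are inflated by roughly
# sqrt(1 / (1 - rho^2)), about 3.2 here, relative to uncorrelated predictors (the VIF effect).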
53,090
Why does the von Mises-Fisher distribution need two parameters?
The von Mises-Fisher distribution is a distribution on the surface of a sphere. To make it as easy to visualize, think of a circle. The vM-F distribution has two parameters: the mean direction in which points are distributed on the circle, and how concentrated they are around the point on the circle in that mean direction. You can think of this as analogous to the mean and standard deviation of a Normal distribution, or the median and scale parameter of a Cauchy distribution, or... In higher dimensions, the parameter interpretations are the same. In the higher dimensions, an alternative way of thinking of it is as a point on the surface of a sphere which is the center of the distribution and how "spread out" the distribution is around that point. If κ=0, the spread is such that the points are uniformly distributed on the surface of the sphere, but as κ gets large, the distribution concentrates around the center point. You can think of these two features of the dist'n independently of each other, and it is often helpful to do so. As @Scortchi - Reinstate Monica points out in response to a comment below: "... there are plenty of applications in which $\mu$ is the parameter of interest & $\kappa$ a nuisance parameter - testing hypotheses about $\mu$, making point & interval estimates of $\mu$, are the aims of the study. It's hard to think of an application where the components ($\kappa \mu_1,\kappa \mu_2, \dots, \kappa \mu_p$) would be of separate interest."
Why does the von Mises-Fisher distribution need two parameters?
The von Mises-Fisher distribution is a distribution on the surface of a sphere. To make it as easy to visualize, think of a circle. The vM-F distribution has two parameters: the mean direction in wh
Why does the von Mises-Fisher distribution need two parameters? The von Mises-Fisher distribution is a distribution on the surface of a sphere. To make it as easy to visualize, think of a circle. The vM-F distribution has two parameters: the mean direction in which points are distributed on the circle, and how concentrated they are around the point on the circle in that mean direction. You can think of this as analogous to the mean and standard deviation of a Normal distribution, or the median and scale parameter of a Cauchy distribution, or... In higher dimensions, the parameter interpretations are the same. In the higher dimensions, an alternative way of thinking of it is as a point on the surface of a sphere which is the center of the distribution and how "spread out" the distribution is around that point. If κ=0, the spread is such that the points are uniformly distributed on the surface of the sphere, but as κ gets large, the distribution concentrates around the center point. You can think of these two features of the dist'n independently of each other, and it is often helpful to do so. As @Scortchi - Reinstate Monica points out in response to a comment below: "... there are plenty of applications in which $\mu$ is the parameter of interest & $\kappa$ a nuisance parameter - testing hypotheses about $\mu$, making point & interval estimates of $\mu$, are the aims of the study. It's hard to think of an application where the components ($\kappa \mu_1,\kappa \mu_2, \dots, \kappa \mu_p$) would be of separate interest."
Why does the von Mises-Fisher distribution need two parameters? The von Mises-Fisher distribution is a distribution on the surface of a sphere. To make it as easy to visualize, think of a circle. The vM-F distribution has two parameters: the mean direction in wh
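A base-R sketch of the two roles in the circular (2-d, von Mises) special case; besselI is base R's modified Bessel function, and the function name dvm is introduced here for illustration:
# von Mises density on the circle: mu = mean direction, kappa = concentration
dvm <- function(theta, mu, kappa) exp(kappa * cos(theta - mu)) / (2 * pi * besselI(kappa, nu = 0))
theta <- seq(-pi, pi, length.out = 7)
round(dvm(theta, mu = 0, kappa = 0), 3)   # kappa = 0: uniform on the circle, 1/(2*pi) everywhere
round(dvm(theta, mu = 0, kappa = 5), 3)   # kappa = 5: mass concentrated around the direction mu = 0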
53,091
Why does the von Mises-Fisher distribution need two parameters?
In geometry we represent directions with a unit norm vector, so $\mu$ represents the direction. That's why its norm is $||\mu||=1.$ The other parameter is similar to variance in that it represents how directions are concentrated around $\mu.$ You can pack $\kappa$ into $\vec\mu$ as its magnitude, $\vec\mu'=\kappa\vec\mu$, so that $||\vec\mu'||\ne 1$, but this will create uninterpretable artifacts: concentration components in different directions $\kappa_x,\kappa_y,\kappa_z$. This is different from a technically similar trick in physics: packing the magnitude $a$ of an acceleration into the length of a vector $\vec a$ with $a=||\vec a||\ne 1$. The acceleration components $a_x,a_y,a_z$ have a clear physical interpretation here. The distribution in question, although it technically operates in nD space, concerns directions on the (n-1)D sphere. So, trying to use up that extra dimension may look like a clever route to parsimony, but it in fact doesn't correspond to the nature of the phenomenon modeled.
Why does the von Mises-Fisher distribution need two parameters?
In geometry we represent directions with a unit norm vector, so $\mu$ represents the direction. That's why its norm is $||\mu||=1.$ The other parameter is similar to variance in that it represents how
Why does the von Mises-Fisher distribution need two parameters? In geometry we represent directions with a unit norm vector, so $\mu$ represents the direction. That's why its norm is $||\mu||=1.$ The other parameter is similar to variance in that it represents how directions are concentrated around $\mu.$ You can pack $\kappa$ into $\vec\mu$ as its magnitude, $\vec\mu'=\kappa\vec\mu$, so that $||\vec\mu'||\ne 1$, but this will create uninterpretable artifacts: concentration components in different directions $\kappa_x,\kappa_y,\kappa_z$. This is different from a technically similar trick in physics: packing the magnitude $a$ of an acceleration into the length of a vector $\vec a$ with $a=||\vec a||\ne 1$. The acceleration components $a_x,a_y,a_z$ have a clear physical interpretation here. The distribution in question, although it technically operates in nD space, concerns directions on the (n-1)D sphere. So, trying to use up that extra dimension may look like a clever route to parsimony, but it in fact doesn't correspond to the nature of the phenomenon modeled.
Why does the von Mises-Fisher distribution need two parameters? In geometry we represent directions with a unit norm vector, so $\mu$ represents the direction. That's why its norm is $||\mu||=1.$ The other parameter is similar to variance in that it represents how
53,092
Why does the von Mises-Fisher distribution need two parameters?
Why does the von Mises-Fisher distribution need two parameters? We do not necessarily need two sets of parameters: location and scale. You can combine them into one set or in another alternative parameterization. But, nobody has found it useful to describe the distribution that way (or at least not enough people such that it has become mainstream). The von Mises(-Fisher) distribution The von Mises-Fisher distribution can be parameterized in terms of the location on a sphere. But this can also be parameterized in terms of spherical coordinates. For instance, we have for the von Mises distribution (the 2d special case) the two descriptions in terms of the angle $\theta$ $$f(\theta; \mu_{\theta},\kappa) = \frac{1}{2\pi I_0 (\kappa)} e^{\kappa \cos(\theta-\mu_\theta)}$$ in terms of the point $\vec{x} = \lbrace x_1, x_2 \rbrace$ on a circle of unit length $$f(\vec{x}; \vec{\mu},\kappa) = \frac{1}{2\pi I_0 (\kappa)} e^{\kappa \vec{x} \cdot\vec{\mu} }$$ Why two parameters? Why does the von Mises-Fisher distribution need two parameters? The term 'parameter' here is a bit confusing. The distribution actually needs $n$ parameters. What you are referring to is the split-up of those $n$ parameters into a set of $n-1$ parameters relating to the location and a parameter relating to the scale. Those two sets, instead of combining them into a parameterization with a single set, are convenient. Location and scale are ambiguous in statistics and have intuitive meanings. The distribution can be written in more than $n$ parameters. There is no need to express the distribution which requires $n$ parameters in exactly $n$ parameters. The set of location parameters, requiring $n-1$ parameters, can be expressed as an Euclidian coordinate on the unit sphere, in which case there are $n$ instead of $n-1$ parameters, of which one is superfluous parameter due to the constraint. You could leave out one of the coordinates or express it in terms of spherical coordinates. Why not alternative parameterisation Why not instead define the distribution for one unconstrained parameter i.e. $y := \kappa \mu \in \mathbb{R}^p$? There's nothing that stops you from using $\vec{y} = \kappa \vec{x}$ as an alternative parameterization. You can do it if you like, but it is not done because it is not useful. Expressing the position in terms of an Euclidian coordinate makes you add an extra parameter to the parameterization. This is a redundancy and you could use the redundancy to mix the $\kappa$ into the location parameter. But, that is not why the redundancy has been created. The reason to use $n$ location parameters instead of $n-1$ is because the expression $\vec{x} \cdot \vec{\mu}$ is easier than the trigonometric functions when we would use spherical coordinates. The constraint on $\mu$ does not relate to $\kappa$. It is not necessary to get the $\kappa$ mixed into the redundant parameter. The $\mu$ and $\kappa$ have a physical and intuitive meaning. But, mixing them together into one single parameter does not seem to make sense. Reasons for alternative parameterizations There are various types of parameterizations for distributions and various reasons to use them. In your comments, you have mentioned four. (1) notational succinctness The notation of the distribution function $\frac{1}{2\pi I_0 (\kappa)} e^{\kappa \vec{x} \cdot\vec{\mu} }$ is very succinct when we express the location as $\vec{x}$. 
Succinctness is exactly the very reason why we use the notation of the location, which only needs $n-1$ parameters, in terms of $n$ parameters with a constraint. Using spherical coordinates, or expressing just the first $n-1$ coordinates of the $n$-dimensional vector $\vec{x}$ would reduce the parameters, but make the formulas more complicated. Also, notational succinctness is not a goal in itself, but it should be a tool to reach another goal. In this answer there are some ways to reparameterize a 3 parameter distribution family in terms of 2 parameters. Yes, we can reduce the number of parameters, we can make parameterization more succinct, but it has no use in itself. (2) avoiding confusion (e.g. my own) There are indeed many distributions that have confusing parameterizations. For instance, the geometric distribution expresses the probability of the number of Bernoulli trials until a successful trial is observed. The number of trials can be counted including the successful trial, or counting only the unsuccessful trials. This leads to the use of two different supports. For instance, the non-central parameter in non-central distributions can be expressed in different ways. It can relate to a shift of the mean of the normal distributions from which the distribution is derived, or it can relate to a shift of the mean of the distribution itself. For instance, the scale parameter in the normal distribution. Often we use the notation $\mathcal{N}(\mu, \sigma^2)$, which parameterizes the normal distribution by the mean and the variance. But some computer codes (for instance R) uses instead of $\sigma^2$ as the parameter, the standard deviation $\sigma$ as a parameter. In the case of the von Mises distribution I find the parameterization in terms of a separate location parameter $\vec{x}$ (with the constraint that it is on the unit-sphere) and a scale parameter $\kappa$ not at all confusing. Combining them together would be more confusing. (3) reducing computation Many distributions have parameters that are easier for computations. For instance, the beta distribution has parameters $\alpha$ and $\beta$ that relate to the powers in the distribution density $$f(x) \propto x^{\alpha-1}(1-x)^{\beta-1}$$ The more intuitive parameters are the mean $\mu$ and the variance $\sigma^2$ but then the distribution function would be $$f(x) \propto x^{\frac{\mu^2(1-\mu)}{\sigma^2}-\mu-1}(1-x)^{\frac{\mu(1-\mu)^2}{\sigma^2}+\mu}$$ I do not see how a computational simplification would be possible for the von Mises-Fisher distribution by using $\vec{y} = \kappa \vec{\mu}$. You state that you encounter an updating relation $$\kappa_{nk} \mu_{nk} = \kappa_{n-1,k} \mu_{n-1,k} + \pi_{nk} o_n$$ Indeed with the alternative parameterization, you would get $$\vec{y}_{nk} = \vec{y}_{n-1,k} + \pi_{nk} o_n$$ How did you get this parametrization? For your computations you could, of course, always switch to an alternative parametrization. The conversion between the single parameter $\vec{y}$ and the two parameters $(\vec{\mu},\kappa)$ is not so difficult. I might also suggest that $\kappa \mu$ has a physically intuitive meaning Physical meaning is another reason for the choice of parameterizations. Like in the previous example with the beta distribution, the $\alpha$ and $\beta$ parameters are are useful for computations, but the mean and variance can be interpreted more easily. In distributions like the exponential distribution and the Poisson distribution the 'rate' parameter is used in place of a 'scale' parameter. 
The 'rate' has a direct meaning relating to the physical process. The rate parameter is a good choice to describe the properties of the physical process that creates the distribution. The rate is the inverse of the 'scale' parameter. The scale parameter is often a preferable choice to describe distributions. It relates to the variation and to transformations of data. You mention that in the case of the von Mises(-Fisher) distribution the combination of the $\mu\kappa$ parameter as some physical meaning like the combination of location and intensity, with the example of pulling a table cloth. What this example lacks is a direct use or application of this physical meaning. For example, the peak height of the normal distribution is $f_{max} = \frac{1}{\sigma\sqrt{2\pi}}$. This has a physical meaning, namely the peak height, but should it be used just because it has a physical meaning?
Why does the von Mises-Fisher distribution need two parameters?
Why does the von Mises-Fisher distribution need two parameters? We do not necessarily need two sets of parameters: location and scale. You can combine them into one set or in another alternative para
Why does the von Mises-Fisher distribution need two parameters? Why does the von Mises-Fisher distribution need two parameters? We do not necessarily need two sets of parameters: location and scale. You can combine them into one set or in another alternative parameterization. But, nobody has found it useful to describe the distribution that way (or at least not enough people such that it has become mainstream). The von Mises(-Fisher) distribution The von Mises-Fisher distribution can be parameterized in terms of the location on a sphere. But this can also be parameterized in terms of spherical coordinates. For instance, we have for the von Mises distribution (the 2d special case) the two descriptions in terms of the angle $\theta$ $$f(\theta; \mu_{\theta},\kappa) = \frac{1}{2\pi I_0 (\kappa)} e^{\kappa \cos(\theta-\mu_\theta)}$$ in terms of the point $\vec{x} = \lbrace x_1, x_2 \rbrace$ on a circle of unit length $$f(\vec{x}; \vec{\mu},\kappa) = \frac{1}{2\pi I_0 (\kappa)} e^{\kappa \vec{x} \cdot\vec{\mu} }$$ Why two parameters? Why does the von Mises-Fisher distribution need two parameters? The term 'parameter' here is a bit confusing. The distribution actually needs $n$ parameters. What you are referring to is the split-up of those $n$ parameters into a set of $n-1$ parameters relating to the location and a parameter relating to the scale. Those two sets, instead of combining them into a parameterization with a single set, are convenient. Location and scale are ambiguous in statistics and have intuitive meanings. The distribution can be written in more than $n$ parameters. There is no need to express the distribution which requires $n$ parameters in exactly $n$ parameters. The set of location parameters, requiring $n-1$ parameters, can be expressed as an Euclidian coordinate on the unit sphere, in which case there are $n$ instead of $n-1$ parameters, of which one is superfluous parameter due to the constraint. You could leave out one of the coordinates or express it in terms of spherical coordinates. Why not alternative parameterisation Why not instead define the distribution for one unconstrained parameter i.e. $y := \kappa \mu \in \mathbb{R}^p$? There's nothing that stops you from using $\vec{y} = \kappa \vec{x}$ as an alternative parameterization. You can do it if you like, but it is not done because it is not useful. Expressing the position in terms of an Euclidian coordinate makes you add an extra parameter to the parameterization. This is a redundancy and you could use the redundancy to mix the $\kappa$ into the location parameter. But, that is not why the redundancy has been created. The reason to use $n$ location parameters instead of $n-1$ is because the expression $\vec{x} \cdot \vec{\mu}$ is easier than the trigonometric functions when we would use spherical coordinates. The constraint on $\mu$ does not relate to $\kappa$. It is not necessary to get the $\kappa$ mixed into the redundant parameter. The $\mu$ and $\kappa$ have a physical and intuitive meaning. But, mixing them together into one single parameter does not seem to make sense. Reasons for alternative parameterizations There are various types of parameterizations for distributions and various reasons to use them. In your comments, you have mentioned four. (1) notational succinctness The notation of the distribution function $\frac{1}{2\pi I_0 (\kappa)} e^{\kappa \vec{x} \cdot\vec{\mu} }$ is very succinct when we express the location as $\vec{x}$. 
Succinctness is exactly the very reason why we use the notation of the location, which only needs $n-1$ parameters, in terms of $n$ parameters with a constraint. Using spherical coordinates, or expressing just the first $n-1$ coordinates of the $n$-dimensional vector $\vec{x}$ would reduce the parameters, but make the formulas more complicated. Also, notational succinctness is not a goal in itself, but it should be a tool to reach another goal. In this answer there are some ways to reparameterize a 3 parameter distribution family in terms of 2 parameters. Yes, we can reduce the number of parameters, we can make parameterization more succinct, but it has no use in itself. (2) avoiding confusion (e.g. my own) There are indeed many distributions that have confusing parameterizations. For instance, the geometric distribution expresses the probability of the number of Bernoulli trials until a successful trial is observed. The number of trials can be counted including the successful trial, or counting only the unsuccessful trials. This leads to the use of two different supports. For instance, the non-central parameter in non-central distributions can be expressed in different ways. It can relate to a shift of the mean of the normal distributions from which the distribution is derived, or it can relate to a shift of the mean of the distribution itself. For instance, the scale parameter in the normal distribution. Often we use the notation $\mathcal{N}(\mu, \sigma^2)$, which parameterizes the normal distribution by the mean and the variance. But some computer codes (for instance R) uses instead of $\sigma^2$ as the parameter, the standard deviation $\sigma$ as a parameter. In the case of the von Mises distribution I find the parameterization in terms of a separate location parameter $\vec{x}$ (with the constraint that it is on the unit-sphere) and a scale parameter $\kappa$ not at all confusing. Combining them together would be more confusing. (3) reducing computation Many distributions have parameters that are easier for computations. For instance, the beta distribution has parameters $\alpha$ and $\beta$ that relate to the powers in the distribution density $$f(x) \propto x^{\alpha-1}(1-x)^{\beta-1}$$ The more intuitive parameters are the mean $\mu$ and the variance $\sigma^2$ but then the distribution function would be $$f(x) \propto x^{\frac{\mu^2(1-\mu)}{\sigma^2}-\mu-1}(1-x)^{\frac{\mu(1-\mu)^2}{\sigma^2}+\mu}$$ I do not see how a computational simplification would be possible for the von Mises-Fisher distribution by using $\vec{y} = \kappa \vec{\mu}$. You state that you encounter an updating relation $$\kappa_{nk} \mu_{nk} = \kappa_{n-1,k} \mu_{n-1,k} + \pi_{nk} o_n$$ Indeed with the alternative parameterization, you would get $$\vec{y}_{nk} = \vec{y}_{n-1,k} + \pi_{nk} o_n$$ How did you get this parametrization? For your computations you could, of course, always switch to an alternative parametrization. The conversion between the single parameter $\vec{y}$ and the two parameters $(\vec{\mu},\kappa)$ is not so difficult. I might also suggest that $\kappa \mu$ has a physically intuitive meaning Physical meaning is another reason for the choice of parameterizations. Like in the previous example with the beta distribution, the $\alpha$ and $\beta$ parameters are are useful for computations, but the mean and variance can be interpreted more easily. In distributions like the exponential distribution and the Poisson distribution the 'rate' parameter is used in place of a 'scale' parameter. 
The 'rate' has a direct meaning relating to the physical process. The rate parameter is a good choice to describe the properties of the physical process that creates the distribution. The rate is the inverse of the 'scale' parameter. The scale parameter is often a preferable choice to describe distributions. It relates to the variation and to transformations of data. You mention that in the case of the von Mises(-Fisher) distribution the combination of the $\mu\kappa$ parameter as some physical meaning like the combination of location and intensity, with the example of pulling a table cloth. What this example lacks is a direct use or application of this physical meaning. For example, the peak height of the normal distribution is $f_{max} = \frac{1}{\sigma\sqrt{2\pi}}$. This has a physical meaning, namely the peak height, but should it be used just because it has a physical meaning?
Why does the von Mises-Fisher distribution need two parameters? Why does the von Mises-Fisher distribution need two parameters? We do not necessarily need two sets of parameters: location and scale. You can combine them into one set or in another alternative para
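A sketch of the conversion mentioned above between the two parameterizations (base R; the helper names are introduced here for illustration, and kappa is assumed strictly positive so the direction is well defined):
to_y   <- function(mu, kappa) kappa * mu                  # pack (mu, kappa) into a single vector y
from_y <- function(y) { kappa <- sqrt(sum(y^2)); list(mu = y / kappa, kappa = kappa) }
y <- to_y(mu = c(1, 0, 0), kappa = 5)
from_y(y)                                                 # recovers mu (a unit vector) and kappa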
53,093
Does it make sense to have 2 branches resulting in same decision while using one single variable to divide the predictor multiple times?
Does it make sense to use one variable multiple times for dividing the predictor? Yes, if the variable is useful and has different regions that are not connected, it can be used multiple times, e.g. return 1 if 1<x<4 else 0 would require two splits. Is it possible that two splits result in the same decision? It is possible as well. Although the final decision is the same, which might have been obtained by taking the majority of cases in each branch, the overall entropy might improve with the branching. For example, if the distribution is 5-7 for class 0 and 1 respectively, and after the split it becomes (2-3) and (3-4) in the branches, the information gain would be: h = -(log(5/12) * (5/12) + log(7/12) * (7/12)) b1 = -(log(2/5) * (2/5) + log(3/5) * 3/5) b2 = -(log(3/7) * (3/7) + log(4/7) * 4/7) h - (b1 * 5/12 + b2 * 7/12) which results in a gain of 0.000408677
Does it make sense to have 2 branches resulting in same decision while using one single variable to
Does it make sense to use one variable multiple times for dividing the predictor? Yes, if the variable is useful and has different regions that are not connnected, it can be used mutliple times, e.g.
Does it make sense to have 2 branches resulting in same decision while using one single variable to divide the predictor multiple times? Does it make sense to use one variable multiple times for dividing the predictor? Yes, if the variable is useful and has different regions that are not connected, it can be used multiple times, e.g. return 1 if 1<x<4 else 0 would require two splits. Is it possible that two splits result in the same decision? It is possible as well. Although the final decision is the same, which might have been obtained by taking the majority of cases in each branch, the overall entropy might improve with the branching. For example, if the distribution is 5-7 for class 0 and 1 respectively, and after the split it becomes (2-3) and (3-4) in the branches, the information gain would be: h = -(log(5/12) * (5/12) + log(7/12) * (7/12)) b1 = -(log(2/5) * (2/5) + log(3/5) * 3/5) b2 = -(log(3/7) * (3/7) + log(4/7) * 4/7) h - (b1 * 5/12 + b2 * 7/12) which results in a gain of 0.000408677
Does it make sense to have 2 branches resulting in same decision while using one single variable to Does it make sense to use one variable multiple times for dividing the predictor? Yes, if the variable is useful and has different regions that are not connnected, it can be used mutliple times, e.g.
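The gain computation above, wrapped as a small reusable base-R sketch (natural-log entropy; the function names are introduced here for illustration):
entropy <- function(counts) { p <- counts / sum(counts); -sum(p * log(p)) }
info_gain <- function(parent, left, right) {
  entropy(parent) - (sum(left) * entropy(left) + sum(right) * entropy(right)) / sum(parent)
}
info_gain(parent = c(5, 7), left = c(2, 3), right = c(3, 4))   # about 0.000409, as above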
53,094
Does it make sense to have 2 branches resulting in same decision while using one single variable to divide the predictor multiple times?
Absolutely. In one particular case, you may have a non-monotonic relationship between the continuous variable and the target, so that the "correct" tree will have several switches from No to Yes along the leaves from left to right. Yes. While the final "answer" Yes/No can be the same in two leaves from the same parent, their proportions of "yes" samples might be quite different, which may be informative for you. If at the end of the day you only care about the final decision, you can always post-process the tree to simplify the rules.
Does it make sense to have 2 branches resulting in same decision while using one single variable to
Absolutely. In one particular case, you may have a non-monotonic relationship between the continuous variable and the target, so that the "correct" tree will have several switches from No to Yes alon
Does it make sense to have 2 branches resulting in same decision while using one single variable to divide the predictor multiple times? Absolutely. In one particular case, you may have a non-monotonic relationship between the continuous variable and the target, so that the "correct" tree will have several switches from No to Yes along the leaves from left to right. Yes. While the final "answer" Yes/No can be the same in two leaves from the same parent, their proportions of "yes" samples might be quite different, which may be informative for you. If at the end of the day you only care about the final decision, you can always post-process the tree to simplify the rules.
Does it make sense to have 2 branches resulting in same decision while using one single variable to Absolutely. In one particular case, you may have a non-monotonic relationship between the continuous variable and the target, so that the "correct" tree will have several switches from No to Yes alon
53,095
Does it make sense to have 2 branches resulting in same decision while using one single variable to divide the predictor multiple times?
YES & YES set.seed(2021) N <- 10 x0 <- c(runif(N, -4, -2), runif(N, 2, 4)) x1 <- runif(2*N, -2, 2) plot(x0, rep(1, 2*N), col = 'red') points(x1, rep(1, 2*N), col = 'blue') (If someone could run this in R and post the picture, that would be fantastic.) If you're above $0$, the color depends on whether you're above $2$. If you're below $0$, the color depends on whether you're below $-2$. However, you cannot split at $2$ or $-2$ and always get the right color. There are reds and blues above $-2$. There are reds and blues below $2$. Splitting on the same variable multiple times allows us to guess the correct color every time. In some pseudocode: if x > 0: if x < 2: color = blue if x > 2: color = red if x < 0: if x < -2: color = red if x > -2: color = blue
Does it make sense to have 2 branches resulting in same decision while using one single variable to
YES & YES set.seed(2021) N <- 10 x0 <- c(runif(N, -4, -2), runif(N, 2, 4)) x1 <- runif(2*N, -2, 2) plot(x0, rep(1, 2*N), col = 'red') points(x1, rep(1, 2*N), col = 'blue') (If someone could run this
Does it make sense to have 2 branches resulting in same decision while using one single variable to divide the predictor multiple times? YES & YES set.seed(2021) N <- 10 x0 <- c(runif(N, -4, -2), runif(N, 2, 4)) x1 <- runif(2*N, -2, 2) plot(x0, rep(1, 2*N), col = 'red') points(x1, rep(1, 2*N), col = 'blue') (If someone could run this in R and post the picture, that would be fantastic.) If you're above $0$, the color depends on whether you're above $2$. If you're below $0$, the color depends on whether you're below $-2$. However, you cannot split at $2$ or $-2$ and always get the right color. There are reds and blues above $-2$. There are reds and blues below $2$. Splitting on the same variable multiple times allows us to guess the correct color every time. In some pseudocode: if x > 0: if x < 2: color = blue if x > 2: color = red if x < 0: if x < -2: color = red if x > -2: color = blue
Does it make sense to have 2 branches resulting in same decision while using one single variable to YES & YES set.seed(2021) N <- 10 x0 <- c(runif(N, -4, -2), runif(N, 2, 4)) x1 <- runif(2*N, -2, 2) plot(x0, rep(1, 2*N), col = 'red') points(x1, rep(1, 2*N), col = 'blue') (If someone could run this
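A sketch of fitting an actual tree to those points (this assumes the rpart package; minsplit and cp are loosened only so the tiny sample gets split at all):
library(rpart)
set.seed(2021)
N  <- 10
x0 <- c(runif(N, -4, -2), runif(N, 2, 4))                    # the 'red' points
x1 <- runif(2 * N, -2, 2)                                    # the 'blue' points
d  <- data.frame(x = c(x0, x1), color = factor(rep(c("red", "blue"), each = 2 * N)))
fit <- rpart(color ~ x, data = d, method = "class", minsplit = 5, cp = 0)
print(fit)   # the single variable x appears in more than one split of the fitted tree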
53,096
Why the pearson correlation p-value doesn't fully correspond to CI in R?
Nearly all such questions are answered in the help on the relevant functions. From the help for cor.test: If method is "pearson", the test statistic is based on Pearson's product moment correlation coefficient cor(x, y) and follows a t distribution with length(x)-2 degrees of freedom if the samples follow independent normal distributions. If there are at least 4 complete pairs of observation, an asymptotic confidence interval is given based on Fisher's Z transform. The CI based on Fisher's z will not exactly correspond to the result of the t-test. The usual hypothesis tests rely on the distribution of the test statistic under the null. In this case, the null distribution for the test is based on the population correlation being zero. Under that circumstance, you get a t-test (under the assumptions), but when the population correlation is not zero you no longer have a t-distribution for the statistic, so the confidence interval isn't based on that. If an exact distribution were used for the CI (a calculation involving the hypergeometric function $_2F_1$, or a related series expansion), then it should correspond.
Why the pearson correlation p-value doesn't fully correspond to CI in R?
Nearly all such questions are answered in the help on the relevant functions. From the help for cor.test: If method is "pearson", the test statistic is based on Pearson's product moment correlation c
Why the pearson correlation p-value doesn't fully correspond to CI in R? Nearly all such questions are answered in the help on the relevant functions. From the help for cor.test: If method is "pearson", the test statistic is based on Pearson's product moment correlation coefficient cor(x, y) and follows a t distribution with length(x)-2 degrees of freedom if the samples follow independent normal distributions. If there are at least 4 complete pairs of observation, an asymptotic confidence interval is given based on Fisher's Z transform. The CI based on Fisher's z will not exactly correspond to the result of the t-test. The usual hypothesis tests rely on the distribution of the test statistic under the null. In this case, the null distribution for the test is based on the population correlation being zero. Under that circumstance, you get a t-test (under the assumptions), but when the population correlation is not zero you no longer have a t-distribution for the statistic, so the confidence interval isn't based on that. If an exact distribution were used for the CI (a calculation involving the hypergeometric function $_2F_1$, or a related series expansion), then it should correspond.
Why the pearson correlation p-value doesn't fully correspond to CI in R? Nearly all such questions are answered in the help on the relevant functions. From the help for cor.test: If method is "pearson", the test statistic is based on Pearson's product moment correlation c
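A sketch reproducing both pieces by hand, which shows why they need not agree exactly (base R; the simulated data are arbitrary):
set.seed(1)
n <- 20
x <- rnorm(n); y <- 0.4 * x + rnorm(n)
ct <- cor.test(x, y)
r  <- unname(ct$estimate)
# p-value: t statistic with n - 2 degrees of freedom (exact under the null of zero correlation)
2 * pt(-abs(r * sqrt(n - 2) / sqrt(1 - r^2)), df = n - 2)    # matches ct$p.value
# confidence interval: Fisher's z transform, an asymptotic approximation
tanh(atanh(r) + c(-1, 1) * qnorm(0.975) / sqrt(n - 3))       # matches ct$conf.int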
53,097
Is it accurate to call P(D|H) the 'likelihood' in Bayesian inference?
Note that the likelihood function as defined by Etz is still given by the same conditional probability. I.e.: $$ L(H) = p(D|H) $$ When we write it as $L(H)$ (or $\mathcal{L}(H)$, or similar), we emphasize that we're evaluating $p(D|H)$ as a function of $H$ (with $D$ fixed), rather than of $D$ (with $H$ fixed). However, that doesn't mean that it's incorrect to say that $p(D|H)$ is the likelihood, in situations where we are indeed evaluating it as a function of $H$, conditioned on some given value of $D$. The typical situation in which this happens is when applying Bayes' rule, as in Olshausen's example, to obtain the posterior. By definition, that scenario has $D$ fixed, and so it satisfies the criteria for calling $p(D|H)$ the likelihood (or likelihood function). It is only incorrect (in Etz' view, and many others, including myself) to call $p(D|H)$ the likelihood if you're using it in the "forward" direction, i.e. with $D$ not fixed. One argument for using the word "likelihood" when we do is that the likelihood function does not (in general) integrate to 1. That is, $\int L(H=h)dh=\int p(D|H=h)dh\neq1$. Thus, the outputs of the function are clearly not proper probabilities (or probability densities). Conversely, we do have $\int p(D=x|H)dx=1$ (using $x$ here to prevent confusion with the $d$ from the integral), which illustrates that in that case, we are dealing with probabilities (or probability densities). You can see that Olshausen's usage is in accordance with this principle: his likelihood does not (necessarily) integrate to 1 either.
Is it accurate to call P(D|H) the 'likelihood' in Bayesian inference?
Note that the likelihood function as defined by Etz is still given by the same conditional probability. I.e.: $$ L(H) = p(D|H) $$ When we write it as $L(H)$ (or $\mathcal{L}(H)$, or similar), we empha
Is it accurate to call P(D|H) the 'likelihood' in Bayesian inference? Note that the likelihood function as defined by Etz is still given by the same conditional probability. I.e.: $$ L(H) = p(D|H) $$ When we write it as $L(H)$ (or $\mathcal{L}(H)$, or similar), we emphasize that we're evaluating $p(D|H)$ as a function of $H$ (with $D$ fixed), rather than of $D$ (with $H$ fixed). However, that doesn't mean that it's incorrect to say that $p(D|H)$ is the likelihood, in situations where we are indeed evaluating it as a function of $H$, conditioned on some given value of $D$. The typical situation in which this happens is when applying Bayes' rule, as in Olshausen's example, to obtain the posterior. By definition, that scenario has $D$ fixed, and so it satisfies the criteria for calling $p(D|H)$ the likelihood (or likelihood function). It is only incorrect (in Etz' view, and many others, including myself) to call $p(D|H)$ the likelihood if you're using it in the "forward" direction, i.e. with $D$ not fixed. One argument for using the word "likelihood" when we do is that the likelihood function does not (in general) integrate to 1. That is, $\int L(H=h)dh=\int p(D|H=h)dh\neq1$. Thus, the outputs of the function are clearly not proper probabilities (or probability densities). Conversely, we do have $\int p(D=x|H)dx=1$ (using $x$ here to prevent confusion with the $d$ from the integral), which illustrates that in that case, we are dealing with probabilities (or probability densities). You can see that Olshausen's usage is in accordance with this principle: his likelihood does not (necessarily) integrate to 1 either.
Is it accurate to call P(D|H) the 'likelihood' in Bayesian inference? Note that the likelihood function as defined by Etz is still given by the same conditional probability. I.e.: $$ L(H) = p(D|H) $$ When we write it as $L(H)$ (or $\mathcal{L}(H)$, or similar), we empha
53,098
Is it accurate to call P(D|H) the 'likelihood' in Bayesian inference?
This seems to be mostly a matter of convention. Let me state mine, and clarify the properties that actually matter beyond the nomenclature. $$ \text{Likelihood} = L(\theta) = P\left(\text{Observed data} \, \middle| \, \theta\right) $$ Note well that here we consider the observed data. This is a distribution neither in terms of the data (since the data are fixed at the observed values) nor in terms of the parameters, $\theta$. $$ \text{Sampling distribution} = P\left(\text{data} \, \middle| \, \theta\right) $$ This is a distribution over the data and could be used, e.g., to draw pseudo-data. It is typically used only in frequentist inference (which violates the likelihood principle), though it also appears in Bayesian inference in approximate Bayesian computation, simulation-based calibration, the computation of e.g. prior predictive distributions, and the construction of objective Bayesian priors. Broadly speaking, though, because Bayesian analysis conditions only on the observed data, it obeys the likelihood principle and requires only the likelihood.
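A small sketch of the two objects in code, using an arbitrary toy model (normal data with known unit variance and a made-up prior; all numbers are assumptions for illustration only): the sampling distribution is what you simulate pseudo-data from (here, one prior predictive draw), whereas the likelihood is the same density evaluated at the fixed observed data, viewed as a function of $\theta$.

#Toy model: y_i ~ Normal(theta, 1), with an (assumed) Normal(0, 2) prior on theta
set.seed(1)
y_obs <- c(0.8, 1.3, 0.2, 1.1, 0.6)   #made-up observed data, now held fixed

#Likelihood: a function of theta only, with y_obs fixed
L <- function(theta) sapply(theta, function(t) prod(dnorm(y_obs, mean = t, sd = 1)))
theta_grid <- seq(-2, 4, length.out = 200)
plot(theta_grid, L(theta_grid), type = "l",
     xlab = expression(theta), ylab = "L(theta)")

#Sampling distribution: used to generate pseudo-data (one prior predictive draw)
theta_draw <- rnorm(1, mean = 0, sd = 2)
y_pseudo   <- rnorm(length(y_obs), mean = theta_draw, sd = 1)
y_pseudo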
Is it accurate to call P(D|H) the 'likelihood' in Bayesian inference?
This seems to be mostly matters of convention. Let me say mine, and clarify the actual important properties besides the nomenclature. $$ \text{Likelihood} = L(\theta) = P\left(\text{Observed data} \,
Is it accurate to call P(D|H) the 'likelihood' in Bayesian inference? This seems to be mostly matters of convention. Let me say mine, and clarify the actual important properties besides the nomenclature. $$ \text{Likelihood} = L(\theta) = P\left(\text{Observed data} \, \middle| \, \theta\right) $$ Note well, that here we consider the observed data. This is neither a distribution in terms of data (since data was fixed to the observed data) or parameters, $\theta$. $$ \text{Sampling distribution} = P\left(\text{data} \, \middle| \, \theta\right) $$ This is a distribution of the data and could be used e.g., to draw pseudo-data. This would usually only be used frequentist inference (which violates the likelihood principle) though may be used in Bayesian inference in: approximate Bayesian computation, simulation based calibration, for computing e.g., prior predictive distributions, and lastly objective Bayesian priors. Broadly speaking though, as we condition only on the observed data in Bayesian analysis, it obeys the likelihood principle and requires only the likelihood.
Is it accurate to call P(D|H) the 'likelihood' in Bayesian inference? This seems to be mostly matters of convention. Let me say mine, and clarify the actual important properties besides the nomenclature. $$ \text{Likelihood} = L(\theta) = P\left(\text{Observed data} \,
53,099
Should I make equally sized samples for the Mann-Whitney U test if originally I have unequal sample sizes
Use your original data. Claims about losing power when groups have unequal sizes make more sense in the context of designing the experiment. If you have enough resources to make $100$ observations, the greatest power comes from allocating $50$ to the control group and the other $50$ to the treatment group. However, $150$ in the control group with $50$ in the treatment group will be more powerful than the balanced test with $100$ total observations, even if it is not the most powerful allocation of the $200$ observations. (Uneven allocation can be defensible, however: maybe the treatment is expensive, but you still get adequate power with $50$ treatment subjects and $150$ control subjects.)
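The point can be checked with a rough simulation. The sketch below uses an arbitrary alternative (a normal location shift of $0.5$ standard deviations, chosen only for illustration) and estimates the power of the Mann-Whitney test for a $50/50$, a $150/50$, and a $100/100$ allocation.

#Power of the Mann-Whitney U test under an assumed shift alternative:
#Normal(0, 1) control versus Normal(0.5, 1) treatment
power_mw <- function(n_control, n_treat, shift = 0.5, nsim = 2000) {
  rejections <- replicate(nsim, {
    ctrl  <- rnorm(n_control)
    treat <- rnorm(n_treat, mean = shift)
    wilcox.test(ctrl, treat)$p.value < 0.05
  })
  mean(rejections)
}

set.seed(42)
c(balanced_100   = power_mw( 50,  50),   #100 obs total, 50/50
  unbalanced_200 = power_mw(150,  50),   #200 obs total, 150/50
  balanced_200   = power_mw(100, 100))   #200 obs total, 100/100

Under these made-up settings the estimated powers should come out in the order described above: the $100/100$ design beats the $150/50$ design, which in turn beats the $50/50$ design.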
Should I make equally sized samples for the Mann-Whitney U test if originally I have unequal sample
Use your original data. Claims about losing power when groups have unequal sizes make more sense in the context of designing the experiment. If you have enough resources to make $100$ observations, th
Should I make equally sized samples for the Mann-Whitney U test if originally I have unequal sample sizes Use your original data. Claims about losing power when groups have unequal sizes make more sense in the context of designing the experiment. If you have enough resources to make $100$ observations, the greatest power will be when you allocate $50$ to the control group and the other $50$ to the treatment group. However, $150$ in the control group with $50$ in the treatment group will be more powerful than the test with $100$ observations, even if it is not the most powerful allocation of the $200$ observations. (It can be defended to do uneven allocation, however. Maybe the treatment is expensive, but you still get adequate power with $50$ treatment subjects and $150$ control subjects.)
Should I make equally sized samples for the Mann-Whitney U test if originally I have unequal sample Use your original data. Claims about losing power when groups have unequal sizes make more sense in the context of designing the experiment. If you have enough resources to make $100$ observations, th
53,100
Riddler puzzle - distance from origin after two random jumps of equal length
The original 'riddler' problem at FiveThirtyEight appears to be asking about the mode of the distribution rather than its expected value. In any case, if we can find the distribution then we can find both the mode and the expected value. Your initial trigonometric expression for the length looks wrong to me, and I think that is the problem with your approach. You can see from your plot that the problem is not just the constant-of-integration, since your posited density line is not proportionate to the simulated density. I will show you how to solve this using the correct trigonometric rules. If we let $\theta$ be the angle between the two directions of the jumps then we can apply the law of cosines to determine that the total distance from the starting point after both jumps is: $$\begin{align} T &= \sqrt{r^2 + r^2 - 2 \cdot r \cdot r \cos \theta} \\[8pt] &= \sqrt{2 r^2 - 2 r^2 \cos \theta} \\[6pt] &= r \sqrt{2 (1-\cos \theta)} \\[6pt] &= r \sqrt{2 (1-\cos |\theta|)}. \\[6pt] \end{align}$$ Now, since $|\theta| \sim \text{U}(0, \pi)$, for all $0 \leqslant t \leqslant 2r$ we have: $$\begin{align} F_T(t) \equiv \mathbb{P}(T \leqslant t) &= \mathbb{P} \bigg( \sqrt{2 (1-\cos |\theta|)} \leqslant \frac{t}{r} \bigg) \\[6pt] &= \mathbb{P} \bigg( 2 (1-\cos |\theta|) \leqslant \frac{t^2}{r^2} \bigg) \\[6pt] &= \mathbb{P} \bigg( \cos |\theta| \geqslant 1 - \frac{t^2}{2 r^2} \bigg) \\[6pt] &= \mathbb{P} \bigg( |\theta| \leqslant \text{arccos} \Big(1 - \frac{t^2}{2 r^2} \Big) \bigg) \\[6pt] &= \frac{1}{\pi} \cdot \text{arccos} \Big(1 - \frac{t^2}{2 r^2} \Big). \\[6pt] \end{align}$$ Differentiating this expression then gives the corresponding density function: $$\begin{align} f_T(t) \equiv \frac{d F_T(t)}{d t} &= \frac{1}{\pi} \cdot \frac{2t/2r^2}{\sqrt{1-(1-t^2/2r^2)^2}} \\[6pt] &= \frac{1}{\pi} \cdot \frac{2t/2r^2}{\sqrt{t^2/r^2 - t^4/4r^4}} \\[6pt] &= \frac{1}{\pi} \cdot \frac{2}{\sqrt{4 r^2 - t^2}}. \\[6pt] \end{align}$$ The mode of this distribution occurs at $t=2r$, which solves the original 'riddler' problem. With a bit of extra work you can show that the mean and standard deviation of the distribution are: $$\begin{align} \mathbb{E}(T) &= \frac{4}{\pi} \cdot r \approx 1.27324 r \\[6pt] \mathbb{S}(T) &= \frac{\sqrt{2(\pi^2-8)}}{\pi} \cdot r \approx 0.6155169 r \\[6pt] \end{align}$$ Incidentally, this distribution is essentially a folded-arcsine distribution (relating closely to the arcsine distribution). It can be characterised in an alternative way by taking $X \sim \text{arcsine}$ and then taking $T = 4r |X-\tfrac{1}{2}|$. (The interested reader may confirm that this transformation gives the same distribution shown above.) Another interesting aspect of this problem is to ask what happens to the distribution as we add more random jumps. It turns out that the mathematics gets rather nasty once we go beyond two jumps, and you end up with density formulae that are defined recursively using expressions that cannot be put into closed form. If we were to look at increments of a large number of jumps, the displacement from the origin should act like a kind of Brownian motion process, and so in the limit the distribution will converge to an appropriately scaled normal distribution (something that we can easily demonstrate using the CLT). Confirming this result by simulation: We can confirm this result by simulation in R. First we will program a function SIMULATE to simulate the jumps and compute the total displacement from the starting point. 
SIMULATE <- function(n, r = 1) {
  #Generate two jumps in random direction
  ANGLE1 <- 2*pi*runif(n)
  ANGLE2 <- 2*pi*runif(n)
  JUMP1x <- r*sin(ANGLE1)
  JUMP1y <- r*cos(ANGLE1)
  JUMP2x <- r*sin(ANGLE2)
  JUMP2y <- r*cos(ANGLE2)
  #Determine total displacement and distance
  TOTALx <- JUMP1x + JUMP2x
  TOTALy <- JUMP1y + JUMP2y
  DIST <- sqrt(TOTALx^2 + TOTALy^2)
  #Return the distance values
  DIST
}

Now we will confirm that our density function matches the result from a large number of simulations.

#Generate the true density function
DENSITY <- function(x, r = 1) {
  n <- length(x)
  OUT <- rep(0, n)
  for (i in 1:n) {
    if ((x[i] >= 0) & (x[i] <= 2*r)) {
      OUT[i] <- 2/(pi*sqrt(4*r^2 - x[i]^2))
    }
  }
  OUT
}

#Generate histogram of simulations
set.seed(1)
SIMS <- SIMULATE(10^6)
hist(SIMS, freq = FALSE, breaks = 100, col = 'LightBlue',
     xlab = 'Distance', main = 'Simulation versus True Density')
XX <- seq(0, 1.999, length = 100)
YY <- DENSITY(XX)
lines(XX, YY, col = 'Blue', lwd = 3)
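As an additional check (not part of the original write-up), the simulated mean and standard deviation can be compared with the closed-form values $4r/\pi$ and $r\sqrt{2(\pi^2-8)}/\pi$ derived above, using the SIMS vector just generated (with $r = 1$); with $10^6$ draws the two should agree to a few decimal places.

#Compare simulated moments with the theoretical values (r = 1)
c(sim.mean = mean(SIMS), true.mean = 4/pi)
c(sim.sd = sd(SIMS), true.sd = sqrt(2*(pi^2 - 8))/pi)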
Riddler puzzle - distance from origin after two random jumps of equal length
The original 'riddler' problem at FiveThirtyEight appears to be asking about the mode of the distribution rather than its expected value. In any case, if we can find the distribution then we can find
Riddler puzzle - distance from origin after two random jumps of equal length The original 'riddler' problem at FiveThirtyEight appears to be asking about the mode of the distribution rather than its expected value. In any case, if we can find the distribution then we can find both the mode and the expected value. Your initial trigonometric expression for the length looks wrong to me, and I think that is the problem with your approach. You can see from your plot that the problem is not just the constant-of-integration, since your posited density line is not proportionate to the simulated density. I will show you how to solve this using the correct trigonometric rules. If we let $\theta$ be the angle between the two directions of the jumps then we can apply the law of cosines to determine that the total distance from the starting point after both jumps is: $$\begin{align} T &= \sqrt{r^2 + r^2 - 2 \cdot r \cdot r \cos \theta} \\[8pt] &= \sqrt{2 r^2 - 2 r^2 \cos \theta} \\[6pt] &= r \sqrt{2 (1-\cos \theta)} \\[6pt] &= r \sqrt{2 (1-\cos |\theta|)}. \\[6pt] \end{align}$$ Now, since $|\theta| \sim \text{U}(0, \pi)$, for all $0 \leqslant t \leqslant 2r$ we have: $$\begin{align} F_T(t) \equiv \mathbb{P}(T \leqslant t) &= \mathbb{P} \bigg( \sqrt{2 (1-\cos |\theta|)} \leqslant \frac{t}{r} \bigg) \\[6pt] &= \mathbb{P} \bigg( 2 (1-\cos |\theta|) \leqslant \frac{t^2}{r^2} \bigg) \\[6pt] &= \mathbb{P} \bigg( \cos |\theta| \geqslant 1 - \frac{t^2}{2 r^2} \bigg) \\[6pt] &= \mathbb{P} \bigg( |\theta| \leqslant \text{arccos} \Big(1 - \frac{t^2}{2 r^2} \Big) \bigg) \\[6pt] &= \frac{1}{\pi} \cdot \text{arccos} \Big(1 - \frac{t^2}{2 r^2} \Big). \\[6pt] \end{align}$$ Differentiating this expression then gives the corresponding density function: $$\begin{align} f_T(t) \equiv \frac{d F_T(t)}{d t} &= \frac{1}{\pi} \cdot \frac{2t/2r^2}{\sqrt{1-(1-t^2/2r^2)^2}} \\[6pt] &= \frac{1}{\pi} \cdot \frac{2t/2r^2}{\sqrt{t^2/r^2 - t^4/4r^4}} \\[6pt] &= \frac{1}{\pi} \cdot \frac{2}{\sqrt{4 r^2 - t^2}}. \\[6pt] \end{align}$$ The mode of this distribution occurs at $t=2r$, which solves the original 'riddler' problem. With a bit of extra work you can show that the mean and standard deviation of the distribution are: $$\begin{align} \mathbb{E}(T) &= \frac{4}{\pi} \cdot r \approx 1.27324 r \\[6pt] \mathbb{S}(T) &= \frac{\sqrt{2(\pi^2-8)}}{\pi} \cdot r \approx 0.6155169 r \\[6pt] \end{align}$$ Incidentally, this distribution is essentially a folded-arcsine distribution (relating closely to the arcsine distribution). It can be characterised in an alternative way by taking $X \sim \text{arcsine}$ and then taking $T = 4r |X-\tfrac{1}{2}|$. (The interested reader may confirm that this transformation gives the same distribution shown above.) Another interesting aspect of this problem is to ask what happens to the distribution as we add more random jumps. It turns out that the mathematics gets rather nasty once we go beyond two jumps, and you end up with density formulae that are defined recursively using expressions that cannot be put into closed form. If we were to look at increments of a large number of jumps, the displacement from the origin should act like a kind of Brownian motion process, and so in the limit the distribution will converge to an appropriately scaled normal distribution (something that we can easily demonstrate using the CLT). Confirming this result by simulation: We can confirm this result by simulation in R. 
First we will program a function SIMULATE to simulate the jumps and compute the total displacement from the starting point.

SIMULATE <- function(n, r = 1) {
  #Generate two jumps in random direction
  ANGLE1 <- 2*pi*runif(n)
  ANGLE2 <- 2*pi*runif(n)
  JUMP1x <- r*sin(ANGLE1)
  JUMP1y <- r*cos(ANGLE1)
  JUMP2x <- r*sin(ANGLE2)
  JUMP2y <- r*cos(ANGLE2)
  #Determine total displacement and distance
  TOTALx <- JUMP1x + JUMP2x
  TOTALy <- JUMP1y + JUMP2y
  DIST <- sqrt(TOTALx^2 + TOTALy^2)
  #Return the distance values
  DIST
}

Now we will confirm that our density function matches the result from a large number of simulations.

#Generate the true density function
DENSITY <- function(x, r = 1) {
  n <- length(x)
  OUT <- rep(0, n)
  for (i in 1:n) {
    if ((x[i] >= 0) & (x[i] <= 2*r)) {
      OUT[i] <- 2/(pi*sqrt(4*r^2 - x[i]^2))
    }
  }
  OUT
}

#Generate histogram of simulations
set.seed(1)
SIMS <- SIMULATE(10^6)
hist(SIMS, freq = FALSE, breaks = 100, col = 'LightBlue',
     xlab = 'Distance', main = 'Simulation versus True Density')
XX <- seq(0, 1.999, length = 100)
YY <- DENSITY(XX)
lines(XX, YY, col = 'Blue', lwd = 3)
Riddler puzzle - distance from origin after two random jumps of equal length The original 'riddler' problem at FiveThirtyEight appears to be asking about the mode of the distribution rather than its expected value. In any case, if we can find the distribution then we can find