54,301 | Expectation of rational formula

I symbolically evaluated the double integral for the desired expectation in MAPLE as follows (note that I executed expand just so that I could get the image to display better).
expand(int(int(a*x^2*y^2/(1+b*x^2)*1/(2*Pi)*exp(-1/2*x^2)*exp(-1/2*y^2),x=-infinity..infinity),y=-infinity..infinity));
Here is the result (shown as an image in the original post):
I checked this result against stochastic simulation for several values of b, and it matches, so I have confidence it is correct.
Here are the numerical results for a = 1 and a few values of b:
b = 0.1 --> 0.79214855
b = 1 --> 0.34432046
b = 5 --> 0.11888698
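For a quick check, a minimal Monte Carlo sketch in R (sample size and seed arbitrary, $a=1$) reproduces these values:

    # Monte Carlo estimate of E[a X^2 Y^2 / (1 + b X^2)] for X, Y iid standard normal, a = 1
    set.seed(1)
    n <- 1e7
    x <- rnorm(n); y <- rnorm(n)
    for (b in c(0.1, 1, 5)) {
      cat(sprintf("b = %g --> %.6f\n", b, mean(x^2 * y^2 / (1 + b * x^2))))
    }
    # should print values close to 0.792149, 0.344320, 0.118887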
Edit: I am bumping this because even if this was a homework problem, by now the OP has probably graduated or flunked out.
54,302 | Expectation of rational formula

This has an elementary solution. It employs a technique often found to be useful when integrating exponentials: a fraction can be expressed in terms of an integral of an exponential function.
Because $X$ and $Y$ are independent, the expectation splits into the product of $\mathbb{E}(Y^2)=1$ and
$$\mathbb{E}\left(\frac{aX^2}{1+bX^2}\right) = \frac{a}{b}\left(1 - \frac{1}{b}\mathbb{E}\left(\frac{1}{1/b + X^2}\right)\right).\tag{1}$$
Write $1/b = 2s$ (so that $s=1/(2b)$ is positive, too) and note that for any $x$,
$$\frac{1}{2s + x^2} = \int_0^\infty \exp(-(2s + x^2)t)\mathrm{d}t.$$
Apply Fubini's theorem to compute the expectation before performing this integral:
$$\eqalign{
\mathbb{E}\left(\frac{1}{1/b+X^2}\right) &= \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}}\exp(-x^2/2)\int_0^\infty \exp(-(2s + x^2)t)\mathrm{d}t \mathrm{d}x \\
&= \frac{1}{\sqrt{2\pi}}\int_0^\infty\int_{-\infty}^\infty \exp\left(-\frac{1}{2}\left(x^2+2t(2s + x^2)\right)\right) \mathrm{d}x\mathrm{d}t \\
&= \int_0^\infty\frac{e^{-2 s t}}{\sqrt{1+2t}} \left[\frac{\sqrt{1+2t}}{\sqrt{2\pi}}\int_{-\infty}^\infty \exp\left(-\frac{1}{2}(1+2t)x^2\right) \mathrm{d}x\right]\mathrm{d}t \\
&=\int_0^\infty\frac{e^{-2 s t}}{\sqrt{1+2t}} \mathrm{d}t.\tag{2}
}$$
The last equality follows by observing that the integral in brackets is the total probability of a Normal variable (with mean $0$ and variance $1/(1+2t)$), which is just $1$.
A substitution with $1+2t$ as the variable easily expresses this in terms of a $\chi^2(1)$ tail probability (computed as an incomplete Gamma function). Alternatively--to bring the ideas full circle and get back to Normal distribution probabilities--let's apply an aggressive substitution to clear the denominator in the integrand: letting $x^2 =2s(1+2t)$, deduce $dt = x dx/(2s)$, whence (writing $\Phi$ for the standard normal CDF)
$$\int_0^\infty\frac{e^{-2 s t}}{\sqrt{1+2t}} \mathrm{d}t = \frac{e^{s}\sqrt{2\pi}}{\sqrt{2s}}\frac{1}{\sqrt{2\pi}}\int_{\sqrt{2s}}^\infty\exp(-x^2/2)\mathrm{d}x = \frac{e^{s}\sqrt{2\pi}}{\sqrt{2s}}\left(1 - \Phi(\sqrt{2s})\right).$$
Plugging this into $(2)$ and then into $(1)$ and re-expressing $s$ as $1/(2b)$ yields
$$\mathbb{E}\left(\frac{aX^2Y^2}{1+bX^2}\right) = \frac{a}{b}\left(1 - e^{1/(2b)}\sqrt{\frac{2\pi}{b}}\left(1 - \Phi\left(\frac{1}{\sqrt{b}}\right)\right)\right).$$
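As a numerical check (a sketch in R; pnorm's upper tail gives $1-\Phi$), this closed form reproduces the simulation-verified values quoted in the earlier answer:

    closed_form <- function(a, b)
      (a / b) * (1 - exp(1 / (2 * b)) * sqrt(2 * pi / b) *
                   pnorm(1 / sqrt(b), lower.tail = FALSE))
    sapply(c(0.1, 1, 5), function(b) closed_form(1, b))
    # should agree with 0.792149, 0.344320, 0.118887 above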
54,303 | Expectation of rational formula

No real answer, but it makes things simpler:
$$\mathbb{E}Z=\mathbb{E}\frac{aX^{2}Y^{2}}{1+bX^{2}}=\frac{a}{b}\mathbb{E}Y^{2}\mathbb{E}\frac{bX^{2}}{1+bX^{2}}=\frac{a}{b}\mathbb{E}\left[1-\frac{1}{1+bX^{2}}\right]=\frac{a}{b}-\frac{a}{b}\mathbb{E}\frac{1}{1+bX^{2}}$$
So what actually remains to be found is $$\mathbb{E}\frac{1}{1+bX^{2}}$$
where $X$ has a standard normal distribution.
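A quick Monte Carlo illustration of the reduction, taking $a=b=1$ (a sketch; both estimates should agree with the value $0.3443\ldots$ reported above):

    set.seed(2)
    x <- rnorm(1e6); y <- rnorm(1e6)
    mean(x^2 * y^2 / (1 + x^2))   # direct estimate of E[Z]
    1 - mean(1 / (1 + x^2))       # a/b - (a/b) E[1/(1 + b X^2)] with a = b = 1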
54,304 | The Harris recurrence of a stepping-out slice-sampling-within-Gibbs MCMC

Slice sampling is a special case of Gibbs sampling, to the point that in Monte Carlo Statistical Methods, we started our chapters on Gibbs sampling with a first chapter on slice sampling (Chapter 8).
To quote verbatim from the book (p.326):
Slice sampling relies upon the decomposition of the density $f(x)$ as
$$ f(x) \propto \prod_{i=1}^k f_i(x)\,, $$ where the $f_i$'s are
positive functions, but not necessarily densities. For instance, in
a Bayesian framework with a flat prior, the $f_i(x)$ may be chosen as
the individual likelihoods. This decomposition can then be associated
with $k$ auxiliary variables $\omega_i$, rather than one as in the
fundamental theorem, in the sense that each $f_i(x)$ can be written as
an integral $$ f_i(x) = \int \mathbb{I}_{0\le \omega_i\le
f_i(x)}\,d\omega_i\,, $$ and that $f$ is the marginal distribution of
the joint distribution $$(x,\omega_1,\ldots,\omega_k) \sim
p(x,\omega_1,\ldots,\omega_k) \propto \prod_{i=1}^k \mathbb{I}_{0\le
\omega_i\le f_i(x)}\,. $$This particular demarginalization of $f$
introduces a larger dimensionality to the problem and induces a
generalization of the random walk of Section 8.1 which is to have
uniform proposals one direction at a time.
Now, why is slice sampling a particular case of Gibbs sampling? Simply because in the "augmented space" made of the original variables $X_i$'s and of the auxiliary slice variables $U_i$'s, the steps are sheer simulations from the full conditionals:
1. Generate from $p(u_1 | u_2,\ldots, u_n,x_{1}, \ldots,x_n)=\mathbb{I}_{0\le u_1\le p(x_1|x_2,\ldots,x_n)}$
2. Generate from $p(x_1| u_1,\ldots, u_n,x_{2}, \ldots,x_n)=\mathbb{I}_{0\le u_1\le p(x_1|x_2,\ldots,x_n)}$ $$\vdots$$
2i-1. Generate from $p(u_i | u_1,\ldots, u_{i-1},u_{i+1},\ldots,u_n,x_{1}, \ldots,x_n)=\mathbb{I}_{0\le u_i\le p(x_i|x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)}$
2i. Generate from $p(x_i| u_1,\ldots, u_n,x_{1},\ldots,x_{i-1},x_{i+1}, \ldots,x_n)=\mathbb{I}_{0\le u_i\le p(x_i|x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)}$ $$\vdots$$
2n-1. Generate from $p(u_n | u_1,\ldots, u_{n-1},x_{1}, \ldots,x_n)=\mathbb{I}_{0\le u_n\le p(x_n|x_1,\ldots,x_{n-1})}$
2n. Generate from $p(x_n| u_1,\ldots, u_n,x_{1}, \ldots,x_{n-1})=\mathbb{I}_{0\le u_n\le p(x_n|x_1,\ldots,x_{n-1})}$
So this is Gibbs sampling 101, only using uniform draws; hence a Metropolis-Hastings move that is accepted with probability one. If the support satisfies the connectivity property set by Besag (1994), the chain is irreducible and hence Harris positive recurrent.
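As a minimal illustration of the two alternating steps in one dimension, here is a sketch in R for a target whose slice $\{x : f(x)\ge u\}$ is available in closed form (a standard normal kernel); this is the plain Gibbs version, without any stepping-out:

    f <- function(x) exp(-x^2 / 2)       # unnormalized target
    n <- 1e5
    x <- numeric(n)                      # chain starts at 0
    for (i in 2:n) {
      u <- runif(1, 0, f(x[i - 1]))      # slice variable: uniform under f
      w <- sqrt(-2 * log(u))             # slice {x : f(x) >= u} = (-w, w)
      x[i] <- runif(1, -w, w)            # uniform draw on the slice
    }
    # hist(x, freq = FALSE); curve(dnorm(x), add = TRUE)  # agrees with N(0,1)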
54,305 | The Harris recurrence of a stepping-out slice-sampling-within-Gibbs MCMC

I prove that the stepping-out and shrinkage procedure satisfies detailed balance, but this is of course not enough to show irreducibility or ergodicity. And it's easy to construct examples in which there are regions with zero probability density in which a slice sampler that looks only at the part of the slice found by stepping out won't be ergodic. (This assumes the step size for stepping out is fixed; one could recover ergodicity by randomly picking the step size from an unbounded distribution.)
If the conditional distributions to which univariate slice sampling with stepping out are applied have non-zero probability density everywhere within their range, then I think one could show that the sampler is ergodic, but there are probably cases where it's still not geometrically ergodic.
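For reference, a minimal R sketch of one stepping-out-and-shrinkage update in the spirit of Neal (2003), omitting the optional cap on the number of step-out moves (the log-density logf and step size w are inputs of the sketch):

    slice_step <- function(x0, logf, w = 1) {
      logu <- logf(x0) + log(runif(1))     # auxiliary height, on the log scale
      L <- x0 - runif(1) * w; R <- L + w   # randomly positioned initial interval
      while (logf(L) > logu) L <- L - w    # step out left until off the slice
      while (logf(R) > logu) R <- R + w    # step out right until off the slice
      repeat {                             # shrinkage loop
        x1 <- runif(1, L, R)
        if (logf(x1) > logu) return(x1)    # accepted: x1 lies on the slice
        if (x1 < x0) L <- x1 else R <- x1  # else shrink towards x0
      }
    }
    # iterate slice_step repeatedly, e.g. with logf = function(z) dnorm(z, log = TRUE)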
By the way, thanks for looking at the stepping out / shrinkage methods. I've been puzzled why so many people just think of slice sampling as sampling independently from the slice, when the cases where this isn't possible were my main motivation for looking at slice sampling. And the use of slice sampling as a method that can be applied automatically to many problems is crucially dependent on being able to do something like stepping out and shrinkage.
54,306 | Do conjugate priors just lead to a posterior that is a modification of the parameters of the prior?

This question is actually somewhat subtle, and it brings to attention an interesting quirk of usage that I hadn't noticed before.
For every practical definition of conjugate distributions that I'm familiar with, it is the case that the posterior of a model using a conjugate prior is a modified form of the prior. The Wikipedia definition follows the "practicality" (convenience) convention, for example:
In Bayesian probability theory, if the posterior distributions $p(\theta|x)$ are in the same family as the prior probability distribution $p(\theta)$, the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function
However, a distinction can be found in the formal definition of conjugacy in Gelman's Bayesian Data Analysis, 3rd edition, p. 35:
If $\mathcal{F}$ is a class of sampling distributions $p(y|\theta)$ and $\mathcal{P}$ is a class of prior distributions for $\theta$, then the class $\mathcal{P}$ is conjugate for $\mathcal{F}$ if
$$
p(\theta|y)\in\mathcal{P} \text{ for all } p(\cdot|\theta)\in\mathcal{F} \text{ and } p(\cdot)\in\mathcal{P}.
$$
This definition is formally vague since if we choose $\mathcal{P}$ as the class of all distributions, then $\mathcal{P}$ is always conjugate no matter what class of sampling distribution is used.
Obviously the construction in the final sentence has little practical utility: if all distributions are conjugate, then the distinction between conjugate and non-conjugate distributions is meaningless. Instead, it is common to take $\mathcal{P}$ to be the set of all densities having the same functional form as the likelihood, giving rise to the practical convenience property of conjugacy, namely that the posterior has the same form as the prior.
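The textbook illustration of this convenience property is a Beta prior with a binomial likelihood: updating stays within the Beta family and only shifts the parameters (a sketch with arbitrary numbers):

    a0 <- 2; b0 <- 3                    # prior: Beta(a0, b0)
    k <- 7; n <- 10                     # data: k successes in n trials
    a1 <- a0 + k; b1 <- b0 + (n - k)    # posterior: Beta(a0 + k, b0 + n - k)
    curve(dbeta(x, a1, b1), 0, 1, ylab = "density")  # posterior
    curve(dbeta(x, a0, b0), add = TRUE, lty = 2)     # prior, same family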
54,307 | Feature subsampling with gradient boosting

A large motivation in restricting the number of predictors available to each learner in a random forest is to encourage variance between the trees. Because each tree has the same starting point, tricks like row and column subsampling are necessary to ensure that you don't have the same tree multiple times. This isn't nearly as big a problem for boosting, where trees are built residually to each other. Each tree gets a new, adjusted starting point for which a new, different tree structure will be optimal.
Subsampling by rows and columns still increases variance between trees and allows your model to converge faster with boosting, but it is not essential. $p/3$ or $\sqrt p$ seems like it would be too low for most boosting problems. Interaction signal will be harder to find if a pair of variables has such a small chance of appearing together in the same tree. I have seen occasional and marginal gains in predictive power at around $3/4$.
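For illustration, both kinds of subsampling are exposed in xgboost's R interface (hypothetical, untuned settings):

    library(xgboost)
    bst <- xgboost(data = as.matrix(mtcars[, -1]), label = mtcars$mpg,
                   nrounds = 100, eta = 0.1,
                   subsample = 0.8,           # row subsampling per tree
                   colsample_bytree = 0.75,   # feature subsampling, ~3/4 as above
                   objective = "reg:squarederror", verbose = 0)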
54,308 | Why is computing the Bayesian Evidence difficult?

I like @ShijiaBian's answer. I would add the following.
The normalizing constant is important because without it, (1) you won't have a valid probability distribution and (2) you can't assess relative probabilities of values of the parameter. For example, if you modeled data $x_t$ as Gaussian conditional on a mean $\theta$ that was modeled as Poisson, you would not be able to average the likelihood over the values of the parameter because the infinite sum over the product of the two PDFs' kernels is not available in closed form. Mathematically:
$$
\begin{align}
p(\theta) &= \text{Poisson}(\lambda)\\
p(x_t|\theta) &= \mathcal{N}(\theta, \sigma^2)\\
p(\theta | \mathbf{X}) &= \frac{p(\theta)\prod_tp(x_t|\theta)}{\sum_\Theta p(\theta)\prod_tp(x_t|\theta)}
\end{align}
$$
Expanding the numerator, you'll find that:
$$
p(\theta | \mathbf{X}) \propto \frac{1}{\theta!}\lambda^\theta(2\pi\sigma^2)^{-T/2}\prod_t\exp{\left[\frac{-1}{2\sigma^2}(x_t^2 - 2\theta x_t + \theta^2) - \frac{\lambda}{T}\right]}
$$
To normalize this function, you'd have to sum over all the possible (discrete) values of $\theta$: $0, 1, 2, \ldots, \infty$. This is impossible analytically because there is no closed-form expression for an infinite sum of the above form. If you don't do this, however, your function will not integrate to $1$ and you won't have a valid probability density. Furthermore, normalizing ensures that for each value of $\theta = \theta^*$, you can exactly determine the relative probability of $\theta^*$ relative to other values of $\theta$.
Expanding on this second point, if you only normalized over values of $\theta$, say, from $0$ through $10$, then you cannot compare how likely values of $\theta$ outside that range are to values inside that range. This does suggest, however, that if you have some belief about the range of values to which $\theta$ may be restricted, you could truncate your distribution to that range and perform the summation numerically within that range, like $0$ to $10$. Then, you would have a valid probability distribution (a truncated Poisson) over the range of values from $0$ to $10$. This is much harder, however, when $\theta$ is continuous (say, Beta or Gamma distributed), although you could perform numerical integration. Numerical integration is difficult in high dimensions, however, so you'd have to restrict the dimension of $\theta$ to something that is computationally feasible.
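A sketch of that truncated, numerically normalized posterior in R (toy data, with $\lambda$ and $\sigma$ assumed known):

    lambda <- 3; sigma <- 1
    x <- c(2.1, 3.4, 2.8)                        # toy observations
    theta <- 0:10                                # truncated support for theta
    log_num <- dpois(theta, lambda, log = TRUE) +
      sapply(theta, function(th) sum(dnorm(x, th, sigma, log = TRUE)))
    post <- exp(log_num - max(log_num))          # subtract max for stability
    post <- post / sum(post)                     # now a valid pmf on 0..10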
54,309 | Why is computing the Bayesian Evidence difficult?

The posterior distribution has no relationship with the law of total probability, even though they are similar looking.
The given $P_X(x)$ is the normalizing constant. The reason that this is hard to compute is because (1) the conjugacy property applies only to some specific distributions; (2) the prior and the likelihood function can be high-dimensional, which is very difficult to integrate; (3) the integral might not have a closed form.
This is the reason why sampling-based methods have to play a role in Bayesian approximation.
54,310 | What kind of distribution is this? (Number of stones until 2 are adjacent in a game of Go)

As I wrote in the comments,
I suspect that this distribution doesn't have a name, because it represents the outcome of a non-independent random process. But the description of the problem is binomial-flavored, so it's probably related to the binomial and Poisson distributions. In general I think these kinds of questions are misguided. If you're trying to model this outcome and want to know what kind of distribution to use, that's a more answerable question.
By "non-independent" here, I mean that the probability of success (two adjacent stones) changes at every step of the simulation, because the board arrangement changes at every step. I imagine the combinatorics involved here would make exact computations intractable.
User whuber adds the following insight (emphasis mine):
This distribution is unlikely to be named, parameterized, or studied as such because it's messy: the board is not homogeneous--it contains three different kinds of locations (central, edge, and corner cells). Thus obtaining an exact numerical answer is of little interest. There are standard methods for obtaining exact asymptotic answers (as the size of the board increases).
54,311 | What kind of distribution is this? (Number of stones until 2 are adjacent in a game of Go)

To simplify the issue: let the board be one-dimensional, of size $1 \times n$.
Equivalent problem
For each $k$-th step number we can ask the following equivalent question:
In how many ways can we distribute the $k$ stones such that exactly 2 or 3 of them touch (more groups, or bigger groups, are not valid end states)? These are the possible end states.
Then you have to consider in addition the number of ways to place the stones on the board such that one of the two touching or the middle of the three was the last stone placed on the board.
Gaps
The distribution question is equal to finding sufficient non-zero gaps between the stones. With $k$ stones, where 2 touch each other (for which there are $k-1$ ways, e.g. the touching stones are the 1st and 2nd, up to the $(k-1)$-th and $k$-th), there need to be $k-2$ gaps in between them, and there are between $k-2$ and $n-k$ squares to use for this (the process should be stopped before $2k\geq n+2$). This is not a definite number because the outside stones need not touch the sides.
Let there be $l$ squares to be distributed among $k-2$ non-zero gaps. The number of ways to do this is
$$f(l,k-2) = \dbinom{l-1}{k-3} $$
(see https://math.stackexchange.com/questions/58753/unique-ways-to-keep-n-balls-into-k-boxes )
Counting the ways to finish with an adjacent pair in $k$ steps
For the $l$ squares to be distributed in between the stones there are $n-k-l$ squares on the sides which can be split in $n-k-l+1$ ways.
So the number of end situations/states with $k$ stones is:
$$ N_{2states}(k) = \begin{cases} n-1 & \qquad \text{for } k=2 \\
\sum_{l=k-2}^{l=n-k} (k-1) (n-k-l+1) \dbinom{l-1}{k-3} & \qquad \text{for } k>2 \end{cases}$$
$$ N_{3states}(k) = \begin{cases} n-2 & \qquad \text{for } k=3 \\
\sum_{l=k-3}^{l=n-k} (k-2) (n-k-l+1) \dbinom{l-1}{k-4} & \qquad \text{for } k>3 \end{cases}$$
The number of ways to end in these states (such that one of the two adjacent stones, or the middle of the three, was placed in the last step) is:
$$N_{endings}(k) = N_{2states}(k) \cdot 2 (k-1)! + N_{3states}(k) \cdot (k-2)!$$
So the probability to end in $k$ steps is
$$P(k) = \frac{N_{endings}(k)}{n!/(n-k)!}$$
If you wish to stick a name to this distribution, then you could call it a sum of negative binomial distributions (i.e. how long it takes before a success/failure), but with varying probabilities at each step (success becomes more likely with more stones).
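A direct simulation of the $1 \times n$ process offers a sanity check on these counts (a sketch; sample.int avoids R's length-one sample quirk):

    sim_k <- function(n = 20) {
      occ <- rep(FALSE, n + 2)               # pad both ends of the board
      free <- 2:(n + 1)
      repeat {
        cell <- free[sample.int(length(free), 1)]
        k <- n - length(free) + 1            # index of the stone being placed
        if (occ[cell - 1] || occ[cell + 1]) return(k)
        occ[cell] <- TRUE
        free <- setdiff(free, cell)
      }
    }
    table(replicate(1e5, sim_k(20))) / 1e5   # empirical P(k), to compare with the formula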
54,312 | What kind of distribution is this? (Number of stones until 2 are adjacent in a game of Go)

Your second graph looks log-normal to me. See sigma=1, mu=0 here: https://en.wikipedia.org/wiki/Log-normal_distribution.
Can you fit your data to a log-normal and report the results?
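For instance, assuming the simulated stopping times are in a vector draws, MASS::fitdistr gives maximum-likelihood estimates of the log-normal parameters:

    library(MASS)
    fit <- fitdistr(draws, "lognormal")
    fit$estimate   # meanlog and sdlog
    # e.g. compare hist(draws, freq = FALSE) against the fitted dlnorm curve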
54,313 | Using the central limit theorem

The binomial distribution of size $n$ and probability $p$ has probability mass function $P(Y_n=k)=\binom{n}{k}p^k(1-p)^{n-k}$. Setting $p=2/3$ gives:
$$P(Y_n=k)=\binom{n}{k}\left(\frac{2}{3}\right)^k\left(\frac{1}{3}\right)^{n-k}=\frac{1}{3^n}\binom{n}{k}2^k.$$
It follows that the left-hand-side of your equation sneakily represents the probability:
$$\lim_{n\rightarrow\infty}P(|3Y_n-2n|\leq\sqrt{2n}x).$$
$Y_n$ has mean $\frac{2}{3}n$. Do you now see how to apply the CLT (after a few manipulations)?
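For reference, one route through those manipulations: since $\operatorname{Var}(Y_n)=n\cdot\frac{2}{3}\cdot\frac{1}{3}=\frac{2n}{9}$, the event rewrites as a standardized one,
$$|3Y_n-2n|\le\sqrt{2n}\,x \iff \left|\frac{Y_n-\frac{2}{3}n}{\sqrt{2n/9}}\right|\le x,$$
so the CLT gives the limiting value $\Phi(x)-\Phi(-x)=2\Phi(x)-1$.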
54,314 | dropout regularization in gbm

Check out this paper: DART: Dropouts meet Multiple Additive Regression Trees (Arxiv PDF).
Their interpretation of dropout is this: instead of developing the next tree from the residual of all previous trees, develop the next tree from the residual of a sample of previous trees. The effect on the model is similar in that individual components are forced to be more self-sufficient. They observe some reasonably significant gains.
As Soren points out, colsample_bytree and colsample_bylevel are analogous to input-layer dropout.
DART is available in xgboost already by setting booster="dart".
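A hypothetical configuration through xgboost's R interface (rate_drop and skip_drop are the DART-specific parameters; the values here are illustrative only):

    library(xgboost)
    bst <- xgboost(data = as.matrix(mtcars[, -1]), label = mtcars$mpg,
                   booster = "dart", nrounds = 200, eta = 0.1,
                   rate_drop = 0.1,   # fraction of existing trees dropped each round
                   skip_drop = 0.5,   # probability of skipping dropout in a round
                   objective = "reg:squarederror", verbose = 0)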
54,315 | Relation between Bayesian analysis and Bayesian hierarchical analysis?

In my view, hierarchical modeling in a Bayesian setting mainly refers to the building of a complex prior structure. Consider a parameter of interest $\theta_{0}$ and your observations $(x_i)$.
Now, consider for example that you are adding a supplemental layer to your model $p(\theta_0|\theta_1)$ through a hyperprior $p(\theta_1)$ on $\theta_1$; then $p(\theta_0)$ can be written:
$$
p(\theta_0)=\int_R p(\theta_0|\theta_1)p(\theta_1)d\theta_1,
$$
and so on for $\theta_2, \ldots$.
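As a concrete instance where this integral is tractable (a standard example, not from the question): if $\theta_0\mid\theta_1 \sim \mathcal N(\theta_1,\tau^2)$ and $\theta_1 \sim \mathcal N(\mu,\omega^2)$, then the marginal prior is again Gaussian,
$$
p(\theta_0)=\int_{\mathbb R} \mathcal N(\theta_0\mid\theta_1,\tau^2)\,\mathcal N(\theta_1\mid\mu,\omega^2)\,d\theta_1=\mathcal N(\theta_0\mid\mu,\tau^2+\omega^2).
$$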
The same goes for the observation model: consider that your parameter of interest $\theta_0$ is not directly related to the observations but to another parameter $\theta_1$ that is itself related to the observations:
$$
p((x_i)|\theta_0)=\int_R p((x_i)|\theta_1)p(\theta_1|\theta_0) d\theta_1.
$$
To sum up: in principle, you can always (to the best of my knowledge) marginalize the hierarchical structure to get something like $p(\theta_0|x) \propto p(x|\theta_0) \cdot p(\theta_0)$, i.e. the simplest Bayes formulation. However, most of the time the integrals are intractable and we need to work with all the latent variables of the prior structure. So, IMHO, a hierarchical Bayesian model is only a decomposed Bayesian model (assuming that we call a Bayesian model something of the simplest form $p(\theta_0|x) \propto p(x|\theta_0) \cdot p(\theta_0)$).
Finally, to answer your last question ("Is it enough for any statistical model which uses Bayes theorem to be categorized under Bayesian analysis/statistics?"), I would say no. A model can be qualified as a Bayesian model if it relies on the Bayesian interpretation of probability (https://en.wikipedia.org/wiki/Bayesian_probability) and in particular if the posterior $p(\theta|x_i)$ makes any sense. Bayes theorem can be used in other contexts; see the related answer to the question Can frequentists use Bayes theorem?
54,316 | Choosing words in a topic, which cut-off for LDA topics?

It is important to remember that topic models such as LDA were primarily developed for unsupervised text summarization. So often, there is not a "best" choice for how many top words to show. Most research papers on topic models tend to use the top 5-20 words. If you use more than 20 words, then you start to defeat the purpose of succinctly summarizing the text.
A tolerance $\epsilon > 0.01$ is far too low for showing which words pertain to each topic. A primary purpose of LDA is to group words such that the topic words in each topic are highly probable within that topic. If such a low threshold is chosen, then many, many words will appear in each topic, again defeating the purpose of succinct text summarization. To extract the most probable words, you would be better off choosing a threshold of $\epsilon > 0.9$ or maybe $\epsilon > 0.8$.
The issue of seeing wordless topics in general when using Gensim is probably because Gensim has its own tolerance parameter "minimum_probability". This parameter defaults to 0.01 (this is explained in the Gensim LDA documentation). If you want to see all the words per topic, regardless of their low probability of appearing in the topic, you can set minimum_probability = 0.
For LDA, you are best off using the normalized probabilities (using the "get_topic_terms" function through the ldamodel) because they are the most interpretable. I am not intimately familiar with how Gensim estimates the topic-word probabilities, but the unnormalized values are probably a result of Bayesian estimation where it's not relevant to directly estimate the denominator because (as you've said) it's just normalization.
54,317 | Simple kNN example

Using Anderson's iris data set, available as iris {datasets} in R, I worked on a makeshift function (simply to make sure I got the idea) to predict three species of Iris based on different botanical measurements in the dataset.
We want to predict the actual species of Iris (Iris setosa, Iris virginica and Iris versicolor) based on the measurements of the sepals and petals. Since the species are categorical levels, this is a ML classification problem.
It would be very easy to visualize if there were only two dimensions (or variables) being measured as the predictors. For instance, if we were just measuring sepal length and sepal width:
Each point could be considered as a vector from the origin, and the distance to an adjacent point calculated simply as $\small \sqrt{\sum_{\text{coord}}(\text{coord}_i - \text{coord}_j)^2}$, the sum running over the measured coordinates; this corresponds to the length of the vector spanning from one point to its adjacent entry in the dataset. You could simply say that you are measuring the Euclidean distance between any given point and $k$ adjacent points, and then tabulating the number of setosa, versicolor and virginica; winner takes all - whichever species has the highest count among the closest $k$ points is used as the predicted label. In case of a tie, a coin can be flipped to select the winner.
The reason for the vector notion is that in this case there are more than two variables used to predict the species. It looks like this:
So we have to just imagine every point as a vector in a 4-dimensional hyperspace - Dali could paint this data cloud levitating on a hypercube over the Mediterranean; R, not so sure... Fortunately, linear algebra doesn't require much creative inspiration: each variable measured for each data point forms a vector, and the distance to other vectors is simply calculated as the length of the vector extending from one point to its $k$ neighboring entries.
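In R, such a distance between two of these four-dimensional "iris vectors" is a one-liner; for instance, between rows 1 and 51 (chosen arbitrarily):

    p <- as.numeric(iris[1, 1:4])    # first setosa observation
    q <- as.numeric(iris[51, 1:4])   # first versicolor observation
    sqrt(sum((p - q)^2))             # Euclidean distance; equals dist(rbind(p, q))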
I have put together a function in R to do just that for this dataset not so much to rediscover the wheel, but to make sure I had to work through all the hurdles of putting into practice this intuitive system. It is data-specific, but easy to adapt to other datasets. The code is here. The results on the testing set with $k=13$ are not too far off the built-in function in R, knn {class}, and look quite on target on this tabulation of the results:
    > print(table(predicted = data_test[,6], actual = data_test[,5]))
                actual
    predicted    setosa versicolor virginica
      setosa         22          0         0
      versicolor      0         11         0
      virginica       0          4        23
    > mean(data_test[,6] == data_test[,5]) # Accuracy rate
    [1] 0.9333333
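For comparison, a minimal call to the built-in knn {class} on a random half split (the split and seed here are arbitrary, so the exact numbers will differ slightly from the table above):

    library(class)
    set.seed(42)
    idx <- sample(nrow(iris), nrow(iris) / 2)
    pred <- knn(train = iris[idx, 1:4], test = iris[-idx, 1:4],
                cl = iris$Species[idx], k = 13)
    table(predicted = pred, actual = iris$Species[-idx])
    mean(pred == iris$Species[-idx])   # accuracy, comparable to the table above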
As a related counterpoint in unsupervised ML, if we didn't have the labels identifying the species, we could have run instead a k-means clustering, for which, and as a conceptual exercise, I include the code here. Serving as a mere illustrative extension of the original answer, I didn't split the data into training and testing. The plots were virtually identical to the ones above, albeit without the pertinent species labels.
If instead we resort to the available R packages, and plotting the clusters after performing PCA dimensionality reduction, we get the following separation just using the first two components (clusplot with labeled examples):
The color shading parallels the overlap between virginica and versicolor on the original scatterplot matrix above, with setosa more clearly separable.
Using the Anderson's Iris data set available in the iris {datasets} in R, I worked on a makeshift function (simply to make sure I got the idea) to predict three species of Iris based on different botanical measurements in the dataset:
We want to predict the actual Iris species (Iris setosa, Iris virginica and Iris versicolor) based on the measurements of the sepals and petals. Since the species are categorical levels, this is an ML classification problem.
It would be very easy to visualize if there were only two dimensions (or variables) being measured as the predictors. For instance, if we were just measuring sepal length and sepal width:
Each point could be considered as a vector from the origin, and the distance to other adjacent points be calculated simply as $\small \sqrt{\displaystyle\sum_{\text{coord}\,=\,x}^{\text{coord}\,=\,y}(\text{coord}_i - \text{coord}_j)^2}$, corresponding to the length of the vector spanning from one point to its adjacent entry in the dataset. You could simply say that you are measuring the Euclidean distance between any given point and $k$ adjacent points, and then tabulating the number of setosa, versicolor and virginica, winner takes all - whichever species with the highest number of counts among the closest $k$ points is used as the predicted label. In case of a tie, a coin can be flipped to select the winner.
The reason for the vector notion is that in this case there are more than two variables used to predict the species. It looks like this:
So we have to just imagine every point as a vector in a 4-dimensional hyperspace - Dali could paint this data cloud levitating on a hypercube over the Mediterranean; R, not so sure... Fortunately, linear algebra doesn't require much creative inspiration: each variable measured for each data point forms a vector, and the distance to other vectors is simply calculated as the length of the vector extending from one point to its $k$ neighboring entries.
I have put together a function in R to do just that for this dataset, not so much to reinvent the wheel as to make sure I worked through all the hurdles of putting this intuitive system into practice. It is data-specific, but easy to adapt to other datasets. The code is here. The results on the testing set with $k=13$ are not too far off the built-in R function, knn {class}, and look quite on target in this tabulation of the results:
> print(table(predicted = data_test[,6], actual = data_test[,5]))
actual
predicted setosa versicolor virginica
setosa 22 0 0
versicolor 0 11 0
virginica 0 4 23
> mean(data_test[,6] == data_test[,5]) # Accuracy rate
[1] 0.9333333
As a related counterpoint in unsupervised ML, if we didn't have the labels identifying the species, we could have run instead a k-means clustering, for which, and as a conceptual exercise, I include the code here. Serving as a mere illustrative extension of the original answer, I didn't split the data into training and testing. The plots were virtually identical to the ones above, albeit without the pertinent species labels.
If instead we resort to the available R packages and plot the clusters after performing PCA dimensionality reduction, we get the following separation using just the first two components (clusplot with labeled examples):
The color shading parallels the overlap between virginica and versicolor on the original scatterplot matrix above, with setosa more clearly separable. | Simple kNN example
Using the Anderson's Iris data set available in the iris {datasets} in R, I worked on a makeshift function (simply to make sure I got the idea) to predict three species of Iris based on different bota |
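A minimal R sketch of the distance-and-vote procedure described above (my own illustration, not the answer's linked function; the split size, $k=13$ and the first-max tie-break are assumptions):
set.seed(1)
idx   <- sample(nrow(iris), 90)               # hold out 60 rows for testing
train <- iris[idx, ];  test <- iris[-idx, ]
knn_predict <- function(x, k = 13) {
  # Euclidean distance from x to every training point (all 4 measurements)
  d <- sqrt(colSums((t(train[, 1:4]) - as.numeric(x))^2))
  votes <- train$Species[order(d)[1:k]]
  names(which.max(table(votes)))              # majority vote; ties go to the first maximum
}
pred <- apply(test[, 1:4], 1, knn_predict)
mean(pred == test$Species)                    # accuracy on the held-out set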
54,318 | Simple kNN example | I assume you've read the Wikipedia kNN Entry? (It has a diagram illustrating how it works in 2 dimensions.)
As a simple example, assume you're looking to classify homes by "Well-maintained" or "Not well-maintained". You have a map and you are able to stick pins on this map: green for "Well-maintained" and red for "Not well-maintained". You go online and find some recent photos of a couple dozen houses in your section of the town and as you look at each one, you place either a red or green pin in your map at the location of the house.
Then your realtor friend calls you and says, "Hey, I have a call about a house at such-and-such address and I hear you are researching how well-maintained houses are. Is this one likely to be well-maintained?" So you find that address on your map and look at the 10 nearest pins and 8 out of 10 are green. So you tell your friend, "It looks like there's a fairly high chance that it's well-maintained."
In this example, $k=10$, and the distance was the literal distance (based on lat/lon) of houses. We're making the assumption that blocks, streets, neighborhoods tend to be well-maintained or not, which is somewhat plausible but of course there are always exceptions.
So, to generalize a bit, you first gather a set of points that you know the classification of already. These will be your exemplars. When you have a new point that comes in and you want to classify it, you look at the $k$ exemplars that are closest to the new point and the most common classification is what you decide your new point must be. Obviously, if all $k$ nearest exemplars are from one class, you're pretty certain of your new classification. And if the $k$ nearest exemplars fall into $k$ different classes you really can't make a decision at all. Between those two extremes, you have varying levels of confidence.
With $k=1$, the new point is considered to be of the same class as its nearest exemplar, no voting necessary. But you're more likely to make mistakes because of the exceptions I mentioned. Larger $k$ allows more votes and in general that leads to more stable outcomes, but you also end up with more distant exemplars voting and you homogenize out local variations.
In real problems, you will not have lat/lon of houses, but a lot of facts/measurements for each data point. For a person, maybe height, weight, age, smoker/non-smoker, etc. The distance between people is more abstract, but you can come up with such a distance and do the same thing -- in as many dimensions as facts/measurements. | Simple kNN example | I assume you've read the Wikipedia kNN Entry? (It has a diagram illustrating how it works in 2 dimensions.)
As a simple example, assume you're looking to classify homes by "Well-maintained" or "Not we | Simple kNN example
I assume you've read the Wikipedia kNN Entry? (It has a diagram illustrating how it works in 2 dimensions.)
As a simple example, assume you're looking to classify homes by "Well-maintained" or "Not well-maintained". You have a map and you are able to stick pins on this map: green for "Well-maintained" and red for "Not well-maintained". You go online and find some recent photos of a couple dozen houses in your section of the town and as you look at each one, you place either a red or green pin in your map at the location of the house.
Then your realtor friend calls you and says, "Hey, I have a call about a house at such-and-such address and I hear you are researching how well-maintained houses are. Is this one likely to be well-maintained?" So you find that address on your map and look at the 10 nearest pins and 8 out of 10 are green. So you tell your friend, "It looks like there's a fairly high chance that it's well-maintained."
In this example, $k=10$, and the distance was the literal distance (based on lat/lon) of houses. We're making the assumption that blocks, streets, neighborhoods tend to be well-maintained or not, which is somewhat plausible but of course there are always exceptions.
So, to generalize a bit, you first gather a set of points that you know the classification of already. These will be your exemplars. When you have a new point that comes in and you want to classify it, you look at the $k$ exemplars that are closest to the new point and the most common classification is what you decide your new point must be. Obviously, if all $k$ nearest exemplars are from one class, you're pretty certain of your new classification. And if the $k$ nearest exemplars fall into $k$ different classes you really can't make a decision at all. Between those two extremes, you have varying levels of confidence.
With $k=1$, the new point is considered to be of the same class as its nearest exemplar, no voting necessary. But you're more likely to make mistakes because of the exceptions I mentioned. Larger $k$ allows more votes and in general that leads to more stable outcomes, but you also end up with more distant exemplars voting and you homogenize out local variations.
In real problems, you will not have lat/lon of houses, but a lot of facts/measurements for each data point. For a person, maybe height, weight, age, smoker/non-smoker, etc. The distance between people is more abstract, but you can come up with such a distance and do the same thing -- in as many dimensions as facts/measurements. | Simple kNN example
I assume you've read the Wikipedia kNN Entry? (It has a diagram illustrating how it works in 2 dimensions.)
As a simple example, assume you're looking to classify homes by "Well-maintained" or "Not we |
54,319 | GLM coefficient estimates distribution | Assuming $\hat{\beta}$ is the MLE, we do not know the distribution of $\hat{\beta}$ but we do know the asymptotic distribution of $\sqrt n (\hat{\beta} - \beta)$ to be $N\bigg(0, \bigg(E\Big[-\frac{\partial^2 \mathcal{L}(\beta)}{\partial \beta \partial \beta^T}\Big]\bigg)^{-1}\bigg)$.
Rearranging terms we say $\hat\beta$ is approximately $ N\bigg(\beta, \frac{1}{n} \bigg(E\Big[-\frac{\partial^2 \mathcal{L}(\beta)}{\partial \beta \partial \beta^T}\Big]\bigg)^{-1}\bigg)$. We cannot say this is an asymptotic distribution because as $ n \rightarrow \infty$ the variance will be zero. | GLM coefficient estimates distribution | Assuming $\hat{\beta}$ is the MLE, we do not know the distribution of $\hat{\beta}$ but we do know the asymptotic distribution of $\sqrt n (\hat{\beta} - \beta)$ to be $N\bigg(0, \bigg(E\Big[-\frac{\p | GLM coefficient estimates distribution
Assuming $\hat{\beta}$ is the MLE, we do not know the distribution of $\hat{\beta}$ but we do know the asymptotic distribution of $\sqrt n (\hat{\beta} - \beta)$ to be $N\bigg(0, \bigg(E\Big[-\frac{\partial^2 \mathcal{L}(\beta)}{\partial \beta \partial \beta^T}\Big]\bigg)^{-1}\bigg)$.
Rearranging terms we say $\hat\beta$ is approximately $ N\bigg(\beta, \frac{1}{n} \bigg(E\Big[-\frac{\partial^2 \mathcal{L}(\beta)}{\partial \beta \partial \beta^T}\Big]\bigg)^{-1}\bigg)$. We cannot say this is an asymptotic distribution because as $ n \rightarrow \infty$ the variance will be zero. | GLM coefficient estimates distribution
Assuming $\hat{\beta}$ is the MLE, we do not know the distribution of $\hat{\beta}$ but we do know the asymptotic distribution of $\sqrt n (\hat{\beta} - \beta)$ to be $N\bigg(0, \bigg(E\Big[-\frac{\p |
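A small simulation sketch of this approximation (my own illustration; the logistic model, true slope and sample sizes are arbitrary choices): the spread of the MLE shrinks roughly like $1/\sqrt{n}$, matching the $\frac{1}{n}$ variance above.
set.seed(1)
sd_betahat <- function(n, reps = 500) {
  est <- replicate(reps, {
    x <- rnorm(n)
    y <- rbinom(n, 1, plogis(0.5 * x))          # logistic GLM with true slope 0.5
    coef(glm(y ~ x, family = binomial))["x"]
  })
  sd(est)
}
sapply(c(100, 400, 1600), sd_betahat)  # each roughly half the previous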
54,320 | GLM coefficient estimates distribution | Your question is, imho, not directly related to GLMs but can be answered using more simple examples that do not require likelihood approaches:
Take $X_i\sim iid (\mu,1)$, i.e., the $X_i$ are independently and identically distributed from a distribution with some mean $\mu$ and a variance of 1 (for simplicity).
Then, by the central limit theorem we know that as $n\to\infty$,
$$\sqrt{n}(\bar{X}-\mu)\to_dN(0,1)$$
Basically, we "magnify" the difference $\bar{X}-\mu$ which itself, by the law of large numbers, vanishes as $n\to\infty$.
We can then approximately say that $\bar{X}\sim N(\mu,1/n)$. The "logic" is as follows:
$$Var(\sqrt{n}(\bar{X}-\mu))=1\Rightarrow nVar(\bar{X}-\mu)=1\Rightarrow Var(\bar{X}-\mu)=1/n\Rightarrow Var(\bar{X})=1/n,$$
where the last implication is because $\mu$ is nonrandom. This is only an approximation because the leftmost statement is the asymptotic variance, i.e., the variance of $\sqrt{n}(\bar{X}-\mu)$ as $n\to\infty$, so that the rightmost statement ought to consequently read $Var(\bar{X})=0$. That, by the LLN, is correct, but the approximation often turns out to be useful nevertheless. | GLM coefficient estimates distribution | Your question is, imho, not directly related to GLMs but can be answered using more simple examples that do not require likelihood approaches:
Take $X_i\sim iid (\mu,1)$, i.e., the $X_i$ are independe | GLM coefficient estimates distribution
Your question is, imho, not directly related to GLMs but can be answered using more simple examples that do not require likelihood approaches:
Take $X_i\sim iid (\mu,1)$, i.e., the $X_i$ are independently and identically distributed from a distribution with some mean $\mu$ and a variance of 1 (for simplicity).
Then, by the central limit theorem we know that as $n\to\infty$,
$$\sqrt{n}(\bar{X}-\mu)\to_dN(0,1)$$
Basically, we "magnify" the difference $\bar{X}-\mu$ which itself, by the law of large numbers, vanishes as $n\to\infty$.
We can then approximately say that $\bar{X}\sim N(\mu,1/n)$. The "logic" is as follows:
$$Var(\sqrt{n}(\bar{X}-\mu))=1\Rightarrow nVar(\bar{X}-\mu)=1\Rightarrow Var(\bar{X}-\mu)=1/n\Rightarrow Var(\bar{X})=1/n,$$
where the last implication is because $\mu$ is nonrandom. This is only an approximation because the leftmost statement is the asymptotic variance, i.e., the variance of $\sqrt{n}(\bar{X}-\mu)$ as $n\to\infty$, so that the rightmost statement ought to consequently read $Var(\bar{X})=0$. That, by the LLN, is correct, but the approximation often turns out to be useful nevertheless. | GLM coefficient estimates distribution
Your question is, imho, not directly related to GLMs but can be answered using more simple examples that do not require likelihood approaches:
Take $X_i\sim iid (\mu,1)$, i.e., the $X_i$ are independe |
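A quick numerical check of the $Var(\bar{X})\approx 1/n$ approximation above (my own sketch):
set.seed(1)
n <- 50
xbar <- replicate(10000, mean(rnorm(n)))   # mu = 0, variance 1
var(xbar)                                  # close to 1/n = 0.02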
54,321 | Class Balancing in Deep Neural Network | Yes, they need to compute the weights once, but not assign them to the whole loss. Instead, each pixel in the loss (before summing in both directions) should take a weight. So overall 'weights' is a tensor just like the others. Let's say there are only two classes, and the frequency of $ C_1 $ is twice that of $ C_2 $. One of the pixels is correctly predicted as $ C_2 $ with confidence [0.3 0.7]. The loss is $ sum([1, 0].*log[0.3, 0.7]) $. When the weight is included the loss is $ sum([1, 0].*log[0.3, 0.7] * 2) $, because $ C_2 $ should count twice to restore balance. So for each pixel, the weight is either 1 or 2 depending on which class it belongs to. This constructs a weight matrix. However, it can be convenient to think of it as a tensor, because the weight value corresponding to the other class is multiplied by 0 in 'true_dist'. In this case the loss for a single pixel can be written as $ sum([1, 0].*log[0.3, 0.7].*[2, 1]) $, so it doesn't affect the result. In this way you can make it a point-wise multiplication.
PS: This didn't fit in the comment section
EDIT: I can't edit your code because the weight calculation section is not included. If you calculated weights for N classes, then it's a 1xN vector. You will construct a 3D array, $ W_{ijk} $, with these weights. The first and second dimensions of this array correspond to 'img_col' and 'img_row' respectively. The third dimension will be a function of 'true_dist', $ T_{ijk}$, at the corresponding pixel. I guess this is where the confusion lies, so I will try to be more explicit. Let's say N is 4 and the weight vector you calculated is denoted $ w = [w_1,w_2,w_3,w_4]$. The weight values are inversely related to the frequency of each class. If a pixel $ (a,b) $ belongs to class $ C_3 $ then $ T_{ab.} = [0, 0, 1, 0] $ and $ W_{ab.} = T_{ab.}.*([w_1,w_2,w_3,w_4]) = [0,0,w_3,0]$ where $ .* $ is point-wise multiplication. So only the 3rd class's weight value affects that individual pixel (a,b). As you see, $ W $ is a function of $ T $. What you need to do is evaluate $ W $ before passing it in to the loss. You can make a function which takes $ T $ as input.
The 'weights' in the code, denoted $ W $ (capital), is a 3D array. The vector $ w $ corresponds to the inverse frequency values for each class.
EDIT2: Sorry for the mess I created here. You don't need the point-wise multiplication to create $ W_{ijk}$ because it is already done in the loss function. So just replicate $ w $ to each pixel of $ W $.
$ \forall (a,b), W_{ab.} = [w_1,w_2,w_3,w_4]$ | Class Balancing in Deep Neural Network | Yes, they need to compute the weights once, but not assign them to the whole loss. Instead, each pixel in the loss (before summing in both directions) should take a weight. So overall 'weights' is a
Yes, they need to compute the weights once, but not assign them to the whole loss. Instead, each pixel in the loss (before summing in both directions) should take a weight. So overall 'weights' is a tensor just like the others. Let's say there are only two classes, and the frequency of $ C_1 $ is twice that of $ C_2 $. One of the pixels is correctly predicted as $ C_2 $ with confidence [0.3 0.7]. The loss is $ sum([1, 0].*log[0.3, 0.7]) $. When the weight is included the loss is $ sum([1, 0].*log[0.3, 0.7] * 2) $, because $ C_2 $ should count twice to restore balance. So for each pixel, the weight is either 1 or 2 depending on which class it belongs to. This constructs a weight matrix. However, it can be convenient to think of it as a tensor, because the weight value corresponding to the other class is multiplied by 0 in 'true_dist'. In this case the loss for a single pixel can be written as $ sum([1, 0].*log[0.3, 0.7].*[2, 1]) $, so it doesn't affect the result. In this way you can make it a point-wise multiplication.
PS: This didn't fit in the comment section
EDIT: I can't edit your code because the weight calculation section is not included. If you calculated weights for N classes, then it's a 1xN vector. You will construct a 3D array, $ W_{ijk} $, with these weights. The first and second dimensions of this array correspond to 'img_col' and 'img_row' respectively. The third dimension will be a function of 'true_dist', $ T_{ijk}$, at the corresponding pixel. I guess this is where the confusion lies, so I will try to be more explicit. Let's say N is 4 and the weight vector you calculated is denoted $ w = [w_1,w_2,w_3,w_4]$. The weight values are inversely related to the frequency of each class. If a pixel $ (a,b) $ belongs to class $ C_3 $ then $ T_{ab.} = [0, 0, 1, 0] $ and $ W_{ab.} = T_{ab.}.*([w_1,w_2,w_3,w_4]) = [0,0,w_3,0]$ where $ .* $ is point-wise multiplication. So only the 3rd class's weight value affects that individual pixel (a,b). As you see, $ W $ is a function of $ T $. What you need to do is evaluate $ W $ before passing it in to the loss. You can make a function which takes $ T $ as input.
The 'weights' in the code, denoted $ W $ (capital), is a 3D array. The vector $ w $ corresponds to the inverse frequency values for each class.
EDIT2: Sorry for the mess I created here. You don't need the point-wise multiplication to create $ W_{ijk}$ because it is already done in the loss function. So just replicate $ w $ to each pixel of $ W $.
$ \forall (a,b), W_{ab.} = [w_1,w_2,w_3,w_4]$ | Class Balancing in Deep Neural Network
Yes, they need to compute the weights once, but not assign them to the whole loss. Instead, each pixel in the loss (before summing in both directions) should take a weight. So overall 'weights' is a
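A toy R sketch of the per-pixel weighted cross-entropy described above (shapes, names and the minus-sign convention are mine): each pixel's one-hot row is multiplied point-wise by the replicated class-weight vector $ w $.
w <- c(2, 1)                          # class weights, e.g. inverse class frequencies
# rows = pixels; T_true = one-hot true labels, P_pred = predicted probabilities
T_true <- rbind(c(0, 1), c(1, 0), c(1, 0), c(0, 1))
P_pred <- rbind(c(0.3, 0.7), c(0.8, 0.2), c(0.6, 0.4), c(0.1, 0.9))
W <- matrix(w, nrow(T_true), length(w), byrow = TRUE)  # replicate w to every pixel
loss <- -sum(T_true * log(P_pred) * W)                 # weighted cross-entropy
loss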
54,322 | Why is rejection sampling with acceptance probability 2/3 for Beta(2,2) not slower than `rbeta(N,2,2)`? | Here is Cheng & Feast Gamma generator code, on which R rbeta and rgamma functions are based:
function x=gamrnd_cheng(alpha)
% Gamma(alpha,1) generator using Cheng--Feast method
% Algorithm 4.35
c1=alpha-1; c2=(alpha-1/(6*alpha))/c1; c3=2/c1; c4=1+c3;
c5=1/sqrt(alpha);
flag=0;
while flag==0;
U1=rand; U2=rand;
if alpha>2.5
U1=U2+c5*(1-1.86*U1);
end
W=c2*U2/U1;
flag=(U1<1)&&(U1>0)&&(((c3*U1+W+1/W)<c4)||((c3*log(U1)-log(W)+W)<1));
end
x=c1*W;
which can be recycled into a Beta generator at about the same cost. It uses two uniforms, plus a rejection condition, so for the values of $(\alpha,\beta)$ that you picked, i.e., for a rejection probability of $1/3$, the accept-reject algorithm may be equally efficient. However, you should also run the comparison for larger non-integer values of $(\alpha,\beta)$ to check whether or not the Cheng & Feast Gamma generator remains efficient.
For instance, Joe Whittaker's Beta $\mathfrak{B}(\alpha,\beta)$ generator has a rejection condition of the form$$U_1^{1/\alpha}+U_2^{1/\beta}>1$$which occurs with increasing frequency as $\alpha$ and $\beta$ increase. I remember Luc Devroye mentioning that, for $\mathfrak{G}(\alpha,1)$ distributions, it is not possible to find a bound on the computing time that is independent of $\alpha$... | Why is rejection sampling with acceptance probability 2/3 for Beta(2,2) not slower than `rbeta(N,2,2 | Here is Cheng & Feast Gamma generator code, on which R rbeta and rgamma functions are based:
function x=gamrnd_cheng(alpha)
% Gamma(alpha,1) generator using Cheng--Feast method
% Algorithm 4.35
c1=alp | Why is rejection sampling with acceptance probability 2/3 for Beta(2,2) not slower than `rbeta(N,2,2)`?
Here is Cheng & Feast Gamma generator code, on which R rbeta and rgamma functions are based:
function x=gamrnd_cheng(alpha)
% Gamma(alpha,1) generator using Cheng--Feast method
% Algorithm 4.35
c1=alpha-1; c2=(alpha-1/(6*alpha))/c1; c3=2/c1; c4=1+c3;
c5=1/sqrt(alpha);
flag=0;
while flag==0;
U1=rand; U2=rand;
if alpha>2.5
U1=U2+c5*(1-1.86*U1);
end
W=c2*U2/U1;
flag=(U1<1)&&(U1>0)&&(((c3*U1+W+1/W)<c4)||((c3*log(U1)-log(W)+W)<1));
end
x=c1*W;
which can be recycled into a Beta generator at about the same cost. It uses two uniforms, plus a rejection condition, so for the values of $(\alpha,\beta)$ that you picked, i.e., for a rejection probability of $1/3$, the accept-reject algorithm may be equally efficient. However, you should also run the comparison for larger non-integer values of $(\alpha,\beta)$ to check whether or not the Cheng & Feast Gamma generator remains efficient.
For instance, Joe Whittaker's Beta $\mathfrak{B}(\alpha,\beta)$ generator has a rejection condition of the form$$U_1^{1/\alpha}+U_2^{1/\beta}>1$$which occurs with increasing frequency as $\alpha$ and $\beta$ increase. I remember Luc Devroye mentioning that, for $\mathfrak{G}(\alpha,1)$ distributions, it is not possible to find a bound on the computing time that is independent of $\alpha$... | Why is rejection sampling with acceptance probability 2/3 for Beta(2,2) not slower than `rbeta(N,2,2
Here is Cheng & Feast Gamma generator code, on which R rbeta and rgamma functions are based:
function x=gamrnd_cheng(alpha)
% Gamma(alpha,1) generator using Cheng--Feast method
% Algorithm 4.35
c1=alp |
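A hedged R transcription of the generator above (my own port, valid only for $\alpha > 1$), together with one standard way to do the Beta recycling the answer mentions: if $X_1\sim\mathfrak{G}(a,1)$ and $X_2\sim\mathfrak{G}(b,1)$ independently, then $X_1/(X_1+X_2)\sim\mathfrak{B}(a,b)$.
gamrnd_cheng <- function(alpha) {
  c1 <- alpha - 1; c2 <- (alpha - 1 / (6 * alpha)) / c1
  c3 <- 2 / c1; c4 <- 1 + c3; c5 <- 1 / sqrt(alpha)
  repeat {
    U1 <- runif(1); U2 <- runif(1)
    if (alpha > 2.5) U1 <- U2 + c5 * (1 - 1.86 * U1)
    if (U1 <= 0 || U1 >= 1) next            # reject adjusted U1 outside (0,1)
    W <- c2 * U2 / U1
    if (c3 * U1 + W + 1 / W < c4 || c3 * log(U1) - log(W) + W < 1) break
  }
  c1 * W
}
rbeta_cheng <- function(a, b) {
  x1 <- gamrnd_cheng(a); x2 <- gamrnd_cheng(b)
  x1 / (x1 + x2)
}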
54,323 | Why is rejection sampling with acceptance probability 2/3 for Beta(2,2) not slower than `rbeta(N,2,2)`? | Following up on Xi'an's suggestion, the result in the question indeed seems to be an artifact of the particular (small) parameters chosen. In particular, for higher $\alpha$ and $\beta$ the beta density becomes more concentrated, with a correspondingly higher density at the mode; a larger box then has to be constructed around the beta density in the basic rejection algorithm, the acceptance probability decreases, and the result disappears: rbeta is much more efficient.
Here is an example for $\alpha=\beta=10.2$:
# densityatmode <- dbeta(.5,10.2,10.2)=3.559877
# precompute the beta density:
# gamma(20.4)/gamma(10.2)^2=1231365
rejectionsampling_betaab <- function(N){
px <- runif(N,min=0,max=1)
py <- runif(N,min=0,max=3.559877)
#ikeep <- (py < dbeta(px,10.2,10.2))
ikeep <- (py < 1231365*px^9.2*(1-px)^9.2)
return(px[ikeep])
}
AcceptanceProbability <- 1/(3.559877)
samples <- 1e5
library(microbenchmark)
result <- microbenchmark(rejectionsampling_betaab(1/AcceptanceProbability*samples),rbeta(samples,10.2,10.2))
The result now clearly favors rbeta:
> result
Unit: milliseconds
expr min lq mean median uq max neval
rejectionsampling_betaab(1/AcceptanceProbability * samples) 417.46861 426.16041 469.10049 430.37921 485.60090 635.9086 100
rbeta(samples, 10.2, 10.2) 90.39748 90.94824 94.24759 91.52765 92.51329 264.5119 100 | Why is rejection sampling with acceptance probability 2/3 for Beta(2,2) not slower than `rbeta(N,2,2 | Following up on Xi'an's suggestion, the result in the question indeed seems to be an artifact of the particular (small) parameters chosen. In particular, when choosing higher $\alpha$ and $\beta$, suc | Why is rejection sampling with acceptance probability 2/3 for Beta(2,2) not slower than `rbeta(N,2,2)`?
Following up on Xi'an's suggestion, the result in the question indeed seems to be an artifact of the particular (small) parameters chosen. In particular, for higher $\alpha$ and $\beta$ the beta density becomes more concentrated, with a correspondingly higher density at the mode; a larger box then has to be constructed around the beta density in the basic rejection algorithm, the acceptance probability decreases, and the result disappears: rbeta is much more efficient.
Here is an example for $\alpha=\beta=10.2$:
# densityatmode <- dbeta(.5,10.2,10.2)=3.559877
# precompute the beta density:
# gamma(20.4)/gamma(10.2)^2=1231365
rejectionsampling_betaab <- function(N){
px <- runif(N,min=0,max=1)
py <- runif(N,min=0,max=3.559877)
#ikeep <- (py < dbeta(px,10.2,10.2))
ikeep <- (py < 1231365*px^9.2*(1-px)^9.2)
return(px[ikeep])
}
AcceptanceProbability <- 1/(3.559877)
samples <- 1e5
library(microbenchmark)
result <- microbenchmark(rejectionsampling_betaab(1/AcceptanceProbability*samples),rbeta(samples,10.2,10.2))
The result now clearly favors rbeta:
> result
Unit: milliseconds
expr min lq mean median uq max neval
rejectionsampling_betaab(1/AcceptanceProbability * samples) 417.46861 426.16041 469.10049 430.37921 485.60090 635.9086 100
rbeta(samples, 10.2, 10.2) 90.39748 90.94824 94.24759 91.52765 92.51329 264.5119 100 | Why is rejection sampling with acceptance probability 2/3 for Beta(2,2) not slower than `rbeta(N,2,2
Following up on Xi'an's suggestion, the result in the question indeed seems to be an artifact of the particular (small) parameters chosen. In particular, when choosing higher $\alpha$ and $\beta$, suc |
54,324 | Why does SGD and back propagation work with ReLUs? | At x = 0, the ReLU function is no longer differentiable; however, it is sub-differentiable and any value in the range [0,1] is a valid choice of sub-gradient. You may see some implementations simply use a 0 sub-gradient at the x = 0 singularity. For further details see the Wikipedia article: Subderivative. | Why does SGD and back propagation work with ReLUs? | At x = 0, the ReLU function is no longer differentiable; however, it is sub-differentiable and any value in the range [0,1] is a valid choice of sub-gradient. You may see some implementations simply us | Why does SGD and back propagation work with ReLUs?
At x = 0, the ReLU function is no longer differentiable; however, it is sub-differentiable and any value in the range [0,1] is a valid choice of sub-gradient. You may see some implementations simply use a 0 sub-gradient at the x = 0 singularity. For further details see the Wikipedia article: Subderivative.
At x = 0, the ReLU function is no longer differentiable; however, it is sub-differentiable and any value in the range [0,1] is a valid choice of sub-gradient. You may see some implementations simply us
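A one-liner sketch of that convention in R (my own illustration; g0 is any fixed value in [0,1]):
relu <- function(x) pmax(x, 0)
relu_grad <- function(x, g0 = 0) ifelse(x > 0, 1, ifelse(x < 0, 0, g0))
relu_grad(c(-2, 0, 3))   # 0 0 1 under the common g0 = 0 choice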
54,325 | What is the purpose of the scaling factor used in dropout? | If a p=0.5 dropout is used, only half of the neurons are active during training, while if we activate them all at test time, the output of the dropout layer would get "doubled", so in this regard it makes sense to multiply the output by a factor of 1-p to neutralize that effect.
Here's a quote from the dropout paper http://arxiv.org/pdf/1207.0580v1.pdf .
At test time, we use the “mean network” that contains all of the hidden units but with their
outgoing weights halved to compensate for the fact that twice as many of them are active.
Also see this question about two different ways of implementing dropout, Dropout: scaling the activation versus inverting the dropout. | What is the purpose of the scaling factor used in dropout? | If a p=0.5 dropout is used, only half of the neurons are active during training, while if we activate them all at test time, the output of the dropout layer would get "doubled", so in this regard it | What is the purpose of the scaling factor used in dropout?
If a p=0.5 dropout is used, only half of the neurons are active during training, while if we activate them all at test time, the output of the dropout layer would get "doubled", so in this regard it makes sense to multiply the output by a factor of 1-p to neutralize that effect.
Here's a quote from the dropout paper http://arxiv.org/pdf/1207.0580v1.pdf .
At test time, we use the “mean network” that contains all of the hidden units but with their
outgoing weights halved to compensate for the fact that twice as many of them are active.
Also see this question about two different ways of implementing dropout, Dropout: scaling the activation versus inverting the dropout.
If a p=0.5 dropout is used, only half of the neurons are active during training, while if we activate them all at test time, the output of the dropout layer would get "doubled", so in this regard it
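A minimal sketch of the two regimes (my own illustration; here p is the drop probability, so each unit is kept with probability 1-p):
p <- 0.5
a <- rnorm(10)                  # pre-dropout activations
mask <- rbinom(10, 1, 1 - p)    # training: randomly drop units
train_out <- a * mask
test_out  <- a * (1 - p)        # test: keep all units, scale instead
# E[train_out] = a * (1 - p) = test_out, so the expected output matches.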
54,326 | Diagonal in ROC plot? | Assume that you have the following result:
score label
1.000 positive
0.900 negative
0.900 positive
0.900 negative
0.500 negative
0.200 positive
Manually plot the ROC curve for the possible thresholds of 1.0, 0.9, 0.5, 0.2 and you do get a sloped part.
The reason is duplicate scores.
Beware, there are some poor implementations of ROC out there. I've seen some that only sample values (usually you can recognize this because they have very evenly spaced steps), and I've seen implementations that simply sort the data and then take the nth object - ignoring duplicate scores. If the input data is presorted by label, this causes results to be much better than expected. This can be detected by using a data set where all scores are 0 - the only correct result then is the diagonal line and an AUC of 0.5 | Diagonal in ROC plot? | Assume that you have the following result:
score label
1.000 positive
0.900 negative
0.900 positive
0.900 negative
0.500 negative
0.200 positive
Manually plot the ROC curve for the possible threshold | Diagonal in ROC plot?
Assume that you have the following result:
score label
1.000 positive
0.900 negative
0.900 positive
0.900 negative
0.500 negative
0.200 positive
Manually plot the ROC curve for the possible thresholds of 1.0, 0.9, 0.5, 0.2 and you do get a sloped part.
The reason is duplicate scores.
Beware, there are some poor implementations of ROC out there. I've seen some that only sample values (usually you can recognize this because they have very evenly spaced steps), and I've seen implementations that simply sort the data and then take the nth object - ignoring duplicate scores. If the input data is presorted by label, this causes results to be much better than expected. This can be detected by using a data set where all scores are 0 - the only correct result then is the diagonal line and an AUC of 0.5
Assume that you have the following result:
score label
1.000 positive
0.900 negative
0.900 positive
0.900 negative
0.500 negative
0.200 positive
Manually plot the ROC curve for the possible threshold |
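A quick R check of the toy example above (my own sketch): sweeping the four distinct thresholds shows the tied 0.9 scores moving TPR and FPR at the same time, i.e. a sloped segment.
score <- c(1.0, 0.9, 0.9, 0.9, 0.5, 0.2)
label <- c("pos", "neg", "pos", "neg", "neg", "pos")
for (thr in c(1.0, 0.9, 0.5, 0.2)) {
  pred <- score >= thr
  tpr <- sum(pred & label == "pos") / sum(label == "pos")
  fpr <- sum(pred & label == "neg") / sum(label == "neg")
  cat(sprintf("thr=%.1f  FPR=%.2f  TPR=%.2f\n", thr, fpr, tpr))
}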
54,327 | Diagonal in ROC plot? | Yes, this is "legal". If the jump from one threshold to the next raises the number of true positives and false positives together, the result is a diagonal line.
Two reasons that might happen:
You have 2 observations with the same score but with different ground truth
The resolution between 2 thresholds is large enough - in that case you may also check a threshold between the two. | Diagonal in ROC plot? | Yes, this is "legal". If the jump from one threshold to the next raises the number of true positives and false positives together, the result is a diagonal line.
Two reasons that might happen:
You ha | Diagonal in ROC plot?
Yes, this is "legal". If the jump from one threshold to the next raises the amount of false positives and false negatives together the result is a diagonal line.
Two reasons that might happen:
You have 2 observations with the same score but with different ground truth
The resolution between 2 thresholds is large enough - in that case you may also check a threshold between the two. | Diagonal in ROC plot?
Yes, this is "legal". If the jump from one threshold to the next raises the amount of false positives and false negatives together the result is a diagonal line.
Two reasons that might happen:
You ha |
54,328 | Why is "the probability that a continuous random variable equals some value always zero"? [duplicate] | The thing is $Y$ is not that continuous to begin with. To be continuous, the distribution function of $Y$ must be absolutely continuous (see definition 1.32, page 10 of link http://math.arizona.edu/~jwatkins/probnotes.pdf by @fcop). You see the distribution of Y has a half impulse (Dirac delta) function at zero. When you approach zero on the negative side there is a jump in distribution function value. So the distribution function of $Y$ is not continuous.
If $f(x)$ is continuous, $g(f(x))$ is not necessarily continuous. | Why is "the probability that a continuous random variable equals some value always zero"? [duplicate | The thing is $Y$ is not that continuous to begin with. To be continuous, the distribution function of $Y$ must be absolutely continuous (see definition 1.32, page 10 of link http://math.arizona.edu/~j | Why is "the probability that a continuous random variable equals some value always zero"? [duplicate]
The thing is $Y$ is not that continuous to begin with. To be continuous, the distribution function of $Y$ must be absolutely continuous (see definition 1.32, page 10 of link http://math.arizona.edu/~jwatkins/probnotes.pdf by @fcop). You see the distribution of Y has a half impulse (Dirac delta) function at zero. When you approach zero on the negative side there is a jump in distribution function value. So the distribution function of $Y$ is not continuous.
If $f(x)$ is continuous, $g(f(x))$ is not necessarily continuous. | Why is "the probability that a continuous random variable equals some value always zero"? [duplicate
The thing is $Y$ is not that continuous to begin with. To be continuous, the distribution function of $Y$ must be absolutely continuous (see definition 1.32, page 10 of link http://math.arizona.edu/~j |
54,329 | Why is "the probability that a continuous random variable equals some value always zero"? [duplicate] | Let's just go with a simple intuitive explanation. But it can get really mathy really fast (if you prefer).
A continuous distribution is a line between two points A and B. On this line there are infinitely many points, no matter if the distance between the points (A, B) is extremely small. If all of those infinite points had a probability larger than 0, then the sum of probabilities would be infinity.
But if that were the case, then the said probability distribution would violate the (Kolmogorov) axioms of probability. So it would not be a measure of probability in the modern understanding.
Edit, the derivative of the function $\text{min}(x,y)$ is not defined for $x=y$. So your example is not continuous. | Why is "the probability that a continuous random variable equals some value always zero"? [duplicate | Let's just go with a simple intuitive explanation. But it can get really mathy really fast (if you prefer).
A continuous distribution is a line between two points A and B. On this line there are infin | Why is "the probability that a continuous random variable equals some value always zero"? [duplicate]
Let's just go with a simple intuitive explanation. But it can get really mathy really fast (if you prefer).
A continuous distribution is a line between two points A and B. On this line there are infinitely many points, no matter if the distance between the points (A, B) is extremely small. If all of those infinite points had a probability larger than 0, then the sum of probabilities would be infinity.
But if that were the case, then the said probability distribution would violate the (Kolmogorov) axioms of probability. So it would not be a measure of probability in the modern understanding.
Edit, the derivative of the function $\text{min}(x,y)$ is not defined for $x=y$. So your example is not continuous. | Why is "the probability that a continuous random variable equals some value always zero"? [duplicate
Let's just go with a simple intuitive explanation. But it can get really mathy really fast (if you prefer).
A continuous distribution is a line between two points A and B. On this line there are infin |
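A short formal complement to both answers (notation mine; I read the question's variable as $Y=\min(X,0)$ with $X\sim N(0,1)$, consistent with the half impulse at zero described in the first answer): for any random variable, $$P(Y=y)=F_Y(y)-\lim_{t\to y^-}F_Y(t),$$ which is zero wherever the CDF $F_Y$ is continuous. Here $F_Y(y)=\Phi(y)$ for $y<0$ and $F_Y(y)=1$ for $y\ge 0$, so $F_Y$ jumps by $1-\Phi(0^-)=1/2$ at $y=0$, giving $P(Y=0)=1/2$ even though $X$ itself is continuous.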
54,330 | Can I use linear model on each variable to determine which variables are important? | No. The proposed method seems very unlikely to produce useful results. The problem I would anticipate is lots of false positive results.
For example, suppose we wish to predict $y$ and have two predictors $x$ and $z$. Let
$y = x + \epsilon_1$
$z = x + \epsilon_2$
Then the proposed method is likely to select both $x$ and $z$. Any decent linear model involving $x$ and $z$ simultaneously will lead you to see that $z$ is not required in the presence of $x$.
This might sound unlikely, but as you add more variables to a model you're more likely to already have accounted for all the signal.
The upshot of this is that you may have to do lots of unnecessary additional experiments.
I would suggest the OP investigate LASSO regression. This is set up nicely for $p \gg n$ regression problems where variable selection is required.
In general while you'd guess this area should have been fully developed for linear models, the $p \gg n$ variable selection problem is still an active research area.
X = rnorm(1000)
Y = X + rnorm(1000)
Z = X + rnorm(1000)
summary(lm(Y ~ Z))
Call:
lm(formula = Y ~ Z)
Residuals:
Min 1Q Median 3Q Max
-3.8207 -0.8326 -0.0109 0.8688 3.8545
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.02033 0.03998 -0.509 0.611
Z 0.50815 0.02840 17.895 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.264 on 998 degrees of freedom
Multiple R-squared: 0.2429, Adjusted R-squared: 0.2422
F-statistic: 320.2 on 1 and 998 DF, p-value: < 2.2e-16
summary(lm(Y ~ X + Z))
Call:
lm(formula = Y ~ X + Z)
Residuals:
Min 1Q Median 3Q Max
-3.5276 -0.6879 -0.0111 0.6992 3.4331
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.021569 0.032455 -0.665 0.506
X 1.028787 0.045233 22.744 <2e-16 ***
Z 0.001838 0.032047 0.057 0.954
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.026 on 997 degrees of freedom
Multiple R-squared: 0.5015, Adjusted R-squared: 0.5005
F-statistic: 501.6 on 2 and 997 DF, p-value: < 2.2e-16
EDIT: On whether the proposed method would mop up all true positives. I would be doubtful.
First, assuming all the true positives were found, then the proposed filtering mechanism based on $R^2$ would still be in trouble. Suppose we added to the truth in our example another variable $q$ which didn't contribute much to overall variation in $y$:
$y = x + 0.1q + \epsilon$
Then $q$ would often have a lower $R^2$ than $z$. So you wouldn't be able to
trust your ranking mechanism.
> Q = rnorm(1000)
> Y = X + 0.1*Q + rnorm(1000)
> summary(lm(Y~X+Z+Q))
Call:
lm(formula = Y ~ X + Z + Q)
Residuals:
Min 1Q Median 3Q Max
-3.4460 -0.6397 0.0551 0.6146 3.6106
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.002512 0.032670 0.077 0.938719
X 0.981008 0.047013 20.867 < 2e-16 ***
Z 0.015557 0.033436 0.465 0.641838
Q 0.115547 0.032690 3.535 0.000427 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.03 on 996 degrees of freedom
Multiple R-squared: 0.487, Adjusted R-squared: 0.4855
F-statistic: 315.2 on 3 and 996 DF, p-value: < 2.2e-16
> summary(lm(Y~Z))
Call:
lm(formula = Y ~ Z)
Residuals:
Min 1Q Median 3Q Max
-3.8912 -0.8182 -0.0121 0.8061 3.7114
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.03833 0.03915 0.979 0.328
Z 0.51934 0.02785 18.645 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.238 on 998 degrees of freedom
Multiple R-squared: 0.2583, Adjusted R-squared: 0.2576
F-statistic: 347.6 on 1 and 998 DF, p-value: < 2.2e-16
> summary(lm(Y~Q))
Call:
lm(formula = Y ~ Q)
Residuals:
Min 1Q Median 3Q Max
-4.2620 -0.9772 0.0030 1.0116 4.7014
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.02539 0.04535 0.56 0.576
Q 0.10861 0.04544 2.39 0.017 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.433 on 998 degrees of freedom
Multiple R-squared: 0.005693, Adjusted R-squared: 0.004696
F-statistic: 5.714 on 1 and 998 DF, p-value: 0.01702
Also I would anticipate that as the true model for $y$ became more complex, e.g.
$y = x_1 + x_2 + ... + x_n + q + \epsilon$
Then regression on $q$ alone would end up not seeing the coefficient as significant. The reason is that structural variation in $y$ due to $x$ would get swept up in our estimate of the noise $\sigma^2$.
By omitting the true variables $x$, our regression's performance gets worse: the mean square error gets bigger, $\hat \sigma^2$ gets bigger, and thus the standard errors of the linear regression coefficients get worse. | Can I use linear model on each variable to determine which variables are important? | No. The proposed method seems very unlikely to produce useful results. The problem I would anticipate is lots of false positive results.
For example, suppose we wish to predict $y$ and have two predi | Can I use linear model on each variable to determine which variables are important?
No. The proposed method seems very unlikely to produce useful results. The problem I would anticipate is lots of false positive results.
For example, suppose we wish to predict $y$ and have two predictors $x$ and $z$. Let
$y = x + \epsilon_1$
$z = x + \epsilon_2$
Then the proposed method is likely to select both $x$ and $z$. Any decent linear model involving $x$ and $z$ simultaneously will lead you to see that $z$ is not required in the presence of $x$.
This might sound unlikely, but as you add more variables to a model you're more likely to already have accounted for all the signal.
The upshot of this is that you may have to do lots of unnecessary additional experiments.
I would suggest the OP investigate LASSO regression. This is set up nicely for $p \gg n$ regression problems where variable selection is required.
In general while you'd guess this area should have been fully developed for linear models, the $p \gg n$ variable selection problem is still an active research area.
X = rnorm(1000)
Y = X + rnorm(1000)
Z = X + rnorm(1000)
summary(lm(Y ~ Z))
Call:
lm(formula = Y ~ Z)
Residuals:
Min 1Q Median 3Q Max
-3.8207 -0.8326 -0.0109 0.8688 3.8545
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.02033 0.03998 -0.509 0.611
Z 0.50815 0.02840 17.895 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.264 on 998 degrees of freedom
Multiple R-squared: 0.2429, Adjusted R-squared: 0.2422
F-statistic: 320.2 on 1 and 998 DF, p-value: < 2.2e-16
summary(lm(Y ~ X + Z))
Call:
lm(formula = Y ~ X + Z)
Residuals:
Min 1Q Median 3Q Max
-3.5276 -0.6879 -0.0111 0.6992 3.4331
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.021569 0.032455 -0.665 0.506
X 1.028787 0.045233 22.744 <2e-16 ***
Z 0.001838 0.032047 0.057 0.954
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.026 on 997 degrees of freedom
Multiple R-squared: 0.5015, Adjusted R-squared: 0.5005
F-statistic: 501.6 on 2 and 997 DF, p-value: < 2.2e-16
EDIT: On whether the proposed method would mop up all true positives. I would be doubtful.
First, assuming all the true positives were found, then the proposed filtering mechanism based on $R^2$ would still be in trouble. Suppose we added to the truth in our example another variable $q$ which didn't contribute much to overall variation in $y$:
$y = x + 0.1q + \epsilon$
Then $q$ would often have a lower $R^2$ than $z$. So you wouldn't be able to
trust your ranking mechanism.
> Q = rnorm(1000)
> Y = X + 0.1*Q + rnorm(1000)
> summary(lm(Y~X+Z+Q))
Call:
lm(formula = Y ~ X + Z + Q)
Residuals:
Min 1Q Median 3Q Max
-3.4460 -0.6397 0.0551 0.6146 3.6106
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.002512 0.032670 0.077 0.938719
X 0.981008 0.047013 20.867 < 2e-16 ***
Z 0.015557 0.033436 0.465 0.641838
Q 0.115547 0.032690 3.535 0.000427 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.03 on 996 degrees of freedom
Multiple R-squared: 0.487, Adjusted R-squared: 0.4855
F-statistic: 315.2 on 3 and 996 DF, p-value: < 2.2e-16
> summary(lm(Y~Z))
Call:
lm(formula = Y ~ Z)
Residuals:
Min 1Q Median 3Q Max
-3.8912 -0.8182 -0.0121 0.8061 3.7114
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.03833 0.03915 0.979 0.328
Z 0.51934 0.02785 18.645 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.238 on 998 degrees of freedom
Multiple R-squared: 0.2583, Adjusted R-squared: 0.2576
F-statistic: 347.6 on 1 and 998 DF, p-value: < 2.2e-16
> summary(lm(Y~Q))
Call:
lm(formula = Y ~ Q)
Residuals:
Min 1Q Median 3Q Max
-4.2620 -0.9772 0.0030 1.0116 4.7014
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.02539 0.04535 0.56 0.576
Q 0.10861 0.04544 2.39 0.017 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.433 on 998 degrees of freedom
Multiple R-squared: 0.005693, Adjusted R-squared: 0.004696
F-statistic: 5.714 on 1 and 998 DF, p-value: 0.01702
Also I would anticipate that as the true model for $y$ became more complex, e.g.
$y = x_1 + x_2 + ... + x_n + q + \epsilon$
Then regression on $q$ alone would end up not seeing the coefficient as significant. The reason is that structural variation in $y$ due to $x$ would get swept up in our estimate of the noise $\sigma^2$.
By omitting the true variables $x$, our regression's performance gets worse: the mean square error gets bigger, $\hat \sigma^2$ gets bigger, and thus the standard errors of the linear regression coefficients get worse. | Can I use linear model on each variable to determine which variables are important?
No. The proposed method seems very unlikely to produce useful results. The problem I would anticipate is lots of false positive results.
For example, suppose we wish to predict $y$ and have two predi |
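A hedged sketch of the LASSO alternative recommended in the answer, using the glmnet package (not part of the original answer; the sizes and true model are arbitrary choices):
library(glmnet)
set.seed(1)
n <- 100; p <- 1000                     # p >> n, as in the biomarker setting
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] + X[, 2] + rnorm(n)         # only the first two predictors matter
fit <- cv.glmnet(X, y)                  # penalty chosen by cross-validation
b <- coef(fit, s = "lambda.min")
which(b[-1] != 0)                       # indices of the retained predictors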
54,331 | Can I use linear model on each variable to determine which variables are important? | If your predictors (biomarkers) are colinear, univariate regressions may grossly over / underestimate effect sizes, depending on sign of the colinearity and the sign of the product of their effect sizes. This is known as Simpson's paradox, or in general as omitted-variable bias, as mentioned above. I would therefore not recommend this approach.
I am not aware of a perfect solution for the p>>n case, and neither do I think that one exists. Yet, if the goal is to prioritize predictors for later testing, and you think effects can be well expressed by linear relationships, I would go for a regularization method such as ridge regression and lasso, and simply take the variables that come out with the strongest effects - the advantage over AIC-based model selection is less sensitivity to colinearity in the predictors (because predictors are not removed). | Can I use linear model on each variable to determine which variables are important? | If your predictors (biomarkers) are colinear, univariate regressions may grossly over / underestimate effect sizes, depending on sign of the colinearity and the sign of the product of their effect siz | Can I use linear model on each variable to determine which variables are important?
If your predictors (biomarkers) are colinear, univariate regressions may grossly over / underestimate effect sizes, depending on sign of the colinearity and the sign of the product of their effect sizes. This is known as Simpson's paradox, or in general as omitted-variable bias, as mentioned above. I would therefore not recommend this approach.
I am not aware of a perfect solution for the p>>n case, and neither do I think that one exists. Yet, if the goal is to prioritize predictors for later testing, and you think effects can be well expressed by linear relationships, I would go for a regularization method such as ridge regression and lasso, and simply take the variables that come out with the strongest effects - the advantage over AIC-based model selection is less sensitivity to colinearity in the predictors (because predictors are not removed). | Can I use linear model on each variable to determine which variables are important?
If your predictors (biomarkers) are colinear, univariate regressions may grossly over / underestimate effect sizes, depending on sign of the colinearity and the sign of the product of their effect siz |
54,332 | Can I use linear model on each variable to determine which variables are important? | Aristotle said that, “The whole is greater than the sum of its parts.” Each simple linear regression is merely testing a part. However, I imagine that many diseases are associated with combinations of markers (the whole). What you really care about are the combination of markers. As a result, your algorithm may not work well because you are not testing the combination. | Can I use linear model on each variable to determine which variables are important? | Aristotle said that, “The whole is greater than the sum of its parts.” Each simple linear regression is merely testing a part. However, I imagine that many diseases are associated with combinations o | Can I use linear model on each variable to determine which variables are important?
Aristotle said that, “The whole is greater than the sum of its parts.” Each simple linear regression is merely testing a part. However, I imagine that many diseases are associated with combinations of markers (the whole). What you really care about are the combination of markers. As a result, your algorithm may not work well because you are not testing the combination. | Can I use linear model on each variable to determine which variables are important?
Aristotle said that, “The whole is greater than the sum of its parts.” Each simple linear regression is merely testing a part. However, I imagine that many diseases are associated with combinations o |
54,333 | What is UV decomposition? | If $A$ is a matrix of rank $k$ and size $m$ by $n$, $A$ can be written as
$A=UV^{T}$
where $U$ is of size $m$ by $k$ and $V$ is of size $n$ by $k$. The columns of $U$ and $V$ need not necessarily be orthogonal.
If you have the SVD of $A$, then it's easy to compute this low rank factorization from the SVD. Given the SVD
$A=U\Sigma V^{T}$
where $\Sigma$ is a diagonal matrix with only the first $k$ entries of $\Sigma$ nonzero, we can write $A$ as
$A=U_{:,1:k} \Sigma_{1:k,1:k} V_{:,1:k}^{T}$.
The scaling factors on the diagonal of $\Sigma_{1:k,1:k}$ can be incorporated into $V$ so that $A$ can be written as $A=UV^{T}$.
However, computing the singular value decomposition of a large matrix can be extremely expensive, and the resulting $U$ and $V$ matrices would typically be fully dense.
There are specialized algorithms for heuristically finding low rank approximations of matrices that are faster than computing a full SVD. Some of these methods find sparse $U$ and $V$ matrices and
also deal with the case where $A$ is only approximately of rank $k$ (e.g. due to noise in the entries.) There is a lot of current interest in low rank matrix factorization algorithms of various sorts. | What is UV decomposition? | If $A$ is a matrix of rank $k$ and size $m$ by $n$, $A$ can be written as
$A=UV^{T}$
where $U$ is of size $m$ by $k$ and $V$ is of size $n$ by $k$. The columns of $U$ and $V$ need not necessarily be | What is UV decomposition?
If $A$ is a matrix of rank $k$ and size $m$ by $n$, $A$ can be written as
$A=UV^{T}$
where $U$ is of size $m$ by $k$ and $V$ is of size $n$ by $k$. The columns of $U$ and $V$ need not necessarily be orthogonal.
If you have the SVD of $A$, then it's easy to compute this low rank factorization from the SVD. Given the SVD
$A=U\Sigma V^{T}$
where $\Sigma$ is a diagonal matrix with only the first $k$ entries of $\Sigma$ nonzero, we can write $A$ as
$A=U_{:,1:k} \Sigma_{1:k,1:k} V_{:,1:k}^{T}$.
The scaling factors on the diagonal of $\Sigma_{1:k,1:k}$ can be incorporated into $V$ so that $A$ can be written as $A=UV^{T}$.
However, computing the singular value decomposition of a large matrix can be extremely expensive, and the resulting $U$ and $V$ matrices would typically be fully dense.
There are specialized algorithms for heuristically finding low rank approximations of matrices that are faster than computing a full SVD. Some of these methods find sparse $U$ and $V$ matrices and
also deal with the case where $A$ is only approximately of rank $k$ (e.g. due to noise in the entries.) There is a lot of current interest in low rank matrix factorization algorithms of various sorts. | What is UV decomposition?
If $A$ is a matrix of rank $k$ and size $m$ by $n$, $A$ can be written as
$A=UV^{T}$
where $U$ is of size $m$ by $k$ and $V$ is of size $n$ by $k$. The columns of $U$ and $V$ need not necessarily be |
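A base-R sketch of reading this factorization off svd() (my own illustration):
set.seed(1)
m <- 8; n <- 5; k <- 2
A <- matrix(rnorm(m * k), m, k) %*% matrix(rnorm(k * n), k, n)  # rank-k matrix
s <- svd(A)
U <- s$u[, 1:k]
V <- s$v[, 1:k] %*% diag(s$d[1:k])  # fold the singular values into V
max(abs(A - U %*% t(V)))            # ~ 0, up to floating-point error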
54,334 | Calculating the expression for the derivative of a Gaussian process | For simplicity I assume x has one dimension.
$\frac{\partial f}{\partial x}$ is normally distributed with expectation:
$E[\frac{\partial f}{\partial x}] = \frac{\partial}{\partial x}E[f]$
And Covariance:
$\text{Cov}(\frac{\partial f_1}{\partial x_1},\frac{\partial f_2}{\partial x_2})$ = $\frac{\partial^2 }{\partial x_2\partial x_1}\text{Cov}(f_1,f_2)$
In case you are using a Gaussian correlation function $\text{Cov}(f_1,f_2) = \sigma^2\exp(-\frac{1}{2}\frac{(x_1-x_2)^2}{a^2})$, then:
$\text{Cov}(\frac{\partial f_1}{\partial x_1},\frac{\partial f_2}{\partial x_2})$ = $\frac{\sigma^2}{a^2}(1-\frac{(x_1-x_2)^2}{a^2})\exp(-\frac{1}{2}\frac{(x_1-x_2)^2}{a^2})$.
If x has more than one dimension, each of the partial derivatives is normally distributed, and the covariance and expectation for each dimension can be calculated in the same way as above. | Calculating the expression for the derivative of a Gaussian process | For simplicity I assume x has one dimension.
$\frac{\partial f}{\partial x}$ is normally distributed with expectation:
$E[\frac{\partial f}{\partial x}] = \frac{\partial}{\partial x}E[f]$
And Covarian | Calculating the expression for the derivative of a Gaussian process
For simplicity I assume x has one dimension.
$\frac{\partial f}{\partial x}$ is normally distributed with expectation:
$E[\frac{\partial f}{\partial x}] = \frac{\partial}{\partial x}E[f]$
And Covariance:
$\text{Cov}(\frac{\partial f_1}{\partial x_1},\frac{\partial f_2}{\partial x_2})$ = $\frac{\partial^2 }{\partial x_2\partial x_1}\text{Cov}(f_1,f_2)$
In case you are using a Gaussian correlation function $\text{Cov}(f_1,f_2) = \sigma^2\exp(-\frac{1}{2}\frac{(x_1-x_2)^2}{a^2})$, then:
$\text{Cov}(\frac{\partial f_1}{\partial x_1},\frac{\partial f_2}{\partial x_2})$ = $\frac{\sigma^2}{a^2}(1-\frac{(x_1-x_2)^2}{a^2})\exp(-\frac{1}{2}\frac{(x_1-x_2)^2}{a^2})$.
If x has more than one dimension, each of the partial derivatives is normally distributed, and the covariance and expectation for each dimension can be calculated in the same way as above.
For simplicity I assume x has one dimension.
$\frac{\partial f}{\partial x}$ is normally distributed with expectation:
$E[\frac{\partial f}{\partial x}] = \frac{\partial}{\partial x}E[f]$
And Covarian |
54,335 | Calculating the expression for the derivative of a Gaussian process | Since your domain for $f$ is $n$-dimensional you will actually have $n$ derivative processes $\frac{\partial f}{\partial x_j}$ with $j=1,\ldots n$. You need to calculate the mean and correlation function of $\frac{\partial f}{\partial x_j}$. From the linked answer you know that those are the derivative of $f$'s mean function and the derivative with respect to both arguments of the correlation $R$. So there is nothing more to do than to calculate those derivatives:
I assume that $\beta$ are constant, hence the mean function of the derivative process is $\frac{\partial X\beta}{\partial x_j}=\beta_j$.
To keep confusion to a minimum let's write $R$ as $R(x,y)=exp\{-\sum_i\frac{(x_i - y_i)^2}{\phi_i}\}$. Then the derivative with respect to the first argument (assuming $\sigma$ is constant) is $$ \frac{\partial R}{\partial x_j}= \left(- 2 \frac{x_j-y_j}{\phi_j}\right) R(x,y)$$ and with respect to both $$ \frac{\partial }{\partial y_j}\frac{\partial R}{\partial x_j}=\frac{\partial }{\partial y_j}\left( R(x,y)\left(- 2 \frac{x_j-y_j}{\phi_j}\right) \right)=\frac{2}{\phi_j}R(x,y)\left( 1 - \frac{2}{\phi_j}(x_j - y_j)^2\right).$$ Which means in your notation that $$ \frac{\partial f}{\partial x_j} \sim \text{GP}\left( \beta_j, \sigma^2 \frac{2}{\phi_j} R(x,y)\left( 1 - \frac{2}{\phi_j}(x_j - y_j)^2\right)\right). $$ | Calculating the expression for the derivative of a Gaussian process | Since your domain for $f$ is $n$-dimensional you will actually have $n$ derivative processes $\frac{\partial f}{\partial x_j}$ with $j=1,\ldots n$. You need to calculate the mean and correlation funct | Calculating the expression for the derivative of a Gaussian process
Since your domain for $f$ is $n$-dimensional you will actually have $n$ derivative processes $\frac{\partial f}{\partial x_j}$ with $j=1,\ldots n$. You need to calculate the mean and correlation function of $\frac{\partial f}{\partial x_j}$. From the linked answer you know that those are the derivative of $f$'s mean function and the derivative with respect to both arguments of the correlation $R$. So there is nothing more to do than to calculate those derivatives:
I assume that $\beta$ are constant, hence the mean function of the derivative process is $\frac{\partial X\beta}{\partial x_j}=\beta_j$.
To keep confusion to a minimum let's write $R$ as $R(x,y)=exp\{-\sum_i\frac{(x_i - y_i)^2}{\phi_i}\}$. Then the derivative with respect to the first argument (assuming $\sigma$ is constant) is $$ \frac{\partial R}{\partial x_j}= \left(- 2 \frac{x_j-y_j}{\phi_j}\right) R(x,y)$$ and with respect to both $$ \frac{\partial }{\partial y_j}\frac{\partial R}{\partial x_j}=\frac{\partial }{\partial y_j}\left( R(x,y)\left(- 2 \frac{x_j-y_j}{\phi_j}\right) \right)=\frac{2}{\phi_j}R(x,y)\left( 1 - \frac{2}{\phi_j}(x_j - y_j)^2\right).$$ Which means in your notation that $$ \frac{\partial f}{\partial x_j} \sim \text{GP}\left( \beta_j, \sigma^2 \frac{2}{\phi_j} R(x,y)\left( 1 - \frac{2}{\phi_j}(x_j - y_j)^2\right)\right). $$ | Calculating the expression for the derivative of a Gaussian process
Since your domain for $f$ is $n$-dimensional you will actually have $n$ derivative processes $\frac{\partial f}{\partial x_j}$ with $j=1,\ldots n$. You need to calculate the mean and correlation funct |
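The cross-derivative used above is easy to verify symbolically (a sketch with sympy for the one-dimensional case with a single length-scale $\phi$):
import sympy as sp

x, y, phi = sp.symbols('x y phi', positive=True)
R = sp.exp(-(x - y)**2 / phi)
lhs = sp.diff(R, x, y)                               # d^2 R / (dy dx)
rhs = (2 / phi) * R * (1 - (2 / phi) * (x - y)**2)   # claimed closed form
print(sp.simplify(lhs - rhs))   # 0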
54,336 | Calculating the expression for the derivative of a Gaussian process | You can use sympy in Python; it will calculate any derivative, including integral-defined ones.
from sympy import Derivative, Subs, simplify, symbols
x = symbols('x')

def diffn(ff, x0, kk):
    # kk-th derivative of ff(x) with respect to x, evaluated at x = x0
    dffk = Derivative(ff(x), x, kk)
    dffk1 = simplify(dffk.doit())
    dffx0 = simplify(Subs(dffk1, x, x0).doit())
    return dffx0 | Calculating the expression for the derivative of a Gaussian process | You can use sympy in Python; it will calculate any derivative, including integral-defined ones.
diffn(ff,x0,kk) :
dffk= Derivative(ff(x),x,kk)
dffk1= simplify( dffk.doit())
dffx0= simplify(Sub | Calculating the expression for the derivative of a Gaussian process
You can use sympy in Python; it will calculate any derivative, including integral-defined ones.
from sympy import Derivative, Subs, simplify, symbols
x = symbols('x')

def diffn(ff, x0, kk):
    # kk-th derivative of ff(x) with respect to x, evaluated at x = x0
    dffk = Derivative(ff(x), x, kk)
    dffk1 = simplify(dffk.doit())
    dffx0 = simplify(Subs(dffk1, x, x0).doit())
    return dffx0 | Calculating the expression for the derivative of a Gaussian process
You can use sympy in Python, it will calculate any derivatives including integral defined one.
diffn(ff,x0,kk) :
dffk= Derivative(ff(x),x,kk)
dffk1= simplify( dffk.doit())
dffx0= simplify(Sub |
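A usage sketch of the function as fixed above (self-contained; the Gaussian-shaped test function is an arbitrary choice):
import sympy as sp
from sympy import Derivative, Subs, simplify, symbols
x = symbols('x')

def diffn(ff, x0, kk):
    dffk = Derivative(ff(x), x, kk)
    dffk1 = simplify(dffk.doit())
    dffx0 = simplify(Subs(dffk1, x, x0).doit())
    return dffx0

f = lambda t: sp.exp(-t**2 / 2)   # arbitrary test function
print(diffn(f, 0, 2))             # second derivative at 0: -1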
54,337 | t test for intercept? | It is also $H_0:\beta_{0,0}=0$ in your example. You can infer that from the general formulation of a t-ratio
$$
t=\frac{\hat\beta_j-\beta_{j,0}}{std.error(\hat\beta_j)},
$$
where $\beta_{j,0}$ is the hypothesized value of $\beta_j$, and $\beta_0$ is the coefficient on the intercept.
In your case, we have
$$
.837=\frac{.128}{.154},
$$
so nothing is subtracted from $\hat\beta_0$ in the numerator, thus $\beta_{0,0}=0$ (the displayed .128 and .154 are rounded, which is why their ratio is not exactly .837). But as the above hopefully makes clear, you could test any hypothesis you happen to be interested in using the coefficient estimate .128 and the standard error .154 by formulating your own t-ratio. 0 is just the default reported by the package. | t test for intercept? | It is also $H_0:\beta_{0,0}=0$ in your example. You can infer that from the general formulation of a t-ratio
$$
t=\frac{\hat\beta_j-\beta_{j,0}}{std.error(\hat\beta_j)},
$$
where $\beta_{j,0}$ is the | t test for intercept?
It is also $H_0:\beta_{0,0}=0$ in your example. You can infer that from the general formulation of a t-ratio
$$
t=\frac{\hat\beta_j-\beta_{j,0}}{std.error(\hat\beta_j)},
$$
where $\beta_{j,0}$ is the hypothesized value of $\beta_j$, and $\beta_0$ is the coefficient on the intercept.
In your case, we have
$$
.837=\frac{.128}{.154},
$$
so nothing is subtracted from $\hat\beta_0$ in the numerator, thus $\beta_{0,0}=0$ (the displayed .128 and .154 are rounded, which is why their ratio is not exactly .837). But as the above hopefully makes clear, you could test any hypothesis you happen to be interested in using the coefficient estimate .128 and the standard error .154 by formulating your own t-ratio. 0 is just the default reported by the package. | t test for intercept?
It is also $H_0:\beta_{0,0}=0$ in your example. You can infer that from the general formulation of a t-ratio
$$
t=\frac{\hat\beta_j-\beta_{j,0}}{std.error(\hat\beta_j)},
$$
where $\beta_{j,0}$ is the |
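As a numeric sketch of such a t-ratio (the residual degrees of freedom below are a made-up placeholder; use the value from your own regression output):
from scipy import stats

beta_hat, se, beta_0 = 0.128, 0.154, 0.0   # estimate, std. error, hypothesized value
t = (beta_hat - beta_0) / se
df = 58                                    # hypothetical residual degrees of freedom
p = 2 * stats.t.sf(abs(t), df)             # two-sided p-value
print(t, p)                                # t is about 0.83, p about 0.41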
54,338 | t test for intercept? | Hypothesis testing is like mathematical ''proof by contradiction''; if you want to prove something, then you assume that the opposite is true and using this ''opposite is true'' assumption you try to find a contradiction. As contradictions are impossible, the assumption ''opposite is true'' must be false.
In hypothesis testing you do the same; if you want to show that the intercept (or the slope) is significantly different from zero, then you assume the opposite, i.e. $H_0: \beta_0 = 0$, and try to derive a contradiction from this. As in statistics nothing is impossible, we will not be able to derive something ''contradictory'', but we will try to show that this leads to something ''very improbable''.
So you first have to define what you mean by ''very improbable'', i.e. you must choose a significance level, e.g. 5%. If the probability of ''something'' is below this chosen significance level we will take it as very ''improbable''.
To summarize: (a) define what you mean by ''very improbable'', i.e. define your significance level, e.g. $\alpha=0.05$; (b) if you want to ''show'' that $\beta_0 \ne 0$ then assume the opposite (i.e. $H_0: \beta_0 = 0$) and try to find a ''contradiction'', i.e. something that occurs with low probability.
The theory of linear regression tells us that, if $H_0: \beta_0=0$ is true (and the assumptions of linear regression are fulfilled), then the estimate of $\beta_0$, i.e. $\hat{\beta_0}$, has a normal distribution with mean $\beta_0$, which we assumed to be equal to zero. The p-values in your table are derived from this normal distribution. It seems to be 0.4, which is not below our chosen significance level $\alpha=0.05$, so we do not find a ''contradiction'' and we can not ''prove'' that $\beta_0 \ne 0$.
Note: if we say that we can not prove that $\beta_0 \ne 0$, then this in no way means that we can prove that it is zero!
So if we reject the null hypothesis then we have ''statistically proven'' $H_1$; if we can not reject $H_0$ then we can not conclude that we have proven $H_0$, we simply accept it (which is different from having proven it).
Applying this to e.g. the coefficient of 'Horsepower', let me call this coefficient $\beta_1$: if I want to show that this coefficient is different from zero then I assume the opposite, $H_0: \beta_1 = 0$, and assuming this I can find a p-value of zero, so something very improbable. So the assumption $H_0: \beta_1 = 0$ leads to something that is very improbable, therefore it must be false and thus $H_1: \beta_1 \ne 0$ is ''statistically proven''.
Similarly for the intercept, let me call it $\beta_0$: if we want to show that it is non-zero, then assume the opposite $H_0: \beta_0=0$; if that is true, then you find a p-value of 0.4, which is not so ''improbable''. So the assumption $H_0: \beta_0 = 0$ does not lead to a ''statistical contradiction'' and we find no evidence that $H_1: \beta_0 \ne 0$ is true.
Note that 'finding no evidence that $H_1:\beta_0 \ne 0$ is true' does not imply that the opposite ($H_0: \beta_0 = 0$) is ''statistically proven''; we simply can not find indications that $H_0$ could be false, and therefore we ''accept'' it.
Important remark: the assumption $H_0: \beta_0 = 0$ is used in the computation of the p-value. It is only because we assume that $\beta_0 = 0$ that we can compute a p-value; without that assumption we do not know the distribution and can not compute probabilities.
So - by the same logic as above - if I want to ''statistically prove'' that $\beta_0 = 0$ then I have to assume the opposite, so my $H_0$ would then be $H_0: \beta_0 \ne 0$, and we have to find something improbable so that I can reject $H_0$. But there is a problem here: when I assume that $H_0: \beta_0 \ne 0$, then I can not guess the value of $\beta_0$ (it could be anything that is not zero, so 5 or 1 million, ...). So my assumption $H_0: \beta_0 \ne 0$ does not allow me to fix the distribution and does not allow me to compute p-values ... so I am stuck here.
Very brief summary: the goal is to reject $H_0$ in order to find ''statistical evidence'' for $H_1$, when $H_0$ can not be rejected we just ''accept'' it. The assumption under $H_0$ must be precise enough to fully determine the distribution of the test statistic, else we can not compute p-values. | t test for intercept? | Hypothesis testing is like mathematical ''proof by contradiction''; if you want to prove something, then you assume that the opposite is true and using this ''opposite is true'' assumption you try to | t test for intercept?
Hypothesis testing is like mathematical ''proof by contradiction''; if you want to prove something, then you assume that the opposite is true and using this ''opposite is true'' assumption you try to find a contradiction. As contradictions are impossible, the assumption ''opposite is true'' must be false.
In hypothesis testing you do the same; if you want to show that the intercept (or the slope) is significantly different from zero, then you assume the opposite, i.e. $H_0: \beta_0 = 0$, and try to derive a contradiction from this. As in statistics nothing is impossible, we will not be able to derive something ''contradictory'', but we will try to show that this leads to something ''very improbable''.
So you first have to define what you mean by ''very improbable'', i.e. you must choose a significance level, e.g. 5%. If the probability of ''something'' is below this chosen significance level we will take it as very ''improbable''.
To summarize: (a) define what you mean by ''very improbable'', i.e. define your significance level, e.g. $\alpha=0.05$; (b) if you want to ''show'' that $\beta_0 \ne 0$ then assume the opposite (i.e. $H_0: \beta_0 = 0$) and try to find a ''contradiction'', i.e. something that occurs with low probability.
The theory of linear regression tells us that, if $H_0: \beta_0=0$ is true (and the assumptions of linear regression are fulfilled), then the estimate of $\beta_0$, i.e. $\hat{\beta_0}$, has a normal distribution with mean $\beta_0$, which we assumed to be equal to zero. The p-values in your table are derived from this normal distribution. It seems to be 0.4, which is not below our chosen significance level $\alpha=0.05$, so we do not find a ''contradiction'' and we can not ''prove'' that $\beta_0 \ne 0$.
Note: if we say that we can not prove that $\beta_0 \ne 0$, then this in no way means that we can prove that it is zero!
So if we reject the null hypothesis then we have ''statistically proven'' $H_1$; if we can not reject $H_0$ then we can not conclude that we have proven $H_0$, we simply accept it (which is different from having proven it).
Applying this to e.g. the coefficient of 'Horsepower', let me call this coefficient $\beta_1$: if I want to show that this coefficient is different from zero then I assume the opposite, $H_0: \beta_1 = 0$, and assuming this I can find a p-value of zero, so something very improbable. So the assumption $H_0: \beta_1 = 0$ leads to something that is very improbable, therefore it must be false and thus $H_1: \beta_1 \ne 0$ is ''statistically proven''.
Similarly for the intercept, let me call it $\beta_0$: if we want to show that it is non-zero, then assume the opposite $H_0: \beta_0=0$; if that is true, then you find a p-value of 0.4, which is not so ''improbable''. So the assumption $H_0: \beta_0 = 0$ does not lead to a ''statistical contradiction'' and we find no evidence that $H_1: \beta_0 \ne 0$ is true.
Note that 'finding no evidence that $H_1:\beta_0 \ne 0$ is true' does not imply that the opposite ($H_0: \beta_0 = 0$) is ''statistically proven''; we simply can not find indications that $H_0$ could be false, and therefore we ''accept'' it.
Important remark: the assumption $H_0: \beta_0 = 0$ is used in the computation of the p-value. It is only because we assume that $\beta_0 = 0$ that we can compute a p-value; without that assumption we do not know the distribution and can not compute probabilities.
So - by the same logic as above - if I want to ''statistically prove'' that $\beta_0 = 0$ then I have to assume the opposite, so my $H_0$ would then be $H_0: \beta_0 \ne 0$, and we have to find something improbable so that I can reject $H_0$. But there is a problem here: when I assume that $H_0: \beta_0 \ne 0$, then I can not guess the value of $\beta_0$ (it could be anything that is not zero, so 5 or 1 million, ...). So my assumption $H_0: \beta_0 \ne 0$ does not allow me to fix the distribution and does not allow me to compute p-values ... so I am stuck here.
Very brief summary: the goal is to reject $H_0$ in order to find ''statistical evidence'' for $H_1$, when $H_0$ can not be rejected we just ''accept'' it. The assumption under $H_0$ must be precise enough to fully determine the distribution of the test statistic, else we can not compute p-values. | t test for intercept?
Hypothesis testing is like mathematical ''proof by contradiction''; if you want to prove something, then you assume that the opposite is true and using this ''opposite is true'' assumption you try to |
54,339 | Relationship between R2 and correlation coefficient [duplicate] | The usual way of interpreting the coefficient of determination $R^2$ is to see it as the percentage of the variation of the dependent variable $y$ ($\text{Var}(y)$) that can be explained by our model.
For the proof we have to know the following (taken from OLS theory and general statistics): in simple linear regression the slope is $\hat\beta_1 = r\,s_y/s_x$ (with $r$ the sample correlation), the fitted values satisfy $\hat{y}_i - \bar{y} = \hat\beta_1(x_i - \bar{x})$, and $R^2 = \sum_i(\hat{y}_i-\bar{y})^2 / \sum_i(y_i-\bar{y})^2$. Combining these gives $R^2 = \hat\beta_1^2\,s_x^2/s_y^2 = r^2$, i.e. $R^2$ is the squared correlation coefficient.
I hope this answer clears your doubt. | Relationship between R2 and correlation coefficient [duplicate] | The usual way of interpreting the coefficient of determination $R^2$ is to see it as the percentage of the variation of the dependent variable $y$ ($\text{Var}(y)$) that can be explained by our model.
For the proof w | Relationship between R2 and correlation coefficient [duplicate]
The usual way of interpreting the coefficient of determination $R^2$ is to see it as the percentage of the variation of the dependent variable $y$ ($\text{Var}(y)$) that can be explained by our model.
For the proof we have to know the following (taken from OLS theory and general statistics): in simple linear regression the slope is $\hat\beta_1 = r\,s_y/s_x$ (with $r$ the sample correlation), the fitted values satisfy $\hat{y}_i - \bar{y} = \hat\beta_1(x_i - \bar{x})$, and $R^2 = \sum_i(\hat{y}_i-\bar{y})^2 / \sum_i(y_i-\bar{y})^2$. Combining these gives $R^2 = \hat\beta_1^2\,s_x^2/s_y^2 = r^2$, i.e. $R^2$ is the squared correlation coefficient.
I hope this answer clears your doubt. | Relationship between R2 and correlation coefficient [duplicate]
The usual way of interpreting the coefficient of determination $R^2$ is to see it as the percentage of the variation of the dependent variable $y$ ($\text{Var}(y)$) that can be explained by our model.
For the proof w |
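The identity is easy to check numerically for simple linear regression (a sketch with simulated data; the coefficients and sample size are arbitrary):
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.0 + 1.5 * x + rng.normal(size=200)

b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # OLS slope
b0 = y.mean() - b1 * x.mean()                         # OLS intercept
yhat = b0 + b1 * x
r2 = 1 - np.sum((y - yhat)**2) / np.sum((y - y.mean())**2)
print(r2, np.corrcoef(x, y)[0, 1]**2)   # identical up to floating point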
54,340 | How to update weights in a neural network using gradient descent with mini-batches? | Mini-batch is implemented basically as you describe in 2.
Epoch starts. We sample and feedforward a minibatch, get the error and backprop it, i.e. update the weights. We repeat this until we have sampled the full data set. Epoch over.
Assuming that the network is minimizing the following objective function:
$$
\frac{\lambda}{2}||\theta||^2 + \frac{1}{n}\sum_{i=1}^n E(x^{(i)}, y^{(i)}, \theta)
$$
This is essentially the weights update step
$$
\theta = (1 - \alpha \lambda) \theta - \alpha \frac{1}{b}\sum_{k=i}^{i+b-1} \frac{\partial E}{\partial \theta}(x^{(k)}, y^{(k)}, \theta)
$$
where the following symbols mean:
$E$ = the error measure (also sometimes denoted as cost measure $J$)
$\theta$ = weights
$\alpha$ = learning rate
$1 - \alpha \lambda$ = weight decay
$b$ = batch size
$x$ = variables
You loop over the consecutive batches (i.e. increment by $b$) and update the weights. This more frequent weight updating combined with vectorization is what allows mini-batch gradient descent to tend to converge more quickly than either generic batch or stochastic methods. | How to update weights in a neural network using gradient descent with mini-batches? | Mini-batch is implemented basically as you describe in 2.
Epoch starts. We sample and feedforward a minibatch, get the error and backprop it, i.e. update the weights. We repeat this until we have sa | How to update weights in a neural network using gradient descent with mini-batches?
Mini-batch is implemented basically as you describe in 2.
Epoch starts. We sample and feedforward a minibatch, get the error and backprop it, i.e. update the weights. We repeat this until we have sampled the full data set. Epoch over.
Assuming that the network is minimizing the following objective function:
$$
\frac{\lambda}{2}||\theta||^2 + \frac{1}{n}\sum_{i=1}^n E(x^{(i)}, y^{(i)}, \theta)
$$
This is essentially the weights update step
$$
\theta = (1 - \alpha \lambda) \theta - \alpha \frac{1}{b}\sum_{k=i}^{i+b-1} \frac{\partial E}{\partial \theta}(x^{(k)}, y^{(k)}, \theta)
$$
where the following symbols mean:
$E$ = the error measure (also sometimes denoted as cost measure $J$)
$\theta$ = weights
$\alpha$ = learning rate
$1 - \alpha \lambda$ = weight decay
$b$ = batch size
$x$ = variables
You loop over the consecutive batches (i.e. increment by $b$) and update the weights. This more frequent weight updating combined with vectorization is what allows mini-batch gradient descent to tend to converge more quickly than either generic batch or stochastic methods. | How to update weights in a neural network using gradient descent with mini-batches?
Mini-batch is implemented basically as you describe in 2.
Epoch starts. We sample and feedforward a minibatch, get the error and backprop it, i.e. update the weights. We repeat this until we have sa |
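A minimal sketch of that loop (plain numpy, with linear least squares standing in for the network's error $E$; the learning rate, decay and batch size are arbitrary choices):
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=1000)

theta = np.zeros(5)
alpha, lam, b = 0.05, 1e-3, 32
for epoch in range(20):
    order = rng.permutation(len(y))          # sample the data set in random order
    for start in range(0, len(y), b):        # loop over consecutive batches
        batch = order[start:start + b]
        Xb, yb = X[batch], y[batch]
        grad = Xb.T @ (Xb @ theta - yb) / len(batch)       # (1/b) sum of dE/dtheta
        theta = (1 - alpha * lam) * theta - alpha * grad   # decay plus gradient step
print(theta)   # close to the true coefficients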
54,341 | Why is ridge regression giving different results in Matlab and Python? | MATLAB always uses the centred and scaled variables for the computations within ridge. It just back-transforms them before returning them. As you have a really small matrix this probably makes a noticeable difference. You can reproduce the Python results in MATLAB easily:
X = [1 1 2 ; 3 4 2 ; 6 5 2 ; 5 5 3];
Y = [1 0 0 1];
k = 10; % which is the ridge parameter
Xn = [ones(4,1), X];
(Xn'*Xn + diag([0,k,k,k]))\ (Xn'*Y') %Same as sklearn
ans =
0.7165
-0.0377
-0.0544
0.0572 | Why is ridge regression giving different results in Matlab and Python? | MATLAB always uses the centred and scaled variables for the computations within ridge. It just back-transforms them before returning them. As you have a really small matrix this probably makes a notic | Why is ridge regression giving different results in Matlab and Python?
MATLAB always uses the centred and scaled variables for the computations within ridge. It just back-transforms them before returning them. As you have a really small matrix this probably makes a noticeable difference. You can reproduce the Python results in MATLAB easily:
X = [1 1 2 ; 3 4 2 ; 6 5 2 ; 5 5 3];
Y = [1 0 0 1];
k = 10; % which is the ridge parameter
Xn = [ones(4,1), X];
(Xn'*Xn + diag([0,k,k,k]))\ (Xn'*Y') %Same as sklearn
ans =
0.7165
-0.0377
-0.0544
0.0572 | Why is ridge regression giving different results in Matlab and Python?
MATLAB always uses the centred and scaled variables for the computations within ridge. It just back-transforms them before returning them. As you have a really small matrix this probably makes a notic |
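The same computation in Python (a sketch; scikit-learn's Ridge leaves the intercept unpenalized, which is exactly what the zero in diag([0,k,k,k]) encodes, so the two routes should agree):
import numpy as np
from sklearn.linear_model import Ridge

X = np.array([[1, 1, 2], [3, 4, 2], [6, 5, 2], [5, 5, 3]], dtype=float)
y = np.array([1, 0, 0, 1], dtype=float)
k = 10.0

Xn = np.column_stack([np.ones(4), X])
print(np.linalg.solve(Xn.T @ Xn + np.diag([0, k, k, k]), Xn.T @ y))

r = Ridge(alpha=k).fit(X, y)
print(r.intercept_, r.coef_)   # same four numbers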
54,342 | Probability of product of two random variables | Since the distribution of $XY$ is characterised by its cdf, you want to compute $\mathbb{P}(XY<z)$ for an arbitrary value $z$. Let us assume first that $Y$ is always positive. Then
$$\eqalign{\mathbb{P}(XY<z)&=\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)]\\&=\mathbb{E}[\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)|Y]]\\&=\mathbb{E}[\mathbb{P}(XY<z|Y)]\\&=\mathbb{E}[F_X(z/Y)]}$$
If $Y$ can take both positive and negative values,
$$\eqalign{\mathbb{P}(XY<z)&=\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)]\\&=\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)\mathbb{I}_{(-\infty,0)}(Y)]+\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)\mathbb{I}_{(0,\infty)}(Y)]\\&=\mathbb{E}[\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)|Y]\mathbb{I}_{(-\infty,0)}(Y)]+\mathbb{E}[\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)|Y]\mathbb{I}_{(0,\infty)}(Y)]\\&=\mathbb{E}[\mathbb{P}(XY<z|Y)\mathbb{I}_{(-\infty,0)}(Y)]+\mathbb{E}[\mathbb{P}(XY<z|Y)\mathbb{I}_{(0,\infty)}(Y)]\\&=\mathbb{E}[\mathbb{I}_{(-\infty,0)}(Y)\{1-F_X(z/Y)\}]+\mathbb{E}[\mathbb{I}_{(0,\infty)}(Y)F_X(z/Y)]}$$
Depending on the setting, the density of $XY$ can then be obtained by differentiation of the above. | Probability of product of two random variables | Since the distribution of $XY$ is characterised by its cdf, you want to compute $\mathbb{P}(XY<z)$ for an arbitrary value $z$. Let us assume first that $Y$ is always positive. Then
$$\eqalign{\mathbb | Probability of product of two random variables
Since the distribution of $XY$ is characterised by its cdf, you want to compute $\mathbb{P}(XY<z)$ for an arbitrary value $z$. Let us assume first that $Y$ is always positive. Then
$$\eqalign{\mathbb{P}(XY<z)&=\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)]\\&=\mathbb{E}[\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)|Y]]\\&=\mathbb{E}[\mathbb{P}(XY<z|Y)]\\&=\mathbb{E}[F_X(z/Y)]}$$
If $Y$ can take both positive and negative values,
$$\eqalign{\mathbb{P}(XY<z)&=\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)]\\&=\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)\mathbb{I}_{(-\infty,0)}(Y)]+\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)\mathbb{I}_{(0,\infty)}(Y)]\\&=\mathbb{E}[\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)|Y]\mathbb{I}_{(-\infty,0)}(Y)]+\mathbb{E}[\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)|Y]\mathbb{I}_{(0,\infty)}(Y)]\\&=\mathbb{E}[\mathbb{P}(XY<z|Y)\mathbb{I}_{(-\infty,0)}(Y)]+\mathbb{E}[\mathbb{P}(XY<z|Y)\mathbb{I}_{(0,\infty)}(Y)]\\&=\mathbb{E}[\mathbb{I}_{(-\infty,0)}(Y)\{1-F_X(z/Y)\}]+\mathbb{E}[\mathbb{I}_{(0,\infty)}(Y)F_X(z/Y)]}$$
Depending on the setting, the density of $XY$ can then be obtained by differentiation of the above. | Probability of product of two random variables
Since the distribution of $XY$ is characterised by its cdf, you want to compute $\mathbb{P}(XY<z)$ for an arbitrary value $z$. Let us assume first that $Y$ is always positive. Then
$$\eqalign{\mathbb |
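A Monte Carlo sketch of the mixed-sign identity (X standard normal and Y uniform on (-1, 1) are arbitrary choices):
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, z = 10**6, 0.4
X = rng.normal(size=n)
Y = rng.uniform(-1, 1, size=n)

mc = np.mean(X * Y < z)
# E[ 1{Y>0} F_X(z/Y) ] + E[ 1{Y<0} (1 - F_X(z/Y)) ]
formula = np.mean(np.where(Y > 0, norm.cdf(z / Y), 1 - norm.cdf(z / Y)))
print(mc, formula)   # two estimates of the same probability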
54,343 | Probability of product of two random variables | $ P(XY=k)= \sum_{t}P(X=t,Y=k/t)=\sum_{t}P(Y=k/t|X=t)P(X=t)$
Now, if $X$ and $Y$ are independent, then $P(Y=k/t|X=t)=P(Y=k/t)$ | Probability of product of two random variables | $ P(XY=k)= \sum_{t}P(X=t,Y=k/t)=\sum_{t}P(Y=k/t|X=t)P(X=t)$
Now, if $X$ and $Y$ are independent, then $P(Y=k/t|X=t)=P(Y=k/t)$ | Probability of product of two random variables
$ P(XY=k)= \sum_{t}P(X=t,Y=k/t)=\sum_{t}P(Y=k/t|X=t)P(X=t)$
Now, if $X$ and $Y$ are independent, then $P(Y=k/t|X=t)=P(Y=k/t)$ | Probability of product of two random variables
$ P(XY=k)= \sum_{t}P(X=t,Y=k/t)=\sum_{t}P(Y=k/t|X=t)P(X=t)$
Now, if $X$ and $Y$ are independent, then $P(Y=k/t|X=t)=P(Y=k/t)$
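A quick check of this sum with two independent fair dice (an arbitrary discrete example):
from fractions import Fraction

p = {v: Fraction(1, 6) for v in range(1, 7)}   # pmf of one die
k = 12
prob = sum(p[t] * p[k // t] for t in p if k % t == 0 and k // t in p)
print(prob)   # 1/9, from the pairs (2,6), (3,4), (4,3), (6,2)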
54,344 | Multilevel model with nested repeated measures design | The only random part here is the individual. Both Time and Treatment are fixed parts. As I understand it, you want global (ie. fixed) estimates of the effect of
Time
Each level of Treatment (except for the reference level)
The interaction between each level of Treatment (except for the
reference level) and Time.
The following models will give you that.
library(lme4)
fm1 <- lmer(PosQ ~ Treatm * Time + (1|ID), data = analyses.4)
fm2 <- glmer(Conc ~ Treatm * Time + (1|ID), data = analyses.4, family = binomial)
That being said, you can get a random effect of time, ie. a random slope model where the effect of time varies between the individuals.
fm3 <- lmer(PosQ ~ Treatm * Time + (Time|ID), data = analyses.4)
fm4 <- glmer(Conc ~ Treatm * Time + (Time|ID), data = analyses.4, family = binomial)
This is possible since there is within-subject variation with respect to time. However, since there is no within-subject variation with respect to treatment, you cannot do the same for treatment.
Since there is no within-subject variation with respect to treatment, the effect of time in a random slope model is actually the deviation between the individual effect of the particular treatment that the individual received and the global estimate of Time, which would measure the average effect of the treatment that corresponds to the reference category of the variable Treatment.
You can use anova() to compare the models and test whether or not it is justified to let the effect of time vary by subject:
anova(fm1, fm3)
anova(fm2, fm4)
would do the testing you need. | Multilevel model with nested repeated measures design | The only random part here is the individual. Both Time and Treatment are fixed parts. As I understand it, you want global (ie. fixed) estimates of the effect of
Time
Each level of Treatment (excep | Multilevel model with nested repeated measures design
The only random part here is the individual. Both Time and Treatment are fixed parts. As I understand it, you want global (ie. fixed) estimates of the effect of
Time
Each level of Treatment (except for the reference level)
The interaction between each level of Treatment (except for the
reference level) and Time.
The following models will give you that.
library(lme4)
fm1 <- lmer(PosQ ~ Treatm * Time + (1|ID), data = analyses.4)
fm2 <- glmer(Conc ~ Treatm * Time + (1|ID), data = analyses.4, family = binomial)
That being said, you can get a random effect of time, ie. a random slope model where the effect of time varies between the individuals.
fm3 <- lmer(PosQ ~ Treatm * Time + (Time|ID), data = analyses.4)
fm4 <- glmer(Conc ~ Treatm * Time + (Time|ID), data = analyses.4, family = binomial)
This is possible since there is within-subject variation with respect to time. However, since there is no within-subject variation with respect to treatment, you cannot do the same for treatment.
Since there is no within-subject variation with respect to treatment, the effect of time in a random slope model is actually the deviation between the individual effect of the particular treatment that the individual received and the global estimate of Time, which would measure the average effect of the treatment that corresponds to the reference category of the variable Treatment.
You can use anova() to compare the models and test whether or not it is justified to let the effect of time vary by subject:
anova(fm1, fm3)
anova(fm2, fm4)
would do the testing you need. | Multilevel model with nested repeated measures design
The only random part here is the individual. Both Time and Treatment are fixed parts. As I understand it, you want global (ie. fixed) estimates of the effect of
Time
Each level of Treatment (excep |
54,345 | gam smoother vs parametric term (concurvity difference) | The concurvity moves from the stated smooth terms to the parametric terms, which concurvity groups in total under the para column of the matrix or matrices returned.
Here's a modified example from ?concurvity
library("mgcv")
## simulate data with concurvity...
set.seed(8)
n<- 200
f2 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 *
(10 * x)^3 * (1 - x)^10
t <- sort(runif(n)) ## first covariate
## make covariate x a smooth function of t + noise...
x <- f2(t) + rnorm(n)*3
## simulate response dependent on t and x...
y <- sin(4*pi*t) + exp(x/20) + rnorm(n)*.3
## fit model...
b <- gam(y ~ s(t,k=15) + s(x,k=15), method="REML")
Now add a linear term and refit
x2 <- seq_len(n) + rnorm(n)*3
b2 <- update(b, . ~ . + x2)
Now look at the concurvity of the two models
## assess concurvity between each term and `rest of model'...
concurvity(b)
concurvity(b2)
These produce
> concurvity(b)
para s(t) s(x)
worst 1.06587e-24 0.60269087 0.6026909
observed 1.06587e-24 0.09576829 0.5728602
estimate 1.06587e-24 0.24513981 0.4659564
> concurvity(b2)
para s(t) s(x)
worst 0.9990068 0.9970541 0.6042295
observed 0.9990068 0.7866776 0.5733337
estimate 0.9990068 0.9111690 0.4668871
Note that x2 is essentially a noisy version of t:
> cor(t, x2)
[1] 0.9975977
and hence the concurvity has gone up from essentially 0 in b to almost 1 in b2.
Now if we add x2 as a smooth function instead...
concurvity(update(b, . ~ . + s(x2)))
we see that the para entries return to being very small and we get a measure for the spline term s(x2) directly
> concurvity(update(b, . ~ . + s(x2)))
para s(t) s(x) s(x2)
worst 1.506201e-24 0.9977153 0.6264654 0.9976988
observed 1.506201e-24 0.9838018 0.5893737 0.9963857
estimate 1.506201e-24 0.9909506 0.4921592 0.9943990
This is just how the function works in terms of the parametric terms; the focus is on the smooth terms.
Note: you are specifying gamma but fitting using REML. gamma only affects GCV and UBRE/AIC methods of smoothness selection, so you can remove this argument as it is having zero effect on the model fits. From version 1.8-23 of mgcv, the gamma argument now also affects models fitted using REML/ML, where smoothness parameters are selected by REML/ML as if the sample size were $n/\gamma$ instead of $n$. | gam smoother vs parametric term (concurvity difference) | The concurvity moves from the stated smooth terms to the parametric terms, which concurvity groups in total under the para column of the matrix or matrices returned.
Here's a modified example from ?co | gam smoother vs parametric term (concurvity difference)
The concurvity moves from the stated smooth terms to the parametric terms, which concurvity groups in total under the para column of the matrix or matrices returned.
Here's a modified example from ?concurvity
library("mgcv")
## simulate data with concurvity...
set.seed(8)
n<- 200
f2 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 *
(10 * x)^3 * (1 - x)^10
t <- sort(runif(n)) ## first covariate
## make covariate x a smooth function of t + noise...
x <- f2(t) + rnorm(n)*3
## simulate response dependent on t and x...
y <- sin(4*pi*t) + exp(x/20) + rnorm(n)*.3
## fit model...
b <- gam(y ~ s(t,k=15) + s(x,k=15), method="REML")
Now add a linear term and refit
x2 <- seq_len(n) + rnorm(n)*3
b2 <- update(b, . ~ . + x2)
Now look at the concurvity of the two models
## assess concurvity between each term and `rest of model'...
concurvity(b)
concurvity(b2)
These produce
> concurvity(b)
para s(t) s(x)
worst 1.06587e-24 0.60269087 0.6026909
observed 1.06587e-24 0.09576829 0.5728602
estimate 1.06587e-24 0.24513981 0.4659564
> concurvity(b2)
para s(t) s(x)
worst 0.9990068 0.9970541 0.6042295
observed 0.9990068 0.7866776 0.5733337
estimate 0.9990068 0.9111690 0.4668871
Note that x2 is essentially a noisy version of t:
> cor(t, x2)
[1] 0.9975977
and hence the concurvity has gone up from essentially 0 in b to almost 1 in b2.
Now if we add x2 as a smooth function instead...
concurvity(update(b, . ~ . + s(x2)))
we see that the para entries return to being very small and we get a measure for the spline term s(x2) directly
> concurvity(update(b, . ~ . + s(x2)))
para s(t) s(x) s(x2)
worst 1.506201e-24 0.9977153 0.6264654 0.9976988
observed 1.506201e-24 0.9838018 0.5893737 0.9963857
estimate 1.506201e-24 0.9909506 0.4921592 0.9943990
This is just how the function works in terms of the parametric terms; the focus is on the smooth terms.
Note: you are specifying gamma but fitting using REML. gamma only affects GCV and UBRE/AIC methods of smoothness selection, so you can remove this argument as it is having zero effect on the model fits. From version 1.8-23 of mgcv, the gamma argument now also affects models fitted using REML/ML, where smoothness parameters are selected by REML/ML as if the sample size were $n/\gamma$ instead of $n$. | gam smoother vs parametric term (concurvity difference)
The concurvity moves from the stated smooth terms to the parametric terms, which concurvity groups in total under the para column of the matrix or matrices returned.
Here's a modified example from ?co |
54,346 | Empirical verification of the probability integral transform | I believe your code just does not do what you want it to do. Here's what you want:
set.seed(154)
x <- rnorm(10000)
hist(pnorm(x))
This histogram looks uniform.
I believe
plot(ppoints(1000), pnorm(ppoints(1000)))
results in a plot of a portion of the graph of the normal cdf.
Here's a quick verification
plot((1:100 - 50)/25, pnorm((1:100 - 50)/25))
points(ppoints(25), pnorm(ppoints(25)), col="blue") | Empirical verification of the probability integral transform | I believe your code just does not do what you want it to do. Here's what you want:
set.seed(154)
x <- rnorm(10000)
hist(pnorm(x))
This histogram looks uniform.
I believe
plot(ppoints(1000), pnorm(pp | Empirical verification of the probability integral transform
I believe your code just does not do what you want it to do. Here's what you want:
set.seed(154)
x <- rnorm(10000)
hist(pnorm(x))
This histogram looks uniform.
I believe
plot(ppoints(1000), pnorm(ppoints(1000)))
results in a plot of a portion of the graph of the normal cdf.
Here's a quick verification
plot((1:100 - 50)/25, pnorm((1:100 - 50)/25))
points(ppoints(25), pnorm(ppoints(25)), col="blue") | Empirical verification of the probability integral transform
I believe your code just does not do what you want it to do. Here's what you want:
set.seed(154)
x <- rnorm(10000)
hist(pnorm(x))
This histogram looks uniform.
I believe
plot(ppoints(1000), pnorm(pp |
54,347 | If I have a k-dimensional random vector distributed as multivariate gaussian, are the elements of the random vector also gaussian? | A random vector $X$ has the multinormal distribution if all linear combinations are normally distributed. Just take the linear combination with coefficient vector $e_i = (0,0,\dots, 1,0,\dots,0)$ where the one is in place $i$. Then $e_i^T X=X_i$, the $i$th component, hence that is normally distributed.
For the additional question in the comments about the mean and variance of $X_i$:
For the expectation (mean) of a linear combination we have
$$
\DeclareMathOperator{\E}{E} \E \sum_i a_i X_i = \sum_i a_i \E X_i = \sum_i a_i \mu_i
$$
and then you can conclude using the above!
For the variance of a linear combination, use that the covariance of linear combinations is given by
$$
\DeclareMathOperator{\Cov}{Cov}
\Cov(\sum_i a_i X_i, \sum_j b_j Y_j) = \sum_i \sum_j a_i b_j \Cov(X_i,Y_j)
$$
then that
$$
\DeclareMathOperator{\Var}{Var}
\Var(\sum_i a_i X_i)=\Cov(\sum_i a_i X_i, \sum_j a_j X_j)=
\sum_i \sum_j a_i a_j \Cov(X_i, X_j)
$$
finally putting the coefficient vector $e_i$ into these formulas.
So, we are using that expectation is a linear operator, while covariance is a bilinear operator. | If I have a k-dimensional random vector distributed as multivariate gaussian, are the elements of th | A random vector $X$ have the multinormal distribution if all linear combinations are normally distributed. Just take the linear combination with coefficient vector $e_i = (0,0,\dots, 1,0,\dots,0)$ whe | If I have a k-dimensional random vector distributed as multivariate gaussian, are the elements of the random vector also gaussian?
A random vector $X$ has the multinormal distribution if all linear combinations are normally distributed. Just take the linear combination with coefficient vector $e_i = (0,0,\dots, 1,0,\dots,0)$ where the one is in place $i$. Then $e_i^T X=X_i$, the $i$th component, hence that is normally distributed.
For the additional question in the comments about the mean and variance of $X_i$:
For the expectation (mean) of a linear combination we have
$$
\DeclareMathOperator{\E}{E} \E \sum_i a_i X_i = \sum_i a_i \E X_i = \sum_i a_i \mu_i
$$
and then you can conclude using the above!
For the variance of a linear combination, use that the covariance of linear combinations is given by
$$
\DeclareMathOperator{\Cov}{Cov}
\Cov(\sum_i a_i X_i, \sum_j b_j Y_j) = \sum_i \sum_j a_i b_j \Cov(X_i,Y_j)
$$
then that
$$
\DeclareMathOperator{\Var}{Var}
\Var(\sum_i a_i X_i)=\Cov(\sum_i a_i X_i, \sum_j a_j X_j)=
\sum_i \sum_j a_i a_j \Cov(X_i, X_j)
$$
finally putting the coefficient vector $e_i$ into these formulas.
So, we are using that expectation is a linear operator, while covariance is a bilinear operator. | If I have a k-dimensional random vector distributed as multivariate gaussian, are the elements of th
A random vector $X$ has the multinormal distribution if all linear combinations are normally distributed. Just take the linear combination with coefficient vector $e_i = (0,0,\dots, 1,0,\dots,0)$ whe
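A numeric sketch of picking out component $i$ with the vector $e_i$ (the mean vector and covariance matrix below are arbitrary):
import numpy as np

mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, -0.2],
                  [0.1, -0.2, 0.5]])

i = 1
e = np.zeros(3); e[i] = 1.0
print(e @ mu, mu[i])                # mean of X_i
print(e @ Sigma @ e, Sigma[i, i])   # variance of X_i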
54,348 | If I have a k-dimensional random vector distributed as multivariate gaussian, are the elements of the random vector also gaussian? | kjetil's answer (+1) addresses the more general case of linear combinations of components of your multivariate normal distribution.
Your specific case concerns the so-called marginals of the multivariate normal distribution. And yes, the marginals of a multivariate normal distribution are again normal (multivariate normal, if you look at higher-dimensional marginals than your one-dimensional ones).
Interestingly, and importantly, the converse does not hold. A multivariate distribution can have normal marginals, but be non-multivariate normal. This is a standard homework problem. See, e.g., Kowalski (1973, The American Statistician), or here.
This indicates the importance of the "all" in kjetil's answer - it's not enough for only some linear combinations of marginals to be normal, all linear combinations need to be normal for the whole vector to be multivariate normal. | If I have a k-dimensional random vector distributed as multivariate gaussian, are the elements of th | kjetil's answer (+1) addresses the more general case of linear combinations of components of your multivariate normal distribution.
Your specific case concerns the so-called marginals of the multivari | If I have a k-dimensional random vector distributed as multivariate gaussian, are the elements of the random vector also gaussian?
kjetil's answer (+1) addresses the more general case of linear combinations of components of your multivariate normal distribution.
Your specific case concerns the so-called marginals of the multivariate normal distribution. And yes, the marginals of a multivariate normal distribution are again normal (multivariate normal, if you look at higher-dimensional marginals than your one-dimensional ones).
Interestingly, and importantly, the converse does not hold. A multivariate distribution can have normal marginals, but be non-multivariate normal. This is a standard homework problem. See, e.g., Kowalski (1973, The American Statistician), or here.
This indicates the importance of the "all" in kjetil's answer - it's not enough for only some linear combinations of marginals to be normal, all linear combinations need to be normal for the whole vector to be multivariate normal. | If I have a k-dimensional random vector distributed as multivariate gaussian, are the elements of th
kjetil's answer (+1) addresses the more general case of linear combinations of components of your multivariate normal distribution.
Your specific case concerns the so-called marginals of the multivari |
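One standard counterexample of this kind is easy to simulate (a sketch): take $Y = SX$ with $S = \pm 1$ an independent fair sign. Then $Y$ is again standard normal, but $X + Y$ has an atom at zero, so $(X, Y)$ cannot be bivariate normal:
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
X = rng.normal(size=100_000)
S = rng.choice([-1, 1], size=100_000)
Y = S * X

print(stats.kstest(Y, 'norm').pvalue)   # typically large: Y looks N(0,1)
print(np.mean(X + Y == 0))              # about 0.5: X + Y is not normal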
54,349 | Is there a book on applied linear algebra | My favourite book on linear algebra is this one, and it's quite inexpensive: only 9.99 for the Kindle version. Related to your question, this book teaches linear algebra through practical applications using Python, working on challenging problems, so some Python experience is required, though some people have learned Python through this book.
Coding the Matrix: Linear Algebra through Applications to Computer Science
http://www.amazon.com/dp/0615880991/ | Is there a book on applied linear algebra | My favourite book on linear algebra is this one. And it's quite inexpensive only 9.99 the kindle version. So related to your question this book teaches linear algebra from practical applications using | Is there a book on applied linear algebra
My favourite book on linear algebra is this one, and it's quite inexpensive: only 9.99 for the Kindle version. Related to your question, this book teaches linear algebra through practical applications using Python, working on challenging problems, so some Python experience is required, though some people have learned Python through this book.
Coding the Matrix: Linear Algebra through Applications to Computer Science
http://www.amazon.com/dp/0615880991/ | Is there a book on applied linear algebra
My favourite book on linear algebra is this one. And it's quite inexpensive only 9.99 the kindle version. So related to your question this book teaches linear algebra from practical applications using |
54,350 | Is there a book on applied linear algebra | Gilbert Strang's linear algebra is good.
If you want to learn linear algebra from scratch I think this one is pretty good too. It is even simpler than Strang's book.
http://www.amazon.com/Linear-Algebra-Applications-8th-Edition/dp/0136009298 | Is there a book on applied linear algebra | Gilbert Strang's linear algebra is good.
If you want to learn linear algebra from scratch I think this one is pretty good too. It is even simpler than Strang's book.
http://www.amazon.com/Linear-Alge | Is there a book on applied linear algebra
Gilbert Strang's linear algebra is good.
If you want to learn linear algebra from scratch I think this one is pretty good too. It is even simpler than Strang's book.
http://www.amazon.com/Linear-Algebra-Applications-8th-Edition/dp/0136009298 | Is there a book on applied linear algebra
Gilbert Strang's linear algebra is good.
If you want to learn linear algebra from scratch I think this one is pretty good too. It is even simpler than Strang's book.
http://www.amazon.com/Linear-Alge |
54,351 | Recommendations for books regarding statistical consulting | Statistical Sleuth does not describe the consulting process, but teaches methods using case studies.
To quote:
The Sleuth was written to train graduate students in disciplines other than Statistics to correctly draw and communicate statistical conclusions for their Master's and Doctoral theses, and for their eventual careers as scientists. | Recommendations for books regarding statistical consulting | Statistical Sleuth does not describe the consulting process, but teaches methods using case studies.
To quote:
The Sleuth was written to train graduate students in disciplines other than Statistics t | Recommendations for books regarding statistical consulting
Statistical Sleuth does not describe the consulting process, but teaches methods using case studies.
To quote:
The Sleuth was written to train graduate students in disciplines other than Statistics to correctly draw and communicate statistical conclusions for their Master's and Doctoral theses, and for their eventual careers as scientists. | Recommendations for books regarding statistical consulting
Statistical Sleuth does not describe the consulting process, but teaches methods using case studies.
To quote:
The Sleuth was written to train graduate students in disciplines other than Statistics t |
54,352 | Recommendations for books regarding statistical consulting | There are some books dedicated to the topic of training statistical consultants. I do not have personal experience with these books, but here are some:
"Statistical Consulting" by Javier Cabrera and Andrew McDougall (had a positive review in the American Statistician)
"Statistical Consulting: A Guide to Effective Communication" by Janice Derr (but is too expensive!)
"Guide for the New Statistical Consultant: Some Suggestions and Three Key Questions to Ask" by Frederick Ruland.
The following could maybe be useful to have a look at:
"Statistics Done Wrong: The Woefully Complete Guide" by Alex Reinhart
(there are more titles like this on amazon.com) | Recommendations for books regarding statistical consulting | There are some books dedicated to the topic of training statistical consultants. I do not have personal experience with these books, but here are some:
"Statistical Consulting" by Javier Cabrera and | Recommendations for books regarding statistical consulting
There are some books dedicated to the topic of training statistical consultants. I do not have personal experience with these books, but here are some:
"Statistical Consulting" by Javier Cabrera and Andrew McDougall (had a positive review in the American Statistician)
"Statistical Consulting: A Guide to Effective Communication" by Janice Derr (but is too expensive!)
"Guide for the New Statistical Consultant: Some Suggestions and Three Key Questions to Ask" by Frederick Ruland.
The following could maybe be useful to have a look at:
"Statistics Done Wrong: The Woefully Complete Guide" by Alex Reinhart
(there are more titles like this on amazon.com) | Recommendations for books regarding statistical consulting
There are some books dedicated to the topic of training statistical consultants. I do not have personal experience with these books, but here are some:
"Statistical Consulting" by Javier Cabrera and |
54,353 | Are products of independent random variables independent? | You are making this problem a lot harder than it needs to be because the
random variables in question are two-valued, and the problem can be
treated as one of independence of events rather than independence of
random variables. In what follows, I will treat the independence of
events even though the events will be stated in terms of random variables.
Let $Z_0,Z_1,Z_2,\cdots$ be independent random variables $\ldots$
I will take this as the assertion that the countably infinite collection of events $A_i = \{Z_i = +1\}$ is a collection of independent events. Now, a countable collection of events is said to be a collection of
independent events if each finite subset (of cardinality $2$ or
more) is a collection of independent events. Recall that
$n\geq 2$ events $B_0, B_1, \cdots, B_{n-1}$ are said to be independent events
if
$$P(B_0\cap B_1\cap \cdots \cap B_{n-1})
= P(B_0)P(B_1) \cdots P(B_{n-1})$$
and every finite subset of two or more of these events is a
collection of independent events. Alternatively,
$B_0, B_1, \cdots, B_{n-1}$ are said to be independent events
if the following $2^n$ equations hold:
$$P(B_0^*\cap B_1^*\cap \cdots \cap B_{n-1}^*)
= P(B_0^*)P(B_1^*)\cdots P(B_{n-1}^*)\tag{1}$$
Note that in $(1)$, $B_i^*$ stands for $B_i$ or $B_i^c$
(same on both sides of $(1)$) and the $2^n$ choices
($B_i$ or $B_i^c$) give us the $2^n$ equations.
For our application, $A_i = \{Z_i = +1\}$ and $A_i^c = \{Z_i=-1\}$,
and so checking whether the $2^n$ equations
$$P(A_0^*\cap A_1^*\cap \cdots \cap A_{n-1}^*)
= P(A_0^*)P(A_1^*)\cdots P(A_{n-1}^*)\tag{2}$$
hold or not, is equivalent to checking that the
joint probability mass function (pmf) of $Z_0, Z_1, \cdots, Z_{n-1}$
factors into the product of the $n$ marginal pmfs at each and
every one of the points $(\pm 1, \pm 1, \cdots, \pm 1)$ which is
what you would be doing if you had never heard of independent
events, just about independent random variables.
Thus, the statement
Let $Z_0,Z_1,Z_2,\cdots$ be independent random variables $\ldots$
does mean, among other things, that $Z_0,Z_1,Z_2,\cdots, Z_{n-1}$
is a finite collection of independent random variables. But,
does the assertion
For all $n \geq 2$, $\{Z_0,Z_1,Z_2,\cdots, Z_{n-1}\}$ is a set
of $n$ independent random variables
imply that the
countably infinite set $\{Z_0,Z_1,Z_2,\cdots \}$ is a
collection of independent random variables?
The answer is Yes, because we know by hypothesis
that some specific finite
subsets of $\{Z_0,Z_1,Z_2,\cdots \}$ are independent random
variables, while any other finite subset, say $\{Z_2, Z_5, Z_{313}\}$,
is a subset of $\{Z_0, Z_1, \cdots, Z_{313}\}$ which are independent
per the hypothesis and so the subset is also a set of independent
random variables.
In your question, with each $a_i \in \{+1, -1\}$ and
defining $b_i = \prod_{j=0}^i a_j$ which is also in $\{+1,-1\}$,
\begin{align}
P(X_0 = a_0, X_1 = a_1, \cdots, X_n = a_n)
&= P(Z_0 = a_0, Z_1 = a_0a_1, Z_2 = a_0a_1a_2, \cdots, Z_n = a_0a_1...a_n)\\
&= P(Z_0=b_0, Z_1 = b_1, \cdots, Z_n = b_n)\\
&= \prod_{i=0}^n P(Z_i = b_i)\\
&= 2^{-(n+1)}\\
&= \prod_{i=0}^n P(X_i = a_i),
\end{align}
that is, all $2^{n+1}$ equations of the form $(2)$ hold.
Thus, for each $n \geq 1$, $X_0, X_1, \cdots, X_n$ are
independent random variables, and therefore the
countably infinite collection $\{X_0, X_1, \cdots\}$
of random variables is a collection of independent
random variables.
After reading over my revised answer, perhaps it is I who is
making the problem much harder than necessary. My apologies. | Are products of independent random variables independent? | You are making this problem a lot harder than it needs to be because the
random variables in question are two-valued, and the problem can be
treated as one of independence of events rather than indepe | Are products of independent random variables independent?
You are making this problem a lot harder than it needs to be because the
random variables in question are two-valued, and the problem can be
treated as one of independence of events rather than independence of
random variables. In what follows, I will treat the independence of
events even though the events will be stated in terms of random variables.
Let $Z_0,Z_1,Z_2,\cdots$ be independent random variables $\ldots$
I will take this as the assertion that the countably infinite collection of events $A_i = \{Z_i = +1\}$ is a collection of independent events. Now, a countable collection of events is said to be a collection of
independent events if each finite subset (of cardinality $2$ or
more) is a collection of independent events. Recall that
$n\geq 2$ events $B_0, B_1, \cdots, B_{n-1}$ are said to be independent events
if
$$P(B_0\cap B_1\cap \cdots \cap B_{n-1})
= P(B_0)P(B_1) \cdots P(B_{n-1})$$
and every finite subset of two or more of these events is a
collection of independent events. Alternatively,
$B_0, B_1, \cdots, B_{n-1}$ are said to be independent events
if the following $2^n$ equations hold:
$$P(B_0^*\cap B_1^*\cap \cdots \cap B_{n-1}^*)
= P(B_0^*)P(B_1^*)\cdots P(B_{n-1}^*)\tag{1}$$
Note that in $(1)$, $B_i^*$ stands for $B_i$ or $B_i^c$
(same on both sides of $(1)$) and the $2^n$ choices
($B_i$ or $B_i^c$) give us the $2^n$ equations.
For our application, $A_i = \{Z_i = +1\}$ and $A_i^c = \{Z_i=-1\}$,
and so checking whether the $2^n$ equations
$$P(A_0^*\cap A_1^*\cap \cdots \cap A_{n-1}^*)
= P(A_0^*)P(A_1^*)\cdots P(A_{n-1}^*)\tag{2}$$
hold or not, is equivalent to checking that the
joint probability mass function (pmf) of $Z_0, Z_1, \cdots, Z_{n-1}$
factors into the product of the $n$ marginal pmfs at each and
every one of the points $(\pm 1, \pm 1, \cdots, \pm 1)$ which is
what you would be doing if you had never heard of independent
events, just about independent random variables.
Thus, the statement
Let $Z_0,Z_1,Z_2,\cdots$ be independent random variables $\ldots$
does mean, among other things, that $Z_0,Z_1,Z_2,\cdots, Z_{n-1}$
is a finite collection of independent random variables. But,
does the assertion
For all $n \geq 2$, $\{Z_0,Z_1,Z_2,\cdots, Z_{n-1}\}$ is a set
of $n$ independent random variables
imply that the
countably infinite set $\{Z_0,Z_1,Z_2,\cdots \}$ is a
collection of independent random variables?
The answer is Yes, because we know by hypothesis
that some specific finite
subsets of $\{Z_0,Z_1,Z_2,\cdots \}$ are independent random
variables, while any other finite subset, say $\{Z_2, Z_5, Z_{313}\}$,
is a subset of $\{Z_0, Z_1, \cdots, Z_{313}\}$ which are independent
per the hypothesis and so the subset is also a set of independent
random variables.
In your question, with each $a_i \in \{+1, -1\}$ and
defining $b_i = \prod_{j=0}^i a_j$ which is also in $\{+1,-1\}$,
\begin{align}
P(X_0 = a_0, X_1 = a_1, \cdots, X_n = a_n)
&= P(Z_0 = a_0, Z_1 = a_0a_1, Z_2 = a_0a_1a_2, \cdots, Z_n = a_0a_1...a_n)\\
&= P(Z_0=b_0, Z_1 = b_1, \cdots, Z_n = b_n)\\
&= \prod_{i=0}^n P(Z_i = b_i)\\
&= 2^{-(n+1)}\\
&= \prod_{i=0}^n P(X_i = a_i),
\end{align}
that is, all $2^{n+1}$ equations of the form $(2)$ hold.
Thus, for each $n \geq 1$, $X_0, X_1, \cdots, X_n$ are
independent random variables, and therefore the
countably infinite collection $\{X_0, X_1, \cdots\}$
of random variables is a collection of independent
random variables.
After reading over my revised answer, perhaps it is I who is
making the problem much harder than necessary. My apologies. | Are products of independent random variables independent?
You are making this problem a lot harder than it needs to be because the
random variables in question are two-valued, and the problem can be
treated as one of independence of events rather than indepe |
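A simulation sketch of the conclusion (assuming, as in the question, that $X_i = Z_0 Z_1 \cdots Z_i$ is the running product of the fair signs):
import numpy as np

rng = np.random.default_rng(5)
Z = rng.choice([-1, 1], size=(200_000, 6))   # independent fair signs
X = np.cumprod(Z, axis=1)                    # X_i = Z_0 Z_1 ... Z_i

print(X.mean(axis=0).round(3))                # each near 0, so P(X_i = 1) is near 1/2
print(np.corrcoef(X, rowvar=False).round(2))  # near the identity matrix
print(np.mean((X[:, 0] == 1) & (X[:, 1] == 1) & (X[:, 2] == 1)))   # near 1/8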
54,354 | Are products of independent random variables independent? | ... How do I state this precisely, if it is right? $\forall i \leq n, \sigma(X_i) \subseteq \sigma(X_n)$ ?
You have the right idea, but I would recommend using the definition of the Markov property to state this, namely that we have $P(X_n\mid X_0,\dots,X_{n-1})=P(X_n \mid X_{n-1})$. There is nothing imprecise about this as long as you have a precise definition of conditional probabilities. The $\sigma$-algebra condition you wrote is not correct.
...It seems like I assumed $X_n$ and $Z_{n+1}$ are independent. Are they?
Hint: measurable functions of independent random variables are independent (you decide if you need to prove this).
...I'm stuck.
Is what I've done right so far? Which parts are wrong? Where do I go from here?
Try to structure your answer some more. Specify the events $B_i$ under consideration, e.g. notice that since each variable only takes 2 values there are not that many different types of events to consider.
First solve for the right hand side using, e.g., the argument that
$$P(X_i = 1)=\mathbb E P(X_i = 1 \mid X_{i-1})=\mathbb E 1/2=1/2;$$ you have the right value.
Then solve for the left hand side using the Markov property as you have attempted. | Are products of independent random variables independent? | ... How do I state this precisely, if it is right? $\forall i \leq n, \sigma(X_i) \subseteq \sigma(X_n)$ ?
You have the right idea, but I would recommend using the definition of the Markov property | Are products of independent random variables independent?
... How do I state this precisely, if it is right? $\forall i \leq n, \sigma(X_i) \subseteq \sigma(X_n)$ ?
You have the right idea, but I would recommend using the definition of the Markov property to state this, namely that we have $P(X_n\mid X_0,\dots,X_{n-1})=P(X_n \mid X_{n-1})$. There is nothing imprecise about this as long as you have a precise definition of conditional probabilities. The $\sigma$-algebra condition you wrote is not correct.
...It seems like I assumed $X_n$ and $Z_{n+1}$ are independent. Are they?
Hint: measurable functions of independent random variables are independent (you decide if you need to prove this).
...I'm stuck.
Is what I've done right so far? Which parts are wrong? Where do I go from here?
Try to structure your answer some more. Specify the events $B_i$ under consideration, e.g. notice that since each variable only takes 2 values there are not that many different types of events to consider.
First solve for the right hand side using, e.g., the argument that
$$P(X_i = 1)=\mathbb E P(X_i = 1 \mid X_{i-1})=\mathbb E 1/2=1/2;$$ you have the right value.
Then solve for the left hand side using the Markov property as you have attempted. | Are products of independent random variables independent?
... How do I state this precisely, if it is right? $\forall i \leq n, \sigma(X_i) \subseteq \sigma(X_n)$ ?
You have the right idea, but I would recommend using the definition of the Markov property
54,355 | Comparing the within-subject variance between two groups of subjects | You can test this by fitting a linear mixed model. A linear mixed model is like a multiple regression model but you can have random effects. The random effects part is needed because you have multiple tests per subject. You will then model score as a function of sex and test, and the subjects are your random effects. Let's enter your test data in R:
subject <- c(1,1,1,2,2,2,3,3,3,4,4,4,5,5,5,6,6,6)
score <- c(5,2,7,3,3,2,-2,1,0,4,2,3,6,-4,-1,6,0,3)
test <- rep(c(1,2,3),6)
sex <- c(rep(1,9), rep(0,9))
testdata <- data.frame(subject, score, test, sex)
testdata
The data would look like this:
subject score test sex
1 1 5 1 1
2 1 2 2 1
3 1 7 3 1
4 2 3 1 1
5 2 3 2 1
6 2 2 3 1
7 3 -2 1 1
8 3 1 2 1
9 3 0 3 1
10 4 4 1 0
11 4 2 2 0
12 4 3 3 0
13 5 6 1 0
14 5 -4 2 0
15 5 -1 3 0
16 6 6 1 0
17 6 0 2 0
18 6 3 3 0
Now we fit two models. Both models use sex and test (1-3 in this case) as fixed effects and subject as random effect. The difference between the models is that in the second model, the variance is allowed to differ between women and men. We then compare the models using the anova() command, and if there is a significant difference, this indicates that the more complex model (the one with the differing variances per sex) provides a better fit, and we thus have indirect evidence that the difference in variance is statistically significant:
library(nlme)  # lme() and varIdent() are in the nlme package
m1 <- lme(score ~ sex + factor(test), random=~1|subject, data=testdata)
m2 <- lme(score ~ sex + factor(test), random=~1|subject, weights=varIdent(form=~1|sex), data=testdata)
anova(m1, m2)
Model df AIC BIC logLik Test L.Ratio p-value
m1 1 6 167.8734 176.6679 -77.93673
m2 2 7 169.8450 180.1051 -77.92247 1 vs 2 0.02850744 0.8659
There was no difference in this example. But if we change the score for the last female a little (changing score from 3 to -17, increasing the variance) and run the m2 model and the comparison again:
score <- c(5,2,7,3,3,2,-2,1,0,4,2,3,6,-4,-1,6,0,-17)
testdata <- data.frame(subject, score, test, sex)
m2 <- lme(score ~ sex + factor(test), random=~1|subject, weights=varIdent(form=~1|sex), data=testdata)
anova(m1, m2)
Model df AIC BIC logLik Test L.Ratio p-value
m1 1 6 105.3948 109.2291 -46.69739
m2 2 7 102.7737 107.2471 -44.38685 1 vs 2 4.621064 0.0316
Now we see a difference in AIC and logLik, and a low p-value which indicates a difference in variance between the sexes. | Comparing the within-subject variance between two groups of subjects | You can test this by fitting a linear mixed model. A linear mixed model is like a multiple regression model but you can have random effects. The random effects part is needed because you have multiple | Comparing the within-subject variance between two groups of subjects
You can test this by fitting a linear mixed model. A linear mixed model is like a multiple regression model but you can have random effects. The random effects part is needed because you have multiple tests per subject. You will then model score as a function of sex and test, and the subjects are your random effects. Let's enter your test data in R:
subject <- c(1,1,1,2,2,2,3,3,3,4,4,4,5,5,5,6,6,6)
score <- c(5,2,7,3,3,2,-2,1,0,4,2,3,6,-4,-1,6,0,3)
test <- rep(c(1,2,3),6)
sex <- c(rep(1,9), rep(0,9))
testdata <- data.frame(subject, score, test, sex)
testdata
The data would look like this:
subject score test sex
1 1 5 1 1
2 1 2 2 1
3 1 7 3 1
4 2 3 1 1
5 2 3 2 1
6 2 2 3 1
7 3 -2 1 1
8 3 1 2 1
9 3 0 3 1
10 4 4 1 0
11 4 2 2 0
12 4 3 3 0
13 5 6 1 0
14 5 -4 2 0
15 5 -1 3 0
16 6 6 1 0
17 6 0 2 0
18 6 3 3 0
Now we fit two models. Both models use sex and test (1-3 in this case) as fixed effects and subject as random effect. The difference between the models is that in the second model, the variance is allowed to differ between women and men. We then compare the models using the anova() command, and if there is a significant difference, this indicates that the more complex model (the one with the differing variances per sex) provides a better fit, and we thus have indirect evidence that the difference in variance is statistically significant:
library(nlme)  # lme() and varIdent() are in the nlme package
m1 <- lme(score ~ sex + factor(test), random=~1|subject, data=testdata)
m2 <- lme(score ~ sex + factor(test), random=~1|subject, weights=varIdent(form=~1|sex), data=testdata)
anova(m1, m2)
Model df AIC BIC logLik Test L.Ratio p-value
m1 1 6 167.8734 176.6679 -77.93673
m2 2 7 169.8450 180.1051 -77.92247 1 vs 2 0.02850744 0.8659
There was no difference in this example. But if we change the score for the last female a little (changing score from 3 to -17, increasing the variance) and run the m2 model and the comparison again:
score <- c(5,2,7,3,3,2,-2,1,0,4,2,3,6,-4,-1,6,0,-17)
testdata <- data.frame(subject, score, test, sex)
m2 <- lme(score ~ sex + factor(test), random=~1|subject, weights=varIdent(form=~1|sex), data=testdata)
anova(m1, m2)
Model df AIC BIC logLik Test L.Ratio p-value
m1 1 6 105.3948 109.2291 -46.69739
m2 2 7 102.7737 107.2471 -44.38685 1 vs 2 4.621064 0.0316
Now we see a difference in AIC and logLik, and a low p-value which indicates a difference in variance between the sexes. | Comparing the within-subject variance between two groups of subjects
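Since the second model is preferred here, a short follow-up sketch for inspecting its fitted variance structure (continuing with the m2 object above):
m2$modelStruct$varStruct         # relative standard deviation per sex
VarCorr(m2)                      # random-intercept and residual variances
intervals(m2, which = "var-cov") # approximate CIs for the variance parameters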
You can test this by fitting a linear mixed model. A linear mixed model is like a multiple regression model but you can have random effects. The random effects part is needed because you have multiple |
54,356 | Comparing the within-subject variance between two groups of subjects | The average of variances is probably not the best variable to use.
Variance is a squared result. In many cases, the standard deviation may be the better result to use, because it is on the same scale as your original data. You can then decide whether to use the arithmetic mean or, e.g., the root-mean-square again.
Avoid blindly using standard toolbox functions.
In your case, I already disagree with your variances.
The proper variances are:
6.33, 0.33, 2.33, 1, 26.3, 9.
because you must use the unbiased sample variance. Note that these estimates are much higher than yours, because of the small sample size.
As you can see, there is a massive outlier here - 26.3 is way outside your range. Averaging such squared values is not sound. Take the standard deviations instead:
2.52, 0.58, 1.53, 1.00, 5.13, 3.00
The 5.13 is still large, but not as extreme anymore. The mean standard deviation of the males is 1.54, of the females it is 3.04; the average standard deviation of both is $2.29 \pm 1.66$. But you need to be aware that at this sample size, even your estimates of the mean are pretty unreliable.
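To see where these numbers come from, a quick check in R with the first subject's scores (5, 2, 7) from the example data:
x <- c(5, 2, 7)       # first subject's three scores
var(x)                # unbiased sample variance: 6.33
mean((x - mean(x))^2) # biased (divide-by-n) variance: 4.22
sd(x)                 # standard deviation: 2.52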
You have to make an informed decision on a number of steps, including:
variance, or standard deviation? or standard error?
more than one variance/standard deviation (biased, unbiased)
more than one mean (arithmetic, harmonic, geometric, power ...)
Any statistical test such as the t-test will come with some assumptions on your input data, and making the wrong choices will have a considerable impact on your result. Sorry, but I cannot save you from studying these differences yourself. The only choice where I'm fairly confident myself is that you need to use the unbiased variant.
For your actual test, you may want to look at the standard error of the mean, too. This may at first appear redundant to the standard deviation, but it is not. It measures how good your estimation of the mean is. It is tighter than the standard deviation; but if you have differences within this standard error, any difference likely is just random. | Comparing the within-subject variance between two groups of subjects | The average of variances is probably not the best variable to use.
Variance is a squared result. In many cases, the standard deviation may be the better result to use, because it is on the same scale | Comparing the within-subject variance between two groups of subjects
The average of variances is probably not the best variable to use.
Variance is a squared result. In many cases, the standard deviation may be the better result to use, because it is on the same scale as your original data. You can then decide whether to use the arithmetic mean or, e.g., the root-mean-square again.
Avoid blindly using standard toolbox functions.
In your case, I already disagree with your variances.
The proper variances are:
6.33, 0.33, 2.33, 1, 26.3, 9.
because you must use the unbiased sample variance. Note that these estimates are much higher than yours, because of the small sample size.
As you can see, there is a massive outlier here - 26.3 is way outside your range. Averaging such squared values is not sound. Take the standard deviations instead:
2.52, 0.58, 1.53, 1.00, 5.13, 3.00
The 5.13 is still large, but not as extreme anymore. The mean standard deviation of the males is 1.54, of the females it is 3.04; the average standard deviation of both is $2.29 \pm 1.66$. But you need to be aware that at this sample size, even your estimates of the mean are pretty unreliable.
You have to make an informed decision on a number of steps, including:
variance, or standard deviation? or standard error?
more than one variance/standard deviation (biased, unbiased)
more than one mean (arithmetic, harmonic, geometric, power ...)
Any statistical test such as the t-test will come with some assumptions on your input data, and making the wrong choices will have a considerable impact on your result. Sorry, but I cannot save you from studying these differences yourself. The only choice where I'm fairly confident myself is that you need to use the unbiased variant.
For your actual test, you may want to look at the standard error of the mean, too. This may at first appear redundant to the standard deviation, but it is not. It measures how good your estimation of the mean is. It is tighter than the standard deviation; but if you have differences within this standard error, any difference likely is just random. | Comparing the within-subject variance between two groups of subjects
The average of variances is probably not the best variable to use.
Variance is a squared result. In many cases, the standard deviation may be the better result to use, because it is on the same scale |
54,357 | Comparing the within-subject variance between two groups of subjects | The fact that the same subject does the test multiple times introduces correlation among the observations. The goal is to get an estimate of the var-covar matrix of your data. We will assume that the subjects are independent, but that the scores for the same subject on the test are dependent. This means that the var-covar matrix of the data will look like a diagonal matrix, but on the main diagonal you will find $3\times3$ block matrices.
This var-covar matrix can be estimated using the function gls from the package nlme.
Using the same testdata as @Jonas Berge I have the following R-code
library(nlme)
subject <- as.factor(c(1,1,1,2,2,2,3,3,3,4,4,4,5,5,5,6,6,6))
score <- c(5,2,7,3,3,2,-2,1,0,4,2,3,6,-4,-1,6,0,3)
test <- (rep(c(1,2,3),6))
sex <- as.factor(c(rep(1,9), rep(0,9)))
testdata <- data.frame(subject, score, test, sex)
testdata
r.1<-gls(score ~ 1+sex,
data=testdata,
corr=corCAR1(form=~ 1|subject),
method="REML")
The corr=corCAR1(form=~ 1|subject) means that I assume an autoregressive pattern in the scores of the test by subject. You can try with other assumptions on this correlation structure (see help on gls).
Note that we used option method=REML to get unbiased estimates of the variance. (see Why does one have to use REML (instead of ML) for choosing among nested var-covar models?)
With summary(r.1) you see what this gives, but at this point this is not so important. The most important result in this context is the value of the log-likelihood of this model; it will be used as input for a likelihood ratio test further on.
The above model r.1 assumes a correlation structure whereby all subjects have the same variance. In the second stage we will re-estimate a model, but with different variances for the male and female subjects:
r.2<-update(r.1,
weights=varIdent(form=~1|sex),
method="REML")
The option weights=varIdent(form=~1|sex) means that we want different variances by sex. With the summary function one can again find the likelihood for this model.
The second model r.2 has different variances by sex, so it is more general than the first model, which assumes that both sexes have the same variance. Therefore the log-likelihood of the second model will be higher than the log-likelihood of the first model. With a likelihood ratio test (see What are the ''desirable'' statistical properties of the likelihood ratio test?) we can find out whether the difference is significant: the second model r.2 is more general than the first one, or the first model is nested within the second one. If we find that the second model's likelihood is significantly higher than the likelihood of the first model, then the model r.2 (with different variances for male and female) fits the data better. So if the likelihood ratio test gives a significant result, then we have reason to believe that the variance differs by sex.
The likelihood ratio test takes two times the difference between the two likelihoods, and this is (asymptotically) a $\chi^2$ with degrees of freedom equal to the difference in number of parameters between the two models. The second model has one parameter more (two variances, one per sex) so df=1.
The p-value of the test can be found with:
1-pchisq(2*(r.2$logLik - r.1$logLik), df=1)
It is $0.49$ so there is no significant difference in 'fit' between the two models and therefore we keep the simpler model, with male and female having the same variance.
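Equivalently, nlme's anova method for gls fits performs the same likelihood ratio test in one call:
anova(r.1, r.2)  # reports the LR statistic and the same p-value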
To find the variances one executes summary(r.2). At the bottom you find that the residual standard error is 2.66, squaring this yields the variance, and under 'Variance function' one finds that for sex==1 the variance is 2.66^2 * 1, for sex==0 it is 2.66^2 * 1.61. | Comparing the within-subject variance between two groups of subjects | The fact that the same subject does the test multiple times introduces correlation among the observations. The goal is to get an estimate of the var-covar matrix of your data. We will assume that the sub
The fact that the same subject does the test multiple times introduces correlation among the observations. The goal is to get an estimate of the var-covar matrix of your data. We will assume that the subjects are independent, but that the scores for the same subject on the test are dependent. This means that the var-covar matrix of the data will look like a diagonal matrix, but on the main diagonal you will find $3\times3$ block matrices.
This var-covar matrix can be estimated using the function gls from the package nlme.
Using the same testdata as @Jonas Berge I have the following R-code
library(nlme)
subject <- as.factor(c(1,1,1,2,2,2,3,3,3,4,4,4,5,5,5,6,6,6))
score <- c(5,2,7,3,3,2,-2,1,0,4,2,3,6,-4,-1,6,0,3)
test <- (rep(c(1,2,3),6))
sex <- as.factor(c(rep(1,9), rep(0,9)))
testdata <- data.frame(subject, score, test, sex)
testdata
r.1<-gls(score ~ 1+sex,
data=testdata,
corr=corCAR1(form=~ 1|subject),
method="REML")
The corr=corCAR1(form=~ 1|subject) means that I assume an autoregressive pattern in the scores of the test by subject. You can try with other assumptions on this correlation structure (see help on gls).
Note that we used option method=REML to get unbiased estimates of the variance. (see Why does one have to use REML (instead of ML) for choosing among nested var-covar models?)
With summary(r.1) you see what this gives, but at this point this is not so important. The most important result in this context is the value of the log-likelihood of this model; it will be used as input for a likelihood ratio test further on.
The above model r.1 assumes a correlation structure whereby all subjects have the same variance. In the second stage we will re-estimate a model, but with different variances for the male and female subjects:
r.2<-update(r.1,
weights=varIdent(form=~1|sex),
method="REML")
The option weights=varIdent(form=~1|sex) means that we want different variances by sex. With the summary function one can again find the likelihood for this model.
The second model r.2 has different variances by sex, so it is more general than the first model, which assumes that both sexes have the same variance. Therefore the log-likelihood of the second model will be higher than the log-likelihood of the first model. With a likelihood ratio test (see What are the ''desirable'' statistical properties of the likelihood ratio test?) we can find out whether the difference is significant: the second model r.2 is more general than the first one, or the first model is nested within the second one. If we find that the second model's likelihood is significantly higher than the likelihood of the first model, then the model r.2 (with different variances for male and female) fits the data better. So if the likelihood ratio test gives a significant result, then we have reason to believe that the variance differs by sex.
The likelihood ratio test takes two times the difference between the two likelihoods, and this is (asymptotically) a $\chi^2$ with degrees of freedom equal to the difference in number of parameters between the two models. The second model has one parameter more (two variances, one per sex) so df=1.
The p-value of the test can be found with:
1-pchisq(2*(r.2$logLik - r.1$logLik), df=1)
It is $0.49$ so there is no significant difference in 'fit' between the two models and therefore we keep the simpler model, with male and female having the same variance.
To find the variances one executes summary(r.2). At the bottom you find that the residual standard error is 2.66, squaring this yields the variance, and under 'Variance function' one finds that for sex==1 the variance is 2.66^2 * 1, for sex==0 it is 2.66^2 * 1.61. | Comparing the within-subject variance between two groups of subjects
The fact that the same subject does the test multiple times introduces correlation among the observations. The goal is to get an estimate of the var-covar matrix of your data. We will assume that the sub
54,358 | Comparing the within-subject variance between two groups of subjects | I'm not sure if you can assume normality of the performance variances, but I'm still looking into it. What you're saying makes sense.
Since you are using a within-subjects design (prone to carryover effects) I think it would be most interesting to look at how the variances are changing over time. Are your subjects performing better or worse?
Yes, you would want to use a two-sample Student's t-test to look at that. | Comparing the within-subject variance between two groups of subjects | I'm not sure if you can assume normality of the performance variances, but I'm still looking into it. What you're saying makes sense.
Since you are using a within-subjects design (prone to carryover | Comparing the within-subject variance between two groups of subjects
I'm not sure if you can assume normality of the performance variances, but I'm still looking into it. What you're saying makes sense.
Since you are using a within-subjects design (prone to carryover effects) I think it would be most interesting to look at how the variances are changing over time. Are your subjects performing better or worse?
Yes, you would want to use a two-sample Student's t-test to look at that. | Comparing the within-subject variance between two groups of subjects
I'm not sure if you can assume normality of the performance variances, but I'm still looking into it. What you're saying makes sense.
Since you are using a within-subjects design (prone to carryover |
54,359 | Comparing the within-subject variance between two groups of subjects | It is not very well known, but as an alternative to the linear mixed model suggested by Jonas Berge you can perform an F test of equality of variances
https://en.wikipedia.org/wiki/F-test_of_equality_of_variances | Comparing the within-subject variance between two groups of subjects | It is not very well known, but as an alternative to the linear mixed model suggested by Jonas Berge you can perform an F test of equality of variances
https://en.wikipedia.org/wiki/F-test_of_equality_of_var | Comparing the within-subject variance between two groups of subjects
It is not very well known, but as an alternative to the linear mixed model suggested by Jonas Berge you can perform an F test of equality of variances
https://en.wikipedia.org/wiki/F-test_of_equality_of_variances | Comparing the within-subject variance between two groups of subjects
It is not very well known, but as an alternative to the linear mixed model suggested by Jonas Berge you can perform an F test of equality of variances
https://en.wikipedia.org/wiki/F-test_of_equality_of_var |
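As a rough sketch of how this would look in R (pooling each sex's scores from the example data above; note that var.test assumes independent, normally distributed observations, so it ignores the repeated-measures structure):
male   <- c(5, 2, 7, 3, 3, 2, -2, 1, 0)  # sex == 1 subjects
female <- c(4, 2, 3, 6, -4, -1, 6, 0, 3) # sex == 0 subjects
var.test(male, female)                   # H0: the two variances are equal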
54,360 | Bias inputs in an RNN | This is basically correct.
The bias is an "offset" added to each unit in a neural network layer that's independent of the input to the layer. The bias permits a layer to model a data space that's centered around some point other than the origin.
Mathematically, a feedforward neural network layer without bias is written as $$ z = \sigma(Wx) $$ where $W$ is the weights of the layer, $x$ is the input to a layer, and $\sigma(\cdot)$ is the activation function for the layer. If you want to add a bias to this expression, it's common to create a separate parameter $$ z = \sigma(Wx + \color{red}{b}) = \sigma\left(\sum_{i=1}^n W_i x_i + \color{red}{b}\right). $$ But this is equivalent to creating a pseudo-input node $\color{red}{x_0}=1$ in the previous layer and stacking it onto the input so $\hat{x} = [\color{red}{1} \; x^\top]^\top$, with $\color{red}{b}$ being stacked onto the start of $W$ so $\hat{W} = [\color{red}{b} \; W]$: $$ z = \sigma(\hat{W}\hat{x}) = \sigma\left(\sum_{i=0}^n \hat{W}_i \hat{x}_i\right) = \sigma\left(\color{red}{\hat{W}_0 \cdot 1} + \sum_{i=1}^n \hat{W}_i \hat{x}_i\right) $$
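A quick numerical check of this equivalence in R, with arbitrary illustrative values:
# Explicit bias vs. pseudo-input fixed at 1 whose weight is the bias
set.seed(1)
W <- matrix(rnorm(6), nrow = 2)          # 2 units, 3 inputs
b <- c(0.5, -0.2)                        # bias vector
x <- c(1.0, -1.5, 2.0)                   # input
sigma <- function(z) 1 / (1 + exp(-z))   # logistic activation
z1 <- sigma(W %*% x + b)                 # explicit bias form
z2 <- sigma(cbind(b, W) %*% c(1, x))     # pseudo-input form
all.equal(as.vector(z1), as.vector(z2))  # TRUE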
A recurrent network layer is basically the same. Without a bias, the output $z_t$ is given by $$ z_t = \sigma(Wx_t + Vz_{t-1}) $$ where $V$ is an array of weights that connects the previous state of the hidden layer to the current state. Adding a bias gives the recurrent layer this form: $$ z_t = \sigma(Wx_t + Vz_{t-1} + \color{red}{b}). $$ | Bias inputs in an RNN | This is basically correct.
The bias is an "offset" added to each unit in a neural network layer that's independent of the input to the layer. The bias permits a layer to model a data space that's cent | Bias inputs in an RNN
This is basically correct.
The bias is an "offset" added to each unit in a neural network layer that's independent of the input to the layer. The bias permits a layer to model a data space that's centered around some point other than the origin.
Mathematically, a feedforward neural network layer without bias is written as $$ z = \sigma(Wx) $$ where $W$ is the weights of the layer, $x$ is the input to a layer, and $\sigma(\cdot)$ is the activation function for the layer. If you want to add a bias to this expression, it's common to create a separate parameter $$ z = \sigma(Wx + \color{red}{b}) = \sigma\left(\sum_{i=1}^n W_i x_i + \color{red}{b}\right). $$ But this is equivalent to creating a pseudo-input node $\color{red}{x_0}=1$ in the previous layer and stacking it onto the input so $\hat{x} = [\color{red}{1} \; x^\top]^\top$, with $\color{red}{b}$ being stacked onto the start of $W$ so $\hat{W} = [\color{red}{b} \; W]$: $$ z = \sigma(\hat{W}\hat{x}) = \sigma\left(\sum_{i=0}^n \hat{W}_i \hat{x}_i\right) = \sigma\left(\color{red}{\hat{W}_0 \cdot 1} + \sum_{i=1}^n \hat{W}_i \hat{x}_i\right) $$
A recurrent network layer is basically the same. Without a bias, the output $z_t$ is given by $$ z_t = \sigma(Wx_t + Vz_{t-1}) $$ where $V$ is an array of weights that connects the previous state of the hidden layer to the current state. Adding a bias gives the recurrent layer this form: $$ z_t = \sigma(Wx_t + Vz_{t-1} + \color{red}{b}). $$ | Bias inputs in an RNN
This is basically correct.
The bias is an "offset" added to each unit in a neural network layer that's independent of the input to the layer. The bias permits a layer to model a data space that's cent |
54,361 | Central Limit Theorem for Normal Distribution of Negative Binomial | You can also use the CLT directly; one form of the CLT states:
$\frac{\sum_{i=1}^nX_i-n\mu}{\sigma\sqrt{n}}\sim N(0,1)\;\Rightarrow\;\sum_{i=1}^nX_i\sim N(n\mu,n\sigma^2)$
The above equations involve two theorems:
The first one is one form of the CLT.
The second relates to the multivariate normal distribution, but it also applies to a 1-dimensional random vector.
For your case:
$\sum_{i=1}^k Y_i \sim N(\frac{k}{\pi},k\frac{1-\pi}{\pi^2})$ | Central Limit Theorem for Normal Distribution of Negative Binomial | You can also use the CLT directly; one form of the CLT states:
$\frac{\sum_{i=1}^nX_i-n\mu}{\sigma\sqrt{n}}\sim N(0,1)\;\Rightarrow\;\sum_{i=1}^nX_i\sim N(n\mu,n\sigma^2)$
The above equations involve two theorems:
T | Central Limit Theorem for Normal Distribution of Negative Binomial
You can also use the CLT directly; one form of the CLT states:
$\frac{\sum_{i=1}^nX_i-n\mu}{\sigma\sqrt{n}}\sim N(0,1)\;\Rightarrow\;\sum_{i=1}^nX_i\sim N(n\mu,n\sigma^2)$
The above equations involve two theorems:
The first one is one form of the CLT.
The second relates to the multivariate normal distribution, but it also applies to a 1-dimensional random vector.
For your case:
$\sum_{i=1}^k Y_i \sim N(\frac{k}{\pi},k\frac{1-\pi}{\pi^2})$ | Central Limit Theorem for Normal Distribution of Negative Binomial
You can also use the CLT directly; one form of the CLT states:
$\frac{\sum_{i=1}^nX_i-n\mu}{\sigma\sqrt{n}}\sim N(0,1)\;\Rightarrow\;\sum_{i=1}^nX_i\sim N(n\mu,n\sigma^2)$
The above equations involve two theorems:
T |
54,362 | Central Limit Theorem for Normal Distribution of Negative Binomial | 1.6(c) From the Central Limit Theorem we know that as the number of samples from any distribution increases, it becomes better approximated by a normal distribution.
This is not what the central limit theorem says. The CLT does not hold for every distribution, and in its standard form it concerns properly scaled and standardized sample averages. The statement $\sum_{i=1}^n X_i \underset{n \to \infty}{\to}N(n\mu_x,.)$ is not quite correct, even if we take the mode of convergence to be understood from the context. If $n$ approaches infinity, you cannot have an $n$ left on the right hand side. Indeed, if the $X_i$ are independent and identically distributed geometric random variables, $\sum_{i=1}^nX_i \overset{a.s}{\to} \infty$ so certainly the sum cannot converge in distribution, which is a weaker form of convergence, to something else.
You can save your argument by being more careful with the central limit theorem, however. | Central Limit Theorem for Normal Distribution of Negative Binomial | 1.6(c) From the Central Limit Theorem we know that as the number of samples from any distribution increases, it becomes better approximated by a normal distribution.
This is not what the central lim | Central Limit Theorem for Normal Distribution of Negative Binomial
1.6(c) From the Central Limit Theorem we know that as the number of samples from any distribution increases, it becomes better approximated by a normal distribution.
This is not what the central limit theorem says. The CLT does not hold for every distribution, and in its standard form it concerns properly scaled and standardized sample averages. The statement $\sum_{i=1}^n X_i \underset{n \to \infty}{\to}N(n\mu_x,.)$ is not quite correct, even if we take the mode of convergence to be understood from the context. If $n$ approaches infinity, you cannot have an $n$ left on the right hand side. Indeed, if the $X_i$ are independent and identically distributed geometric random variables, $\sum_{i=1}^nX_i \overset{a.s}{\to} \infty$ so certainly the sum cannot converge in distribution, which is a weaker form of convergence, to something else.
You can save your argument by being more careful with the central limit theorem, however. | Central Limit Theorem for Normal Distribution of Negative Binomial
1.6(c) From the Central Limit Theorem we know that as the number of samples from any distribution increases, it becomes better approximated by a normal distribution.
This is not what the central lim |
54,363 | Central Limit Theorem for Normal Distribution of Negative Binomial | The Central Limit Theorem makes a limiting-distribution statement for sums of random variables from which sum we have subtracted the sum's expected value, and which we have divided by its standard deviation. Denoting $\sum_{i=1}^kY_i \equiv S_k$ the CLT can be written as
$$\frac {S_k - E(S_k)}{\sqrt {{\rm Var}(S_k)}} \xrightarrow{d} \mathcal N(0,1),\;\;\; {k\rightarrow\infty} $$
Indeed a Negative Binomial ($X$) random variable with parameters $k$ (number of failures before stopping time) and $p$ (probability of success) can be written as the sum of $k$ independent and identically distributed geometric random variables (with $0$ included in the support) with common parameter $1-p$. So $\sum_{i=1}^kY_i \equiv S_k$ in our case is the sum of these $k$ geometric rv's, and $S_k = X$. We have
$$E(Y_i) = \frac {p}{1-p} \implies E(S_k) = \frac {kp}{1-p}$$
$${\rm Var}(Y_i) = \frac {p}{(1-p)^2} \implies {\rm Var}(S_k) = \frac {kp}{(1-p)^2}$$
Plugging these into the CLT expression we have
$$\frac {S_k - E(S_k)}{\sqrt {{\rm Var}(S_k)}} = \frac {X - \frac {kp}{1-p}}{\sqrt {\frac {kp}{(1-p)^2}}} \xrightarrow{d} Z \sim\mathcal N(0,1),\;\;\; {k\rightarrow\infty}$$
Then, approximately for "large $k$" (and not for $k\rightarrow \infty$) we can write (accepting that the distributional result holds for finite $k$)
$$X \sim_{approx} \left(\sqrt {\frac {kp}{(1-p)^2}}\right)\cdot Z + \frac {kp}{1-p}$$
which by standard properties of scaled and shifted random variables implies that
$$X \sim_{approx}\mathcal N \left(\frac {kp}{1-p}, \frac{kp}{(1-p)^2}\right)$$ | Central Limit Theorem for Normal Distribution of Negative Binomial | The Central Limit Theorem makes a limiting-distribution statement for sums of random variables from which sum we have subtracted the sum's expected value, and which we have divided by its standard dev | Central Limit Theorem for Normal Distribution of Negative Binomial
The Central Limit Theorem makes a limiting-distribution statement for sums of random variables from which sum we have subtracted the sum's expected value, and which we have divided by its standard deviation. Denoting $\sum_{i=1}^kY_i \equiv S_k$ the CLT can be written as
$$\frac {S_k - E(S_k)}{\sqrt {{\rm Var}(S_k)}} \xrightarrow{d} \mathcal N(0,1),\;\;\; {k\rightarrow\infty} $$
Indeed a Negative Binomial ($X$) random variable with parameters $k$ (number of failures before stopping time) and $p$ (probability of success) can be written as the sum of $k$ independent and identically distributed geometric random variables (with $0$ included in the support) with common parameter $1-p$. So $\sum_{i=1}^kY_i \equiv S_k$ in our case is the sum of these $k$ geometric rv's, and $S_k = X$. We have
$$E(Y_i) = \frac {p}{1-p} \implies E(S_k) = \frac {kp}{1-p}$$
$${\rm Var}(Y_i) = \frac {p}{(1-p)^2} \implies {\rm Var}(S_k) = \frac {kp}{(1-p)^2}$$
Plugging these into the CLT expression we have
$$\frac {S_k - E(S_k)}{\sqrt {{\rm Var}(S_k)}} = \frac {X - \frac {kp}{1-p}}{\sqrt {\frac {kp}{(1-p)^2}}} \xrightarrow{d} Z \sim\mathcal N(0,1),\;\;\; {k\rightarrow\infty}$$
Then, approximately for "large $k$" (and not for $k\rightarrow \infty$) we can write (accepting that the distributional result holds for finite $k$)
$$X \sim_{approx} \left(\sqrt {\frac {kp}{(1-p)^2}}\right)\cdot Z + \frac {kp}{1-p}$$
which by standard properties of scaled and shifted random variables implies that
$$X \sim_{approx}\mathcal N \left(\frac {kp}{1-p}, \frac{kp}{(1-p)^2}\right)$$ | Central Limit Theorem for Normal Distribution of Negative Binomial
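A quick Monte Carlo check of this approximation in R, with illustrative values $k = 50$ and $p = 0.6$ (rgeom counts failures before the first success, so the success probability is $1-p$):
set.seed(42)
k <- 50; p <- 0.6
sums <- replicate(1e4, sum(rgeom(k, prob = 1 - p)))
c(mean(sums), k * p / (1 - p))   # both near 75
c(var(sums),  k * p / (1 - p)^2) # both near 187.5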
The Central Limit Theorem makes a limiting-distribution statement for sums of random variables from which sum we have subtracted the sum's expected value, and which we have divided by its standard dev |
54,364 | Multidimensional dynamic time warping | There are two ways to do it. The way you describe is DTWI, but the other way, DTWD, can be better, because it pools the information before warping.
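A minimal sketch of the two strategies in R (an illustration, not code from the paper linked below): DTWD builds a single local-cost matrix from the pooled Euclidean distance across dimensions, while DTWI warps each dimension separately and sums the resulting distances.
dtw_dist <- function(cost) {   # cost: n x m local-cost matrix
  n <- nrow(cost); m <- ncol(cost)
  D <- matrix(Inf, n + 1, m + 1); D[1, 1] <- 0
  for (i in 1:n) for (j in 1:m)
    D[i + 1, j + 1] <- cost[i, j] + min(D[i, j], D[i, j + 1], D[i + 1, j])
  D[n + 1, m + 1]
}
dtwD <- function(X, Y)   # dependent: pool the dimensions before warping
  dtw_dist(outer(1:nrow(X), 1:nrow(Y),
                 Vectorize(function(i, j) sqrt(sum((X[i, ] - Y[j, ])^2)))))
dtwI <- function(X, Y)   # independent: warp each dimension, then sum
  sum(sapply(1:ncol(X), function(d)
    dtw_dist(abs(outer(X[, d], Y[, d], "-")))))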
There is an explanation of the differences, and an empirical study here.
http://www.cs.ucr.edu/~eamonn/Multi-Dimensional_DTW_Journal.pdf | Multidimensional dynamic time warping | There are two ways to do it. The way you describe is DTWI, but the other way, DTWD, can be better, because it pools the information before warping.
There is an explanation of the differences, and an empiri | Multidimensional dynamic time warping
There are two ways to do it. The way you describe is DTWI, but the other way, DTWD, can be better, because it pools the information before warping.
There is an explanation of the differences, and an empirical study here.
http://www.cs.ucr.edu/~eamonn/Multi-Dimensional_DTW_Journal.pdf | Multidimensional dynamic time warping
There are two ways to do it. The way you describe is DTWI, but the other way, DTWD, can be better, because it pools the information before warping.
There is an explanation of the differences, and an empiri |
54,365 | Name of single sample multinomial distribution | It's called the categorical distribution (among other things).
http://en.wikipedia.org/wiki/Categorical_distribution
This article mentions the following names: categorical, multinoulli, generalized Bernoulli | Name of single sample multinomial distribution | It's called the categorical distribution (among other things).
http://en.wikipedia.org/wiki/Categorical_distribution
This article mentions the following names: categorical, multinoulli, generalized Be | Name of single sample multinomial distribution
It's called the categorical distribution (among other things).
http://en.wikipedia.org/wiki/Categorical_distribution
This article mentions the following names: categorical, multinoulli, generalized Bernoulli | Name of single sample multinomial distribution
It's called the categorical distribution (among other things).
http://en.wikipedia.org/wiki/Categorical_distribution
This article mentions the following names: categorical, multinoulli, generalized Be |
54,366 | Naming PCA factors: is it a minor art? | You focus on "naming", but I would say that the real problem is understanding what principal components mean. You are right: this is an art. It often turns out that they are very difficult to interpret, hence all the attempts (especially in factor analysis literature and practice) to rotate the components/factors in order to achieve "simple structure", i.e. structure that will be easier to interpret (see my answer here).
I don't know where you took your figure from, but this dataset was nicely analyzed in Cosma Shalizi's lecture notes on PCA, and I quote from page 7:
This [matrix of eigenvectors] says that all the variables except the gas-mileages have a negative projection on to the first component. This means that there is a negative correlation between mileage and everything else. The first principal component tells us about whether we are getting a big, expensive gas-guzzling car with a powerful engine, or whether we are getting a small, cheap, fuel-efficient car with a wimpy engine.
The second component is a little more interesting. Engine size and gas mileage hardly project on to it at all. Instead we have a contrast between the physical size of the car (positive projection) and the price and horsepower. Basically, this axis separates mini-vans, trucks and SUVs (big, not so expensive, not so much horse-power) from sports-cars (small, expensive, lots of horse-power).
Once you've understood that, you can look for good names. | Naming PCA factors: is it a minor art? | You focus on "naming", but I would say that the real problem is understanding what principal components mean. You are right: this is an art. It often turns out that they are very difficult to interpre
You focus on "naming", but I would say that the real problem is understanding what principal components mean. You are right: this is an art. It often turns out that they are very difficult to interpret, hence all the attempts (especially in factor analysis literature and practice) to rotate the components/factors in order to achieve "simple structure", i.e. structure that will be easier to interpret (see my answer here).
I don't know where you took your figure from, but this dataset was nicely analyzed in Cosma Shalizi's lecture notes on PCA, and I quote from page 7:
This [matrix of eigenvectors] says that all the variables except the gas-mileages have a negative projection on to the first component. This means that there is a negative correlation between mileage and everything else. The first principal component tells us about whether we are getting a big, expensive gas-guzzling car with a powerful engine, or whether we are getting a small, cheap, fuel-efficient car with a wimpy engine.
The second component is a little more interesting. Engine size and gas mileage hardly project on to it at all. Instead we have a contrast between the physical size of the car (positive projection) and the price and horsepower. Basically, this axis separates mini-vans, trucks and SUVs (big, not so expensive, not so much horse-power) from sports-cars (small, expensive, lots of horse-power).
Once you've understood that, you can look for good names.
You focus on "naming", but I would say that the real problem is understanding what principal components mean. You are right: this is an art. It often turns out that they are very difficult to interpre |
54,367 | Is there a two-way Friedman's test? | Your data are ordinal ratings, so you need some form of ordinal logistic regression. But I also gather that your data are not independent ("... 15 measures per patient..."), so that needs to be taken into account as well. Thus, the appropriate method here is a mixed effects ordinal logistic regression. In R, mixed effects OLR models can be fit with the ordinal package.
Here is a brief demonstration with your data:
prtdf = read.table(text="Protein Location Concentration
Prot1 Loc1 0
...
Prot5 Loc3 0", header=T)
There are several issues with these data. First, they are not quite balanced (which is not actually a big deal):
with(prtdf, table(Protein, Location))
# Location
# Protein Loc1 Loc2 Loc3
# Prot1 9 11 11
# Prot2 11 11 11
# Prot3 11 11 11
# Prot4 11 11 11
# Prot5 11 11 11
Crucially, they are missing a patient ID indicator. I will make one up, using the assumption that the order within each category is consistent and by patient ID (this may well be totally false in reality, so be forewarned):
prtdf$ID = c(1:9, rep(1:11, times=14))
prtdf = prtdf[,c(4,1:3)]
head(prtdf, 10)
# ID Protein Location Concentration
# 1 1 Prot1 Loc1 0
# 2 2 Prot1 Loc1 0
# 3 3 Prot1 Loc1 1
# 4 4 Prot1 Loc1 0
# 5 5 Prot1 Loc1 0
# 6 6 Prot1 Loc1 2
# 7 7 Prot1 Loc1 1
# 8 8 Prot1 Loc1 1
# 9 9 Prot1 Loc1 1
# 10 1 Prot1 Loc2 1
tail(prtdf, 12)
# ID Protein Location Concentration
# 152 11 Prot5 Loc2 0
# 153 1 Prot5 Loc3 0
# 154 2 Prot5 Loc3 0
# 155 3 Prot5 Loc3 0
# 156 4 Prot5 Loc3 0
# 157 5 Prot5 Loc3 0
# 158 6 Prot5 Loc3 0
# 159 7 Prot5 Loc3 0
# 160 8 Prot5 Loc3 0
# 161 9 Prot5 Loc3 0
# 162 10 Prot5 Loc3 0
# 163 11 Prot5 Loc3 0
Next, we need to make sure that ID and Concentration are appropriately categorized as factors. (Note also that you don't have any 4's in Concentration.)
with(prtdf, table(Concentration))
# Concentration
# 0 1 2 3 4 5
# 120 26 10 5 0 2
prtdf$Concentration = factor(prtdf$Concentration, levels=0:5, ordered=T)
prtdf$ID = factor(prtdf$ID, levels=1:11)
Now we can try to fit a model:
library(ordinal)
mod = clmm(Concentration~Protein*Location+(1|ID), data=prtdf, Hess=T, nAGQ=17)
# Warning message:
# (1) Hessian is numerically singular: parameters are not uniquely determined
# In addition: Absolute convergence criterion was met, but relative criterion was not met
That crashed. The problem seems to be that all the Concentrations in "Prot4" and "Prot5" are 0:
aggregate(Concentration~Protein*Location, data=prtdf, function(x){ mean(as.numeric(x)) })
# Protein Location Concentration
# 1 Prot1 Loc1 1.666667
# 2 Prot2 Loc1 1.636364
# 3 Prot3 Loc1 1.363636
# 4 Prot4 Loc1 1.000000
# 5 Prot5 Loc1 1.000000
# 6 Prot1 Loc2 1.727273
# 7 Prot2 Loc2 2.181818
# 8 Prot3 Loc2 1.636364
# 9 Prot4 Loc2 1.000000
# 10 Prot5 Loc2 1.000000
# 11 Prot1 Loc3 1.545455
# 12 Prot2 Loc3 1.818182
# 13 Prot3 Loc3 2.000000
# 14 Prot4 Loc3 1.000000
# 15 Prot5 Loc3 1.000000
table(prtdf[prtdf$Protein%in%c("Prot4","Prot5"), "Concentration"])
# 0 1 2 3 4 5
# 66 0 0 0 0 0
We'll simply exclude those levels from the analysis:
mod2 = clmm(Concentration~Protein*Location+(1|ID), data=prtdf, Hess=T, nAGQ=7,
subset=!prtdf$Protein%in%c("Prot4","Prot5"))
summary(mod2)
# Cumulative Link Mixed Model fitted with the adaptive Gauss-Hermite
# quadrature approximation with 7 quadrature points
#
# formula: Concentration ~ Protein * Location + (1 | ID)
# data: prtdf
# subset: !prtdf$Protein %in% c("Prot4", "Prot5")
#
# link threshold nobs logLik AIC niter max.grad cond.H
# logit flexible 97 -106.48 238.97 760(2283) 1.06e-04 2.0e+02
#
# Random effects:
# Groups Name Variance Std.Dev.
# ID (Intercept) 0.5756 0.7587
# Number of groups: ID 11
#
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# ProteinProt2 -0.14342 0.85466 -0.168 0.867
# ProteinProt3 -0.96646 0.92526 -1.045 0.296
# LocationLoc2 0.03622 0.86002 0.042 0.966
# LocationLoc3 -0.17634 0.84245 -0.209 0.834
# ProteinProt2:LocationLoc2 0.96548 1.19393 0.809 0.419
# ProteinProt3:LocationLoc2 0.02491 1.31402 0.019 0.985
# ProteinProt2:LocationLoc3 0.57413 1.18357 0.485 0.628
# ProteinProt3:LocationLoc3 1.13724 1.27533 0.892 0.373
#
# Threshold coefficients:
# Estimate Std. Error z value
# 0|1 0.1591 0.6595 0.241
# 1|2 1.6771 0.6904 2.429
# 2|3 2.7789 0.7615 3.649
# 3|5 4.1442 0.9793 4.232
Now this does return a result, but because your variables are factors (or multilevel categorical variables), the individual level p-values are not of interest. You want to know the significance of the factors as a whole. In particular, I gather you may be interested in knowing if the interaction is significant. We can test that by fitting an additive model (i.e., without the interaction term) and performing a nested model test:
mod2a = clmm(Concentration~Protein+Location+(1|ID), data=prtdf, Hess=T, nAGQ=7,
subset=!prtdf$Protein%in%c("Prot4","Prot5"))
anova(mod2a, mod2)
# Likelihood ratio tests of cumulative link models:
#
# formula: link: threshold:
# mod2a Concentration ~ Protein + Location + (1 | ID) logit flexible
# mod2 Concentration ~ Protein * Location + (1 | ID) logit flexible
#
# no.par AIC logLik LR.stat df Pr(>Chisq)
# mod2a 9 233.02 -107.51
# mod2 13 238.97 -106.48 2.053 4 0.726
The interaction does not appear to be significant for these data. If you also wanted to test the variables in the additive model, that can be conveniently done like so:
drop1(mod2a, test="Chisq")
# Single term deletions
#
# Model:
# Concentration ~ Protein + Location + (1 | ID)
# Df AIC LRT Pr(>Chi)
# <none> 233.02
# Protein 2 232.44 3.4238 0.1805
# Location 2 229.74 0.7212 0.6973
They are not significant in these data either.
To provide explicit answers to your questions: Although Friedman's test is a one-way test only, ordinal logistic regression is a generalization of the Kruskal-Wallis test, and mixed effects OLR is a generalization of OLR and of Friedman's test. Bootstrapping is unlikely to help you here. Ordinal logistic regression is often called the proportional odds model. | Is there a two-way Friedman's test? | Your data are ordinal ratings, so you need some form of ordinal logistic regression. But I also gather that your data are not independent ("... 15 measures per patient..."), so that needs to be taken | Is there a two-way Friedman's test?
Your data are ordinal ratings, so you need some form of ordinal logistic regression. But I also gather that your data are not independent ("... 15 measures per patient..."), so that needs to be taken into account as well. Thus, the appropriate method here is a mixed effects ordinal logistic regression. In R, mixed effects OLR models can be fit with the ordinal package.
Here is a brief demonstration with your data:
prtdf = read.table(text="Protein Location Concentration
Prot1 Loc1 0
...
Prot5 Loc3 0", header=T)
There are several issues with these data. First, they are not quite balanced (which is not actually a big deal):
with(prtdf, table(Protein, Location))
# Location
# Protein Loc1 Loc2 Loc3
# Prot1 9 11 11
# Prot2 11 11 11
# Prot3 11 11 11
# Prot4 11 11 11
# Prot5 11 11 11
Crucially, they are missing a patient ID indicator. I will make one up, using the assumption that the order within each category is consistent and by patient ID (this may well be totally false in reality, so be forewarned):
prtdf$ID = c(1:9, rep(1:11, times=14))
prtdf = prtdf[,c(4,1:3)]
head(prtdf, 10)
# ID Protein Location Concentration
# 1 1 Prot1 Loc1 0
# 2 2 Prot1 Loc1 0
# 3 3 Prot1 Loc1 1
# 4 4 Prot1 Loc1 0
# 5 5 Prot1 Loc1 0
# 6 6 Prot1 Loc1 2
# 7 7 Prot1 Loc1 1
# 8 8 Prot1 Loc1 1
# 9 9 Prot1 Loc1 1
# 10 1 Prot1 Loc2 1
tail(prtdf, 12)
# ID Protein Location Concentration
# 152 11 Prot5 Loc2 0
# 153 1 Prot5 Loc3 0
# 154 2 Prot5 Loc3 0
# 155 3 Prot5 Loc3 0
# 156 4 Prot5 Loc3 0
# 157 5 Prot5 Loc3 0
# 158 6 Prot5 Loc3 0
# 159 7 Prot5 Loc3 0
# 160 8 Prot5 Loc3 0
# 161 9 Prot5 Loc3 0
# 162 10 Prot5 Loc3 0
# 163 11 Prot5 Loc3 0
Next, we need to make sure that ID and Concentration are appropriately categorized as factors. (Note also that you don't have any 4's in Concentration.)
with(prtdf, table(Concentration))
# Concentration
# 0 1 2 3 4 5
# 120 26 10 5 0 2
prtdf$Concentration = factor(prtdf$Concentration, levels=0:5, ordered=T)
prtdf$ID = factor(prtdf$ID, levels=1:11)
Now we can try to fit a model:
library(ordinal)
mod = clmm(Concentration~Protein*Location+(1|ID), data=prtdf, Hess=T, nAGQ=17)
# Warning message:
# (1) Hessian is numerically singular: parameters are not uniquely determined
# In addition: Absolute convergence criterion was met, but relative criterion was not met
That crashed. The problem seems to be that all the Concentrations in "Prot4" and "Prot5" are 0:
aggregate(Concentration~Protein*Location, data=prtdf, function(x){ mean(as.numeric(x)) })
# Protein Location Concentration
# 1 Prot1 Loc1 1.666667
# 2 Prot2 Loc1 1.636364
# 3 Prot3 Loc1 1.363636
# 4 Prot4 Loc1 1.000000
# 5 Prot5 Loc1 1.000000
# 6 Prot1 Loc2 1.727273
# 7 Prot2 Loc2 2.181818
# 8 Prot3 Loc2 1.636364
# 9 Prot4 Loc2 1.000000
# 10 Prot5 Loc2 1.000000
# 11 Prot1 Loc3 1.545455
# 12 Prot2 Loc3 1.818182
# 13 Prot3 Loc3 2.000000
# 14 Prot4 Loc3 1.000000
# 15 Prot5 Loc3 1.000000
table(prtdf[prtdf$Protein%in%c("Prot4","Prot5"), "Concentration"])
# 0 1 2 3 4 5
# 66 0 0 0 0 0
We'll simply exclude those levels from the analysis:
mod2 = clmm(Concentration~Protein*Location+(1|ID), data=prtdf, Hess=T, nAGQ=7,
subset=!prtdf$Protein%in%c("Prot4","Prot5"))
summary(mod2)
# Cumulative Link Mixed Model fitted with the adaptive Gauss-Hermite
# quadrature approximation with 7 quadrature points
#
# formula: Concentration ~ Protein * Location + (1 | ID)
# data: prtdf
# subset: !prtdf$Protein %in% c("Prot4", "Prot5")
#
# link threshold nobs logLik AIC niter max.grad cond.H
# logit flexible 97 -106.48 238.97 760(2283) 1.06e-04 2.0e+02
#
# Random effects:
# Groups Name Variance Std.Dev.
# ID (Intercept) 0.5756 0.7587
# Number of groups: ID 11
#
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# ProteinProt2 -0.14342 0.85466 -0.168 0.867
# ProteinProt3 -0.96646 0.92526 -1.045 0.296
# LocationLoc2 0.03622 0.86002 0.042 0.966
# LocationLoc3 -0.17634 0.84245 -0.209 0.834
# ProteinProt2:LocationLoc2 0.96548 1.19393 0.809 0.419
# ProteinProt3:LocationLoc2 0.02491 1.31402 0.019 0.985
# ProteinProt2:LocationLoc3 0.57413 1.18357 0.485 0.628
# ProteinProt3:LocationLoc3 1.13724 1.27533 0.892 0.373
#
# Threshold coefficients:
# Estimate Std. Error z value
# 0|1 0.1591 0.6595 0.241
# 1|2 1.6771 0.6904 2.429
# 2|3 2.7789 0.7615 3.649
# 3|5 4.1442 0.9793 4.232
Now this does return a result, but because your variables are factors (or multilevel categorical variables), the individual level p-values are not of interest. You want to know the significance of the factors as a whole. In particular, I gather you may be interested in knowing if the interaction is significant. We can test that by fitting an additive model (i.e., without the interaction term) and performing a nested model test:
mod2a = clmm(Concentration~Protein+Location+(1|ID), data=prtdf, Hess=T, nAGQ=7,
subset=!prtdf$Protein%in%c("Prot4","Prot5"))
anova(mod2a, mod2)
# Likelihood ratio tests of cumulative link models:
#
# formula: link: threshold:
# mod2a Concentration ~ Protein + Location + (1 | ID) logit flexible
# mod2 Concentration ~ Protein * Location + (1 | ID) logit flexible
#
# no.par AIC logLik LR.stat df Pr(>Chisq)
# mod2a 9 233.02 -107.51
# mod2 13 238.97 -106.48 2.053 4 0.726
The interaction does not appear to be significant for these data. If you also wanted to test the variables in the additive model, that can be conveniently done like so:
drop1(mod2a, test="Chisq")
# Single term deletions
#
# Model:
# Concentration ~ Protein + Location + (1 | ID)
# Df AIC LRT Pr(>Chi)
# <none> 233.02
# Protein 2 232.44 3.4238 0.1805
# Location 2 229.74 0.7212 0.6973
They are not significant in these data either.
To provide explicit answers to your questions: Although Friedman's test is a one-way test only, ordinal logistic regression is a generalization of the Kruskal-Wallis test, and mixed effects OLR is a generalization of OLR and of Friedman's test. Bootstrapping is unlikely to help you here. Ordinal logistic regression is often called the proportional odds model. | Is there a two-way Friedman's test?
Your data are ordinal ratings, so you need some form of ordinal logistic regression. But I also gather that your data are not independent ("... 15 measures per patient..."), so that needs to be taken |
54,368 | Is there a two-way Friedman's test? | You can use simple linear regression with interactions to determine the relations of proteins, locations (and their interaction) with concentrations. Following is the output from the data that you have provided:
> summary(lm(Concentration~Protein*Location, data=prtdf))
Call:
lm(formula = Concentration ~ Protein * Location, data = prtdf)
Residuals:
Min 1Q Median 3Q Max
-1.1818 -0.5455 0.0000 0.0000 4.3636
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.66667 0.27918 2.388 0.0182 *
ProteinProt2 -0.03030 0.37645 -0.080 0.9360
ProteinProt3 -0.30303 0.37645 -0.805 0.4221
ProteinProt4 -0.66667 0.37645 -1.771 0.0786 .
ProteinProt5 -0.66667 0.37645 -1.771 0.0786 .
LocationLoc2 0.06061 0.37645 0.161 0.8723
LocationLoc3 -0.12121 0.37645 -0.322 0.7479
ProteinProt2:LocationLoc2 0.48485 0.51890 0.934 0.3516
ProteinProt3:LocationLoc2 0.21212 0.51890 0.409 0.6833
ProteinProt4:LocationLoc2 -0.06061 0.51890 -0.117 0.9072
ProteinProt5:LocationLoc2 -0.06061 0.51890 -0.117 0.9072
ProteinProt2:LocationLoc3 0.30303 0.51890 0.584 0.5601
ProteinProt3:LocationLoc3 0.75758 0.51890 1.460 0.1464
ProteinProt4:LocationLoc3 0.12121 0.51890 0.234 0.8156
ProteinProt5:LocationLoc3 0.12121 0.51890 0.234 0.8156
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.8375 on 148 degrees of freedom
Multiple R-squared: 0.2019, Adjusted R-squared: 0.1263
F-statistic: 2.673 on 14 and 148 DF, p-value: 0.001637
It shows that proteins 4 and 5 are in somewhat lower concentration, but there is no difference between any location regarding concentration of protein. Also there is no interaction between location and protein, hence no one protein is preferentially concentrated in any one location. | Is there a two-way Friedman's test? | You can use simple linear regression with interactions to determine the relations of proteins, locations (and their interaction) with concentrations. Following is the output from the data that you have pro
You can use simple linear regression with interactions to determine the relations of proteins, locations (and their interaction) with concentrations. Following is the output from the data that you have provided:
> summary(lm(Concentration~Protein*Location, data=prtdf))
Call:
lm(formula = Concentration ~ Protein * Location, data = prtdf)
Residuals:
Min 1Q Median 3Q Max
-1.1818 -0.5455 0.0000 0.0000 4.3636
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.66667 0.27918 2.388 0.0182 *
ProteinProt2 -0.03030 0.37645 -0.080 0.9360
ProteinProt3 -0.30303 0.37645 -0.805 0.4221
ProteinProt4 -0.66667 0.37645 -1.771 0.0786 .
ProteinProt5 -0.66667 0.37645 -1.771 0.0786 .
LocationLoc2 0.06061 0.37645 0.161 0.8723
LocationLoc3 -0.12121 0.37645 -0.322 0.7479
ProteinProt2:LocationLoc2 0.48485 0.51890 0.934 0.3516
ProteinProt3:LocationLoc2 0.21212 0.51890 0.409 0.6833
ProteinProt4:LocationLoc2 -0.06061 0.51890 -0.117 0.9072
ProteinProt5:LocationLoc2 -0.06061 0.51890 -0.117 0.9072
ProteinProt2:LocationLoc3 0.30303 0.51890 0.584 0.5601
ProteinProt3:LocationLoc3 0.75758 0.51890 1.460 0.1464
ProteinProt4:LocationLoc3 0.12121 0.51890 0.234 0.8156
ProteinProt5:LocationLoc3 0.12121 0.51890 0.234 0.8156
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.8375 on 148 degrees of freedom
Multiple R-squared: 0.2019, Adjusted R-squared: 0.1263
F-statistic: 2.673 on 14 and 148 DF, p-value: 0.001637
It shows that proteins 4 and 5 are in somewhat lower concentration, but there is no difference between any location regarding concentration of protein. Also there is no interaction between location and protein, hence no one protein is preferentially concentrated in any one location.
You can use simple linear regression with interactions to determine the relations of proteins, locations (and their interaction) with concentrations. Following is the output from the data that you have pro
54,369 | Is there a two-way Friedman's test? | After reading a little bit more, I think I got the answer to my own question.
Since the response variable is ordinal and the other factors are categorical, I shouldn't use simple linear regression, so I've used Logistic Regression Model.
There are two packages in R that can handle ordinal variables: lrm {rms} and polr {MASS}.
I opted for lrm because it shows the p-value for each stimate, but both results are the same.
Following @rnso's notation (thanks!):
> library (rms)
> lrm (Concentration ~ Protein * Location, data = prtdf)
Logistic Regression Model
lrm(formula = Concentration ~ Protein * Location, data = prtdf)
Frequencies of Responses
0 1 2 3 5
120 26 10 5 2
Model Likelihood Discrimination Rank Discrim.
Ratio Test Indexes Indexes
Obs 163 LR chi2 60.55 R2 0.380 C 0.805
max |deriv| 0.002 d.f. 14 g 5.173 Dxy 0.610
Pr(> chi2) <0.0001 gr 176.462 gamma 0.644
gp 0.111 tau-a 0.263
Brier 0.083
Coef S.E. Wald Z Pr(>|Z|)
y>=1 -0.0288 0.5961 -0.05 0.9615
y>=2 -1.4135 0.6168 -2.29 0.0219
y>=3 -2.4530 0.6863 -3.57 0.0004
y>=5 -3.7741 0.9132 -4.13 <0.0001
Protein=Prot2 -0.1834 0.8232 -0.22 0.8237
Protein=Prot3 -0.9596 0.8933 -1.07 0.2827
Protein=Prot4 -10.4962 58.1844 -0.18 0.8568
Protein=Prot5 -10.4962 58.1844 -0.18 0.8568
Location=Loc2 -0.1326 0.8291 -0.16 0.8729
Location=Loc3 -0.3028 0.8180 -0.37 0.7113
Protein=Prot2 * Location=Loc2 1.0297 1.1526 0.89 0.3717
Protein=Prot3 * Location=Loc2 0.1827 1.2607 0.14 0.8848
Protein=Prot4 * Location=Loc2 0.1326 82.2851 0.00 0.9987
Protein=Prot5 * Location=Loc2 0.1326 82.2851 0.00 0.9987
Protein=Prot2 * Location=Loc3 0.6113 1.1413 0.54 0.5923
Protein=Prot3 * Location=Loc3 1.0436 1.2382 0.84 0.3993
Protein=Prot4 * Location=Loc3 0.3028 82.2850 0.00 0.9971
Protein=Prot5 * Location=Loc3 0.3028 82.2850 0.00 0.9971
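For comparison, a sketch of the polr version mentioned above (polr requires the response to be an ordered factor and reports t values rather than p-values; given the sparse cells here, expect large standard errors, as in the lrm fit):
library(MASS)
prtdf$Concentration <- factor(prtdf$Concentration, ordered = TRUE)
mod.polr <- polr(Concentration ~ Protein * Location, data = prtdf, Hess = TRUE)
summary(mod.polr) # same slope estimates as lrm; the intercepts differ in sign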
So with the truncated data set I gave, no significant differences are found between protein concentration, location, or their combination.
I've used the guide by the Institute for digital research and education, UCLA for "Ordinal Logistic Regression".
Any other suggestion? | Is there a two-way Friedman's test? | After reading a little bit more, I think I got the answer to my own question.
Since the response variable is ordinal and the other factors are categorical, I shouldn't use simple linear regression, so | Is there a two-way Friedman's test?
After reading a little bit more, I think I got the answer to my own question.
Since the response variable is ordinal and the other factors are categorical, I shouldn't use simple linear regression, so I've used an ordinal logistic regression model.
There are two functions in R that can handle ordinal variables: lrm {rms} and polr {MASS}.
I opted for lrm because it shows the p-value for each estimate, but both results are the same.
Following @rnso's notation (thanks!):
> library (rms)
> lrm (Concentration ~ Protein * Location, data = prtdf)
Logistic Regression Model
lrm(formula = Concentration ~ Protein * Location, data = prtdf)
Frequencies of Responses
0 1 2 3 5
120 26 10 5 2
Model Likelihood Discrimination Rank Discrim.
Ratio Test Indexes Indexes
Obs 163 LR chi2 60.55 R2 0.380 C 0.805
max |deriv| 0.002 d.f. 14 g 5.173 Dxy 0.610
Pr(> chi2) <0.0001 gr 176.462 gamma 0.644
gp 0.111 tau-a 0.263
Brier 0.083
Coef S.E. Wald Z Pr(>|Z|)
y>=1 -0.0288 0.5961 -0.05 0.9615
y>=2 -1.4135 0.6168 -2.29 0.0219
y>=3 -2.4530 0.6863 -3.57 0.0004
y>=5 -3.7741 0.9132 -4.13 <0.0001
Protein=Prot2 -0.1834 0.8232 -0.22 0.8237
Protein=Prot3 -0.9596 0.8933 -1.07 0.2827
Protein=Prot4 -10.4962 58.1844 -0.18 0.8568
Protein=Prot5 -10.4962 58.1844 -0.18 0.8568
Location=Loc2 -0.1326 0.8291 -0.16 0.8729
Location=Loc3 -0.3028 0.8180 -0.37 0.7113
Protein=Prot2 * Location=Loc2 1.0297 1.1526 0.89 0.3717
Protein=Prot3 * Location=Loc2 0.1827 1.2607 0.14 0.8848
Protein=Prot4 * Location=Loc2 0.1326 82.2851 0.00 0.9987
Protein=Prot5 * Location=Loc2 0.1326 82.2851 0.00 0.9987
Protein=Prot2 * Location=Loc3 0.6113 1.1413 0.54 0.5923
Protein=Prot3 * Location=Loc3 1.0436 1.2382 0.84 0.3993
Protein=Prot4 * Location=Loc3 0.3028 82.2850 0.00 0.9971
Protein=Prot5 * Location=Loc3 0.3028 82.2850 0.00 0.9971
So with the truncated data set I gave, no significant differences in concentration are found among proteins, locations, or their interaction.
I've used the guide by the Institute for Digital Research and Education, UCLA, on "Ordinal Logistic Regression".
Any other suggestion? | Is there a two-way Friedman's test?
After reading a little bit more, I think I got the answer to my own question.
Since the response variable is ordinal and the other factors are categorical, I shouldn't use simple linear regression, so |
54,370 | How do I calculate a t-score from a p-value (gain scores and N also available) | I'm guessing (hoping) this is a one-sided one-sample t test, where the 'gain' for
the $i$th subject is a difference $d_i$ and the test statistic is
$t = \bar d \sqrt{n}/S_d,$ in which $\bar d$ and $S_d$ are the mean
and standard deviation, respectively, of the $n$ differences.
Let $\delta$ be the population mean of $d_i$. Then one would reject $H_0: \delta = 0$ against $H_a: \delta > 0,$ for sufficiently large $t.$
The P-value would be the probability under the density curve of
Student's t distribution with $n -1$ degrees of freedom beyond
the observed $t$ statistic. If $T$ is a random variable with
that distribution then the P-value $p$ is $P(T > t)$. That is,
$1 - p = P(T \le t).$
The value $t$ you wish to reclaim from the reported $p$ is then
the inverse CDF (quantile) function of $1 - p$.
For example, if $n = 16,$ and $p = 0.037,$ then we could use
statistical software to obtain $t = 1.92$. In R, the
code qt(1-.037, 15) returns 1.920596.
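For completeness, here is a minimal R sketch of this lookup (the one-sided example above, plus the two-sided variant of the same idea):
p <- 0.037
qt(1 - p, df = 15) # one-sided: returns 1.920596
qt(1 - p/2, df = 15) # two-sided: recovers |t|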
A difficulty may be that P-values are not reported to many decimal
places, especially when $p$ is too large to lead to rejection of
$H_0$ at some significance level such as 5%. As a 'reality check', in my example,
the critical value for a 5% level test (separating acceptance
and rejection regions) is given by R code qt(.95, 15) which
returns 1.753050 (probably 1.753 in a printed
t table).
Possible difficulties: (a) Your retrieved values of observed
$t$ might be only approximate if $p$ is rounded.
In the example above, if the P-value is reported as $p = .04$,
my suggested procedure gives $t = 1.88$. (b) You must know $n$ to get the degrees
of freedom. Or, if $n$ is very large, you might get a useful
approximation from standard normal tables. (c) If the
alternative is the two-sided $H_a: \delta \ne 0$, then that alternative
will be rejected for large $|t|$ and you won't be able to
know whether the observed $t$ is positive or negative.
(d) If this is a two-sample t test, you can still retrieve $t$,
provided you know the degrees of freedom. (For the pooled version
of the test $DF = n_1 + n_2 - 2$, but for the Welch (separate-variances)
version you would need to find $DF$ via the Welch–Satterthwaite formula
$DF = \frac{(s_1^2/n_1 + s_2^2/n_2)^2}{(s_1^2/n_1)^2/(n_1-1) + (s_2^2/n_2)^2/(n_2-1)},$
which involves both sample sizes $n_1, n_2$ and the standard
deviations $s_1, s_2$ of the two samples, which I'm guessing will not be
reported if $t$ isn't.) | How do I calculate a t-score from a p-value (gain scores and N also available) | I'm guessing (hoping) this is a one-sided one-sample t test, where the 'gain' for
the $i$th subject is a difference $d_i$ and the test statistic is
$t = \bar d \sqrt{n}/S_d,$ in which $\bar d$ and $S_ | How do I calculate a t-score from a p-value (gain scores and N also available)
I'm guessing (hoping) this is a one-sided one-sample t test, where the 'gain' for
the $i$th subject is a difference $d_i$ and the test statistic is
$t = \bar d \sqrt{n}/S_d,$ in which $\bar d$ and $S_d$ are the mean
and standard deviation, respectively, of the $n$ differences.
Let $\delta$ be the population mean of $d_i$. Then one would reject $H_0: \delta = 0$ against $H_a: \delta > 0,$ for sufficiently large $t.$
The P-value would be the probability under the density curve of
Student's t distribution with $n -1$ degrees of freedom beyond
the observed $t$ statistic. If $T$ is a random variable with
that distribution then the P-value $p$ is $P(T > t)$. That is,
$1 - p = P(T \le t).$
The value $t$ you wish to reclaim from the reported $p$ is then
the inverse CDF (quantile) function of $1 - p$.
For example, if $n = 16,$ and $p = 0.037,$ then we could use
statistical software to obtain $t = 1.92$. In R, the
code qt(1-.037, 15) returns 1.920596.
A difficulty may be that P-values are not reported to many decimal
places, especially when $p$ is too large to lead to rejection of
$H_0$ at some significance level such as 5%. As a 'reality check', in my example,
the critical value for a 5% level test (separating acceptance
and rejection regions) is given by R code qt(.95, 15) which
returns 1.753050 (probably 1.753 in a printed
t table).
Possible difficulties: (a) Your retrieved values of observed
$t$ might be only approximate if $p$ is rounded.
In the example above, if the P-value is reported as $p = .04$,
my suggested procedure gives $t = 1.88$. (b) You must know $n$ to get the degrees
of freedom. Or, if $n$ is very large, you might get a useful
approximation from standard normal tables. (c) If the
alternative is the two-sided $H_a: \delta \ne 0$, then that alternative
will be rejected for large $|t|$ and you won't be able to
know whether the observed $t$ is positive or negative.
(d) If this is a two-sample t test, you can still retrieve $t$,
provided you know the degrees of freedom. (For the pooled version
of the test $DF = n_1 + n_2 - 2$, but for the Welch (separate-variances)
version you would need to find $DF$ via a formula that involves
both the two sample sizes $n_1$ and $n_2$ and the standard
deviations of the two samples, which I'm guessing will not be
reported if $t$ isn't.) | How do I calculate a t-score from a p-value (gain scores and N also available)
I'm guessing (hoping) this is a one-sided one-sample t test, where the 'gain' for
the $i$th subject is a difference $d_i$ and the test statistic is
$t = \bar d \sqrt{n}/S_d,$ in which $\bar d$ and $S_ |
54,371 | How do I calculate a t-score from a p-value (gain scores and N also available) | Why not just look it up in a t table or punch the numbers into Excel? I get the elaborate explanation above, and good job at it, but it feels a bit overkill. In Excel you can use T.INV. Just note that you need the degrees of freedom (which for the t distribution is n-1) as the second argument of T.INV(). And then just adjust based on whether the test was upper-tailed, lower-tailed, two-tailed, etc. | How do I calculate a t-score from a p-value (gain scores and N also available) | Why not just look it up in a t table or punch the numbers into Excel? I get the elaborate explanation above, and good job at it, but it feels a bit overkill. In Excel you can use T.INV. Just note that you need
Why not just look it up in a t table or punch the numbers into Excel? I get the elaborate explanation above, and good job at it, but it feels a bit overkill. In Excel you can use T.INV. Just note that you need the degrees of freedom (which for the t distribution is n-1) as the second argument of T.INV(). And then just adjust based on whether the test was upper-tailed, lower-tailed, two-tailed, etc. | How do I calculate a t-score from a p-value (gain scores and N also available)
Why not just look it up in a t table or punch the numbers into Excel? I get the elaborate explanation above, and good job at it, but it feels a bit overkill. In Excel you can use T.INV. Just note that you need
54,372 | Can I Interpret the impact of variables like positive or negative on the model by Random Forest, as I can do by Logistic Regression | The short answer is No.
The long answer follows, for which I fit a random forest to demonstrate variable importance (a.k.a variable ranking):
if (!require('randomForest')) { install.packages("randomForest"); require("randomForest") }
# Observe iris data
pairs(iris)
# Train & test split
set.seed(1) # make the random split reproducible
train <- sample(1:nrow(iris), nrow(iris) / 2)
test <- iris[-train, "Species"]
rf.iris <- randomForest(Species ~ ., data = iris, subset = train,
                        mtry = 3, importance = TRUE)
yhat.rf <- predict(rf.iris, newdata = iris[-train, ])
confusion_matrix <- table(yhat.rf, test)
Let's look at the class label distributions per each of the 4 numeric variables:
pairs(iris)
Focus on the bottom row of the figure (Species): which of the 4 variables carry more class-discriminatory information?
Hopefully, you will answer the ones that correspond to subplots 3 and 4, i.e. Petal.Length and Petal.Width.
So, this is what the variable importance is capturing:
var_importance <- importance (rf.iris )
setosa versicolor virginica MeanDecreaseAccuracy MeanDecreaseGini
Sepal.Length 0.00000 -3.658955 4.588084 2.529800 0.4303867
Sepal.Width 0.00000 -3.411590 1.133001 -1.061102 0.2859101
Petal.Length 23.26742 26.463392 34.734821 37.700686 24.2050973
Petal.Width 23.25556 23.387203 30.062981 33.186258 24.2027126
Take the Petal.Length variable, for instance. The MeanDecreaseAccuracy column tells us that if we destroy the information in Petal.Length (by randomly permuting it), the classification accuracy decreases by 37.700686 on the scale reported by randomForest. This is related to the concept of Mutual Information.
If you focus on the column MeanDecreaseGini, this is another indicator of variable importance: it gives the average decrease in node impurity over the forest from splits on that variable, where impurity is measured by the Gini index.
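A quick way to view both measures side by side is the package's convenience plot:
varImpPlot(rf.iris) # dotcharts of MeanDecreaseAccuracy and MeanDecreaseGini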
I hope it is clear how these two measures are different from the coefficient estimates in a logistic regression. They do not signify positive or negative impact on the class label. They judge how much class discriminatory information each variable contains.
You can interpret that Petal.Width and Petal.Length are the most useful variables for the classification task. Knowing these two variables for an observation (a plant) decreases uncertainty and helps us to make more accurate predictions.
One thing to be careful about is that, while coming up with the importances, this technique looks at the variables individually. In some cases, it may be that, for instance, Sepal.Length does not contain an awful lot of class discriminatory information on its own, but when combined with Sepal.Width, it does carry a lot of information. This is not the case here, but is worth keeping in mind.
This last concept is discussed thoroughly in Sections 2.3 and 2.4 of this brilliant feature selection paper by Guyon et al. | Can I Interpret the impact of variables like positive or negative on the model by Random Forest, as | The short answer is No.
The long answer follows, for which I fit a random forest to demonstrate variable importance (a.k.a variable ranking):
if(!require('randomForest')) { install.packages("randomFo | Can I Interpret the impact of variables like positive or negative on the model by Random Forest, as I can do by Logistic Regression
The short answer is No.
The long answer follows, for which I fit a random forest to demonstrate variable importance (a.k.a variable ranking):
if (!require('randomForest')) { install.packages("randomForest"); require("randomForest") }
# Observe iris data
pairs(iris)
# Train & test split
set.seed(1) # make the random split reproducible
train <- sample(1:nrow(iris), nrow(iris) / 2)
test <- iris[-train, "Species"]
rf.iris <- randomForest(Species ~ ., data = iris, subset = train,
                        mtry = 3, importance = TRUE)
yhat.rf <- predict(rf.iris, newdata = iris[-train, ])
confusion_matrix <- table(yhat.rf, test)
Let's look at the class label distributions per each of the 4 numeric variables:
pairs(iris)
Focus on the bottom row of the figure (Species): which of the 4 variables carry more class-discriminatory information?
Hopefully, you will answer the ones that correspond to subplots 3 and 4, i.e. Petal.Length and Petal.Width.
So, this is what the variable importance is capturing:
var_importance <- importance (rf.iris )
setosa versicolor virginica MeanDecreaseAccuracy MeanDecreaseGini
Sepal.Length 0.00000 -3.658955 4.588084 2.529800 0.4303867
Sepal.Width 0.00000 -3.411590 1.133001 -1.061102 0.2859101
Petal.Length 23.26742 26.463392 34.734821 37.700686 24.2050973
Petal.Width 23.25556 23.387203 30.062981 33.186258 24.2027126
Take the Petal.Length variable, for instance. The MeanDecreaseAccuracy column tells us that if we destroy the information in Petal.Length (by randomly permuting it), the classification accuracy decreases by 37.700686 on the scale reported by randomForest. This is related to the concept of Mutual Information.
If you focus on the column MeanDecreaseGini, this is another indicator of variable importance: it gives the average decrease in node impurity over the forest from splits on that variable, where impurity is measured by the Gini index.
I hope it is clear how these two measures are different from the coefficient estimates in a logistic regression. They do not signify positive or negative impact on the class label. They judge how much class discriminatory information each variable contains.
You can interpret that Petal.Width and Petal.Length are the most useful variables for the classification task. Knowing these two variables for an observation (a plant) decreases uncertainty and helps us to make more accurate predictions.
One thing to be careful about is that, while coming up with the importances, this technique looks at the variables individually. In some cases, it may be that, for instance, Sepal.Length does not contain an awful lot of class discriminatory information on its own, but when combined with Sepal.Width, it does carry a lot of information. This is not the case here, but is worth keeping in mind.
This last concept is discussed thoroughly in Sections 2.3 and 2.4 of this brilliant feature selection paper by Guyon et al. | Can I Interpret the impact of variables like positive or negative on the model by Random Forest, as
The short answer is No.
The long answer follows, for which I fit a random forest to demonstrate variable importance (a.k.a variable ranking):
if(!require('randomForest')) { install.packages("randomFo |
54,373 | Selecting the most similar subset from an alternative dataset | This problem seems to be the same as selecting control groups for case-control studies. In the famous Doll and Hill study of smoking and lung cancer, the authors identified patients with lung cancer (cases) and then examined a control group "deliberately selected to be closely comparable in age and sex with the carcinoma of the lung patients." The higher incidence of smoking, particularly heavy smoking, in the lung cancer cases provided important evidence of the relation between that behavior and the disease.
So in your situation your group A represents the cases, and group B represents those from whom you will choose your controls.
This is not an easy problem in general, as it requires careful definition of what you mean by "representative." This Cross Validated page provides a good entry into algorithms for making such choices. | Selecting the most similar subset from an alternative dataset | This problem seems to be the same as selecting control groups for case-control studies. In the famous Doll and Hill study of smoking and lung cancer, the authors identified patients with lung cancer (c
This problem seems to be the same as selecting control groups for case-control studies. In the famous Doll and Hill study of smoking and lung cancer, the authors identified patients with lung cancer (cases) and then examined a control group "deliberately selected to be closely comparable in age and sex with the carcinoma of the lung patients." The higher incidence of smoking, particularly heavy smoking, in the lung cancer cases provided important evidence of the relation between that behavior and the disease.
So in your situation your group A represents the cases, and group B represents those from whom you will choose your controls.
This is not an easy problem in general, as it requires careful definition of what you mean by "representative." This Cross Validated page provides a good entry into algorithms for making such choices. | Selecting the most similar subset from an alternative dataset
This problem seems to be the same as selecting control groups for case-control studies. In the famous Doll and Hill study of smoking and lung cancer, the authors identified patients with lung cancer (c
54,374 | Selecting the most similar subset from an alternative dataset | EdM's answer seems like a good match, and you should absolutely look into that literature. Here's a possible alternative approach to explore.
One reasonable way of measuring distances between distributions is known as the maximum mean discrepancy (MMD); a thorough overview is given by Gretton, Borgwardt, Rasch, Schölkopf, and Smola, A Kernel Two-Sample Test, JMLR 2012. The basic idea is to embed distributions into a reproducing kernel Hilbert space (RKHS), and then get distances between those distributions in that RKHS. If the kernel of the RKHS is $k(x, y) = \langle \varphi(x), \varphi(y) \rangle$, you can estimate the distance between the distributions as (letting $n = |A|$, $m = |B|$):
$$\Big\| \frac1n \sum_{i=1}^n \varphi(A_i) - \frac1m \sum_{j=1}^m \varphi(B_j)\Big\|.$$
You can compute this using only $k$ via the kernel trick,
but for large-scale problems it's easier to use a finite-dimensional approximation $z$ with $z(x) \in \mathbb{R}^D$
such that $z(x)^T z(y) \approx \langle \varphi(x), \varphi(y) \rangle$.
A popular example of such an approximation was given by Rahimi and Recht, Random Features for Large-Scale Kernel Machines, NIPS 2007. If you use the very popular Gaussian kernel $k(x, y) = \exp\left( - \frac{1}{2 \sigma^2} \lVert x - y \rVert^2 \right)$,
then
$k(x, y) \approx z(x)^T z(y)$
where
$$
z(x) = \frac{1}{\sqrt{D}} \begin{bmatrix} \sin(\omega_1^T x) & \cos(\omega_1^T x) & \dots & \sin(\omega_D^T x) & \cos(\omega_D^T x) \end{bmatrix}
$$
for $\omega_i \sim N(0, \frac{1}{\sigma^2} I)$ (using the same $\omega$ values for each $x$); the $\frac{1}{\sqrt{D}}$ factor makes $z(x)^T z(y)$ an average over the $D$ random frequencies.
(This isn't exactly the version given in the linked version of the paper, but this version is better.)
The estimate of the MMD is thus as above, except with $z$ instead of $\varphi$.
You'll need to pick an embedding dimension; for this scale of data, maybe 5000 is good.
You'll also need to pick a $\sigma$; a reasonable rule of thumb is maybe the median of the pairwise distances between elements of $B$.
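In R that median heuristic is a one-liner (a sketch, assuming B is a numeric matrix with observations in rows):
sigma <- median(as.vector(dist(B))) # median pairwise Euclidean distance within B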
Now, you want to find the subset of $B$ whose distribution is closest to that of $A$. We can write the problem by defining a vector $\alpha \in \{0, 1\}^m$ where $\alpha_i$ determines if $B_i$ is included or not:
\begin{align}\DeclareMathOperator*{\argmin}{\arg\min}
\alpha^*
&= \argmin_{\alpha \in \{0, 1\}^m} \Big\| \frac{1}{n} \sum_{i=1}^n z(A_i) - \frac{1}{1^T \alpha} \sum_{i=1}^m \alpha_i z(B_i) \Big\|^2
\end{align}
where we make sure to treat the case $\alpha = 0$ as infinite.
Now, let $Y_i = z(B_i) - \frac{1}{n} \sum_{i=1}^n z(A_i)$,
i.e. $Y_i$ is the difference between $B_i$'s embedding and our target vector,
and we want to find an assignment $\alpha$ that minimizes the mean distance:
\begin{align}
\alpha^*
&= \argmin_{\alpha \in \{0, 1\}^m} \Big\| \frac{1}{1^T \alpha} \sum_{i=1}^m \alpha_i Y_i \Big\|^2
= \argmin_{\alpha \in \{0, 1\}^m} \left\lVert \frac{Y \alpha}{1^T \alpha} \right\rVert^2
\end{align}
where $Y = \begin{bmatrix} Y_1 & \dots & Y_m \end{bmatrix}$.
The 1d, integer-valued data version of this problem is pretty close to subset sum, so it's probably NP-hard. We'll have to approximate.
If we fix $1^T \alpha$, then this becomes a binary quadratic program. There's been some work on solving these approximately, seemingly mostly (in a quick search) from the computer vision community. It still seems hard to solve, though.
But: in your case, I think you'd be happy with weights assigned to $B$, yes? You could then do weighted density estimation or whatever to try to figure out the distribution of the unknown components of $A$.
In that case, fix $1^T \alpha = 1$.
Our problem becomes
$$
\argmin_{\alpha \in [0, 1]^m}
\left\lVert Y \alpha \right\rVert^2
\quad\text{s.t.}\;
1^T \alpha = 1
.$$
This is a simple quadratic program with linear constraints, for which there are many solvers.
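As an illustration only, here is a sketch of that final weighted problem in R with the quadprog package; the feature map z, the dimension D, and the data matrices A and B (observations in rows) follow the setup above and are assumptions, not part of the original answer:
library(quadprog)
set.seed(1)
D <- 500
sigma <- median(as.vector(dist(B)))
W <- matrix(rnorm(D * ncol(B), sd = 1 / sigma), D, ncol(B))
z <- function(X) cbind(sin(X %*% t(W)), cos(X %*% t(W))) / sqrt(D)
Y <- t(z(B)) - colMeans(z(A)) # columns are Y_i = z(B_i) - target embedding
m <- ncol(Y)
Dmat <- 2 * crossprod(Y) + 1e-8 * diag(m) # small ridge keeps Dmat positive definite
Amat <- cbind(rep(1, m), diag(m)) # 1^T alpha = 1 (equality), alpha >= 0
bvec <- c(1, rep(0, m))
alpha <- solve.QP(Dmat, rep(0, m), Amat, bvec, meq = 1)$solution
Note that with the sum-to-one and nonnegativity constraints, the upper bound $\alpha_i \le 1$ holds automatically, so it needs no explicit constraint.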
I actually just remembered that Hino and Murata, Information estimators for weighted observations, Neural Networks 2013 did something similar in their section 5.2, "Application to distribution-preserving data compression" using their nearest-neighbor-type estimator of the KL divergence. The problem there is more difficult both computationally and in terms of effort to code it up, though. (They don't say this, but it's actually a convex maximization, so they're not even guaranteed to get the global optimum.)
Another approach entirely is to think of this as a regression problem. Train a predictor for $(v_5, \dots, v_9)$ using $(v_1, \dots, v_4)$ as features on the training set $B$, and then apply the predictor to each data point from $A$. This is in some sense "more work" than what you're trying to do, but it might be more useful in the end. You could use transductive approaches to do so; Quadrianto, Patterson, and Smola, Distribution Matching for Transduction, NIPS 2009 even used an approach similar to the distribution matching scheme above to try to solve this problem. | Selecting the most similar subset from an alternative dataset | EdM's answer seems like a good match, and you should absolutely look into that literature. Here's a possible alternative approach to explore.
One reasonable way of measuring distances between distribu | Selecting the most similar subset from an alternative dataset
EdM's answer seems like a good match, and you should absolutely look into that literature. Here's a possible alternative approach to explore.
One reasonable way of measuring distances between distributions is known as the maximum mean discrepancy (MMD); a thorough overview is given by Gretton, Borgwardt, Rasch, Schölkopf, and Smola, A Kernel Two-Sample Test, JMLR 2012. The basic idea is to embed distributions into a reproducing kernel Hilbert space (RKHS), and then get distances between those distributions in that RKHS. If the kernel of the RKHS is $k(x, y) = \langle \varphi(x), \varphi(y) \rangle$, you can estimate the distance between the distributions as (letting $n = |A|$, $m = |B|$):
$$\Big\| \frac1n \sum_{i=1}^n \varphi(A_i) - \frac1m \sum_{j=1}^m \varphi(B_j)\Big\|.$$
You can compute this using only $k$ via the kernel trick,
but for large-scale problems it's easier to use a finite-dimensional approximation $z$ with $z(x) \in \mathbb{R}^D$
such that $z(x)^T z(y) \approx \langle \varphi(x), \varphi(y) \rangle$.
A popular example of such an approximation was given by Rahimi and Recht, Random Features for Large-Scale Kernel Machines, NIPS 2007. If you use the very popular Gaussian kernel $k(x, y) = \exp\left( - \frac{1}{2 \sigma^2} \lVert x - y \rVert^2 \right)$,
then
$k(x, y) \approx z(x)^T z(y)$
where
$$
z(x) = \frac{1}{\sqrt{D}} \begin{bmatrix} \sin(\omega_1^T x) & \cos(\omega_1^T x) & \dots & \sin(\omega_D^T x) & \cos(\omega_D^T x) \end{bmatrix}
$$
for $\omega_i \sim N(0, \frac{1}{\sigma^2} I)$ (using the same $\omega$ values for each $x$); the $\frac{1}{\sqrt{D}}$ factor makes $z(x)^T z(y)$ an average over the $D$ random frequencies.
(This isn't exactly the version given in the linked version of the paper, but this version is better.)
The estimate of the MMD is thus as above, except with $z$ instead of $\varphi$.
You'll need to pick an embedding dimension; for this scale of data, maybe 5000 is good.
You'll also need to pick a $\sigma$; a reasonable rule of thumb is maybe the median of the pairwise distances between elements of $B$.
Now, you want to find the subset of $B$ whose distribution is closest to that of $A$. We can write the problem by defining a vector $\alpha \in \{0, 1\}^m$ where $\alpha_i$ determines if $B_i$ is included or not:
\begin{align}\DeclareMathOperator*{\argmin}{\arg\min}
\alpha^*
&= \argmin_{\alpha \in \{0, 1\}^m} \Big\| \frac{1}{n} \sum_{i=1}^n z(A_i) - \frac{1}{1^T \alpha} \sum_{i=1}^m \alpha_i z(B_i) \Big\|^2
\end{align}
where we make sure to treat the case $\alpha = 0$ as infinite.
Now, let $Y_i = z(B_i) - \frac{1}{n} \sum_{i=1}^n z(A_i)$,
i.e. $Y_i$ is the difference between $B_i$'s embedding and our target vector,
and we want to find an assignment $\alpha$ that minimizes the mean distance:
\begin{align}
\alpha^*
&= \argmin_{\alpha \in \{0, 1\}^m} \Big\| \frac{1}{1^T \alpha} \sum_{i=1}^m \alpha_i Y_i \Big\|^2
= \argmin_{\alpha \in \{0, 1\}^m} \left\lVert \frac{Y \alpha}{1^T \alpha} \right\rVert^2
\end{align}
where $Y = \begin{bmatrix} Y_1 & \dots & Y_m \end{bmatrix}$.
The 1d, integer-valued data version of this problem is pretty close to subset sum, so it's probably NP-hard. We'll have to approximate.
If we fix $1^T \alpha$, then this becomes a binary quadratic program. There's been some work on solving these approximately, seemingly mostly (in a quick search) from the computer vision community. It still seems hard to solve, though.
But: in your case, I think you'd be happy with weights assigned to $B$, yes? You could then do weighted density estimation or whatever to try to figure out the distribution of the unknown components of $A$.
In that case, fix $1^T \alpha = 1$.
Our problem becomes
$$
\argmin_{\alpha \in [0, 1]^m}
\left\lVert Y \alpha \right\rVert^2
\quad\text{s.t.}\;
1^T \alpha = 1
.$$
This is a simple quadratic program with linear constraints, for which there are many solvers.
I actually just remembered that Hino and Murata, Information estimators for weighted observations, Neural Networks 2013 did something similar in their section 5.2, "Application to distribution-preserving data compression" using their nearest-neighbor-type estimator of the KL divergence. The problem there is more difficult both computationally and in terms of effort to code it up, though. (They don't say this, but it's actually a convex maximization, so they're not even guaranteed to get the global optimum.)
Another approach entirely is to think of this as a regression problem. Train a predictor for $(v_5, \dots, v_9)$ using $(v_1, \dots, v_4)$ as features on the training set $B$, and then apply the predictor to each data point from $A$. This is in some sense "more work" than what you're trying to do, but it might be more useful in the end. You could use transductive approaches to do so; Quadrianto, Patterson, and Smola, Distribution Matching for Transduction, NIPS 2009 even used an approach similar to the distribution matching scheme above to try to solve this problem. | Selecting the most similar subset from an alternative dataset
EdM's answer seems like a good match, and you should absolutely look into that literature. Here's a possible alternative approach to explore.
One reasonable way of measuring distances between distribu |
54,375 | Selecting the most similar subset from an alternative dataset | Thanks to EdM, I learned that this problem is called matching. I found an excellent R package called Matching for this purpose, and a very useful resource in the related document Genetic Matching for Estimating Causal Effects. | Selecting the most similar subset from an alternative dataset | Thanks to EdM, I learned that this problem is called matching. I found an excellent R package called Matching for this purpose, and a very useful resource in the related document Genetic Matching for Esti
Thanks to EdM, I learned that this problem is called matching. I found an excellent R package called Matching for this purpose, and a very useful resource in the related document Genetic Matching for Estimating Causal Effects. | Selecting the most similar subset from an alternative dataset
Thanks to EdM, I learned that this problem is called matching. I found an excellent R package called Matching for this purpose, and a very useful resource in the related document Genetic Matching for Esti
54,376 | Why does the linear test statistic of GLM follow F-distribution? | Why does the linear test statistic of GLM follow F-distribution?
It doesn't.
Then, the test statistic will follow an $F$-distribution [...] does this hold for all generalized linear models?
There's no result that establishes it in the general case, and indeed we can show (e.g. by simulation in particular instances) that it's not the case in general.
It holds for the Gaussian case, of course, but the derivation relies on the normality of the data. You can see it's not the case for logistic regression, since the data (and hence "F"-statistics based on the data) are discrete.
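For instance, here is a small simulation sketch (illustrative only; R itself warns that the F test is inappropriate for a binomial family) comparing the statistic that anova() reports under a null logistic model with the F distribution it would be referred to:
set.seed(42)
n <- 30
fstats <- replicate(2000, suppressWarnings({
  x <- rnorm(n); y <- rbinom(n, 1, 0.5) # x truly has no effect
  anova(glm(y ~ 1, family = binomial), glm(y ~ x, family = binomial), test = "F")$F[2]
}))
qqplot(qf(ppoints(2000), 1, n - 2), fstats); abline(0, 1) # compare with F(1, n-2) quantiles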
There is an asymptotic chi-square result. This, combined with Slutsky's theorem, should give us that the F-statistic will asymptotically be distributed as a scaled chi-square.
However, in sufficiently large samples (where how large "large" is will depend on a number of things), we might anticipate that the F distribution would still be approximately correct, since both the $F$ distribution being used to figure out p-values and the actual distribution of the test statistic are going to the same scaled chi-square distribution asymptotically.
We see the same issue with the common use of t-tests for parameter significance in GLMs (which many packages do) even though it's only t-distributed for the Gaussian case; for the others we only have an asymptotic normal result (but a similar argument for why the $t$ shouldn't do badly in sufficiently large samples can be made).
I don't have a good book suggestion. Some books give a handwavy argument for using the $F$ (some akin to mine above), others seem to ignore the need to justify it at all. | Why does the linear test statistic of GLM follow F-distribution? | Why does the linear test statistic of GLM follow F-distribution?
It doesn't.
Then, the test statistic will follow an $F$-distribution [...] does this hold for all generalized linear models?
There's no | Why does the linear test statistic of GLM follow F-distribution?
Why does the linear test statistic of GLM follow F-distribution?
It doesn't.
Then, the test statistic will follow an $F$-distribution [...] does this hold for all generalized linear models?
There's no result that establishes it in the general case, and indeed we can show (e.g. by simulation in particular instances) that it's not the case in general.
It holds for the Gaussian case, of course, but the derivation relies on the normality of the data. You can see it's not the case for logistic regression, since the data (and hence "F"-statistics based on the data) are discrete.
There is an asymptotic chi-square result. This, combined with Slutsky's theorem, should give us that the F-statistic will asymptotically be distributed as a scaled chi-square.
However, in sufficiently large samples (where how large "large" is will depend on a number of things), we might anticipate that the F distribution would still be approximately correct, since both the $F$ distribution being used to figure out p-values and the actual distribution of the test statistic are going to the same scaled chi-square distribution asymptotically.
We see the same issue with the common use of t-tests for parameter significance in GLMs (which many packages do) even though it's only t-distributed for the Gaussian case; for the others we only have an asymptotic normal result (but a similar argument for why the $t$ shouldn't do badly in sufficiently large samples can be made).
I don't have a good book suggestion. Some books give a handwavy argument for using the $F$ (some akin to mine above), others seem to ignore the need to justify it at all. | Why does the linear test statistic of GLM follow F-distribution?
Why does the linear test statistic of GLM follow F-distribution?
It doesn't.
Then, the test statistic will follow an $F$-distribution [...] does this hold for all generalized linear models?
There's no |
54,377 | Shifted log-normal distribution and moments | We have
$Y^n=(aX+b)^n=\sum_{k=0}^n \binom{n}{k}(aX)^k b^{n-k}$
so
$\mathbb{E}Y^n=\mathbb{E}(\sum_{k=0}^n \binom{n}{k}(aX)^k b^{n-k})=\sum_{k=0}^n \binom{n}{k} b^{n-k} a^k \mathbb{E}X^k$.
What remains depends on the distribution of $X$ (i.e. on its raw moments $\mathbb{E}X^k$). For the log-normal distribution we have
$\mathbb{E}X^k=e^{k\mu+k^2\sigma^2/2}$.
Thus
$\mathbb{E}Y^n=\sum_{k=0}^n \binom{n}{k} b^{n-k} a^k e^{k\mu+k^2\sigma^2/2}$.
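A quick Monte Carlo sanity check of this expression (the parameter values are arbitrary illustrations):
set.seed(1)
mu <- 0.3; sigma <- 0.5; a <- 2; b <- 1; n <- 3
x <- rlnorm(1e6, mu, sigma)
mean((a * x + b)^n) # simulation
k <- 0:n
sum(choose(n, k) * b^(n - k) * a^k * exp(k * mu + k^2 * sigma^2 / 2)) # formula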
I don't immediately see whether this has a closed-form (someone might supplement this). | Shifted log-normal distribution and moments | We have
$Y^n=(aX+b)^n=\sum_{k=0}^n \binom{n}{k}(aX)^k b^{n-k}$
so
$\mathbb{E}Y^n=\mathbb{E}(\sum_{k=0}^n \binom{n}{k}(aX)^k b^{n-k})=\sum_{k=0}^n \binom{n}{k} b^{n-k} a^k \mathbb{E}X^k$.
The rest rema | Shifted log-normal distribution and moments
We have
$Y^n=(aX+b)^n=\sum_{k=0}^n \binom{n}{k}(aX)^k b^{n-k}$
so
$\mathbb{E}Y^n=\mathbb{E}(\sum_{k=0}^n \binom{n}{k}(aX)^k b^{n-k})=\sum_{k=0}^n \binom{n}{k} b^{n-k} a^k \mathbb{E}X^k$.
What remains depends on the distribution of $X$ (i.e. on its raw moments $\mathbb{E}X^k$). For the log-normal distribution we have
$\mathbb{E}X^k=e^{k\mu+k^2\sigma^2/2}$.
Thus
$\mathbb{E}Y^n=\sum_{k=0}^n \binom{n}{k} b^{n-k} a^k e^{k\mu+k^2\sigma^2/2}$.
I don't immediately see whether this has a closed-form (someone might supplement this). | Shifted log-normal distribution and moments
We have
$Y^n=(aX+b)^n=\sum_{k=0}^n \binom{n}{k}(aX)^k b^{n-k}$
so
$\mathbb{E}Y^n=\mathbb{E}(\sum_{k=0}^n \binom{n}{k}(aX)^k b^{n-k})=\sum_{k=0}^n \binom{n}{k} b^{n-k} a^k \mathbb{E}X^k$.
The rest rema |
54,378 | Shifted log-normal distribution and moments | You have $Y=aX+b$, but the multiplication by $a$ still leaves you with a lognormal, not really changing anything. If $X\sim \text{logN}(\mu,\sigma^2)$ then $aX\sim \text{logN}(\mu+\log(a),\sigma^2)$, so if you know how to compute moments for a lognormal you can do it for $aX$ as easily as for $X$.
So, absorbing $a$ into the lognormal, for $X\sim \text{logN}(\mu+\log(a),\sigma^2)$ the problem reduces to working out moments for a simple shift, $Y=X+\gamma$ with $\gamma=b$.
You have an answer for raw moments, but I'm presuming that apart from the mean you want central moments.
So we already have $E(Y) = E(X)+\gamma$ for the mean.
The central moments (and functions of them, like skewness or kurtosis) are all unaffected by the location-shift. | Shifted log-normal distribution and moments | You have $Y=aX+b$, but the multiplication by $a$ still leaves you with a lognormal, not really changing anything. If $X\sim \text{logN}(\mu,\sigma^2)$ then $aX\sim \text{logN}(\mu+\log(a),\sigma^2)$, | Shifted log-normal distribution and moments
You have $Y=aX+b$, but the multiplication by $a$ still leaves you with a lognormal, not really changing anything. If $X\sim \text{logN}(\mu,\sigma^2)$ then $aX\sim \text{logN}(\mu+\log(a),\sigma^2)$, so if you know how to compute moments for a lognormal you can do it for $aX$ as easily as for $X$.
So, absorbing $a$ into the lognormal, for $X\sim \text{logN}(\mu+\log(a),\sigma^2)$ the problem reduces to working out moments for a simple shift, $Y=X+\gamma$ with $\gamma=b$.
You have an answer for raw moments, but I'm presuming that apart from the mean you want central moments.
So we already have $E(Y) = E(X)+\gamma$ for the mean.
The central moments (and functions of them, like skewness or kurtosis) are all unaffected by the location-shift. | Shifted log-normal distribution and moments
You have $Y=aX+b$, but the multiplication by $a$ still leaves you with a lognormal, not really changing anything. If $X\sim \text{logN}(\mu,\sigma^2)$ then $aX\sim \text{logN}(\mu+\log(a),\sigma^2)$, |
54,379 | Number of Gaussian mixture components needed to approximate any distribution | I am afraid this is an absurd question: there is no magical number and no upper bound on the number of components in a Gaussian mixture for approximating (in which sense?) any distribution. Just think of the Gaussian mixture with 19 components... | Number of Gaussian mixture components needed to approximate any distribution | I am afraid this is an absurd question: there is no magical number and no upper bound on the number of components in a Gaussian mixture for approximating (in which sense?) any distribution. Just think | Number of Gaussian mixture components needed to approximate any distribution
I am afraid this is an absurd question: there is no magical number and no upper bound on the number of components in a Gaussian mixture for approximating (in which sense?) any distribution. Just think of the Gaussian mixture with 19 components... | Number of Gaussian mixture components needed to approximate any distribution
I am afraid this is an absurd question: there is no magical number and no upper bound on the number of components in a Gaussian mixture for approximating (in which sense?) any distribution. Just think |
54,380 | Why discrepancy between lasso and randomForest? | This could be because you're measuring two different things. The lasso coefficients are essentially effect sizes, and shrinkage helps distinguish "zero" effects from "nonzero" effects. Importance of a variable in the random forest model measures the improvement in predictive accuracy due to including that variable.
So you're comparing apples and oranges. A fair comparison would be to re-fit both models without each variable, and compute the decrease in MSE (i.e. with cross-validation or a train/test split) due to omitting each variable. Or instead of dropping each predictor you could randomly permute it; this is how %IncMSE is computed.
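Here is a sketch of the permutation version of that comparison, applicable to either model (hypothetical inputs: a fitted model, a held-out data frame Xte, and a held-out response yte):
perm_importance <- function(model, Xte, yte) {
  base_mse <- mean((yte - predict(model, Xte))^2)
  sapply(names(Xte), function(v) {
    Xp <- Xte
    Xp[[v]] <- sample(Xp[[v]]) # break the variable's association with the response
    mean((yte - predict(model, Xp))^2) - base_mse
  })
}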
This procedure should be invariant to input scaling, but you should usually scale and center your inputs anyway. It helps with numerical stability, convergence in iterative algorithms, inverting matrices, and most of all interpretability. | Why discrepancy between lasso and randomForest? | This could be because you're measuring two different things. The lasso coefficients are essentially effect sizes, and shrinkage helps distinguish "zero" effects from "nonzero" effects. Importance of a | Why discrepancy between lasso and randomForest?
This could be because you're measuring two different things. The lasso coefficients are essentially effect sizes, and shrinkage helps distinguish "zero" effects from "nonzero" effects. Importance of a variable in the random forest model measures the improvement in predictive accuracy due to including that variable.
So you're comparing apples and oranges. A fair comparison would be to re-fit both models without each variable, and compute the decrease in MSE (i.e. with cross-validation or a train/test split) due to omitting each variable. Or instead of dropping each predictor you could randomly permute it; this is how %IncMSE is computed.
This procedure should be invariant to input scaling, but you should usually scale and center your inputs anyway. It helps with numerical stability, convergence in iterative algorithms, inverting matrices, and most of all interpretability. | Why discrepancy between lasso and randomForest?
This could be because you're measuring two different things. The lasso coefficients are essentially effect sizes, and shrinkage helps distinguish "zero" effects from "nonzero" effects. Importance of a |
54,381 | What is the shape of the decision surface of a Gaussian Process classifier? | As you can see in the example I crafted below, the probability surface in the case of the squared exponential (Gaussian) kernel as covariance function for Gaussian processes looks like a smooth density.
A good read about covariance functions in the context of Gaussian processes is Chapter 4 - Covariance Functions [1].
So, I'll be using R, specifically the kernlab package for this example. There's a nice gausspr function that accepts different kernel types as parameters. I'll be using the iris dataset as a binary classification problem with only two dimensions.
library(kernlab)
data(iris)
#let's use only two variables plus the target variable so we can accurately plot it
data = iris[, 3:5]
data$Species[data$Species == "virginica"] = "versicolor"
data$Species = factor(data$Species)
levels(data$Species) = c("setosa", "virginica OR versicolor")
#The fitting, you can change kernel and parameters, check the kernlab manual
fit = gausspr(Species~., data = data, kernel = "rbfdot")
pred = predict(fit, data)
N = 250L #integer that gives the number of unique values in each dimension of the grid
grid = expand.grid(
Petal.Length = seq(min(data$Petal.Length), max(data$Petal.Length), length.out = N),
Petal.Width = seq(min(data$Petal.Width), max(data$Petal.Width), length.out = N)
)
pred.grid = predict(fit, grid, type = "probabilities")[, 1, drop = FALSE]
#This reshapes the predictions into an N-by-N matrix matching the grid dimensions
pred.grid = matrix(pred.grid, ncol = N)
#The color part is thanks to http://www.r-bloggers.com/how-to-correctly-set-color-in-the-image-function/
collist<-c("#053061","#2166AC","#4393C3","#92C5DE","#D1E5F0","#F7F7F7","#FDDBC7","#F4A582","#D6604D","#B2182B","#67001F")
ColorRamp<-colorRampPalette(collist)(100L)
tiff(filename = "Rplot_rbfdot.tiff")
image(unique(grid$Petal.Length), unique(grid$Petal.Width), pred.grid, useRaster = TRUE, col = ColorRamp,
ylab = "Petal.Width", xlab = "Petal.Length", main = "kernel = \"rbfdot\""
)
points(data[,1:2], pch = c(16,17)[as.numeric(pred)], col = adjustcolor("black", alpha = 0.5))
contour(unique(grid$Petal.Length), unique(grid$Petal.Width), pred.grid, add = TRUE,
levels = c(.4,.5,.6), labcex = 1, lwd = 1.75
)
legend("topleft", legend = levels(data$Species), pch = c(16,17), bg = "white")
dev.off()
These plots were produced with the above code.
[1] C. E. Rasmussen & C. K. I. Williams, Gaussian Processes for Machine Learning, the MIT Press, 2006, ISBN 026218253X. © 2006 Massachusetts Institute of Technology. www.GaussianProcess.org/gpml | What is the shape of the decision surface of a Gaussian Process classifier? | As you can see in the example I crafted below, the probability surface in the case of the squared exponential (Gaussian) Kernel as covariance function for Gaussian processes looks like a smooth densit | What is the shape of the decision surface of a Gaussian Process classifier?
As you can see in the example I crafted below, the probability surface in the case of the squared exponential (Gaussian) Kernel as covariance function for Gaussian processes looks like a smooth density.
A good read about covariance functions in the context of Gaussian processes is Chapter 4 - Covariance Functions [1]
So, I'll be using R, specifically the kernlab package for this example. There's a nice gausspr function that accepts different kernel types as parameters. I'll be using the iris dataset as a binary classification problem with only two dimensions.
library(kernlab)
data(iris)
#let's use only two variables plus the target variable so we can accurately plot it
data = iris[, 3:5]
data$Species[data$Species == "virginica"] = "versicolor"
data$Species = factor(data$Species)
levels(data$Species) = c("setosa", "virginica OR versicolor")
#The fitting, you can change kernel and parameters, check the kernlab manual
fit = gausspr(Species~., data = data, kernel = "rbfdot")
pred = predict(fit, data)
N = 250L #integer that gives the number of unique values in each dimension of the grid
grid = expand.grid(
Petal.Length = seq(min(data$Petal.Length), max(data$Petal.Length), length.out = N),
Petal.Width = seq(min(data$Petal.Width), max(data$Petal.Width), length.out = N)
)
pred.grid = predict(fit, grid, type = "probabilities")[, 1, drop = FALSE]
#This reshapes the predictions into an N-by-N matrix matching the grid dimensions
pred.grid = matrix(pred.grid, ncol = N)
#The color part is thanks to http://www.r-bloggers.com/how-to-correctly-set-color-in-the-image-function/
collist<-c("#053061","#2166AC","#4393C3","#92C5DE","#D1E5F0","#F7F7F7","#FDDBC7","#F4A582","#D6604D","#B2182B","#67001F")
ColorRamp<-colorRampPalette(collist)(100L)
tiff(filename = "Rplot_rbfdot.tiff")
image(unique(grid$Petal.Length), unique(grid$Petal.Width), pred.grid, useRaster = TRUE, col = ColorRamp,
ylab = "Petal.Width", xlab = "Petal.Length", main = "kernel = \"rbfdot\""
)
points(data[,1:2], pch = c(16,17)[as.numeric(pred)], col = adjustcolor("black", alpha = 0.5))
contour(unique(grid$Petal.Length), unique(grid$Petal.Width), pred.grid, add = TRUE,
levels = c(.4,.5,.6), labcex = 1, lwd = 1.75
)
legend("topleft", legend = levels(data$Species), pch = c(16,17), bg = "white")
dev.off()
These plots were produced with the above code.
[1] C. E. Rasmussen & C. K. I. Williams, Gaussian Processes for Machine Learning, the MIT Press, 2006, ISBN 026218253X. © 2006 Massachusetts Institute of Technology. www.GaussianProcess.org/gpml | What is the shape of the decision surface of a Gaussian Process classifier?
As you can see in the example I crafted below, the probability surface in the case of the squared exponential (Gaussian) Kernel as covariance function for Gaussian processes looks like a smooth densit |
54,382 | Bayesian analysis: Estimate whether a parameter is 0 or not | If you are interested in TESTING B=0, then the standard Bayesian solution (i.e. the most widely accepted one) to that problem is the Bayes Factor (BF). Suppose you want to test model 1 against model 2. Let
$\pi_1(\theta_1|y)= \frac{L(\theta_1)\pi_1(\theta_1)}{p_1(y)}$, $p_1(y)=\int L(\theta_1)\pi_1(\theta_1)d\theta_1$
$\pi_2(\theta_2|y)= \frac{L(\theta_2)\pi_2(\theta_2)}{p_2(y)}$, $p_2(y)=\int L(\theta_2)\pi_2(\theta_2)d\theta_2$
Then the BF of model 1 against model 2 is simply $BF_{12}=p_1(y)/p_2(y)$, and it tells you, for instance, how likely model 1 is relative to model 2.
BFs automatically penalise for model complexity and are asymptotically related to the BIC information criterion. See Kass and Raftery (1995) https://www.stat.washington.edu/raftery/Research/PDF/kass1995.pdf for more details on their interpretation.
It does have some problems, though. For instance, it is not well-defined with improper priors. See, for instance, Robert and Marin (2007)
http://www.amazon.com/Bayesian-Core-Practical-Computational-Statistics/dp/0387389792.
Of course, not ALL Bayesians are happy with BFs. Some prefer to use the DIC or similar Bayesian information criteria. There are also some proposals for Bayesian testing via HPDs or credible intervals, but in my opinion, this approach is quite limited and is not yet widely accepted. See, for instance, this question: What is the connection between credible regions and Bayesian hypothesis tests? | Bayesian analysis: Estimate whether a parameter is 0 or not | If you are interested in TESTING B=0, then the standard Bayesian solution (i.e. the most widely accepted one) to that problem is the Bayes Factor (BF). Suppose you want to test model 1 against model 2. Let
$\p | Bayesian analysis: Estimate whether a parameter is 0 or not
If you are interested in TESTING B=0, then the standard Bayesian solution (i.e. the most widely accepted one) to that problem is the Bayes Factor (BF). Suppose you want to test model 1 against model 2. Let
$\pi_1(\theta_1|y)= \frac{L(\theta_1)\pi_1(\theta_1)}{p_1(y)}$, $p_1(y)=\int L(\theta_1)\pi_1(\theta_1)d\theta_1$
$\pi_2(\theta_2|y)= \frac{L(\theta_2)\pi_2(\theta_2)}{p_2(y)}$, $p_2(y)=\int L(\theta_2)\pi_2(\theta_2)d\theta_2$
Then the BF of model 1 against model 2 is simply $BF_{12}=p_1(y)/p_2(y)$, and it tells you, for instance, how likely model 1 is relative to model 2.
BFs automatically penalise for model complexity and are asymptotically related to the BIC information criterion. See Kass and Raftery (1995) https://www.stat.washington.edu/raftery/Research/PDF/kass1995.pdf for more details on their interpretation.
It does have some problems, though. For instance, it is not well-defined with improper priors. See, for instance, Robert and Marin (2007)
http://www.amazon.com/Bayesian-Core-Practical-Computational-Statistics/dp/0387389792.
Of course, not ALL Bayesians are happy with BFs. Some prefer to use the DIC or similar Bayesian information criteria. There are also some proposals for Bayesian testing via HPDs or credible intervals, but in my opinion, this approach is quite limited and is not yet widely accepted. See, for instance, this question: What is the connection between credible regions and Bayesian hypothesis tests? | Bayesian analysis: Estimate whether a parameter is 0 or not
If you are interested in TESTING B=0, then the standard Bayesian solution (i.e. the most widely accepted one) to that problem is the Bayes Factor (BF). Suppose you want to test model 1 against model 2. Let
$\p |
54,383 | Bayesian analysis: Estimate whether a parameter is 0 or not | The model you describe is simple univariate linear regression, and what you want to do is test whether your intercept is greater than zero. You can use MCMC to obtain the posterior distribution of such a model and then check what proportion of cases have $B > 0$:
$$ \Pr(B > 0) = \frac{1}N \sum^N_{i=1} \mathbf{1}(B_i) $$
where
$$
\mathbf{1}(B_i) =
\begin{cases}
1 &\text{if } & B_i > 0, \\
0 &\text{if } & B_i \leq 0.
\end{cases}
$$
for all $B_i$ values from $i = 1,...,N$ MCMC replications. This gives you posterior probability of $B > 0$. Since $B$ is continuous checking whether it is exactly zero does not make sense. Examples of using such approach could be found in possibly any handbook on Bayesian statistics.
If you restrict the range of $B$'s by choosing some prior that is only positive, then there is no point in checking if there are posterior $B$ values less then zero. In Bayesian approach you choose some priors for your parameters and every prior brings some information in your model ("noninformative" is a misleading term), however in many cases you do not want to use prior to restrict range of your values, but rather you use prior that enables "improbable" values with lower probability (example here). On the other hand, if values of $B$ simply cannot be less then zero, then you can restrict your prior but you should describe this approach and its rationale in your report. | Bayesian analysis: Estimate whether a parameter is 0 or not | The model you describe is simple univariate linear regression and what you want to do is to test wether your intercept is greater then zero. You can use MCMC obtaining posterior distribution of such m | Bayesian analysis: Estimate whether a parameter is 0 or not
The model you describe is simple univariate linear regression and what you want to do is to test wether your intercept is greater then zero. You can use MCMC obtaining posterior distribution of such model and then you can check what is the proportion of cases where $B > 0$:
$$ \Pr(B > 0) = \frac{1}N \sum^N_{i=1} \mathbf{1}(B_i) $$
where
$$
\mathbf{1}(B_i) =
\begin{cases}
1 &\text{if } & B_i > 0, \\
0 &\text{if } & B_i \leq 0.
\end{cases}
$$
for the $B_i$ values over the $i = 1,...,N$ MCMC replications. This gives you the posterior probability of $B > 0$. Since $B$ is continuous, checking whether it is exactly zero does not make sense. Examples of this approach can be found in almost any handbook on Bayesian statistics.
If you restrict the range of $B$ by choosing a prior that puts mass only on positive values, then there is no point in checking whether there are posterior $B$ values less than zero. In the Bayesian approach you choose some priors for your parameters, and every prior brings some information into your model ("noninformative" is a misleading term); however, in many cases you do not want to use the prior to restrict the range of your values, but rather a prior that allows "improbable" values, just with lower probability (example here). On the other hand, if values of $B$ simply cannot be less than zero, then you can restrict your prior, but you should describe this approach and its rationale in your report. | Bayesian analysis: Estimate whether a parameter is 0 or not
The model you describe is simple univariate linear regression, and what you want to do is test whether your intercept is greater than zero. You can use MCMC to obtain the posterior distribution of such a m
54,384 | Extremly poor polynomial fitting with SVR in sklearn | In short, you need to tune your parameters. Here's the sklearn docs:
The free parameters in the model are C and epsilon.
and their descriptions:
C : float, optional (default=1.0)
Penalty parameter C of the error term.
epsilon : float, optional (default=0.1)
Epsilon in the epsilon-SVR model. It specifies the epsilon-tube within which no penalty is associated in the training loss function with points predicted within a distance epsilon from the actual value.
It looks like you have an under-penalized model: it is not punished harshly enough for straying away from the data. Let's check.
I generated some polynomial data that is on approximately the same scale as yours:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVR
xs = np.linspace(0, 1, 100)
ys = 400*(xs - 2*xs*xs + xs*xs*xs) - 20
plt.scatter(xs, ys, alpha=.25)
And then fit the SVR with the default parameters:
clf = SVR(degree=3)
clf.fit(np.transpose([xs]), ys)
yf = clf.predict(np.transpose([xs]))
Which gives me essentially the same issue as you:
Using the intuition that the parameters are under-penalizing the fit, I adjusted them:
clf = SVR(degree=3, C=100, epsilon=.01)
Which gives me a pretty good fit:
In general, whenever your model has free parameters like this, it is very important to tune them carefully. sklearn makes this as convenient as possible: it supplies the grid_search module, which will try many models in parallel with different tuning parameters and choose the one that best fits your data. Also important is measuring the fit correctly, as the model fit measured on the training data is not a good representation of the model fit on unseen data. Use cross validation or a sample of held-out data to examine how well your model fits. In your case, I would recommend using cross validation with GridSearchCV. | Extremly poor polynomial fitting with SVR in sklearn | In short, you need to tune your parameters. Here's the sklearn docs:
The free parameters in the model are C and epsilon.
and their descriptions:
C : float, optional (default=1.0)
Penalty parameter | Extremly poor polynomial fitting with SVR in sklearn
In short, you need to tune your parameters. Here's the sklearn docs:
The free parameters in the model are C and epsilon.
and their descriptions:
C : float, optional (default=1.0)
Penalty parameter C of the error term.
epsilon : float, optional (default=0.1)
Epsilon in the epsilon-SVR model. It specifies the epsilon-tube within which no penalty is associated in the training loss function with points predicted within a distance epsilon from the actual value.
It looks like you have an under-penalized model: it is not punished harshly enough for straying away from the data. Let's check.
I generated some polynomial data that is on approximately the same scale as yours:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVR
xs = np.linspace(0, 1, 100)
ys = 400*(xs - 2*xs*xs + xs*xs*xs) - 20
plt.scatter(xs, ys, alpha=.25)
And then fit the SVR with the default parameters:
clf = SVR(degree=3)
clf.fit(np.transpose([xs]), ys)
yf = clf.predict(np.transpose([xs]))
Which gives me essentially the same issue as you:
Using the intuition that the parameters are under-penalizing the fit, I adjusted them:
clf = SVR(degree=3, C=100, epsilon=.01)
Which gives me a pretty good fit:
In general, whenever your model has free parameters like this, it is very important to tune them carefully. sklearn makes this as convenient as possible: it supplies the grid_search module, which will try many models in parallel with different tuning parameters and choose the one that best fits your data. Also important is measuring the fit correctly, as the model fit measured on the training data is not a good representation of the model fit on unseen data. Use cross validation or a sample of held-out data to examine how well your model fits. In your case, I would recommend using cross validation with GridSearchCV. | Extremly poor polynomial fitting with SVR in sklearn
In short, you need to tune your parameters. Here's the sklearn docs:
The free parameters in the model are C and epsilon.
and their descriptions:
C : float, optional (default=1.0)
Penalty parameter |
54,385 | Extremly poor polynomial fitting with SVR in sklearn | For the polynomial kernel, specify kernel='poly' and also try rescaling your data, as well as tuning your parameters C and epsilon as Matthew described.
>>> from sklearn import preprocessing
>>> X = [[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]]
>>> X_scaled = preprocessing.scale(X)
>>> X_scaled
array([[ 0. ..., -1.22..., 1.33...],
[ 1.22..., 0. ..., -0.26...],
[-1.22..., 1.22..., -1.06...]])
"The data will then have zero mean and unit variance".
Copied verbatim from: https://stackoverflow.com/questions/13324071/scaling-data-in-scikit-learn-svm | Extremly poor polynomial fitting with SVR in sklearn | For the polynomial kernel, specify kernel='poly', and also try rescaling your data, as well as tuning your parameters C and epsilon as Matthew described.
>>> from sklearn import preprocessing
>>> | Extremly poor polynomial fitting with SVR in sklearn
For the polynomial kernel, specify kernel='poly', and also try rescaling your data, as well as tuning your parameters C and epsilon as Matthew described.
>>> from sklearn import preprocessing
>>> X = [[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]]
>>> X_scaled = preprocessing.scale(X)
>>> X_scaled
array([[ 0. ..., -1.22..., 1.33...],
[ 1.22..., 0. ..., -0.26...],
[-1.22..., 1.22..., -1.06...]])
"The data will then have zero mean and unit variance".
Copied verbatim from: https://stackoverflow.com/questions/13324071/scaling-data-in-scikit-learn-svm | Extremly poor polynomial fitting with SVR in sklearn
For the polynomial kernel, specify kernel='poly', and also try rescaling your data, as well as tuning your parameters C and epsilon as Matthew described.
>>> from sklearn import preprocessing
>>> |
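To combine the two suggestions in this answer (rescaling plus a polynomial kernel) so that the same scaling is applied at training and prediction time, a pipeline is convenient. This is a sketch; the parameter values are borrowed from Matthew's answer and are not guaranteed to be optimal:

import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# StandardScaler performs the same zero-mean, unit-variance transformation as
# preprocessing.scale, but remembers the training statistics for later use.
model = make_pipeline(StandardScaler(), SVR(kernel='poly', degree=3, C=100, epsilon=.01))

xs = np.linspace(0, 1, 100)
ys = 400*(xs - 2*xs*xs + xs*xs*xs) - 20
model.fit(np.transpose([xs]), ys)
yf = model.predict(np.transpose([xs]))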
54,386 | Is it feasible to transform each variable differently while doing multiple regression | Yes. Sure. The key is to understand that in the expression "linear regression" the word "linear" means "linear with respect to the coefficients in front of variables". So, you can not only transform each variable differently, but, for example, make two different transformations of each variable and include both in regression. You should keep in mind, however, that ideally your variables should be uncorrelated with each other and roughly on the same scale.
If you are using R, you can transform variables directly in the formula without changing the data frame
lm(y ~ I(log(v1)) + I(v2^2) + I(1/v3), data=data)
In this case, if you want to make predictions for another data frame newdata, you can use it directly (without changing it).
Alternatively you can transform variables "by hand" in the data frame (introduce new columns with or without eliminating the old columns) and work with new variables
lm(y ~ new_v1 + new_v2 + new_v3, data=data)
To make predictions in this case, you need to transform variables in your newdata data frame (that is, introduce new columns) in the same way you transformed them in data.
The results of these two "implementations" will be the same. And both are linear regression. | Is it feasible to transform each variable differently while doing multiple regression | Yes. Sure. The key is to understand that in the expression "linear regression" the word "linear" means "linear with respect to the coefficients in front of variables". So, you can not only transform e | Is it feasible to transform each variable differently while doing multiple regression
Yes. Sure. The key is to understand that in the expression "linear regression" the word "linear" means "linear with respect to the coefficients in front of variables". So, you can not only transform each variable differently, but, for example, make two different transformations of each variable and include both in regression. You should keep in mind, however, that ideally your variables should be uncorrelated with each other and roughly on the same scale.
If you are using R, you can transform variables directly in the formula without changing the data frame
lm(y ~ I(log(v1)) + I(v2^2) + I(1/v3), data=data)
In this case, if you want to make predictions for another data frame newdata, you can use it directly (without changing it).
Alternatively you can transform variables "by hand" in the data frame (introduce new columns with or without eliminating the old columns) and work with new variables
lm(y ~ new_v1 + new_v2 + new_v3, data=data)
To make predictions in this case, you need to transform variables in your newdata data frame (that is, introduce new columns) in the same way you transformed them in data.
The results of these two "implementations" will be the same. And both are linear regression. | Is it feasible to transform each variable differently while doing multiple regression
Yes. Sure. The key is to understand that in the expression "linear regression" the word "linear" means "linear with respect to the coefficients in front of variables". So, you can not only transform e |
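For readers working in Python rather than R, the same "transform inside the formula" idea is available through statsmodels' formula interface; this sketch uses hypothetical variables v1, v2, v3 mirroring the R code above:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
data = pd.DataFrame({'v1': rng.uniform(1, 2, 100),
                     'v2': rng.normal(size=100),
                     'v3': rng.uniform(1, 2, 100)})
data['y'] = np.log(data.v1) + data.v2**2 + 1/data.v3 + rng.normal(scale=0.1, size=100)

# Transformations live in the formula, like R's I() syntax, so predictions on
# a new data frame need no manual transformation of its columns.
fit = smf.ols('y ~ np.log(v1) + I(v2**2) + I(1/v3)', data=data).fit()
newdata = data.head(5)
print(fit.predict(newdata))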
54,387 | In a model with several parameters, which one should be tuned via cross validation first? | As these hyperparameters interact with each other, it is best to tune them together. Generally, a hyperparameter response surface is very complex, which means that tuning parameters separately usually leads to poor results.
The standard approach to tuning is grid search, i.e. testing predetermined tuples of hyperparameters (with cross-validation) and using the one that yielded the best performance. Grid search, however, becomes very inefficient when you have a lot of hyperparameters (6 is already a problem for grid search). An alternative is random search, which essentially means trying a set of random tuples.
The best option is to use specialized libraries that provide automatic solvers to optimize hyperparameters. These solvers require far fewer parameter tuples to be tested, and hence require less time. You can find such solvers in Optunity and Hyperopt. | In a model with several parameters, which one should be tuned via cross validation first? | As these hyperparameters interact with each other, it is best to tune them together. Generally, a hyperparameter response surface is very complex, which means that tuning parameters separately usually | In a model with several parameters, which one should be tuned via cross validation first?
As these hyperparameters interact with each other, it is best to tune them together. Generally, a hyperparameter response surface is very complex, which means that tuning parameters separately usually leads to poor results.
The standard approach to tuning is grid search, i.e. testing predetermined tuples of hyperparameters (with cross-validation) and using the one that yielded the best performance. Grid search, however, becomes very inefficient when you have a lot of hyperparameters (6 is already a problem for grid search). An alternative is random search, which essentially means trying a set of random tuples.
The best option is to use specialized libraries that provide automatic solvers to optimize hyperparameters. These solvers require far fewer parameter tuples to be tested, and hence require less time. You can find such solvers in Optunity and Hyperopt. | In a model with several parameters, which one should be tuned via cross validation first?
As these hyperparameters interact with each other, it is best to tune them together. Generally, a hyperparameter response surface is very complex, which means that tuning parameters separately usually |
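As a sketch of tuning hyperparameters jointly rather than one at a time, here is a random-search example with scikit-learn; the model, search ranges, and budget are illustrative, and scipy.stats.loguniform requires a reasonably recent SciPy:

import numpy as np
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)

# Each of the n_iter trials samples all hyperparameters together, so
# interactions between them are explored, unlike one-at-a-time tuning.
param_distributions = {'C': loguniform(1e-2, 1e3), 'gamma': loguniform(1e-4, 1e1)}
search = RandomizedSearchCV(SVC(), param_distributions, n_iter=40, cv=5, random_state=0)
search.fit(X, y)
print(search.best_params_)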
54,388 | Autocorrelation and Partial Correlation plots in ARMA models | The blue shaded part joins the boundaries of an approximate 95% interval for the individual correlations assuming the series is independent. So if your data were white noise, about 5% of those autocorrelations would be expected to lie outside those bounds.
The PACF is basically the lagged correlations adjusted for the effect of lower order correlation. For example, if you have an AR(1) with autocorrelation at lag 1 $\rho_1$, then the correlation at lag 2 will be $\rho_1^2$. If you want to assess what's going on apart from the correlation you already expect at lag 2 because of the correlation at lag 1, you want the PACF at lag 2.
If you have a roughly sinusoidal series, you'll typically see a damped sinusoid in the ACF. But notice that your PACF cuts off ... so that ongoing sinusoid is largely due to the pattern of the earlier lags; a low order AR might describe the data reasonably well (but actually, looking at the data, I think it's not a simple AR as such; there's what looks like some periodic effect, and you should try to model that). Also the data seem to be positive, right-skewed and nonstationary, so you should beware trying to overinterpret your displays in terms of ARMA models. | Autocorrelation and Partial Correlation plots in ARMA models | The blue shaded part joins the boundaries of an approximate 95% interval for the individual correlations assuming the series is independent. So if your data were white noise, about 5% of those autocor | Autocorrelation and Partial Correlation plots in ARMA models
The blue shaded part joins the boundaries of an approximate 95% interval for the individual correlations assuming the series is independent. So if your data were white noise, about 5% of those autocorrelations would be expected to lie outside those bounds.
The PACF is basically the lagged correlations adjusted for the effect of lower order correlation. For example, if you have an AR(1) with autocorrelation at lag 1 $\rho_1$, then the correlation at lag 2 will be $\rho_1^2$. If you want to assess what's going on apart from the correlation you already expect at lag 2 because of the correlation at lag 1, you want the PACF at lag 2.
If you have a roughly sinusoidal series, you'll typically see a damped sinusoid in the ACF. But notice that your PACF cuts off ... so that ongoing sinusoid is largely due to the pattern of the earlier lags; a low order AR might describe the data reasonably well (but actually, looking at the data, I think it's not a simple AR as such; there's what looks like some periodic effect, and you should try to model that). Also the data seem to be positive, right-skewed and nonstationary, so you should beware trying to overinterpret your displays in terms of ARMA models. | Autocorrelation and Partial Correlation plots in ARMA models
The blue shaded part joins the boundaries of an approximate 95% interval for the individual correlations assuming the series is independent. So if your data were white noise, about 5% of those autocor |
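The "about 5% outside the bounds under white noise" claim is easy to check by simulation; this Python sketch (not from the original answer) counts how often sample autocorrelations of pure noise exceed the approximate 95% bounds +/- 1.96/sqrt(n):

import numpy as np

rng = np.random.default_rng(1)
n, nlags, reps = 500, 20, 200
exceed = 0
for _ in range(reps):
    x = rng.normal(size=n)
    x = x - x.mean()
    acov = np.correlate(x, x, mode='full')[n - 1:] / n   # autocovariances
    r = acov[1:nlags + 1] / acov[0]                      # autocorrelations r_1..r_20
    exceed += np.sum(np.abs(r) > 1.96 / np.sqrt(n))
print(exceed / (reps * nlags))   # should be close to 0.05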
54,389 | Autocorrelation and Partial Correlation plots in ARMA models | The blue shaded areas are used to test the statistical significance of the autocorrelation and partial autocorrelation coefficients. In the ACF, these bands are sometimes based on Bartlett's standard errors, which go back to a paper published in 1946. They are calculated using the formula below, where $r_{k}$ denotes the kth estimated autocorrelation coefficient and $n$ is the number of observations of the time-series:
$$ (1 + 2 \sum_{j=1}^{k-1}r_{j}^{2})^{1/2} n^{-1/2}. $$
The estimated standard error plotted in the PACF, which is sometimes plotted in the ACF instead of Bartlett's errors, is calculated using the following formula:
$$ n^{-1/2}. $$
The ACF shows the correlation between ordered pairs separated by various time spans. For example, the correlation between $x_{t}$ and $x_{t-1}$, or, say, $x_{t}$ and $x_{t-2}$, and so on.
The PACF measures the correlation between ordered pairs separated by various time spans taking into account the effects of intervening pairs. So, the PACF differs from the ACF in terms of accounting for intervening effects. For example, the second partial autocorrelation coefficient measures the correlation between $x_{t}$ and $x_{t-2}$ taking into account the effect of $x_{t-1}$. Similarly, the fourth partial autocorrelation coefficient is a measure of the correlation between $x_{t}$ and $x_{t-4}$ taking into account the effects of $x_{t-1}$, $x_{t-2}$, and $x_{t-3}$.
The R code below provides a demonstration.
Lastly, the ACF and PACF are important time-domain tools to help one understand the time-series properties of the data. They are used extensively in the Box-Jenkins methodology for identifying ARIMA models.
# Simulate an AR(2) process ---
x <- c(rep(0,200)) # Vector to hold the simulated series
w <- rnorm(200) # Errors drawn from normal distribution
phi <- c(1.58,-0.64) # Autoregressive parameters
# Simulation
for(t in 3:200) x[t] <- phi[1] * x[t-1] + phi[2] * x[t-2] + w[t]
# Housekeeping
x <- ts(x[-(1:2)]) # Remove leading zeros
maxLags <- 6 # Number of lagged series for regressions
# Prepare data for lm() function
data <- ts(embed(c(rep(NA,maxLags), x), maxLags+1)) # Dataset
colnames(data) <- c("x",paste0("AR",1:6)) # Varnames
# Autocorrelation Function ---
lm(x ~ AR1, data=data)
acf(x)$acf[2]
lm(x ~ AR2, data=data)
acf(x)$acf[3]
lm(x ~ AR3, data=data)
acf(x)$acf[4]
lm(x ~ AR4, data=data)
acf(x)$acf[5]
lm(x ~ AR5, data=data)
acf(x)$acf[6]
# Partial Autocorrelation Function ---
lm(x ~ AR1, data=data)
pacf(x)$acf[1]
lm(x ~ AR1 + AR2, data=data)
pacf(x)$acf[2]
lm(x ~ AR1 + AR2 + AR3, data=data)
pacf(x)$acf[3]
lm(x ~ AR1 + AR2 + AR3 + AR4, data=data)
pacf(x)$acf[4]
lm(x ~ AR1 + AR2 + AR3 + AR4 + AR5, data=data)
pacf(x)$acf[5]
lm(x ~ AR1 + AR2 + AR3 + AR4 + AR5 + AR6, data=data)
pacf(x)$acf[6]
There are small differences between some of the coefficients, but this is just related to the estimation methods used. The PACF can be estimated via linear regressions, but software typically uses other estimation methods, such as the Yule-Walker equations. | Autocorrelation and Partial Correlation plots in ARMA models | The blue shaded areas are used to test the statistical significance of the autocorrelation and partial autocorrelation coefficients. In the ACF, these bands are sometimes based on Bartlett's standard | Autocorrelation and Partial Correlation plots in ARMA models
The blue shaded areas are used to test the statistical significance of the autocorrelation and partial autocorrelation coefficients. In the ACF, these bands are sometimes based on Bartlett's standard errors, which go back to a paper published in 1946. They are calculated using the formula below, where $r_{k}$ denotes the kth estimated autocorrelation coefficient and $n$ is the number of observations of the time-series:
$$ (1 + 2 \sum_{j=1}^{k-1}r_{j}^{2})^{1/2} n^{-1/2}. $$
The estimated standard error plotted in the PACF, which is sometimes plotted in the ACF instead of Bartlett's errors, is calculated using the following formula:
$$ n^{-1/2}. $$
The ACF shows the correlation between ordered pairs separated by various time spans. For example, the correlation between $x_{t}$ and $x_{t-1}$, or, say, $x_{t}$ and $x_{t-2}$, and so on.
The PACF measures the correlation between ordered pairs separated by various time spans taking into account the effects of intervening pairs. So, the PACF differs from the ACF in terms of accounting for intervening effects. For example, the second partial autocorrelation coefficient measures the correlation between $x_{t}$ and $x_{t-2}$ taking into account the effect of $x_{t-1}$. Similarly, the fourth partial autocorrelation coefficient is a measure of the correlation between $x_{t}$ and $x_{t-4}$ taking into account the effects of $x_{t-1}$, $x_{t-2}$, and $x_{t-3}$.
The R code below provides a demonstration.
Lastly, the ACF and PACF are important time-domain tools to help one understand the time-series properties of the data. They are used extensively in the Box-Jenkins methodology for identifying ARIMA models.
# Simulate an AR(2) process ---
x <- c(rep(0,200)) # Vector to hold the simulated series
w <- rnorm(200) # Errors drawn from normal distribution
phi <- c(1.58,-0.64) # Autoregressive parameters
# Simulation
for(t in 3:200) x[t] <- phi[1] * x[t-1] + phi[2] * x[t-2] + w[t]
# Housekeeping
x <- ts(x[-(1:2)]) # Remove leading zeros
maxLags <- 6 # Number of lagged series for regressions
# Prepare data for lm() function
data <- ts(embed(c(rep(NA,maxLags), x), maxLags+1)) # Dataset
colnames(data) <- c("x",paste0("AR",1:6)) # Varnames
# Autocorrelation Function ---
lm(x ~ AR1, data=data)
acf(x)$acf[2]
lm(x ~ AR2, data=data)
acf(x)$acf[3]
lm(x ~ AR3, data=data)
acf(x)$acf[4]
lm(x ~ AR4, data=data)
acf(x)$acf[5]
lm(x ~ AR5, data=data)
acf(x)$acf[6]
# Partial Autocorrelation Function ---
lm(x ~ AR1, data=data)
pacf(x)$acf[1]
lm(x ~ AR1 + AR2, data=data)
pacf(x)$acf[2]
lm(x ~ AR1 + AR2 + AR3, data=data)
pacf(x)$acf[3]
lm(x ~ AR1 + AR2 + AR3 + AR4, data=data)
pacf(x)$acf[4]
lm(x ~ AR1 + AR2 + AR3 + AR4 + AR5, data=data)
pacf(x)$acf[5]
lm(x ~ AR1 + AR2 + AR3 + AR4 + AR5 + AR6, data=data)
pacf(x)$acf[6]
There are small differences between some of the coefficients, but this is just related to the estimation methods used. The PACF can be estimated via linear regressions, but software typically uses other estimation methods, such as the Yule-Walker equations. | Autocorrelation and Partial Correlation plots in ARMA models
The blue shaded areas are used to test the statistical significance of the autocorrelation and partial autocorrelation coefficients. In the ACF, these bands are sometimes based on Bartlett's standard |
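The Bartlett standard errors in the first formula above are easy to compute from the estimated autocorrelations; a Python sketch (the series here is arbitrary white noise, purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=300)
x = x - x.mean()
n, nlags = len(x), 10
acov = np.correlate(x, x, mode='full')[n - 1:] / n
r = acov[1:nlags + 1] / acov[0]

# Bartlett SE for r_k: (1 + 2 * sum_{j<k} r_j^2)^(1/2) * n^(-1/2);
# for k = 1 this reduces to the simpler n^(-1/2) formula.
se = np.sqrt(1 + 2 * np.cumsum(np.r_[0.0, r[:-1] ** 2])) / np.sqrt(n)
print(np.c_[r, se])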
54,390 | Marginal, joint, and conditional distributions of a multivariate normal | Alrighty, y'all. I have an answer. Sorry it took me so long to get it posted here. School was absolutely hectic this week. Spring break is here, though, and I can type up my answer.
First we need to find the joint distribution of $(Y_1, Y_3)$. Since $Y\sim MVN( \mu, \Sigma)$ we know that any subset of the components of $Y$ is also $MVN$. Thus we use
$$
A = \begin{pmatrix}
1 & 0 & 0 \\
0 & 0 & 1 \\
\end{pmatrix}
$$
And see that
$$
AY = (Y_1, Y_3)^T
$$
$$
A\Sigma A^T = \begin{pmatrix}
2 & 1 \\
1 & 4 \\
\end{pmatrix}
$$
$$
\mu_{(Y_1,Y_3)} = A\mu = (5,7)^T
$$
Therefore, using the theorem for conditional distributions of a multivariate normal yields:
$$\begin{align}\newcommand{\c}{\text{Cov}}
\newcommand{\v}{\text{Var}}
E[Y_3|Y_1]&=μ_{Y_3}+\frac{\c(Y_1,Y_3)(Y_1−μ_{Y_1})}{\v(Y_1)}\\
&=\frac{9+Y_1}{2}
\end{align}$$
And
$$\begin{align}
\v(Y_3|Y_1) &= \v(Y_3) - \frac{\c(Y_1,Y_3)^2}{\v(Y_1)} \\
&= 4 - \frac{1}{2} = \frac{7}{2}
\end{align}$$ | Marginal, joint, and conditional distributions of a multivariate normal | Alrighty, y'all. I have an answer. Sorry it took me so long to get it posted here. School was absolutely hectic this week. Spring break is here, though, and I can type up my answer.
First we need to | Marginal, joint, and conditional distributions of a multivariate normal
Alrighty, y'all. I have an answer. Sorry it took me so long to get it posted here. School was absolutely hectic this week. Spring break is here, though, and I can type up my answer.
First we need to find the joint distribution of $(Y_1, Y_3)$. Since $Y\sim MVN( \mu, \Sigma)$ we know that any subset of the components of $Y$ is also $MVN$. Thus we use
$$
A = \begin{pmatrix}
1 & 0 & 0 \\
0 & 0 & 1 \\
\end{pmatrix}
$$
And see that
$$
AY = (Y_1, Y_3)^T
$$
$$
A\Sigma A^T = \begin{pmatrix}
2 & 1 \\
1 & 4 \\
\end{pmatrix}
$$
$$
\mu_{(Y_1,Y_3)} = A\mu = (5,7)^T
$$
Therefore, using the theorem for conditional distributions of a multivariate normal yields:
$$\begin{align}\newcommand{\c}{\text{Cov}}
\newcommand{\v}{\text{Var}}
E[Y_3|Y_1]&=μ_{Y_3}+\frac{\c(Y_1,Y_3)(Y_1−μ_{Y_1})}{\v(Y_1)}\\
&=\frac{9+Y_1}{2}
\end{align}$$
And
$$\begin{align}
\v(Y_3|Y_1) &= \v(Y_3) - \frac{\c(Y_1,Y_3)^2}{\v(Y_1)} \\
&= 4 - \frac{1}{2} = \frac{7}{2}
\end{align}$$ | Marginal, joint, and conditional distributions of a multivariate normal
Alrighty, y'all. I have an answer. Sorry it took me so long to get it posted here. School was absolutely hectic this week. Spring break is here, though, and I can type up my answer.
First we need to |
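A quick Monte Carlo check of the conditional mean and variance derived above (a sketch; it uses only the (Y_1, Y_3) mean vector (5, 7)^T and the 2x2 covariance from the answer):

import numpy as np

rng = np.random.default_rng(0)
mu = np.array([5.0, 7.0])                    # mean of (Y1, Y3)
Sigma = np.array([[2.0, 1.0], [1.0, 4.0]])   # covariance of (Y1, Y3)
y = rng.multivariate_normal(mu, Sigma, size=1_000_000)

sel = np.abs(y[:, 0] - 6.0) < 0.05           # condition on Y1 close to 6
print(y[sel, 1].mean(), (9 + 6) / 2)         # both approximately 7.5
print(y[sel, 1].var(), 7 / 2)                # both approximately 3.5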
54,391 | Intuitive interpretation of Bayes risk $R(\delta, \lambda) = \int_{\Omega}R(\theta, \delta) \lambda(\theta) d\theta$ | The Bayes risk is the frequentist risk averaged over the parameter space against the prior distribution $\lambda$. The notion turns a function of $\theta$, $R(\theta,\delta)$, into a positive number, $R(\delta,\lambda)$, and hence allows for a total ordering of estimators $\delta$, and thus for the definition of the Bayes estimator$$\delta^\lambda=\arg\min_\delta R(\delta,\lambda)$$
The link with the conditional Bayesian approach is that, thanks to Fubini's theorem, the Bayes estimator can also be derived by minimising for every $x$ the posterior expected loss$$\varrho(\delta(x),\lambda)=\mathbb{E}[L(\theta,\delta(x))|X=x]$$
As noted by @guy, this quantity also has frequentist motivations, one of them being that, in regular problems, the minimax risk equals the maximin risk in the following way:$$\min_\delta\max_\theta R(\theta,\delta)=\max_\lambda\min_\delta R(\delta,\lambda)$$expressing minimax estimators as "worst" Bayes estimators. | Intuitive interpretation of Bayes risk $R(\delta, \lambda) = \int_{\Omega}R(\theta, \delta) \lambda( | The Bayes risk is the frequentist risk averaged over the parameter space against the prior distribution $\lambda$. The notion turns a function of $\theta$, $R(\theta,\delta)$, into a positive numbe | Intuitive interpretation of Bayes risk $R(\delta, \lambda) = \int_{\Omega}R(\theta, \delta) \lambda(\theta) d\theta$
The Bayes risk is the frequentist risk averaged over the parameter space against the prior distribution $\lambda$. The notion turns a function of $\theta$, $R(\theta,\delta)$, into a positive number, $R(\delta,\lambda)$, and hence allows for a total ordering of estimators $\delta$, and thus for the definition of the Bayes estimator$$\delta^\lambda=\arg\min_\delta R(\delta,\lambda)$$
The link with the conditional Bayesian approach is that, thanks to Fubini's theorem, the Bayes estimator can also be derived by minimising for every $x$ the posterior expected loss$$\varrho(\delta(x),\lambda)=\mathbb{E}[L(\theta,\delta(x))|X=x]$$
As noted by @guy, this quantity also has frequentist motivations, one of them being that, in regular problems, the minimax risk equals the maximin risk in the following way:$$\min_\delta\max_\theta R(\theta,\delta)=\max_\lambda\min_\delta R(\delta,\lambda)$$expressing minimax estimators as "worst" Bayes estimators. | Intuitive interpretation of Bayes risk $R(\delta, \lambda) = \int_{\Omega}R(\theta, \delta) \lambda(
The Bayes risk is the frequentist risk averaged over the parameter space against the prior distribution $\lambda$. The notion turns a function of $\theta$, $R(\theta,\delta)$, into a positive numbe |
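As a concrete illustration of these definitions (a standard textbook example, not part of the original answer): take $X\mid\theta\sim N(\theta,1)$, prior $\theta\sim N(0,\tau^2)$, and squared-error loss. Minimising the posterior expected loss for every $x$ gives the posterior mean, and averaging its frequentist risk against the prior gives the Bayes risk:
$$\delta^\lambda(x)=\mathbb{E}[\theta\mid X=x]=\frac{\tau^2}{1+\tau^2}\,x,\qquad R(\delta^\lambda,\lambda)=\frac{\tau^2}{1+\tau^2}.$$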
54,392 | Using an asymmetric distance matrix for clustering | Full metric properties are rarely required if you don't need strong theoretical results. In particular, $d(x,y)=0\Rightarrow x=y$ is not realistic on natural data. Just because two observed values are identical does not imply they are the same observation (this property can easily be restored by working on equivalence classes if you have the triangle inequality, for theoretical results).
Triangle inequality is mostly used for acceleration. Before using an algorithm, you should check whether it makes such an assumption. Usually they will still work, but the results may be worse if the assumption does not hold. (There may be cases where convergence relies on the triangle inequality, too.)
Symmetry is much harder. Few implementations will allow this, even if a number of algorithms could support this (e.g. DBSCAN). A lot of code you see assumes symmetry in many places...
Consider this simple and easily understood transformation:
$$s(x,y) := \min\{ d(x,y), d(y,x) \}$$
which restores symmetry. | Using an asymmetric distance matrix for clustering | Full metric properties are rarely required if you don't need strong theoretical results. In particular, $d(x,y)=0\Rightarrow x=y$ is not realistic on natural data. Just because two observed values are | Using an asymmetric distance matrix for clustering
Full metric properties are rarely required if you don't need strong theoretical results. In particular, $d(x,y)=0\Rightarrow x=y$ is not realistic on natural data. Just because two observed values are identical does not imply they are the same observation (this property can easily be restored by working on equivalence classes if you have the triangle inequality, for theoretical results).
Triangle inequality is mostly used for acceleration. Before using an algorithm, you should check whether it makes such an assumption. Usually they will still work, but the results may be worse if the assumption does not hold. (There may be cases where convergence relies on the triangle inequality, too.)
Symmetry is much harder. Few implementations will allow this, even if a number of algorithms could support this (e.g. DBSCAN). A lot of code you see assumes symmetry in many places...
Consider this simple and easily understood transformation:
$$s(x,y) := \min\{ d(x,y), d(y,x) \}$$
which restores symmetry. | Using an asymmetric distance matrix for clustering
Full metric properties are rarely required if you don't need strong theoretical results. In particular, $d(x,y)=0\Rightarrow x=y$ is not realistic on natural data. Just because two observed values are |
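A small sketch of applying this symmetrising transformation to a dissimilarity matrix before handing it to a clustering routine (the matrix here is random, purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
D = rng.uniform(0, 1, size=(5, 5))   # hypothetical asymmetric dissimilarities
np.fill_diagonal(D, 0.0)

# Elementwise minimum of d(x,y) and d(y,x), as in the formula above;
# np.maximum or the average (D + D.T) / 2 are common alternatives.
S = np.minimum(D, D.T)
assert np.allclose(S, S.T)           # S is now symmetric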
54,393 | How to compute the sum of a mixture distribution with another distribution? | Let's start with notation. When you need to refer to different things, you have to give them different names. To that end let $f$ refer to the first PDF and $g$ the second.
Second, let's focus on the form of the PDFs and strip away unnecessary details:
$f$ is a positive linear combination of $\phi_b$ and $\phi_l$: that is called a mixture.
$\phi_b$ and $\phi_l$ are both of the form
$$\phi(t) = \text{some number}\times \exp(-t^2/2\sigma^2)$$
where $\sigma$ is a parameter. These are recognizable as describing Normal distributions with mean $0$ and standard deviations $\sigma_b$ and $\sigma_l$. (I am ignoring the "$(n)$" that appears in the original expressions, because this makes no sense.)
$g$ is likewise recognizable as a Normal distribution with mean $0$ and standard deviation $\sigma_v$.
It is elementary that the sum of a Normal$(\mu, \sigma^2)$ distribution (having mean $\mu$ and standard deviation $\sigma$) and a Normal$(\mu^\prime, \sigma^{\prime 2})$ distribution is a Normal$(\mu+\mu^\prime, \sigma^2 + \sigma^{\prime 2})$ distribution. It is also immediate from the mathematical definitions that convolution (which is the operation used to carry out the sum of distributions) is bilinear: in other words, when you convolve some distribution with a mixture, you get the mixture of the convolutions with the components.
Therefore, without any more consideration, we may conclude that the distribution of the sum is the mixture of a Normal$(0, \sigma_v^2 + \sigma_b^2)$ distribution (with coefficient $1-\epsilon$) and a Normal$(0, \sigma_v^2 + \sigma_l^2)$ distribution (with coefficient $\epsilon$). If you really, really want to see its PDF in all its glory, here it is:
$$\newcommand{\s}[2]{\sigma_{#1}^2 + \sigma_{#2}^2}
\newcommand{\N}[2]{\frac{#1}{\sqrt{2\pi(#2)}}\exp\left(\frac{-t^2}{2(#2)}\right)}
\eqalign{&(f*g)(t) = \\ &\N{1-\epsilon}{\s{v}{b}} + \N{\epsilon}{\s{v}{l}}.}$$
Normally this level of detail is not needed--it just obscures the basic simplicity of what is going on.
Appendix
This approach can rigorously be justified by considering characteristic functions (cf). The cf of any random variable $X$ with distribution $F$ is a function of a real variable $t$ defined by
$$\newcommand{\E}[2][F]{\mathbb{E}_{#1}{\left(#2\right)}}
\phi_F(t) = \E{e^{itX}} = \E{\cos(tX)} + i\E{\sin(tX)}$$
(where, as usual, $i^2 = -1$). Because $|e^{itX}|\le 1$, $|\phi_F(t)|\le \E{1}=1$ exists and is finite for all $t$.
The elementary properties of expectations and exponentials provide simple useful algebraic relationships between characteristic functions, sums, and mixtures:
Let $X$ and $Y$ be independent random variables with distributions $F$ and $G$, respectively, and let $\alpha$ and $\beta$ be numbers. Then
$$\phi_{\alpha X + \beta Y}(t) = \E[(F,G)]{e^{i(\alpha X + \beta Y)t}} = \E[(F,G)]{e^{i X (\alpha t)} e^{i Y (\beta t)}} = \phi_F(\alpha t)\phi_G(\beta t).$$
Let $X$ be governed by the distribution $(1-\epsilon)F + \epsilon G,\ 0 \le \epsilon \le 1$, a mixture of the distributions $F$ and $G$. (Let us call this an "$\epsilon$ mixture of $X$ and $Y$".) Because expectations are integrals and integrals are linear in their arguments,
$$\eqalign{
\phi_{(1-\epsilon)F + \epsilon G}(t) &= \E[(1-\epsilon)F + \epsilon G]{e^{i t X}}
\\ &= (1-\epsilon)\E[F]{e^{i t X}} + \epsilon\E[G]{e^{i t X}}
\\ &= (1-\epsilon)\phi_F(t) + \epsilon \phi_G(t).
}$$
A fundamental theorem asserts that the characteristic function completely determines the distribution. This comes down to showing how $F$ can be recovered from $\phi_F$, which involves more detailed analysis of the integrals involved. An account can be found in textbooks on probability, real analysis, or Fourier analysis.
One actual calculation of an integral is needed. By definition, the standard Normal distribution has the characteristic function $\phi(t) = e^{-t^2/2}$. This corresponds to the distribution $F$ whose density function is proportional to $dF(x) = e^{-x^2/2}$ (with constant of proportionality $C$, say: its actual value does not matter to us). This can be verified by computing
$$\eqalign{
\phi_F(t) &= \E{e^{i t X}} = C\int_{\mathbb R} e^{i t x} e^{-x^2/2} dx
\\& = C\int_{\mathbb R} e^{-[(x - i t)^2 + t^2]/2} dx
\\& = e^{-t^2/2}C\int_{\mathbb R} e^{-(x - i t)^2 /2} dx
\\& = e^{-t^2/2}C\int_{\mathbb{R}-it} e^{-x^2 /2} dx
\\& = e^{-t^2/2}.
}$$
The only calculation of an integral occurred at the last step; everything else was elementary algebraic manipulation. The justification relies on Cauchy's Integral Theorem. If you know this theorem and its applications to integration in the Complex plane, then you will also recognize that this calculation required only establishing that $e^{-z^2}$ grows very small when the real part of $z$ is large and the imaginary part is bounded.
The "Normal distribution with mean $\mu$ and variance $\sigma^2$" is defined to be the distribution of $\sigma X + \mu$ for a standard Normally distributed random variable $X$: this is a linear combination of a standard Normal and a constant, with coefficients $\sigma$ and $\mu$ respectively. Consider two such Normal distributions, both with zero mean and variances $\sigma$ and $\sigma^{\prime 2}$. By (1) their cf is
$$\phi(t) = \phi_F(\sigma t)\phi_F(\sigma^\prime t) = e^{-(\sigma t)^2/2}e^{-(\sigma^\prime t)^2/2} = e^{-(\sigma^2 + \sigma^{\prime 2}) t^2/2}.$$
The latter is recognizable as the cf of a Normal$(0, \sigma^2 + \sigma^{\prime 2})$ distribution. Because (by (3) above) the cf determines the distribution, we conclude the sum of independent Normal distributions is again Normal. Its variance is the sum of the variances of the components.
Here, then, is the fully rigorous solution to the problem. First, its statement:
Let $X, Y,$ and $Z$ be independent random variables with Normal$(0,\sigma_b^2)$, Normal$(0,\sigma_l^2)$, and Normal$(0,\sigma_v^2)$ distributions, respectively. What is the sum of $Z$ with an $\epsilon$ mixture of $X$ and $Y$?
Relationships (1) and (2) above, along with the result about sums of independent Normals give the answer so immediately and clearly that all one has to do is write it down. Even if you want to display the little details, all the calculations are simple algebra because no more integration is necessary: it is all hidden in the one integral we did to compute the standard Normal cf from its pdf.
In a generalization of this problem it is required to find the distribution of a sum of two independent variables where one is a mixture of two independent variables. Let the three distributions involved be $F$, $G$, and $H$. The basic calculation for the cf of this sum is still the same:
$$\phi(t) = (1-\epsilon)\phi_F(t)\phi_H(t) + \epsilon\phi_G(t)\phi_H(t).$$
It is manifestly another mixture. To obtain formulas for the pdf of this distribution, then, you have to find pdfs for which $\phi_F(t)\phi_H(t)$ and $\phi_G(t)\phi_H(t)$ are their cfs. Sometimes this can be done by inspection, as we did previously; sometimes it can be done using formulas to compute pdfs from cf using inverse Fourier transforms; and sometimes it just cannot be done in any nice closed analytic form at all. | How to compute the sum of a mixture distribution with another distribution? | Let's start with notation. When you need to refer to different things, you have to give them different names. To that end let $f$ refer to the first PDF and $g$ the second.
Second, let's focus on th | How to compute the sum of a mixture distribution with another distribution?
Let's start with notation. When you need to refer to different things, you have to give them different names. To that end let $f$ refer to the first PDF and $g$ the second.
Second, let's focus on the form of the PDFs and strip away unnecessary details:
$f$ is a positive linear combination of $\phi_b$ and $\phi_l$: that is called a mixture.
$\phi_b$ and $\phi_l$ are both of the form
$$\phi(t) = \text{some number}\times \exp(-t^2/2\sigma^2)$$
where $\sigma$ is a parameter. These are recognizable as describing Normal distributions with mean $0$ and standard deviations $\sigma_b$ and $\sigma_l$. (I am ignoring the "$(n)$" that appears in the original expressions, because this makes no sense.)
$g$ is likewise recognizable as a Normal distribution with mean $0$ and standard deviation $\sigma_v$.
It is elementary that the sum of a Normal$(\mu, \sigma^2)$ distribution (having mean $\mu$ and standard deviation $\sigma$) and a Normal$(\mu^\prime, \sigma^{\prime 2})$ distribution is a Normal$(\mu+\mu^\prime, \sigma^2 + \sigma^{\prime 2})$ distribution. It is also immediate from the mathematical definitions that convolution (which is the operation used to carry out the sum of distributions) is bilinear: in other words, when you convolve some distribution with a mixture, you get the mixture of the convolutions with the components.
Therefore, without any more consideration, we may conclude that the distribution of the sum is the mixture of a Normal$(0, \sigma_v^2 + \sigma_b^2)$ distribution (with coefficient $1-\epsilon$) and a Normal$(0, \sigma_v^2 + \sigma_l^2)$ distribution (with coefficient $\epsilon$). If you really, really want to see its PDF in all its glory, here it is:
$$\newcommand{\s}[2]{\sigma_{#1}^2 + \sigma_{#2}^2}
\newcommand{\N}[2]{\frac{#1}{\sqrt{2\pi(#2)}}\exp\left(\frac{-t^2}{2(#2)}\right)}
\eqalign{&(f*g)(t) = \\ &\N{1-\epsilon}{\s{v}{b}} + \N{\epsilon}{\s{v}{l}}.}$$
Normally this level of detail is not needed--it just obscures the basic simplicity of what is going on.
Appendix
This approach can rigorously be justified by considering characteristic functions (cf). The cf of any random variable $X$ with distribution $F$ is a function of a real variable $t$ defined by
$$\newcommand{\E}[2][F]{\mathbb{E}_{#1}{\left(#2\right)}}
\phi_F(t) = \E{e^{itX}} = \E{\cos(tX)} + i\E{\sin(tX)}$$
(where, as usual, $i^2 = -1$). Because $|e^{itX}|\le 1$, $|\phi_F(t)|\le \E{1}=1$ exists and is finite for all $t$.
The elementary properties of expectations and exponentials provide simple useful algebraic relationships between characteristic functions, sums, and mixtures:
Let $X$ and $Y$ be independent random variables with distributions $F$ and $G$, respectively, and let $\alpha$ and $\beta$ be numbers. Then
$$\phi_{\alpha X + \beta Y}(t) = \E[(F,G)]{e^{i(\alpha X + \beta Y)t}} = \E[(F,G)]{e^{i X (\alpha t)} e^{i Y (\beta t)}} = \phi_F(\alpha t)\phi_G(\beta t).$$
Let $X$ be governed by the distribution $(1-\epsilon)F + \epsilon G,\ 0 \le \epsilon \le 1$, a mixture of the distributions $F$ and $G$. (Let us call this an "$\epsilon$ mixture of $X$ and $Y$".) Because expectations are integrals and integrals are linear in their arguments,
$$\eqalign{
\phi_{(1-\epsilon)F + \epsilon G}(t) &= \E[(1-\epsilon)F + \epsilon G]{e^{i t X}}
\\ &= (1-\epsilon)\E[F]{e^{i t X}} + \epsilon\E[G]{e^{i t X}}
\\ &= (1-\epsilon)\phi_F(t) + \epsilon \phi_G(t).
}$$
A fundamental theorem asserts that the characteristic function completely determines the distribution. This comes down to showing how $F$ can be recovered from $\phi_F$, which involves more detailed analysis of the integrals involved. An account can be found in textbooks on probability, real analysis, or Fourier analysis.
One actual calculation of an integral is needed. By definition, the standard Normal distribution has the characteristic function $\phi(t) = e^{-t^2/2}$. This corresponds to the distribution $F$ whose density function is proportional to $dF(x) = e^{-x^2/2}$ (with constant of proportionality $C$, say: its actual value does not matter to us). This can be verified by computing
$$\eqalign{
\phi_F(t) &= \E{e^{i t X}} = C\int_{\mathbb R} e^{i t x} e^{-x^2/2} dx
\\& = C\int_{\mathbb R} e^{-[(x - i t)^2 + t^2]/2} dx
\\& = e^{-t^2/2}C\int_{\mathbb R} e^{-(x - i t)^2 /2} dx
\\& = e^{-t^2/2}C\int_{\mathbb{R}-it} e^{-x^2 /2} dx
\\& = e^{-t^2/2}.
}$$
The only calculation of an integral occurred at the last step; everything else was elementary algebraic manipulation. The justification relies on Cauchy's Integral Theorem. If you know this theorem and its applications to integration in the Complex plane, then you will also recognize that this calculation required only establishing that $e^{-z^2}$ grows very small when the real part of $z$ is large and the imaginary part is bounded.
The "Normal distribution with mean $\mu$ and variance $\sigma^2$" is defined to be the distribution of $\sigma X + \mu$ for a standard Normally distributed random variable $X$: this is a linear combination of a standard Normal and a constant, with coefficients $\sigma$ and $\mu$ respectively. Consider two such Normal distributions, both with zero mean and variances $\sigma$ and $\sigma^{\prime 2}$. By (1) their cf is
$$\phi(t) = \phi_F(\sigma t)\phi_F(\sigma^\prime t) = e^{-(\sigma t)^2/2}e^{-(\sigma^\prime t)^2/2} = e^{-(\sigma^2 + \sigma^{\prime 2}) t^2/2}.$$
The latter is recognizable as the cf of a Normal$(0, \sigma^2 + \sigma^{\prime 2})$ distribution. Because (by (3) above) the cf determines the distribution, we conclude the sum of independent Normal distributions is again Normal. Its variance is the sum of the variances of the components.
Here, then, is the fully rigorous solution to the problem. First, its statement:
Let $X, Y,$ and $Z$ be independent random variables with Normal$(0,\sigma_b^2)$, Normal$(0,\sigma_l^2)$, and Normal$(0,\sigma_v^2)$ distributions, respectively. What is the sum of $Z$ with an $\epsilon$ mixture of $X$ and $Y$?
Relationships (1) and (2) above, along with the result about sums of independent Normals give the answer so immediately and clearly that all one has to do is write it down. Even if you want to display the little details, all the calculations are simple algebra because no more integration is necessary: it is all hidden in the one integral we did to compute the standard Normal cf from its pdf.
In a generalization of this problem it is required to find the distribution of a sum of two independent variables where one is a mixture of two independent variables. Let the three distributions involved be $F$, $G$, and $H$. The basic calculation for the cf of this sum is still the same:
$$\phi(t) = (1-\epsilon)\phi_F(t)\phi_H(t) + \epsilon\phi_G(t)\phi_H(t).$$
It is manifestly another mixture. To obtain formulas for the pdf of this distribution, then, you have to find pdfs for which $\phi_F(t)\phi_H(t)$ and $\phi_G(t)\phi_H(t)$ are their cfs. Sometimes this can be done by inspection, as we did previously; sometimes it can be done using formulas to compute pdfs from cf using inverse Fourier transforms; and sometimes it just cannot be done in any nice closed analytic form at all. | How to compute the sum of a mixture distribution with another distribution?
Let's start with notation. When you need to refer to different things, you have to give them different names. To that end let $f$ refer to the first PDF and $g$ the second.
Second, let's focus on th |
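A numerical sanity check of the conclusion (a sketch with arbitrary parameter values): the sum of the mixture and the independent Normal should behave like the stated mixture of Normals, and in particular its variance should be the mixture-weighted sum of the component variances:

import numpy as np

rng = np.random.default_rng(0)
eps, sb, sl, sv, n = 0.3, 1.0, 3.0, 2.0, 1_000_000

# Draw from the eps-mixture of N(0, sb^2) and N(0, sl^2), then add N(0, sv^2).
comp = rng.random(n) < eps
x = np.where(comp, rng.normal(0, sl, n), rng.normal(0, sb, n)) + rng.normal(0, sv, n)

var_theory = (1 - eps) * (sv**2 + sb**2) + eps * (sv**2 + sl**2)
print(x.var(), var_theory)   # the two agree up to Monte Carlo error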
54,394 | Generalized linear models and central limit theorem | In short: CLT alone isn't sufficient; $n$ isn't always near enough to infinity; and the shape of the distribution of the sample mean isn't the only consideration
If you're using a hypothesis test in your comparison of treatment means in ANOVA, you rely on the distribution of a ratio of two quantities having an F-distribution. You need the numerator and denominator to both be scaled chi-square, and you need them to be independent. If you don't have normality, this won't be the case, and the Central limit theorem on its own doesn't get you there; it's a theorem about what happens in the limit as $n\to\infty$
GLMs and ANOVA are not always carried out at sample sizes for which you could simply assert normality of quantities like sample means (small sample sizes can be a problem with inference in GLMs for a similar reason -- the usual inference relies on other asymptotic results).
GLMs not only deal with non-normality, there are issues like heteroskedasticity, and not all uses of GLMs are direct comparisons of treatment means, so there's often need to deal with nonlinearity as well (which GLMs do via the link function). | Generalized linear models and central limit theorem | In short: CLT alone isn't sufficient; $n$ isn't always near enough to infinity; and the shape of the distribution of the sample mean isn't the only consideration
If you're using a hypothesis test in | Generalized linear models and central limit theorem
In short: CLT alone isn't sufficient; $n$ isn't always near enough to infinity; and the shape of the distribution of the sample mean isn't the only consideration
If you're using a hypothesis test in your comparison of treatment means in ANOVA, you rely on the distribution of a ratio of two quantities having an F-distribution. You need the numerator and denominator to both be scaled chi-square, and you need them to be independent. If you don't have normality, this won't be the case, and the Central limit theorem on its own doesn't get you there; it's a theorem about what happens in the limit as $n\to\infty$
GLMs and ANOVA are not always carried out at sample sizes for which you could simply assert normality of quantities like sample means (small sample sizes can be a problem with inference in GLMs for a similar reason -- the usual inference relies on other asymptotic results).
GLMs not only deal with non-normality, there are issues like heteroskedasticity, and not all uses of GLMs are direct comparisons of treatment means, so there's often need to deal with nonlinearity as well (which GLMs do via the link function). | Generalized linear models and central limit theorem
In short: CLT alone isn't sufficient; $n$ isn't always near enough to infinity; and the shape of the distribution of the sample mean isn't the only consideration
If you're using a hypothesis test in |
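The small-sample point can be illustrated by simulating the size of the one-way ANOVA F-test under a strongly skewed error distribution; this sketch (not from the original answer) uses tiny lognormal groups, where the rejection rate under the null need not be close to the nominal 5%:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reps, k, n = 20_000, 3, 5
rejections = 0
for _ in range(reps):
    groups = [rng.lognormal(size=n) for _ in range(k)]   # null: identical groups
    _, p = stats.f_oneway(*groups)
    rejections += p < 0.05
print(rejections / reps)   # compare with the nominal 0.05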
54,395 | Proving Bayesian Network must be acyclic | You're not going to find any contradictions by creating a cyclic graph. It's not that Bayes nets (or as I've heard them called, conditional independence networks since they really don't have anything to do with Bayesianism besides conditional independence rules) "have to be acyclic". We assume them to be acyclic to get certain properties and simplify calculation of probabilities. In fact, if we relax the acyclic restriction as well as the directed restriction we get a more general model called a Markov network. | Proving Bayesian Network must be acyclic | You're not going to find any contradictions by creating a cyclic graph. It's not that Bayes nets (or as I've heard them called, conditional independence networks since they really don't have anything | Proving Bayesian Network must be acyclic
You're not going to find any contradictions by creating a cyclic graph. It's not that Bayes nets (or as I've heard them called, conditional independence networks since they really don't have anything to do with Bayesianism besides conditional independence rules) "have to be acyclic". We assume them to be acyclic to get certain properties and simplify calculation of probabilities. In fact, if we relax the acyclic restriction as well as the directed restriction we get a more general model called a Markov network. | Proving Bayesian Network must be acyclic
You're not going to find any contradictions by creating a cyclic graph. It's not that Bayes nets (or as I've heard them called, conditional independence networks since they really don't have anything |
54,396 | Proving Bayesian Network must be acyclic | A Bayesian Network can be viewed as a data structure that provides the skeleton for representing a joint distribution compactly in a factorized way. For any valid joint distribution two restrictions should be satisfied:
1) All probabilities in the distribution should be non negative;
2) All the probabilities should sum to one.
Normally a graph is determined by the ordering of the factorization and the conditional independencies assumed in the independence structure of the distribution. Since we are dealing with an invalid Bayes Network we don't need to follow the procedure, and we can design many counterexamples to verify that the sum of the distribution would not be 1. For instance, assume that we have such a graph over three binary variables:
And we can obtain the joint distribution easily: of all $2^3$ configurations, at least two have probability 1:
$P(a_0, b_0, c_0) = P(a_0|c_0)P(b_0|a_0)P(c_0|b_0)=1$
$P(a_1, b_1, c_1) = P(a_1|c_1)P(b_1|a_1)P(c_1|b_1)=1$
Then $\sum_{A,B,C} P(A, B, C)\ge2>1$, so the distribution is invalid (indicating the cyclic Bayes Network is invalid).
If we remove any one of the directions in the graph and adjust the parameters in the CPTs accordingly we can find that the joint distribution will be valid. | Proving Bayesian Network must be acyclic | A Bayesian Network can be viewed as a data structure that provides the skeleton for representing a joint distribution compactly in a factorized way. For any valid joint distribution two restrictions s | Proving Bayesian Network must be acyclic
A Bayesian Network can be viewed as a data structure that provides the skeleton for representing a joint distribution compactly in a factorized way. For any valid joint distribution two restrictions should be satisfied:
1) All probabilities in the distribution should be non negative;
2) All the probabilities should sum to one.
Normally a graph is determined by the ordering of the factorization and the conditional independencies assumed in the independence structure of the distribution. Since we are dealing with an invalid Bayes Network we don't need to follow the procedure, and we can design many counterexamples to verify that the sum of the distribution would not be 1. For instance, assume that we have such a graph over three binary variables:
And we can obtain the joint distribution easily: of all $2^3$ configurations, at least two have probability 1:
$P(a_0, b_0, c_0) = P(a_0|c_0)P(b_0|a_0)P(c_0|b_0)=1$
$P(a_1, b_1, c_1) = P(a_1|c_1)P(b_1|a_1)P(c_1|b_1)=1$
Then $\sum_{A,B,C} P(A, B, C)\ge2>1$, so the distribution is invalid (indicating the cyclic Bayes Network is invalid).
If we remove any one of the directions in the graph and adjust the parameters in the CPTs accordingly we can find that the joint distribution will be valid. | Proving Bayesian Network must be acyclic
A Bayesian Network can be viewed as a data structure that provides the skeleton for representing a joint distribution compactly in a factorized way. For any valid joint distribution two restrictions s |
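The counterexample can be verified by brute-force enumeration; this sketch uses deterministic CPTs on the cycle A -> B -> C -> A, in the spirit of the tables described above:

import itertools

def p(child, parent):
    # P(child = c | parent = a): each child deterministically copies its parent.
    return 1.0 if child == parent else 0.0

total = 0.0
for a, b, c in itertools.product([0, 1], repeat=3):
    total += p(a, c) * p(b, a) * p(c, b)   # product of the cyclic factors
print(total)   # 2.0, not 1.0: the cyclic factorization is not a distribution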
54,397 | Proving Bayesian Network must be acyclic | I am afraid that Nick's answer might be incomplete.
BNs must be acyclic in order to guarantee that their underlying probability distribution is normalized to 1. It is quite easy to prove that this is the case, by starting at a vertex with no parents (which must exist, otherwise the graph would contain a cycle) and marginalizing it out, then repeating the procedure until all vertices have been accounted for.
This is no longer guaranteed to be the case if the graph has a cycle and a counterexample is also readily found. Consider the cyclic graph $A \to B \to C \to A$ where the value of each parent fully determines the value of its child, e.g., $p(B=x|A=x)=1$. If we now sum the joint distribution over all possible states $(A,B,C)$, then all states of type $(x,x,x)$ have joint probability 1, and all other states have probability 0. Clearly, the sum over multiple "ones" is larger than one.
Of course this argument only holds for stationary distributions, which I assume was the premise of the OP's question. | Proving Bayesian Network must be acyclic | I am afraid that Nick's answer might be incomplete.
BNs must be acyclic in order to guarantee that their underlying probability distribution is normalized to 1. It is quite easy to prove that this is | Proving Bayesian Network must be acyclic
I am afraid that Nick's answer might be incomplete.
BNs must be acyclic in order to guarantee that their underlying probability distribution is normalized to 1. It is quite easy to prove that this is the case, by starting at a vertex with no parents (which must exist, otherwise the graph would contain a cycle) and marginalizing it out, then repeating the procedure until all vertices have been accounted for.
This is no longer guaranteed to be the case if the graph has a cycle and a counterexample is also readily found. Consider the cyclic graph $A \to B \to C \to A$ where the value of each parent fully determines the value of its child, e.g., $p(B=x|A=x)=1$. If we now sum the joint distribution over all possible states $(A,B,C)$, then all states of type $(x,x,x)$ have joint probability 1, and all other states have probability 0. Clearly, the sum over multiple "ones" is larger than one.
Of course this argument only holds for stationary distributions, which I assume was the premise of the OP's question. | Proving Bayesian Network must be acyclic
I am afraid that Nick's answer might be incomplete.
BNs must be acyclic in order to guarantee that their underlying probability distribution is normalized to 1. It is quite easy to prove that this is |
54,398 | Proving Bayesian Network must be acyclic | Bayesian Network is defined to be a DAG (Directed Acyclic Graph), you cannot prove a definition. Take a look at this explanation. | Proving Bayesian Network must be acyclic | Bayesian Network is defined to be a DAG (Directed Acyclic Graph), you cannot prove a definition. Take a look at this explanation. | Proving Bayesian Network must be acyclic
Bayesian Network is defined to be a DAG (Directed Acyclic Graph), you cannot prove a definition. Take a look at this explanation. | Proving Bayesian Network must be acyclic
Bayesian Network is defined to be a DAG (Directed Acyclic Graph), you cannot prove a definition. Take a look at this explanation. |
54,399 | Proving Bayesian Network must be acyclic | Cycles are redundant, that's all. And it is pretty easy to show if you think about the transitivity.
A -> B -> C -> A implies A -> B and B -> A
So the directionality is immediately lost (for every pair in the loop).
A -> B -> C -> A is also deterministic: the product of the multipliers around the loop is one.
For example:
Say B = (1/3)A + e1 and C = (1/2)B + e2; closing the loop with A = 6C + e3 and substituting around the cycle gives A = A + 3*e1 + 6*e2 + e3, which forces the deterministic constraint 0 = 3*e1 + 6*e2 + e3.
So cycles like this may be solved deterministically by simultaneous equations, and all nodes are basically transformations of the same distribution. You might as well collapse the cycle to one node only, because there is no more information in the three of them than there is in any one.
For a formal proof like this it would be nice to replace the constant multipliers with functions, then using something like this simplest of cycles as the basis for an induction argument, but you get the idea. | Proving Bayesian Network must be acyclic | Cycles are redundant, that's all. And it is pretty easy to show if you think about the transitivity.
A -> B -> C -> A implies A -> B and B -> A
So the directionality is immediately lost (for every p | Proving Bayesian Network must be acyclic
Cycles are redundant, that's all. And it is pretty easy to show if you think about the transitivity.
A -> B -> C -> A implies A -> B and B -> A
So the directionality is immediately lost (for every pair in the loop).
A -> B -> C -> A is also deterministic: the product of the multipliers around the loop is one.
For example:
Say B = (1/3)A + e1 and C = (1/2)B + e2; closing the loop with A = 6C + e3 and substituting around the cycle gives A = A + 3*e1 + 6*e2 + e3, which forces the deterministic constraint 0 = 3*e1 + 6*e2 + e3.
So cycles like this may be solved deterministically by simultaneous equations, and all nodes are basically transformations of the same distribution. You might as well collapse the cycle to one node only, because there is no more information in the three of them than there is in any one.
For a formal proof like this it would be nice to replace the constant multipliers with functions, then using something like this simplest of cycles as the basis for an induction argument, but you get the idea. | Proving Bayesian Network must be acyclic
Cycles are redundant, that's all. And it is pretty easy to show if you think about the transitivity.
A -> B -> C -> A implies A -> B and B -> A
So the directionality is immediately lost (for every p |
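The consistency constraint in the example can be checked symbolically; a sketch with sympy:

import sympy as sp

A, e1, e2, e3 = sp.symbols('A e1 e2 e3')
B = A / 3 + e1          # B = (1/3)A + e1
C = B / 2 + e2          # C = (1/2)B + e2

# Close the loop with A = 6*C + e3 and see what consistency forces:
constraint = sp.expand(6 * C + e3 - A)
print(constraint)       # 3*e1 + 6*e2 + e3, which must equal zero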
54,400 | Does triple interaction need to include all main effect variables? | It's awkward to try to give direct, literal answers to "should I?" and "do I have to?" questions on this site. It's preferable to talk about what the consequences of a certain decision are likely to be.
If you include a second-order, AxBxD interaction term without the first-order BxD term, you are liable to mistake a BxD effect for an AxBxD effect. After all, how would you be able to distinguish the two? (I'm using "effect" loosely to mean a statistical connection rather than a true effect of a cause.)
A first-order interaction necessitates that the connection between one predictor and Y is different depending on the level of a second predictor. Similarly, a second-order interaction necessitates that the first-order interaction pattern and associated coefficient (whether zero or non-zero) is itself different depending on the level of a third predictor. In order to test for this latter difference, one certainly needs to know just what that first-order interaction coefficient is.
Interactions will be to some degree collinear with their component main effects, and higher-order interactions, with their component lower-order interactions. Thus the different terms will compete for shared variance and will "interfere" with the statistical significance of the others. The usual, and my recommended, practice is to include only those interactions that show significant and/or substantial effects, whatever your criteria might be, and to ignore determinations of non-significance for all but the highest-order interactions included in any given iteration of model-building. | Does triple interaction need to include all main effect variables? | It's awkward to try to give direct, literal answers to "should I?" and "do I have to?" questions on this site. It's preferable to talk about what the consequences of a certain decision are likely to | Does triple interaction need to include all main effect variables?
It's awkward to try to give direct, literal answers to "should I?" and "do I have to?" questions on this site. It's preferable to talk about what the consequences of a certain decision are likely to be.
If you include a second-order, AxBxD interaction term without the first-order BxD term, you are liable to mistake a BxD effect for an AxBxD effect. After all, how would you be able to distinguish the two? (I'm using "effect" loosely to mean a statistical connection rather than a true effect of a cause.)
A first-order interaction necessitates that the connection between one predictor and Y is different depending on the level of a second predictor. Similarly, a second-order interaction necessitates that the first-order interaction pattern and associated coefficient (whether zero or non-zero) is itself different depending on the level of a third predictor. In order to test for this latter difference, one certainly needs to know just what that first-order interaction coefficient is.
Interactions will be to some degree collinear with their component main effects, and higher-order interactions, with their component lower-order interactions. Thus the different terms will compete for shared variance and will "interfere" with the statistical significance of the others. The usual, and my recommended, practice is to include only those interactions that show significant and/or substantial effects, whatever your criteria might be, and to ignore determinations of non-significance for all but the highest-order interactions included in any given iteration of model-building. | Does triple interaction need to include all main effect variables?
It's awkward to try to give direct, literal answers to "should I?" and "do I have to?" questions on this site. It's preferable to talk about what the consequences of a certain decision are likely to |
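To see how the full hierarchy of terms enters a model, here is a small sketch with statsmodels' formula interface (the data are simulated purely for illustration):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=['A', 'B', 'D'])
df['y'] = df.A * df.B * df.D + rng.normal(size=200)

# A*B*D expands to all main effects, all two-way interactions, and A:B:D,
# so the three-way coefficient is assessed net of every lower-order term.
full = smf.ols('y ~ A * B * D', data=df).fit()
print(full.params.index.tolist())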