Dataset columns:
idx: int64 (1 to 56k)
question: string (lengths 15 to 155)
answer: string (lengths 2 to 29.2k)
question_cut: string (lengths 15 to 100)
answer_cut: string (lengths 2 to 200)
conversation: string (lengths 47 to 29.3k)
conversation_cut: string (lengths 47 to 301)
50,601
creating multiple categorical variable with specified degree of association (correlation) matrix [duplicate]
If the variables are just dichotomous, you can treat them as binomial, which makes the job easier. The package bindata can simulate a multivariate binary distribution with a specified correlation structure. Here is a small example adapted from the manual:

amat <- cbind(c(1/2,1/5,1/6), c(1/5,1/2,1/6), c(1/6,1/6,1/2))
require(bindata)
out <- rmvbin(n=100, commonprob=amat)   # n is the number of samples
# you can replace 0 and 1 with text labels
out[out==1] <- "A"
out[out==0] <- "P"
require(psych)
phi(table(out[,1], out[,2]))

The underlying principle and method are discussed in detail in this paper [link to pdf]. Also, for simulation of correlated ordinal data there is another package called ordata; the underlying method is discussed in this paper. I know you might want more, but this is what I have, considering there is no answer here so far.
50,602
How would you model this random effects structure?
This is very similar to a classic split-plot design. Forest type is a fixed effect with four levels. It is a bit of a stretch to think of forest type being randomly assigned to sites; instead, sites are a random effect nested in forest type. Treatment is a fixed effect with three levels. Plots are a random effect nested in sites (nested in forest type). Plots are uniquely identified in this case by site-by-treatment combinations. Formally, the site-by-treatment interaction would be a random effect also; in this case, it is confounded with plot-to-plot variability and error variance. The test for forest type would use the site term. The estimate for site would include the site-to-site variance $\sigma^2_{\textrm{site}}$ as well as any variability introduced in preparing each site. The tests for treatment and for the treatment-by-forest-type interaction would use the site-by-treatment term. The estimate for the site-by-treatment term would include plot-to-plot variance $\sigma^2_{\textrm{plot}}$, site-by-treatment variance $\sigma^2_{\textrm{site}\times\textrm{treatment}}$, and error variance $\sigma^2_{\textrm{error}}$. I don't have a good way yet to get correct expected mean squares and degrees of freedom for these sorts of statistical analyses in general; for this design it may be feasible using packages for calculating stratified error variances. My take is that forest.type should be tested with 3 and 96 degrees of freedom, treatment with 3 and 288 degrees of freedom, and forest.type:treatment with 9 and 288 degrees of freedom. It looks like nlme would produce the correct analysis for the fixed effects:

lme(seedling ~ forest.type * treatment, random = ~ 1 | forest.type/site)
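A minimal lme4 sketch of the same structure, for comparison. This is an illustration, not the original poster's code: it assumes a data frame dat with columns seedling, forest.type, treatment, and site, with several seedling observations per plot, and site labels that may repeat across forest types (hence the forest.type:site coding).

library(lme4)
m <- lmer(seedling ~ forest.type * treatment +
            (1 | forest.type:site) +              # whole-plot (site) stratum
            (1 | forest.type:site:treatment),     # split-plot (plot) stratum
          data = dat)
anova(m)   # F statistics for the fixed effects; denominator df need e.g. lmerTest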
50,603
Estimating total variation distance from a given distribution
This reference demonstrates how to test that the distributions $p$ and $q$ have $||p-q||_{TV}\leq \max\left(\frac{\epsilon^2}{32\sqrt[3]{n}},\frac{\epsilon}{4\sqrt{n}}\right)$ with probability at least $1-\delta$ for your choices of $\epsilon$ and $\delta$, using $O(n^{2/3}\epsilon^{-4}\log n)$ samples. In your case you already know one of the distributions, which in general means you can do better, even if you simply execute their algorithm and generate the samples from $p$ yourself using your knowledge of it. The approach in that paper is to look at how many collisions (the event of sampling the same value twice) occur between $p$ and $q$, and compare that to the number that happen between $p$ and itself, as well as $q$ and itself. Given that you know $p$, you could supply some of the collision rates directly from your knowledge of $p$. HTH
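As a rough illustration of the collision idea (a generic collision-based estimator of the squared $L_2$ distance, not the specific tester from the paper), assuming a known discrete $p$ and i.i.d. samples from $q$:

estimate_l2_sq <- function(p, x) {
  # p: named vector of probabilities for the known distribution
  # x: character vector of samples from the unknown q, coded with the names of p
  n <- length(x)
  counts <- table(factor(x, levels = names(p)))
  sum_q2 <- sum(counts * (counts - 1)) / (n * (n - 1))  # unbiased collision estimate of sum(q^2)
  sum_pq <- mean(p[x])                                  # estimate of sum(p*q)
  sum(p^2) + sum_q2 - 2 * sum_pq                        # ||p - q||_2^2
}

p <- c(a = 0.4, b = 0.3, c = 0.2, d = 0.1)              # hypothetical known p
x <- sample(names(p), 5000, replace = TRUE,
            prob = c(0.35, 0.35, 0.20, 0.10))           # samples from some q
estimate_l2_sq(p, x)

Note that this only bounds TV loosely: $\|p-q\|_{TV} = \tfrac12\|p-q\|_1 \geq \tfrac12\|p-q\|_2$, which is why the paper's tester works with more care than this sketch.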
50,604
Independence of a linear and a quadratic form
Use Craig's Theorem. Consider the quadratic form in $b$ (the square of the linear form). If two random variables are independent, then any univariate functions of those random variables are likewise independent. The quadratic forms are independent; ergo the linear form in $b$ and the quadratic form in $A$ are likewise independent.
50,605
Independence of a linear and a quadratic form
Starting with the univariate case $X=X_1$, we find the correlation: $\rho(bX,AX^2)=bA\rho(X,X^2)=bA\dfrac{\mathrm{Cov}(X,X^2)}{ \sigma_X \sigma_{X^2}} =bA\dfrac{E[(X-\mu_X)(X^2-\mu_{X^2})]}{ \sigma_X\sigma_{X^2}}$ with $\mu_X=0$ and $\sigma_X=\sigma$; for the expectation we know the distributions of $X$ and $X^2$ (normal and chi-square). We see that $bA\neq 0$ implies $\rho\neq0$, so they are not independent in this case. Now look at the case $bA=0$, which means $b=A=0$, $(b=0,A\neq0)$ or $(b\neq0,A=0)$: in the first case we have $bX=0=AX^2$, and two constants are trivially independent. For the other two cases, a constant ($0$) and any random variable are likewise independent. So $(bX,AX^2)$ are independent only in the case $bA=0$; otherwise we have $\rho\neq0$. This univariate case is not directly extendable to the multivariate case, though, because $X'AX\neq AX'X$. As a shortcut in general, if you have some transformation $Y=T(X)$, it is directly dependent on $X$ (not independent) unless one of them is a constant, which here requires $b'A=0$.
50,606
Why is it necessary to use ML estimation instead of REML to compare multilevel linear models?
(RE)ML estimation is an iterative process. ML estimates the variances as if the fixed parameters were known, so it does not account for the degrees of freedom lost in estimating them; REML adjusts for the uncertainty about the fixed parameters. So you generally cannot use REML to compare models, because any difference in the fixed part (parameters and contrasts) invalidates the comparison. However, you can use REML to compare models if their fixed parts are exactly the same.
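A minimal illustration of this rule in practice, assuming lme4 and its built-in sleepstudy data: nested models that differ in their fixed effects are fitted with ML (REML = FALSE) before a likelihood-ratio comparison.

library(lme4)
m1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = FALSE)
m0 <- lmer(Reaction ~ 1    + (1 | Subject), sleepstudy, REML = FALSE)
anova(m0, m1)   # likelihood-ratio test for the fixed effect of Days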
50,607
Meaning of Qqnorm plot in R
The x-coordinate of each point is the value that point would have if it were drawn from the standard normal distribution (preserving its current quantile). That is to say, if it is currently at the median of the sample, its corresponding value in the standard normal distribution would be 0 (the 2.5th percentile would be at -1.96, and so on). The units are standard normal quantiles; because the standard normal distribution has standard deviation 1, the axis happens to coincide with units of standard deviations.
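A quick way to see this in R (a sketch with an arbitrary non-normal sample): qqnorm() can return its coordinates without plotting, and the theoretical x-values behave exactly as described.

set.seed(1)
x <- rexp(200)                      # any sample
q <- qqnorm(x, plot.it = FALSE)     # q$x holds the theoretical (standard normal) quantiles
median(q$x)                         # close to 0: x-coordinate of the sample median
qnorm(0.025)                        # -1.96: x-coordinate of the 2.5th percentile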
50,608
Is there an equivalent to Lower bound of Wilson score confidence interval for variables with more outcome
It's easy to think of the following 'workaround', which adapts a multi-level rating system to the 'upvote/downvote' solution discussed in the linked article. Say you have the popular 5-star rating system, so each vote has a value of 1, 2, 3, 4 or 5. To 'convert' these ratings to up/down votes, use the following rule:

* (1 star): add 0.00 to up votes and 1.00 to down votes (i.e. a full down vote)
** (2 stars): add 0.25 to up votes and 0.75 to down votes
*** (3 stars): add 0.50 to up votes and 0.50 to down votes
**** (4 stars): add 0.75 to up votes and 0.25 to down votes
***** (5 stars): add 1.00 to up votes and 0.00 to down votes (i.e. a full up vote)

After we reduce the 5-star ratings to up/down ratings, we can proceed with the usual score calculations described in Evan Miller's article. As I am not a statistician or mathematician, I would love to hear from other people whether this makes sense or not.
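A sketch of the full recipe with hypothetical ratings: convert stars to fractional up-votes as above, then rank by the Wilson score lower bound (the standard 95% formula).

wilson_lower <- function(up, total, z = 1.96) {
  phat <- up / total
  (phat + z^2 / (2 * total) -
     z * sqrt((phat * (1 - phat) + z^2 / (4 * total)) / total)) / (1 + z^2 / total)
}

stars <- c(5, 4, 4, 3, 5, 2, 5)       # hypothetical ratings
up    <- sum((stars - 1) / 4)         # fractional up-votes: 1 star -> 0, 5 stars -> 1
wilson_lower(up, length(stars))       # lower bound used for ranking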
50,609
Setting up a naive tensor product B-spline example
I know this is an old question, but it's a good one, and I thought about it recently myself. You select the rows of the Kronecker product matrix in which the row indices of each $B_x$ and $B_z$ element are the same. For example, each element of $B_x \otimes B_z$ is (I am going to use $x$ to mean an element of the $B_x$ design matrix and do the same for $z$ because I can't figure out how to do subscripts-on-subscripts) $x_{ij}z_{rs}$, where $i=1,\dots,n$ (rows of $B_x$), $j=1,\dots,q$ (columns of $B_x$), $r=1,\dots,n$ (rows of $B_z$), and $s=1,\dots,p$ (columns of $B_z$). So you only select the rows where $i=r$. The result is an $n$-by-$qp$ matrix, which is what you want, because your tensor product basis has $n$ observations and a basis dimension of $qp$. I recommend looking at the full Kronecker product expansion in the above Wikipedia link, as it will make clear the order in which the row/column indices iterate as you move along the rows/columns of the matrix. The rationale here is that each row of the design matrices $B_x$ and $B_z$ corresponds to the same actual observation, so the only values of the Kronecker product matrix that have any physical meaning are those in which the row values of each constituent matrix are the same.
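A small sketch of the row-wise construction, using random matrices as stand-ins for the two marginal B-spline bases (the actual bases would come from your spline setup):

n <- 10; q <- 4; p <- 3
Bx <- matrix(rnorm(n * q), n, q)    # stand-in for the B_x marginal basis (n x q)
Bz <- matrix(rnorm(n * p), n, p)    # stand-in for the B_z marginal basis (n x p)

# Row i of the tensor-product basis is the Kronecker product of row i of Bx with
# row i of Bz, i.e. exactly the rows of Bx %x% Bz whose two row indices coincide.
B_tensor <- t(sapply(seq_len(n), function(i) kronecker(Bx[i, ], Bz[i, ])))
dim(B_tensor)   # n x (q * p)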
50,610
Similarity measures for point processes
I think the most direct way to determine if your measurements are similar is to compute the wait time distribution for each measurement. The wait time is just the time elapsed between events, where an event is, presumably, signal equals $1$. This will give you a series of wait times. You can then plot these as a distribution (a histogram) to get frequencies. What do you do with these? If your question is solely, "are these measurements of a point process similar", the wait time distributions are one thing you can examine. If the measurements are the same, their wait time distributions should be the same. Obviously if each measurement is very short then you won't get nice wait time distributions and it will be harder to determine if they are the same. With enough events though, the wait time distributions should converge on each other (if the measurements are equivalent). The nature of the wait time distribution is itself very informative about the process you're studying though, and some analysis will be required if the wait time distributions are not the same. If the point process is "memory-less", i.e., seeing a $1$ has no influence on when you see the next $1$, then the wait time distribution is expected to be exponentially distributed. This is called a Poisson process and is the simplest possible process you could be observing. As user11852 was implying, a Poisson process has only one parameter $\lambda$ which tracks the average rate of the event. If the wait time distribution is in fact exponential, and the mean wait time is $4$, then $\lambda=1/4$. So if your point process measurements all have exponentially distributed wait times, you can ask how similar their $\lambda$ parameters are.
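A sketch of the wait-time calculation on a simulated 0/1 signal (here a discrete-time Bernoulli process, whose geometric waits approximate the exponential waits of a Poisson process):

set.seed(42)
signal <- rbinom(5000, 1, 0.05)          # hypothetical 0/1 measurement of a point process
waits  <- diff(which(signal == 1))       # wait times between successive events
hist(waits, breaks = 30, freq = FALSE)   # empirical wait-time distribution
lambda_hat <- 1 / mean(waits)            # rate estimate if the waits are exponential
curve(dexp(x, rate = lambda_hat), add = TRUE)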
50,611
Similarity measures for point processes
I would try coherence. Another idea is to apply Haar wavelet analysis and compare the measurements in frequency space.
50,612
Why is Sampling Importance Resampling (SIR) better than Importance Sampling (IS)?
The main point is that, most of the time, SIS/SIR is used in a sequential setting, and this is why one needs to reallocate particles to best deal with the next time step's integration. When performing IS, it is quite common that almost all of the weight is attributed to only a very small subset of the particles, even though it is these particles that characterise the area of interest. Resampling reallocates particles from low-density regions to high-density regions, thus making better use of the available particles (because the more particles, the more costly the procedure). As I understand it, $S_{\textrm{small}}$ is not smaller than $S$: they typically have the same size.
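A minimal sketch of the resampling step being described (multinomial resampling; particles here are just a numeric vector of values):

resample <- function(particles, weights) {
  w   <- weights / sum(weights)                 # normalise the importance weights
  idx <- sample(seq_along(particles), size = length(particles),
                replace = TRUE, prob = w)       # multinomial resampling
  particles[idx]                                # same number of particles, now equally weighted
}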
50,613
lme4: glmer problems with offset()
I don't think you want to add sampling_effort as an offset at all. You are estimating the logit of marked over unmarked catch probabilities - call that quantity $\eta$ - and you think, quite reasonably, that the quantity $N=$ marked + unmarked will be a function of how long the traps remain undamaged, i.e. a function of sampling_effort. However, it's not obvious why that relationship is relevant; your binomial model already treats $N$ as known and conditions on it. The circumstance I can imagine where it would be relevant is when $\eta$ really depends on sampling_effort. Two examples would be a) if the rate of unmarked captures were constant but the rate of marked captures were increasing because of some feature of the experiment, such as marked animals being expected to take a random walk from a common origin, or b) if capturing one kind of animal made it more or less likely to capture the other type. But even in these cases sampling_effort would be a regular covariate and would not have its coefficient fixed at 1, as an offset would. Here's some possibly relevant discussion of this question elsewhere on the site.
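To make the distinction concrete, a sketch with a hypothetical data frame d containing columns marked, unmarked, sampling_effort, and site (not the original poster's data): the offset fixes the coefficient at 1, while the covariate version estimates it.

library(lme4)
# offset(): coefficient of log(sampling_effort) fixed at 1
m_offset <- glmer(cbind(marked, unmarked) ~ offset(log(sampling_effort)) + (1 | site),
                  data = d, family = binomial)
# covariate: the data estimate the effect of sampling_effort on the logit
m_covar  <- glmer(cbind(marked, unmarked) ~ log(sampling_effort) + (1 | site),
                  data = d, family = binomial)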
50,614
Could you explain how gradient boosting algorithm works?
For an implementation, check out https://github.com/2pc/libgbdt.git. For algorithm detail there is another graphical depiction, explained better in http://www.lifesciencessociety.org/CSB2006/toc/PDF/43.2006.pdf (the accompanying figures, including the protein-folding example, are not reproduced here).
50,615
Could you explain how gradient boosting algorithm works?
From the FAQ in the appendix of an article I wrote with Jeremy Howard, called How to explain gradient boosting: "Instead of creating a single powerful model, boosting combines multiple simple models into a single composite model. The idea is that, as we introduce more and more simple models, the overall model becomes stronger and stronger. In boosting terminology, the simple models are called weak models or weak learners. To improve its predictions, gradient boosting looks at the difference between its current approximation, yhat, and the known correct target vector, y, which is called the residual, y - yhat. It then trains a weak model that maps feature vector x to that residual vector. Adding a residual predicted by a weak model to an existing model's approximation nudges the model towards the correct target. Adding lots of these nudges improves the overall model's approximation." We put in a number of interesting visualizations that I think will help; for example, here's one of them (figure not reproduced here). For the algorithm itself, you can take a look at our discussion of the general algorithm, but that refers to notation that you might need to look back in the article for. Regardless, here is our version of the algorithm that assumes regression trees rather than any other kind of weak model (figure not reproduced here). That regression-tree assumption dramatically simplifies the mathematics and, besides, it's what everybody uses in practice.
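To make the "nudge by residuals" idea concrete, here is a toy sketch (not the article's algorithm): gradient boosting for squared-error loss with depth-1 regression trees (stumps) from rpart as the weak learners, on simulated data.

library(rpart)
set.seed(1)
x <- runif(200, 0, 10)
y <- sin(x) + rnorm(200, sd = 0.3)
d <- data.frame(x = x)

M <- 100; eta <- 0.1
yhat <- rep(mean(y), length(y))                 # initial model: just the mean
for (m in seq_len(M)) {
  res   <- y - yhat                             # residuals = negative gradient for squared error
  stump <- rpart(res ~ x, data = data.frame(x = x, res = res),
                 control = rpart.control(maxdepth = 1))
  yhat  <- yhat + eta * predict(stump, d)       # nudge the composite model toward y
}
plot(x, y); points(x, yhat, col = "red")        # composite fit after M nudges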
50,616
Spline – basis functions
This looks like a truncated power basis. The answer is (b), although $h_5(X)$ will only be non-zero if $X$ is greater than $\xi_1$, and similarly for $h_6(X)$ and $\xi_2$.
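A small sketch of what such truncated basis functions look like, assuming cubic truncated power terms $(X-\xi)_+^3$ and hypothetical knot locations (the exact basis from the exercise is not shown here):

xi1 <- 0.3; xi2 <- 0.7                   # hypothetical knots
h5 <- function(x) pmax(x - xi1, 0)^3     # zero for x <= xi1
h6 <- function(x) pmax(x - xi2, 0)^3     # zero for x <= xi2
curve(h5(x), 0, 1)
curve(h6(x), add = TRUE, lty = 2)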
50,617
Is the p-value equivalent to the false alarm value in the Bayesian rule?
They're related in that selecting the critical value for p amounts to selecting a false alarm rate, under the assumption that the null hypothesis is true. The difference between the two is what is observed and known prior to the test. It is conventional to denote the probability of making a type I error, i.e. a false positive, as $\alpha$; in the example you linked above, $\alpha$ is known from observations of previous tests. In other words, under the (null) hypothesis that a patient doesn't have cancer, $\alpha = 0.2$. In that example, the prior probability of a positive condition is also known, and is what ultimately allows for computing the odds of cancer. In a traditional significance test, the idea is that the experimenter sets a critical value for p at $\alpha$, such that, when the null hypothesis is true, they report a false finding only once in every $\alpha^{-1}$ tests. Because experimenters wish to avoid reporting false positives, they set their own false alarm rate under the assumption that their test hypothesis is incorrect (i.e. that the null holds). You can consider a test result of $p < \alpha$ equivalent to the positive test result D in the example.
50,618
Demonstrate difference in growth over time
1) Log transform. I think a log would work (motivation: the error scales with the size of the measurement, and your error bars might otherwise hit negative values, i.e. a normal distribution is not right and is less powerful):

m <- lm(y ~ factor(Tx), data=df1)         # p ~ 0.05
m <- lm(log(y) ~ factor(Tx), data=df1)    # p ~ 0.015

and equally for

m <- lm(y ~ factor(Day) + factor(Tx), data=df1)        # p ~ 0.05
m <- lm(log(y) ~ factor(Day) + factor(Tx), data=df1)   # p ~ 0.015

2) Also, more data. If the distribution is this wide, you cannot get by with just a few measurements; it is like a law of physical impossibility.

3) Another option: use a (realistic, i.e. not linear but exponential) model that takes all the data into account and can reduce measurement error (you will still have variation of the parameters across experimental entities). For instance, using

df1 <- data.frame(Day = rep(rep(0:4, each=3), 2),
                  Tx  = rep(c(1,2), each=15),
                  y   = c(rep(16e3, 3), 32e3, 56e3, 6e3, 36e3, 14e3, 24e3,
                          90e3, 22e3, 18e3, 246e3, 38e3, 82e3,
                          rep(16e3, 3), 16e3, 34e3, 16e3, 20e3, 20e3, 24e3,
                          4e3, 12e3, 16e3, 20e3, 5e3, 12e3),
                  id  = c(rep(c(1,2,3), 5), rep(c(4,5,6), 5)))

m <- nls(log(y) ~ log(a) + b[id]*Day,
         start = list(a = 16747, b = rep(0.4, 6)),
         data = df1)

you obtain quite different coefficients b:

        b1          b2          b3          b4          b5          b6
0.59905318  0.16483349  0.20879753 -0.10920228 -0.15905458 -0.02652816

(which may be improved if the quality of the data allowed more sophisticated growth models, for instance including a lag phase).

4) A rank test may be more robust if you only have three data points for each class.
50,619
Demonstrate difference in growth over time
OK, so we gave up and did it properly, physically, and it worked fine. Nonetheless this remains a borderline case, so I will keep it up here for a while. According to my GraphPad Prism manual, AIC is the way to approach this sort of problem, so I will keep this open until I verify that for myself. Maybe this is one of those inter-generational questions for which the physical solution is the simpler one. In this particular case I am glad to have had feedback that stats would not be the best way to settle things, and that when 'put to the sword' the solution should always be empirical. I will change the accepted answer, of course, if something better comes along.
50,620
How to use a mathematical model for data analysis in R
You can fit this equation to your data using non-linear regression. I'd give it a try with nls. The crucial aspect of using nls is to provide sensible starting values. Example code could look something like nls.mod <- nls(Wa ~ Wma*(1 + a*(Na + alpha*N))^(-b), data = dataset, start = list(a = 1, b = 1, alpha = 1)). – COOLSerdash

Nonlinear regression would be my first thought too, but beware: nonlinear least squares by default assumes constant variance. It may be that a modified version of the equation (perhaps on a log scale, for example) is a better description of the relationship once you take proper account of the error term. If you don't have theory as a way of choosing an error term, you might look at the relationship between the spread of the data (perhaps via the residuals) and the mean (perhaps via an initial model that fits reasonably well) to assess the reasonableness of assuming constant variance. Considering the response is mean yield per plant, it seems highly plausible that the variation about the mean could be larger when the mean is larger. – Glen_b
50,621
How to use a mathematical model for data analysis in R
The mean yield per plant is strictly positive, which means we can deal with its logarithm (as a real number). The deterministic version of the model can be usefully rewritten as: $$\ln W_{A} = \ln W_{mA} - b_A \ln (1 + a_{A} (N_{A}+\alpha N_{B})).$$ The obvious stochastic analogue would be the non-linear regression: $$\ln W_{A,i} = \ln W_{mA} - b_A \ln (1 + a_{A} (N_{A,i}+\alpha N_{B,i})) + \varepsilon_i \quad\quad\quad \varepsilon_i \sim \text{IID N}(0, \sigma^2).$$ This is a non-linear regression with unknown coefficient parameters $a_A$, $b_A$ and $\alpha$, and unknown error variance $\sigma^2$. It can be programmed in R using the following syntax:

#Define the formula for the non-linear regression
FUNC <- function(Wma, Na, Nb, a, b, alpha) { log(Wma) - b*log(1 + a*(Na + alpha*Nb)) }
FORMULA <- as.formula(log(Wa) ~ FUNC(Wma, Na, Nb, a, b, alpha));

#Set parameters
a <- 1.2; b <- 0.2; alpha <- 0.4; sigma <- 0.1;

#Create mock data for analysis
set.seed(10000);
N <- 1000;
Wma <- rgamma(N, 20, 4);
Na <- rgamma(N, 4, 2);
Nb <- rgamma(N, 6, 1);
Wa <- rep(0, N);
for (i in 1:N) { Wa[i] <- exp(FUNC(Wma[i], Na[i], Nb[i], a, b, alpha) + rnorm(1, 0, sigma)) }
DATA <- as.data.frame(cbind(Wa, Wma, Na, Nb));

#Fit the data to the non-linear regression model
MODEL <- nls(FORMULA, data = DATA, start = list(a = 1, b = 1, alpha = 1));

With this particular mock data the model returns estimates that are reasonably close to the coefficient values that were used to generate the mock data:

summary(MODEL)

Formula: log(Wa) ~ FUNC(Wma, Na, Nb, a, b, alpha)

Parameters:
      Estimate Std. Error t value Pr(>|t|)
a      1.30205    0.31614   4.119 4.13e-05 ***
b      0.17763    0.01706  10.410  < 2e-16 ***
alpha  0.53021    0.08063   6.576 7.81e-11 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1028 on 997 degrees of freedom

Number of iterations to convergence: 5
Achieved convergence tolerance: 2.445e-06
50,622
Bias and variance estimation with bootstrap
I think the formulas for the bootstrap are as follows, although I can't seem to find a proper reference. Let $h = \frac{1}{K} \sum_{k=1}^{K}{\hat{y}_k}$ be the mean predicted outcome from the $K$ bootstrap resamples. Then: the bias is $y - h$, and the variance is $\frac{1}{K-1}\sum_{k=1}^{K} (\hat{y}_k - h)^2$.
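A sketch of those two formulas in R, for a single prediction point, using hypothetical bootstrap predictions yhat_k and an observed outcome y:

boot_bias_var <- function(y, yhat_k) {
  h <- mean(yhat_k)                         # mean prediction over the K resamples
  c(bias = y - h, variance = var(yhat_k))   # var() uses the 1/(K-1) denominator
}

yhat_k <- c(2.1, 1.8, 2.4, 2.0, 2.2)        # hypothetical bootstrap predictions
boot_bias_var(y = 2.5, yhat_k = yhat_k)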
50,623
How to report Kruskal-Wallis test?
As @Germaniawerks remarked above, if you only have two groups (managers vs juniors) you should use the rank-sum (aka Mann-Whitney-Wilcoxon) test, and there is no need for Kruskal-Wallis. If you have more than two groups, then Kruskal-Wallis will tell you whether they are significantly different, but if you want to know which pairs differ significantly from each other, you need to do a post hoc comparison, e.g. rank-sum tests with Bonferroni correction. Now answering your question specifically: I think your first formulation is completely acceptable. But personally, I don't think it makes a lot of sense to report the U statistic (in the case of a comparison between two groups, it should be the U of Mann-Whitney, as explained above): few people have an intuitive understanding of it, and this particular number (U=14.338) does not convey anything meaningful to the reader, only taking up space. Instead, I would provide the means and standard deviations of your distributions for both groups. I would also explicitly mention the test you are doing. So taking your example, I would write something along these lines: Managers are more likely to arrive late than juniors (managers: $10 \pm 5$ minutes late, juniors: $2\pm4$ minutes late, mean$\pm$SD, $N=10$ for both groups, p<.01, Mann-Whitney-Wilcoxon rank-sum test). That's a lot of information to put inside one pair of brackets, so you can split it as you like. For example, you can report N in the methods section and make a boxplot figure to illustrate the distributions. Then it would suffice to write: Managers are more likely to arrive late than juniors, see Figure 1 (p<.01, Mann-Whitney-Wilcoxon rank-sum test). Update: Note that if your data have gross outliers, then means and SDs do not have a lot of meaning and you should rather not report them. Above I assumed that there are no gross outliers in either of the groups. Otherwise the situation is more complex, and maybe the best way is to provide a boxplot without giving any numbers in the text at all.
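For concreteness, a sketch of the two-group analysis described above in R, with made-up lateness data (not the original poster's): the test statistic plus the summary statistics one might report alongside it.

managers <- c(12, 8, 15, 5, 10, 9, 14, 7, 11, 13)   # hypothetical minutes late
juniors  <- c( 1, 3,  0, 5,  2, 4,  1, 2,  3,  0)
wilcox.test(managers, juniors)                       # Mann-Whitney-Wilcoxon rank-sum test
c(mean(managers), sd(managers), mean(juniors), sd(juniors))   # descriptives to report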
50,624
How to report Kruskal-Wallis test?
Following the very useful comments above, I would like to add that the median should be reported instead of the mean. The statement "Managers are more likely to arrive late than juniors (H=14.338, p<.01)" is incomplete. The only thing it says is that there is a difference between the groups. It does not specify where the difference lies or what the exact difference is. For that purpose, medians are reported. Following the scenario in the question, I would recommend rephrasing: On a 5-point Likert scale, managers reported being late more often than juniors (H = xx, p < .01, MdnManagers = x, MdnJuniors = x). The statement "A statistically significant difference (H=14.338, p<.01) exists between late arrivals at work by managers and juniors." is fine, but could be more informative.
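A short R sketch of the corresponding computation, with hypothetical 5-point Likert responses for three groups (all values invented for illustration):

set.seed(1)
rating <- sample(1:5, 90, replace = TRUE)                 # hypothetical responses
group  <- factor(rep(c("manager", "junior", "senior"), each = 30))
kruskal.test(rating ~ group)                              # H statistic and p-value
tapply(rating, group, median)                             # the medians to report
pairwise.wilcox.test(rating, group, p.adjust.method = "bonferroni")  # post hoc pairs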
How to report Kruskal-Wallis test?
Following above very useful comments I like to add that the median should be reported instead of the mean. The statement "Managers are more likely to arrive late than juniors (H=14.338, p<.01)" is in
How to report Kruskal-Wallis test? Following above very useful comments I like to add that the median should be reported instead of the mean. The statement "Managers are more likely to arrive late than juniors (H=14.338, p<.01)" is incomplete. The only thing it says is that there is a difference between the groups. It does not specify where the difference lies or what the exact difference is. For that purpose medians are reported. Following the scenario in the question I would recommend rephrasing; On a 5-point likert scale managers reported to be more often late than juniors (H = xx, p < .01, MdnManagers = x, MdnJuniors x = ) The statement "A statistically significant difference (H=14.338, p<.01) exists between late arrivals at work by managers and juniors." is fine, but could be more informative.
How to report Kruskal-Wallis test? Following above very useful comments I like to add that the median should be reported instead of the mean. The statement "Managers are more likely to arrive late than juniors (H=14.338, p<.01)" is in
50,625
Maximizing returns - A Bayesian approach
As a general principle, risk-responsiveness is incorporated into an economic analysis through the shape of the utility function. In your analysis you have a utility function $U$ that operates on the total return $\boldsymbol{a} \cdot \boldsymbol{r}$ to produce the utility of that return. Since your utility function is linear with respect to the return, it is risk-neutral. Optimisation of this utility function will place the maximum allowable investment in the asset with the highest expected return, then the maximum allowable investment in the asset with the next highest expected return, and so on. If you would like to change to a risk-averse position, you should change the utility function to some concave function. For example, if you would like to use a utility function with constant relative-risk aversion you could use the isoelastic utility function: $$U(x) = \left\{ \begin{matrix} \frac{x^{1-\phi}-1}{1-\phi} & & \text{for } \phi \neq 1 \\ \ln(x) & & \text{for } \phi = 1 \end{matrix} \right\} $$ where the parameter $\phi$ is the coefficient of relative-risk aversion. You could then formulate your problem as a constrained non-linear optimisation problem, and solve it using either Lagrangian optimisation or penalty methods. The concavity of the utility function will then militate against "putting all your eggs in one basket", and the optimum will tend to involve a greater asset spread.
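To make this concrete, here is a small R sketch that evaluates expected isoelastic utility over a grid of two-asset allocations; the return distributions and the risk-aversion coefficient are arbitrary choices for illustration, not part of the original question.

set.seed(1)
r1 <- rnorm(1e4, 0.08, 0.20)    # hypothetical risky asset returns
r2 <- rnorm(1e4, 0.03, 0.05)    # hypothetical safer asset returns
U  <- function(x, phi = 3) if (phi == 1) log(x) else (x^(1 - phi) - 1) / (1 - phi)
a  <- seq(0, 1, by = 0.01)      # weight on asset 1
EU <- sapply(a, function(w) mean(U(1 + w * r1 + (1 - w) * r2)))
a[which.max(EU)]                # risk-averse optimum lies in the interior

With a linear (risk-neutral) utility the optimum would sit at a corner of the allocation grid; the concave utility pulls it into the interior, spreading the investment across both assets.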
Maximizing returns - A Bayesian approach
As a general principle, risk-responsiveness is incorporated into an economic analysis through the shape of the utility function. In your analysis you have a utility function $U$ that operates on the
Maximizing returns - A Bayesian approach As a general principle, risk-responsiveness is incorporated into an economic analysis through the shape of the utility function. In your analysis you have a utility function $U$ that operates on the total return $\boldsymbol{a} \cdot \boldsymbol{r}$ to produce the utility of that return. Since your utility function is linear with respect to the return, it is risk-neutral. Optimisation of this utility function will place the maximum allowable investment in the asset with the highest expected return, then the maximum allowable investment in the asset with the next highest expected return, and so on. If you would like to change to a risk-averse position, you should change the utility function to some concave function. For example, if you would like to use a utility function with constant relative-risk aversion you could use the isoelastic utility function: $$U(x) = \left\{ \begin{matrix} \frac{x^{1-\phi}-1}{1-\phi} & & \text{for } \phi \neq 1 \\ \ln(x) & & \text{for } \phi = 1 \end{matrix} \right\} $$ where the parameter $\phi$ is the coefficient of relative-risk aversion. You could then formulate your problem as a constrained non-linear optimisation problem, and solve it using either Lagrangian optimisation or penalty methods. The concavity of the utility function will then militate against "putting all your eggs in one basket", and the optimum will tend to involve a greater asset spread.
Maximizing returns - A Bayesian approach As a general principle, risk-responsiveness is incorporated into an economic analysis through the shape of the utility function. In your analysis you have a utility function $U$ that operates on the
50,626
Image classification using histogram
In the paper you cited, the histogram considers the three color values of each pixel. They used HSV instead of RGB. Here's a package which can do that transform, but probably get this working with RGB first. Each bin of the histogram corresponds to one possible combination of the three color values, not just a single channel. Think of it as a 3D histogram. (Google "Color Histogram", I can't post any more links). They also used 16 bins per component, as quoted in the article: "The number of bins per color component has been fixed to 16, and the dimension of each histogram is $16^3 = 4096$." (Third page, left column, almost at the bottom of the page.) You could also think of it as a three-level histogram, if that helps: the first level considers the first color component, the second level the second, and the third level the third.
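A minimal R sketch of such a joint 16-bins-per-channel histogram; it assumes the pixel data are already arranged as an N x 3 matrix of channel values in [0, 1] (e.g. HSV triplets, one row per pixel), which is my assumption for the sketch, not something taken from the paper.

img  <- matrix(runif(300), ncol = 3)                       # placeholder pixel data
bins <- pmin(floor(img * 16), 15)                          # per-channel bin index 0..15
idx  <- bins[, 1] * 256 + bins[, 2] * 16 + bins[, 3] + 1   # joint bin index 1..4096
h    <- tabulate(idx, nbins = 4096)                        # the 16^3 = 4096-bin histogram
h    <- h / sum(h)                                         # normalise before comparing images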
Image classification using histogram
In the paper you sited the histogram considered the three color values of each pixel. They used HSV instead of RGB. Here's a package which can do that transform, but probably get this working with RGB
Image classification using histogram In the paper you sited the histogram considered the three color values of each pixel. They used HSV instead of RGB. Here's a package which can do that transform, but probably get this working with RGB first. Each bin of the histogram considered each possible combination of colors, not just a single one. Think of it as a 3D histogram. (Google "Color Histogram", I can't post any more links). They also used 16 bins, as quoted by the article: "The number of bins per color component has been fixed to 16, and the dimension of each histogram is $16^3 = 4096$." (Third page, left column, almost at the bottom of the page) You could also think of it as a three level histogram if that helps where the first level considers the first color, the second level considers the second color, and the third the third color.
Image classification using histogram In the paper you sited the histogram considered the three color values of each pixel. They used HSV instead of RGB. Here's a package which can do that transform, but probably get this working with RGB
50,627
Prediction interval for number of biased coin tosses to get 2 consecutive heads
This answer implements an approach along the lines of whuber's comment, where $p$ is estimated naively from the 10 tosses made at the start, and then 'plugged-in' to $P(F_{HH}|p)$ to get the prediction interval. The approach does not explicitly account for uncertainty in $p$, which leads to poor performance in some cases, as shown below. If we had many more than 10 tosses available to estimate $p$, then this approach might work fine. It would be interesting to know of other approaches which can account for the uncertainty in $p$. All code in this answer is in R. Step 1: Code to compute $P(F_{HH}|p)$ Firstly, we need to be able to compute $P(F_{HH}|p)$. The following code does that analytically (since the simulation approach is very inefficient for small $p$): pmf_FHH<-function(p, Nout){ ############################################################# # # Analytically compute the probability mass function for F_HH # F_HH = number of coin flips required to give 2 consecutive heads # # # p = probability of heads (length 1 vector) # Nout = integer vector of values for which we want the pmf # # Quick exit if(p==0) return(Nout*0) if(p==1) return((Nout==2)) if(max(Nout)==1) return(0*Nout) # Recursively compute the pmf N=max(Nout) # Storage PrN_T=rep(NA,N) # Probability that we got to the N'th flip without 2 consecutive heads, AND the N'th flip was a tail PrN_2H=rep(NA,N) # Probability that the N'th flip gives 2 consecutive heads (for the first time) # First flip PrN_T[1]=(1-p) # Probability that we got to the first flip and it was a tail PrN_2H[1]=0 # Can't have 2 heads on 1st flip # Second flip PrN_T[2] =(1- p) # Probability we get to the second flip and it was a tail PrN_2H[2]=p*p # Probability that we get 2 heads after 2 flips # Third flip and above for(i in 3:length(PrN_2H)){ # 'Probability that we got to the i'th flip, and it was a tail # = [1-(probability that we have terminated by i-1) ]*(1-p) PrN_T[i] = (1-sum(PrN_2H[1:(i-1)]))*(1-p) # Probability that flip i-2 was a tail, and i-1 and i were heads PrN_2H[i]=PrN_T[i-2]*p*p } return(PrN_2H[Nout]) } To test the above function and for later use testing the prediction intervals, we write another function to simulate the coin toss process. sim_FHH_p<-function(p,n=round(1e+04/p**3), pattern='11'){ # Simulate many coin toss sequences, ending in the first occurrence of pattern # # p = probability of 1 (1=heads) # n = number of individual tosses to sample (split into sequences ending in pattern) # pattern = pattern to split on (1=heads) # # returns vector with the length of each toss sequence # Make a data string of many coin flips e.g. '011010011'. random_data=paste(sample(c(0,1),n,replace=T,prob=c(1-p,p)), collapse="") # Split up by occurrence of pattern, count characters, and add the number of characters in pattern. # Each element of random_FHH gives a number of coin-tosses to get pattern random_FHH=nchar(unlist(strsplit(random_data,pattern)))+nchar(pattern) # The last string may not have ended in pattern. Remove it. random_FHH=random_FHH[-length(random_FHH)] return(random_FHH) } Now I run a test to check that the simulated and analytical results are 'the same' (increase the 1e+07 to get better agreement). 
set.seed(1) p=0.3 # Simulate coin-toss qq=sim_FHH_p(p,n=1e+07, pattern='11') Nmax=round(10/p**2) # Convenient upper limit where we check pmf_FHH empirical_pmf=rep(NA,Nmax) for(i in 1:Nmax) empirical_pmf[i] = (sum(qq==i)/length(qq)) png('test_analytical_relation.png',width=6,height=5,res=200,units='in') plot(1:Nmax,empirical_pmf,main='Test of analytical relation',ylab='pmf') points(1:Nmax,pmf_FHH(p, 1:Nmax),col='red',t='l') legend('topright', c('Approximate empirical pmf', 'Analytical pmf'), pch=c(1,NA),lty=c(NA,1),col=c(1,2)) dev.off() It looks fine. Step 2: Code to compute the prediction interval, assuming $p$ is known. If p is known, then we can directly use $P(F_{HH}|p)$ to get a prediction interval for $F_{HH}$. For a one-sided (1-$\alpha$) prediction interval, we just need to get the (1-$\alpha$) quantile of $P(F_{HH}|p)$. The code is: ci_FHH<-function(p, alpha=0.1,Nmax=round(10/max(p,0.001)**2), two.sided=FALSE){ ## Compute a prediction interval for FHH, assuming p ## is known exactly ## ## By default, compute 1-sided prediction interval to bound the upper values of FHH if(p==0){ return(c(Inf, Inf, NA, NA)) }else if(p==1){ return(c(2, 2, 0, 1)) }else{ cdf_FHH=cumsum(pmf_FHH(p, 1:Nmax)) if(two.sided){ lowerInd=max(which(cdf_FHH<(alpha/2)))+1 upperInd=min(which(cdf_FHH>(1-alpha/2))) }else{ lowerInd=2 upperInd=min(which(cdf_FHH>(1-alpha))) } return(c(lowerInd,upperInd, cdf_FHH[lowerInd-1],cdf_FHH[upperInd])) } Step 3: Test the prediction interval coverage Theoretically we expect the prediction intervals developed above to be very good if $p$ is estimated correctly, but perhaps very bad if it is not. To test the coverage, the following function assumes the true value of $p$ is known, and then repeatedly makes an estimate of $p$ based on 10 coin flips (using the fraction of observed heads), and computes a prediction interval with the estimated value of $p$. test_ci_with_estimated_p<-function(true_p=0.5, theoretical_coverage=0.9, len_data=10, Nsim=100){ # Simulate many coin-toss experiments simRuns=sim_FHH_p(true_p,n=1e+07) # Simulate many prediction intervals with ESTIMATED p, and see what their # coverage is like store_est_p=rep(NA,Nsim) store_coverage=rep(NA,Nsim) for(i in 1:Nsim){ # Estimate p from a sample of size len_data mysim=rbinom(len_data,1,true_p) est_p = mean(mysim) # sample estimate of p myci=ci_FHH(est_p,alpha=(1-theoretical_coverage)) # store_est_p[i]=est_p store_coverage[i] = sum(simRuns>=myci[1] & simRuns<=myci[2])/length(simRuns) } return(list(est_p=store_est_p,coverage=store_coverage, simRuns=simRuns)) } A few tests confirm that the coverage is nearly correct when $p$ is estimated correctly, but can be very bad when it is not. The figures show tests with real $p$ =0.2 and 0.5 (vertical lines), and a theoretical coverage of 0.9 (horizontal lines). It is clear that if the estimated $p$ is too high, then the prediction intervals tend to undercover, whereas if the estimated $p$ is too low, they over-cover, except if the estimated $p$ is zero, in which case we cannot compute any prediction interval (since with the plug-in estimate, heads should never occur). With only 10 samples to estimate $p$, often the coverage is far from the theoretical level. 
t5=test_ci_with_estimated_p(0.5,theoretical_coverage=0.9) t2=test_ci_with_estimated_p(0.2,theoretical_coverage=0.9) png('test_CI.png',width=12,height=10,res=300,units='in') par(mfrow=c(2,2)) plot(t2$est_p,t2$coverage,xlab='Estimated value of p', ylab='Coverage',cex=2,pch=19,main='CI performance when p=0.2') abline(h=0.9) abline(v=0.2) plot(t5$est_p,t5$coverage,xlab='Estimated value of p', ylab='Coverage',cex=2,pch=19,main='CI performance when p=0.5') abline(h=0.9) abline(v=0.5) #dev.off() barplot(table(t2$est_p),main='Estimated p when p=0.2') barplot(table(t5$est_p),main='Estimated p when p=0.5') dev.off() In the above examples, the mean coverage was pretty close to the desired coverage when true $p$=0.5 (87% compared with the desired 90%), but not so good when true $p$=0.2 (71% vs 90%). # Compute mean coverage + other stats summary(t2$coverage) summary(t5$coverage)
Prediction interval for number of biased coin tosses to get 2 consecutive heads
This answer implements an approach along the lines of whuber's comment, where $p$ is estimated naively from the 10 tosses made at the start, and then 'plugged-in' to $P(F_{HH}|p)$ to get the predictio
Prediction interval for number of biased coin tosses to get 2 consecutive heads This answer implements an approach along the lines of whuber's comment, where $p$ is estimated naively from the 10 tosses made at the start, and then 'plugged-in' to $P(F_{HH}|p)$ to get the prediction interval. The approach does not explicitly account for uncertainty in $p$, which leads to poor performance in some cases, as shown below. If we had many more than 10 tosses available to estimate $p$, then this approach might work fine. It would be interesting to know of other approaches which can account for the uncertainty in $p$. All code in this answer is in R. Step 1: Code to compute $P(F_{HH}|p)$ Firstly, we need to be able to compute $P(F_{HH}|p)$. The following code does that analytically (since the simulation approach is very inefficient for small $p$): pmf_FHH<-function(p, Nout){ ############################################################# # # Analytically compute the probability mass function for F_HH # F_HH = number of coin flips required to give 2 consecutive heads # # # p = probability of heads (length 1 vector) # Nout = integer vector of values for which we want the pmf # # Quick exit if(p==0) return(Nout*0) if(p==1) return((Nout==2)) if(max(Nout)==1) return(0*Nout) # Recursively compute the pmf N=max(Nout) # Storage PrN_T=rep(NA,N) # Probability that we got to the N'th flip without 2 consecutive heads, AND the N'th flip was a tail PrN_2H=rep(NA,N) # Probability that the N'th flip gives 2 consecutive heads (for the first time) # First flip PrN_T[1]=(1-p) # Probability that we got to the first flip and it was a tail PrN_2H[1]=0 # Can't have 2 heads on 1st flip # Second flip PrN_T[2] =(1- p) # Probability we get to the second flip and it was a tail PrN_2H[2]=p*p # Probability that we get 2 heads after 2 flips # Third flip and above for(i in 3:length(PrN_2H)){ # 'Probability that we got to the i'th flip, and it was a tail # = [1-(probability that we have terminated by i-1) ]*(1-p) PrN_T[i] = (1-sum(PrN_2H[1:(i-1)]))*(1-p) # Probability that flip i-2 was a tail, and i-1 and i were heads PrN_2H[i]=PrN_T[i-2]*p*p } return(PrN_2H[Nout]) } To test the above function and for later use testing the prediction intervals, we write another function to simulate the coin toss process. sim_FHH_p<-function(p,n=round(1e+04/p**3), pattern='11'){ # Simulate many coin toss sequences, ending in the first occurrence of pattern # # p = probability of 1 (1=heads) # n = number of individual tosses to sample (split into sequences ending in pattern) # pattern = pattern to split on (1=heads) # # returns vector with the length of each toss sequence # Make a data string of many coin flips e.g. '011010011'. random_data=paste(sample(c(0,1),n,replace=T,prob=c(1-p,p)), collapse="") # Split up by occurrence of pattern, count characters, and add the number of characters in pattern. # Each element of random_FHH gives a number of coin-tosses to get pattern random_FHH=nchar(unlist(strsplit(random_data,pattern)))+nchar(pattern) # The last string may not have ended in pattern. Remove it. random_FHH=random_FHH[-length(random_FHH)] return(random_FHH) } Now I run a test to check that the simulated and analytical results are 'the same' (increase the 1e+07 to get better agreement). 
set.seed(1) p=0.3 # Simulate coin-toss qq=sim_FHH_p(p,n=1e+07, pattern='11') Nmax=round(10/p**2) # Convenient upper limit where we check pmf_FHH empirical_pmf=rep(NA,Nmax) for(i in 1:Nmax) empirical_pmf[i] = (sum(qq==i)/length(qq)) png('test_analytical_relation.png',width=6,height=5,res=200,units='in') plot(1:Nmax,empirical_pmf,main='Test of analytical relation',ylab='pmf') points(1:Nmax,pmf_FHH(p, 1:Nmax),col='red',t='l') legend('topright', c('Approximate empirical pmf', 'Analytical pmf'), pch=c(1,NA),lty=c(NA,1),col=c(1,2)) dev.off() It looks fine. Step 2: Code to compute the prediction interval, assuming $p$ is known. If p is known, then we can directly use $P(F_{HH}|p)$ to get a prediction interval for $F_{HH}$. For a one-sided (1-$\alpha$) prediction interval, we just need to get the (1-$\alpha$) quantile of $P(F_{HH}|p)$. The code is: ci_FHH<-function(p, alpha=0.1,Nmax=round(10/max(p,0.001)**2), two.sided=FALSE){ ## Compute a prediction interval for FHH, assuming p ## is known exactly ## ## By default, compute 1-sided prediction interval to bound the upper values of FHH if(p==0){ return(c(Inf, Inf, NA, NA)) }else if(p==1){ return(c(2, 2, 0, 1)) }else{ cdf_FHH=cumsum(pmf_FHH(p, 1:Nmax)) if(two.sided){ lowerInd=max(which(cdf_FHH<(alpha/2)))+1 upperInd=min(which(cdf_FHH>(1-alpha/2))) }else{ lowerInd=2 upperInd=min(which(cdf_FHH>(1-alpha))) } return(c(lowerInd,upperInd, cdf_FHH[lowerInd-1],cdf_FHH[upperInd])) } Step 3: Test the prediction interval coverage Theoretically we expect the prediction intervals developed above to be very good if $p$ is estimated correctly, but perhaps very bad if it is not. To test the coverage, the following function assumes the true value of $p$ is known, and then repeatedly makes an estimate of $p$ based on 10 coin flips (using the fraction of observed heads), and computes a prediction interval with the estimated value of $p$. test_ci_with_estimated_p<-function(true_p=0.5, theoretical_coverage=0.9, len_data=10, Nsim=100){ # Simulate many coin-toss experiments simRuns=sim_FHH_p(true_p,n=1e+07) # Simulate many prediction intervals with ESTIMATED p, and see what their # coverage is like store_est_p=rep(NA,Nsim) store_coverage=rep(NA,Nsim) for(i in 1:Nsim){ # Estimate p from a sample of size len_data mysim=rbinom(len_data,1,true_p) est_p = mean(mysim) # sample estimate of p myci=ci_FHH(est_p,alpha=(1-theoretical_coverage)) # store_est_p[i]=est_p store_coverage[i] = sum(simRuns>=myci[1] & simRuns<=myci[2])/length(simRuns) } return(list(est_p=store_est_p,coverage=store_coverage, simRuns=simRuns)) } A few tests confirm that the coverage is nearly correct when $p$ is estimated correctly, but can be very bad when it is not. The figures show tests with real $p$ =0.2 and 0.5 (vertical lines), and a theoretical coverage of 0.9 (horizontal lines). It is clear that if the estimated $p$ is too high, then the prediction intervals tend to undercover, whereas if the estimated $p$ is too low, they over-cover, except if the estimated $p$ is zero, in which case we cannot compute any prediction interval (since with the plug-in estimate, heads should never occur). With only 10 samples to estimate $p$, often the coverage is far from the theoretical level. 
t5=test_ci_with_estimated_p(0.5,theoretical_coverage=0.9) t2=test_ci_with_estimated_p(0.2,theoretical_coverage=0.9) png('test_CI.png',width=12,height=10,res=300,units='in') par(mfrow=c(2,2)) plot(t2$est_p,t2$coverage,xlab='Estimated value of p', ylab='Coverage',cex=2,pch=19,main='CI performance when p=0.2') abline(h=0.9) abline(v=0.2) plot(t5$est_p,t5$coverage,xlab='Estimated value of p', ylab='Coverage',cex=2,pch=19,main='CI performance when p=0.5') abline(h=0.9) abline(v=0.5) #dev.off() barplot(table(t2$est_p),main='Estimated p when p=0.2') barplot(table(t5$est_p),main='Estimated p when p=0.5') dev.off() In the above examples, the mean coverage was pretty close to the desired coverage when true $p$=0.5 (87% compared with the desired 90%), but not so good when true $p$=0.2 (71% vs 90%). # Compute mean coverage + other stats summary(t2$coverage) summary(t5$coverage)
Prediction interval for number of biased coin tosses to get 2 consecutive heads This answer implements an approach along the lines of whuber's comment, where $p$ is estimated naively from the 10 tosses made at the start, and then 'plugged-in' to $P(F_{HH}|p)$ to get the predictio
50,628
Oversampling correction for multinomial logistic regression
Off the cuff, I presume one could proceed as in logistic regression: a generalisation to $K>2$ categories and base category $K$ would be to set the $i$-th correction term to be $$\log \frac{(r_i p_K)}{(r_K p_i)}$$ corresponding to the $i$ vs $K$ contrast. For $K=2$, $p_1$ is as before and $p_K = p_2 = 1-p_1$, so it reduces to $$\log \frac{r_1 (1-p_1)}{(1-r_1) p_1}.$$ However, I'd be happy to be corrected on this one.
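A small R sketch of how the correction would be applied to fitted intercepts; the class shares r (sample) and p (population) below are purely hypothetical, and b0 stands for whatever intercepts your multinomial fit produced for the i-vs-K logits.

r <- c(0.40, 0.40, 0.20)   # hypothetical class shares in the oversampled training data
p <- c(0.05, 0.15, 0.80)   # hypothetical class shares in the population
K <- length(p)             # class K is the base category
corr <- log((r[-K] * p[K]) / (r[K] * p[-K]))   # one correction term per non-base class
b0 <- c(1.2, 0.4)          # placeholder fitted intercepts for classes 1..K-1 vs K
b0 - corr                  # corrected intercepts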
Oversampling correction for multinomial logistic regression
Off the cuff, I presume one could proceed as in logistic regression: a generalisation to $K>2$ categories and base category $K$ would be to set the $i$-th correction term to be $$\log \frac{(r_i p_K)
Oversampling correction for multinomial logistic regression Off the cuff, I presume one could proceed as in logistic regression: a generalisation to $K>2$ categories and base category $K$ would be to set the $i$-th correction term to be $$\log \frac{(r_i p_K)}{(r_K p_i)}$$ corresponding to the $i$ vs $K$ contrast. For $K=2$, $p_1$ is as before and $p_K = p_2 = 1-p_1$, so it reduces to $$\log \frac{r_1 (1-p_1)}{(1-r_1) p_1}.$$ However, I'd be happy to be corrected on this one.
Oversampling correction for multinomial logistic regression Off the cuff, I presume one could proceed as in logistic regression: a generalisation to $K>2$ categories and base category $K$ would be to set the $i$-th correction term to be $$\log \frac{(r_i p_K)
50,629
What to do with my data?
If I may make some suggestions: I would not implement a confidence interval because most students don't really understand what it is anyway. An inter-quartile range would be more appropriate instead Most(?) professors cook their grades to have a normal distribution, so the presence of the normal distribution should not surprise you Other things that could be useful for students using the system is a calculator that will tell them what grades they need on the remaining assignments and tests to obtain a desired final grade In the grade history aspect, you might want to include reference to the number of people in the class and such basic points of reference like their major, their pre-requisites grades, etc. (I'm assuming here that you want to create a grade history for each time the course is taken, not just a grade history for assignment x versus assignment y.) The raw scores versus curved scores should also be interesting to see, however it doesn't seem like you would have access to that information. Edited to add comment on fairness of displaying data with few reports: If you don't know the class population ahead of time, you could (I assume) mention to the user that the percentile is based on x students reporting and that the answer will not be final until all students report. The mechanism of the system you're describing seems odd to me, though. From my experience as a student, the professor publishes the distribution of grades and you as a student can see approximately where you fall. To have a system where it's the students who are doing the completely voluntary reporting of their grades risks misuse. If it's voluntary you can't make people participate and moreover you can't make them tell the truth of the actual grade they received. This is more a school policy thing though, which isn't really your problem.
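One of the suggestions above (the "what do I need on the rest" calculator) is just weighted arithmetic; a hypothetical R helper, with made-up grades and weights, might look like this.

# grade needed on the remaining weight to reach a target final grade (hypothetical helper)
needed <- function(target, grades, weights) {
  (target - sum(grades * weights)) / (1 - sum(weights))
}
needed(target = 85, grades = c(78, 92), weights = c(0.3, 0.2))   # remaining 50% of the grade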
What to do with my data?
If I may make some suggestions: I would not implement a confidence interval because most students don't really understand what it is anyway. An inter-quartile range would be more appropriate instead
What to do with my data? If I may make some suggestions: I would not implement a confidence interval because most students don't really understand what it is anyway. An inter-quartile range would be more appropriate instead Most(?) professors cook their grades to have a normal distribution, so the presence of the normal distribution should not surprise you Other things that could be useful for students using the system is a calculator that will tell them what grades they need on the remaining assignments and tests to obtain a desired final grade In the grade history aspect, you might want to include reference to the number of people in the class and such basic points of reference like their major, their pre-requisites grades, etc. (I'm assuming here that you want to create a grade history for each time the course is taken, not just a grade history for assignment x versus assignment y.) The raw scores versus curved scores should also be interesting to see, however it doesn't seem like you would have access to that information. Edited to add comment on fairness of displaying data with few reports: If you don't know the class population ahead of time, you could (I assume) mention to the user that the percentile is based on x students reporting and that the answer will not be final until all students report. The mechanism of the system you're describing seems odd to me, though. From my experience as a student, the professor publishes the distribution of grades and you as a student can see approximately where you fall. To have a system where it's the students who are doing the completely voluntary reporting of their grades risks misuse. If it's voluntary you can't make people participate and moreover you can't make them tell the truth of the actual grade they received. This is more a school policy thing though, which isn't really your problem.
What to do with my data? If I may make some suggestions: I would not implement a confidence interval because most students don't really understand what it is anyway. An inter-quartile range would be more appropriate instead
50,630
What to do with my data?
I've seen students' scores of different kinds. The distribution often exhibits one or more thresholds reflecting what students might want or have to achieve. And even with unthresholded, continuous scores, the distributions are not normal but rather skewed towards higher scores. You should test the normality assumption. As for the percentiles, I would use the empirical ones, with regard to the aforementioned.
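A minimal R sketch of both steps (normality check and empirical percentile); the scores are simulated here just to make the example runnable.

scores <- pmin(rnorm(200, 75, 12), 100)   # hypothetical grades, capped at 100
shapiro.test(scores)                      # formal normality test
qqnorm(scores); qqline(scores)            # visual check
ecdf(scores)(82)                          # empirical percentile of a score of 82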
What to do with my data?
I've seen students' scores of different kinds. The distribution often exhibits one or more tresholds reflecting what they might want or have to achieve. And even with untresheld and continuous scores
What to do with my data? I've seen students' scores of different kinds. The distribution often exhibits one or more tresholds reflecting what they might want or have to achieve. And even with untresheld and continuous scores the distributions are not normal but rather skewed towards higher scores. You should test the normality assumption. As for the percentiles, I would use the empirical, with regard to the aforementioned.
What to do with my data? I've seen students' scores of different kinds. The distribution often exhibits one or more tresholds reflecting what they might want or have to achieve. And even with untresheld and continuous scores
50,631
Is Bayesian structural equation modelling better than maximum likelihood with smaller sample sizes?
This question is very broad. The answer depends first of all on the model you want to test: higher model complexity tends to reduce the validity of an ML-SEM fit (and probably of a BSEM fit as well). As a starting point, try both and see what difference you get. To get a rough overview of the debate between the two approaches, you could read the following literature (as a start): Asparouhov, T., Muthén, B., & Morin, A. J. S. (2015). Bayesian structural equation modeling with cross-loadings and residual covariances: Comments on Stromeyer et al. Journal of Management, 41(6), 1561-1577. doi:10.1177/0149206315591075 Barrett, P. (2007). Structural equation modelling: Adjudging model fit. Personality and Individual Differences, 42(5), 815–824. doi:10.1016/j.paid.2006.09.018 Kaplan, D., & Depaoli, S. (2012). Bayesian structural equation modeling. In R. Hoyle (Ed.), Handbook of structural equation modeling (pp. 650–673). New York, NY: Guilford Press. Markland, D. (2005). The golden rule is that there are no golden rules: A commentary on Paul Barrett's recommendations for reporting model fit in structural equation modeling. Personality and Individual Differences, 42(5), 851–858. doi:10.1016/j.paid.2006.09.023 Muthén, B. O., & Asparouhov, T. (2012). Bayesian structural equation modeling: A more flexible representation of substantive theory. Psychological Methods, 17(3), 313–335. doi:10.1037/a0026802 Stromeyer, W. R., Miller, J. W., Sriramachandramurthy, R., & DeMartino, R. (2015). The prowess and pitfalls of Bayesian structural equation modeling: Important considerations for management research. Journal of Management Research, 41(2), 491–520.
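If you do want to try both side by side in R, a hedged sketch is below; it assumes the lavaan and blavaan packages are installed (blavaan also needs an MCMC backend such as Stan), and the one-factor model and built-in dataset are chosen purely for illustration, not taken from the question.

library(lavaan)
library(blavaan)
model <- ' visual =~ x1 + x2 + x3 '                       # toy one-factor model
fit_ml    <- cfa(model, data = HolzingerSwineford1939)    # maximum likelihood SEM
fit_bayes <- bcfa(model, data = HolzingerSwineford1939)   # Bayesian SEM, default priors
summary(fit_ml)
summary(fit_bayes)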
Is Bayesian structural equation modelling better than maximum likelihood with smaller sample sizes?
This question is very broad. It first of all really depends on the model you want to test, in which a higher complexity would decrease the validity of an ML-SEM model (but probably also of a BSEM mode
Is Bayesian structural equation modelling better than maximum likelihood with smaller sample sizes? This question is very broad. It first of all really depends on the model you want to test, in which a higher complexity would decrease the validity of an ML-SEM model (but probably also of a BSEM model). I would say, as a starter, try both and experience/see which difference you get. To give you a gross insight in the debate between both you could read the following literature (as a start): Asparouhov, T., Muthén, B., & Morin, A. J. S. (2015). Bayesian structural equation modeling with cross-loadings and residual covariances: Comments on Stromeyer et al. Journal of Management, 41(6), 1561-1577. doi:10.1177/0149206315591075 Barrett, P. (2007). Structural equation modelling: Adjudging model fit. Personality and Individual Differences, 42(5), 815–824. doi:10.1016/j.paid.2006.09.018 Kaplan, D., & Depaoli, S. (2012). Bayesian structural equation modeling. In R. Hoyle (Ed.), Handbook of structural equation modeling (pp. 650–673). New York, NY: Guilford Press. Markland, D. (2005). The golden rule is that there are no golden rules: A commentary on Paul Barrett's recommendations for reporting model fit in structural equation modeling. Personality and Individual Differences, 42(5), 851–858. doi:10.1016/j.paid.2006.09.023 Muthén, B. O., & Asparouhov, T. (2012). Bayesian structural equation modeling: A more flexible representation of substantive theory. Psychological Methods, 17(3), 313–335. doi:10.1037/a0026802 Stromeyer, W. R., Miller, J. W., Sriramachandramurthy, R., & DeMartino, R. (2015). The prowess and pitfalls of Bayesian structural equation modeling: Important considerations for management research. Journal of Management Research, 41(2), 491–520.
Is Bayesian structural equation modelling better than maximum likelihood with smaller sample sizes? This question is very broad. It first of all really depends on the model you want to test, in which a higher complexity would decrease the validity of an ML-SEM model (but probably also of a BSEM mode
50,632
What would be a parametric model with properties similar to the Theil-Sen estimator?
I believe the S estimator[1] (and its algorithm, FastS[2]) is the closest parametric equivalent to the Theil-Sen estimator. This is because the S estimator explicitly adds a parametric assumption on the distribution of the residuals (through the tuning constant $c$) to get better efficiency at uncontaminated samples. The FastS algorithm is implemented in the robustbase R package[3] distributed through CRAN. There are some differences between the two approaches: FastS is more robust to outliers than Theil-Sen (the latter has a breakdown point of 0.29, the former 0.5). FastS can be computed efficiently for moderately sized datasets, including when there is more than one regressor; Theil-Sen is only defined for univariate regression. These two differences explain why the Theil-Sen estimator is essentially deprecated. Rousseeuw, P.J. and Yohai, V.J. (1984). Robust regression by means of S-estimators, In Robust and Nonlinear Time Series, J. Franke, W. Hardle and R. D. Martin (eds.). Lecture Notes in Statistics 26, 256--272, Springer Verlag, New York. Salibian-Barrera, M. and Yohai, V.J. (2006). A Fast Algorithm for S-Regression Estimates. Journal of Computational and Graphical Statistics, Vol. 15, 414--427. Rousseeuw P., Croux C., Todorov V., Ruckstuhl A., Salibian-Barrera M., Verbeke T., Koller M., Maechler M. (2012). robustbase: Basic Robust Statistics. R package version 0.9--5.
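A hedged R sketch comparing the two on contaminated simulated data; robustbase::lmrob fits an MM-estimator initialised by a FastS S-estimate, and the mblm package (assumed installed) is one way to get a Theil-Sen fit for comparison.

library(robustbase)
library(mblm)                        # assumed available; repeated = FALSE gives Theil-Sen
set.seed(1)
x <- 1:50
y <- 2 + 0.5 * x + rnorm(50)
y[1:5] <- y[1:5] + 25                # contaminate 10% of the sample with gross outliers
coef(lmrob(y ~ x))                   # S/MM estimate (FastS initialisation)
coef(mblm(y ~ x, repeated = FALSE))  # Theil-Sen estimate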
What would be a parametric model with properties similar to the Theil-Sen estimator?
I beleive, the S estimator[1] (and it's algorithm, FastS[2]) is the closest parametric equivalent to the Theil-Sen estimator. This is because the S estimator explicitly adds a parametric assumption on
What would be a parametric model with properties similar to the Theil-Sen estimator? I beleive, the S estimator[1] (and it's algorithm, FastS[2]) is the closest parametric equivalent to the Theil-Sen estimator. This is because the S estimator explicitly adds a parametric assumption on the distribution of the residuals (through the tuning constant $c$) to get better efficiency at uncontaminated samples. The FastS algorithm is implemented in the robustbase R package[3] distributed through CRAN. There are some differences between the two approaches: FastS is more robust to outliers than Theil-Sen (the latter has a breakdown point of 0.29, the former 0.5) FastS can be computed efficiently for moderately sized dataset, including when there are more than one regressor. Theil-Sen is only defined for univariate regression. These two differences explain why the Theil Sen estimator is essentially deprecated. Rousseeuw, P.J. and Yohai, V.J. (1984). Robust regression by means of S-estimators, In Robust and Nonlinear Time Series, J. Franke, W. Hardle and R. D. Martin (eds.). Lectures Notes in Statistics 26, 256--272, Springer Verlag, New York. Salibian-Barrera, M. Yohai, V.J. (2006). A Fast Algorithm for S-Regression Estimates. Journal of Computational and Graphical Statistics, Vol. 15, 414--427. Rousseeuw P., Croux C., Todorov V., Ruckstuhl A., Salibian-Barrera M., Verbeke T., Koller M., Maechler M. (2012). robustbase: Basic Robust Statistics. R package version 0.9--5.
What would be a parametric model with properties similar to the Theil-Sen estimator? I beleive, the S estimator[1] (and it's algorithm, FastS[2]) is the closest parametric equivalent to the Theil-Sen estimator. This is because the S estimator explicitly adds a parametric assumption on
50,633
What would be a parametric model with properties similar to the Theil-Sen estimator?
One possibility consists of using flexible error distributions. This is, you have a model $$y_j = x_j^{\top}\beta + \epsilon_j,$$ where $\epsilon_j\sim F$, and $F$ is a flexible distribution. So, for instance, in order to produce a model that is (relatively) robust to the presence of outliers and skewness, a possible choice for $F$ is a skew-t distribution (there are several types of this).
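One possible implementation in R is the sn package's selm(), which fits linear models with skew-normal or skew-t errors by maximum likelihood; the data below are simulated only to make the sketch runnable, and the parameter values are arbitrary.

library(sn)
set.seed(1)
x <- rnorm(200)
y <- 1 + 2 * x + rst(200, xi = 0, omega = 1, alpha = 4, nu = 5)   # skew-t noise
fit <- selm(y ~ x, family = "ST")
summary(fit)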
What would be a parametric model with properties similar to the Theil-Sen estimator?
One possibility consists of using flexible error distributions. This is, you have a model $$y_j = x_j^{\top}\beta + \epsilon_j,$$ where $\epsilon_j\sim F$, and $F$ is a flexible distribution. So, for
What would be a parametric model with properties similar to the Theil-Sen estimator? One possibility consists of using flexible error distributions. This is, you have a model $$y_j = x_j^{\top}\beta + \epsilon_j,$$ where $\epsilon_j\sim F$, and $F$ is a flexible distribution. So, for instance, in order to produce a model that is (relatively) robust to the presence of outliers and skewness, a possible choice for $F$ is a skew-t distribution (there are several types of this).
What would be a parametric model with properties similar to the Theil-Sen estimator? One possibility consists of using flexible error distributions. This is, you have a model $$y_j = x_j^{\top}\beta + \epsilon_j,$$ where $\epsilon_j\sim F$, and $F$ is a flexible distribution. So, for
50,634
Conditional expectations by conditioning on functions of random variables
Injectivity is sufficient (so long as your function is measurable) Let's assume that $f$ is a measurable function, so that all relevant random variables and events are well-defined. Now, to give this some more structure, suppose we are working in a probability space $(\Omega, \mathscr{S}, \mathbb{P})$ so that $X: \Omega \rightarrow A$ is your conditioning random variable of interest. Since $f$ is an injective function it has a left-inverse $g: B \rightarrow A$ (i.e., $g(f(x))=x$ for all $x \in A$). Thus, for all $x \in A$ you have the following event equivalence: $$\begin{align} \{ \omega \in \Omega | f(X(\omega)) = f(x) \} &= \{ \omega \in \Omega | g(f(X(\omega))) = g(f(x)) \} \\[6pt] &= \{ \omega \in \Omega | X(\omega) = x \} \\[6pt] \end{align}$$ This means that conditioning on $f(X)=f(x)$ is equivalent to conditioning on $X=x$. Thus, so long as $f$ is measurable, that should be enough to obtain equivalence of the conditional expectations. (Proof of this is a bit more involved, since you need to establish this via the Radon-Nikodym form for conditional expectation, or via theorems about sigma-fields; that should not be especially difficult.)
Conditional expectations by conditioning on functions of random variables
Injectivity is sufficient (so long as your function is measurable) Let's assume that $f$ is a measureable function, so that all relevant random variables and events are well-defined. Now, to give thi
Conditional expectations by conditioning on functions of random variables Injectivity is sufficient (so long as your function is measurable) Let's assume that $f$ is a measureable function, so that all relevant random variables and events are well-defined. Now, to give this some more structure, suppose we are working in a probability space $(\Omega, \mathscr{S}, \mathbb{P})$ so that $X: \Omega \rightarrow A$ is your conditioning random variable of interest. Since $f$ is an injective function it has a left-inverse $g: B \rightarrow A$ (i.e., $g(f(x))=x$ for all $x \in A$). Thus, for all $x \in A$ you have the following event equivalence: $$\begin{align} \{ \omega \in \Omega | f(X(\omega)) = f(x) \} &= \{ \omega \in \Omega | g(f(X(\omega))) = g(f(x)) \} \\[6pt] &= \{ \omega \in \Omega | X(\omega) = (x) \} \\[6pt] \end{align}$$ This means that conditioning on $f(X)=x$ is equivalent to conditioning on $X=x$. Thus, so long as $f$ is measureable, that should be enough to obtain equivalence of the conditional expectations. (Proof of this is a bit more involved, since you need to establish this via the Radon-Nikodym form for conditional expectation, or via theorems about sigma-fields; that should not be especially difficult.)
Conditional expectations by conditioning on functions of random variables Injectivity is sufficient (so long as your function is measurable) Let's assume that $f$ is a measureable function, so that all relevant random variables and events are well-defined. Now, to give thi
50,635
Conditional expectations by conditioning on functions of random variables
If $Z=f(X)$ and $f$ is an injective function (measurable, with a measurable left-inverse), then $\sigma(Z)=\sigma(X)$. Since $Z=f(X)$, we have $\sigma(Z) \subset \sigma(X)$; and because $f$ is injective, $X=f^{-1}(Z)=g(Z)$, so $\sigma(X) \subset \sigma(Z)$. Hence $\sigma(Z)=\sigma(X)$, i.e. $\sigma(f(X))=\sigma(X)$, and therefore $$E(Y|X)=E(Y|\sigma(X))=E(Y|\sigma(f(X)))=E(Y|f(X))$$
Conditional expectations by conditioning on functions of random variables
if $Z=f(X)$ and $f$ is injective function so $$\sigma(Z)=\sigma(X)$$ since $Z=f(X)$ so $$\sigma(Z) \subset \sigma(X)$$ and because $f$ is injective so $X=f^{-1}(Z)=g(Z)$ so $$\sigma(X) \subset \sigma
Conditional expectations by conditioning on functions of random variables if $Z=f(X)$ and $f$ is injective function so $$\sigma(Z)=\sigma(X)$$ since $Z=f(X)$ so $$\sigma(Z) \subset \sigma(X)$$ and because $f$ is injective so $X=f^{-1}(Z)=g(Z)$ so $$\sigma(X) \subset \sigma(Z)$$ so $\sigma(Z)=\sigma(X)$ or $\sigma(f(X))=\sigma(X)$ now $$E(Y|X)=E(Y|\sigma(X))=E(Y|\sigma(f(X)))=E(Y|f(X))$$
Conditional expectations by conditioning on functions of random variables if $Z=f(X)$ and $f$ is injective function so $$\sigma(Z)=\sigma(X)$$ since $Z=f(X)$ so $$\sigma(Z) \subset \sigma(X)$$ and because $f$ is injective so $X=f^{-1}(Z)=g(Z)$ so $$\sigma(X) \subset \sigma
50,636
Conditional expectations by conditioning on functions of random variables
Assuming $f$ is measurable I think the weakest condition is: Whenever $\mathbb{E}(Y|X=x_1) \ne \mathbb{E}(Y|X=x_2), f(x_1) \ne f(x_2)$.
Conditional expectations by conditioning on functions of random variables
Assuming $f$ is measurable I think the weakest condition is: Whenever $\mathbb{E}(Y|X=x_1) \ne \mathbb{E}(Y|X=x_2), f(x_1) \ne f(x_2)$.
Conditional expectations by conditioning on functions of random variables Assuming $f$ is measurable I think the weakest condition is: Whenever $\mathbb{E}(Y|X=x_1) \ne \mathbb{E}(Y|X=x_2), f(x_1) \ne f(x_2)$.
Conditional expectations by conditioning on functions of random variables Assuming $f$ is measurable I think the weakest condition is: Whenever $\mathbb{E}(Y|X=x_1) \ne \mathbb{E}(Y|X=x_2), f(x_1) \ne f(x_2)$.
50,637
Comparing multiclass classification algorithms for a particular application
I am simply copy-pasting the answers I got from Alexandre Passos on Metaoptimize. It would really help if someone here could add more to it. Any binary classifier can be used for multiclass with the 1-vs-all reduction, or the all-vs-all reduction. This list seems to cover most of the common multiclass algorithms. Logistic regression and SVMs are linear (though SVMs are linear in kernel space). Neural networks, decision trees, and KNN aren't linear. Naive Bayes and discriminant analysis are linear. Random forests aren't linear. Logistic regression can give you calibrated probabilities. So can many SVM implementations (though it requires slightly different training). Neural networks can do that too, if using the right loss (softmax). Decision trees and KNN can be probabilistic, though they are not particularly well calibrated. Naive Bayes does not produce well-calibrated probabilities, nor does discriminant analysis. I'm not sure about random forests; it depends on the implementation, I think. All are deterministic except for neural networks and random forests. Why do you want to compare different classification algorithms? Are you trying to decide which one is the best in general, or just for one application? If the former, it's not worth doing, as most such claims are rather sketchy and there is no method which can give that kind of conclusion. If the latter, it is well accepted that cross-validation, or comparing performance on a fixed test set, gives you unbiased results. For multiclass classification it is not always obvious which metric to use, but things like accuracy, per-class precision/recall/F1, per-class AUC, and the confusion matrix are commonly used.
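For the one-application comparison, a hedged R sketch of a cross-validated shoot-out is below; it assumes the caret package (plus its usual backends such as nnet and randomForest), and the method names and dataset are illustrative choices of mine, not part of the original answer.

library(caret)
ctrl <- trainControl(method = "cv", number = 10)
fits <- list(
  multinom = train(Species ~ ., data = iris, method = "multinom",
                   trControl = ctrl, trace = FALSE),
  rf       = train(Species ~ ., data = iris, method = "rf",  trControl = ctrl),
  knn      = train(Species ~ ., data = iris, method = "knn", trControl = ctrl)
)
summary(resamples(fits))        # accuracy and kappa across folds, per model
confusionMatrix(fits$multinom)  # per-class breakdown for one of the models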
Comparing multiclass classification algorithms for a particular application
I am simply copy-pasting the answers I got from Alexandre Passos on Metaoptimize. It would really help if someone here can add more to it. Any binary classifier can be used for multiclass with the 1
Comparing multiclass classification algorithms for a particular application I am simply copy-pasting the answers I got from Alexandre Passos on Metaoptimize. It would really help if someone here can add more to it. Any binary classifier can be used for multiclass with the 1-vs-all reduction, or the all-vs-all reduction. This list seems to cover most of the common multiclass algorithms. Logistic regression and SVMs are linear (though SVMs are linear in kernel space). Neural networks, decision trees, and knn aren't lineasr. Naive bayes and discriminant analysis are linear. Random forests aren't linear. Logistic regression can give you calibrated probabilities. So can many SVM implementations (though it requires slightly different training). Neural networks can do that too, if using a right loss (softmax). Decision trees and KNN can be probabilistic, though are not particularly well calibrated. Naive bayes does not produce well calibrated probabilities, nor does the discriminant analysis. I'm not sure about random forests, depends on the implementation I think. All are deterministic except for neural networks and random forests. Why do you want to compare different classification algorithms? Are you trying to decide which one is the best in general, or just for one application? If the former, it's not worth doing it, as most claims are rather sketchy and there is no method which can give that kind of conclusion. If the latter, it is well accepted that cross-validation, or comparing performance on a fixed test-set, gives you unbiased results. For multiclass classification it is not always obvious which metric to use, but things like accuracy; per-class precision/recall/f1, per-class AUC, and the confusion matrix are commonly used.
Comparing multiclass classification algorithms for a particular application I am simply copy-pasting the answers I got from Alexandre Passos on Metaoptimize. It would really help if someone here can add more to it. Any binary classifier can be used for multiclass with the 1
50,638
Taking the log of variables
You might find this display interesting: These are residuals from a linear regression with ten x-variables (IVs), a skewed error distribution (but one with all moments finite, to which the CLT definitely applies!), and 1000 observations (i.e. the data was simulated). It's a normal qqplot, which if the residuals are close to normal should look reasonably close to a straight line. Clearly, it's not remotely normal looking! The residuals are still pretty skewed. Okay, maybe I didn't have enough variables. Here's one for 100 x-variables: The plot is very similar - and still very skew. So with n=1000 and p=100, we're not seeing anything like what you say we should be seeing.
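For reference, here is a sketch of the kind of simulation described; the exact settings are my guesses at what was used (ten predictors, n = 1000, skewed errors with all moments finite).

set.seed(1)
n <- 1000; p <- 10
X <- matrix(rnorm(n * p), n, p)
e <- rexp(n) - 1                       # skewed errors, all moments finite
y <- drop(X %*% rep(1, p)) + e
r <- resid(lm(y ~ X))
qqnorm(r); qqline(r)                   # residuals remain visibly skewed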
Taking the log of variables
You might find this display interesting: These are residuals from a linear regression with ten x-variables (IVs), a skewed error distribution (but one with all moments finite, to which the CLT definit
Taking the log of variables You might find this display interesting: These are residuals from a linear regression with ten x-variables (IVs), a skewed error distribution (but one with all moments finite, to which the CLT definitely applies!), and 1000 observations (i.e. the data was simulated). It's a normal qqplot, which if the residuals are close to normal should look reasonably close to a straight line. Clearly, it's not remotely normal looking! The residuals are still pretty skewed. Okay, maybe I didn't have enough variables. Here's one for 100 x-variables: The plot is very similar - and still very skew. So with n=1000 and p=100, we're not seeing anything like what you say we should be seeing.
Taking the log of variables You might find this display interesting: These are residuals from a linear regression with ten x-variables (IVs), a skewed error distribution (but one with all moments finite, to which the CLT definit
50,639
Taking the log of variables
Per your comment, the Lindeberg-Feller CLT requires independence (but not identical distributions), along with finite means and variances. Are you sure that the "Y can't be [independent] by definition but this is the case for all regressions" part doesn't kill your argument? Just because it's true by definition doesn't mean that it's not true (or applicable).
Taking the log of variables
Per your comment, Lindberg-Feller CLT requires independence (but not identically distributed), along with a finite means and variance. Are you sure that the "Y can't be [independent] by definition but
Taking the log of variables Per your comment, Lindberg-Feller CLT requires independence (but not identically distributed), along with a finite means and variance. Are you sure that the "Y can't be [independent] by definition but this is the case for all regressions" part doesn't kill your argument? Just because it's true by definition doesn't mean that it's not true (or applicable).
Taking the log of variables Per your comment, Lindberg-Feller CLT requires independence (but not identically distributed), along with a finite means and variance. Are you sure that the "Y can't be [independent] by definition but
50,640
How to analyze this incomplete block design in R?
I think you're exactly right. Set up data like your example: d <- expand.grid(Site=factor(1:10),rep=1:5) d <- transform(d,Clone=factor(LETTERS[(as.numeric(Site)+1) %/% 2])) library(lme4) ## could use development version of lme4 to simulate, but will do ## it by hand beta <- c(2,1,3,-2,2) ## clone effects (intercept + differences) X <- model.matrix(~Clone,d) set.seed(1) u.site <- rnorm(length(levels(d$Site)),sd=1) d$y <- rnorm(nrow(d), mean=X %*% beta + u.site[d$Site], sd=2) Now analyze: m1 <- lmer(y~Clone+(1|Site),data=d) round(fixef(m1),3) ## (Intercept) CloneB CloneC CloneD CloneE ## 2.624 -0.034 2.504 -2.297 2.396 VarCorr(m1) ## Groups Name Std.Dev. ## Site (Intercept) 0.0000 ## Residual 1.6108 I don't think there's actually anything wrong, but I used a pretty big residual variance, and so in this case (probably only on a subset of replicates), lmer estimates a zero among-site variation.
How to analyze this incomplete block design in R?
I think you're exactly right. Set up data like your example: d <- expand.grid(Site=factor(1:10),rep=1:5) d <- transform(d,Clone=factor(LETTERS[(as.numeric(Site)+1) %/% 2])) library(lme4) ## could use
How to analyze this incomplete block design in R? I think you're exactly right. Set up data like your example: d <- expand.grid(Site=factor(1:10),rep=1:5) d <- transform(d,Clone=factor(LETTERS[(as.numeric(Site)+1) %/% 2])) library(lme4) ## could use development version of lme4 to simulate, but will do ## it by hand beta <- c(2,1,3,-2,2) ## clone effects (intercept + differences) X <- model.matrix(~Clone,d) set.seed(1) u.site <- rnorm(length(levels(d$Site)),sd=1) d$y <- rnorm(nrow(d), mean=X %*% beta + u.site[d$Site], sd=2) Now analyze: m1 <- lmer(y~Clone+(1|Site),data=d) round(fixef(m1),3) ## (Intercept) CloneB CloneC CloneD CloneE ## 2.624 -0.034 2.504 -2.297 2.396 VarCorr(m1) ## Groups Name Std.Dev. ## Site (Intercept) 0.0000 ## Residual 1.6108 I don't think there's actually anything wrong, but I used a pretty big residual variance, and so in this case (probably only on a subset of replicates), lmer estimates a zero among-site variation.
How to analyze this incomplete block design in R? I think you're exactly right. Set up data like your example: d <- expand.grid(Site=factor(1:10),rep=1:5) d <- transform(d,Clone=factor(LETTERS[(as.numeric(Site)+1) %/% 2])) library(lme4) ## could use
50,641
Sampling technique to estimate how many toxic waste sites are in a country?
Your approach seems reasonable, especially your choice to stratify your sampling. This will make it more efficient provided you can easily delineate the different industrial zones. I don't have a book to recommend you, but you could model your uncertainty using the Poisson distribution, with $\lambda =$ No. of Toxic Waste Sites per Square Kilometer. You could carry out your sampling program as you described and then find the maximum likelihood estimator for $\lambda_{Ai}$, where $A_i$ is the area of a sampling sector in zone $i$. In particular, you would maximize the following formula wrt $\lambda_{Ai}$, where $N_i$ = number of sectors sampled from zone $i$: $\max\limits_{\lambda_{Ai}} \prod\limits_{j=1}^{N_i} \frac{e^{-\lambda_{Ai}}\lambda_{Ai}^{n_{ij}}}{n_{ij}!}$ where $n_{ij}$ is the number of toxic sites in sector $j$ of zone $i$. The value of $\lambda_{Ai}$ that maximizes the product is $\lambda_{Ai}^* = \frac{1}{N_i}\sum\limits_{j=1}^{N_i}{n_{ij}}$ You will get one estimate per zone, $\lambda_{Ai}^*$, which you can interpret as the frequency of toxic waste sites within a region of area $A_i$. Your uncertainty for the total number of sites in zone $i$ with total area $A_{Ti}$ can be modeled using your estimated $\lambda_{Ai}^*$ in the Poisson distribution: $Poisson(\lambda_{Ai}^*\frac{A_{Ti}}{A_i})$. To get a country-wide estimate, you would need to combine the $\lambda_{Ai}^*$ into another Poisson distribution: Total No. of Sites ~ $Poisson(\sum\limits_{i=1}^{N_{zones}}\lambda_{Ai}^*\frac{A_{Ti}}{A_i})$. Refinements The above should get you a decent estimate. However, if your country is small enough that your sample will cover an appreciable portion of the total land area or of the area within a zone, then you should reduce the total area for each zone by the sampled area in the above formula, so you are modeling the uncertainty on the remaining area (which is actually more accurate in both cases); then you add this uncertainty to your actual counts in the areas you've sampled. Also, you will notice that you're using a point estimate of $\lambda$. There is some uncertainty in the actual value of this quantity, but including it requires using extended likelihood for predicting a Poisson variable. The formula is pretty simple: if $Y_i$ is the total number of sites in zone $i$, then the likelihood function for $Y_i$ is: $L(Y_i) = e^{-(N+1)\hat\theta(Y_i)}\frac{\hat\theta(Y_i)^{Y_i+\sum\limits_{j=1}^{N_i}{n_{ij}}}}{Y_i!}$ Where $\hat\theta(Y_i) = \frac{A_{Ti}}{A_i}(Y_i + \frac{\sum\limits_{j=1}^{N_i}{n_{ij}}}{N_i+1})$ You need to normalize this formula to sum to 1 over the range of relevant $Y_i$. To get the country-wide estimate, you would need to use Monte Carlo simulation for the sum of the $Y_i$ from each area based on the above formula. There are a couple of inexpensive/free simulators out there.
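A minimal R sketch of the point estimate and a rough prediction interval for the country-wide total; the zone names, sector counts and areas below are entirely made up, and it uses the plug-in Poisson rather than the extended-likelihood refinement described above.

counts   <- list(industrial = c(3, 5, 2, 4), rural = c(0, 1, 0))  # hypothetical sites per sampled sector
A_sector <- c(industrial = 1,  rural = 1)     # km^2 per sampled sector
A_total  <- c(industrial = 50, rural = 400)   # km^2 per zone
lambda_hat <- sapply(counts, mean) / A_sector # estimated sites per km^2, by zone
total_rate <- sum(lambda_hat * A_total)       # expected country-wide total
qpois(c(0.05, 0.95), total_rate)              # rough 90% interval for the total count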
Sampling technique to estimate how many toxic waste sites are in a country?
Your approach seems reasonable, especially your choice to stratify your sampling. This will make it more efficient provided you can easily delineate the different industrial zones. I don't have a boo
Sampling technique to estimate how many toxic waste sites are in a country? Your approach seems reasonable, especially your choice to stratify your sampling. This will make it more efficient provided you can easily delineate the different industrial zones. I don't have a book to recommend you, but you could model your uncertainty using the Poisson distribution, with the $\lambda =$ No. of Toxic Waste Sites per Square Kilometer. You could carry out your sampling program as you described and then find the maximum likelihood estimator for $\lambda_{Ai}$ where A is the area of a sampling sector in zone $i$. In particular, you would maximize the following formula wrt $\lambda_{Ai}$ where $N_i$= number of sectors sampled from zone $i$: $\max\limits_{\lambda_{Ai}} \prod\limits_{j=1}^{N_i} \frac{e^{-\lambda_{Ai}}\lambda_{Ai}^{n_{ij}}}{n_{ij}!}$ where $n_{ij}$ is the number of toxic sites in sector $j$ of zone $i$. The value of $\lambda_{Ai}$ that maximizes the product is $\lambda_{Ai}^* = \frac{1}{N_i}\sum\limits_{j=1}^{N_i}{n_{ij}}$ You will get one estimate per zone, $\lambda_{Ai}^*$, which you can interpret as the frequency of toxic waste sites within a region of area $A_i$. Your uncertainty for the total number of sites in Zone $i$ with total area $A_{Ti}$can be modeled using your estimated $\lambda_{Ai}^*$ in the Poisson distribution: $Poisson(\lambda_{Ai}^*\frac{A_{Ti}}{A_i})$. To get a country-wide estiamte, you would need to combine the $\lambda_{Ai}^*$ into another Poisson distribution: Total No. of Sites ~ $Poisson(\sum\limits_{i=1}^{N_{zones}}\lambda_{Ai}^*\frac{A_{Ti}}{A_i})$. Refinements The above should get you a decent estimate. However, if your country is small enough that your sample will cover an appreciable portion of the total land area or area within a zone, then you should reduce the total area for each zone by the sampled area in the above formula, so you are modleing the uncertainty on the remaining area (which is actually more accurate in both cases), then you add this uncertainty to your actual counts in the areas you've sampled. Also, you will notice that you're using a point estimate of $\lambda$. There is some uncertianty in the actual value of this quantity, but including it requires using extended likelihood for predicting a Poisson variable. The formula is pretty simple, if Y is the total number of sites in zone $i$, then the likelihood function for Y is: $L(Y_i) = e^{-(N+1)\hat\theta(Y_i)}\frac{\hat\theta(Y_i)^{Y_i+\sum\limits_{j=1}^{N_i}{n_{ij}}}}{Y_i!}$ Where $\hat\theta(Y_i) = \frac{A_{Ti}}{A_i}(Y_i + \frac{\sum\limits_{j=1}^{N_i}{n_{ij}}}{N_i+1})$ You need to normalize this formula to sum to 1 over the range of relevant Y. To get the country-wide estimate, you would need to use Monte-Carlo simulation for the sum of the $Y_i$ from each area based on the above formula. There are a couple inexpensive/free simulators out there.
Sampling technique to estimate how many toxic waste sites are in a country? Your approach seems reasonable, especially your choice to stratify your sampling. This will make it more efficient provided you can easily delineate the different industrial zones. I don't have a boo
50,642
If $X_{n+1}$ is a martingale subject to $Y_0,\ldots,Y_n$, then is it a martingale with respect to $Y_0^2,\ldots,Y_n^2$?
Prove or disprove $ E \left( X_{n+1} | Y_0^2,\ldots,Y_n^2 \right) = X_n $ I am thinking that if $F=\sigma \left(Y_0,\ldots,Y_n \right)$ and $G=\sigma \left(Y_0^2,\ldots,Y_n^2 \right)$, I need to prove that F = G? Is this correct? Actually, $G\subseteq F$ and the inclusion can be strict. Then I can do something like this: $E \left( X_{n+1} | G \right) = E \left( E \left( X_{n+1} | F \right) | G \right) = E \left( X_{n+1} | F \right) = X_{n+1} $. The third equality is wrong. When $G\subseteq F$, $E \left( E \left( X_{n+1} | F \right) | G \right)= E \left( X_{n+1} | G \right) \ne E \left( X_{n+1} | F \right)$ in general hence this proves nothing. Here is a counterexample to the statement you are trying to show: if $(Y_k)$ is an i.i.d. Bernoulli sequence, then $Y_n^2=1$ almost surely hence $E \left( X_{n+1} | Y_0^2,\ldots,Y_n^2 \right) = E(X_{n+1})\ne X_n$ in general. Also is $E \left( X_{n+1}^2 | G \right) = X_n^2$ (a martingale) given $ E \left( X_{n+1} | Y_0,\ldots,Y_n \right) = X_n. $ No. Actually, by convexity, $E \left( X_{n+1}^2 | G \right) \geqslant E \left( X_{n+1}| G \right)^2= X_n^2$ and the equality holds if and only if $X_{n+1}$ is measurable with respect to $G$.
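To make the counterexample concrete (reading "Bernoulli" here as a ±1-valued sign sequence, which is what makes $Y_n^2=1$; this explicit construction is added for illustration and is not part of the original answer): let $P(Y_k=1)=P(Y_k=-1)=\tfrac12$ i.i.d. and $X_n=\sum_{k=0}^{n}Y_k$. Then $E\left(X_{n+1}\mid Y_0,\ldots,Y_n\right)=X_n+E(Y_{n+1})=X_n$, so $(X_n)$ is a martingale with respect to $(Y_k)$, while $\sigma(Y_0^2,\ldots,Y_n^2)$ is trivial and $E\left(X_{n+1}\mid Y_0^2,\ldots,Y_n^2\right)=E(X_{n+1})=0$, which differs from $X_n$ whenever $X_n\neq 0$.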
If $X_{n+1}$ is a martingale subject to $Y_0,\ldots,Y_n$, then is it a martingale with respect to $Y
Prove or disprove $ E \left( X_{n+1} | Y_0^2,\ldots,Y_n^2 \right) = X_n $ I am thinking that if $F=\sigma \left(Y_0,\ldots,Y_n \right)$ and $G=\sigma \left(Y_0^2,\ldots,Y_n^2 \right)$, I need to p
If $X_{n+1}$ is a martingale subject to $Y_0,\ldots,Y_n$, then is it a martingale with respect to $Y_0^2,\ldots,Y_n^2$? Prove or disprove $ E \left( X_{n+1} | Y_0^2,\ldots,Y_n^2 \right) = X_n $ I am thinking that if $F=\sigma \left(Y_0,\ldots,Y_n \right)$ and $G=\sigma \left(Y_0^2,\ldots,Y_n^2 \right)$, I need to prove that F = G? Is this correct? Actually, $G\subseteq F$ and the inclusion can be strict. Then I can do something like this: $E \left( X_{n+1} | G \right) = E \left( E \left( X_{n+1} | F \right) | G \right) = E \left( X_{n+1} | F \right) = X_{n+1} $. The third equality is wrong. When $G\subseteq F$, $E \left( E \left( X_{n+1} | F \right) | G \right)= E \left( X_{n+1} | G \right) \ne E \left( X_{n+1} | F \right)$ in general hence this proves nothing. Here is a counterexample to the statement you are trying to show: if $(Y_k)$ is an i.i.d. Bernoulli sequence, then $Y_n^2=1$ almost surely hence $E \left( X_{n+1} | Y_0^2,\ldots,Y_n^2 \right) = E(X_{n+1})\ne X_n$ in general. Also is $E \left( X_{n+1}^2 | G \right) = X_n^2$ (a martingale) given $ E \left( X_{n+1} | Y_0,\ldots,Y_n \right) = X_n. $ No. Actually, by convexity, $E \left( X_{n+1}^2 | G \right) \geqslant E \left( X_{n+1}| G \right)^2= X_n^2$ and the equality holds if and only if $X_{n+1}$ is measurable with respect to $G$.
If $X_{n+1}$ is a martingale subject to $Y_0,\ldots,Y_n$, then is it a martingale with respect to $Y Prove or disprove $ E \left( X_{n+1} | Y_0^2,\ldots,Y_n^2 \right) = X_n $ I am thinking that if $F=\sigma \left(Y_0,\ldots,Y_n \right)$ and $G=\sigma \left(Y_0^2,\ldots,Y_n^2 \right)$, I need to p
50,643
If $X_{n+1}$ is a martingale subject to $Y_0,\ldots,Y_n$, then is it a martingale with respect to $Y_0^2,\ldots,Y_n^2$?
$G \subseteq F$ If we know $Y_0, Y_1, ...$, then we know $Y_0^2, Y_1^2, ...$ The converse is not true. $E[X|G] = E[E[X|F]|G]$, but $E[X|F] \ne E[E[X|G]|F]$. $E[X|G] = E[E[X|F]|G]$ allows you to prove the converse of your conjecture is true.
If $X_{n+1}$ is a martingale subject to $Y_0,\ldots,Y_n$, then is it a martingale with respect to $Y
$G \subseteq F$ If we know $Y_0, Y_1, ...$, then we know $Y_0^2, Y_1^2, ...$ The converse is not true. $E[X|G] = E[E[X|F]|G]$, but $E[X|F] \ne E[E[X|G]|F]$. $E[X|G] = E[E[X|F]|G]$ allows you to prove
If $X_{n+1}$ is a martingale subject to $Y_0,\ldots,Y_n$, then is it a martingale with respect to $Y_0^2,\ldots,Y_n^2$? $G \subseteq F$ If we know $Y_0, Y_1, ...$, then we know $Y_0^2, Y_1^2, ...$ The converse is not true. $E[X|G] = E[E[X|F]|G]$, but $E[X|F] \ne E[E[X|G]|F]$. $E[X|G] = E[E[X|F]|G]$ allows you to prove the converse of your conjecture is true.
If $X_{n+1}$ is a martingale subject to $Y_0,\ldots,Y_n$, then is it a martingale with respect to $Y $G \subseteq F$ If we know $Y_0, Y_1, ...$, then we know $Y_0^2, Y_1^2, ...$ The converse is not true. $E[X|G] = E[E[X|F]|G]$, but $E[X|F] \ne E[E[X|G]|F]$. $E[X|G] = E[E[X|F]|G]$ allows you to prove
50,644
Quantitative results of cluster analysis
How are the data sets related? IF both data sets are drawn from the same distribution (they describe the same problem) then you can use the labeled set as a "test set" for the clustering. Basically you treat the clustering algorithm as a classifier. The only problem is that you must find a match between the output of the clustering algorithm and the actual labels. You might use some simple matching (ex: instances labeled GREEN are more often clustered in cluster 2 and BLUE in cluster 1, so cluster 1 == BLUE and cluster 2 == GREEN). More elegantly you can compute the Mutual Information between the clustering output and actual labels. Mutual Information has a nice property: one doesn't need to know the exact matching. MI will give high scores if most of the assignments are consistent. Think of it as a correlation coefficient for the (cluster <-> actual label) relation. Also check http://en.wikipedia.org/wiki/Cluster_analysis for some measures. The key phrase there is: [...] clustering results are evaluated based on data that was not used for clustering, such as known class labels and external benchmarks. Such benchmarks consist of a set of pre-classified items, and these sets are often created by human (experts). Thus, the benchmark sets can be thought of as a gold standard for evaluation. For ROC usually one needs some "a posteriori" probability, outputted by the classifier, but in your case, the distance between the instance and the cluster center will work. Keep in mind that ROC is computed for a specific label at a time (i.e. one vs all). So for 5 labels you will get 4 independent AUROC values. IMHO I strongly advise you to do the CV for clustering if you have labeled data! Iterate it several times and use the mean of your measure as the performance. I would also try this: Use some percent (66% usually) of unlabeled data to perform clustering, measure performance using labeled data, repeat the experiment with different randomization (usually 5-10 times) and report mean performance. Unfortunately I don't know if this method will give a good estimate of your real performance. It is possible that this will overfit the labeled data set. This is not a textbook approach, so use it with caution.
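As a small illustration of the mutual-information idea, here is a base-R sketch; the label and cluster vectors are invented toy data. It builds the cluster-by-label contingency table and computes MI directly from it, with no explicit cluster-to-label matching.

# Toy illustration: scoring a clustering against known labels via mutual information
labels   <- factor(c("GREEN", "GREEN", "BLUE", "BLUE", "GREEN", "BLUE", "GREEN", "BLUE"))
clusters <- factor(c(2, 2, 1, 1, 2, 1, 2, 1))
tab  <- table(clusters, labels)        # cluster x label contingency table
p_xy <- tab / sum(tab)                 # joint distribution
p_x  <- rowSums(p_xy); p_y <- colSums(p_xy)
mi <- sum(ifelse(p_xy > 0, p_xy * log(p_xy / outer(p_x, p_y)), 0))
mi   # high values mean clusters and labels agree, without an explicit matching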
Quantitative results of cluster analysis
How are the data sets related? IF both data sets are drawn from the same distribution (they describe the same problem) than you can use the labeled set as a "test set" for the clustering. Basically yo
Quantitative results of cluster analysis How are the data sets related? IF both data sets are drawn from the same distribution (they describe the same problem) than you can use the labeled set as a "test set" for the clustering. Basically you treat the clustering algorithm as a classifier. The only problem is that you must find a match between the output of the clustering algorithm and the actual labels. You might use some simple matching (ex: instances labeled GREEN are more often clustered in cluster 2 and BLUE in cluster 1 so cluster 1== BLUE and cluster 2 == GREEN). More elegantly you can compute the Mutual Information between the clustering output and actual labels. Mutual Information has a nice property, that one doesn't need to know the exact matching. MI will give high scores if most of the matching are consistent. Think of it as a correlation coefficient between (cluster <-> actual label) relation. Also check http://en.wikipedia.org/wiki/Cluster_analysis for some measures. The key phrase there is: [...] clustering results are evaluated based on data that was not used for clustering, such as known class labels and external benchmarks. Such benchmarks consist of a set of pre-classified items, and these sets are often created by human (experts). Thus, the benchmark sets can be thought of as a gold standard for evaluation. For ROC usually one needs some "a posteriori" probability, outputted by the classifier, but in your case, the distance between the instance and the cluster center will work. Keep in mind that ROC is computed for a specific label at a time (i.e. one vs all). So for 5 labels you will get 4 independent AUROC values. IMHO I strongly advise yo to do the CV for clustering if you have labeled data! Iterate it several times and use the mean of your measure as the performance. I would also try this: Use some percent (66% usually) of unlabeled data to perform clustering, measure performance using labeled data, repeat the experiment with different randomization (usually 5-10 times) and report mean performance. Unfortunately I don't know if this method will give a good estimate of your real performance. Is it possible that will overfit the labeled data set. This is not a textbook approach, so, use it with caution.
Quantitative results of cluster analysis How are the data sets related? IF both data sets are drawn from the same distribution (they describe the same problem) than you can use the labeled set as a "test set" for the clustering. Basically yo
50,645
Is adjusted R-squared appropriate to compare models with different response variables?
I believe using R2 or adjusted R2 is okay in your case. The point is that we should not use the RSE (Residual Standard Error) when the scales are different. This is because both R2 and adjusted R2 are normalized quantities with a maximum value of 1, whereas the RSE is not normalized. Ref: https://datastoriesweb.wordpress.com/2017/01/15/interpreting-statistical-values/
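A quick way to see the scale point in R (simulated toy data; this only illustrates that R2 is unit-free while the RSE is not, and does not by itself settle whether R2 values for genuinely different response variables are comparable):

# Rescaling the response changes the RSE but not R2
set.seed(42)
x <- rnorm(100); y <- 2 * x + rnorm(100)
fit1 <- lm(y ~ x)             # response in original units
fit2 <- lm(I(1000 * y) ~ x)   # same response in different units
c(summary(fit1)$r.squared, summary(fit2)$r.squared)   # identical: R2 is unit-free
c(summary(fit1)$sigma,     summary(fit2)$sigma)       # RSE scales with the units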
Is adjusted R-squared appropriate to compare models with different response variables?
I believe using R2 or adjusted R2 is okay in your case. Fact is that we should not use RSE(Residue Standard Error) when the scale is different. This is because both R2 and adjusted R2 are normalized q
Is adjusted R-squared appropriate to compare models with different response variables? I believe using R2 or adjusted R2 is okay in your case. Fact is that we should not use RSE(Residue Standard Error) when the scale is different. This is because both R2 and adjusted R2 are normalized quantity having maximum value of 1, but the RSE is not normalized. Ref : https://datastoriesweb.wordpress.com/2017/01/15/interpreting-statistical-values/
Is adjusted R-squared appropriate to compare models with different response variables? I believe using R2 or adjusted R2 is okay in your case. Fact is that we should not use RSE(Residue Standard Error) when the scale is different. This is because both R2 and adjusted R2 are normalized q
50,646
Probability of pairwise difference of samples from distribution with finite support
OP wrote: Let $X_1, X_2, \dots, X_N$ be i.i.d. continuous random variables with support $[0, 1]$. What is a reasonable bound on the probability that some pair of random variables is less than ϵ apart? ... interested in the cases of a uniform distribution ... I only need a reasonable probability bound, not the exact probability For the Uniform case: I propose the following approximation: $$\Pr \left( \exists_{i,j:\ i\ne j}\ |X_i - X_j| < \epsilon \right) \approx 1-(1-\epsilon )^{n (n-1)}$$ Performance Here is a quick comparison of the proposed APPROXIMATE solution $1-(1-\epsilon )^{n (n-1)}$ posited here, ... compared to the 'actual' probability calculated via Monte Carlo simulations (in each case, 500,000 samples of size $n$): Case 1: $\epsilon = 0.01$ and $n = 2, 3, 4, 5, 8, 16$ Approx: {0.0199, 0.0585199, 0.113615, 0.182093, 0.430399, 0.910371} Monte: {0.019478, 0.058606, 0.115204, 0.184624, 0.441432, 0.92556} Case 2: $\epsilon = 0.03$ and $n = 2, 3, 4, 5, 8, 16$ Approx: {0.0591, 0.167028, 0.306158, 0.456206, 0.818358, 0.999331} Monte: {0.05871, 0.16875, 0.315202, 0.473064, 0.848328, 0.999938} Case 3: $\epsilon = 0.05$ and $n = 2, 3, 4, 5, 8, 16$ Approx: {0.0975, 0.264908, 0.45964, 0.641514, 0.943438, 0.999995} Monte: {0.097572, 0.269868, 0.479214, 0.672386, 0.967794, 1.} Case 4: $\epsilon = 0.1$ and $n = 2, 3, 4, 5, 8, 16$ Approx: {0.19, 0.468559, 0.71757, 0.878423, 0.997261, 1.} Monte: {0.19069, 0.487214, 0.759996, 0.92185, 0.999922, 1.} The performance seems surprisingly good for such a simple approximation. I would be interested to know if any better approximations exist (published or otherwise). Derivation We are given $X$ ~ Uniform(0,1) with pdf $f(x)$: Let $(X_1, X_2, \dots, X_n)$ denote a random sample of size $n$ drawn on $X$, and let $(X_{(1)}, X_{(2)}, \dots, X_{(n)})$ be the order statistics, such that $(X_{(1)} < X_{(2)} < \dots < X_{(n)})$. The joint pdf of the order statistics $X_{(r)}$ and $X_{(s)}$, for $r < s$, is say $g(x_{(r)},x_{(s)})$: where OrderStat is a mathStatica function which I am using to automate the mechanical aspects of the calculations. We are interested to find a pair of random variables that are so close to each other that they are separated by less than $\epsilon$. The two closest random variables in the sample $(X_1, X_2, \dots, X_n)$ must necessarily be adjoining order statistics, say $X_{(r)}$ and $X_{(r+1)}$. Replacing $s$ with $r + 1$ in the previous result simplifies the joint pdf to: Then $P((X_{(r+1)} - X_{(r)}) < \epsilon)$ is: Note that there is no need to specify the probability using absolute values, as we are working with the ordered sample. Thus far, we have calculated the probability that the distance between the $r$th and $(r+1)$th adjoining order statistics is smaller than $\epsilon$. But we do not have only 1 such chance ... there are $(n-1)$ such combinations of adjacent ordered statistics to choose from, ... which suggests the following approximation, as a sort of geometric-style modification to the previous result: $$\approx 1-(1-\epsilon )^{n (n-1)}$$ The above accuracy / performance comparison suggests it works surprisingly well. In the case of $n=2$, there is no approximation, and the result should be theoretically exact.
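A quick Monte Carlo check of the proposed approximation in R (the sample sizes and epsilon below are arbitrary choices for illustration):

# Check 1 - (1 - eps)^(n*(n-1)) against simulation for Uniform(0,1) samples
approx_prob <- function(n, eps) 1 - (1 - eps)^(n * (n - 1))
mc_prob <- function(n, eps, reps = 50000)
  mean(replicate(reps, min(diff(sort(runif(n)))) < eps))
set.seed(1)
for (n in c(3, 5, 8))
  cat(sprintf("n = %g, eps = 0.03: approx = %.3f, MC = %.3f\n",
              n, approx_prob(n, 0.03), mc_prob(n, 0.03)))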
Probability of pairwise difference of samples from distribution with finite support
OP wrote: Let $X_1, X_2, \dots, X_N$ be i.i.d. continuous random variables with support $[0, 1]$. What is a reasonable bound on the probability that some pair of random variables is less than ϵ ap
Probability of pairwise difference of samples from distribution with finite support OP wrote: Let $X_1, X_2, \dots, X_N$ be i.i.d. continuous random variables with support $[0, 1]$. What is a reasonable bound on the probability that some pair of random variables is less than ϵ apart? ... interested in the cases of a uniform distribution ... I only need a reasonable probability bound, not the exact probability For the Uniform case: I propose the following approximation: $$\Pr \left( \exists_{i,j:\ i\ne j}\ |X_i - X_j| < \epsilon \right) \approx 1-(1-\epsilon )^{n (n-1)}$$ Performance Here is a quick comparison of the proposed APPROXIMATE solution $1-(1-\epsilon )^{n (n-1)}$ posited here, ... compared to the 'actual' probability calculated via Monte Carlo simulations (in each case, 500,000 samples of size $n$): Case 1: $\epsilon = 0.01$ and $n = 2, 3, 4, 5, 8, 16$ Approx: {0.0199, 0.0585199, 0.113615, 0.182093, 0.430399, 0.910371} Monte: {0.019478, 0.058606, 0.115204, 0.184624, 0.441432, 0.92556} Case 2: $\epsilon = 0.03$ and $n = 2, 3, 4, 5, 8, 16$ Approx: {0.0591, 0.167028, 0.306158, 0.456206, 0.818358, 0.999331} Monte: {0.05871, 0.16875, 0.315202, 0.473064, 0.848328, 0.999938} Case 3: $\epsilon = 0.05$ and $n = 2, 3, 4, 5, 8, 16$ Approx: {0.0975, 0.264908, 0.45964, 0.641514, 0.943438, 0.999995} Monte: {0.097572, 0.269868, 0.479214, 0.672386, 0.967794, 1.} Case 4: $\epsilon = 0.1$ and $n = 2, 3, 4, 5, 8, 16$ Approx: {0.19, 0.468559, 0.71757, 0.878423, 0.997261, 1.} Monte: {0.19069, 0.487214, 0.759996, 0.92185, 0.999922, 1.} The performance seems surprisingly good for such a simple approximation. I would be interested to know if any better approximations exist (published or otherwise). Derivation We are given $X$ ~ Uniform(0,1) with pdf $f(x)$: Let $(X_1, X_2, \dots, X_n)$ denote a random sample of size $n$ drawn on $X$, and let $(X_{(1)}, X_{(2)}, \dots, X_{(n)})$ be the order statistics, such that $(X_{(1)} < X_{(2)} < \dots < X_{(n)})$. The joint pdf of the order statistics $X_{(r)}$ and $X_{(s)}$, for $r < s$, is say $g(x_{(r)},x_{(s)})$: where OrderStat is a mathStatica function which I am using to automate the mechanical aspects of the calculations. We are interested to find a pair of random variables that are so close to each other that they are separated by less than $\epsilon$. The two closest random variables in the sample $(X_1, X_2, \dots, X_n)$ must necessarily be adjoining order statistics, say $X_{(r)}$ and $X_{(r+1)}$. Replacing $s$ with $r + 1$ in the previous result simplifies the joint pdf to: Then $P((X_{(r+1)} - X_{(r)}) < \epsilon)$ is: Note that there is no need to specify the probability using absolute values, as we are working with the ordered sample. Thus far, we have calculated the probability that the distance between the $r$th and $(r+1)$th adjoining order statistics is smaller than $\epsilon$. But we do not have only 1 such chance ... there are $(n-1)$ such combinations of adjacent ordered statistics to choose from, ... which suggests the following approximation, as a sort of geometric-style modification to the previous result: $$\approx 1-(1-\epsilon )^{n (n-1)}$$ The above accuracy / performance comparison suggests it works surprisingly well. In the case of $n=2$, there is no approximation, and the result should be theoretically exact.
Probability of pairwise difference of samples from distribution with finite support OP wrote: Let $X_1, X_2, \dots, X_N$ be i.i.d. continuous random variables with support $[0, 1]$. What is a reasonable bound on the probability that some pair of random variables is less than ϵ ap
50,647
How does this remove autocorrelation?
As @Analyst pointed out, the inclusion of lagged dependent variables excludes one source of regression error autocorrelation. The autocorrelation can still be present even if the lags of the dependent variable are included. Here is the mathematical illustration. Suppose the true model is the following $$Y_t=\alpha+\beta_0X_t+\beta_1X_{t-1}+u_t,$$ where $E(u_t|u_{t-1},\ldots,X_t,X_{t-1})=0$, meaning that $u_t$ is not autocorrelated and is not correlated with the regressors. Suppose you are estimating the model $$Y_t=\alpha+\beta_0X_t+v_t$$ then $$EX_tv_t=EX_tu_t+\beta_1EX_tX_{t-1}$$ Now if $EX_tX_{t-1}\neq 0$ then you have the omitted variables problem and the autocorrelation is the least of your worries, since the OLS estimates in this case are inconsistent. Now if $X_t$ is not autocorrelated then $EX_tv_t=0$ and OLS estimates are consistent and asymptotically normal (if $Eu_t^2<\infty$ and $EX_t^2<\infty$). But \begin{align*} Ev_tv_{t-1}&=E(u_t+\beta_1X_{t-1})(u_{t-1}+\beta_1X_{t-2})\\ &=Eu_tu_{t-1}+\beta_1Eu_{t-1}X_{t-1}+\beta_1Eu_tX_{t-2}+\beta_1^2EX_{t-1}X_{t-2}\\ &=\beta_1Eu_tX_{t-2} \end{align*} and this might be non-zero, giving the autocorrelation problem. So to sum up, the claim in the citation is not entirely correct. If lags are omitted this can lead to omitted variable bias, and that is the first reason to include them.
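Here is a small simulation sketch in R (all parameter values are arbitrary). The regressor is autocorrelated, so omitting its lag produces both omitted-variable bias and visibly autocorrelated residuals, which is the situation the answer warns about first.

# True model uses X_t and X_{t-1}; the misspecified model omits the lag
set.seed(1)
n   <- 500
x   <- as.numeric(arima.sim(list(ar = 0.7), n = n + 1))  # autocorrelated regressor
xt  <- x[-1]         # X_t
xt1 <- x[-(n + 1)]   # X_{t-1}
y   <- 1 + 2 * xt + 1.5 * xt1 + rnorm(n)

full    <- lm(y ~ xt + xt1)   # correctly specified
omitted <- lm(y ~ xt)         # lag of X omitted

c(full    = acf(resid(full),    plot = FALSE)$acf[2],    # lag-1 residual autocorrelation
  omitted = acf(resid(omitted), plot = FALSE)$acf[2])
coef(omitted)["xt"]   # biased: roughly 2 + 1.5 * 0.7 instead of the true value 2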
How does this remove autocorrelation?
As @Analyst pointed out the inclusion of lagged dependent variables excludes one source of regression error autocorrelation. The autocorrelation can still be present if the lags of dependent variables
How does this remove autocorrelation? As @Analyst pointed out the inclusion of lagged dependent variables excludes one source of regression error autocorrelation. The autocorrelation can still be present if the lags of dependent variables are included. Here is the mathematical illustration. Suppose the true model is the following $$Y_t=\alpha+\beta_0X_t+\beta_1X_{t-1}+u_t,$$ where $Eu_t|(u_{t-1},...,X_t,X_{t-1})=0$, meaning that $u_t$ is not autocorrelated and it does not correlated with the regressors. Suppose you are estimating the model $$Y_t=\alpha+\beta_0X_t+v_t$$ then $$EX_tv_t=EX_tu_t+\beta_1EX_tX_{t-1}$$ Now if $EX_tX_{t-1}\neq 0$ then you have the ommited variables problem and the autocorellation is the least of your worries, since the OLS estimates in this case are inconsistent. Now if $X_t$ is not autocorrelated then $EX_tv_t=0$ and OLS estimates are consistent and asymptotically normal (if $Eu_t^2<\infty$ and $EX_t^2<\infty$). But \begin{align*} Ev_tv_{t-1}&=E(u_t+\beta_1X_{t-1})(u_{t-1}+\beta_1X_{t-2})\\ &=Eu_tu_{t-1}+\beta_1Eu_{t-1}X_{t-1}+\beta_1Eu_tX_{t-2}+\beta_1EX_{t-1}X_{t-2}\\ &=\beta_1Eu_tX_{t-2} \end{align*} and this might be non zero giving the autocorrelation problem. So to sum up the claim in the citation is not entirely correct. If lags are omitted this can lead to omitted variable bias and that is the first reason to include them.
How does this remove autocorrelation? As @Analyst pointed out the inclusion of lagged dependent variables excludes one source of regression error autocorrelation. The autocorrelation can still be present if the lags of dependent variables
50,648
How does this remove autocorrelation?
Often, not including lagged values of the dependent variable or of the independent variables when they should have been included will induce an autocorrelation structure in the residuals.
How does this remove autocorrelation?
Often not including lagged values of dependent variable or independent variables will induce autocorrelation structure in residuals when these values should have been included.
How does this remove autocorrelation? Often not including lagged values of dependent variable or independent variables will induce autocorrelation structure in residuals when these values should have been included.
How does this remove autocorrelation? Often not including lagged values of dependent variable or independent variables will induce autocorrelation structure in residuals when these values should have been included.
50,649
Can I perform Cox regression on left truncated records?
First a disclaimer: I've never had to use the time start/end variable in this way and although I'm familiar with mixed effects models I have never really had to use them IRL. Feel free to correct me if I've made a mistake The problem consists out of two things as I see it: One person can occur multiple times. This puts the observations independence into question. A person may enter and exit the cohort at risk at different times throughout the study, i.e. this is an open (dynamic) cohort. For the first point I think using a mixed effects model is a must. I use R and there is a coxme package recently developed by prof. Therneau. The vignette documentation is excellent and it seems easy to deploy. For the second point you just need to add the start and end point to the survival object. This is fairly easy in R although I have never had to use it myself. Below is an example that should work: # Set the event yes/no (1/0) df$event <- !is.na(df$IpAdmit) # Those that lack a date should have one df$discharged <- !is.na(df$start) df$start[is.na(df$start)] <- as.Date("2010-01-01") df$end[is.na(df$end)] <- as.Date("2011-12-31") # Can be merged into one step without the sv variable sv <- Surv(time=df$start, time2=df$end, event=df$event) # A model where the medication possession ratio (compliance) interacts with # the fact that a patient has been discharged coxme(sv ~ discharged*MPR + age + sex + (1|MemberID), data=df) You might want to consider what you want to achieve with the cox regression model in this case. I am not sure that hazards make sense in this setting, although this is very difficult to know without going through the full study protocol. Make sure that others have used cox regressions in similar settings prior to this analysis. It seems to me that a good alternative would be a mixed effect logistic regression where you simply use odds for admission and add the number of days at risk as a predictor, preferably as a natural spline or something that allows a non-linear relationship. Minor update from the discussion When it comes to time-dependant covariates I have found this to be a little tricky when trying to deploy. I had a CV-question a while ago on this subject that you may want to look into. As I wrote in the comments, in the end the time dependence was a little more than I could conveniently display and explain to my colleagues. Furthermore the model was not strongly affected by this effect so I dropped it and switched to an early and late dataset. I recommend you consider who your audience is and if the time-varying coefficients will add that much to the model. You have a potentially very serious problem where some patients start their period discharged from the hospital while some are untainted. I think you need to think about possible effect modification between these two groups - do they belong to the same population or not? It is easy to make a case that medication-compliance has a much bigger admission-avoidance impact in the discharged population. I think you at least should have a variable indicating if the patient has started a period straight after hospitalization or not (I've added an example in the code). I have recently done a medication adherence study, if you haven't read this article I strongly recommend it. In my study I was also able to deduce from the prescription text 94 % of the cases using Python's very powerful regular expressions. 
I'm planning on doing a post on my blog once the article gets published. The text interpretation is in Swedish, but you can very easily reuse the structure since most prescriptions follow a similar pattern (let me know if this would be useful and I can write up the post a little earlier). The advantage is that you can identify exactly when a patient is expected to be without medication, because you will probably find a very close relationship between that and readmission.
Can I perform Cox regression on left truncated records?
First a disclaimer: I've never had to use the time start/end variable in this way and although I'm familiar with mixed effects models I have never really had to use them IRL. Feel free to correct me i
Can I perform Cox regression on left truncated records? First a disclaimer: I've never had to use the time start/end variable in this way and although I'm familiar with mixed effects models I have never really had to use them IRL. Feel free to correct me if I've made a mistake The problem consists out of two things as I see it: One person can occur multiple times. This puts the observations independence into question. A person may enter and exit the cohort at risk at different times throughout the study, i.e. this is an open (dynamic) cohort. For the first point I think using a mixed effects model is a must. I use R and there is a coxme package recently developed by prof. Therneau. The vignette documentation is excellent and it seems easy to deploy. For the second point you just need to add the start and end point to the survival object. This is fairly easy in R although I have never had to use it myself. Below is an example that should work: # Set the event yes/no (1/0) df$event <- !is.na(df$IpAdmit) # Those that lack a date should have one df$discharged <- !is.na(df$start) df$start[is.na(df$start)] <- as.Date("2010-01-01") df$end[is.na(df$end)] <- as.Date("2011-12-31") # Can be merged into one step without the sv variable sv <- Surv(time=df$start, time2=df$end, event=df$event) # A model where the medication possession ratio (compliance) interacts with # the fact that a patient has been discharged coxme(sv ~ discharged*MPR + age + sex + (1|MemberID), data=df) You might want to consider what you want to achieve with the cox regression model in this case. I am not sure that hazards make sense in this setting, although this is very difficult to know without going through the full study protocol. Make sure that others have used cox regressions in similar settings prior to this analysis. It seems to me that a good alternative would be a mixed effect logistic regression where you simply use odds for admission and add the number of days at risk as a predictor, preferably as a natural spline or something that allows a non-linear relationship. Minor update from the discussion When it comes to time-dependant covariates I have found this to be a little tricky when trying to deploy. I had a CV-question a while ago on this subject that you may want to look into. As I wrote in the comments, in the end the time dependence was a little more than I could conveniently display and explain to my colleagues. Furthermore the model was not strongly affected by this effect so I dropped it and switched to an early and late dataset. I recommend you consider who your audience is and if the time-varying coefficients will add that much to the model. You have a potentially very serious problem where some patients start their period discharged from the hospital while some are untainted. I think you need to think about possible effect modification between these two groups - do they belong to the same population or not? It is easy to make a case that medication-compliance has a much bigger admission-avoidance impact in the discharged population. I think you at least should have a variable indicating if the patient has started a period straight after hospitalization or not (I've added an example in the code). I have recently done a medication adherence study, if you haven't read this article I strongly recommend it. In my study I was also able to deduce from the prescription text 94 % of the cases using Python's very powerful regular expressions. 
I'm planning on doing a post on my blog once the article gets published, the text interpretation is in Swedish but you can very easily use the structure as most prescriptions follow a similar pattern (let me know if this would be useful and I can write up the post a little earlier). The advantage is that you want to identify exactly when a patient is expected to be without medication because you will probably have a very close relationship between that and readmission.
Can I perform Cox regression on left truncated records? First a disclaimer: I've never had to use the time start/end variable in this way and although I'm familiar with mixed effects models I have never really had to use them IRL. Feel free to correct me i
50,650
Computing the steady state probability vector of a random walk on $\{0, 1, \dots, n\}$
You can solve your second equation to give $\pi(1)=\frac{a}{1-p} \pi(0)$, then your third to give $\pi(2)=\frac{ap}{(1-p)^2} \pi(0)$, then your first to give $\pi(x)=\frac{a}{p}\left(\frac{p}{1-p}\right)^x \pi(0)$ for $0 \lt x \lt n$, and finally your fourth to give $\pi(n)=\frac{p}{b} \pi(n-1) = \frac{a}{b}\left(\frac{p}{1-p}\right)^{n-1} \pi(0)$. Your fifth equation is not independent of the others. You can now add up the terms, noting the geometric progression in the middle, and set the sum equal to $1$ to solve for $\pi(0)$ and thus find all the values of $\pi(x)$.
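If you want to check such hand calculations numerically, here is a generic R sketch. It assumes the birth-death chain implied by the balance equations above (move up with probability a from state 0, up with p and down with 1 - p in the interior, down with b from state n); the numeric values of a, b, p and n are made up.

# Numerical stationary distribution vs. the closed form derived above
a <- 0.3; b <- 0.4; p <- 0.5; n <- 4
P <- matrix(0, n + 1, n + 1)                      # states 0..n in rows/cols 1..n+1
P[1, 1] <- 1 - a; P[1, 2] <- a
for (x in 2:n) { P[x, x - 1] <- 1 - p; P[x, x + 1] <- p }
P[n + 1, n] <- b; P[n + 1, n + 1] <- 1 - b
e <- eigen(t(P))                                  # left eigenvector for eigenvalue 1
pi_num <- Re(e$vectors[, which.min(abs(e$values - 1))])
pi_num <- pi_num / sum(pi_num)
pi_closed <- c(1, (a / p) * (p / (1 - p))^(1:(n - 1)), (a / b) * (p / (1 - p))^(n - 1))
pi_closed <- pi_closed / sum(pi_closed)
rbind(numeric = round(pi_num, 4), closed_form = round(pi_closed, 4))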
Computing the steady state probability vector of a random walk on $\{0, 1, \dots, n\}$
You can solve your second equation to give $\pi(1)=\frac{a}{1-p} \pi(0)$, then your third to give $\pi(2)=\frac{ap}{(1-p)^2} \pi(0)$, then your first to give $\pi(x)=\frac{a}{p}\left(\frac{p}{1-p}\rig
Computing the steady state probability vector of a random walk on $\{0, 1, \dots, n\}$ You can solve your second equation to give $\pi(1)=\frac{a}{1-p} \pi(0)$, then your third to give $\pi(2)=\frac{ap}{(1-p)^2} \pi(0)$, then your first to give $\pi(x)=\frac{a}{p}\left(\frac{p}{1-p}\right)^x \pi(0)$ for $0 \lt x \lt n$, and finally your fourth to give $\pi(n)=\frac{p}{b} \pi(n-1) = \frac{a}{b}\left(\frac{p}{1-p}\right)^{n-1} \pi(0)$. Your fifth equation is not independent of the others. You can now add up the terms, noting the geometric progression in the middle, and set the sum equal to $1$ to solve for $\pi(0)$ and thus find all the values of $\pi(x)$.
Computing the steady state probability vector of a random walk on $\{0, 1, \dots, n\}$ You can solve your second equation to give $\pi(1)=\frac{a}{1-p} \pi(0)$, then your third to give $\pi(2)=\frac{ap}{(1-p)^2} \pi(0)$, then your first to give $\pi(x)=\frac{a}{p}\left(\frac{p}{1-p}\rig
50,651
Computing the steady state probability vector of a random walk on $\{0, 1, \dots, n\}$
Here are the generic steps to solve these problems. The first computations may be difficult or time consuming, so do not hesitate to use online symbolic equation solvers like wolframalpha to gain time. Start by solving the case $n=3$ to compute explicitly $\pi_3$ and try to guess what will be the general form of $\pi_n$ for larger $n$. If the guessing step is not trivial, you should try to solve the case $n=4$ and understand how to pass from $\pi_3$ to $\pi_4$. Prove by induction on $n$ that $\pi_n$ verifies $\pi_n \mathbf{P}_n = \pi_n$, where $\mathbf{P}_n$ is the transition matrix of your process. Prove the ergodicity of your Markov chain in order to claim that $\pi_n$ is the unique steady state distribution.
Computing the steady state probability vector of a random walk on $\{0, 1, \dots, n\}$
Here are the generic steps to solve these problems. The first computations may be difficult or time consuming, so do not hesitate to use online symbolic equation solvers like wolframalpha to gain time
Computing the steady state probability vector of a random walk on $\{0, 1, \dots, n\}$ Here are the generic steps to solve these problems. The first computations may be difficult or time consuming, so do not hesitate to use online symbolic equation solvers like wolframalpha to gain time. Start by solving the case $n=3$ to compute explicitly $\pi_3$ and try to guess what will be the general form of $\pi_n$ for larger $n$. If the guessing step is not trivial, you should try to solve the case $n=4$ and understand how to pass from $\pi_3$ to $\pi_4$. Prove by induction on $n$ that $\pi_n$ verifies $\pi_n \mathbf{P}_n = \pi_n$, where $\mathbf{P}_n$ is the transition matrix of your process. Prove the ergodicity of your Markov chain in order to claim that $\pi_n$ is the unique steady state distribution.
Computing the steady state probability vector of a random walk on $\{0, 1, \dots, n\}$ Here are the generic steps to solve these problems. The first computations may be difficult or time consuming, so do not hesitate to use online symbolic equation solvers like wolframalpha to gain time
50,652
How to compare rates of occurence in consecutive time series count data?
To keep things really simple, you could consider using a simple mean/standard deviation inspired ratio, a bit like a z-score? If you assume that the counts for two days, $X_1$ and $X_2$, are Poisson random samples with means $\lambda_1$ and $\lambda_2$ respectively, then the change in word count follows a Skellam distribution, with mean $\lambda_2-\lambda_1$ and variance $\lambda_2+\lambda_1$. Taking simple point estimates, I think it would therefore be reasonable to construct: $\mathrm{Score} = \frac{X_2 - X_1}{\sqrt{X_2+X_1}}$ So in your example, $\mathrm{Score_{dog}} = \frac{45}{\sqrt{135}} = 3.87$ $\mathrm{Score_{cat}} = \frac{2}{\sqrt{6}} = 0.816$ You could consider more difficult inferences if you have a strong idea what you really want to detect, but based on your description I think the above will be nice and simple and capture roughly the behaviour you want.
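For completeness, the score as a one-liner in R; 45 -> 90 and 2 -> 4 are the count pairs consistent with the worked example above.

change_score <- function(x1, x2) (x2 - x1) / sqrt(x2 + x1)   # Skellam-motivated z-like score
change_score(45, 90)   # "dog": about 3.87
change_score(2, 4)     # "cat": about 0.82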
How to compare rates of occurence in consecutive time series count data?
To keep things really simple, you could consider using a simple mean/standard deviation inspired ratio, a bit like a z-score? If you assume that the counts for two days, $X_1$ and $X_2$ are Poisson ra
How to compare rates of occurence in consecutive time series count data? To keep things really simple, you could consider using a simple mean/standard deviation inspired ratio, a bit like a z-score? If you assume that the counts for two days, $X_1$ and $X_2$ are Poisson random samples with $\lambda_1$ and $\lambda_2$ respectively, then the change in word count follows a Skellam distribution, with mean $\lambda_2-\lambda_1$ and variance $\lambda_2+\lambda_1$ Taking simple point estimates, I think it would therefore be reasonable to construct: $\mathrm{Score} = \frac{X_2 - X_1}{\sqrt{X_2+X_1}}$ So in your example, $\mathrm{Score_{dog}} = \frac{45}{\sqrt{135}} = 3.87$ $\mathrm{Score_{cat}} = \frac{2}{\sqrt{6}} = 0.816$ You could consider more difficult inferences if you have a strong idea what your really want to detect, but based on your description I think the above will be nice and simple and capture roughly the behaviour you want.
How to compare rates of occurence in consecutive time series count data? To keep things really simple, you could consider using a simple mean/standard deviation inspired ratio, a bit like a z-score? If you assume that the counts for two days, $X_1$ and $X_2$ are Poisson ra
50,653
Laplace distribution and, generally, interpreting an undefined moment
I was incorrectly using the moment generating function which led to my misunderstanding of the Laplace distribution. The moment generating function is $M_X(\theta) = \text{E}(e^{\theta X})$. When you use that to find the $n^{\text{th}}$ moment, you take the $n^{\text{th}}$ derivative at $\theta=0$: $$\frac{d^{n}(M_X(\theta))}{d(\theta)^{n}} |_{\theta=0}\quad\text{.}$$ If you see my note above, there is a proof using the Taylor series expansion of $E(e^{\theta X})$. When you take the $n$-th derivative, the leading term will not have a $\theta$ in it, but higher order terms will have $\theta$. This allows you to set $\theta=0$ and use the moment generating function to produce moments. So for the Laplace we have $E(e^{\theta X}) = e^{\mu\theta}/(1-b^{2}\theta^{2})$ (from Wikipedia), so $E(X) = \frac{dM_X(\theta)}{d\theta}\big|_{\theta=0}$, where $\frac{dM_X(\theta)}{d\theta} = (e^{\theta\mu} (\mu + b^2 \theta (2 - \theta \mu)))/(-1 + b^2 \theta^2)^2$; if you evaluate this at $\theta=0$, then you get $E(X) = \mu$ as expected. Now the second part of my question is trying to understand undefined moments. The implication of an undefined moment is that trying to estimate the parameters of the distribution by matching moments will not work, so more advanced techniques (such as maximizing the log-likelihood) are generally required. There is a good discussion of this for the Cauchy distribution, which does not have defined moments; see http://en.wikipedia.org/wiki/Cauchy_distribution As an added thought, in Python there is a symbolic algebra package called sympy that makes evaluating these derivatives, and hence the moments, very simple. There are simple formulas to convert non-central moments to central moments, allowing you to calculate skewness and kurtosis fairly easily for many distributions.
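The same differentiate-then-evaluate-at-zero calculation can also be done in R with the symbolic D() function (the parameter values mu = 3 and b = 2 are arbitrary):

# Symbolic differentiation of the Laplace MGF in R
M  <- expression(exp(mu * t) / (1 - b^2 * t^2))
d1 <- D(M, "t")           # first derivative of the MGF
d2 <- D(d1, "t")          # second derivative
mu <- 3; b <- 2; t <- 0
eval(d1)                  # first moment: mu = 3
eval(d2)                  # second raw moment: mu^2 + 2*b^2 = 17, so the variance is 2*b^2 = 8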
Laplace distribution and, generally, interpreting an undefined moment
I was incorrectly using the moment generating function which led to my misunderstanding of the Laplace distribution. The moment generating function is $M_X(\theta) = \text{E}(e^{\theta X})$. When y
Laplace distribution and, generally, interpreting an undefined moment I was incorrectly using the moment generating function which led to my misunderstanding of the Laplace distribution. The moment generating function is $M_X(\theta) = \text{E}(e^{\theta X})$. When you use that to find the $n^{\text{th}}$ moment, you take the $n^{\text{th}}$ derivative at $\theta=0$: $$\frac{d^{n}(M_X(\theta))}{d(\theta)^{n}} |_{\theta=0}\quad\text{.}$$ If you see my note above there is a proof using Taylor series expansion of $E(e^{\theta X})$. When you take the n-th derivative the leading term will not have a $\theta$ in it, but higher order terms will have $\theta$. This allows you to set $\theta=0$ and use the moment generating function to produce moments. So for the Laplace we have $E(e^{\theta X}) = e^{\mu\theta}/(1-b^{2}\theta^{2})$ (from Wikipedia) $E(X) = d^{1}(M_X(\theta))/d(\theta)^{1} = (e^{\theta\mu} (\mu + b^2 \theta (2 - \theta \mu)))/(-1 + b^2 \theta^2)^2$ if you evaluate this for $\theta=0$, then you get $E(X) = \mu$ as expected. Now the second part of my question is trying to understand undefined moments. The implication of an undefined moment means that trying to estimate the parameters of the distribution by matching moments will generally require more advanced techniques (such as maximizing log-likelihood). There is a good discussion about this for the Cauchy distribution which does not have defined moments see http://en.wikipedia.org/wiki/Cauchy_distribution As an added thought, in Python there is a symbolic algebra package called sympy that makes evaluating moments very simple using symbolic algebra. There are simple formulas to convert non-central moments to central moments, allowing you to calculate skewness and kurtosis fairly easily for many distributions.
Laplace distribution and, generally, interpreting an undefined moment I was incorrectly using the moment generating function which led to my misunderstanding of the Laplace distribution. The moment generating function is $M_X(\theta) = \text{E}(e^{\theta X})$. When y
50,654
How to interpret variation explained by principal coordinates?
In preparing a workshop on ordination techniques, I realized I was having the same difficulty in interpreting the eigenvalues of principal coordinate analysis for the same reasons that have puzzled you (@Paul Igor Costea), so I started digging around for some answers. I have a few books on multivariate statistics that are not for the statistically faint of heart, and occasionally certain explanations get lost in some heavy matrix algebra (not the best for an 101 on ordinations). The best answer I found was actually in an overview of ordination methods for non-experts by Lengendre & Birks 2012 in a chapter of "Tracking Environmental Change using Lake Sediments". The eigenvectors are typically much easier to interpret as they are essentially the coordinates (in reduced space) of a given object along a given axis. The eigenvalues, however, represent: "the variance (not divided by degrees of freedom) of the objects along that axis." (Lengendre & Birks 2012) This is the most concise and precise interpretation I have found. While it is true that PCoA is not computed on a covariance matrix but on a distance matrix, PCoA and PCA are very similar, and the following simple example (from the same chapter) explains the mathematical relationship between the eigenvalues computed by each technique: "From an object-by-variable data matrix Y, compute matrix D of Euclidean distances among the objects. Run PCA using matrix Y and PCoA using matrix D. The eigenvalues of the PCoA of matrix D are proportional to the PCA eigenvalues computed for matrix Y (they differ by the factor (n – 1) [i.e.the degrees of freedom]), while the eigenvectors of the PCoA of D are identical to matrix F [i.e. the matrix of eigenvectors] of the PCA of Y. Normally, one would not compute PCoA on a matrix of Euclidean distances since PCA is a faster method to obtain an ordination of the objects in Y that preserves the Euclidean distance among the objects.This was presented here simply as a way of understanding the relationship between PCA and PCoA in the Euclidean distance case. The real interest of PCoA is to obtain an ordination of the objects from some other form of distance matrix more appropriate to the data at hand — for example, a Steinhaus/Odum/Bray-Curtis distance matrix in the case of assemblage composition data."
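The numerical relationship described in the quote is easy to verify in R with a small made-up data matrix: classical MDS (cmdscale) on Euclidean distances returns eigenvalues equal to (n - 1) times the PCA variances, and the same object coordinates up to the sign of each axis.

# PCoA vs PCA in the Euclidean-distance case
set.seed(1)
Y <- matrix(rnorm(30 * 4), nrow = 30)          # 30 objects, 4 variables (made-up data)
pca  <- prcomp(Y)
pcoa <- cmdscale(dist(Y), k = 3, eig = TRUE)
round(pcoa$eig[1:3], 4)                        # PCoA eigenvalues
round(pca$sdev[1:3]^2 * (nrow(Y) - 1), 4)      # (n - 1) * PCA variances: identical
max(abs(abs(pcoa$points[, 1:3]) - abs(pca$x[, 1:3])))   # coordinates agree up to sign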
How to interpret variation explained by principal coordinates?
In preparing a workshop on ordination techniques, I realized I was having the same difficulty in interpreting the eigenvalues of principal coordinate analysis for the same reasons that have puzzled yo
How to interpret variation explained by principal coordinates? In preparing a workshop on ordination techniques, I realized I was having the same difficulty in interpreting the eigenvalues of principal coordinate analysis for the same reasons that have puzzled you (@Paul Igor Costea), so I started digging around for some answers. I have a few books on multivariate statistics that are not for the statistically faint of heart, and occasionally certain explanations get lost in some heavy matrix algebra (not the best for an 101 on ordinations). The best answer I found was actually in an overview of ordination methods for non-experts by Lengendre & Birks 2012 in a chapter of "Tracking Environmental Change using Lake Sediments". The eigenvectors are typically much easier to interpret as they are essentially the coordinates (in reduced space) of a given object along a given axis. The eigenvalues, however, represent: "the variance (not divided by degrees of freedom) of the objects along that axis." (Lengendre & Birks 2012) This is the most concise and precise interpretation I have found. While it is true that PCoA is not computed on a covariance matrix but on a distance matrix, PCoA and PCA are very similar, and the following simple example (from the same chapter) explains the mathematical relationship between the eigenvalues computed by each technique: "From an object-by-variable data matrix Y, compute matrix D of Euclidean distances among the objects. Run PCA using matrix Y and PCoA using matrix D. The eigenvalues of the PCoA of matrix D are proportional to the PCA eigenvalues computed for matrix Y (they differ by the factor (n – 1) [i.e.the degrees of freedom]), while the eigenvectors of the PCoA of D are identical to matrix F [i.e. the matrix of eigenvectors] of the PCA of Y. Normally, one would not compute PCoA on a matrix of Euclidean distances since PCA is a faster method to obtain an ordination of the objects in Y that preserves the Euclidean distance among the objects.This was presented here simply as a way of understanding the relationship between PCA and PCoA in the Euclidean distance case. The real interest of PCoA is to obtain an ordination of the objects from some other form of distance matrix more appropriate to the data at hand — for example, a Steinhaus/Odum/Bray-Curtis distance matrix in the case of assemblage composition data."
How to interpret variation explained by principal coordinates? In preparing a workshop on ordination techniques, I realized I was having the same difficulty in interpreting the eigenvalues of principal coordinate analysis for the same reasons that have puzzled yo
50,655
The gamma distribution and Poisson processes
The interval between events in a Poisson process is exponentially distributed. If you skip intervals and only count every $k$-th event, then you get a convolution of exponential r.v.'s, which gives you the Erlang distribution (a special case of the Gamma distribution). Is this what you meant in the first paragraph? If your growth process is such that it adds an i.i.d. Gamma-distributed length every time, then you will get a Gamma-distributed length per time step (sums of Gamma r.v.'s with the same scale are again Gamma). But this has little to do with a Poisson process, I'm afraid.
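A quick R check of the Erlang/Gamma statement (the rate and k are chosen arbitrarily): summing k exponential inter-event times reproduces the Gamma(k, rate) quantiles.

set.seed(1)
lambda <- 2; k <- 3
waits <- replicate(10000, sum(rexp(k, rate = lambda)))   # time to the k-th event
qs <- c(0.25, 0.5, 0.75)
rbind(empirical   = quantile(waits, qs),
      theoretical = qgamma(qs, shape = k, rate = lambda))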
The gamma distribution and Poisson processes
Interval between events in a Poisson process is exponentially distributed. If you skip intervals, and only count k-th event, then you get convolution of exponential r.v.'s which gives you Erlang distr
The gamma distribution and Poisson processes Interval between events in a Poisson process is exponentially distributed. If you skip intervals, and only count k-th event, then you get convolution of exponential r.v.'s which gives you Erlang distribution (which is a special case of Gamma distribution). Is this what you meant in the first paragraph? If your growth process is such that it adds an iid Gamma distributed length every time, then you will get a Gamma distributed length per time (sums of Gamma r.v. with same scale is again Gamma). But this has little to do with Poisson process, I'm afraid.
The gamma distribution and Poisson processes Interval between events in a Poisson process is exponentially distributed. If you skip intervals, and only count k-th event, then you get convolution of exponential r.v.'s which gives you Erlang distr
50,656
Bootstrap residuals: Wild vs Block Bootstrap?
Have you read this paper: Cameron, A. C., Gelbach, J. B., & Miller, D. L. (2008). Bootstrap-Based Improvements for Inference with Clustered Errors. Review of Economics and Statistics, 90(3), 414–427. https://doi.org/10.1162/rest.90.3.414 For wild bootstrapping when you suspect clustering, this is probably the most comprehensive review. Essentially yes, you may have to use block wild bootstrapping. Luckily, block wild bootstrapping is implemented in the multiwayvcov (cluster.boot() function) and clusterSEs packages (cluster.wild.glm() function) in R.
Bootstrap residuals: Wild vs Block Bootstrap?
Have you read this paper: Cameron, A. C., Gelbach, J. B., & Miller, D. L. (2008). Bootstrap-Based Improvements for Inference with Clustered Errors. Review of Economics and Statistics, 90(3), 414–427.
Bootstrap residuals: Wild vs Block Bootstrap? Have you read this paper: Cameron, A. C., Gelbach, J. B., & Miller, D. L. (2008). Bootstrap-Based Improvements for Inference with Clustered Errors. Review of Economics and Statistics, 90(3), 414–427. https://doi.org/10.1162/rest.90.3.414 For wild bootstrapping when you suspect clustering, this is probably the most comprehensive review. Essentially yes, you may have to use block wild bootstrapping. Luckily, block wild bootstrapping is implemented in the multiwayvcov (cluster.boot() function) and clusterSEs packages (cluster.wild.glm() function) in R.
Bootstrap residuals: Wild vs Block Bootstrap? Have you read this paper: Cameron, A. C., Gelbach, J. B., & Miller, D. L. (2008). Bootstrap-Based Improvements for Inference with Clustered Errors. Review of Economics and Statistics, 90(3), 414–427.
50,657
Seemingly unrelated regression and multivariate Regression
As mentioned on Wikipedia, SUR is equivalent to equation-by-equation OLS under either of the following two conditions: i. when there are no cross-equation correlations between the error terms; ii. when each equation contains exactly the same set of regressors. That being said, if your model and data don't satisfy either of the above two conditions, then you can proceed with SUR.
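For completeness, a minimal sketch of fitting SUR in R; the systemfit package is one commonly used implementation, and all variable names and the data-generating process below are invented for illustration (not from the original answer).

# install.packages("systemfit") if needed
library(systemfit)
set.seed(1)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n); z <- rnorm(n)
e1 <- rnorm(n); e2 <- 0.6 * e1 + rnorm(n, sd = 0.8)   # errors correlated across equations
df <- data.frame(x1, x2, z,
                 y1 =  1 + 2.0 * x1 + 0.5 * z + e1,
                 y2 = -1 + 1.5 * x2 + e2)
eqs <- list(first = y1 ~ x1 + z, second = y2 ~ x2)    # different regressors per equation
summary(systemfit(eqs, method = "SUR", data = df))    # compare with method = "OLS"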
Seemingly unrelated regression and multivariate Regression
As mentioned in the Wikipedia: SUR is equivalent to the equation-by-equation OLS under the following two conditions: i. When there are no cross-equation correlations between the error terms ii. When
Seemingly unrelated regression and multivariate Regression As mentioned in the Wikipedia: SUR is equivalent to the equation-by-equation OLS under the following two conditions: i. When there are no cross-equation correlations between the error terms ii. When each equation contains exactly the same set of regressors. That being said, if your model and data don't satisfy above two cases, then you can proceed with the SUR.
Seemingly unrelated regression and multivariate Regression As mentioned in the Wikipedia: SUR is equivalent to the equation-by-equation OLS under the following two conditions: i. When there are no cross-equation correlations between the error terms ii. When
50,658
What is the standard procedure for evaluating a user-based CF algorithm with a dataset offline?
Regarding "The three sets" training set: The users in the training folds restricted to the ratings before the split date. This data is used to build the model. validation set: The users in the validation fold restricted to the ratings after the split date. This data is used to evaluate the model. testing set: The users put aside beforehand (currently not happening in your setup), restricted to the ratings after the split date. This is done in the following way: Use all data in the training and validation set to build the model (restricted to the ratings before the split date) Calculate the neighborhood using the user data in the testing set before the split date and calculate the error using the ratings after the split data (standard procedure) The important point is, that the testing set is used to evaluate what the generalization error and quality looks like when put into practice (after all optimization has been done). If the quality at this step is strongly inferior to the one obtained in the validation set, overfitting has slipped in. See also this question: What is the difference between test set and validation set? Regarding inner vs outer crossvalidation The inner crossvalidation is used to optimize the so called hyperparameters (http://en.wikipedia.org/wiki/Hyperparameter_optimization) like the neighborhood k. It is done by splitting the training set again. The outer crossvalidation is used to evaluate the generalization power and error of this optimization. See also this question: Nested cross validation for model selection Now one may ask: Why resplitting the training data ? Why not just say that the training part of the inner crossvalidation corresponds to the training set the validation part of the inner crossvalidation corresponds to the validation set the validation part of the outer crossvalidation corresponds to the testing set One can do this. But given this split without any data put aside beforehand, one must be careful not to e.g. rerun the outer crossvalidation with different settings, because in this case you do not have untainted data to use for the final estimation of the generalization power and error. Regarding the validation procedure ignoring the testing set and hyperparameter optimization argument Basically, the validation itself is good and valid ! Splitting the users into folds varies the mixture of interests present in the training data meanwhile splitting every user profile at a certain date takes timely aspects like "new items coming in" or "what is hot a the moment" etc. into account. Some thoughts based on my own experience ... Repetition of the (outer) crossvalidation You may repeat the crossvalidation with different user and rating splittings without varying the hyperparameters of the model to gain a more reliable estimate of the generalization error. Beware of repeating it not to often (I recommend 6-10 based on Kohavi's analysis of crossvalidation), since the more repetitions, the higher the probability that the same type of split occurs again. Contemplation: Drop the splitting at a date ? One suggestion: Splitting at a certain date has the disadvantage that the amount of training data and validation data may vary heavily across users, depending on their levels of activity. Distinguishing between these types of users may help to first build a good recommender, then make it work for small amounts of data, too. To account for this one can ... either do not use users for validation where the amount of training data is to small. 
drop the splitting at the date and split the ratings randomly (e.g.2/3 / 1/3 or something like this). This is only appropriate for items with less dynamics (books, movies) then items with a high dynamics (fashion). To somehow balance this, myself has restricted the validation to items which occurred at least once in a certain time frame (e.g. a month). The time frame is chosen based on the dynamics of the items, i.e. it represents the time frame where the item / rating occurrences can be considered as stable. Contemplation: Drop the splitting of users ? When the recommender system is applied in practice, all users with activity up to the launch date are known. So splitting the users in such a way that some are put into the training folds and other in the validation folds is not very realistic. It would be more realistic to just split the ratings per user at a certain date or just random (2/3 / 1/3), using all data from all users in the training part of the split for building and the rest for validation accordingly. HOWEVER, the only difference to the user-and-rating-splitting is, that the rating-splitting-only can utilize the ratings of the user in the validation fold before the date split. So all in all not much of a difference. The user-and-rating-splitting makes the model training a little bit harder, but forces more stability on the other hand. It is some sort of regularization.
What is the standard procedure for evaluating a user-based CF algorithm with a dataset offline?
Regarding "The three sets" training set: The users in the training folds restricted to the ratings before the split date. This data is used to build the model. validation set: The users in the valida
What is the standard procedure for evaluating a user-based CF algorithm with a dataset offline? Regarding "The three sets" training set: The users in the training folds restricted to the ratings before the split date. This data is used to build the model. validation set: The users in the validation fold restricted to the ratings after the split date. This data is used to evaluate the model. testing set: The users put aside beforehand (currently not happening in your setup), restricted to the ratings after the split date. This is done in the following way: Use all data in the training and validation set to build the model (restricted to the ratings before the split date) Calculate the neighborhood using the user data in the testing set before the split date and calculate the error using the ratings after the split data (standard procedure) The important point is, that the testing set is used to evaluate what the generalization error and quality looks like when put into practice (after all optimization has been done). If the quality at this step is strongly inferior to the one obtained in the validation set, overfitting has slipped in. See also this question: What is the difference between test set and validation set? Regarding inner vs outer crossvalidation The inner crossvalidation is used to optimize the so called hyperparameters (http://en.wikipedia.org/wiki/Hyperparameter_optimization) like the neighborhood k. It is done by splitting the training set again. The outer crossvalidation is used to evaluate the generalization power and error of this optimization. See also this question: Nested cross validation for model selection Now one may ask: Why resplitting the training data ? Why not just say that the training part of the inner crossvalidation corresponds to the training set the validation part of the inner crossvalidation corresponds to the validation set the validation part of the outer crossvalidation corresponds to the testing set One can do this. But given this split without any data put aside beforehand, one must be careful not to e.g. rerun the outer crossvalidation with different settings, because in this case you do not have untainted data to use for the final estimation of the generalization power and error. Regarding the validation procedure ignoring the testing set and hyperparameter optimization argument Basically, the validation itself is good and valid ! Splitting the users into folds varies the mixture of interests present in the training data meanwhile splitting every user profile at a certain date takes timely aspects like "new items coming in" or "what is hot a the moment" etc. into account. Some thoughts based on my own experience ... Repetition of the (outer) crossvalidation You may repeat the crossvalidation with different user and rating splittings without varying the hyperparameters of the model to gain a more reliable estimate of the generalization error. Beware of repeating it not to often (I recommend 6-10 based on Kohavi's analysis of crossvalidation), since the more repetitions, the higher the probability that the same type of split occurs again. Contemplation: Drop the splitting at a date ? One suggestion: Splitting at a certain date has the disadvantage that the amount of training data and validation data may vary heavily across users, depending on their levels of activity. Distinguishing between these types of users may help to first build a good recommender, then make it work for small amounts of data, too. To account for this one can ... 
either do not use users for validation where the amount of training data is to small. drop the splitting at the date and split the ratings randomly (e.g.2/3 / 1/3 or something like this). This is only appropriate for items with less dynamics (books, movies) then items with a high dynamics (fashion). To somehow balance this, myself has restricted the validation to items which occurred at least once in a certain time frame (e.g. a month). The time frame is chosen based on the dynamics of the items, i.e. it represents the time frame where the item / rating occurrences can be considered as stable. Contemplation: Drop the splitting of users ? When the recommender system is applied in practice, all users with activity up to the launch date are known. So splitting the users in such a way that some are put into the training folds and other in the validation folds is not very realistic. It would be more realistic to just split the ratings per user at a certain date or just random (2/3 / 1/3), using all data from all users in the training part of the split for building and the rest for validation accordingly. HOWEVER, the only difference to the user-and-rating-splitting is, that the rating-splitting-only can utilize the ratings of the user in the validation fold before the date split. So all in all not much of a difference. The user-and-rating-splitting makes the model training a little bit harder, but forces more stability on the other hand. It is some sort of regularization.
What is the standard procedure for evaluating a user-based CF algorithm with a dataset offline? Regarding "The three sets" training set: The users in the training folds restricted to the ratings before the split date. This data is used to build the model. validation set: The users in the valida
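A minimal R sketch (not from the original answer) of the splitting protocol described above: users are partitioned into folds with a test group put aside up front, and each user profile is split at a date. The ratings table, the split date, and the fold count are made-up placeholders, and the actual CF fitting and scoring are only indicated as comments.

set.seed(1)
# synthetic stand-in for a ratings table (user, item, rating, date) -- purely illustrative
ratings <- data.frame(
  user   = sample(1:200, 5000, replace = TRUE),
  item   = sample(1:500, 5000, replace = TRUE),
  rating = sample(1:5,   5000, replace = TRUE),
  date   = as.Date("2013-01-01") + sample(0:729, 5000, replace = TRUE)
)
users      <- unique(ratings$user)
test_users <- sample(users, length(users) %/% 5)   # put aside beforehand ("testing set")
cv_users   <- setdiff(users, test_users)
folds      <- split(cv_users, sample(rep(1:5, length.out = length(cv_users))))
split_date <- as.Date("2014-01-01")                # assumed split date

for (k in seq_along(folds)) {
  train_users <- setdiff(cv_users, folds[[k]])
  train       <- subset(ratings, user %in% train_users & date <  split_date)  # build the model here
  val_profile <- subset(ratings, user %in% folds[[k]] & date <  split_date)   # neighborhood data
  val_target  <- subset(ratings, user %in% folds[[k]] & date >= split_date)   # ratings used for the error
  # ... fit the user-based CF model on `train`, form neighborhoods from `val_profile`,
  # ... and score the predictions against `val_target`
}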
50,659
Partitioning Around Medoids (PAM) with Gower distance matrix
If the binary variable is not very useful, try putting less weight on it. There is nothing wrong with having a domain expert manually assign weights to different attributes to help the algorithm find new information. That the binary attribute splits the data into two is a correct result; now you want to find something new, so either remove that attribute (weight 0) or at least reduce its weight.
Partitioning Around Medoids (PAM) with Gower distance matrix
If the binary variable is not very useful, try putting less weight on it. There is nothing wrong with having a domain expert manually assign weights to different attributes to help the algorithm find
Partitioning Around Medoids (PAM) with Gower distance matrix If the binary variable is not very useful, try putting less weight on it. There is nothing wrong with having a domain expert manually assign weights to different attributes to help the algorithm find new information. That the binary attribute splits the data into two is a correct result; now you want to find something new, so either remove that attribute (weight 0) or at least reduce its weight.
Partitioning Around Medoids (PAM) with Gower distance matrix If the binary variable is not very useful, try putting less weight on it. There is nothing wrong with having a domain expert manually assign weights to different attributes to help the algorithm find
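A small R sketch of the down-weighting idea above, using the weights argument of cluster::daisy with the Gower metric; the data frame and the weight of 0.2 are arbitrary illustrations, not a recommendation.

library(cluster)
set.seed(2)
# toy data: two numeric attributes plus one dominant binary attribute
df <- data.frame(x1 = rnorm(100), x2 = rnorm(100),
                 flag = factor(sample(c("A", "P"), 100, replace = TRUE)))
d_equal <- daisy(df, metric = "gower")                          # all attributes weighted equally
d_down  <- daisy(df, metric = "gower", weights = c(1, 1, 0.2))  # down-weight the binary attribute
fit <- pam(d_down, k = 3, diss = TRUE)
table(fit$clustering, df$flag)   # check whether the partition is still driven by the binary flag

Setting the last weight to 0 removes the attribute from the dissimilarity entirely, which is the other option mentioned above.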
50,660
Partitioning Around Medoids (PAM) with Gower distance matrix
To better justify the chosen number of clusters (k) you can use partition quality indices other than the silhouette width. Example indices based on an arbitrary dissimilarity are: the Caliński & Harabasz index (chosen as the best in the Milligan and Cooper (1985) study and as 4th best in Dimitriadou et al. (2002)) generalized for dissimilarities, the Dunn index, the Gamma index, the C index, etc. A selection of these and other quality indices is provided by e.g. R's cluster.stats function included in the fpc package. It is common to then choose the k returned by the majority of the computed indices as the final one.
Partitioning Around Medoids (PAM) with Gower distance matrix
To better justify the chosen number of clusters (k) you can use other partition quality indices than Silhouette width. Example indices based on arbitrary dissimilarity are: Caliński & Harabasz index (
Partitioning Around Medoids (PAM) with Gower distance matrix To better justify the chosen number of clusters (k) you can use partition quality indices other than the silhouette width. Example indices based on an arbitrary dissimilarity are: the Caliński & Harabasz index (chosen as the best in the Milligan and Cooper (1985) study and as 4th best in Dimitriadou et al. (2002)) generalized for dissimilarities, the Dunn index, the Gamma index, the C index, etc. A selection of these and other quality indices is provided by e.g. R's cluster.stats function included in the fpc package. It is common to then choose the k returned by the majority of the computed indices as the final one.
Partitioning Around Medoids (PAM) with Gower distance matrix To better justify the chosen number of clusters (k) you can use other partition quality indices than Silhouette width. Example indices based on arbitrary dissimilarity are: Caliński & Harabasz index (
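A short R sketch of the majority-vote idea above, using fpc::cluster.stats on a Gower dissimilarity; the synthetic data stands in for the real dissimilarity matrix, and only three of the available indices are shown.

library(cluster)
library(fpc)
set.seed(3)
# synthetic mixed-type data and its Gower dissimilarity, standing in for the real matrix
df <- data.frame(x1 = rnorm(120), x2 = rnorm(120),
                 f  = factor(sample(letters[1:3], 120, replace = TRUE)))
d  <- daisy(df, metric = "gower")

ks  <- 2:8
idx <- sapply(ks, function(k) {
  cl <- pam(d, k = k, diss = TRUE)$clustering
  st <- cluster.stats(d, cl)
  c(avg.silwidth = st$avg.silwidth, ch = st$ch, dunn = st$dunn)
})
colnames(idx) <- ks
round(idx, 3)   # pick the k favoured by the majority of the indices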
50,661
How to get from $R$ to $R^2$ the hard way
You are missing square roots for $S_X$, $S_Y$, and $SS_{reg}$ should be $\sum(\hat{y}-\overline{y})^2$, but I assume that these are just typos. Let $y=\alpha+\beta x+\varepsilon$ and let us note that $$r=\beta\frac{S_X}{S_Y}$$ $$r^2=\beta^2\frac{\overline{x^2}-\overline{x}^2}{\overline{y^2}-\overline{y}^2}$$ On the other hand we have $$\frac{\sum{(\hat{y}-\overline{y})^2}}{\sum{(y-\bar{y})^2}}=\frac{\overline{\hat{y}^2}-2\overline{y}\,\overline{\hat{y}}+\overline{y}^2}{\overline{y^2}-2\overline{y}^2+\overline{y}^2}=\frac{\overline{\hat{y}^2}-\overline{\hat{y}}^2}{\overline{y^2}-\overline{y}^2}$$ since $\overline{y}=\overline{\hat{y}}$ (show it). Now we are left to prove that $$\beta^2(\overline{x^2}-\overline{x}^2)=\overline{\hat{y}^2}-\overline{\hat{y}}^2$$ which can be done by manipulating the rhs, e.g. using $\overline{\hat{y}}=\hat{\alpha}+\hat{\beta}\overline{x}$.
How to get from $R$ to $R^2$ the hard way
You are missing square roots for $S_X$, $S_Y$ and $SS_{reg}$ should be $\sum(\hat{y}-\overline{y})^2$ but I assume that these are just typos. Let $y=\alpha+\beta x+\varepsilon$ and let us note that $
How to get from $R$ to $R^2$ the hard way You are missing square roots for $S_X$, $S_Y$, and $SS_{reg}$ should be $\sum(\hat{y}-\overline{y})^2$, but I assume that these are just typos. Let $y=\alpha+\beta x+\varepsilon$ and let us note that $$r=\beta\frac{S_X}{S_Y}$$ $$r^2=\beta^2\frac{\overline{x^2}-\overline{x}^2}{\overline{y^2}-\overline{y}^2}$$ On the other hand we have $$\frac{\sum{(\hat{y}-\overline{y})^2}}{\sum{(y-\bar{y})^2}}=\frac{\overline{\hat{y}^2}-2\overline{y}\,\overline{\hat{y}}+\overline{y}^2}{\overline{y^2}-2\overline{y}^2+\overline{y}^2}=\frac{\overline{\hat{y}^2}-\overline{\hat{y}}^2}{\overline{y^2}-\overline{y}^2}$$ since $\overline{y}=\overline{\hat{y}}$ (show it). Now we are left to prove that $$\beta^2(\overline{x^2}-\overline{x}^2)=\overline{\hat{y}^2}-\overline{\hat{y}}^2$$ which can be done by manipulating the rhs, e.g. using $\overline{\hat{y}}=\hat{\alpha}+\hat{\beta}\overline{x}$.
How to get from $R$ to $R^2$ the hard way You are missing square roots for $S_X$, $S_Y$ and $SS_{reg}$ should be $\sum(\hat{y}-\overline{y})^2$ but I assume that these are just typos. Let $y=\alpha+\beta x+\varepsilon$ and let us note that $
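A quick numerical check in R of the identity being derived above, on simulated data (any simple linear regression will do; the coefficients here are arbitrary).

set.seed(4)
x <- rnorm(50)
y <- 1 + 2 * x + rnorm(50)
fit  <- lm(y ~ x)
yhat <- fitted(fit)
r2_from_r  <- cor(x, y)^2                                     # r^2
r2_from_ss <- sum((yhat - mean(y))^2) / sum((y - mean(y))^2)  # SS_reg / SS_tot
all.equal(r2_from_r, r2_from_ss)                              # TRUE (up to floating-point error)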
50,662
Normalize binary features for logistic regression
Normalization means that you put your data in a particular range, often $0$ to $1$. If you have coded your binary variable with $0$ and $1$, you already have this property and do not need to do anything. If you use a different type of coding of your binary variable, such as $\pm 1$, then the usual $\dfrac{x_i - \min(x)}{\max(x) - \min(x)}$ should work. If you have not coded your categories with numbers, your software does it under the hood, and the documentation should say how.
Normalize binary features for logistic regression
Normalization means that you put your data in a particular range, often $0$ to $1$. If you have coded your binary variable with $0$ and $1$, you already have this property and do not need to do anythi
Normalize binary features for logistic regression Normalization means that you put your data in a particular range, often $0$ to $1$. If you have coded your binary variable with $0$ and $1$, you already have this property and do not need to do anything. If you use a different type of coding of your binary variable, such as $\pm 1$, then the usual $\dfrac{x_i - \min(x)}{\max(x) - \min(x)}$ should work. If you have not coded your categories with numbers, your software does it under the hood, and the documentation should say how.
Normalize binary features for logistic regression Normalization means that you put your data in a particular range, often $0$ to $1$. If you have coded your binary variable with $0$ and $1$, you already have this property and do not need to do anythi
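A tiny R illustration of the point above: min-max scaling maps a -1/+1-coded binary feature to 0/1 and leaves a 0/1 coding unchanged (the data here are arbitrary).

set.seed(5)
z <- sample(c(-1, 1), 20, replace = TRUE)   # binary feature coded -1 / +1
z01 <- (z - min(z)) / (max(z) - min(z))     # usual min-max normalization
table(z, z01)                               # -1 maps to 0, +1 maps to 1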
50,663
Within, between or overall R-square for random effects in Stata
The random-effects estimator (the GLS estimator) is a weighted average of the between and within estimators. In Stata, the default for xtreg is random effects, and you need to use R-squared: overall. As specified here, R-sq: within is not correct for fixed effects, and there are alternatives to correct that in Stata: for example, you can use the R-squared provided by either regress or areg. Also, see here for details. use http://www.stata-press.com/data/imeus/traffic, clear xtreg fatal beertax spircons unrate perinc or xtreg fatal beertax spircons unrate perinc, re
Within, between or overall R-square for random effects in Stata
Random effect estimator (GLS estimator) is a weighted average of between and within estimators. In Stata, the default is random effect and you need to use R-squared: overall. As specified here, R-sq:
Within, between or overall R-square for random effects in Stata The random-effects estimator (the GLS estimator) is a weighted average of the between and within estimators. In Stata, the default for xtreg is random effects, and you need to use R-squared: overall. As specified here, R-sq: within is not correct for fixed effects, and there are alternatives to correct that in Stata: for example, you can use the R-squared provided by either regress or areg. Also, see here for details. use http://www.stata-press.com/data/imeus/traffic, clear xtreg fatal beertax spircons unrate perinc or xtreg fatal beertax spircons unrate perinc, re
Within, between or overall R-square for random effects in Stata Random effect estimator (GLS estimator) is a weighted average of between and within estimators. In Stata, the default is random effect and you need to use R-squared: overall. As specified here, R-sq:
50,664
What is the probability regression coefficient is larger than its OLS estimate
The OLS (or any other) estimator, $\hat \beta$, is a random variable. Namely, it is a real-valued function. It takes as input the sample data and produces a real number. This real number is the sample-specific estimate. The habit of using the same symbol to denote the function and a specific value of it can become confusing, as some comment showed. Being a random variable, $\hat \beta$ has a proper distribution function, $F_{\hat \beta}\left(\hat \beta\right)$. Then it is valid to ask questions like $$P(\hat \beta \gt c) = ?\, \Rightarrow 1-F_{\hat \beta}\left(c\right) =?,\;c\in \Bbb R$$ I said "it is valid", I didn't say it will give you a tangible result. And this is because this distribution will involve the unknown parameter $\beta$, which remains unknown despite our estimation efforts. So even if you are in a position to specify the distribution as belonging to some family (like normal or Student's t or whatever), you will not be able to get a specific numerical value for the above probability you are looking for, because some parameter of this distribution will be unknown. Moreover, any specific estimate, like 2.3, is just a point in the support of the density of $\hat \beta$. We have no way of knowing whether it is the "true value" of $\beta$ - this would be equivalent to believing that with one sample we hit dead-center and uncovered the true value of $\beta$. So even if we assume that the distribution of the estimator is symmetric, we don't know if the specific estimate is the expected value (= median) of its distribution (as someone commented). So the statement $P(\hat \beta \gt 2.3) = \frac12$ is wrong, remembering that $2.3= \hat \beta\left(\text{sample(j)}\right)$. The statement $P(\hat \beta \gt \beta) = \frac12$ is correct if we assume a) that $\hat \beta$ is an unbiased estimator of $\beta$ and b) that $\hat \beta$ has a symmetric distribution. If our sample $j$ is very large, and we assume/accept/prove that our estimator is consistent, then some weight can be given to the argument that $\hat \beta\left(\text{sample(j)}\right)\approx \beta$ - this is the essence of consistency, and this is why consistency is "informally" considered a more important property of estimators than unbiasedness: the desirable consequences of consistency can be at least partially "bestowed" upon a single estimate, if the sample size is large (and eventually we obtain larger and larger samples). The desirable consequences of unbiasedness need many samples and many estimates for them to emerge. If we are in a position to draw many different samples, then we will be able to obtain many different estimates, and then take their mean value as the "true value" of $\beta$, if, again, we have reasons to believe that $\hat \beta$ is an unbiased estimator. Probabilistic questions about $\beta$ can only be asked in a Bayesian framework. Here, we model the unknown parameter itself as a random variable to reflect our ignorance about it. In this context there is no distinction between a fixed unknown parameter $\beta$ and a function $\hat \beta$ that tries to estimate it, and so it makes sense to ask $P(\beta\gt c|sample) = ?\, $
What is the probability regression coefficient is larger than its OLS estimate
The OLS (or any other) estimator, $\hat \beta$, is a random variable. Namely, it is a real-valued function. It takes as input the sample data and produces a real number. This real number is the sample
What is the probability regression coefficient is larger than its OLS estimate The OLS (or any other) estimator, $\hat \beta$, is a random variable. Namely, it is a real-valued function. It takes as input the sample data and produces a real number. This real number is the sample-specific estimate. The habit of using the same symbol to denote the function and a specific value of it can become confusing, as some comment showed. Being a random variable, $\hat \beta$ has a proper distribution function, $F_{\hat \beta}\left(\hat \beta\right)$. Then it is valid to ask questions like $$P(\hat \beta \gt c) = ?\, \Rightarrow 1-F_{\hat \beta}\left(c\right) =?,\;c\in \Bbb R$$ I said "it is valid", I didn't say it will give you a tangible result. And this is because this distribution will involve the unknown parameter $\beta$, which remains unknown despite our estimation efforts. So even if you are in a position to specify the distribution as belonging to some family (like normal or Student's t or whatever), you will not be able to get a specific numerical value for the above probability you are looking for, because some parameter of this distribution will be unknown. Moreover, any specific estimate, like 2.3 is just a point in the support of the density of $\hat \beta$. We have no way of knowing whether it is the "true value" of $\beta$- this would be equivalent to believe that with one sample we hit dead-center and uncovered the true value of $\beta$. So even if we assume that the distribution of the estimator is symmetric, we don't know if the specific estimate is the expected value=median of its distribution (as someone commented). So the statement $P(\hat \beta \gt 2.3) = \frac12$ is wrong, remembering that $2.3= \hat \beta\left(\text{sample(j)}\right)$. The statement $P(\hat \beta \gt \beta) = \frac12$ is correct if we assume a) that $\hat \beta$ is an unbiased estimator of $\beta$ and b) that $\hat \beta$ has a symmetric distribution. If our sample $j$ is very large, and we assume/accept/prove that our estimator is consistent, then some weight can be given to the argument that $\hat \beta\left(\text{sample(j)}\right)\approx \beta$ - this is the essence of consistency, and this is why consistency is "informally" considered a more important property of estimators than unbiasedness: the desirable consequences of consistency can be at least partially "bestowed" upon a single estimate, if the sample size is large (and eventually we are obtaining more and more large samples). The desirable consequences of unbiasedness need many samples and many estimates for them to emerge. If we are in a position to estimate many different samples, then we will be able to obtain many different estimates, and then take their mean value as the "true value" of $\beta$, if, again, we have reasons to believe that $\hat \beta$ is an unbiased estimator. Probabilistic questions about $\beta$ can only be asked in a Beaysian framework. Here, we model the unknown parameter itself as a random variable to reflect our ignorance about it. In this context there is no distinction between a fixed unknown parameter $\beta$ and a function $\hat \beta$ that tries to estimate it, and so it makes sense to ask $P(\beta\gt c|sample) = ?\, $
What is the probability regression coefficient is larger than its OLS estimate The OLS (or any other) estimator, $\hat \beta$, is a random variable. Namely, it is a real-valued function. It takes as input the sample data and produces a real number. This real number is the sample
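A small R simulation (not part of the original answer) illustrating the distinction drawn above: under unbiasedness and symmetry, P(beta-hat > beta) is about 1/2, but no such statement holds for a fixed estimate such as 2.3. The true beta, sample size, and error distribution below are all made up.

set.seed(6)
beta <- 2
n <- 50
beta_hat <- replicate(2000, {
  x <- rnorm(n)
  y <- 1 + beta * x + rnorm(n)
  coef(lm(y ~ x))[2]
})
mean(beta_hat > beta)   # about 0.5: the estimator exceeds the true value in roughly half the samples
mean(beta_hat > 2.3)    # not 0.5 in general: it depends on the unknown beta and on n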
50,665
Standard error of a weighted mean when observations are not independent
You say you have known sampling error. If you have multinormal data $(X_1, \dots, X_n)=X$ with common mean $\mu$ and known covariance matrix $\Sigma$, then you can use the result that if $A$ is a matrix of constants and $X \sim MN(\mu 1_n, \Sigma)$, then $AX \sim MN(A\mu 1_n, A \Sigma A^T)$. Just use the result with $A= n^{-1} 1_n^T$, a row vector with all components $1/n$. Here $1_n$ is a constant (column) vector with all components $1$. Just plug in your known $\Sigma$. However, if you give more details of your real problem, then maybe we can indicate a better statistical solution!
Standard error of a weighted mean when observations are not independent
You say you have known sampling error. If you have multinormal data $(X_1, \dots, X_n)=X$ with commom mean $\mu$ and known covariance matrix $\Sigma$, then you can use the result that if $A$ is a mat
Standard error of a weighted mean when observations are not independent You say you have known sampling error. If you have multinormal data $(X_1, \dots, X_n)=X$ with common mean $\mu$ and known covariance matrix $\Sigma$, then you can use the result that if $A$ is a matrix of constants and $X \sim MN(\mu 1_n, \Sigma)$, then $AX \sim MN(A\mu 1_n, A \Sigma A^T)$. Just use the result with $A= n^{-1} 1_n^T$, a row vector with all components $1/n$. Here $1_n$ is a constant (column) vector with all components $1$. Just plug in your known $\Sigma$. However, if you give more details of your real problem, then maybe we can indicate a better statistical solution!
Standard error of a weighted mean when observations are not independent You say you have known sampling error. If you have multinormal data $(X_1, \dots, X_n)=X$ with commom mean $\mu$ and known covariance matrix $\Sigma$, then you can use the result that if $A$ is a mat
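A short R sketch of the suggested calculation, with A = (1/n) times the transposed all-ones vector applied to an assumed known covariance matrix; the AR(1)-style Sigma below is just an example, not the poster's actual data.

n <- 4
Sigma <- 0.5 ^ abs(outer(1:n, 1:n, "-"))   # assumed known covariance matrix (AR(1)-style example)
A <- matrix(1 / n, nrow = 1, ncol = n)     # A = (1/n) * t(1_n)
var_mean <- A %*% Sigma %*% t(A)           # Var(xbar) = A Sigma A'
sqrt(drop(var_mean))                       # standard error of the equally weighted mean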
50,666
What can I do if the confidence intervals of the predicted mean are small but the predicted intervals are large
Very little can be said in general terms about this sort of situation. We certainly can't conclude that this is the best model and just submit it. There may be other variables that are better than this one, or that complement it well by explaining the remaining randomness. To answer the question "what should I do here" is basically to consider the whole world of modelling strategies. You might want to consider a book such as Frank Harrell's on Regression Modeling Strategies.
What can I do if the confidence intervals of the predicted mean are small but the predicted interval
Very little can be said in general terms about this sort of situation. We certainly can't conclude that this is the best model and just submit it. There may be other variables that are better than t
What can I do if the confidence intervals of the predicted mean are small but the predicted intervals are large Very little can be said in general terms about this sort of situation. We certainly can't conclude that this is the best model and just submit it. There may be other variables that are better than this one, or that complement it well by explaining the remaining randomness. To answer the question "what should I do here" is basically to consider the whole world of modelling strategies. You might want to consider a book such as Frank Harrell's on Regression Modeling Strategies.
What can I do if the confidence intervals of the predicted mean are small but the predicted interval Very little can be said in general terms about this sort of situation. We certainly can't conclude that this is the best model and just submit it. There may be other variables that are better than t
50,667
What can I do if the confidence intervals of the predicted mean are small but the predicted intervals are large
To add some information to my own question, I could imagine that the predictor $X$ is length and the response $Y$ is weight. In this case, length perfectly predicts the average weight. However, the random error at each length is high. We could imagine a second predictor, Country {Asia, Europe, America...}, such that at a given length, people from Europe or America have on average a higher weight than people from Asia. Therefore, by adding this extra variable, we could stratify the current model into two models (Europe+America vs. Asia), each with a smaller random error. So my suggestion is that when we judge how good the model is, we should not only look at the p-value and CI for the fitted variables included in the model, but also at how much variance was actually explained by the model (e.g. $R^2$) (in this case the $R^{2}$ might not be very informative due to the replicates at a given x). Adding more predictors could further decrease the random error in the model.
What can I do if the confidence intervals of the predicted mean are small but the predicted interval
To add some information to my own question, I could imagine that the predictor $X$ is length, and response $Y$ is weight. In this case, length perfectly predicts the average weight. However, the rando
What can I do if the confidence intervals of the predicted mean are small but the predicted intervals are large To add some information to my own question, I could imagine that the predictor $X$ is length and the response $Y$ is weight. In this case, length perfectly predicts the average weight. However, the random error at each length is high. We could imagine a second predictor, Country {Asia, Europe, America...}, such that at a given length, people from Europe or America have on average a higher weight than people from Asia. Therefore, by adding this extra variable, we could stratify the current model into two models (Europe+America vs. Asia), each with a smaller random error. So my suggestion is that when we judge how good the model is, we should not only look at the p-value and CI for the fitted variables included in the model, but also at how much variance was actually explained by the model (e.g. $R^2$) (in this case the $R^{2}$ might not be very informative due to the replicates at a given x). Adding more predictors could further decrease the random error in the model.
What can I do if the confidence intervals of the predicted mean are small but the predicted interval To add some information to my own question, I could imagine that the predictor $X$ is length, and response $Y$ is weight. In this case, length perfectly predicts the average weight. However, the rando
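A toy R simulation of the argument above, with made-up coefficients: length alone tracks the conditional mean, but adding the country indicator shrinks the residual standard deviation.

set.seed(7)
n <- 400
len     <- runif(n, 150, 190)
country <- factor(sample(c("Asia", "West"), n, replace = TRUE))
weight  <- -60 + 0.8 * len + 8 * (country == "West") + rnorm(n, sd = 6)
summary(lm(weight ~ len))$sigma            # residual SD with length only
summary(lm(weight ~ len + country))$sigma  # smaller residual SD after adding the country indicator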
50,668
How many counts are required to estimate a mean to a certain precision?
Your question seems to be assuming the rate of bubbles is constant across images. If we do that, then you want that a $100(1-\alpha)\%$ CI for $\mu$ is $< 0.05\mu$. $\text{se}(\hat{\mu}) = \sqrt{\mu/n}$ For a 95% interval, approximately $1.96 \sqrt{\mu/n} = .05 \mu$ ; approximating again: $2 \sqrt{\mu/n} < .05 \mu$ i.e. $n>\mu/(.025^2 \mu^2)$ $n>1600/\mu$ if $\mu$ is 5 that's 320 images if $\mu$ is 10 that's 160 images if $\mu$ is 20 that's 80 images The more uncertainty you have about your pilot estimate of $\mu$, the more you should assume that $\mu$ is lower than your estimate. If your estimate itself is from a sample, you can take that (the uncertainty in the estimate) into account. Check: if $\mu$ is 10 then a sample of 160 should get you 1520-1680 bubbles (about 95% of the time). The estimate of $\mu$ would be between 9.5 and 10.5 95% of the time. Looks about right.
How many counts are required to estimate a mean to a certain precision?
Your question seems to be assuming the rate of bubbles is constant across images. If we do that, then you want that a $100(1-\alpha)\%$ CI for $\mu$ is $< 0.05\mu$. $\text{se}(\hat{\mu}) = \sqrt{\mu/
How many counts are required to estimate a mean to a certain precision? Your question seems to be assuming the rate of bubbles is constant across images. If we do that, then you want that a $100(1-\alpha)\%$ CI for $\mu$ is $< 0.05\mu$. $\text{se}(\hat{\mu}) = \sqrt{\mu/n}$ For a 95% interval, approximately $1.96 \sqrt{\mu/n} = .05 \mu$ ; approximating again: $2 \sqrt{\mu/n} < .05 \mu$ i.e. $n>\mu/(.025^2 \mu^2)$ $n>1600/\mu$ if $\mu$ is 5 that's 320 images if $\mu$ is 10 that's 160 images if $\mu$ is 20 that's 80 images The more uncertainty you have about your pilot estimate of $\mu$, the more you should assume that $\mu$ is lower than your estimate. If your estimate itself is from a sample, you can take that (the uncertainty in the estimate) into account. Check: if $\mu$ is 10 then a sample of 160 should get you 1520-1680 bubbles (about 95% of the time). The estimate of $\mu$ would be between 9.5 and 10.5 95% of the time. Looks about right.
How many counts are required to estimate a mean to a certain precision? Your question seems to be assuming the rate of bubbles is constant across images. If we do that, then you want that a $100(1-\alpha)\%$ CI for $\mu$ is $< 0.05\mu$. $\text{se}(\hat{\mu}) = \sqrt{\mu/
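A small R check of the sample-size rule above (n > 1600/mu for a 95% CI within plus or minus 5% of mu), plus a simulation for the mu = 10 case; the Poisson assumption follows the answer.

n_required <- function(mu) ceiling(1600 / mu)   # from 2 * sqrt(mu / n) < 0.05 * mu
sapply(c(5, 10, 20), n_required)                # 320, 160, 80 images

set.seed(8)
mu <- 10
n  <- 160
est <- replicate(5000, mean(rpois(n, mu)))      # bubbles per image as Poisson counts
quantile(est, c(0.025, 0.975))                  # roughly 9.5 to 10.5, matching the check above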
50,669
Testing for non-random overlap of polygons
In this case you should use spatial pattern analysis to identify relations between human activity and the distribution of rocks. Rocks have to be represented as a point shp-file. Adding another level of abstraction to the distribution of rocks (making them polygons instead of points) is a highly questionable approach that doesn't seem to provide any help for the analysis. You will need the spatstat package (with its great tutorial) for R and maybe QGIS. [Are you hunting for ghosts?] Use a Monte-Carlo test to be sure that your rocks are not randomly distributed across the study area (regardless of human activity). To perform it see the corresponding section of the tutorial or this post. In this case the owin object (the boundary of the study area, see the tutorial) will be the whole area that you study. Use a Monte-Carlo test to determine whether rocks are randomly distributed across human activity sites (in this case the owin object will be the polygon shp-file of human activity). Get the empirical graph of the dependence of the intensity of the rock distribution on the distance to the human activity sites. If humans determine rock locations then the intensity of the spatial distribution of rocks will depend on the distance to the areas of human activity. Use the rhohat function for this. If the distribution of rocks indeed depends on human activity sites then the graph in your case should look similar to this: Here the owin object again will be the whole study area. Use the distfun function applied to the polygons of human activity as a covariate in rhohat. Here is some example code for the third step from one of my projects: library(maptools) library(rgdal) library(spatstat) spatstat.options(gpclib=TRUE) gpclibPermit() # load point shp-file for analysis S <- readShapePoints("/dumps.shp", proj4string= CRS("your_+proj_string_here")) SP <- as(S, "SpatialPoints") P <- as(SP, "ppp") # load boundary layer and make it an owin object Z <- readShapePoly("/boundaries.shp", proj4string= CRS("your_+proj_string_here")) Z1 <- as(Z, "SpatialPolygons") W <- as.owin(Z1) P <- P[W] # in my case covariate polygons were quite small so I loaded them as lines # to avoid a spatstat issue with polygons. You should represent your # human activity polygons as points as will be described below c <- readShapeLines("/ccovariate.shp", proj4string= CRS("your_+proj_string_here")) cr <- as.psp(c) cr <- cr[W] # create a distance function crdist <- distfun(cr) # create and plot your graph plot(rhohat(P, crdist, covname="quarry"), xlab= "Distance to the quarry, m", legendpos = "topright", main = NULL) ## see help(plot.fv) That's pretty much it. Now some important details. If you don't have a point shp-file for rocks (in this case you should skip the first two steps because they would be pointless) you can recreate it from your polygon layer. Use QGIS for this. Go Vector -> Research Tools -> Random points. Here choose your polygon layer of rocks as the input layer and set up one of the options for the Individual Polygons (density or the number of points per polygon). If you have issues with the human activity polygons as a covariate for the rhohat function (spatstat sometimes does not work well with polygons), you can replace those polygons with a point layer just as suggested in the previous paragraph, but using Regular points instead of Random points. P.S. You may take some other approach (get inspiration from the spatstat tutorial) but it is essential to use point pattern analysis in this case, not polygons of rock locations.
Testing for non-random overlap of polygons
In this case you should use spatial pattern analysis to identify relations between human activity and distribution of rocks. Rocks have to be represented as a point shp-file. Adding another level of a
Testing for non-random overlap of polygons In this case you should use spatial pattern analysis to identify relations between human activity and distribution of rocks. Rocks have to be represented as a point shp-file. Adding another level of abstract to distribution of rocks (making them polygons instead of points) is highly questionable approach that doesn't seem to provide any help for analysis. You will need a spatstat package (with the great tutorial) for R and maybe QGIS. [Are you hunting for ghosts?] Use Monte-Carlo test to be sure that your rocks are not randomly distributed across the study area (regardless of human activity). To perform it see the corresponding section of tutorial or this post. In this case o-win object (it is the boundaries of the study area, see tutorial) will be the whole area that you study. Use Monte-Carlo test to determine whether rocks are randomly distributed across human activity sites (in this case o-win object will be polygon shp-file of human activity). Get the empirical graph of the dependency of intensity of rocks distribution on the distance to the human activity sites. If humans determine rocks location then intensity of spatial distribution of rocks will depend on the distance to the areas of human activities. Use rhohat function for it. If distribution of rocks indeed depends on human activity sites then the graph in your case should look similar to this: Here o-win object again will be the whole study area. Use a distfun function for a polygons of human activity as a covariate in rhohat. Here is some example code for the third step from one of my projects: library(maptools) library(rgdal) library(spatstat) spatstat.options(gpclib=TRUE) gpclibPermit() # load point shp-file for analysis S <- readShapePoints("/dumps.shp", proj4string= CRS("your_+proj_sting_here")) SP <- as(S, "SpatialPoints") P <- as(SP, "ppp") # load boundary layer and make it o-win object Z <- readShapePoly("/boundaries.shp", proj4string= CRS("your_+proj_sting_here")) Z1 <- as(Z, "SpatialPolygons") W <- as.owin(Z1) P <- P[W] # in my case covariate polygons were quite small so I loaded them as lines # to avoid spatstat issue with polygons. You should represent your # human activity polygons as points as will be described below c <- readShapeLines("/ccovariate.shp", proj4string= CRS("your_+proj_sting_here")) cr <- as.psp(c) cr <- cr[W] # create a distance function сrdist <- distfun(cr) # create and plot your graph plot(rhohat(P, сrdist, covname="quarry"), xlab= "Расстояние до карьера, м", legendpos = "topright", ##see help(plot.fv) main = NULL) That's pretty much it. Now some important details. If you dont't have a point shp-files for rocks (in this case you should skip first two steps cause it will be pointless) you can recreate it using your polygon layer. Use QGIS for this. Go Vector -> Research Tools -> Random points. Here choose your polygon layer of rocks as an input layer and set up one of the option for the Individual Polygons (density or the number of points per polygon). If you have issues with human activity polygons as a covariate for the rhohat function (spatstat sometimes not working well with the polygons). You can replace that polygons with the point layer just as it was suggested in the previous paragraph, but using Regular points instead of Random points. P.S. You may take some other approach (get inspiration from the tutorial for spatatat) but it is essential to use point pattern analysis in this case, not some polygons of locations of rocks.
Testing for non-random overlap of polygons In this case you should use spatial pattern analysis to identify relations between human activity and distribution of rocks. Rocks have to be represented as a point shp-file. Adding another level of a
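The code in the answer covers step 3 (rhohat); below is a self-contained toy sketch of the Monte-Carlo tests in steps 1-2, using a simulated point pattern in place of the rock ppp object built above. Substitute your own P and windows; the quadrat counts and simulation numbers are arbitrary.

library(spatstat)
set.seed(9)
P <- rpoispp(100)                                  # toy stand-in for the rock point pattern
# step 1: Monte-Carlo quadrat test of complete spatial randomness over the study area
quadrat.test(P, nx = 4, ny = 4, method = "MonteCarlo", nsim = 999)
# ... or simulation envelopes of the K-function
E <- envelope(P, Kest, nsim = 39)
plot(E)                                            # observed K outside the envelope suggests non-randomness
# step 2: repeat with the window restricted to the human-activity polygons,
# e.g. P[W_activity], where W_activity is an owin built from that shp-file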
50,670
Fisher's method of combining p-values when one of the p-values is zero
Irrespective of the discussion in the comments about how these $p$-values of $0$ arose there are methods for combining $p$-values which can be calculated if $p=0$. As the OP indicated neither Fisher's method nor Stouffer's works. The method of Edgington based on the sum of $p$, the closely related mean $p$ method, the method using logit of $p$, Tippett's method based on the minimum $p$ and variants of Wilkinson's method of which Tippett is a special case can all be calculated. Whether that is a sensible thing to do depends on the scientific question of course. All the methods mentioned are available in the R package metap which, disclaimer, I wrote and maintain.
Fisher's method of combining p-values when one of the p-values is zero
Irrespective of the discussion in the comments about how these $p$-values of $0$ arose there are methods for combining $p$-values which can be calculated if $p=0$. As the OP indicated neither Fisher's
Fisher's method of combining p-values when one of the p-values is zero Irrespective of the discussion in the comments about how these $p$-values of $0$ arose there are methods for combining $p$-values which can be calculated if $p=0$. As the OP indicated neither Fisher's method nor Stouffer's works. The method of Edgington based on the sum of $p$, the closely related mean $p$ method, the method using logit of $p$, Tippett's method based on the minimum $p$ and variants of Wilkinson's method of which Tippett is a special case can all be calculated. Whether that is a sensible thing to do depends on the scientific question of course. All the methods mentioned are available in the R package metap which, disclaimer, I wrote and maintain.
Fisher's method of combining p-values when one of the p-values is zero Irrespective of the discussion in the comments about how these $p$-values of $0$ arose there are methods for combining $p$-values which can be calculated if $p=0$. As the OP indicated neither Fisher's
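An illustrative R snippet with the metap package named in the answer, combining four made-up p-values, one of which is exactly zero. This assumes the installed metap release accepts a boundary value of 0 for these methods, as the answer indicates.

library(metap)
p <- c(0, 0.02, 0.10, 0.30)   # made-up p-values, one of them exactly zero
sump(p)                       # Edgington's method (sum of p)
meanp(p)                      # mean-p method
minimump(p)                   # Tippett's method (minimum p)
wilkinsonp(p, r = 2)          # Wilkinson's method, r-th smallest p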
50,671
Bias term in support vector machine
This is a paper you probably should read: Poggio, T., S. Mukherjee, R. Rifkin, A. Rakhlin and A. Verri. b, CBCL Paper #198/AI Memo #2001-011, Massachusetts Institute of Technology, Cambridge, MA, July 2001. (PostScript) I rather doubt there is a paper with a shorter title!
Bias term in support vector machine
This is a paper you probably should read: Poggio, T., S. Mukherjee, R. Rifkin, A. Rakhlin and A. Verri. b, CBCL Paper #198/AI Memo #2001-011, Massachusetts Institute of Technology, Cambridge, MA, July
Bias term in support vector machine This is a paper you probably should read: Poggio, T., S. Mukherjee, R. Rifkin, A. Rakhlin and A. Verri. b, CBCL Paper #198/AI Memo #2001-011, Massachusetts Institute of Technology, Cambridge, MA, July 2001. (PostScript) I rather doubt there is a paper with a shorter title!
Bias term in support vector machine This is a paper you probably should read: Poggio, T., S. Mukherjee, R. Rifkin, A. Rakhlin and A. Verri. b, CBCL Paper #198/AI Memo #2001-011, Massachusetts Institute of Technology, Cambridge, MA, July
50,672
Is the p-postulate (equal p-values provide equal evidence against the null) true?
The Royall paper begins with two quotes that provide apparently contradictory interpretations of the p-value. Both rely on interpreting the p-value in light of the sample size, and as such both are flawed interpretations of the p-value. A p-value tells us one thing and one thing only--the probability of observing a statistic as extreme or more extreme than that observed in a sample as a result of random sampling error. A p-value of .05 with a sample of 10 or a sample of 50 (or any sample size for that matter) yields the same interpretation in any case. Under the assumptions of the model, a difference of the magnitude observed or greater would be observed in just 5% of samples if the null hypothesis were actually true. So, in response to your specific question and focusing on interpreting only the p-value, the answer is yes--equal p-values provide equal evidence against the null hypothesis at any sample size. This does not tell us anything about the magnitude of the difference or the effect size. Indeed, all else being equal, the same difference in an observed effect size will yield lower p-values as sample size increases. Strength of evidence against the null (p-value) and magnitude of the difference (effect size) should be interpreted together.
Is the p-postulate (equal p-values provide equal evidence against the null) true?
The Royall paper begins with two quotes that providing apparently contradictory interpretations of the p-value. Both rely on interpreting the p-value in light of the sample size and as such both are
Is the p-postulate (equal p-values provide equal evidence against the null) true? The Royall paper begins with two quotes that providing apparently contradictory interpretations of the p-value. Both rely on interpreting the p-value in light of the sample size and as such both are flawed interpretations of the p-value. A p-value tells us one thing and one thing only--the probability of observing a statistic as extreme or more extreme than that observed in a sample as a result of random sampling error. A p-value of .05 with a sample of 10 or a sample of 50 (or any sample size for that matter) yields the same interpretation in any case. Under the assumptions of the model, a difference of the magnitude observed or greater would be observed in just 5% of samples if the null hypothesis were actually true. So, in response to your specific question and focusing on interpreting only the p-value, the answer is yes--equal p-values provide equal evidence against the null hypothesis at any sample size. This does not tell us anything about the magnitude of the difference or the effect size. Indeed, all else being equal, the same difference in an observed effect size will yield lower p-values as sample size increases. Strength of evidence against the null (p-value) and magnitude of the difference (effect size) should be interpreted together.
Is the p-postulate (equal p-values provide equal evidence against the null) true? The Royall paper begins with two quotes that providing apparently contradictory interpretations of the p-value. Both rely on interpreting the p-value in light of the sample size and as such both are
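A short R illustration of the closing point above: the same p-value at different sample sizes corresponds to very different observed effect sizes (a schematic two-sided one-sample z-test; the sample sizes are arbitrary).

p <- 0.05
z <- qnorm(1 - p / 2)              # |z| that gives exactly p = 0.05 in a two-sided z-test
for (n in c(10, 50, 1000)) {
  d <- z / sqrt(n)                 # implied standardized effect size, since z = d * sqrt(n)
  cat("n =", n, " implied effect size d =", round(d, 3), "\n")
}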
50,673
MLE for simulated case of binomial p with constant labelling rate
No, it is not possible to estimate the overcounting error rate $e$. This experiment behaves essentially like a Binomial experiment in which the erroneous "heads" cannot be distinguished from true heads and therefore the proportion of erroneous heads cannot be estimated. Alternative interpretations There is an ambiguity concerning the observations: how, exactly, would "$1205 + (2000-1205)*0.10$" be reported? As $1284.5$? If so, we can obtain a lot of information from that decimal! For instance, suppose the true error rate is $e=0.103427779$. Then when $1205$ heads appear, the reported amount would be $1287.225084305$ and when $1197$ heads appear the amount reported would be $1280.052506537$. There are only two feasible values of $e$ consistent with these results: $0.103427779$ and $0.5517138895$ (which can be found by brute force, solving $e$ for all the possible values of $k$ between $0$ and $1287$ and again solving $e$ for all the possible values of $k$ between $0$ and $1280$ and finding which solutions appear in both sets). With a third sample we are likely to be able to settle which of these possible values is the correct one--and to know it entirely without error. If instead the observations are rounded, the analysis becomes more complex. It would appear we could exploit this rounding somehow, but the fact of the matter is that for the sample sizes posited in the question, the experiment behaves essentially in the way analyzed below where the errors are applied randomly and independently with probability $e$. One can indeed write down a likelihood. I did so in a comment to another answer: Consider the probability of observing $k$ heads in $n$ trials. Due to the mislabeling, $k=k'+e(n-k')$ (rounded) where $k'$ is the number of heads that actually occurred. Therefore $\Pr(k)=\sum_j \binom{n}{j}p^j(1-p)^{n-j}$ where $j$ ranges over all integers for which $\text{round}(j+e(n-j))=k$. The likelihood of a sequence of independent experiments is the product of these values. However, this leads to a spiky likelihood function (with probability spikes wherever there are two or more outcomes $k$ and $k'$ for which $k+e(n-k)$ and $k'+e(n-k')$ round to the same value). The spikes provide information comparable to that described in the preceding paragraph, but this information is "smeared" by the rounding as well as by the varying sample sizes. That smearing out causes the likelihood to reduce, for all practical purposes, to the binomial experiment described below. Both these interpretations feel highly artificial. One leads to a number-theoretic solution and the other leads to one with pathological behavior in its likelihood function. Instead, it is plausible that the overcounting is a random process: each tail independently has a chance of $e$ of being mistaken for a heads. (This is an example of an asymmetric binary channel in communications theory.) For this situation there is a clear, convincing analysis, as follows. Model 1 A model for flipping the coin is a box full of tickets on which "heads" is written on an unknown proportion $p$ of the tickets and on the remaining tickets "tails" is written. After pulling a ticket randomly from this box, if it reads "tails" the experimenter goes to a second box containing tickets reading either "keep" and "change" and pulls one of them at random. If it reads "change," the word "tails" is erased from the first ticket and replaced by "heads," which is recorded as an observation. 
Normally we would ask that all tickets be restored to their original states and replaced in their respective boxes before we repeat this experiment, for otherwise the proportions in each box will change slightly each time, distorting the model. However, when the boxes have enormous numbers of tickets, these slight changes are inconsequential. So bear with me and imagine conducting this experiment from the first box without replacement. Model 2 The argument hinges on noting that it does not matter when in the experiment words are written on the tickets: all that matters is what is finally read off of the tickets that are drawn. We may create an equivalent model, then, by first extracting all the tickets with "tails" from the first box and, for each one of them, separately drawing a ticket from the second box, performing its stipulated action, and replacing the possibly altered ticket into the first box. This causes some proportion $e$ of the "tails" tickets to be erased and "heads" to be written on them. The first box now has a proportion $p + e(1-p)$ of "heads" tickets in it due to this rewriting step. The experiment proceeds by drawing $n$ tickets randomly from this doctored box. The solution I hope it's clear that the two models are the same: they differ only in that in the second model, the erasures occur earlier in the process. The point is that the second model consists of draws from a single box: this is a binomial experiment. As such we can estimate the box's proportion $p + e(1-p)$ but, no matter how many times we repeat this experiment, we cannot estimate $p$ or $e$ separately.
MLE for simulated case of binomial p with constant labelling rate
No, it is not possible to estimate the overcounting error rate $e$. This experiment behaves essentially like a Binomial experiment in which the erroneous "heads" cannot be distinguished from true hea
MLE for simulated case of binomial p with constant labelling rate No, it is not possible to estimate the overcounting error rate $e$. This experiment behaves essentially like a Binomial experiment in which the erroneous "heads" cannot be distinguished from true heads and therefore the proportion of erroneous heads cannot be estimated. Alternative interpretations There is an ambiguity concerning the observations: how, exactly, would "$1205 + (2000-1205)*0.10$" be reported? As $1284.5$? If so, we can obtain a lot of information from that decimal! For instance, suppose the true error rate is $e=0.103427779$. Then when $1205$ heads appear, the reported amount would be $1287.225084305$ and when $1197$ heads appear the amount reported would be $1280.052506537$. There are only two feasible values of $e$ consistent with these results: $0.103427779$ and $0.5517138895$ (which can be found by brute force, solving $e$ for all the possible values of $k$ between $0$ and $1287$ and again solving $e$ for all the possible values of $k$ between $0$ and $1280$ and finding which solutions appear in both sets). With a third sample we are likely to be able to settle which of these possible values is the correct one--and to know it entirely without error. If instead the observations are rounded, the analysis becomes more complex. It would appear we could exploit this rounding somehow, but the fact of the matter is that for the sample sizes posited in the question, the experiment behaves essentially in the way analyzed below where the errors are applied randomly and independently with probability $e$. One can indeed write down a likelihood. I did so in a comment to another answer: Consider the probability of observing $k$ heads in $n$ trials. Due to the mislabeling, $k=k'+e(n-k')$ (rounded) where $k'$ is the number of heads that actually occurred. Therefore $\Pr(k)=\sum_j \binom{n}{j}p^j(1-p)^{n-j}$ where $j$ ranges over all integers for which $\text{round}(j+e(n-j))=k$. The likelihood of a sequence of independent experiments is the product of these values. However, this leads to a spiky likelihood function (with probability spikes wherever there are two or more outcomes $k$ and $k'$ for which $k+e(n-k)$ and $k'+e(n-k')$ round to the same value). The spikes provide information comparable to that described in the preceding paragraph, but this information is "smeared" by the rounding as well as by the varying sample sizes. That smearing out causes the likelihood to reduce, for all practical purposes, to the binomial experiment described below. Both these interpretations feel highly artificial. One leads to a number-theoretic solution and the other leads to one with pathological behavior in its likelihood function. Instead, it is plausible that the overcounting is a random process: each tail independently has a chance of $e$ of being mistaken for a heads. (This is an example of an asymmetric binary channel in communications theory.) For this situation there is a clear, convincing analysis, as follows. Model 1 A model for flipping the coin is a box full of tickets on which "heads" is written on an unknown proportion $p$ of the tickets and on the remaining tickets "tails" is written. After pulling a ticket randomly from this box, if it reads "tails" the experimenter goes to a second box containing tickets reading either "keep" and "change" and pulls one of them at random. If it reads "change," the word "tails" is erased from the first ticket and replaced by "heads," which is recorded as an observation. 
Normally we would ask that all tickets be restored to their original states and replaced in their respective boxes before we repeat this experiment, for otherwise the proportions in each box will change slightly each time, distorting the model. However, when the boxes have enormous numbers of tickets, these slight changes are inconsequential. So bear with me and imagine conducting this experiment from the first box without replacement. Model 2 The argument hinges on noting that it does not matter when in the experiment words are written on the tickets: all that matters is what is finally read off of the tickets that are drawn. We may create an equivalent model, then, by first extracting all the tickets with "tails" from the first box and, for each one of them, separately drawing a ticket from the second box, performing its stipulated action, and replacing the possibly altered ticket into the first box. This causes some proportion $e$ of the "tails" tickets to be erased and "heads" to be written on them. The first box now has a proportion $p + e(1-p)$ of "heads" tickets in it due to this rewriting step. The experiment proceeds by drawing $n$ tickets randomly from this doctored box. The solution I hope it's clear that the two models are the same: they differ only in that in the second model, the erasures occur earlier in the process. The point is that the second model consists of draws from a single box: this is a binomial experiment. As such we can estimate the box's proportion $p + e(1-p)$ but, no matter how many times we repeat this experiment, we cannot estimate $p$ or $e$ separately.
MLE for simulated case of binomial p with constant labelling rate No, it is not possible to estimate the overcounting error rate $e$. This experiment behaves essentially like a Binomial experiment in which the erroneous "heads" cannot be distinguished from true hea
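A brief R simulation of the conclusion above: only q = p + e(1 - p) is identifiable, so two different (p, e) pairs with the same q generate statistically indistinguishable head counts. The pairs below are chosen to both give q = 0.64; the sample size and repetition count are arbitrary.

set.seed(10)
n <- 2000
reps <- 10000
q1 <- 0.6 + 0.10 * (1 - 0.6)   # p = 0.6,  e = 0.10  ->  q = 0.64
q2 <- 0.5 + 0.28 * (1 - 0.5)   # p = 0.5,  e = 0.28  ->  q = 0.64
k1 <- rbinom(reps, n, q1)
k2 <- rbinom(reps, n, q2)
c(mean(k1), mean(k2))          # essentially identical means ...
c(sd(k1), sd(k2))              # ... and spreads: the counts cannot separate p from e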
50,674
Tests for spatial stationarity (homogeneity)?
Three comments based on a mixture of experience and prejudice: What should be important here is that the researcher's substantive knowledge (or that of a collaborator), which may make the question obvious at some level. That is, it may be foolish to apply models assuming stationarity if there are known to be gross trends across a region that are important for the variable(s) being modelled. At a minimum, expect flak from experts if your application is a real stretch. Nonstationarity may well be evident by fitting a model and then assessing the fit, e.g. if the fit is lousy, nonstationarity may be a likely suspect. But as often in statistics, an oversimplified model that is only a crude approximation may still be of use or interest. Nonstationarity may be evident by inspection of basic maps, etc. In short, this answer stresses the scope for considering the answer informally as well as by seeking formal tests. "Informally" does include ensuring that subject-matter knowledge and expertise play a key part.
Tests for spatial stationarity (homogeneity)?
Three comments based on a mixture of experience and prejudice: What should be important here is that the researcher's substantive knowledge (or that of a collaborator), which may make the question o
Tests for spatial stationarity (homogeneity)? Three comments based on a mixture of experience and prejudice: What should be important here is that the researcher's substantive knowledge (or that of a collaborator), which may make the question obvious at some level. That is, it may be foolish to apply models assuming stationarity if there are known to be gross trends across a region that are important for the variable(s) being modelled. At a minimum, expect flak from experts if your application is a real stretch. Nonstationarity may well be evident by fitting a model and then assessing the fit, e.g. if the fit is lousy, nonstationarity may be a likely suspect. But as often in statistics, an oversimplified model that is only a crude approximation may still be of use or interest. Nonstationarity may be evident by inspection of basic maps, etc. In short, this answer stresses the scope for considering the answer informally as well as by seeking formal tests. "Informally" does include ensuring that subject-matter knowledge and expertise play a key part.
Tests for spatial stationarity (homogeneity)? Three comments based on a mixture of experience and prejudice: What should be important here is that the researcher's substantive knowledge (or that of a collaborator), which may make the question o
50,675
Tests for spatial stationarity (homogeneity)?
Leung, Mei, and Zhang have developed two tests for whether GWR is a better fit than OLS regression. Their paper is here, but it's behind a paywall if you don't have academic access. As for variograms, etc., I know that Bivand et al. cover tools and mechanisms in their book. I know a pdf of this exists because I have it, but I forgot where I got it from. As far as spatial stationarity in general goes, I'm a little bit skeptical; GWR is the only method I have looked at specifically, but it seems to give contradictory answers, perhaps because of its susceptibility to collinearity. I don't know what your particular application is, but in home price hedonics there has been some movement to autoregressive models that incorporate heterogeneity sympathetically (like the spatial Durbin model).
Tests for spatial stationarity (homogeneity)?
Leung, Mei, and Zhang have developed two tests for whether GWR is a better fit than OLS regression. Their paper is here, but it's behind a paywall if you don't have academic access. As for variograms,
Tests for spatial stationarity (homogeneity)? Leung, Mei, and Zhang have developed two tests for whether GWR is a better fit than OLS regression. Their paper is here, but it's behind a paywall if you don't have academic access. As for variograms, etc. I know that Bivand, et al. cover tools and mechanisms In their book. I know a pdf of this exists because I have it, but I forgot where I got it from. As far as spatial stationarity in general, I'm a little bit skeptical; GWR is the only method I have looked at specifically, but it seems to give contradictory answers perhaps because of its susceptibility to collinearity. I don't know what your particular application is, but in home price hedonics there has been some movement to autoregressive models that incorporate heterogeneity sympathetically (like the spatial Durbin model).
Tests for spatial stationarity (homogeneity)? Leung, Mei, and Zhang have developed two tests for whether GWR is a better fit than OLS regression. Their paper is here, but it's behind a paywall if you don't have academic access. As for variograms,
50,676
Speed up web a/b tests with sample size checkpoints
This approach doesn't have the properties you would have if you fixed the sample size ahead of time. The situation where you look for a particular result while your experiment continues and have some 'stopping rule' (halt your experiment early if a particular situation is achieved) is a version of sequential analysis; see also SPRT. You have to take care that the properties of your actual decision rules are doing what you want - you can't apply the properties of one situation to another and expect that it will work. For example, you won't have the power you have calculated at the given sample sizes if you're doing sequential testing; the required sample sizes will be somewhat larger. On the other hand, when your effects are substantial, you'll often end up stopping earlier - meaning smaller sample sizes/faster decisions. Specifically, which properties are affected if one were to terminate the test at, say, 490 samples because a 20% improvement over the control is shown? First, estimates will be biased, but standard errors and Type I and (as already mentioned) Type II error rates are also affected - plus anything any of these feed into. The SPRT link I gave outlines a general approach that is used with early stopping in hypothesis testing. Phillip Good does some work with discrete sequential analysis in his book Permutation, Parametric, and Bootstrap Tests of Hypotheses, section 6.7.
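As a hedged illustration of the point about error rates, here is a small R simulation with made-up conversion rates and checkpoint sizes: both arms have the same true rate, yet stopping at the first interim look with p < 0.05 rejects far more often than the nominal 5% of the time.

set.seed(42)
p0          <- 0.10                      # common conversion rate under the null (assumed)
checkpoints <- c(100, 250, 500, 1000)    # hypothetical interim looks, per arm
reps        <- 2000

one_run <- function() {
  a <- rbinom(max(checkpoints), 1, p0)   # arm A outcomes
  b <- rbinom(max(checkpoints), 1, p0)   # arm B outcomes
  for (n in checkpoints) {
    pval <- suppressWarnings(prop.test(c(sum(a[1:n]), sum(b[1:n])), c(n, n))$p.value)
    if (!is.na(pval) && pval < 0.05) return(TRUE)  # stop early, "declare a winner"
  }
  FALSE
}

mean(replicate(reps, one_run()))         # realized Type I error, well above 0.05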
Speed up web a/b tests with sample size checkpoints
This approach doesn't have the properties you would have if you fixed the sample size ahead of time. The situation where you look for a particular result while your experiment continues and have some
Speed up web a/b tests with sample size checkpoints This approach doesn't have the properties you would have if you fixed the sample size ahead of time. The situation where you look for a particular result while your experiment continues and have some 'stopping rule' (halt your experiment early if a particular situation is achieved) is a version of sequential analysis; see also SPRT. You have to take care that the properties of your actual decision rules are doing what you want - you can't apply the properties of one situation to another and expect that it will work. For example, you won't have the power you have calculated at the given sample sizes if you're doing sequential testing; the required sample sizes will be somewhat larger. On the other hand, when your effects are substantial, you'll often end up stopping earlier - meaning smaller sample sizes/faster decisions. Specifically which properties are affected if one were to terminate the test at, say, 490 samples because 20% improvement over the control is shown? First, estimates will be biased, but also standard errors, Type I and (as already mentioned) Type II error rates are affected - plus anything any of these will impact. The SPRT link I gave outlines a general approach that is used with early stopping with hypothesis testing. Phillip Good does some work with discrete sequential analysis in his book Permutation, Parametric, and Bootstrap Tests of Hypotheses in section 6.7
Speed up web a/b tests with sample size checkpoints This approach doesn't have the properties you would have if you fixed the sample size ahead of time. The situation where you look for a particular result while your experiment continues and have some
50,677
Understanding the construction of Dirichlet process
1) infinite does not mean continuous. 2) The total mass (measure) of a probabilistic measure F is always 1. In the former case, F is partitioned by k bins (discrete). However F is not required to be discrete. If F is continuous, it can be partitioned by k regions (T1...Tk)
Understanding the construction of Dirichlet process
1) infinite does not mean continuous. 2) The total mass (measure) of a probabilistic measure F is always 1. In the former case, F is partitioned by k bins (discrete). However F is not required to be
Understanding the construction of Dirichlet process 1) infinite does not mean continuous. 2) The total mass (measure) of a probabilistic measure F is always 1. In the former case, F is partitioned by k bins (discrete). However F is not required to be discrete. If F is continuous, it can be partitioned by k regions (T1...Tk)
Understanding the construction of Dirichlet process 1) infinite does not mean continuous. 2) The total mass (measure) of a probabilistic measure F is always 1. In the former case, F is partitioned by k bins (discrete). However F is not required to be
50,678
Understanding the construction of Dirichlet process
Not an answer but rather a long comment regarding your point 2 (I am not an expert, so take my explanation with caution). It was not clear to me in what sense you have used the terms infinite and finite in your statement 2. Below is my take on how you can argue that indeed "In the finite case, $F$ is indexed by $1,\ldots,k$ while in the infinite case, $F$ should be indexed by partitions..."; the latter, I think, should be changed to indexed by sets, not partitions. $\underline{Finite~~case}$ When you say "in the finite case" you probably refer to the situation where you take $$F \sim \text{Dirichlet } \textbf{distribution } \text{of dimension } k\,.$$ That is, your $F$ is a random vector in $\mathbb{R}^k$ drawn according to the Dirichlet distribution. In particular this means that $F \equiv (x_1, \ldots, x_k)$ satisfies $\sum_{i=1}^k x_i = 1$. The fact that the elements of $F$ sum to $1$ allows you to associate with $F$, so to speak, a distribution $\widetilde{F}$ over the indexes $i = 1,\ldots, k$; indeed, take $$\widetilde{F}(m) := \sum_{j \leq m} x_j \,.$$ This $\widetilde{F}$ is a valid distribution on the indexes $1, \ldots, k$. Namely, you can define a random variable $Z$ on $1, \ldots, k$ which is distributed according to $\widetilde{F}$, meaning that $$ P(Z \leq m) = \widetilde{F}(m) = \sum_{j \leq m} x_j\,.$$ The scheme therefore allows you to generate a random distribution over the index set $\{1, \ldots, k\}$ (random since $\{x_i\}$, and therefore your $\widetilde{F}$, is random). $\underline{Infinite~~case}$. In the infinite case, I presume you refer to $\widetilde{F}$ drawn from a Dirichlet process; in that case $\widetilde{F}$ is already a distribution, so there is no need to associate a new distribution with it. So the Dirichlet process is already a scheme to generate random distributions. This random $\widetilde{F}$ is defined on a measure space $\Omega$, where $\Omega$ is in turn the domain of the "base distribution" $H_0:\Omega \to \mathbb{R}$, which is a parameter of the Dirichlet process. Ultimately, $\widetilde{F}$ defines a random variable $Z$ on $\Omega$, distributed according to $$P(Z \in A) = \widetilde{F}(A), \qquad A \subset \Omega.$$ Unlike in the finite case, where our random variable was defined on a finite index set, here $P(Z \in A)$ is defined for any set $A$ in the sigma algebra of $\Omega$. Lastly, to draw the analogy with the Dirichlet distribution property that $\sum_{i=1}^k x_i = 1$, note that we must have, for any (disjoint) partition $\Omega = \cup_{i=1}^k A_i$, that $\sum_{i=1}^k P(Z \in A_i) = 1$. Here $P(Z \in A_i)$ takes the place of $x_i$; however, we can take $k$ to be $\infty$, meaning a partition $\cup_{i=1}^\infty A_i$, and still $\sum_{i=1}^\infty P(Z \in A_i) = 1$.
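To make the finite case concrete, here is a short R sketch (no special packages; the Dirichlet draw is built from normalized Gamma variables, a standard construction): one draw of $F=(x_1,\ldots,x_k)$ defines a random distribution over the index set, from which $Z$ can then be sampled.

set.seed(1)
k     <- 5
alpha <- rep(1, k)                # Dirichlet concentration parameters (assumed)

# One draw F = (x_1, ..., x_k) from Dirichlet(alpha) via normalized Gammas
g <- rgamma(k, shape = alpha, rate = 1)
x <- g / sum(g)
sum(x)                            # the components sum to 1

# F defines a random distribution over the index set 1..k; sample Z from it
Z <- sample(1:k, size = 1000, replace = TRUE, prob = x)
table(Z) / 1000                   # empirical frequencies approximate x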
Understanding the construction of Dirichlet process
Not an answer but rather a long comment regarding your point 2 (I am not an expert so take my explanation with caution) It was not clear to me in what sense you have used the terms infinite and finit
Understanding the construction of Dirichlet process Not an answer but rather a long comment regarding your point 2 (I am not an expert so take my explanation with caution) It was not clear to me in what sense you have used the terms infinite and finite in your statement 2. Below is my take on how you can argue that indeed "In the finite case, $F$ is indexed by $1,\ldots,k$ while in the infinite case, $F$ should be indexed by partitions...", the later I think should be changed to indexed by sets not partitions. $\underline{Finite~~case}$ When you say "in the finite case" you probably refer to the situation where you take $$F \sim \text{Dirichlet } \textbf{distribution } \text{of dimension } k\,.$$ That is, your $F$ is a random vector in $\mathbb{R}^k$ drawn according to the Dirichlet distribution. In particular this means that $ F \equiv(x_1, \ldots x_k)$ admits $\sum_i^k x_i = 1$. The fact that elements of $F$ sum to $1$ allows you to associate with $F$, so to speak, a distribution $\widetilde{F}$ over indexes $i= 1,\ldots, k$, indeed take $$\widetilde{F}(m) := \sum_{j \leq m} x_i \,.$$ This $\widetilde{F}$ is a valid distribution on indexes $1, \ldots, k$. Namely you can define random variable $Z$ on $1, \ldots, k$ which is distributed according to $\widetilde{F}$ meaning that $$ P(X \leq m) = \widetilde{F}(m) := \sum_{j \leq m} x_i\,.$$ The scheme therefor allows you to generate a random distribution over index set $\{1, \ldots, k\}$ (random since $\{x_i\}$ and therefor your $\widetilde{F}$ is random). $\underline{Infinite~~case}$. In the infinite case, I presume you refer to $\widetilde{F}$ drawn from Dirichle process in that case $\widetilde{F}$ is already a distribution, no need to associate a new distribution with it. So Dirichle process is already a scheme to generate random distributions. This random $\widetilde{F}$ is defined on measure space $\Omega$, where $\Omega$ is in turn the domain of "base distribution" $H_0:\Omega \to \mathbb{R}$ which is a parameter of a Dirichle process . Ultimatly, $\widetilde{F}$ defines a random variable $Z$ on $\Omega$, which is distributed for $A \subset \Omega$ according to $$P(Z \in A) \sim \widetilde{F}$$ unlike in the finite case where our random variable was defined on finite index set here $Z$ is defined on any subset of sigma algebra of $\Omega$. Lastly to draw the analogy with Dirichlet distribution property that $\sum_{i=1}^k x_i = 1$, note that we must have for any (disjoint) partition $\Omega = \cup_{i=1}^k A_i$ that $\sum_{i=1}^k P(Z \in A_i) = 1$. Here $P(Z \in A_i)$ take the place of $x_i$, however we can take $k$ to be $\infty$, meaning a partition $\cup_{i=1}^\infty A_i$ and still $\sum_{i=1}^\infty P(Z \in A_i) = 1$.
Understanding the construction of Dirichlet process Not an answer but rather a long comment regarding your point 2 (I am not an expert so take my explanation with caution) It was not clear to me in what sense you have used the terms infinite and finit
50,679
Why don't we look at $R^2$ when fitting an autoregressive model?
IMHO, using the R2 is irrelevant since it would just push you to use a larger regression order $k$, which would generally give you a larger R2. The idea of fitting an AR (or any GLP) is to reproduce the underlying process with a model that is as simple as possible (since the idea is also to extract meaning out of the different coefficients). This is why people generally look at information criteria such as the BIC or the AIC, which combine a penalty for the number of parameters in the model with a goodness of fit based on the likelihood of the fitted parameters (and hence of the model). Now I guess you could consider the adjusted R2, but it would be somehow less general, which I guess is the reason the AIC, BIC and other similar ICs are popular.
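A brief R sketch of the usual practice (simulated data, arbitrary maximum order): candidate AR orders are compared via AIC rather than R2.

set.seed(123)
x <- arima.sim(model = list(ar = c(0.6, -0.3)), n = 500)   # a simulated AR(2) series

# Compare information criteria across candidate orders k instead of R^2
aics <- sapply(0:6, function(k) AIC(arima(x, order = c(k, 0, 0))))
names(aics) <- 0:6
aics
as.integer(names(which.min(aics)))   # order minimizing AIC (tends to be 2 here)

# ar() automates the same idea with its own AIC-based order selection
ar(x, order.max = 6)$order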
Why don't we look at $R^2$ when fitting an autoregressive model?
IMHO, using the R2 is irrelevant since it would just push you to use a larger regression order $k$ which would generally give you a smaller R2. The idea of fitting an AR (or any GLP) is to reproduce t
Why don't we look at $R^2$ when fitting an autoregressive model? IMHO, using the R2 is irrelevant since it would just push you to use a larger regression order $k$ which would generally give you a smaller R2. The idea of fitting an AR (or any GLP) is to reproduce the underlying process with a model that is as simple as possible (since the idea is also to extract meaning out of the different coefficients) This is why people generally look at information criterion such as the BIC or the AIC that englobe a penalty for the number of parameters in the model with a goodness of fit based on the likelihood of the fitted parameters (and hence of the model). Now I guess you could consider the adjusted R2 but it would be somehow less general which I guess is the reason the AIC, BIC and other similar IC are popular.
Why don't we look at $R^2$ when fitting an autoregressive model? IMHO, using the R2 is irrelevant since it would just push you to use a larger regression order $k$ which would generally give you a smaller R2. The idea of fitting an AR (or any GLP) is to reproduce t
50,680
Standard error for a statistic obtained via simulation
I do not know what you mean by "certain conditions", but if you want to calculate the uncertainty in the regression coefficients via simulation, you generally have to take account of the uncertainty about the residual standard deviation (because Var$(\,\hat{\beta}\,|\,X\,) = \sigma^2\,(X^TX)^{-1}$ and the estimate of $\sigma^2$ follows a scaled $\chi^2_{n-k}$ distribution, though maybe there is no such uncertainty in your case) as well as the uncertainty in the regression coefficients themselves (which are often assumed to be multivariate normal distributed). There is a nice chapter about simulations in Gelman, A., & Hill, J. (2006), Data Analysis Using Regression and Multilevel/Hierarchical Models, Cambridge University Press, which also provides an R function "sim" (in the arm package) to do what you want to do. You can check how they programmed this function in R to get an impression of what you have to do, or you can read their chapter about simulations in the book.
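A rough, manual R sketch of the kind of simulation arm::sim performs for a simple lm fit (this is not the package's exact code, just the idea): draw $\sigma^2$ from its scaled inverse-$\chi^2$ distribution, then draw the coefficients from their conditional multivariate normal.

library(MASS)   # for mvrnorm

set.seed(1)
n   <- 100
x   <- rnorm(n)
y   <- 1 + 2 * x + rnorm(n)
fit <- lm(y ~ x)

nsim  <- 1000
df    <- fit$df.residual
sigma <- summary(fit)$sigma
sims  <- t(sapply(1:nsim, function(i) {
  s2 <- sigma^2 * df / rchisq(1, df)               # draw sigma^2
  mvrnorm(1, coef(fit), vcov(fit) / sigma^2 * s2)  # draw beta given sigma^2
}))

apply(sims, 2, sd)       # simulation-based standard errors ...
summary(fit)$coef[, 2]   # ... compare with the analytic ones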
Standard error for a statistic obtained via simulation
I do not know what you mean with "certain conditions" but if you want to calculate the uncertainty in the regression coefficients via simulation, you have to take account of the uncertainty about the
Standard error for a statistic obtained via simulation I do not know what you mean with "certain conditions" but if you want to calculate the uncertainty in the regression coefficients via simulation, you have to take account of the uncertainty about the residual standard deviation in general (because Var$(\,\hat{\beta}\,|\,X\,) = \sigma^2\,(X^TX)^{-1}$, $\sigma^2$ should be $\chi^2_{n-k}$ distributed, but maybe here is no uncertainty in your case) as well as the regression coefficients (which are often assumed to be multivariate normal distributed). There is a nice chapter about simulations in Gelman, A., & Hill, J. (2006). Data analysis using regression and multilevel/hierarchical models. Cambridge University Press which provides also an R function "sim" (in the arm package) to do want you want to do. You can check how they programmed this function in R to get an impression what you have to do, or you read their chapter about simulations in the book.
Standard error for a statistic obtained via simulation I do not know what you mean with "certain conditions" but if you want to calculate the uncertainty in the regression coefficients via simulation, you have to take account of the uncertainty about the
50,681
Estimating parameters of a normal distribution from noisy observation of samples
I think I've figured it out now. As stated in the comments, I think there is indeed a problem with the estimators. I found another answer on using a linear estimator for a similar problem. If I didn't make any mistakes, the estimator for $\mu$ in my case should be $$\hat{\mu} = \frac{ \sum_{i=1..n} z_i/(C+\Sigma_i)}{ \sum_{i=1..n} 1/(C+\Sigma_i)}$$ Since I don't actually know $C$, I guess I could use the estimate $\hat{C}$ based on the samples. Based on the Wikipedia Article on weighted mean and then subtracting the covariance influence from the $\Sigma_i$, I can estimate the covariance as $$\hat{C} = \left(\frac{\sum w_i}{(\sum w_i)^2 - \sum w_i^2} \sum w_i(z_i - \hat{\mu}) \right) - \frac{n}{\sum w_i}$$ where $w_i = 1/\Sigma_i$.
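A small R check of the weighted estimator for $\hat{\mu}$ (true values of $\mu$ and $C$ are assumed here purely so the estimate can be compared against them; in practice $C$ would be replaced by $\hat{C}$, e.g. iteratively):

set.seed(1)
n     <- 500
mu    <- 3                          # true mean (assumed, for checking)
C     <- 4                          # true variance of the underlying normal (assumed)
Sigma <- runif(n, 0.5, 2)           # known, observation-specific noise variances

# z_i = x_i + noise_i with x_i ~ N(mu, C) and noise_i ~ N(0, Sigma_i)
z <- rnorm(n, mu, sqrt(C)) + rnorm(n, 0, sqrt(Sigma))

w      <- 1 / (C + Sigma)           # precision weights
mu_hat <- sum(w * z) / sum(w)
c(mu_hat, mean(z))                  # weighted estimate vs. the plain mean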
Estimating parameters of a normal distribution from noisy observation of samples
I think I've figured it out now. As stated in the comments, I think there is indeed a problem with the estimators. I found another answer on using a linear estimator for a similar problem. If I didn't
Estimating parameters of a normal distribution from noisy observation of samples I think I've figured it out now. As stated in the comments, I think there is indeed a problem with the estimators. I found another answer on using a linear estimator for a similar problem. If I didn't make any mistakes, the estimator for $\mu$ in my case should be $$\hat{\mu} = \frac{ \sum_{i=1..n} z_i/(C+\Sigma_i)}{ \sum_{i=1..n} 1/(C+\Sigma_i)}$$ Since I don't actually know $C$, I guess I could use the estimate $\hat{C}$ based on the samples. Based on the Wikipedia Article on weighted mean and then subtracting the covariance influence from the $\Sigma_i$, I can estimate the covariance as $$\hat{C} = \left(\frac{\sum w_i}{(\sum w_i)^2 - \sum w_i^2} \sum w_i(z_i - \hat{\mu}) \right) - \frac{n}{\sum w_i}$$ where $w_i = 1/\Sigma_i$.
Estimating parameters of a normal distribution from noisy observation of samples I think I've figured it out now. As stated in the comments, I think there is indeed a problem with the estimators. I found another answer on using a linear estimator for a similar problem. If I didn't
50,682
Robust parameter estimation for shifted log normal distribution
In case anyone is still interested, I have managed to implement Aristizabal's formulae in Java. This is more proof-of-concept than the requested "robust" code, but it is a starting point. The methods below assume import java.util.Arrays; and import java.util.function.DoubleUnaryOperator;. /** * Computes the point estimate of the shift offset (gamma) from the given sample. The sample array will be sorted by this method.<p> * Cf. Aristizabal section 2.2 ff. * @param sample {@code double[]}, will be sorted * @return gamma point estimate */ public static double pointEstimateOfGammaFromSample(double[] sample) { Arrays.sort(sample); DoubleUnaryOperator func = x->calculatePivotalOfSortedSample(sample, x)-1.0; double upperLimit = sample[0]; double lowerLimit = 0; double gamma = bisect(func, lowerLimit, upperLimit); return gamma; } /** * Cf. Aristizabal's equation (2.3.1) * @param sample {@code double[]}, should be sorted in ascending order * @param gamma shift offset * @return pivotal value of sample */ private static double calculatePivotalOfSortedSample(final double[] sample, double gamma) { final int n=sample.length; final int n3=n/3; final double mid = avg(sample, gamma, n3+1, n-n3); final double low = avg(sample, gamma, 1, n3); final double upp = avg(sample, gamma, n-n3+1, n); final double result = (mid-low)/(upp-mid); return result; } /** * Computes average of sample values from {@code sample[l-1]} to {@code sample[u-1]}. * @param sample {@code double[]}, should be sorted in ascending order * @param gamma shift offset * @param l lower limit * @param u upper limit * @return average */ private static double avg(double[] sample, double gamma, int l, int u) { double sum = 0.0; for (int i=l-1;i<u;sum+=Math.log(sample[i++]-gamma)); final int n = u-l+1; return sum/n; } /** * Naive bisection implementation. Should always complete if the given values actually straddle the root. * Will call {@link #secant(DoubleUnaryOperator, double, double)} if they do not, in which case the * call may not complete. * @param func Function to solve for root value * @param lowerLimit Some value for which the given function evaluates < 0 * @param upperLimit Some value for which the given function evaluates > 0 * @return x value, somewhere between the lower and upper limits, which evaluates close enough to zero */ private static double bisect(DoubleUnaryOperator func, double lowerLimit, double upperLimit) { final double eps = 0.000001; double low=lowerLimit; double valAtLow = func.applyAsDouble(low); double upp=upperLimit; double valAtUpp = func.applyAsDouble(upp); if (valAtLow*valAtUpp>0) { // Same sign at both limits: not straddling the root, switch to secant method return secant(func, lowerLimit, upperLimit); } System.out.printf("bisect %f@%f -- %f@%f%n", valAtLow, low, valAtUpp, upp); double mid; while(true) { mid = (upp+low)/2; if (Math.abs(upp-low)/low<eps) break; double val = func.applyAsDouble(mid); if (Math.abs(val)<eps) break; if (val<0) low=mid; else upp=mid; } return mid; } /** * Naive secant root solver implementation. May not complete if root not found. 
* @param f Function to solve for root value * @param a Some value for which the given function evaluates * @param b Some value for which the given function evaluates * @return x value which evaluates close enough to zero */ static double secant(final DoubleUnaryOperator f, double a, double b) { double fa = f.applyAsDouble(a); if (fa==0) return a; double fb = f.applyAsDouble(b); if (fb==0) return b; System.out.printf("secant %f@%f -- %f@%f%n", fa, a, fb, b); if (fa*fb<0) { return bisect(f, a, b); } while ( Math.abs(b-a) > Math.abs(0.00001*a) ) { final double m = (a+b)/2; final double k = (fb-fa)/(b-a); final double fm = f.applyAsDouble(m); final double x = m-fm/k; if (Math.abs(fa)<Math.abs(fb)) { // |f(a)|<|f(b)|; keep a, replace b with x b=x; fb=f.applyAsDouble(b); } else { // |f(a)|>=|f(b)|; keep b, replace a with x a=x; fa=f.applyAsDouble(a); } if (fa==0) return a; if (fb==0) return b; if (fa*fb<0) { // Straddling root; switch to bisect method return bisect(f, a, b); } } return (a+b)/2; }
Robust parameter estimation for shifted log normal distribution
In case anyone is still interested, I have managed to implement Aristizabal's formulae in Java. This is more proof-of-concept than the requested "robust" code, but it is a starting point. /** * Compu
Robust parameter estimation for shifted log normal distribution In case anyone is still interested, I have managed to implement Aristizabal's formulae in Java. This is more proof-of-concept than the requested "robust" code, but it is a starting point. /** * Computes the point estimate of the shift offset (gamma) from the given sample. The sample array will be sorted by this method.<p> * Cf. Aristizabal section 2.2 ff. * @param sample {@code double[]}, will be sorted * @return gamma point estimate */ public static double pointEstimateOfGammaFromSample(double[] sample) { Arrays.sort(sample); DoubleUnaryOperator func = x->calculatePivotalOfSortedSample(sample, x)-1.0; double upperLimit = sample[0]; double lowerLimit = 0; double gamma = bisect(func, lowerLimit, upperLimit); return gamma; } /** * Cf. Aristizabal's equation (2.3.1) * @param sample {@code double[]}, should be sorted in ascending order * @param gamma shift offset * @return pivotal value of sample */ private static double calculatePivotalOfSortedSample(final double[] sample, double gamma) { final int n=sample.length; final int n3=n/3; final double mid = avg(sample, gamma, n3+1, n-n3); final double low = avg(sample, gamma, 1, n3); final double upp = avg(sample, gamma, n-n3+1, n); final double result = (mid-low)/(upp-mid); return result; } /** * Computes average of sample values from {@code sample[l-1]} to {@code sample[u-1]}. * @param sample {@code double[]}, should be sorted in ascending order * @param gamma shift offset * @param l lower limit * @param u upper limit * @return average */ private static double avg(double[] sample, double gamma, int l, int u) { double sum = 0.0; for (int i=l-1;i<u;sum+=Math.log(sample[i++]-gamma)); final int n = u-l+1; return sum/n; } /** * Naive bisection implementation. Should always complete if the given values actually straddles the root. * Will call {@link #secant(DoubleUnaryOperator, double, double)} if they do not, in which case the * call may not complete. * @param func Function solve for root value * @param lowerLimit Some value for which the given function evaluates < 0 * @param upperLimit Some value for which the given function evaluates > 0 * @return x value, somewhere between the lower and upper limits, which evaluates close enough to zero */ private static double bisect(DoubleUnaryOperator func, double lowerLimit, double upperLimit) { final double eps = 0.000001; double low=lowerLimit; double valAtLow = func.applyAsDouble(low); double upp=upperLimit; double valAtUpp = func.applyAsDouble(upp); if (valAtLow*valAtLow>0) { // Switch to secant method return secant(func, lowerLimit, upperLimit); } System.out.printf("bisect %f@%f -- %f@%f%n", valAtLow, low, valAtUpp, upp); double mid; while(true) { mid = (upp+low)/2; if (Math.abs(upp-low)/low<eps) break; double val = func.applyAsDouble(mid); if (Math.abs(val)<eps) break; if (val<0) low=mid; else upp=mid; } return mid; } /** * Naive secant root solver implementation. May not complete if root not found. 
* @param f Function solve for root value * @param a Some value for which the given function evaluates * @param b Some value for which the given function evaluates * @return x value which evaluates close enough to zero */ static double secant(final DoubleUnaryOperator f, double a, double b) { double fa = f.applyAsDouble(a); if (fa==0) return a; double fb = f.applyAsDouble(b); if (fb==0) return b; System.out.printf("secant %f@%f -- %f@%f%n", fa, a, fb, b); if (fa*fb<0) { return bisect(f, a, b); } while ( abs(b-a) > abs(0.00001*a) ) { final double m = (a+b)/2; final double k = (fb-fa)/(b-a); final double fm = f.applyAsDouble(m); final double x = m-fm/k; if (Math.abs(fa)<Math.abs(fb)) { // f(a)<f(b); Choose x and a b=x; fb=f.applyAsDouble(b); } else { // f(a)>=f(b); Choose x and b a=x; fa=f.applyAsDouble(a); } if (fa==0) return a; if (fb==0) return b; if (fa*fb<0) { // Straddling root; switch to bisect method return bisect(f, a, b); } } return (a+b)/2; }
Robust parameter estimation for shifted log normal distribution In case anyone is still interested, I have managed to implement Aristizabal's formulae in Java. This is more proof-of-concept than the requested "robust" code, but it is a starting point. /** * Compu
50,683
predict() - multinomial logistic regression
When the estimates do not have the expected sign, the usual suspect is multicollinearity. Below is a passage from the Wikipedia page. The usual interpretation of a regression coefficient is that it provides an estimate of the effect of a one unit change in an independent variable, $X_1$, holding the other variables constant. If $X_1$ is highly correlated with another independent variable, $X_2$, in the given data set, then we have a set of observations for which $X_1$ and $X_2$ have a particular linear stochastic relationship. We don't have a set of observations for which all changes in $X_1$ are independent of changes in $X_2$, so we have an imprecise estimate of the effect of independent changes in $X_1$. You can find more about the adverse effects of multicollinearity, and strategies to fight it, by reading this question and that question.
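A hedged R illustration of the mechanism (using lm for simplicity; the same logic applies to a multinomial logit): when two predictors are nearly collinear, the individual coefficients have huge standard errors and can come out with unexpected signs even though both true effects are positive.

set.seed(1)
n  <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.05)          # x2 almost collinear with x1
y  <- 1 + 2 * x1 + 1 * x2 + rnorm(n)    # both true coefficients positive

fit <- lm(y ~ x1 + x2)
summary(fit)$coef    # large standard errors; estimated signs can flip from sample to sample

# Variance inflation factors flag the problem (requires the car package)
car::vif(fit)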
predict() - multinomial logistic regression
When the estimates do not have the expected sign, the usual suspect is multicollinearity. Below is a passage from the Wikipedia page. The usual interpretation of a regression coefficient is that it p
predict() - multinomial logistic regression When the estimates do not have the expected sign, the usual suspect is multicollinearity. Below is a passage from the Wikipedia page. The usual interpretation of a regression coefficient is that it provides an estimate of the effect of a one unit change in an independent variable, $X_1$, holding the other variables constant. If is highly correlated with another independent variable, $X_2$, in the given data set, then we have a set of observations for which $X_1$ and $X_2$ have a particular linear stochastic relationship. We don't have a set of observations for which all changes in $X_1$ are independent of changes in $X_2$, so we have an imprecise estimate of the effect of independent changes in $X_1$. You can find more about the adverse effects of multicollinearity, and strategies to fight it by reading this question and that question.
predict() - multinomial logistic regression When the estimates do not have the expected sign, the usual suspect is multicollinearity. Below is a passage from the Wikipedia page. The usual interpretation of a regression coefficient is that it p
50,684
Creating fuzzy values for binary data
I have never seen this done, and I doubt other people have either. One usually gets informed answers on this site within a couple of hours of posting something. It's been a day, and no joy. My thinking is this: if you want to tell the model that some values are more trustworthy than others, use weights. If you downweight values where you doubt the accuracy of the data, the model will basically accept a worse fit at that point -- which is what you want. Example: suppose you have a very "married" set of covariates for someone coded "unmarried" in the dodgy data set. Without weights, the fitting algorithm could distort the parameter estimates in order to get some kind of fit. With weights, the algorithm need not try so hard. In effect, it lets you have bigger residuals when you don't trust the data. If you want to go with your first idea of substituting data with probabilities, I would iterate: estimate probabilities that someone is married or not, then fit the model with my best guesses, then go back and adjust the estimates. This is an EM approach. So, I would not replace 0's and 1's with 0.8 and 0.2 in the fit. I would use 1 or 0 according as the estimated probability was greater than or less than 0.5 - but then I would go back and adjust the probabilities on the basis of lack of fit at those points. If you look at what happens in a logistic regression model, the math involved really expects that the data are going to be 0's or 1's. I think you want to stick with that. My advice boils down to using weights or estimating marital status from the rest of the data.
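A sketch of the weighting idea in R, with simulated data and an arbitrary down-weight of 0.5 for records from the less trustworthy source (quasibinomial is used only to avoid the warning glm gives about non-integer weights with family = binomial):

set.seed(1)
n <- 300
x <- rnorm(n)
married      <- rbinom(n, 1, plogis(0.5 + 1.2 * x))   # simulated 0/1 outcome
dodgy_source <- rbinom(n, 1, 0.5)                     # 1 = record from the unreliable source

# Keep the responses as 0/1, but give less weight to the dodgy records
w <- ifelse(dodgy_source == 1, 0.5, 1)                # 0.5 is an arbitrary, assumed down-weight

fit <- glm(married ~ x, family = quasibinomial, weights = w)
summary(fit)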
Creating fuzzy values for binary data
I have never seen this done, and I doubt other people have either. One usually gets informed answers on this site within a couple of hours of posting something. It's been a day, and no joy. My thinkin
Creating fuzzy values for binary data I have never seen this done, and I doubt other people have either. One usually gets informed answers on this site within a couple of hours of posting something. It's been a day, and no joy. My thinking is this: if you want to tell the model that some values are more trustworthy than others, use weights. If you downweight values where you doubt the accuracy of the data, the model will basically accept a worse fit at that point -- which is what you want. Example: suppose you have a very "married" set of covariates for someone coded "unmarried" in the dodgy data set. Without weights, the fitting algorithm could distort the parameter estimates in order to get some kind of fit. With weights, the algorithm need not try so hard. In effect, it lets you have bigger residuals when you don't trust the data. If you want to go with your first idea of substituting data with probabilities, I would iterate: estimate probabilities that someone is married or not, then fit the model with my best guesses, then go back and adjust the estimates. This is an EM approach. So, I would not replace 0's and 1's with 0.8 and 0.2 in the fit. I would use 1 and 0 according as the probabilities were less than or greater than 0.5 - but then I would go back and adjust the probabilities on the basis of lack of fit at those points. If you look at what happens in a logistic regression model, the math involved really expects that the data are going to be 0's or 1's. I think you want to stick with that. My advice boils down to using weights or estimating marital status from the rest of the data.
Creating fuzzy values for binary data I have never seen this done, and I doubt other people have either. One usually gets informed answers on this site within a couple of hours of posting something. It's been a day, and no joy. My thinkin
50,685
Preliminary estimates of ARIMA in R?
In base R's arima() take a look at the method= argument. From the help docs: Fitting method: maximum likelihood or minimize conditional sum-of-squares. The default (unless there are missing values) is to use conditional-sum-of-squares to find starting values, then maximum likelihood. From a little further down in the details: Conditional sum-of-squares is provided mainly for expositional purposes. This computes the sum of squares of the fitted innovations from observation n.cond on, (where n.cond is at least the maximum lag of an AR term), treating all earlier innovations to be zero. Argument n.cond can be used to allow comparability between different fits. The ‘part log-likelihood’ is the first term, half the log of the estimated mean square. Missing values are allowed, but will cause many of the innovations to be missing. If you just wanted those initial values, you could state method='CSS'. Of course, you could also just use the full 'CSS-ML' results as your starting values...
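A short sketch of both routes on a simulated series (the init= argument of stats::arima accepts starting parameter values, so the CSS estimates can be reused explicitly if desired):

set.seed(1)
x <- arima.sim(model = list(ar = 0.7, ma = 0.4), n = 300)

# Conditional-sum-of-squares fit only: these are the "preliminary" estimates
fit_css <- arima(x, order = c(1, 0, 1), method = "CSS")
coef(fit_css)

# Reuse them as starting values for a full maximum-likelihood fit
fit_ml <- arima(x, order = c(1, 0, 1), method = "ML", init = coef(fit_css))
coef(fit_ml)

# ... which is essentially what the default method = "CSS-ML" does in one call
coef(arima(x, order = c(1, 0, 1)))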
Preliminary estimates of ARIMA in R?
In base R's arima() take a look at the method= argument. From the help docs: Fitting method: maximum likelihood or minimize conditional sum-of-squares. The default (unless there are missing values
Preliminary estimates of ARIMA in R? In base R's arima() take a look at the method= argument. From the help docs: Fitting method: maximum likelihood or minimize conditional sum-of-squares. The default (unless there are missing values) is to use conditional-sum-of-squares to find starting values, then maximum likelihood. From a little further down in the details: Conditional sum-of-squares is provided mainly for expositional purposes. This computes the sum of squares of the fitted innovations from observation n.cond on, (where n.cond is at least the maximum lag of an AR term), treating all earlier innovations to be zero. Argument n.cond can be used to allow comparability between different fits. The ‘part log-likelihood’ is the first term, half the log of the estimated mean square. Missing values are allowed, but will cause many of the innovations to be missing. If you just wanted those initial values, you could state method='CSS'. Of course, you could also just use the full 'CSS-ML' results as your starting values...
Preliminary estimates of ARIMA in R? In base R's arima() take a look at the method= argument. From the help docs: Fitting method: maximum likelihood or minimize conditional sum-of-squares. The default (unless there are missing values
50,686
What is the relationship between correlation coefficients and regression coefficients in multiple regression?
Let's assume that the variables $x_1$ and $x_2$ are centered; it will make things easier (nothing prevents you from doing that before running your regression). Then, it is straightforward to see that: $r_{y, x_1}\sigma_y=\beta_1\sigma_{x_1} + \beta_2r_{x_1, x_2} \sigma_{x_2}$ and $r_{y, x_2}\sigma_y=\beta_2\sigma_{x_2} + \beta_1r_{x_1, x_2} \sigma_{x_1}$ Hence, the relation also involves standard deviation terms and the correlation between $x_1$ and $x_2$. This should answer your second question. For example, if $r_{x_1, x_2}=1$ and $x_1 = x_2$, any solution of $\beta_1\sigma_{x_1} + \beta_2r_{x_1, x_2} \sigma_{x_2} = r_{y, x_1}\sigma_y$ leads to the same linear model for $y$. As a consequence, $\beta_1$ (or $\beta_2$) can take arbitrary values (which are going to depend on the numerical implementation of the linear regression you are using). Looking at the value of $r_{x_1, x_2}$ is then critical before relating $r_{y, x_i}$ to $\beta_1, \beta_2$.
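These identities are just the normal equations in disguise, so they hold exactly in-sample for centered variables; a quick R check with arbitrary simulated data:

set.seed(1)
n  <- 1000
x1 <- rnorm(n)
x2 <- 0.5 * x1 + rnorm(n)          # correlated predictors
y  <- 2 * x1 - x2 + rnorm(n)

# Center the variables, as assumed above
x1 <- x1 - mean(x1); x2 <- x2 - mean(x2); y <- y - mean(y)

b   <- coef(lm(y ~ x1 + x2 - 1))   # beta_1, beta_2 (no intercept after centering)
r12 <- cor(x1, x2)

lhs <- cor(y, x1) * sd(y)
rhs <- b[1] * sd(x1) + b[2] * r12 * sd(x2)
c(lhs, rhs)                        # equal up to floating-point error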
What is the relationship between correlation coefficients and regression coefficients in multiple re
Let's assume that the variable $x_1$ and $x_2$ are centered, it will make things easier (nothing prevents you to do that before doing your regression). Then, it is straightforward to see that: $r_{y,
What is the relationship between correlation coefficients and regression coefficients in multiple regression? Let's assume that the variable $x_1$ and $x_2$ are centered, it will make things easier (nothing prevents you to do that before doing your regression). Then, it is straightforward to see that: $r_{y, x_1}\sigma_y=\beta_1\sigma_{x_1} + \beta_2r_{x_1, x_2} \sigma_{x_2}$ and $r_{y, x_2}\sigma_y=\beta_2\sigma_{x_2} + \beta_1r_{x_1, x_2} \sigma_{x_1}$ Hence, the relation also involves standard deviations terms and the correlation between $x_1$ and $x_2$. This should answer your second question. For example, if $r_{x_1, x_2}=1$ and $x_1 = x_2$, any solution $\beta_1\sigma_{x_1} + \beta_2r_{x_1, x_2} \sigma_{x_2} = \sigma_y$ leads to the same linear model for $y$. As a consequence, $\beta_1$ (or $\beta_2$) can take arbitrary values (which are going to depend on the numerical implementation of the linear regression you are using). Looking at the value of $r_{x_1, x_2}$ is then critical before making any relation between $r_{y, x_i}$ and $\beta_1, \beta_2$.
What is the relationship between correlation coefficients and regression coefficients in multiple re Let's assume that the variable $x_1$ and $x_2$ are centered, it will make things easier (nothing prevents you to do that before doing your regression). Then, it is straightforward to see that: $r_{y,
50,687
How to compare coefficients of a negative binomial regression for determining relative importance?
First you'd have to figure out what change in one variable is "equal" to what change in another. The usual standardization uses the standard deviation, but that may or may not be ideal. It may not be possible to figure this out - particularly if the IVs are related to each other, in which case a change in one would go with a change in another. Once you've figured that out, you can get the predicted values from various combinations of the IVs, varying each by the amount you thought was "equal" in the first step. Another thing to do is to graph the predicted results as the independent variables change in value.
How to compare coefficients of a negative binomial regression for determining relative importance?
First you'd have to figure out what change in one variable is "equal" to a what change in another. The usual standardization uses the standard deviation, but that may or may not be ideal. It may not
How to compare coefficients of a negative binomial regression for determining relative importance? First you'd have to figure out what change in one variable is "equal" to a what change in another. The usual standardization uses the standard deviation, but that may or may not be ideal. It may not be possible to figure this out - particularly if the IVs are related to each other, in which case a change in one would go with a change in another. Once you've figured that out, you can get the predicted values from various combinations of the IVs, varying each by the amount you thought was "equal" in the first step. Another thing to do is to graph the predicted results as the independent variables change in value.
How to compare coefficients of a negative binomial regression for determining relative importance? First you'd have to figure out what change in one variable is "equal" to a what change in another. The usual standardization uses the standard deviation, but that may or may not be ideal. It may not
50,688
How to compare coefficients of a negative binomial regression for determining relative importance?
For a quick way to get at the standardized beta coefficients directly from any lm (or glm) model in R, try using lm.beta(model). In the example provided, this would be: library("MASS") nb = glm.nb(responseCountVar ~ predictor1 + predictor2 + predictor3, data=myData, control=glm.control(maxit=125)) summary(nb) library(QuantPsyc) lm.beta(nb)
How to compare coefficients of a negative binomial regression for determining relative importance?
For a quick way to get at the standardized beta coefficients directly from any lm (or glm) model in R, try using lm.beta(model). In the example provided, this would be: library("MASS") nb = glm.nb(re
How to compare coefficients of a negative binomial regression for determining relative importance? For a quick way to get at the standardized beta coefficients directly from any lm (or glm) model in R, try using lm.beta(model). In the example provided, this would be: library("MASS") nb = glm.nb(responseCountVar ~ predictor1 + predictor2 + predictor3, data=myData, control=glm.control(maxit=125)) summary(nb) library(QuantPsyc) lm.beta(nb)
How to compare coefficients of a negative binomial regression for determining relative importance? For a quick way to get at the standardized beta coefficients directly from any lm (or glm) model in R, try using lm.beta(model). In the example provided, this would be: library("MASS") nb = glm.nb(re
50,689
Satisfaction of detailed balance equation in Metropolis-Hastings algorithms?
I'll address question 2. If you fix a distribution supported on a finite set, the Markov chains which have that distribution as a stable distribution form a polytope. You can interpolate between any two by following one rule with probability $p$ and the other with probability $1-p$ and the convex combination will also preserve the stable distribution. Another way to look at the polytope is the space of maximal flows in a network with a source and a sink for each state, and complete (bipartite) connections between the sources and sinks, so that the capacity of each source/sink is the probability of the corresponding state in the distribution. The Markov chains where detailed balance is satisfied are the intersection of this polytope with a subspace. Given any cycle of length $2n$ in this complete bipartite graph and a maximal flow so that the even edges all carry at least $c \gt 0$, you can produce another maximal flow by reducing the amounts carried by the even edges by $c$, and increasing the amounts carried by the odd edges by $c$. The new flow generically does not satisfy detailed balance even if the original does. So, this is a way to produce Markov chains with the same stable distribution which do not satisfy detailed balance.
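A concrete three-state example in R of a chain with a given stable distribution that does not satisfy detailed balance (here the uniform distribution and a chain that mostly cycles 1 -> 2 -> 3 -> 1):

# Transition matrix that mostly cycles 1 -> 2 -> 3 -> 1 (doubly stochastic)
P <- matrix(c(0.1, 0.8, 0.1,
              0.1, 0.1, 0.8,
              0.8, 0.1, 0.1), nrow = 3, byrow = TRUE)
stat <- rep(1/3, 3)        # uniform stationary distribution

# Global balance holds: stat %*% P returns stat, so uniform is stable
stat %*% P

# Detailed balance fails: the flow matrix flow[i, j] = stat_i * P[i, j] is not symmetric
flow <- diag(stat) %*% P
isSymmetric(flow)          # FALSE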
Satisfaction of detailed balance equation in Metropolis-Hastings algorithms?
I'll address question 2. If you fix a distribution supported on a finite set, the Markov chains which have that distribution as a stable distribution form a polytope. You can interpolate between any
Satisfaction of detailed balance equation in Metropolis-Hastings algorithms? I'll address question 2. If you fix a distribution supported on a finite set, the Markov chains which have that distribution as a stable distribution form a polytope. You can interpolate between any two by following one rule with probability $p$ and the other with probability $1-p$ and the convex combination will also preserve the stable distribution. Another way to look at the polytope is the space of maximal flows in a network with a source and a sink for each state, and complete (bipartite) connections between the sources and sinks, so that the capacity of each source/sink is the probability of the corresponding state in the distribution. The Markov chains where total balance is satisfied are the intersection of this polytope with a subspace. Given any cycle of length $2n$ in this complete bipartite graph and a maximal flow so that the even edges all carry at least $c \gt 0$, you can produce another maximal flow by reducing the amounts carried by the even edges by $c$, and increasing the amounts carried by the odd edges by $c$. The new flow generically does not satisfy detailed balance even if the original does. So, this is a way to produce Markov chains with the same stable distribution which do not satisfy detailed balance.
Satisfaction of detailed balance equation in Metropolis-Hastings algorithms? I'll address question 2. If you fix a distribution supported on a finite set, the Markov chains which have that distribution as a stable distribution form a polytope. You can interpolate between any
50,690
Is a sequence of random variables indexed by a homogeneous Poisson process process strictly stationary?
If the process $(N_t)$ and the sequence $(X_k)$ are independent, then indeed the identity $Y_t=X_{N_t}$ defines a strictly stationary process $(Y_t)$. To show this, consider $Y^s_t=Y_{s+t}$; then, for every nonnegative time $s$, conditionally on $N_s=m$, the process $(Y^s_t)_{t\geqslant0}$ is such that, for every $t\geqslant0$, $$ Y^s_t=X_{m+N'_{t}},\qquad N'_t=N_{t+s}-N_s. $$ The process $N'$ is independent of $N_s$, hence $Y^s$ is independent of $N_s$. Furthermore, for every $m$, $(X_{m+k})_{k\geqslant0}$ is distributed like $(X_k)_{k\geqslant0}$, hence $(X_{m+N'_t})_{t\geqslant0}$ is distributed like $(X_{N_t})_{t\geqslant0}$ and the result follows.
Is a sequence of random variables indexed by a homogeneous Poisson process process strictly stationa
If the process $(N_t)$ and the sequence $(X_k)$ are independent, then indeed the identity $Y_t=X_{N_t}$ defines a strictly stationary process $(Y_t)$. To show this, consider $Y^s_t=Y_{s+t}$, then, f
Is a sequence of random variables indexed by a homogeneous Poisson process process strictly stationary? If the process $(N_t)$ and the sequence $(X_k)$ are independent, then indeed the identity $Y_t=X_{N_t}$ defines a strictly stationary process $(Y_t)$. To show this, consider $Y^s_t=Y_{s+t}$, then, for every nonnegative time $s$, conditionally on $N_s=m$, the process $(Y^s_t)_{t\geqslant0}$ is such that, for every $t\geqslant0$, $$ Y^s_t=X_{m+N'_{t}},\qquad N'_t=N_{t+s}-N_s. $$ The process $N'$ is independent on $N_s$ hence $Y^s$ is independent on $N_s$. Furthermore, for every $m$, $(X_{m+k})_{k\geqslant0}$ is distributed like $(X_k)_{k\geqslant0}$ hence $(X_{m+N'_t})_{t\geqslant0}$ is distributed like $(X_{N_t})_{t\geqslant0}$ and the result follows.
Is a sequence of random variables indexed by a homogeneous Poisson process process strictly stationa If the process $(N_t)$ and the sequence $(X_k)$ are independent, then indeed the identity $Y_t=X_{N_t}$ defines a strictly stationary process $(Y_t)$. To show this, consider $Y^s_t=Y_{s+t}$, then, f
50,691
The relationship between expectation-maximization and majorization-minimization
The general idea of Minorant Maximization (MM) algorithms is: (a) Approximate the target function with a dominated (minorizing) function. (b) Climb in the approximating function. (c) Return to (a), approximating at the new coordinate. The "flavor" of the MM is given by the approximating method and the climbing method. EM is a particular instance in which a convex combination of the target function at different points is used for the approximation. This approximation is dominated by the target function via Jensen's inequality (JI). Convexity alone still allows an endless number of approximating functions. It turns out that (given regularity assumptions on the target function) there is a single convex combination of the target function which is also tangent at the approximating point. This function happens to have a nice probabilistic interpretation, and it is known as the E step of the EM. In conclusion, the EM is indeed a particular instance of the MM, although to the best of my knowledge, first came EM and then the generalization.
The relationship between expectation-maximization and majorization-minimization
The general idea of Minorant Maximization algorithms is: (a) Approximate target function with a dominated function. (b) Climb in approximating function. (c) Return to (a), approximating at new coor
The relationship between expectation-maximization and majorization-minimization The general idea of Minorant Maximization algorithms is: (a) Approximate target function with a dominated function. (b) Climb in approximating function. (c) Return to (a), approximating at new coordinate. The "flavor" of the MM is given by the approximating method, and the climbing method. EM is a particular instance in which a convex combination of the target function at different points is used or approximation. This approximation is dominated by the target function via Jenessen's-Inequality (JI). Convexity alone still allows an endless amount of approximating functions. It turns out that (given regularity assumptions on the target function) there is a single convex combination of the target function, which is also tangent at the approximating point. This function, happens to have a nice probabilistic interpretation, and it know as the E step of the EM. In conclusion, the EM is indeed a particular instance of the MM, although to the best of my knowledge, first came EM, then then generalization.
The relationship between expectation-maximization and majorization-minimization The general idea of Minorant Maximization algorithms is: (a) Approximate target function with a dominated function. (b) Climb in approximating function. (c) Return to (a), approximating at new coor
50,692
What's the null hypothesis in a one-sided Kolmogorov-Smirnov test?
I think most of the tables providing p-values for the K-S statistic are based on a two-sided test. The null hypothesis assumed by the values in the table is that the two samples are drawn from the same distribution (i.e., that $C_x=C_y$). So really the table is only concerned with the absolute value of the difference between $C_x$ and $C_y$ and not the sign. That's why it does not matter if your result shows $C_x \ll C_y$ or $C_x \gg C_y$. Both are considered strong evidence against the null hypothesis, with a small p-value. Let's say your null hypothesis is $C_x \leq C_y$ and your desired significance level is $\alpha$. You could adapt the values in the table by finding the critical value of $D^+$ corresponding to $2\alpha$ and using that instead. This works because the table is splitting up the probability densities into the two tails, so by doubling the specified total tail density, you are "tricking" it into allocating $\alpha$ into the upper tail, which is what you want in the one-sided test.
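In R, ks.test exposes the one-sided versions directly through its alternative= argument, which saves the doubling trick; note that the documentation states the alternatives in terms of the CDFs (a point that is easy to get backwards), so it is worth checking ?ks.test for the direction you intend. A quick sketch:

set.seed(1)
x <- rnorm(200)               # sample X
y <- rnorm(200, mean = 0.3)   # sample Y, shifted to the right

ks.test(x, y)                           # two-sided: H0 is C_x = C_y
ks.test(x, y, alternative = "greater")  # one-sided statistic D+; the direction is
                                        # defined via the CDFs, see ?ks.test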
What's the null hypothesis in a one-sided Kolmogorov-Smirnov test?
I think most of the tables providing p-values for the K-S statistic are based on a two-sided test. The null hypothesis assumed by the values in the table is that the two samples are drawn from the sam
What's the null hypothesis in a one-sided Kolmogorov-Smirnov test? I think most of the tables providing p-values for the K-S statistic are based on a two-sided test. The null hypothesis assumed by the values in the table is that the two samples are drawn from the same distribution (ie, that $C_x=C_y$). So really the table is only concerned with the absolute value of the difference between $C_x$ and $C_y$ and not the sign. That's why it does not matter if your result shows $C_x<<C_y$ or $C_x>>C_y$. Both are considered strong evidence against the null hypothesis, with a small p-value. Let's say your null hypothesis is $C_x \leq C_y$ and your desired criticality level is $\alpha$. You could adapt the values in the table by finding the critical value of $D+$ corresponding to $2\alpha$ and using that instead. This works because the table is splitting up the probability densities into the two tails, so by doubling the specified total tail density, you are "tricking" it into allocating $\alpha$ into the upper tail, which is what you want in the one-sided test.
What's the null hypothesis in a one-sided Kolmogorov-Smirnov test? I think most of the tables providing p-values for the K-S statistic are based on a two-sided test. The null hypothesis assumed by the values in the table is that the two samples are drawn from the sam
50,693
Assumption for an M/M/1 queue
I think the main advantage to modeling the arrival distribution as Poisson is the memory-less property. It greatly simplifies the subsequent calculations to be able to assume that the number of arrivals in a particular time interval depends only on the length of the interval, rather than when the interval occurs, how many people came before, etc. Of course, this assumption is not always appropriate. For example, modeling the number of patients arriving to a doctor's office for routine checkups could be considered memory-less, since it would tend to average out to a constant rate over a long period of time. On the other hand, this assumption would likely be inappropriate for modeling the arrival rate of patients to an emergency room, since this would be more likely to follow a boom-and-bust pattern.
50,694
Assumption for an M/M/1 queue
There's a theorem in renewal theory related to that; it plays a role analogous to the Central Limit Theorem for sums of independent random variables. It says that, under fairly general conditions, the superposition of a large number of independent, individually sparse arrival processes converges to a Poisson process. That explains why the Poisson process arises so often, and why it is such a common assumption in queueing theory. The result is usually known as the Palm–Khintchine theorem (Çinlar also studied the superposition of point processes). EDIT: See this document, section 4.5.B (page 194): http://www.pitt.edu/~super7/19011-20001/19501.pdf Another intuitive property of Poisson arrivals is this: given a fixed time interval, and conditioned on the number of arrivals in that interval being a fixed number N, those arrival times are independently and uniformly distributed on the interval. See for example here: http://www.netlab.tkk.fi/opetus/s383143/kalvot/E_poisson.pdf
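The second property is easy to check numerically: generate a Poisson process on a fixed interval from its exponential gaps, and the arrival times that land in the interval, rescaled to [0, 1], should look like an iid uniform sample. The rate and horizon below are arbitrary:
# sketch: conditioned on the number of arrivals in [0, Tmax], Poisson arrival times are uniform
set.seed(7)
rate <- 3; Tmax <- 10
gaps  <- rexp(10 * rate * Tmax, rate = rate)   # exponential inter-arrival times
times <- cumsum(gaps)
times <- times[times <= Tmax]                  # the arrivals that fall in [0, Tmax]
length(times)                                  # the (random) number of arrivals N
ks.test(times / Tmax, "punif")                 # consistent with Uniform(0, 1)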
50,695
Assumption for an M/M/1 queue
To use an M/M/1 queueing model, we must assume Poisson arrivals and exponentially distributed service times. Then, to derive the characteristics of this type of waiting line, we need some further assumptions. Most importantly, there must be only one service channel, which arrivals enter one at a time. In addition, it is assumed that arrivals come from an effectively infinite population, that they are served on a first-come, first-served basis, and that there is unlimited room to hold arrivals waiting for service (the queue capacity is not restricted).
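Under these assumptions the standard M/M/1 results follow, for example an expected wait in queue of rho / (mu - lambda) and an expected number in system of rho / (1 - rho), where rho = lambda / mu. The quick check below uses the Lindley recursion; the rates are made up, with lambda < mu assumed so the queue is stable:
# sketch: simulate per-customer waiting times in an M/M/1 queue and compare with theory
set.seed(1)
lambda <- 2; mu <- 3                  # arbitrary arrival and service rates
n <- 2e5
A <- rexp(n, lambda)                  # inter-arrival times
S <- rexp(n, mu)                      # service times
Wq <- numeric(n)
for (i in 2:n) Wq[i] <- max(0, Wq[i - 1] + S[i - 1] - A[i])   # Lindley recursion
rho <- lambda / mu
c(simulated = mean(Wq), theory = rho / (mu - lambda))                  # expected wait in queue
c(simulated = lambda * (mean(Wq) + 1 / mu), theory = rho / (1 - rho))  # number in system via Little's law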
50,696
With a binary Y, why are R's lowess fits so often flat?
It works consistently fine for me.
y <- rbinom(100, 1, (0:99)/100)
x <- 1:100
m <- loess(y ~ x)
plot(y ~ x)
lines(predict(m))
# the older lowess function has different defaults
lines(lowess(x, y), col = 'blue')
If you run that a few times you'll notice the blue line does tend to stick closer to the 0s at the low x-values and to the 1s at the higher ones. But different defaults in lowess will change that. Raise the span from the default (2/3) to 0.75 and it will tend to do it less.
lines(lowess(x, y, f = 0.75), col = 'blue')
(loess is generally preferred over lowess these days. It has many more options and is more advanced.)
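A further default that may matter here, offered as a possible explanation rather than a definitive one: lowess runs robustness iterations by default (iter = 3), which can treat the rarer 0s or 1s as outliers and pull the curve toward a flat line, whereas loess with its default family = "gaussian" does no such iterations. Turning them off is easy to try:
# possible factor: lowess's default robustness iterations (iter = 3) with 0/1 outcomes
set.seed(2)
y <- rbinom(100, 1, (0:99)/100)
x <- 1:100
plot(y ~ x)
lines(lowess(x, y), col = 'blue')             # defaults: f = 2/3, iter = 3
lines(lowess(x, y, iter = 0), col = 'red')    # no robustness iterations
lines(predict(loess(y ~ x)), col = 'black')   # loess with family = "gaussian"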
50,697
Bayesian hypothesis testing and Bayes factors
Let $p(\alpha,\beta|M_1)=\mathrm{Ga}(\alpha,\beta)$ be your first prior, and $p(\alpha,\beta|M_2)=\mathrm{Ga}(\alpha+1,\beta)$ be your second prior. Furthermore, let $D$ be your observed data. Your posterior distribution can then be expressed as: $$p(\alpha,\beta|D,M_i) = \frac{L(D|\alpha,\beta)p(\alpha,\beta|M_i)}{p(D|M_i)}$$ where $p(D|M_i)=\int\int L(D|\alpha,\beta)p(\alpha,\beta|M_i) \,\mathrm{d}\alpha\mathrm{d}\beta$ and $i$ is either 1 or 2. How would a Bayes factor for the two prior distributions be formed? The Bayes factor is defined as $$K=\frac{p(D|M_1)}{p(D|M_2)}$$ This means that the Bayes factor is not formed for the two prior distributions alone, but always in combination with the respective likelihood. Typically, in Bayesian analysis we compare different likelihood functions. However, comparing different prior assumptions also makes perfect sense in some cases. A typical application would be to make robust predictions: $$p(\alpha,\beta|D) = p(\alpha,\beta|D,M_1)p(M_1|D) + p(\alpha,\beta|D,M_2)p(M_2|D) $$ with $$ p(M_i|D) = \frac{p(D|M_i) p(M_i)}{p(D|M_1) p(M_1)+p(D|M_2) p(M_2)} $$ Assuming that you are indifferent between $M_1$ and $M_2$ a priori, we have $p(M_i)=0.5$. How can an estimate of the observed data be combined with the model probabilities to achieve a posterior ratio of model probabilities? I am not sure if I understand your question correctly, but in my opinion this is the Bayes factor. A value of $K > 1$ means that $M_1$ is more strongly supported by the data under consideration than $M_2$.
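When the marginal likelihoods are not available in closed form, a crude but serviceable approximation is to average the likelihood over draws from each prior. The sketch below assumes, purely for illustration, exponential data with rate lambda and priors Gamma(a, b) under $M_1$ and Gamma(a + 1, b) under $M_2$; the data and hyperparameters are made up:
# Monte Carlo sketch: p(D | M_i) as the prior-averaged likelihood, then K and p(M_i | D)
set.seed(3)
x <- rexp(30, rate = 1.5)                         # made-up data
a <- 2; b <- 1                                    # made-up hyperparameters
n <- length(x); sx <- sum(x)
loglik <- function(lam) n * log(lam) - lam * sx   # exponential log-likelihood, vectorized
nmc  <- 1e5
pD1  <- mean(exp(loglik(rgamma(nmc, a,     b))))  # approx p(D | M1)
pD2  <- mean(exp(loglik(rgamma(nmc, a + 1, b))))  # approx p(D | M2)
K    <- pD1 / pD2                                 # Bayes factor
post <- c(M1 = pD1, M2 = pD2) * 0.5               # equal prior model probabilities
post / sum(post)                                  # posterior model probabilities p(M_i | D)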
50,698
Bayesian hypothesis testing and Bayes factors
How would a Bayes factor for the two prior distributions be formed? The Bayes factor $K$ for competing models $M_1$ and $M_2$ is the quotient of the data likelihoods of the two models. Applying this definition to the problem you describe gives: $$ K = \frac{p(\mathbf{x} |M_1)}{p(\mathbf{x}|M_2)} = \frac{\int_{\lambda}{p(\mathbf{x}|\lambda)p(\lambda|M_1)}d\lambda}{\int_{\lambda}{p(\mathbf{x}|\lambda)p(\lambda|M_2)}d\lambda} $$ Stating the densities explicitly gives: $$ K = \frac {\int_{\lambda} {(\prod_{i=1}^n{\lambda e^{-\lambda x_i})} \frac{\beta^{\alpha}}{\Gamma(\alpha)}{\lambda}^{\alpha-1}e^{-\beta \lambda}}} {\int_{\lambda} {(\prod_{i=1}^n{\lambda e^{-\lambda x_i})} \frac{\beta^{\alpha + 1}}{\Gamma(\alpha+1)}{\lambda}^{\alpha}e^{-\beta \lambda}}} = \frac {\frac{\beta^{\alpha}}{\Gamma(\alpha)} \int_\lambda \lambda^{n+\alpha-1}e^{-\lambda(\beta + n\bar{x})} } {\frac{\beta^{\alpha+1}}{\Gamma(\alpha + 1)} \int_\lambda \lambda^{n+\alpha}e^{-\lambda(\beta + n\bar{x})} } $$ (Using $n\bar{x} = \sum_{i=1}^n{x_i}$ for convenience.) We recognize the integrand as the kernels of gamma distributions Gamma($n+\alpha, \beta + n\bar{x}$) and Gamma($n+\alpha + 1, \beta + n\bar{x}$). This tells us that each integrates to the reciprocal of its normalizing constant, giving: $$ K = \frac { \frac{\beta^{\alpha}}{\Gamma(\alpha)} \frac{\Gamma(\alpha+n)}{(\beta + n\bar{x})^{\alpha+n}} } { \frac{\beta^{\alpha+1}}{\Gamma(\alpha+1)} \frac{\Gamma(\alpha+n+1)}{(\beta + n\bar{x})^{\alpha+n+1}} } = \frac {\beta^\alpha\Gamma(\alpha+1)\Gamma(\alpha+n)(\beta + n\bar{x})^{\alpha+n+1}} {\beta^{\alpha+1}\Gamma(\alpha)\Gamma(\alpha+n+1)(\beta + n\bar{x})^{\alpha+n}} $$ By the $\Gamma(\alpha+1) = \alpha\Gamma(\alpha)$ property of the Gamma function: $$ K = \frac {\alpha\Gamma(\alpha)\Gamma(\alpha+n)(\beta + n\bar{x})} {\beta\Gamma(\alpha)(\alpha+n)\Gamma(\alpha + n)} = \frac {\alpha(\beta + n\bar{x})} {\beta(\alpha + n)} = \frac {\alpha\beta + \alpha n \bar{x}} {\alpha\beta + \beta n } $$ ...given that there are multiple observations $x_i$, how should each observation be inserted into the distribution to give a composite distribution[?] This is accomplished above by the product in my second set of equations. Assuming each $x_i$ is independently drawn from the exponential, the joint density $p(\bf{x})$ is the product of the individual densities. How can an estimate of the observed data be combined with the model probabilities to achieve a posterior ratio of model probabilities? Solving this means finding the posterior model odds using the priors for $M_1, M_2$: $$ \frac{p(\mathbf{x} |M_1)p(M_1)}{p(\mathbf{x}|M_2)p(M_2)} = K\frac{p(M_1)}{p(M_2)} = K\frac{\frac{3}{4}}{\frac{1}{4}} = 3K $$ In other words, the Bayes factor is equivalent to the posterior model odds when competing models are assumed equally likely.
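Because the integral over $\lambda$ is one-dimensional, the closed form is easy to sanity-check with numerical quadrature; the data and hyperparameters below are made up:
# check K = alpha (beta + n xbar) / (beta (alpha + n)) against numerical integration
set.seed(4)
x <- rexp(25, rate = 2); n <- length(x); xbar <- mean(x)
alpha <- 3; beta <- 2                    # made-up hyperparameters
marg <- function(a, b) {                 # p(x | M) under a Gamma(a, b) prior on the rate
  integrate(function(lam) exp(n * log(lam) - lam * n * xbar) * dgamma(lam, a, b), 0, Inf)$value
}
c(numeric = marg(alpha, beta) / marg(alpha + 1, beta),
  closed  = alpha * (beta + n * xbar) / (beta * (alpha + n)))
3 * alpha * (beta + n * xbar) / (beta * (alpha + n))   # posterior odds with p(M1) = 3/4, p(M2) = 1/4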
50,699
Computing confidence region for Gaussian mixture model
In general, it is possible, for instance, to compute the probability contained in a ball $\mathcal{B}(c,r)$. I suppose that your Gaussian mixture is written $$p(x) = \sum_{j=1}^K \mathcal{N}(x;\mu_j,\Sigma_j)\mathbb{P}(J=j)$$ There exist elementary building blocks for an algorithm that computes $F(c,r)=\int_{\mathcal{B}(c,r)} p(x)dx$ for each $(c,r)\in \mathbb{R}^N \times \mathbb{R}^+$, which I detail below. First note that, for a fixed $c$, the function $r\mapsto F(c,r)$ is increasing on $\mathbb{R}^+$, so a bisection search can numerically solve the problem of finding $r$ such that, for a given $c$, $F(c,r)=95\%$. More efficient methods, such as the secant method, also exist. To handle the general case, where the matrices $\Sigma_j$ have no particular form beyond being positive definite, you may have to use the cumulative distribution function of the generalized noncentral chi-squared distribution (see the Wikipedia article on it). For this purpose, the C source code from Robert Davies' website will be useful; you'll find the related documentation on his page and in his paper "The Distribution of a Linear Combination of Chi-Squared Random Variables", in which he adopts the same symbols. If the matrices $\Sigma_j$ are each proportional to the identity matrix, you may use either the already-mentioned generalized noncentral chi-squared cumulative function or the ordinary noncentral chi-squared cumulative function, which is more common; this function is available in MATLAB, for instance. Now, here is how you can use it. $$p(x) = \sum_{j=1}^K \mathcal{N}(x;\mu_j,\Sigma_j)\mathbb{P}(J=j)$$ is the density of a variable that we call $X_J$, where for each $i \in \{1,...,K\}$ the variable $X_i$ is a Gaussian random variable with probability density function $\mathcal{N}(x;\mu_i,\Sigma_i)$, and where $J$ is a discrete random variable on the set $\{1,...,K\}$, independent of each $X_i$, which follows the known law $\mathbb{P}(J=j)$. We can decompose the probability as $$\int_{\mathcal{B}(c,r)} p(x)dx = \mathbb{P}\left(X_J \in \mathcal{B}(c,r) \right) = \sum_{j=1}^K \mathbb{P}\left(X_J \in \mathcal{B}(c,r) \mid J=j\right)\mathbb{P}(J=j)$$ What we need to compute is $\mathbb{P} \left(X_J \in \mathcal{B}(c,r) \mid J=j\right)=\mathbb{P}\left(\|X_j-c\|^2 \leq r^2 \right)$. Since $X_j-c$ is a Gaussian random variable that follows $\mathcal{N}(x;\mu_j-c,\Sigma_j)$, the quantity $\|X_j-c\|^2$ follows a (generalized noncentral) chi-squared law. Some derivations must be done to identify the parameters (which we call $\theta_j$) of this law. (I can be more explicit on demand.) Then the only thing remaining is to evaluate the cumulative distribution function (which we call $S$) of this chi-squared law at $r^2$. Finally: $$F(c,r)=\int_{\mathcal{B}(c,r)} p(x)dx = \sum_{j=1}^K S(r^2;\theta_j)\mathbb{P}(J=j) $$ Then one can apply a bisection or secant method to find the value of $r$ that makes $\mathcal{B}(c,r)$ contain 95%. If you are in the particular case where the matrices $\Sigma_j$ are diagonal, then you can instead look for a rectangular region; by rectangle I mean a domain that is a Cartesian product of intervals. You'll need the erf function, which is related to the cumulative distribution function of a Gaussian. To answer another question posted: the union of the contours contains 95% or more if each contour contains 95% of the probability. Here is why.
Let $E_i$ be the contour such that $\mathbb{P}(X_i \in E_i)=95\%$ and let $\bigcup_{i=1}^K E_i$ be the union of the contours. Then $$ \int_{ \bigcup_{i=1}^K E_i} p(x)dx = \mathbb{P}\left(X_J \in \bigcup_{i=1}^K E_i\right) = \sum_{j=1}^K \mathbb{P}\left(X_J \in \bigcup_{i=1}^K E_i \,\Big|\, J=j\right)\mathbb{P}(J=j)$$ in which each term satisfies $\mathbb{P}\left(X_J \in \bigcup_{i=1}^K E_i \Big| J=j \right) \geq \mathbb{P}(X_j \in E_j) = 95\%$. Finally, because the weights $\mathbb{P}(J=j)$ sum to 1, the last expression is a weighted average of values no smaller than 95%, so the total is at least 95%.
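For the special case where every $\Sigma_j$ is a multiple of the identity, the recipe can be written directly in R: pchisq with a noncentrality parameter gives each component's ball probability, and uniroot plays the role of the bisection/secant search. All the numbers below (weights, means, standard deviations, centre) are made-up illustrations:
# sketch for spherical components Sigma_j = sigma_j^2 I:
# P(||X_j - c||^2 <= r^2) = pchisq(r^2 / sigma_j^2, df = N, ncp = ||mu_j - c||^2 / sigma_j^2)
w     <- c(0.5, 0.3, 0.2)                      # mixture weights P(J = j)
mu    <- list(c(0, 0), c(3, 1), c(-2, 2))      # component means (N = 2 here)
sigma <- c(1, 0.8, 1.5)                        # component standard deviations
cc    <- c(0.5, 0.5)                           # centre of the ball
N     <- length(cc)
F_ball <- function(r) {
  comp <- mapply(function(m, s) pchisq(r^2 / s^2, df = N, ncp = sum((m - cc)^2) / s^2), mu, sigma)
  sum(w * comp)
}
r95 <- uniroot(function(r) F_ball(r) - 0.95, lower = 1e-6, upper = 100)$root
c(r = r95, coverage = F_ball(r95))             # ball of radius r95 around cc holds ~95% of the mass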
50,700
Expectation notations
It might be helpful to isolate all the moving parts in a particular expectation. In doing so, recall that an expectation is a Lebesgue-Stieltjes integral. I will adapt the notation slightly and use $X$ to denote a random variable, $x$ to denote a fixed value taken by that random variable in its support, that is, $x\in\mathcal{X}\equiv\text{support}(X)$. For the average loss, it might be helpful to distinguish between the true parameter $\theta_0$ and the parameter value $\theta$ at which the risk of the estimator $\delta$ is being evaluated. $$ \begin{align} R(\theta, \delta) &= \mathbb{E}_{\theta_0} \left(L(\theta, \delta(X))\right),\, \theta\in \Theta\\ &=\int_\mathcal{X} L(\theta,\delta(x)) \, d\mathbb{P}_{\theta_0}(x) \end{align} $$ Note that the integration here is with respect to the true underlying probability measure generating the random variable $X$, which belongs to the model class $\mathcal{P} = \{\mathbb{P}_\theta\mid \theta \in \Theta\}$, $\mathbb{P}_{\theta_0}\in \mathcal{P}$. If the probability measure is absolutely continuous with respect to the Lebesgue measure, then we can write it equivalently, as you have written it, $$ \begin{align} R(\theta, \delta) &= \int_\mathcal{X}L(\theta, \delta(x))f(x\mid \theta_0)\, dx \end{align} $$ For the posterior loss, which is defined conditional on a value $x \in \mathcal{X}$, that is, it is conditional on having observed the data $X=x$, matters are more straightforward. It just says that the expectation is an integral with respect to the posterior, which is conditional on the observed value of $X$; if the data were to change, the posterior would change, and so would the posterior expected loss. The following notation might clarify matters (or not), $$ \begin{align} \rho(\pi(\cdot\mid x), d \mid X=x) &= \mathbb{E}^{\pi(\cdot\mid x)}\left(L(\theta, d)\mid X=x\right) \\ &= \int_\Theta L(\theta, d)\pi(\theta \mid x)\, d\theta \end{align} $$ to indicate exactly where the conditioning variable appears in the integral.
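To make the "which variable is integrated over" point concrete, both quantities can be approximated by simulation: the risk averages the loss over repeated data sets at a fixed parameter value, while the posterior expected loss averages over posterior draws of the parameter at one fixed, observed data set. The normal-mean example with squared-error loss and a conjugate prior below is an illustrative assumption, not taken from the original question:
# sketch: frequentist risk vs posterior expected loss for delta(x) = xbar, squared-error loss
set.seed(5)
n <- 20; sigma <- 1; tau <- 2; theta0 <- 1.3      # made-up true parameter and prior sd
# risk at theta0: average the loss over repeated data sets X ~ N(theta0, sigma^2)
loss <- replicate(1e4, (mean(rnorm(n, theta0, sigma)) - theta0)^2)
c(mc_risk = mean(loss), exact_risk = sigma^2 / n)
# posterior expected loss: fix one observed data set, average over theta | x
x_obs <- rnorm(n, theta0, sigma)
post_var  <- 1 / (n / sigma^2 + 1 / tau^2)        # conjugate N(0, tau^2) prior on theta
post_mean <- post_var * sum(x_obs) / sigma^2
theta_draws <- rnorm(1e4, post_mean, sqrt(post_var))
mean((theta_draws - mean(x_obs))^2)               # approx. posterior expected loss at d = xbar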