idx | question | answer
---|---|---
53,401 | Fiducial distribution and sequential Monte Carlo algorithm | The whole goal was to generate standard normal random variables conditional on $C$ (using all of the data). That proved to be a rather difficult problem, and therefore we have chosen to do this sequentially. You have correctly understood the meaning of $C_t$ and $Q_t$.
By the way, $C_t$ is formally defined in the middle of page 9. You are correct that the definition of $Q_t$ has slipped through the cracks of several revisions, but you did guess its meaning correctly. Thank you for pointing this out.
The main idea is that it is easy to move from $C_t$ to $C_{t+1}$—that is all the business with $m_t$ and $M_t$ that follows. The reason we chose sequential MC is that we can easily generate from the conditional distribution of $Z_{t+1} I_{C_{t+1}}$ given $Z_t$ [using $m_t$ and $M_t$]. However, there is a need to re-weight, because the marginal distribution of $Z_{t+1} I_{C_{t+1}}$, marginalized to time $t$, is different from the distribution of $Z_t I_{C_t}$.
The details of how much to re-weight are on page 10, with more details in Appendix C.
53,402 | Fiducial distribution and sequential Monte Carlo algorithm | Thank you for your questions. I just want to elaborate on a couple of points you made above:
I just want to clarify your description of the interval, or "fat", data: each observation is considered an interval rather than an exact value, but the intervals are fixed and defined based on the data collection/data storage process. The clarification is for your example where you say that if your observation is 1.32, then the interval would be [1.315, 1.325). In this illustration, you are making the observation the midpoint of the interval and allowing more significant digits than the original datum, which is not what we use. Let's suppose your observation 1.32 is really 1.32 meters, and your measuring device is a ruler that has tick marks down to millimeters. Then the observation is recorded as 1.32 meters, but really, it could be 1.312 m or 1.318 m (this is supposing that the practice is to round to the next highest mm). Hence, the observation "1.32" would be placed in the interval [1.31, 1.32). The point is that the intervals are not arbitrarily defined around each point, but are set in advance as a fixed grid (in this case, the fixed grid is determined by the ruler).
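To make the fixed-grid idea concrete, here is a tiny sketch (mine, not from the paper) that maps a recorded value to its interval under the round-up-to-the-next-mm convention described above:
grid  <- 0.01                       # one millimeter, in meters
x     <- 1.32                       # recorded value
lower <- x - grid                   # values in [x - grid, x) get recorded as x
upper <- x
c(lower = lower, upper = upper)     # the "fat" observation [1.31, 1.32)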
I also want to note that there is no assumption about the distribution across an interval. This leads into your other question about V(). We maintain the intervals throughout the algorithm, and at t = n we have a sample of weighted particles that, geometrically, are polyhedra in the parameter space. One can then select a value within each weighted polyhedron according to some rule, V(). In the simulation study in our paper, we randomly select either the upper or lower endpoint of the marginal intervals for each dimension of the parameter space. There are other options here, and Hannig (2009) "On Generalized Fiducial Inference" describes some alternatives.
53,403 | Inner loop overfitting in nested cross-validation | Overfitting in model selection problems for classification is usually due to including too many parameters. Cross-validation or bootstrap error rate estimation should avoid this problem because it avoids the optimism of an estimate like resubstitution, which tests the classifier on the same data used in the fit. If you minimize the cross-validated estimate of error rate in your inner loop as the criterion for variable selection, you should not have this problem. Am I correct in assuming that your selection procedure does not do this? If so, you are probably using a procedure that is biased toward models with many parameters that may be poor models for prediction.
53,404 | Inner loop overfitting in nested cross-validation | I know it's an old thread, but anyway...
When I've designed algorithms to perform feature selection within a nested cross-validation, I've found that you can decrease overfitting to the inner segments by performing recursive elimination and making sure that you resample the inner segments for each iteration of the recursive feature elimination, rather than performing the full ranking/selection for each inner segment.
53,405 | Inner loop overfitting in nested cross-validation | To answer my own question (I realize this material is probably actually stated elsewhere on this site):
I have found that when using complex classifier models (such as an SVM with an RBF kernel or a regularized discriminant classifier) there can be extremely large variance between folds in cross-validation, if a large proportion of the data is withheld for testing within each fold.
I have found a practical strategy for nested cross-validation is to use leave-one-out cross-validation for the outer loop and k-fold cross-validation (in my case k=10) for the inner loop.
While this approach does mean that the final performance estimate has a larger variance, I have found it results in a 'better' model, as a larger proportion of the data is used to find the optimal model in the inner CV loop.
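For concreteness, here is a bare-bones skeleton (mine, not the poster's code) of that scheme—a leave-one-out outer loop with a 10-fold inner loop; dat, fit_and_tune() and err() are placeholders for your own data, model-tuning routine and error function:
outer_err <- numeric(nrow(dat))                    # dat is the full data set
for (i in seq_len(nrow(dat))) {                    # leave-one-out outer loop
  train <- dat[-i, ]
  test  <- dat[i, , drop = FALSE]
  folds <- sample(rep(1:10, length.out = nrow(train)))   # 10-fold inner CV assignment
  best  <- fit_and_tune(train, folds)   # choose the model minimising the inner CV error
  outer_err[i] <- err(best, test)       # honest estimate from the untouched observation
}
mean(outer_err)                         # nested-CV performance estimate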
53,406 | Inner loop overfitting in nested cross-validation | This is an old question, but I want to leave an answer in case anyone stumbles on this thread and happens to have made the same novice mistake as me.
I consistently found that my errors on the outer CV loop were higher than on the inner loops, even with very careful cross-validation in both loops (stratified, repeated in the inner loop).
My mistake was actually that I was re-scaling the outer CV test data in the wrong way: I was using the mean and s.d. of the test data, rather than using the same scaler as was used for the training data (see for example here). Doing it the right way resolved the problem.
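A minimal sketch of the fix (my own toy data, not the poster's): compute the centering and scaling on the training part only and apply that same scaler to the held-out data.
set.seed(1)
x <- matrix(rnorm(200), nrow = 20)        # stand-in for the real feature matrix
test_idx <- 1:5                           # indices of the outer-CV test fold
mu  <- colMeans(x[-test_idx, ])           # training-fold means
sdv <- apply(x[-test_idx, ], 2, sd)       # training-fold standard deviations
x_train_s <- scale(x[-test_idx, ], center = mu, scale = sdv)
x_test_s  <- scale(x[ test_idx, ], center = mu, scale = sdv)   # same scaler, not the test fold's own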
53,407 | Multilevel covariance structure and more | I'm not sure I can provide the kind of answer I'd like to, but I will try to throw out some pieces of information regarding your questions.
First, both @Seth and @gui11aume (+1 to each) have noted that lme() defaults to no within-group correlations. The question is why, and whether that's likely to be a problem. I believe that the thinking is that a properly specified multilevel model will account for the covariance amongst your observations such that the residuals are independent. That's why the function was coded to expect no correlations. That is, you may be OK.
Several of your questions concern the effect of having a misspecified variance/covariance structure (bearing in mind that this may not actually apply to you). The estimation of your betas should be unaffected by this, that is, they should be unbiased. However, the estimation of the variance of the sampling distributions will be inaccurate, that is, your p-values will be inaccurate. Moreover, I believe that you cannot say a-priori whether they will be too high or too low. If you are really concerned about these issues you can always use robust (a.k.a., 'sandwich') standard errors. These are typically thought about in the context of generalized linear models, but they can be used elsewhere. Check out the R package sandwich. Note that if they are not necessary, you could be at risk of increased type II errors.
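If you go that route, a minimal sketch looks like this (the lm fit is my stand-in, not the poster's mixed model; sandwich covers lm/glm-style fits, so for an lme object you would need a package that supplies robust methods for it):
library(sandwich)
library(lmtest)
fit <- lm(mpg ~ wt + hp, data = mtcars)   # substitute your own fitted model
coeftest(fit, vcov = vcovHC(fit))         # coefficients with robust (sandwich) standard errors
For clustered or longitudinal data, a cluster-robust variance such as vcovCL() may be closer to what you need.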
The standard AR(1) variance/covariance structure does assume homoskedasticity, so far as I know. More restrictive, however, is that it assumes every observation was made at the appropriate time, and that all measurements are equally spaced in time. These assumptions usually don't hold, even in the most fortuitous situations, and as such, the AR(1) variance/covariance structure is dangerous to assume.
Remember that the proper specification of the model for the means is crucial. It is remotely possible that time is not relevant to the appropriate model of the mean, but it isn't very likely at all. Leaving TIME out of the model risks omitted-variable bias. Thus, dropping TIME is likely to yield both biased estimates of the means and invalid inferences. This is just not worth gambling on.
53,408 | Multilevel covariance structure and more | Your first point has been addressed by @Seth: the default is no within-group correlation. Regarding your second point, I think it depends on the effect size of TIME. If the effect is minor compared to your other variables, like CONDITION and SUBJECT, the misfit should not be a problem. But if TIME is a major effect you will want your model to describe it well.
Now regarding your third question, if you drop TIME from the model by calling
lme(fixed= FR ~ CONDITION, data=mydata, random= ~ 1 | SUBJECT)
you remove the interaction term TIME*SUBJECT. So if TIME has the same effect for every subject, it's no big deal. However, if the response shows (or is expected to show) a particular behaviour for some people at some time (say some people get better and some get worse) then this will be "absorbed" in your other variables in ways that are difficult to predict (i.e. might make some terms significant and others non significant).
That you are not interested in TIME does not mean that you should leave it out of the model. Quite the contrary, I would say. To give a heuristic example, in simple linear regression you fit a slope and an intercept, but you are usually interested in the slope only. If you do not include the intercept term, you fit a line through the origin. You can check for yourself that with this approach you will sometimes (for different clouds of points) reach the wrong conclusion: conclude that the slope is non-null when it is in fact null, and vice versa.
So if you have a doubt, keep TIME in but only focus on the estimates and p-values for CONDITION.
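A sketch of that recommendation in code (using the question's placeholder names; the exact random-effects structure of the original model is not shown in the question, so treat this as one plausible form):
library(nlme)
fit <- lme(fixed = FR ~ CONDITION + TIME, data = mydata,
           random = ~ 1 | SUBJECT)   # or random = ~ TIME | SUBJECT for subject-specific time trends
summary(fit)                         # read off only the CONDITION rows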
53,409 | Bayesian and frequentist optimization and intervals | The maximum a posteriori (MAP) approach isn't really fully Bayesian; ideally, inference should involve marginalising over the whole posterior. Optimisation is the root of all evil in statistics; it is difficult to over-fit if you don't optimise! ;o) So the practical difference between Bayesian and frequentist runs rather more deeply if you opt for a fully Bayesian solution, although there will often be a prior for which the result is numerically the same as the frequentist approach.
However, the credible interval and confidence interval are answers to different questions, and shouldn't be considered interchangeable, even if they happen to be numerically the same. Treating a frequentist confidence interval as if it were a Bayesian credible interval can lead to problems of interpretation.
UPDATE "My question could be posed simply as the difference simply between the likelihood function and the posterior."
No, the definitions of probability are different; this means that even if the solutions are numerically identical, they do not mean the same thing. This is a practical issue as well as a conceptual one, as the correct interpretation of the result depends on the definition of probability.
Tongue in cheek, if the answer to the question was "yes", it would imply that frequentists were merely Bayesians that always used flat (often improper) priors for everything. I doubt many frequentists would agree with that! ;o)
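A small worked illustration of that last point (mine, not the answer's): for a Bernoulli likelihood with $k$ successes in $n$ trials and a flat $\mathrm{Beta}(1,1)$ prior,
$$\theta \mid k \sim \mathrm{Beta}(k+1,\ n-k+1), \qquad \hat{\theta}_{\text{MAP}} = \frac{k}{n} = \hat{\theta}_{\text{MLE}}, \qquad \mathrm{E}[\theta \mid k] = \frac{k+1}{n+2},$$
so the optimisation-based (MAP) summary coincides with the frequentist point estimate, while fully Bayesian summaries obtained by marginalising over the posterior do not.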
53,410 | Bayesian and frequentist optimization and intervals | I would agree that, roughly speaking, you are right. Priors, noninformative or not, will lead to different solutions. The solutions will converge when the data dominate the prior. Also, Jeffreys needed improper priors in some cases to match Bayesian results with frequentist results. The real difference, and the controversy, is philosophical. Frequentists want objectivity. The prior brings in subjective opinion. Bayesians following the teachings of de Finetti believe that probability is subjective. For a true Bayesian, priors should be informative. The other point, also related to the differing concepts of probability, is that probability can be assigned to an unknown parameter according to Bayesians, while frequentists think strictly in terms of probability spaces as given in the theory developed by Kolmogorov and von Mises. For frequentists, it is only the random variables that you can define on the probability space that have probabilities associated with their outcomes. So the probability of getting a head on a coin toss is 1/2 because repeated flipping leads to a relative frequency of heads that converges to 1/2 as the sample size approaches infinity.
For frequentists, Bayes' theorem applies to events, which are measurable sets in a probability space. Bayesians apply it to parameters as if the parameter were a random variable. That is the frequentists' objection to Bayesian methods. The Bayesians object to the frequentist approach because it lacks a property called coherence. I will not go into that here, but you can look up the definition on the internet or read Dennis Lindley's books.
53,411 | Categorical fixed effect w/ 3 levels in LMER | In general, in a model which includes the intercept term (as yours does), the effect of a categorical predictor with k levels can be represented with k-1 codes. So, as expected, the effect of your 3-level predictor is represented with 2 codes (plus the intercept term). In this case, they appear to be "dummy codes," where the intercept represents the predicted value for a baseline category, here level A, and the two codes represent the deviations of the other groups from the baseline category.
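A small illustration of R's default treatment coding for a 3-level factor (mine, not the answer's):
g <- factor(c("A", "B", "C"))
contrasts(g)        # two dummy columns, for levels B and C; A is the baseline
model.matrix(~ g)   # design matrix: (Intercept), gB, gC -- i.e. k - 1 = 2 codes plus the intercept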
None of this is specific to either lmer() or to mixed-effects models more generally; it is basic ANOVA theory.
53,412 | Why is the power for this test so low? | Your effect size is extremely small: the true value is only $.07$ away from the null hypothesis, while the population standard deviation is $1.2$ and the sample size is $100$, making your standard error $.12$, so your standard error is almost twice as large as the distance from the null hypothesis, combining for a very small effect size. Intuitively, it makes sense that you would have very little power to detect a very small effect. In addition, you're using a pretty strict significance criterion $(.01)$, so you're less likely, in general, to reject null hypotheses.
Between these two things, the combined result of a very underpowered test is not particularly surprising.
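A quick numeric check of those figures (my sketch, assuming a two-sided one-sample z-test at $\alpha = .01$; the exact test in the original exercise may differ):
delta <- 0.07; sigma <- 1.2; n <- 100; alpha <- 0.01
se    <- sigma / sqrt(n)                                  # 0.12
zcrit <- qnorm(1 - alpha / 2)                             # about 2.58
pnorm(-zcrit + delta / se) + pnorm(-zcrit - delta / se)   # power of roughly 0.02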
53,413 | Mixture model fixed effects | This type of model has been entertained by education researchers for years under the name of growth mixture model (do different students exhibit different rates of learning?), although they work with it as a random effects model. I don't think you'd be able to come up with a proper fixed effects estimation for this model, as it may lack the sufficient statistic that you could condition on. But a full-blown ML model should not be extremely difficult to fit.
In Stata, you could try fmm with a full set of panel id dummies and/or interactions with time, although this is a very wasteful approach in terms of degrees of freedom.
53,414 | Mixture model fixed effects | You are trying to fit a "mixture model", though I would suggest not having a separate set of $\pi_{ij}$ parameters for every person, but rather modeling these probabilities as a function (e.g. logistic) of some covariates. You could even start with a constant $\pi_j$. The typical approach for fitting mixture models is the EM (expectation-maximization) algorithm, though for the simpler versions even direct maximization would work.
Perhaps searching on these terms will help you find a solution that already exists.
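To illustrate the EM idea mentioned above, here is a generic sketch (mine, not tied to the panel structure of the question) for a two-component Gaussian location mixture:
em_mix2 <- function(y, iters = 100) {
  mu <- as.numeric(quantile(y, c(0.25, 0.75)))   # crude starting values
  s  <- rep(sd(y), 2)
  p1 <- 0.5
  for (i in seq_len(iters)) {
    # E-step: posterior probability that each observation belongs to component 1
    d1 <- p1 * dnorm(y, mu[1], s[1])
    d2 <- (1 - p1) * dnorm(y, mu[2], s[2])
    r  <- d1 / (d1 + d2)
    # M-step: weighted updates of the mixing proportion, means and sds
    p1 <- mean(r)
    mu <- c(sum(r * y) / sum(r), sum((1 - r) * y) / sum(1 - r))
    s  <- c(sqrt(sum(r * (y - mu[1])^2) / sum(r)),
            sqrt(sum((1 - r) * (y - mu[2])^2) / sum(1 - r)))
  }
  list(pi = c(p1, 1 - p1), mu = mu, sigma = s)
}
y <- c(rnorm(150, -2), rnorm(100, 2))   # toy data
em_mix2(y)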
53,415 | Mixture model fixed effects | I suspect you actually mean $y_{it}=\mu_i+\sum_j I_{ij}\theta_{jt}+\epsilon_{it}$, where $I_{ij}$ is an indicator for individual $i$ belonging to group $j$ (with probability $\pi_{ij}$). I also wonder if you really want to allow for different individuals to have different group assignment probabilities? I suspect $\pi_j$ will suffice?
I guess it could be seen as a random/mixed effect (not a mixture) with a multinomial effect. This might be an improper term, since "random effects" is typically reserved for Gaussian random effects.
Also note that you are not dealing with a Gaussian mixture, because you allow an individual-specific shift of the Gaussian ($\mu_i$).
Anyhow, I can think of two approaches:
Write the likelihood and maximize it using an EM algorithm, which you can write yourself.
Compute $\mu_i$ by averaging within individuals (assuming you have enough observations). The residuals will be distributed like a Gaussian location mixture at each $t$. You can use a standard EM implementation such as the mixtools R package to estimate $\{\theta_{jt}\}$ and $\{\pi_j\}$; a small sketch of this approach follows below.
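A hedged sketch of that second approach on simulated panel data (my toy setup, with group-specific time trends so that the mixture survives the within-individual demeaning; the estimated component means are therefore centred versions of $\theta_{jt}$):
library(mixtools)
set.seed(1)
n_id <- 100; n_t <- 4
df <- data.frame(id = rep(1:n_id, each = n_t), t = rep(1:n_t, n_id))
mu_i  <- rnorm(n_id)                                  # individual intercepts
grp_i <- rbinom(n_id, 1, 0.4)                         # latent group membership
df$y  <- mu_i[df$id] + ifelse(grp_i[df$id] == 1, 1, -1) * df$t + rnorm(nrow(df), sd = 0.3)
df$resid <- df$y - ave(df$y, df$id)                   # subtract each individual's mean
fit4 <- normalmixEM(df$resid[df$t == 4], k = 2)       # mixture fit at one time point
fit4$lambda                                           # estimated mixing proportions (pi_j)
fit4$mu                                               # estimated (centred) component means at t = 4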
53,416 | What is the distribution of the product of a Bernoulli & a normal random variable? | $-X$ has the same distribution as $X$ since its density is symmetric about the origin, and $Z$ is likewise symmetric, therefore the result is ... yet another normal random variable.
It's instructive to ponder how $Y$ is impacted by changes in the parameter $p=\mathrm P(Z=1)$ of the Bernoulli random variable $Z$. Here is a plot of $Y$ as $p$ runs from $0$ to $1$:
Can you mentally confirm this animation by imagining $Y$ for $p=0$, $p=0.5$, and $p=1$, then doing a little interpolation?
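Since the animation itself isn't reproduced here, a quick simulation sketch (mine, assuming as in the other answers that $Y = Z\,|X|$ with $\mathrm P(Z=1)=p$ and $\mathrm P(Z=-1)=1-p$) lets you recreate the three key frames:
set.seed(1)
sim_y <- function(p, n = 1e5) {
  x <- rnorm(n)
  z <- ifelse(runif(n) < p, 1, -1)
  z * abs(x)                          # Y = Z * |X|
}
op <- par(mfrow = c(1, 3))
for (p in c(0, 0.5, 1)) hist(sim_y(p), breaks = 60, freq = FALSE, main = paste("p =", p))
par(op)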
53,417 | What is the distribution of the product of a Bernoulli & a normal random variable? | May I suggest you start from first principles? You seek the distribution of $Y$, so you should be asking yourself about the chance that $Y \le t$ for some arbitrary real value $t$. To handle the discreteness of $Z$, consider enumerating its possible values:
$$\Pr[Y \le t] = \Pr[Z\ |X| \le t] = \Pr[|X| \le t \text{ and }Z=1] + \Pr[-|X| \le t \text{ and } Z=-1].$$
Because you are assuming $X$ and $Z$ independent, the joint probabilities (connected by "$\text{and}$") are obtained by multiplication. The rest now is straightforward.
By doing the computations graphically (use a sketch of the PDF of $X$) you will likely note some opportunities for simplification of the answer; it reduces to a very simple expression in terms of the cumulative distribution function of $X$ itself. | What is the distribution of the product of a Bernoulli & a normal random variable? | May I suggest you start from first principles? You seek the distribution of $Y$, so you should be asking yourself about the chance that $Y \le t$ for some arbitrary real value $t$. To handle the dis | What is the distribution of the product of a Bernoulli & a normal random variable?
May I suggest you start from first principles? You seek the distribution of $Y$, so you should be asking yourself about the chance that $Y \le t$ for some arbitrary real value $t$. To handle the discreteness of $Z$, consider enumerating its possible values:
$$\Pr[Y \le t] = \Pr[Z\ |X| \le t] = \Pr[|X| \le t \text{ and }Z=1] + \Pr[-|X| \le t \text{ and } Z=-1].$$
Because you are assuming $X$ and $Z$ independent, the joint probabilities (connected by "$\text{and}$") are obtained by multiplication. The rest now is straightforward.
By doing the computations graphically (use a sketch of the PDF of $X$) you will likely note some opportunities for simplification of the answer; it reduces to a very simple expression in terms of the cumulative distribution function of $X$ itself. | What is the distribution of the product of a Bernoulli & a normal random variable?
May I suggest you start from first principles? You seek the distribution of $Y$, so you should be asking yourself about the chance that $Y \le t$ for some arbitrary real value $t$. To handle the dis |
53,418 | What is the distribution of the product of a Bernoulli & a normal random variable? | Here, I have done the following calculation:
$$\begin{aligned} P(Y \le y) &= P(Z\,|X| \le y) \\ &= 0.5\,P(|X| \le y) + 0.5\,P(-|X| \le y) \\ &= 0.5\left[P(-y \le X \le y) + P(|X| \ge -y)\right] \\ &= 0.5\left[2\Phi(y) - 1 + 1 - P(|X| < -y)\right] \\ &= 0.5\left[2\Phi(y) - P(y \le X \le -y)\right] \\ &= \Phi(y), \end{aligned}$$
because the second term is the probability of an impossible event.
Therefore $Y$ has a standard normal distribution.
Is my calculation correct?
53,419 | How to get coefficients from princomp object in R? | If you read the documentation on the princomp function (via ?princomp), you'll see that there is a section titled "Value" that describes the components of the returned object.
The piece you are looking for is probably the loadings.
Additionally, if you type stats:::predict.princomp at the console you can see for yourself precisely what R is doing when you call predict.
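For example (my own use of a built-in data set, not the asker's data):
pc <- princomp(USArrests, cor = TRUE)
unclass(pc$loadings)    # the coefficient matrix; loadings(pc) prints the same thing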
53,420 | How to get coefficients from princomp object in R? | The particular calculation done by predict in this case is scale(newdata, object$center, object$scale) %*% object$loadings, where object is the object returned by princomp. There's no way to reduce this to a single matrix multiplication, as in the general case newdata must be rescaled to match the original data.
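A quick check of that formula on a built-in data set (mine, not the asker's):
pc     <- princomp(USArrests, cor = TRUE)
new    <- USArrests[1:3, ]
manual <- scale(new, center = pc$center, scale = pc$scale) %*% pc$loadings
all.equal(unname(manual), unname(predict(pc, new)))   # TRUE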
53,421 | How to get p value and confidence intervals for nls functions? | 1.
- You could try (this is an approximation)
library(nls2)
summary(as.lm(model))
You can obtain a p-value for all parameters used in your model using
summary(model)
You can get p values for a model by comparing it to another ("nested") model using
anova(model1, model2)
where model2 is a simplified version of model1 (it is your null hypothesis)
You can use methods such as bootstrapping to get a measure of the probability of fit of your complete model.
2.
You can possibly get a full-model confidence interval using (this is an approximation)
library(nls2)
predict(as.lm(model2), interval = "confidence")
You can obtain the confidence interval of the parameters using
confint(model)
You can get more information about these parameter intervals using
profile(model)
plot(profile(model))
You can obtain the pairwise confidence interval for two of your parameters (for both plotting and to get the matrix) using
ellipse.nls(model)
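To tie these pieces together, here is a small self-contained example (my simulated data and a simple exponential model, not the asker's):
set.seed(42)
x <- 1:50
y <- 5 * exp(-0.1 * x) + rnorm(50, sd = 0.2)
fit <- nls(y ~ a * exp(-b * x), start = list(a = 4, b = 0.05))
summary(fit)    # estimates, standard errors and p-values for a and b
confint(fit)    # profile-likelihood confidence intervals for a and b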
53,422 | How to get p value and confidence intervals for nls functions? | Regarding confidence intervals, other answers here seem to have issues with the use of functions (as.lm.nls, as.proto.list) that for some reason are not defined for some users (like me). After some surfing, I found an answer that works for me, requiring only the MASS package. At the urging of @etov, I am posting the answer I found here. It is originally from https://www.r-bloggers.com/predictnls-part-1-monte-carlo-simulation-confidence-intervals-for-nls-models/ and appears to be by someone named Andrej who uses the handle anspiess. This function by Andrej, in his words, "takes an nls object, extracts the variables/parameter values/parameter variance-covariance matrix, creates an “augmented” covariance matrix (with the variance/covariance values from the parameters and predictor variables included, the latter often being zero), simulates from a multivariate normal distribution (using mvrnorm of the ‘MASS’ package), evaluates the function (object$call$formula) on the values and finally collects statistics". So it is a Monte-Carlo-based method of getting confidence intervals for an nls model. His code:
predictNLS <- function(
object,
newdata,
level = 0.95,
nsim = 10000,
...
)
{
require(MASS, quietly = TRUE)
## get right-hand side of formula
RHS <- as.list(object$call$formula)[[3]]
EXPR <- as.expression(RHS)
## all variables in model
VARS <- all.vars(EXPR)
## coefficients
COEF <- coef(object)
## extract predictor variable
predNAME <- setdiff(VARS, names(COEF))
## take fitted values, if 'newdata' is missing
if (missing(newdata)) {
newdata <- eval(object$data)[predNAME]
colnames(newdata) <- predNAME
}
## check that 'newdata' has same name as predVAR
if (names(newdata)[1] != predNAME) stop("newdata should have name '", predNAME, "'!")
## get parameter coefficients
COEF <- coef(object)
## get variance-covariance matrix
VCOV <- vcov(object)
## augment variance-covariance matrix for 'mvrnorm'
## by adding a column/row for 'error in x'
NCOL <- ncol(VCOV)
ADD1 <- c(rep(0, NCOL))
ADD1 <- matrix(ADD1, ncol = 1)
colnames(ADD1) <- predNAME
VCOV <- cbind(VCOV, ADD1)
ADD2 <- c(rep(0, NCOL + 1))
ADD2 <- matrix(ADD2, nrow = 1)
rownames(ADD2) <- predNAME
VCOV <- rbind(VCOV, ADD2)
## iterate over all entries in 'newdata' as in usual 'predict.' functions
NR <- nrow(newdata)
respVEC <- numeric(NR)
seVEC <- numeric(NR)
varPLACE <- ncol(VCOV)
## define counter function
counter <- function (i)
{
if (i%%10 == 0)
cat(i)
else cat(".")
if (i%%50 == 0)
cat("\n")
flush.console()
}
outMAT <- NULL
for (i in 1:NR) {
counter(i)
## get predictor values and optional errors
predVAL <- newdata[i, 1]
if (ncol(newdata) == 2) predERROR <- newdata[i, 2] else predERROR <- 0
names(predVAL) <- predNAME
names(predERROR) <- predNAME
## create mean vector for 'mvrnorm'
MU <- c(COEF, predVAL)
## create variance-covariance matrix for 'mvrnorm'
## by putting error^2 in lower-right position of VCOV
newVCOV <- VCOV
newVCOV[varPLACE, varPLACE] <- predERROR^2
## create MC simulation matrix
simMAT <- mvrnorm(n = nsim, mu = MU, Sigma = newVCOV, empirical = TRUE)
## evaluate expression on rows of simMAT
EVAL <- try(eval(EXPR, envir = as.data.frame(simMAT)), silent = TRUE)
if (inherits(EVAL, "try-error")) stop("There was an error evaluating the simulations!")
## collect statistics
PRED <- data.frame(predVAL)
colnames(PRED) <- predNAME
FITTED <- predict(object, newdata = data.frame(PRED))
MEAN.sim <- mean(EVAL, na.rm = TRUE)
SD.sim <- sd(EVAL, na.rm = TRUE)
MEDIAN.sim <- median(EVAL, na.rm = TRUE)
MAD.sim <- mad(EVAL, na.rm = TRUE)
QUANT <- quantile(EVAL, c((1 - level)/2, level + (1 - level)/2))
RES <- c(FITTED, MEAN.sim, SD.sim, MEDIAN.sim, MAD.sim, QUANT[1], QUANT[2])
outMAT <- rbind(outMAT, RES)
}
colnames(outMAT) <- c("fit", "mean", "sd", "median", "mad", names(QUANT[1]), names(QUANT[2]))
rownames(outMAT) <- NULL
cat("\n")
return(outMAT)
}
And then he writes: "The input is an ‘nls’ object, a data.frame ‘newdata’ of values to be predicted with the value x_new in the first column and (optional) “errors-in-x” (as sigma) in the second column. The number of simulations can be tweaked with nsim as well as the alpha-level for the confidence interval.
The output is f(x_new, beta) (fitted value), mu(y_n) (mean of simulation), sigma(y_n) (s.d. of simulation), median(y_n) (median of simulation), mad(y_n) (mad of simulation) and the lower/upper confidence interval."
He has some additional text explaining this further and giving a usage example, but I don't feel like it's really appropriate for me to copy his entire blog post into this answer, so please visit his page, if it still exists, for further details. Anyway it's pretty simple and self-explanatory, and worked for me right out of the box, on the first try. Thanks Andrej! | How to get p value and confidence intervals for nls functions? | Regarding confidence intervals, other answers here seem to have issues with the use of functions (as.lm.nls, as.proto.list) that for some reason are not defined for some users (like me). After some s | How to get p value and confidence intervals for nls functions?
Regarding confidence intervals, other answers here seem to have issues with the use of functions (as.lm.nls, as.proto.list) that for some reason are not defined for some users (like me). After some surfing, I found an answer that works for me, requiring only the MASS package. At the urging of @etov, I am posting the answer I found here. It is originally from https://www.r-bloggers.com/predictnls-part-1-monte-carlo-simulation-confidence-intervals-for-nls-models/ and appears to be by someone named Andrej who uses the handle anspiess. This function by Andrej, in his words, "takes an nls object, extracts the variables/parameter values/parameter variance-covariance matrix, creates an “augmented” covariance matrix (with the variance/covariance values from the parameters and predictor variables included, the latter often being zero), simulates from a multivariate normal distribution (using mvrnorm of the ‘MASS’ package), evaluates the function (object$call$formula) on the values and finally collects statistics". So it is a Monte-Carlo-based method of getting confidence intervals for an nls model. His code:
predictNLS <- function(
object,
newdata,
level = 0.95,
nsim = 10000,
...
)
{
require(MASS, quietly = TRUE)
## get right-hand side of formula
RHS <- as.list(object$call$formula)[[3]]
EXPR <- as.expression(RHS)
## all variables in model
VARS <- all.vars(EXPR)
## coefficients
COEF <- coef(object)
## extract predictor variable
predNAME <- setdiff(VARS, names(COEF))
## take fitted values, if 'newdata' is missing
if (missing(newdata)) {
newdata <- eval(object$data)[predNAME]
colnames(newdata) <- predNAME
}
## check that 'newdata' has same name as predVAR
if (names(newdata)[1] != predNAME) stop("newdata should have name '", predNAME, "'!")
## get parameter coefficients
COEF <- coef(object)
## get variance-covariance matrix
VCOV <- vcov(object)
## augment variance-covariance matrix for 'mvrnorm'
## by adding a column/row for 'error in x'
NCOL <- ncol(VCOV)
ADD1 <- c(rep(0, NCOL))
ADD1 <- matrix(ADD1, ncol = 1)
colnames(ADD1) <- predNAME
VCOV <- cbind(VCOV, ADD1)
ADD2 <- c(rep(0, NCOL + 1))
ADD2 <- matrix(ADD2, nrow = 1)
rownames(ADD2) <- predNAME
VCOV <- rbind(VCOV, ADD2)
## iterate over all entries in 'newdata' as in usual 'predict.' functions
NR <- nrow(newdata)
respVEC <- numeric(NR)
seVEC <- numeric(NR)
varPLACE <- ncol(VCOV)
## define counter function
counter <- function (i)
{
if (i%%10 == 0)
cat(i)
else cat(".")
if (i%%50 == 0)
cat("\n")
flush.console()
}
outMAT <- NULL
for (i in 1:NR) {
counter(i)
## get predictor values and optional errors
predVAL <- newdata[i, 1]
if (ncol(newdata) == 2) predERROR <- newdata[i, 2] else predERROR <- 0
names(predVAL) <- predNAME
names(predERROR) <- predNAME
## create mean vector for 'mvrnorm'
MU <- c(COEF, predVAL)
## create variance-covariance matrix for 'mvrnorm'
## by putting error^2 in lower-right position of VCOV
newVCOV <- VCOV
newVCOV[varPLACE, varPLACE] <- predERROR^2
## create MC simulation matrix
simMAT <- mvrnorm(n = nsim, mu = MU, Sigma = newVCOV, empirical = TRUE)
## evaluate expression on rows of simMAT
EVAL <- try(eval(EXPR, envir = as.data.frame(simMAT)), silent = TRUE)
if (inherits(EVAL, "try-error")) stop("There was an error evaluating the simulations!")
## collect statistics
PRED <- data.frame(predVAL)
colnames(PRED) <- predNAME
FITTED <- predict(object, newdata = data.frame(PRED))
MEAN.sim <- mean(EVAL, na.rm = TRUE)
SD.sim <- sd(EVAL, na.rm = TRUE)
MEDIAN.sim <- median(EVAL, na.rm = TRUE)
MAD.sim <- mad(EVAL, na.rm = TRUE)
QUANT <- quantile(EVAL, c((1 - level)/2, level + (1 - level)/2))
RES <- c(FITTED, MEAN.sim, SD.sim, MEDIAN.sim, MAD.sim, QUANT[1], QUANT[2])
outMAT <- rbind(outMAT, RES)
}
colnames(outMAT) <- c("fit", "mean", "sd", "median", "mad", names(QUANT[1]), names(QUANT[2]))
rownames(outMAT) <- NULL
cat("\n")
return(outMAT)
}
And then he writes: "The input is an ‘nls’ object, a data.frame ‘newdata’ of values to be predicted with the value x_new in the first column and (optional) “errors-in-x” (as sigma) in the second column. The number of simulations can be tweaked with nsim as well as the alpha-level for the confidence interval.
The output is f(x_new, beta) (fitted value), mu(y_n) (mean of simulation), sigma(y_n) (s.d. of simulation), median(y_n) (median of simulation), mad(y_n) (mad of simulation) and the lower/upper confidence interval."
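To give a flavor of how it is called, here is a minimal sketch I put together myself (not from his post; it simply reuses the small exponential-growth data set that appears in another answer below, and the name of the optional second, error-in-x column is arbitrary):
Y <- c(282, 314, 581, 846, 1320, 2014, 2798, 4593, 6065, 7818, 9826)
dat <- data.frame(y = Y, x = 1:11)
fit <- nls(y ~ exp(a + b * x), data = dat, start = list(a = 0, b = 1))
## the first column of 'newdata' must carry the predictor's name ("x" here);
## the optional second column gives the measurement error (sigma) of x_new
nd <- data.frame(x = c(12, 13), err = c(0, 0))
predictNLS(fit, newdata = nd, level = 0.95, nsim = 10000)
Each row of the resulting matrix then corresponds to one row of newdata.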
He has some additional text explaining this further and giving a usage example, but I don't feel like it's really appropriate for me to copy his entire blog post into this answer, so please visit his page, if it still exists, for further details. Anyway it's pretty simple and self-explanatory, and worked for me right out of the box, on the first try. Thanks Andrej! | How to get p value and confidence intervals for nls functions?
Regarding confidence intervals, other answers here seem to have issues with the use of functions (as.lm.nls, as.proto.list) that for some reason are not defined for some users (like me). After some s |
53,423 | How to get p value and confidence intervals for nls functions? | A note regarding confidence intervals (2 above), and the answer by @Etienne Low-Décarie:
Even after attaching nls2, the as.lm function is sometimes unavailable. Based on this (now stale) reference (originally authored by delichon), here's the function's source:
as.lm.nls <- function(object, ...) {
if (!inherits(object, "nls")) {
w <- paste("expected object of class nls but got object of class:",
paste(class(object), collapse = " "))
warning(w)
}
gradient <- object$m$gradient()
if (is.null(colnames(gradient))) {
colnames(gradient) <- names(object$m$getPars())
}
response.name <- if (length(formula(object)) == 2) "0" else
as.character(formula(object)[[2]])
lhs <- object$m$lhs()
L <- data.frame(lhs, gradient)
names(L)[1] <- response.name
fo <- sprintf("%s ~ %s - 1", response.name,
paste(colnames(gradient), collapse = "+"))
fo <- as.formula(fo, env = as.proto.list(L))
do.call("lmst(fo, offset = substitute(fitted(object)))")
}
Then use predict the standard way:
predCI <- predict(as.lm.nls(fittednls), interval = "confidence", level = 0.95)
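For instance, a minimal end-to-end sketch (my own toy call, not from the original source; it assumes the proto package is installed so that as.proto.list is available, and it borrows the small exponential data set used in another answer below):
library(proto)
Y <- c(282, 314, 581, 846, 1320, 2014, 2798, 4593, 6065, 7818, 9826)
dat <- data.frame(y = Y, x = 1:11)
fittednls <- nls(y ~ exp(a + b * x), data = dat, start = list(a = 0, b = 1))
predCI <- predict(as.lm.nls(fittednls), interval = "confidence", level = 0.95)
head(predCI)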
Thanks @waybackmachine | How to get p value and confidence intervals for nls functions? | A note regarding confidence intervals (2 above), and the answer by @Etienne Low-Décarie:
Even after attaching nls2, the as.lm functions is sometimes unavailable. Based on this (now stale) reference (o | How to get p value and confidence intervals for nls functions?
A note regarding confidence intervals (2 above), and the answer by @Etienne Low-Décarie:
Even after attaching nls2, the as.lm function is sometimes unavailable. Based on this (now stale) reference (originally authored by delichon), here's the function's source:
as.lm.nls <- function(object, ...) {
if (!inherits(object, "nls")) {
w <- paste("expected object of class nls but got object of class:",
paste(class(object), collapse = " "))
warning(w)
}
gradient <- object$m$gradient()
if (is.null(colnames(gradient))) {
colnames(gradient) <- names(object$m$getPars())
}
response.name <- if (length(formula(object)) == 2) "0" else
as.character(formula(object)[[2]])
lhs <- object$m$lhs()
L <- data.frame(lhs, gradient)
names(L)[1] <- response.name
fo <- sprintf("%s ~ %s - 1", response.name,
paste(colnames(gradient), collapse = "+"))
fo <- as.formula(fo, env = as.proto.list(L))
do.call("lmst(fo, offset = substitute(fitted(object)))")
}
Then use predict the standard way:
predCI <- predict(as.lm.nls(fittednls), interval = "confidence", level = 0.95)
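For instance, a minimal end-to-end sketch (my own toy call, not from the original source; it assumes the proto package is installed so that as.proto.list is available, and it borrows the small exponential data set used in another answer below):
library(proto)
Y <- c(282, 314, 581, 846, 1320, 2014, 2798, 4593, 6065, 7818, 9826)
dat <- data.frame(y = Y, x = 1:11)
fittednls <- nls(y ~ exp(a + b * x), data = dat, start = list(a = 0, b = 1))
predCI <- predict(as.lm.nls(fittednls), interval = "confidence", level = 0.95)
head(predCI)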
Thanks @waybackmachine | How to get p value and confidence intervals for nls functions?
A note regarding confidence intervals (2 above), and the answer by @Etienne Low-Décarie:
Even after attaching nls2, the as.lm functions is sometimes unavailable. Based on this (now stale) reference (o |
53,424 | How to get p value and confidence intervals for nls functions? | I was also banging my head on this one and eventually found predictNLS() function in the propagate package.
For example:
library(propagate)
Y <- c(282, 314, 581, 846, 1320, 2014, 2798, 4593, 6065, 7818, 9826)
temp <- data.frame(y = Y, x = 1:11)
mod <- nls(y ~ exp(a + b * x), data = temp, start = list(a = 0, b = 1))
(PROP1 <- predictNLS(mod, newdata = data.frame(x = c(12,13)), interval = "prediction"))
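In the versions of propagate I have used, the numeric results (fitted value, propagated and simulated moments, and the interval bounds) are collected in a data frame inside the returned object, so something like the line below should display them (exact column names may differ between versions):
PROP1$summary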
Hope this helps.
Link to R documentation | How to get p value and confidence intervals for nls functions? | I was also banging my head on this one and eventually found predictNLS() function in the propagate package.
For example:
library(propagate)
Y <- c(282, 314, 581, 846, 1320, 2014, 2798, 4593, 6065 | How to get p value and confidence intervals for nls functions?
I was also banging my head on this one and eventually found predictNLS() function in the propagate package.
For example:
library(propagate)
Y <- c(282, 314, 581, 846, 1320, 2014, 2798, 4593, 6065, 7818, 9826)
temp <- data.frame(y = Y, x = 1:11)
mod <- nls(y ~ exp(a + b * x), data = temp, start = list(a = 0, b = 1))
(PROP1 <- predictNLS(mod, newdata = data.frame(x = c(12,13)), interval = "prediction"))
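In the versions of propagate I have used, the numeric results (fitted value, propagated and simulated moments, and the interval bounds) are collected in a data frame inside the returned object, so something like the line below should display them (exact column names may differ between versions):
PROP1$summary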
Hope this helps.
Link to R documentation | How to get p value and confidence intervals for nls functions?
I was also banging my head on this one and eventually found predictNLS() function in the propagate package.
For example:
library(propagate)
Y <- c(282, 314, 581, 846, 1320, 2014, 2798, 4593, 6065 |
53,425 | A Monte Carlo confidence interval method for the maximum of a uniform distribution? | Yes, this is a classic example of when the bootstrap is inconsistent, but subsampling yields valid inference. See Swanepoel (1986), Politis and Romano (1994) or Canty et. al. (2006). | A Monte Carlo confidence interval method for the maximum of a uniform distribution? | Yes, this is a classic example of when the bootstrap is inconsistent, but subsampling yields valid inference. See Swanepoel (1986), Politis and Romano (1994) or Canty et. al. (2006). | A Monte Carlo confidence interval method for the maximum of a uniform distribution?
Yes, this is a classic example of when the bootstrap is inconsistent, but subsampling yields valid inference. See Swanepoel (1986), Politis and Romano (1994) or Canty et. al. (2006). | A Monte Carlo confidence interval method for the maximum of a uniform distribution?
Yes, this is a classic example of when the bootstrap is inconsistent, but subsampling yields valid inference. See Swanepoel (1986), Politis and Romano (1994) or Canty et. al. (2006). |
53,426 | A Monte Carlo confidence interval method for the maximum of a uniform distribution? | Why not just use the definition of confidence interval?
You seek an upper confidence limit for $\theta$, say with $1-\alpha$ coverage. Because this is tantamount to estimating a scale and $X_n$ is your test statistic, you should seek a UCL of the form $cX_n$ for some universal constant $c$. All this means (by definition) is that
$$1-\alpha = {\Pr}_\theta[c X_n \ge \theta] = {\Pr}_\theta[X_n \ge \theta/c] = 1-\left(\frac{1}{c}\right)^n.$$
The solution is $c = \alpha^{-1/n}$.
Interestingly, a lower confidence limit for $\theta$ can be found in the same way, in the form LCL = $b X_n$ with $b = (1-\alpha)^{-1/n}$. This confidence interval never contains $\hat{\theta} = X_n$!
For example, suppose $n=10$ and we seek a (symmetric) two-sided 95% confidence interval, so that $\alpha = 0.025$. Thus $c=1.446126$ and $b=1.002535$: we have 95% confidence that the limit of the underlying uniform distribution, $\theta$, lies between 1.446 and 1.003 times the largest of 10 iid draws from that distribution.
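(A quick two-line check of those two constants, using the same per-tail $\alpha$:)
n <- 10; a.tail <- 0.025
c(c.ucl = a.tail^(-1/n), b.lcl = (1 - a.tail)^(-1/n))  # 1.446126, 1.002535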
As a test (in R):
# Specify the confidence.
alpha <- 1 - 0.95
# Create simulated values.
n <- 10 # Number of iid draws per trial
nTrials <- 10000 # Number of trials
theta <- 1 # Parameter (positive; its value doesn't matter)
set.seed(17)
x <- runif(n*nTrials, max=theta) # The data
xn <- apply(matrix(x, nrow=n), 2, max) # The test statistics
# Compute the coverage of the simulated intervals.
ucl.k <- (alpha/2)^(-1/n)
lcl.k <- (1-alpha/2)^(-1/n)
length(xn[lcl.k * xn <= theta & theta <= ucl.k * xn]) / nTrials
This (reproducible) example yields 95.05%, as close as one can hope to the nominal coverage of 95%. | A Monte Carlo confidence interval method for the maximum of a uniform distribution? | Why not just use the definition of confidence interval?
You seek an upper confidence limit for $\theta$, say with $1-\alpha$ coverage. Because this is tantamount to estimating a scale and $X_n$ is yo | A Monte Carlo confidence interval method for the maximum of a uniform distribution?
Why not just use the definition of confidence interval?
You seek an upper confidence limit for $\theta$, say with $1-\alpha$ coverage. Because this is tantamount to estimating a scale and $X_n$ is your test statistic, you should seek a UCL of the form $cX_n$ for some universal constant $c$. All this means (by definition) is that
$$1-\alpha = {\Pr}_\theta[c X_n \ge \theta] = {\Pr}_\theta[X_n \ge \theta/c] = 1-\left(\frac{1}{c}\right)^n.$$
The solution is $c = \alpha^{-1/n}$.
Interestingly, a lower confidence limit for $\theta$ can be found in the same way, in the form LCL = $b X_n$ with $b = (1-\alpha)^{-1/n}$. This confidence interval never contains $\hat{\theta} = X_n$!
For example, suppose $n=10$ and we seek a (symmetric) two-sided 95% confidence interval, so that $\alpha = 0.025$. Thus $c=1.446126$ and $b=1.002535$: we have 95% confidence that the limit of the underlying uniform distribution, $\theta$, lies between 1.446 and 1.003 times the largest of 10 iid draws from that distribution.
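(A quick two-line check of those two constants, using the same per-tail $\alpha$:)
n <- 10; a.tail <- 0.025
c(c.ucl = a.tail^(-1/n), b.lcl = (1 - a.tail)^(-1/n))  # 1.446126, 1.002535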
As a test (in R):
# Specify the confidence.
alpha <- 1 - 0.95
# Create simulated values.
n <- 10 # Number of iid draws per trial
nTrials <- 10000 # Number of trials
theta <- 1 # Parameter (positive; its value doesn't matter)
set.seed(17)
x <- runif(n*nTrials, max=theta) # The data
xn <- apply(matrix(x, nrow=n), 2, max) # The test statistics
# Compute the coverage of the simulated intervals.
ucl.k <- (alpha/2)^(-1/n)
lcl.k <- (1-alpha/2)^(-1/n)
length(xn[lcl.k * xn <= theta & theta <= ucl.k * xn]) / nTrials
This (reproducible) example yields 95.05%, as close as one can hope to the nominal coverage of 95%. | A Monte Carlo confidence interval method for the maximum of a uniform distribution?
Why not just use the definition of confidence interval?
You seek an upper confidence limit for $\theta$, say with $1-\alpha$ coverage. Because this is tantamount to estimating a scale and $X_n$ is yo |
53,427 | When parsing text for n-grams - should punctuation be included? | Here's a hint: Google doesn't include punctuation in n-grams. | When parsing text for n-grams - should punctuation be included? | Here's a hint: Google doesn't include punctuation in n-grams. | When parsing text for n-grams - should punctuation be included?
Here's a hint: Google doesn't include punctuation in n-grams. | When parsing text for n-grams - should punctuation be included?
Here's a hint: Google doesn't include punctuation in n-grams. |
53,428 | When parsing text for n-grams - should punctuation be included? | Google ignores punctuation, but there are non-alphanumeric characters that are not ignored.
For example, search these word/phrases:
tech spec
tech-spec
tech,spec
tech_spec
The search results vary, showing Google does consider some characters significant.
Also, are you doing this in non-English languages?
If so, then consider creating n-grams from a certain number of characters, instead of words. This will lead to better results on many non-English languages, and it's the only way to effectively parse CJK-type languages that don't use significant whitespace. | When parsing text for n-grams - should punctuation be included? | Google ignores punctuation, but there are non-alphanumeric characters that are not ignored.
For example, search these word/phrases:
tech spec
tech-spec
tech,spec
tech_spec
The search results vary, s | When parsing text for n-grams - should punctuation be included?
Google ignores punctuation, but there are non-alphanumeric characters that are not ignored.
For example, search these word/phrases:
tech spec
tech-spec
tech,spec
tech_spec
The search results vary, showing Google does consider some characters significant.
Also, are you doing this in non-English languages?
If so, then consider creating n-grams from a certain number of characters, instead of words. This will lead to better results on many non-English languages, and it's the only way to effectively parse CJK-type languages that don't use significant whitespace. | When parsing text for n-grams - should punctuation be included?
Google ignores punctuation, but there are non-alphanumeric characters that are not ignored.
For example, search these word/phrases:
tech spec
tech-spec
tech,spec
tech_spec
The search results vary, s |
53,429 | Feature naming conventions | With a lot of variables, at some point you are going to want to figure out what's what, and this will be easier if they are meaningful when put in alphabetical order. You are less likely to group them by whether they are logged or not than whether they are in the same "family". So, I'd rearrange your example as this:
gt_100k_income -> income_gt_100k
is_missing_income -> income_missing
lg10_income -> income_lg10
age_20_lt_25_age -> age_ge_20_lt_25
zscale_age -> age_zscale
ratio_ln_income_ln_age -> income_ln_over_age_ln
I realize that this is exactly the opposite of what some software does automatically (such as Excel pivot tables or Alteryx Summaries), but Bill Gates isn't right all the time.
It's probably more important to be consistent with your method, than what that particular method is. | Feature naming conventions | With a lot of variables, at some point you are going to want to figure out what's what, and this will be easier if they are meaningful when put in alphabetical order. You are less likely to group them | Feature naming conventions
With a lot of variables, at some point you are going to want to figure out what's what, and this will be easier if they are meaningful when put in alphabetical order. You are less likely to group them by whether they are logged or not than whether they are in the same "family". So, I'd rearrange your example as this:
gt_100k_income -> income_gt_100k
is_missing_income -> income_missing
lg10_income -> income_lg10
age_20_lt_25_age -> age_ge_20_lt_25
zscale_age -> age_zscale
ratio_ln_income_ln_age -> income_ln_over_age_ln
I realize that this is exactly the opposite of what some software does automatically (such as Excel pivot tables or Alteryx Summaries), but Bill Gates isn't right all the time.
It's probably more important to be consistent with your method, than what that particular method is. | Feature naming conventions
With a lot of variables, at some point you are going to want to figure out what's what, and this will be easier if they are meaningful when put in alphabetical order. You are less likely to group them |
53,430 | Feature naming conventions | You're making me think about a kind of Hungarian notation for feature names. Cool.
I like feature names that sort alphabetically on some meaningful domain category. In your example, I'd have all income-related features start with income_.
I also like feature name suffixes that make it obvious what kind of values the feature can take. For instance, if the feature is binary, I may have it end in _is, e.g., income_missing_is. If the feature is a frequency count, it's a _freq, and you immediately know that that will never be less than zero.
If the feature is automatically generated by some special mechanism, e.g., third-party software or some cross-referenced dataset, that is sometimes useful in the feature name. For instance, income_census2010_bracket.
You will want to search and filter feature names, so always use underscore separators and lower case identifiers, never camel case.
Verbosity is not generally an issue; 40-60 characters is fine. | Feature naming conventions | You're making me think about a kind of Hungarian notation for feature names. Cool.
I like feature names that sort alphabetically on some meaningful domain category. In your example, I'd have all incom | Feature naming conventions
You're making me think about a kind of Hungarian notation for feature names. Cool.
I like feature names that sort alphabetically on some meaningful domain category. In your example, I'd have all income-related features start with income_.
I also like feature name suffixes that make it obvious what kind of values the feature can take. For instance, if the feature is binary, I may have it end in _is, e.g., income_missing_is. If the feature is a frequency count, it's a _freq, and you immediately know that that will never be less than zero.
If the feature is automatically generated by some special mechanism, e.g., third-party software or some cross-referenced dataset, that is sometimes useful in the feature name. For instance, income_census2010_bracket.
You will want to search and filter feature names, so always use underscore separators and lower case identifiers, never camel case.
Verbosity is not generally an issue; 40-60 characters is fine. | Feature naming conventions
You're making me think about a kind of Hungarian notation for feature names. Cool.
I like feature names that sort alphabetically on some meaningful domain category. In your example, I'd have all incom |
53,431 | Non-parametric test for unequal samples with subsequent post-hoc analysis? | Yes and Yes. Kruskal-Wallis analysis does not require equal sample size. In latest SPSS versions (from 18, if I remember correctly) there is a new nonparametrics procedure that performs pairwise comparisons with sig. adjustment, as well as step-down post-hoc method. Alternative would be to use very nice macro by Marta Garcia-Granero http://gjyp.nl/marta/ | Non-parametric test for unequal samples with subsequent post-hoc analysis? | Yes and Yes. Kruskal-Wallis analysis does not require equal sample size. In latest SPSS versions (from 18, if I remember correctly) there is a new nonparametrics procedure that performs pairwise compa | Non-parametric test for unequal samples with subsequent post-hoc analysis?
Yes and Yes. Kruskal-Wallis analysis does not require equal sample size. In latest SPSS versions (from 18, if I remember correctly) there is a new nonparametrics procedure that performs pairwise comparisons with sig. adjustment, as well as step-down post-hoc method. Alternative would be to use very nice macro by Marta Garcia-Granero http://gjyp.nl/marta/ | Non-parametric test for unequal samples with subsequent post-hoc analysis?
Yes and Yes. Kruskal-Wallis analysis does not require equal sample size. In latest SPSS versions (from 18, if I remember correctly) there is a new nonparametrics procedure that performs pairwise compa |
53,432 | Non-parametric test for unequal samples with subsequent post-hoc analysis? | According to the formula for the Kruskal-Wallis test statistic, each group can have a different number of observations, so "yes".
Whether this is the best test or not I'm not sure - if you're still in doubt you'd need to post more details. But perhaps this is all you needed to know. Good luck! | Non-parametric test for unequal samples with subsequent post-hoc analysis? | According to the formula for the Kruskal-Wallis test statistic, each group can have a different number of observations, so "yes".
Whether this is the best test or not I'm not sure - if you're still in | Non-parametric test for unequal samples with subsequent post-hoc analysis?
According to the formula for the Kruskal-Wallis test statistic, each group can have a different number of observations, so "yes".
Whether this is the best test or not I'm not sure - if you're still in doubt you'd need to post more details. But perhaps this is all you needed to know. Good luck! | Non-parametric test for unequal samples with subsequent post-hoc analysis?
According to the formula for the Kruskal-Wallis test statistic, each group can have a different number of observations, so "yes".
Whether this is the best test or not I'm not sure - if you're still in |
53,433 | Non-parametric test for unequal samples with subsequent post-hoc analysis? | In case of heavy unbalanced groups the Kruskal-Wallis-test may be far off and you should not use it. There is a recent paper published on arXiv by Brunner et al. 2018 "Ranks and Pseudo-Ranks - Paradoxical Results of Rank Tests" in which the authors show that under certain conditions Kruskal-Wallis-test for more than two groups with unequal sample sizes may lead to intransitive decisions and false rejection of the null hypothesis.
This means that for the same set of distributions $F_1, \dots, F_d$ and unequal sample sizes the p-value of the test may be arbitrarily small if $N$ is large enough.
A possible solution is provided by the concept of pseudo-ranks. Apparently, the authors have implemented solutions (which I have not tested yet) for this in R and SAS. See this reference for more information:
Brunner, E., Bathke, A.C., and Konietschke, F. (2018). Rank- and Pseudo-Rank Procedures for Independent Observations in Factorial Designs - Using R and SAS. Springer Series in Statistics, Springer, Heidelberg. | Non-parametric test for unequal samples with subsequent post-hoc analysis? | In case of heavy unbalanced groups the Kruskal-Wallis-test may be far off and you should not use it. There is a recent paper published on arXiv by Brunner et al. 2018 "Ranks and Pseudo-Ranks - Paradox | Non-parametric test for unequal samples with subsequent post-hoc analysis?
In case of heavy unbalanced groups the Kruskal-Wallis-test may be far off and you should not use it. There is a recent paper published on arXiv by Brunner et al. 2018 "Ranks and Pseudo-Ranks - Paradoxical Results of Rank Tests" in which the authors show that under certain conditions Kruskal-Wallis-test for more than two groups with unequal sample sizes may lead to intransitive decisions and false rejection of the null hypothesis.
This means that for the same set of distributions $F_1, \dots, F_d$ and unequal sample sizes the p-value of the test may be arbitrarily small if $N$ is large enough.
A possible solution is provided by the concept of pseudo-ranks. Apparently, the authors have implemented solutions (which I have not tested yet) for this in R and SAS. See this reference for more information:
Brunner, E., Bathke, A.C., and Konietschke, F. (2018). Rank- and Pseudo-Rank Procedures for Independent Observations in Factorial Designs - Using R and SAS. Springer Series in Statistics, Springer, Heidelberg. | Non-parametric test for unequal samples with subsequent post-hoc analysis?
In case of heavy unbalanced groups the Kruskal-Wallis-test may be far off and you should not use it. There is a recent paper published on arXiv by Brunner et al. 2018 "Ranks and Pseudo-Ranks - Paradox |
53,434 | Non-parametric test for unequal samples with subsequent post-hoc analysis? | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
Yes, Mann-Whitney tests are the normal post-hoc test to use for a Kruskal-Wallis test. | Non-parametric test for unequal samples with subsequent post-hoc analysis? | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| Non-parametric test for unequal samples with subsequent post-hoc analysis?
Yes, Mann-Whitney tests are the normal post-hoc test to use for a Kruskal-Wallis test. | Non-parametric test for unequal samples with subsequent post-hoc analysis?
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
53,435 | Difference between ANCOVA and Hierarchical Regression | There really isn't a difference. In matrix algebra form, regression, ANOVA and ANCOVA are all written as
$Y = X\beta + \epsilon$
They arose in different fields and the output is typically formatted differently, but the meaning is the same. However, in the usual usage of the words, regression incorporates the other two, because ANOVA is usually used only when all the independent variables are categorical; ANCOVA when one is continuous (usually) and the others categorical, and regression for any IVs at all. | Difference between ANCOVA and Hierarchical Regression | There really isn't a difference. In matrix algebra form, regression, ANOVA and ANCOVA are all written as
$Y = X\beta + \epsilon$
They arose in different fields and the output is typically formatted di | Difference between ANCOVA and Hierarchical Regression
There really isn't a difference. In matrix algebra form, regression, ANOVA and ANCOVA are all written as
$Y = X\beta + \epsilon$
They arose in different fields and the output is typically formatted differently, but the meaning is the same. However, in the usual usage of the words, regression incorporates the other two, because ANOVA is usually used only when all the independent variables are categorical; ANCOVA when one is continuous (usually) and the others categorical, and regression for any IVs at all. | Difference between ANCOVA and Hierarchical Regression
There really isn't a difference. In matrix algebra form, regression, ANOVA and ANCOVA are all written as
$Y = X\beta + \epsilon$
They arose in different fields and the output is typically formatted di |
53,436 | The reciprocal of $t$-distributed random variable | One can show that if $X$ has density $f(t)$, then $Y = 1/X$ has density $g(t) = {1\over t^2} f\left( {1\over t} \right)$ (for $t\ne0$).
The density of $T$ with $k$ degrees of freedom is
$$\frac{1}{\sqrt{k\pi}}\frac{\Gamma(\frac{k+1}{2})}{\Gamma(\frac{k}{2})}\frac{1}{(1+\frac{t^2}{k})^{\frac{k+1}{2}}}$$
so the density of $1 \over T$ is
$$\frac{1}{\sqrt{k\pi}}\frac{\Gamma(\frac{k+1}{2})}{\Gamma(\frac{k}{2})}\frac{1}{t^2(1+\frac{1}{kt^2})^{\frac{k+1}{2}}}.$$
Note that for $k = 1$ this is the same density (in this case $t$ is the quotient of two iid centered gaussian variables).
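(If you want a quick numerical sanity check of this density, here is a small simulation sketch I wrote; the choice $k=3$ and the cutoff 0.5 are arbitrary:)
k <- 3
g <- function(t) dt(1/t, df = k) / t^2   # density of 1/T from the formula above
x <- rt(1e6, df = k)
mean(1/x > 0.5)                          # simulated P(1/T > 0.5)
integrate(g, 0.5, Inf)$value             # the same probability from the density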
Here is the shape of this density for $k=1, 2, 30$. As whuber says in the comments, when $k>1$ it is bimodal, and all moments diverge.
Edit How do you show with elementary tools $g(t) = {1\over t^2} f\left( {1\over t} \right)$ ?
One possible solution is to first verify (draw a graph) that :
$$\mathbb P(Y\le t) = \left\{\begin{array}{ll}
\mathbb P\left(X \ge {1\over t} \right) + \mathbb P(X \le 0) & \mbox{ if } t> 0 \\
\mathbb P(X \le 0) & \mbox{ if } t= 0 \\
\mathbb P(X\le 0) - \mathbb P\left(X \le {1\over t} \right) & \mbox{ if } t< 0 \\
\end{array}\right.$$
Using this you will easily find how to express the cdf of $Y$ in terms of the cdf of $X$ ; derive this expression to obtain the density. | The reciprocal of $t$-distributed random variable | One can show that if $X$ has density $f(t)$, then $Y = 1/X$ has density $g(t) = {1\over t^2} f\left( {1\over t} \right)$ (for $t\ne0$).
The density of $T$ with $k$ degrees of freedom is
$$\frac{1}{\s | The reciprocal of $t$-distributed random variable
One can show that if $X$ has density $f(t)$, then $Y = 1/X$ has density $g(t) = {1\over t^2} f\left( {1\over t} \right)$ (for $t\ne0$).
The density of $T$ with $k$ degrees of freedom is
$$\frac{1}{\sqrt{k\pi}}\frac{\Gamma(\frac{k+1}{2})}{\Gamma(\frac{k}{2})}\frac{1}{(1+\frac{t^2}{k})^{\frac{k+1}{2}}}$$
so the density of $1 \over T$ is
$$\frac{1}{\sqrt{k\pi}}\frac{\Gamma(\frac{k+1}{2})}{\Gamma(\frac{k}{2})}\frac{1}{t^2(1+\frac{1}{kt^2})^{\frac{k+1}{2}}}.$$
Note that for $k = 1$ this is the same density (in this case $t$ is the quotient of two iid centered gaussian variables).
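(If you want a quick numerical sanity check of this density, here is a small simulation sketch I wrote; the choice $k=3$ and the cutoff 0.5 are arbitrary:)
k <- 3
g <- function(t) dt(1/t, df = k) / t^2   # density of 1/T from the formula above
x <- rt(1e6, df = k)
mean(1/x > 0.5)                          # simulated P(1/T > 0.5)
integrate(g, 0.5, Inf)$value             # the same probability from the density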
Here is the shape of this density for $k=1, 2, 30$. As whuber says in the comments, when $k>1$ it is bimodal, and all moments diverge.
Edit How do you show with elementary tools $g(t) = {1\over t^2} f\left( {1\over t} \right)$ ?
One possible solution is to first verify (draw a graph) that :
$$\mathbb P(Y\le t) = \left\{\begin{array}{ll}
\mathbb P\left(X \ge {1\over t} \right) + \mathbb P(X \le 0) & \mbox{ if } t> 0 \\
\mathbb P(X \le 0) & \mbox{ if } t= 0 \\
\mathbb P(X\le 0) - \mathbb P\left(X \le {1\over t} \right) & \mbox{ if } t< 0 \\
\end{array}\right.$$
Using this you will easily find how to express the cdf of $Y$ in terms of the cdf of $X$ ; derive this expression to obtain the density. | The reciprocal of $t$-distributed random variable
One can show that if $X$ has density $f(t)$, then $Y = 1/X$ has density $g(t) = {1\over t^2} f\left( {1\over t} \right)$ (for $t\ne0$).
The density of $T$ with $k$ degrees of freedom is
$$\frac{1}{\s |
53,437 | Sampling variables and calculating likelihood in WinBUGS/OpenBUGS | 1) The probabilistic dnorm, dunif etc. functions are just describing the probability distribution which the variable on the left hand side (lhs) is assumed to have. If the variable is a parameter, then it's a prior distribution. If the variable is data, then it's p(data | parameters).
2) The distribution is not what is used for sampling. Don't think that pop.mean ~ dunif(0,5000) means that pop.mean is actually drawn from a Uniform(0,5000) distribution when the code is executed! The sampling is done using various algorithms inside WinBUGS / JAGS. The code describes a statistical model, not an algorithm to be executed.
3) As Tomas points out, the order of statements doesn't matter.
prec <- 1/pop.variance
pop.variance <- pop.sd * pop.sd
pop.sd ~ dunif(0,100)
merely tells the underlying sampler that 1) the prior on pop.sd is a U(0,100) distribution, 2) pop.variance = pop.sd*pop.sd, and 3) prec = 1/pop.variance. You can rearrange those statements however you like without changing their meaning.
Pretty much everything that happens goes on under the hood of WinBUGS / JAGS. The software takes the statistical model, as described by the code, and processes it using whatever algorithms it determines are "best". I'll repeat: the code describes the model, it does not specify an algorithm for processing the model + data. This is the hardest thing to get about WinBUGS / JAGS, or functional programming languages in general (at least, it was for me,) and once you get it, programming becomes much easier. | Sampling variables and calculating likelihood in WinBUGS/OpenBUGS | 1) The probabilistic dnorm, dunif etc. functions are just describing the probability distribution which the variable on the left hand side (lhs) is assumed to have. If the variable is a parameter, th | Sampling variables and calculating likelihood in WinBUGS/OpenBUGS
1) The probabilistic dnorm, dunif etc. functions are just describing the probability distribution which the variable on the left hand side (lhs) is assumed to have. If the variable is a parameter, then it's a prior distribution. If the variable is data, then it's p(data | parameters).
2) The distribution is not what is used for sampling. Don't think that pop.mean ~ dunif(0,5000) means that pop.mean is actually drawn from a Uniform(0,5000) distribution when the code is executed! The sampling is done using various algorithms inside WinBUGS / JAGS. The code describes a statistical model, not an algorithm to be executed.
3) As Tomas points out, the order of statements doesn't matter.
prec <- 1/pop.variance
pop.variance <- pop.sd * pop.sd
pop.sd ~ dunif(0,100)
merely tells the underlying sampler that 1) the prior on pop.sd is a U(0,100) distribution, 2) pop.variance = pop.sd*pop.sd, and 3) prec = 1/pop.variance. You can rearrange those statements however you like without changing their meaning.
Pretty much everything that happens goes on under the hood of WinBUGS / JAGS. The software takes the statistical model, as described by the code, and processes it using whatever algorithms it determines are "best". I'll repeat: the code describes the model, it does not specify an algorithm for processing the model + data. This is the hardest thing to get about WinBUGS / JAGS, or functional programming languages in general (at least, it was for me,) and once you get it, programming becomes much easier. | Sampling variables and calculating likelihood in WinBUGS/OpenBUGS
1) The probabilistic dnorm, dunif etc. functions are just describing the probability distribution which the variable on the left hand side (lhs) is assumed to have. If the variable is a parameter, th |
53,438 | Sampling variables and calculating likelihood in WinBUGS/OpenBUGS | There are two categories of programming languages (which WinBUGS definitely is):
procedural (or imperative) languages - they specify how the thing should be done - they specify the algorithm, the sequence of steps
declarative languages - they specify what should be done - they just declare the principle
The WinBUGS model code is a declarative language, and your "problem" is that you are looking at it as if it were a procedural language. The nice thing about WinBUGS is that you just write the formulas as you would write them in a paper :-). You specify the model, the what, and you don't work out how the thing is computed (that is MCMC machinery you don't need to understand). So don't bother about calculating likelihoods - you just state the facts about the variables! This will make your head a lot lighter :-)
So particular answers to your questions:
the order of commands does not matter (but the indices do - it is different to write m from m[i])
there is no difference in syntax because the principle is in fact the same! The variable m[i] or pop.mean is having the particular distribution of given parameters and it doesn't matter if they are derived from another distribution (like in hierarchical models) or they are fixed constants. This is the beauty of the language - don't be scare of it, enjoy it :-) | Sampling variables and calculating likelihood in WinBUGS/OpenBUGS | There are two categories of programming languages (which WinBUGS definitely is):
procedural (or imperative) languages - they specify how the thing should be done - they specify the algorithm, the seq | Sampling variables and calculating likelihood in WinBUGS/OpenBUGS
There are two categories of programming languages (which WinBUGS definitely is):
procedural (or imperative) languages - they specify how the thing should be done - they specify the algorithm, the sequence of steps
declarative languages - they specify what should be done - they just declare the principle
The WinBUGS model code is a declarative language, and your "problem" is that you are looking at it as if it were a procedural language. The nice thing about WinBUGS is that you just write the formulas as you would write them in a paper :-). You specify the model, the what, and you don't work out how the thing is computed (that is MCMC machinery you don't need to understand). So don't bother about calculating likelihoods - you just state the facts about the variables! This will make your head a lot lighter :-)
So particular answers to your questions:
the order of commands does not matter (but the indices do - it is different to write m from m[i])
there is no difference in syntax because the principle is in fact the same! The variable m[i] or pop.mean is having the particular distribution of given parameters and it doesn't matter if they are derived from another distribution (like in hierarchical models) or they are fixed constants. This is the beauty of the language - don't be scare of it, enjoy it :-) | Sampling variables and calculating likelihood in WinBUGS/OpenBUGS
There are two categories of programming languages (which WinBUGS definitely is):
procedural (or imperative) languages - they specify how the thing should be done - they specify the algorithm, the seq |
53,439 | Predicted by residual plot in R | A plot of residuals versus predicted response is essentially used to spot possible heteroskedasticity (non-constant variance across the range of the predicted values), as well as influential observations (possible outliers). Usually, we expect such plot to exhibit no particular pattern (a funnel-like plot would indicate that variance increase with mean). Plotting residuals against one predictor can be used to check the linearity assumption. Again, we do not expect any systematic structure in this plot, which would otherwise suggest some transformation (of the response variable or the predictor) or the addition of higher-order (e.g., quadratic) terms in the initial model.
More information can be found in any textbook on regression or on-line, e.g. Graphical Residual Analysis or Using Plots to Check Model Assumptions.
As for the case where you have to deal with multiple predictors, you can use partial residual plot, available in R in the car (crPlot) or faraway (prplot) package. However, if you are willing to spend some time reading on-line documentation, I highly recommend installing the rms package and its ecosystem of goodies for regression modeling. | Predicted by residual plot in R | A plot of residuals versus predicted response is essentially used to spot possible heteroskedasticity (non-constant variance across the range of the predicted values), as well as influential observat | Predicted by residual plot in R
A plot of residuals versus predicted response is essentially used to spot possible heteroskedasticity (non-constant variance across the range of the predicted values), as well as influential observations (possible outliers). Usually, we expect such plot to exhibit no particular pattern (a funnel-like plot would indicate that variance increase with mean). Plotting residuals against one predictor can be used to check the linearity assumption. Again, we do not expect any systematic structure in this plot, which would otherwise suggest some transformation (of the response variable or the predictor) or the addition of higher-order (e.g., quadratic) terms in the initial model.
More information can be found in any textbook on regression or on-line, e.g. Graphical Residual Analysis or Using Plots to Check Model Assumptions.
As for the case where you have to deal with multiple predictors, you can use partial residual plot, available in R in the car (crPlot) or faraway (prplot) package. However, if you are willing to spend some time reading on-line documentation, I highly recommend installing the rms package and its ecosystem of goodies for regression modeling. | Predicted by residual plot in R
A plot of residuals versus predicted response is essentially used to spot possible heteroskedasticity (non-constant variance across the range of the predicted values), as well as influential observat |
53,440 | Predicted by residual plot in R | After you fit an lm object, you can plot it.
e.g.:
model <- lm(y~x,data=data.frame(y=rnorm(25),x=rnorm(25)))
plot(model)
?plot.lm
edit: example 2, which you should have posted yourself:
rm(list = ls(all = TRUE)) #CLEAR WORKSPACE
library(foreign)
Data <- read.dta('http://dl.dropbox.com/u/22681355/child.iq.dta')
model <- lm(ppvt~momage+educ_cat, Data)
plot(model) | Predicted by residual plot in R | After you fit an lm object, you can plot it.
e.g.:
model <- lm(y~x,data=data.frame(y=rnorm(25),x=rnorm(25)))
plot(model)
?plot.lm
edit: example 2, which you should have posted yourself:
rm(list = ls( | Predicted by residual plot in R
After you fit an lm object, you can plot it.
e.g.:
model <- lm(y~x,data=data.frame(y=rnorm(25),x=rnorm(25)))
plot(model)
?plot.lm
edit: example 2, which you should have posted yourself:
rm(list = ls(all = TRUE)) #CLEAR WORKSPACE
library(foreign)
Data <- read.dta('http://dl.dropbox.com/u/22681355/child.iq.dta')
model <- lm(ppvt~momage+educ_cat, Data)
plot(model) | Predicted by residual plot in R
After you fit an lm object, you can plot it.
e.g.:
model <- lm(y~x,data=data.frame(y=rnorm(25),x=rnorm(25)))
plot(model)
?plot.lm
edit: example 2, which you should have posted yourself:
rm(list = ls( |
53,441 | Autocorrelation and heteroskedasticity in panel data | A standard way of correcting for this is by using heteroskedasticity and autocorrelation consistent (HAC) standard errors. They are also known after their developers as Newey-West standard errors. They can be applied in Stata using the newey command. The Stata help file for this command is here:
http://www.stata.com/help.cgi?newey
The difficulty in applying these errors is that you need to choose the number of lags that you want the procedure to consider in the autocorrelation structure. The standard autocorrelation tests usually provide good guidance, though.
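(If you happen to work in R rather than Stata, the same idea is available through the sandwich package; a hedged sketch with a placeholder model and an arbitrary lag choice:)
library(sandwich); library(lmtest)
fit <- lm(y ~ x1 + x2, data = mydata)           # placeholder regression and data
coeftest(fit, vcov = NeweyWest(fit, lag = 2))   # HAC (Newey-West) standard errors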
This approach relies on asymptotics, so large data sets work better here.
There are alternatives, including the block bootstrap. Check out this article for a comparison of approaches to dealing with autocorrelation in panel data:
Bertrand, Marianne, Ester Duflo, and Sendhil Mullainathan. 2004. "How Much Should We Trust Differences-in-Differences Estimates?" Quarterly Journal of Economics. 119(1): 249--275. [prepub version] | Autocorrelation and heteroskedasticity in panel data | A standard way of correcting for this is by using heteroskedasticity and autocorrelation consistent (HAC) standard errors. They are also known after their developers as Newey-West standard errors. The | Autocorrelation and heteroskedasticity in panel data
A standard way of correcting for this is by using heteroskedasticity and autocorrelation consistent (HAC) standard errors. They are also known after their developers as Newey-West standard errors. They can be applied in Stata using the newey command. The Stata help file for this command is here:
http://www.stata.com/help.cgi?newey
The difficulty in applying these errors is that you need to choose the number of lags that you want the procedure to consider in the autocorrelation structure. The standard autocorrelation tests usually provide good guidance, though.
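(If you happen to work in R rather than Stata, the same idea is available through the sandwich package; a hedged sketch with a placeholder model and an arbitrary lag choice:)
library(sandwich); library(lmtest)
fit <- lm(y ~ x1 + x2, data = mydata)           # placeholder regression and data
coeftest(fit, vcov = NeweyWest(fit, lag = 2))   # HAC (Newey-West) standard errors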
This approach relies on asymptotics, so large data sets work better here.
There are alternatives, including the block bootstrap. Check out this article for a comparison of approaches to dealing with autocorrelation in panel data:
Bertrand, Marianne, Ester Duflo, and Sendhil Mullainathan. 2004. "How Much Should We Trust Differences-in-Differences Estimates?" Quarterly Journal of Economics. 119(1): 249--275. [prepub version] | Autocorrelation and heteroskedasticity in panel data
A standard way of correcting for this is by using heteroskedasticity and autocorrelation consistent (HAC) standard errors. They are also known after their developers as Newey-West standard errors. The |
53,442 | Autocorrelation and heteroskedasticity in panel data | The answer depends on what do you define as heteroskedasticity. For panel data model:
$$y_{it}=x_{it}\beta+u_{it}$$
the heteroskedasticity can be defined in various ways:
$$Eu_{it}^2=\sigma^2_{it}$$
or
$$Eu_{it}^2=\sigma^2_{i}$$
or
$$Eu_{it}^2=\sigma^2_{t}.$$
I am not familiar with Stata, but quick check on the Internet suggests that option cluster will deal with the latter two cases, you only need to specify correct clustvar. Coincidentally for the last case this will also guard against autocorrelation of the following type:
$$u_{it}=\rho u_{i,t-1}+e_{it}.$$
To see why, rewrite the panel data in vector format:
$$y_i=x_i\beta+u_i,$$
where $y_i'=(y_{i1},...,y_{iT})$, $u_i'=(u_{i1},...,u_{iT})$. Then classical robust standard errors guard against
$$Eu_iu_i'=\Omega_T$$
which is $T\times T$ matrix, which is the same for all $i$. It is not hard to see then that both intra-group heteroskedasticity and AR(1) autocorrelation give covariance matrix which is a special case of general $\Omega_T$.
Rewriting the model in
$$y_t=x_t\beta+u_t$$
you can guard for other cases of heteroskedasticity:
$$Eu_{t}u_t'=\Omega_N$$
but then it is not possible to do anything about AR(1).
If you are interested in getting efficient estimators for both of these cases by using generalised least squares, then you can have readily available feasible estimates from simple OLS regression:
$$\hat\Omega_T=\frac{1}{N}\sum_{i=1}^N\hat{u}_{i}\hat{u}_{i}'$$
$$\hat\Omega_N=\frac{1}{T}\sum_{t=1}^T\hat{u}_{t}\hat{u}_{t}'$$
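(In R, for example, the first of these can be formed from pooled-OLS residuals along the lines of the sketch below, where 'ols', 'N' and 'TT' are placeholders and the residuals are assumed to be stacked unit by unit:)
E <- matrix(resid(ols), nrow = TT, ncol = N)   # column i holds the residual vector u_i
Omega_T.hat <- tcrossprod(E) / N               # (1/N) * sum_i u_i u_i'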
I do not know about Stata, but if I remember correctly Eviews has an option to use these matrices for estimation.
If you have more complicated covariance structure, I think you will need to develop your own solution. | Autocorrelation and heteroskedasticity in panel data | The answer depends on what do you define as heteroskedasticity. For panel data model:
$$y_{it}=x_{it}\beta+u_{it}$$
the heteroskedasticity can be defined in various ways:
$$Eu_{it}^2=\sigma^2_{it}$$
o | Autocorrelation and heteroskedasticity in panel data
The answer depends on what do you define as heteroskedasticity. For panel data model:
$$y_{it}=x_{it}\beta+u_{it}$$
the heteroskedasticity can be defined in various ways:
$$Eu_{it}^2=\sigma^2_{it}$$
or
$$Eu_{it}^2=\sigma^2_{i}$$
or
$$Eu_{it}^2=\sigma^2_{t}.$$
I am not familiar with Stata, but quick check on the Internet suggests that option cluster will deal with the latter two cases, you only need to specify correct clustvar. Coincidentally for the last case this will also guard against autocorrelation of the following type:
$$u_{it}=\rho u_{i,t-1}+e_{it}.$$
To see why, rewrite the panel data in vector format:
$$y_i=x_i\beta+u_i,$$
where $y_i'=(y_{i1},...,y_{iT})$, $u_i'=(u_{i1},...,u_{iT})$. Then classical robust standard errors guard against
$$Eu_iu_i'=\Omega_T$$
which is $T\times T$ matrix, which is the same for all $i$. It is not hard to see then that both intra-group heteroskedasticity and AR(1) autocorrelation give covariance matrix which is a special case of general $\Omega_T$.
Rewriting the model in
$$y_t=x_t\beta+u_t$$
you can guard for other cases of heteroskedasticity:
$$Eu_{t}u_t'=\Omega_N$$
but then it is not possible to do anything about AR(1).
If you are interested in getting efficient estimators for both of these cases by using generalised least squares, then you can have readily available feasible estimates from simple OLS regression:
$$\hat\Omega_T=\frac{1}{N}\sum_{i=1}^N\hat{u}_{i}\hat{u}_{i}'$$
$$\hat\Omega_N=\frac{1}{T}\sum_{t=1}^T\hat{u}_{t}\hat{u}_{t}'$$
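(In R, for example, the first of these can be formed from pooled-OLS residuals along the lines of the sketch below, where 'ols', 'N' and 'TT' are placeholders and the residuals are assumed to be stacked unit by unit:)
E <- matrix(resid(ols), nrow = TT, ncol = N)   # column i holds the residual vector u_i
Omega_T.hat <- tcrossprod(E) / N               # (1/N) * sum_i u_i u_i'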
I do not know about Stata, but if I remember correctly Eviews has an option to use these matrices for estimation.
If you have more complicated covariance structure, I think you will need to develop your own solution. | Autocorrelation and heteroskedasticity in panel data
The answer depends on what do you define as heteroskedasticity. For panel data model:
$$y_{it}=x_{it}\beta+u_{it}$$
the heteroskedasticity can be defined in various ways:
$$Eu_{it}^2=\sigma^2_{it}$$
o |
53,443 | Autocorrelation and heteroskedasticity in panel data | In STATA you can deal with both problems by using the options (for fixed effects)
, fe vce(cluster code)
hope this helps | Autocorrelation and heteroskedasticity in panel data | In STATA you can deal with both problems by using the options (for fixed effects)
, fe vce(cluster code)
hope this helps | Autocorrelation and heteroskedasticity in panel data
In STATA you can deal with both problems by using the options (for fixed effects)
, fe vce(cluster code)
hope this helps | Autocorrelation and heteroskedasticity in panel data
In STATA you can deal with both problems by using the options (for fixed effects)
, fe vce(cluster code)
hope this helps |
53,444 | How to check if the volatility is stationary? | A simple method, useful both for exploration and hypothesis testing, takes the Durbin Watson statistic and the semivariogram as its point of departure: denoting the sequence of residuals by $e_t$ (with $t$ the "index" of the plot), compute
$$\gamma(t) = \frac{1}{2}(e_{t+1}-e_t)^2.$$
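In code the first steps might look like the sketch below, where 'e' is assumed to hold the residuals in time order and runmed/filter stand in for the "medians of 3 followed by a lag-13 moving average" smooth described further down:
gamma.t <- 0.5 * diff(e)^2                      # the semivariogram-like sequence
disp <- sqrt(runmed(gamma.t, 3))                # square-root scale ("dispersion"), medians of 3
disp.sm <- stats::filter(disp, rep(1/13, 13))   # lag-13 moving average
plot(disp, type = "l", col = "grey")
lines(disp.sm, col = "blue", lwd = 2)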
The data (as presented in the question) result in this plot:
(The residual values have been uniformly rescaled compared to what is shown on the vertical axis in the original plot. This does not affect the subsequent analysis.) Typically there will be scatter, as shown here. To visualize this better, smooth it:
This plot is on a square root scale (let's call this the "dispersion") to dampen the extreme oscillations. The wiggly blue lines are a modest smooth (medians of 3 followed by a lag-13 moving average). The solid red line is an aggressive smooth of that. (Here, it's a Gaussian convolution; in general, though, I would recommend a Lowess smooth of $\sqrt{\gamma(t)}$.) Together, these lines trace out detailed fluctuations in dispersion and intermediate-range trends, respectively. The yellow and green thresholds, shown for reference, delimit (a) the lowest smoothed dispersion in the first 225 indexes and (b) the mean smoothed dispersion from index 226 onwards. I chose the changepoint of 225 by inspection.
Clearly almost all the smoothed dispersion values above 6 occurred during the first 225 indexes. From index=225 to index=350 or so, the smoothed dispersions decrease and then stay constant around 2.7.
The change in dispersion is now visually obvious. For most purposes--where the change is so clear and strong--that's all one needs. But this approach lends itself to formal testing, too. To identify a point where the dispersion changed, and to estimate the uncertainty of that point, apply a sequential online changepoint procedure to the sequence $(\gamma(t))$. An even more straightforward check, when you suspect there has been just one major change in volatility, is to use regression to check the dispersion for a trend. If one exists, you can reject the null hypothesis of homogeneous volatility. | How to check if the volatility is stationary? | A simple method, useful both for exploration and hypothesis testing, takes the Durbin Watson statistic and the semivariogram as its point of departure: denoting the sequence of residuals by $e_t$ (wit | How to check if the volatility is stationary?
A simple method, useful both for exploration and hypothesis testing, takes the Durbin Watson statistic and the semivariogram as its point of departure: denoting the sequence of residuals by $e_t$ (with $t$ the "index" of the plot), compute
$$\gamma(t) = \frac{1}{2}(e_{t+1}-e_t)^2.$$
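In code the first steps might look like the sketch below, where 'e' is assumed to hold the residuals in time order and runmed/filter stand in for the "medians of 3 followed by a lag-13 moving average" smooth described further down:
gamma.t <- 0.5 * diff(e)^2                      # the semivariogram-like sequence
disp <- sqrt(runmed(gamma.t, 3))                # square-root scale ("dispersion"), medians of 3
disp.sm <- stats::filter(disp, rep(1/13, 13))   # lag-13 moving average
plot(disp, type = "l", col = "grey")
lines(disp.sm, col = "blue", lwd = 2)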
The data (as presented in the question) result in this plot:
(The residual values have been uniformly rescaled compared to what is shown on the vertical axis in the original plot. This does not affect the subsequent analysis.) Typically there will be scatter, as shown here. To visualize this better, smooth it:
This plot is on a square root scale (let's call this the "dispersion") to dampen the extreme oscillations. The wiggly blue lines are a modest smooth (medians of 3 followed by a lag-13 moving average). The solid red line is an aggressive smooth of that. (Here, it's a Gaussian convolution; in general, though, I would recommend a Lowess smooth of $\sqrt{\gamma(t)}$.) Together, these lines trace out detailed fluctuations in dispersion and intermediate-range trends, respectively. The yellow and green thresholds, shown for reference, delimit (a) the lowest smoothed dispersion in the first 225 indexes and (b) the mean smoothed dispersion from index 226 onwards. I chose the changepoint of 225 by inspection.
Clearly almost all the smoothed dispersion values above 6 occurred during the first 225 indexes. From index=225 to index=350 or so, the smoothed dispersions decrease and then stay constant around 2.7.
The change in dispersion is now visually obvious. For most purposes--where the change is so clear and strong--that's all one needs. But this approach lends itself to formal testing, too. To identify a point where the dispersion changed, and to estimate the uncertainty of that point, apply a sequential online changepoint procedure to the sequence $(\gamma(t))$. An even more straightforward check, when you suspect there has been just one major change in volatility, is to use regression to check the dispersion for a trend. If one exists, you can reject the null hypothesis of homogeneous volatility. | How to check if the volatility is stationary?
A simple method, useful both for exploration and hypothesis testing, takes the Durbin Watson statistic and the semivariogram as its point of departure: denoting the sequence of residuals by $e_t$ (wit |
53,445 | How to check if the volatility is stationary? | volatility (visually non-constant variability) can arise from a number of sources. To name a few
A non-constant mean of the errors caused by unspecified deterministic variables yielding Pulses, Level Shifts, Seasonal Pulses and /or Local Time Trends ( i.e. an underspecified model )
Actual parameter dynamics over time reflecting statistically significantly different model parameters ( for the same model ) for different time intervals. This can be detected by incorporating a variant of the Chow Test which actually searches for and finds the points in time that parameters have been proven to be statistically significantly different thus suggesting an underspecified model.
An autoprojective (autoregressive/moving average, i.e. ARIMA) structure present in the residuals as a result of an omitted stochastic cause series which has been left untreated, thus suggesting an under-specified model.
The need for a Weighted Least Squares optimization also known as Generalized least Squares where the "excess variability" i.e. the variability above and beyond a Gaussian process has been untreated reflecting the need for "weights" to be optimally applied to the observations. This can be remedied by identifying time regions where the error variance has had Structural Change of or Deterministic Change thus suggesting an underspecified model.
The presence of a "variance" that is stochastically/dynamically changing over time according to some Garch Model thus suggesting an under-specified model.
If you actually post your Residuals , I will use commercially available software which I have involved in the development of to actually determine which of these remedies are suggested/needed by your data. If you don't want to actually share your data with the list you can send it my privately at [deleted].
In terms of doing it programmatically using "R", it might take you a while. Actually showing you how this can be done might motivate you to try to emulate/copy/duplicate it and write your own.
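For instance, here is a minimal sketch of two generic checks in R (it assumes the residuals sit in a vector e, in time order, and uses the lmtest and stats packages; it is not the commercial software referred to above):
library(lmtest)
gqtest(e ~ 1)                                # Goldfeld-Quandt: compares the error variance in the first and second halves of the series
Box.test(e^2, lag = 12, type = "Ljung-Box")  # McLeod-Li style check for ARCH/GARCH effects in the squared residuals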
Edit: Upon rereading the possible causes of "volatility" or non-randomness in the residuals I should also have mentioned:
If you have incorrectly used predictor variables by omitting needed lags of these variables, then this omission can yield errors that have "volatility". It suggests that one should capture any and all lags of the predictor variables (if any!) in your currently under-specified equation. | How to check if the volatility is stationary? | volatility (visually non-constant variability) can arise from a number of sources. To name a few
A non-constant mean of the errors caused by unspecified deterministic variables yielding Pulses, Level | How to check if the volatility is stationary?
volatility (visually non-constant variability) can arise from a number of sources. To name a few
A non-constant mean of the errors caused by unspecified deterministic variables yielding Pulses, Level Shifts, Seasonal Pulses and /or Local Time Trends ( i.e. an underspecified model )
Actual parameter dynamics over time reflecting statistically significantly different model parameters ( for the same model ) for different time intervals. This can be detected by incorporating a variant of the Chow Test which actually searches for and finds the points in time that parameters have been proven to be statistically significantly different thus suggesting an underspecified model.
An autprojective (autoregressive/moving average structure (ARIMA)) present in the residuals as a result of an omitted stochastic cause series which has been untreated thus suggesting an under-specified model.
The need for a Weighted Least Squares optimization also known as Generalized least Squares where the "excess variability" i.e. the variability above and beyond a Gaussian process has been untreated reflecting the need for "weights" to be optimally applied to the observations. This can be remedied by identifying time regions where the error variance has had Structural Change of or Deterministic Change thus suggesting an underspecified model.
The presence of a "variance" that is stochastically/dynamically changing over time according to some Garch Model thus suggesting an under-specified model.
If you actually post your Residuals , I will use commercially available software which I have involved in the development of to actually determine which of these remedies are suggested/needed by your data. If you don't want to actually share your data with the list you can send it my privately at [deleted].
In terms of doing if programatically using "R" , it might take you a while. By actually showing you how this can be done might motivate you to try and emulate/copy/duplicate and write your own.
Edit: Upon rereading the possible causes of "volatility" or non-randomness in the residuals I should also have mentioned:
If you have incorrectly used predictor variables by omitting needed lag in these variables then this omission can yield errors that have "volatility". It suggests that one capture any and all lags of the predictor variables ( if any ! ) into your currently under=specified equation. | How to check if the volatility is stationary?
volatility (visually non-constant variability) can arise from a number of sources. To name a few
A non-constant mean of the errors caused by unspecified deterministic variables yielding Pulses, Level |
53,446 | How to check if the volatility is stationary? | One approach to test many of the assumptions at once is using visual tests:
Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D. F. and Wickham, H. (2009). Statistical inference for exploratory data analysis and model diagnostics. Phil. Trans. R. Soc. A, 367, 4361-4383. doi: 10.1098/rsta.2009.0120
This plots your residual plot among several others for which the assumptions are true. You then try to pick out the real plot: if the assumptions hold and the only thing going on in your plot is random variation, then you will have a difficult time picking your plot out from the random ones (you should not familiarize yourself with the plot beforehand; or, if you have, have another person not familiar with your analysis try to find the plot that does not belong). If the assumptions are violated (strongly enough), then the real plot will be easy to distinguish.
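Here is a hand-rolled sketch of that "lineup" idea in R (an illustration only, not the TeachingDemos interface; e is assumed to hold your residuals):
set.seed(1)
op <- par(mfrow = c(3, 3), mar = c(2, 2, 1, 1))
real <- sample(9, 1)                       # panel that will hold the real residual plot
for (i in 1:9) {
  r <- if (i == real) e else sample(e)     # the other eight panels show permuted residuals
  plot(r, pch = 16, cex = 0.6, xlab = "", ylab = "")
}
par(op)
real                                       # reveal which panel held the real data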
One implementation of this idea is the vis.test function in the TeachingDemos package for R. | How to check if the volatility is stationary? | One approach to test many of the assumptions at once is using visual tests:
Buja, A., Cook, D. Hofmann, H., Lawrence, M. Lee, E.-K., Swayne,
D.F and Wickham, H. (2009) Statistical Inference for exp | How to check if the volatility is stationary?
One approach to test many of the assumptions at once is using visual tests:
Buja, A., Cook, D. Hofmann, H., Lawrence, M. Lee, E.-K., Swayne,
D.F and Wickham, H. (2009) Statistical Inference for exploratory
data analysis and model diagnostics Phil. Trans. R. Soc. A
367, 4361-4383 doi: 10.1098/rsta.2009.0120
This plots your residual plot among several others for which the assumptions are true. You then try to pick out the real plot, if the assumptions hold and the only thing going on in your plot is random variation then you will have a difficult time picking your plot from the random ones (you should not familiarize yourself with the plot before hand, or if you have, then have another person not familiar with your analysis try to find the plot that does not belong). If the assumptions are violated (strongly enough) then the real plot will be easy to distinguish.
One implementation of this idea is the vis.test function in the TeachingDemos package for R. | How to check if the volatility is stationary?
One approach to test many of the assumptions at once is using visual tests:
Buja, A., Cook, D. Hofmann, H., Lawrence, M. Lee, E.-K., Swayne,
D.F and Wickham, H. (2009) Statistical Inference for exp |
53,447 | How to calculate number of participants required to compare mean scores on a questionnaire between two groups? | Defining the problem
I am guessing that you are measuring sportiness on some form of multi-item scale.
Thus, you will have a numeric measure of sportiness.
I am assuming that you will be running an independent groups t-test to test the difference between group means.
Your problem is then a form of a priori power analysis. To determine the sample size you need to specify:
Your desired statistical power (often 80% is considered adequate)
Your expected effect size (typically specified as standardised group mean difference; of course this is not known ahead of time, so you have to think about what is the minimal effect that you would consider interesting)
Your alpha level for your significance test (typically .05)
Software options
A nice free GUI for calculating statistical power is G Power 3 available on Mac and Windows.
I have an explanation of basic use with some sample power analysis graphs relevant to your example (standardised group mean differences).
R also has options as summarised on the quick-r page on power analysis. Check out pwr.t.test in the pwr package for one option.
The basic rule is that more participants are always better, and that any power analysis rests on assumptions about the population effect size, which is unknown (if it were known, you wouldn't need to do a study).
R Example:
> install.packages("pwr")
> library(pwr)
> pwr.t.test(power= .8, d=.5, sig.level=.05, type="two.sample")
Two-sample t test power calculation
n = 63.76561
d = 0.5
sig.level = 0.05
power = 0.8
alternative = two.sided
NOTE: n is number in *each* group
Thus, assuming what is often labeled as a medium difference in group means and conventional values for alpha and power, you would need 64 participants per group.
Graph from G Power
The following graph was generated from G Power and is taken from my post on power analysis.
It shows, for different levels of d, what power you will obtain for a given total sample size.
Final complication if non-experimental design
The above calculations are all done on the assumption that you have equal numbers of facebook and non-facebook users in your sample.
This feature is common to experiments and to studies where participants are sampled in some systematic way.
However, if you are just taking a general sample from the community, you will end up with uneven numbers of facebook and non-facebook users.
All else being equal statistical power decreases as group sizes become less equal. G Power 3 allows you to specify the ratio of group sizes when calculating required sample size. | How to calculate number of participants required to compare mean scores on a questionnaire between t | Defining the problem
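To make that last point concrete, here is a hedged illustration with pwr.t2n.test from the same pwr package (same total sample of 128 and d = 0.5 as above; the quoted power values are approximate):
library(pwr)
pwr.t2n.test(n1 = 64, n2 = 64, d = 0.5, sig.level = 0.05)$power   # about 0.80
pwr.t2n.test(n1 = 96, n2 = 32, d = 0.5, sig.level = 0.05)$power   # drops to roughly 0.68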
I am guessing that you are measuring sportiness on some form of multi-item scale.
Thus, you will have a numeric measure of sportiness.
I am assuming that you will be running an i | How to calculate number of participants required to compare mean scores on a questionnaire between two groups?
Defining the problem
I am guessing that you are measuring sportiness on some form of multi-item scale.
Thus, you will have a numeric measure of sportiness.
I am assuming that you will be running an independent groups t-test to test the difference between group means.
Your problem is then a form of a priori power analysis. To determine the sample size you need to specify:
Your desired statistical power (often 80% is considered adequate)
Your expected effect size (typically specified as standardised group mean difference; of course this is not known ahead of time, so you have to think about what is the minimal effect that you would consider interesting)
Your alpha level for your significance test (typically .05)
Software options
A nice free GUI for calculating statistical power is G Power 3 available on Mac and Windows.
I have an explanation of basic use with some sample power analysis graphs relevant to your example (standardised group mean differences).
R also has options as summarised on the quick-r page on power analysis. Check out pwr.t.test in the pwr package for one option.
The basic rule is that more participants is always better, and that any power analysis rests on assumptions about population effect which are unknown (if they were known, you wouldn't need to do a study).
R Example:
> install.packages("pwr")
> library(pwr)
> pwr.t.test(power= .8, d=.5, sig.level=.05, type="two.sample")
Two-sample t test power calculation
n = 63.76561
d = 0.5
sig.level = 0.05
power = 0.8
alternative = two.sided
NOTE: n is number in *each* group
Thus, assuming what is often labeled as a medium difference in group means and conventional values for alpha and power, you would need 64 participants per group.
Graph from G Power
The following graph was generated from G Power and is taken from my post on power analysis.
It shows for different levels of d, what power you will obtain for a given total sample size.
Final complication if non-experimental design
The above calculations are all done on the assumption that you have equal numbers of facebook and non-facebook users in your sample.
This feature is common to experiments and to studies where participants are sampled in some systematic way.
However, if you are just taking a general sample from the community, you will end up with uneven numbers of facebook and non-facebook users.
All else being equal statistical power decreases as group sizes become less equal. G Power 3 allows you to specify the ratio of group sizes when calculating required sample size. | How to calculate number of participants required to compare mean scores on a questionnaire between t
Defining the problem
I am guessing that you are measuring sportiness on some form of multi-item scale.
Thus, you will have a numeric measure of sportiness.
I am assuming that you will be running an i |
53,448 | How to calculate number of participants required to compare mean scores on a questionnaire between two groups? | Here is a link to a website that does that for you, but it is in German... :-\ But maybe you can use it...
http://www.bauinfoconsult.de/Stichproben_Rechner.html
This site calculates the number of needed respondents according to the standard error and the expected power. | How to calculate number of participants required to compare mean scores on a questionnaire between t | Here is a link to a website that do that for you, but it is in german... :-\ But mybe you can use it...
http://www.bauinfoconsult.de/Stichproben_Rechner.html
This site calculates the number of needed | How to calculate number of participants required to compare mean scores on a questionnaire between two groups?
Here is a link to a website that do that for you, but it is in german... :-\ But mybe you can use it...
http://www.bauinfoconsult.de/Stichproben_Rechner.html
This site calculates the number of needed respondents according to the standard error and the expected power. | How to calculate number of participants required to compare mean scores on a questionnaire between t
Here is a link to a website that do that for you, but it is in german... :-\ But mybe you can use it...
http://www.bauinfoconsult.de/Stichproben_Rechner.html
This site calculates the number of needed |
53,449 | How to calculate number of participants required to compare mean scores on a questionnaire between two groups? | It is a pretty old question here, but since I know of, and have made use of, a straightforward way of calculating this, I'd like to share it; the details depend on whether you wish to test a two-sided or a one-sided hypothesis. Let's help the ones who also wish to know the answer henceforth.
Anyway, for now, to keep things simpler, let $\delta$ be the mean difference between the groups (fb users vs non-fb users) and $\sigma$ the corresponding standard deviation.
Given the brevity of the survey, simply choose these values for now:
Use the most common confidence level of 95%, i.e. an $\alpha$ of 0.05. Likewise, choose the power, i.e. $1-\beta$, to be 80% (0.8), corresponding to a $\beta$ of 20% (0.2).
Finally, the sample size is given by the formula (similar to Lehr's) $n= \Big[{(z_{1-\alpha} + z_{1-\beta})\sigma \over \delta}\Big]^2$
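For a quick worked illustration (hypothetical numbers): if the smallest difference worth detecting is half a standard deviation, $\delta = 0.5\sigma$, then with $z_{1-\alpha} = z_{0.95} \approx 1.645$ and $z_{1-\beta} = z_{0.80} \approx 0.84$ the formula gives $n = \big[(1.645 + 0.84)/0.5\big]^2 \approx 25$.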
Take a look at this example I penned down earlier. Also note that instead of $z_{1-\alpha}$ and $z_{1-\beta}$ I have mislabeled them as $z_\alpha$ and $z_\beta$ respectively, but the idea remains the same there. | How to calculate number of participants required to compare mean scores on a questionnaire between t | It is a pretty old question here but since I know of, and made use of a straightforward way of calculating this I ‘d like to share it depending on whether you wish to have a double sided or a single s | How to calculate number of participants required to compare mean scores on a questionnaire between two groups?
It is a pretty old question here but since I know of, and made use of a straightforward way of calculating this I ‘d like to share it depending on whether you wish to have a double sided or a single sided hypothesis. Lets help the ones who also wish to know the answer hence forth.
Anyway,for now to keep things simpler let $\delta$ be the mean difference i.e. between the groups (fb users vs non-fb users) for some standard deviation $\sigma$.
According to the brevity of the survey simply choose these for now:
For a most common confidence level of 95% i.e. an $\alpha$ of 0.05. Like we did for the $\alpha$ Choose power i.e $1-\beta$ to be 80% (0.8) obviously corresponding to a $\beta$ of 20% (0.2).
Finally, The sample size is given by the formula (similar to Lehr’s) $n= \Big[{(z_{1-\alpha} + z_{1-\beta})\sigma \over \delta}\Big]^2$
Take a look at this example I have penned down earlier. Also note instead of $z_{1-\alpha} and z_{1-\beta}$ I have misrepresented it as $z_\alpha$ and $z_\beta$ respectively but the idea remains the same here. | How to calculate number of participants required to compare mean scores on a questionnaire between t
It is a pretty old question here but since I know of, and made use of a straightforward way of calculating this I ‘d like to share it depending on whether you wish to have a double sided or a single s |
53,450 | How to calculate number of participants required to compare mean scores on a questionnaire between two groups? | Here's an English version
Questionnaire or survey sample size How do you work out what sample
size to use for your survey? It is actually a complex calculation.
And consequently, in my experience, people use rules of thumb - like
10%. Such rules of thumb cannot hope to give an adequate estimate of
the needed sample size and consequently people either under-sample or
over-sample. Often these samples are vastly too small - causing
inappropriate decisions - or much too large - a wasted effort and
expense.
What sample size should you take? The answer is a balance between your
intolerance for 'false positives' and 'false negatives'.
You can also try the Sample Size Calculator at gpra.net - here's the Intro and Install PDFs | How to calculate number of participants required to compare mean scores on a questionnaire between t | Here's an English version
Questionnaire or survey sample size How do you work out what sample
size to use for your survey? It is actually a complex calculation.
And consequently, in my experienc | How to calculate number of participants required to compare mean scores on a questionnaire between two groups?
Here's an English version
Questionnaire or survey sample size How do you work out what sample
size to use for your survey? It is actually a complex calculation.
And consequently, in my experience, people use rules of thumb - like
10%. Such rules of thumb cannot hope to give an adequate estimate of
the needed sample size and consequently people either under-sample or
over-sample. Often these samples are vastly too small - causing
inappropriate decisions - or much too large - a wasted effort and
expense.
What sample size should you take? The answer is a balance between your
intolerance for 'false positives' and 'false negatives'.
You can also try the Sample Size Calculator at gpra.net - here's the Intro and Install PDFs | How to calculate number of participants required to compare mean scores on a questionnaire between t
Here's an English version
Questionnaire or survey sample size How do you work out what sample
size to use for your survey? It is actually a complex calculation.
And consequently, in my experienc |
53,451 | How to calculate number of participants required to compare mean scores on a questionnaire between two groups? | Lehr's rule, as quoted by Van Belle is
$$n = \frac{16}{\Delta^2},$$
where $\Delta$ is the posited effect size, which in your case would be $(\mu_{\mbox{fb}} - \mu_{\mbox{non fb}}) / \sigma$, where $\mu$ is the mean 'sportiness', and $\sigma$ is the (pooled) standard deviation of sportiness. Here $n$ is the size of each group, so you would collect $n$ participants from Facebook and another $n$ not from Facebook.
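For example, to be able to detect a difference of half a (pooled) standard deviation, $\Delta = 0.5$, the rule gives $n = 16/0.25 = 64$ in each group, which agrees with the pwr.t.test calculation quoted in an earlier answer.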
This rule gives you approximately 80% power for a 2-sample 2-sided t-test at the 0.05 type I rate. | How to calculate number of participants required to compare mean scores on a questionnaire between t | Lehr's rule, as quoted by Van Belle is
$$n = \frac{16}{\Delta^2},$$
where $\Delta$ is the posited effect size, which in your case would be $(\mu_{\mbox{fb}} - \mu_{\mbox{non fb}}) / \sigma$, where $\m | How to calculate number of participants required to compare mean scores on a questionnaire between two groups?
Lehr's rule, as quoted by Van Belle is
$$n = \frac{16}{\Delta^2},$$
where $\Delta$ is the posited effect size, which in your case would be $(\mu_{\mbox{fb}} - \mu_{\mbox{non fb}}) / \sigma$, where $\mu$ is the mean 'sportiness', and $\sigma$ is the (pooled) standard deviation of sportiness. You want to collect $n/2$ participants from Facebook and the remaining half not from facebook.
This rule gives you approximately 80% power for a 2-sample 2-sided t-test at the 0.05 type I rate. | How to calculate number of participants required to compare mean scores on a questionnaire between t
Lehr's rule, as quoted by Van Belle is
$$n = \frac{16}{\Delta^2},$$
where $\Delta$ is the posited effect size, which in your case would be $(\mu_{\mbox{fb}} - \mu_{\mbox{non fb}}) / \sigma$, where $\m |
53,452 | At what value of mean and variance should I throw data away? | Let me clear up some misconceptions first. The estimate of your population variance is not high because your sample is small. In fact, just the opposite is often the case, the variance tends to be low because small samples over represent the peak of the distribution. The variances of larger samples are more representative and more accurate. And, as a corollary, small samples are less accurate and have a higher sampling error, usually measured as standard error.
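As a quick illustrative simulation of that point (hypothetical standard normal data with true SD = 1):
set.seed(42)
median(replicate(10000, sd(rnorm(5))))     # typically about 0.92: small samples tend to understate the spread
median(replicate(10000, sd(rnorm(100))))   # very close to 1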
Data are generally not discarded merely because they have something that might be judged as high variance. The variance is considered a property of the data that you need to discover and you attempt to get enough data to get a reasonable determination of that. Looking at your data these don't seem like very high variances at all. You can actually get very useful information out of data like that but you will need more of it.
If you throw away these data and just get another small sample that happens to have lower variance, that doesn't tell you that the underlying distribution has lower variance; it's just sampling variability. Therefore, don't do that. Just keep collecting more data and noting properties of it like time of day. If it's relatively consistently noisy over a period of time you might just be able to average it all together and get great distributions of the two different kinds of signal.
Clearly there is overlap in your distributions and you're going to have to take some time to get this working correctly. You'll need to collect lots of samples of each of your different signals to see if your manipulations in the algorithm have effects. There's enough noise that it would be easy to fool yourself into thinking you had succeeded in solving the problem if you just throw away samples you don't like. There's also enough noise that you might not have much of a problem and continue to think you're failing if you throw away samples you don't like.
In short, keep all of the data and get more of it. Work out the distributions. Make adjustments to your algorithm. Collect more data. Repeat until you've solved the problem.
When you do get more data and you've tried a couple of algorithms come back and ask for help on exactly how to model your data so that you can make decisions about that algorithms to keep and what to reject. At that time you might post more summary type statistics with your question like means, variances, and perhaps a histogram.
You might also want to ask for help from a cognitive psychologist who specializes in applied issues. There will be some tradeoff between mean QoS and the variance, such that you may be better off minimizing the variance even if it drops the mean. But I'm betting that should have been done by people other than you. | At what value of mean and variance should I throw data away? | Let me clear up some misconceptions first. The estimate of your population variance is not high because your sample is small. In fact, just the opposite is often the case, the variance tends to be l | At what value of mean and variance should I throw data away?
Let me clear up some misconceptions first. The estimate of your population variance is not high because your sample is small. In fact, just the opposite is often the case, the variance tends to be low because small samples over represent the peak of the distribution. The variances of larger samples are more representative and more accurate. And, as a corollary, small samples are less accurate and have a higher sampling error, usually measured as standard error.
Data are generally not discarded merely because they have something that might be judged as high variance. The variance is considered a property of the data that you need to discover and you attempt to get enough data to get a reasonable determination of that. Looking at your data these don't seem like very high variances at all. You can actually get very useful information out of data like that but you will need more of it.
If you throw away these data and just get another small sample and it has lower variance that doesn't tell you that the underlying distribution has lower variance, it's just sampling variability. Therefore, don't do that. Just keep collecting more data and noting properties of it like time of day. If it's relatively consistently noisy over a period of time you might just be able to average it all together and get great distributions of the two different kinds of signal.
Clearly there is overlap in your distributions and you're going to have to take some time to get this working correctly. You'll need to collect lots of samples of each of your different signals to see if your manipulations in the algorithm have effects. There's enough noise that it would be easy to fool yourself into thinking you had succeeded in solving the problem if you just throw away samples you don't like. There's also enough noise that you might not have much of a problem and continue to think you're failing if you throw away samples you don't like.
In short, keep all of the data and get more of it. Work out the distributions. Make adjustments to your algorithm. Collect more data. Repeat until you've solved the problem.
When you do get more data and you've tried a couple of algorithms come back and ask for help on exactly how to model your data so that you can make decisions about that algorithms to keep and what to reject. At that time you might post more summary type statistics with your question like means, variances, and perhaps a histogram.
You might want to also ask for help from a cognitive psychologist who specializes in applied issues. There will be some tradeoff between mean QoS and the variance that you may be better off minimizing the variance even if it drops the mean. But I'm betting that should have been done by people other than you. | At what value of mean and variance should I throw data away?
Let me clear up some misconceptions first. The estimate of your population variance is not high because your sample is small. In fact, just the opposite is often the case, the variance tends to be l |
53,453 | Repeated measures ANOVA in R and Bonferroni adjusted intervals | I agree with suncoolsu that it is difficult to tell exactly what you were looking for. And in general it's not recommended to do standard repeated measures ANOVAs anymore since there are generally better alternatives.
Nevertheless, perhaps you want to generate a simple stratified ANOVA. By stratified I mean that your effects are measured within another grouping variable, in your case the subject and thus a within subjects design. If your data frame is df and your response variable is y then you might have a within subjects predictor x1 and that crossed with a within subjects predictor x2, and perhaps a between subjects predictor z. To get the full model with all interactions you would use.
myModel <- aov( y ~ x1 * x2 * z + Error(id/(x1*x2)), data = df)
summary(myModel)
You'll note that within the Error term we are grouping x1, x2, and their interaction under id. Note that z is not in there because it is not a within subjects variable.
Keep in mind further that this data is laid out in long format and you probably need to aggregate it first to run this correctly since a repeated measures design often suggests more samples / subject than conditions in order to get good estimates of each subject's response value. Therefore, df above might be replaced with the following dfa.
dfa <- aggregate(y ~ x1 + x2 + z + id, data = df, FUN = mean)
(BTW, suncoolsu gave a much more modern answer based on multi-level modelling. It's suggested you learn about that if you continue to do repeated measures designs because it is much more powerful, flexible, and allows one to ignore certain kinds of within subjects assumptions (notably sphericity). What I've described is how to do repeated measure ANOVA. You also might want to look at the car, or higher level ez packages in order to do it as well.)
As for your Bonferroni query... it should probably have been a separate question. Nevertheless, that's a bit of a hard one to answer with repeated measures. You could try ?pairwise.t.test. If you give the interaction of all your within variables as the group factor and set paired to TRUE and the correction to bonf, you're set. However, straight corrections like that are probably far too conservative. You state at the outset that you're only going to use it if there is a significant effect, and you probably also have a theoretical reason for making some comparisons; therefore it's not strictly the fishing expedition that Bonferroni (over)corrects for. So, something like...
with( df, pairwise.t.test(y, x1:x2, paired = TRUE, p.adj = 'bonf') )
will do what you want but that's probably not really what you want. | Repeated measures ANOVA in R and Bonferroni adjusted intervals | I agree with suncoolsu that it is difficult to tell exactly what you were looking for. And in general it's not recommended to do standard repeated measures ANOVAs anymore since there are generally be | Repeated measures ANOVA in R and Bonferroni adjusted intervals
I agree with suncoolsu that it is difficult to tell exactly what you were looking for. And in general it's not recommended to do standard repeated measures ANOVAs anymore since there are generally better alternatives.
Nevertheless, perhaps you want to generate a simple stratified ANOVA. By stratified I mean that your effects are measured within another grouping variable, in your case the subject and thus a within subjects design. If your data frame is df and your response variable is y then you might have a within subjects predictor x1 and that crossed with a within subjects predictor x2, and perhaps a between subjects predictor z. To get the full model with all interactions you would use.
myModel <- aov( y ~ x1 * x2 * z + Error(id/(x1*x2)), data = df)
summary(myModel)
You'll note that within the Error term we are grouping x1, x2, and their interaction under id. Note that z is not in there because it is not a within subjects variable.
Keep in mind further that this data is laid out in long format and you probably need to aggregate it first to run this correctly since a repeated measures design often suggests more samples / subject than conditions in order to get good estimates of each subject's response value. Therefore, df above might be replaced with the following dfa.
dfa <- aggregate ( y ~ x1 + x2 + z + id, data = df, mean)
(BTW, suncoolsu gave a much more modern answer based on multi-level modelling. It's suggested you learn about that if you continue to do repeated measures designs because it is much more powerful, flexible, and allows one to ignore certain kinds of within subjects assumptions (notably sphericity). What I've described is how to do repeated measure ANOVA. You also might want to look at the car, or higher level ez packages in order to do it as well.)
As for your Bonferroni query... it should probably have been a separate question. Nevertheless, that's a bit of a hard one to answer with repeated measures. You could try ?pairwise.t.test. If you give the interactions of all your within variables as the group factor and set paired to true and the correction to bonf you're set. However, straight corrections like that probably are far too conservative. You state at the outset you're only going to use it if there is a significant effect, you probably also have a theoretical reason for making some comparisons, therefore it's not strictly the fishing expedition that Bonferroni (over) corrects for. So, something like...
with( df, pairwise.t.test(y, x1:x2, paired = TRUE, p.adj = 'bonf') )
will do what you want but that's probably not really what you want. | Repeated measures ANOVA in R and Bonferroni adjusted intervals
I agree with suncoolsu that it is difficult to tell exactly what you were looking for. And in general it's not recommended to do standard repeated measures ANOVAs anymore since there are generally be |
53,454 | Repeated measures ANOVA in R and Bonferroni adjusted intervals | You should provide more details about your data. From the limited details provided by you, assuming you have a data frame df which has response, trt, time, and subject information, there are many ways to fit an LME model in R using the lme4 package. However, I will illustrate three methods that I think will be useful for you.
library(lme4)
# Random intercepts for different subjects but time and trt effects are fixed
mmod1 <- lmer(response ~ time*trt + (1 | subject), df)
# Random intercepts and trt effects for different subjects, but the time effect is still fixed
mmod2 <- lmer(response ~ time*trt + (1 + trt | subject), df)
# Random intercepts, trt, and time effects for different subjects
mmod3 <- lmer(response ~ time*trt + (1 + trt + time | subject), df)
Once you have fitted the models above, you can use:
HPDinterval(mmod1, prob = 0.95, ...)
HPDinterval(mmod2, prob = 0.95, ...)
HPDinterval(mmod3, prob = 0.95, ...)
to obtain the 95% CI determined from an MCMC sample. Since this CI is obtained from MCMC sampling, it takes the random errors into account and you won't need to correct for multiple comparisons (I think so, please correct me if I am wrong). | Repeated measures ANOVA in R and Bonferroni adjusted intervals | You should provide more details about your data. From the limited details provided by you, assuming you have a data frame df which has response, trt, time, and subject information, then these are many | Repeated measures ANOVA in R and Bonferroni adjusted intervals
You should provide more details about your data. From the limited details provided by you, assuming you have a data frame df which has response, trt, time, and subject information, then these are many ways to fit a LME model in R using lme4 package. However, I will illustrate three methods that I think will be useful for you.
library(lme4)
# Random intercepts for different subjects but time and trt effects are fixed
mmod1 <- lmer(response ~ time*trt + (1 | subject), df)
# Random intercepts and trt effects for different subjects, but the time effect is still fixed
mmod2 <- lmer(response ~ time*trt + (1 + trt | subject), df)
# Random intercepts, trt, and time effects for different subjects
mmod3 <- lmer(response ~ time*trt + (1 + trt + time | subject), df)
Once you have the p-values from the model fit above, you can use:
HPDinterval(mmod1, prob = 0.95, ...)
HPDinterval(mmod2, prob = 0.95, ...)
HPDinterval(mmod3, prob = 0.95, ...)
to obtain the 95% CI determined from MCMC sample. Since this CI is obtained from MCMC sampling, it takes into account of the random errors and you won't need to correct for multiple comparisons (I think so, please correct me if I am wrong). | Repeated measures ANOVA in R and Bonferroni adjusted intervals
You should provide more details about your data. From the limited details provided by you, assuming you have a data frame df which has response, trt, time, and subject information, then these are many |
53,455 | Off the shelf tool for multi-label classification | As @mbq suggests, a battery of binary classifiers is a good place to start. Ridge regression classifiers work pretty well on text classification problems (choose the ridge parameter via leave-one-out cross-validation. If training time is not an issue, also use the bootstrap as a further protection against over-fitting; the committee of bootstrap classifiers can be amalgamated into one, so it doesn't have a computational cost at runtime).
However, as there are 4K classes and only 10K samples, it will probably be necessary to look at the hierarchy of classes and try to predict whether the page fits into broad categories first. | Off the shelf tool for multi-label classification | As @mbq suggests, a battery of binary classifiers is a good place to start. Ridge regression classifiers work pretty well on text classification problems (choose the ridge parameter via leave-one-out | Off the shelf tool for multi-label classification
As @mbq suggests, a battery of binary classifiers is a good place to start. Ridge regression classifiers work pretty well on text classification problems (choose the ridge parameter via leave-one-out cross-validation. If training time is not an issue, also use the bootstrap as a further protection against over-fitting; the committee of boostrap classifiers can be amalgamated into one, so it doesn't have a computational cost at runtime).
However, as there are 4K classes and only 10K samples, it will probably be necessary to look at the hierarchy of classes and try to predict whether the page fits into broad categories first. | Off the shelf tool for multi-label classification
As @mbq suggests, a battery of binary classifiers is a good place to start. Ridge regression classifiers work pretty well on text classification problems (choose the ridge parameter via leave-one-out |
53,456 | Off the shelf tool for multi-label classification | About multilabel classification, the baseline (but usually quite good) approach is just to make a battery of binary classifiers, each trained to recognize one class versus all others; use them all on each sample and combine their answers.
This is trivial to implement, so almost any tool will do.
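For illustration, a minimal one-vs-rest sketch in R (assumptions: a feature matrix X, a 0/1 label matrix Y with one column per class, new samples in X_new; ridge-penalised logistic regressions from glmnet stand in for the binary classifiers):
library(glmnet)
fits <- lapply(seq_len(ncol(Y)), function(k)
  cv.glmnet(X, Y[, k], family = "binomial", alpha = 0))
scores <- sapply(fits, function(f)
  predict(f, newx = X_new, s = "lambda.min", type = "response"))
pred <- scores > 0.5                       # keep every label whose classifier fires for a sample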
However, you have another problem -- an ultra-poor training set. 10k samples in 4k classes gives 2-3 examples per class -- this is almost nothing; I can at most imagine some embarrassing 1-NN method in this setting. | Off the shelf tool for multi-label classification | About multilabel classification, the baseline (but usually quite good) approach is just to make a battery of binary classifiers, each trained to recognize one class versus all other, use them all on e | Off the shelf tool for multi-label classification
About multilabel classification, the baseline (but usually quite good) approach is just to make a battery of binary classifiers, each trained to recognize one class versus all other, use them all on each sample and combine their answers.
This is trivial to implement, so almost any tool will do.
However, you have other problem -- an ultrapoor training set. 10k samples in 4k classes gives 2-3 examples per class -- this is almost nothing; I can at most imagine some embarrassing 1-NN method in this setting. | Off the shelf tool for multi-label classification
About multilabel classification, the baseline (but usually quite good) approach is just to make a battery of binary classifiers, each trained to recognize one class versus all other, use them all on e |
53,457 | Off the shelf tool for multi-label classification | Try using R. You can use the factor data type to store what you are calling each document's "label."
Before you use any predictive functions, you'll have to calculate some quantitative attributes about each document. This is going to depend on the nature of your problem, but check out the Natural Language Processing section on CRAN for help with this step.
Once you've converted your documents into a dataframe with one row per document, and variables representing that document's classification, as well as covariates describing the document's content and metadata, you can start building predictive models. You might start with linear discriminant analysis.
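A hypothetical shape of that modelling step (docs is assumed to be the per-document data frame with a factor column label, and docs_new the documents to classify):
library(MASS)
fit <- lda(label ~ ., data = docs)             # linear discriminant analysis on the document features
predict(fit, newdata = docs_new)$class         # predicted category for the new documents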
With 4k classes and over 10k documents, I think it is going to be difficult to develop a good classifier. See if any of the authors you've been reading have written packages for R. | Off the shelf tool for multi-label classification | Try using R. You can use the factor data type to store what you are calling each document's "label."
Before you use any predictive functions, you'll have to calculate some quantitative attributes abo | Off the shelf tool for multi-label classification
Try using R. You can use the factor data type to store what you are calling each document's "label."
Before you use any predictive functions, you'll have to calculate some quantitative attributes about each document. This is going to depend on the nature of your problem, but check out the Natural Language Processing section on CRAN for help with this step.
Once you've converted your documents into a dataframe with one row per document, and variables representing that document's classification, as well as covariates describing the document's content and metadata, you can start building predictive models. You might start with linear discriminant analysis.
With 4k classes and over 10k documents, I think it is going to be difficult to develop a good classifier. See if any of the authors you've been reading have written packages for R. | Off the shelf tool for multi-label classification
Try using R. You can use the factor data type to store what you are calling each document's "label."
Before you use any predictive functions, you'll have to calculate some quantitative attributes abo |
53,458 | Off the shelf tool for multi-label classification | One of my homework assignments this semester was exactly to do this (well, not exactly, most of it was already implemented and we had to do some improvements and experimentation).
You can read the details here.
Anyway, whatever method you choose to use, I suggest to take a look at Weka.
Most probably the classifier you are looking for is already implemented there. | Off the shelf tool for multi-label classification | One of my homework assignments this semester was exactly to do his (well, not exactly, most of it was already implemented and we had to do some improvements and experimentation).
You can read the deta | Off the shelf tool for multi-label classification
One of my homework assignments this semester was exactly to do his (well, not exactly, most of it was already implemented and we had to do some improvements and experimentation).
You can read the details here.
Anyway, whatever method you choose to use, I suggest to take a look at Weka.
Most probably the classifier you are looking for is already implemented there. | Off the shelf tool for multi-label classification
One of my homework assignments this semester was exactly to do his (well, not exactly, most of it was already implemented and we had to do some improvements and experimentation).
You can read the deta |
53,459 | Off the shelf tool for multi-label classification | In general, you'll need to specify what makes each category unique to be able to choose which classifier to use. In a silly example, if categories are based on the size of the document... you get the idea.
One good classifier that I used for this task was based on the Normalized Compression Distance (NCD). It measures how close two documents are using compressors like ZIP.
For example: you have a document D and you have 10 categories C1, C2... C10. In each category you have one example document: C1a in category C1, C2a in category C2...
To classify the document D, you concatenate it with C1a and compress, then concatenate D with C2a and compress, and so on. After some calculation, you end up with one NCD measure per category. Then you attribute D to the category that gave you the minimum NCD value.
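A small R sketch of that calculation (gzip standing in for the compressor; doc is the text to classify and examples is a named list with one example text per category; both names are assumptions for the illustration):
C <- function(s) length(memCompress(charToRaw(s), type = "gzip"))   # compressed size
ncd <- function(x, y) (C(paste0(x, y)) - min(C(x), C(y))) / max(C(x), C(y))
scores <- sapply(examples, ncd, x = doc)
names(which.min(scores))                   # predicted category = smallest NCD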
The good point is that I'll finish my master's degree in two weeks, using NCD for authorship attribution. I got correct attributions > 95% when dealing with 20 classes and > 55% when dealing with 100 classes, all on Portuguese documents from newspapers. The bad point is that it's written in Portuguese... | Off the shelf tool for multi-label classification | In general, you'll need to specify what make each category unique to be able to choose which classifier to use. In a silly example, if categories are based on the size of the document... you got the i | Off the shelf tool for multi-label classification
In general, you'll need to specify what make each category unique to be able to choose which classifier to use. In a silly example, if categories are based on the size of the document... you got the idea.
One good classifier that I used in this task was using the Normalized Compression Distance (NCD). It measures how close two documents are using compressors like ZIP.
For example: you have a document D and you have 10 categories C1, C2... C10. In each category you have one example document: C1a in category C1, C2a in category C2...
To verify the document D, you'll concatenate it with C1a and compress, then concatenate D with C2a and compress, and so on. By doing some calculation, you'll end up have NCD measures. Them you attribute it to the category that gave you the minimum NCD value.
The good point is that I'll end up my master degree in two weeks, using NCD as authorship attribution. Got correct attributions > 95% when dealing with 20 classes, > 55% when dealing with 100 classes, all of portuguese documents from newspapers. The bad point is that it's written in portuguese... | Off the shelf tool for multi-label classification
In general, you'll need to specify what make each category unique to be able to choose which classifier to use. In a silly example, if categories are based on the size of the document... you got the i |
53,460 | Off the shelf tool for multi-label classification | I haven't used them yet, but Mulan and Meka are multi-label addons for Weka. Meka seems more off the shelf, and includes support for Mulan. | Off the shelf tool for multi-label classification | I haven't used them yet, but Mulan and Meka are multi-label addons for Weka. Meka seems more off the shelf, and includes support for Mulan. | Off the shelf tool for multi-label classification
I haven't used them yet, but Mulan and Meka are multi-label addons for Weka. Meka seems more off the shelf, and includes support for Mulan. | Off the shelf tool for multi-label classification
I haven't used them yet, but Mulan and Meka are multi-label addons for Weka. Meka seems more off the shelf, and includes support for Mulan. |
53,461 | Convex Hull in R | I think you want the convex hull of your data. Try this
library(grDevices) # load grDevices package
df <- data.frame(X = c(-62, -40, 9, 13, 26, 27, 27),
Y = c( 7, -14, 10, 9, -8, -16, 12)) # store X,Y together
con.hull.pos <- chull(df) # find positions of convex hull
con.hull <- rbind(df[con.hull.pos,],df[con.hull.pos[1],]) # get coordinates for convex hull
plot(Y ~ X, data = df) # plot data
lines(con.hull) # add lines for convex hull
EDIT
If you want to add a line from the origin to each side of the convex hull such that each line is perpendicular to the convex hull, then try this:
getPerpPoints <- function(mat) {
# mat: 2x2 matrix with first row corresponding to first point
# on the line and second row corresponding to second
# point on the line
#
# output: two points which define the line going from the side
# to the origin
# store the inputs more conveniently
x <- mat[,1]
y <- mat[,2]
# define a new matrix to hold the output
out <- matrix(0, nrow = 2, ncol = 2)
# handle special case of vertical line
if(diff(x) == 0) {
xnew <- x[1]
}
else {
# find point on original line
xnew <- (diff(y) / diff(x)) * x[1] - y[1]
xnew <- xnew / (diff(y) / diff(x) + diff(x) / diff(y))
}
ynew <- -(diff(x) / diff(y)) * xnew
# put new point in second row of matrix
out[2,] <- c(xnew, ynew)
return(out = out)
}
After you've plotted the initial points, as well as the convex hull of the data, run the above code and the following:
for(i in 1:4) {
lines(getPerpPoints(con.hull[i:(i+1),]))
}
Keep in mind that some of the lines going from the origin to each side will not terminate within the interior of the convex hull of the data.
Here is what I got as output: | Convex Hull in R | I think you want the convex hull of your data. Try this
library(grDevices) # load grDevices package
df <- data.frame(X = c(-62, -40, 9, 13, 26, 27, 27),
Y = c( 7, -14, 10, 9 | Convex Hull in R
I think you want the convex hull of your data. Try this
library(grDevices) # load grDevices package
df <- data.frame(X = c(-62, -40, 9, 13, 26, 27, 27),
Y = c( 7, -14, 10, 9, -8, -16, 12)) # store X,Y together
con.hull.pos <- chull(df) # find positions of convex hull
con.hull <- rbind(df[con.hull.pos,],df[con.hull.pos[1],]) # get coordinates for convex hull
plot(Y ~ X, data = df) # plot data
lines(con.hull) # add lines for convex hull
EDIT
If you want to add a line from the origin to each side of the convex hull such that each line is perpendicular to the convex hull, then try this:
getPerpPoints <- function(mat) {
# mat: 2x2 matrix with first row corresponding to first point
# on the line and second row corresponding to second
# point on the line
#
# output: two points which define the line going from the side
# to the origin
# store the inputs more conveniently
x <- mat[,1]
y <- mat[,2]
# define a new matrix to hold the output
out <- matrix(0, nrow = 2, ncol = 2)
# handle special case of vertical line
if(diff(x) == 0) {
xnew <- x[1]
}
else {
# find point on original line
xnew <- (diff(y) / diff(x)) * x[1] - y[1]
xnew <- xnew / (diff(y) / diff(x) + diff(x) / diff(y))
}
ynew <- -(diff(x) / diff(y)) * xnew
# put new point in second row of matrix
out[2,] <- c(xnew, ynew)
return(out = out)
}
After you've plotted the initial points, as well as the convex hull of the data, run the above code and the following:
for(i in 1:4) {
lines(getPerpPoints(con.hull[i:(i+1),]))
}
Keep in mind that some of the lines going from the origin to each side will not terminate within the interior of the convex hull of the data.
Here is what I got as output: | Convex Hull in R
I think you want the convex hull of your data. Try this
library(grDevices) # load grDevices package
df <- data.frame(X = c(-62, -40, 9, 13, 26, 27, 27),
Y = c( 7, -14, 10, 9 |
53,462 | Convex Hull in R | I'm not 100% sure I'm following what you are trying to do with abline, but maybe this will move you in the right direction. You can use the function which.min() and which.max() to return the minimum or maximum values from a vector. You can combine that with the [ operator to index a second vector with that condition. For example:
X[which.min(Y)]
X[which.max(Y)]
EDIT to address additional details in the question
Instead of indexing the X vector with the min/max value of the Y vector, you can index the Y vector itself...and the X vector for the X vector:
c(X[which.min(X)], Y[which.min(Y)])
c(X[which.min(X)], Y[which.max(Y)])
c(X[which.max(X)], Y[which.min(Y)])
c(X[which.max(X)], Y[which.max(Y)])
EDIT # 2:
You want to find the convex hull of your data. Here's how you go about doing that:
#Make a data.frame out of your vectors
dat <- data.frame(X = X, Y = Y)
#Compute the convex hull. THis returns the index for the X and Y coordinates
c.hull <- chull(dat)
#You need five points to draw four line segments, so we add the fist set of points at the end
c.hull <- c(c.hull, c.hull[1])
#Here's how we get the points back
#Extract the points from the convex hull. Note we are using the row indices again.
dat[c.hull ,]
#Make a pretty plot
with(dat, plot(X,Y))
lines(dat[c.hull ,], col = "pink", lwd = 3)
###Note if you wanted the bounding box
library(spatstat)
box <- bounding.box.xy(dat)
plot(box, add = TRUE, lwd = 3)
#Retrieve bounding box points
with(box, expand.grid(xrange, yrange))
And as promised, your pretty plot: | Convex Hull in R | I'm not 100% sure I'm following what you are trying to do with abline, but maybe this will move you in the right direction. You can use the function which.min() and which.max() to return the minimum o | Convex Hull in R
I'm not 100% sure I'm following what you are trying to do with abline, but maybe this will move you in the right direction. You can use the function which.min() and which.max() to return the minimum or maximum values from a vector. You can combine that with the [ operator to index a second vector with that condition. For example:
X[which.min(Y)]
X[which.max(Y)]
EDIT to address additional details in the question
Instead of indexing the X vector with the min/max value of the Y vector, you can index the Y vector itself...and the X vector for the X vector:
c(X[which.min(X)], Y[which.min(Y)])
c(X[which.min(X)], Y[which.max(Y)])
c(X[which.max(X)], Y[which.min(Y)])
c(X[which.max(X)], Y[which.max(Y)])
EDIT # 2:
You want to find the convex hull of your data. Here's how you go about doing that:
#Make a data.frame out of your vectors
dat <- data.frame(X = X, Y = Y)
#Compute the convex hull. THis returns the index for the X and Y coordinates
c.hull <- chull(dat)
#You need five points to draw four line segments, so we add the fist set of points at the end
c.hull <- c(c.hull, c.hull[1])
#Here's how we get the points back
#Extract the points from the convex hull. Note we are using the row indices again.
dat[c.hull ,]
#Make a pretty plot
with(dat, plot(X,Y))
lines(dat[c.hull ,], col = "pink", lwd = 3)
###Note if you wanted the bounding box
library(spatstat)
box <- bounding.box.xy(dat)
plot(box, add = TRUE, lwd = 3)
#Retrieve bounding box points
with(box, expand.grid(xrange, yrange))
And as promised, your pretty plot: | Convex Hull in R
I'm not 100% sure I'm following what you are trying to do with abline, but maybe this will move you in the right direction. You can use the function which.min() and which.max() to return the minimum o |
53,463 | Interpreting p-values associated with correlation measurements | One explanation is that outliers, even mild ones can affect the results in a pearson correlation. If the outlier is a legitimate point (not a typo or other error) then it should increase the significance of the correlation (as you see), but will not change much in the other 2, so it is easy for the pearson correlation to be larger and more significant. In real data analysis seeing this would suggest looking for outliers (you should be plotting the data anyways) that are influencing the results. What to do next depends on what question you are asking and what assumptions are reasonable given the science. | Interpreting p-values associated with correlation measurements | One explanation is that outliers, even mild ones can affect the results in a pearson correlation. If the outlier is a legitimate point (not a typo or other error) then it should increase the signific | Interpreting p-values associated with correlation measurements
One explanation is that outliers, even mild ones can affect the results in a pearson correlation. If the outlier is a legitimate point (not a typo or other error) then it should increase the significance of the correlation (as you see), but will not change much in the other 2, so it is easy for the pearson correlation to be larger and more significant. In real data analysis seeing this would suggest looking for outliers (you should be plotting the data anyways) that are influencing the results. What to do next depends on what question you are asking and what assumptions are reasonable given the science. | Interpreting p-values associated with correlation measurements
One explanation is that outliers, even mild ones can affect the results in a pearson correlation. If the outlier is a legitimate point (not a typo or other error) then it should increase the signific |
53,464 | Interpreting p-values associated with correlation measurements | @Greg Snow is on the money about your first question.
In regard to your second, comparing the two tests is misleading since two hypotheses are different even though the scientific question is (ostensibly) the same. This is a case where it's really important to be explicit about what hypothesis test you're using.
To be explicit, the test using $r$ is testing something like $H_0: r=0$ vs $H_1: r \neq 0$. For Spearman's rho, you're testing $H_0: \rho=0$ vs $H_1: \rho \neq 0$. Using $r$ presumes a linear relationship, while using $\rho$ presumes a more general monotonic relationship since it's based on the observed ranks (which is also where it gets its robustness). The two hypotheses are actually quite different. | Interpreting p-values associated with correlation measurements | @Greg Snow is on the money about your first question.
In regard to your second, comparing the two tests is misleading since two hypotheses are different even though the scientific question is (ostensi | Interpreting p-values associated with correlation measurements
@Greg Snow is on the money about your first question.
In regard to your second, comparing the two tests is misleading since two hypotheses are different even though the scientific question is (ostensibly) the same. This is a case where it's really important to be explicit about what hypothesis test you're using.
To be explicit, the test using $r$ is testing something like $H_0: r=0$ vs $H_1: r \neq 0$. For Spearman's rho, you're testing $H_0: \rho=0$ vs $H_1: \rho \neq 0$. Using $r$ presumes a linear relationship, while using $\rho$ presumes a more general monotonic relationship since it's based on the observed ranks (which is also where it gets its robustness). The two hypotheses are actually quite different. | Interpreting p-values associated with correlation measurements
@Greg Snow is on the money about your first question.
In regard to your second, comparing the two tests is misleading since two hypotheses are different even though the scientific question is (ostensi |
53,465 | Rescaling for desired standard deviation | The SD is directly proportional to the data. Therefore, to change it from 10 to 15 = 1.5 * 10, multiply all scores by 1.5. The other way is to multiply all scores by -1.5, because negating all values does not change the SD. Of course you can also add an arbitrary constant to all the scores, too, without changing the SD. That is an exhaustive description of the linear transformations of the data that change the SD to the desired value.
You would use the negative multiple when you want to reverse the order of the data. | Rescaling for desired standard deviation | The SD is directly proportional to the data. Therefore, to change it from 10 to 15 = 1.5 * 10, multiply all scores by 1.5. The other way is to multiply all scores by -1.5, because negating all value | Rescaling for desired standard deviation
The SD is directly proportional to the data. Therefore, to change it from 10 to 15 = 1.5 * 10, multiply all scores by 1.5. The other way is to multiply all scores by -1.5, because negating all values does not change the SD. Of course you can also add an arbitrary constant to all the scores, too, without changing the SD. That is an exhaustive description of the linear transformations of the data that change the SD to the desired value.
You would use the negative multiple when you want to reverse the order of the data. | Rescaling for desired standard deviation
The SD is directly proportional to the data. Therefore, to change it from 10 to 15 = 1.5 * 10, multiply all scores by 1.5. The other way is to multiply all scores by -1.5, because negating all value |
53,466 | Rescaling for desired standard deviation | If you have a random variable (or observed data) $X$ with mean $\mu_x$ and standard deviation $\sigma_x$, and then apply any linear transformation $$Y=a+bX$$ then you will find the mean of $Y$ is $$\mu_y = a + b \mu_x$$ and the standard deviation of $Y$ is $$\sigma_y = |b|\; \sigma_x.$$
So for example, as whuber says, to multiply the standard deviation by 1.5, the two possibilities are $b=1.5$ or $b=-1.5$, while $a$ can have any value. | Rescaling for desired standard deviation | If you have a random variable (or observed data) $X$ with mean $\mu_x$ and standard deviation $\sigma_x$, and then apply any linear transformation $$Y=a+bX$$ then you will find the mean of $Y$ is $$\m | Rescaling for desired standard deviation
If you have a random variable (or observed data) $X$ with mean $\mu_x$ and standard deviation $\sigma_x$, and then apply any linear transformation $$Y=a+bX$$ then you will find the mean of $Y$ is $$\mu_y = a + b \mu_x$$ and the standard deviation of $Y$ is $$\sigma_y = |b|\; \sigma_x.$$
So for example, as whuber says, to multiply the standard deviation by 1.5, the two possibilities are $b=1.5$ or $b=-1.5$, while $a$ can have any value. | Rescaling for desired standard deviation
If you have a random variable (or observed data) $X$ with mean $\mu_x$ and standard deviation $\sigma_x$, and then apply any linear transformation $$Y=a+bX$$ then you will find the mean of $Y$ is $$\m |
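As a small R illustration of the linear-transformation result in the two answers above (the sample here is simulated): choose b = target SD / current SD, and pick a so the mean stays where it was.
x <- rnorm(100, mean = 50, sd = 10)
b <- 15 / sd(x)                      # target SD of 15
y <- mean(x) + b * (x - mean(x))     # i.e. a = (1 - b) * mean(x)
c(sd.before = sd(x), sd.after = sd(y), mean.after = mean(y))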
53,467 | Why does adding statistically significant interaction reduce true positives? | "True positives", like proportion classified correctly, requires arbitrary and information losing categorization of the predicted values. These are improper scoring rules. An improper scoring rule is a criterion that when optimized leads to a bogus model. Also, watch out when using P-values in any way to guide model selection. This will greatly distort the inference from the final model. | Why does adding statistically significant interaction reduce true positives? | "True positives", like proportion classified correctly, requires arbitrary and information losing categorization of the predicted values. These are improper scoring rules. An improper scoring rule i | Why does adding statistically significant interaction reduce true positives?
"True positives", like proportion classified correctly, requires arbitrary and information losing categorization of the predicted values. These are improper scoring rules. An improper scoring rule is a criterion that when optimized leads to a bogus model. Also, watch out when using P-values in any way to guide model selection. This will greatly distort the inference from the final model. | Why does adding statistically significant interaction reduce true positives?
"True positives", like proportion classified correctly, requires arbitrary and information losing categorization of the predicted values. These are improper scoring rules. An improper scoring rule i |
53,468 | Why does adding statistically significant interaction reduce true positives? | No it should not. e.g. for logistic regression, which appears to be the case here, it may be that the greater p-value (I'm assuming you mean the right kind of p-value here, like from a likelihood ratio test) comes from increasing the odds for (correctly predicted) observations that are already relatively extreme (e.g. all observations that had predicted probability 0.8 become 0.9 and all those that had 0.2 become 0.1), while the ones that were only just on the right side of the 50% threshold are now just on the other side. As a result, the extremities are now predicted with more confidence, but there are more misclassifications.
In general, good fit does not guarantee good prediction (or the other way around) - even though that's the way most 'scientific' publications work these days :-(
I would advise you to look into a more evolved technique like LASSO or elastic net for variable selection... It will also easily allow you to optimize for some predictive measure like misclassification. In R, try glmnet. | Why does adding statistically significant interaction reduce true positives? | No it should not. e.g. for logistic regression, which appears to be the case here, it may be that the greater p-value (I'm assuming you mean the right kind of p-value here, like from a likelihood rati | Why does adding statistically significant interaction reduce true positives?
No it should not. e.g. for logistic regression, which appears to be the case here, it may be that the greater p-value (I'm assuming you mean the right kind of p-value here, like from a likelihood ratio test) comes from increasing the odds for (correctly predicted) observations that are already relatively extreme (e.g. all observations that had predicted probability 0.8 become 0.9 and all those that had 0.2 become 0.1), while the ones that were only just on the right side of the 50% threshold are now just on the other side. As a result, the extremities are now predicted with more confidence, but there are more misclassifications.
In general, good fit does not guarantee good prediction (or the other way around) - even though that's the way most 'scientific' publications work these days :-(
I would advise you to look into a more evolved technique like LASSO or elastic net for variable selection... It will also easily allow you to optimize for some predictive measure like misclassification. In R, try glmnet. | Why does adding statistically significant interaction reduce true positives?
No it should not. e.g. for logistic regression, which appears to be the case here, it may be that the greater p-value (I'm assuming you mean the right kind of p-value here, like from a likelihood rati |
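A toy numeric illustration of the mechanism described above (the predicted probabilities are made up, not from any real model): model B is more confident where it is already right but nudges the two borderline cases to the wrong side of 0.5, so its log-likelihood improves while its accuracy at the 50% threshold drops.
y  <- c(rep(1, 5), rep(0, 5))
pA <- c(0.70, 0.70, 0.70, 0.70, 0.55, 0.30, 0.30, 0.30, 0.30, 0.45)  # simpler model
pB <- c(0.95, 0.95, 0.95, 0.95, 0.45, 0.05, 0.05, 0.05, 0.05, 0.55)  # model with interaction
loglik <- function(p) sum(y * log(p) + (1 - y) * log(1 - p))
acc    <- function(p) mean((p > 0.5) == (y == 1))
c(loglikA = loglik(pA), loglikB = loglik(pB), accA = acc(pA), accB = acc(pB))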
53,469 | Using and/or operators in R | See ?"&": the single version does elementwise comparisons (for when you are doing logical operations on two vectors of the same length, e.g. if in your example x<-c(1.5,3.5)). The other one works just like C++'s or Java's &&: it only looks at the first element of each vector (this is typically an unexpected downside), but in a typically better performing way: it looks from left to right, and as soon as at least one of the values is false, it knows not to look at the rest anymore.
So if you know, in your examples x<-6 (in any case, just one value), you're better off using &&, otherwise, always use &. | Using and/or operators in R | See ?"&": the single version does elementwise comparisons (for when you are doing logical operations on two vectors of the same length, e.g. if in your example x<-c(1.5,3.5). The other one works just | Using and/or operators in R
See ?"&": the single version does elementwise comparisons (for when you are doing logical operations on two vectors of the same length, e.g. if in your example x<-c(1.5,3.5). The other one works just like C++'s or java's &&: it only looks at the first element of each vector (this is typically an unexpected downside), but in a typically better performing way: it looks from left to right, and as soon as at least one of the values is false, it knows not to look at the rest anymore.
So if you know, in your examples x<-6 (in any case, just one value), you're better off using &&, otherwise, always use &. | Using and/or operators in R
See ?"&": the single version does elementwise comparisons (for when you are doing logical operations on two vectors of the same length, e.g. if in your example x<-c(1.5,3.5). The other one works just |
53,470 | Probability that two values drawn at random from a normal distribution are separated by at most T | The distribution of the difference between two single independent samples from normal distributions has a mean which is the difference between the means of the original distributions and a variance which is the sum of the variances of the original distributions.
So in your case, if the normal distribution of $X$ has mean $\mu$ and variance $\sigma^2$ then $X_1-X_2$ has a normal distribution with mean $0$ and variance $2\sigma^2$; if you prefer,$\frac{X_1-X_2}{\sqrt{2} \sigma}$ has a standard normal distribution. So long as you do a two-tailed calculation, that should be enough for you to find your answer.
To spell it out, your answer should be
$$2 \Phi \left(\frac{T}{\sqrt{2} \sigma} \right) - 1$$
where $\Phi$ is the cumulative distribution function of a standard normal distribution. | Probability that two values drawn at random from a normal distribution are separated by at most T | The distribution of the difference between two single independent samples from normal distributions has a mean which is the difference between the means of the original distributions and a variance wh | Probability that two values drawn at random from a normal distribution are separated by at most T
The distribution of the difference between two single independent samples from normal distributions has a mean which is the difference between the means of the original distributions and a variance which is the sum of the variances of the original distributions.
So in your case, if the normal distribution of $X$ has mean $\mu$ and variance $\sigma^2$ then $X_1-X_2$ has a normal distribution with mean $0$ and variance $2\sigma^2$; if you prefer,$\frac{X_1-X_2}{\sqrt{2} \sigma}$ has a standard normal distribution. So long as you do a two-tailed calculation, that should be enough for you to find your answer.
To spell it out, your answer should be
$$2 \Phi \left(\frac{T}{\sqrt{2} \sigma} \right) - 1$$
where $\Phi$ is the cumulative distribution function of a standard normal distribution. | Probability that two values drawn at random from a normal distribution are separated by at most T
The distribution of the difference between two single independent samples from normal distributions has a mean which is the difference between the means of the original distributions and a variance wh |
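The closed form above checks out against a quick simulation in R (sigma = 2 and T = 1 are arbitrary choices; the common mean cancels out of the difference):
sigma <- 2; Tval <- 1
exact <- 2 * pnorm(Tval / (sqrt(2) * sigma)) - 1
set.seed(42)
x1 <- rnorm(1e5, mean = 10, sd = sigma)
x2 <- rnorm(1e5, mean = 10, sd = sigma)
c(exact = exact, simulated = mean(abs(x1 - x2) <= Tval))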
53,471 | 2x2 chi-square test vs. binomial proportion statistic | Suppose you have
Case 1:
A=200, B=100
C=100, D=200
versus
Case 2:
A=200, B=0
C=200, D=200
The B=0 in case 2 means that case 2 provides much stronger evidence than case 1 of a relationship between X and Outcome; but in your test, both cases would be scored the same.
The Chi-Square test, informally speaking, not only takes into account the "X XOR Outcome" relationship (which is what you test) but also "X implies Outcome", "not X implies Outcome" and so on. | 2x2 chi-square test vs. binomial proportion statistic | Suppose you have
Case 1:
A=200, B=100
C=100, D=200
versus
Case 2:
A=200, B=0
C=200, D=200
The B=0 in case 2 means that case 2 provides much stronger evidence than case 1 of a relationship between | 2x2 chi-square test vs. binomial proportion statistic
Suppose you have
Case 1:
A=200, B=100
C=100, D=200
versus
Case 2:
A=200, B=0
C=200, D=200
The B=0 in case 2 means that case 2 provides much stronger evidence than case 1 of a relationship between X and Outcome; but in your test, both cases would be scored the same.
The Chi-Square test, informally speaking, not only takes into account the "X XOR Outcome" relationship (which is what you test) but also "X implies Outcome", "not X implies Outcome" and so on. | 2x2 chi-square test vs. binomial proportion statistic
Suppose you have
Case 1:
A=200, B=100
C=100, D=200
versus
Case 2:
A=200, B=0
C=200, D=200
The B=0 in case 2 means that case 2 provides much stronger evidence than case 1 of a relationship between |
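Running Pearson's chi-square on the two hypothetical tables above makes the point concrete: case 2, with B = 0, yields a much larger statistic (about 150 versus about 67), i.e. far stronger evidence of association.
case1 <- matrix(c(200, 100,
                  100, 200), nrow = 2, byrow = TRUE)
case2 <- matrix(c(200,   0,
                  200, 200), nrow = 2, byrow = TRUE)
chisq.test(case1, correct = FALSE)$statistic
chisq.test(case2, correct = FALSE)$statistic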
53,472 | 2x2 chi-square test vs. binomial proportion statistic | Shouldn't you favour Fisher's exact test on a 2x2 contingency table? The advantages of the Chi-squared test would be preserved, with the additional advantages of an exact test. Based on a few handbook readings, I believe that Fisher's exact test is recommended chiefly with low cell counts, but it is also often also recommended for 2x2 tables. | 2x2 chi-square test vs. binomial proportion statistic | Shouldn't you favour Fisher's exact test on a 2x2 contingency table? The advantages of the Chi-squared test would be preserved, with the additional advantages of an exact test. Based on a few handbook | 2x2 chi-square test vs. binomial proportion statistic
Shouldn't you favour Fisher's exact test on a 2x2 contingency table? The advantages of the Chi-squared test would be preserved, with the additional advantages of an exact test. Based on a few handbook readings, I believe that Fisher's exact test is recommended chiefly with low cell counts, but it is also often also recommended for 2x2 tables. | 2x2 chi-square test vs. binomial proportion statistic
Shouldn't you favour Fisher's exact test on a 2x2 contingency table? The advantages of the Chi-squared test would be preserved, with the additional advantages of an exact test. Based on a few handbook |
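In R this is a one-liner; the counts below are arbitrary, just to show the call:
tab <- matrix(c(12, 5,
                 7, 16), nrow = 2, byrow = TRUE)
fisher.test(tab)    # exact p-value plus an odds-ratio confidence interval
chisq.test(tab)     # the asymptotic chi-square, for comparison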
53,473 | 2x2 chi-square test vs. binomial proportion statistic | Fisher's exact was once only recommended for low cell counts because back in "the dark times" it was computationally infeasible to use it for large counts. Indeed, with some approximations doing small counts properly demands a correction be applied.
No matter, the hypergeometric tests involved are good and powerful. They can be generalized to NxM tables by decomposing these into 2x2 tables, where a count of significance for row and column is kept in row 1 and in column 1, and their complements summed to form row 2 and column 2, respectively.
See http://www.stat.psu.edu/online/courses/stat504/03_2way/30_2way_exact.htm for a nice overview and the text(s) by Bishop and Fienberg as well as Agresti for a detailed presentation.
Realizing the connection with the hypergeometric lets you pick two aspects of whether an intersection represents independence or not, letting both false alarms and effect size be specified.
I like the Bishop and Fienberg, and Fienberg books:
http://www.amazon.com/Discrete-Multivariate-Analysis-Theory-Practice/dp/0387728058/
http://www.amazon.com/Analysis-Cross-Classified-Categorical-Data/dp/0387728244/
as well as the related one by Zelterman:
http://www.amazon.com/Models-Discrete-Data-Daniel-Zelterman/dp/0198567014/ | 2x2 chi-square test vs. binomial proportion statistic | Fisher's exact was once only recommended for low cell counts because back in "the dark times" it was computationally infeasible to use it for large counts. Indeed, with some approximations doing smal | 2x2 chi-square test vs. binomial proportion statistic
Fisher's exact was once only recommended for low cell counts because back in "the dark times" it was computationally infeasible to use it for large counts. Indeed, with some approximations doing small counts properly demands a correction be applied.
No matter, the hypergeometric tests involved are good and powerful. They can be generalized to NxM tables by decomposing these into 2x2 tables, where a count of significance for row and column is kept in row 1 and in column 1, and their complements summed to form row 2 and column 2, respectively.
See http://www.stat.psu.edu/online/courses/stat504/03_2way/30_2way_exact.htm for a nice overview and the text(s) by Bishop and Fienberg as well as Agresti for a detailed presentation.
Realizing the connection with the hypergeometric lets you pick two aspects of whether an intersection represents independence or not, letting both false alarms and effect size be specified.
I like the Bishop and Fienberg, and Fienberg books:
http://www.amazon.com/Discrete-Multivariate-Analysis-Theory-Practice/dp/0387728058/
http://www.amazon.com/Analysis-Cross-Classified-Categorical-Data/dp/0387728244/
as well as the related one by Zelterman:
http://www.amazon.com/Models-Discrete-Data-Daniel-Zelterman/dp/0198567014/ | 2x2 chi-square test vs. binomial proportion statistic
Fisher's exact was once only recommended for low cell counts because back in "the dark times" it was computationally infeasible to use it for large counts. Indeed, with some approximations doing smal |
53,474 | 2x2 chi-square test vs. binomial proportion statistic | KL divergence is another good test statistic to use. It has very similar properties to the chi-square for large expected cell counts, but retains these "good" properties even when expected cell counts are small.
The KL divergence is given by:
$$\sum_{i}O_{i}\log\left(\frac{O_{i}}{E_{i}}\right)$$
And this statistic never breaks down unless your hypothesis (the $E_{i}$) contradicts the data by specifying $E_i=0$ when $O_i>0$. For independence we have $E_A=\frac{(A+B)(A+C)}{A+B+C+D}$, etc. as in the chi-square test. This is also called "likelihood ratio chi-square" in some stats packages. For the purpose of calculating a p-value, it's no different from the Pearson chi-square (they both require large $E_i$ for the asymptotic chi-square distribution to apply). But you don't need a p-value, as this is a log likelihood ratio, so you can interpret the numerical value directly. | 2x2 chi-square test vs. binomial proportion statistic | KL divergence is another good test statistic to use. It has very similar properties to the chi-square for large expected cell counts, but retains these "good" properties even when expected cell count | 2x2 chi-square test vs. binomial proportion statistic
KL divergence is another good test statistic to use. It has very similar properties to the chi-square for large expected cell counts, but retains these "good" properties even when expected cell counts are small.
The KL divergence is given by:
$$\sum_{i}O_{i}\log\left(\frac{O_{i}}{E_{i}}\right)$$
And this statistic never breaks down unless your hypothesis (the $E_{i}$) contradicts the data by specifying $E_i=0$ when $O_i>0$. For independence we have $E_A=\frac{(A+B)(A+C)}{A+B+C+D}$, etc. as in the chi-square test. This is also called "likelihood ratio chi-square" in some stats packages. For the purpose of calculating a p-value, it's no different from the Pearson chi-square (they both require large $E_i$ for the asymptotic chi-square distribution to apply). But you don't need a p-value, as this is a log likelihood ratio, so you can interpret the numerical value directly. | 2x2 chi-square test vs. binomial proportion statistic
KL divergence is another good test statistic to use. It has very similar properties to the chi-square for large expected cell counts, but retains these "good" properties even when expected cell count |
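Computed by hand for a small 2x2 table (arbitrary counts), together with the usual likelihood-ratio chi-square $G^2 = 2\sum_i O_i \log(O_i/E_i)$ and Pearson's statistic for comparison:
O <- matrix(c(30, 10,
              15, 25), nrow = 2, byrow = TRUE)
E <- outer(rowSums(O), colSums(O)) / sum(O)   # expected counts under independence
KL <- sum(O * log(O / E))                     # the statistic as written above
c(KL = KL, G2 = 2 * KL,
  pearson = unname(chisq.test(O, correct = FALSE)$statistic))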
53,475 | Spam filtering using naive Bayesian classifiers with the e1071/klaR package on R | The NaiveBayes() function in the klaR package obeys the classical formula R interface whereby you express your outcome as a function of its predictors, e.g. spam ~ x1+x2+x3. If your data are stored in a data.frame, you can input all predictors in the rhs of the formula using dot notation: spam ~ ., data=df means "spam as a function of all other variables present in the data.frame called df."
Here is a toy example, using the spam dataset discussed in the Elements of Statistical Learning (Hastie et al., Springer 2009, 2nd ed.), available on-line. This really is to get you started with the use of the R function, not the methodological aspects for using NB classifier.
data(spam, package="ElemStatLearn")
library(klaR)
# set up a training sample
train.ind <- sample(1:nrow(spam), ceiling(nrow(spam)*2/3), replace=FALSE)
# apply NB classifier
nb.res <- NaiveBayes(spam ~ ., data=spam[train.ind,])
# show the results
opar <- par(mfrow=c(2,4))
plot(nb.res)
par(opar)
# predict on holdout units
nb.pred <- predict(nb.res, spam[-train.ind,])
# raw accuracy
confusion.mat <- table(nb.pred$class, spam[-train.ind,"spam"])
sum(diag(confusion.mat))/sum(confusion.mat)
A recommended add-on package for such ML task is the caret package. It offers a lot of useful tools for preprocessing data, handling training/test samples, running different classifiers on the same data, and summarizing the results. It is available from CRAN and has a lot of vignettes that describe common tasks. | Spam filtering using naive Bayesian classifiers with the e1071/klaR package on R | The NaiveBayes() function in the klaR package obeys the classical formula R interface whereby you express your outcome as a function of its predictors, e.g. spam ~ x1+x2+x3. If your data are stored in | Spam filtering using naive Bayesian classifiers with the e1071/klaR package on R
The NaiveBayes() function in the klaR package obeys the classical formula R interface whereby you express your outcome as a function of its predictors, e.g. spam ~ x1+x2+x3. If your data are stored in a data.frame, you can input all predictors in the rhs of the formula using dot notation: spam ~ ., data=df means "spam as a function of all other variables present in the data.frame called df."
Here is a toy example, using the spam dataset discussed in the Elements of Statistical Learning (Hastie et al., Springer 2009, 2nd ed.), available on-line. This really is to get you started with the use of the R function, not the methodological aspects for using NB classifier.
data(spam, package="ElemStatLearn")
library(klaR)
# set up a training sample
train.ind <- sample(1:nrow(spam), ceiling(nrow(spam)*2/3), replace=FALSE)
# apply NB classifier
nb.res <- NaiveBayes(spam ~ ., data=spam[train.ind,])
# show the results
opar <- par(mfrow=c(2,4))
plot(nb.res)
par(opar)
# predict on holdout units
nb.pred <- predict(nb.res, spam[-train.ind,])
# raw accuracy
confusion.mat <- table(nb.pred$class, spam[-train.ind,"spam"])
sum(diag(confusion.mat))/sum(confusion.mat)
A recommended add-on package for such ML task is the caret package. It offers a lot of useful tools for preprocessing data, handling training/test samples, running different classifiers on the same data, and summarizing the results. It is available from CRAN and has a lot of vignettes that describe common tasks. | Spam filtering using naive Bayesian classifiers with the e1071/klaR package on R
The NaiveBayes() function in the klaR package obeys the classical formula R interface whereby you express your outcome as a function of its predictors, e.g. spam ~ x1+x2+x3. If your data are stored in |
53,476 | Spam filtering using naive Bayesian classifiers with the e1071/klaR package on R | There is a new book out called:
Machine Learning for Email: Spam Filtering and Priority Inbox
http://www.amazon.com/Machine-Learning-Email-Filtering-Priority/dp/1449314309/ref=sr_1_1?s=books&ie=UTF8&qid=1323836340&sr=1-1
I browsed the contents and the authors are explaining Bayesian spam filtering.
HTH | Spam filtering using naive Bayesian classifiers with the e1071/klaR package on R | There is a new book out called:
Machine Learning for Email: Spam Filtering and Priority Inbox
http://www.amazon.com/Machine-Learning-Email-Filtering-Priority/dp/1449314309/ref=sr_1_1?s=books&ie=UTF8&q | Spam filtering using naive Bayesian classifiers with the e1071/klaR package on R
There is a new book out called:
Machine Learning for Email: Spam Filtering and Priority Inbox
http://www.amazon.com/Machine-Learning-Email-Filtering-Priority/dp/1449314309/ref=sr_1_1?s=books&ie=UTF8&qid=1323836340&sr=1-1
I browsed the contents and the authors are explaining Bayesian spam filtering.
HTH | Spam filtering using naive Bayesian classifiers with the e1071/klaR package on R
There is a new book out called:
Machine Learning for Email: Spam Filtering and Priority Inbox
http://www.amazon.com/Machine-Learning-Email-Filtering-Priority/dp/1449314309/ref=sr_1_1?s=books&ie=UTF8&q |
53,477 | What does plotting residuals from one regression against the residuals from another regression give us? | This sounds like what I call an "added variable" plot. The idea behind these is to provide a visual way of whether adding a variable to a model (ht9 in your case) is likely to add anything to the model (soma on wt9 in your case).
It was explained to me like this. When you fit a linear regression, the order of the variables matters. It's kind of like imagining the variance in the soma variable as an "island". The first variable "claims" a portion of the variance on the island, and the second variable "claims" what it can from what is left over.
So basically this plot will show you if "what is left to explain" in soma's variation (residuals from soma.wt9) can be explained by "the capacity of ht9 to explain anything over and above wt9" (residuals from ht9.wt9).
You can also show mathematically what is going on. Residuals from soma.wt9 are calculated as:
$$e_{i}=soma-\beta_{0}-\beta_{1}wt9$$
residuals from ht9.wt9 are:
$$f_{i}=ht9-\alpha_{0}-\alpha_{1}wt9$$
Regression of $e_i$ on $f_i$ through the origin (because $\overline{e}=\overline{f}=0$, so line will pass through origin) gives
$$e_{i}=\delta f_{i}$$
Substituting the residual equations into this one gives:
$$soma-\beta_{0}-\beta_{1}wt9=\delta (ht9-\alpha_{0}-\alpha_{1}wt9)$$
Re-arranging terms gives:
$$soma=(\beta_{0}-\delta\alpha_{0})+(\beta_{1}-\delta\alpha_{1})wt9+\delta ht9$$
Hence, the estimated slope (using OLS regression) will be the same in the model with $soma = \beta_0+\beta_{wt9}wt9 + \beta_{ht9}ht9$ as in the model $resid.soma=\beta_{ht9} resid.ht9$
This also shows explicitly why having correlated regressor variables ($\alpha_{1}$ is a rescaled correlation) will make the estimated slopes change, and possibly be the "opposite sign" to what is expected.
I think this method was actually how Multiple regression was carried out before computers were able to invert large matrices. It may be quicker to invert lots of $2\times 2$ matrices than it is to invert one huge matrix. | What does plotting residuals from one regression against the residuals from another regression give | This sounds like what I call an "added variable" plot. The idea behind these is to provide a visual way of whether adding a variable to a model (ht9 in your case) is likely to add anything to the mod | What does plotting residuals from one regression against the residuals from another regression give us?
This sounds like what I call an "added variable" plot. The idea behind these is to provide a visual way of whether adding a variable to a model (ht9 in your case) is likely to add anything to the model (soma on wt9 in your case).
It was explained to me like this. When you fit a linear regression, the order of the variables matters. It's kind of like imagining the variance in the soma variable as an "island". The first variable "claims" a portion of the variance on the island, and the second variable "claims" what it can from what is left over.
So basically this plot will show you if "what is left to explain" in soma's variation (residuals from soma.wt9) can be explained by "the capacity of ht9 to explain anything over and above wt9" (residuals from ht9.wt9).
You can also show mathematically what is going on. Residuals from soma.wt9 are calculated as:
$$e_{i}=soma-\beta_{0}-\beta_{1}wt9$$
residuals from ht9.wt9 are:
$$f_{i}=ht9-\alpha_{0}-\alpha_{1}wt9$$
Regression of $e_i$ on $f_i$ through the origin (because $\overline{e}=\overline{f}=0$, so line will pass through origin) gives
$$e_{i}=\delta f_{i}$$
Substituting the residual equations into this one gives:
$$soma-\beta_{0}-\beta_{1}wt9=\delta (ht9-\alpha_{0}-\alpha_{1}wt9)$$
Re-arranging terms gives:
$$soma=(\beta_{0}-\delta\alpha_{0})+(\beta_{1}-\delta\alpha_{1})wt9+\delta ht9$$
Hence, the estimated slope (using OLS regression) will be the same in the model with $soma = \beta_0+\beta_{wt9}wt9 + \beta_{ht9}ht9$ as in the model $resid.soma=\beta_{ht9} resid.ht9$
This also shows explicitly why having correlated regressor variables ($\alpha_{1}$ is a rescaled correlation) will make the estimated slopes change, and possibly be the "opposite sign" to what is expected.
I think this method was actually how Multiple regression was carried out before computers were able to invert large matrices. It may be quicker to invert lots of $2\times 2$ matrices than it is to invert one huge matrix. | What does plotting residuals from one regression against the residuals from another regression give
This sounds like what I call an "added variable" plot. The idea behind these is to provide a visual way of whether adding a variable to a model (ht9 in your case) is likely to add anything to the mod |
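A small simulated check of the algebra above (made-up data and coefficients, with the variable names from the answer): the slope from regressing the two sets of residuals on each other through the origin equals the ht9 coefficient from the full multiple regression, and plotting one set of residuals against the other gives the added-variable plot itself.
set.seed(1)
n    <- 100
wt9  <- rnorm(n, 30, 4)
ht9  <- 100 + 0.8 * wt9 + rnorm(n, 0, 3)          # correlated with wt9
soma <- 2 + 0.05 * wt9 + 0.03 * ht9 + rnorm(n)

e <- resid(lm(soma ~ wt9))                        # soma.wt9
f <- resid(lm(ht9 ~ wt9))                         # ht9.wt9

coef(lm(e ~ f - 1))                               # slope of the added-variable plot
coef(lm(soma ~ wt9 + ht9))["ht9"]                 # identical value
plot(f, e, xlab = "residuals of ht9 on wt9", ylab = "residuals of soma on wt9")
abline(lm(e ~ f - 1))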
53,478 | What does plotting residuals from one regression against the residuals from another regression give us? | Judging by the details and variable names, soma.WT9 and HT9.WT9, you are obtaining the residuals by first regressing, soma on WT9 and HT9 on WT9 (right?). If I understood you correctly, the scatter plot between soma.WT9 and HT9.WT9 will tell you -- if after removing the effects of WT9 (possibly linear effects in your case) from HT9 and soma, is there a relationship between HT9 and soma. This is beneficial in the case when WT9 explains all the source of variation in soma, then the scatter plot between soma.WT9 and HT9.WT9 will not show any particular (recognizable/standard) pattern. It may be also called partial residual plots. | What does plotting residuals from one regression against the residuals from another regression give | Judging by the details and variable names, soma.WT9 and HT9.WT9, you are obtaining the residuals by first regressing, soma on WT9 and HT9 on WT9 (right?). If I understood you correctly, the scatter pl | What does plotting residuals from one regression against the residuals from another regression give us?
Judging by the details and variable names, soma.WT9 and HT9.WT9, you are obtaining the residuals by first regressing, soma on WT9 and HT9 on WT9 (right?). If I understood you correctly, the scatter plot between soma.WT9 and HT9.WT9 will tell you -- if after removing the effects of WT9 (possibly linear effects in your case) from HT9 and soma, is there a relationship between HT9 and soma. This is beneficial in the case when WT9 explains all the source of variation in soma, then the scatter plot between soma.WT9 and HT9.WT9 will not show any particular (recognizable/standard) pattern. It may be also called partial residual plots. | What does plotting residuals from one regression against the residuals from another regression give
Judging by the details and variable names, soma.WT9 and HT9.WT9, you are obtaining the residuals by first regressing, soma on WT9 and HT9 on WT9 (right?). If I understood you correctly, the scatter pl |
53,479 | What is the right statistical test to use when attesting claims like "60% people switched over from group A to group B"? | I think what you're probably looking for are the so-called mover-stayer models. The basic model is a loglinear model that allows for an "inflated" diagonal and usually involves some form of symmetry on the off-diagonals.
A good place to start is A. Agresti, Categorical Data Analysis, 2nd ed., pp. 423--428. (That's the big green book, not the smaller blue or white/brown one with almost the same name!)
You might also look at J. K. Lindsey, Applying generalized linear models, Springer, 1997, pp. 33--35.
Addendum: The log-linear version of the mover-stayer model is also often referred to as a quasi-independence model. Two forms are popular. The more general form is
$$
\log \mu_{ij} = \mu + \alpha_i + \beta_j + \delta_i \mathbf{1}_{(i = j)} ,
$$
where $\mathbf{1}_{(x \in A)}$ is the indicator function that $x \in A$.
Let's interpret this model. First off, if we drop the last term, then we recover the model for independence of the rows and columns. This would mean assuming that people change groups in such a way that the beginning and ending groups were independent of each other, even though each particular group (before and after) may have different marginal distributions.
By throwing in the $\delta_i$ into the model, we are instead assuming there are two groups in the population. There are the movers who will tend to move from one group to another distinct group and there are the stayers who tend to be happy with their first choice. When the population breaks down into two such groups, we expect to see an inflated diagonal due to the stayers.
The model is also called a quasi-independence model since it is a model for independence on the off-diagonals. This means that, among the movers, there is no interaction between the group they started in and the one they end up in ("drifters" might be a more evocative term).
Notice that if we have a separate $\delta_i$ for each $i$, then the diagonal cells will be fit perfectly by this model. A common variant is to replace $\delta_i$ by $\delta$, i.e., each group has a common affinity to "stay" versus "move".
Here is some $R$ code to fit such a model
# Mover-stayer model example
Y <- rbind( c(18,0,0,0,2),
c( 0,6,0,0,3),
c( 4,0,2,0,2),
c( 0,0,0,0,0),
c( 4,2,0,1,8))
grp <- c("A", "B", "C", "D", "E")
colnames(Y) <- grp
rownames(Y) <- grp
X <- expand.grid( row=grp, col=grp)
X <- cbind( X, sapply(grp, function(r) { X[,1]==r & X[,2]==r }) )
y <- as.vector(Y)
df <- data.frame(y,X)
# General model
move.stay <- glm(y~., family="poisson", data=df)
move.stay.summary <- summary(move.stay)
# Common diagonal term
move.stay.const <- glm(y~row+col+I(A+B+C+D+E), family="poisson", data=df)
move.stay.const.summary <- summary(move.stay.const)
At this point, you can do lots of things. If you need a formal hypothesis test, then a deviance test might be appropriate.
Your particular data certainly has the peculiarity of a row of all zeros. It's interesting that there was a "group" choice that contained no one in it to start with. | What is the right statistical test to use when attesting claims like "60% people switched over from | I think what you're probably looking for are the so-called mover-stayer models. The basic model is a loglinear model that allows for an "inflated" diagonal and usually involves some form of symmetry o | What is the right statistical test to use when attesting claims like "60% people switched over from group A to group B"?
I think what you're probably looking for are the so-called mover-stayer models. The basic model is a loglinear model that allows for an "inflated" diagonal and usually involves some form of symmetry on the off-diagonals.
A good place to start is A. Agresti, Categorical Data Analysis, 2nd ed., pp. 423--428. (That's the big green book, not the smaller blue or white/brown one with almost the same name!)
You might also look at J. K. Lindsey, Applying generalized linear models, Springer, 1997, pp. 33--35.
Addendum: The log-linear version of the mover-stayer model is also often referred to as a quasi-independence model. Two forms are popular. The more general form is
$$
\log \mu_{ij} = \mu + \alpha_i + \beta_j + \delta_i \mathbf{1}_{(i = j)} ,
$$
where $\mathbf{1}_{(x \in A)}$ is the indicator function that $x \in A$.
Let's interpret this model. First off, if we drop the last term, then we recover the model for independence of the rows and columns. This would mean assuming that people change groups in such a way that the beginning and ending groups were independent of each other, even though each particular group (before and after) may have different marginal distributions.
By throwing in the $\delta_i$ into the model, we are instead assuming there are two groups in the population. There are the movers who will tend to move from one group to another distinct group and there are the stayers who tend to be happy with their first choice. When the population breaks down into two such groups, we expect to see an inflated diagonal due to the stayers.
The model is also called a quasi-independence model since it is a model for independence on the off-diagonals. This means that, among the movers, there is no interaction between the group they started in and the one they end up in ("drifters" might be a more evocative term).
Notice that if we have a separate $\delta_i$ for each $i$, then the diagonal cells will be fit perfectly by this model. A common variant is to replace $\delta_i$ by $\delta$, i.e., each group has a common affinity to "stay" versus "move".
Here is some $R$ code to fit such a model
# Mover-stayer model example
Y <- rbind( c(18,0,0,0,2),
c( 0,6,0,0,3),
c( 4,0,2,0,2),
c( 0,0,0,0,0),
c( 4,2,0,1,8))
grp <- c("A", "B", "C", "D", "E")
colnames(Y) <- grp
rownames(Y) <- grp
X <- expand.grid( row=grp, col=grp)
X <- cbind( X, sapply(grp, function(r) { X[,1]==r & X[,2]==r }) )
y <- as.vector(Y)
df <- data.frame(y,X)
# General model
move.stay <- glm(y~., family="poisson", data=df)
move.stay.summary <- summary(move.stay)
# Common diagonal term
move.stay.const <- glm(y~row+col+I(A+B+C+D+E), family="poisson", data=df)
move.stay.const.summary <- summary(move.stay.const)
At this point, you can do lots of things. If you need a formal hypothesis test, then a deviance test might be appropriate.
Your particular data certainly has the peculiarity of a row of all zeros. It's interesting that there was a "group" choice that contained no one in it to start with. | What is the right statistical test to use when attesting claims like "60% people switched over from
I think what you're probably looking for are the so-called mover-stayer models. The basic model is a loglinear model that allows for an "inflated" diagonal and usually involves some form of symmetry o |
53,480 | What is the right statistical test to use when attesting claims like "60% people switched over from group A to group B"? | I would start with building the sums:
Stage 1           Stage 2
            A    B    C    D    E |  Sum
----------------------------------------
Group A    18    0    0    0    2 |   20
Group B     0    6    0    0    3 |    9
Group C     4    0    2    0    2 |    8
Group D     0    0    0    0    0 |    0
Group E     4    2    0    1    8 |   15
----------------------------------------
Sum        26    8    2    1   15 |   52
As long as you don't tell us that this is just a sample from a bigger population, I don't see any statistics involved, just frequency distributions.
No one went from A to B, so there is not much to calculate there. Two persons went from A to E. A initially had 20 members (am I right?) and 2/20 is 10%, so 10 percent of group A went to E. How many people ended up in group A later, or how many there were in total, is not of interest here.
If you ask 'How many from group E came from A?', then you would use the total of E in stage 2; there are 15 people, two of them came from A: 2/15. | What is the right statistical test to use when attesting claims like "60% people switched over from | I would start with building the sums:
Stage 1 Stage 2
A B C D E | Sum
-----------------------------------
Group A 18 0 0 0 2 | 20
Group B 0 6 | What is the right statistical test to use when attesting claims like "60% people switched over from group A to group B"?
I would start with building the sums:
Stage 1           Stage 2
            A    B    C    D    E |  Sum
----------------------------------------
Group A    18    0    0    0    2 |   20
Group B     0    6    0    0    3 |    9
Group C     4    0    2    0    2 |    8
Group D     0    0    0    0    0 |    0
Group E     4    2    0    1    8 |   15
----------------------------------------
Sum        26    8    2    1   15 |   52
As long as you don't tell us that this is just a sample from a bigger population, I don't see any statistics involved, just frequency distributions.
No one went from A to B, so there is not much to calculate there. Two persons went from A to E. A initially had 20 members (am I right?) and 2/20 is 10%, so 10 percent of group A went to E. How many people ended up in group A later, or how many there were in total, is not of interest here.
If you ask 'How many from group E came from A?', then you would use the total of E in stage 2; there are 15 people, two of them came from A: 2/15. | What is the right statistical test to use when attesting claims like "60% people switched over from
I would start with building the sums:
Stage 1 Stage 2
A B C D E | Sum
-----------------------------------
Group A 18 0 0 0 2 | 20
Group B 0 6 |
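The percentages in this answer can be produced in one go with prop.table() in R (row D is all zeros, so its row proportions come out as NaN):
Y <- rbind(A = c(18, 0, 0, 0, 2),
           B = c( 0, 6, 0, 0, 3),
           C = c( 4, 0, 2, 0, 2),
           D = c( 0, 0, 0, 0, 0),
           E = c( 4, 2, 0, 1, 8))
colnames(Y) <- rownames(Y)
round(prop.table(Y, margin = 1), 2)   # where each stage-1 group went, e.g. A -> E is 0.10
round(prop.table(Y, margin = 2), 2)   # where each stage-2 group came from, e.g. 2/15 of E from A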
53,481 | Visualizing the distribution of something within a very large body of data | While Whuber is correct in principle you still might be able to see something because your word is very infrequent and you only want plots of the one word. Something quite uncommon might only appear 30 times, probably not more than 500.
Let's say you convert your words into a single vector of words that's a million long. You could easily construct a plot with basic R commands. Let's call the vector 'words' and the rare item 'wretch'.
n <- length(words)
plot(1:n, integer(n), type = 'n', xlab = 'index of word', ylab = '', main = 'instances of wretch', yaxt = 'n', frame.plot = TRUE)
wretch <- which(words %in% 'wretch')
abline(v = wretch, col = 'red', lwd = 0.2)
You could change the line assigning wretch using a grep command if you need to account for variations of the word. Also, the lwd in the abline command could be set thicker or thinner depending on the frequency of the word. If you end up plotting 400 instances 0.2 will work fine.
I tried some density plots of this kind of data. I imported about 50,000 words of Shakespeare and finding patterns was easier for me in the code above than it was in the density plots. I used a very common word that appeared in frequency 200x more than the mean frequency ('to') and the plots looked just fine. I think you'll make a fine graph like this with rare instances in 1e6 words. | Visualizing the distribution of something within a very large body of data | While Whuber is correct in principle you still might be able to see something because your word is very infrequent and you only want plots of the one word. Something quite uncommon might only appear | Visualizing the distribution of something within a very large body of data
While Whuber is correct in principle you still might be able to see something because your word is very infrequent and you only want plots of the one word. Something quite uncommon might only appear 30 times, probably not more than 500.
Let's say you convert your words into a single vector of words that's a million long. You could easily construct a plot with basic R commands. Let's call the vector 'words' and the rare item 'wretch'.
n <- length(words)
plot(1:n, integer(n), type = 'n', xlab = 'index of word', ylab = '', main = 'instances of wretch', yaxt = 'n', frame.plot = TRUE)
wretch <- which(words %in% 'wretch')
abline(v = wretch, col = 'red', lwd = 0.2)
You could change the line assigning wretch using a grep command if you need to account for variations of the word. Also, the lwd in the abline command could be set thicker or thinner depending on the frequency of the word. If you end up plotting 400 instances 0.2 will work fine.
I tried some density plots of this kind of data. I imported about 50,000 words of Shakespeare and finding patterns was easier for me in the code above than it was in the density plots. I used a very common word that appeared in frequency 200x more than the mean frequency ('to') and the plots looked just fine. I think you'll make a fine graph like this with rare instances in 1e6 words. | Visualizing the distribution of something within a very large body of data
While Whuber is correct in principle you still might be able to see something because your word is very infrequent and you only want plots of the one word. Something quite uncommon might only appear |
53,482 | Visualizing the distribution of something within a very large body of data | With a 1200 dpi printer using the thinnest possible line (one pixel) for each word, your plot of a million words would still be almost 20 meters long!
Maybe a density plot would be more helpful. | Visualizing the distribution of something within a very large body of data | With a 1200 dpi printer using the thinnest possible line (one pixel) for each word, your plot of a million words would still be almost 20 meters long!
Maybe a density plot would be more helpful. | Visualizing the distribution of something within a very large body of data
With a 1200 dpi printer using the thinnest possible line (one pixel) for each word, your plot of a million words would still be almost 20 meters long!
Maybe a density plot would be more helpful. | Visualizing the distribution of something within a very large body of data
With a 1200 dpi printer using the thinnest possible line (one pixel) for each word, your plot of a million words would still be almost 20 meters long!
Maybe a density plot would be more helpful. |
53,483 | Visualizing the distribution of something within a very large body of data | Let's take a derivative (difference) here, so instead of working with location, you work directly with what you want: distance.
Say word FOO appears in the text 30 times. Calculate the distance (number of other words) between each consecutive occurrence of FOO, creating a vector of 29 distances. Then pick your plot: histogram, density, xy with log x, etc.
This doesn't show you where clusters are, but it does show clustering. | Visualizing the distribution of something within a very large body of data | Let's take a derivative (difference) here, so instead of working with location, you work directly with what you want: distance.
Say word FOO appears in the text 30 times. Calculate the distance (numbe | Visualizing the distribution of something within a very large body of data
Let's take a derivative (difference) here, so instead of working with location, you work directly with what you want: distance.
Say word FOO appears in the text 30 times. Calculate the distance (number of other words) between each consecutive occurrence of FOO, creating a vector of 29 distances. Then pick your plot: histogram, density, xy with log x, etc.
This doesn't show you where clusters are, but it does show clustering. | Visualizing the distribution of something within a very large body of data
Let's take a derivative (difference) here, so instead of working with location, you work directly with what you want: distance.
Say word FOO appears in the text 30 times. Calculate the distance (numbe |
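A sketch of this suggestion in R, assuming the words vector from the earlier answer ('FOO' is just a placeholder for the word of interest):
pos  <- which(words == "FOO")   # positions of the word in the text
gaps <- diff(pos)               # distances between consecutive occurrences
hist(gaps, breaks = 20, main = "Gaps between occurrences of FOO")
# clustering shows up as many small gaps together with a long right tail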
53,484 | Visualizing the distribution of something within a very large body of data | I don't know if this may be useful in your case, but in bioinformatics I often feel the need to visualize the distribution of gene counts in a given data set. This is definitely not as large as your data set, but I think the strategy can be followed for most of the large data sets.
A typical strategy would be to find a predetermined number of clusters using, say, hierarchical clustering (or any other clustering procedure). Once you have the clusters, you can sample a gene from each of these clusters. Assuming that the gene is representative of the cluster, visualizing the count for the gene (in form of density plot, histogram, qq-plot, etc.) is equivalent to visualizing the behavior of the cluster. You can do the same for all the clusters.
Basically, you reduce the huge data set to clusters and then visualize the representatives from these clusters assuming "on an average" the clusters' behavior will remain "more or less" the same.
Warning: This method is highly sensitive to a lot of things, a few of which are the clustering method, how many clusters you choose, etc.
I believe visualizing all the words would be pretty difficult if the number of words is reasonably large (say $\geq$ 50). As whuber aptly points out, it may be almost impossible. | Visualizing the distribution of something within a very large body of data | I don't know if this may be useful in your case, but in bioinformatics I often feel the need to visualize the distribution of gene counts in a give data set. This is definitely not as large as your da | Visualizing the distribution of something within a very large body of data
I don't know if this may be useful in your case, but in bioinformatics I often feel the need to visualize the distribution of gene counts in a given data set. This is definitely not as large as your data set, but I think the strategy can be followed for most of the large data sets.
A typical strategy would be to find a predetermined number of clusters using, say, hierarchical clustering (or any other clustering procedure). Once you have the clusters, you can sample a gene from each of these clusters. Assuming that the gene is representative of the cluster, visualizing the count for the gene (in form of density plot, histogram, qq-plot, etc.) is equivalent to visualizing the behavior of the cluster. You can do the same for all the clusters.
Basically, you reduce the huge data set to clusters and then visualize the representatives from these clusters assuming "on an average" the clusters' behavior will remain "more or less" the same.
Warning: This method is highly sensitive to a lot of things, a few of which are the clustering method, how many clusters you choose, etc.
I believe visualizing all the words would be pretty difficult if the number of words is reasonably large (say $\geq$ 50). As whuber aptly points out, it may be almost impossible. | Visualizing the distribution of something within a very large body of data
I don't know if this may be useful in your case, but in bioinformatics I often feel the need to visualize the distribution of gene counts in a give data set. This is definitely not as large as your da |
53,485 | Can constrained optimization techniques be applied to unconstrained problems? | As far as I'm concerned, constrained optimization is a less-than-optimal way of avoiding strong fluctuations in your parameters for the independents due to a bad model-specification. Pretty often a constraint is "needed" when the variance-covariance matrix is ill-structured, when there is a lot of (unaccounted) correlation between independents, when you have aliasing or near-aliasing in datasets, when you gave the model too many degrees of freedom, and so on. Basically, every condition that inflates the variance on the parameter estimates will cause an unconstrained method to behave poorly.
You can look at constrained optimization, but I reckon you should first take a look at your model if you believe constrained optimization is necessary. This is for two reasons:
There's no way you can still rely on the inference, even on the estimated variances for your parameters
You have no control over the amount of bias you introduce.
So depending on the goal of the analysis, constrained optimization can be a sub-optimal solution (purely estimating the parameters) or inappropriate (when inference is needed).
On a side note, penalized methods (in this case penalized likelihoods) are specifically designed for these cases, and introduce the bias in a controlled manner where it is accounted for (mostly). Using these, there is no need to go into constrained methods, as the classic optimization algorithms will do a pretty good job. And with the correct penalization, inference is still valid in many cases. So I'd rather go for such a method instead of putting arbitrary constraints that are not backed up with an inferential framework.
My 2 cents, YMMV. | Can constrained optimization techniques be applied to unconstrained problems? | As far as I'm concerned, constrained optimization is a less-than-optimal way of avoiding strong fluctuations in your parameters for the independents due to a bad model-specification. Pretty often a co | Can constrained optimization techniques be applied to unconstrained problems?
As far as I'm concerned, constrained optimization is a less-than-optimal way of avoiding strong fluctuations in your parameters for the independents due to a bad model-specification. Pretty often a constraint is "needed" when the variance-covariance matrix is ill-structured, when there is a lot of (unaccounted) correlation between independents, when you have aliasing or near-aliasing in datasets, when you gave the model too many degrees of freedom, and so on. Basically, every condition that inflates the variance on the parameter estimates will cause an unconstrained method to behave poorly.
You can look at constrained optimization, but I reckon you should first take a look at your model if you believe constrained optimization is necessary. This is for two reasons:
There's no way you can still rely on the inference, even on the estimated variances for your parameters
You have no control over the amount of bias you introduce.
So depending on the goal of the analysis, constrained optimization can be a sub-optimal solution (purely estimating the parameters) or inappropriate (when inference is needed).
On a side note, penalized methods (in this case penalized likelihoods) are specifically designed for these cases, and introduce the bias in a controlled manner where it is accounted for (mostly). Using these, there is no need to go into constrained methods, as the classic optimization algorithms will do a pretty good job. And with the correct penalization, inference is still valid in many cases. So I'd rather go for such a method instead of putting arbitrary constraints that are not backed up with an inferential framework.
My 2 cents, YMMV. | Can constrained optimization techniques be applied to unconstrained problems?
As far as I'm concerned, constrained optimization is a less-than-optimal way of avoiding strong fluctuations in your parameters for the independents due to a bad model-specification. Pretty often a co |
53,486 | Can constrained optimization techniques be applied to unconstrained problems? | As far as I know, there is no reason to stop you from applying constrained optimization to an unconstrained problem. However, this may not be a great idea in terms of computational complexity and convergence. For example, fitting a logistic regression model can be done efficiently with the Newton-Raphson approach (or the Fisher scoring variant). I am not sure if there is much to gain with the interior point approach in this particular case. | Can constrained optimization techniques be applied to unconstrained problems? | As far as I know, there is no reason to stop you from applying constrained optimization to an unconstrainted problem. However, this may not be a great idea in terms of computational complexity and con | Can constrained optimization techniques be applied to unconstrained problems?
As far as I know, there is no reason to stop you from applying constrained optimization to an unconstrained problem. However, this may not be a great idea in terms of computational complexity and convergence. For example, fitting a logistic regression model can be done efficiently with the Newton-Raphson approach (or the Fisher scoring variant). I am not sure if there is much to gain with the interior point approach in this particular case. | Can constrained optimization techniques be applied to unconstrained problems?
As far as I know, there is no reason to stop you from applying constrained optimization to an unconstrainted problem. However, this may not be a great idea in terms of computational complexity and con |
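For concreteness, here is a bare-bones Newton-Raphson (equivalently, Fisher scoring / IRLS, since they coincide for the logit link) fit of a logistic regression, checked against glm() on simulated data:
set.seed(1)
n <- 500
X <- cbind(1, rnorm(n), rnorm(n))                    # intercept plus two predictors
y <- rbinom(n, 1, plogis(X %*% c(-0.5, 1, -2)))

beta <- rep(0, ncol(X))
for (it in 1:25) {
  p    <- as.vector(plogis(X %*% beta))
  W    <- p * (1 - p)                                # weights
  step <- solve(t(X) %*% (X * W), t(X) %*% (y - p))  # (X'WX)^{-1} X'(y - p)
  beta <- beta + step
  if (max(abs(step)) < 1e-8) break
}
cbind(newton = as.vector(beta),
      glm    = coef(glm(y ~ X[, -1], family = binomial)))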
53,487 | Can constrained optimization techniques be applied to unconstrained problems? | The general sense in optimization is that if you have a convex function and no constraints, you want to use the "powerful stuff", gradient descent, Newton, etc. Without constraints interior point methods are not very good (competitive).
In particular for the problem you're studying (binary logistic regression) you should consider trying simple stochastic gradient descent.
Nothing really stops you from applying constrained optimization techniques to unconstrained problems. The same way nothing stops you from pushing (instead of riding) your car to work. But you should definitely try interior point methods w/o constraints and convince yourself about it.
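For instance, here is a quick R sketch on simulated data showing that a generic unconstrained optimizer recovers essentially the same logistic-regression fit as glm's Fisher scoring (the data and starting values are arbitrary):
# Compare glm (IRLS / Fisher scoring) with a generic unconstrained quasi-Newton fit
set.seed(42)
n <- 500
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + 1.2 * x))
negloglik <- function(beta) {
  eta <- beta[1] + beta[2] * x
  -sum(y * eta - log(1 + exp(eta)))          # negative Bernoulli log-likelihood
}
fit_glm  <- glm(y ~ x, family = binomial)
fit_bfgs <- optim(c(0, 0), negloglik, method = "BFGS")
rbind(glm = coef(fit_glm), bfgs = fit_bfgs$par)   # essentially identical estimates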
Finally, you mention that you want to try linear programming-based methods, presumably without constraints; I don't quite understand what you plan to do in this case. | Can constrained optimization techniques be applied to unconstrained problems? | The general sense in optimization is that if you have a convex function and no constraints, you want to use the "powerful stuff", gradient descent, Newton, etc. Without constraints interior point meth | Can constrained optimization techniques be applied to unconstrained problems?
The general sense in optimization is that if you have a convex function and no constraints, you want to use the "powerful stuff", gradient descent, Newton, etc. Without constraints interior point methods are not very good (competitive).
In particular for the problem you're studying (binary logistic regression) you should consider trying simple stochastic gradient descent.
Nothing really stops you from applying constrained optimization techniques to unconstrained problems. The same way nothing stops you from pushing (instead of riding) your car to work. But you should definitely try interior point methods w/o constraints and convince yourself about it.
Finally, you mention that you want to try linear programming-based methods, presumably without constraints; I don't quite understand what you plan to do in this case. | Can constrained optimization techniques be applied to unconstrained problems?
The general sense in optimization is that if you have a convex function and no constraints, you want to use the "powerful stuff", gradient descent, Newton, etc. Without constraints interior point meth |
53,488 | CI for a difference based on independent CIs | No, you can't compute a CI for the difference that way I'm afraid, for the same reason you can't use whether the CIs overlap to judge the statistical significance of the difference. See, for example,
"On Judging the Significance of Differences by Examining the Overlap Between Confidence Intervals"
Nathaniel Schenker, Jane F Gentleman. The American Statistician. August 1, 2001, 55(3): 182-186. doi:10.1198/000313001317097960.
http://pubs.amstat.org/doi/abs/10.1198/000313001317097960
or:
Overlapping confidence intervals or standard error intervals: What do they mean in terms of statistical significance?
Mark E. Payton, Matthew H. Greenstone, and Nathaniel Schenker. Journal of Insect Science 2003; 3: 34. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC524673/
The correct procedure requires you also know the sample sizes of both groups. You can then back-compute the two standard deviations from the CIs and use those to conduct a standard two-sample t-test, or to calculate a standard error of the difference and hence a CI for the difference. | CI for a difference based on independent CIs | No, you can't compute a CI for the difference that way I'm afraid, for the same reason you can't use whether the CIs overlap to judge the statistical significance of the difference. See, for example, | CI for a difference based on independent CIs
No, you can't compute a CI for the difference that way I'm afraid, for the same reason you can't use whether the CIs overlap to judge the statistical significance of the difference. See, for example,
"On Judging the Significance of Differences by Examining the Overlap Between Confidence Intervals"
Nathaniel Schenker, Jane F Gentleman. The American Statistician. August 1, 2001, 55(3): 182-186. doi:10.1198/000313001317097960.
http://pubs.amstat.org/doi/abs/10.1198/000313001317097960
or:
Overlapping confidence intervals or standard error intervals: What do they mean in terms of statistical significance?
Mark E. Payton, Matthew H. Greenstone, and Nathaniel Schenker. Journal of Insect Science 2003; 3: 34. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC524673/
The correct procedure requires you also know the sample sizes of both groups. You can then back-compute the two standard deviations from the CIs and use those to conduct a standard two-sample t-test, or to calculate a standard error of the difference and hence a CI for the difference. | CI for a difference based on independent CIs
No, you can't compute a CI for the difference that way I'm afraid, for the same reason you can't use whether the CIs overlap to judge the statistical significance of the difference. See, for example, |
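A small R sketch of the back-computation described in the answer above, assuming both intervals are symmetric t-based 95% CIs for group means (all numbers are invented for illustration):
# CI for a difference in means, reconstructed from two independent CIs plus group sizes
ci_diff <- function(lo1, hi1, n1, lo2, hi2, n2, conf = 0.95) {
  a   <- 1 - (1 - conf) / 2
  m1  <- (lo1 + hi1) / 2; se1 <- (hi1 - lo1) / (2 * qt(a, n1 - 1))   # back out mean and SE
  m2  <- (lo2 + hi2) / 2; se2 <- (hi2 - lo2) / (2 * qt(a, n2 - 1))
  d   <- m1 - m2
  sed <- sqrt(se1^2 + se2^2)                                         # SE of the difference
  df  <- sed^4 / (se1^4 / (n1 - 1) + se2^4 / (n2 - 1))               # Welch-Satterthwaite df
  c(diff = d, lower = d - qt(a, df) * sed, upper = d + qt(a, df) * sed)
}
ci_diff(lo1 = 10.2, hi1 = 13.8, n1 = 50, lo2 = 8.1, hi2 = 11.1, n2 = 60)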
53,489 | Options for 3D coordinate systems? | Along with the positions of the amino acids, protein folding can also be described by the angles in the peptide bonds such as used in a Ramachandran plot, and with global values such as the radius of gyration.
You could apply all those values together in a single model such as is being done with QSAR models to predict physiochemical properties of molecules. | Options for 3D coordinate systems? | Along with the positions of the amino acids, protein folding can also be described by the angles in the peptide bonds such as used in a Ramachandran plot, and with global values such as the radius of | Options for 3D coordinate systems?
Along with the positions of the amino acids, protein folding can also be described by the angles in the peptide bonds such as used in a Ramachandran plot, and with global values such as the radius of gyration.
You could apply all those values together in a single model such as is being done with QSAR models to predict physiochemical properties of molecules. | Options for 3D coordinate systems?
Along with the positions of the amino acids, protein folding can also be described by the angles in the peptide bonds such as used in a Ramachandran plot, and with global values such as the radius of |
53,490 | Options for 3D coordinate systems? | If we know something about the problem we're trying to model, then representing that knowledge to the model via a good set of features is incredibly valuable. This is often called "feature-engineering," and it can yield dramatic improvements in model quality. Sextus's answer provides an example, in the context of protein folding.
But there's no single "best representation" of arbitrary data, because not all problems have the same underlying mechanics. In other words, even if you do an experiment and validate that polar coordinates are more useful than Cartesian on one task, there's surely a different task where you can demonstrate the opposite result.
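For example, here is a small R helper that re-expresses Cartesian coordinates as spherical ones, so a model sees radius and angles directly; this is just an illustrative feature transform, not tied to any particular protein representation:
# Engineered features: radius, polar angle and azimuth from (x, y, z)
cart2sph <- function(x, y, z) {
  r     <- sqrt(x^2 + y^2 + z^2)
  theta <- acos(z / r)        # polar angle in [0, pi]; undefined at the origin
  phi   <- atan2(y, x)        # azimuth in (-pi, pi]
  cbind(r = r, theta = theta, phi = phi)
}
cart2sph(x = c(1, 0), y = c(1, 2), z = c(1, 2))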
Another layer of nuance is considering how many model parameters are required to achieve the desired result. It may be the case that having a more refined feature representation allows one to achieve similar results using a smaller network (measured by the number of parameters). | Options for 3D coordinate systems? | If we know something about the problem we're trying to model, then representing that knowledge to the model via a good set of features is incredibly valuable. This is often called "feature-engineering | Options for 3D coordinate systems?
If we know something about the problem we're trying to model, then representing that knowledge to the model via a good set of features is incredibly valuable. This is often called "feature-engineering," and it can yield dramatic improvements in model quality. Sextus's answer provides an example, in the context of protein folding.
But there's no single "best representation" of arbitrary data, because not all problems have the same underlying mechanics. In other words, even if you do an experiment and validate that polar coordinates are more useful than Cartesian on one task, there's surely a different task where you can demonstrate the opposite result.
Another layer of nuance is considering how many model parameters are required to achieve the desired result. It may be the case that having a more refined feature representation allows one to achieve similar results using a smaller network (measured by the number of parameters). | Options for 3D coordinate systems?
If we know something about the problem we're trying to model, then representing that knowledge to the model via a good set of features is incredibly valuable. This is often called "feature-engineering |
53,491 | forecasting without data | Use the queue function in the utilities package
This type of problem can be dealt with as a queueing problem, which is a class of problem dealt with in statistical theory. For the case where you have the inputs for the use of the facilities by a set of users, you can use a deterministic function to turn this into a set of queuing metrics. Typically, for each user you would have an input for the time they arrive, the amount of time they need to use the facilities, and a description of their waiting behaviour (i.e., how long they are willing to wait for the facility before giving up and leaving).
Assuming you have all this information, you can use the queue function in the utilities package in R to compute the queueing metrics under various numbers of beds in your facility (see O'Neill 2021 for further explanation). Below I show an example of this function showing queueing results from using n = 3 facilities for a set of twenty users with random arrival-times and use-times. By varying the parameter n you can see the queueing results using different numbers of amenities. As you can see, the function allows inputs for a range of aspects of the problem, including revival-times and close-times for the facilities.
#Set parameters for queuing model
lambda <- 1.5
mu <- 6
alpha <- 5
beta <- 2
#Generate arrival-times, use-times and patience-times
set.seed(1)
K <- 20
ARRIVE <- cumsum(rexp(K, rate = 1/lambda))
USE.FULL <- 2*mu*runif(K)
WAIT.MAX <- function(kappa) { alpha*exp(-kappa/beta) }
#Compute and print queuing information with n = 3
library(utilities)
QUEUE <- queue(arrive = ARRIVE, use.full = USE.FULL,
wait.max = WAIT.MAX, n = 3, revive = 2,
close.arrive = 30, close.full = 35)
#View the queue results
plot(QUEUE)
QUEUE
This code produces the following queueing output showing information for each of the users and facilities in the queueing problem. The plot shows this same information graphically, which gives a clear visualisation of the waiting-times, use-times, etc.
Queue Information
Model of an amenity with 3 service facilities with revival-time 2
Service facilities close to new arrivals at closure-time = 30
Service facilities close to new services at closure-time = 35
Service facilities end existing services at closure-time = 35
Users are allocated to facilities on a 'first-come first-served' basis
----------------------------------------------------------------------
User information
arrive wait use leave unserved F
user[1] 1.132773 0.000000 1.295324 2.428096 0.0000000 1
user[2] 2.905237 0.000000 8.684531 11.589768 0.0000000 2
user[3] 3.123797 0.000000 4.935293 8.059090 0.0000000 3
user[4] 3.333490 1.094606 9.851356 14.279452 0.0000000 1
user[5] 3.987593 3.032653 0.000000 3.987593 7.7647223 NA
user[6] 8.330046 1.729045 9.395193 19.454283 0.0000000 3
user[7] 10.174389 3.032653 0.000000 10.174389 6.6364357 NA
user[8] 10.983913 1.839397 0.000000 10.983913 6.3566350 NA
user[9] 12.418764 1.171004 9.472275 23.062043 0.0000000 2
user[10] 12.639333 3.032653 0.000000 12.639333 0.2799744 NA
user[11] 14.725436 1.554016 5.726761 22.006213 0.0000000 1
user[12] 15.868481 3.032653 0.000000 15.868481 8.7877649 NA
user[13] 17.724886 3.032653 0.000000 17.724886 8.3127787 NA
user[14] 24.360787 0.000000 5.731435 30.092223 0.0000000 1
user[15] 25.942602 0.000000 9.057398 35.000000 1.2771158 2
user[16] 27.495468 0.000000 5.257165 32.752633 0.0000000 3
user[17] 30.309521 0.000000 0.000000 30.309521 2.9375673 NA
user[18] 31.291641 0.000000 0.000000 31.291641 0.8481486 NA
user[19] 31.797041 0.000000 0.000000 31.797041 1.1935939 NA
user[20] 32.679761 0.000000 0.000000 32.679761 3.7952605 NA
----------------------------------------------------------------------
Facility information
open end.service use revive
F[1] 0 30.09222 22.60488 8
F[2] 0 35.00000 27.21420 6
F[3] 0 32.75263 19.58765 6 | forecasting without data | Use the queue function in the utilities package
This type of problem can be dealt with as a queueing problem, which is a class of problem dealt with in statistical theory. For the case where you have | forecasting without data
Use the queue function in the utilities package
This type of problem can be dealt with as a queueing problem, which is a class of problem dealt with in statistical theory. For the case where you have the inputs for the use of the facilities by a set of users, you can use a deterministic function to turn this into a set of queuing metrics. Typically, for each user you would have an input for the time they arrive, the amount of time they need to use the facilities, and a description of their waiting behaviour (i.e., how long they are willing to wait for the facility before giving up and leaving).
Assuming you have all this information, you can use the queue function in the utilities package in R to compute the queueing metrics under various numbers of beds in your facility (see O'Neill 2021 for further explanation). Below I show an example of this function showing queueing results from using n = 3 facilities for a set of twenty users with random arrival-times and use-times. By varying the parameter n you can see the queueing results using different numbers of amenities. As you can see, the function allows inputs for a range of aspects of the problem, including revival-times and close-times for the facilities.
#Set parameters for queuing model
lambda <- 1.5
mu <- 6
alpha <- 5
beta <- 2
#Generate arrival-times, use-times and patience-times
set.seed(1)
K <- 20
ARRIVE <- cumsum(rexp(K, rate = 1/lambda))
USE.FULL <- 2*mu*runif(K)
WAIT.MAX <- function(kappa) { alpha*exp(-kappa/beta) }
#Compute and print queuing information with n = 3
library(utilities)
QUEUE <- queue(arrive = ARRIVE, use.full = USE.FULL,
wait.max = WAIT.MAX, n = 3, revive = 2,
close.arrive = 30, close.full = 35)
#View the queue results
plot(QUEUE)
QUEUE
This code produces the following queueing output showing information for each of the users and facilities in the queueing problem. The plot shows this same information graphically, which gives a clear visualisation of the waiting-times, use-times, etc.
Queue Information
Model of an amenity with 3 service facilities with revival-time 2
Service facilities close to new arrivals at closure-time = 30
Service facilities close to new services at closure-time = 35
Service facilities end existing services at closure-time = 35
Users are allocated to facilities on a 'first-come first-served' basis
----------------------------------------------------------------------
User information
arrive wait use leave unserved F
user[1] 1.132773 0.000000 1.295324 2.428096 0.0000000 1
user[2] 2.905237 0.000000 8.684531 11.589768 0.0000000 2
user[3] 3.123797 0.000000 4.935293 8.059090 0.0000000 3
user[4] 3.333490 1.094606 9.851356 14.279452 0.0000000 1
user[5] 3.987593 3.032653 0.000000 3.987593 7.7647223 NA
user[6] 8.330046 1.729045 9.395193 19.454283 0.0000000 3
user[7] 10.174389 3.032653 0.000000 10.174389 6.6364357 NA
user[8] 10.983913 1.839397 0.000000 10.983913 6.3566350 NA
user[9] 12.418764 1.171004 9.472275 23.062043 0.0000000 2
user[10] 12.639333 3.032653 0.000000 12.639333 0.2799744 NA
user[11] 14.725436 1.554016 5.726761 22.006213 0.0000000 1
user[12] 15.868481 3.032653 0.000000 15.868481 8.7877649 NA
user[13] 17.724886 3.032653 0.000000 17.724886 8.3127787 NA
user[14] 24.360787 0.000000 5.731435 30.092223 0.0000000 1
user[15] 25.942602 0.000000 9.057398 35.000000 1.2771158 2
user[16] 27.495468 0.000000 5.257165 32.752633 0.0000000 3
user[17] 30.309521 0.000000 0.000000 30.309521 2.9375673 NA
user[18] 31.291641 0.000000 0.000000 31.291641 0.8481486 NA
user[19] 31.797041 0.000000 0.000000 31.797041 1.1935939 NA
user[20] 32.679761 0.000000 0.000000 32.679761 3.7952605 NA
----------------------------------------------------------------------
Facility information
open end.service use revive
F[1] 0 30.09222 22.60488 8
F[2] 0 35.00000 27.21420 6
F[3] 0 32.75263 19.58765 6 | forecasting without data
Use the queue function in the utilities package
This type of problem can be dealt with as a queueing problem, which is a class of problem dealt with in statistical theory. For the case where you have |
53,492 | forecasting without data | First off, I would not put ARIMA at the top of my list. On the one hand, as you write, you need data to fit an ARIMA model. On the other hand, ARIMA allows for non-integer and negative values, which does not make a lot of sense in your situation, but this is usually not a major problem. On the third hand, I see little reason for bed occupancy to exhibit autoregressive or moving average behavior, so an automatic ARIMA model selection algorithm is likely to get hung up on noise.
(Then again, if your hospital is overworked, then people may not get good enough care, so high occupancy today may be associated with high occupancy tomorrow. Or conversely, if your beds are full, your staff may have an incentive to "encourage" patients to be discharged, so high occupancy today may be associated with low occupancy tomorrow. You could take a look once you have data, but I would not expect the signal to be strong. ARIMA is not very good at forecasting, see here and here.)
Instead, I would simply simulate. Ask your domain experts about how many new stroke patients they expect each day, and what variability this figure might have. Also, ask the same question about how long any given new patient might stay. You will probably need to translate your experts' opinions into some sort of probability distributions, like Poissons or Negbins.
You might already have data on at least some of these pieces of information; I find it hard to believe a hospital does not have records about past patients, their indications and their length of stay. If so, you can draw from these empirical distributions.
Then simulate: draw a random number of new patients coming in today, and for each patient, draw how long they will stay. Fill your virtual beds, tracking how long each bed occupant still has to stay. Increment the date, discharge some patients, take new ones in, rinse and repeat. Do this over 100 days, multiple times, plot time courses or calculate summary statistics like averages and quantiles. This should not be hard in any programming environment, like Python or R.
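For concreteness, here is a bare-bones R version of that simulation; the arrival and length-of-stay distributions and their parameters below are placeholders to be replaced by your experts' numbers or your own records:
# Simulate daily bed occupancy: Poisson arrivals, negative-binomial lengths of stay
set.seed(123)
n_days   <- 100
n_sims   <- 1000
mean_new <- 2        # expected new stroke patients per day (placeholder)
mean_los <- 12       # expected length of stay in days (placeholder)
occupancy <- matrix(0, n_sims, n_days)
for (s in 1:n_sims) {
  remaining <- integer(0)                    # days left for patients currently in a bed
  for (d in 1:n_days) {
    remaining <- remaining - 1
    remaining <- remaining[remaining > 0]    # discharge patients whose stay has ended
    new_stays <- rnbinom(rpois(1, mean_new), size = 2, mu = mean_los) + 1
    remaining <- c(remaining, new_stays)
    occupancy[s, d] <- length(remaining)
  }
}
apply(occupancy, 2, quantile, probs = c(0.5, 0.9, 0.99))[, c(10, 50, 100)]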
The advantage is that you can immediately perform a sensitivity analysis, e.g., on what happens if the length of stay has more or less variability than your experts expected. | forecasting without data | First off, I would not put ARIMA at the top of my list. On the one hand, as you write, you need data to fit an ARIMA model. On the other hand, ARIMA allows for non-integer and negative values, which d | forecasting without data
First off, I would not put ARIMA at the top of my list. On the one hand, as you write, you need data to fit an ARIMA model. On the other hand, ARIMA allows for non-integer and negative values, which does not make a lot of sense in your situation, but this is usually not a major problem. On the third hand, I see little reason for bed occupancy to exhibit autoregressive or moving average behavior, so an automatic ARIMA model selection algorithm is likely to get hung up on noise.
(Then again, if your hospital is overworked, then people may not get good enough care, so high occupancy today may be associated with high occupancy tomorrow. Or conversely, if your beds are full, your staff may have an incentive to "encourage" patients to be discharged, so high occupancy today may be associated with low occupancy tomorrow. You could take a look once you have data, but I would not expect the signal to be strong. ARIMA is not very good at forecasting, see here and here.)
Instead, I would simply simulate. Ask your domain experts about how many new stroke patients they expect each day, and what variability this figure might have. Also, ask the same question about how long any given new patient might stay. You will probably need to translate your experts' opinions into some sort of probability distributions, like Poissons or Negbins.
You might already have data on at least some of these pieces of information; I find it hard to believe a hospital does not have records about past patients, their indications and their length of stay. If so, you can draw from these empirical distributions.
Then simulate: draw a random number of new patients coming in today, and for each patient, draw how long they will stay. Fill your virtual beds, tracking how long each bed occupant still has to stay. Increment the date, discharge some patients, take new ones in, rinse and repeat. Do this over 100 days, multiple times, plot time courses or calculate summary statistics like averages and quantiles. This should not be hard in any programming environment, like Python or R.
The advantage is that you can immediately perform a sensitivity analysis, e.g., on what happens if the length of stay has more or less variability than your experts expected. | forecasting without data
First off, I would not put ARIMA at the top of my list. On the one hand, as you write, you need data to fit an ARIMA model. On the other hand, ARIMA allows for non-integer and negative values, which d |
53,493 | Calculating the Brier or log score from the confusion matrix, or from accuracy, sensitivity, specificity, F1 score etc | Short answer
You can't.
Somewhat longer answer
The Brier score or log score are calculated from probabilistic classifications and corresponding outcomes. The confusion matrix, accuracy etc. are calculated from hard 0-1 classifications and the corresponding outcomes. If you have probabilistic classifications, you can turn them into hard ones by using a threshold, but since that threshold cannot be trained, it is an absolutely crucial ingredient in calculating the confusion matrix etc.:
$$ \text{Brier or log score} = f(\hat{p}, y) $$
versus
$$\text{Confusion matrix, accuracy etc.} = g(\hat{y}, y) = g\big(\hat{y}(\hat{p},t),y\big), $$
where $\hat{p}$ is a vector of predicted class membership probabilities, $y$ is a 0-1 vector of the corresponding true class memberships, $0\leq t\leq 1$ is a threshold, and $\hat{y} = \hat{y}(\hat{p},t)$ is the dichotomization of probabilistic predictions into hard 0-1 classifications using the threshold $t$ (relatedly, see Frank Harrell on dichotomania).
Alternatively, $\hat{y}$ might be directly output by a classifier that does not (explicitly) use probabilities, in which case we can't even calculate the Brier or log score. (In my opinion, that is a strong argument against such classifiers.)
An example
Suppose we have no useful predictors distinguishing the instances in our evaluation sample. For instance, these data points might all have the exact same predictor data. Alternatively, we can apply the argument below separately for each possible combination of predictor values.
Thus, as far as our model knows, all instances have the same probability $\hat{p}$ of belonging to the target class. In particular, this means that for a "hard" classification, all instances will be classified as either 0 or 1.
Let's assume our model is correct in this, and all instances really do have the same true probability $p$ of belonging to the target class. We will assume that $p=0.1$.
We consider four different possible probabilistic predictions - perhaps they come from four different models: $\hat{p}\in\{0.05, 0.10, 0.20, 0.30\}$. Per above, in order to get hard classifications to calculate the confusion matrix etc., we need a threshold. Let's say we use $t=0.15$.
Then if $\hat{p}=0.05$ or $\hat{p}=0.10$, both of which are below the threshold, all instances will be classified as 0, whereas for $\hat{p}=0.20$ or $\hat{p}=0.30$, all instances will be classified as 1. Thus, the confusion matrix, accuracy etc. will be identical for all $\hat{p}<t$ and for all $\hat{p}>t$, regardless of the actual outcome.
But of course the Brier and the log scores will be different for different $\hat{p}$. Their specific values will depend on the outcome we actually observe (just as the confusion matrix etc.), but given the observed outcome, the scores will depend on $\hat{p}$. For instance, the expected scores are:
$$ EB(\hat{p}=0.05) = 0.0925, \quad EB(\hat{p}=0.10) = 0.09, \quad EB(\hat{p}=0.20) = 0.1, \quad EB(\hat{p}=0.3) = 0.13 $$
and
$$ E\ell(\hat{p}=0.05) = 0.35, \quad E\ell(\hat{p}=0.10) = 0.33, \quad E\ell(\hat{p}=0.20) = 0.36, \quad E\ell(\hat{p}=0.30) = 0.44. $$
(Note that both scores are minimized in expectation by the true probability $\hat{p}=p$, because that is the definition of their being proper, and they are actually strictly proper.)
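These expected scores are easy to reproduce; here is a short R check with $p=0.1$ and the four candidate predictions as above:
# Expected Brier and log scores of a constant forecast p_hat when the true probability is p
p     <- 0.1
p_hat <- c(0.05, 0.10, 0.20, 0.30)
brier <- p * (1 - p_hat)^2 + (1 - p) * p_hat^2
logs  <- -(p * log(p_hat) + (1 - p) * log(1 - p_hat))
round(rbind(brier, logs), 4)
# brier: 0.0925 0.0900 0.1000 0.1300  -- both rows are minimized at p_hat = p = 0.1
# logs:  0.3458 0.3251 0.3617 0.4414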
Thus, since the confusion matrix, accuracy etc. are constant with respect to $\hat{p}$ above and below the threshold $t$, whereas the Brier and the log score are not constant, you cannot derive the Brier or log score from the confusion matrix, accuracy etc.
Note finally that this argument does not depend on $p$ being different from $0.5$, i.e., on having "unbalanced" data. | Calculating the Brier or log score from the confusion matrix, or from accuracy, sensitivity, specifi | Short answer
You can't.
Somewhat longer answer
The Brier score or log score are calculated from probabilistic classifications and corresponding outcomes. The confusion matrix, accuracy etc. are calcul | Calculating the Brier or log score from the confusion matrix, or from accuracy, sensitivity, specificity, F1 score etc
Short answer
You can't.
Somewhat longer answer
The Brier score or log score are calculated from probabilistic classifications and corresponding outcomes. The confusion matrix, accuracy etc. are calculated from hard 0-1 classifications and the corresponding outcomes. If you have probabilistic classifications, you can turn them into hard ones by using a threshold, but since that threshold cannot be trained, it is an absolutely crucial ingredient in calculating the confusion matrix etc.:
$$ \text{Brier or log score} = f(\hat{p}, y) $$
versus
$$\text{Confusion matrix, accuracy etc.} = g(\hat{y}, y) = g\big(\hat{y}(\hat{p},t),y\big), $$
where $\hat{p}$ is a vector of predicted class membership probabilities, $y$ is a 0-1 vector of the corresponding true class memberships, $0\leq t\leq 1$ is a threshold, and $\hat{y} = \hat{y}(\hat{p},t)$ is the dichotomization of probabilistic predictions into hard 0-1 classifications using the threshold $t$ (relatedly, see Frank Harrell on dichotomania).
Alternatively, $\hat{y}$ might be directly output by a classifier that does not (explicitly) use probabilities, in which case we can't even calculate the Brier or log score. (In my opinion, that is a strong argument against such classifiers.)
An example
Suppose we have no useful predictors distinguishing the instances in our evaluation sample. For instance, these data points might all have the exact same predictor data. Alternatively, we can apply the argument below separately for each possible combination of predictor values.
Thus, as far as our model knows, all instances have the same probability $\hat{p}$ of belonging to the target class. In particular, this means that for a "hard" classification, all instances will be classified as either 0 or 1.
Let's assume our model is correct in this, and all instances really do have the same true probability $p$ of belonging to the target class. We will assume that $p=0.1$.
We consider four different possible probabilistic predictions - perhaps they come from four different models: $\hat{p}\in\{0.05, 0.10, 0.20, 0.30\}$. Per above, in order to get hard classifications to calculate the confusion matrix etc., we need a threshold. Let's say we use $t=0.15$.
Then if $\hat{p}=0.05$ or $\hat{p}=0.10$, both of which are below the threshold, all instances will be classified as 0, whereas for $\hat{p}=0.20$ or $\hat{p}=0.30$, all instances will be classified as 1. Thus, the confusion matrix, accuracy etc. will be identical for all $\hat{p}<t$ and for all $\hat{p}>t$, regardless of the actual outcome.
But of course the Brier and the log scores will be different for different $\hat{p}$. Their specific values will depend on the outcome we actually observe (just as the confusion matrix etc.), but given the observed outcome, the scores will depend on $\hat{p}$. For instance, the expected scores are:
$$ EB(\hat{p}=0.05) = 0.0925, \quad EB(\hat{p}=0.10) = 0.09, \quad EB(\hat{p}=0.20) = 0.1, \quad EB(\hat{p}=0.3) = 0.13 $$
and
$$ E\ell(\hat{p}=0.05) = 0.35, \quad E\ell(\hat{p}=0.10) = 0.33, \quad E\ell(\hat{p}=0.20) = 0.36, \quad E\ell(\hat{p}=0.30) = 0.44. $$
(Note that both scores are minimized in expectation by the true probability $\hat{p}=p$, because that is the definition of their being proper, and they are actually strictly proper.)
Thus, since the confusion matrix, accuracy etc. are constant with respect to $\hat{p}$ above and below the threshold $t$, whereas the Brier and the log score are not constant, you cannot derive the Brier or log score from the confusion matrix, accuracy etc.
Note finally that this argument does not depend on $p$ being different from $0.5$, i.e., on having "unbalanced" data. | Calculating the Brier or log score from the confusion matrix, or from accuracy, sensitivity, specifi
Short answer
You can't.
Somewhat longer answer
The Brier score or log score are calculated from probabilistic classifications and corresponding outcomes. The confusion matrix, accuracy etc. are calcul |
53,494 | Calculating the Brier or log score from the confusion matrix, or from accuracy, sensitivity, specificity, F1 score etc | A point raised in some related conversations that seem to have inspired the posting of this question is that a confusion matrix can contain probability values if you normalize by the sum of the entries.
$$
\begin{pmatrix}
4 & 2\\
1 & 3
\end{pmatrix}
\overset{\div 10}{\rightarrow}
\begin{pmatrix}
0.4 & 0.2\\
0.1 & 0.3
\end{pmatrix}\\
\begin{pmatrix}
8 & 2\\
3 & 7
\end{pmatrix}
\overset{\div 20}{\rightarrow}
\begin{pmatrix}
0.4 & 0.1\\
0.15 & 0.35
\end{pmatrix}
$$
However, notice that each of these matrices contains four values, while they correspond to ten and twenty classification attempts, respectively. This means there is a mismatch between the number of true $y_i$ and predicted $\hat p_i$ when you go to calculate something like Brier score or log-loss (even c-index or ROCAUC) that sums over the number of observations (ten and twenty in the two respective matrices). Consequently, even this normalization of the confusion matrices does not allow for calculations of the Brier score or log loss, giving further evidence that the confusion matrix is not the whole story. Since the accuracy, sensitivity, specificity, $F_1$, $F_{\beta}$, precision, recall, FPR, FNR, TPR, TNR, PPV, NPV, Matthews correlation coefficient, and all of the other friends mentioned here are derived from the confusion matrix, those would not tell the whole story, either. | Calculating the Brier or log score from the confusion matrix, or from accuracy, sensitivity, specifi | A point raised in some related conversations that seem to have inspired the posting of this question is that a confusion matrix can contain probability values if you normalize by the sum of the entrie | Calculating the Brier or log score from the confusion matrix, or from accuracy, sensitivity, specificity, F1 score etc
A point raised in some related conversations that seem to have inspired the posting of this question is that a confusion matrix can contain probability values if you normalize by the sum of the entries.
$$
\begin{pmatrix}
4 & 2\\
1 & 3
\end{pmatrix}
\overset{\div 10}{\rightarrow}
\begin{pmatrix}
0.4 & 0.2\\
0.1 & 0.3
\end{pmatrix}\\
\begin{pmatrix}
8 & 2\\
3 & 7
\end{pmatrix}
\overset{\div 20}{\rightarrow}
\begin{pmatrix}
0.4 & 0.1\\
0.15 & 0.35
\end{pmatrix}
$$
However, notice that each of these matrices contains four values, while they correspond to ten and twenty classification attempts, respectively. This means there is a mismatch between the number of true $y_i$ and predicted $\hat p_i$ when you go to calculate something like Brier score or log-loss (even c-index or ROCAUC) that sums over the number of observations (ten and twenty in the two respective matrices). Consequently, even this normalization of the confusion matrices does not allow for calculations of the Brier score or log loss, giving further evidence that the confusion matrix is not the whole story. Since the accuracy, sensitivity, specificity, $F_1$, $F_{\beta}$, precision, recall, FPR, FNR, TPR, TNR, PPV, NPV, Matthews correlation coefficient, and all of the other friends mentioned here are derived from the confusion matrix, those would not tell the whole story, either. | Calculating the Brier or log score from the confusion matrix, or from accuracy, sensitivity, specifi
A point raised in some related conversations that seem to have inspired the posting of this question is that a confusion matrix can contain probability values if you normalize by the sum of the entrie |
53,495 | Time to event = 0 in survival analysis? | Depending on the implementation, software for fitting a Cox regression or other continuous-time survival model might not even accept a survival time of 0. The idea with such models is that you start with 100% survival at time = 0 and proceed down toward lower survival fractions in continuous time.
In the situation you describe there was survival for at least a fraction of a day, so you could include the actual fraction of the day if available. If that's not known, you might use a small value like 0.5 days for the event time.
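A minimal R sketch of that adjustment with the survival package (the data are invented; the same-day event gets a time of 0.5 days):
# Replace a time-zero event with half a day before fitting a Cox model
library(survival)
d <- data.frame(time   = c(0, 3, 7, 12, 20, 25),
                status = c(1, 1, 0, 1, 0, 1),
                x      = c(1, 0, 0, 1, 1, 0))
d$time[d$time == 0 & d$status == 1] <- 0.5   # the patient survived part of that day
coxph(Surv(time, status) ~ x, data = d)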
For a Cox model the exact time you choose isn't important, as the fitting of the model just proceeds from event time to event time without considering the actual time values of the events. Fitting a parametric model does use the actual time values, but if there aren't many such cases the choice probably won't matter much; you could try different time values for the "0-day" survival times to evaluate how much. | Time to event = 0 in survival analysis? | Depending on the implementation, software for fitting a Cox regression or other continuous-time survival model might not even accept a survival time of 0. The idea with such models is that you start w | Time to event = 0 in survival analysis?
Depending on the implementation, software for fitting a Cox regression or other continuous-time survival model might not even accept a survival time of 0. The idea with such models is that you start with 100% survival at time = 0 and proceed down toward lower survival fractions in continuous time.
In the situation you describe there was survival for at least a fraction of a day, so you could include the actual fraction of the day if available. If that's not known, you might use a small value like 0.5 days for the event time.
For a Cox model the exact time you choose isn't important, as the fitting of the model just proceeds from event time to event time without considering the actual time values of the events. Fitting a parametric model does use the actual time values, but if there aren't many such cases the choice probably won't matter much; you could try different time values for the "0-day" survival times to evaluate how much. | Time to event = 0 in survival analysis?
Depending on the implementation, software for fitting a Cox regression or other continuous-time survival model might not even accept a survival time of 0. The idea with such models is that you start w |
53,496 | Where does e come from in this problem about dice? | The reason $e =\lim\limits_{n \to \infty}\left(1+\frac1n\right)^{n}=\frac1{\lim\limits_{n \to \infty}\left(1-\frac1n\right)^{n}}$ appears is precisely because you are taking a particular limit, though I am not sure you took exactly the correct one.
The answer should head towards $0$ as $n$ increases, since player B has one die and player A has an increasing number of dice. Ignoring the possibilities of ties for the largest values, you might think the probability of player B having the largest of $n$ values might be $\frac1n$. But the possibility of ties changes this.
You have used $n$ to mean two different things: the numbers of/on the dice and the number thrown by $B$. If instead you had used $b$ as the number thrown by player B then the answer to "the probability that Player B's number is strictly larger than the maximum of all the numbers of person A" is $$\sum\limits_{b=1}^n \frac{(b-1)^{n-1}}{n^n}$$ which for large $n$ is approximately $\frac{1}{n(e-1)}$, about $\frac{0.58}n$.
Meanwhile "the probability that Player B's number is at least as large as the maximum of all the numbers of person A" would be $\sum\limits_{b=1}^n \frac{b^{n-1}}{n^n}$ which for large $n$ is approximately $\frac{e}{n(e-1)}$ about $\frac{1.58}n$.
From the difference, you can tell that the probability of player B's number being exactly equal to the largest of player A's numbers is $\sum\limits_{b=1}^n \frac{b^{n-1}-(b-1)^{n-1}}{n^n}=\frac1n$. An alternative approach would be to say that the largest of player A's numbers is some value from $1$ to $n$ and player B has a probability of $\frac1n$ of getting that exact value. | Where does e come from in this problem about dice? | The reason $e =\lim\limits_{n \to \infty}\left(1+\frac1n\right)^{n}=\frac1{\lim\limits_{n \to \infty}\left(1-\frac1n\right)^{n}}$ appears is precisely because you are taking a particular limit, though | Where does e come from in this problem about dice?
The reason $e =\lim\limits_{n \to \infty}\left(1+\frac1n\right)^{n}=\frac1{\lim\limits_{n \to \infty}\left(1-\frac1n\right)^{n}}$ appears is precisely because you are taking a particular limit, though I am not sure you took exactly the correct one.
The answer should head towards $0$ as $n$ increases, since player B has one die and player A has an increasing number of dice. Ignoring the possibilities of ties for the largest values, you might think the probability of player B having the largest of $n$ values might be $\frac1n$. But the possibility of ties changes this.
You have used $n$ to mean two different things: the numbers of/on the dice and the number thrown by $B$. If instead you had used $b$ as the number thrown by player B then the answer to "the probability that Player B's number is strictly larger than the maximum of all the numbers of person A" is $$\sum\limits_{b=1}^n \frac{(b-1)^{n-1}}{n^n}$$ which for large $n$ is approximately $\frac{1}{n(e-1)}$, about $\frac{0.58}n$.
Meanwhile "the probability that Player B's number is at least as large as the maximum of all the numbers of person A" would be $\sum\limits_{b=1}^n \frac{b^{n-1}}{n^n}$ which for large $n$ is approximately $\frac{e}{n(e-1)}$ about $\frac{1.58}n$.
From the difference, you can tell that the probability of player B's number being exactly equal to the largest of player A's numbers is $\sum\limits_{b=1}^n \frac{b^{n-1}-(b-1)^{n-1}}{n^n}=\frac1n$. An alternative approach would be to say that the largest of player A's numbers is some value from $1$ to $n$ and player B has a probability of $\frac1n$ of getting that exact value. | Where does e come from in this problem about dice?
The reason $e =\lim\limits_{n \to \infty}\left(1+\frac1n\right)^{n}=\frac1{\lim\limits_{n \to \infty}\left(1-\frac1n\right)^{n}}$ appears is precisely because you are taking a particular limit, though |
53,497 | Calculating functions of truncated and censored normal variables | The first result simulates the mean of a truncated normal distribution with truncation point 1, whose expected value is, from here and standard normality with $\mu=0$, $\sigma=1$,
$$
\mu_T=\frac{\phi(1)}{1-\Phi(1)}=\frac{\phi(1)}{\Phi(-1)},
$$
where the second equality uses symmetry of the standard normal around zero.
The second is the mean of a version of a censored normal distribution with censoring point 1, where, however, the censored values are not replaced with the censoring point but with 0. Hence, the density of the censored distribution can be taken from here.
Since we replace censored values with zeros, they contribute nothing to the expected value, so that we evaluate (see e.g. https://math.stackexchange.com/questions/402215/what-is-int-xe-x2-dx or shorturl.at/eFLO0)
$$
\int_1^\infty x\phi(x)dx=\phi(1)
$$
Had this been "conventional" censoring, we would have looked at mean(X*(X>1)+1*(X<=1)), which, again via here, would have expectation
$$
1\cdot\Phi(1)+\phi(1).
$$
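As a quick numerical check of these expressions, here is a short R simulation against the closed forms (a sketch; sample size and seed are arbitrary):
# Compare simulated means with phi(1)/Phi(-1), phi(1), and Phi(1) + phi(1)
set.seed(1)
X <- rnorm(1e6)
c(sim = mean(X[X > 1]),                   exact = dnorm(1) / pnorm(-1))  # truncated at 1
c(sim = mean(X * (X > 1)),                exact = dnorm(1))              # censored values set to 0
c(sim = mean(X * (X > 1) + 1 * (X <= 1)), exact = pnorm(1) + dnorm(1))   # conventional censoring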
The third problem is basically the same as the second, except that we now evaluate
$$
\int_1^\infty (x-1)\phi(x)dx=\phi(1)-(1-\Phi(1))=\phi(1)-\Phi(-1)
$$ | Calculating functions of truncated and censored normal variables | The first result simulates the mean of a truncated normal distribution with truncation point 1, whose expected value is, from here and standard normality with $\mu=0$, $\sigma=1$,
$$
\mu_T=\frac{\phi( | Calculating functions of truncated and censored normal variables
The first result simulates the mean of a truncated normal distribution with truncation point 1, whose expected value is, from here and standard normality with $\mu=0$, $\sigma=1$,
$$
\mu_T=\frac{\phi(1)}{1-\Phi(1)}=\frac{\phi(1)}{\Phi(-1)},
$$
where the second equality uses symmetry of the standard normal around zero.
The second is the mean of a version of a censored normal distribution with censoring point 1, where, however, the censored values are not replaced with the censoring point but with 0. Hence, the density of the censored distribution can be taken from here.
Since we replace censored values with zeros, they contribute nothing to the expected value, so that we evaluate (see e.g. https://math.stackexchange.com/questions/402215/what-is-int-xe-x2-dx or shorturl.at/eFLO0)
$$
\int_1^\infty x\phi(x)dx=\phi(1)
$$
Had this been "conventional" censoring, we would have looked at mean(X*(X>1)+1*(X<=1)), which, again via here, would have expectation
$$
1\cdot\Phi(1)+\phi(1).
$$
The third problem is basically the same as the second, except that we now evaluate
$$
\int_1^\infty (x-1)\phi(x)dx=\phi(1)-(1-\Phi(1))=\phi(1)-\Phi(-1)
$$ | Calculating functions of truncated and censored normal variables
The first result simulates the mean of a truncated normal distribution with truncation point 1, whose expected value is, from here and standard normality with $\mu=0$, $\sigma=1$,
$$
\mu_T=\frac{\phi( |
53,498 | Can I apply a confusion matrix to classification tasks outside of ML? | Sure you can. Those metrics are older than machine learning. For example, the ROC curves calculated based on TPR and FPR were designed during World War II for judging the accuracy of radars. The metrics calculated based on the confusion matrix are also commonly used in medicine for judging the performance of diagnostic tests. For example, below I posted a table from the report mentioning such results for a rapid COVID test (not advertising it, it was the first such result that I found online), as you can see, it has many of the "machine learning" metrics.
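For instance, the usual diagnostic-test summaries are just arithmetic on a 2x2 table of test result versus disease status; a tiny R sketch with made-up counts:
# Sensitivity, specificity, PPV and NPV from a 2x2 diagnostic-test table
tp <- 90; fn <- 10       # diseased patients:     test positive / test negative (invented)
fp <- 25; tn <- 875      # non-diseased patients: test positive / test negative
c(sensitivity = tp / (tp + fn),
  specificity = tn / (tn + fp),
  PPV         = tp / (tp + fp),
  NPV         = tn / (tn + fn))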
They are also used in many other scenarios regarding information retrieval, signal detection, classification, etc. | Can I apply a confusion matrix to classification tasks outside of ML? | Sure you can. Those metrics are older than machine learning. For example, the ROC curves calculated based on TPR and FPR were designed during World War II for judging the accuracy of radars. The metri | Can I apply a confusion matrix to classification tasks outside of ML?
Sure you can. Those metrics are older than machine learning. For example, the ROC curves calculated based on TPR and FPR were designed during World War II for judging the accuracy of radars. The metrics calculated based on the confusion matrix are also commonly used in medicine for judging the performance of diagnostic tests. For example, below I posted a table from the report mentioning such results for a rapid COVID test (not advertising it, it was the first such result that I found online), as you can see, it has many of the "machine learning" metrics.
They are also used in many other scenarios regarding information retrieval, signal detection, classification, etc. | Can I apply a confusion matrix to classification tasks outside of ML?
Sure you can. Those metrics are older than machine learning. For example, the ROC curves calculated based on TPR and FPR were designed during World War II for judging the accuracy of radars. The metri |
53,499 | Can I apply a confusion matrix to classification tasks outside of ML? | YES AND NO
FIRST THE YES
In fact, there was (and still is, I suppose) an area of artificial intelligence called expert systems which were, more or less, decision trees designed by subject matter experts (doctors, scientists, etc). Given some data, the expert system would go down the decision tree and arrive at a prediction. You might even think of your own decision-making as working like this: you take in information, evaluate it, and then say, “Hi, doggie,” instead of, “Good morning, Mrs. Johnson,” since your eyes tell you that you see a dog instead of your neighbor.
Machine learning might be the dominant approach to artificial intelligence these days, but evaluating the predictions does not really depend on how you made those predictions.
NOW THE NO
Even in machine learning, metrics like accuracy, precision, and recall are threshold-based, discontinuous, improper scoring rules (arguably not even scoring rules). The problems with these metrics tend to be most noticeable in settings with class imbalance, which is the main setting where this topic arises in Cross Validated questions, but the problems are present with balanced classes, too. Briefly, the probabilities returned by many machine learning models allow us a much more nuanced evaluation.
This answer by our Stephan Kolassa is a good place to start reading about this notion, and it links to other good material (particularly Frank Harrell’s blog). | Can I apply a confusion matrix to classification tasks outside of ML? | YES AND NO
FIRST THE YES
In fact, there was (and still is, I suppose) an area of artificial intelligence called expert systems which were , more or less, decision trees designed by subject matter expe | Can I apply a confusion matrix to classification tasks outside of ML?
YES AND NO
FIRST THE YES
In fact, there was (and still is, I suppose) an area of artificial intelligence called expert systems which were, more or less, decision trees designed by subject matter experts (doctors, scientists, etc). Given some data, the expert system would go down the decision tree and arrive at a prediction. You might even think of your own decision-making as working like this: you take in information, evaluate it, and then say, “Hi, doggie,” instead of, “Good morning, Mrs. Johnson,” since your eyes tell you that you see a dog instead of your neighbor.
Machine learning might be the dominant approach to artificial intelligence these days, but evaluating the predictions does not really depend on how you made those predictions.
NOW THE NO
Even in machine learning, metrics like accuracy, precision, and recall are threshold-based, discontinuous, improper scoring rules (arguably not even scoring rules). The problems with these metrics tend to be most noticeable in settings with class imbalance, which is the main setting where this topic arises in Cross Validated questions, but the problems are present with balanced classes, too. Briefly, the probabilities returned by many machine learning models allow us a much more nuanced evaluation.
This answer by our Stephan Kolassa is a good place to start reading about this notion, and it links to other good material (particularly Frank Harrell’s blog). | Can I apply a confusion matrix to classification tasks outside of ML?
YES AND NO
FIRST THE YES
In fact, there was (and still is, I suppose) an area of artificial intelligence called expert systems which were , more or less, decision trees designed by subject matter expe |
53,500 | Continuous and differentiable bell-shaped distribution on $[a, b]$ | Let's construct all possible solutions.
By "distribution" you appear to refer to a density function (PDF) $f.$ The properties you require are
Supported on $[a,b].$ That is, $f(x)=0$ for any $x\le a$ or $x \ge b.$
$f$ should be (continuously) differentiable on $(a,b)$ with derivative $f^\prime.$
"Bell-curved," which can be taken as
(Strictly) increasing from value of $0$ at $a$ to some intermediate point $c;$ that is, $f^\prime(x) \gt 0$ for $a\lt x \lt c;$ and
(Strictly) decreasing from $c$ to a value of $0$ at $b;$ that is, $f^\prime(x) \lt 0$ for $c \lt x \lt b.$
To standardize this description, translate and scale the left-hand arm of $f^\prime$ to the interval $[0,1]$ and do the same for the right-hand arm, reversing and negating it. That is, let
$$g_{-}(x) = f^\prime\left(a + (c-a)x\right)$$
and
$$g_{+}(x) = -f^\prime\left(b - (b-c)x\right).$$
Both are increasing positive integrable functions defined on $[0,1]$ for which
$$\int_0^1 g_{-}(x)\,\mathrm{d}x = \int_0^1 f^\prime(a + (c-a)x)\,\mathrm{d}x = \frac{1}{c-a}\int_a^c f^\prime(y)\,\mathrm{d}y = \frac{f(c^-)}{c-a}$$
and
$$\int_0^1 g_{+}(x)\,\mathrm{d}x = \int_0^1 -f^\prime(b - (b-c)x)\,\mathrm{d}x = \frac{1}{b-c}\int_b^c f^\prime(y)\,\mathrm{d}y = \frac{f(c^+)}{b-c}.$$
Conversely, given any two positive increasing integrable functions $g_{-}$ and $g_{+}$ defined on $[0,1]$ (the "left side" and "right side" models), these steps can be reversed to construct $f^\prime,$ which in turn can be integrated (and normalized) to yield a valid distribution function.
Here is this reverse process in pictures. It begins with the two model functions.
(Notice that these functions need not even be continuous and can be unbounded, as illustrated by $g_{+}$ at the right.)
They are then integrated and assembled to produce $f^\prime,$ which in turn is integrated and normalized to unit area to yield a density $f$ with every required characteristic.
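Here is one way to carry out that construction numerically in R; the endpoints and the two model functions are arbitrary illustrative choices, and the peak $c$ is placed where the two arms' areas match so the density returns to zero at $b$:
# Assemble a bell-shaped density on [a, b] from two increasing model functions on [0, 1]
a <- 0; b <- 3
g_minus <- function(x) x^2        # left-side model: positive and increasing on (0, 1]
g_plus  <- function(x) sqrt(x)    # right-side model
A <- integrate(g_minus, 0, 1)$value
B <- integrate(g_plus,  0, 1)$value
peak <- (a * A + b * B) / (A + B)            # ensures the density returns to 0 at b
fprime <- function(y) ifelse(y < peak,  g_minus((y - a) / (peak - a)),
                                       -g_plus((b - y) / (b - peak)))
f_raw <- Vectorize(function(y) integrate(fprime, a, y)$value)   # unnormalized density
area  <- integrate(f_raw, a, b)$value
ys <- seq(a, b, length.out = 401)
plot(ys, f_raw(ys) / area, type = "l", xlab = "y", ylab = "density")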
You may further control the appearance of the density in many ways. For instance, by taking the two models to be the same function and placing the peak $c = (a+b)/2$ at the midpoint, you will obtain a symmetric density. Here I have used the original $g_{-}$ for the right hand model $g_{+}.$
You can enforce many other properties of $f$ by going through the original analysis to deduce the corresponding properties of the model functions and restricting your construction to functions of that type.
Finally, if you choose to limit the two model functions to a finitely parameterized subset of the possibilities, you will have constructed a parametric family of distributions meeting all your criteria. | Continuous and differentiable bell-shaped distribution on $[a, b]$ | Let's construct all possible solutions.
By "distribution" you appear to refer to a density function (PDF) $f.$ The properties you require are
Supported on $[a,b].$ That is, $f(x)=0$ for any $x\le a | Continuous and differentiable bell-shaped distribution on $[a, b]$
Let's construct all possible solutions.
By "distribution" you appear to refer to a density function (PDF) $f.$ The properties you require are
Supported on $[a,b].$ That is, $f(x)=0$ for any $x\le a$ or $x \ge b.$
$f$ should be (continuously) differentiable on $(a,b)$ with derivative $f^\prime.$
"Bell-curved," which can be taken as
(Strictly) increasing from value of $0$ at $a$ to some intermediate point $c;$ that is, $f^\prime(x) \gt 0$ for $a\lt x \lt c;$ and
(Strictly) decreasing from $c$ to a value of $0$ at $b;$ that is, $f^\prime(x) \lt 0$ for $c \lt x \lt b.$
To standardize this description, translate and scale the left-hand arm of $f^\prime$ to the interval $[0,1]$ and do the same for the right-hand arm, reversing and negating it. That is, let
$$g_{-}(x) = f^\prime\left(a + (c-a)x\right)$$
and
$$g_{+}(x) = -f^\prime\left(b - (b-c)x\right).$$
Both are increasing positive integrable functions defined on $[0,1]$ for which
$$\int_0^1 g_{-}(x)\,\mathrm{d}x = \int_0^1 f^\prime(a + (c-a)x)\,\mathrm{d}x = \frac{1}{c-a}\int_a^c f^\prime(y)\,\mathrm{d}y = \frac{f(c^-)}{c-a}$$
and
$$\int_0^1 g_{+}(x)\,\mathrm{d}x = \int_0^1 -f^\prime(b - (b-c)x)\,\mathrm{d}x = \frac{1}{b-c}\int_b^c f^\prime(y)\,\mathrm{d}y = \frac{f(c^+)}{b-c}.$$
Conversely, given any two positive increasing integrable functions $g_{-}$ and $g_{+}$ defined on $[0,1]$ (the "left side" and "right side" models), these steps can be reversed to construct $f^\prime,$ which in turn can be integrated (and normalized) to yield a valid distribution function.
Here is this reverse process in pictures. It begins with the two model functions.
(Notice that these functions need not even be continuous and can be unbounded, as illustrated by $g_{+}$ at the right.)
They are then integrated and assembled to produce $f^\prime,$ which in turn is integrated and normalized to unit area to yield a density $f$ with every required characteristic.
You may further control the appearance of the density in many ways. For instance, by taking the two models to be the same function and placing the peak $c = (a+b)/2$ at the midpoint, you will obtain a symmetric density. Here I have used the original $g_{-}$ for the right hand model $g_{+}.$
You can enforce many other properties of $f$ by going through the original analysis to deduce the corresponding properties of the model functions and restricting your construction to functions of that type.
Finally, if you choose to limit the two model functions to a finitely parameterized subset of the possibilities, you will have constructed a parametric family of distributions meeting all your criteria. | Continuous and differentiable bell-shaped distribution on $[a, b]$
Let's construct all possible solutions.
By "distribution" you appear to refer to a density function (PDF) $f.$ The properties you require are
Supported on $[a,b].$ That is, $f(x)=0$ for any $x\le a |