52,301 | What is collinearity and how does it differ from multicollinearity? | There are indeed slight inconsistencies in the usage of the term, depending on who you ask. The most common distinction I've seen (and the one I tend to use) is that we have collinearity if $\det(X^T X)=0$, and multicollinearity if $\det(X^T X)\approx 0$. The latter obviously includes the former, which is why we also say "perfect multicollinearity" for "collinearity".
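A quick numeric illustration of the two cases, as a sketch in R (my own example, not part of the original answer): an exactly collinear design gives a determinant of $X^TX$ that is zero up to floating-point noise, while a nearly collinear one gives a small but nonzero determinant and a very large condition number.

```r
# Exact collinearity vs. near collinearity (illustrative sketch)
set.seed(1)
n  <- 100
x1 <- rnorm(n)

X_exact <- cbind(1, x1, 2 * x1 - 3)                       # third column is an exact linear combination
det(crossprod(X_exact))                                   # essentially 0 (floating-point noise only)

X_near <- cbind(1, x1, 2 * x1 - 3 + rnorm(n, sd = 1e-4))  # almost, but not exactly, collinear
det(crossprod(X_near))                                    # tiny but nonzero
kappa(X_near)                                             # huge condition number flags near-singularity
```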
52,302 | Is the inference from a parametric test valid when the population distribution is not normal? | I need to correct a number of mistaken or partly misplaced ideas in the question first (as well as some that aren't in the question but are commonly seen and may be indirectly influencing the way you ask your question), but I will return to the main issue.
The answer you link to says:
The t-test assumes that the means of the different samples are normally distributed; it does not assume that the population is normally distributed.
As one of the comments below the answer points out, this claim is not correct as stated.
The assumption that's made in the derivation that the t-statistic has a t-distribution relies on the distribution from which the observations were drawn being normal, not just the means. Note that the t-statistic is not just a numerator, but a ratio of two things. (Such derivations are where the assumptions for tests come from.)
You need the denominator in the t-statistic to be (i) distributed as scaled chi and (ii) independent of the numerator.
There's no result I'm aware of that makes the t-statistic have a t-distribution in any other general circumstance of interest.
In addition, while you can use the CLT plus Slutsky's theorem to argue that a t-statistic will asymptotically approach a standard normal (so that as sample sizes go to infinity you approach a level-$\alpha$ test), there's nothing that makes it approach the $t$ (e.g. at least in some situations, it might always be better approximated by the normal than it is by the t, all along the way as $n$ increases).
There's also nothing that says that such a test will have good power (i.e. that would make us care to use it). In fact in large samples the relative power at small effect sizes may be quite poor compared to commonly considered alternatives (indeed the ARE of the t-test can go to zero).
[One of the comments under that answer points this error out but it has not been addressed by the person who answered.]
That said, however, often the t-test does work at least fairly well. It would not do to be overly reliant on it without understanding something of the circumstances with which one was dealing.
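To see how far off things can be in a concrete case, here is a small simulation sketch (mine, not part of the answer): it estimates the actual type I error rate of a nominal 5% one-sample t-test when the data are exponential with the hypothesized mean.

```r
# Actual rejection rate of a one-sample t-test under exponential data (H0 is true)
set.seed(42)
type1 <- function(n, reps = 20000) {
  p <- replicate(reps, t.test(rexp(n, rate = 1), mu = 1)$p.value)
  mean(p < 0.05)
}
sapply(c(10, 30, 100, 500), type1)   # compare with 0.05; the gap shrinks as n grows
```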
parametric statistics are using sample data to draw inference about population parameters like the mean, or the difference in means between two populations.
I recognize that you're not saying that parametric statistics is only about means there, but I want to take a moment to deal with a not-uncommon idea that they do.
Some parametric methods do focus on the mean but parametric methods are not in general about means. (A lot of books that don't go into the theory properly manage to get this completely wrong. The term "parametric" is not about normality and it is not about means.)
If the population distribution is not normal, it means that the population mean is a poor representation of the central tendency of the population.
One can care about a population mean without remotely caring about whatever "central tendency" might be.
Consider the exponential distribution.
This distribution is commonly used to model a number of things. A typical example is inter-event times (indeed you can derive it as the inter-event time in a Poisson process). Let's imagine we're doing just that (using it as a model for inter-event times). It looks like this:
[Figure: the exponential density. The labels on the x-axis represent multiples of the population mean; since the axis is in units of the mean, the population mean is at 1.]
I'm not even sure we can say that the exponential distribution has a central tendency -- but it makes perfect sense to use its population mean to describe the process, it's still the average inter-event time. It's not the typical inter-event time for a single event, but if I am repeatedly experiencing this process that mean is very much of interest.
is the inference gained from the test informative when the population is not normal and is thus poorly described by the mean and variance?
Outside some very particular situations, this is not the best way to think about it. It doesn't really matter whether the particular population parameter of interest is typical of the distribution (in whatever sense that 'typical' is intended).
It's easily possible for a population mean to be of interest even when the mean is not a 'typical' value (e.g. when the mean and the mode are quite different). Whether a particular population parameter is of interest (might be suitable for some hypothesis) is not so much a question of what the distribution looks like, but of what the question is that you want to answer.
For example, if I have to wait for 100 inter-event times in the above example, the population mean on the individual times is an important thing to be thinking about, even though most of the time I'll wait much less than the mean.
We can also construct tests based on sample means without assuming normality (this is an unrelated issue to the one above, but you seem as if you might be partly conflating the two issues, so I had better discuss it). In the above exponential model we have a different parametric assumption, and we can base a suitable test on that assumption (and it can sensibly be based on the sample mean, though it's not quite the optimal choice when comparing two small samples). Or if we don't have a parametric model, we can nevertheless construct a test that doesn't make a parametric assumption but still deals with (say) a difference in population means (such as a permutation test of the difference in sample means).
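For concreteness, a minimal sketch (mine) of such a permutation test of the difference in sample means for two exponential samples; it makes no normality assumption.

```r
# Permutation test for a difference in means (illustrative, not tuned for efficiency)
set.seed(1)
x <- rexp(25, rate = 1)        # group 1, population mean 1
y <- rexp(30, rate = 1 / 1.5)  # group 2, population mean 1.5
obs    <- mean(x) - mean(y)
pooled <- c(x, y)
perm <- replicate(10000, {
  idx <- sample(length(pooled), length(x))   # random relabelling of the groups
  mean(pooled[idx]) - mean(pooled[-idx])
})
mean(abs(perm) >= abs(obs))    # two-sided permutation p-value
```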
On the other hand, just because I may want to test something in relation to a population mean, that does not necessarily imply that the sample mean is a good way to construct a test of it. If I were interested in the difference in population means for two samples from logistic distributions, I would do better to use something other than the sample mean as the basis of the test (indeed, a Wilcoxon-Mann-Whitney test -- used as a test of a shift in population mean -- would be an excellent choice in that case).
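A small simulation sketch (mine) of that last point: for a location shift between two logistic samples, the Wilcoxon-Mann-Whitney test tends to show a modest power edge over the t-test.

```r
# Power of the t-test vs. Wilcoxon-Mann-Whitney for a location shift in logistic samples
set.seed(2)
res <- replicate(5000, {
  x <- rlogis(40)
  y <- rlogis(40, location = 1)
  c(t = t.test(x, y)$p.value, w = wilcox.test(x, y)$p.value)
})
rowMeans(res < 0.05)   # proportion of rejections at the 5% level for each test
```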
In summary, let us come back to the title question.
Is the inference from a parametric test valid when the population distribution is not normal?
It certainly can be. The consideration of validity of a parametric test is not related to whether or not we're dealing with normality.
the mean can be a perfectly reasonable population quantity to make hypotheses about, even when the mean is not at all close to the median or the mode.
a parametric test needn't be about the mean. It might be about the population minimum, for example, or the population median, or the population interquartile range or any number of other things. Nor is a parametric test necessarily related to normality at all (like a test for the upper limit of a uniform distribution or the shift parameter in a shifted exponential)
we must be careful not to conflate considerations of what population quantities we're interested in with what sample quantities we use for making inferences about them. Sometimes it's good to use the sample equivalent to estimate the population quantity and sometimes it's not -- and it's not always intuitive. With a (right skewed) exponential distribution, using the sample mean in inference about the population mean works well. With a symmetric, bell-shaped, not very heavy-tailed distribution like a logistic (and one you're likely to think looks pretty normal-ish in a histogram), not so much. And if it's Laplace even less so.
When looking at what to test and how to test it, I think it boils down to answering two main questions --
What question about the population are you interested in answering? (This consideration needn't necessarily relate much to distribution shapes.)
Given what you know/assume (or don't know/won't assume) about the distribution, what's a good way to answer that question? (It's here that questions of efficiency and robustness of tests come in, and where possible distribution shapes may be a central consideration.)
52,303 | Is the inference from a parametric test valid when the population distribution is not normal? | It depends on the degree and type of non-normality. Even with relatively small sample sizes the test is conservative with skewed distributions but not strictly "valid", since the actual p is lower than the nominal p (if you use p=0.05 as a cutoff you will reject a true null hypothesis less than 0.05 of the time). If your browser lets you run unsigned applets you can explore this yourself here: http://onlinestatbook.com/stat_sim/robustness/index.html
Other deviations from normality (such as kurtosis) may have different effects.
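For readers who cannot run the applet, a small simulation sketch (mine) performs the same kind of check: estimate the actual rejection rate of a two-sample t-test when both groups come from the same skewed distribution, so the null is true.

```r
# Actual size of a nominal 5% two-sample t-test with skewed data (chi-squared, 3 df)
set.seed(3)
p <- replicate(20000, {
  x <- rchisq(15, df = 3)
  y <- rchisq(15, df = 3)
  t.test(x, y)$p.value
})
mean(p < 0.05)   # compare with the nominal 0.05
```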
52,304 | Random forest variance | Two thoughts.
RF are often split until purity. This often means that there are many terminal nodes, each with a single observation. The final splits leading up to these nodes may not generalize very well because there are so few observations to work with at that depth of the tree. So you may get more generalizable trees with lower out-of-sample variance if you increase the minimum node size from 1 to something like 10 or more (depending on how much data you have -- this is another hyper-parameter, so you might profitably tune it). This also has the property of yielding consistent probability estimates, which can be desirable in some contexts.
Increasing the number of trees will reduce the variance of the estimator. This is an obvious consequence of one of the CLTs -- each tree is a Bernoulli trial, and the prediction of the forest is the average of many Bernoulli trials. Moreover, the trees are iid in the sense that they are all fit on different re-samplings of the data and different random subsets of features. So you have iid Bernoulli trials (which have finite variance because each trial is 0 or 1, i.e. has finite cardinality). This can make the predictions less volatile because the trees only have to explain chunks of your data, instead of each observation. So four times as many trials will cut the standard error of the mean in half.
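A tiny numeric sketch of that scaling claim (mine, not the answer's): the standard error of an average of B independent 0/1 votes shrinks like $1/\sqrt{B}$, so quadrupling B halves it. (The between-tree correlation discussed in another answer below puts a floor on how far this can go in practice.)

```r
# Standard error of an average of B independent 0/1 votes
p  <- 0.7                                # assumed probability that a single tree votes "1"
se <- function(B) sqrt(p * (1 - p) / B)
se(c(250, 1000, 4000))                   # each 4x increase in B halves the standard error
```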
There is extended discussion of some of these RF properties in Elements of Statistical Learning. The consistency property is discussed in Malley JD, Kruppa J, Dasgupta A, Malley KG, Ziegler A. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines. Methods of Information in Medicine. 2012;51(1):74-81. doi:10.3414/ME00-01-0052.
Finally, as a general observation, the best regularizer is more data, and better features usually beat a cleverer algorithm.
52,305 | Random forest variance | I mean more of a variance reduction, i.e. predictable result. I increased the number of trees, but it didn't help.
Due to the central limit theorem, and due to the fact that Random Forest predictions are obtained through averaging, increasing the number of trees should help. The default in R is 500L; set this as high as you can support (I've often put it at 5000L, depending on the data).
The randomness in Random Forests comes from both attribute bagging and bootstrap aggregating. You may also try to reduce the randomness that either of those adds.
Last, depending on how many features and how many samples you have, it might simply be due to the data, and no amount of hyperparameter tinkering will solve it.
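As a concrete sketch (mine) of the knobs mentioned above, using the randomForest package; iris stands in for your own data, and the argument names are the ones I recall the package using.

```r
# More trees (ntree) and the per-split feature sample (mtry) are the main knobs
library(randomForest)
set.seed(4)
rf <- randomForest(Species ~ ., data = iris,
                   ntree = 5000,  # well above the default of 500
                   mtry  = 3)     # number of features tried at each split
rf
```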
You mentioned in a comment:
I've tried quite a few, between 6 and 400. I use a regression-based
feature selector that keeps all features with p-value below 0.05
And, as I said, I find feature selection mostly useless with Random Forests. The reasons are simple: you risk overfitting by doing that selection, and Random Forests are good with large numbers of features. Let the forest decide which features are worthy -- unless you have orders of magnitude more features than samples, in which case do some small reduction, just enough to remove noise features.
52,306 | Random forest variance | Adding to what other users said, if you average many uncorrelated or weakly correlated variables you get a reduction in variance.
Define $X=\frac{1}{B}\sum_{i=1}^{B}T_i$, where $T_i$ is a single decision tree and $B$ is the number of decision trees. Suppose each tree has variance $\operatorname{Var}(T_i)=\sigma^2$ and each pair has correlation $\rho$, so $\operatorname{Cov}(T_i,T_j)=\rho\sigma^2$ for $i\neq j$. Then
$$\operatorname{Var}(X)=\frac{1}{B^2}\sum_i\sum_j\operatorname{Cov}(T_i,T_j)=\frac{1}{B^2}\sum_{i}\Big(\sum_{j\neq i}\operatorname{Cov}(T_i,T_j)+\operatorname{Var}(T_i)\Big)=\frac{1}{B^2}\sum_i\big((B-1)\sigma^2\rho+\sigma^2\big)=\frac{B(B-1)\sigma^2\rho+B\sigma^2}{B^2}=\rho\sigma^2+\sigma^2\frac{1-\rho}{B}.$$
The first term decreases as $\rho$ decreases towards zero (less correlation between trees), and the second term decreases as $B$ increases.
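A quick Monte Carlo check of that formula (my own sketch): build $B$ variables with common variance $\sigma^2$ and pairwise correlation $\rho$ from a shared component plus independent noise, and compare the empirical variance of their average with $\rho\sigma^2+\sigma^2(1-\rho)/B$.

```r
# Empirical vs. theoretical variance of an average of correlated variables
set.seed(5)
B <- 50; rho <- 0.3; sigma <- 2
avg <- replicate(20000, {
  shared <- rnorm(1)
  Ti <- sigma * (sqrt(rho) * shared + sqrt(1 - rho) * rnorm(B))  # Var = sigma^2, pairwise cor = rho
  mean(Ti)
})
var(avg)                                  # empirical variance of the average
rho * sigma^2 + sigma^2 * (1 - rho) / B   # theoretical value (1.256 here)
```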
52,307 | Any theory on how to split the data? | This may not be quite what you asked, but one major theoretical point of caution when splitting the data is that you shouldn't put a set of correlated observations (i.e. correlated even after conditioning on your features) into both the training and the test set. You need to take some stand on what is, in some sense, an independent observation (or set of observations). For example, imagine:
each record/observation is a medical test result.
you have multiple test results per person.
the test results for each person are correlated through unobserved, individual specific characteristics.
When constructing a training set and a test set, you could split:
record wise: randomly assign each record to training or test set
subject wise: randomly assign people (and all their test results) to training or test set
Record wise splitting into training and test sets is in some sense like running validation on your training data! And it may give horribly biased validation results. In this case, you would want to do (2).
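A minimal sketch (mine) of option (2), a subject-wise split, assuming a data frame with one row per test result and a person_id column (both names are just placeholders).

```r
# Subject-wise (grouped) train/test split: whole people go to one side or the other
set.seed(6)
d <- data.frame(person_id = rep(1:50, each = 4), x = rnorm(200), y = rnorm(200))
train_people <- sample(unique(d$person_id), size = 40)   # ~80% of the people
train <- d[ d$person_id %in% train_people, ]
test  <- d[!d$person_id %in% train_people, ]             # only held-out people end up here
```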
52,308 | Any theory on how to split the data? | Summary: there are known guidelines (at least for some situations) for required sample sizes - but IMHO they cannot easily be converted into a "split into such-and-such fractions" rule of thumb.
While in k-fold cross validation the choice of k often doesn't matter much, there are additional considerations for single splits.
The reason why the k in k-fold cross validation doesn't have much influence (outside extreme choices like $k = n$) is that at the end of the cross validation run each case has been tested once. The difference is only how many surrogate models you compute in order to achieve this.
For single splits, the following considerations can help, though you'll notice that they are not easily expressed as fraction of total samples available.
Training data: necessary training sample sizes are often conveniently discussed as training cases relative to the needed/desired model complexity.
In contrast, the uncertainty on test results due to the limited number of test cases depends on the absolute number of test cases. In general this is true for both regression and classification, however classification 0/1 loss and the related proportions (figures of merit such as accuracy, true positive rates, predictive values etc.) have a particularly nasty behaviour in this respect.
However, the upside of this is that you can calculate necessary absolute test sample sizes for the testing of the final model for different scenarios such as "the 95% confidence interval of the final sensitivity estimate should not be wider than 10 percentage points".
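A back-of-the-envelope version of that calculation (my sketch, using the plain normal approximation; the paper cited at the end of this answer treats it more carefully):

```r
# Positive test cases needed so a 95% CI for sensitivity has half-width <= 0.05
n_needed <- function(p, half_width = 0.05, z = 1.96) {
  ceiling(p * (1 - p) * (z / half_width)^2)
}
n_needed(c(0.7, 0.8, 0.9))   # roughly 323, 246 and 139 cases for those guessed sensitivities
```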
Data-driven model optimization/selection: from a statistical point of view, this will often require much larger test sample sizes than the final evaluation in order to avoid "skimming testing variance" i.e. spurious (accidental) optimization "results" (which I'd consider failure of the optimization - however, this type of failure is not routinely detected/warned against). Also, the necessary optimization testing sample size depends crucially on the number of comparisons to be made.
All in all, I'd expect that the optimization sets should typically be larger than the final evaluation sets - or that instead of data-driven optimization, reasonable hyperparameters may be fixed using external knowledge (the sample sizes I encounter typically do not allow optimization driven by test results)
For classification 0/1 loss and proportion-type figures of merit, we put these thoughts into a paper:
Beleites, C. and Neugebauer, U. and Bocklitz, T. and Krafft, C. and Popp, J.: Sample size planning for classification models. Anal Chim Acta, 2013, 760, 25-33.
DOI: 10.1016/j.aca.2012.11.007
accepted manuscript on arXiv: 1211.1323
52,309 | Any theory on how to split the data? | I don't have a theory-based response, but my allocation of data usually depends on how much data I have and how well my model fits that data. If you have a bunch of data, you can probably be pretty liberal in how much you set aside for validation and testing. If you are limited, that is where model fit comes in. Imagine you are making a linear regression model where your outcome y is perfectly dependent on your feature x with absolutely no noise in the system. In this case, you would never need more than 2 data points to get a perfect fit, so even in a small data set, you can still leave out a bunch for validating and testing. On the other hand, if there is a lot of noise in the system, you may have to devote more data as a training set in order to find the signal.
One way to see which situation you are in is to start with a smaller training set and bootstrap it to produce a few models. Then take a look at your model coefficients in the various rounds of bootstrapping. If your coefficients are pretty stable, then chances are you gave the model enough data to find whatever signal there is to find. If they bounce around a lot, then the model probably needs more data. Keep in mind, you are not looking at errors of predictions here, just the stability of the overall model. You can then verify the model stability by either adding a little more data to your training set and repeating your bootstrapping, or you can select a new subset of your data of the same size and repeat to make sure that you didn't just pick weird data the first time.
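A minimal sketch (mine) of that coefficient-stability check for a simple linear model:

```r
# Bootstrap the training set and look at the spread of the fitted coefficients
set.seed(7)
n <- 60
d <- data.frame(x = rnorm(n))
d$y <- 1 + 2 * d$x + rnorm(n)
coefs <- replicate(500, {
  b <- d[sample(n, replace = TRUE), ]   # bootstrap resample of the training data
  coef(lm(y ~ x, data = b))
})
apply(coefs, 1, sd)   # a small, stable spread suggests the training set is large enough
```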
52,310 | Any theory on how to split the data? | To add to the other comments, I think there is no rule against seeing this percentage as a hyper- (or maybe better to say hyper-hyper-) parameter. The setup that I can think of is dividing your data in two at first. Let's say you put 10% of your data aside (yeah, it's a percentage again) and then, for the rest of your data, slice it in different ways, e.g. (30, 30, 30) or (80, 10, 10). In each scenario you'll get training, validation and test rates, and you should be able to compare them to the rate that you get on that 10% test set and see if you can find any consistent improvement because of a specific scenario. Like some other parts of machine learning, we may test it first and then try to find a reason for what we see (finding a relationship between the amount of data and these values, if there is any).
52,311 | circular reasoning? | It's not terrible but it's not great, either. Ideally, we'd prefer that the network be able to sort out what is/isn't important given the presence of the other variables. On the other hand, a large number of irrelevant features can make that a challenging fitting & regularization task, so it's hard to fault you for taking a different approach.
A useful example of a prominent researcher using univariate tests in this way is described in Elements of Statistical Learning (2nd edition, Section 11.9), on Neal and Zhang's Bayesian neural nets. (Don't bother hunting down Neal and Zhang's write-up, published in an obscure text -- the discussion in ESL covers it very well.)
52,312 | circular reasoning? | It's related to circular reasoning. Variables chosen because they are associated with the outcome will likely continue to show (with an alternate approach) that they are associated with the outcome. It's very important to cross-validate. Choose predictors based on a subset ("training set") of the data, and test their predictive accuracy on a holdout set ("test set"). In fact, this needs to be done multiple times -- often hundreds or thousands of times, automated via code -- to yield a stable estimate of accuracy with an acceptable confidence interval.
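A minimal sketch (mine) of that repeated split-and-evaluate loop, using logistic regression on made-up data; in the variable-selection setting described above, the selection step would have to be redone inside each training split.

```r
# Repeated random train/test splits to get a stable accuracy estimate
set.seed(8)
n <- 300
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$y <- rbinom(n, 1, plogis(d$x1 - d$x2))
acc <- replicate(1000, {
  tr   <- sample(n, 210)                                             # 70% for training
  fit  <- glm(y ~ x1 + x2, data = d[tr, ], family = binomial)
  pred <- predict(fit, newdata = d[-tr, ], type = "response") > 0.5
  mean(pred == (d$y[-tr] == 1))                                      # holdout accuracy
})
mean(acc); quantile(acc, c(0.025, 0.975))
```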
52,313 | circular reasoning? | Your ad hoc selection method is guaranteed to inflate false-positive error rate due to overfitting.
Philosophically, I'm not sure whether this is formally a kind of "circular reasoning"--practically this is like debating the sex of angels, to quote Miguel Hernan.
A 0.01 significance level with 1500 comparisons leads to an expected 15 false discoveries. This means nearly half of your 32 U-stat statistically significant features are questionable. Put differently, the probability of having 10 or more false discoveries is 88%. I do not think a 0.01 significance cutoff is defensible here. You have a $1-0.99^{1500} = 0.9999997$ family-wise error rate. You should better control for multiple comparisons; Bonferroni is imperfect but easy to do. Try your ad hoc method again using a $0.05/1400 = 0.00004$ cutoff.
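The headline quantities can be recomputed directly (a quick sketch, using the comparison counts exactly as stated in the answer):

```r
# Expected false discoveries, family-wise error rate, and the Bonferroni cutoff quoted above
alpha <- 0.01; m <- 1500
m * alpha            # expected number of false discoveries: 15
1 - (1 - alpha)^m    # family-wise error rate: ~0.9999997
0.05 / 1400          # Bonferroni-adjusted cutoff: ~0.00004
```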
52,314 | Loss functions for regression proof | I would like to explain the way I understood it, spelling out each and every step along the way.
Assumptions:
$g(x,t)$ is a function of x and t.
$p(x,t)$ is a joint distribution over $x$ and $t$.
Basic formulas:
$$\mathbb{E}_t[g|x] = \int_t{g(x,t)p(t|x)\mathop{dt}} \ (\mathbb{E}_t[g|x] \text{ is a function of $x$ and constant w.r.t. } t) \tag{1}\label{1} $$
$$\mathbb{E}_t[t|x] = \int_t{t \, p(t|x)\mathop{dt}} \tag{2}\label{2}$$
$$\operatorname{var}_t[t|x] = \int_t{(t - \mathbb{E}_t[t|x])^2p(t|x)\mathop{dt}} = \mathbb{E}_t[(t - \mathbb{E}_t[t|x])^2 | x] \tag{3}\label{3}$$
$$
\eqalign{\mathbb{E}_t[f(x)g(x,t)|x]
&= \int_t{f(x)g(x,t)p(t|x)\mathop{dt}} \\
&= f(x)\int_t{g(x,t)p(t|x)\mathop{dt}} \\
&= f(x) \ \mathbb{E}_t[g|x] }
\tag{4}\label{4}$$
$$\mathbb{E}_t[f(x)|x] = f(x) \tag{4a}\label{4a}$$
$$\mathbb{E}_{x,t}[g] = \mathbb{E}_x[\mathbb{E}_t[g|x]] \tag{5}\label{5}$$
We derive the last formula above.
$$
\eqalign{
\mathbb{E}_{x,t}[g]
&= \int_x\int_tg(x,t)p(x,t)\mathop{dx}\mathop{dt}\\
&= \int_x\int_tg(x,t)p(x)p(t|x)\mathop{dx}\mathop{dt}\\
&= \int_x p(x)\int_t g(x,t)p(t|x)\mathop{dt}\mathop{dx}\\
&= \int_x \mathbb{E}_t[g|x]p(x)\mathop{dx} \text{ (using \ref{1}) } \\
&= \mathbb{E}_x [\mathbb{E}_t[g|x]] \\
}
$$
Derivation of the expected loss:
Represent the Loss function in the form as below. Please notice the subscript $t$ in the $\mathbb{E}_t$ notations. This was omitted in the book, but I added it here for clarity.
$$
\eqalign{
L(x,t) &= (y(x)-t)^{2} \\
&= (y(x) - \mathbb{E}_t[t|x])^{2} + 2(y(x) - \mathbb{E}_t[t|x])(\mathbb{E}_t[t|x]-t) + (\mathbb{E}_t[t|x]-t)^{2} \\
&= L_1 + 2L_2 + L_3
}
$$
Hence the joint expectation can be represented as:
$$
\eqalign{
\mathbb{E}_{x,t}[L]
&= \mathbb{E}_{x,t}[L_1] + 2\mathbb{E}_{x,t}[L_2] + \mathbb{E}_{x,t}[L_3]
}
$$
We derive the 3 expectations:
$$
\eqalign{
\mathbb{E}_{x,t}[L_1]
&= \mathbb{E}_{x,t}[(y(x) - \mathbb{E}_t[t|x])^{2}] \\
&= \mathbb{E}_x[ \ \mathbb{E}_t[(y(x) - \mathbb{E}_t[t|x])^{2} | x] \ ] \ \ \text{ (using \ref{5}) } \\
&= \mathbb{E}_x[(y(x) - \mathbb{E}_t[t|x] )^{2}] \text{ (using \ref{4a}, as the operand is a function of $x$ only)} \\
}
$$
$$
\eqalign{
\mathbb{E}_{x,t}[L_2]
&= \mathbb{E}_{x,t}[(y(x) - \mathbb{E}_t[t|x])(\mathbb{E}_t[t|x]-t)] \\
&= \mathbb{E}_x[ \ \mathbb{E}_t[\{(y(x) - \mathbb{E}_t[t|x])(\mathbb{E}_t[t|x]-t)\} \ | \ x] \ ] \text{ (using \ref{5}) } \\
&= \mathbb{E}_x[ \ (y(x) - \mathbb{E}_t[t|x]) \ \mathbb{E}_t[(\mathbb{E}_t[t|x]-t) | x] \ \ ] \ \text{ (using \ref{4} on the inner expectation)}
}
$$
Considering only the inner expectation:
$$
\eqalign{
\mathbb{E}_t[(\mathbb{E}_t[t|x]-t) | x]
&= \mathbb{E}_t[\mathbb{E}_t[t|x] | x] - \mathbb{E}_t[t|x] \text{ (using linearity of $\mathbb{E}$)} \\
&= \mathbb{E}_t[t|x] - \mathbb{E}_t[t|x] \text{ (using \ref{4a} as $\mathbb{E}_t[t|x]$ is a function of $x$)}\\
&= 0
}
$$
Therefore,
$$
\eqalign{
\mathbb{E}_{x,t}[L_2]
&= \mathbb{E}_x[ \ (y(x) - \mathbb{E}_t[t|x]) \ \cdot \ 0 \ ] \\
&= 0
}
$$
$$
\eqalign{
\mathbb{E}_{x,t}[L_3]
&= \mathbb{E}_{x,t}[(\mathbb{E}_t[t|x]-t)^{2}] \\
&= \mathbb{E}_x[ \ \mathbb{E}_t[(\mathbb{E}_t[t|x]-t)^{2} | x] \ ] \text{ (using \ref{5}) } \\
&= \mathbb{E}_x[\operatorname{var}_t[t|x]] \text{ (using \ref{3}) }
}
$$
Putting them all together and expressing the $\mathbb{E}_x$ terms as integrals under $x$, we get the following form:
$$
\mathbb{E}_{x,t}[L] = \int_x (y(x) - \mathbb{E}_t[t|x])^2 p(x)\mathop{dx} + \int_x \operatorname{var}_t[t|x] p(x) \mathop{dx}
$$
Note: As mentioned by @Juho Kokkalla, the erroneous last term in the book is corrected in the errata.
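A numerical sanity check of the final decomposition (my own sketch): take $t=\sin(x)+\varepsilon$ with $\varepsilon\sim N(0,0.5^2)$, so $\mathbb{E}[t|x]=\sin(x)$ and $\operatorname{var}[t|x]=0.25$, pick an arbitrary predictor $y(x)$, and compare the two sides by simulation.

```r
# Check: E[(y(x)-t)^2] = E_x[(y(x)-E[t|x])^2] + E_x[var[t|x]]
set.seed(9)
N <- 2e5
x <- runif(N, 0, 2 * pi)
t <- sin(x) + rnorm(N, sd = 0.5)
y <- function(x) 0.8 * sin(x) + 0.1   # a deliberately non-optimal predictor
lhs <- mean((y(x) - t)^2)
rhs <- mean((y(x) - sin(x))^2) + 0.25
c(lhs = lhs, rhs = rhs)               # the two agree up to Monte Carlo error
```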
52,315 | Loss functions for regression proof | This proof is easier if you iterate expectations. You want to show that $\mathbb{E}(t \vert x)$ is in a sense the best predictor of $t$, i.e. that
$$
\mathbb{E}[(y(x) - t)^2] \ge \mathbb{E}[\{\mathbb{E}(t \vert x) - t\}^2]
$$
for any $y(x)$.
It's easier to first consider $\mathbb{E}[(y(x) - t)^2 \vert x]$. If you do this and apply the trick you mentioned above (adding and subtracting a conditional expectation), you can get
\begin{align*}
&\mathbb{E}[(y(x) - t)^2 \vert x] \\
&= \mathbb{E}[\{y(x) - \mathbb{E}[t|x] \}^{2}\vert x] + 2\mathbb{E}[\{y(x) - \mathbb{E}[t|x]\}\{\mathbb{E}[t|x]-t\}\vert x] + \mathbb{E}[\{\mathbb{E}[t|x]-t\}^{2}\vert x] \\
&= \{y(x) - \mathbb{E}[t|x] \}^{2} + 0 + \mathbb{E}[\{\mathbb{E}[t|x]-t\}^{2}\vert x] .
\end{align*}
Notice that the cross term here becomes $0$ when you do this because of linearity and the fact that you can pull out any $x-$measurable random variables from the conditional expectation. For more details, see whuber's comment below.
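For a quick numerical illustration that the cross term really does vanish in expectation — a hedged sketch of my own, with a toy model in which $\mathbb{E}[t|x] = \sin(x)$:
# Sketch: the average of 2*(y(x) - E[t|x])*(E[t|x] - t) is (numerically) zero.
set.seed(2)
n <- 1e6
x <- runif(n, 0, 2 * pi)
t <- rnorm(n, mean = sin(x), sd = 0.3)   # so E[t|x] = sin(x)
y <- 0.8 * sin(x)                        # an arbitrary predictor y(x)
mean(2 * (y - sin(x)) * (sin(x) - t))    # close to 0, only Monte Carlo noise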
And then you finally get the desired result by taking expectations again (with respect to $p(x)$). No differentiation needed. | Loss functions for regression proof | This proof is easier if you iterate expectations. You want to show that $\mathbb{E}(y|x)$ is in a sense the best predictor of $t$, so you want to show that
$$
\mathbb{E}[(y(x) - t)^2] \ge \mathbb{E}[\ | Loss functions for regression proof
This proof is easier if you iterate expectations. You want to show that $\mathbb{E}(y|x)$ is in a sense the best predictor of $t$, so you want to show that
$$
\mathbb{E}[(y(x) - t)^2] \ge \mathbb{E}[\{\mathbb{E}(t \vert x) - t\}^2]
$$
for any $y(x)$.
It's easier to first consider $\mathbb{E}[(y(x) - t)^2 \vert x]$. If you do this and apply the trick you mentioned above (adding and subtracting a conditional expectation), you can get
\begin{align*}
&\mathbb{E}[(y(x) - t)^2 \vert x] \\
&= \mathbb{E}[\{y(x) - \mathbb{E}[t|x] \}^{2}\vert x] + 2\mathbb{E}[\{y(x) - \mathbb{E}[t|x]\}\{\mathbb{E}[t|x]-t\}\vert x] + \mathbb{E}[\{\mathbb{E}[t|x]-t\}^{2}\vert x] \\
&= \{y(x) - \mathbb{E}[t|x] \}^{2} + 0 + \mathbb{E}[\{\mathbb{E}[t|x]-t\}^{2}\vert x] .
\end{align*}
Notice that the cross term here becomes $0$ when you do this because of linearity and the fact that you can pull out any $x-$measurable random variables from the conditional expectation. For more details, see whuber's comment below.
And then you finally get the desired result by taking expectations again (with respect to $p(x)$). No differentiation needed. | Loss functions for regression proof
This proof is easier if you iterate expectations. You want to show that $\mathbb{E}(y|x)$ is in a sense the best predictor of $t$, so you want to show that
$$
\mathbb{E}[(y(x) - t)^2] \ge \mathbb{E}[\ |
52,316 | What does "mixing" mean in sampling? | When people say "mixing" in the context of Markov chain Monte Carlo (MCMC), they are (knowingly or unknowingly) referring to the "mixing time" of the Markov chain.
Intuitively, mixing time for a Markov chain is the number of steps required of the Markov chain to come close to the stationary distribution (or in the world of Bayesian statistics, posterior distribution). If $\pi$ is the stationary distribution and $P(x,A)$ is the Markov chain transition kernel, where $x$ is the starting value of the Markov chain, and $A$ is a measurable set, then the mixing time is the first time $t$ such that
$$\left|P^t(x,A) - \pi(A)\right|_{TV} \leq \dfrac{1}{4}. $$
Here $|\cdot|_{TV}$ refers to total variation distance. This is only one of the many definitions, but they all intuitively mean the same.
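As a tiny concrete example — a sketch of my own, not part of the definition above — here is the mixing time of a two-state chain, found by iterating the kernel until the total variation distance drops below $1/4$:
# Sketch: smallest t with TV(P^t(x, .), pi) <= 1/4 for a slow 2-state chain.
P <- matrix(c(0.99, 0.01,
              0.02, 0.98), nrow = 2, byrow = TRUE)  # transition kernel
pi_stat <- c(2/3, 1/3)                               # stationary distribution of this P
tv <- function(p, q) 0.5 * sum(abs(p - q))           # total variation distance
dist_t <- c(1, 0)                                    # start deterministically in state 1
for (t in 1:1000) {
  dist_t <- as.vector(dist_t %*% P)                  # distribution after t steps
  if (tv(dist_t, pi_stat) <= 1/4) { cat("mixing time:", t, "\n"); break }
}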
The mixing time has a direct impact on sampling quality since, the smaller the mixing time, the faster the convergence of the Markov chain to the stationary distribution, and the smaller the correlation in the samples. | What does "mixing" mean in sampling? | When people say "mixing" in the context of Markov chain Monte Carlo (MCMC), they are (knowingly or unknowingly) referring to the "mixing time" of the Markov chain.
Intuitively, mixing time for a Marko | What does "mixing" mean in sampling?
When people say "mixing" in the context of Markov chain Monte Carlo (MCMC), they are (knowingly or unknowingly) referring to the "mixing time" of the Markov chain.
Intuitively, mixing time for a Markov chain is the number of steps required of the Markov chain to come close to the stationary distribution (or in the world of Bayesian statistics, posterior distribution). If $\pi$ is the stationary distribution and $P(x,A)$ is the Markov chain transition kernel, where $x$ is the starting value of the Markov chain, and $A$ is a measurable set, then the mixing time is the first time $t$ such that
$$\left|P^t(x,A) - \pi(A)\right|_{TV} \leq \dfrac{1}{4}. $$
Here $|\cdot|_{TV}$ refers to total variation distance. This is only one of the many definitions, but they all intuitively mean the same.
The mixing time has a direct impact on sampling quality since, the smaller the mixing time, the faster the convergence of the Markov chain to the stationary distribution, and the smaller the correlation in the samples. | What does "mixing" mean in sampling?
When people say "mixing" in the context of Markov chain Monte Carlo (MCMC), they are (knowingly or unknowingly) referring to the "mixing time" of the Markov chain.
Intuitively, mixing time for a Marko |
52,317 | What does "mixing" mean in sampling? | Well, if I remember correctly what little I ever knew about ergodic theory, mixing implies ergodic, and ergodic means that time averages and space averages are the same, which is the justification for sampling over a sequence of random samples. So one would like to know that one's sampling scheme is mixing.
I suppose this is really only an issue if one is inventing a new sampling scheme. But more often, one is just using some software off the shelf, with (one hopes) known properties. | What does "mixing" mean in sampling? | Well, if I remember correctly what little I ever knew about ergodic theory, mixing implies ergodic, and ergodic means that time averages and space averages are the same, which is the justification for | What does "mixing" mean in sampling?
Well, if I remember correctly what little I ever knew about ergodic theory, mixing implies ergodic, and ergodic means that time averages and space averages are the same, which is the justification for sampling over a sequence of random samples. So one would like to know that one's sampling scheme is mixing.
I suppose this is really only an issue if one is inventing a new sampling scheme. But more often, one is just using some software off the shelf, with (one hopes) known properties. | What does "mixing" mean in sampling?
Well, if I remember correctly what little I ever knew about ergodic theory, mixing implies ergodic, and ergodic means that time averages and space averages are the same, which is the justification for |
52,318 | Can odds ratios be negative? | It cannot be negative. However, the (often natural) logarithm of it can be. Usually, an odds ratio is actually derived on the log-transformed scale (where confidence intervals derived based on log-odds ratio $\pm 1.96 \times $ SE work better) and estimate & confidence intervals are the transformed onto the odds ratio scale for reporting. | Can odds ratios be negative? | It cannot be negative. However, the (often natural) logarithm of it can be. Usually, an odds ratio is actually derived on the log-transformed scale (where confidence intervals derived based on log-odd | Can odds ratios be negative?
It cannot be negative. However, the (often natural) logarithm of it can be. Usually, an odds ratio is actually derived on the log-transformed scale (where confidence intervals derived based on log-odds ratio $\pm 1.96 \times $ SE work better) and estimate & confidence intervals are then transformed onto the odds ratio scale for reporting. | Can odds ratios be negative?
It cannot be negative. However, the (often natural) logarithm of it can be. Usually, an odds ratio is actually derived on the log-transformed scale (where confidence intervals derived based on log-odd |
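To make the log-scale workflow just described concrete — a hedged sketch with invented $2\times 2$ counts — the interval is built for $\log(\text{OR})$ and only then exponentiated, so it can never cross zero:
# Sketch: Wald CI for an odds ratio, constructed on the log scale (made-up counts).
n11 <- 20; n12 <- 80   # exposed:   events / non-events
n21 <- 10; n22 <- 90   # unexposed: events / non-events
or     <- (n11 * n22) / (n12 * n21)
se_log <- sqrt(1/n11 + 1/n12 + 1/n21 + 1/n22)    # SE of log(OR)
ci     <- exp(log(or) + c(-1, 1) * 1.96 * se_log)
c(OR = or, lower = ci[1], upper = ci[2])         # strictly positive by construction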
52,319 | Can odds ratios be negative? | It simply can not be negative.
Perhaps you read somewhere about negative values of the logarithm of the odds ratio, or of the logit function defined as $\ln\frac{p}{1-p}$ where $p$ is $\frac{OR}{1+OR}$. | Can odds ratios be negative? | It simply can not be negative.
Perhaps you read somewhere about negative values of the logarithm of the odds ratio, or of the logit function defined as $\ln\frac{p}{1-p}$ where $p$ is $\frac{OR}{1+OR}$. | Can odds ratios be negative?
It simply can not be negative.
Perhaps you read somewhere about negative values of the logarithm of the odds ratio, or of the logit function defined as $\ln\frac{p}{1-p}$ where $p$ is $\frac{OR}{1+OR}$. | Can odds ratios be negative?
It simply can not be negative.
Perhaps you read somewhere about negative values of the logarithm of the odds ratio, or of the logit function defined as $\ln\frac{p}{1-p}$ where $p$ is $\frac{OR}{1+OR}$.
52,320 | Residuals analysis: interpretation of a scatter plot | No, this does not look good. You appear to have a problem with heteroscedasticity as there is increasing variance of residuals with increasing predicted values. Constant variance is an important condition for OLS regression in order to perform valid inference. This might be resolved by log-transforming the response variable.
There is also a hint of autocorrelation but this is hard to assess with so few data points.
Edit, after downloading the data:
Log-transforming C helps with heteroscedasticity, though there are few data points so I would advise some caution: while it seems to help with these data, it may not be the case with more observations. There could be other non-linearities that should be accounted for.
However, all your independent variables are highly correlated with each other, which is not good at all for model interpretation:
years Y W SSW G T TR D
years 1.00 0.95 0.96 0.96 0.98 0.98 1.00 0.98
Y 0.95 1.00 0.99 0.95 0.97 0.98 0.95 0.87
W 0.96 0.99 1.00 0.97 0.98 0.98 0.96 0.89
SSW 0.96 0.95 0.97 1.00 0.98 0.97 0.97 0.93
G 0.98 0.97 0.98 0.98 1.00 0.99 0.99 0.95
T 0.98 0.98 0.98 0.97 0.99 1.00 0.98 0.93
TR 1.00 0.95 0.96 0.97 0.99 0.98 1.00 0.98
D 0.98 0.87 0.89 0.93 0.95 0.93 0.98 1.00 | Residuals analysis: interpretation of a scatter plot | No, this does not look good. You appear to have a problem with heteroscedasticity as there is increasing variance of residuals with increasing predicted values. Constant variance is an important condi | Residuals analysis: interpretation of a scatter plot
No, this does not look good. You appear to have a problem with heteroscedasticity as there is increasing variance of residuals with increasing predicted values. Constant variance is an important condition for OLS regression in order to perform valid inference. This might be resolved by log-transforming the response variable.
There is also a hint of autocorrelation but this is hard to assess with so few data points.
Edit, after downloading the data:
Log-transforming C helps with heteroscedasticity, though there are few data points so I would advise some caution: while it seems to help with these data, it may not be the case with more observations. There could be other non-linearities that should be accounted for.
However, all your independent variables are highly correlated with each other, which is not good at all for model interpretation:
years Y W SSW G T TR D
years 1.00 0.95 0.96 0.96 0.98 0.98 1.00 0.98
Y 0.95 1.00 0.99 0.95 0.97 0.98 0.95 0.87
W 0.96 0.99 1.00 0.97 0.98 0.98 0.96 0.89
SSW 0.96 0.95 0.97 1.00 0.98 0.97 0.97 0.93
G 0.98 0.97 0.98 0.98 1.00 0.99 0.99 0.95
T 0.98 0.98 0.98 0.97 0.99 1.00 0.98 0.93
TR 1.00 0.95 0.96 0.97 0.99 0.98 1.00 0.98
D 0.98 0.87 0.89 0.93 0.95 0.93 0.98 1.00 | Residuals analysis: interpretation of a scatter plot
No, this does not look good. You appear to have a problem with heteroscedasticity as there is increasing variance of residuals with increasing predicted values. Constant variance is an important condi |
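A self-contained, hedged sketch of the checks discussed in this thread, using simulated data in place of the original (trending, highly correlated predictors and a response whose spread grows with its level), so none of the original variable values are needed:
# Sketch: residuals vs. fitted before/after logging the response, plus the
# predictor correlation matrix that reveals the collinearity problem.
set.seed(1)
n     <- 30
trend <- 1:n
dat   <- data.frame(Y = trend + rnorm(n), W = trend + rnorm(n), G = trend + rnorm(n))
dat$C <- exp(0.05 * trend + rnorm(n, sd = 0.2))  # spread grows with the level of C
fit_raw <- lm(C ~ Y + W + G, data = dat)
fit_log <- lm(log(C) ~ Y + W + G, data = dat)
par(mfrow = c(1, 2))
plot(fitted(fit_raw), resid(fit_raw), main = "response C")       # tends to fan out
plot(fitted(fit_log), resid(fit_log), main = "response log(C)")  # roughly constant spread
round(cor(dat[, c("Y", "W", "G")]), 2)           # all pairwise correlations near 1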
52,321 | Residuals analysis: interpretation of a scatter plot | Visually at first it seemed to me that your residuals look like they were heteroskedastic (non-constant variance), autocorrelated (not independent), and non-Normally distributed. Those are actually issues that could be resolved anyway. It turns out that you tested your residuals, and they appear to have done ok on all those counts. However, those issues are minute vs. the multicollinearity issue uncovered by Long. All your independent variables are very highly correlated with positive correlation coefficients ranging between 0.9 and 1.0. The multicollinearity is a huge problem. You indicated not being very concerned about it. But, you should. This problem does not go away just by wishful thinking. And, logging the variables will certainly not solve that. It may actually exacerbate it. A symptom of multicollinearity is that your variables' regression coefficients are likely very unstable. Rerun your regression by omitting some of the data to test the stability of the coefficients. They are likely to be very unstable. Another symptom is that the statistical significance of some of your variables may be questionable. It does not make sense to have 8 independent variables in a model that are all very highly correlated. They simply impart to your model almost the exact same type of info in terms of explaining the variance (or behavior) of your dependent variable.
I think you need to rebuild this model by starting by selecting one single of the best time-oriented independent variable you have. And, then adding other variables that provide information other than time. One type of time-oriented variable you may add to this model without running into excessive multicollinearity issues are seasonality dummy variable. But, this is not a sure thing. You would have to test for that (the multicollinearity bit). | Residuals analysis: interpretation of a scatter plot | Visually at first it seemed to me that your residuals look like they were heteroskedastic (non-constant variance), autocorrelated (not independent), and non-Normally distributed. Those are actually i | Residuals analysis: interpretation of a scatter plot
Visually at first it seemed to me that your residuals look like they were heteroskedastic (non-constant variance), autocorrelated (not independent), and non-Normally distributed. Those are actually issues that could be resolved anyway. It turns out that you tested your residuals, and they appear to have done ok on all those counts. However, those issues are minute vs. the multicollinearity issue uncovered by Long. All your independent variables are very highly correlated with positive correlation coefficients ranging between 0.9 and 1.0. The multicollinearity is a huge problem. You indicated not being very concerned about it. But, you should. This problem does not go away just by wishful thinking. And, logging the variables will certainly not solve that. It may actually exacerbate it. A symptom of multicollinearity is that your variables' regression coefficients are likely very unstable. Rerun your regression by omitting some of the data to test the stability of the coefficients. They are likely to be very unstable. Another symptom is that the statistical significance of some of your variables may be questionable. It does not make sense to have 8 independent variables in a model that are all very highly correlated. They simply impart to your model almost the exact same type of info in terms of explaining the variance (or behavior) of your dependent variable.
I think you need to rebuild this model by starting with the single best time-oriented independent variable you have, and then adding other variables that provide information other than time. One type of time-oriented variable you may add to this model without running into excessive multicollinearity issues is a set of seasonality dummy variables. But, this is not a sure thing. You would have to test for that (the multicollinearity bit). | Residuals analysis: interpretation of a scatter plot
Visually at first it seemed to me that your residuals look like they were heteroskedastic (non-constant variance), autocorrelated (not independent), and non-Normally distributed. Those are actually i |
52,322 | Should I use an average to summarize ordinal data? | This is largely an issue for you to decide based on your theoretical assumptions about the data and what lies behind them. When you calculate an arithmetic average, you are assuming that the intervals are reasonably similar. (That is, you are implicitly stating that $3-2 = 2-1$ and $3-1 = 2\times (3-2)$.) If you believe that is a reasonable assumption, and others in your field (e.g., reviewers) are likely to agree with you, then it's fine. Using means with ordinal data tends to be more defensible when:
There are a larger number of ordinal levels (a rule of thumb is $\ge 12$);
the ordinal levels are composed of many components (e.g., ratings for many related questions are aggregated into a composite); and/or
the raters were instructed / tried to make the ratings equal interval.
It isn't clear to me that those hold in your case, but it is for you to decide.
You also should think hard about what you mean by '"mainly" rated as 2'. Again, that is for you to decide. However, I would not think of the set of ratings $\{1,1,2,3,3\}$ as "mainly" being $2$, despite the fact that the mean is $2$. I would interpret that as being a somewhat polarizing word, with some thinking it's 'easy' and some thinking it's 'hard'. But again, this is a theoretical issue for you to decide.
For what it's worth (almost certainly very little), if it were me, I would think your ratings were not amenable to be described by means. I think I would interpret '"mainly" rated as 2' as the majority of raters gave this word a 2. That is, I would select words that received $>50\%\ \rm ``2\!"$s.
By contrast, I suspect that you don't only want to select individual words '"mainly" rated as 2', but also want the entire set of selected words to be rated $\approx 2$. To check that aspect, I would feel more comfortable using the mean of all the ratings for all the selected words (or the mean of the words' means). At this point, you are averaging over many more ratings and I think the mean would be more defensible. | Should I use an average to summarize ordinal data? | This is largely an issue for you to decide based on your theoretical assumptions about the data and what lies behind them. When you calculate an arithmetic average, you are assuming that the interval | Should I use an average to summarize ordinal data?
This is largely an issue for you to decide based on your theoretical assumptions about the data and what lies behind them. When you calculate an arithmetic average, you are assuming that the intervals are reasonably similar. (That is, you are implicitly stating that $3-2 = 2-1$ and $3-1 = 2\times (3-2)$.) If you believe that is a reasonable assumption, and others in your field (e.g., reviewers) are likely to agree with you, then it's fine. Using means with ordinal data tends to be more defensible when:
There are a larger number of ordinal levels (a rule of thumb is $\ge 12$);
the ordinal levels are composed of many components (e.g., ratings for many related questions are aggregated into a composite); and/or
the raters were instructed / tried to make the ratings equal interval.
It isn't clear to me that those hold in your case, but it is for you to decide.
You also should think hard about what you mean by '"mainly" rated as 2'. Again, that is for you to decide. However, I would not think of the set of ratings $\{1,1,2,3,3\}$ as "mainly" being $2$, despite the fact that the mean is $2$. I would interpret that as being a somewhat polarizing word, with some thinking it's 'easy' and some thinking it's 'hard'. But again, this is a theoretical issue for you to decide.
For what it's worth (almost certainly very little), if it were me, I would think your ratings were not amenable to be described by means. I think I would interpret '"mainly" rated as 2' as the majority of raters gave this word a 2. That is, I would select words that received $>50\%\ \rm ``2\!"$s.
By contrast, I suspect that you don't only want to select individual words '"mainly" rated as 2', but also want the entire set of selected words to be rated $\approx 2$. To check that aspect, I would feel more comfortable using the mean of all the ratings for all the selected words (or the mean of the words' means). At this point, you are averaging over many more ratings and I think the mean would be more defensible. | Should I use an average to summarize ordinal data?
This is largely an issue for you to decide based on your theoretical assumptions about the data and what lies behind them. When you calculate an arithmetic average, you are assuming that the interval |
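To compare the two selection rules discussed above (a majority of 2-ratings versus a mean near 2) — a hedged sketch with invented ratings; note how a polarizing pattern such as (1,1,2,3,3) has mean 2 but only a 20% share of 2s:
# Sketch: per-word mean rating and share of "2" ratings (invented data).
ratings <- data.frame(
  word  = rep(c("kanarie", "wolk", "fiets"), each = 5),
  score = c(1, 1, 2, 2, 2,
            1, 1, 2, 3, 3,
            2, 2, 2, 3, 2)
)
agg <- aggregate(score ~ word, data = ratings,
                 FUN = function(s) c(mean = mean(s), prop2 = mean(s == 2)))
do.call(data.frame, agg)   # flatten the matrix column for printing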
52,323 | Should I use an average to summarize ordinal data? | It is often the case with discrete data that the mean is non-discrete. That doesn't mean that it isn't a worthwhile statistic to report. A value of 1.6 could be interpreted as easy for most, but intermediate for some - pretty much what you see in your table.
If you want integers, you can calculate the median, which is the value such that half of all observations are below it and half above it. For example for kanarie, the median of (1,1,2,2,2) is 2, since half are below 2 and half are above (or equal).
Alternatively is the mode, which is just the most common value. It is often useful to get a feel for the overall population's choice. | Should I use an average to summarize ordinal data? | It is often the case with discrete data that the mean is non-discrete. That doesn't mean that it isn't a worthwhile statistic to report. A value of 1.6 could be interpreted as easy for most, but inter | Should I use an average to summarize ordinal data?
It is often the case with discrete data that the mean is non-discrete. That doesn't mean that it isn't a worthwhile statistic to report. A value of 1.6 could be interpreted as easy for most, but intermediate for some - pretty much what you see in your table.
If you want integers, you can calculate the median, which is the value such that half of all observations are below it and half above it. For example for kanarie, the median of (1,1,2,2,2) is 2, since half are below 2 and half are above (or equal).
An alternative is the mode, which is just the most common value. It is often useful to get a feel for the overall population's choice. | Should I use an average to summarize ordinal data?
It is often the case with discrete data that the mean is non-discrete. That doesn't mean that it isn't a worthwhile statistic to report. A value of 1.6 could be interpreted as easy for most, but inter |
52,324 | Should I use an average to summarize ordinal data? | You state that you "need to use the words that are 'mainly' marked as 2."
I'd argue that you need to define what 'mainly' actually means to you.
One way is to take the mean, as you indicate intuitively makes sense to you. If you do, then, indeed, if the mean is: 1.5 ≤ mean < 2.5. Then you might interpret it as 'mainly' 2.
An alternative is to use the mode, as @ssdecontrol suggested. You would, however, need to decide on how to handle a situation where there are two modes: e.g. (1,1,2,2,3) where the modes are 1 and 2. Would you consider this to pass your criteria? Or your example of (1,1,2,3,3) where the modes are 1 and 3. The mode is not 2 in this case, but the mean is exactly 2. Does it pass your criteria?
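Base R has no mode function for data, so here is a hedged helper of my own that returns every mode, which makes the two-mode cases above explicit:
# Sketch: return all modes, so ties are visible instead of silently broken.
modes <- function(x) {
  tab <- table(x)
  as.numeric(names(tab)[tab == max(tab)])
}
modes(c(1, 1, 2, 2, 3))   # 1 2 -> two modes, a tie-breaking rule is needed
modes(c(1, 1, 2, 3, 3))   # 1 3 -> mode is not 2 even though the mean is 2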
A third option is to use the mean of the mode. Where the pass criterion could be: 1.5 ≤ mean of mode < 2.5. | Should I use an average to summarize ordinal data? | You state that you "need to use the words that are 'mainly' marked as 2."
I'd argue that you need to define what 'mainly' actually means to you.
One way is to take the mean, as you indicate intuitivel | Should I use an average to summarize ordinal data?
You state that you "need to use the words that are 'mainly' marked as 2."
I'd argue that you need to define what 'mainly' actually means to you.
One way is to take the mean, as you indicate intuitively makes sense to you. If you do, then, indeed, if the mean is: 1.5 ≤ mean < 2.5. Then you might interpret it as 'mainly' 2.
An alternative is to use the mode, as @ssdecontrol suggested. You would, however, need to decide on how to handle a situation where there are two modes: e.g. (1,1,2,2,3) where the modes are 1 and 2. Would you consider this to pass your criteria? Or your example of (1,1,2,3,3) where the modes are 1 and 3. The mode is not 2 in this case, but the mean is exactly 2. Does it pass your criteria?
A third option is to use the mean of the mode. Where the pass criterion could be: 1.5 ≤ mean of mode < 2.5. | Should I use an average to summarize ordinal data?
You state that you "need to use the words that are 'mainly' marked as 2."
I'd argue that you need to define what 'mainly' actually means to you.
One way is to take the mean, as you indicate intuitivel |
52,325 | Should I use an average to summarize ordinal data? | I think you have a few options depending on what you care about. You can go with the mean and see if it falls within some range of 2. This would be like averaging your votes. The advantage here is that you can check on how balanced people's votes are.
If you care more about how the votes land with at least 1/2 voting more or less at some point you would want to go with the median. The advantage of the median is that it is based more on vote distribution.
You might be interested in where the most voters voted. This would be the mode. You used the word mainly so you might be interested in this metric. Advantage is where the most voted, disadvantage is that as you get more options this can mean less.
One thing to keep in mind is also that you may have some controversial words where most voters are 1 and 3. In this case you should be careful since most would not say 2, but some methods would give you 2. I would thus suggest you use mode, in the case of two modes you would reject or use another method such as median.
e: You could also look into percent of votes that are 2. If it is above 50% then mainly people believe the question is a 2.
E: If you really want to add complexity you could also try to address the issue of voters not agreeing and having a bias. You would then need to do some transformation of the vote and do a mean/median. | Should I use an average to summarize ordinal data? | I think you have a few options depending on what you care about. You can go with the mean and see if it falls within some range of 2. This would be like averaging your votes. The advantage here is tha | Should I use an average to summarize ordinal data?
I think you have a few options depending on what you care about. You can go with the mean and see if it falls within some range of 2. This would be like averaging your votes. The advantage here is that you can check on how balanced people's votes are.
If you care more about how the votes land with at least 1/2 voting more or less at some point you would want to go with the median. The advantage of the median is that it is based more on vote distribution.
You might be interested in where the most voters voted. This would be the mode. You used the word mainly so you might be interested in this metric. Advantage is where the most voted, disadvantage is that as you get more options this can mean less.
One thing to keep in mind is also that you may have some controversial words where most voters are 1 and 3. In this case you should be careful since most would not say 2, but some methods would give you 2. I would thus suggest you use mode, in the case of two modes you would reject or use another method such as median.
e: You could also look into percent of votes that are 2. If it is above 50% then mainly people believe the question is a 2.
E: If you really want to add complexity you could also try to address the issue of voters not agreeing and having a bias. You would then need to do some transformation of the vote and do a mean/median. | Should I use an average to summarize ordinal data?
I think you have a few options depending on what you care about. You can go with the mean and see if it falls within some range of 2. This would be like averaging your votes. The advantage here is tha |
52,326 | How can I calculate $E\left(\prod_{i=1}^{n}\frac{X_i}{X_{(n)}}\right)$ where $X_1,\ldots,X_n$ are i.i.d $U(0,\theta)$? | Short answer: it is the same as $\text{E}[\prod_{i=1}^{n-1} Y_i]$ with the factors $Y_i\sim U(0,1)$. If they are all independent (i.e. if $X_1,\ldots,X_n$ are sampled independently), this becomes $2^{1-n}$.
Longer answer: this is really a question about uniform order statistics. If the $X_i$ are independent and uniformly distributed between $0$ and $\theta$, then their joint density is the product of the marginal densities
$$
f_{X_1,\ldots,X_n} (x_1,\ldots,x_n) = \prod_{i=1}^n f_{X_i} (x_i) = \frac{1}{\theta^n} 1(0\leqslant x_i\leqslant \theta, \forall i)\,,
$$
where $1(A)$ is 1 if $A$ is true and 0 otherwise. So all configurations within the constraint $0\leqslant x_i\leqslant 1, \forall i$, are equally likely. Suppose now we condition this distribution on the fact that $X_{(n)}=a$, i.e. the largest of the values $X_i$ is some $a<\theta$. Because of uniformity, the other $n-1$ values are again uniform within the imposed constraint, namely that they must all be smaller than $a$. In other words, given the largest value $X_{(n)}$, the other values are uniformly distributed over the $(n-1)$-dimensional region where these other values are smaller than $X_{(n)}$. So we can scale and rename the other values, for example assuming $X_{(n)}=X_j$, as
$$
Y_i = \begin{cases} X_i/X_j \,, & i=1,\ldots,j-1\\
X_{i+1}/X_j\,, & i=j,\ldots,n-1\,, \end{cases}
$$
then these $Y_i$, $i=1,\ldots,n-1$ are uniform in $[0,1]^{n-1}$. Hence:
\begin{align*}
\text{E}[ \prod_{i=1}^n \frac{X_i}{X_{(n)}}]
& = \sum_{j=1}^n\text{E}[ \prod_{i=1}^n \frac{X_i}{X_{(n)}} \vert X_{(n)}=X_j] \text{Prob}[X_{(n)}=X_j] \\
& = \sum_{j=1}^n\text{E}[ Y_1\cdot \ldots\cdot Y_{j-1}\cdot 1 \cdot Y_j \cdot\ldots\cdot Y_{n-1} ] \frac{1}{n}\\
& = \sum_{j=1}^n \prod_{i=1}^{n-1} \text{E}[Y_i] \frac{1}{n}
= \sum_{j=1}^n \prod_{i=1}^{n-1} \frac{1}{2} \frac{1}{n}
= \sum_{j=1}^n 2^{1-n} \frac{1}{n} = 2^{1-n}
\end{align*}
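A quick Monte Carlo check of this result — a hedged sketch with $\theta = 1$, which loses nothing since the ratio $X_i/X_{(n)}$ is scale-free:
# Sketch: E[ prod_i X_i / X_(n) ] for n iid U(0,1) draws should equal 2^(1-n).
set.seed(3)
n    <- 5
reps <- 1e5
vals <- replicate(reps, { x <- runif(n); prod(x / max(x)) })
c(monte_carlo = mean(vals), exact = 2^(1 - n))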
I have a more direct proof lying on my desk here as well, if anyone is interested. | How can I calculate $E\left(\prod_{i=1}^{n}\frac{X_i}{X_{(n)}}\right)$ where $X_1,\ldots,X_n$ are i. | Short answer: it is the same as $\text{E}[\prod_{i=1}^{n-1} Y_i]$ with the factors $Y_i\sim U(0,1)$. If they are all independent (i.e. if $X_1,\ldots,X_n$ are sampled independently), this becomes $2^{ | How can I calculate $E\left(\prod_{i=1}^{n}\frac{X_i}{X_{(n)}}\right)$ where $X_1,\ldots,X_n$ are i.i.d $U(0,\theta)$?
Short answer: it is the same as $\text{E}[\prod_{i=1}^{n-1} Y_i]$ with the factors $Y_i\sim U(0,1)$. If they are all independent (i.e. if $X_1,\ldots,X_n$ are sampled independently), this becomes $2^{1-n}$.
Longer answer: this is really a question about uniform order statistics. If the $X_i$ are independent and uniformly distributed between $0$ and $\theta$, then their joint density is the product of the marginal densities
$$
f_{X_1,\ldots,X_n} (x_1,\ldots,x_n) = \prod_{i=1}^n f_{X_i} (x_i) = \frac{1}{\theta^n} 1(0\leqslant x_i\leqslant \theta, \forall i)\,,
$$
where $1(A)$ is 1 if $A$ is true and 0 otherwise. So all configurations within the constraint $0\leqslant x_i\leqslant 1, \forall i$, are equally likely. Suppose now we condition this distribution on the fact that $X_{(n)}=a$, i.e. the largest of the values $X_i$ is some $a<\theta$. Because of uniformity, the other $n-1$ values are again uniform within the imposed constraint, namely that they must all be smaller than $a$. In other words, given the largest value $X_{(n)}$, the other values are uniformly distributed over the $(n-1)$-dimensional region where these other values are smaller than $X_{(n)}$. So we can scale and rename the other values, for example assuming $X_{(n)}=X_j$, as
$$
Y_i = \begin{cases} X_i/X_j \,, & i=1,\ldots,j-1\\
X_{i+1}/X_j\,, & i=j,\ldots,n-1\,, \end{cases}
$$
then these $Y_i$, $i=1,\ldots,n-1$ are uniform in $[0,1]^{n-1}$. Hence:
\begin{align*}
\text{E}[ \prod_{i=1}^n \frac{X_i}{X_{(n)}}]
& = \sum_{j=1}^n\text{E}[ \prod_{i=1}^n \frac{X_i}{X_{(n)}} \vert X_{(n)}=X_j] \text{Prob}[X_{(n)}=X_j] \\
& = \sum_{j=1}^n\text{E}[ Y_1\cdot \ldots\cdot Y_{j-1}\cdot 1 \cdot Y_j \cdot\ldots\cdot Y_{n-1} ] \frac{1}{n}\\
& = \sum_{j=1}^n \prod_{i=1}^{n-1} \text{E}[Y_i] \frac{1}{n}
= \sum_{j=1}^n \prod_{i=1}^{n-1} \frac{1}{2} \frac{1}{n}
= \sum_{j=1}^n 2^{1-n} \frac{1}{n} = 2^{1-n}
\end{align*}
I have a more direct proof lying on my desk here as well, if anyone is interested. | How can I calculate $E\left(\prod_{i=1}^{n}\frac{X_i}{X_{(n)}}\right)$ where $X_1,\ldots,X_n$ are i.
Short answer: it is the same as $\text{E}[\prod_{i=1}^{n-1} Y_i]$ with the factors $Y_i\sim U(0,1)$. If they are all independent (i.e. if $X_1,\ldots,X_n$ are sampled independently), this becomes $2^{ |
52,327 | Linear probability model, dummy variables and the same standard errors on all estimates | Recall that the standard errors are the diagonal elements of the matrix
$$
\hat\sigma^2(X'X)^{-1}
$$
As pointed out by @repmat, this result requires that each group is of equal size, i.e., that
$$\sum_iG_{ji}=c$$
for $j=1,\ldots,J$.
In that case, you can easily check that
$$
X'X=n
\begin{pmatrix}
1&1/J&\cdots&\cdots&\cdots&1/J\\
1/J&1/J&0&\cdots&\cdots&0\\
1/J&0&1/J&\ddots&\cdots&0\\
\vdots&0&0&\ddots&\ddots&0\\
\vdots&\vdots&\cdots&\ddots&\ddots&0\\
1/J&0&\cdots&\cdots&0&1/J\\
\end{pmatrix},
$$
assuming that the first column contains the constant term. Assuming that one of the redundant dummies has been dropped, so that $X'X$ is of dimension $J\times J$, the inverse is
$$
(X'X)^{-1}=\frac{1}{n}\begin{pmatrix}
J&-J&\cdots&&\cdots&-J\\
-J&2J&J&\cdots&\cdots&J\\
\vdots&J&2J&J&&J\\
&\vdots&J&2J&\ddots&\vdots\\
\vdots&\vdots&&\ddots&2J&J\\
-J&J&\cdots&\cdots&J&2J
\end{pmatrix}
$$
We see that the diagonal elements are identical (except for the one on the constant), yielding identical standard errors.
The nature of the $y_i$ is irrelevant, as these enter the standard errors only through $\hat\sigma^2$, which however obviously just multiplies each element of $(X'X)^{-1}$ and hence does not affect the result that the diagonal elements are identical.
This result also shows that the squared standard error on the constant is one half that of the dummies.
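A small numerical check of this diagonal structure — a sketch with $J=4$ balanced groups of 25 observations each:
# Sketch: diag of (X'X)^{-1} for an intercept plus J-1 dummies with balanced groups.
J <- 4; m <- 25
g <- factor(rep(1:J, each = m))
X <- model.matrix(~ g)                    # drops one redundant dummy
round(diag(solve(crossprod(X))), 4)
# the dummy entries are all 2J/n = 0.08; the intercept entry J/n = 0.04 is half as large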
P.S.: As suggested by Alecos in the comments, we may define the block matrix
$$
X'X=
n\begin{pmatrix}
A&B\\C&D
\end{pmatrix},
$$
with $A=1$, $B=(1/J,\ldots, 1/J)$, $C=B'$ and $D=I/J$ and use a result on partitioned inverses that the lower right block has the inverse
$$
D^{-1}+D^{-1}C(A-BD^{-1}C)^{-1}BD^{-1}
$$
to see that the result is as presented as above.
UPDATE: Regarding the discussion in the comments of @repmat's answer, the numerical equivalence does not exactly hold for robust standard errors.
This is because the "meat" matrix $X'\Sigma_uX$ of the robust variance estimator
$$
(X'X)^{-1}X'\Sigma_uX(X'X)^{-1}
$$
has a diagonal
$$
\begin{pmatrix}
\sum_{i=1}^n\hat{u}_i^2\\
\sum_{i=1,\,i\in j=1}^n\hat{u}_i^2\\
\vdots\\
\sum_{i=1,\,i\in j=J-1}^n\hat{u}_i^2\\
\end{pmatrix}
$$
(assuming that group $J$ has been dropped to avoid multicollinearity), and there is in general no reason to believe that the sums of squared residuals belonging to the different groups are identical.
The differences are minor, though (at least under homoskedasticity). Here is a modification of his illustration:
set.seed(42)
n <- 999
library(sandwich)
library(lmtest)
year1 <- data.frame(rep(1, n/3))
year2 <- data.frame(rep(2, n/3))
year3 <- data.frame(rep(3, n/3))
require(plyr)
years <- rbind.fill(year1, year2, year3)
years[is.na(years)] <- 0
years <- as.factor(rowSums(years))
y <- round(runif(n),0)
reg <- lm(y ~ years)
coeftest(reg, vcov = vcovHC(reg, "HC0"))
>
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.486486 0.027390 17.7616 <2e-16 ***
years2 -0.027027 0.038678 -0.6988 0.4849
years3 -0.012012 0.038717 -0.3103 0.7564
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 | Linear probability model, dummy variables and the same standard errors on all estimates | Recall that the standard errors are the diagonal elements of the matrix
$$
\hat\sigma^2(X'X)^{-1}
$$
As pointed out by @repmat, this result requires that each group is of equal size, i.e., that
$$\su | Linear probability model, dummy variables and the same standard errors on all estimates
Recall that the standard errors are the diagonal elements of the matrix
$$
\hat\sigma^2(X'X)^{-1}
$$
As pointed out by @repmat, this result requires that each group is of equal size, i.e., that
$$\sum_iG_{ji}=c$$
for $j=1,\ldots,J$.
In that case, you can easily check that
$$
X'X=n
\begin{pmatrix}
1&1/J&\cdots&\cdots&\cdots&1/J\\
1/J&1/J&0&\cdots&\cdots&0\\
1/J&0&1/J&\ddots&\cdots&0\\
\vdots&0&0&\ddots&\ddots&0\\
\vdots&\vdots&\cdots&\ddots&\ddots&0\\
1/J&0&\cdots&\cdots&0&1/J\\
\end{pmatrix},
$$
assuming that the first column contains the constant term. Assuming that one of the redundant dummies has been dropped, so that $X'X$ is of dimension $J\times J$, the inverse is
$$
(X'X)^{-1}=\frac{1}{n}\begin{pmatrix}
J&-J&\cdots&&\cdots&-J\\
-J&2J&J&\cdots&\cdots&J\\
\vdots&J&2J&J&&J\\
&\vdots&J&2J&\ddots&\vdots\\
\vdots&\vdots&&\ddots&2J&J\\
-J&J&\cdots&\cdots&J&2J
\end{pmatrix}
$$
We see that the diagonal elements are identical (except for the one on the constant), yielding identical standard errors.
The nature of the $y_i$ is irrelevant, as these enter the standard errors only through $\hat\sigma^2$, which however obviously just multiplies each element of $(X'X)^{-1}$ and hence does not affect the result that the diagonal elements are identical.
This result also shows that the squared standard error on the constant is one half that of the dummies.
P.S.: As suggested by Alecos in the comments, we may define the block matrix
$$
X'X=
n\begin{pmatrix}
A&B\\C&D
\end{pmatrix},
$$
with $A=1$, $B=(1/J,\ldots, 1/J)$, $C=B'$ and $D=I/J$ and use a result on partitioned inverses that the lower right block has the inverse
$$
D^{-1}+D^{-1}C(A-BD^{-1}C)^{-1}BD^{-1}
$$
to see that the result is as presented as above.
UPDATE: Regarding the discussion in the comments of @repmat's answer, the numerical equivalence does not exactly hold for robust standard errors.
This is because the "meat" matrix $X'\Sigma_uX$ of the robust variance estimator
$$
(X'X)^{-1}X'\Sigma_uX(X'X)^{-1}
$$
has a diagonal
$$
\begin{pmatrix}
\sum_{i=1}^n\hat{u}_i^2\\
\sum_{i=1,\,i\in j=1}^n\hat{u}_i^2\\
\vdots\\
\sum_{i=1,\,i\in j=J-1}^n\hat{u}_i^2\\
\end{pmatrix}
$$
(assuming that group $J$ has been dropped to avoid multicollinearity), and there is in general no reason to believe that the sums of squared residuals belonging to the different groups are identical.
The differences are minor, though (at least under homoskedasticity). Here is a modification of his illustration:
set.seed(42)
n <- 999
library(sandwich)
library(lmtest)
year1 <- data.frame(rep(1, n/3))
year2 <- data.frame(rep(2, n/3))
year3 <- data.frame(rep(3, n/3))
require(plyr)
years <- rbind.fill(year1, year2, year3)
years[is.na(years)] <- 0
years <- as.factor(rowSums(years))
y <- round(runif(n),0)
reg <- lm(y ~ years)
coeftest(reg, vcov = vcovHC(reg, "HC0"))
>
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.486486 0.027390 17.7616 <2e-16 ***
years2 -0.027027 0.038678 -0.6988 0.4849
years3 -0.012012 0.038717 -0.3103 0.7564
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 | Linear probability model, dummy variables and the same standard errors on all estimates
Recall that the standard errors are the diagonal elements of the matrix
$$
\hat\sigma^2(X'X)^{-1}
$$
As pointed out by @repmat, this result requires that each group is of equal size, i.e., that
$$\su |
52,328 | Linear probability model, dummy variables and the same standard errors on all estimates | This will happen if, and only if, the two (or more variables) have the same variance, or in other words that all groups are equally large (in terms of 1's). The nature of $y$ does not matter.
Here is an example from R:
set.seed(42)
year1 <- data.frame(rep(1, 333))
year2 <- data.frame(rep(2, 333))
year3 <- data.frame(rep(3, 333))
require(plyr)
years <- rbind.fill(year1, year2, year3)
years[is.na(years)] <- 0
years <- as.factor(rowSums(years))
y <- round(runif(999),0)
coef(summary(lm(y ~ years)))
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.48648649 0.02739570 17.7577679 1.667287e-61
years2 -0.02702703 0.03874337 -0.6975910 4.855958e-01
years3 -0.01201201 0.03874337 -0.3100404 7.565951e-01
If the groups are not equally large (i.e. do not have the same number of 1's), the errors will be similar but not identical (as above).
The intuition is that you are just estimating means in different groups. Two dummy variables will always have the same variance if the number of 1's is the same, because the formula for the variance is:
$$
p (1-p)
$$
Where $p$ is the (sample) mean of the variable.
Edit: To be clear, you can add other non dummy variables to the regression above. The two dummy variables will still have the same variance. | Linear probability model, dummy variables and the same standard errors on all estimates | This will happen if, and only if, the two (or more variables) have the same variance, or in other words that all groups are equally large (in terms of 1's). The nature of $y$ does not matter.
Here is | Linear probability model, dummy variables and the same standard errors on all estimates
This will happen if, and only if, the two (or more) variables have the same variance, or in other words that all groups are equally large (in terms of 1's). The nature of $y$ does not matter.
Here is an example from R:
set.seed(42)
year1 <- data.frame(rep(1, 333))
year2 <- data.frame(rep(2, 333))
year3 <- data.frame(rep(3, 333))
require(plyr)
years <- rbind.fill(year1, year2, year3)
years[is.na(years)] <- 0
years <- as.factor(rowSums(years))
y <- round(runif(999),0)
coef(summary(lm(y ~ years)))
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.48648649 0.02739570 17.7577679 1.667287e-61
years2 -0.02702703 0.03874337 -0.6975910 4.855958e-01
years3 -0.01201201 0.03874337 -0.3100404 7.565951e-01
If the groups are not equally large (i.e. do not have the same number of 1's), the errors will be similar but not identical (as above).
The intuition is that you are just estimating means in different groups. Two dummy variables will always have the same variance if the number of 1's is the same, because the formula for the variance is:
$$
p (1-p)
$$
Where $p$ is the (sample) mean of the variable.
Edit: To be clear, you can add other non dummy variables to the regression above. The two dummy variables will still have the same variance. | Linear probability model, dummy variables and the same standard errors on all estimates
This will happen if, and only if, the two (or more variables) have the same variance, or in other words that all groups are equally large (in terms of 1's). The nature of $y$ does not matter.
Here is |
52,329 | Linear probability model, dummy variables and the same standard errors on all estimates | You could ask yourself the question 'Why would I expect one of them to be much larger than the others?'. I do not think you are doing anything wrong as far as specifying your model matrix is concerned. I do wonder though whether using linear regression for an outcome which is binary is a good thing. Most people would use logistic regression here. | Linear probability model, dummy variables and the same standard errors on all estimates | You could ask yourself the question 'Why would I expect one of them to be much larger than the others?'. I do not think you are doing anything wrong as far as specifying your model matrix is concerned | Linear probability model, dummy variables and the same standard errors on all estimates
You could ask yourself the question 'Why would I expect one of them to be much larger than the others?'. I do not think you are doing anything wrong as far as specifying your model matrix is concerned. I do wonder though whether using linear regression for an outcome which is binary is a good thing. Most people would use logistic regression here. | Linear probability model, dummy variables and the same standard errors on all estimates
You could ask yourself the question 'Why would I expect one of them to be much larger than the others?'. I do not think you are doing anything wrong as far as specifying your model matrix is concerned |
52,330 | Replicating a linear regression example from Hastie, Tibshirani and Friedman | As they say in the text:
We fit a linear model to the log of prostate-specific antigen, lpsa,
after first standardizing the predictors to have unit variance. We
randomly split the dataset into a training set of size 67 and a test
set of size 30. We applied least squares estimation to the training
set, producing the estimates, standard errors and Z-scores shown in
Table 3.2
You have not standardized the predictors. | Replicating a linear regression example from Hastie, Tibshirani and Friedman | As they say in the text:
We fit a linear model to the log of prostate-specific antigen, lpsa,
after first standardizing the predictors to have unit variance. We
randomly split the dataset into a | Replicating a linear regression example from Hastie, Tibshirani and Friedman
As they say in the text:
We fit a linear model to the log of prostate-specific antigen, lpsa,
after first standardizing the predictors to have unit variance. We
randomly split the dataset into a training set of size 67 and a test
set of size 30. We applied least squares estimation to the training
set, producing the estimates, standard errors and Z-scores shown in
Table 3.2
You have not standardized the predictors. | Replicating a linear regression example from Hastie, Tibshirani and Friedman
As they say in the text:
We fit a linear model to the log of prostate-specific antigen, lpsa,
after first standardizing the predictors to have unit variance. We
randomly split the dataset into a |
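A hedged sketch of that preprocessing step, assuming the ESL prostate data have been read into a data frame prostate containing the eight predictors, lpsa and a logical train column (the exact column layout is my assumption, not something stated in the answer):
# Sketch: standardize the predictors, then run OLS on the training split only.
pred_cols <- setdiff(names(prostate), c("lpsa", "train"))
prostate[pred_cols] <- scale(prostate[pred_cols])     # mean 0, unit variance
fit <- lm(lpsa ~ ., data = subset(prostate, train, select = -train))
round(coef(summary(fit)), 2)   # compare against Table 3.2 of the book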
52,331 | Replicating a linear regression example from Hastie, Tibshirani and Friedman | When I standardize the predictors (i.e. subtract mean and divide by standard deviation), and then do standard OLS on observations marked as belonging to the training set, I get:
2.464932922123745
0.679528141237975
0.263053065732544
-0.141464833536172
0.210146557221827
0.305200597125098
-0.288492772453546
-0.021305038802948
0.266955762119924
Which is basically the same as described in the text (minor differences remain but probably nothing important and I shouldn't have spent this much time already!) | Replicating a linear regression example from Hastie, Tibshirani and Friedman | When I standardize the predictors (i.e. subtract mean and divide by standard deviation), and then do standard OLS on observations marked as belonging to the training set, I get:
2.464932922123745
| Replicating a linear regression example from Hastie, Tibshirani and Friedman
When I standardize the predictors (i.e. subtract mean and divide by standard deviation), and then do standard OLS on observations marked as belonging to the training set, I get:
2.464932922123745
0.679528141237975
0.263053065732544
-0.141464833536172
0.210146557221827
0.305200597125098
-0.288492772453546
-0.021305038802948
0.266955762119924
Which is basically the same as described in the text (minor differences remain but probably nothing important and I shouldn't have spent this much time already!) | Replicating a linear regression example from Hastie, Tibshirani and Friedman
When I standardize the predictors (i.e. subtract mean and divide by standard deviation), and then do standard OLS on observations marked as belonging to the training set, I get:
2.464932922123745
|
52,332 | Downsampling vs upsampling on the significance of the predictors in logistic regression | NEVER use downsampling to make a method work. If the method is any good it will work under imbalance. Removal of samples is not scientific. Logistic regression works well under extreme imbalance. Also (1) logistic regression is not a classification method, (2) make sure you use proper accuracy scoring rules, and (3) logistic regression is not a multivariate (multiple dependent variables) method. It is a multivariable regression method. | Downsampling vs upsampling on the significance of the predictors in logistic regression | NEVER use downsampling to make a method work. If the method is any good it will work under imbalance. Removal of samples is not scientific. Logistic regression works well under extreme imbalance. | Downsampling vs upsampling on the significance of the predictors in logistic regression
NEVER use downsampling to make a method work. If the method is any good it will work under imbalance. Removal of samples is not scientific. Logistic regression works well under extreme imbalance. Also (1) logistic regression is not a classification method, (2) make sure you use proper accuracy scoring rules, and (3) logistic regression is not a multivariate (multiple dependent variables) method. It is a multivariable regression method. | Downsampling vs upsampling on the significance of the predictors in logistic regression
NEVER use downsampling to make a method work. If the method is any good it will work under imbalance. Removal of samples is not scientific. Logistic regression works well under extreme imbalance. |
52,333 | Downsampling vs upsampling on the significance of the predictors in logistic regression | Yes upsampling is better if your machine can handle the bigger dataset to train especially if you're testing for statistically significant associations in your model. Due to the fact that more samples = more statistical power and lower standard error estimate which is why your results are the way they are. | Downsampling vs upsampling on the significance of the predictors in logistic regression | Yes upsampling is better if your machine can handle the bigger dataset to train especially if you're testing for statistically significant associations in your model. Due to the fact that more samples | Downsampling vs upsampling on the significance of the predictors in logistic regression
Yes upsampling is better if your machine can handle the bigger dataset to train especially if you're testing for statistically significant associations in your model. Due to the fact that more samples = more statistical power and lower standard error estimate which is why your results are the way they are. | Downsampling vs upsampling on the significance of the predictors in logistic regression
Yes upsampling is better if your machine can handle the bigger dataset to train especially if you're testing for statistically significant associations in your model. Due to the fact that more samples |
52,334 | Where in Elements of Statistical Learning does it talk of a "trick" to deal with categorical variables for binary classification? | It is mentioned in Section 9.2.4 under Categorical Predictors.
Here is a basic example of the "trick" and why it's important.
Suppose you have a binary response $y$ with values $\{\text{Yes}, \text{No}\}$ and a categorical variable $x$ with levels $\{A, B, C, D, E\}$. When splitting on $x$ at a given node, you have $15$ $(=2^{5-1}-1)$ possible splits. In this scenario, you could consider all possible splits and choose the optimal cutpoint using a specified impurity measure (e.g. entropy, Gini index). However, for a categorical variable with many levels, this strategy will fail.
Instead of considering all $15$ possible splits, let's reduce this to only $4$ splits (or fewer if there are ties). Suppose the proportion of $\text{Yes}$ is $0.8$ in class $A$, $0.7$ class $B$, $0.7$ class $C$, $0.2$ class $D$, and $0.9$ class $E$. One can reorder this as $(0.2, 0.7, 0.7, 0.8, 0.9)$ and split $x$ assuming the values are continuous. Once one determines the optimal cutpoint, say $\leq 0.75$, the values are back transformed, so the split to the left has $x \in \{ B, C, D\}$ and the split to the right has $x \in \{A, E\}$. | Where in Elements of Statistical Learning does it talk of a "trick" to deal with categorical variabl | It is mentioned in Section 9.2.4 under Categorical Predictors.
Here is a basic example of the "trick" and why it's important.
Suppose you have a binary response $y$ with values $\{\text{Yes}, \text{No | Where in Elements of Statistical Learning does it talk of a "trick" to deal with categorical variables for binary classification?
It is mentioned in Section 9.2.4 under Categorical Predictors.
Here is a basic example of the "trick" and why it's important.
Suppose you have a binary response $y$ with values $\{\text{Yes}, \text{No}\}$ and a categorical variable $x$ with levels $\{A, B, C, D, E\}$. When splitting on $x$ at a given node, you have $15$ $(=2^{5-1}-1)$ possible splits. In this scenario, you could consider all possible splits and choose the optimal cutpoint using a specified impurity measure (e.g. entropy, Gini index). However, for a categorical variable with many levels, this strategy will fail.
Instead of considering all $15$ possible splits, let's reduce this to only $4$ splits (or fewer if there are ties). Suppose the proportion of $\text{Yes}$ is $0.8$ in class $A$, $0.7$ class $B$, $0.7$ class $C$, $0.2$ class $D$, and $0.9$ class $E$. One can reorder this as $(0.2, 0.7, 0.7, 0.8, 0.9)$ and split $x$ assuming the values are continuous. Once one determines the optimal cutpoint, say $\leq 0.75$, the values are back transformed, so the split to the left has $x \in \{ B, C, D\}$ and the split to the right has $x \in \{A, E\}$. | Where in Elements of Statistical Learning does it talk of a "trick" to deal with categorical variabl
It is mentioned in Section 9.2.4 under Categorical Predictors.
Here is a basic example of the "trick" and why it's important.
Suppose you have a binary response $y$ with values $\{\text{Yes}, \text{No |
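A hedged sketch of the reordering device described above, on invented data — a real CART implementation performs this inside its split search, here it is only made explicit:
# Sketch: order the levels of x by their observed proportion of "Yes" (y = 1),
# then treat the resulting rank as a continuous variable for split-point search.
set.seed(4)
x <- factor(sample(LETTERS[1:5], 200, replace = TRUE))
p_yes <- c(A = 0.8, B = 0.7, C = 0.7, D = 0.2, E = 0.9)[as.character(x)]
y <- rbinom(200, 1, p_yes)                  # 1 = "Yes"
prop  <- tapply(y, x, mean)                 # proportion of Yes per level
ord   <- names(sort(prop))                  # e.g. "D" "B" "C" "A" "E"
x_num <- match(as.character(x), ord)        # ordinal recoding of x
# any threshold on x_num corresponds to a subset split such as {D} vs {B,C,A,E}
# or {D,B,C} vs {A,E}, so only 4 cutpoints need to be examined instead of 15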
52,335 | Naive Ridge Regression in R? | First, very simply, I don't think your call to solve looks right, this is what I would expect
solve(t(X) %*% X + lambda.diag, t(X) %*% y)
Your code seems to be explicitly calculating a matrix inverse and then multiplying. This is mathematically correct, but computationally incorrect. It is always better to solve the system of equations. I've gotten in the habit of reading equations like $y = X^{-1}z$ as "solve the system of equations $Xy = z$ for $y$."
On a more mathematical note, you should not have to include an intercept term when fitting a ridge regression.
It is very important, when applying penalized methods, to standardize your data (as you point out with your comment about scale). It's also important to realize that penalties are not generally applied to the intercept term, as this would cause the model to violate the attractive property that the average predictions equal the average response (on the training data).
Together, these facts (centered data, no intercept penalty) imply that the intercept parameter estimate in a ridge regression is known a priori, it is zero.
The coefficient vector from ridge regression is the solution to the penalized optimization problem
$$ \beta = argmin \left( (y - X\beta)^t (y - X\beta) + \frac{1}{2}\sum_{j > 0} \beta_j^2 \right) $$
Taking a partial with respect to the intercept parameter
$$ \frac{\partial L}{\partial \beta_0} =
\sum_{i=1}^{n} \left( y_i - \sum_{j=0}^q \beta_j x_{ij} \right) x_{i0} $$
But $x_{i0}$ are the entries in the model matrix corresponding to the intercept, so $x_{i0} = 1$ always. So we get
$$\sum_{i=1}^n y_i - \sum_{j=0}^q \beta_j \sum_{i=1}^n x_{ij} $$
The first term, with the sum over y, is zero because $y$ is centered (or not, a good check of understanding is to work out what happens if you don't center y). In the second term, each predictor is centered, so the sum over $i$ is zero for every predictor $j$ except the intercept. For the intercept, the second term $i$ sum comes out to $n$ (it's $1 + 1 + 1 + \cdots$). So this whole thing reduces to
$$ -n \beta_0 $$
Setting this partial equal to zero, $-n\beta_0 = 0$, we recover $\beta_0 = 0$, as expected.
So, you do not need to bind on an intercept term to your model matrix. Your function should either expect standardized data (and if you plan on making it public, it should check this is so), or standardize the data itself. Once this is done, the intercept is known to be zero. I'll leave it as an exercise to work out what the intercept should be when you translate the coefficients back to the un-normalized scale.
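One hedged way of assembling these pieces — a sketch of my own, not the poster's function, and it may still differ slightly from lm.ridge, which applies its own internal scaling conventions:
# Sketch: naive ridge fit on standardized data; coefficients are mapped back
# to the original scale and the implied intercept is reconstructed at the end.
naive_ridge <- function(X, y, lambda) {
  X  <- as.matrix(X)
  mx <- colMeans(X); sx <- apply(X, 2, sd); my <- mean(y)
  Xs <- scale(X, center = mx, scale = sx)   # standardized predictors
  ys <- y - my                              # centered response
  b_s <- solve(crossprod(Xs) + lambda * diag(ncol(Xs)), crossprod(Xs, ys))
  beta  <- as.vector(b_s) / sx              # back to the original x-scale
  alpha <- my - sum(mx * beta)              # implied (unpenalized) intercept
  c("(Intercept)" = alpha, setNames(beta, colnames(X)))
}
# example call on a built-in data set:
# naive_ridge(mtcars[, c("wt", "hp")], mtcars$mpg, lambda = 1)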
I still don't get results equivalent to lm.ridge though, but it might just be a question of translating the formula back into the original scales. However, I can't seem to work out how to do this. I thought it would just entail multiplying by the standard deviation of the response and adding the mean, as usual for standard scores, but either my function is still wrong or rescaling is more complex than I realize.
It's a bit more complicated, but not too bad if you are careful. Here's a place where I answered a very similar question:
GLMnet - “Unstandardizing” Linear Regression Coefficients
You may have to make a very simple change if you are not standardizing $y$.
52,336 | Naive Ridge Regression in R? | Try to type lm.ridge in your R console. You will see the code for the function, and it does standardize the input. Try to standardize the input to your function and compare the results.
If you read the documentation you can also see:
If an intercept is present in the model, its coefficient is not penalized. (If you want to penalize an intercept, put in your own constant term and remove the intercept.)
You penalize the intercept in your function, so that is also going to create a difference.
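Concretely, assuming only that the MASS package is installed:
library(MASS)     # lm.ridge lives in MASS
lm.ridge          # typing the name without () prints the function's source, including the scaling step
?lm.ridge         # the help page documents that the intercept coefficient is not penalized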
52,337 | Naive Ridge Regression in R? | Not a complete answer here, but some relevant things for those that will try to implement Ridge regression by themselves and compare their results to lm.ridge.
lm.ridge uses the SVD to estimate the coefficients. It is pretty straightforward to implement yourself.
A relevant explanation of how to do that is in the upvoted comment on this question:
Relevant answer
It also standardizes the predictors in a different way from the scale() function: it divides by $n$ instead of $n-1$ when computing the standard deviations. That can explain small deviations in the estimates.
There is also something up with the intercept (it deals with it in a different way than I do), as pointed out, but I couldn't get everything running inside lm.ridge to work out what exactly it does with the intercept term.
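Here is a minimal sketch of that SVD route combined with divide-by-$n$ scaling (an illustration of the idea, not a reproduction of lm.ridge's exact internals; all names are made up):
ridge_svd <- function(X, y, lambda) {
  Xm <- colMeans(X); ym <- mean(y)
  Xc <- sweep(X, 2, Xm)                  # centre the predictors
  Xs <- sqrt(colMeans(Xc^2))             # divide-by-n standard deviations, as described above
  Z  <- sweep(Xc, 2, Xs, "/")
  s  <- svd(Z)                           # Z = U D V'
  d  <- s$d
  # ridge coefficients on the scaled basis: V diag(d / (d^2 + lambda)) U' (y - ym)
  b_scaled <- s$v %*% ((d / (d^2 + lambda)) * crossprod(s$u, y - ym))
  b <- drop(b_scaled) / Xs               # back to the original predictor scale
  c(intercept = ym - sum(Xm * b), b)
}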
52,338 | Residuals in regression should not be correlated with another variable | This is based on the approach that there exists a set of explanatory variables (EV) whose variability captures everything in the variability of the dependent variable bar "random, unpredictable noise". As the link itself says clearly,
"The idea is that the deterministic portion of your model is so good at explaining (or predicting) the response that only the inherent
randomness of any real-world phenomenon remains leftover for the error
portion."
So, if the residuals $\hat {\mathbf u}$ from the regression of say, DV $y$ on EVs $\{X_1, X_2\}$ are correlated with some third variable $Z$ (which was not included in the regression), it means that the residuals do not behave like "random, unpredictable noise". So the set of EVs that was used, is not that set which "captures everything in the variability of $y$", and so it can be "improved upon".
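A quick simulation illustrates this premise before the algebra below (the coefficients and variable names are invented for the example):
set.seed(1)
n  <- 1000
x1 <- rnorm(n)
z  <- 0.5 * x1 + rnorm(n)            # a third variable, correlated with the included regressor
y  <- 1 + 2 * x1 + 3 * z + rnorm(n)
fit_small <- lm(y ~ x1)              # z is omitted
cor(resid(fit_small), z)             # clearly non-zero: the residuals are still 'predictable'
fit_big <- lm(y ~ x1 + z)            # z is included
cor(resid(fit_big), z)               # essentially zero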
Let's see what this means in the merciless language of mathematics.
We have the model (matrix notation) (and a sample of size $n$)
$$\mathbf y = \mathbf X\beta + \mathbf u$$
and let's assume that the conditions needed for the OLS estimator to triumph do hold:
$$E(\mathbf u \mid \mathbf X) = 0, \;\; {\rm Var}(\mathbf u \mid \mathbf X) = \sigma^2 I$$
Note that "the error is random, unpredictable noise" is not part of these assumptions. What the above says is that the error is unpredictable with respect to the specific set of regressors (check out the "Conditional Expectation Function" approach to linear regression).
Running the estimation we obtain the OLS estimator
$$\hat \beta = (\mathbf X'\mathbf X)^{-1}\mathbf X'\mathbf y = \beta + (\mathbf X'\mathbf X)^{-1}\mathbf X'\mathbf u$$
and from it the residuals. The residuals are an estimator of the error, and, since
$$u_i = y_i - \mathbf x'_i\beta = \mathbf x'_i \hat \beta +\hat u_i - \mathbf x'_i\beta$$
we can re-arrange to get
$$\hat u_i = u_i - \mathbf x'_i(\hat \beta - \beta),\;\; i=1,...,n$$
So (and noting moreover that the residuals by construction have zero mean), if the residuals are correlated with some variable $Z$, which is not in $\mathbf X$, this means
$${\rm Cov}(\hat u_i, z_i) = E\Big(z_i\cdot [u_i - \mathbf x'_i(\hat \beta - \beta)]\Big) = E(u_iz_i) - E\Big(z_i\cdot\mathbf x'_i(\mathbf X'\mathbf X)^{-1}\mathbf X'\mathbf u\Big) \neq 0$$
Now, apply the Law of Iterated Expectations on the second term of the last expression to get
$$E(u_iz_i) - E\Big[E\Big(z_i\cdot\mathbf x'_i(\mathbf X'\mathbf X)^{-1}\mathbf X'\mathbf u\mid Z, \mathbf X\Big)\Big] \neq 0$$
$$\implies E(u_iz_i) - E\Big[z_i\cdot\mathbf x'_i(\mathbf X'\mathbf X)^{-1}\mathbf X'E\Big(\mathbf u\mid Z, \mathbf X\Big)\Big] \neq 0$$
The possible scenarios here are:
A) $E(u_iz_i) = 0, \;\;\; E\Big(\mathbf u\mid Z, \mathbf X\Big) \neq 0$
In this case, if you follow the advice given and re-run the regression of $\mathbf y$ using $\{Z, \mathbf X\}$ as regressors this time, you should know that, in the attempt to make the residuals behave as "random noise" with respect to other variables not in $\mathbf X$ (which is far more ambitious than what we usually assume for our models), you will lose the finite-sample unbiasedness property of the estimator (due to $E\Big(\mathbf u\mid Z, \mathbf X\Big) \neq 0$), but at least you will retain asymptotic consistency (due to $E(u_iz_i) = 0$, and under the assumptions already made). That's a good trade-off: Econometrics has long abandoned the hopes for finite-sample unbiasedness (and if you look around this site, you will find that Statisticians in general have adopted, or pioneered, the same stance).
B) $E(u_iz_i) \neq 0, \;\;\; E\Big(\mathbf u\mid Z, \mathbf X\Big) \neq 0$
Here, you will lose both unbiasedness and consistency, and your estimator starts looking rather weak in good properties, to put it mildly, and I am not sure that "making the residuals random" justifies the price to pay. To my eyes, it doesn't because the estimates for $\beta$ are now very unreliable, and so the "random" residuals achievement becomes an artificial construct, and not a step closer to the true associations. And what if there is yet another variable $W$ which still can predict the new residuals obtained?
So the advice given may send you to better places, or it may send you to much worse places. Therefore, the critical issue is: can you obtain evidence related to which of the two scenarios holds in a given case? - but this exceeds the scope of this answer.
The lesson I take from all this is: "intuitive discussions" about "random and deterministic parts of a dependent variable" may be useful to a degree - but somewhere along the road, one should remember that the estimators that will eventually make our attempts at estimation and inference concrete and tangible are mathematical tools with specific properties and specific limits to what they can do, achieve and guarantee. And sometimes they cannot achieve what appears "powerfully logical and intuitive" in a non-mathematical approach to the matter.
52,339 | Residuals in regression should not be correlated with another variable | Your residuals should behave like unpredictable noise. If they show some other structure, you might have a problem like omitted-variable bias. You should then be able to predict the residuals by using the omitted variable(s). If you re-estimate your model and now include the variable(s) you omitted before, then your residuals should again look like noise.
Or let me rephrase: You explain the dependent variable with the systematic part of the model (e.g. $X\beta$) and the (usually) unsystematic part, the residual. But, if the residuals have a structure apart from being random noise, then they aren't that "unsystematic", are they? Now if you can explain that structure with certain variables, then go on and include them in your model in the first place.
52,340 | Residuals in regression should not be correlated with another variable | Another way to think about it is by analogy. Box, Hunter & Hunter (in their "Statistics for Experimenters", which is highly recommended) use the following analogy with chemistry and (drinking) water purification. To purify the water, we pass it through some sort of filter to remove impurities. How do we test whether any impurities are left? We take samples of the purified water, send them to a laboratory, and test for various chemical substances.
Now, in this analogy, the filter corresponds to our statistical model: the purpose of the model is to filter the information out of the data. To see if it worked (that is, whether it removed all the information there was in the data and left nothing), we test the residuals: they are what is left after the filter took what it should. We test the residuals to see if any impurities (that is, information) are left. An example of such an impurity is correlation between the residuals and some extra variable not used for the model. If that correlation exists, it means the residuals are not pure white noise (that is, clean water ...), and we try to extend the model (that is, the filter) to remove that information also. One way of doing that is to include that extra variable in the model!
52,341 | What is the difference between multimodal and multivariate? | Put very simply, "multi-modal" refers to a dataset (variable) in which there is more than one mode, whereas "multi-variate" refers to a dataset in which there is more than one variable.
Here is a simple demonstration, coded with R:
set.seed(5104)
# a mixture of two normals with well-separated means -> a bimodal (multimodal) variable
x1mm = c(rnorm(50, mean=-2), rnorm(50, mean=2))
# a single normal -> a unimodal variable
x1um = rnorm(100, mean=0.5, sd=sqrt(3))
plot(density(x1mm), main="multimodal data")
plot(density(x1um), main="unimodal data")
# add a second variable that depends on the first -> a bivariate (multivariate) dataset
y = .5*x1um + rnorm(100)
plot(x1um, y, xlab="X", ylab="Y", main="bivariate data")
That's the gist of it. When you have response and regressor variables, and you want to fit a model that maps them, the use of "multivariate" depends on the nature of the mapping. When there is only one response and one covariate, we say this is simple regression; if there is more than one covariate, we say it is multiple regression; and if there is more than one response variable, we call it multivariate regression. In your case, I gather you are interested in clustering / unsupervised learning, so these distinctions don't really apply.
However, the clustering aspect makes this a little more interesting. In order to cluster successfully, you generally want your data to be multimodal in the full data space. The clusters / latent groupings are found by finding a partition that separates the data into unimodal subsets that are more coherent than the original (unpartitioned) superset.
52,342 | What is the difference between multimodal and multivariate? | Multivariate refers to cases where you have more than one outcome variable (not levels). In cases where you have just one outcome variable, one speaks of a univariate problem. But as @gung already said, in practice and sometimes even in textbooks these terms get a little blurry and often refer to cases where you have more than one input variable, e.g. multiple regression.
Multimodal, on the other hand, can refer to the experimental design: one says a design is multimodal if you measure one construct with different methods (e.g. questionnaire and observation). But it can also refer to the distribution of your data, namely a distribution with more than one mode.
To conclude, the meaning of those terms depends heavily on the context.
52,343 | Why is there a need for a 'sampling distribution' to find confidence intervals? | 1. Goal of using confidence intervals
As you correctly stated, the rationale behind confidence intervals is to get an idea about the value of some unknown parameter, in your case the 'mean' weight of a basket of apples. One way to find out is to weigh each and every existing basket, and then compute the average.
Obviously, when there are a huge number of baskets to weigh, this can be very time consuming and expensive. Therefore, in many practical cases we would like to measure a 'limited number' of baskets and from this so-called sample, we want to get an 'idea' about the mean weight of all baskets, i.e. we want to use the weight of the baskets in our 'limited' sample to get an estimate for the mean weight over all the baskets.
Obviously, as we only 'estimate' this overall (unknown) mean, we will make an estimation error, and we want not only an estimate of the overall mean but also an idea of the 'estimation error'. This is where the notion of a confidence interval comes in: by 'expressing' our estimate as an interval, we also have an idea about its precision.
2. How can we find such an estimate and its (im)precision? - the sampling distribution of the sample average
For the problem of finding confidence intervals for the mean, the basic theorem used to derive confidence intervals is - as you said - the central limit theorem (CLT). The CLT states that, under fairly general conditions, the arithmetic average of a sample of independent random variables is approximately normally distributed. Let us simplify a bit and assume that the weights of your apple baskets are normally distributed with unknown mean and known standard deviation (if the standard deviation is unknown the idea is the same, but there are some minor differences because the standard deviation will have to be estimated from your sample - see the end of this section). If I denote the weight of a basket as $W$, then this is written as $W \sim N(\mu, \sigma)$.
Now you will take a sample of $n$ baskets, each with a weight $w_i, i= 1 \dots n$.
The CLT says that, if we 'pick' randomly (independently) $n$ baskets out of the set of all the baskets, then the arithmetic average of the weights in the sample, $\bar{w}=\frac{1}{n} \sum_{i=1}^n w_i$, is a draw from a random variable that converges (as $n \to +\infty$) in distribution to a random variable that:
has a normal distribution;
has a mean equal to the overall mean of all the baskets;
has a variance equal to the variance of all the baskets divided by the sample size (equivalently, a standard deviation equal to the standard deviation of all the baskets divided by the square root of the sample size)
So $\bar{w}$ is a draw from $N(\mu, \frac{\sigma}{\sqrt{n}})$.
(Note: by the CLT this remains approximately valid even if the random variable $W$ has a distribution other than the normal.)
$N(\mu, \frac{\sigma}{\sqrt{n}})$ is called the sampling distribution of the sample average, so it is the distribution of the averages of all the samples of size $n$.
What do we have now:
we have the weight $w_i$ of the $n$ baskets in a sample of $n$ baskets and we can compute $\bar{w}$
from the CLT we know that the probabilities of 'observing' a value $\bar{w}$ for our (randomly drawn) sample can be computed using the density of the normal distribution.
E.g. It is well known that, as the sampling distribution of the sample average is normal, the probability that $\bar{w}$ lies between $[\mu-1.96\frac{\sigma}{\sqrt{n}};\mu+1.96\frac{\sigma}{\sqrt{n}}]$ is $95\%$.
Let us analyse this in detail: $\bar{w} \in [\mu-1.96\frac{\sigma}{\sqrt{n}};\mu+1.96\frac{\sigma}{\sqrt{n}}]$ is equivalent to saying that $\bar{w} \ge \mu-1.96\frac{\sigma}{\sqrt{n}}$ and $\bar{w} \le \mu+1.96\frac{\sigma}{\sqrt{n}}$.
The first inequality can be re-written: $\bar{w} \ge \mu-1.96\frac{\sigma}{\sqrt{n}} \iff \bar{w} + 1.96\frac{\sigma}{\sqrt{n}} \ge \mu$ and the second inequality as $\bar{w} \le \mu+1.96\frac{\sigma}{\sqrt{n}} \iff \bar{w} - 1.96\frac{\sigma}{\sqrt{n}} \le \mu$
Therefore $\bar{w} \in [\mu-1.96\frac{\sigma}{\sqrt{n}};\mu+1.96\frac{\sigma}{\sqrt{n}}]$ is equivalent to $\mu \in [\bar{w}-1.96\frac{\sigma}{\sqrt{n}};\bar{w}+1.96\frac{\sigma}{\sqrt{n}}]$ and obviously, in that case it must hold that the probability that $\bar{w} \in [\mu-1.96\frac{\sigma}{\sqrt{n}};\mu+1.96\frac{\sigma}{\sqrt{n}}]$ (which was $95\%$, see supra) is equal to the probability that $\mu \in [\bar{w}-1.96\frac{\sigma}{\sqrt{n}};\bar{w}+1.96\frac{\sigma}{\sqrt{n}}]$
So we find that the probability that $\mu \in [\bar{w}-1.96\frac{\sigma}{\sqrt{n}};\bar{w}+1.96\frac{\sigma}{\sqrt{n}}]$ is $95\%$.
This means that the unknown mean of all baskets $\mu$ is (with a probability of $95\%$) in the interval $[\bar{w}-1.96\frac{\sigma}{\sqrt{n}};\bar{w}+1.96\frac{\sigma}{\sqrt{n}}]$, and we can compute this interval from our sample (because $\bar{w}$ can be computed from the sample and we assumed the $\sigma$ is known).
So we are confident to find the unknown mean $\mu$ between $[\bar{w}-1.96\frac{\sigma}{\sqrt{n}};\bar{w}+1.96\frac{\sigma}{\sqrt{n}}]$ at a $95\%$ confidence level.
Note that this interval could only be derived because we knew the sampling distribution of the sample average.
P.S. If the standard deviation $\sigma$ is not known, then it has to be estimated from the sample, i.e. it is replaced by the sample standard deviation $s$. This has consequences for the sampling distribution of the sample mean: it is no longer normal; instead, the standardized quantity $\frac{\bar{w}-\mu}{s/\sqrt{n}}$ has a $t$-distribution with $n-1$ degrees of freedom. The reasoning above remains the same; the only differences are that the factor $1.96$ is replaced by the corresponding quantile of the $t$-distribution and, as said, $\sigma$ is replaced by $s$.
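To make the formulas concrete, here is a small R sketch; the basket weights, $\sigma$ and the confidence level are made up for the illustration:
w <- c(9.8, 10.2, 10.1, 9.7, 10.4, 10.0)    # hypothetical basket weights
n <- length(w); wbar <- mean(w)
sigma <- 0.3                                 # known-sigma case: normal-based interval
wbar + c(-1, 1) * qnorm(0.975) * sigma / sqrt(n)
s <- sd(w)                                   # unknown-sigma case: s and the t quantile instead
wbar + c(-1, 1) * qt(0.975, df = n - 1) * s / sqrt(n)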
3. Interpretation of a confidence interval.
It is very important to note that $\mu$ is unknown but it is a fixed number; it is the mean of all baskets and that number could in principle be known - we do not know it because we decided to estimate it from a sample for reasons of time and cost!
In order to estimate the unknown $\mu$, we have drawn a random sample. If we take another sample, then we will obviously find another value for $\bar{w}$ and thus another confidence interval!
So the overall mean is an unknown but fixed number, while the interval is computed from a random sample and therefore the interval is random!
So what does a '$95\%$' confidence interval mean? It means that, if we draw an infinite number of random samples, and for each sample we compute the $95\%$ confidence interval, then we find an infinite number of confidence intervals, and $95\%$ of all these intervals will contain the unknown $\mu$.
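This frequentist reading is easy to check by simulation; a minimal sketch, with an arbitrary 'true' $\mu$ and $\sigma$:
set.seed(42)
mu <- 10; sigma <- 0.3; n <- 25
covers <- replicate(10000, {
  w <- rnorm(n, mean = mu, sd = sigma)
  half <- qnorm(0.975) * sigma / sqrt(n)
  (mean(w) - half <= mu) && (mu <= mean(w) + half)
})
mean(covers)     # close to 0.95: about 95% of the intervals contain mu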
4. Remark on the 'precision' and the sample size
By the above, it can be seen that the confidence interval will be smaller (and thus the precision of the estimate will be higher) when $\frac{\sigma}{\sqrt{n}}$ becomes smaller, or thus when $n$ becomes larger.
In the example in your question you weigh only one basket, so $n=1$ in your example.
For completeness I remark that if your population (i.e. the set of all the baskets) is not 'infinite', then a 'finite population correction' can be applied. The idea is simple: if the total number of baskets in the whole population is $N$, and I draw a sample of size $n$ equal to the population size $N$, then there is no imprecision anymore (because I weighed all available baskets), so in that case the confidence interval should reduce to only one value: $\mu$. This is not the case if we simply apply the above formulas; they become for $n=N$:
$[\bar{w}-1.96\frac{\sigma}{\sqrt{N}};\bar{w}+1.96\frac{\sigma}{\sqrt{N}}]$,
which is not a single value, contrary to what we expect.
Therefore, for finite populations of size $N$, the $\sigma$ in the above formulas under section '2' should be replaced by $\sqrt{1-\frac{n}{N}}\sigma$ and the interval becomes:
$[\bar{w}-1.96\frac{\sigma \sqrt{1-\frac{n}{N}}}{\sqrt{n}};\bar{w}+1.96\frac{\sigma \sqrt{1-\frac{n}{N}}}{\sqrt{n}}]$.
It is easily verified that for $n=N$ this reduces to the singleton $\bar{w}$, but as we have been exhaustive, $\bar{w}=\mu$.
If $N$ is infinite, $1-\frac{n}{N}=1$, and we find the formulas derived under section 2.
52,344 | Why is there a need for a 'sampling distribution' to find confidence intervals? | First we need to establish the population parameter, or what it is that we want to estimate:
Example: You want to know the average weight of a Californian. (This is the population parameter you are looking for)
Now if you want the correct answer you need to measure the weight of every single Californian and get the average. Unfortunately we do not have the money and time to do this.
So here is what we can do,
we take samples. Let’s say we took a sample of 100 Californians and got the average, which was 150lbs.
Now what we have is a “sample mean or sample average” from a single sample. If we took another 10 such samples of size 100, we would have 10 more “sample means or sample averages”.
If you create a distribution of such "sample averages" by taking as many samples as possible, the mean of this sampling distribution will converge to the true population parameter. Below is an example:
Ex: Sequence of numbers { 1, 2 and 3}
The average of these three numbers is 2.
Let’s take samples of 2 from the sequence of three numbers and calculate the sample average
1, 2 = 1.5
2, 3 = 2.5
3, 1 = 2
So the sample averages are {1.5, 2.5, and 2}. The average of these sample averages is:
$(1.5 + 2.5 + 2)/3 = 2$,
which is the true average.
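You can reproduce this tiny enumeration directly in R:
samples <- combn(c(1, 2, 3), 2)    # all samples of size 2 (without replacement)
colMeans(samples)                  # the sample averages: 1.5, 2.0, 2.5
mean(colMeans(samples))            # their average: 2, the true mean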
Back to your original question: You have a sample of apples and you have an average value, which is just one sample average. We call this a point estimate. From one sample average alone you cannot say how far it might be from the true value. So we create an interval around this point estimate and claim, with 95% confidence, that the true value falls inside this interval.
This is what we call a confidence interval (a confidence interval around the point estimate).
52,345 | How to use cross-validation with regularization? | You generally do have infinitely many values of $\lambda$ to choose from. There are two approaches to resolving this difficulty.
You can attempt to be very creative and work out mathematics for estimating the full path of models as $\lambda$ varies. This is only possible in some cases, but when it is, it is a powerful method indeed. For example the LARS methodology for lasso linear regression is exactly of this type. It is very beautiful when this works out.
But usually you can't or don't know how to do that, so:
You simply discretize the problem by choosing an appropriate finite sequence of lambdas $\lambda_0 < \lambda_1 < \cdots < \lambda_N$ and working only with those values. There is still some art to this, as determining what $\lambda_N$ (the maximum) and $\lambda_0$ (the minimum) should be depends on the problem being solved. You often want to choose $\lambda_N$ to be the least value that collapses the model completely to predicting the average value of the response. For example, this is the approach taken by the famed glmnet; a small sketch of such a grid is given below.
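Here is a minimal sketch of building such a grid; the log-spacing and the lasso-style formula for the largest useful $\lambda$ are conventions borrowed from glmnet-like implementations, shown only as an illustration:
make_lambda_grid <- function(lambda_max, n_lambda = 100, ratio = 1e-4) {
  # log-spaced sequence from lambda_max down to lambda_max * ratio
  exp(seq(log(lambda_max), log(lambda_max * ratio), length.out = n_lambda))
}
# for the lasso with standardized X and centred y, the smallest lambda that kills all
# coefficients is max |X'y| / n; for ridge no finite lambda does this, so one simply
# picks a 'large enough' value instead, e.g.:
lambdas <- make_lambda_grid(lambda_max = 100)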
52,346 | How to use cross-validation with regularization? | The procedure for cross-validated regularization parameter selection is the following:
Discretize your lambdas: $\lambda_0, \lambda_1, ..., \lambda_n$ (for example you may choose $\lambda = 10^{-3}, 3 \times 10^{-3}, 10^{-2}, ..., 10^3$, but this is up to you).
Divide your dataset into $K$ subsamples, where $K$ is the number of cross-validation folds.
For each $\lambda$, compute the cross-validated error when training your model with regularization parameter $\lambda$ (this is the cross-validation part: for each fold, train on all the other folds and compute the error on the reserved fold; then average out the error).
Choose the $\lambda$ which gave the lowest cross-validated error (alternatively, choose the largest value of $\lambda$ whose error is within one standard error of the lowest cross-validated error, if you want to be extra conservative). A minimal sketch of this loop is given below.
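Here is a minimal sketch of that loop for ridge regression; the fold count and lambda grid are placeholders, and the inner fit is the plain closed-form ridge solution on data assumed to be standardized already:
cv_ridge <- function(X, y, lambdas, K = 10) {
  folds <- sample(rep(seq_len(K), length.out = nrow(X)))   # random balanced fold assignment
  cv_err <- sapply(lambdas, function(lambda) {
    fold_err <- sapply(seq_len(K), function(k) {
      tr <- folds != k; te <- folds == k
      beta <- solve(crossprod(X[tr, ]) + lambda * diag(ncol(X)),
                    crossprod(X[tr, ], y[tr]))             # ridge fit on the training folds
      mean((y[te] - X[te, ] %*% beta)^2)                   # error on the held-out fold
    })
    mean(fold_err)                                         # average over folds
  })
  lambdas[which.min(cv_err)]                               # lambda with the lowest CV error
}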
52,347 | How to use cross-validation with regularization? | Test error versus model complexity/capacity forms a U-shaped curve, while training error keeps falling as complexity grows. In learning a model, there are 2 goals:
Find the optimum on the model complexity axis where the U curve starts to go up again. This happens for the test error curve, even though the training error curve continues to go down, overfitting the training data with the high-complexity model (marching towards interpolation).
Reduce the gap between the two curves, meaning, reduce the gap between training error and test error.
(1) is achieved by using cross validation - to find the fine balance between bias and variance
(2) is achieved by using regularization - to bring down the test error U curve closer to the training error curve.
52,348 | Monte Carlo integration with imposed variance | The problem is that without knowing exactly what $\theta$ is, we cannot know the variance of its Monte-Carlo estimator. The solution is to estimate that variance and hope the estimate is sufficiently close to the truth.
The very simplest form of Monte-Carlo estimation surrounds the graph of the integrand, $f(x) = e^{-x^2}(1-x)$, by a box (or other congenial figure that is easy to work with) of area $A$ and places $n$ independent uniformly random points in the box. The proportion of points lying under the graph, times the area $A$, estimates the area $\theta$ under the graph. As usual, let's write this estimator of $\theta$ as $\hat\theta$. For examples, see the figure at the end of this post.
Because the chance of any point lying under the graph is $p = \theta / A$, the count $X$ of points lying under the graph has a Binomial$(n, p)$ distribution. This has an expected value of $np$ and a variance of $np(1-p)$. The variance of the estimate therefore is
$$\text{Var}(\hat \theta) = \text{Var}\left(\frac{AX}{n}\right) = \left(\frac{A}{n}\right)^2\text{Var}(X) = \left(\frac{A}{n}\right)^2 n \left(\frac{\theta}{A}\right)\left(1 - \frac{\theta}{A}\right) = \frac{\theta(A-\theta)}{n}.$$
Because we do not know $\theta$, we first use a small $n$ to obtain an initial estimate and plug that into this variance formula. (A good educated guess about $\theta$ will serve well to start, too. For instance, the graph (see below) suggests $\theta$ is not far from $1/2$, so you could start by substituting that for $\hat\theta$.) This is the estimated variance,
$$\widehat{\text{Var}}(\hat\theta) = \frac{\hat\theta(A-\hat\theta)}{n}.$$
Using this initial estimate $\hat\theta$, find an $n$ for which $\widehat{\text{Var}}(\hat\theta) \le 0.0001 = T$. The smallest possible such $n$ is easily found, with a little algebraic manipulation of the preceding formula, to be
$$\hat n = \bigg\lceil\frac{\hat\theta(A - \hat\theta)}{T}\bigg\rceil.$$
Iterating this procedure eventually produces a sample size that will at least approximately meet the variance target. As a practical matter, at each step $\hat n$ should be made sufficiently greater than the previous estimate of $n$ so that eventually a large enough $n$ is guaranteed to be found for which $\widehat{\text{Var}}(\hat\theta)$ is sufficiently small. For instance, if $\hat n$ is less than twice the preceding estimate, use twice the preceding estimate instead.
In the example in the question, because $f$ ranges from $1$ down to $0$ as $x$ goes from $0$ to $1$, we may surround its graph by a box of height $1$ and width $1$, whence $A=1$.
One calculation beginning at $n=10$ first estimated the variance as $2/125$, resulting in a guess $\hat n = 1600$. Using $1600$ new points (I didn't even bother to recycle the original $10$ points) resulted in an updated estimated variance of $0.0001545$, which was still too large. It suggested using $\hat n = 2473$ points. The calculation terminated there with $\hat\theta = 0.4262$ and $\widehat{\text{Var}}(\hat\theta) = 0.00009889$, just less than the target of $0.0001$. The figure shows the random points used at each of these three stages, from left to right, superimposed on plots of the box and the graph of $f$.
Since the true value is $\theta = 0.430764\ldots$, the true variance with $n=2473$ is $\theta(1-\theta)/n = 0.00009915\ldots$. (Another way to express this is to observe that $n=2453$ is the smallest number for which the true variance is less than $0.0001$, so that using the estimated variance in place of the true variance has cost us an extra $20$ sample points.)
In general, when the area under the graph $\theta$ is a sizable fraction of the box area $A$, the estimated variance will not change much when $\theta$ changes, so it's usually the case that the estimated variance is accurate. When $\theta/A$ is small, a better (more efficient) form of Monte-Carlo estimation is advisable.
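A minimal R sketch of this iteration (the starting value $n=10$ and the doubling safeguard are my own illustrative choices; fresh uniform points are drawn at every stage, as in the description above):
set.seed(17)
f <- function(x) exp(-x^2) * (1 - x)
A <- 1          # area of the bounding box [0,1] x [0,1]
target <- 1e-4  # required variance T
n <- 10
repeat {
  x <- runif(n); y <- runif(n)
  theta_hat <- A * mean(y <= f(x))               # hit-or-miss estimate of theta
  var_hat   <- theta_hat * (A - theta_hat) / n   # estimated variance of that estimate
  if (var_hat <= target) break
  n <- max(2 * n, ceiling(theta_hat * (A - theta_hat) / target))
}
c(n = n, theta = theta_hat, variance = var_hat)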
52,349 | Monte Carlo integration with imposed variance | Implement an estimator using Monte Carlo integration of
$$\theta=\int\limits_0^1e^{-x^2}(1-x)dx$$
While you can use a $\mathcal{U}([0,1])$ distribution for your Monte Carlo experiment, the fact that both $$x \longrightarrow \exp\{-x^2\}\quad \text{and}\quad x \longrightarrow (1-x)$$ are decreasing functions suggests that a decreasing density would work better. For instance, a truncated Normal $\mathcal{N}^1_0(0,.5)$ distribution could be used:
\begin{align*}\theta&=\int\limits_0^1e^{-x^2}(1-x)\,\text{d}x\\&=[\Phi(\sqrt{2})-\Phi(0)]\sqrt{2\pi\frac{1}{2}}\int\limits_0^1\frac{1}{\Phi(\sqrt{2})-\Phi(0)}\dfrac{e^{-x^2/2\frac{1}{2}}}{\sqrt{2\pi\frac{1}{2}}}(1-x)\,\text{d}x\\&=[\Phi(\sqrt{2})-\Phi(0)]\sqrt{\pi}\int\limits_0^1\frac{1}{\Phi(\sqrt{2})-\Phi(0)}\dfrac{e^{-x^2}}{\sqrt{\pi}}(1-x)\,\text{d}x\end{align*}
which leads to the implementation
n=1e8
U=runif(n)
#inverse cdf simulation
X=qnorm(U*pnorm(sqrt(2))+(1-U)*pnorm(0))/sqrt(2)
X=(pnorm(sqrt(2))-pnorm(0))*sqrt(pi)*(1-X)
mean(X)
sqrt(var(X)/n)
with the result
> mean(X)
[1] 0.4307648
> sqrt(var(X)/n)
[1] 2.039857e-05
fairly close to the true value
> integrate(function(x) exp(-x^2)*(1-x),0,1)
0.4307639 with absolute error < 4.8e-15
Another representation of the same integral is to use instead the distribution with density $$f(x)=2(1-x)\mathbb{I}_{[0,1]}(x)$$ and cdf $F(x)=1-(1-x)^2$ over $[0,1]$. The associated estimation is derived as follows:
> x=exp(-(1-sqrt(runif(n)))^2)/2  # X = 1 - sqrt(U) has density 2(1-x) on [0,1]; the weight is exp(-x^2)/2
> mean(x)
[1] 0.4307693
> sqrt(var(x)/n)
[1] 7.369741e-06
which does better than the truncated normal simulation.
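To tie this to the variance requirement in the title, a short pilot run of either scheme can be converted into a required sample size (my own sketch, using the same $10^{-4}$ target as the previous answer):
target <- 1e-4
w <- exp(-(1 - sqrt(runif(1e4)))^2) / 2   # pilot draws of the importance-sampling weight
ceiling(var(w) / target)                  # n needed so that var(mean(w)) <= target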
52,350 | Monte Carlo integration with imposed variance | It's not clear whether "write the variance of estimator" means to write the equation or the results of the execution. If the latter is the case then all you need to do is to run your code at different $n$ and show how the variance shrinks with $n$.
If the former is the case, then you have to show the equation for the variance estimate of the Monte Carlo algorithm.
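For reference, the standard plug-in version of that equation for a plain Monte Carlo average $\hat\theta_n=\frac1n\sum_{i=1}^n g(X_i)$ is (notation mine)
$$\widehat{\operatorname{Var}}\big(\hat\theta_n\big)=\frac{1}{n(n-1)}\sum_{i=1}^n\big(g(X_i)-\hat\theta_n\big)^2,$$
which shrinks at the rate $1/n$, exactly what running the code at different $n$ should display.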
52,351 | Goodness of fit chi-square test for a number of PCA components | Goodness of fit cannot be meaningfully computed for PCA. The principal function is doing something strange here.
Function principal() is part of the psych package that implements factor analysis, and is designed to implement PCA in a manner mimicking factor analysis. In factor analysis, goodness of fit can be meaningfully computed as follows.
Given a $p\times p$ covariance matrix $\mathbf S$, factor analysis fits a $p\times k$ matrix of loadings $\mathbf W$ where $k$ is the number of factors and a diagonal $p\times p$ matrix of uniquenesses $\boldsymbol \Psi$ such that $$\mathbf S \approx \mathbf C = \mathbf W \mathbf W^\top + \boldsymbol \Psi.$$ There are different methods of "factor extraction", differing in how exactly the distance between $\mathbf C$ and $\mathbf S$ is measured. The maximum likelihood method finds loadings and uniquenesses maximizing the log-likelihood of the observed data. The log-likelihood of the data is given by $$L_k=-\frac{n}{2}\left[p\ln(2\pi) + \ln|\mathbf C| + \mathrm{tr}(\mathbf C^{-1}\mathbf S)\right],$$ where $n$ is the number of data points (observations). Given two models with different numbers of factors $k_1$ and $k_2$, they can be compared via a likelihood ratio test that tells us that $$2(L_{k_1} - L_{k_2}) \sim \chi^2_{\mathrm{df}_1-\mathrm{df}_2}.$$ In particular, a model with $k$ factors can be compared to the "full" model with $k=p$ and $\mathbf C=\mathbf S$.
In PCA, there is no $\boldsymbol \Psi$ and so $\mathbf C_\mathrm{PCA}=\mathbf W\mathbf W^\top$. This makes $\mathbf C$ low-rank, with zero determinant, and non-invertible.
Consequently, in PCA likelihood cannot be computed and likelihood ratio is undefined (ultimately because PCA is not a probabilistic model).
What exactly principal() is doing to "compute" it, is impossible to understand from its manual page. Having looked into its source code I saw that all the goodness of fit computations are done by the factor.stats() function, which again has an unclear manual page. The source code of factor.stats() reveals that after computing model covariance matrix $\mathbf C = \mathbf W\mathbf W^\top$, the algorithm makes its diagonal equal to the diagonal of $\mathbf S$ (making it full rank). This is the correct computation in factor analysis because there the diagonals of $\mathbf C$ and $\mathbf S$ always coincide (due to the matrix of uniquenesses), but makes no sense in PCA.
The resulting goodness of fit computation is essentially a factor analysis computation, but with PCA loadings. This is weird. I am not sure how this could be justified. My advice would be to ignore it and to use some other approach instead.
(Having said that, I guess that a very low $p$-value still does indicate that the number of components is "not sufficient": after all, such "diagonal-adjusted" $\widetilde{\mathbf C}$ is closer to $\mathbf S$ than the original $\mathbf C$, so if $\widetilde{\mathbf C}$ does not provide a sufficient fit, then $\mathbf C$ does not either.)
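A small R helper implementing the log-likelihood formula above makes the failure concrete (my own sketch, not a psych function): it needs the log-determinant and the inverse of $\mathbf C$, which is exactly what breaks down when $\mathbf C=\mathbf W\mathbf W^\top$ is singular.
# log-likelihood of a covariance model C given sample covariance S and n observations;
# for a PCA reconstruction C = W W' of rank k < p this fails, because C is singular
loglik <- function(C, S, n) {
  p <- nrow(S)
  -n / 2 * (p * log(2 * pi) +
            as.numeric(determinant(C, logarithm = TRUE)$modulus) +
            sum(diag(solve(C, S))))
}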
52,352 | Goodness of fit chi-square test for a number of PCA components | Thank you for your comments. I have removed the pseudo MLE chi square from the print function for principal because it clearly leads to confusion.
I have added the root mean square off diagonal residual because this is probably a more useful estimate of goodness of fit.
I have retained the fit of the model to the off diagonal elements. As everyone will recognize, PCA is not trying to optimally fit the off diagonals but most users act as if that is what they want.
I continue to report the empirical chi square. This is based upon the size of the observed residuals and is not MLE based (nor does it need to be).
I will try to document how the various stats are estimated a bit more clearly in the next release of psych (coming soon).
52,353 | MCMC chain getting stuck | In step 4, you don't have to reject the proposal $x,\theta$ every time its new likelihood is lower; if you do so, you are doing a sort of optimization instead of sampling from the posterior distribution.
Instead, if the proposal is worse then you still accept it with an acceptance probability $a$.
With pure Gibbs sampling, the general strategy to sample this would be:
Gibbs
Iteratively sample:
\begin{align}
p(x | \theta, y) &\propto p(y | x) p(x |\theta)\\
p(\theta | x, y) &\propto p(x | \theta) p(\theta)
\end{align}
Gibbs with Metropolis steps for non-conjugate cases:
If some of the conditionals above are not familiar distributions (because you are multiplying non-conjugates; this is your case), you can sample with Metropolis-Hastings:
From the current $x$, generate some proposal, e.g.:
$$
x^* \sim \mathcal{N}(x, \sigma)
$$
Accept $x^*$ with probability [1]:
$$
a = \min \left(1, \frac{p(x^*)}{p(x)}\right)
= \min \left(1, \frac{p(x^* | \theta, y)}{p(x | \theta, y)}\right)
$$
[1] If the proposal distribution wasn't symmetric then there is another multiplying factor.
Appendix:
$$
p(\theta | x, y) =
\frac{p(y|x)p(x| \theta)p(\theta)}
{\int p(y|x)p(x| \theta)p(\theta) \text{d}\theta}=
\frac{p(x| \theta)p(\theta)}
{\int p(x| \theta)p(\theta) \text{d}\theta}
\propto p(x| \theta)p(\theta)
$$
52,354 | MCMC chain getting stuck | Here is an R code in the univariate case for the above Metropolis-within-Gibbs approach drafted by @alberto. No indication of the chain getting stuck: the acceptance rate for the $x$ component is close to 50%.
First, I picked some pseudo-values to run the algorithm:
#observation from N(x,1)
y=3.081927
#latent x from t(nu,theta,1)
nu=3
Second, I simulated the location $\theta$ from the full conditional distribution, namely a Student's $t$-distribution with location parameter $x$, and the latent parameter $x$ by a Metropolis-within-Gibbs step, making a proposal from a Student's $t$-distribution with location parameter $\theta$ and accepting this proposal based on the second part of the full conditional, namely the normal pdf centred in $y$.
#Metropolis-within-Gibbs
T=10^4
mcmc=matrix(NA,T,2)
#initialisation
mcmc[1,1]=rnorm(1,mean=y)
mcmc[1,2]=rt(1,df=nu)+mcmc[1,1]
#Gibbs iterations
for (t in 2:T){
mcmc[t,1]=rt(1,df=nu)+mcmc[t-1,2] #theta
mcmc[t,2]=proposal=rt(1,df=nu)+mcmc[t,1] #x
#acceptance probability:
accept=dnorm(proposal,mean=y)/dnorm(mcmc[t-1,2],mean=y)
if (runif(1)>accept) mcmc[t,2]=mcmc[t-1,2]
}
As seen from the contour plot below, the resulting chain $(\theta_t,x_t)$ is correctly located on the highest contours of the target density.
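One way to verify the quoted acceptance rate from the stored chain (my own addition; with a continuous proposal, the $x$ component only stays put when a proposal is rejected):
mean(diff(mcmc[, 2]) != 0)  # proportion of accepted moves for the x component, about 0.5 here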
52,355 | Smaller standard errors *after* multiple imputation? | Have I certainly done something wrong?
No, smaller standard errors are not unusual when using multiple imputation due to the larger sample size compared to the complete cases, as Jonathan Bartlett says in his answer. The extent to which they may be smaller will depend on how many auxiliary variables are used in the imputation model, how strong the associations are between them and the variable(s) being imputed, and the number of imputations.
A simple simulation can show this:
require(mice)
require(MASS)
set.seed(1)
# simulate some multivariate normal data
(Sigma <- matrix(c(10,4,0.1,4,6,4,0.1,4,5),3,3))
mu <- c(100,40,30)
N <- 2000
dt <- data.frame(mvrnorm(n=N, mu, Sigma))
names(dt) <- c("Y","X1","X2")
m0 <- summary(lm(Y~X1,data=dt)) # this model represents the "truth"
# make 30% MCAR missingness in X1
dt$X1[sample(1:N,N*0.3,replace=F)] <- NA
m1 <- summary(lm(Y~X1,data=dt)) # this model is for complete cases only
imp <- mice(dt)
fit <- with(imp, lm(Y~X1))
m2 <- summary(pool(fit)) # this model is after imputation with mice defaults
So then we have the following standard errors for X1:
Truth:
print(m0$coefficients[[4]])
[1] 0.02508949
Complete cases:
print(m1$coefficients[[4]])
[1] 0.0304495
Imputed:
print(m2[2,2])
[1] 0.02607166
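To see how much of the gain comes from the auxiliary variable X2 (a hypothetical follow-up, not part of the original answer), repeat the imputation without it and compare the pooled standard error for X1:
# re-impute using only Y and X1, so no auxiliary information is available
imp_noaux <- mice(dt[, c("Y", "X1")])
summary(pool(with(imp_noaux, lm(Y ~ X1))))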
52,356 | Smaller standard errors *after* multiple imputation? | Yes it is certainly possible. If the variables other than Y and X1 are predictive of the X1 variable which you are imputing, multiple imputation will allow you to extract this information and use it to gain information about your target parameters (a regression of Y on X1) from those with X1 missing. e.g. suppose X2 is very highly correlated with X1. Multiple imputation will then be able to impute the missing X1 values with relatively little uncertainty, and your standard errors (relative to complete case analysis, which is based on a smaller sample size) should go down.
52,357 | Smaller standard errors *after* multiple imputation? | I think this could happen, even if you did nothing wrong. If the imputation process is very strong, then the added N will have more effect than the added variation.
If you compare multiple imputation to single imputation, I think the SEs have to be bigger in MI.
(That's just my intuition).
52,358 | How does caret handle factors? | The difference is how the formula method for each of these functions handles factors. The individual tree methods you mention do not convert factors to dummy variables. This is not the traditional way formulas work in R, but it makes a lot of sense for these models (and a few others).
train is designed to be more general, and train.formula will convert them.
You can use the non-formula interface to train with these models and keep the factors intact.
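Schematically, the two interfaces look like this (a sketch on a toy data frame; the method and variable names are placeholders of mine):
library(caret)
df <- data.frame(Class = factor(sample(c("yes", "no"), 200, TRUE)),
                 f1 = factor(sample(letters[1:4], 200, TRUE)),
                 x1 = rnorm(200))
fit_formula <- train(Class ~ ., data = df, method = "rpart")                    # factors expanded to dummy variables
fit_xy      <- train(x = df[, c("f1", "x1")], y = df$Class, method = "rpart")   # factor predictors kept as-is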
52,359 | Independence and orthogonality | In order to speak about orthogonality you need to define an inner product first.
If we consider random variables with finite second moment, covariance can be shown to be an inner product. In this case two random variables are orthogonal if and only if they are uncorrelated:
$$ 0 = \text{cov}(X, Y) = \mathbb{E}[X Y] - \mathbb{E}[X] \mathbb{E}[Y] $$
Note that zero covariance does not imply independence (in general). For details, see Covariance and independence?
Also, one could have defined the inner product differently, which would lead to another notion of orthogonality. Yet the one I described seems more common to me.
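A quick numerical illustration that zero covariance (orthogonality under this inner product) does not imply independence:
set.seed(1)
x <- rnorm(1e5)
y <- x^2      # completely determined by x, so certainly not independent of x
cor(x, y)     # close to 0: x and y are (empirically) orthogonal in the covariance sense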
52,360 | Independence and orthogonality | From a geometric perspective 2 vectors are orthogonal if they are perpendicular to one another.
52,361 | Which property of count data make mean-variance dependency? | I propose that substantial insight into this question is afforded by viewing counts as sums of simple (happened vs. did not happen) events. That suffices to create a relationship between variance and expectation which in common situations amounts to a direct proportion.
Most counts are obtained in a context where numerous events could or could not have happened; the counts sum the events that did happen. By definition, such an event $i$ has a Bernoulli distribution: it had a chance of $p_i$ of occurring and, therefore, a chance of $1-p_i$ of not occurring. Their counts, therefore, are sums of Bernoulli variables.
The expectation of a sum is always the sum of the expectations. Thus, the expectation of $n$ Bernoulli variables with probabilities $p_i, i=1, 2,\ldots, n$ is the sum
$$p = \sum_{i=1}^n p_i.$$
When those variables are independent (and being "nearly" independent would be close enough), the variance of their sum is the sum of their variances. Since the variance of a Bernoulli$(p_i)$ variable is $p_i(1-p_i)$ (which is readily established from first principles), the variance of the sum is approximately
$$v = \sum_{i=1}^n p_i(1-p_i).$$
Although this is too complicated to allow any really general statements, we can make some useful deductions for common situations.
(Binomial sampling). When all the $p_i$ are equal, $p = np_1$ and $v = np_1(1-p_1)$. This exhibits $v$ as directly proportional to the expected count since
$$v = p(1-p_1).$$
(Poisson distribution). When $n$ is large and all the $p_i$ are so small that every $np_i$ is also small (say, less than $1$), then the $1-p_i$ terms in the general expression of $v$ are so close to $1$ as to be negligible, even when accumulated in the summations. Accordingly, to a good approximation,
$$v \approx \sum_{i=1}^n p_i = p.$$
Again the variance is proportional to the expected count, but with a universal constant of proportionality equal to $1$.
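A short simulation along these lines (my own sketch): each count is a sum of many independent rare Bernoulli events, and the sample variance of the counts comes out close to their mean, as the argument predicts.
set.seed(1)
p_i <- runif(500, 0, 0.02)                       # 500 events, each with a small probability
counts <- replicate(1e4, sum(rbinom(500, 1, p_i)))
c(mean = mean(counts), variance = var(counts))   # nearly equal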
52,362 | Which property of count data make mean-variance dependency? | First of all, it's not necessary to transform count data because there are Poisson and Negative Binomial models that allow the variance to depend on the mean.
Second, the dependency of variance on mean is not restricted to count data. I think it happens because the variation among experimental units happens on a "relative" rather than "absolute" scale.
E.g., consider a group of individuals whose average income is \$200,000. It's quite possible that there is one person in the group whose income is "relatively" low. Say, it's 80% of the mean, \$160,000.
Now consider another group where the average income is \$35,000. Again it's quite possible to have one person in the group whose income is 80% of the mean (\$28,000), but it's unlikely to see the one who is \$40,000 below the mean (-\$5000).
If the variance were independent of the mean, then observing \$160,000 income in the first group would be just as plausible as observing -\$5000 in the second group, but that's not the case.
52,363 | A critical proof or counterexample regarding independence | It seems likely that you could fairly easily construct a counter-example by assigning the negative roots of $Y^2$ to $Y$ for $X\leq1$ and to the positive roots for $X > 1$.
Y <- rnorm(1000)^2
X <- rnorm(1000)^2
sx <- sqrt(X)                   # positive square roots of X
sy <- numeric(1000)             # square roots of Y, signed according to X
sy[X > 1] <- sqrt(Y[X > 1])
sy[X <= 1] <- -sqrt(Y[X <= 1])
plot(sx, sy)
cor(sx, sy)
[1] 0.6537367
png()
plot(X, Y)
dev.off()
cor(X,Y)
[1] -0.05216765
52,364 | A critical proof or counterexample regarding independence | The construction of a very general set of counterexamples illuminates this issue. The underlying idea is that although $X^2$ and $Y^2$ might be independent, a choice of two square roots is available for every nonzero value taken on by these variables. By making those choices dependent, we create a counterexample where $X$ and $Y$ are not independent. The details follow.
Because $X^2$ and $Y^2$ are independent, $|X|=\sqrt{X^2}$ and $|Y|=\sqrt{Y^2}$ are independent, too. Let $I$ be a discrete random variable, independent of $|X|$ and $|Y|$, taking on the values $\pm 1$, so that $I^2=1$. Let $p = \Pr(I=1)$ and assume $0\lt p\lt 1$. $I$ will determine the choice of sign of the square roots, taking the positive sign with probability $p$.
One possibility for the random variables $X$ and $Y$ is
$$X_I= I|X|, \quad Y_I=I|Y|$$
because obviously $X_I^2 = I^2|X|^2 = X^2$ and $Y_I^2 = Y^2$ and the only thing we know about $X$ and $Y$ is that they are square roots of the given variables $X^2$ and $Y^2$.
Let's check whether independence holds by looking at the event where both $X_I$ and $Y_I$ are nonnegative. Provided both $X$ and $Y$ have zero chance of equaling $0$, this is the event consisting of all values where the positive square root is chosen. The calculations are easy because the signs of $X_I$ and $Y_I$ are entirely determined by the sign of $I$:
$$\eqalign{
&\Pr(I|X| \ge 0)\Pr(I|Y| \ge 0) = \Pr(I\ge 0)\Pr(I\ge 0) = p^2; \\
& \Pr(I|X|\ge 0\text{ and }I|Y|\ge 0) = \Pr(I\ge 0) = p.
}$$
Since $p\ne p^2$, $\Pr(X_I\ge 0)\Pr(Y_I\ge 0)\ne \Pr(X_I\ge 0\text{ and }Y_I\ge 0)$. Therefore, by definition, $X_I$ and $Y_I$ are not independent, making a counterexample to the conjecture.
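The construction is easy to check by simulation (my own sketch, taking $p=1/2$ and letting $|X|$, $|Y|$ be absolute values of independent normals):
set.seed(1)
n <- 1e5
I <- sample(c(-1, 1), n, replace = TRUE)               # common random sign
X <- I * abs(rnorm(n)); Y <- I * abs(rnorm(n))
cor(X^2, Y^2)                                          # ~0: the squares behave independently
mean(X >= 0 & Y >= 0) - mean(X >= 0) * mean(Y >= 0)    # ~1/4: X and Y are clearly dependent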
52,365 | A critical proof or counterexample regarding independence | Let $X$ be the random variable with $P(X=1)=P(X=-1)=\frac12$ and let $Y=-X$. Then $X^2$ and $Y^2$ are independent but $X$ and $Y$ are not.
52,366 | Simulating Multinomial Logit Data with R | It is really simple to generate multinomial logit regression data. All you need to keep in mind are the normalizing assumptions.
# covariate matrix
mX = matrix(rnorm(1000), 200, 5)
# coefficients for each choice
vCoef1 = rep(0, 5)
vCoef2 = rnorm(5)
vCoef3 = rnorm(5)
# vector of probabilities
vProb = cbind(exp(mX%*%vCoef1), exp(mX%*%vCoef2), exp(mX%*%vCoef3))
# multinomial draws
mChoices = t(apply(vProb, 1, rmultinom, n = 1, size = 1))
dfM = cbind.data.frame(y = apply(mChoices, 1, function(x) which(x==1)), mX)
Here mChoices and dfM$y encode the same information differently.
52,367 | Simulating Multinomial Logit Data with R | In this model with $k$ variables and $d$ categories, let $x = (1,x_1, x_2,\ldots, x_k)$ be one data point including a constant $1$ for the intercept. There are $d-1$ column vectors $\beta_2, \beta_3, \ldots, \beta_d,$ each of length $k+1,$ that give the relative chances of each category in the ratios
$$p_1:p_2:\cdots:p_d = 1:e^{x\beta_2}:e^{x\beta_3}:\cdots:e^{x\beta_d}.$$
The random response for this data point places a specified number $s$ balls into $d$ bins (one for each category) according to these probabilities.
Thus, to simulate such data you need to
Specify the variables--their number $k,$ the number of data points $n,$ and all their values.
Specify the number of categories and the $\beta_j$ parameters.
Specify the value of $s$ for each data point.
Compute the probabilities determined by (1) and (2) according to this model.
Use those sizes (3) and probabilities (4) to generate random multinomial outcomes.
This leads to a straightforward R implementation. It randomly generates and stores the variables (1) in an array X and, given a randomly-generated array of coefficients (2) and randomly-generated size array (3), computes the probabilities (4) and applies rmultinom to each data point (5) to obtain the matrix of responses (one column per category) and store it in the array y.
n <- 5e3 # Number of observations
k <- 3 # Number of variables
d <- 4 # Number of categories
size <- 15 # Expected size of each outcome (number of balls selected)
set.seed(17)
xnames <- paste0("X", seq_len(k))
beta <- matrix(rnorm((k+1) * d), ncol = d, dimnames=list(c("Intercept", xnames), seq_len(d)))
beta = beta - beta[, 1] # Standardize: category 1 is the reference category
X <- matrix(runif(n * k), n, dimnames = list(NULL, xnames))
p <- (function(h) h / rowSums(h))(exp(cbind(1, X) %*% beta))
s <- 1 + rpois(n, size-1)
y <- t(sapply(seq_len(n), function(i) rmultinom(1, s[i], p[i, ])))
With the $n=5000$ observations specified here, the estimates $\hat\beta$ had better be close to the stipulated value of $\beta$! Using nnet::multinom I estimated the coefficients.
library(nnet)
fit <- multinom(y ~ X)
Here is a plot comparing all $12$ coefficients to their specified values. The gray lines are 95% confidence intervals around each estimate. The agreement is good, indicating this approach agrees with the model assumed by multinom.
The total number of results in each category depends on all the inputs. The expected totals can be computed by multiplying the probabilities $(p_1, \ldots, p_d)$ for each data point times its value of $s$ and adding these up by category:
round(rbind(Expected = s %*% p, Observed = colSums(y)))
             1     2     3     4
Expected   880 28799 13837 31623
Observed   902 28941 13852 31444
The counts observed in this simulation, by category, are close to the expected counts.
This indicates how you can adjust your simulation to achieve a desired count, set of counts, or (as asked in the question) proportion of counts: you can alter the data points, the coefficients, and the sizes as you will. How you do this will depend on what aspects of the situation you are willing to vary. There are so many possibilities that it would take us too far afield to discuss them all here. | Simulating Multinomial Logit Data with R | In this model with $k$ variables and $d$ categories, let $x = (1,x_1, x_2,\ldots, x_k)$ be one data point including a constant $1$ for the intercept. There are $d-1$ column vectors $\beta_2, \beta_3, | Simulating Multinomial Logit Data with R
In this model with $k$ variables and $d$ categories, let $x = (1,x_1, x_2,\ldots, x_k)$ be one data point including a constant $1$ for the intercept. There are $d-1$ column vectors $\beta_2, \beta_3, \ldots, \beta_d,$ each of length $k+1,$ that give the relative chances of each category in the ratios
$$p_1:p_2:\cdots:p_d = 1:e^{x\beta_2}:e^{x\beta_3}:\cdots:e^{x\beta_d}.$$
The random response for this data point places a specified number $s$ balls into $d$ bins (one for each category) according to these probabilities.
Thus, to simulate such data you need to
1. Specify the variables--their number $k,$ the number of data points $n,$ and all their values.
2. Specify the number of categories and the $\beta_j$ parameters.
3. Specify the value of $s$ for each data point.
4. Compute the probabilities determined by (1) and (2) according to this model.
5. Use those sizes (3) and probabilities (4) to generate random multinomial outcomes.
This leads to a straightforward R implementation. It randomly generates and stores the variables (1) in an array X and, given a randomly-generated array of coefficients (2) and randomly-generated size array (3), computes the probabilities (4) and applies rmultinom to each data point (5) to obtain the matrix of responses (one column per category) and store it in the array y.
n <- 5e3 # Number of observations
k <- 3 # Number of variables
d <- 4 # Number of categories
size <- 15 # Expected size of each outcome (number of balls selected)
set.seed(17)
xnames <- paste0("X", seq_len(k))
beta <- matrix(rnorm((k+1) * d), ncol = d, dimnames=list(c("Intercept", xnames), seq_len(d)))
beta = beta - beta[, 1] # Standardize: category 1 is the reference category
X <- matrix(runif(n * k), n, dimnames = list(NULL, xnames))
p <- (function(h) h / rowSums(h))(exp(cbind(1, X) %*% beta))
s <- 1 + rpois(n, size-1)
y <- t(sapply(seq_len(n), function(i) rmultinom(1, s[i], p[i, ])))
With the $n=5000$ observations specified here, the estimates $\hat\beta$ had better be close to the stipulated value of $\beta$! Using nnet::multinom I estimated the coefficients.
library(nnet)
fit <- multinom(y ~ X)
Here is a plot comparing all $12$ coefficients to their specified values. The gray lines are 95% confidence intervals around each estimate. The agreement is good, indicating this approach agrees with the model assumed by multinom.
The total number of results in each category depends on all the inputs. The expected totals can be computed by multiplying the probabilities $(p_1, \ldots, p_d)$ for each data point by its value of $s$ and adding these up by category:
round(rbind(Expected = s %*% p, Observed = colSums(y)))
            1     2     3     4
Expected  880 28799 13837 31623
Observed  902 28941 13852 31444
The counts observed in this simulation, by category, are close to the expected counts.
This indicates how you can adjust your simulation to achieve a desired count, set of counts, or (as asked in the question) proportion of counts: you can alter the data points, the coefficients, and the sizes as you will. How you do this will depend on what aspects of the situation you are willing to vary. There are so many possibilities that it would take us too far afield to discuss them all here. | Simulating Multinomial Logit Data with R
In this model with $k$ variables and $d$ categories, let $x = (1,x_1, x_2,\ldots, x_k)$ be one data point including a constant $1$ for the intercept. There are $d-1$ column vectors $\beta_2, \beta_3, |
52,368 | Simulating Multinomial Logit Data with R | # Generating 500 random numbers with zero mean
x = rnorm(500,0)
#Assigning the values of beta1 and beta2
Beta1 = 2
Beta2 = .5
#Calculation of denominator for probability calculation
Denominator= 1+exp(Beta1*x)+exp(Beta2*x)
#Calculating the matrix of probabilities for three choices
vProb = cbind(1/Denominator, exp(x*Beta1)/Denominator, exp(x*Beta2)/Denominator )
# Draw one choice per observation from its multinomial probabilities (each row of mChoices becomes a one-hot indicator vector)
mChoices = t(apply(vProb, 1, rmultinom, n = 1, size = 1))
# Value of Y and X together
dfM = cbind.data.frame(y = apply(mChoices, 1, function(x) which(x==1)), x)
#Adding library for multinomial logit regression
library("nnet")
# We want a zero intercept, hence the x + 0 in the regression formula below
fit<-(multinom(y ~ x + 0, dfM))
#This function uses first y as base class
#hence upper probability calculation is changed
summary(fit)
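# Quick check (added sketch, not part of the original code): with class 1 as the base,
# the two rows of coef(fit) should be close to the simulated values Beta1 = 2 and Beta2 = .5
coef(fit)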
#In case we do not keep intercept as zero
fit2<-multinom(y ~ x, dfM)
summary(fit2)
# This also results in an intercept very close to zero and non-significant,
# and values of beta close to those modeled earlier and significant
#running from mlogit package
library(mlogit)
DM<-mlogit.data(dfM, shape="wide",sep="",choice="y",alt.levels=1:3)
# Do not know why -1 is used in two places. I would appreciate it if someone could explain.
fit3<-mlogit(y~-1|-1+x,data=DM)
summary(fit3) | Simulating Multinomial Logit Data with R | #Genarating 500 random numbers with zero mean
x = rnorm(500,0)
#Assigning the values of beta1 and beta2
Beta1 = 2
Beta2 = .5
#Calculation of denominator for probability calculation
Denominator= 1+exp( | Simulating Multinomial Logit Data with R
# Generating 500 random numbers with zero mean
x = rnorm(500,0)
#Assigning the values of beta1 and beta2
Beta1 = 2
Beta2 = .5
#Calculation of denominator for probability calculation
Denominator= 1+exp(Beta1*x)+exp(Beta2*x)
#Calculating the matrix of probabilities for three choices
vProb = cbind(1/Denominator, exp(x*Beta1)/Denominator, exp(x*Beta2)/Denominator )
# Draw one choice per observation from its multinomial probabilities (each row of mChoices becomes a one-hot indicator vector)
mChoices = t(apply(vProb, 1, rmultinom, n = 1, size = 1))
# Value of Y and X together
dfM = cbind.data.frame(y = apply(mChoices, 1, function(x) which(x==1)), x)
#Adding library for multinomial logit regression
library("nnet")
# We want a zero intercept, hence the x + 0 in the regression formula below
fit<-(multinom(y ~ x + 0, dfM))
#This function uses first y as base class
#hence upper probability calculation is changed
summary(fit)
#In case we do not keep intercept as zero
fit2<-multinom(y ~ x, dfM)
summary(fit2)
#This also result intercept very close to zero and non significant
#and value of beta as modeled earlier and significant
#running from mlogit package
library(mlogit)
DM<-mlogit.data(dfM, shape="wide",sep="",choice="y",alt.levels=1:3)
#Do not know why -1 is used at two places. I will appreciate if some one can explain
fit3<-mlogit(y~-1|-1+x,data=DM)
summary(fit3) | Simulating Multinomial Logit Data with R
#Genarating 500 random numbers with zero mean
x = rnorm(500,0)
#Assigning the values of beta1 and beta2
Beta1 = 2
Beta2 = .5
#Calculation of denominator for probability calculation
Denominator= 1+exp( |
52,369 | Simulating Multinomial Logit Data with R | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
This wikibooks link describes generating multinomial ordered logit data. The mlogit package seems to have some existing data sets as well. | Simulating Multinomial Logit Data with R | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| Simulating Multinomial Logit Data with R
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
This wikibooks link describes generating multinomial ordered logit data. The mlogit package seems to have some existing data sets as well. | Simulating Multinomial Logit Data with R
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
52,370 | Weibull distribution from given mean | You could either
(a) choose your mean and one other quantity (scale, shape), and solve for the two in terms of the available parameters
or
(b) reparameterize the Weibull to be in terms of the mean and one of the other parameters (essentially doing something like "(a)" for all possible choices of means).
(a) will be relatively easy if you only need to do it a few times. If you need to be able to solve it a potentially very large number of times, it may be worth trying to do (b).
Let's consider some specific examples, for which we'll need some parameterization. I'll use the one in the Wikipedia article on the Weibull:
$$f(x;\lambda,k) =\begin{cases}\frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1}e^{-(x/\lambda)^{k}} & x\geq0 ,\\0 & x<0,\end{cases}\,.$$
It has mean $\lambda \Gamma\left(1+\frac{1}{k}\right)\,$.
a) so if you wanted say mean, $\mu= 10$ and shape parameter $k=3$, you'd have to solve $\lambda \Gamma\left(1+\frac{1}{3}\right)\, = 10$ for $\lambda$. Since $\Gamma(\frac{4}{3})\approx 0.893$, $\lambda\approx \frac{10}{0.893}\approx 11.2$.
b) if you wanted mean $\mu= 5$ and scale parameter $\lambda=5.5$ (*), you'd have to solve $\Gamma(1+\frac{1}{k}) = 5/5.5 = 0.9091$. There are two possible values of $k$ for which that's true:
So since $\Gamma(x) = \frac{5}{5.5}$, when $x\approx (1.23739896, 1.70237491)$, we need to solve $1+\frac{1}{k} = x$ for those values of $x$, i.e. $k = \frac{1}{x-1} \approx (4.2123, 1.4237)$.
* beware you don't choose incompatible mean and scale! The minimum of the gamma function, $\Gamma(1.46163..)\approx 0.8856$, so you can't have a scale more than about $1.12917$ times the mean (and this happens at $k\approx 2.166237$).
Note that we can relatively simply reparameterize the Weibull in terms of mean and shape (or very easily solve as above without reparameterizing), but to reparameterize for mean and scale would be considerably more involved (we'd effectively need an inverse of the gamma function or to solve the equation numerically), and we'd have to deal with two sets of solutions.
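Both calculations are easy to script; a small R sketch reusing the numbers from examples a) and b) above (object names are just for illustration):
# a) mean mu and shape k given: the scale follows directly
mu <- 10; k <- 3
lambda <- mu / gamma(1 + 1/k)        # about 11.2
# b) mean mu and scale lambda given: solve gamma(1 + 1/k) = mu/lambda numerically,
#    searching on each side of the gamma function's minimum (near k = 2.166)
mu <- 5; lambda <- 5.5
f  <- function(k) gamma(1 + 1/k) - mu/lambda
k1 <- uniroot(f, c(2.2, 20))$root    # larger root, about 4.21
k2 <- uniroot(f, c(0.5, 2.1))$root   # smaller root, about 1.42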
Edit: I see from comments you want to "keep the shape". That implies you want to specify $k$ and $\mu$. So that's like the first, simpler example - take the $k$ you have for the shape you want, and find
$$\lambda = \frac{\mu}{\Gamma(1+\frac{1}{k})}$$
and the resulting combination of scale $\lambda$ and shape $k$ will have the desired mean $\mu$. | Weibull distribution from given mean | You could either
(a) choose your mean and one other quantity (scale, shape), and solve for the two in terms of the available parameters
or
(b) reparameterize the Weibull to be in terms of the mean and | Weibull distribution from given mean
You could either
(a) choose your mean and one other quantity (scale, shape), and solve for the two in terms of the available parameters
or
(b) reparameterize the Weibull to be in terms of the mean and one of the other parameters (essentially doing something like "(a)" for all possible choices of means).
(a) will be relatively easy if you only need to do it a few times. If you need to be able to solve it a potentially very large number of times, it may be worth trying to do (b).
Let's consider some specific examples, for which we'll need some parameterization. I'll use the one in the Wikipedia article on the Weibull:
$$f(x;\lambda,k) =\begin{cases}\frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1}e^{-(x/\lambda)^{k}} & x\geq0 ,\\0 & x<0,\end{cases}\,.$$
It has mean $\lambda \Gamma\left(1+\frac{1}{k}\right)\,$.
a) so if you wanted say mean, $\mu= 10$ and shape parameter $k=3$, you'd have to solve $\lambda \Gamma\left(1+\frac{1}{3}\right)\, = 10$ for $\lambda$. Since $\Gamma(\frac{4}{3})\approx 0.893$, $\lambda\approx \frac{10}{0.893}\approx 11.2$.
b) if you wanted mean $\mu= 5$ and scale parameter $\lambda=5.5$ (*), you'd have to solve $\Gamma(1+\frac{1}{k}) = 5/5.5 = 0.9091$. There are two possible values of $k$ for which that's true:
So since $\Gamma(x) = \frac{5}{5.5}$, when $x\approx (1.23739896, 1.70237491)$, we need to solve $1+\frac{1}{k} = x$ for those values of $x$, i.e. $k = \frac{1}{x-1} \approx (4.2123, 1.4237)$.
* beware you don't choose incompatible mean and scale! The minimum of the gamma function, $\Gamma(1.46163..)\approx 0.8856$, so you can't have a scale more than about $1.12917$ times the mean (and this happens at $k\approx 2.166237$).
Note that we can relatively simply reparameterize the Weibull in terms of mean and shape (or very easily solve as above without reparameterizing), but to reparameterize for mean and scale would be considerably more involved (we'd effectively need an inverse of the gamma function or to solve the equation numerically), and we'd have to deal with two sets of solutions.
Edit: I see from comments you want to "keep the shape". That implies you want to specify $k$ and $\mu$. So that's like the first, simpler example - take the $k$ you have for the shape you want, and find
$$\lambda = \frac{\mu}{\Gamma(1+\frac{1}{k})}$$
and the resulting combination of scale $\lambda$ and shape $k$ will have the desired mean $\mu$. | Weibull distribution from given mean
You could either
(a) choose your mean and one other quantity (scale, shape), and solve for the two in terms of the available parameters
or
(b) reparameterize the Weibull to be in terms of the mean and |
52,371 | Weibull distribution from given mean | For simplicity, let's use the Weibull defined as the density whose distribution is:
$$
\large F(x) = 1 - e^{-{\left(\frac{x}{\theta}\right)^\tau}}
$$
Here $\theta$ is the scale and $\tau$ is the shape. This distribution has mean $\theta\cdot\Gamma\left(1 + \frac{1}{\tau}\right)$, which means (no pun intended) that there are infinitely many choices of $(\theta, \tau)$ pairs giving the same mean, as you have one equation and two unknowns. For example, the Weibull distributions with $(5, \frac{1}{4})$ and $(60, \frac{1}{2})$ both have mean 120, but the former has a much higher variance.
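A quick R check of that example (the helper functions here are just for illustration):
wmean <- function(theta, tau) theta * gamma(1 + 1/tau)
wvar  <- function(theta, tau) theta^2 * (gamma(1 + 2/tau) - gamma(1 + 1/tau)^2)
wmean(5, 1/4); wmean(60, 1/2)   # both equal 120
wvar(5, 1/4);  wvar(60, 1/2)    # but the variances differ enormously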
To select a specific distribution, you would want to use method of moments to fit to the first two moments. This way, you will have at most one distribution which will match your requirements. | Weibull distribution from given mean | For simplicity, let's use the Weibull defined as the density whose distribution is:
$$
\large F(x) = 1 - e^{-{\left(\frac{x}{\theta}\right)^\tau}}
$$
Here $\theta$ is the scale and $\tau$ is the shape | Weibull distribution from given mean
For simplicity, let's use the Weibull defined as the density whose distribution is:
$$
\large F(x) = 1 - e^{-{\left(\frac{x}{\theta}\right)^\tau}}
$$
Here $\theta$ is the scale and $\tau$ is the shape. This distribution has mean:$\;\theta\cdot\Gamma\left(1 + \frac{1}{\tau}\right)$ Which means (no pun intended) that there are potentially infinite choices of $(\theta, \tau)$ pairs which give the same mean, as you have one equation and two unknowns. For example, the Weibull distributions using both $(5, \frac{1}{4})$ and $(60, \frac{1}{2})$ have mean 120, but the former has a much higher variance.
To select a specific distribution, you would want to use method of moments to fit to the first two moments. This way, you will have at most one distribution which will match your requirements. | Weibull distribution from given mean
For simplicity, let's use the Weibull defined as the density whose distribution is:
$$
\large F(x) = 1 - e^{-{\left(\frac{x}{\theta}\right)^\tau}}
$$
Here $\theta$ is the scale and $\tau$ is the shape |
52,372 | Weibull distribution from given mean | What do you mean by "mean parameter"? Do you mean the "location parameter"?
Does your model look like this?
Then also you get the same CDF as explained in here. Just replace 'x' by '(x-γ)'. Where 'γ' is the location parameter. Then proceed as before. | Weibull distribution from given mean | What do you mean by "mean parameter"? Does it "location parameter"?
Does your model look like this?
Then also you get the same CDF as explained in here. Just replace 'x' by '(x-γ)'. Where 'γ' is the l | Weibull distribution from given mean
What do you mean by "mean parameter"? Does it "location parameter"?
Does your model look like this?
Then also you get the same CDF as explained in here. Just replace 'x' by '(x-γ)'. Where 'γ' is the location parameter. Then proceed as before. | Weibull distribution from given mean
What do you mean by "mean parameter"? Does it "location parameter"?
Does your model look like this?
Then also you get the same CDF as explained in here. Just replace 'x' by '(x-γ)'. Where 'γ' is the l |
52,373 | Weibull distribution from given mean | Brute-force search can find these parameters easily. Here's my code (R):
# Find the parameters of a Weibull distribution with a given mean and sd
library(nloptr)
objective_mean = 8.4
objective_sd = 3.8
objective = function(weibull_params) {
scale = weibull_params[1]
shape = weibull_params[2]
weibull_mean = scale * gamma(1 + 1/shape)
weibull_sd = sqrt((scale^2) * (gamma(1 + 2/shape) - gamma(1 + 1/shape)^2))
return((objective_mean - weibull_mean)^2 + (objective_sd - weibull_sd)^2)
}
opt = nlminb(c(1, 1), objective, lower = c(0, 0), upper = c(1000, 1000))
scale = opt$par[1]
shape = opt$par[2] | Weibull distribution from given mean | Brute-force search can find these parameters easily. Here's my code (R):
# Find the parameters of a Weibull distribution with a given mean and sd
library(nloptr)
objective_mean = 8.4
objective_sd = 3 | Weibull distribution from given mean
Brute-force search can find these parameters easily. Here's my code (R):
# Find the parameters of a Weibull distribution with a given mean and sd
library(nloptr)
objective_mean = 8.4
objective_sd = 3.8
objective = function(weibull_params) {
scale = weibull_params[1]
shape = weibull_params[2]
weibull_mean = scale * gamma(1 + 1/shape)
weibull_sd = sqrt((scale^2) * (gamma(1 + 2/shape) - gamma(1 + 1/shape)^2))
return((objective_mean - weibull_mean)^2 + (objective_sd - weibull_sd)^2)
}
opt = nlminb(c(1, 1), objective, lower = c(0, 0), upper = c(1000, 1000))
scale = opt$par[1]
shape = opt$par[2] | Weibull distribution from given mean
Brute-force search can find these parameters easily. Here's my code (R):
# Find the parameters of a Weibull distribution with a given mean and sd
library(nloptr)
objective_mean = 8.4
objective_sd = 3 |
52,374 | Which equal correlations of three random variables are possible? [duplicate] | Let $X_1$, $X_2$, $X_3$ be three random variables with common pairwise correlation coefficient $\rho$, that is $\mbox{corr}(X_i, X_j)= \rho$ for $i \neq j$ with $|\rho|\leq 1$. So, the correlation matrix of $X = (X_1, X_2, X_3)$ is
$$
\left( \begin{array}{ccc}
1 & \rho & \rho \\
\rho & 1 & \rho \\
\rho & \rho & 1 \end{array} \right) .
$$
Correlation matrices need to be positive-semidefinite, which implies that their leading principal minors are all nonnegative. So $\rho$ must satisfy the following two conditions
$$
\begin{cases}
1 - \rho^2 &\geq 0,\\
1 - 3\rho^2 + 2\rho^3 &\geq 0 .
\end{cases}
$$
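These conditions are also easy to verify numerically; a small R check of the eigenvalues (all of which must be nonnegative) for a few values of $\rho$:
R <- function(rho) matrix(c(1, rho, rho,  rho, 1, rho,  rho, rho, 1), 3, 3)
sapply(c(-0.6, -0.5, 0, 0.3), function(r) eigen(R(r))$values)
# rho = -0.6 produces a negative eigenvalue, while rho = -0.5, 0 and 0.3 do not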
The first condition is always satisfied, and the second condition implies that $\rho \geq -0.5$. | Which equal correlations of three random variables are possible? [duplicate] | Let $X_1$, $X_2$, $X_3$ be three random variables with common pairwise correlation coefficient $\rho$, that is $\mbox{corr}(X_i, X_j)= \rho$ for $i \neq j$ with $|\rho|\leq 1$. So, the correlation mat | Which equal correlations of three random variables are possible? [duplicate]
Let $X_1$, $X_2$, $X_3$ be three random variables with common pairwise correlation coefficient $\rho$, that is $\mbox{corr}(X_i, X_j)= \rho$ for $i \neq j$ with $|\rho|\leq 1$. So, the correlation matrix of $X = (X_1, X_2, X_3)$ is
$$
\left( \begin{array}{ccc}
1 & \rho & \rho \\
\rho & 1 & \rho \\
\rho & \rho & 1 \end{array} \right) .
$$
Correlation matrices need to be positive-semidefinite, which implies that their leading principal minors are all nonnegative. So $\rho$ must satisfy the following two conditions
$$
\begin{cases}
1 - \rho^2 &\geq 0,\\
1 - 3\rho^2 + 2\rho^3 &\geq 0 .
\end{cases}
$$
The first condition is always satisfied, and the second condition implies that $\rho \geq -0.5$. | Which equal correlations of three random variables are possible? [duplicate]
Let $X_1$, $X_2$, $X_3$ be three random variables with common pairwise correlation coefficient $\rho$, that is $\mbox{corr}(X_i, X_j)= \rho$ for $i \neq j$ with $|\rho|\leq 1$. So, the correlation mat |
52,375 | Which equal correlations of three random variables are possible? [duplicate] | Just a side note to the otherwise correct answers already given (+1 both). The correlation/covariance matrix described is compound symmetric. This has some rather important theoretical implications for how one interprets a model; in particular, one assumes that the covariance of the variables examined can be perfectly partitioned into a "shared" and an "unshared" component between your variables.
A (somewhat) common setting in which such structures are used is when the assumption of equal correlation of residuals is plausible; for example, when one deals with repeated trials under the same condition in an experiment.
The CV link: "What is compound symmetry in plain english?" gives a more in-depth presentation of compound symmetry. | Which equal correlations of three random variables are possible? [duplicate] | Just a side note for the otherwise correct answers already given (+1 both). The correlation/covariance matrix described is compound symmetric. This has some rather important theoretical implication on | Which equal correlations of three random variables are possible? [duplicate]
Just a side note to the otherwise correct answers already given (+1 both). The correlation/covariance matrix described is compound symmetric. This has some rather important theoretical implications for how one interprets a model; in particular, one assumes that the covariance of the variables examined can be perfectly partitioned into a "shared" and an "unshared" component between your variables.
A (somewhat) common setting for such structures to be used is when the assumption for equal correlation of residuals is plausible; for example when one deals with repeated trials under the same condition in an experiment.
The CV link: "What is compound symmetry in plain english?" gives a more in-depth presentation of compound symmetry. | Which equal correlations of three random variables are possible? [duplicate]
Just a side note for the otherwise correct answers already given (+1 both). The correlation/covariance matrix described is compound symmetric. This has some rather important theoretical implication on |
52,376 | Any algorithms better than polynomial regression | Looks like a time series with very strong (and fairly regular) seasonality. If you use R, you might want to look at function stl(), or fit a basic structural model (an easy entry point function is StructTS(), otherwise there are several packages which afford you more generality and better control of the model you want to fit). | Any algorithms better than polynomial regression | Looks like a time series with very strong (and fairly regular) seasonality. If you use R, you might want to look at function stl(), or fit a basic structural model (an easy entry point function is Str | Any algorithms better than polynomial regression
Looks like a time series with very strong (and fairly regular) seasonality. If you use R, you might want to look at function stl(), or fit a basic structural model (an easy entry point function is StructTS(), otherwise there are several packages which afford you more generality and better control of the model you want to fit). | Any algorithms better than polynomial regression
Looks like a time series with very strong (and fairly regular) seasonality. If you use R, you might want to look at function stl(), or fit a basic structural model (an easy entry point function is Str |
52,377 | Any algorithms better than polynomial regression | The answer is ARIMA models with Intervention Detection enabled. Intervention Detection will suggest level shifts/local time trends/seasonal pulses and pulses which are needed to aid the efficient identification/ robust identification of the ARIMA structure reflecting auto-regressive memory. Please post your data in column format and advise as to the frequency of measurement. It looks to me like you might be attempting to use a very dated procedure called Fourier (a pure deterministic structure i.e. no auto-regressive component ) which fits the data based upon an assumed structure but often (nearly always) doesn't deliver a good "explanation" of the data ... consequently a picture like the one that you presented. Kudos on asking the question !
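If you want to experiment with this kind of automatic intervention detection in R, one option is the tsoutliers package (shown here only as an illustration of the idea, not as the author's own procedure; check the package documentation before relying on it):
library(tsoutliers)
# `y` is a placeholder for your series as a ts object with the appropriate frequency
fit <- tso(y, types = c("AO", "LS", "TC"))   # additive outliers, level shifts, temporary changes
fit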
IN RESPONSE TO COMMENTS BY w.huber AND OTHERS .....:
A point in clarification. Intervention Detection (ID) and Power Transforms (PT), like logs, are both forms of transformation. ID deals with adjusting values for unspecified deterministic structure, while PT deals with uncoupling the error variance from the expected value. The whole idea is to use as little transformation as necessary, much like a doctor prescribing the least treatment necessary. As @w.huber correctly points out, you need to prove a dependence between the error variance and the expected value before you apply a PT. When (and why) should you take the log of a distribution (of numbers)? might help you. | Any algorithms better than polynomial regression | The answer is ARIMA models with Intervention Detection enabled. Intervention Detection will suggest level shifts/local time trends/seasonal pulses and pulses which are needed to aid the efficient iden | Any algorithms better than polynomial regression
The answer is ARIMA models with Intervention Detection enabled. Intervention Detection will suggest level shifts/local time trends/seasonal pulses and pulses which are needed to aid the efficient identification/ robust identification of the ARIMA structure reflecting auto-regressive memory. Please post your data in column format and advise as to the frequency of measurement. It looks to me like you might be attempting to use a very dated procedure called Fourier (a pure deterministic structure i.e. no auto-regressive component ) which fits the data based upon an assumed structure but often (nearly always) doesn't deliver a good "explanation" of the data ... consequently a picture like the one that you presented. Kudos on asking the question !
IN RESPONSE TO COMMENTS BY w.huber AND OTHERS .....:
A point in clarification. Intervention Detection (ID) and Power Transforms (PT), like logs, are both forms of transformation. ID deals with adjusting values for unspecified deterministic structure, while PT deals with uncoupling the error variance from the expected value. The whole idea is to use as little transformation as necessary, much like a doctor prescribing the least treatment necessary. As @w.huber correctly points out, you need to prove a dependence between the error variance and the expected value before you apply a PT. When (and why) should you take the log of a distribution (of numbers)? might help you. | Any algorithms better than polynomial regression
The answer is ARIMA models with Intervention Detection enabled. Intervention Detection will suggest level shifts/local time trends/seasonal pulses and pulses which are needed to aid the efficient iden |
52,378 | Any algorithms better than polynomial regression | Think you have your answer.
But I would add that it may be useful to log your data. Then consider doing regular (d) and/or seasonal (D) differencing. The resultant series should be much easier to model. I'm not confident d/D is necessary, but some form of transformation likely is necessary. It is hard to tell from the graph, but it appears that the volatility increases with time/the linear trend.
The models suggested (by the other answers) will give you better forecasts and decomposition of the series, but with some transformation of the time series you can often fit a good-enough polynomial.
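For instance, with monthly data the suggested transformation could be sketched like this in R (my_series is a placeholder for the data behind the plot):
x  <- ts(my_series, frequency = 12)   # placeholder series
dx <- diff(diff(log(x), lag = 12))    # log, then seasonal (D) and regular (d) differences
plot(dx)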
UPDATE:
(1) As mentioned below, it is unclear if the volatility is directly proportional to the level. If so log transformation is helpful. Otherwise perhaps not.
(2) Square root transformations are underused, but also often helpful in these settings. | Any algorithms better than polynomial regression | Think you have your answer.
But would add that it may be useful to log your data. Then consider doing regular (d) and/or seasonal (D) differencing. The resultant series should be much easier to model | Any algorithms better than polynomial regression
Think you have your answer.
But would add that it may be useful to log your data. Then consider doing regular (d) and/or seasonal (D) differencing. The resultant series should be much easier to model. I'm not confident d/D is necessary, but some form of transformation likely is necessary. Hard to tell from graph, but it appears that the volatility increases with time/linear trend.
The models suggested (by the other answers) will give you better forecasts and decomposition of the series, but with some transformation of the time series you can often fit a good-enough polynomial.
UPDATE:
(1) As mentioned below, it is unclear if the volatility is directly proportional to the level. If so log transformation is helpful. Otherwise perhaps not.
(2) Square root transformations are underused, but also often helpful in these settings. | Any algorithms better than polynomial regression
Think you have your answer.
But would add that it may be useful to log your data. Then consider doing regular (d) and/or seasonal (D) differencing. The resultant series should be much easier to model |
52,379 | How to extract dependence on a single variable when independent variables are correlated? | Aksakal's answer is correct. By controlling for all variables in a regression, you "keep them constant" and are able to identify the partial effect of the regressor of interest. Let me give you an example to make this clearer.
First, let us create some correlated $X$s.
ex <- rnorm(1000)
x1 <- 5*ex + rnorm(1000)
x2 <- -3*ex + rnorm(1000)
x3 <- 4*ex + rnorm(1000)
Now, since all these variables are generated by some underlying variable $ex$, they are clearly correlated. You can check this using cor(x1,x2), for instance.
Now, let us generate the dependent variable with known parameters.
y <- 1*x1 + 2*x2 + 3*x3 + rnorm(1000)
Here we know that $\beta_1=1, \beta_2=2, \beta_3=3$. I have picked them arbitrarily. Let us now see if Aksakal's approach can uncover these parameters:
lm(y ~ x1+x2+x3)
If it works, the estimated parameters should be close to the ones we have picked. Here is the result:
Call:
lm(formula = y ~ x1 + x2 + x3)
Coefficients:
(Intercept) x1 x2 x3
-0.01224 0.99805 1.99746 2.99670
As you can see, all parameters have been uncovered.
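For contrast, a quick sketch of what happens when the other (correlated) regressors are left out: the single-predictor slope on x2 then absorbs part of their effects.
coef(lm(y ~ x2))   # very different from the true value of 2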
Having said that, there are many caveats involved here as well. Most importantly, you should not interpret these coefficients in a causal way. Depending on your actual situation, it might help if you explain a bit more what you are trying to estimate so that people can evaluate whether this method is appropriate (or whether answering your research question is feasible at all). For instance, why do you think your independent variables are correlated? Is it that $X_1$ might have an effect on $X_2$ and this has an effect on $y$? If this is the setup you have in mind, then depending on your field, you may want to look into mediator/moderator analysis or into quasi-experimental methods. Hence you see you might benefit from elaborating a bit more on your situation. | How to extract dependence on a single variable when independent variables are correlated? | Aksakal's answer is correct. By controlling for all variables in a regression, you "keep them constant" and you are able to identify the partial correlation between your regressor of interest. Let me | How to extract dependence on a single variable when independent variables are correlated?
Aksakal's answer is correct. By controlling for all variables in a regression, you "keep them constant" and you are able to identify the partial correlation between your regressor of interest. Let me give you an example to make this clearer.
First, let us create some correlated $X$s.
ex <- rnorm(1000)
x1 <- 5*ex + rnorm(1000)
x2 <- -3*ex + rnorm(1000)
x3 <- 4*ex + rnorm(1000)
Now, since all these variables are generated by some underlying variable $ex$, they are clearly correlated. You can check this using cor(x1,x2), for instance.
Now, let us generate the dependent variable with known parameters.
y <- 1*x1 + 2*x2 + 3*x3 + rnorm(1000)
Here we know that $\beta_1=1, \beta_2=2, \beta_3=3$. I have picked them arbitrarily. Let us now see if Aksakal's approach can uncover these parameters:
lm(y ~ x1+x2+x3)
If it works, the estimated parameters should be close to the ones we have picked. Here the result:
Call:
lm(formula = y ~ x1 + x2 + x3)
Coefficients:
(Intercept) x1 x2 x3
-0.01224 0.99805 1.99746 2.99670
As you can see, all parameters have been uncovered.
Having said that, there are many caveats involved here as well. Most importantly, you should not interpret these coefficients in a causal way. Depending on your actual situation, it might help if you explain a bit more what you are trying to estimate so that people can evaluate whether this method is appropriate (or whether answering your research question is feasible at all). For instance, why do you think your independent variables are correlated? Is it that $X_1$ might have an effect on $X_2$ and this has an effect on $y$? If this is the setup you have in mind, then depending on your field, you may want to look into mediator/moderator analysis or into quasi-experimental methods. Hence you see you might benefit from elaborating a bit more on your situation. | How to extract dependence on a single variable when independent variables are correlated?
Aksakal's answer is correct. By controlling for all variables in a regression, you "keep them constant" and you are able to identify the partial correlation between your regressor of interest. Let me |
52,380 | How to extract dependence on a single variable when independent variables are correlated? | Regress Y on Xs, beta of X2 will be what you are looking for.
UPDATE:
I'll add to my answer based on the discussion after my original post.
Consider $y=f(x_1,x_2,x_3)$, an arbitrary smooth function. It seems that you are looking for the sensitivity of $y$ to $x_2$. This is captured by the partial derivative $\partial y/\partial x_2$. To see this it helps to look at Taylor expansion: $y(x+\Delta x)=y(x)+\partial y/\partial x_1 \Delta x_1+\partial y/\partial x_2 \Delta x_2+\partial y/\partial x_3 \Delta x_3+\partial^2 y/\partial x_1^2 (\Delta x_1)^2+\partial^2 y/(\partial x_1 \partial x_2) \Delta x_1 \Delta x_2+...$.
Note how the interaction terms are of second order in the $\Delta$'s. So, if you're interested in the first-order effects, then you are looking for $\partial f/\partial x_2$, i.e. the $\beta_{X_2}$ in your regression. Also note that this does not preclude you from adding interaction terms to your regression, such as $X_2*X_3$ or $X_1*X_2*X_3$. These are fine, but you don't need their coefficients to answer your question. When you add interaction terms, of course, your $\beta_{X_2}$ will change, but its interpretation won't. | How to extract dependence on a single variable when independent variables are correlated? | Regress Y on Xs, beta of X2 will be what you are looking for.
UPDATE:
I'll add to my answer based on the discussion after my original post.
Consider $y=f(x_1,x_2,x_3)$, an arbitrary smooth function. I | How to extract dependence on a single variable when independent variables are correlated?
Regress Y on Xs, beta of X2 will be what you are looking for.
UPDATE:
I'll add to my answer based on the discussion after my original post.
Consider $y=f(x_1,x_2,x_3)$, an arbitrary smooth function. It seems that you are looking for the sensitivity of $y$ to $x_2$. This is captured by the partial derivative $\partial y/\partial x_2$. To see this it helps to look at Taylor expansion: $y(x+\Delta x)=y(x)+\partial y/\partial x_1 \Delta x_1+\partial y/\partial x_2 \Delta x_2+\partial y/\partial x_3 \Delta x_3+\partial^2 y/\partial x_1^2 (\Delta x_1)^2+\partial^2 y/(\partial x_1 \partial x_2) \Delta x_1 \Delta x_2+...$.
Note how the interaction terms are of second order in the $\Delta$'s. So, if you're interested in the first-order effects, then you are looking for $\partial f/\partial x_2$, i.e. the $\beta_{X_2}$ in your regression. Also note that this does not preclude you from adding interaction terms to your regression, such as $X_2*X_3$ or $X_1*X_2*X_3$. These are fine, but you don't need their coefficients to answer your question. When you add interaction terms, of course, your $\beta_{X_2}$ will change, but its interpretation won't. | How to extract dependence on a single variable when independent variables are correlated?
Regress Y on Xs, beta of X2 will be what you are looking for.
UPDATE:
I'll add to my answer based on the discussion after my original post.
Consider $y=f(x_1,x_2,x_3)$, an arbitrary smooth function. I |
52,381 | How to extract dependence on a single variable when independent variables are correlated? | Your primary concern should be whether the model of $Y$ given all $X$ is correct. If it is correct, the $\beta$ of $X_2$ is the effect coefficient you are looking for. Take into account that there may be non-linear trends of any $X$ with $Y$, $Y$ may not be normal (in which case you need a large sample), and there may be interactions between any $X$.
In particular, effect heterogeneity is an issue which may bias your $\beta$ estimates. You should be able to model it, however, by including interaction terms of $X_2$ with the other $X$ in the model. When there are significant interactions, including them in the model will give you better (i.e., unbiased) estimates of the average effect of $X_2$ on $Y$.
Moreover, if you are in the situation of a case-control or an observational study like a quasi- or natural-experiment, as I take from one of your comments above, $X_2$ is actually dichotomous indicating treatment or control. Then there is a series of other approaches for valid inference about the average treatment effect of $X_2$ on $Y$. For example, you could match treatment and control units indicated by $X_2$ conditional on the other $X$, by means of matching algorithms and propensity scores. If you are actually in the situation of a case-control study or a binary variable $X_2$ the literature on causal inference provides these and other methods.
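As a sketch of the propensity-score idea for a binary $X_2$ (the variable and data-frame names here are hypothetical):
# Probability of "treatment" (X2 = 1) given the other covariates
ps_model   <- glm(x2 ~ x1 + x3, family = binomial, data = dat)
dat$pscore <- fitted(ps_model)
# These scores can then feed a matching or weighting step before comparing Y across the X2 groups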
A --correct-- regression model of the type discussed above will also provide a correct treatment estimate. However, it may be flawed when its basic assumptions (e.g. linearity, homoscedasticity, effect homogeneity, etc.) are violated.
I have once discussed the use of regression models for estimating average treatment effects from observational studies here | How to extract dependence on a single variable when independent variables are correlated? | Primary to your concern should be whether the model of $Y$ given all $X$ is correct. If it is correct, the $\beta$ of $X_2$ is the effect coefficient you are looking for. Take into account that there | How to extract dependence on a single variable when independent variables are correlated?
Your primary concern should be whether the model of $Y$ given all $X$ is correct. If it is correct, the $\beta$ of $X_2$ is the effect coefficient you are looking for. Take into account that there may be non-linear trends of any $X$ with $Y$, $Y$ may not be normal (in which case you need a large sample), and there may be interactions between any $X$.
In particular, effect heterogeneity is an issue which may bias your $\beta$ estimates. You should be able to model it, however, by including interaction terms of $X_2$ with the other $X$ in the model. When there are significant interactions, including them in the model will give you better (i.e., unbiased) estimates of the average effect of $X_2$ on $Y$.
Moreover, if you are in the situation of a case-control or an observational study like a quasi- or natural-experiment, as I take from one of your comments above, $X_2$ is actually dichotomous indicating treatment or control. Then there is a series of other approaches for valid inference about the average treatment effect of $X_2$ on $Y$. For example, you could match treatment and control units indicated by $X_2$ conditional on the other $X$, by means of matching algorithms and propensity scores. If you are actually in the situation of a case-control study or a binary variable $X_2$ the literature on causal inference provides these and other methods.
A --correct-- regression model of the type discussed above will also provide a correct treatment estimate. However, it may be flawed when its basic assumptions (e.g. linearity, homoscedasticity, effect homogeneity, etc.) are violated.
I have once discussed the use of regression models for estimating average treatment effects from observational studies here | How to extract dependence on a single variable when independent variables are correlated?
Primary to your concern should be whether the model of $Y$ given all $X$ is correct. If it is correct, the $\beta$ of $X_2$ is the effect coefficient you are looking for. Take into account that there |
52,382 | Q-Value Less than P-Value | Yes, this is possible, if the proportion of null hypotheses (which is estimated by the qvalue package based on your p-value distribution) is small and your test is powerful.
Here's an example. Let's say you're testing 1000 hypotheses, and let's say 200 (20%) are actually null- this proportion is called $\pi_0$. Assume the qvalue package accurately estimates this value (see here for more on how it does that, and note that you can see what it estimates for your data with qvalue(mypvalues)$pi0). Furthermore, let's say that 500 of your p-values are under .05 (your test is powerful). Then what would be the q-value corresponding to a p-value of .05?
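The arithmetic spelled out in the next paragraph can be scripted directly (numbers as in this example):
m <- 1000; pi0 <- 0.2; cutoff <- 0.05; n_below <- 500
expected_false <- pi0 * m * cutoff   # about 10 null p-values fall under .05
expected_false / n_below             # estimated FDR at this cutoff: 0.02, i.e. 2%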
The q-value is the expected proportion of false discoveries you would obtain by setting a particular p-value cutoff. In your 200 null hypotheses, the p-values are uniformly distributed between 0 and 1 (that's part of the definition of a p-value). This means that 5% of them (10 hypotheses) will fall under .05. So you have 500 hypotheses under .05, and you expect 10 of them to be false: your expected FDR is $10/500=2\%$. In this case, the q-value is indeed smaller than the p-value. | Q-Value Less than P-Value | Yes, this is possible, if the proportion of null hypotheses (which is estimated by the qvalue package based on your p-value distribution) is small and your test is powerful.
Here's an example. Let's s | Q-Value Less than P-Value
Yes, this is possible, if the proportion of null hypotheses (which is estimated by the qvalue package based on your p-value distribution) is small and your test is powerful.
Here's an example. Let's say you're testing 1000 hypotheses, and let's say 200 (20%) are actually null- this proportion is called $\pi_0$. Assume the qvalue package accurately estimates this value (see here for more on how it does that, and note that you can see what it estimates for your data with qvalue(mypvalues)$pi0). Furthermore, let's say that 500 of your p-values are under .05 (your test is powerful). Then what would be the q-value corresponding to a p-value of .05?
The q-value is the expected proportion of false discoveries you would obtain by setting a particular p-value cutoff. In your 200 null hypotheses, the p-values are uniformly distributed between 0 and 1 (that's part of the definition of a p-value). This means that 5% of them (10 hypotheses) will fall under .05. So you have 500 hypotheses under .05, and you expect 10 of them to be false: your expected FDR is $10/500=2\%$. In this case, the q-value is indeed smaller than the p-value. | Q-Value Less than P-Value
Yes, this is possible, if the proportion of null hypotheses (which is estimated by the qvalue package based on your p-value distribution) is small and your test is powerful.
Here's an example. Let's s |
52,383 | Q-Value Less than P-Value | You do not compute the false discovery rate, you control the false discovery rate.
Family-wise error rate methods (e.g. Bonferroni, Holm-Sidák, etc.) attempt to control the probability of making a false rejection of H$_{0}$ while assuming that all null hypotheses are true.
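Both families of adjustment are available in base R through p.adjust; a small illustration:
p <- c(0.001, 0.008, 0.02, 0.04, 0.2)
p.adjust(p, method = "holm")   # family-wise error rate control
p.adjust(p, method = "BH")     # false discovery rate control; these adjusted values are >= p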
False discovery rate methods attempt to control the expected proportion of false rejections among all rejections, while allowing that some null hypotheses may be false. In this context $q$-values are $p$-values that have been adjusted using a method to control the false discovery rate. $q$-values should be greater than or equal to the $p$-values from which they are computed, but it is difficult to say more about why you are getting the results you are without seeing your code. | Q-Value Less than P-Value | You do not compute the false discovery rate, you control the false discovery rate.
Family-wise error rate methods (e.g. Bonferroni, Holm-Sidák, etc.) attempt to control the probability of making a fal | Q-Value Less than P-Value
You do not compute the false discovery rate, you control the false discovery rate.
Family-wise error rate methods (e.g. Bonferroni, Holm-Sidák, etc.) attempt to control the probability of making a false rejection of H$_{0}$ while assuming that all null hypotheses are true.
False discovery rate methods attempt to control the expected proportion of false rejections among all rejections, while allowing that some null hypotheses may be false. In this context $q$-values are $p$-values that have been adjusted using a method to control the false discovery rate. $q$-values should be greater than or equal to the $p$-values from which they are computed, but it is difficult to say more about why you are getting the results you are without seeing your code. | Q-Value Less than P-Value
You do not compute the false discovery rate, you control the false discovery rate.
Family-wise error rate methods (e.g. Bonferroni, Holm-Sidák, etc.) attempt to control the probability of making a fal |
52,384 | Does R have post hoc tests robust to unequal sample sizes/population variances? | Robustness would not come from the package used to do post hoc tests. It would come from the model upon which they are based.
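For example, such a model and its follow-up comparisons might be sketched as follows (the data frame dat, response y and grouping factor g are hypothetical; the calls come from nlme and multcomp):
library(nlme)
library(multcomp)
# A separate residual variance per group relaxes the equal-variance assumption
m <- gls(y ~ g, data = dat, weights = varIdent(form = ~ 1 | g))
summary(glht(m, linfct = mcp(g = "Tukey")))   # pairwise post hoc comparisons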
If you were to use, for example, nlme::gls() to model the data, that would allow for unequal variances and accommodate unbalanced data. Then, following that up with multcomp::glht() or lsmeans::lsmeans() would provide post hoc tests that inherit their robustness from the robustness of the model used. There are probably other modeling options in other R packages as well. | Does R have post hoc tests robust to unequal sample sizes/population variances? | Robustness would not come from the package used to do post hoc tests. It would come from the model upon which they are based.
If you were to use, for example, nlme::gls() to model the data, that woul | Does R have post hoc tests robust to unequal sample sizes/population variances?
Robustness would not come from the package used to do post hoc tests. It would come from the model upon which they are based.
If you were to use, for example, nlme::gls() to model the data, that would allow for unequal variances and accommodate unbalanced data. Then, following that up with multcomp::glht() or lsmeans::lsmeans() would provide post hoc tests that inherit their robustness from the robustness of the model used. There are probably other modeling options in other R packages as well. | Does R have post hoc tests robust to unequal sample sizes/population variances?
Robustness would not come from the package used to do post hoc tests. It would come from the model upon which they are based.
If you were to use, for example, nlme::gls() to model the data, that woul |
52,385 | Does R have post hoc tests robust to unequal sample sizes/population variances? | It's not been added to R, because no one thought it was important enough to add to R.
SPSS seems to have taken a scattergun approach to post hoc tests - they've just kept adding them. Stuff that appears in (say) SPSS is based on marketing, rather than need. SPSS thinks that they can say "We have more post hoc tests than SAS, Stata and Statistica put together, so you should buy our software". One rarely sees these tests mentioned outside the context of SPSS (and rarely outside the context of books that try to cover everything in a particular SPSS function). A slight problem with that book is that it's a rewrite of a book that was written for SPSS, and so sometimes a different structure would be sensible, so that it matched R, not SPSS.
For R, if someone had cared enough to put it in, someone would have put it in. The fact that people have found the time to write thousands of packages for R, and none of them includes (say) the Hochberg GT2 test, might be telling us something.
If you really must do these post hoc tests (I'm not a fan, and rarely do them), I guess you have two choices:
You could bootstrap it.
You could write it yourself. The algorithms that SPSS uses are published here: ftp://public.dhe.ibm.com/software/analytics/spss/documentation/statistics/20.0/en/client/Manuals/IBM_SPSS_Statistics_Algorithms.pdf
Also, note that this issue came up several years ago on the R help list (I suspect they'd read the same book), https://stat.ethz.ch/pipermail/r-help/2005-November/083595.html | Does R have post hoc tests robust to unequal sample sizes/population variances? | It's not been added to R, because no one thought it was important enough to add to R.
SPSS seems to have taken a scattergun approach to post hoc tests - they've just kept adding them. Stuff that appea | Does R have post hoc tests robust to unequal sample sizes/population variances?
It's not been added to R, because no one thought it was important enough to add to R.
SPSS seems to have taken a scattergun approach to post hoc tests - they've just kept adding them. Stuff that appears in (say) SPSS is based on marketing, rather than need. SPSS thinks that they can say "We have more post hoc tests than SAS, Stata and Statistica put together, so you should buy our software". One rarely sees these tests mentioned outside the context of SPSS (and rarely outside the context of books that try to cover everything in a particular SPSS function). A slight problem with that book is that it's a rewrite of a book that was written for SPSS, and so sometimes a different structure would be sensible, so that it matched R, not SPSS.
For R, if someone had cared enough to put it in, someone would have put it in. The fact that people have found the time to write thousands of packages for R, and none of them includes (say) the Hochberg GT2 test, might be telling us something.
If you really must do these post hoc tests (I'm not a fan, and rarely do them), I guess you have two choices:
You could bootstrap it.
You could write it yourself. The algorithms that SPSS uses are published here: ftp://public.dhe.ibm.com/software/analytics/spss/documentation/statistics/20.0/en/client/Manuals/IBM_SPSS_Statistics_Algorithms.pdf
Also, note that this issue came up several years ago on the R help list (I suspect they'd read the same book), https://stat.ethz.ch/pipermail/r-help/2005-November/083595.html | Does R have post hoc tests robust to unequal sample sizes/population variances?
It's not been added to R, because no one thought it was important enough to add to R.
SPSS seems to have taken a scattergun approach to post hoc tests - they've just kept adding them. Stuff that appea |
52,386 | Does R have post hoc tests robust to unequal sample sizes/population variances? | 'DTK' package in R has Dunnett’s Modified Tukey-Kramer Pairwise Multiple Comparison Test. | Does R have post hoc tests robust to unequal sample sizes/population variances? | 'DTK' package in R has Dunnett’s Modified Tukey-Kramer Pairwise Multiple Comparison Test. | Does R have post hoc tests robust to unequal sample sizes/population variances?
'DTK' package in R has Dunnett’s Modified Tukey-Kramer Pairwise Multiple Comparison Test. | Does R have post hoc tests robust to unequal sample sizes/population variances?
'DTK' package in R has Dunnett’s Modified Tukey-Kramer Pairwise Multiple Comparison Test. |
52,387 | Good references for time series? | I don't know of a single time series book that is as comprehensive as Elements of Statistical Learning. However, here's a list of a few books that I've found helpful:
Free online. More of a forecasting focus, but definitely a good starting point. The slides under resources are also helpful:
Hyndman, R. J., & Athanasopoulos, G. (2013). Forecasting: principles and practice. Retrieved from http://otexts.org/fpp/
Probably the most comprehensive. With information about many of the model types you've listed:
Shumway, R. H., & Stoffer, D. S. (2010). Time Series Analysis and Its Applications. Springer.
Definitive resource on exponential smoothing:
Hyndman, R., Koehler, A. B., Ord, J. K., & Snyder, R. D. (2008). Forecasting with Exponential Smoothing. Springer. | Good references for time series? | I don't know of a single time series book that is as comprehensive as Elements of Statistical Learning. However, here's a list of a few books that i've found helpful:
Free online. More of a forecastin | Good references for time series?
I don't know of a single time series book that is as comprehensive as Elements of Statistical Learning. However, here's a list of a few books that I've found helpful:
Free online. More of a forecasting focus, but definitely a good starting point. The slides under resources are also helpful:
Hyndman, R. J., & Athanasopoulos, G. (2013). Forecasting: principles and practice. Retrieved from http://otexts.org/fpp/
Probably the most comprehensive. With information about many of the model types you've listed:
Shumway, R. H., & Stoffer, D. S. (2010). Time Series Analysis and Its Applications. Springer.
Definitive resource on exponential smoothing:
Hyndman, R., Koehler, A. B., Ord, J. K., & Snyder, R. D. (2008). Forecasting with Exponential Smoothing. Springer. | Good references for time series?
I don't know of a single time series book that is as comprehensive as Elements of Statistical Learning. However, here's a list of a few books that i've found helpful:
Free online. More of a forecastin |
52,388 | Good references for time series? | Brockwell and Davis wrote two excellent time series books. Both cover a great deal of material and the writing is very clear. The first book is more introductory, and the second one has a more mathematical development.
http://www.amazon.com/Introduction-Forecasting-Springer-Texts-Statistics/dp/0387953515/
http://www.amazon.com/Time-Series-Methods-Springer-Statistics/dp/1441903194 | Good references for time series? | Brockwell and Davis wrote two excellent time series books. Both cover a great deal of material and the writing is very clear. The first book is more introductory, and the second one has a more mathema | Good references for time series?
Brockwell and Davis wrote two excellent time series books. Both cover a great deal of material and the writing is very clear. The first book is more introductory, and the second one has a more mathematical development.
http://www.amazon.com/Introduction-Forecasting-Springer-Texts-Statistics/dp/0387953515/
http://www.amazon.com/Time-Series-Methods-Springer-Statistics/dp/1441903194 | Good references for time series?
Brockwell and Davis wrote two excellent time series books. Both cover a great deal of material and the writing is very clear. The first book is more introductory, and the second one has a more mathema |
52,389 | Good references for time series? | I don't know about the 'ESL' or machine learning, but what about good ol' Tsay?
Some parts you mentioned are included, some not (e.g. Kalman filter):
Analysis of Financial Time Series by Ruey S. Tsay
When it comes to time series with applications and an easy-to-understand way of explaining, he is my Tom Cruise, my top gun. | Good references for time series? | I don't know about the 'ESL' or machine learning, but what about good ol' Tsay?
Some parts you mentioned are included, some not (e.g. Kalman filter):
Analysis of Financial Time Series by Ruey S. Tsay | Good references for time series?
I don't know about the 'ESL' or machine learning, but what about good ol' Tsay?
Some parts you mentioned are included, some not (e.g. Kalman filter):
Analysis of Financial Time Series by Ruey S. Tsay
When it comes to time series with applications and an easy-to-understand way of explaining, he is my Tom Cruise, my top gun. | Good references for time series?
I don't know about the 'ESL' or machine learning, but what about good ol' Tsay?
Some parts you mentioned are included, some not (e.g. Kalman filter):
Analysis of Financial Time Series by Ruey S. Tsay |
52,390 | Good references for time series? | ESL is not for time series in my opinion. Tsay's book and Cowpertwait's intro-level book are the best combination.
52,391 | Good references for time series? | Have a look at
James D. Hamilton. Time Series Analysis. Princeton Univ. Press, Princeton, N.J., 1994.
It is very thorough. I'm not sure about neural networks and "all" exponential smoothing, but the rest is in there.
52,392 | Good references for time series? | Here is a good list of books on time series analysis. Note that there is a lot of difference amongst books that cater to people of different backgrounds (economists/engineers/statisticians). hth
52,393 | Why such a poor result from sparse PCA R package? | nsprcomp computes the scores matrix Z (rst$x in your example) as $Z=XW$, where $X$ is the data matrix (prod in your example) and $W$ is the matrix of principal axes (rst$rotation in your example). This is in accordance with standard PCA and the predict.prcomp interface.
However, non-negative sparse PCA usually results in principal axes which are not pairwise orthogonal, and therefore a reconstruction
$\hat{X} = ZW^t = XWW^t$
doesn't recover $X$ even if $W$ has full rank, because $W$ is not an orthogonal matrix. If you reconstruct using the pseudo-inverse $W^\dagger=(W^tW)^{-1}W^t$ instead,
$\hat{X}_2 = ZW^\dagger = XW(W^tW)^{-1}W^t$
corresponds to an orthogonal projection of $X$ onto the principal subspace spanned by $W$, and recovers $X$ if $W$ has full rank:
library(MASS)
# ginv() computes the Moore-Penrose pseudo-inverse of the rotation matrix W;
# adding back the column means removed by nsprcomp recovers the original data
recon2 = predict(rst) %*% ginv(rst$rotation) + matrix(1,5,1) %*% rst$center
abs(prod - recon2)
[,1] [,2] [,3] [,4] [,5]
[1,] 4.440892e-16 4.440892e-16 2.220446e-16 8.881784e-16 2.220446e-16
[2,] 4.440892e-16 1.332268e-15 0.000000e+00 1.998401e-15 4.440892e-16
[3,] 1.110223e-16 4.440892e-16 2.220446e-16 4.440892e-16 2.220446e-16
[4,] 3.330669e-16 4.440892e-16 2.220446e-16 2.220446e-16 0.000000e+00
[5,] 1.110223e-16 4.440892e-16 2.220446e-16 4.440892e-16 2.220446e-16
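To see concretely why the plain transpose-based reconstruction fails here, it may help to inspect $W^tW$ directly. The following is only a quick check, reusing rst and prod from the example above:
# if the principal axes are not pairwise orthogonal, W^t W differs from the identity
crossprod(rst$rotation)
# and the transpose-based reconstruction Z W^t then leaves a non-trivial residual
recon1 = predict(rst) %*% t(rst$rotation) + matrix(1,5,1) %*% rst$center
max(abs(prod - recon1))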
I will amend the documentation accordingly.
@Marc: nneg=TRUE enforces non-negative loadings, i.e. the principal axes are constrained to the non-negative orthant.
52,394 | Why such a poor result from sparse PCA R package? | Your code looks OK for the reconstruction, but this does not seem to be appropriate when the argument nneg=TRUE ("a logical value indicating whether the principal axes should be constrained to the non-negative orthant"). When this argument is set to FALSE, then the reconstruction works in the typical way:
rst.f = nsprcomp(prod, retx=T, nneg=FALSE)
recon = rst.f$x %*% t(rst.f$rotation) + matrix(1,5,1) %*% rst.f$center
abs(prod - recon)
[,1] [,2] [,3] [,4] [,5]
[1,] 1.110223e-16 1.110223e-16 0.000000e+00 2.220446e-16 0
[2,] 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0
[3,] 5.551115e-17 0.000000e+00 2.220446e-16 2.220446e-16 0
[4,] 0.000000e+00 4.440892e-16 0.000000e+00 2.220446e-16 0
[5,] 5.551115e-17 0.000000e+00 2.220446e-16 2.220446e-16 0
Sorry that I can't comment more on the use of the "non-negative orthant" option.
I might also point out that your example is not truly using a "sparse" matrix. But this method does appear to deal nicely with such objects:
require(Matrix)
# treat the zero entries as missing values
prod <- replace(prod, prod==0, NaN)
# keep the remaining entries as (row index i, column index j, value p) triplets
tmp <- cbind(expand.grid(i=seq(nrow(prod)), j=seq(ncol(prod))), p=c(prod))[-which(is.na(c(prod))),]
# build a sparse matrix (class dgCMatrix) from those triplets
prod2 <- sparseMatrix(i=tmp$i, j=tmp$j, x=tmp$p)
rst.f = nsprcomp(prod2, retx=T, nneg=FALSE)
recon = rst.f$x %*% t(rst.f$rotation) + matrix(1,5,1) %*% rst.f$center
abs(prod2 - recon)
5 x 5 sparse Matrix of class "dgCMatrix"
[1,] 2.220446e-16 1.110223e-16 . . .
[2,] 2.220446e-16 4.440892e-16 . . .
[3,] 1.110223e-16 2.220446e-16 2.220446e-16 2.220446e-16 .
[4,] . . . . .
[5,] 1.110223e-16 2.220446e-16 2.220446e-16 2.220446e-16 .
52,395 | Positive correlation but negative coefficient in regression [duplicate] | The interpretation of the post.count coefficient is that it gives the relationship with the response variable, all other factors being held constant. What can be happening is that the marginal effect of post.count is being taken up by one or more of the other variables (let's say building.count for definiteness). As building.count increases, so does revenue. However, for a given level of building.count, there is a small but definite decrease in revenue with post.count. In other words, the marginal relationship you saw for post.count was really due to building.count, and including that variable in the analysis brought that out.
To get this effect, there has to be positive correlation between the two predictors involved, as you have.
Here's an example from my own experience (insurance risk). The claim rate for motor insurance is positively correlated with year of manufacture: more recently-built cars have more claims. It's also positively correlated with sum insured: more expensive cars have more claims. However, when you include both predictors, you find that claim rate has a negative relationship with year of manufacture: for a given sum insured, more recently-built cars have fewer claims than earlier ones. This is because, for a given sum insured, an older car is likely to be an inherently more prestigious/higher-status make or model which has suffered the effects of depreciation. On the other hand, a newer vehicle insured for the same amount is likely to be a more mass-market brand. Pricier, more valuable brands are more likely to claim, and this is brought out when both effects (vehicle age and sum insured) are included in the analysis.
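The same sign reversal is easy to reproduce in a small simulation. The sketch below is purely illustrative: the variable names mimic those in the question and the coefficients are made up.
set.seed(1)
n <- 1000
building.count <- rnorm(n)
# post.count is positively correlated with building.count
post.count <- 0.9 * building.count + rnorm(n, sd = 0.5)
# revenue rises with building.count but, holding it fixed, falls with post.count
revenue <- 2 * building.count - 0.5 * post.count + rnorm(n)
cor(post.count, revenue)                         # positive marginal correlation
coef(lm(revenue ~ post.count + building.count))  # negative coefficient on post.count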
52,396 | Positive correlation but negative coefficient in regression [duplicate] | That is a possible and common error. The regression gives coefficients while controlling for the other variables. Simple correlation coefficients do not control for the other variables and, therefore, can give false relationships.
See the chart in the thread linked below for a visual. The variables are negatively correlated, but unless they are controlled for, they would appear to be positively correlated.
Positive correlation and negative regressor coefficient sign
52,397 | SVM has relatively low classification rate for high-dimensional data even though 2-D projections show they are separable | I didn't see anything wrong with the results you had. As others pointed out already, you are facing a typical $P \gg N$ situation, where the number of predictors, i.e., features, is much greater than the number of instances.
In this situation, an SVM is generally a good choice, as its maximum-margin property gives some guarantees on generalization performance. However, I would go for a linear kernel rather than a polynomial or RBF kernel, because a linear kernel with 14000 features is already very powerful. If you later find that the representational power of a linear kernel is not enough, you can then try more powerful kernels.
Since you mentioned that the data are well separable with fewer predictors, you might want to get rid of some predictors in the first place. In general, we try to avoid the $P \gg N$ situation; a Lasso method can do some implicit feature selection for you.
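As a rough sketch of both suggestions (a linear-kernel SVM and Lasso-based feature selection), something along the following lines could be tried. The data here are synthetic stand-ins; with the real problem, X would be the roughly 14000-column feature matrix, y the class labels, and the parameter values would need tuning.
library(e1071)   # svm()
library(glmnet)  # lasso-penalised logistic regression
set.seed(1)
X <- matrix(rnorm(50 * 1000), 50, 1000)   # stand-in for the real feature matrix
y <- factor(rbinom(50, 1, 0.5))           # stand-in for the real class labels
# linear-kernel SVM; summary() reports 5-fold cross-validation accuracy
fit.svm <- svm(x = X, y = y, kernel = "linear", cost = 1, cross = 5)
summary(fit.svm)
# lasso as an implicit feature selector: non-zero coefficients mark the retained
# predictors (with these random stand-ins the selection may well be empty)
cv.fit <- cv.glmnet(X, y, family = "binomial", alpha = 1)
beta   <- as.matrix(coef(cv.fit, s = "lambda.min"))[-1, 1]
which(beta != 0)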
52,398 | SVM has relatively low classification rate for high-dimensional data even though 2-D projections show they are separable | From Gaussian Processes for Machine Learning by Rasmussen and Williams:
In a feature space of dimension $N$, if $N > n$ then there will always be a separating hyperplane. However this hyperplane may not give rise to good generalization performance, especially if some of the labels are incorrect.
That is, if you have more dimensions than data points, the data will be linearly separable, but that does not necessarily mean the classifier will generalize well.
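A tiny, entirely synthetic illustration of that point: with many more features than observations, even labels that have nothing to do with the inputs can typically be fit perfectly on the training set, which is exactly why separability of the training data says little about generalization.
library(e1071)
set.seed(42)
n <- 20; p <- 200                                  # far more dimensions than points
X <- matrix(rnorm(n * p), n, p)
y <- factor(sample(c(-1, 1), n, replace = TRUE))   # labels unrelated to X
fit <- svm(X, y, kernel = "linear", cost = 1000, scale = FALSE)
mean(predict(fit, X) == y)                         # training accuracy, typically 1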
52,399 | SVM has relatively low classification rate for high-dimensional data even though 2-D projections show they are separable | When there are more features than patterns, a linear kernel is generally a good idea, as the other answers suggest (+1); however, the really important thing to do then is to set the regularisation parameter (C) carefully (using e.g. cross-validation). It is largely the regularisation of support vector machines that gives rise to their good generalisation performance (simple ridge regression with a good choice of ridge parameter is often just as good in practice).
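A hedged sketch of that tuning step, using the grid-search helper in the e1071 package (synthetic stand-in data again; the cost grid and fold count are arbitrary):
library(e1071)
set.seed(1)
X <- matrix(rnorm(50 * 1000), 50, 1000)   # stand-in feature matrix
y <- factor(rbinom(50, 1, 0.5))           # stand-in class labels
# 10-fold cross-validation over a coarse grid of C values for a linear-kernel SVM
tuned <- tune(svm, train.x = X, train.y = y,
              kernel = "linear", scale = FALSE,
              ranges = list(cost = 2^(-8:8)),
              tunecontrol = tune.control(cross = 10))
tuned$best.parameters
tuned$best.performance   # cross-validated error at the chosen C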
52,400 | Confidence bands in case of fitting ARIMA in R? | Use acf from the stats package (or Acf from the forecast package) with ci.type="ma". Note that some people use the simpler approximation all the time; it's just to give an idea of what models might be worth considering, so accuracy isn't so important.
Bartlett's approximation (the one you quote from Wikipedia) is only relevant to examining the autocorrelation function: the confidence interval for a lag $q$ is given assuming, as a null hypothesis, a moving average process of order $q-1$; it's conditional upon the estimated autocorrelations of all previous lags. (So note that it's not especially relevant to deciding between, say, an ARMA(1,1) and an ARMA(1,2).)
You might suppose a similar formula for confidence intervals on the partial autocorrelations, mutatis mutandis; but you'd be wrong: if you assume an autoregressive process of order $p-1$, the standard errors on the partial autocorrelations are asymptotically $\frac{1}{\sqrt{n}}$, where $n$ is the number of observations. See Quenouille (1949), "Approximate tests of correlation in time-series", JRSS B, 11, 1.
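For concreteness, a minimal sketch with simulated data (the MA(1) model and its coefficient are arbitrary) contrasting the two kinds of bands:
set.seed(123)
x <- arima.sim(model = list(ma = 0.7), n = 200)   # an MA(1) series
acf(x)                   # default bands: white-noise null, flat +/- 1.96/sqrt(n)
acf(x, ci.type = "ma")   # Bartlett bands: null of an MA(q-1) process at lag q
pacf(x)                  # partial ACF; its bands are the flat +/- 1.96/sqrt(n) limits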