Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2259 | 1 | 2265 | null | 5 | 3143 | Imagine that
- responses were collected on a 20 item scale which was designed to measure 4 factors with 5 items on each scale.
- participants were drawn from two groups (Group 1 and Group 2) with sample sizes $n_1 = 150$ and $n_2 = 150$.
- a researcher wanted to assess the factor structure of the scale
Common scenarios that I see in my consulting:
- Group 1 consists of first-year psychology students and Group 2 is sampled from the general community
- Group 1 is sampled at one point in time and Group 2 is sampled several years later
- Group 1 is a normal population and Group 2 is a clinical population
Question
- Under what circumstances would it be appropriate to collapse across groups?
- How would these circumstances be assessed?
My initial Thoughts:
My own initial thoughts were as follows:
- Theoretical assessment: assess the degree to which the two groups were sampled or measured in ways that would alter the means, sds, or correlation between the items
- Empirical assessment: Examine differences between means, sds, and intercorrelations on the scales and optionally on other relevant variables (e.g., demographics); perform a two-group confirmatory factor analysis to assess the consistency of the factor structure across groups.
Essentially, if the empirical evidence suggests that the groups are similar and the theoretical assessment suggests that they are similar, then it should be reasonable to combine.
Conclusion
- Does the approach above seem reasonable?
- Do you have alternative strategies?
- Are there any references that provide recommendations or examples regarding best practice in this situation?
| When is it acceptable to collapse across groups when performing a factor analysis? | CC BY-SA 2.5 | null | 2010-09-01T07:23:52.913 | 2010-09-01T11:01:01.643 | null | null | 183 | [
"factor-analysis",
"sampling"
] |
2260 | 2 | null | 2259 | 1 | null | It might be a little fly by night, but your theory may suggest whether the two groups have the same factor structure or not. If your theory suggests they do, and there is no reason to doubt the theory, I'd suggest you could go right ahead and trust that they have the same factor structure.
Your empirical assessment would probably be a good route to go just to spot-check the theoretical assessment as to whether they are likely to share the same structure. However, I don't intuitively see why mean differences between items would imply they have a different underlying factor structure. It seems to me that might just suggest that one group has higher or lower scores on a given factor.
| null | CC BY-SA 2.5 | null | 2010-09-01T07:55:30.543 | 2010-09-01T07:55:30.543 | null | null | 196 | null |
2261 | 2 | null | 2259 | 2 | null | The approach you mention seems reasonable, but you'd have to take into account that you cannot see the total dataset as a single population. So theoretically, you should use any kind of method that can take differences between those groups into account, similar to using "group" as a random term in an ANOVA or GLM approach.
An alternative for empirical evaluation would be to check formally whether an effect of group can be found on the answers. To do that, you could create a binary dataset with the following columns:
yes/no - item - participant - group
With this you can use item as a random term, with participant nested in group, and test the fixed effect of group using e.g. a GLM with a logit link. You can also just ignore participant if you lose too many df.
This is an approximation of the truth, but if the effect of group is significant, I wouldn't collapse the dataset.
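For concreteness, a minimal sketch of that model with lme4 (the column names yes_no, item, participant and group are hypothetical placeholders for your own data):
```
library(lme4)
# binary response; random intercepts for item and for participant nested within group
fit <- glmer(yes_no ~ group + (1 | item) + (1 | group:participant),
             data = longData, family = binomial(link = "logit"))
summary(fit)  # inspect the fixed effect of group
```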
| null | CC BY-SA 2.5 | null | 2010-09-01T08:44:54.663 | 2010-09-01T08:44:54.663 | null | null | 1124 | null |
2262 | 1 | 2263 | null | 0 | 5990 | I am currently in a situation that I don't really know how to solve by myself.
I need to calculate the AUC of each peak and then compare these areas in relation to each other. The problem is that the peaks are not completely separated, and the only information I have is the mean and the SD of each peak.
Does anyone know how to do this? Any hint or guess would already be really cool.
Thanks.
| Area Under Curve (AUC) - given peak mean and standard deviation (SD) | CC BY-SA 2.5 | null | 2010-09-01T09:46:35.167 | 2010-09-01T14:52:06.747 | null | null | 1133 | [
"normal-distribution"
] |
2263 | 2 | null | 2262 | 3 | null | That really depends on the form and the height of the curve. If you assume the curves are all gaussian and you know the heights, then you can calculate the area under the curve by using the normal density function. In R this would become:
```
heights <- 1
avg <- 3
sdev <- 2
AUC <- heights/dnorm(avg, avg, sdev) # height divided by the density at the mean (= heights * sdev * sqrt(2*pi))
```
As the value of the density function at the mean is only determined by the sd, this information suffices for calculation of the AUC, given the assumptions are correct. If all heights are the same, the AUC is proportional to the sd only.
Without information about the shape of the curve and the heights, you simply cannot calculate the AUC as far as I know.
| null | CC BY-SA 2.5 | null | 2010-09-01T10:10:54.380 | 2010-09-01T10:10:54.380 | null | null | 1124 | null |
2264 | 1 | null | null | 4 | 1327 | I've got product ratings for a few thousand products. The number of ratings for each product varies from zero to about fifty. I want to find the expected value of product rating for each product. If there are lots of ratings for the product I'd expect the expected value to be the average of the ratings for the product, but if there are only a few I'd expect the expected value to be closer to the average of all ratings. How do I calculate the true expected value? Please be gentle: I'm no statistician or mathematician.
Edit 1: Joris's answer below maintains I can't calculate expected value because by definition that means I must have the entire population. In that case please can you tell me how to calculate the quantity that is similar to expected value in spirit, does not require the entire population, and can make use of prior information.
Edit 2: I would expect that if each product's ratings have low variance, or if there is very high variance between different products' ratings, then the measured ratings are more significant.
| Expected value of small sample | CC BY-SA 3.0 | null | 2010-09-01T10:20:34.940 | 2012-07-10T08:24:31.113 | 2012-07-10T01:01:21.760 | 9007 | 1134 | [
"expected-value"
] |
2265 | 2 | null | 2259 | 4 | null | There seem to be two cases to consider, depending on whether your scale was already validated using standard psychometric methods (from classical test or item response theory). In what follows, I will consider the first case, where I assume preliminary studies have demonstrated construct validity and score reliability for your scale.
In this case, there is no formal need to apply exploratory factor analysis, unless you want to examine the pattern matrix within each group (but I generally do it, just to ensure that there are no items that unexpectedly show low factor loadings or cross-load onto different factors); in order to be able to pool all your data, you need to use a multi-group factor analysis (hence, a confirmatory approach as you suggest), which basically amounts to adding extra parameters for testing a group effect on factor loadings (1st-order model) or factor correlations (2nd-order model, if this makes sense), which would impact measurement invariance across subgroups of respondents. This can be done using [Mplus](http://www.statmodel.com/) (see the discussion about CFA [there](http://www.statmodel.com/discussion/messages/9/9.html)) or [Mx](http://www.vcu.edu/mx/) (e.g. [Conor et al.](http://www.eric.ed.gov:80/ERICWebPortal/search/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=EJ857035&ERICExtSearch_SearchType_0=no&accno=EJ857035), 2009); I am not sure about [Amos](http://www.spss.com/amos/), as it seems to be restricted to simple factor structures. The Mx software has been redesigned to work within the R environment as [OpenMx](http://openmx.psyc.virginia.edu/). Its wiki is responsive, so you can ask questions if you encounter difficulties with it. There is also a more recent package, [lavaan](http://lavaan.ugent.be/), which appears to be promising for SEMs.
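As a rough illustration of such a multi-group CFA with lavaan (item names x1-x20 and the grouping column are hypothetical; this is only a sketch of the configural vs. equal-loadings comparison):
```
library(lavaan)
model <- '
  F1 =~ x1 + x2 + x3 + x4 + x5
  F2 =~ x6 + x7 + x8 + x9 + x10
  F3 =~ x11 + x12 + x13 + x14 + x15
  F4 =~ x16 + x17 + x18 + x19 + x20
'
fit.config <- cfa(model, data = dat, group = "group")                            # configural model
fit.metric <- cfa(model, data = dat, group = "group", group.equal = "loadings")  # equal loadings
anova(fit.config, fit.metric)  # does constraining the loadings worsen the fit?
```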
Alternative models coming from IRT may also be considered, including a Latent Regression Rasch Model (for each scale separately, see De Boeck and Wilson, 2004), or a Multivariate Mixture Rasch Model (von Davier and Carstensen, 2007). You can take a look at [Volume 20](http://www.jstatsoft.org/index.php?vol=20) of the [Journal of Statistical Software](http://www.jstatsoft.org/), entirely devoted to psychometrics in R, for further information about IRT modeling with R.
You may be able to reach similar tests using Structural Equation Modeling, though.
If factor structure proves to be equivalent across the two groups, then you can aggregate the scores (on your four summated scales) and report your statistics as usual.
However, it is always a challenging task to use CFA, since not rejecting H0 by no means allows you to check that your postulated theoretical model is correct in the real world, but only that there is no reason to reject it on statistical grounds; on the other hand, rejecting the null would lead you to accept the alternative, which is generally left unspecified, unless you apply sequential testing of nested models. Anyway, this is the way we go in cross-cultural settings, especially when we want to assess whether a given questionnaire (e.g., on Patient-Reported Outcomes) measures what it purports to measure whatever the population it is administered to.
Now, regarding the apparent differences between the two groups -- one is drawn from a population of students, the other is a clinical sample, assessed at a later date -- it depends very much on your own considerations: Does mixing these two samples make sense given the literature surrounding the questionnaire used (esp., it should have shown temporal stability and applicability in a wide population)? Do you plan to generalize your findings over a larger population (obviously, you gain power by increasing sample size)? At first sight, I would say that you need to ensure that both groups are comparable with respect to the characteristics thought to influence one's score on this questionnaire (e.g., gender, age, SES, biomedical history, etc.), and this can be done using classical statistics for two-group comparisons (on raw scores). It is worth noting that in clinical studies, we face the reverse situation: We usually want to show that scores differ between different clinical subgroups (or between treated and naive patients), which is often referred to as known-group validity.
References:
- De Boeck, P. and Wilson, M. (2004). Explanatory Item Response Models. A Generalized Linear and Nonlinear Approach. Springer.
- von Davier, M. and Carstensen, C.H. (2007). Multivariate and Mixture Distribution Rasch Models. Springer.
| null | CC BY-SA 2.5 | null | 2010-09-01T11:01:01.643 | 2010-09-01T11:01:01.643 | null | null | 930 | null |
2266 | 2 | null | 2264 | 3 | null | The "true" expected value cannot be calculated. You can estimate it using the mean of the ratings for each product, and get an idea about the position by calculating the 95% confidence interval (CI) on the mean.
This is done by
$CI \approx avg \pm 2 * \frac{SD}{\sqrt{n}}$
with n being the number of ratings, SD the standard deviation and avg the average. More correct would be to use the t-distribution, where you use the 2.5% and 97.5% quantiles of the t-distribution with degrees of freedom equal to the number of observations minus one.
$CI = avg \pm T_{(p=0.975,df=n-1)} * \frac{SD}{\sqrt{n}}$
For 10 ratings, $T_{(p=0.975,df=n-1)}$ is 2.26. For 50 ratings, it is 2.01.
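In R, that interval for a single product could be computed along these lines (x being a hypothetical vector of that product's ratings):
```
n  <- length(x)
ci <- mean(x) + c(-1, 1) * qt(0.975, df = n - 1) * sd(x) / sqrt(n)
```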
There's a 95% chance that this confidence interval contains the true value. Or, to please Nèstor: if you do this experiment 10,000 times, 95% of the confidence intervals you construct this way will contain the true value for the expected value.
You assume here that the distribution of the average is normal. If you have a very small number of ratings, the SD can be estimated badly.
In that case, you could estimate an "overall" standard deviation on the scoring, and use that to calculate the CI. But keep in mind that this way you assume that the standard deviation is the same for every product.
In extremis, you could resort to bootstrapping to calculate the CI for every product. This will increase the calculation time substantially, and won't be adding any value for products with enough ratings.
| null | CC BY-SA 3.0 | null | 2010-09-01T11:26:03.903 | 2012-07-10T08:24:31.113 | 2012-07-10T08:24:31.113 | 1124 | 1124 | null |
2267 | 2 | null | 73 | 3 | null | You can also take a look at [Task views](http://cran.r-project.org/web/views/) on CRAN and see if something suit your needs. I agree with @Jeromy for these must-have packages (for data manipulation and plotting).
| null | CC BY-SA 2.5 | null | 2010-09-01T11:31:31.833 | 2010-09-01T11:31:31.833 | null | null | 930 | null |
2268 | 2 | null | 2264 | 0 | null | I haven't looked into it much, but this article on [Bayesian rating systems](http://www.thebroth.com/blog/118/bayesian-rating) looks interesting.
| null | CC BY-SA 2.5 | null | 2010-09-01T11:57:18.720 | 2010-09-01T11:57:18.720 | null | null | 183 | null |
2269 | 1 | null | null | 3 | 1154 | How can I access tables created in SAS Enterprise Guide Client into SAS Enterprise Miner Client?
| Access tables created in SAS Enterprise Guide Client into SAS Enterprise Miner Client? | CC BY-SA 2.5 | null | 2010-09-01T12:07:28.173 | 2011-05-31T18:52:33.753 | null | null | 1135 | [
"sas"
] |
2270 | 2 | null | 1963 | 10 | null | I'll add an independent recommendation for Jeromy's blog post, and second the suggestions of James DeCoster's notes and the Borenstein textbook (propofols' no. 2).
At risk of indulging in self-promotion, I recently published a methods paper entitled [Getting Started with Meta-analysis](http://onlinelibrary.wiley.com/doi/10.1111/j.2041-210X.2010.00056.x/abstract). It's aimed at ecologists and evolutionary biologists, so the examples are taken from these fields, but I hope it will be useful for those working in other areas.
| null | CC BY-SA 2.5 | null | 2010-09-01T12:21:52.743 | 2010-09-01T12:21:52.743 | null | null | 266 | null |
2272 | 1 | 2287 | null | 313 | 184003 | Joris and Srikant's exchange [here](https://stats.stackexchange.com/questions/2182/can-you-explaining-why-statistical-tie-is-not-naively-when-p-1-p-2-2-moe/2242#2242) got me wondering (again) whether my internal explanations for the difference between confidence intervals and credible intervals were the correct ones. How would you explain the difference?
| What's the difference between a confidence interval and a credible interval? | CC BY-SA 4.0 | null | 2010-09-01T13:53:07.183 | 2021-12-23T14:34:02.923 | 2020-07-03T23:59:57.003 | 11887 | 71 | [
"bayesian",
"confidence-interval",
"frequentist",
"credible-interval",
"fiducial"
] |
2274 | 2 | null | 2262 | 1 | null | Given what your plot looks like, I would rather suggest fitting a mixture of Gaussians and getting their respective densities. Look at the [mclust](http://cran.r-project.org/web/packages/mclust/index.html) package; basically, this is referred to as model-based clustering (you are seeking groups of points belonging to a given distribution, that is to be estimated, whose location parameter -- but also shape -- varies along a common dimension). A full explanation of MClust is available [here](http://www.google.fr/url?sa=t&source=web&cd=5&ved=0CD0QFjAE&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.156.6814%26rep%3Drep1%26type%3Dpdf&rct=j&q=Mclust%201D%20mixture%20of%20gaussian&ei=Cl1-TI_YAovOswb3zIiUCQ&usg=AFQjCNHRwRIYnWEvDbdMqBj6s2LmpCSouw&sig2=gkbsyHKQMi9KiontFQ0E1g).
It seems the [delt](http://cran.r-project.org/web/packages/delt/index.html) package offers an alternative way to fit 1D data with a mixture of Gaussians, but I didn't look into the details.
Anyway, I think this is the best way to get automatic estimates and avoid cutting your x-scale at arbitrary locations.
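A minimal sketch of the mclust route (y being a hypothetical numeric vector of your measurements):
```
library(mclust)
fit <- Mclust(y)                  # number of mixture components chosen by BIC
summary(fit, parameters = TRUE)   # component means, variances and mixing proportions
```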
| null | CC BY-SA 2.5 | null | 2010-09-01T14:09:09.870 | 2010-09-01T14:09:09.870 | null | null | 930 | null |
2275 | 1 | 2285 | null | 7 | 795 | I do not study statistics but engineering; however, this is a statistics question, and I hope you can lead me to what I need to learn to solve this problem.
I have a situation where I calculate the probabilities of thousands of things happening within about 30 days. When, after 30 days, I see what actually happened, how can I test how accurately I predicted? These calculations result in probabilities and in actual values (ft). What is the method for doing this?
Thanks,
CP
| How can I determine accuracy of past probability calculations? | CC BY-SA 2.5 | null | 2010-09-01T14:29:02.513 | 2010-09-02T00:12:45.650 | 2010-09-02T00:12:45.650 | 159 | 1137 | [
"probability"
] |
2277 | 2 | null | 2262 | 1 | null | It is critical to know how the peak heights and sds were calculated. (I take "mean" in the question to be a mistaken way of referring to a height. Without the heights, the problem is hopeless; it would be like requesting a formula for the area of a rectangle given only its width and location.)
One would expect, as Joris Meys' answer and its commentary suggest, that the area could be estimated as a sum of Gaussians. Actually, we don't need to assume a Gaussian shape; almost any standard (preferably unimodal, continuous) shape will do, because the area will be proportional to the peak height (a y-scale factor) and the sd (an x-scale factor), whence the total estimated area ought to be a constant times the sum of height*SD and the relative contribution of each peak will equal its height*SD divided by this sum. But this all assumes the heights and sds were fit to the curve with such an application in mind.
I realize there are many problems with such a formula, but let's not get carried away by all the detail in the example graph: the problem as posed says that the "means" and SDs are the only information available.
| null | CC BY-SA 2.5 | null | 2010-09-01T14:52:06.747 | 2010-09-01T14:52:06.747 | null | null | 919 | null |
2278 | 2 | null | 423 | 93 | null | And another one from xkcd.
Title: Self-Description

The mouseover text:
> The contents of any one panel are dependent on the contents of every panel including itself. The graph of panel dependencies is complete and bidirectional, and each node has a loop. The mouseover text has two hundred and forty-two characters.
| null | CC BY-SA 3.0 | null | 2010-09-01T15:02:00.683 | 2016-10-09T15:32:00.433 | 2016-10-09T15:32:00.433 | -1 | 442 | null |
2279 | 2 | null | 2258 | 2 | null | Strictly speaking, this is trivial: the preimage of $(S_2, F_2)$ is all of $S_1$ (by definition), which is measurable (by definition).
Perhaps you want to conclude that the preimage of any measurable subset of $S_2$ is measurable: that is a nice property of a function. However, this conclusion is not true in general, either. For example, let $(S_1, F_1)$ contain a nonmeasurable set and let $(S_2, F_2)$ contain two disjoint measurable atoms. Map every element of the nonmeasurable set to one of the atoms and map every element of the complement of the nonmeasurable set to the other atom. The preimage of the first atom is not measurable, whence this map is not measurable.
| null | CC BY-SA 2.5 | null | 2010-09-01T15:07:38.893 | 2010-09-01T15:07:38.893 | null | null | 919 | null |
2280 | 2 | null | 2264 | 4 | null | Incorporating a prior is one way to 'make up' for small samples. Another is to use a mixed model, with an intercept for the mean structure and a random intercept for each product. The estimate of the population mean plus the predicted random effect (BLUP) then offers a form of shrinkage, where values for products with less information are shrunk more toward the overall sample mean than those based on more information. This method is common in, for example, Small Area Estimation in survey sampling.
Edit: The R code might look like:
```
library(nlme)
# random intercept for each product; the fixed intercept estimates the overall mean
f <- lme(score ~ 1, data = yourData, random = ~1|product)
# predictions include the BLUPs, shrinking products with few ratings toward the overall mean
p <- predict(f)
```
If you go this route the assumptions are:
- independent, normal errors with expected value 0 and constant variance for all observations
- normal random effects with expected value 0
Violations of these can generally be modeled, but of course with that comes added complexity...
| null | CC BY-SA 2.5 | null | 2010-09-01T16:01:33.643 | 2010-09-01T23:12:40.680 | 2010-09-01T23:12:40.680 | 1107 | 1107 | null |
2281 | 2 | null | 2272 | 47 | null | My understanding is as follows:
Background
Suppose that you have some data $x$ and you are trying to estimate $\theta$. You have a data generating process that describes how $x$ is generated conditional on $\theta$. In other words, you know the distribution of $x$ (say, $f(x|\theta)$).
Inference Problem
Your inference problem is: What values of $\theta$ are reasonable given the observed data $x$ ?
Confidence Intervals
Confidence intervals are a classical answer to the above problem. In this approach, you assume that there is a true, fixed value of $\theta$. Given this assumption, you use the data $x$ to get to an estimate of $\theta$ (say, $\hat{\theta}$). Once you have your estimate, you want to assess where the true value is in relation to your estimate.
Notice that under this approach the true value is not a random variable. It is a fixed but unknown quantity. In contrast, your estimate is a random variable as it depends on your data $x$ which was generated from your data generating process. Thus, you realize that you get different estimates each time you repeat your study.
The above understanding leads to the following methodology to assess where the true parameter is in relation to your estimate. Define an interval, $I \equiv [lb(x), ub(x)]$ with the following property:
$P(\theta \in I) = 0.95$
An interval constructed like the above is what is called a confidence interval. Since the true value is unknown but fixed, the true value is either in the interval or outside the interval. The confidence interval then is a statement about the likelihood that the interval we obtain actually contains the true parameter value. Thus, the probability statement is about the interval (i.e., the chances that the interval contains the true value or not) rather than about the location of the true parameter value.
In this paradigm, it is meaningless to speak about the probability that a true value is less than or greater than some value as the true value is not a random variable.
Credible Intervals
In contrast to the classical approach, in the bayesian approach we assume that the true value is a random variable. Thus, we capture our uncertainty about the true parameter value by imposing a prior distribution on the true parameter vector (say $f(\theta)$).
Using bayes theorem, we construct the posterior distribution for the parameter vector by blending the prior and the data we have (briefly the posterior is $f(\theta|-) \propto f(\theta) f(x|\theta)$).
We then arrive at a point estimate using the posterior distribution (e.g., use the mean of the posterior distribution). However, since under this paradigm, the true parameter vector is a random variable, we also want to know the extent of uncertainty we have in our point estimate. Thus, we construct an interval such that the following holds:
$P(lb(x) \le \theta \le ub(x)) = 0.95$
The above is a credible interval.
Summary
Credible intervals capture our current uncertainty in the location of the parameter values and thus can be interpreted as probabilistic statement about the parameter.
In contrast, confidence intervals capture the uncertainty about the interval we have obtained (i.e., whether it contains the true value or not). Thus, they cannot be interpreted as a probabilistic statement about the true parameter values.
| null | CC BY-SA 2.5 | null | 2010-09-01T16:01:43.313 | 2010-09-01T16:01:43.313 | null | null | null | null |
2282 | 1 | 2283 | null | -1 | 199 | Following [this question](https://stats.stackexchange.com/questions/1676/i-just-installed-the-latest-version-of-r-what-packages-should-i-obtain), I wish to have some way of counting how many times I am using a package in my daily work.
Is there a function/package to do that?
In case there isn't, how would you construct such a capability?
The way I would do that is by changing it so that at the end of any R session, a log file of the commands would be saved to some location. On that file, I would grep out all instances of "library" and "require". Then save the results into a file (with some time stamp).
Lastly, I might want some function to (once in X time) send this file to a remote FTP location - so that other R users could analyse the results. (is there a way to do that with R ?!)
If someone wants to try and construct such a mechanism - I'd be glad to help by providing a relevant FTP account and by spreading the word on it on ["R bloggers"](http://R-bloggers.com) for the good of the community.
p.s (mainly for Shane): I wasn't sure if this question should go on stackoverflow or here. If this type of question wasn't debated yet on the meta.stat - it should be. If it was, I'll be glad to know what the conclusion of that discussion was.
| Counting how many times a package has been loaded in R? | CC BY-SA 2.5 | null | 2010-09-01T16:04:03.513 | 2013-09-06T09:38:35.813 | 2017-04-13T12:44:36.923 | -1 | 253 | [
"r"
] |
2283 | 2 | null | 2282 | 3 | null | Overload `library()` and `require()` so that they report what they do (whichever way: append to a text file, say) and have those replacement functions loaded first at startup.
| null | CC BY-SA 2.5 | null | 2010-09-01T16:13:47.113 | 2010-09-01T16:13:47.113 | null | null | 334 | null |
2284 | 2 | null | 2275 | 10 | null | In their classic book on the Federalist papers, Mosteller and Wallace argue for a log penalty function: you penalize yourself $-\log(p)$ when you predict an event with probability $p$ and it occurs; the penalty for it not occurring equals $-\log(1-p)$. Thus, the penalty is high when whatever happens is unexpected according to your prediction.
Their argument in favor of this function rests on a simple natural criterion: "the penalty function should encourage the prediction of the correct probabilities if they are known." Assuming the total penalty is summed over all predictions and there will be three or more of them, M&W claim that the log penalty function is the only one (up to affine transformation) for which the "expected penalty is minimized over all predictions" by the correct probabilities.
Following this, then, a good test for you to use is to track your accumulated log penalties. If, after a long time (or by means of some independent oracle), you obtain accurate estimates of what the probabilities actually were, you can compare your penalty with the minimum possible one. The average of that difference measures your long-run predictive performance (the lower the better). This is an excellent way to compare two or more competing predictors, too.
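In R, the accumulated penalty is a one-liner, assuming hypothetical vectors p (your predicted probabilities) and y (0/1 indicators of what actually happened):
```
log_penalty <- -sum(y * log(p) + (1 - y) * log(1 - p))
# if accurate probabilities q later become available, the same sum with q in place of p
# gives the benchmark to subtract from log_penalty
```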
| null | CC BY-SA 2.5 | null | 2010-09-01T16:49:30.997 | 2010-09-01T16:49:30.997 | null | null | 919 | null |
2285 | 2 | null | 2275 | 10 | null | What you're looking for are called Scoring Rules, which are ways of evaluating probabilistic forecasts. They were invented in the 1950s by weather forecasters, and there has been a been a bit of work on them in the statistics community, but I don't know of any books on the topic.
One thing you could do would be to bin the forecasts by probability range (e.g.: 0-5%, 5%-10%, etc.) and look at how many predicted events in that range occurred (if there are 40 events in the 0-5% range, and 20 occur, then your might have problems). If the events are independent, then you could compare these numbers to a binomial distribution.
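A rough sketch of that binning check (p and y again being hypothetical vectors of forecast probabilities and 0/1 outcomes):
```
bins <- cut(p, breaks = seq(0, 1, by = 0.05), include.lowest = TRUE)
data.frame(expected = tapply(p, bins, sum),   # expected number of events per bin
           observed = tapply(y, bins, sum))   # events that actually occurred
```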
| null | CC BY-SA 2.5 | null | 2010-09-01T17:30:32.223 | 2010-09-01T17:30:32.223 | null | null | 495 | null |
2286 | 2 | null | 1337 | 13 | null | Here's a groaner:
Q: What do you call 100 statisticians at a tea party?
A: A Z-Party.
| null | CC BY-SA 2.5 | null | 2010-09-01T18:23:56.687 | 2010-09-01T18:23:56.687 | null | null | 1118 | null |
2287 | 2 | null | 2272 | 425 | null | I agree completely with Srikant's explanation. To give a more heuristic spin on it:
Classical approaches generally posit that the world is one way (e.g., a parameter has one particular true value), and try to conduct experiments whose resulting conclusion -- no matter the true value of the parameter -- will be correct with at least some minimum probability.
As a result, to express uncertainty in our knowledge after an experiment, the frequentist approach uses a "confidence interval" -- a range of values designed to include the true value of the parameter with some minimum probability, say 95%. A frequentist will design the experiment and 95% confidence interval procedure so that out of every 100 experiments run start to finish, at least 95 of the resulting confidence intervals will be expected to include the true value of the parameter. The other 5 might be slightly wrong, or they might be complete nonsense -- formally speaking that's ok as far as the approach is concerned, as long as 95 out of 100 inferences are correct. (Of course we would prefer them to be slightly wrong, not total nonsense.)
Bayesian approaches formulate the problem differently. Instead of saying the parameter simply has one (unknown) true value, a Bayesian method says the parameter's value is fixed but has been chosen from some probability distribution -- known as the prior probability distribution. (Another way to say that is that before taking any measurements, the Bayesian assigns a probability distribution, which they call a belief state, on what the true value of the parameter happens to be.) This "prior" might be known (imagine trying to estimate the size of a truck, if we know the overall distribution of truck sizes from the DMV) or it might be an assumption drawn out of thin air. The Bayesian inference is simpler -- we collect some data, and then calculate the probability of different values of the parameter GIVEN the data. This new probability distribution is called the "a posteriori probability" or simply the "posterior." Bayesian approaches can summarize their uncertainty by giving a range of values on the posterior probability distribution that includes 95% of the probability -- this is called a "95% credibility interval."
A Bayesian partisan might criticize the frequentist confidence interval like this: "So what if 95 out of 100 experiments yield a confidence interval that includes the true value? I don't care about 99 experiments I DIDN'T DO; I care about this experiment I DID DO. Your rule allows 5 out of the 100 to be complete nonsense [negative values, impossible values] as long as the other 95 are correct; that's ridiculous."
A frequentist die-hard might criticize the Bayesian credibility interval like this: "So what if 95% of the posterior probability is included in this range? What if the true value is, say, 0.37? If it is, then your method, run start to finish, will be WRONG 75% of the time. Your response is, 'Oh well, that's ok because according to the prior it's very rare that the value is 0.37,' and that may be so, but I want a method that works for ANY possible value of the parameter. I don't care about 99 values of the parameter that IT DOESN'T HAVE; I care about the one true value IT DOES HAVE. Oh also, by the way, your answers are only correct if the prior is correct. If you just pull it out of thin air because it feels right, you can be way off."
In a sense both of these partisans are correct in their criticisms of each others' methods, but I would urge you to think mathematically about the distinction -- as Srikant explains.
---
Here's an extended example from that talk that shows the difference precisely in a discrete example.
When I was a child my mother used to occasionally surprise me by ordering a jar of chocolate-chip cookies to be delivered by mail. The delivery company stocked four different kinds of cookie jars -- type A, type B, type C, and type D, and they were all on the same truck and you were never sure what type you would get. Each jar had exactly 100 cookies, but the feature that distinguished the different cookie jars was their respective distributions of chocolate chips per cookie. If you reached into a jar and took out a single cookie uniformly at random, these are the probability distributions you would get on the number of chips:

A type-A cookie jar, for example, has 70 cookies with two chips each, and no cookies with four chips or more! A type-D cookie jar has 70 cookies with one chip each. Notice how each vertical column is a probability mass function -- the conditional probability of the number of chips you'd get, given that the jar = A, or B, or C, or D, and each column sums to 100.
I used to love to play a game as soon as the deliveryman dropped off my new cookie jar. I'd pull one single cookie at random from the jar, count the chips on the cookie, and try to express my uncertainty -- at the 70% level -- of which jars it could be. Thus it's the identity of the jar (A, B, C or D) that is the value of the parameter being estimated. The number of chips (0, 1, 2, 3 or 4) is the outcome or the observation or the sample.
Originally I played this game using a frequentist, 70% confidence interval. Such an interval needs to make sure that no matter the true value of the parameter, meaning no matter which cookie jar I got, the interval would cover that true value with at least 70% probability.
An interval, of course, is a function that relates an outcome (a row) to a set of values of the parameter (a set of columns). But to construct the confidence interval and guarantee 70% coverage, we need to work "vertically" -- looking at each column in turn, and making sure that 70% of the probability mass function is covered so that 70% of the time, that column's identity will be part of the interval that results. Remember that it's the vertical columns that form a p.m.f.
So after doing that procedure, I ended up with these intervals:

For example, if the number of chips on the cookie I draw is 1, my confidence interval will be {B,C,D}. If the number is 4, my confidence interval will be {B,C}. Notice that since each column sums to 70% or greater, then no matter which column we are truly in (no matter which jar the deliveryman dropped off), the interval resulting from this procedure will include the correct jar with at least 70% probability.
Notice also that the procedure I followed in constructing the intervals had some discretion. In the column for type-B, I could have just as easily made sure that the intervals that included B would be 0,1,2,3 instead of 1,2,3,4. That would have resulted in 75% coverage for type-B jars (12+19+24+20), still meeting the lower bound of 70%.
My sister Bayesia thought this approach was crazy, though. "You have to consider the deliveryman as part of the system," she said. "Let's treat the identity of the jar as a random variable itself, and let's assume that the deliveryman chooses among them uniformly -- meaning he has all four on his truck, and when he gets to our house he picks one at random, each with uniform probability."
"With that assumption, now let's look at the joint probabilities of the whole event -- the jar type and the number of chips you draw from your first cookie," she said, drawing the following table:

Notice that the whole table is now a probability mass function -- meaning the whole table sums to 100%.
"Ok," I said, "where are you headed with this?"
"You've been looking at the conditional probability of the number of chips, given the jar," said Bayesia. "That's all wrong! What you really care about is the conditional probability of which jar it is, given the number of chips on the cookie! Your 70% interval should simply include the list jars that, in total, have 70% probability of being the true jar. Isn't that a lot simpler and more intuitive?"
"Sure, but how do we calculate that?" I asked.
"Let's say we know that you got 3 chips. Then we can ignore all the other rows in the table, and simply treat that row as a probability mass function. We'll need to scale up the probabilities proportionately so each row sums to 100, though." She did:

"Notice how each row is now a p.m.f., and sums to 100%. We've flipped the conditional probability from what you started with -- now it's the probability of the man having dropped off a certain jar, given the number of chips on the first cookie."
"Interesting," I said. "So now we just circle enough jars in each row to get up to 70% probability?" We did just that, making these credibility intervals:

Each interval includes a set of jars that, a posteriori, sum to 70% probability of being the true jar.
"Well, hang on," I said. "I'm not convinced. Let's put the two kinds of intervals side-by-side and compare them for coverage and, assuming that the deliveryman picks each kind of jar with equal probability, credibility."
Here they are:
Confidence intervals:

Credibility intervals:

"See how crazy your confidence intervals are?" said Bayesia. "You don't even have a sensible answer when you draw a cookie with zero chips! You just say it's the empty interval. But that's obviously wrong -- it has to be one of the four types of jars. How can you live with yourself, stating an interval at the end of the day when you know the interval is wrong? And ditto when you pull a cookie with 3 chips -- your interval is only correct 41% of the time. Calling this a '70%' confidence interval is bullshit."
"Well, hey," I replied. "It's correct 70% of the time, no matter which jar the deliveryman dropped off. That's a lot more than you can say about your credibility intervals. What if the jar is type B? Then your interval will be wrong 80% of the time, and only correct 20% of the time!"
"This seems like a big problem," I continued, "because your mistakes will be correlated with the type of jar. If you send out 100 'Bayesian' robots to assess what type of jar you have, each robot sampling one cookie, you're telling me that on type-B days, you will expect 80 of the robots to get the wrong answer, each having >73% belief in its incorrect conclusion! That's troublesome, especially if you want most of the robots to agree on the right answer."
"PLUS we had to make this assumption that the deliveryman behaves uniformly and selects each type of jar at random," I said. "Where did that come from? What if it's wrong? You haven't talked to him; you haven't interviewed him. Yet all your statements of a posteriori probability rest on this statement about his behavior. I didn't have to make any such assumptions, and my interval meets its criterion even in the worst case."
"It's true that my credibility interval does perform poorly on type-B jars," Bayesia said. "But so what? Type B jars happen only 25% of the time. It's balanced out by my good coverage of type A, C, and D jars. And I never publish nonsense."
"It's true that my confidence interval does perform poorly when I've drawn a cookie with zero chips," I said. "But so what? Chipless cookies happen, at most, 27% of the time in the worst case (a type-D jar). I can afford to give nonsense for this outcome because NO jar will result in a wrong answer more than 30% of the time."
"The column sums matter," I said.
"The row sums matter," Bayesia said.
"I can see we're at an impasse," I said. "We're both correct in the mathematical statements we're making, but we disagree about the appropriate way to quantify uncertainty."
"That's true," said my sister. "Want a cookie?"
| null | CC BY-SA 3.0 | null | 2010-09-01T18:46:23.463 | 2012-11-18T21:29:20.383 | 2012-11-18T21:29:20.383 | 1122 | 1122 | null |
2288 | 2 | null | 2169 | 1 | null | It looks like I am probably stuck with a bootstrap. One interesting possibility here is to compute the 'exact bootstrap covariance', as outlined by [Hutson & Ernst](http://onlinelibrary.wiley.com/doi/10.1111/1467-9868.00221/abstract). Presumably the bootstrap covariance gives a good estimate of the standard error, asymptotically. However, the approach of Hutson & Ernst requires computation of the covariance of each pair of order statistics, and so this method is quadratic in the number of samples. Maybe I should just stick with the bootstrap!
| null | CC BY-SA 2.5 | null | 2010-09-01T19:58:58.197 | 2010-09-01T19:58:58.197 | null | null | 795 | null |
2289 | 2 | null | 2244 | 5 | null | In an article in The American Statistician, Wolkewitz et al. use packages Epi, mvna, and survival. See Two Pitfalls in Survival Analyses of Time-Dependent Exposure: A Case Study in a Cohort of Oscar Nominees, v. 64 no. 3 (August 2010) pp 205-211. This exposition introduces multistate survival models and focuses on the use of a "Lexis diagram" to assess possible forms of bias.
| null | CC BY-SA 2.5 | null | 2010-09-01T20:42:40.850 | 2010-09-01T20:42:40.850 | null | null | 919 | null |
2290 | 1 | 2310 | null | 8 | 7088 | This is a follow-up to the [repeated measures sample size](https://stats.stackexchange.com/questions/1818/how-to-determine-the-sample-size-needed-for-repeated-measurement-anova) question.
I am planning a repeated measures experiment. We record energy usage for 12 months, then give (a randomly assigned) half of the customers continuous information about their energy usage (perform the treatment), and record their energy usage for another 12 months. A similar study performed in the past showed a 5% reduction in energy usage.
I want to estimate the required sample size using $\alpha=0.05, \beta=0.1$. G*Power 3 has a tool for repeated measures power analysis. However, it requires two inputs that I am not entirely familiar with:
- $\lambda$ - the noncentrality parameter (How do I estimate this?)
- $f$ - the effect size (I believe that this is the square root of Cohen's $f^2$)
According to Wikipedia's effect size page:
>
Cohen's $f^2= {R^2_{AB} - R^2_A \over 1 - R^2_{AB}}$ where $R^2_A$ is the variance accounted for by a set of one or more independent variables $A$, and $R^2_{AB}$ is the combined variance accounted for by $A$ and another set of one or more independent variables $B$.
However, my expected 5% change in energy consumption does not tell me how much variability will be explained. Is there any way to make this conversion?
If you know of a way to do this power analysis in R, I would love to hear it. I am planning to simulate some data and try using lmer from the lme4 package.
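For the simulation route, a rough skeleton might look like the following; every effect size and SD below is a placeholder guess, and the crude |t| > 1.96 cut-off just stands in for a proper test:
```
library(lme4)
one_sim <- function(n_per_arm, reduction = 0.05, sd_cust = 0.15, sd_noise = 0.10) {
  n     <- 2 * n_per_arm
  treat <- rep(0:1, each = n_per_arm)        # control vs. treatment customers
  base  <- rnorm(n, mean = 1, sd = sd_cust)  # customer-specific usage level
  pre   <- base + rnorm(n, sd = sd_noise)
  post  <- base * (1 - reduction * treat) + rnorm(n, sd = sd_noise)
  d <- data.frame(id = factor(rep(1:n, 2)), treat = rep(treat, 2),
                  period = rep(0:1, each = n), usage = c(pre, post))
  fit <- lmer(usage ~ period * treat + (1 | id), data = d)
  abs(coef(summary(fit))["period:treat", "t value"]) > 1.96
}
mean(replicate(200, one_sim(n_per_arm = 100)))  # approximate power with 100 customers per arm
```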
| Determination of effect size for a repeated measures ANOVA power analysis | CC BY-SA 2.5 | null | 2010-09-01T21:06:34.860 | 2017-04-01T03:19:00.563 | 2017-04-13T12:44:33.550 | -1 | 743 | [
"r",
"repeated-measures",
"statistical-power"
] |
2291 | 1 | 2305 | null | 13 | 9603 | I'm working on a small (200M) corpus of text, which I want to explore with some cluster analysis. What books or articles on that subject would you recommend?
| Recommended books or articles as introduction to Cluster Analysis? | CC BY-SA 2.5 | null | 2010-09-01T23:57:06.760 | 2022-08-01T22:17:07.350 | 2010-09-17T20:23:04.700 | null | 138 | [
"machine-learning",
"references",
"clustering"
] |
2292 | 2 | null | 2237 | 2 | null | The ets() function uses maximum likelihood estimation. So it would be possible to obtain standard errors based on the Hessian matrix in the usual way. However, in forecasting, the value of the model parameters is usually of very limited interest -- what we care about are the forecasts and their variances.
I can't think of a situation where you might want a confidence interval for a smoothing parameter, for example. What could you do with the information that the "true" value of alpha (whatever that means) lies between 0.2 and 0.4?
Consequently, I have not included the calculation of the standard errors of parameters in the package.
| null | CC BY-SA 2.5 | null | 2010-09-02T00:30:48.977 | 2010-09-02T00:30:48.977 | null | null | 159 | null |
2293 | 2 | null | 1906 | 5 | null | NIPS: [http://nips.cc/](http://nips.cc/)
| null | CC BY-SA 2.5 | null | 2010-09-02T01:10:36.043 | 2010-09-02T01:10:36.043 | null | null | null | null |
2294 | 1 | 14507 | null | 6 | 445 | I have a four-state, discrete-time Markov process with time-dependent transition matrices such that after a given time T the matrices become constant. The idea is that people in a program leave the program in a variety of ways. Everyone starts in state 1, and states 2, 3 and 4 are absorbing, but state 4 represents the fairly small percentage of people who are 'lost in the system' - in other words, state 4 represents our ignorance of what happens to people rather than a genuine outcome.
I would like to use lumping to put those in state 4 in with those in state 1 and run this as a three-state system, and compare this with the naive approach of running this as a four-state system and then apportioning those who are asymptotically in state 4 into states 2 and 3 according to their relative proportions (in other words, $p_2/(p_2 + p_3)$ of those in state 4 go into state 2 after the system is run to infinite time, and similarly for state 3).
From some rough scribblings it doesn't seem that these two methods give the same results, so it would be good to get an idea on the error involved. To this end, here's my question:
Can I have pointers to the literature on lumping in Markov chains (or related) that would apply - even roughly - in this example? Or otherwise some words of advice on how to approach this.
| Lumping in Markov process with absorbing states | CC BY-SA 2.5 | null | 2010-09-02T01:14:21.507 | 2011-11-17T09:30:40.370 | null | null | 1144 | [
"modeling",
"asymptotics",
"markov-process"
] |
2296 | 1 | 2297 | null | 8 | 752 | I am interested in fitting a factor analysis-like model on asset returns or other similar latent variable models. What are good papers to read on this topic? I am particularly interested in how to handle the fact that a factor analysis model is identical under a sign change for the "factor loadings".
| Papers on Bayesian factor analysis? | CC BY-SA 2.5 | null | 2010-09-02T02:00:45.543 | 2023-01-06T04:42:12.417 | null | null | 1146 | [
"bayesian",
"pca",
"factor-analysis"
] |
2297 | 2 | null | 2296 | 7 | null | Some references to help you out.
- Tipping, M. E. & Bishop, C. M. (1999). Probabilistic principal component analysis. Journal of the Royal Statistical Society (Series B), 21, 611-622.
- Tom Minka (2000). Automatic choice of dimensionality for PCA. NIPS. http://research.microsoft.com/en-us/um/people/minka/papers/pca/
- Šmídl, V. & Quinn, A. (2007). On Bayesian principal component analysis. Computational Statistics & Data Analysis, 51, 4101-4123.
If you are familiar with information theoretic model selection (MML, MDL, etc.), I highly recommend checking out:
- Wallace, C. S. & Freeman, P. R. (1992). Single-Factor Analysis by Minimum Message Length Estimation. Journal of the Royal Statistical Society (Series B), 54, 195-209.
- C. S. Wallace. Multiple Factor Analysis by MML Estimation. http://www.allisons.org/ll/Images/People/Wallace/Multi-Factor/ (tech report: http://www.allisons.org/ll/Images/People/Wallace/Multi-Factor/TR95.218.pdf)
| null | CC BY-SA 2.5 | null | 2010-09-02T02:19:37.283 | 2010-09-02T02:19:37.283 | null | null | 530 | null |
2298 | 1 | 2319 | null | 6 | 836 | This is somewhat vague, but suppose you have a black box function $f(x_1,x_2,\ldots,x_k)$, for which you have code, and you are interested in the behaviour of $f$ when the $x_i$ are i.i.d. standard Gaussian random variables. What are some good ways to visualize this function? To make it easier, we may assume that $k$ is smallish, say less than 10.
One particular relationship of interest is how $f$ varies with one of the inputs, say $x_i$. An easy way to visualize this relationship would be to sample the function for fixed values of $x_i$ while varying the other inputs (either in a structured way, or randomly, say), then box-plotting, which could show how the mean trend is affected by $x_i$, but also whether the scatter is affected (i.e. heteroskedasticity). However, interaction between $x_i$ and the levels of the other inputs might be masked by this approach.
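For instance, a crude sketch of that conditioning-and-boxplot idea, assuming $f$ takes a single numeric vector of inputs and $k = 5$:
```
xi_grid <- seq(-2, 2, by = 0.5)
vals <- lapply(xi_grid, function(xi) {
  replicate(200, f(c(xi, rnorm(4))))   # hold x_1 = xi, draw the other inputs at random
})
boxplot(vals, names = xi_grid, xlab = "x_1", ylab = "f")
```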
What I am looking for is somewhat open-ended. I do not have a particular hypothesis that I am testing, but rather am looking for new ways of visualizing the response which might reveal peculiarities of the function.
| Visualization of a multivariate function | CC BY-SA 2.5 | null | 2010-09-02T03:32:30.773 | 2010-09-02T21:56:32.277 | 2010-09-02T07:50:54.603 | null | 795 | [
"data-visualization",
"computational-statistics"
] |
2299 | 1 | 2303 | null | 24 | 666 | What broad methods are there to detect fraud, anomalies, fudging, etc. in scientific works produced by a third party? (I was motivated to ask this by the recent [Marc Hauser affair](http://en.wikipedia.org/wiki/Marc_Hauser#Scientific_misconduct).) Usually for election and accounting fraud, some variant of [Benford's Law](http://en.wikipedia.org/wiki/Benfords_law) is cited. I am not sure how this could be applied to e.g. the Marc Hauser case, because Benford's Law requires numbers to be approximately log uniform.
As a concrete example, suppose a paper cited the p-values for a large number of statistical tests. Could one transform these to log uniformity, then apply Benford's Law? It seems like there would be all kinds of problems with this approach (e.g. some of the null hypotheses might legitimately be false, the statistical code might give p-values which are only approximately correct, the tests might only give p-values which are uniform under the null asymptotically, etc.)
| Statistical forensics: Benford and beyond | CC BY-SA 2.5 | null | 2010-09-02T04:01:56.450 | 2010-09-26T22:53:23.677 | 2010-09-18T21:53:38.603 | 930 | 795 | [
"meta-analysis",
"fraud-detection"
] |
2300 | 2 | null | 2296 | 2 | null | A decent overview of factor analysis is [Latent Variable Methods and Factor Analysis](http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470711108.html) by Bartholomew and Knott. They write about the interpretation of latent factors. This book is not as algorithmically-oriented as I would like, but their description of e.g. partial factor analysis is decent.
| null | CC BY-SA 2.5 | null | 2010-09-02T04:13:24.343 | 2010-09-02T04:13:24.343 | null | null | 795 | null |
2301 | 2 | null | 2291 | 3 | null | Cluster Analysis by Brian S. Everitt is a nice book length applied treatment of Cluster Analysis.
| null | CC BY-SA 2.5 | null | 2010-09-02T04:23:58.130 | 2010-09-02T04:23:58.130 | null | null | 485 | null |
2302 | 2 | null | 2291 | 5 | null | This chapter of [Introduction to Data Mining](http://www-users.cs.umn.edu/~kumar/dmbook/ch8.pdf) is available online and gives a nice overview.
| null | CC BY-SA 2.5 | null | 2010-09-02T05:24:13.440 | 2010-09-02T05:24:13.440 | null | null | 5 | null |
2303 | 2 | null | 2299 | 11 | null | Great Question!
In the scientific context there are various kinds of problematic reporting and problematic behaviour:
- Fraud: I'd define fraud as a deliberate intention on the part of the author or analyst to misrepresent the results and where the misrepresentation is of a sufficiently grave nature. The main example being complete fabrication of raw data or summary statistics.
- Error: Data analysts can make errors at many phases of data analysis from data entry, to data manipulation, to analyses, to reporting, to interpretation.
- Inappropriate behaviour: There are many forms of inappropriate behaviour. In general, it can be summarised by an orientation which seeks to confirm a particular position rather than search for the truth.
Common examples of inappropriate behaviour include:
- Examining a series of possible dependent variables and only reporting the one that is statistically significant
- Not mentioning important violations of assumptions
- Performing data manipulations and outlier removal procedures without mentioning it, particularly where these procedures are both inappropriate and chosen purely to make the results look better
- Presenting a model as confirmatory which is actually exploratory
- Omitting important results that go against the desired argument
- Choosing a statistical test solely on the basis that it makes the results look better
- Running a series of five or ten under-powered studies where only one is statistically significant (perhaps at p = .04) and then reporting the study without mention of the other studies
In general, I'd hypothesise that incompetence is related to all three forms of problematic behaviour. A researcher who does not understand how to do good science but otherwise wants to be successful will have a greater incentive to misrepresent their results, and is less likely to respect the principles of ethical data analysis.
The above distinctions have implications for detection of problematic behaviour.
For example, if you manage to discern that a set of reported results are wrong, it still needs to be ascertained as to whether the results arose from fraud, error or inappropriate behaviour. Also, I'd assume that various forms of inappropriate behaviour are far more common than fraud.
With regards to detecting problematic behaviour, I think it is largely a skill that comes from experience working with data, working with a topic, and working with researchers. All of these experiences strengthen your expectations about what data should look like. Thus, major deviations from expectations start the process of searching for an explanation. Experience with researchers gives you a sense of the kinds of inappropriate behaviour which are more or less common. In combination this leads to the generation of hypotheses. For example, if I read a journal article and I am surprised with the results, the study is underpowered, and the nature of the writing suggests that the author is set on making a point, I generate the hypothesis that the results perhaps should not be trusted.
Other Resources
- Robert P. Abelson Statistics as a Principled Argument has a chapter titled "On Suspecting Fishiness"
| null | CC BY-SA 2.5 | null | 2010-09-02T05:45:28.647 | 2010-09-02T05:45:28.647 | null | null | 183 | null |
2304 | 2 | null | 2298 | 3 | null | Just a thought, although I've never tried it.
- you could obtain a large number of values from the function across different parameter values
- take a tour of the resulting data in ggobi (check out Mat Kelcey's video)
| null | CC BY-SA 2.5 | null | 2010-09-02T06:33:11.917 | 2010-09-02T06:33:11.917 | null | null | 183 | null |
2305 | 2 | null | 2291 | 6 | null | It may be worth looking at M.W. Berry's books:
- Survey of Text Mining I: Clustering, Classification, and Retrieval (2003)
- Survey of Text Mining II: Clustering, Classification, and Retrieval (2008)
They consist of series of applied and review papers. The latest seems to be available as PDF at the following address: [http://bit.ly/deNeiy](http://bit.ly/deNeiy).
Here are few links related to CA as applied to text mining:
- Document Topic Generation in Text Mining by Using Cluster Analysis with EROCK
- An Approach to Text Mining using Information Extraction
You can also look at Latent Semantic Analysis, but see my response there: [Working through a clustering problem](https://stats.stackexchange.com/questions/369/working-through-a-clustering-problem/2196#2196).
| null | CC BY-SA 2.5 | null | 2010-09-02T10:25:32.673 | 2010-09-02T10:25:32.673 | 2017-04-13T12:44:52.277 | -1 | 930 | null |
2306 | 1 | 2307 | null | 92 | 48617 | I am getting a bit confused about feature selection and machine learning
and I was wondering if you could help me out. I have a microarray dataset that is
classified into two groups and has 1000s of features. My aim is to get a small number of genes (my features) (10-20) in a signature that I will in theory be able to apply to
other datasets to optimally classify those samples. As I do not have that many samples (<100), I am not using a test and training set but using Leave-one-out cross-validation to help
determine the robustness. I have read that one should perform feature selection for each split of the samples i.e.
- Select one sample as the test set
- On the remaining samples perform feature selection
- Apply machine learning algorithm to remaining samples using the features selected
- Test whether the test set is correctly classified
- Go to 1.
If you do this, you might get different genes each time, so how do you
get your "final" optimal gene classifier? i.e. what is step 6.
What I mean by optimal is the collection of genes that any further studies
should use. For example, say I have a cancer/normal dataset and I want
to find the top 10 genes that will classify the tumour type according to
an SVM. I would like to know the set of genes plus SVM parameters that
could be used in further experiments to see if it could be used as a
diagnostic test.
| Feature selection for "final" model when performing cross-validation in machine learning | CC BY-SA 2.5 | null | 2010-09-02T10:25:42.330 | 2022-04-23T14:22:01.647 | 2012-02-01T17:56:30.787 | 930 | 1150 | [
"machine-learning",
"classification",
"cross-validation",
"feature-selection",
"genetics"
] |
2307 | 2 | null | 2306 | 41 | null | Whether you use LOO or K-fold CV, you'll end up with different features since the cross-validation iteration must be the most outer loop, as you said. You can think of some kind of voting scheme which would rate the n-vectors of features you got from your LOO-CV (can't remember the paper but it is worth checking the work of [Harald Binder](http://www.imbi.uni-freiburg.de/biom/index.php?showEmployee=binderh) or [Antoine Cornuéjols](http://www.lri.fr/%7Eantoine/Papers/papers.html)). In the absence of a new test sample, what is usually done is to re-apply the ML algorithm to the whole sample once you have found its optimal cross-validated parameters. But proceeding this way, you cannot ensure that there is no overfitting (since the sample was already used for model optimization).
Or, alternatively, you can use embedded methods which provide you with features ranking through a measure of variable importance, e.g. like in [Random Forests](http://www.stat.berkeley.edu/%7Ebreiman/RandomForests/) (RF). As cross-validation is included in RFs, you don't have to worry about the $n\ll p$ case or curse of dimensionality. Here are nice papers of their applications in gene expression studies:
- Cutler, A., Cutler, D.R., and Stevens, J.R. (2009). Tree-Based Methods, in High-Dimensional Data Analysis in Cancer Research, Li, X. and Xu, R. (eds.), pp. 83-101, Springer.
- Saeys, Y., Inza, I., and Larrañaga, P. (2007). A review of feature selection techniques in bioinformatics. Bioinformatics, 23(19): 2507-2517.
- Díaz-Uriarte, R., Alvarez de Andrés, S. (2006). Gene selection and classification of microarray data using random forest. BMC Bioinformatics, 7:3.
- Diaz-Uriarte, R. (2007). GeneSrF and varSelRF: a web-based tool and R package for gene selection and classification using random forest. BMC Bioinformatics, 8: 328
Since you are talking of SVM, you can look for penalized SVM.
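To make the Random Forest route concrete, here is a minimal R sketch (the data are simulated stand-ins for a two-group expression matrix, not a recommended workflow):
```
library(randomForest)
set.seed(1)
n <- 60; p <- 500
X <- data.frame(matrix(rnorm(n * p), n, p))    # stand-in "expression" matrix
y <- factor(rep(c("A", "B"), each = n / 2))
X[, 1] <- X[, 1] + ifelse(y == "B", 1.5, 0)    # plant one informative feature
rf <- randomForest(X, y, ntree = 1000, importance = TRUE)
imp <- importance(rf)
head(imp[order(imp[, "MeanDecreaseAccuracy"], decreasing = TRUE), ], 10)
varImpPlot(rf)
```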
| null | CC BY-SA 4.0 | null | 2010-09-02T10:46:12.320 | 2020-10-19T04:09:43.253 | 2020-10-19T04:09:43.253 | 93018 | 930 | null |
2308 | 2 | null | 1980 | 10 | null | [Irreproducibility of NCI60 Predictors of Chemotherapy](http://bioinformatics.mdanderson.org/Supplements/ReproRsch-Chemo/)
This is a reproducible analysis showing the lack of reproducibility of a paper that has been in the news. A clinical trial based on the false conclusions of the irreproducible paper was suspended, re-instated, suspended again, ... It's a good example of reproducible analysis in the news.
| null | CC BY-SA 2.5 | null | 2010-09-02T11:15:56.443 | 2010-09-02T14:57:59.267 | 2010-09-02T14:57:59.267 | 319 | 319 | null |
2309 | 2 | null | 2306 | 17 | null | To add to chl: When using support vector machines, a highly recommended penalization method is the elastic net. This method will shrink coefficients towards zero, and in theory retains the most stable coefficients in the model. Initially it was used in a regression framework, but it is easily extended for use with support vector machines.
[The original publication](http://www.math.mtu.edu/%7Eshuzhang/MA5761/model_selection2.pdf) : Zou and Hastie (2005) : Regularization and variable selection via the elastic net. J.R.Statist.Soc. B, 67-2,pp.301-320
[Elastic net for SVM](https://doi.org/10.1007/978-3-540-36122-0_2) : Zhu & Zou (2007): Variable Selection for the Support Vector Machine : Trends in Neural Computation, chapter 2 (Editors: Chen and Wang)
[improvements on the elastic net](http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B8H0V-50M1N7Y-2&_user=794998&_coverDate=07%2F31%2F2010&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_searchStrId=1448587725&_rerunOrigin=google&_acct=C000043466&_version=1&_urlVersion=0&_userid=794998&md5=7e56402b32c944bc33c9373013f60ae8&searchtype=a) Jun-Tao and Ying-Min(2010): An Improved Elastic Net for Cancer Classification and Gene Selection : Acta Automatica Sinica, 36-7,pp.976-981
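For the regression (non-SVM) form of the penalty, the glmnet package in R implements the elastic net directly; a minimal sketch on simulated data (alpha = 0.5 mixes the ridge and lasso penalties, and all values here are made up):
```
library(glmnet)
set.seed(1)
n <- 100; p <- 200
X <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, plogis(X[, 1] - X[, 2]))     # only two truly active features
cvfit <- cv.glmnet(X, y, family = "binomial", alpha = 0.5)
coef(cvfit, s = "lambda.min")                  # sparse coefficient vector
```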
| null | CC BY-SA 4.0 | null | 2010-09-02T11:29:47.973 | 2022-04-23T14:22:01.647 | 2022-04-23T14:22:01.647 | 79696 | 1124 | null |
2310 | 2 | null | 2290 | 3 | null | Assuming you are going to average the first 12 months to form a baseline measure and the second 12 months to form as a follow-up measure, your problem reduces to a repeated measures t-test.
G*Power
You might want to check out the following menu in G*Power 3:
`Tests - Means - Two Dependent Groups (matched pairs)`.
Use A priori, $\alpha=.05$, Power = 0.90.
Use the `Determine` button to determine effect size. This requires that you can estimate time 1 and 2 means, sds, and correlation between time points.
If you know nothing about the domain, based on my experience in psychology, I'd start with something like
```
M1 = 0, SD1 = 1, SD2 = 1
correlation = .60
```
This means that M2 is basically a between-subjects Cohen's d.
You could then examine a few different values of M2 such as 0.2, 0.3, ... 0.5, ... 0.8, etc. Cohen's rules of thumb suggest 0.2 is small, 0.5 is medium, and 0.8 is large.
R
[UCLA has a tutorial](http://stats.idre.ucla.edu/r/dae/power-analysis-for-paired-sample-t-test/) on doing a power analysis on a repeated measures t-test using R.
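Staying in base R, `power.t.test()` also handles the paired case; a minimal sketch using the illustrative values above (with SD1 = SD2 = 1 and r = .60, the standard deviation of the change scores is sqrt(0.8)):
```
sd_diff <- sqrt(1^2 + 1^2 - 2 * 0.60 * 1 * 1)  # sd of the change scores
power.t.test(delta = 0.5,                      # raw mean change (a medium effect here)
             sd = sd_diff,
             sig.level = 0.05, power = 0.90,
             type = "paired")
```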
Side point
As a side point, you might want to consider having a control group.
| null | CC BY-SA 3.0 | null | 2010-09-02T12:27:52.250 | 2017-04-01T03:19:00.563 | 2017-04-01T03:19:00.563 | 183 | 183 | null |
2312 | 2 | null | 2291 | 1 | null | Not specifically about text-mining, but I quite liked ["Exploratory Data Analysis with MATLAB"](http://rads.stackoverflow.com/amzn/click/1584883669) by Martinez and Martinez.
| null | CC BY-SA 2.5 | null | 2010-09-02T13:10:52.193 | 2010-09-02T13:10:52.193 | null | null | 582 | null |
2314 | 2 | null | 2306 | 10 | null | As step 6 (or 0) you run the feature detection algorithm on the entire data set.
The logic is the following: you have to think of cross-validation as a method for finding out the properties of the procedure you are using to select the features. It answers the question: "if I have some data and perform this procedure, then what is the error rate for classifying a new sample?". Once you know the answer, you can use the procedure (feature selection + classification rule development) on the entire data set. People like leave-one-out because the predictive properties usually depend on the sample size, and $n-1$ is usually close enough to $n$ not to matter much.
| null | CC BY-SA 2.5 | null | 2010-09-02T15:56:37.563 | 2010-09-02T15:56:37.563 | null | null | 279 | null |
2315 | 1 | null | null | 3 | 660 | A previous user asked [this question](https://stats.stackexchange.com/questions/1266/a-non-parametric-repeated-measures-multi-way-anova-in-r) specifically for R. I'd like to know what, if any, other software can do this.
| What software allows non-parametric repeated-measures multi-way Anova? | CC BY-SA 2.5 | null | 2010-09-02T16:12:52.183 | 2010-09-03T17:53:09.490 | 2017-04-13T12:44:20.903 | -1 | 132 | [
"anova",
"nonparametric",
"software"
] |
2316 | 2 | null | 2264 | 0 | null | Ha! I've answered my own question. Simon Funk figured this out for the Netflix challenge [here](http://sifter.org/~simon/journal/20061211.html). See the paragraph commencing "However, even this isn't quite as simple as it appears". But I'm having difficulty proving it algebraically: maybe you guys would like to take that on.
| null | CC BY-SA 2.5 | null | 2010-09-02T16:58:19.700 | 2010-09-02T16:58:19.700 | null | null | 1134 | null |
2317 | 2 | null | 2306 | 45 | null | In principle:
Make your predictions using a single model trained on the entire dataset (so there is only one set of features). The cross-validation is only used to estimate the predictive performance of the single model trained on the whole dataset. It is VITAL in using cross-validation that in each fold you repeat the entire procedure used to fit the primary model, as otherwise you can end up with a substantial optimistic bias in performance.
To see why this happens, consider a binary classification problem with 1000 binary features but only 100 cases, where the cases and features are all purely random, so there is no statistical relationship between the features and the cases whatsoever. If we train a primary model on the full dataset, we can always achieve zero error on the training set as there are more features than cases. We can even find a subset of "informative" features (that happen to be correlated by chance). If we then perform cross-validation using only those features, we will get an estimate of performance that is better than random guessing. The reason is that in each fold of the cross-validation procedure there is some information about the held-out cases used for testing as the features were chosen because they were good for predicting, all of them, including those held out. Of course the true error rate will be 0.5.
If we adopt the proper procedure, and perform feature selection in each fold, there is no longer any information about the held out cases in the choice of features used in that fold. If you use the proper procedure, in this case, you will get an error rate of about 0.5 (although it will vary a bit for different realisations of the dataset).
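A small simulation along these lines is easy to run in R (illustrative only: random labels, random features, and LDA in place of the SVM of the question):
```
library(MASS)
set.seed(42)
n <- 100; p <- 1000
X <- matrix(rnorm(n * p), n, p)                # purely random features
y <- factor(rbinom(n, 1, 0.5))                 # purely random labels

## WRONG: choose the "best" 20 features on the full data, then cross-validate
sc  <- abs(apply(X, 2, cor, y = as.numeric(y)))
top <- order(sc, decreasing = TRUE)[1:20]
acc_wrong <- mean(sapply(seq_len(n), function(i) {
  fit <- lda(X[-i, top], y[-i])
  predict(fit, X[i, top, drop = FALSE])$class == y[i]
}))

## RIGHT: repeat the feature selection inside every fold
acc_right <- mean(sapply(seq_len(n), function(i) {
  sci  <- abs(apply(X[-i, ], 2, cor, y = as.numeric(y[-i])))
  topi <- order(sci, decreasing = TRUE)[1:20]
  fit  <- lda(X[-i, topi], y[-i])
  predict(fit, X[i, topi, drop = FALSE])$class == y[i]
}))

c(wrong = acc_wrong, right = acc_right)   # wrong is well above 0.5, right is about 0.5
```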
Good papers to read are:
Christophe Ambroise, Geoffrey J. McLachlan, "Selection bias in gene extraction on the basis of microarray gene-expression data", PNAS [http://www.pnas.org/content/99/10/6562.abstract](http://www.pnas.org/content/99/10/6562.abstract)
which is highly relevant to the OP and
Gavin C. Cawley, Nicola L. C. Talbot, "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", JMLR 11(Jul):2079−2107, 2010 [http://jmlr.csail.mit.edu/papers/v11/cawley10a.html](http://jmlr.csail.mit.edu/papers/v11/cawley10a.html)
which demonstrates that the same thing can easily ocurr in model selection (e.g. tuning the hyper-parameters of an SVM, which also need to be repeated in each iteration of the CV procedure).
In practice:
I would recommend using Bagging, and using the out-of-bag error for estimating performance. You will get a committee model using many features, but that is actually a good thing. If you only use a single model, it will be likely that you will over-fit the feature selection criterion, and end up with a model that gives poorer predictions than a model that uses a larger number of features.
Alan Miller's book on subset selection in regression (Chapman and Hall monographs on statistics and applied probability, volume 95) gives the good bit of advice (page 221) that if predictive performance is the most important thing, then don't do any feature selection, just use ridge regression instead. And that is in a book on subset selection!!! ;o)
| null | CC BY-SA 3.0 | null | 2010-09-02T17:53:16.097 | 2014-07-27T14:17:04.517 | 2014-07-27T14:17:04.517 | 2669 | 887 | null |
2318 | 2 | null | 2179 | 9 | null | Isabelle Guyon, André Elisseeff, "An Introduction to Variable and Feature Selection", JMLR, 3(Mar):1157-1182, 2003. [http://jmlr.csail.mit.edu/papers/v3/guyon03a.html](http://jmlr.csail.mit.edu/papers/v3/guyon03a.html)
is well worth reading; it will give a good overview of approaches and issues. The one thing I would add is that feature selection doesn't necessarily improve predictive performance, and can easily make it worse (because it is easy to over-fit the feature selection criterion). One of the advantages of (especially linear) SVMs is that they work well with large numbers of features (providing you tune the regularisation parameter properly), so there is often no need if you are only interested in prediction.
| null | CC BY-SA 2.5 | null | 2010-09-02T18:07:50.913 | 2010-09-02T18:07:50.913 | null | null | 887 | null |
2319 | 2 | null | 2298 | 4 | null | Given that you are at the initial, exploratory stages of the analysis I would start simple. Consider sampling your inputs using a [Latin Hypercube](http://en.wikipedia.org/wiki/Latin_hypercube_sampling) strategy. Then, a tornado chart can be used to get a quick assessment of the multiple,one-way sensitivities f() has to the various input variables. Here is an example chart (from [here](http://www.add-ins.com/sensitivity_chart_creator.htm))

This chart is not that interesting, but an interpretation would be "NPV is most sensitive to Shipments, all other things being equal. But, the sensitivity is mostly on the upside, which is good. The Escalation variable induces sensitivity into NPV, but what looks to be skewed negatively a bit...".
You could do something similar for Mean(f) on the X-axis as well as Var(f)
Given what you find from some first glance visualizations like this, you could then slice and dice more and focus on specific variables or relationships between variables. Maybe you can revisit this thread in coming months and post the visualizations you found useful :)
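As a minimal R sketch of the sampling step (the toy function and the input ranges below are made up; `randomLHS()` comes from the lhs package):
```
library(lhs)
set.seed(1)
f <- function(a, b, c) a^2 + 3 * b - exp(c)    # stand-in for the real model
u <- randomLHS(500, 3)                         # 500 points in the unit cube
inputs <- data.frame(a = qunif(u[, 1], 0, 10), # rescale to made-up input ranges
                     b = qunif(u[, 2], -5, 5),
                     c = qunif(u[, 3], 0, 2))
inputs$y <- with(inputs, f(a, b, c))
pairs(inputs)                                  # quick one-way look at inputs vs. output
```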
| null | CC BY-SA 2.5 | null | 2010-09-02T18:56:34.177 | 2010-09-02T18:56:34.177 | null | null | 1080 | null |
2320 | 2 | null | 2315 | 1 | null | This question was updated with a link to the previous question, at which point I realized that my response originally posted here pointing to the ez package in R was better left at the previous question.
| null | CC BY-SA 2.5 | null | 2010-09-02T19:22:14.867 | 2010-09-03T17:53:09.490 | 2010-09-03T17:53:09.490 | 364 | 364 | null |
2321 | 2 | null | 2298 | 0 | null | You could apply some sort of dimensionality reduction technique like principal components and plot the value of the function as you vary the first, second, third etc. principal components, holding all others fixed. This would show you how the function varies in the directions of the maximal variance of the inputs.
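A minimal R sketch of that idea, using `prcomp()` on made-up inputs and a made-up function (both are placeholders for the real thing):
```
set.seed(1)
X <- matrix(rnorm(500 * 5), ncol = 5)          # stand-in for the sampled inputs
f <- function(x) sum(x^2) + x[1] * x[2]        # stand-in for the real function
pc <- prcomp(X, scale. = TRUE)
## walk along the first principal component, holding all other components at zero
s    <- seq(-3, 3, length.out = 50)
path <- t(sapply(s, function(z) pc$center + pc$scale * (z * pc$rotation[, 1])))
plot(s, apply(path, 1, f), type = "l",
     xlab = "score on first principal component", ylab = "f(x)")
```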
| null | CC BY-SA 2.5 | null | 2010-09-02T21:56:32.277 | 2010-09-02T21:56:32.277 | null | null | null | null |
2322 | 2 | null | 2306 | -1 | null | I'm not sure about classification problems, but in the case of feature selection for regression problems, Jun Shao showed [that Leave-One-Out CV is asymptotically inconsistent](http://www.jstor.org/pss/2290328), i.e. the probability of selecting the proper subset of features does not converge to 1 as the number of samples increases. From a practical point of view, Shao recommends a Monte-Carlo cross-validation, or leave-many-out procedure.
| null | CC BY-SA 2.5 | null | 2010-09-02T23:05:55.530 | 2010-09-02T23:05:55.530 | null | null | 795 | null |
2323 | 1 | null | null | 6 | 373 | I think that dynamic pricing algorithms (used in aviation and ticketing industry) is very statistical based, anyone here has experience with those algorithms with references for it?
| What are good references for dynamic pricing? | CC BY-SA 3.0 | null | 2010-09-03T00:34:44.053 | 2021-05-03T15:31:58.607 | 2021-05-03T15:31:58.607 | 11887 | 1167 | [
"time-series",
"references",
"algorithms",
"operations-research"
] |
2324 | 2 | null | 2323 | 4 | null | This article is highly cited:
"Yield Management at American Airlines" by Barry C. Smith et al.
Links:
- JSTOR
- free PDF 1, broken at 06.09.12
- free PDF 2, broken at 02.01.18
- free PDF 3
| null | CC BY-SA 3.0 | null | 2010-09-03T02:15:18.283 | 2018-01-02T14:15:19.883 | 2018-01-02T14:15:19.883 | 187023 | 74 | null |
2325 | 2 | null | 1432 | 9 | null |
## On Pearson residuals
The Pearson residual is the difference between the observed and estimated probabilities divided by the binomial standard deviation of the estimated probability. Therefore standardizing the residuals.
For large samples the standardized residuals should have a normal distribution.
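In symbols (standard notation, not a direct quote from Menard): for observation $i$ with $n_i$ trials, observed count $y_i$ and estimated probability $\hat{\pi}_i$, the Pearson residual is
$$r_i = \frac{y_i - n_i\hat{\pi}_i}{\sqrt{n_i\,\hat{\pi}_i(1-\hat{\pi}_i)}},$$
with $n_i = 1$ for ungrouped binary data.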
From Menard, Scott (2002). Applied logistic regression analysis, 2nd Edition. Thousand Oaks, CA: Sage Publications. Series: Quantitative Applications in the Social Sciences, No. 106. First ed., 1995. See Chapter 4.4
| null | CC BY-SA 2.5 | null | 2010-09-03T02:27:00.000 | 2010-09-03T02:27:00.000 | null | null | 10229 | null |
2326 | 1 | 2366 | null | 2 | 398 | I am looking at setting up an experiment concerning a hobby of mine, basically measuring a variety of parameters 'before' and 'after' and see which one, if any, gives the most reliable prediction of a final parameter i.e. do they have a linear relationship, etc. The object being to save some time and effort later not sorting by the parameters that have little actual bearing on the final results, and to see if some of the more tedious sorting methods are actually as useful as thought.
The experiment is an extension of one I did for my final paper in my stats class a couple years ago: investigating the relationship between case weight & volume vs. muzzle velocity for centerfire rifle cases as used in modern target competition (in which I'm fairly active). For that paper I had to spend a fair amount of 'time' demonstrating various methods that didn't really pertain to what I wanted to know, but had to cover anyway to get an 'A' ;)
For this go-around... I'm looking at taking one hundred pieces of brass cases from one box, one lot. These are the 'non-consumable' parts, as they can and do get re-used and reloaded multiple times. Everything else - bullet, powder, primer - get shot down the barrel and cannot be re-used/re-measured. Specifically, the cases can and do become 'fire-formed' to the interior dimensions of the chamber under extreme pressure (50-70k psi). In my original experiment, I found a significant, but not especially strong, correlation between the initial case weight and the muzzle velocity... of the first firing. Case volume, on the other hand, which should be very strongly related to MV... was not, theoretically because the 'virgin' cases don't necessarily 'fit' the internal dimensions of the chamber and a certain amount of the energy generated is expended not just in heat, but in squashing the brass against the case wall during firing.
So again, I'm looking at taking 100pcs of brass, weighing them straight out of the box, taking various measurements (case wall thickness inside near the case web, case neck thickness near the mouth, weight, volume, etc.) in their 'virgin' untouched state, then performing several routine case prep steps (trim to length, chamfer/debur the mouth, debur the flash hole, uniform primer pocket) and repeat the measurements. Then load and fire the rounds as uniformly as possible - bullets sorted for maximum consistency, powder weighed on a milligram-capable analytic lab scale, etc. while controlling the rate of fire so as to regulate temperature rise along the barrel and measuring the muzzle velocity via a chronograph approx. 15 feet down range. Then I plan to completely clean the cases inside and out to remove any buildup of carbon or powder residue, and repeat the measurements and then load and fire again and then clean and measure one last time.
Part of what I want to see is how the distributions of some of the measurements change as the cases are prepped, then again as they are fired and formed to the chamber. After that... I want to see how much difference is there really between 'virgin' cases and fire-formed cases in terms of muzzle velocity, and finally... of all the tedious measurement steps mentioned above, which ones actually give a reasonable indication of consistent MV so I can successfully cull suspect pieces of brass during preliminary sorting rather than suffer lost points when using them on target in competition.
The single biggest problem in my mind, at least on the surface, is the chronograph. Getting anything approaching credible numbers for accuracy or % error from the vendors is somewhere between difficult and impossible. Given the nature of the device, it is extremely difficult to test on a consumer level - every round through it may (or may not) be just slightly different, so determining how much of the variation displayed on its screen is really the ammunition and how much is the variability of the instrument itself... has me scratching my head, to say the least.
As to why I was kind of vague before about the details of the experiment... well, sometimes people get kinda weird as soon as they realize something involves GUNS and won't touch it with a 10 ft pole regardless of how reasonable it may be.
Thanks,
Monte
| Setting up experiment for statistical analysis | CC BY-SA 2.5 | null | 2010-09-03T03:25:40.293 | 2010-09-16T06:35:12.133 | 2010-09-16T06:35:12.133 | null | 1114 | [
"r",
"experiment-design"
] |
2327 | 2 | null | 1126 | 1 | null | In my opinion, 16 reasons are too many: the specification is too fine-grained and the categories overlap at times. Instead I would personally streamline them into broad groups. We can classify study objectives into 3 main categories: single hypothesis testing, exploratory study, and prediction.
| null | CC BY-SA 2.5 | null | 2010-09-03T03:49:48.030 | 2010-09-03T03:49:48.030 | null | null | 10229 | null |
2328 | 1 | 2332 | null | 14 | 3199 | What should be the ratio of number of observations and number of variables?
How to detect overfitting in the neural network model and what are the ways to avoid overfitting?
If I want to perform classification with Neural Network, should the classes have equal frequency?
Please help me out.
| How to perform Neural Network modelling effectively? | CC BY-SA 2.5 | null | 2010-09-03T04:53:37.240 | 2013-12-04T00:23:59.377 | 2010-09-03T04:59:10.847 | 183 | 861 | [
"neural-networks"
] |
2329 | 2 | null | 534 | 3 | null | Suppose we think the factor A is the cause of the phenomenon B. Then we try to vary it to see whether B changes.
If B doesn't change, and we can assume that everything else remained unchanged, this is strong evidence that A is not the cause of B.
If B does change, we can't conclude that A is the cause, because the change in A might have caused a change in the actual cause C, which in turn made B change.
| null | CC BY-SA 2.5 | null | 2010-09-03T06:03:54.097 | 2010-09-03T06:03:54.097 | null | null | null | null |
2330 | 2 | null | 852 | 9 | null | In the usual case with a log variable, the model is
\begin{align}
\log(y) &= a + b\log(x) + \varepsilon\newline
\text{or}\quad y &= e^a x^b e^\varepsilon,
\end{align}
where $\varepsilon\sim\text{N}(0,\sigma^2)$ and $b$ is the elasticity.
In the situation you mention,
\begin{align}
y &= \exp[a + b\log(x)] + \varepsilon \newline
\text{or}\quad y &= e^ax^b + \varepsilon,
\end{align}
where $\varepsilon\sim\text{N}(0,\sigma^2)$. So, ignoring the error, the parameter $b$ is playing the same role in both models and is an elasticity in both cases.
What is different is the assumption on the error distribution. In the first case, the error is assumed to be logNormal and multiplicative while in the second case the error is assumed to be Normal and additive.
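To see the two error assumptions side by side in R (simulated data; the parameter values 0.5 and 1.2 are arbitrary):
```
set.seed(1)
x <- runif(200, 1, 10)
y <- exp(0.5 + 1.2 * log(x)) * exp(rnorm(200, sd = 0.2))   # multiplicative error
## first model: additive Normal error on the log scale
fit_loglog <- lm(log(y) ~ log(x))
## second model: same mean function, additive Normal error on the original scale
fit_nls <- nls(y ~ exp(a + b * log(x)), start = list(a = 0, b = 1))
coef(fit_loglog)   # intercept estimates a, slope estimates b (the elasticity)
coef(fit_nls)      # a and b again, under the additive-error assumption
```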
| null | CC BY-SA 2.5 | null | 2010-09-03T06:56:22.773 | 2010-09-03T06:56:22.773 | null | null | 159 | null |
2331 | 4 | null | null | 0 | null | Use this tag for any *on-topic* question that (a) involves `R` either as a critical part of the question or expected answer, & (b) is not *just* about how to use `R`. | null | CC BY-SA 3.0 | null | 2010-09-03T07:25:03.153 | 2016-01-15T23:25:24.240 | 2016-01-15T23:25:24.240 | 7290 | 183 | null |
2332 | 2 | null | 2328 | 26 | null | The advice I would give is as follows:
- Exhaust the possibilities of linear models (e.g. logistic regression) before going on to neural nets, especially if you have many features and not too many observations. For many problems a Neural Net does not out-perform simple linear classifiers, and the only way to find out if your problem is in this category is to try it and see.
- Investigate kernel methods (e.g. Support Vector Machines (SVM), kernel logistic regression), Gaussian process models first. In both cases over-fitting is effectively controlled by tuning a small number of hyper-parameters. For kernel methods this is often performed by cross-validation, for Gaussian process models this is performed by maximising the marginal likelihood (also known as the Bayesian "evidence" for the model). I have found it is much easier to get a reasonable model using these methods than with neural networks, as the means of avoiding over-fitting is so much more straightforward.
- If you really want to use a neural network, start with a (regularised) radial basis function network, rather than a feedforward Multilayer Perceptron (MLP) type network.
- If you do use an MLP, then use regularisation. If you do, it will be less sensitive to choices about architecture, such as optimising the number of hidden units. Instead, all you have to do is choose a good value for the regularisation parameter. MacKay's Bayesian "evidence framework" provides a good method for setting the regularisation parameter. If you use regularisation, then the number of observations and number of variables becomes much less of an issue (a small R illustration follows this list).
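As a small R illustration of the regularisation point (using the nnet package rather than NETLAB; `decay` is its weight-decay penalty, and the values below are arbitrary rather than tuned):
```
library(nnet)
set.seed(1)
## with weight decay, the exact number of hidden units matters much less
fit <- nnet(Species ~ ., data = iris, size = 10, decay = 0.1,
            maxit = 500, trace = FALSE)
table(predicted = predict(fit, iris, type = "class"), truth = iris$Species)
```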
To detect over-fitting, simply perform cross-validation to test generalisation performance.
As for classes having equal frequencies, the thing to remember is that if you train a model with a balanced training set, but the classes are not balanced in the operational data, then the model is very likely to under-predict the minority class. If you use a probabilistic classifier such as logistic regression or a neural network, you can always correct the estimated probabilities to account for that after training. If your dataset is very imbalanced, I would recommend differential weighting of patterns from the positive and negative classes, with the weighting factors selected by cross-validation.
However, when the classes are very unbalanced, it is normally the case that false-negative and false-positive errors have difference costs (e.g. in medical screening tests a false-negative is much worse than a false-positive). So often all you need to do is include the misclassification costs into the error function used to train the network.
If you are a MATLAB user (like me) I can strongly recommend the NETLAB software (Ian Nabney and Chris Bishop) or the software that goes with the book Gaussian Processes for Machine Learning by Rasmussen and Williams. I can also strongly recommend the book "Neural networks for pattern recognition" by Chris Bishop for anyone starting out in neural nets. It is a brilliant book, and covers the material with great clarity and the minimum level of maths required to really understand what you are doing, and most of it is implemented in the NETLAB software (which may also run under Octave).
HTH
P.S. The best way of modelling with a neural net is probably to use a Bayesian approach based on Hybrid Monte Carlo (HMC), as developed by Radford Neal. In general, problems start in modelling when you try to optimise some parameters and you end up over-fitting. The best solution is to never optimise anything and marginalise (integrate) over parameters instead. Sadly this integration can't be performed analytically, so you need to use sampling based approaches instead. However, this is (a) computationally expensive and (b) a bit of a "black art" and requires deep understanding and experience.
| null | CC BY-SA 3.0 | null | 2010-09-03T08:03:15.777 | 2013-12-04T00:23:59.377 | 2013-12-04T00:23:59.377 | 9007 | 887 | null |
2333 | 2 | null | 58 | 11 | null | Back-propogation is a way of working out the derivative of the error function with respect to the weights, so that the model can be trained by gradient descent optimisation methods - it is basically just the application of the "chain rule". There isn't really much more to it than that, so if you are comfortable with calculus that is basically the best way to look at it.
If you are not comfortable with calculus, a better way would be to say that we know how badly the output units are doing because we have a desired output with which to compare the actual output. However we don't have a desired output for the hidden units, so what do we do? The back-propagation rule is basically a way of spreading out the blame for the error of the output units onto the hidden units. The more influence a hidden unit has on a particular output unit, the more blame it gets for the error. The total blame associated with a hidden unit then gives an indication of how much the input-to-hidden layer weights need changing. The two things that govern how much blame is passed back are the weight connecting the hidden unit to the output unit (obviously) and the output of the hidden unit (if it is shouting rather than whispering it is likely to have a larger influence). The rest is just the mathematical niceties that turn that intuition into the derivative of the training criterion.
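For readers who do want the calculus, the 'blame' picture corresponds to the usual chain-rule recursion (standard notation, not tied to any particular textbook): writing $a_j = \sum_i w_{ij} z_i$ for the net input to unit $j$ and $z_j = f(a_j)$ for its output,
$$\frac{\partial E}{\partial w_{ij}} = \delta_j\, z_i, \qquad \delta_j \equiv \frac{\partial E}{\partial a_j} = f'(a_j)\sum_k w_{jk}\,\delta_k \quad\text{for a hidden unit},$$
with $\delta_j$ obtained directly from the error function at the output units; the weight $w_{jk}$ and the activation $z_i$ are exactly the two 'blame' factors described above.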
I'd also recommend Bishop's book for a proper answer! ;o)
| null | CC BY-SA 2.5 | null | 2010-09-03T08:28:20.503 | 2010-09-03T08:28:20.503 | null | null | 887 | null |
2334 | 2 | null | 181 | 50 | null | I am working on an empirical study of this at the moment (approaching a processor-century of simulations on our HPC facility!). My advice would be to use a "large" network and regularisation; if you use regularisation, then the network architecture becomes less important (provided it is large enough to represent the underlying function we want to capture), but you do need to tune the regularisation parameter properly.
One of the problems with architecture selection is that it is a discrete, rather than continuous, control of the complexity of the model, and therefore can be a bit of a blunt instrument, especially when the ideal complexity is low.
However, this is all subject to the "no free lunch" theorems, while regularisation is effective in most cases, there will always be cases where architecture selection works better, and the only way to find out if that is true of the problem at hand is to try both approaches and cross-validate.
If I were to build an automated neural network builder, I would use Radford Neal's Hybrid Monte Carlo (HMC) sampling-based Bayesian approach, and use a large network and integrate over the weights rather than optimise the weights of a single network. However that is computationally expensive and a bit of a "black art", but the results Prof. Neal achieves suggests it is worth it!
| null | CC BY-SA 2.5 | null | 2010-09-03T08:40:44.130 | 2010-09-03T08:40:44.130 | null | null | 887 | null |
2335 | 1 | 2342 | null | 10 | 4030 | I'm trying to interpret the following type of logistic model:
```
mdl <- glm(c(suc,fail) ~ fac1 + fac2, data=df, family=binomial)
```
Is the output of `predict(mdl)` the expected odds of success for each data point? Is there a simple way to tabulate the odds for each factor level of the model, rather than all the data points?
| Output of logistic model in R | CC BY-SA 2.5 | null | 2010-09-03T08:53:27.510 | 2010-09-03T15:16:08.050 | 2010-09-03T09:23:26.890 | 229 | 229 | [
"r",
"logistic",
"generalized-linear-model"
] |
2336 | 2 | null | 2131 | 1 | null | I would advise using a different value of the regularisation parameter C for examples of the positive class and examples of the negative class (many SVM packages support this, and in any case it is easily implemented). Then use e.g. cross-validation to find good values of the two regularisation parameters.
It can be shown that this is asymptotically equivalent to re-sampling the data in a ratio determined by C+ and C- (so there is no advantage in re-sampling rather than re-weighting; they come to the same thing in the end, and weights can be continuous rather than discrete, so re-weighting gives finer control).
Don't simply choose C+ and C- to give a 50-50 weighting to positive and negative patterns though, as the strength of the effect of the "imbalanced classes" problem will vary from dataset to dataset, so the strength of the optimal re-weighting cannot be determined a-priori.
Also remember that false-positive and false-negative costs may be different, and the problem may resolve itself if these are included in determining C+ and C-.
It is also worth bearing in mind, that for some problems the Bayes optimal decision rule will assign all patterns to a single class and ignore the other, so it isn't necessarily a bad thing - it may just mean that the density of patterns of one class is everywhere below the density of patterns of the other class.
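In R, for example, the e1071 interface to libsvm exposes this through its class.weights argument; a minimal sketch on made-up imbalanced data (the weights themselves should of course be chosen by cross-validation as suggested above, not fixed as here):
```
library(e1071)
set.seed(1)
x <- matrix(rnorm(200 * 2), ncol = 2)
y <- factor(c(rep("neg", 180), rep("pos", 20)))   # 90/10 imbalance
x[y == "pos", ] <- x[y == "pos", ] + 1.5
## heavier penalty (larger effective C) for errors on the minority class
fit <- svm(x, y, kernel = "linear", cost = 1,
           class.weights = c(neg = 1, pos = 5))
table(predicted = predict(fit, x), truth = y)
```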
| null | CC BY-SA 2.5 | null | 2010-09-03T09:12:37.553 | 2010-09-03T09:12:37.553 | null | null | 887 | null |
2337 | 1 | 2340 | null | 3 | 3831 | I have a data-set consisting of N p-dimensional observations (all quantitative variables). I want to apply a hierarchical clustering algorithm to those data. As explained on page 505 in [Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/), when using weighted average to combine the distances of the individual variables, it is often desirable (I have found no clues to the contrary in my scenario) to set the weights for each variable such that all variables have equal influence on the distance of the observations (formulas can be found in the book). The problem is that I use [scipy.spatial.distance](http://docs.scipy.org/doc/scipy/reference/spatial.distance.html) for calculating the distances, and this does not let me specify any weights. My question is: if I standardize my observations (multiply each dimension with 1/average of that dimension), does that solve the problem?
| How to standardize a data-set | CC BY-SA 2.5 | null | 2010-09-03T09:38:35.287 | 2010-09-16T06:35:32.243 | 2010-09-16T06:35:32.243 | null | 977 | [
"standardization"
] |
2338 | 2 | null | 1266 | 8 | null | The [ez](http://cran.r-project.org/web/packages/ez/index.html) package, of which I am the author, has a function called ezPerm() which computes a permutation test, but probably doesn't do interactions properly (the documentation admits as much). The latest version has a function called ezBoot(), which lets you do bootstrap resampling that takes into account repeated measures (by resampling subjects, then within subjects), either using traditional cell means as the prediction statistic or using mixed effects modelling to make predictions for each cell in the design. I'm still not sure how "non-parametric" the bootstrap CIs from mixed effects model predictions are; my intuition is that they might reasonably be considered non-parametric, but my confidence in this area is low given that I'm still learning about mixed effects models.
| null | CC BY-SA 2.5 | null | 2010-09-03T09:41:58.997 | 2010-09-03T09:41:58.997 | null | null | 364 | null |
2340 | 2 | null | 2337 | 2 | null | I'd just say: be careful with that. Standardization is needed only when some variable(s) dominate the dissimilarity score just because they are expressed in "smaller units"; let's say that you have a variable that is truly equal for all elements, but there is some very small variability due to measurement error. Now if you normalize this variable, you'll make those errors an important factor in the clustering.
| null | CC BY-SA 2.5 | null | 2010-09-03T11:04:29.063 | 2010-09-03T11:04:29.063 | null | null | null | null |
2342 | 2 | null | 2335 | 14 | null | The help pages for
```
predict.glm
```
state: "Thus for a default binomial model the default predictions are of log-odds (probabilities on logit scale) and ‘type = "response"’ gives the predicted probabilities". So, `predict(mdl)` returns the log(odds), and using "type = "response" returns the predicted probabilities. You might find this toy example instructive:
```
> y <- c(0,0,0,1,1,1,1,1,1,1)
> prop.table(table(y))
y
0 1
0.3 0.7
> glm.y <- glm(y~1, family = "binomial")
> ## predicted log(odds)
> predict(glm.y)
1 2 3 4 5 6 7 8
0.8472979 0.8472979 0.8472979 0.8472979 0.8472979 0.8472979 0.8472979 0.8472979
9 10
0.8472979 0.8472979
> ## predicted probabilities (p = odds/(1+odds))
> exp(predict(glm.y))/(1+exp(predict(glm.y)))
1 2 3 4 5 6 7 8 9 10
0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7
> predict(glm.y, type = "response")
1 2 3 4 5 6 7 8 9 10
0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7
```
Regarding your second question, you might want to check out the effects-package [http://socserv.socsci.mcmaster.ca/jfox/Misc/effects/index.html](http://socserv.socsci.mcmaster.ca/jfox/Misc/effects/index.html) by John Fox; see also his JSS article "Effect Displays in R for Generalised Linear Models" (pp. 8-10).
| null | CC BY-SA 2.5 | null | 2010-09-03T13:21:10.640 | 2010-09-03T15:16:08.050 | 2010-09-03T15:16:08.050 | 307 | 307 | null |
2343 | 1 | null | null | 3 | 421 | I am trying to develop some algorithm to compute probabilities in multi-type branching trees, and I doubt I am doing this right...
Let us consider a multi-type branching process with two types, denoted by 0 and 1. The process starts in state 0 with probability 1, so that the root vertex of any tree generated by this process has state 0. A vertex in state 0 generates two vertices in state 1 with probability 1. A vertex in state 1 generates either two vertices in state 0 (with probability 0.5) or one vertex in state 0 and one vertex in state 1 (with probability 0.5).
We denote those three possible transitions by:
- A: 0 -> 1 1 (1.)
- B: 1 -> 0 0 (0.5)
- C: 1 -> 0 1 (0.5)
(their probabilities are between brackets).
In the sequel, a "tree" will refer to an unordered tree, i.e. two isomorphic ordered trees such that the tree isomorphism preserves the labels will be considered as a same mathematical object (following the principle in Chi (2004), p. 1993, paragraph 3 - see link at the end of the post).
Now let us consider the particular labeled tree of height three, composed of one root vertex in state 0 with
- one first child in state 1 that itself has one child in state 0 and one child in state 1
- one second child in state 1 that itself has two children in state 0
This tree should be depicted in some file I am not allowed to post, so I should draw it in ascii mode with limited guarantee on the result:
```
0
|
-------------
| |
1 1
| |
--------- ---------
| | | |
0 1 0 0
```
I would like to compute the probability that this tree was generated by the above multi-type branching process after 2 generation steps (the generation of the root vertex does not count as a step).
Using equation 2 p. 1994 in Chi (2004), this probability should be 0.25, since each transition A, B and C is applied once. However, each possible tree of height 3 among the three trees generated by this process has probability 0.25, and the sum of the probabilities is 0.75 instead of 1.
Another possibility is to consider that in this given tree, the set of children (0, 0) of vertex 1 may have been generated by any vertex in state 1 (and same principle for the set of children (0, 1)), so that the tree probability is in fact 0.25+0.25 = 0.5.
Finally, how to compute the probability that a given tree t of height n was generated by a multi-type branching process after n-1 generation steps ? Can equation 2 p. 1994 in Chi (2004) be used ? Or do we have to compute the number of trees that are isomorphic to t in some sense ? Or do we have to give up the idea that isomorphic trees are equivalent representations of same object ?
Thanks for your help !
JB
Ref.
Z. Chi. [Limit laws of estimators for critical multi-type Galton–Watson processes.](https://projecteuclid.org/journals/annals-of-applied-probability/volume-14/issue-4/Limit-laws-of-estimators-for-critical-multi-type-GaltonWatson-processes/10.1214/105051604000000521.full) Ann. Appl. Probab. Volume 14, Number 4 (2004), 1992-2015.
---
As a beginning of answer to the suggestion of Aniko, I think the following equivalence relation could lead to equivalence classes of trees such that formula (2) could be applied. It consists in adding constraints to the usual notion of isomorphism (by "isomorphism", I mean "an isomorphism preserving labels"). I denote by $I(v)$ the label of vertex $v$, and
$t_v$ the complete subtree rooted in $v$. A complete subtree rooted in $v$ is the subgraph induced by the descendants of $v$.
The property $P(t)$ is defined inductively by:
$P(t)$ is satisfied iff $t$ has a single vertex, or if for every pair $\{v, v'\}$ of children of the root of $t$ such that $I(v)$ = $I(v')$, $t_v$ and $t_{v'}$ are isomorphic and they satisfy $P(t_{v'})$ and $P(t_{v})$.
The two trees $t$ and $t'$ are in the same equivalence class iff [they are isomorphic, and property $P$ is satisfied for $t$ and $t'$] or [$t=t'$].
| Trees generated by multi-type branching processes in n steps | CC BY-SA 4.0 | null | 2010-09-03T13:21:55.897 | 2022-06-24T22:03:46.603 | 2022-06-24T22:03:46.603 | 79696 | 1185 | [
"probability",
"algorithms",
"stochastic-processes"
] |
2344 | 1 | 2346 | null | 81 | 65614 | I am using the random forest algorithm as a robust classifier of two groups in a microarray study with 1000s of features.
- What is the best way to present the random forest so that there is enough information to make it
reproducible in a paper?
- Is there a plot method in R to actually plot the tree, if there are a small number of features?
- Is the OOB estimate of error rate the best statistic to quote?
| Best way to present a random forest in a publication? | CC BY-SA 3.0 | null | 2010-09-03T13:50:51.707 | 2018-01-11T10:22:37.090 | 2018-01-11T10:22:37.090 | 128677 | 1150 | [
"r",
"machine-learning",
"classification",
"random-forest",
"microarray"
] |
2345 | 2 | null | 2343 | 4 | null | I think you explained well why the probability of the given tree is 0.5 if its topology does not count. Looking at the formula (2) superficially, I find it hard to imagine the definition of isomorphism under which it would work (only leaves can be rearranged?), though perhaps the trick is in finding the right definition of the counting function f.
For the general case I would try to write a recursive formula, and I think the binomial coefficient should pop up in it.
| null | CC BY-SA 2.5 | null | 2010-09-03T14:19:30.763 | 2010-09-03T14:19:30.763 | null | null | 279 | null |
2346 | 2 | null | 2344 | 52 | null | Regarding making it reproducible, the best way is to provide reproducible research (i.e. code and data) along with the paper. Make it available on your website, or on a hosting site (like github).
Regarding visualization, Leo Breiman has done some interesting work on this (see [his homepage](http://www.stat.berkeley.edu/~breiman/RandomForests/), in particular the [section on graphics](http://www.stat.berkeley.edu/~breiman/RandomForests/cc_graphics.htm)).
But if you're using R, then the `randomForest` package has some useful functions:
```
data(mtcars)
mtcars.rf <- randomForest(mpg ~ ., data=mtcars, ntree=1000, keep.forest=FALSE,
importance=TRUE)
plot(mtcars.rf, log="y")
varImpPlot(mtcars.rf)
```
And
```
set.seed(1)
data(iris)
iris.rf <- randomForest(Species ~ ., iris, proximity=TRUE,
keep.forest=FALSE)
MDSplot(iris.rf, iris$Species)
```
I'm not aware of a simple way to actually plot a tree, but you can use the `getTree` function to retrieve the tree and plot that separately.
```
getTree(randomForest(iris[,-5], iris[,5], ntree=10), 3, labelVar=TRUE)
```
The [Strobl/Zeileis presentation on "Why and how to use random forest variable importance measures (and how you shouldn’t)"](http://www.statistik.uni-dortmund.de/useR-2008/slides/Strobl+Zeileis.pdf) has examples of trees which must have been produced in this way. This [blog post on tree models](http://www.statmethods.net/advstats/cart.html) has some nice examples of CART tree plots which you can use for example.
As @chl commented, a single tree isn't especially meaningful in this context, so short of using it to explain what a random forest is, I wouldn't include this in a paper.
| null | CC BY-SA 2.5 | null | 2010-09-03T14:32:35.293 | 2010-09-03T15:30:56.143 | 2010-09-03T15:30:56.143 | 5 | 5 | null |
2347 | 2 | null | 3 | 14 | null | There are also those projects initiated by the FSF or redistributed under GNU General Public License, like:
- PSPP, which aims to be a free alternative to SPSS
- GRETL, mostly dedicated to regression and econometrics
There are even applications that were released just as companion software for a textbook, like [JMulTi](http://www.jmulti.de/), but are still in use by a few people.
I am still playing with [xlispstat](http://www.stat.uiowa.edu/~luke/xls/xlsinfo/), from time to time, although Lisp has been largely superseded by R (see Jan de Leeuw's overview on [Lisp vs. R](http://www.jstatsoft.org/v13/i07) in the Journal of Statistical Software). Interestingly, one of the cofounders of the R language, Ross Ihaka, argued on the contrary that the future of statistical software is... Lisp: [Back to the Future: Lisp as a Base for a Statistical Computing System](http://www.stat.auckland.ac.nz/%7Eihaka/downloads/Compstat-2008.pdf). @Alex already pointed to the Clojure-based statistical environment [Incanter](http://incanter.org/), so maybe we will see a revival of Lisp-based software in the near future? :-)
| null | CC BY-SA 2.5 | null | 2010-09-03T14:42:15.677 | 2010-09-03T14:42:15.677 | null | null | 930 | null |
2348 | 1 | null | null | 19 | 1775 | I was reading [Christian Robert's Blog](http://xianblog.wordpress.com/2010/09/02/random-dive-mh) today and quite liked the new Metropolis-Hastings algorithm he was discussing. It seemed simple and easy to implement.
Whenever I code up MCMC, I tend to stick with very basic MH algorithms, such as independent moves or random walks on the log scale.
What MH algorithms do people routinely use? In particular:
- Why do you use them?
- In some sense you must think that they are optimal - after all you use them routinely! So how do you judge optimality: ease-of-coding, convergence, ...
I'm particularly interested in what is used in practice, i.e. when you code up your own schemes.
| Metropolis-Hastings algorithms used in practice | CC BY-SA 2.5 | null | 2010-09-03T15:02:58.163 | 2010-10-05T12:11:53.590 | 2010-09-04T10:49:24.537 | 8 | 8 | [
"markov-chain-montecarlo",
"metropolis-hastings"
] |
2349 | 2 | null | 2269 | 2 | null | Obviously there is probably a better solution available than what I'm about to say - especially since these are both SAS products. However, I think it bears saying that if all else fails, when your data is in a tabular structure, you can almost always export the data as a delimited text file (e.g. csv) and import it into the new program as a delimited text file. I don't think I've seen a program that works with tabular data structures yet that doesn't provide these options.
| null | CC BY-SA 2.5 | null | 2010-09-03T15:51:04.217 | 2010-09-03T15:51:04.217 | null | null | 196 | null |
2350 | 1 | 2359 | null | 15 | 8162 | I am applying a random forest algorithm as a classifier on a microarray dataset which is split into two known groups and has 1000s of features. After the initial run I look at the importance of the features and run the tree algorithm again with the 5, 10 and 20 most important features. I find that for all features, and for the top 10 and 20, the OOB estimate of error rate is 1.19%, whereas for the top 5 features it is 0%. This seems counter-intuitive to me, so I was wondering whether you could explain whether I am missing something or I am using the wrong metric.
I am using the randomForest package in R with ntree=1000, nodesize=1 and mtry=sqrt(n)
| Why does the random forest OOB estimate of error improve when the number of features selected are decreased? | CC BY-SA 2.5 | null | 2010-09-03T15:55:37.100 | 2017-07-04T14:22:58.973 | null | null | 1150 | [
"r",
"machine-learning",
"classification",
"random-forest"
] |
2351 | 2 | null | 2348 | 2 | null | Hybrid Monte Carlo is the standard algorithm used for neural networks. Gibbs sampling for Gaussian process classification (when not using a deterministic approximation instead).
| null | CC BY-SA 2.5 | null | 2010-09-03T16:00:07.917 | 2010-09-03T16:00:07.917 | null | null | 887 | null |
2352 | 1 | null | null | 25 | 3046 | In his paper [Linear Model Selection by Cross-Validation](http://www.jstor.org/pss/2290328), Jun Shao shows that for the problem of variable selection in multivariate linear regression, the method of leave-one-out cross validation (LOOCV) is 'asymptotically inconsistent'. In plain English, it tends to select models with too many variables. In a simulation study, Shao shows that even for as few as 40 observations, LOOCV can underperform other cross-validation techniques.
This paper is somewhat controversial, and somewhat ignored (10 years after its publication, my chemometrics colleagues had never heard of it and were happily using LOOCV for variable selection...). There is also a belief (I am guilty of this), that its results extend somewhat beyond the original limited scope.
The question, then: how far do these results extend? Are they applicable to the following problems?
- Variable selection for logistic regression/GLM?
- Variable selection for Fisher LDA classification?
- Variable selection using SVM with finite (or infinite) kernel space?
- Comparison of models in classification, say SVM using different kernels?
- Comparison of models in linear regression, say comparing MLR to Ridge Regression?
- etc.
| When are Shao's results on leave-one-out cross-validation applicable? | CC BY-SA 3.0 | null | 2010-09-03T16:15:14.543 | 2020-02-27T20:39:06.940 | 2016-04-26T04:00:02.200 | 795 | 795 | [
"classification",
"model-selection",
"cross-validation"
] |
2353 | 2 | null | 3 | 19 | null | This may get downvoted to oblivion, but I happily used the Matlab clone [Octave](http://www.gnu.org/software/octave/) for many years. There are fairly good libraries in octave forge for generation of random variables from different distributions, statistical tests, etc, though clearly it is dwarfed by R. One possible advantage over R is that Matlab/octave is the lingua franca among numerical analysts, optimization researchers, and some subset of applied mathematicians (at least when I was in school), whereas nobody in my department, to my knowledge, used R. my loss. learn both if possible!
| null | CC BY-SA 2.5 | null | 2010-09-03T16:27:56.423 | 2010-09-03T16:27:56.423 | null | null | 795 | null |
2354 | 2 | null | 2170 | 1 | null | Thanks again for the response, and any other responses in the future will be much appreciated. I think I personally prefer using exploratory tools to identify the relationships, especially since the original researcher did not give any real reason why a curvilinear relationship would exist theoretically. Although exploration would identify if the relationship was not properly identified (e.g. if it should have been a cubed polynomial term instead of squared), this seems unlikely given there's no reason why it would have a curve in it to begin with.
The attached image is a plot of several of their final models, and I have plotted the expected value of Y given their reported X and X^2 coefficients, holding everything else in the models constant. If their results are representative, I should observe similar findings in my sample controlling for other confounders. It is also enlightening just to plot their findings, as I see the squared term dominates in several of the models, and so for all practical purposes it has a negative relationship that reaches the bottom of realistic Y values fairly quickly.

| null | CC BY-SA 2.5 | null | 2010-09-03T17:57:15.193 | 2010-09-03T17:57:15.193 | null | null | 1036 | null |
2355 | 2 | null | 2352 | 6 | null | I would say: everywhere, but I haven't seen a strict proof of it. The intuition behind it is that when doing CV one must strike a balance between a training set large enough to build a sensible model and a test set large enough to be a sensible benchmark.
When dealing with thousands of pretty homogeneous objects, picking out a single one carries the risk that it is very similar to some other object left in the training set -- and then the results would be too optimistic.
On the other hand, in the case of only a few objects there will be no vital difference between LOO and k-fold CV; $10/10$ is just $1$ and we can't do anything about it.
| null | CC BY-SA 2.5 | null | 2010-09-03T18:14:13.153 | 2010-09-03T18:14:13.153 | null | null | null | null |
2356 | 1 | null | null | 101 | 13975 | A recent question on the difference between confidence and credible intervals led me to start re-reading Edwin Jaynes' article on that topic:
Jaynes, E. T., 1976. `Confidence Intervals vs Bayesian Intervals,' in Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, W. L. Harper and C. A. Hooker (eds.), D. Reidel, Dordrecht, p. 175; ([pdf](http://bayes.wustl.edu/etj/articles/confidence.pdf))
In the abstract, Jaynes writes:
>
...we exhibit the Bayesian and orthodox solutions to six common statistical problems involving confidence intervals (including significance tests based on the same reasoning). In every case, we find the situation is exactly the opposite, i.e. the Bayesian method is easier to apply and yields the same or better results. Indeed, the orthodox results are satisfactory only when they agree closely (or exactly) with the Bayesian results. No contrary example has yet been produced.
(emphasis mine)
The paper was published in 1976, so perhaps things have moved on. My question is, are there examples where the frequentist confidence interval is clearly superior to the Bayesian credible interval (as per the challenge implicitly made by Jaynes)?
Examples based on incorrect prior assumptions are not acceptable as they say nothing about the internal consistency of the different approaches.
| Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals | CC BY-SA 3.0 | null | 2010-09-03T18:23:44.087 | 2022-07-03T15:40:27.170 | 2016-07-24T17:31:36.353 | 103338 | 887 | [
"bayesian",
"confidence-interval"
] |
2357 | 2 | null | 1668 | 1 | null | As a computer engineer coming to data analysis myself, a really readable book that covers things from a pretty unintimidating perspective (at the cost of not covering as much as any of the other books suggested here) was Programming Collective Intelligence by Toby Segaran. I found it a lot more approachable than, for example, Bishop's book, which is a great reference but goes into more depth than you probably want at a first pass. On amazon: [http://www.amazon.com/Programming-Collective-Intelligence-Building-Applications/dp/0596529325](http://rads.stackoverflow.com/amzn/click/0596529325)
| null | CC BY-SA 2.5 | null | 2010-09-03T18:51:15.067 | 2010-09-03T18:51:15.067 | null | null | 1076 | null |
2358 | 1 | 2360 | null | 90 | 178893 | Are multiple and multivariate regression really different? What is a variate anyways?
| Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math | CC BY-SA 3.0 | null | 2010-09-03T18:54:17.230 | 2023-04-17T13:41:02.513 | 2015-10-28T09:32:08.590 | 28666 | 74 | [
"regression",
"multiple-regression",
"terminology",
"multivariate-regression"
] |
2359 | 2 | null | 2350 | 16 | null | This is feature selection overfit and this is pretty known -- see [Ambroise & McLachlan 2002](http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=124442&tool=pmcentrez&rendertype=abstract).
The problem stems from the fact that RF is too smart and the number of objects is too small. With few objects, it is generally pretty easy for a randomly created attribute to have a good correlation with the decision. And when the number of attributes is large, you may be certain that some of the totally irrelevant ones will be very good predictors, even enough to form a cluster that will be able to recreate the decision 100% of the time, especially when the huge flexibility of RF is considered. And so, it becomes obvious that when instructed to find the best possible subset of attributes, the FS procedure finds this cluster.
One solution (CV) is given in A&McL; you can also test our approach to the topic, the [Boruta algorithm](http://cran.r-project.org/web/packages/Boruta/index.html), which basically extends the set with "shadow attributes" made to be random by design and compares their RF importance to that obtained for the real attributes, to judge which of them are indeed random and can be removed; this is replicated many times to be significant. Boruta is intended for a somewhat different task, but as far as my tests showed, the resulting set is free of the FS-overfit problem.
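For completeness, a minimal sketch of the Boruta interface in R, on toy data with one planted signal among noise (if I recall the argument names of the CRAN package correctly):
```
library(Boruta)
set.seed(1)
y <- factor(rep(c("A", "B"), each = 50))
X <- data.frame(matrix(rnorm(100 * 50), 100, 50))      # 50 pure-noise attributes
X$signal <- rnorm(100, mean = ifelse(y == "A", 0, 1))  # one genuine attribute
res <- Boruta(X, y)
print(res)                    # confirmed / tentative / rejected attributes
getSelectedAttributes(res)
```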
| null | CC BY-SA 2.5 | null | 2010-09-03T19:00:15.123 | 2010-09-03T19:00:15.123 | null | null | null | null |
2360 | 2 | null | 2358 | 67 | null | Very quickly, I would say: 'multiple' applies to the number of predictors that enter the model (or, equivalently, the design matrix) with a single outcome (Y response), while 'multivariate' refers to a matrix of response vectors. I cannot remember which author starts his introductory section on multivariate modeling with that consideration, but I think it is Brian Everitt in his textbook [An R and S-Plus Companion to Multivariate Analysis](http://www.springer.com/statistics/social+sciences+%26+law/book/978-1-85233-882-4). For a thorough discussion about this, I would suggest looking at his latest book, [Multivariable Modeling and Multivariate Analysis for the Behavioral Sciences](http://www.crcpress.com/product/isbn/9781439807699?refpage=http%3A//www.crcpress.com/ecommerce_product/browse_book_categories.jsf&refpn=category&refpv=STA12A).
For 'variate', I would say this is a common way to refer to any random variable that follows a known or hypothesized distribution, e.g. we speak of gaussian variates $X_i$ as a series of observations drawn from a normal distribution (with parameters $\mu$ and $\sigma^2$). In probabilistic terms, we say that these are random realizations of X, with mathematical expectation $\mu$, and about 95% of them are expected to lie in the range $[\mu-2\sigma;\mu+2\sigma]$.
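To make the multiple/multivariate distinction above concrete, here is a minimal R sketch with simulated data (my own illustration; the variable names are placeholders):

```r
set.seed(123)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y1 <- 1 + 2 * d$x1 - d$x2 + rnorm(100)
d$y2 <- 3 - d$x1 + 0.5 * d$x2 + rnorm(100)

# multiple regression: a single response, several predictors
summary(lm(y1 ~ x1 + x2, data = d))

# multivariate regression: a matrix of responses, the same predictors
summary(lm(cbind(y1, y2) ~ x1 + x2, data = d))
```

The second call fits one set of coefficients per response column and allows joint (MANOVA-type) tests across responses, which is essentially what the 'multivariate' label buys you.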
| null | CC BY-SA 2.5 | null | 2010-09-03T19:03:07.687 | 2010-09-19T09:32:42.730 | 2010-09-19T09:32:42.730 | 930 | 930 | null |
2361 | 2 | null | 1668 | 3 | null | Here is a very nice book from James E. Gentle, [Computational Statistics](http://www.springer.com/statistics/computanional+statistics/book/978-0-387-98143-7) (Springer, 2009), which covers both the computational and the statistical aspects of data analysis. Gentle has also authored other great books; check his publications.
Another great book is the [Handbook of Computational Statistics](http://www.springer.com/statistics/computanional+statistics/book/978-3-540-40464-4), by Gentle et al. (Springer, 2004); it is circulating as a PDF somewhere on the web, so just try searching for it on Google.
| null | CC BY-SA 2.5 | null | 2010-09-03T19:22:30.447 | 2010-09-03T19:22:30.447 | null | null | 930 | null |
2362 | 2 | null | 2344 | 19 | null |
- As Shane wrote: make it reproducible research, and include the random seeds, because RF is stochastic.
- First of all, plotting the single trees that form an RF makes little sense; it is an ensemble classifier, and it makes sense only as a whole. But even plotting the whole forest makes little sense -- it is a black-box classifier, so it is not intended to explain the data with its structure, but rather to replicate the original process. Instead, make some of the plots Shane suggested.
- In practice, the OOB error is a very good approximation of the true error; yet this is not a widely accepted fact, so for publication it is better to also run a CV to confirm it (see the sketch below).
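A minimal sketch of such a check in R (my own addition, using the randomForest package and the built-in iris data purely as a stand-in for your own set):

```r
library(randomForest)
set.seed(42)

rf <- randomForest(Species ~ ., data = iris)
oob_err <- rf$err.rate[rf$ntree, "OOB"]     # OOB estimate reported by the forest itself

# simple 5-fold CV as an external confirmation
folds <- sample(rep(1:5, length.out = nrow(iris)))
cv_err <- mean(sapply(1:5, function(k) {
  fit  <- randomForest(Species ~ ., data = iris[folds != k, ])
  pred <- predict(fit, iris[folds == k, ])
  mean(pred != iris$Species[folds == k])
}))

c(OOB = oob_err, CV = cv_err)               # the two estimates should be close
```

If the two numbers agree, reporting the OOB estimate alone is much easier to defend.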
| null | CC BY-SA 2.5 | null | 2010-09-03T19:22:56.617 | 2010-09-03T19:22:56.617 | null | null | null | null |
2363 | 2 | null | 2358 | 70 | null | Here are two closely related examples which illustrate the ideas. The examples are somewhat US-centric, but the ideas can be extrapolated to other countries.
Example 1
Suppose that a university wishes to refine its admission criteria so that it admits 'better' students. Also, suppose that a student's Grade Point Average (GPA) is what the university wishes to use as a performance metric for students. The university has several criteria in mind, such as high school GPA (HSGPA), SAT scores (SAT), Gender, etc., and would like to know which of these criteria matter as far as GPA is concerned.
Solution: Multiple Regression
In the above context, there is one dependent variable (GPA) and you have multiple independent variables (HSGPA, SAT, Gender, etc.). You want to find out which of the independent variables are good predictors for your dependent variable. You would use multiple regression to make this assessment.
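In R this might look as follows (a minimal sketch with simulated placeholder data, so the fitted numbers mean nothing; only the model form matters):

```r
set.seed(1)
n <- 200
admissions <- data.frame(
  GPA    = runif(n, 2, 4),
  HSGPA  = runif(n, 2, 4),
  SAT    = round(runif(n, 900, 1600)),
  Gender = factor(sample(c("F", "M"), n, replace = TRUE))
)

# multiple regression: one outcome (GPA), several predictors
summary(lm(GPA ~ HSGPA + SAT + Gender, data = admissions))
```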
Example 2
Instead of the above situation, suppose the admissions office wants to track student performance across time and wishes to determine which of its criteria drive student performance across time. In other words, it has GPA scores for the four years that a student stays in school (say, GPA1, GPA2, GPA3, GPA4) and wants to know which of the independent variables predict GPA scores best on a year-by-year basis. The admissions office hopes to find that the same independent variables predict performance across all four years, so that its choice of admissions criteria ensures that student performance is consistently high across all four years.
Solution: Multivariate Regression
In Example 2, we have multiple dependent variables (i.e., GPA1, GPA2, GPA3, GPA4) and multiple independent variables. In such a situation, you would use multivariate regression.
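Continuing with the same kind of placeholder data, a minimal sketch of the multivariate version in R (again, simulated data and made-up names):

```r
set.seed(1)
n <- 200
admissions <- data.frame(
  HSGPA  = runif(n, 2, 4),
  SAT    = round(runif(n, 900, 1600)),
  Gender = factor(sample(c("F", "M"), n, replace = TRUE)),
  GPA1 = runif(n, 2, 4), GPA2 = runif(n, 2, 4),
  GPA3 = runif(n, 2, 4), GPA4 = runif(n, 2, 4)
)

# multivariate regression: the four yearly GPAs modelled jointly on the same predictors
fit <- lm(cbind(GPA1, GPA2, GPA3, GPA4) ~ HSGPA + SAT + Gender, data = admissions)
summary(fit)   # one set of coefficients per yearly GPA
anova(fit)     # MANOVA-type tests across the four outcomes
```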
| null | CC BY-SA 2.5 | null | 2010-09-03T19:27:20.703 | 2010-09-03T19:27:20.703 | null | null | null | null |
2364 | 2 | null | 1668 | 1 | null | CRAN has several good examples of books pertaining to statistical programming. Some of them will not pertain to machine learning and MCMC, but each entry is annotated, so you should have a rough idea of what each book contains before diving in further.
[http://www.r-project.org/doc/bib/R-books.html](http://www.r-project.org/doc/bib/R-books.html)
| null | CC BY-SA 2.5 | null | 2010-09-03T19:38:45.930 | 2010-09-03T19:38:45.930 | null | null | 1118 | null |
2365 | 2 | null | 2356 | 12 | null | The problem starts with your sentence:
>
Examples based on incorrect prior
assumptions are not acceptable as they
say nothing about the internal
consistency of the different
approaches.
Yeah well, how do you know your prior is correct?
Take the case of Bayesian inference in phylogeny. The probability of at least one change is related to evolutionary time (branch length t) by the formula
$$P=1-e^{-\frac{4}{3}ut}$$
with u being the rate of substitution.
Now you want to make a model of the evolution, based on a comparison of DNA sequences. In essence, you try to estimate a tree that models the amount of change between the DNA sequences as closely as possible. The P above is the probability of at least one change on a given branch. Evolutionary models describe the chances of change between any two nucleotides, and from these evolutionary models the estimation function is derived, either with p as a parameter or with t as a parameter.
You have no sensible prior knowledge, and you choose a flat prior for p. This inherently implies an exponentially decreasing prior for t. (It becomes even more problematic if you want to set a flat prior on t: the implied prior on p then depends strongly on where you cut off the range of t.)
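The implied prior is easy to check by simulation; here is a minimal R sketch (my own illustration, with an arbitrary rate u), obtained by inverting the formula above as $t=-\frac{3}{4u}\ln(1-p)$:

```r
set.seed(1)
u <- 1                             # arbitrary substitution rate, purely for illustration
p <- runif(1e5)                    # flat prior on p
t <- -3 * log(1 - p) / (4 * u)     # invert p = 1 - exp(-(4/3) * u * t)

hist(t, breaks = 100, freq = FALSE,
     main = "Prior on t implied by a flat prior on p")
curve(dexp(x, rate = 4 * u / 3), add = TRUE, col = "red")   # exponential, as claimed
```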
In theory, t can be infinite, but when you allow an infinite range, the area under its density function is infinite as well, so you have to define a truncation point for the prior. Now, when you choose the truncation point sufficiently large, it is not difficult to prove that both ends of the credible interval rise, and at a certain point the true value is no longer contained in the credible interval. Unless you have a very good idea about the prior, Bayesian methods are not guaranteed to be equal to or superior to other methods.
ref: Joseph Felsenstein, Inferring Phylogenies, chapter 18
On a side note, I'm getting sick of the Bayesian/frequentist quarrel. They are both different frameworks, and neither is the Absolute Truth. The classical examples in favor of Bayesian methods invariably come from probability calculation, and not one frequentist will contradict them. The classical arguments against Bayesian methods invariably involve the arbitrary choice of a prior. And sensible priors are definitely possible.
It all boils down to the correct use of either method at the right time. I've seen very few arguments/comparisons where both methods were applied correctly. Assumptions of any method are very much underrated and far too often ignored.
EDIT: To clarify, the problem lies in the fact that the estimate based on p differs from the estimate based on t in the Bayesian framework when working with uninformative priors (which in a number of cases is the only possible choice). This is not true in the ML framework for phylogenetic inference. It is not a matter of a wrong prior; it is inherent to the method.
| null | CC BY-SA 2.5 | null | 2010-09-03T20:24:52.440 | 2010-09-03T23:00:59.537 | 2010-09-03T23:00:59.537 | 1124 | 1124 | null |
2366 | 2 | null | 2326 | 4 | null | Actually, people get paid big money for statistical guidance through experiments... If you're not too sure about it, I'd also advise consulting a statistician. An internet forum is not the best aid for complex analyses. Much of what is possible depends on the structure of the dataset: how the variables are distributed, what the correlation structure looks like, and so on.
But I'll take a shot at it.
So you have $m$ measurements taken at 4 different times:
- virgin state
- after preparation
- after first round (shot in "virgin state")
- after second round (shot in "non-virgin state")
Your response variable is $MV$. Let's assume the error of the chronograph is random and normally distributed; this makes life a lot easier. Let's also assume that for all other measurements the assumptions of parametric models apply, i.e. that every measurement consists of $n$ independent and identically distributed random variables, corresponding to your $n$ tested cases.
A rather basic approach would involve
- using a Kolmogorov-Smirnov test to compare the distributions (think about correcting for multiple testing; see the sketch below)
- a model selection procedure to determine the best predictors for $MV$ using either set
So in theory, you could use a GEE with "round" as a factor indicating whether the measurement was made in the first round or in the second round, thus allowing for different intercepts and coefficients for the first and second round that can be compared and tested. You have to specify "case" as the clustering (id) variable in the GEE to get correct estimates of your SEs. Basically, if the main effect "round" is significant, there is a difference in average $MV$ before and after. If the interaction "round":measurement is significant, the impact of the measurement differs between the two rounds. Now it boils down to using a sensible criterion to select the best model.
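A minimal sketch of both steps in R (my own addition; geepack is one possible GEE implementation, and the data layout and variable names here are entirely made up):

```r
library(geepack)   # one possible GEE implementation in R

# hypothetical long-format data: one row per case and round
set.seed(1)
dat <- data.frame(
  case  = factor(rep(1:50, each = 2)),
  round = factor(rep(c("first", "second"), times = 50)),
  meas  = rnorm(100)                      # a placeholder measurement / predictor
)
dat$MV <- 800 + 5 * dat$meas + rnorm(100, sd = 10)

# Kolmogorov-Smirnov test comparing the MV distributions of the two rounds
ks.test(dat$MV[dat$round == "first"], dat$MV[dat$round == "second"])

# GEE with 'case' as the clustering variable; the round:meas interaction
# tests whether the effect of the measurement differs between the rounds
fit <- geeglm(MV ~ round * meas, id = case, data = dat,
              family = gaussian, corstr = "exchangeable")
summary(fit)
```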
Without seeing the data, it's impossible to say whether this approach is actually valid. You will have to do some descriptive and exploratory analysis to really understand the underlying distributions and correlations. Otherwise you can never know whether you violated assumptions of any method.
On a side note: give this dataset to 10 statisticians, and chances are you will get as many different models. You should keep in mind whether or not the model also makes sense in real life.
my 2 cents
| null | CC BY-SA 2.5 | null | 2010-09-03T21:16:13.070 | 2010-09-03T21:16:13.070 | null | null | 1124 | null |