Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2367 | 2 | null | 2352 | 18 | null | You need to specify the purpose of the model before you can say whether Shao's results are applicable. For example, if the purpose is prediction, then LOOCV makes good sense and the inconsistency of variable selection is not a problem. On the other hand, if the purpose is to identify the important variables and explain how they affect the response variable, then Shao's results are obviously important and LOOCV is not appropriate.
The AIC is asymptotically equivalent to LOOCV, and the BIC is asymptotically equivalent to a leave-$v$-out CV where $v=n[1-1/(\log(n)-1)]$ (the BIC result holds for linear models only). So the BIC gives consistent model selection. Therefore a short-hand summary of Shao's result is that AIC is useful for prediction but BIC is useful for explanation.
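For linear models the LOOCV error can be computed in closed form from the hat values, so the connection is easy to check numerically. A rough sketch in R (simulated data, arbitrary candidate models):
```
# Compare AIC and leave-one-out CV (via PRESS residuals) for nested linear models.
set.seed(1)
n <- 100
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
y <- 1 + 2 * x1 + 0.5 * x2 + rnorm(n)   # x3 is irrelevant

loocv <- function(fit) mean((residuals(fit) / (1 - hatvalues(fit)))^2)

fits <- list(m1 = lm(y ~ x1), m2 = lm(y ~ x1 + x2), m3 = lm(y ~ x1 + x2 + x3))
sapply(fits, AIC)    # AIC for each candidate model
sapply(fits, loocv)  # LOOCV mean squared error for each candidate model
```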
| null | CC BY-SA 2.5 | null | 2010-09-04T03:12:13.830 | 2010-09-14T02:33:16.480 | 2010-09-14T02:33:16.480 | 159 | 159 | null |
2368 | 2 | null | 2296 | 3 | null | You should take a look at some of the nonparametric Bayesian approaches (see [this paper](https://users.umiacs.umd.edu/%7Ehal/docs/daume08ihfrm.pdf) and [this paper](https://web.archive.org/web/20130426174725/http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.133.2649&rep=rep1&type=pdf)) to factor analysis which do not assume the number of factors to be known; the first one can also model the case where the factors have a dependency structure among them.
| null | CC BY-SA 4.0 | null | 2010-09-04T03:58:10.850 | 2023-01-06T04:42:12.417 | 2023-01-06T04:42:12.417 | 362671 | 881 | null |
2369 | 2 | null | 2356 | 9 | null | Frequentist confidence intervals bound the rate of false positives (Type I errors), and guarantee their coverage will be bounded below by the confidence parameter, even in the worst case. Bayesian credibility intervals don't.
So if the thing you care about is false positives and you need to bound them, confidence intervals are the approach that you'll want to use.
For example, let's say you have an evil king with a court of 100 courtiers and courtesans and he wants to play a cruel statistical game with them. The king has a bag of a trillion fair coins, plus one unfair coin whose heads probability is 10%. He's going to perform the following game. First, he'll draw a coin uniformly at random from the bag.
Then the coin will be passed around a room of 100 people and each one will be forced to do an experiment on it, privately, and then each person will state a 95% uncertainty interval on what they think the coin's heads probability is.
Anybody who gives an interval that represents a false positive -- i.e. an interval that doesn't cover the true value of the heads probability -- will be beheaded.
If we wanted to express the *a posteriori* probability distribution function of the coin's weight, then of course a credibility interval is what does that. The answer will always be the interval [0.5, 0.5], irrespective of the outcome. Even if you flip zero heads or one head out of ten, you'll still say [0.5, 0.5], because it's a heck of a lot more probable that the king drew a fair coin and you had a 1/1024 day than that the king drew the unfair coin.
So this is not a good idea for the courtiers and courtesans to use! Because when the unfair coin is drawn, the whole room (all 100 people) will be wrong and they'll all get beheaded.
In this world where the most important thing is false positives, what we need is an absolute guarantee that the rate of false positives will be less than 5%, no matter which coin is drawn. Then we need to use a confidence interval, like Blyth-Still-Casella or Clopper-Pearson, that works and provides at least 95% coverage irrespective of the true value of the parameter, even in the worst case. If everybody uses this method instead, then no matter which coin is drawn, at the end of the day we can guarantee that the expected number of wrong people will be no more than five.
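A minimal R sketch of this kind of coverage guarantee (ten flips per experiment, true heads probability fixed at 10%; `binom.test()` gives the Clopper-Pearson interval):
```
# Simulate coverage of the Clopper-Pearson (exact) 95% interval at p = 0.10.
set.seed(42)
p <- 0.1; n_flips <- 10; n_sim <- 10000
covered <- replicate(n_sim, {
  x <- rbinom(1, n_flips, p)
  ci <- binom.test(x, n_flips)$conf.int
  ci[1] <= p && p <= ci[2]
})
mean(covered)  # should be >= 0.95 (typically above, since the interval is conservative)
```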
So the point is: if your criterion requires bounding false positives (or equivalently, guaranteeing coverage), you gotta go with a confidence interval. That's what they do. Credibility intervals may be a more intuitive way of expressing uncertainty, and they may perform pretty well under a frequentist analysis, but they are not going to provide the guaranteed bound on false positives you'll get when you go asking for it.
(Of course if you also care about false negatives, you'll need a method that makes guarantees about those too...)
| null | CC BY-SA 2.5 | null | 2010-09-04T04:22:51.527 | 2010-09-04T05:26:37.343 | 2010-09-04T05:26:37.343 | 1122 | 1122 | null |
2370 | 1 | null | null | 8 | 418 | I am trying to fit a Cox proportional hazards model with coxph on a computer with 12GB of RAM. It keeps running out of memory. Is there a biglm equivalent of coxph?
| Is there a biglm equivalent for coxph? | CC BY-SA 2.5 | null | 2010-09-04T05:19:51.860 | 2010-10-08T16:07:41.500 | 2010-10-08T16:07:41.500 | 8 | 1126 | [
"r",
"survival",
"large-data"
] |
2371 | 2 | null | 2370 | 5 | null | Maybe take a look at the [DatABEL](http://cran.r-project.org/web/packages/DatABEL/) package. I know it is used in genomic studies with large data that may be stored on disk instead of in RAM. From what I read in the help file, you can then apply different kinds of models, including survival models.
| null | CC BY-SA 2.5 | null | 2010-09-04T07:17:51.167 | 2010-09-04T11:18:53.117 | 2010-09-04T11:18:53.117 | null | 930 | null |
2372 | 2 | null | 2272 | 18 | null | I disagree with Srikant's answer on one fundamental point. Srikant stated this:
"Inference Problem:
Your inference problem is: What values of θ are reasonable given the observed data x?"
In fact this is the BAYESIAN INFERENCE PROBLEM. In Bayesian statistics we seek to calculate P(θ| x), i.e. the probability of the parameter value given the observed data (sample). The CREDIBLE INTERVAL is an interval of θ that has a 95% chance (or other) of containing the true value of θ given the several assumptions underlying the problem.
The FREQUENTIST INFERENCE PROBLEM is this:
Are the observed data x reasonable given the hypothesised values of θ?
In frequentist statistics we seek to calculate P(x| θ), i.e. the probability of observing the data (sample) given the hypothesised parameter value(s). The CONFIDENCE INTERVAL (perhaps a misnomer) is interpreted as: if the experiment that generated the random sample x were repeated many times, 95% (or other) of such intervals constructed from those random samples would contain the true value of the parameter.
Mess with your head? That's the problem with frequentist statistics and the main thing Bayesian statistics has going for it.
As Srikant points out, P(θ| x) and P(x| θ) are related as follows:
P(θ| x) ∝ P(θ)P(x| θ)
Where P(θ) is our prior probability; P(x| θ) is the probability of the data conditional on the parameter value (the likelihood); and P(θ| x) is the posterior probability. The prior P(θ) is inherently subjective, but that is the price of knowledge about the Universe - in a very profound sense.
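A minimal illustration of that relation in R (a Beta prior with a binomial likelihood, made-up numbers): the posterior and a credible interval follow directly from P(θ)P(x| θ).
```
# Posterior for a binomial proportion with a Beta(a, b) prior:
# the posterior is Beta(a + successes, b + failures), i.e. prior x likelihood
# up to the normalising constant P(x).
a <- 1; b <- 1           # flat prior P(theta)
x <- 7; n <- 10          # observed data: 7 successes in 10 trials
post_a <- a + x
post_b <- b + n - x
qbeta(c(0.025, 0.975), post_a, post_b)  # 95% equal-tailed credible interval for theta
```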
The other parts of both Srikant's and Keith's answers are excellent.
| null | CC BY-SA 2.5 | null | 2010-09-04T10:22:19.790 | 2010-09-04T10:28:57.360 | 2010-09-04T10:28:57.360 | 521 | 521 | null |
2373 | 2 | null | 2272 | 7 | null | As I understand it: A credible interval is a statement of the range of values for the statistic of interest that remain plausible given the particular sample of data that we have actually observed. A confidence interval is a statement of the frequency with which the true value lies in the confidence interval when the experiment is repeated a large number of times, each time with a different sample of data from the same underlying population.
Normally the question we want to answer is "what values of the statistic are consistent with the observed data", and the credible interval gives a direct answer to that question - the true value of the statistic lies in a 95% credible interval with probability 95%. The confidence interval does not give a direct answer to this question; it is not correct to assert that the probability that the true value of the statistic lies within the 95% confidence interval is 95% (unless it happens to coincide with the credible interval). However, this is a very common misinterpretation of a frequentist confidence interval, as it is the interpretation that would be a direct answer to the question.
The paper by Jaynes that I discuss in another question gives a good example of this (example #5), where a perfectly correct confidence interval is constructed, yet the particular sample of data on which it is based rules out any possibility of the true value of the statistic being in the 95% confidence interval! This is only a problem if the confidence interval is incorrectly interpreted as a statement of plausible values of the statistic on the basis of the particular sample we have observed.
At the end of the day, it is a matter of "horses for courses", and which interval is best depends on the question you want answered - just choose the method that directly answers that question.
I suspect confidence intervals are more useful when analysing [designed] repeatable experiments (as that is just the assumption underlying the confidence interval), and credible intervals better when analysing observational data, but that is just an opinion (I use both sorts of intervals in my own work, but wouldn't describe myself as an expert in either).
| null | CC BY-SA 2.5 | null | 2010-09-04T11:07:44.643 | 2010-09-04T11:07:44.643 | null | null | 887 | null |
2374 | 1 | 2375 | null | 18 | 26857 | I used to analyse items from a psychometric point of view. But now I am trying to analyse other types of questions on motivation and other topics. These questions are all on Likert scales. My initial thought was to use factor analysis, because the questions are hypothesised to reflect some underlying dimensions.
- But is factor analysis appropriate?
- Is it necessary to validate each question regarding its dimensionality ?
- Is there a problem with performing factor analysis on Likert items?
- Are there any good papers and methods on how to conduct factor analysis on Likert and other categorical items?
| Factor analysis of questionnaires composed of Likert items | CC BY-SA 3.0 | null | 2010-09-04T11:15:48.317 | 2014-05-14T23:35:57.370 | 2011-10-04T07:04:04.897 | 930 | 1154 | [
"factor-analysis",
"scales",
"psychometrics",
"likert",
"psychology"
] |
2375 | 2 | null | 2374 | 23 | null | From what I've seen so far, FA is used for attitude items as it is for other kinds of rating scales. The problem arising from the metric used (that is, "are Likert scales really to be treated as numeric scales?" is a long-standing debate, but providing you check for the bell-shaped response distribution you may handle them as continuous measurements, otherwise check for non-linear FA models or optimal scaling) may be handled by polytomous IRT models, like the Graded Response, Rating Scale, or Partial Credit Model. The latter two may be used as a rough check of whether the threshold distances, as used in Likert-type items, are a characteristic of the response format (RSM) or of the particular item (PCM).
Regarding your second point, it is known, for example, that response distributions in attitude or health surveys differ from one country to another (e.g. Chinese people tend to highlight 'extreme' response patterns compared to those coming from Western countries, see e.g. Song, X.-Y. (2007) Analysis of multisample structural equation models with applications to Quality of Life data, in Handbook of Latent Variable and Related Models, Lee, S.-Y. (Ed.), pp 279-302, North-Holland). Some methods to handle such situations off the top of my head:
- use of log-linear models (marginal approach) to highlight strong between-groups imbalance at the item level (coefficients are then interpreted as relative risks instead of odds);
- the multi-sample SEM method from Song cited above (Don't know if they do further work on that approach, though).
Now, the point is that most of these approaches focus at the item level (ceiling/floor effect, decreased reliability, bad item fit statistics, etc.), but when one is interested in how people deviate from what would be expected from an ideal set of observers/respondents, I think we must focus on person fit indices instead.
Such $\chi^2$ statistics are readily available for IRT models, like INFIT or OUTFIT mean square, but generally they apply on the whole questionnaire. Moreover, since estimation of items parameters rely in part on persons parameters (e.g., in the marginal likelihood framework, we assume a gaussian distribution), the presence of outlying individuals may lead to potentially biased estimates and poor model fit.
As proposed by Eid and Zickar (2007), combining a latent class model (to isolate groups of respondents, e.g. those always answering on the extreme categories vs. the others) and an IRT model (to estimate item parameters and persons' locations on the latent trait in both groups) appears to be a nice solution. Other modeling strategies are described in their paper (e.g. HYBRID model, see also Holden and Book, 2009).
Likewise, [unfolding models](http://www.psychology.gatech.edu/unfolding/publications.html) may be used to cope with response style, which is defined as a consistent and content-independent pattern of response categories (e.g. a tendency to agree with all statements). In the social sciences or psychological literature, this is known as Extreme Response Style (ERS). References (1–3) may be useful to get an idea on how it manifests and how it may be measured.
Here is a short list of papers that may help to progress on this subject:
- Hamilton, D.L. (1968). Personality attributes associated with extreme response style. Psychological Bulletin, 69(3): 192–203.
- Greenleaf, E.A. (1992). Measuring extreme response style. Public Opinion Quarterly, 56(3): 328-351.
- de Jong, M.G., Steenkamp, J.-B.E.M., Fox, J.-P., and Baumgartner, H. (2008). Using Item Response Theory to Measure Extreme Response Style in Marketing Research: A Global Investigation. Journal of marketing research, 45(1): 104-115.
- Morren, M., Gelissen, J., and Vermunt, J.K. (2009). Dealing with extreme response style in cross-cultural research: A restricted latent class factor analysis approach
- Moors, G. (2003). Diagnosing Response Style Behavior by Means of a Latent-Class Factor Approach. Socio-Demographic Correlates of Gender Role Attitudes and Perceptions of Ethnic Discrimination Reexamined. Quality & Quantity, 37(3), 277-302.
- de Jong, M.G. Steenkamp J.B., Fox, J.-P., and Baumgartner, H. (2008). Item Response Theory to Measure Extreme Response Style in Marketing Research: A Global Investigation. Journal of Marketing Research, 45(1), 104-115.
- Javaras, K.N. and Ripley, B.D. (2007). An “Unfolding” Latent Variable Model for Likert Attitude Data. JASA, 102(478): 454-463.
- slides from Moustaki, Knott and Mavridis, Methods for detecting outliers in latent variable models
- Eid, M. and Zickar, M.J. (2007). Detecting response styles and faking in personality and organizational assessments by Mixed Rasch Models. In von Davier, M. and Carstensen, C.H. (Eds.), Multivariate and Mixture Distribution Rasch Models, pp. 255–270, Springer.
- Holden, R.R. and Book, A.S. (2009). Using hybrid Rasch-latent class modeling to improve the detection of fakers on a personality inventory. Personality and Individual Differences, 47(3): 185-190.
| null | CC BY-SA 2.5 | null | 2010-09-04T12:21:17.757 | 2010-09-07T10:18:19.340 | 2010-09-07T10:18:19.340 | 930 | 930 | null |
2376 | 2 | null | 2348 | 2 | null | [MH sampling](http://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm) is used when it's difficult to sample from the target distribution (e.g., when the prior isn't [conjugate](http://en.wikipedia.org/wiki/Conjugate_prior) to the likelihood). So you use a proposal distribution to generate samples and accept/reject them based on the acceptance probability. The [Gibbs sampling](http://en.wikipedia.org/wiki/Gibbs_sampling) algorithm is a particular instance of MH where the proposals are always accepted. Gibbs sampling is one of the most commonly used algorithms due to its simplicity, but it may not always be possible to apply, in which case one resorts to MH based on accept/reject proposals.
| null | CC BY-SA 2.5 | null | 2010-09-04T21:25:49.863 | 2010-09-04T21:25:49.863 | null | null | 881 | null |
2377 | 1 | 2392 | null | 12 | 5104 | I am curious if there is a transform which alters the skew of a random variable without affecting the kurtosis. This would be analogous to how an affine transform of a RV affects the mean and variance, but not the skew and kurtosis (partly because the skew and kurtosis are defined to be invariant to changes in scale). Is this a known problem?
| A transform to change skew without affecting kurtosis? | CC BY-SA 2.5 | null | 2010-09-04T23:00:04.117 | 2011-05-20T09:56:36.523 | 2010-09-08T08:09:20.603 | 183 | 795 | [
"data-transformation",
"random-variable",
"moments"
] |
2378 | 2 | null | 2348 | 1 | null | In physics, statistical physics in particular, Metropolis-type algorithm(s) are used extensively. There are really countless variants of these, and new ones are being actively developed. It's much too broad a topic to give any sort of explanation here, so if you're interested you can start e.g. from [these lecture notes](http://mcwa.csi.cuny.edu/umass/lectures.html) or from the ALPS library webpage (http://alps.comp-phys.org/mediawiki).
| null | CC BY-SA 2.5 | null | 2010-09-05T03:23:37.750 | 2010-09-05T03:23:37.750 | null | null | 1197 | null |
2379 | 1 | 2415 | null | 86 | 17030 | Mathematics has its famous [Millennium Problems](http://en.wikipedia.org/wiki/Millennium_Prize_Problems) (and, historically, [Hilbert's 23](http://en.wikipedia.org/wiki/Hilbert%27s_problems)), questions that helped to shape the direction of the field.
I have little idea, though, what the Riemann Hypotheses and P vs. NP's of statistics would be.
So, what are the overarching open questions in statistics?
Edited to add:
As an example of the general spirit (if not quite specificity) of answer I'm looking for, I found a "Hilbert's 23"-inspired lecture by David Donoho at a "Math Challenges of the 21st Century" conference: [High-Dimensional Data Analysis: The Curses and Blessings of Dimensionality](http://www-stat.stanford.edu/~donoho/Lectures/AMS2000/AMS2000.html)
So a potential answer could talk about big data and why it's important, the types of statistical challenges high-dimensional data poses, and methods that need to be developed or questions that need to be answered in order to help solve the problem.
| What are the 'big problems' in statistics? | CC BY-SA 2.5 | null | 2010-09-05T04:16:29.570 | 2019-10-13T08:40:34.290 | 2014-07-19T12:45:28.427 | 22468 | 1106 | [
"history"
] |
2380 | 2 | null | 2379 | 6 | null | As an example of the general spirit (if not quite specificity) of answer I'm looking for, I found a "Hilbert's 23"-inspired lecture by David Donoho at a "Math Challenges of the 21st Century" conference:
[High-Dimensional Data Analysis: The Curses and Blessings of Dimensionality](http://www-stat.stanford.edu/~donoho/Lectures/AMS2000/AMS2000.html)
| null | CC BY-SA 2.5 | null | 2010-09-05T05:23:25.660 | 2010-09-05T05:36:49.673 | 2010-09-05T05:36:49.673 | 1106 | 1106 | null |
2381 | 2 | null | 322 | 2 | null | Jaynes [shows](http://omega.albany.edu:8008/ETJ-PDF/cc11g.pdf) how to derive Shannon's entropy from basic principles in his [book](http://omega.albany.edu:8008/JaynesBookPdf.html).
One idea is that if you approximate $n!$ by $n^n$, entropy is the rewriting of the following quantity
$$\frac{1}{n}\log \frac{n!}{(n p_1)!\cdots (n p_d)!}$$
The quantity inside the log is the number of different length n observation sequences over $d$ outcomes that are matched by distribution $p$, so it's a kind of a measure of explanatory power of the distribution.
| null | CC BY-SA 2.5 | null | 2010-09-05T06:49:22.653 | 2010-09-05T06:49:22.653 | null | null | 511 | null |
2382 | 2 | null | 2379 | 4 | null | Mathoverflow has a similar question about [big problems in probability theory](https://mathoverflow.net/questions/37151/what-are-the-big-problems-in-probability-theory).
It would appear from that page that the biggest questions have to do with self-avoiding random walks and percolation.
| null | CC BY-SA 2.5 | null | 2010-09-05T08:36:31.083 | 2010-09-05T08:59:37.680 | 2017-04-13T12:58:32.177 | -1 | 352 | null |
2383 | 2 | null | 7 | 2 | null | [Here's another list](https://web.archive.org/web/20151223171454/https://sites.google.com/site/munaga71/Home/datasetlinks) that might be of help.
| null | CC BY-SA 4.0 | null | 2010-09-05T09:22:57.927 | 2022-11-22T03:05:03.507 | 2022-11-22T03:05:03.507 | 362671 | 976 | null |
2384 | 1 | 2387 | null | 2 | 356 | Say I have a series of forecasts and observations like this:
```
       EntityF  EntityO
2004   120      125
2006   166      173
2008   150      167
2010   152      -
```
And assume that (i) the entity is the same and (ii) the forecasting methodology is constant.
I'd like to
- Produce a meaningful metric of the forecasting error.
- Be able to predict the current forecast (2010) error based on 1.
| Testing prediction time series against real data | CC BY-SA 2.5 | null | 2010-09-05T11:15:57.607 | 2010-09-16T06:33:13.167 | 2010-09-16T06:33:13.167 | null | 722 | [
"time-series",
"forecasting"
] |
2385 | 1 | 2389 | null | 6 | 368 | I have a very large data set which I would like to summarise in as small a space as possible, preferably one side of A4.
The data are from a customer satisfaction survey and are Likert-type scales, 5 scales for each work area, with 190 work areas in total. I would also like to represent the response rate on the visualisation somehow, because response rates are very variable and I want management to look at these as well as the actual scores.
If necessary I don't mind somehow reducing the 5 scales down to one (using factor analysis or some such thing). One or two sides of A4 to go to the senior management team who are of course very busy with lots of other things and decidedly non-technical. Use of colour is no problem, in fact would probably be seen as a boon.
It's just occurred to me that representing the order of the work areas, rather than their absolute value, would be OK, but again I don't want to lose the response rate information.
Hope this question isn't too vague, any ideas gratefully received. I am using R and anticipate that this work will involve my learning ggplot2, which I have not as yet got around to.
| Data visualisation- summarise 190 means and response rates | CC BY-SA 2.5 | null | 2010-09-05T11:37:47.670 | 2010-10-08T16:06:54.717 | 2010-10-08T16:06:54.717 | 8 | 199 | [
"data-visualization",
"large-data"
] |
2386 | 2 | null | 2379 | 2 | null | My answer would be the struggle between frequentist and Bayesian statistics. When people ask you which you "believe in", this is not good! Especially for a scientific discipline.
| null | CC BY-SA 2.5 | null | 2010-09-05T11:43:35.673 | 2010-09-05T11:43:35.673 | null | null | 561 | null |
2387 | 2 | null | 2384 | 5 | null |
- You could use the Mean Absolute Error (the mean of $|F-O|$) or the Mean Squared Error (the mean of $(F-O)^2$); see the sketch below.
- If your forecast method is unbiased, then the best estimate of a future forecast error is 0 and the variance of the forecast error can be estimated by the MSE.
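A small numerical sketch in R using the values from the question:
```
# Forecast errors for the years where the observation is known.
forecast <- c(120, 166, 150)
observed <- c(125, 173, 167)
err <- forecast - observed

mae <- mean(abs(err))   # Mean Absolute Error
mse <- mean(err^2)      # Mean Squared Error

# Under an unbiased forecast, a rough predictive interval for the 2010
# observation is forecast +/- 2 * sqrt(MSE).
152 + c(-2, 2) * sqrt(mse)
```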
| null | CC BY-SA 2.5 | null | 2010-09-05T12:50:11.763 | 2010-09-05T12:50:11.763 | null | null | 159 | null |
2388 | 2 | null | 2385 | 2 | null | I would suggest you check out either box-plots (if you have an intro text to R, box plots always seem to be one of the first plots they use), or you can plot the means of each group on the Y axis and use the X-axis to represent each of your 190 work areas (and then maybe put error bars representing a confidence interval for the estimate of the mean).
You can plot each of the likert scales next to each other, and use a different color to represent the means, and as long as you choose distinct colors and the same order for your likert scales across work areas people will be able to distinguish them.
But I personally would only plot the scales next to each other if they are expected to have some sort of relationship with each other (if scale A is high I might expect scale B to be low). If they are not you could panel the charts on top of each other (check out the lattice package in R, and [here](http://processtrends.com/pg_charts_stacked_transformed_data.htm) is what I think is a good example with sample R code), and so you only need to label one X-axis (this also allows you to use different Y-axis scales if the scales are not easily plotted on all the same Y-levels, although by your description this doesn't seem to be the case). You could also include response rate as one of the panels (maybe represented as a bar).
What is difficult with 190 different groups is you will have trouble distinguishing different work groups unless you highlight specific groups, but any chart with all of the groups will be excellent to examine overall trends (and maybe spot outliers). Also if your work groups have no logical ordering or higher order groupings the orientation on the axis will be arbitrary. You could order according to values on one of the scales (or according to response rate).
Also I am personally learning R at the moment, and I would highly suggest you check out the [Use R!](http://www.springer.com/series/6991?detailsPage=titles) series by Springer. The book A Beginner's Guide to R is one of the best intro texts I have encountered, and they have books on ggplot2 and the lattice packages that would likely help you.
Finally, if you post some examples of plots and the code used to make them, some of the more R-savvy crowd on the forum will likely be able to give you suggestions. When you do finish, come back and post your results!
HTH and good luck.
| null | CC BY-SA 2.5 | null | 2010-09-05T13:22:42.097 | 2010-09-05T13:22:42.097 | null | null | 1036 | null |
2389 | 2 | null | 2385 | 6 | null | I find a heatmap to be one of the most effective ways of summarizing large amounts of multi-dimensional data in a confined space. The LearnR blog has [a nice example](http://learnr.wordpress.com/2010/01/26/ggplot2-quick-heatmap-plotting/) of creating one in ggplot2.
| null | CC BY-SA 2.5 | null | 2010-09-05T15:12:53.097 | 2010-09-05T15:12:53.097 | null | null | 5 | null |
2390 | 1 | 2394 | null | 7 | 4938 | I'm familiar with what the 2nd moment (variance) indicates as well as what the 3rd moment (skewness) indicates. I know that on a histogram the 4th moment (kurtosis) indicates the "peakedness" of the data. My question asks what are the practical implications/interpretations of a kurtotic distribution. I'm asking this because I haven't found a case yet where I thought the 4th moment was theoretically interesting and interpretable. I understand that such interpretations/implications are likely to be dataset specific, so I am looking for descriptions of example datasets where the kurtosis of the distribution was theoretically interesting and interpretable.
| What practical implications/interpretations are there of a kurtotic distribution? | CC BY-SA 2.5 | null | 2010-09-05T15:59:43.343 | 2013-11-27T09:55:50.423 | null | null | 196 | [
"definition",
"interpretation",
"kurtosis"
] |
2391 | 1 | 2424 | null | 68 | 78679 | Suppose that I have three populations with four, mutually exclusive characteristics. I take random samples from each population and construct a crosstab or frequency table for the characteristics that I am measuring. Am I correct in saying that:
- If I wanted to test whether there is any relationship between the populations and the characteristics (e.g. whether one population has a higher frequency of one of the characteristics), I should run a chi-squared test and see whether the result is significant.
- If the chi-squared test is significant, it only shows me that there is some relationship between the populations and characteristics, but not how they are related.
- Furthermore, not all of the characteristics need to be related to the population. For example, if the different populations have significantly different distributions of characteristics A and B, but not of C and D, then the chi-squared test may still come back as significant.
- If I wanted to measure whether or not a specific characteristic is affected by the population, then I can run a test for equal proportions (I have seen this called a z-test, or as prop.test() in R) on just that characteristic.
In other words, is it appropriate to use the `prop.test()` to more accurately determine the nature of a relationship between two sets of categories when the chi-squared test says that there is a significant relationship?
| What is the relationship between a chi squared test and test of equal proportions? | CC BY-SA 3.0 | null | 2010-09-05T16:35:45.123 | 2018-11-27T21:26:23.710 | 2018-11-27T21:26:23.710 | 28666 | 1195 | [
"chi-squared-test",
"proportion",
"contingency-tables",
"z-test"
] |
2392 | 2 | null | 2377 | 6 | null | My answer is the beginnings of a total hack, but I am not aware of any established way to do what you ask.
My first step would be to rank-order your dataset so you can find the proportional position of each value, and then transform those positions to a normal distribution; this method was used in Reynolds & Hewitt, 1996. See the sample R code below in PROCMiracle.
Once the distribution is normal, then the problem has been turned on its head - a matter of adjusting kurtosis but not skew. A google search suggested that one could follow the procedures of John & Draper, 1980 to adjust the kurtosis but not the skew - but I could not replicate that result.
My attempts to develop a crude spreading/narrowing function that takes the input (normalized) value and adds or subtracts a value from it proportional to the position of the variable on the normal scale does result in a monotonic adjustment, but in practice tends to create a bimodal distribution though one that has the desired skewness and kurtosis values.
I realize this is not a complete answer, but I thought it might provide a step in the right direction.
```
PROCMiracle <- function(datasource, normalrank = "BLOM")
{
  # Pick the rank-adjustment constants for the chosen normalisation method
  # (Blom, Tukey, Van der Waerden, or none).
  switch(normalrank,
         "BLOM" = {
           rmod <- -3/8
           nmod <- 1/4
         },
         "TUKEY" = {
           rmod <- -1/3
           nmod <- 1/3
         },
         "VW" = {
           rmod <- 0
           nmod <- 1
         },
         "NONE" = {
           rmod <- 0
           nmod <- 0
         }
  )
  print("This may be doing something strange with NA values! Beware!")
  # Convert ranks to proportional positions, map them through the standard
  # normal quantile function, and standardise the result.
  return(scale(qnorm((rank(datasource) + rmod) / (length(datasource) + nmod))))
}
```
| null | CC BY-SA 2.5 | null | 2010-09-05T17:07:00.207 | 2010-09-05T17:07:00.207 | null | null | 196 | null |
2394 | 2 | null | 2390 | 9 | null | The kurtosis also indicates the "fat tailedness" of the distribution. A distribution with high kurtosis will have many extreme events (events far away from the center) and many "typical" events (events near the center). A distribution with low kurtosis will have events a moderate distance from the center.
This picture may help: [http://mvpprograms.com/help/images/KurtosisPict.jpg](http://mvpprograms.com/help/images/KurtosisPict.jpg)
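A quick R sketch of the fat-tail point (simulated draws, excess kurtosis computed by hand):
```
# Sample excess kurtosis: fourth standardised moment minus 3.
excess_kurtosis <- function(x) mean((x - mean(x))^4) / (mean((x - mean(x))^2)^2) - 3

set.seed(1)
excess_kurtosis(rnorm(1e5))        # close to 0 for the normal
excess_kurtosis(rt(1e5, df = 5))   # clearly positive: fat tails and a sharp peak
```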
| null | CC BY-SA 2.5 | null | 2010-09-05T18:55:00.790 | 2010-09-05T18:55:00.790 | null | null | 1146 | null |
2395 | 2 | null | 2379 | 4 | null | You might check out Harvard's ["Hard Problems in the Social Sciences' colloquium](http://socialscience.fas.harvard.edu/hardproblems) held earlier this year. Several of these talks offer issues in the use of statistics and modeling in the social sciences.
| null | CC BY-SA 2.5 | null | 2010-09-05T19:18:58.237 | 2010-09-05T19:18:58.237 | null | null | 401 | null |
2396 | 2 | null | 2379 | 13 | null | I'm not sure how big they are, but there is a [Wikipedia page](http://en.wikipedia.org/wiki/Unsolved_problems_in_Statistics) for unsolved problems in statistics. Their list includes:
> Inference and testing
>
> - Systematic errors
> - Admissibility of the Graybill–Deal estimator
> - Combining dependent p-values in Meta-analysis
> - Behrens–Fisher problem
> - Multiple comparisons
> - Open problems in Bayesian statistics
>
> Experimental design
>
> - Problems in Latin squares
>
> Problems of a more philosophical nature
>
> - Sampling of species problem
> - Doomsday argument
> - Exchange paradox
| null | CC BY-SA 3.0 | null | 2010-09-05T19:19:03.197 | 2018-01-13T14:36:38.283 | 2018-01-13T14:36:38.283 | 7290 | 196 | null |
2397 | 1 | 2408 | null | 11 | 7380 | I'm looking for the limiting distribution of multinomial distribution over d outcomes. IE, the distribution of the following
$$\lim_{n\to \infty} n^{-\frac{1}{2}} \mathbf{X_n}$$
where $\mathbf{X_n}$ is a vector-valued random variable with density $f_n(\mathbf{x})$ for $\mathbf{x}$ such that $\sum_i x_i=n$, $x_i\in \mathbb{Z}, x_i\ge 0$, and density 0 for all other $\mathbf{x}$, where
$$f_{n}(\mathbf{x})=n!\prod_{i=1}^d\frac{p_i^{x_i}}{x_i!}$$
I found one form in Larry Wasserman's "All of Statistics" Theorem 14.6, [page 237](http://yaroslavvb.com/upload/wasserman-multinomial.pdf) but for limiting distribution it gives Normal with a singular covariance matrix, so I'm not sure how to normalize that. You could project the random vector into (d-1)-dimensional space to make covariance matrix full-rank, but what projection to use?
Update 11/5
Ray Koopman has a nice [summary](http://groups.google.com/group/sci.stat.math/browse_thread/thread/add687f2a741c5f6/1178f6b77c8b07c7?q=singular+author:koopman#1178f6b77c8b07c7) of the problem of a singular Gaussian. Basically, a singular covariance matrix represents perfect correlation between variables, which is not possible to represent with a Gaussian density. However, one could get a Gaussian distribution for the conditional density, conditioned on the fact that the value of the random vector is valid (the components add up to $n$ in the case above).
The difference for the conditional Gaussian is that the inverse is replaced with the pseudo-inverse, and the normalization factor uses the "product of non-zero eigenvalues" instead of the "product of all eigenvalues". Ian Frisce gives a [link](http://fedc.wiwi.hu-berlin.de/xplore/tutorials/mvahtmlnode34.html) with some details.
There's also a way to express normalization factor of conditional Gaussian without referring to eigenvalues,
[here](https://math.stackexchange.com/questions/4106/normalization-factor-for-restricted-density)'s a derivation
| Asymptotic distribution of multinomial | CC BY-SA 3.0 | null | 2010-09-05T19:52:53.743 | 2020-08-06T19:10:04.343 | 2017-04-13T12:19:38.853 | -1 | 511 | [
"asymptotics",
"multinomial-distribution"
] |
2398 | 2 | null | 2397 | 2 | null | It looks to me like Wasserman's covariance matrix is singular; to see this, multiply it by a vector of $d$ ones, i.e. $[1,1,1,\dots,1]^\prime$ of length $d$.
[Wikipedia](http://en.wikipedia.org/wiki/Multinomial_distribution) gives the same covariance matrix anyway. If we restrict ourselves to just a binomial distribution then the standard central limit theorem tells us that the binomial distribution (after appropriate scaling) converges to the normal as $n$ gets big (see [wikipedia again](http://en.wikipedia.org/wiki/Binomial_distribution#Normal_approximation)). Applying similar ideas you should be able to show that an appropriately scaled multinomial is going to converge in distribution to the multivariate normal, i.e. each marginal distribution is just a binomial and converges to the normal distribution, and the covariances between them are known.
So, I am very confident you will find that the distribution of
$$\frac{X_n - np}{\sqrt{n}}$$
converges to the multivariate normal with zero mean and covariance
$$\frac{C}{n}$$
where $C$ is the covariance matrix of the multinomial in question and $p$ is the vector of probabilities $[p_1,\dots,p_d]$.
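A quick simulation sketch in R to check this (arbitrary choice of $p$ and $n$): the sample covariance of $(X_n - np)/\sqrt{n}$ should approach $\operatorname{diag}(p) - pp'$, which is singular because the components sum to a constant.
```
# Check the limiting covariance of the scaled, centred multinomial.
set.seed(1)
p <- c(0.2, 0.3, 0.5)
n <- 1000
draws <- rmultinom(10000, size = n, prob = p)   # d x n_sim matrix of counts
z <- (draws - n * p) / sqrt(n)                  # centred and scaled
round(cov(t(z)), 3)                             # approx diag(p) - p %*% t(p)
round(diag(p) - p %*% t(p), 3)                  # theoretical (singular) covariance
```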
| null | CC BY-SA 2.5 | null | 2010-09-05T22:04:14.960 | 2010-09-05T22:20:30.703 | 2010-09-05T22:20:30.703 | 352 | 352 | null |
2399 | 2 | null | 2390 | 0 | null | There is the [Kurtosis risk](http://en.wikipedia.org/wiki/Kurtosis_risk) which isn't explained fantastically well at that link.
In general, measures of normality (or deviation therefrom) are crucial if you are using analyses that assume normality. For example, the standard workhorse Pearson-r correlation coefficient is severely sensitive to outliers and becomes essentially invalid as excess kurtosis deviates from 0.
The K² test is often used to check a distribution for normality and incorporates the sample kurtosis as a factor.
| null | CC BY-SA 2.5 | null | 2010-09-05T22:41:39.633 | 2010-09-05T22:41:39.633 | null | null | 869 | null |
2400 | 2 | null | 322 | 3 | null | Grünwald and Dawid's paper [Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory](http://projecteuclid.org/euclid.aos/1091626173) discuss generalisations of the traditional notion of entropy. Given a loss, its associated entropy function is the mapping from a distribution to the minimal achievable expected loss for that distribution. The usual entropy function is the generalised entropy associated with the log loss. Other choices of losses yield different entropy such as the Rényi entropy.
| null | CC BY-SA 2.5 | null | 2010-09-05T23:52:45.783 | 2010-09-05T23:52:45.783 | null | null | 1201 | null |
2401 | 1 | 2409 | null | 12 | 2591 | I've never liked how people typically analyze data from Likert scales as if error were continuous & Gaussian when there are reasonable expectations that these assumptions are violated at least at the extremes of the scales. What do you think of the following alternative:
If the response takes value $k$ on an $n$-point scale, expand that data to $n$ trials, $k$ of which have the value 1 and $n-k$ of which have the value 0. Thus, we're treating response on a Likert scale as if it is the overt aggregate of a covert series of binomial trials (in fact, from a cognitive science perspective, this is actually an appealing model for the mechanisms involved in such decision making scenarios). With the expanded data, you can now use a mixed effects model specifying respondent as a random effect (also question as a random effect if you have multiple questions) and using the binomial link function to specify the error distribution.
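A rough sketch of what this would look like in R with lme4 (the data frame `d` and its columns `rating`, `condition`, and `subject` are placeholders; with a binomial link the successes/failures can be passed directly instead of literally expanding the rows):
```
# Assumed data frame 'd' with one row per response:
#   rating    - Likert response k on an n-point scale
#   condition - fixed-effect predictor of interest
#   subject   - respondent identifier
library(lme4)
n_points <- 7                      # assuming a 7-point scale for illustration
d$successes <- d$rating
d$failures  <- n_points - d$rating
fit <- glmer(cbind(successes, failures) ~ condition + (1 | subject),
             family = binomial, data = d)
summary(fit)
```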
Can anyone see any assumption violations or other detrimental aspects of this approach?
| Is it appropriate to treat n-point Likert scale data as n trials from a binomial process? | CC BY-SA 3.0 | null | 2010-09-06T00:58:21.607 | 2020-11-15T08:12:12.180 | 2020-11-15T08:11:25.820 | 930 | 364 | [
"binomial-distribution",
"likert",
"scales",
"psychometrics",
"psychology"
] |
2402 | 2 | null | 2401 | 9 | null | If you really wish to abandon the assumption of interval-level data for Likert scales, I would suggest that you assume the data to follow an ordered logit or probit model instead. Likert scales usually measure strength of response, and hence higher values should indicate a stronger response on the underlying item of interest.
Suppose that you have an $H$-category scale and that $S$ represents the unobserved strength of response on the item of interest. Then you can assume the following response model:
$y = 1$ if $S \le \alpha_1$
$y = h$ if $\alpha_{h-1} < S \le \alpha_h$ for $h = 2, 3, \ldots, H-1$
$y = H$ if $\alpha_{H-1} < S < \infty$
Assuming that $S$ follows a normal distribution with an unknown mean and variance would give an ordered probit model.
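In R this corresponds to something like the following sketch (the data frame `d` and the predictors `x1`, `x2` are placeholders; `polr` in MASS estimates the thresholds $\alpha_h$ along with the coefficients):
```
# Ordered probit fit: y is the observed Likert response (an ordered factor),
# x1 and x2 are hypothetical predictors of the latent strength S.
library(MASS)
d$y <- factor(d$y, ordered = TRUE)
fit <- polr(y ~ x1 + x2, data = d, method = "probit", Hess = TRUE)
summary(fit)   # coefficients plus the estimated cutpoints (the alpha's)
```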
| null | CC BY-SA 2.5 | null | 2010-09-06T01:15:20.940 | 2010-09-06T01:21:13.017 | 2010-09-06T01:21:13.017 | null | null | null |
2403 | 2 | null | 2377 | 1 | null | Another possible interesting technique has come to mind, though this doesn't quite answer the question, is to transform a sample to have a fixed sample L-skew and sample L-kurtosis (as well as a fixed mean and L-scale). These four constraints are linear in the order statistics. To keep the transform monotonic on a sample of $n$ observations would then require another $n-1$ equations. This could then be posed as a quadratic optimization problem: minimize the $\ell_2$ norm between the sample order statistics and the transformed version subject to the given constraints. This is a kind of wacky approach, though. In the original question, I was looking for something more basic and fundamental. I also was implicitly looking for a technique which could be applied to individual observations, independent of having an entire cohort of samples.
| null | CC BY-SA 2.5 | null | 2010-09-06T02:32:10.617 | 2010-09-06T02:32:10.617 | null | null | 795 | null |
2404 | 2 | null | 2390 | 2 | null | I seem to remember that the median has a smaller standard error than the mean when the samples are drawn from a leptokurtic distribution, but the mean has a smaller standard error when the distribution is platykurtic. I think I read this in one of Wilcox's books. Thus the kurtosis may dictate which kinds of location tests one uses.
| null | CC BY-SA 2.5 | null | 2010-09-06T02:58:30.540 | 2010-09-06T02:58:30.540 | null | null | 795 | null |
2405 | 1 | 2406 | null | 5 | 653 | Suppose that I have a market research survey with the question "What brand of television are you planning on buying" and then also have a choice that is "I don't plan on buying a television." Respondents may choose more than one brand, but they may not make any other choices if they select that they do not plan on buying a television.
If I wanted to do an analysis of television brands among only those people who said they were going to buy a television, can I remove all of the respondents that said they would not buy a television, then recalculate the proportions for each brand with the new base number of observations?
If this skews the demographic distribution of my sample, is that okay? (E.g. the full sample has 30% Caucasians and 10% Asians, but the sample of people who say they will buy televisions is 30% Asians and 10% Caucasians.) It seems like this should be okay, since we would not expect the two populations (TV buyers and people in general) to be the same.
| Survey questionnaire rebasing after removal of a selection | CC BY-SA 2.5 | null | 2010-09-06T04:14:20.773 | 2010-09-16T12:43:13.977 | 2010-09-16T12:43:13.977 | null | 1195 | [
"survey"
] |
2406 | 2 | null | 2405 | 4 | null | The short answer: what you propose to do sounds reasonable.
It often occurs in survey research that a question only applies to a subset of the population.
In such situations you typically want to say:
- "Of American/Australian/French/etc. adults where this question is applicable, $X\%$ believes/intends to do/thinks/etc. ..."
Representative: As you say, in survey research you are often concerned with your findings being representative of a target population. If we assume that your initial sample is representative, then taking just the subset that intend to buy a television will be a representative sample of your population that intends to buy a television. This is true even if the subset that intends to buy a television is different to the subset that does not intend to buy a television in terms of demographics or some other factor in addition
to television purchasing intention.
In addition, you may want to produce cross-tabulations of purchase intention with demographics.
Split Question: I suppose you also need to think about what is the best split-question. Consumers vary in the degree to which they are intending to purchase a television and the degree to which they have thought about brands. Thus, you need to think about your research question. For example, you might ask the question: "if you were going to purchase a television, which brand would you consider?" This would be good for assessing longer-term brand purchasing intentions. Your structure focuses on the subset of consumers who are planning to purchase.
| null | CC BY-SA 2.5 | null | 2010-09-06T06:45:09.040 | 2010-09-06T08:06:05.930 | 2010-09-06T08:06:05.930 | 183 | 183 | null |
2407 | 2 | null | 2385 | 3 | null | To give you a few more things to look at:
- Principal components - look at some previous answers about PC. In particular, this answer may be helpful.
- Cluster analysis. This page gives quite a nice overall in R.
I would recommend trying as many things as possible and see what comes out. Once you have your data in R in a reasonable format, it shouldn't take too long to try these things.
| null | CC BY-SA 2.5 | null | 2010-09-06T07:48:01.307 | 2010-09-06T07:48:01.307 | 2017-04-13T12:44:33.237 | -1 | 8 | null |
2408 | 2 | null | 2397 | 7 | null | The covariance is still non-negative definite (so is a valid [multivariate normal distribution](http://en.wikipedia.org/wiki/Multivariate_normal_distribution)), but not positive definite: what this means is that (at least) one element of the random vector is a linear combination of the others.
As a result, any draw from this distribution will always lie on a subspace of $R^d$. As a consequence, this means it is not possible to define a density function (as the distribution is concentrated on the subspace: think of the way a univariate normal will concentrate at the mean if the variance is zero).
However, as suggested by Robby McKilliam, in this case you can drop the last element of the random vector. The covariance matrix of this reduced vector will be the original matrix, with the last column and row dropped, which will now be positive definite, and will have a density (this trick will work in other cases, but you have to be careful which element you drop, and you may need to drop more than one).
| null | CC BY-SA 2.5 | null | 2010-09-06T10:36:39.710 | 2010-09-06T11:28:12.913 | 2010-09-06T11:28:12.913 | 495 | 495 | null |
2409 | 2 | null | 2401 | 17 | null | I don't know of any articles related to your question in the psychometric literature. It seems to me that ordered logistic models allowing for random effect components can handle this situation pretty well.
I agree with @Srikant and think that a proportional odds model or an ordered probit model (depending on the link function you choose) might better reflect the intrinsic coding of Likert items, and their typical use as rating scales in opinion/attitude surveys or questionnaires.
Other alternatives are: (1) use of adjacent instead of proportional or cumulative categories (where there is a connection with log-linear models); (2) use of item-response models like the partial-credit model or the rating-scale model (as was mentioned in my response on [Likert scales analysis](https://stats.stackexchange.com/questions/2374/likert-scales-analysis/2375#2375)). The latter case is comparable to a mixed-effects approach, with subjects treated as random effects, and is readily available in the SAS system (e.g., [Fitting mixed-effects models for repeated ordinal outcomes with the NLMIXED procedure](http://brm.psychonomic-journals.org/content/34/2/151.full.pdf)) or R (see [vol. 20](http://www.jstatsoft.org/v20) of the Journal of Statistical Software). You might also be interested in the discussion provided by John Linacre about [Optimizing Rating Scale Category Effectiveness](http://www.winsteps.com/a/linacre-optimizing-category.pdf).
The following papers may also be useful:
- Wu, C-H (2007). An Empirical Study on the Transformation of Likert-scale Data to Numerical Scores. Applied Mathematical Sciences, 1(58): 2851-2862.
- Rost, J and Luo, G (1997). An Application of a Rasch-Based Unfolding Model to a Questionnaire on Adolescent Centrism. In Rost, J and Langeheine, R (Eds.), Applications of latent trait and latent class models in the social sciences, New York: Waxmann.
- Lubke, G and Muthen, B (2004). Factor-analyzing Likert-scale data under the assumption of multivariate normality complicates a meaningful comparison of observed groups or latent classes. Structural Equation Modeling, 11: 514-534.
- Nering, ML and Ostini, R (2010). Handbook of Polytomous Item Response Theory Models. Routledge Academic
- Bender R and Grouven U (1998). Using binary logistic regression models for ordinal data with non-proportional odds. Journal of Clinical Epidemiology, 51(10): 809-816. (Cannot find the pdf but this one is available, Ordinal logistic regression in medical research)
| null | CC BY-SA 3.0 | null | 2010-09-06T10:43:15.640 | 2013-01-17T14:37:11.723 | 2017-04-13T12:44:37.793 | -1 | 930 | null |
2410 | 1 | 2411 | null | 14 | 1942 | A colleague in my office said to me today "Tree models aren't good because they get caught by extreme observations".
A search here resulted in [this thread](https://stats.stackexchange.com/questions/1292/what-is-the-weak-side-of-decision-trees) that basically supports the claim.
Which leads me to the question - under what situation can a CART model be robust, and how is that shown?
| Can CART models be made robust? | CC BY-SA 2.5 | null | 2010-09-06T14:59:09.557 | 2022-04-27T13:37:17.247 | 2017-04-13T12:44:37.583 | -1 | 253 | [
"regression",
"classification",
"robust",
"cart"
] |
2411 | 2 | null | 2410 | 15 | null | No, not in their present forms.
The problem is that convex loss functions cannot be made to be robust to contamination by outliers (this is a well known fact since the 70's but keeps being rediscovered periodically, see for instance this paper for one recent such re-discovery):
[http://www.cs.columbia.edu/~rocco/Public/mlj9.pdf](http://www.cs.columbia.edu/%7Erocco/Public/mlj9.pdf)
Now, in the case of regression trees, the fact that CART uses marginals (or alternatively univariate projections) can be used:
one can think of a version of CART where the s.d. criterion is replaced by a more
robust counterpart (MAD or better yet, Qn estimator).
# Edit:
I recently came across an older paper implementing the approach suggested above (using a robust M-estimator of scale instead of the MAD). This will impart robustness to "y" outliers for CART/RFs (but not to outliers located in the design space, which will affect the estimates of the model's hyper-parameters). See:
Galimberti, G., Pillati, M., & Soffritti, G. (2007). Robust regression trees based on M-estimators.
Statistica, LXVII, 173–190.
| null | CC BY-SA 4.0 | null | 2010-09-06T15:20:06.863 | 2019-06-01T09:03:30.480 | 2020-06-11T14:32:37.003 | -1 | 603 | null |
2412 | 2 | null | 2410 | 6 | null | You might consider using [Breiman's](https://en.wikipedia.org/wiki/Leo_Breiman) bagging or [random forests](https://en.wikipedia.org/wiki/Random_forest). One good reference is [Breiman "Bagging Predictors"](https://doi.org/10.1007/BF00058655) (1996). Also summarized in Clifton Sutton's ["Classification and Regression Trees, Bagging, and Boosting"](http://mason.gmu.edu/%7Ecsutton/vt6.pdf) in the Handbook of Statistics.
You can also see [Andy Liaw and Matthew Wiener R News discussion](https://cran.r-project.org/doc/Rnews/Rnews_2002-3.pdf) of the randomForest package.
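A minimal sketch of the randomForest package in action (using a built-in dataset for illustration):
```
# Random forest regression on a built-in dataset; averaging many trees grown
# on bootstrap samples reduces the influence of individual extreme points.
library(randomForest)
set.seed(1)
fit <- randomForest(medv ~ ., data = MASS::Boston, ntree = 500)
print(fit)        # out-of-bag error estimate
varImpPlot(fit)   # variable importance
```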
| null | CC BY-SA 4.0 | null | 2010-09-06T15:20:57.673 | 2022-04-27T13:37:17.247 | 2022-04-27T13:37:17.247 | 79696 | 5 | null |
2413 | 2 | null | 328 | 4 | null | You should check out [http://area51.stackexchange.com/proposals/117/quantitative-finance?referrer=b3Z9BBygZU6P1xPZSakPmQ2](http://area51.stackexchange.com/proposals/117/quantitative-finance?referrer=b3Z9BBygZU6P1xPZSakPmQ2); they are trying to start one on stackexchange.com.
| null | CC BY-SA 2.5 | null | 2010-09-06T16:26:44.860 | 2010-09-06T16:26:44.860 | null | null | 1137 | null |
2415 | 2 | null | 2379 | 48 | null | A big question should involve key issues of statistical methodology or, because statistics is entirely about applications, it should concern how statistics is used with problems important to society.
This characterization suggests the following should be included in any consideration of big problems:
- How best to conduct drug trials. Currently, classical hypothesis testing requires many formal phases of study. In later (confirmatory) phases, the economic and ethical issues loom large. Can we do better? Do we have to put hundreds or thousands of sick people into control groups and keep them there until the end of a study, for example, or can we find better ways to identify treatments that really work and deliver them to members of the trial (and others) sooner?
- Coping with scientific publication bias. Negative results are published much less simply because they just don't attain a magic p-value. All branches of science need to find better ways to bring scientifically important, not just statistically significant, results to light. (The multiple comparisons problem and coping with high-dimensional data are subcategories of this problem.)
- Probing the limits of statistical methods and their interfaces with machine learning and machine cognition. Inevitable advances in computing technology will make true AI accessible in our lifetimes. How are we going to program artificial brains? What role might statistical thinking and statistical learning have in creating these advances? How can statisticians help in thinking about artificial cognition, artificial learning, in exploring their limitations, and making advances?
- Developing better ways to analyze geospatial data. It is often claimed that the majority, or vast majority, of databases contain locational references. Soon many people and devices will be located in real time with GPS and cell phone technologies. Statistical methods to analyze and exploit spatial data are really just in their infancy (and seem to be relegated to GIS and spatial software which is typically used by non-statisticians).
| null | CC BY-SA 2.5 | null | 2010-09-06T17:27:01.890 | 2010-09-06T17:27:01.890 | null | null | 919 | null |
2416 | 5 | null | null | 0 | null | Overview
[Mixed models](http://en.wikipedia.org/wiki/Mixed_model) are linear models that include both fixed effects and random effects*. They are used to model longitudinal or nested data; such data do not have independent errors and mixed models can account for the arising correlations. Mixed models are also known as multilevel or hierarchical linear models.
A classic example is the estimation of test scores of students: if test scores are correlated within classes, schools, districts, etc., mixed models allow the modeler to simultaneously estimate the differences between individual students and between the groups to which they belong (with the possibility of including covariates at all levels).
In a mixed model, study units are thought of as sampled from a population; the fixed effects are estimates of the population average effect, whereas the random effects are specific to the study units. In matrix form, a mixed effects model might be:
$$
\bf Y=X\boldsymbol\beta + Zb + \boldsymbol\varepsilon
$$
where $\bf X$ is the design matrix, $\boldsymbol\beta$ is a vector of the population average effects, $\bf Z$ is a subset of the columns of $\bf X$, $\bf b$ is a vector of the unit specific deviations from the population effects, and $\boldsymbol \varepsilon$ is a vector of random errors.
* Note that here we follow terminology used in statistics, social sciences, and biostatistics; similar terminology ("fixed effects", "random effects") is also used in econometrics, but the meaning is different.
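A minimal sketch of fitting such a model in R with lme4 (the data frame `students` and its columns `score`, `ses`, and `school` are placeholders):
```
# Random-intercept model: a fixed effect for a student-level covariate, with
# scores correlated within schools via a school-level random intercept.
library(lme4)
fit <- lmer(score ~ ses + (1 | school), data = students)
summary(fit)  # fixed effects (population average) and variance components
```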
References
StatsExchangers often recommend the following resources for learning more about mixed models:
- Modern Applied Statistics with S by Venables and Ripley (2002)
- "Random Effects Models for Longitudinal Data" (Biometrics
38:963—974) by Laird and Ware (1982)
- Analyzing linguistic data by Baayen (2008)
- Hierarchical Linear Models by Raudenbush and Bryk (2001)
- Data Analysis Using Regression and Multilevel/Hierarchical Models by Gelman and Hill (2006)
- Applied Longitudinal Data Analysis by Singer and Willett (2003).
Software packages
Mixed models are available in the following statistical packages:
- lme4 and nlme for R
- PROC MIXED and GLIMMIX for SAS
- MLwiN
- xtreg, xtmixed, xtlogit, xtmelogit, xtmepoisson, and other xt* commands; user-contributed package GLLAMM for Stata
- Mplus
- HLM
| null | CC BY-SA 3.0 | null | 2010-09-06T18:24:38.507 | 2015-10-25T00:57:09.050 | 2015-10-25T00:57:09.050 | 28666 | null | null |
2417 | 4 | null | null | 0 | null | Mixed (aka multilevel or hierarchical) models are linear models that include both fixed effects and random effects. They are used to model longitudinal or nested data. | null | CC BY-SA 3.0 | null | 2010-09-06T18:24:38.507 | 2015-12-15T01:07:11.833 | 2015-12-15T01:07:11.833 | 28666 | null | null |
2419 | 1 | 282321 | null | 15 | 11914 | Is there a good Python library for training boosted decision trees?
| Boosted decision trees in python? | CC BY-SA 3.0 | null | 2010-09-06T19:00:03.070 | 2022-06-18T07:57:59.773 | 2012-03-11T09:58:27.203 | null | 961 | [
"python",
"cart",
"boosting"
] |
2420 | 1 | null | null | 2 | 366 | I have some data values of type date/time (the last date that a resource was accessed) and I wish to chart this data on the y-axis against the different categories of resource on the x-axis.
What would be a sensible type of chart for displaying this sort of data? For example, is a histogram or bar chart satisfactory, or is there a more applicable graph type?
EDIT: Are bar charts only for displaying quantities? Can dates be considered quantities?
| Date/Time data on the y-axis | CC BY-SA 2.5 | null | 2010-09-06T19:00:42.580 | 2010-09-07T08:21:20.630 | 2010-09-07T07:49:42.283 | 414 | 414 | [
"data-visualization"
] |
2421 | 2 | null | 2419 | 12 | null | My first look would be at [Orange](http://www.ailab.si/orange/), which is a fully-featured app for ML, with a backend in Python. See e.g. [orngEnsemble](http://www.ailab.si/orange/doc/modules/orngEnsemble.htm).
Other promising projects are [mlpy](https://mlpy.fbk.eu/) and the [scikit.learn](http://scikit-learn.sourceforge.net/).
I know that [PyCV](http://pycv.sharkdolphin.com/) includes several boosting procedures, but apparently not for CART.
Also take a look at [MLboost](http://sourceforge.net/projects/mlboost/).
| null | CC BY-SA 2.5 | null | 2010-09-06T19:28:19.213 | 2010-09-06T19:35:13.690 | 2010-09-06T19:35:13.690 | 930 | 930 | null |
2422 | 2 | null | 2420 | 7 | null | I presume that you have only a few resources that you are interested in? If so, then histograms are fine, or you could also try box-plots:
```
#Some R code
#Create random dates for resource A & resource B
dates.a = as.Date(rnorm(100, 200, 100), origin="2008-01-01")
dates.b = as.Date(rnorm(100, 300, 50), origin="2008-01-01")
df = data.frame(dates = c(dates.a, dates.b),
                type = rep(c("A", "B"), each=100))
#Plot the two resources together using boxplots
boxplot(dates ~ type, data = df)
#See ?hist for histograms.
```
This gives:

| null | CC BY-SA 2.5 | null | 2010-09-06T20:25:58.380 | 2010-09-06T20:25:58.380 | null | null | 8 | null |
2423 | 1 | 2426 | null | 13 | 4455 | Can anyone recommend some books that are considered to be standard references for classical (frequentist) statistics? That is, fairly comprehensive, and around long enough that typos and mistakes in the formulas have had a chance to be caught and corrected.
| Standard reference for classical mathematical statistics? | CC BY-SA 2.5 | null | 2010-09-06T20:28:53.090 | 2013-08-29T13:47:48.317 | 2010-09-08T06:40:30.013 | 183 | 511 | [
"references",
"mathematical-statistics"
] |
2424 | 2 | null | 2391 | 33 | null | Very short answer:
The chi-squared test (`chisq.test()` in R) compares the observed frequencies in each category of a contingency table with the expected frequencies (computed as the product of the marginal frequencies). It is used to determine whether the deviations between the observed and the expected counts are too large to be attributed to chance. Departure from independence is easily checked by inspecting residuals (try `?mosaicplot` or `?assocplot`, but also look at the `vcd` package). Use `fisher.test()` for an exact test (relying on the hypergeometric distribution).
The `prop.test()` function in R allows one to test whether proportions are comparable across groups or do not differ from theoretical probabilities. It is referred to as a $z$-test because the test statistic looks like this:
$$
z=\frac{(f_1-f_2)}{\sqrt{\hat p \left(1-\hat p \right) \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}
$$
where $f_i=x_i/n_i$ is the observed proportion of successes on line $i$ (out of $n_i$ observations), $\hat p=(x_1+x_2)/(n_1+n_2)$ is the pooled proportion, and the indices $(1,2)$ refer to the first and second line of your table.
In a two-way contingency table where $H_0:\; p_1=p_2$, this should yield comparable results to the ordinary $\chi^2$ test:
```
> tab <- matrix(c(100, 80, 20, 10), ncol = 2)
> chisq.test(tab)
Pearson's Chi-squared test with Yates' continuity correction
data: tab
X-squared = 0.8823, df = 1, p-value = 0.3476
> prop.test(tab)
2-sample test for equality of proportions with continuity correction
data: tab
X-squared = 0.8823, df = 1, p-value = 0.3476
alternative hypothesis: two.sided
95 percent confidence interval:
-0.15834617 0.04723506
sample estimates:
prop 1 prop 2
0.8333333 0.8888889
```
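To follow up on the exact test and the graphical checks mentioned above, still with the same `tab` (output not shown):
```
fisher.test(tab)                  # exact test, relying on the hypergeometric distribution
mosaicplot(tab, shade = TRUE)     # shaded Pearson residuals show where departures occur
```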
For analysis of discrete data with R, I highly recommend [R (and S-PLUS) Manual to Accompany Agresti’s Categorical Data Analysis (2002)](https://www.stat.ufl.edu/~aa/cda/Thompson_manual.pdf), from Laura Thompson.
| null | CC BY-SA 4.0 | null | 2010-09-06T20:39:55.953 | 2018-06-14T13:55:01.687 | 2018-06-14T13:55:01.687 | 77478 | 930 | null |
2425 | 2 | null | 2423 | 9 | null | I have found Statistical Inference by Casella and Berger to be a relatively comprehensive introduction.
| null | CC BY-SA 2.5 | null | 2010-09-06T21:47:23.083 | 2010-09-06T21:47:23.083 | null | null | 743 | null |
2426 | 2 | null | 2423 | 5 | null | E. L. Lehmann, Theory of Point Estimation, 1983, and its companion book, Testing Statistical Hypotheses.
(NB: The latest edition of TPE, coauthored with George Casella, has not been getting good reviews on Amazon, but the original is still a classic.)
| null | CC BY-SA 3.0 | null | 2010-09-06T22:05:28.123 | 2013-08-29T13:47:48.317 | 2013-08-29T13:47:48.317 | 22047 | 919 | null |
2427 | 1 | 2442 | null | 13 | 4019 | I would like to solve [Project Euler 213](http://projecteuler.net/index.php?section=problems&id=213) but don't know where to start because I'm a layperson in the field of statistics. Note that an accurate answer is required, so the Monte Carlo method won't work. Could you recommend some statistics topics for me to read on? Please do not post the solution here.
>
Flea Circus
A 30×30 grid of squares contains 900 fleas, initially one flea per square.
When a bell is rung, each flea jumps to an adjacent square at random (usually 4 possibilities, except for fleas on the edge of the grid or at the corners).
What is the expected number of unoccupied squares after 50 rings of the bell? Give your answer rounded to six decimal places.
| How should one approch Project Euler problem 213 ("Flea Circus")? | CC BY-SA 3.0 | null | 2010-09-06T22:44:39.583 | 2016-12-01T21:53:26.533 | 2016-12-01T10:07:28.067 | 28666 | 18 | [
"self-study",
"monte-carlo",
"markov-process"
] |
2428 | 2 | null | 2427 | 1 | null | I suspect that some knowledge of discrete-time [Markov chains](http://en.wikipedia.org/wiki/Markov_chain) could prove useful.
| null | CC BY-SA 2.5 | null | 2010-09-06T23:09:16.800 | 2010-09-06T23:09:16.800 | null | null | 495 | null |
2429 | 2 | null | 2423 | 5 | null | I'd recommend [Theory of Statistics](http://rads.stackoverflow.com/amzn/click/0387945466) by Mark Schervish.
| null | CC BY-SA 2.5 | null | 2010-09-07T02:04:47.937 | 2010-09-07T02:04:47.937 | null | null | 881 | null |
2430 | 1 | null | null | 3 | 1738 | I am trying to identify approximately 3% of the population that has some characteristic feature. A standard decision tree or logistic regression gives too many false positives. Is there a chance that a rule-based classifier can improve performance? I would like to get roughly 75% recall with 95% precision (i.e., false positives <= 5% of predicted positives).
| When does a rule-based classifier outperform decision trees? | CC BY-SA 2.5 | null | 2010-09-07T03:12:09.853 | 2022-04-30T12:57:37.987 | 2010-09-07T08:05:20.140 | null | null | [
"machine-learning",
"classification"
] |
2431 | 2 | null | 2423 | 3 | null | [All of Statistics](http://rads.stackoverflow.com/amzn/click/0387402721)
| null | CC BY-SA 2.5 | null | 2010-09-07T03:19:29.143 | 2010-09-07T03:19:29.143 | null | null | 183 | null |
2432 | 1 | 2445 | null | 15 | 2515 |
- Is there a modelling technique like LOESS that allows for zero, one, or more discontinuities, where the timing of the discontinuities is not known a priori?
- If a technique exists, is there an existing implementation in R?
| LOESS that allows discontinuities | CC BY-SA 2.5 | null | 2010-09-07T03:24:59.747 | 2017-11-01T11:32:17.717 | 2017-11-01T11:32:17.717 | 28666 | 183 | [
"r",
"regression",
"curve-fitting",
"change-point",
"loess"
] |
2433 | 2 | null | 2432 | 6 | null | Here are some methods and associated R packages to solve this problem
Wavelet thresolding estimation in regression allows for discontonuities. You may use the package wavethresh in R.
A lot of tree based methods (not far from the idea of wavelet) are usefull when you have disconitnuities. Hence package treethresh, package tree !
In the familly of "local maximum likelihood" methods... among others:
Work of Pozhel and Spokoiny: Adaptive weights Smoothing (package aws)
Work by Catherine Loader: package locfit
I guess any kernel smoother with locally varying bandwidth makes the point but I don't know R package for that.
note: I don't really get what is the difference between LOESS and regression... is it the idea that in LOESS alrgorithms should be "on line" ?
| null | CC BY-SA 2.5 | null | 2010-09-07T05:12:15.380 | 2011-02-17T17:10:30.397 | 2011-02-17T17:10:30.397 | 223 | 223 | null |
2434 | 2 | null | 2427 | 7 | null | Could you not iterate through the probabilities of occupation of the cells for each flea. That is, flea k is initially in cell (i(k),j(k)) with probability 1. After 1 iteration, he has probability 1/4 in each of the 4 adjacent cells (assuming he's not on an edge or in a corner). Then the next iteration, each of those quarters gets "smeared" in turn. After 50 iterations you have a matrix of occupation probabilities for flea k. Repeat over all 900 fleas (if you take advantage of symmetries this reduces by nearly a factor of 8) and add the probabilities (you don't need to store all of them at once, just the current flea's matrix (hmm, unless you are very clever, you may want an additional working matrix) and the current sum of matrices). It looks to me like there are lots of ways to speed this up here and there.
This involves no simulation at all. However, it does involve quite a lot of computation; it should not be very hard to work out the simulation size required to give the answers to somewhat better than 6 dp accuracy with high probability and figure out which approach will be faster. I expect this approach would beat simulation by some margin.
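To make the "smearing" step concrete without giving away the final answer (as the question requests), one iteration for a single flea's occupation probabilities could look like this in R:
```
smear <- function(P) {
  n <- nrow(P)
  Q <- matrix(0, n, n)
  for (i in 1:n) for (j in 1:n) {
    nb <- rbind(c(i - 1, j), c(i + 1, j), c(i, j - 1), c(i, j + 1))
    nb <- nb[nb[, 1] >= 1 & nb[, 1] <= n & nb[, 2] >= 1 & nb[, 2] <= n, , drop = FALSE]
    Q[nb] <- Q[nb] + P[i, j] / nrow(nb)   # split the mass equally over the valid neighbours
  }
  Q
}
P <- matrix(0, 30, 30); P[1, 1] <- 1      # this flea starts in a corner cell with probability 1
P <- smear(P)                             # its occupation probabilities after one ring of the bell
```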
| null | CC BY-SA 2.5 | null | 2010-09-07T07:18:25.347 | 2010-09-07T07:23:41.247 | 2010-09-07T07:23:41.247 | 805 | 805 | null |
2435 | 2 | null | 2430 | 5 | null | Probably you just have unbalanced classes (3% to 97%, if I understood well) -- try balancing them (get this 3% of true ones and about equal number of false ones) and check the classifier build on this case. If you are worried that you have thrown out most of your data, iterate it few times and connect them with some simple blender, like voting. (More complex blenders will also suffer from unbalanced classes). You should also check some better classifiers than a single tree or logistic regression -- like SVM or Random Forest.
Of course you can also use some classifier immune to imbalance problem, like kNN or as you say some rule-based approach.
| null | CC BY-SA 2.5 | null | 2010-09-07T08:04:24.807 | 2010-09-07T08:04:24.807 | null | null | null | null |
2437 | 2 | null | 2420 | 1 | null | My personal preference would be for box plots. If the distributions of date/time in one or more categories are skewed, then box plots would definitely be more informative than bars.
| null | CC BY-SA 2.5 | null | 2010-09-07T08:21:20.630 | 2010-09-07T08:21:20.630 | null | null | 266 | null |
2438 | 2 | null | 7 | 9 | null | NIST provides a [Reference Dataset archive](http://www.itl.nist.gov/div898/strd/general/dataarchive.html).
| null | CC BY-SA 2.5 | null | 2010-09-07T08:58:26.777 | 2010-09-07T08:58:26.777 | null | null | 830 | null |
2439 | 1 | 2440 | null | 9 | 798 | A cursory search reveals that [Latin squares](http://en.wikipedia.org/wiki/Latin_square) are fairly extensively used in the design of experiments. During my PhD, I have studied various theoretical properties of Latin squares (from a combinatorics point-of-view), but do not have a deep understanding of what is it about Latin squares that make them particularly well-suited to experimental design.
I understand that Latin squares are good at allowing statisticians to efficiently study situations where there are two factors which vary in different "directions". But, I'm also fairly confident there would be many other techniques that could be used.
>
What is it, in particular, about Latin squares that make them so well suited for the design of experiments, that other designs do not have?
Moreover, there are zillions of Latin squares to choose from, so which Latin square do you choose? I understand that choosing one at random is important, but there would still be some Latin squares that would be less suited to running experiments than others (e.g. the Cayley table of a cyclic group). This raises the following question.
>
Which properties of Latin squares are desirable and which properties of Latin squares are undesirable for experimental design?
| Desirable and undesirable properties of Latin squares in experiments? | CC BY-SA 2.5 | null | 2010-09-07T11:48:49.003 | 2022-05-15T03:50:45.243 | 2022-05-15T03:50:45.243 | 11887 | 386 | [
"experiment-design",
"latin-square"
] |
2440 | 2 | null | 2439 | 8 | null | Imagine:
- you were interested in the effect of word type (nouns, adjectives, adverbs, and verbs) on recall.
- you wanted to include word type as a within-subjects factor (i.e., all participants were exposed to all conditions)
Such a design would raise the issue of carry-over effects, i.e., the order of the conditions may affect the dependent variable, recall. For example, participants might get better at recalling words with practice. Thus, if the conditions were always presented in the same order, then the effect of order would be confounded with the effect of condition (i.e., word type).
A Latin Squares design is one of several strategies for dealing with order effects.
A Latin Squares design could involve assigning participants to one of four separate orderings (i.e., a between subjects condition called order):
- nouns adjectives adverbs verbs
- adjectives adverbs verbs nouns
- adverbs verbs nouns adjectives
- verbs nouns adjectives adverbs
Thus, the Latin Squares design only entails a subset of possible orderings, and to some extent the effect of order can be estimated.
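For what it's worth, the four orderings above form a cyclic 4 x 4 Latin square, which is easy to generate in R:
```
conditions <- c("nouns", "adjectives", "adverbs", "verbs")
orders <- t(sapply(0:3, function(s) conditions[(0:3 + s) %% 4 + 1]))
orders   # row i is the presentation order assigned to the i-th group of participants
```
Note that in a purely cyclic square each condition is always immediately preceded by the same other condition; when immediate carry-over is a worry, a balanced (Williams) Latin square, in which every condition follows every other condition equally often, is usually preferred.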
In a [blog post](http://jeromyanglim.blogspot.com/2008/11/carryover-effects-in-repeated-measures.html) I suggest the following simple rules of thumb:
- "If order is the focus of the analysis (e.g., skill acquisition looking at effects of practice), then don't worry about order effects
- If order effects are very strong, it may be better to stick to between subjects designs
- if order effects are small or moderate or unknown, typical design strategy depends on the number of levels of the within-subjects factor of interest.
If there are few levels (e.g., 2,3,4 perhaps), present all orders (counterbalance)
If there are more levels (e.g., 4+ perhaps), adopt a latin squares approach or randomise ordering"
To specifically answer your question, Latin Squares designs allow you to get the statistical power benefits of a within-subjects design while, potentially at least, minimising the main problem of within subjects designs: i.e., order effects.
| null | CC BY-SA 2.5 | null | 2010-09-07T13:03:36.843 | 2010-09-07T13:47:53.373 | 2010-09-07T13:47:53.373 | 183 | 183 | null |
2441 | 2 | null | 2427 | 5 | null | An analytical approach may be tedious and I have not thought through the intricacies but here is an approach that you may want to consider. Since you are interested in the expected number of cells that are empty after 50 rings you need to define a markov chain over the "No of the fleas in a cell" rather than the position of a flea (See Glen_b's [answer](https://stats.stackexchange.com/questions/2427/how-should-one-approch-project-euler-problem-213/2434#2434) which models the position of a flea as a markov chain. As pointed out by Andy in the comments to that answer that approach may not get what you want.)
Specifically, let:
$n_{ij}(t)$ be the number of fleas in a cell in row $i$ and column $j$.
Then the markov chain starts with the following state:
$n_{ij}(0) =1$ for all $i$ and $j$.
Since fleas move to one of four adjacent cells, the state of a cell changes depending on how many fleas are in the target cell, how many fleas there are in the four adjacent cells, and the probability that they will move to that cell. Using this observation, you can write the state transition probabilities for each cell as a function of the state of that cell and the state of the adjacent cells.
If you wish I can expand the answer further but this along with a basic introduction to markov chains should get you started.
| null | CC BY-SA 2.5 | null | 2010-09-07T14:37:45.337 | 2010-09-07T14:37:45.337 | 2017-04-13T12:44:21.160 | -1 | null | null |
2442 | 2 | null | 2427 | 12 | null | You're right; Monte Carlo is impracticable. (In a naive simulation--that is, one that exactly reproduces the problem situation without any simplifications--each iteration would involve 900 flea moves. A crude estimate of the proportion of empty cells is $1/e$, implying the variance of the Monte-Carlo estimate after $N$ such iterations is approximately $1/N 1/e (1 - 1/e) = 0.2325\ldots /N$. To pin down the answer to six decimal places, you would need to estimate it to within 5.E-7 and, to achieve a confidence of 95+% (say), you would have to approximately halve that precision to 2.5E-7. Solving $\sqrt(0.2325/N) \lt 2.5E-7$ gives $N > 4E12$, approximately. That would be around 3.6E15 flea moves, each taking several ticks of a CPU. With one modern CPU available you will need a full year of (highly efficient) computing. And I have somewhat incorrectly and overoptimistically assumed the answer is given as a proportion instead of a count: as a count, it will need three more significant figures, entailing a million fold increase in computation... Can you wait a long time?)
As far as an analytical solution goes, some simplifications are available. (These can be used to shorten a Monte Carlo computation, too.) The expected number of empty cells is the sum of the probabilities of emptiness over all the cells. To find this, you could compute the probability distribution of occupancy numbers of each cell. Those distributions are obtained by summing over the (independent!) contributions from each flea. This reduces your problem to finding the number of paths of length 50 along a 30 by 30 grid between any given pair of cells on that grid (one is the flea's origin and the other is a cell for which you want to calculate the probability of the flea's occupancy).
| null | CC BY-SA 2.5 | null | 2010-09-07T14:51:35.883 | 2010-09-07T14:51:35.883 | null | null | 919 | null |
2443 | 2 | null | 2391 | 34 | null | A chi-square test for equality of two proportions is exactly the same thing as a $z$-test. The chi-squared distribution with one degree of freedom is just that of a normal deviate, squared. You're basically just repeating the chi-squared test on a subset of the contingency table. (This is why @chl gets the exact same $p$-value with both tests.)
The problem of doing the chi-squared test globally first and then diving down to do more tests on subsets is you won't necessarily preserve your alpha -- that is, you won't control false positives to be less than 5% (or whatever $\alpha$) across the whole experiment.
I think if you want to do this properly in the classical paradigm, you need to identify your hypotheses at the outset (which proportions to compare), collect the data, and then test the hypotheses such that the total threshold for significance of each test sums to $\alpha$. Unless you can prove a priori that there's some correlation.
The most powerful test for equality of proportions is called [Barnard's test for superiority](http://en.wikipedia.org/wiki/Barnard%27s_test).
| null | CC BY-SA 3.0 | null | 2010-09-07T15:12:36.180 | 2013-09-07T16:27:36.933 | 2013-09-07T16:27:36.933 | 7290 | 1122 | null |
2445 | 2 | null | 2432 | 15 | null | It sounds like you want to perform multiple changepoint detection followed by independent smoothing within each segment. (Detection can be online or not, but your application is not likely to be online.) There's a lot of literature on this; Internet searches are fruitful.
- DA Stephens wrote a useful introduction to Bayesian changepoint detection in 1994 (App. Stat. 43 #1 pp 159-178: JSTOR).
- More recently Paul Fearnhead has been doing nice work (e.g., Exact and efficient Bayesian inference for multiple changepoint problems, Stat Comput (2006) 16: 203-213: Free PDF).
- A recursive algorithm exists, based on a beautiful analysis by D Barry & JA Hartigan
Product Partition Models for Change Point Models, Ann. Stat. 20:260-279: JSTOR;
A Bayesian Analysis for Change Point Problems, JASA 88:309-319: JSTOR.
- One implementation of the Barry & Hartigan algorithm is documented in O. Seidou & T.B.M.J. Ouarda, Recursion-based Multiple Changepoint Detection in Multivariate Linear Regression and Application to River Streamflows, Water Res. Res., 2006: Free PDF.
I haven't looked hard for any R implementations (I had coded one in Mathematica a while ago) but would appreciate a reference if you do find one.
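For a quick non-Bayesian experiment in R, here is a rough sketch of the same two-step idea (detect changepoints, then smooth each segment independently), assuming the strucchange package; the data and tuning choices are made up for illustration:
```
library(strucchange)
set.seed(1)
x <- 1:300
y <- 0.5 * sin(x / 30) + rep(c(0, 2, -1), each = 100) + rnorm(300, sd = 0.3)
bp  <- breakpoints(y ~ 1)               # candidate segmentations of the mean level
seg <- breakfactor(bp, breaks = 2)      # here we happen to know two shifts were simulated
plot(x, y, col = "grey")
for (s in levels(seg)) {
  idx <- seg == s
  lines(x[idx], fitted(loess(y[idx] ~ x[idx])))   # independent LOESS within each segment
}
```
In practice the number of breaks would itself be chosen from the data (e.g., via the BIC table in `summary(bp)`) rather than fixed in advance.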
| null | CC BY-SA 2.5 | null | 2010-09-07T15:45:24.470 | 2010-09-08T22:31:40.630 | 2010-09-08T22:31:40.630 | 919 | 919 | null |
2446 | 1 | null | null | 8 | 522 | In order to correlate or compare means of two dependent variables.
In my case, I need to correlate individual (e.g. subjects=30) slope values from different conditions (e.g. conditions=4), and each slope value summarizes the relation between the dependent variable (e.g. measured 4 times in each level of the independent variable) and the independent variable (e.g. 5 levels).
How to correct the df of the comparison to reflect the fact that each data point (slope value) summarizes many measurements?
Note: I am not asking how to do a regression between slope values. I already did regression in a within subject design, minimized euclidean distance regression etc.
| How to choose df for comparisons between summary statistics (e.g. slope values)? | CC BY-SA 2.5 | null | 2010-09-07T16:01:51.877 | 2010-09-21T20:33:56.847 | 2010-09-15T17:04:17.370 | 1084 | 1084 | [
"correlation",
"regression",
"statistical-significance",
"degrees-of-freedom"
] |
2447 | 2 | null | 2432 | 7 | null | Do it with Koenker's broken-line regression; see page 18 of this vignette:
[http://cran.r-project.org/web/packages/quantreg/vignettes/rq.pdf](http://cran.r-project.org/web/packages/quantreg/vignettes/rq.pdf)
In response to whuber's last comment:
This estimator is defined like this.
$x\in\mathbb{R}$, $x_{(i)}\geq x_{(i-1)}\;\forall i$,
$e_i:=y_{i}-\beta_{i}x_{(i)}-\beta_0$,
$z^+=\max(z,0)$, $z^-=\max(-z,0)$,
$\tau \in (0,1)$, $\lambda\geq 0$
$\underset{\beta\in\mathbb{R}^n|\tau, \lambda}{\min.} \sum_{i=1}^{n} \tau e_i^++\sum_{i=1}^{n}(1-\tau)e_i^-+\lambda\sum_{i=2}^{n}|\beta_{i}-\beta_{i-1}|$
$\tau$ gives the desired quantile (i.e. in the example, $\tau=0.9$). $\lambda$ controls the number of break points: for $\lambda$ large this estimator shrinks to no break point (corresponding to the classical linear quantile regression estimator).
Quantile Smoothing Splines
Roger Koenker, Pin Ng, Stephen Portnoy
Biometrika, Vol. 81, No. 4 (Dec., 1994), pp. 673-680
PS: there is an open-access working paper with the same name by the same authors, but it's not the same thing.
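If you just want to try this quickly, here is a minimal sketch assuming the quantreg package's rqss()/qss() interface, where the lambda argument plays the role of the penalty above (data simulated for illustration):
```
library(quantreg)
set.seed(1)
x <- sort(runif(200, 0, 10))
y <- ifelse(x < 5, x, 10 - x) + rnorm(200, sd = 0.5)   # a kink at x = 5
fit <- rqss(y ~ qss(x, lambda = 1), tau = 0.9)         # 90th-percentile broken-line fit
plot(fit)                                              # plots the fitted qss component
```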
| null | CC BY-SA 2.5 | null | 2010-09-07T16:03:44.463 | 2010-09-10T17:36:27.873 | 2010-09-10T17:36:27.873 | 603 | 603 | null |
2448 | 2 | null | 2430 | 4 | null | This is only valid for the logit: you can use another link function (complementary log-log or cloglog in short). This is a variation of the classical logit function that allows for assymetry (when one tail of the link function does not go to 0 at the same speed as the other tail goes to 1). I had a very good experience fitting one of these to a database with about 1% of 'ones'.
this is a good starting point reference:
[Link](https://doi.org/10.1007/3-540-44842-X_5)
These can be fitted using the zelig package in R.
Edit: there is a good comparison with a logit link here:
[http://rss.acs.unt.edu/Rdoc/library/VGAM/html/cloglog.html](http://rss.acs.unt.edu/Rdoc/library/VGAM/html/cloglog.html)
| null | CC BY-SA 4.0 | null | 2010-09-07T16:09:53.610 | 2022-04-30T12:57:37.987 | 2022-04-30T12:57:37.987 | 79696 | 603 | null |
2449 | 2 | null | 2430 | 2 | null | One strategy would be to use margin based methods with uneven margins (see [this paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.130.4424&rep=rep1&type=pdf)). Or you can use [active learning](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.155.1168&rep=rep1&type=pdf) to provide the learner with more balanced classes. Besides this, actually there are a number of other ways too that you can use to deal with imbalanced dataset. See [this survey paper](http://www.ece.stevens-tech.edu/~hhe/PDFfiles/ImbalancedLearning.pdf) which discusses a number of techniques such as resampling, cost-sensitive approaches, active learning, etc (and evaluation methods).
| null | CC BY-SA 2.5 | null | 2010-09-07T16:45:12.883 | 2010-09-07T17:14:37.033 | 2010-09-07T17:14:37.033 | 881 | 881 | null |
2450 | 2 | null | 2423 | 4 | null | A comprehensive and authoratative reference is Kendall's Advanced Theory of Statistics
- Volume 1 Distribution Theory
- Volume 2A Classical Inference and Linear Models
There is also a Volume 2B but it is Bayesian Inference.
Other than those, I agree the Casella and Berger is an excellent reference at the graduate level, and suggest Bain and Engelhardt's Introduction to Probability and Mathematical Statistics for upper-level undergraduates.
| null | CC BY-SA 2.5 | null | 2010-09-07T17:02:12.730 | 2010-09-07T17:02:12.730 | null | null | 1107 | null |
2452 | 2 | null | 1964 | 3 | null | One place to start would be Silverman's [nearest-neighbor estimator](http://nedwww.ipac.caltech.edu/level5/March02/Silverman/Silver2_5.html), but to add in the weights somehow. (I am not sure exactly what your weights are for here.) The nearest neighbor method can evidently be formulated in terms of distances. I believe your first and second nearest neighbor method are versions of the nearest-neighbor method, but without a kernel function, and with a small value of $k$.
| null | CC BY-SA 2.5 | null | 2010-09-07T21:40:03.113 | 2010-09-07T21:40:03.113 | null | null | 795 | null |
2453 | 2 | null | 4 | 10 | null | I'm going to ask the consultant's dumb question. Why do you want to know if these distributions are different in a statistically significant way?
Is it that the data that you are using are representative samples from populations or processes, and you want to assess the evidence that those populations or processes differ? If so, then a statistical test is right for you. But this seems like a strange question to me.
Or, are you interested in whether you really need to behave as though those populations or processes are different, regardless of the truth? Then you will be better off determining a loss function, ideally one that returns units that are meaningful to you, and predicting the expected loss when you (a) treat the populations as different, and (b) treat them as the same. Or you can choose some quantile of the loss distribution if you want to adopt a more or less conservative position.
| null | CC BY-SA 2.5 | null | 2010-09-07T23:43:17.103 | 2010-09-07T23:43:17.103 | null | null | 187 | null |
2454 | 2 | null | 2432 | 2 | null | It should be possible to code a solution in R using the non-linear regression function nls, b splines (the bs function in the spline package, for example) and the ifelse function.
| null | CC BY-SA 2.5 | null | 2010-09-07T23:47:38.133 | 2010-09-07T23:47:38.133 | null | null | 187 | null |
2455 | 1 | 2456 | null | 14 | 18969 | Age pyramid looks like this:

I would like to make something similar, namely two barplots (not histograms) with the same categories, rotated vertically and extending to both sides as in a pyramid.
Is there a simple way to do this in R?
It would also be nice to control the colour of each bar.
| How to make age pyramid like plot in R? | CC BY-SA 2.5 | null | 2010-09-08T00:31:39.353 | 2016-07-17T19:59:33.040 | null | null | null | [
"r",
"data-visualization"
] |
2456 | 2 | null | 2455 | 21 | null | You can do this with [the pyramid.plot() function](http://rss.acs.unt.edu/Rdoc/library/plotrix/html/pyramid.plot.html) from the `plotrix` package. Here's an example:
```
library(plotrix)
xy.pop<-c(3.2,3.5,3.6,3.6,3.5,3.5,3.9,3.7,3.9,3.5,3.2,2.8,2.2,1.8,
1.5,1.3,0.7,0.4)
xx.pop<-c(3.2,3.4,3.5,3.5,3.5,3.7,4,3.8,3.9,3.6,3.2,2.5,2,1.7,1.5,
1.3,1,0.8)
agelabels<-c("0-4","5-9","10-14","15-19","20-24","25-29","30-34",
"35-39","40-44","45-49","50-54","55-59","60-64","65-69","70-74",
"75-79","80-44","85+")
mcol<-color.gradient(c(0,0,0.5,1),c(0,0,0.5,1),c(1,1,0.5,1),18)
fcol<-color.gradient(c(1,1,0.5,1),c(0.5,0.5,0.5,1),c(0.5,0.5,0.5,1),18)
par(mar=pyramid.plot(xy.pop,xx.pop,labels=agelabels,
main="Australian population pyramid 2002",lxcol=mcol,rxcol=fcol,
gap=0.5,show.values=TRUE))
```
Which ends up looking like this:

| null | CC BY-SA 2.5 | null | 2010-09-08T00:39:29.500 | 2010-09-08T00:45:25.083 | 2010-09-08T00:45:25.083 | 5 | 5 | null |
2457 | 1 | 2463 | null | 27 | 16911 | I would just like someone to confirm my understanding or if I'm missing something.
The definition of a markov process says the next step depends on the current state only and no past states. So, let's say we had a state space of a,b,c,d and we go from a->b->c->d. That means that the transition to d could only depend on the fact that we were in c.
However, is it true that you could just make the model more complex and kind of "get around" this limitation? In other words, if your state space were now aa, ab, ac, ad, ba, bb, bc, bd, ca, cb, cc, cd, da, db, dc, dd, meaning that your new state space becomes the previous state combined with the current state, then the above transition would be a->ab->bc->cd, and so the transition to cd (equivalent in the previous model to d) is now "dependent" on a state which, if modeled differently, is a previous state (I refer to it as a sub-state below).
Am I correct that one can make it "depend on previous states (sub-states)" (I know technically it doesn't in the new model, since the sub-state is no longer a real state) and still maintain the markov property by expanding the state space as I did? So, one could in effect create a markov process that could depend on any number of previous sub-states.
| Markov Process that depends on present state and past state | CC BY-SA 4.0 | null | 2010-09-08T01:57:18.193 | 2020-04-10T14:15:18.650 | 2020-04-10T14:15:18.650 | 268072 | 1208 | [
"markov-process"
] |
2459 | 2 | null | 2427 | 3 | null | if you are going to go the numerical route, a simple observation: the problem appears to be subject to red-black parity (a flea on a red square always moves to a black square, and vice-versa). This can help reduce your problem size by a half (just consider two moves at a time, and only look at fleas on the red squares, say.)
| null | CC BY-SA 2.5 | null | 2010-09-08T02:11:18.783 | 2010-09-08T02:11:18.783 | null | null | 795 | null |
2462 | 2 | null | 2457 | 10 | null | The definition of a markov process says the next step depends on the current state only and no past states.
That is the Markov property, and it defines a first order MC, which is very tractable mathematically and quite easy to present/explain. Of course you could have an $n^{th}$ order MC (where the next state depends on the current and the past $n-1$ states) as well as variable order MCs (where the length of the memory is not fixed but depends on the previous states).
$n^{th}$ order MCs retain the explicit formulation for the distribution of the stationary state, but as you pointed out, the size of the state matrix grows with $n$, such that an unrestricted $n^{th}$ order MC with $k$ states has $O(k^{2n})$ entries in its state matrix.
You may want to have a look at recent papers such as Higher-order multivariate Markov chains and their applications, as this field is advancing quite fast.
| null | CC BY-SA 3.0 | null | 2010-09-08T02:17:56.713 | 2015-05-02T15:34:42.320 | 2015-05-02T15:34:42.320 | -1 | 603 | null |
2463 | 2 | null | 2457 | 33 | null | Technically, both the processes you describe are markov chains. The difference is that the first one is a first order markov chain whereas the second one is a second order markov chain. And yes, you can transform a second order markov chain to a first order markov chain by a suitable change in state space definition. Let me explain via an example.
Suppose that we want to model the weather as a stochastic process and suppose that on any given day the weather can be rainy, sunny or cloudy. Let $W_t$ be the weather in any particular day and let us denote the possible states by the symbols $R$ (for rainy), $S$ for (sunny) and $C$ (for cloudy).
First Order Markov Chain
$P(W_t = w | W_{t-1}, W_{t-2},W_{t-3} ..) = P(W_t = w | W_{t-1})$
Second Order Markov Chain
$P(W_t = w | W_{t-1}, W_{t-2},W_{t-3} ..) = P(W_t = w | W_{t-1},W_{t-2})$
The second order markov chain can be transformed into a first order markov chain by re-defining the state space as follows. Define:
$Z_{t-1,t}$ as the weather on two consecutive days.
In other words, the state space can take one of the following values: $RR$, $RC$, $RS$, $CR$, $CC$, $CS$, $SR$, $SC$ and $SS$. With this re-defined state space we have the following:
$P(Z_{t-1,t} = z_{t-1,t} | Z_{t-2,t-1}, Z_{t-3,t-2}, ..) = P(Z_{t-1,t} = z_{t-1,t} | Z_{t-2,t-1})$
The above is clearly a first order markov chain on the re-defined state space. The one difference from the second order markov chain is that your redefined markov chain needs to be specified with two initial starting states i.e., the chain must be started with some assumptions about the weather on day 1 and on day 2.
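A small R sketch of this state-space expansion, with made-up second-order transition probabilities, may make it concrete:
```
states <- c("R", "C", "S")
set.seed(42)
# hypothetical second-order probabilities P2[a, b, c] = P(W_t = c | W_{t-2} = a, W_{t-1} = b)
P2 <- array(runif(27), dim = c(3, 3, 3), dimnames = list(states, states, states))
P2 <- sweep(P2, c(1, 2), apply(P2, c(1, 2), sum), "/")    # normalise over the third index
# first-order chain on the pairs Z_{t-1,t} = (W_{t-1}, W_t)
pairs <- as.vector(outer(states, states, paste0))
P1 <- matrix(0, 9, 9, dimnames = list(pairs, pairs))
for (s1 in states) for (s2 in states) for (s3 in states)
  P1[paste0(s1, s2), paste0(s2, s3)] <- P2[s1, s2, s3]
rowSums(P1)    # every row sums to 1, so P1 is a valid first-order transition matrix
```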
| null | CC BY-SA 2.5 | null | 2010-09-08T02:20:47.227 | 2010-09-08T02:28:25.460 | 2010-09-08T02:28:25.460 | null | null | null |
2464 | 2 | null | 4 | 2 | null | One measure of the difference between two distribution is the "maximum mean discrepancy" criteria, which basically measures the difference between the empirical means of the samples from the two distributions in a Reproducing Kernel Hilbert Space (RKHS). See this paper ["A kernel method for the two sample problem"](http://arxiv.org/PS_cache/arxiv/pdf/0805/0805.2368v1.pdf).
| null | CC BY-SA 2.5 | null | 2010-09-08T03:00:19.690 | 2010-09-08T03:00:19.690 | null | null | 881 | null |
2465 | 2 | null | 2446 | 2 | null | Here's how I have understood your question:
- you have two groups of participants
- Five observations per participant
- Based on the five observations, you can extract a single summary statistic (e.g., if the five observations were performance over five time points, the summary statistic might be the slope of the regression line predicting performance from time)
General points:
- If you want to test whether there are differences between groups on the summary statistic, you can do a standard t-test with standard degrees of freedom.
- Having more observations per individual will increase the reliability with which you measure the summary statistic.
- Greater reliability of measurement means larger expected group differences and thus greater statistical power (see reliability attenuation).
Very similar points could be made if instead of having two groups you had a numeric variable measured once on each participant, such as age, and you wanted to correlate this with your summary statistic.
There are many ways to measure something on a set of participants. You just happened to have applied an algorithm (e.g., a linear regression leading to a slope) to a set of observations to derive your measure.
| null | CC BY-SA 2.5 | null | 2010-09-08T03:34:47.097 | 2010-09-08T03:34:47.097 | null | null | 183 | null |
2466 | 1 | 2482 | null | 13 | 725 | Say I have a population of 50 million unique things, and I take 10 million samples (with replacement)... The first graph I've attached shows how many times I sample the same "thing", which is relatively rare as the population is larger than my sample.
However, if my population is only 10 million things, and I take 10 million samples, as the second graph shows I will sample the same thing multiple times more often.
My question is: from my frequency table of observations (the data in the bar charts), is it possible to get an estimate of the original population size when it is unknown? And it would be great if you could provide a pointer to how to go about this in R.

| Estimate the size of a population being sampled by the number of repeat observations | CC BY-SA 2.5 | null | 2010-09-08T04:44:53.493 | 2015-10-05T23:56:22.660 | 2015-10-05T23:56:22.660 | 12359 | 1210 | [
"r",
"sampling",
"expectation-maximization"
] |
2467 | 1 | 2473 | null | 5 | 7051 | I have analysed several dimensions in a survey. Each part of the survey represents a theoretical dimension and is analysed with factor analysis.
I want to use scores from factor analysis to do a classification.
- The first factor represents a large part of the variance. Can I keep only the first factor, or do I need to retain all factors?
- After the factor analysis, I did a PROMAX rotation, which is an oblique rotation. How should I use the output from the PROMAX rotation? And if I take the rotation into account, how do I compute distances using the factor correlation matrix?
| Classification after factor analysis | CC BY-SA 3.0 | null | 2010-09-08T04:49:55.197 | 2016-11-17T12:15:57.527 | 2016-11-17T12:15:57.527 | 29949 | 1154 | [
"classification",
"clustering",
"factor-analysis",
"psychometrics"
] |
2468 | 2 | null | 2466 | 5 | null | You can estimate via a binomial distribution. If there are $n$ draws, with replacement, from $k$ objects (with $k$ unknown), the probability of an object being drawn once in a single draw is $P = \frac{1}{k}$. Think of this as a coinflip now. The probability of exactly $m$ heads (i.e. $m$ duplicates) from $n$ trials is ${n \choose m} P^m (1-P)^{n-m}$. Multiply this by $n$ to get the expected number of times observed (your plot). For large $n$ it can be a little hairy to back out $k$ from the data, but for small $m$, you can probably do fine assuming the $(1-P)$ term is equal to $1$.
edit: one possible way to fix the numerical problems is to look at the ratios of counts. That is, if $P_m$ is the probability of drawing $m$ heads, then $P_{m} / P_{m+1}$ is equal to $(k-1)\frac{m+1}{n-m}$. Then look at the ratios of counts of duplicates in your data to get multiple estimates of $k$, then take the median or mean.
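A small R illustration of that ratio idea (the data are simulated, and the variable names are just for this example):
```
set.seed(1)
k.true <- 1e5                        # the unknown population size we try to recover
n      <- 1e5                        # number of draws, with replacement
draws  <- sample.int(k.true, n, replace = TRUE)
tab    <- table(table(draws))        # how many things were seen exactly 1, 2, 3, ... times
m      <- as.integer(names(tab))
cnt    <- as.numeric(tab)
i      <- which(diff(m) == 1)        # only use genuinely consecutive multiplicities
# P_m / P_{m+1} = (k - 1)(m + 1)/(n - m), so each consecutive pair gives an estimate of k
k.hat  <- 1 + (cnt[i] / cnt[i + 1]) * (n - m[i]) / (m[i] + 1)
median(k.hat)                        # should be close to k.true
```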
| null | CC BY-SA 2.5 | null | 2010-09-08T05:03:32.267 | 2010-09-08T16:55:59.350 | 2010-09-08T16:55:59.350 | 795 | 795 | null |
2469 | 1 | 2472 | null | 13 | 32054 | Question:
What is a good method for conducting post hoc tests of differences between group means after adjusting for the effect of a covariate?
Prototypical example:
- Four groups, 30 participants per group (e.g., four different clinical psychology populations)
- Dependent Variable is numeric (e.g., intelligence scores)
- Covariate is numeric (e.g., index of socioeconomic status)
- Research questions concern whether any pair of groups are significantly different on the dependent variable after controlling for the covariate
Related Questions:
- What is the preferred method?
- What implementations are available in R?
- Are there any general references on how a covariate changes procedures for conducting post hoc tests?
| Post hoc tests in ANCOVA | CC BY-SA 2.5 | null | 2010-09-08T05:41:48.627 | 2014-02-11T13:41:58.733 | null | null | 183 | [
"anova",
"multiple-comparisons",
"ancova"
] |
2470 | 2 | null | 2469 | 2 | null | Combining simple methods that you can easily access from R and general principles you could use Tukey's HSD simply enough. The error term from the ANCOVA will provide the error term for the confidence intervals.
In R code that would be...
```
#set up some data for an ANCOVA
n <- 30; k <- 4
y <- rnorm(n*k)
a <- factor(rep(1:k, n))
cov <- y + rnorm(n*k)
#the model
m <- aov(y ~ cov + a)
#the test
TukeyHSD(m)
```
(ignore the warning about the covariate in the result; it just means no comparisons were computed for it, which is what you want)
That gives narrower confidence intervals than you get if you run the model without the cov... as expected.
Any post hoc technique that relies on the residuals from the model for the error variance could easily be used.
| null | CC BY-SA 3.0 | null | 2010-09-08T07:00:04.273 | 2012-06-09T17:14:56.070 | 2012-06-09T17:14:56.070 | 601 | 601 | null |
2471 | 2 | null | 2466 | 8 | null | This sounds like a form of 'mark and recapture' aka 'capture-recapture', a well-known technique in ecology (and some other fields such as epidemiology). Not my area but [the Wikipedia article on mark and recapture](http://en.wikipedia.org/wiki/Mark_and_recapture) looks reasonable, though your situation is not the one to which the Lincoln–Petersen method explained there applies.
I think shabbychef is on the right track for your situation, but using the Poisson distribution to approximate the binomial would probably make things a bit simpler and should be a very good approximation if the population size is very large, as in your examples. I think getting an explicit expression for the maximum likelihood estimate of the population size should then be pretty straightforward (see e.g. [Wikipedia again](http://en.wikipedia.org/wiki/Poisson_distribution#Maximum_likelihood)), though I don't have time to work out the details right now.
| null | CC BY-SA 2.5 | null | 2010-09-08T07:09:54.603 | 2010-09-08T11:29:12.837 | 2010-09-08T11:29:12.837 | 449 | 449 | null |
2472 | 2 | null | 2469 | 13 | null | Multiple testing following ANCOVA, or more generally any GLM, but the comparisons now focus on the adjusted group/treatment or marginal means (i.e. what the scores would be if groups did not differ on the covariate of interest). To my knowledge, Tukey HSD and Scheffé tests are used. Both are quite conservative and will tend to bound type I error rate. The latter is preferred in case of unequal sample size in each group. I seem to remember that some people also use Sidak correction on specific contrasts (when it is of interest of course) as it is less conservative than the Bonferroni correction.
Such tests are readily available in the R `multcomp` package (see `?glht`). The accompagnying vignette include example of use in the case of a simple linear model (section 2), but it can be extended to any other model form. Other examples can be found in the `HH` packages (see `?MMC`). Several MCP and resampling procedures (recommended for strong inferences, but it relies on a different approach to the correction for Type I error rate inflation) are also available in the `multtest` package, through [Bioconductor](http://www.bioconductor.org), see refs (3–4). The definitive reference to multiple comparison is the book from the same authors: Dudoit, S. and van der Laan, M.J., Multiple Testing Procedures with Applications to Genomics (Springer, 2008).
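As a concrete (simulated) illustration of the multcomp route for the prototypical example above:
```
library(multcomp)
set.seed(1)
dat <- data.frame(g = factor(rep(1:4, each = 30)), x = rnorm(120))
dat$y <- as.numeric(dat$g) + 0.5 * dat$x + rnorm(120)
fit <- aov(y ~ x + g, data = dat)                  # ANCOVA: covariate first, then group
# all pairwise comparisons of the covariate-adjusted group means,
# with a single-step (Tukey-type) multiplicity adjustment
summary(glht(fit, linfct = mcp(g = "Tukey")))
confint(glht(fit, linfct = mcp(g = "Tukey")))      # simultaneous confidence intervals
```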
Reference 2 explained the difference between MCP in the general case (ANOVA, working with unadjusted means) vs. ANCOVA. There are also several papers that I can't remember actually, but I will look at them.
Other useful references:
- Westfall, P.H. (1997). Multiple Testing of General Contrasts Using Logical Contraints and Correlations. JASA 92: 299-306.
- Westfall, P.H. and Young, S.S. (1993) Resampling Based Multiple Testing, Examples and Methods for p-Value Adjustment. John Wiley and Sons: New York.
- Pollard, K.S., Dudoit, S., and van der Laan, M.J. (2004). Multiple Testing Procedures: R multtest Package and Applications to Genomics.
- Taylor, S.L. Lang, D.T., and Pollard, K.S. (2007). Improvements to the multiple testing package multtest. R News 7(3): 52-55.
- Bretz, F., Genz, A., and Hothorn, L.A. (2001). On the numerical availability of multiple comparison procedures. Biometrical Journal, 43(5): 645–656.
- Hothorn, T., Bretz, F., and Westfall, P. (2008). Simultaneous Inference in General Parametric Models. Department of Statistics: Technical Reports, Nr. 19.
The first two are referenced in SAS PROC related to MCP.
| null | CC BY-SA 2.5 | null | 2010-09-08T07:17:02.653 | 2010-09-08T07:17:02.653 | null | null | 930 | null |
2473 | 2 | null | 2467 | 4 | null | One solution to your 1. question is to use cross-validation. You compute classification accuracy for models with different number of components and then pick one with the highest classification accuracy. You can check the references below:
[PLS Dimension Reduction for Classification with Microarray Data](http://www.bepress.com/sagmb/vol3/iss1/art33/)
[Rasch-based high-dimensionality data reduction and class prediction with applications to microarray gene expression data](http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V03-4Y34W95-2&_user=10&_coverDate=07%2F31%2F2010&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_searchStrId=1454239636&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=e2279e3a8f1e8a43932e99beadcdd90b&searchtype=a)
In my experience, factor rotation does not improve classification accuracy. Please report your results.
| null | CC BY-SA 2.5 | null | 2010-09-08T07:36:36.987 | 2010-09-08T07:36:36.987 | null | null | 609 | null |
2474 | 2 | null | 2467 | 2 | null | One approach that side-steps cross-validation to determine the optimal number of factors is to use the nonparametric Bayesian approaches for factor analysis. These approaches let the number of factors to be unbounded and eventually decided by the data. See [this paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142.4273&rep=rep1&type=pdf) that uses such an approach for classification based on factor analysis.
| null | CC BY-SA 2.5 | null | 2010-09-08T07:42:53.730 | 2010-09-08T07:42:53.730 | null | null | 881 | null |
2476 | 1 | 2478 | null | 23 | 10824 | To cluster (text) documents you need a way of measuring similarity between pairs of documents.
Two alternatives are:
- Compare documents as term vectors using Cosine Similarity - and TF/IDF as the weightings for terms.
- Compare each document's probability distribution using an f-divergence, e.g. the Kullback-Leibler divergence
Is there any intuitive reason to prefer one method to the other (assuming average document sizes of 100 terms)?
| Measuring Document Similarity | CC BY-SA 2.5 | null | 2010-09-08T10:23:59.423 | 2015-03-25T06:54:55.450 | 2010-09-17T20:20:49.300 | null | 1212 | [
"information-retrieval"
] |