Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
113 | 1 | null | null | 11 | 484 | I have been looking into theoretical frameworks for method selection (note: not model selection) and have found very little systematic, mathematically-motivated work. By 'method selection', I mean a framework for distinguishing the appropriate (or better, optimal) method with respect to a problem, or problem type.
What I have found is substantial, if piecemeal, work on particular methods and their tuning (e.g. prior selection in Bayesian methods), and method selection via bias selection (e.g. [Inductive Policy: The Pragmatics of Bias Selection](http://portal.acm.org/citation.cfm?id=218546)). I may be unrealistic at this early stage of machine learning's development, but I was hoping to find something like what [measurement theory](ftp://ftp.sas.com/pub/neural/measurement.html) does in prescribing admissible transformations and tests by scale type, only writ large in the arena of learning problems.
Any suggestions?
| What are some good frameworks for method selection? | CC BY-SA 2.5 | null | 2010-07-19T20:54:23.200 | 2010-10-08T23:57:02.170 | 2010-07-21T15:44:07.450 | 39 | 39 | [
"machine-learning",
"methodology",
"mathematical-statistics"
] |
114 | 1 | null | null | 35 | 1667 | What statistical research blogs would you recommend, and why?
| What statistical blogs would you recommend? | CC BY-SA 3.0 | null | 2010-07-19T21:00:53.077 | 2016-10-13T15:18:29.730 | 2016-10-13T15:18:29.730 | 28666 | 8 | [
"references"
] |
115 | 2 | null | 103 | 1 | null | [We Love Datavis](http://datavis.tumblr.com/), a data visualization tumblog.
| null | CC BY-SA 3.0 | null | 2010-07-19T21:01:35.757 | 2012-10-24T15:02:27.243 | 2012-10-24T15:02:27.243 | 615 | 127 | null |
116 | 2 | null | 114 | 12 | null | Cosma Shalizi's [blog](http://www.cscs.umich.edu/~crshalizi/weblog/) often talks about statistics and is always interesting.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:04:16.080 | 2010-07-19T21:04:16.080 | null | null | 72 | null |
117 | 2 | null | 114 | 25 | null | [http://www.r-bloggers.com/](http://www.r-bloggers.com/) is an aggregated blog from lots of blogs that talk about statistics using R, and the [#rstats](http://search.twitter.com/search?q=%23rstats) hashtag on twitter is also helpful. I write quite a bit about [statistics and R in genetics research](http://gettinggeneticsdone.blogspot.com/search/label/R).
| null | CC BY-SA 2.5 | null | 2010-07-19T21:04:24.283 | 2010-07-20T15:13:37.983 | 2010-07-20T15:13:37.983 | 36 | 36 | null |
118 | 1 | 151 | null | 548 | 307338 | In the definition of standard deviation, why do we have to square the difference from the mean to get the mean (E) and take the square root back at the end? Can't we just simply take the absolute value of the difference instead and get the expected value (mean) of those, and wouldn't that also show the variation of the data? The number is going to be different from the squared method (the absolute-value method will be smaller), but it should still show the spread of data. Anybody know why we take this square approach as a standard?
The definition of standard deviation:
$\sigma = \sqrt{E\left[\left(X - \mu\right)^2\right]}.$
Can't we just take the absolute value instead and still be a good measurement?
$\sigma = E\left[|X - \mu|\right]$
| Why square the difference instead of taking the absolute value in standard deviation? | CC BY-SA 3.0 | null | 2010-07-19T21:04:39.057 | 2022-11-23T10:16:14.803 | 2021-01-19T20:58:21.960 | 919 | 83 | [
"standard-deviation",
"definition",
"absolute-value",
"faq"
] |
119 | 2 | null | 118 | 9 | null | There are many reasons; probably the main one is that it works well as a parameter of the normal distribution.
| null | CC BY-SA 3.0 | null | 2010-07-19T21:11:44.797 | 2013-04-27T14:09:42.487 | 2013-04-27T14:09:42.487 | null | null | null |
120 | 2 | null | 118 | 98 | null | One way you can think of this is that standard deviation is similar to a "distance from the mean".
Compare this to distances in Euclidean space - this gives you the true distance, whereas what you suggested (which, by the way, is the [absolute deviation](http://en.wikipedia.org/wiki/Average_absolute_deviation)) is more like a [Manhattan distance](http://en.wikipedia.org/wiki/Manhattan_distance_transform) calculation.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:14:07.983 | 2010-07-19T21:14:07.983 | null | null | 41 | null |
121 | 2 | null | 118 | 155 | null | The squared difference has nicer mathematical properties; it's continuously differentiable (nice when you want to minimize it), it's a sufficient statistic for the Gaussian distribution, and it's (a version of) the L2 norm which comes in handy for proving convergence and so on.
The mean absolute deviation (the absolute value notation you suggest) is also used as a measure of dispersion, but it's not as "well-behaved" as the squared error.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:14:25.407 | 2010-07-19T21:14:25.407 | null | null | 61 | null |
123 | 2 | null | 118 | 21 | null | Squaring the difference from the mean has a couple of reasons.
- Variance is defined as the 2nd moment of the deviation (the random variable here is $(x-\mu)$), and moments are simply the expectations of powers of the random variable; hence the square.
- Having a square as opposed to the absolute value function gives a nice continuous and differentiable function (absolute value is not differentiable at 0) - which makes it the natural choice, especially in the context of estimation and regression analysis.
- The squared formulation also naturally falls out of parameters of the Normal Distribution.
| null | CC BY-SA 3.0 | null | 2010-07-19T21:15:20.917 | 2017-04-20T00:53:18.180 | 2017-04-20T00:53:18.180 | 5176 | 130 | null |
124 | 1 | null | null | 33 | 2018 | I'm a programmer without statistical background, and I'm currently looking at different classification methods for a large number of different documents that I want to classify into pre-defined categories. I've been reading about kNN, SVM and NN. However, I have some trouble getting started. What resources do you recommend? I do know single variable and multi variable calculus quite well, so my math should be strong enough. I also own Bishop's book on Neural Networks, but it has proven to be a bit dense as an introduction.
| Statistical classification of text | CC BY-SA 2.5 | null | 2010-07-19T21:17:30.543 | 2018-12-30T19:39:56.940 | 2010-07-21T22:17:00.927 | null | 131 | [
"classification",
"information-retrieval",
"text-mining"
] |
125 | 1 | null | null | 245 | 148505 | Which is the best introductory textbook for Bayesian statistics?
One book per answer, please.
| What is the best introductory Bayesian statistics textbook? | CC BY-SA 2.5 | null | 2010-07-19T21:18:12.713 | 2021-10-19T15:45:27.030 | 2012-01-22T20:18:28.350 | null | 5 | [
"bayesian",
"references"
] |
126 | 2 | null | 125 | 65 | null | My favorite is ["Bayesian Data Analysis"](http://www.stat.columbia.edu/~gelman/book/) by Gelman, et al. (The pdf version is legally free since April 2020!)
| null | CC BY-SA 4.0 | null | 2010-07-19T21:19:43.570 | 2020-04-06T16:52:41.577 | 2020-04-06T16:52:41.577 | 53690 | 5 | null |
127 | 2 | null | 125 | 31 | null | Another vote for Gelman et al., but a close second for me -- being of the learn-by-doing persuasion -- is Jim Albert's ["Bayesian Computation with R"](http://www-math.bgsu.edu/~albert/bcwr/).
| null | CC BY-SA 4.0 | null | 2010-07-19T21:23:20.593 | 2019-04-02T12:06:42.873 | 2019-04-02T12:06:42.873 | 53690 | 61 | null |
128 | 1 | 191 | null | 14 | 33131 | In Plain English, how does one interpret a Bland-Altman plot?
What are the advantages of using a Bland-Altman plot over other methods of comparing two different measurement methods?
| How does one interpret a Bland-Altman plot? | CC BY-SA 2.5 | null | 2010-07-19T21:23:57.973 | 2020-04-02T17:50:20.670 | 2016-07-13T08:05:10.397 | 1352 | 132 | [
"data-visualization",
"bland-altman-plot"
] |
129 | 2 | null | 125 | 8 | null | I quite like [Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference](http://rads.stackoverflow.com/amzn/click/1584885874) by Gamerman and Lopes.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:24:58.567 | 2010-10-05T13:56:15.550 | 2010-10-05T13:56:15.550 | 8 | 8 | null |
130 | 1 | 131 | null | 41 | 13563 | I had a plan of learning R in the near future. Reading [another question](https://stats.stackexchange.com/questions/3/what-are-some-valuable-statistical-analysis-open-source-projects) I found out about Clojure. Now I don't know what to do.
I think a big advantage of R for me is that some people in Economics use it, including one of my supervisors (though the other said: stay away from R!). One advantage of Clojure is that it is Lisp-based, and as I have started learning Emacs and I am keen on writing my own customisations, it would be helpful (yeah, I know Clojure and Elisp are different dialects of Lisp, but they are both Lisp and thus similar I would imagine).
I can't ask which one is better, because I know this is very personal, but could someone give me the advantages (or disadvantages) of Clojure vs. R, especially in practical terms? For example, which one should be easier to learn, which one is more flexible or more powerful, which one has more libraries, more support, more users, etc?
My intended use: The bulk of my estimation should be done using Matlab, so I am not looking for anything too deep in terms of statistical analysis, but rather a software to substitute Excel for the initial data manipulation and visualisation, summary statistics and charting, but also some basic statistical analysis or the initial attempts at my estimation.
| Clojure versus R: advantages and disadvantages for data analysis | CC BY-SA 2.5 | null | 2010-07-19T21:26:27.023 | 2023-01-06T21:49:20.187 | 2017-04-13T12:44:27.570 | -1 | 90 | [
"r"
] |
131 | 2 | null | 130 | 27 | null | Let me start by saying that I love both languages: you can't go wrong with either, and they are certainly better than something like C++ or Java for doing data analysis.
For basic data analysis I would suggest R (especially with plyr). IMO, R is a little easier to learn than Clojure, although this isn't completely obvious since Clojure is based on Lisp and there are numerous fantastic Lisp resources available (such as [SICP](https://mitp-content-server.mit.edu/books/content/sectbyfn/books_pres_0/6515/sicp.zip/index.html)). There are fewer keywords in Clojure, but the libraries are much more difficult to install and work with. Also, keep in mind that R (or S) is largely derived from Scheme, so you would benefit from Lisp knowledge when using it.
In general:
The main advantage of R is the community on CRAN (over 2461 packages and counting). Nothing will compare with this in the near future, not even a commercial application like matlab.
Clojure has the big advantage of running on the JVM which means that it can use any Java based library immediately.
I would add that I gave [a talk relating Clojure/Incanter to R](https://web.archive.org/web/20151023130854/http://files.meetup.com/1406240/From%20Lisp%20to%20Clojure-Incanter%20and%20R.pdf) a while ago, so you may find it of interest. In my experience around creating this, Clojure was generally slower than R for simple operations.
| null | CC BY-SA 4.0 | null | 2010-07-19T21:28:41.907 | 2023-01-06T21:49:20.187 | 2023-01-06T21:49:20.187 | 30155 | 5 | null |
132 | 2 | null | 125 | 11 | null | Coming from non-statistical background I found [Introduction to Applied Bayesian Statistics and Estimation for Social Scientists](http://rads.stackoverflow.com/amzn/click/038771264X) quite informative and easy to follow.
| null | CC BY-SA 3.0 | null | 2010-07-19T21:29:37.040 | 2017-02-10T17:57:46.607 | 2017-02-10T17:57:46.607 | 12080 | 22 | null |
133 | 2 | null | 4 | -1 | null | I don't know how to use SAS/R/Orange, but it sounds like the kind of test you need is a [chi-square test](http://en.wikipedia.org/wiki/Chi-square_test).
| null | CC BY-SA 2.5 | null | 2010-07-19T21:31:53.813 | 2010-07-19T21:31:53.813 | null | null | 139 | null |
134 | 1 | 3449 | null | 23 | 22773 | On smaller window sizes, `n log n` sorting might work. Are there any better algorithms to achieve this?
| Algorithms to compute the running median? | CC BY-SA 2.5 | null | 2010-07-19T21:32:38.523 | 2021-08-19T04:28:21.460 | 2010-08-03T12:14:50.543 | 8 | 138 | [
"algorithms",
"median"
] |
135 | 2 | null | 4 | 18 | null | I believe that this calls for a [two-sample Kolmogorov–Smirnov test](http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/ks2samp.htm), or the like. The two-sample Kolmogorov–Smirnov test is based on comparing differences in the [empirical distribution functions](http://en.wikipedia.org/wiki/Empirical_distribution_function) (ECDF) of two samples, meaning it is sensitive to both location and shape of the the two samples. It also generalizes out to a multivariate form.
This test is found in various forms in different packages in R, so if you are basically proficient, all you have to do is install one of them (e.g. [fBasics](http://cran.r-project.org/web/packages/fBasics/fBasics.pdf)), and run it on your sample data.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:36:12.850 | 2010-07-19T21:52:08.617 | 2010-07-19T21:52:08.617 | 39 | 39 | null |
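The test suggested in the answer above is also available in base R as `ks.test()`; here is a minimal sketch with simulated data standing in for the two real samples.

```
# Two simulated samples standing in for the real data
set.seed(42)
x <- rnorm(200, mean = 0, sd = 1)      # sample from process A
y <- rnorm(150, mean = 0.3, sd = 1.2)  # sample from process B

# Two-sample Kolmogorov-Smirnov test: compares the two ECDFs
ks.test(x, y)

# Visual check of the two empirical distribution functions
plot(ecdf(x), main = "ECDFs of the two samples")
lines(ecdf(y), col = "red")
```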
137 | 2 | null | 124 | 20 | null | I recommend these books - they are highly rated on Amazon too:
"Text Mining" by Weiss
"Text Mining Application Programming", by Konchady
For software, I recommend RapidMiner (with the text plugin), free and open-source.
This is my "text mining process":
- collect the documents (usually a web crawl)
  - [sample if too large]
  - timestamp
  - strip out markup
- tokenize: break into characters, words, n-grams, or sliding windows
- stemming (aka lemmatization)
  - [include synonyms]
  - see the Porter or Snowball algorithm
  - pronouns and articles are usually bad predictors
- remove stopwords
- feature vectorization (see the short sketch after this answer)
  - binary (appears or doesn’t)
  - word count
  - relative frequency: tf-idf
  - information gain, chi square
  - [have a minimum value for inclusion]
- weighting
  - weight words at the top of the document higher?
Then you can start the work of classifying them. kNN, SVM, or Naive Bayes as appropriate.
You can see my series of text mining videos [here](http://vancouverdata.blogspot.com/2010/11/text-analytics-with-rapidminer-loading.html)
| null | CC BY-SA 3.0 | null | 2010-07-19T21:38:09.370 | 2017-07-21T07:15:29.190 | 2017-07-21T07:15:29.190 | 166832 | 74 | null |
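Below is a small base-R sketch of the feature-vectorization step from the answer above (binary, count, and tf-idf weights), written without any text-mining package so the arithmetic stays visible; the three toy documents are made up for illustration.

```
docs <- c("the cat sat on the mat",
          "the dog sat on the log",
          "cats and dogs")

# Tokenize: lower-case and split on whitespace
tokens <- strsplit(tolower(docs), "\\s+")
vocab  <- sort(unique(unlist(tokens)))

# Term-count matrix: one row per document, one column per term
tc <- t(sapply(tokens, function(tok) table(factor(tok, levels = vocab))))

binary <- (tc > 0) * 1                     # appears or doesn't
tf     <- tc / rowSums(tc)                 # relative frequency
idf    <- log(nrow(tc) / colSums(tc > 0))  # rarer terms weigh more
tfidf  <- sweep(tf, 2, idf, `*`)           # tf-idf feature matrix
round(tfidf, 3)
```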
138 | 1 | 1213 | null | 78 | 48150 | I'm interested in learning [R](http://en.wikipedia.org/wiki/R_%28programming_language%29) on the cheap. What's the best free resource/book/tutorial for learning R?
| Free resources for learning R | CC BY-SA 3.0 | null | 2010-07-19T21:38:10.290 | 2016-02-08T17:30:40.050 | 2016-02-08T16:52:47.047 | 28666 | 142 | [
"r",
"references"
] |
139 | 2 | null | 138 | 24 | null | If I had to choose one thing, make sure that you read ["The R Inferno"](http://www.burns-stat.com/pages/Tutor/R_inferno.pdf).
There are many good resources on [the R homepage](http://www.r-project.org), but in particular, read ["An Introduction to R"](http://cran.r-project.org/doc/manuals/R-intro.pdf) and ["The R Language Definition"](http://cran.r-project.org/doc/manuals/R-lang.pdf).
| null | CC BY-SA 2.5 | null | 2010-07-19T21:39:17.220 | 2010-07-19T21:39:17.220 | null | null | 5 | null |
140 | 2 | null | 138 | 8 | null | The official guides are pretty nice; check out [http://cran.r-project.org/manuals.html](http://cran.r-project.org/manuals.html) . There is also a lot of contributed documentation there.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:39:35.690 | 2010-07-19T21:39:35.690 | null | null | null | null |
141 | 2 | null | 103 | 2 | null | Light-hearted: [Indexed](http://thisisindexed.com/)
Also, see older visualizations from the same creator at the original [Indexed Blog](http://indexed.blogspot.com/).
| null | CC BY-SA 3.0 | null | 2010-07-19T21:40:02.540 | 2012-10-24T14:58:17.090 | 2012-10-24T14:58:17.090 | 615 | 142 | null |
142 | 2 | null | 138 | 6 | null | After you learn the basics, I find the following sites very useful:
- R-bloggers.
- Subscribing to the Stack overflow R tag.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:42:57.987 | 2010-07-19T21:42:57.987 | 2017-05-23T12:39:26.523 | -1 | 8 | null |
181 | 2 | null | 124 | 5 | null | A neural network may be too slow for a large number of documents (and this approach is now pretty much obsolete).
You may also check Random Forest among classifiers; it is quite fast, scales nicely, and does not need complex tuning.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:48:28.567 | 2010-07-19T21:48:28.567 | null | null | null | null |
144 | 2 | null | 138 | 18 | null | [Quick-R](http://www.statmethods.net/index.html) can be a good place to start.
A little bit data mining oriented [R and Data Mining](http://www.rdatamining.com) resources: [Examples and Case Studies](http://www.rdatamining.com/docs/r-and-data-mining-examples-and-case-studies) and [R Reference Card for Data Mining](http://www.rdatamining.com/docs/R-refcard-data-mining.pdf).
| null | CC BY-SA 3.0 | null | 2010-07-19T21:48:52.670 | 2015-07-04T01:14:16.383 | 2015-07-04T01:14:16.383 | 43755 | 22 | null |
145 | 1 | 147 | null | 6 | 2607 | > Possible Duplicate: Locating freely available data samples
Where can I find freely accessible data sources?
I'm thinking of sites like
- http://www2.census.gov/census_2000/datasets/?
| Free Dataset Resources? | CC BY-SA 2.5 | null | 2010-07-19T21:50:16.260 | 2010-08-30T15:02:00.623 | 2017-04-13T12:44:54.643 | -1 | 138 | [
"dataset"
] |
146 | 1 | 149 | null | 15 | 14458 | A while ago a user on R-help mailing list asked about the soundness of using PCA scores in a regression. The user is trying to use some PC scores to explain variation in another PC (see full discussion [here](http://r.789695.n4.nabble.com/PCA-and-Regression-td2280038.html)). The answer was that no, this is not sound because PCs are orthogonal to each other.
Can someone explain in a bit more detail why this is so?
| Can one use multiple regression to predict one principal component (PC) from several other PCs? | CC BY-SA 3.0 | null | 2010-07-19T21:52:51.707 | 2014-12-12T11:50:37.933 | 2014-12-12T11:50:37.933 | 28666 | 144 | [
"regression",
"pca"
] |
147 | 2 | null | 145 | 6 | null | Amazon has free Public Data sets for use with EC2.
[http://aws.amazon.com/publicdatasets/](http://aws.amazon.com/publicdatasets/)
Here's a list: [http://developer.amazonwebservices.com/connect/kbcategory.jspa?categoryID=243](http://developer.amazonwebservices.com/connect/kbcategory.jspa?categoryID=243)
| null | CC BY-SA 2.5 | null | 2010-07-19T21:53:02.283 | 2010-07-19T21:53:02.283 | null | null | 142 | null |
148 | 2 | null | 145 | 3 | null | [http://infochimps.org/](http://infochimps.org/) - is a good resource for free data sets.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:58:51.867 | 2010-07-19T21:58:51.867 | null | null | 130 | null |
149 | 2 | null | 146 | 12 | null | A principal component is a weighted linear combination of all your factors (X's).
example: PC1 = 0.1X1 + 0.3X2
There will be one component for each factor (though in general a small number are selected).
The components are created such that they have zero correlation (are orthogonal), by design.
Therefore, component PC1 should not explain any variation in component PC2.
You may want to do regression on your Y variable and the PCA representation of your X's, as they will not have multi-collinearity. However, this could be hard to interpret.
If you have more X's than observations, which breaks OLS, you can regress on your components, and simply select a smaller number of the highest variation components.
[Principal Component Analysis](http://rads.stackoverflow.com/amzn/click/0387954422) by Jolliffe is a very in-depth and highly cited book on the subject.
This is also good: [http://www.statsoft.com/textbook/principal-components-factor-analysis/](http://www.statsoft.com/textbook/principal-components-factor-analysis/)
| null | CC BY-SA 3.0 | null | 2010-07-19T22:02:10.340 | 2012-01-04T06:55:32.167 | 2012-01-04T06:55:32.167 | 74 | 74 | null |
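A short R illustration of the points in the answer above, using the built-in `mtcars` data: the principal component scores are uncorrelated by construction, so one PC cannot explain another, but they can still serve as predictors for an outside response (principal components regression).

```
X  <- scale(mtcars[, c("disp", "hp", "wt", "qsec")])  # predictors only
pc <- prcomp(X)

# Scores are orthogonal by construction: correlations are essentially zero
round(cor(pc$x), 10)

# So regressing one PC on another is pointless (R-squared ~ 0)...
summary(lm(pc$x[, 2] ~ pc$x[, 1]))$r.squared

# ...but regressing an outside response on a few PCs is fine
# (this is principal components regression)
summary(lm(mtcars$mpg ~ pc$x[, 1:2]))
```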
150 | 2 | null | 125 | 7 | null | For complete beginners, try William Briggs [Breaking the Law of Averages: Real-Life Probability and Statistics in Plain English](http://rads.stackoverflow.com/amzn/click/0557019907)
| null | CC BY-SA 2.5 | null | 2010-07-19T22:13:29.830 | 2010-07-19T22:13:29.830 | null | null | 25 | null |
151 | 2 | null | 118 | 246 | null | If the goal of the standard deviation is to summarise the spread of a symmetrical data set (i.e. in general how far each datum is from the mean), then we need a good method of defining how to measure that spread.
The benefits of squaring include:
- Squaring always gives a non-negative value, so the sum will always be zero or higher.
- Squaring emphasizes larger differences, a feature that turns out to be both good and bad (think of the effect outliers have).
Squaring however does have a problem as a measure of spread and that is that the units are all squared, whereas we might prefer the spread to be in the same units as the original data (think of squared pounds, squared dollars, or squared apples). Hence the square root allows us to return to the original units.
I suppose you could say that absolute difference assigns equal weight to the spread of data whereas squaring emphasises the extremes. Technically though, as others have pointed out, squaring makes the algebra much easier to work with and offers properties that the absolute method does not (for example, the variance is equal to the expected value of the square of the distribution minus the square of the mean of the distribution)
It is important to note however that there's no reason you couldn't take the absolute difference if that is your preference on how you wish to view 'spread' (sort of how some people see 5% as some magical threshold for $p$-values, when in fact it is situation dependent). Indeed, there are in fact several competing methods for measuring spread.
My view is to use the squared values because I like to think of how it relates to the Pythagorean Theorem of Statistics: $c = \sqrt{a^2 + b^2}$ …this also helps me remember that when working with independent random variables, variances add, standard deviations don't. But that's just my personal subjective preference which I mostly only use as a memory aid, feel free to ignore this paragraph.
An interesting analysis can be read here:
- Revisiting a 90-year-old debate: the advantages of the mean deviation - Stephen Gorard (Department of Educational Studies, University of York); Paper presented at the British Educational Research Association Annual Conference, University of Manchester, 16-18 September 2004
| null | CC BY-SA 4.0 | null | 2010-07-19T22:31:12.830 | 2022-11-23T10:16:14.803 | 2022-11-23T10:16:14.803 | 362671 | 81 | null |
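A quick numeric check in R of the closing remark above that variances add but standard deviations do not, using two simulated independent variables.

```
set.seed(1)
a <- rnorm(1e5, sd = 3)  # independent of b
b <- rnorm(1e5, sd = 4)

var(a) + var(b); var(a + b)  # both close to 25: variances add
sd(a) + sd(b);   sd(a + b)   # 7 vs roughly 5: standard deviations do not
sqrt(sd(a)^2 + sd(b)^2)      # the "Pythagorean" combination, roughly 5
```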
152 | 1 | 1087 | null | 18 | 5938 | Label switching (i.e., the posterior distribution is invariant to switching component labels) is a problematic issue when using MCMC to estimate mixture models.
- Is there a standard (as in widely accepted) methodology to deal with the issue?
- If there is no standard approach then what are the pros and cons of the leading approaches to solve the label switching problem?
| Is there a standard method to deal with label switching problem in MCMC estimation of mixture models? | CC BY-SA 2.5 | null | 2010-07-19T22:37:38.013 | 2023-01-31T11:20:51.193 | 2011-03-27T16:03:35.180 | 919 | null | [
"bayesian",
"markov-chain-montecarlo",
"mixture-distribution"
] |
153 | 2 | null | 10 | 14 | null | The simple answer is that Likert scales are always ordinal. The intervals between positions on the scale are monotonic but never so well-defined as to be numerically uniform increments.
That said, the distinction between ordinal and interval is based on the specific demands of the analysis being performed. Under special circumstances, you may be able to treat the responses as if they fell on an interval scale. To do this, typically the respondents need to be in close agreement regarding the meaning of the scale responses and the analysis (or the decisions made based on the analysis) should be relatively insensitive to problems that may arise.
| null | CC BY-SA 2.5 | null | 2010-07-19T22:39:27.230 | 2010-07-19T22:39:27.230 | null | null | 145 | null |
154 | 2 | null | 1 | 33 | null | I am currently researching the trial roulette method for my masters thesis as an elicitation technique. This is a graphical method that allows an expert to represent her subjective probability distribution for an uncertain quantity.
Experts are given counters (or what one can think of as casino chips) representing equal densities whose total would sum up to 1 - for example 20 chips of probability = 0.05 each. They are then instructed to arrange them on a pre-printed grid, with bins representing result intervals. Each column would represent their belief of the probability of getting the corresponding bin result.
Example: A student is asked to predict the mark in a future exam. The figure below shows a completed grid for the elicitation of a subjective probability distribution. The horizontal axis of the grid shows the possible bins (or mark intervals) that the student was asked to consider. The numbers in the top row record the number of chips per bin. The completed grid (using a total of 20 chips) shows that the student believes there is a 30% chance that the mark will be between 60 and 64.9.
Some reasons in favour of using this technique are:
- Many questions about the shape of the expert's subjective probability distribution can be answered without the need to pose a long series of questions to the expert - the statistician can simply read off density above or below any given point, or that between any two points.
- During the elicitation process, the experts can move around the chips if unsatisfied with the way they placed them initially - thus they can be sure of the final result to be submitted.
- It forces the expert to be coherent in the set of probabilities that are provided. If all the chips are used, the probabilities must sum to one.
- Graphical methods seem to provide more accurate results, especially for participants with modest levels of statistical sophistication.
| null | CC BY-SA 4.0 | null | 2010-07-19T22:40:47.947 | 2018-12-29T18:42:01.680 | 2018-12-29T18:42:01.680 | 79696 | 108 | null |
155 | 1 | null | null | 37 | 8116 | I really enjoy hearing simple explanations to complex problems. What is your favorite analogy or anecdote that explains a difficult statistical concept?
My favorite is [Murray's](http://www-stat.wharton.upenn.edu/~steele/Courses/434/434Context/Co-integration/Murray93DrunkAndDog.pdf) explanation of cointegration using a drunkard and her dog. Murray explains how two random processes (a wandering drunk and her dog, Oliver) can have unit roots but still be related (cointegrated) since their joint first differences are stationary.
> The drunk sets out from the bar, about to wander aimlessly in random-walk fashion. But periodically she intones "Oliver, where are you?", and Oliver interrupts his aimless wandering to bark. He hears her; she hears him. He thinks, "Oh, I can't let her get too far off; she'll lock me out." She thinks, "Oh, I can't let him get too far off; he'll wake me up in the middle of the night with his barking." Each assesses how far away the other is and moves to partially close that gap.
| What is your favorite layman's explanation for a difficult statistical concept? | CC BY-SA 2.5 | null | 2010-07-19T22:43:50.967 | 2013-10-23T15:29:05.390 | 2012-04-04T16:22:03.290 | 8489 | 154 | [
"teaching",
"communication"
] |
156 | 1 | 198 | null | 4 | 271 | I know this must be standard material, but I had difficulty in finding a proof in this form.
Let $e$ be a standard white Gaussian vector of size $N$. Let all the other matrices in the following be constant.
Let $v = Xy + e$, where $X$ is an $N\times L$ matrix and $y$ is an $L\times 1$ vector, and let
$$\left\{\begin{aligned}
\bar y &= (X^TX)^{-1}X^Tv\\
\bar e &= v - X\bar y
\end{aligned}\right.\quad.$$
If $c$ is any constant vector, $J = N - \mathrm{rank}(X)$, and
$$\left\{\begin{aligned}
u &= c^T\bar y\\
s^2 &= \bar e^T\bar e\,c^T(X^TX)^{-1}c
\end{aligned}\right.\quad,$$
then the random variable defined as $t = u/\sqrt{s^2/J}$ follows a normalized Student's T distribution with J degrees of freedom.
I would be grateful if you could provide an outline for its proof.
| How to get to a t variable from linear regression | CC BY-SA 3.0 | null | 2010-07-19T22:50:13.297 | 2012-05-15T04:52:05.677 | 2012-05-14T21:49:03.273 | 10515 | 148 | [
"regression"
] |
157 | 2 | null | 155 | 10 | null | Definitely the Monty Hall Problem. [http://en.wikipedia.org/wiki/Monty_Hall_problem](http://en.wikipedia.org/wiki/Monty_Hall_problem)
| null | CC BY-SA 2.5 | null | 2010-07-19T22:52:22.730 | 2010-07-19T22:52:22.730 | null | null | 36 | null |
159 | 2 | null | 103 | 9 | null | [Junk Charts](http://junkcharts.typepad.com/) is always interesting and thought-provoking, usually providing both criticism of visualizations in the popular media and suggestions for improvements.
| null | CC BY-SA 2.5 | null | 2010-07-19T23:00:30.737 | 2010-07-19T23:00:30.737 | null | null | 145 | null |
160 | 2 | null | 103 | 2 | null | [Dataspora](https://web.archive.org/web/20120102015341/http://dataspora.com/blog/), a data science blog.
| null | CC BY-SA 4.0 | null | 2010-07-19T23:06:43.987 | 2022-11-29T16:32:39.563 | 2022-11-29T16:32:39.563 | 362671 | 158 | null |
161 | 1 | null | null | 20 | 15068 | Econometricians often talk about a time series being integrated with order k, I(k). k being the minimum number of differences required to obtain a stationary time series.
What methods or statistical tests can be used to determine, given a level of confidence, the order of integration of a time series?
| What methods can be used to determine the Order of Integration of a time series? | CC BY-SA 2.5 | null | 2010-07-19T23:11:36.240 | 2010-07-20T11:14:41.487 | 2010-07-19T23:39:49.573 | 159 | 154 | [
"time-series"
] |
162 | 2 | null | 155 | 15 | null |
- If you carved your distribution (histogram) out of wood, and tried to balance it on your finger, the balance point would be the mean, no matter the shape of the distribution.
- If you put a stick in the middle of your scatter plot, and attached the stick to each data point with a spring, the resting point of the stick would be your regression line. [1]

[1] This would technically be principal components regression. You would have to force the springs to move only "vertically" to be least squares, but the example is illustrative either way.
| null | CC BY-SA 3.0 | null | 2010-07-19T23:13:32.150 | 2012-04-09T08:18:37.273 | 2012-04-09T08:18:37.273 | 74 | 74 | null |
164 | 2 | null | 145 | 3 | null | For governmental data:
US: [http://www.data.gov/](http://www.data.gov/)
World: [http://www.guardian.co.uk/world-government-data](http://www.guardian.co.uk/world-government-data)
| null | CC BY-SA 2.5 | null | 2010-07-19T23:19:44.963 | 2010-07-19T23:19:44.963 | null | null | 158 | null |
165 | 1 | 207 | null | 275 | 183632 | Maybe the concept, why it's used, and an example.
| How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | CC BY-SA 3.0 | null | 2010-07-19T23:21:05.320 | 2022-12-31T01:32:21.020 | 2017-08-10T08:21:26.363 | 11887 | 74 | [
"bayesian",
"markov-chain-montecarlo",
"intuition",
"teaching"
] |
166 | 1 | null | null | 16 | 56719 | Australia is currently having an election and understandably the media reports new political poll results daily. In a country of 22 million what percentage of the population would need to be sampled to get a statistically valid result?
Is it possible that using too large a sample could affect the results, or does statistical validity monotonically increase with sample size?
| How do you decide the sample size when polling a large population? | CC BY-SA 2.5 | null | 2010-07-19T23:21:35.430 | 2018-11-06T22:19:57.360 | 2010-09-17T13:20:58.950 | 442 | 154 | [
"sample-size",
"polling"
] |
167 | 2 | null | 146 | 10 | null | Principal components are orthogonal by definition, so any pair of PCs will have zero correlation.
However, PCA can be used in regression if there are a large number of explanatory variables. These can be reduced to a small number of principal components and used as predictors in a regression.
| null | CC BY-SA 2.5 | null | 2010-07-19T23:26:31.473 | 2010-07-19T23:26:31.473 | null | null | 159 | null |
168 | 1 | 179 | null | 30 | 6882 | For univariate kernel density estimators (KDE), I use Silverman's rule for calculating $h$:
\begin{equation}
0.9 \min(sd, IQR/1.34)\times n^{-0.2}
\end{equation}
What are the standard rules for multivariate KDE (assuming a Normal kernel).
| Choosing a bandwidth for kernel density estimators | CC BY-SA 2.5 | null | 2010-07-19T23:26:44.747 | 2017-12-26T08:55:18.090 | 2015-04-23T05:51:56.433 | 9964 | 8 | [
"smoothing",
"kernel-smoothing"
] |
169 | 2 | null | 145 | 4 | null | For time series data, try the [Time Series Data Library](http://robjhyndman.com/TSDL).
| null | CC BY-SA 2.5 | null | 2010-07-19T23:27:36.400 | 2010-07-19T23:27:36.400 | null | null | 159 | null |
170 | 1 | 174 | null | 132 | 44159 | Are there any free statistical textbooks available?
| Free statistical textbooks | CC BY-SA 2.5 | null | 2010-07-19T23:29:54.663 | 2023-06-02T12:01:00.867 | 2010-10-27T09:23:26.720 | 69 | 8 | [
"teaching",
"references"
] |
171 | 2 | null | 161 | 16 | null | There are a number of statistical tests (known as "unit root tests") for dealing with this problem. The most popular is probably the "Augmented Dickey-Fuller" (ADF) test, although the Phillips-Perron (PP) test and the KPSS test are also widely used.
Both the ADF and PP tests are based on a null hypothesis of a unit root (i.e., an I(1) series). The KPSS test is based on a null hypothesis of stationarity (i.e., an I(0) series). Consequently, the KPSS test can give quite different results from the ADF or PP tests.
| null | CC BY-SA 2.5 | null | 2010-07-19T23:32:30.337 | 2010-07-19T23:32:30.337 | null | null | 159 | null |
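A sketch of how the three tests named above are commonly run in R via the `tseries` package (`adf.test`, `pp.test`, `kpss.test`), applied to a simulated random walk; exact options should be checked against the package documentation.

```
# install.packages("tseries")  # if not already installed
library(tseries)

set.seed(123)
x <- cumsum(rnorm(200))  # a simulated random walk, i.e. an I(1) series

adf.test(x)   # H0: unit root   -> expect a large p-value here
pp.test(x)    # H0: unit root   -> expect a large p-value here
kpss.test(x)  # H0: stationary  -> expect a small p-value here

# Differencing once should make the tests agree the series is I(0)
adf.test(diff(x))
kpss.test(diff(x))
```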
172 | 2 | null | 166 | 14 | null | Sample size doesn't much depend on the population size, which is counter-intuitive to many.
Most polling companies use 400 or 1000 people in their samples.
There is a reason for this:
A sample size of 400 will give you a margin of error of +/-5%, 19 times out of 20 (95% confidence).
A sample size of 1000 will give you a margin of error of +/-3%, 19 times out of 20 (95% confidence).
This assumes you are measuring a proportion near 50%, which is the worst case.
This calculator isn't bad:
[http://www.raosoft.com/samplesize.html](http://www.raosoft.com/samplesize.html)
| null | CC BY-SA 2.5 | null | 2010-07-19T23:34:18.163 | 2010-09-11T18:48:28.197 | 2010-09-11T18:48:28.197 | 74 | 74 | null |
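The two margins quoted above can be reproduced in a couple of lines of R (worst-case proportion 0.5, 95% confidence):

```
moe <- function(n, p = 0.5, conf = 0.95) {
  qnorm(1 - (1 - conf) / 2) * sqrt(p * (1 - p) / n)
}

moe(400)   # about 0.049, i.e. roughly +/-5%
moe(1000)  # about 0.031, i.e. roughly +/-3%
```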
173 | 1 | null | null | 23 | 10125 | I recently started working for a tuberculosis clinic. We meet periodically to discuss the number of TB cases we're currently treating, the number of tests administered, etc. I'd like to start modeling these counts so that we're not just guessing whether something is unusual or not. Unfortunately, I've had very little training in time series, and most of my exposure has been to models for very continuous data (stock prices) or very large numbers of counts (influenza). But we deal with 0-18 cases per month (mean 6.68, median 7, var 12.3), which are distributed like this:
[image lost to the mists of time]
[image eaten by a grue]
I've found a few articles that address models like this, but I'd greatly appreciate hearing suggestions from you - both for approaches and for R packages that I could use to implement those approaches.
EDIT: mbq's answer has forced me to think more carefully about what I'm asking here; I got too hung-up on the monthly counts and lost the actual focus of the question. What I'd like to know is: does the (fairly visible) decline from, say, 2008 onward reflect a downward trend in the overall number of cases? It looks to me like the number of cases monthly from 2001-2007 reflects a stable process; maybe some seasonality, but overall stable. From 2008 through the present, it looks like that process is changing: the overall number of cases is declining, even though the monthly counts might wobble up and down due to randomness and seasonality. How can I test if there's a real change in the process? And if I can identify a decline, how could I use that trend and whatever seasonality there might be to estimate the number of cases we might see in the upcoming months?
| Time series for count data, with counts < 20 | CC BY-SA 3.0 | null | 2010-07-19T23:37:22.980 | 2017-02-27T14:37:11.170 | 2017-02-27T14:37:11.170 | 11887 | 71 | [
"r",
"time-series",
"poisson-distribution",
"count-data",
"epidemiology"
] |
174 | 2 | null | 170 | 78 | null | Online books include
- http://davidmlane.com/hyperstat/
- http://vassarstats.net/textbook/
- https://dwstockburger.com/Multibook/mbk.htm
- https://web.archive.org/web/20180122061046/http://bookboon.com/en/statistics-ebooks
- http://www.freebookcentre.net/SpecialCat/Free-Statistics-Books-Download.html
Update: I can now add my own forecasting textbook
- Forecasting: principles and practice (Hyndman & Athanasopoulos, 2012)
| null | CC BY-SA 4.0 | null | 2010-07-19T23:37:43.807 | 2023-06-02T11:48:32.663 | 2023-06-02T11:48:32.663 | 362671 | 159 | null |
175 | 1 | null | null | 93 | 218140 | Often times a statistical analyst is handed a set dataset and asked to fit a model using a technique such as linear regression. Very frequently the dataset is accompanied with a disclaimer similar to "Oh yeah, we messed up collecting some of these data points -- do what you can".
This situation leads to regression fits that are heavily impacted by the presence of outliers that may be erroneous data. Given the following:
- It is dangerous from both a scientific and moral standpoint to throw out data for no reason other than it "makes the fit look bad".
- In real life, the people who collected the data are frequently not available to answer questions such as "when generating this data set, which of the points did you mess up, exactly?"
What statistical tests or rules of thumb can be used as a basis for excluding outliers in linear regression analysis?
Are there any special considerations for multilinear regression?
| How should outliers be dealt with in linear regression analysis? | CC BY-SA 2.5 | null | 2010-07-19T23:39:49.730 | 2020-09-18T08:21:19.847 | 2010-08-13T12:59:06.957 | 159 | 13 | [
"regression",
"outliers"
] |
176 | 2 | null | 22 | 52 | null | Let us say a man rolls a six sided die and it has outcomes 1, 2, 3, 4, 5, or 6. Furthermore, he says that if it lands on a 3, he'll give you a free text book.
Then informally:
The Frequentist would say that each outcome has an equal 1 in 6 chance of occurring. She views probability as being derived from long run frequency distributions.
The Bayesian, however, would say hang on a second, I know that man, he's David Blaine, a famous trickster! I have a feeling he's up to something. I'm going to say that there's only a 1% chance of it landing on a 3 BUT I'll re-evaluate that belief and change it the more times he rolls the die. If I see the other numbers come up equally often, then I'll iteratively increase the chance from 1% to something slightly higher, otherwise I'll reduce it even further. She views probability as degrees of belief in a proposition.
| null | CC BY-SA 3.0 | null | 2010-07-19T23:40:01.007 | 2011-09-18T10:09:48.690 | 2011-09-18T10:09:48.690 | 81 | 81 | null |
177 | 2 | null | 175 | 39 | null | Rather than exclude outliers, you can use a robust method of regression. In R, for example, the [rlm() function from the MASS package](http://stat.ethz.ch/R-manual/R-patched/library/MASS/html/rlm.html) can be used instead of the `lm()` function. The method of estimation can be tuned to be more or less robust to outliers.
| null | CC BY-SA 3.0 | null | 2010-07-19T23:45:44.677 | 2011-10-10T09:02:51.173 | 2011-10-10T09:02:51.173 | 159 | 159 | null |
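A minimal sketch of the `rlm()` suggestion above, on simulated data with one deliberately corrupted point; the default Huber M-estimation is used, and other tunings are possible.

```
library(MASS)

set.seed(7)
x <- 1:30
y <- 2 + 0.5 * x + rnorm(30)
y[30] <- 60  # one gross outlier

fit_ols    <- lm(y ~ x)   # least squares: pulled toward the outlier
fit_robust <- rlm(y ~ x)  # robust M-estimation (Huber by default)

coef(fit_ols)
coef(fit_robust)  # intercept/slope much closer to the true 2 and 0.5
```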
178 | 2 | null | 3 | 11 | null | [RapidMiner](http://rapid-i.com/) for data and text mining
| null | CC BY-SA 3.0 | null | 2010-07-19T23:48:50.943 | 2013-04-20T07:21:26.257 | 2013-04-20T07:21:26.257 | 74 | 74 | null |
179 | 2 | null | 168 | 21 | null | For a univariate KDE, you are better off using something other than Silverman's rule which is based on a normal approximation. One excellent approach is the Sheather-Jones method, easily implemented in R; for example,
```
plot(density(precip, bw="SJ"))
```
The situation for multivariate KDE is not so well studied, and the tools are not so mature. Rather than a bandwidth, you need a bandwidth matrix. To simplify the problem, most people assume a diagonal matrix, although this may not lead to the best results. The [ks package in R](http://cran.r-project.org/web/packages/ks/) provides some very useful tools including allowing a full (not necessarily diagonal) bandwidth matrix.
| null | CC BY-SA 2.5 | null | 2010-07-19T23:59:29.487 | 2010-07-19T23:59:29.487 | null | null | 159 | null |
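A sketch of a bivariate density estimate with a full (non-diagonal) plug-in bandwidth matrix, assuming the `ks` package's `Hpi()` and `kde()` interfaces mentioned above; the correlated data are simulated.

```
# install.packages("ks")  # if not already installed
library(ks)

set.seed(1)
x1 <- rnorm(500)
x2 <- 0.5 * x1 + rnorm(500)  # correlated with x1, so a full H matters
x  <- cbind(x1, x2)

H    <- Hpi(x)         # full 2x2 plug-in bandwidth matrix (not just diagonal)
fhat <- kde(x, H = H)  # bivariate kernel density estimate
plot(fhat)             # contour plot of the estimate
H                      # inspect the off-diagonal bandwidth terms
```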
180 | 2 | null | 145 | 5 | null | I really like the [FRED](http://research.stlouisfed.org/fred2/), from the St. Louis Fed (economics data). You can chart the series or more than one series, you can do some transformations to your data and chart it, and the NBER recessions are shaded.
| null | CC BY-SA 2.5 | null | 2010-07-20T00:06:20.580 | 2010-07-20T00:06:20.580 | null | null | 90 | null |
181 | 1 | 1097 | null | 794 | 1053273 | Is there a standard and accepted method for selecting the number of layers, and the number of nodes in each layer, in a feed-forward neural network? I'm interested in automated ways of building neural networks.
| How to choose the number of hidden layers and nodes in a feedforward neural network? | CC BY-SA 3.0 | null | 2010-07-20T00:15:02.920 | 2022-08-31T12:09:15.680 | 2017-03-15T17:51:15.800 | 153217 | 159 | [
"model-selection",
"neural-networks"
] |
182 | 2 | null | 175 | 30 | null | Sometimes outliers are bad data, and should be excluded, such as typos. Sometimes they are Wayne Gretzky or Michael Jordan, and should be kept.
Outlier detection methods include:
- Univariate -> boxplot; outside of 1.5 times the inter-quartile range is an outlier.
- Bivariate -> scatterplot with confidence ellipse; outside of, say, the 95% confidence ellipse is an outlier.
- Multivariate -> Mahalanobis D2 distance.

Mark those observations as outliers, then run a logistic regression (on Y = IsOutlier) to see if there are any systematic patterns. Remove the ones you can demonstrate are not representative of any sub-population.
| null | CC BY-SA 2.5 | null | 2010-07-20T00:15:47.393 | 2010-09-09T00:10:56.520 | 2010-09-09T00:10:56.520 | 74 | 74 | null |
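A base-R sketch of the univariate and multivariate flags described above (the 1.5 × IQR boxplot rule and Mahalanobis distance); the data and the single planted outlier are simulated, and the cut-offs shown are the conventional ones rather than the only defensible choices.

```
set.seed(2)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d[1, ] <- c(6, -5)  # plant one gross outlier

# Univariate: boxplot rule (beyond 1.5 * IQR from the quartiles)
out_uni <- d$x1 %in% boxplot.stats(d$x1)$out

# Multivariate: squared Mahalanobis distance vs. a chi-square cut-off
d2      <- mahalanobis(d, colMeans(d), cov(d))
out_mvn <- d2 > qchisq(0.975, df = ncol(d))

which(out_uni)  # the planted point is flagged (others may appear by chance)
which(out_mvn)
```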
183 | 1 | 518 | null | 4 | 1909 | I need to analyze the 100k MovieLens dataset for clustering with two algorithms of my choice, between the likes of k-means, agnes, diana, dbscan, and several others. What tools (like Rattle, or Weka) would be best suited to help me make some simple clustering analysis over this dataset?
| What tools could be used for applying clustering algorithms on MovieLens? | CC BY-SA 2.5 | null | 2010-07-20T00:20:51.767 | 2013-07-15T11:25:42.467 | null | null | 166 | [
"clustering"
] |
184 | 2 | null | 33 | 7 | null | Try using the `stl()` function for time series decomposition. It provides a very flexible method for extracting a seasonal component from a time series.
| null | CC BY-SA 2.5 | null | 2010-07-20T00:21:58.193 | 2010-07-20T00:21:58.193 | null | null | 159 | null |
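For example, a minimal sketch on the built-in monthly `co2` series:

```
# Decompose a monthly series into trend + seasonal + remainder
fit <- stl(co2, s.window = "periodic")
plot(fit)

# The deseasonalised series is the original minus the seasonal component
deseasonalised <- co2 - fit$time.series[, "seasonal"]
```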
185 | 2 | null | 124 | 11 | null | A great introductory text covering the topics you mentioned is [Introduction to Information Retrieval](http://www.informationretrieval.org), which is available online in full text for free.

| null | CC BY-SA 4.0 | null | 2010-07-20T00:30:00.173 | 2018-12-30T19:39:56.940 | 2018-12-30T19:39:56.940 | 79696 | 80 | null |
187 | 2 | null | 181 | 16 | null | As far as I know there is no way to automatically select the number of layers and neurons in each layer. But there are networks that can build their topology automatically, like EANNs (Evolutionary Artificial Neural Networks), which use genetic algorithms to evolve the topology.
There are several approaches, a more or less modern one that seemed to give good results was [NEAT (Neuro Evolution of Augmented Topologies)](http://nn.cs.utexas.edu/?neat).
| null | CC BY-SA 3.0 | null | 2010-07-20T00:47:45.310 | 2017-02-20T16:03:33.397 | 2017-02-20T16:03:33.397 | 128677 | 119 | null |
188 | 2 | null | 165 | 94 | null | I'd probably say something like this:
"Anytime we want to talk about probabilities, we're really integrating a density. In Bayesian analysis, a lot of the densities we come up with aren't analytically tractable: you can only integrate them -- if you can integrate them at all -- with a great deal of suffering. So what we do instead is simulate the random variable a lot, and then figure out probabilities from our simulated random numbers. If we want to know the probability that X is less than 10, we count the proportion of simulated random variable results less than 10 and use that as our estimate. That's the "Monte Carlo" part, it's an estimate of probability based off of random numbers. With enough simulated random numbers, the estimate is very good, but it's still inherently random.
"So why "Markov Chain"? Because under certain technical conditions, you can generate a memoryless process (aka a Markovian one) that has the same limiting distribution as the random variable that you're trying to simulate. You can iterate any of a number of different kinds of simulation processes that generate correlated random numbers (based only on the current value of those numbers), and you're guaranteed that once you pool enough of the results, you will end up with a pile of numbers that looks "as if" you had somehow managed to take independent samples from the complicated distribution you wanted to know about.
"So for example, if I want to estimate the probability that a standard normal random variable was less than 0.5, I could generate ten thousand independent realizations from a standard normal distribution and count up the number less than 0.5; say I got 6905 that were less than 0.5 out of 10000 total samples; my estimate for P(Z<0.5) would be 0.6905, which isn't that far off from the actual value. That'd be a Monte Carlo estimate.
"Now imagine I couldn't draw independent normal random variables, instead I'd start at 0, and then with every step add some uniform random number between -0.5 and 0.5 to my current value, and then decide based on a particular test whether I liked that new value or not; if I liked it, I'd use the new value as my current one, and if not, I'd reject it and stick with my old value. Because I only look at the new and current values, this is a Markov chain. If I set up the test to decide whether or not I keep the new value correctly (it'd be a random walk Metropolis-Hastings, and the details get a bit complex), then even though I never generate a single normal random variable, if I do this procedure for long enough, the list of numbers I get from the procedure will be distributed like a large number of draws from something that generates normal random variables. This would give me a Markov Chain Monte Carlo simulation for a standard normal random variable. If I used this to estimate probabilities, that would be a MCMC estimate."
| null | CC BY-SA 3.0 | null | 2010-07-20T00:52:13.287 | 2015-02-16T06:06:58.363 | 2015-02-16T06:06:58.363 | 57408 | 61 | null |
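The random-walk example in the last paragraph above can be written out in a few lines of R; this is a deliberately bare-bones Metropolis sampler for a standard normal target, kept close to the verbal description (uniform ±0.5 proposals, accept/reject, then count draws below 0.5).

```
set.seed(123)
n_iter <- 50000
x      <- numeric(n_iter)
x[1]   <- 0

for (i in 2:n_iter) {
  proposal <- x[i - 1] + runif(1, -0.5, 0.5)     # random-walk proposal
  alpha    <- dnorm(proposal) / dnorm(x[i - 1])  # Metropolis ratio for a N(0,1) target
  x[i] <- if (runif(1) < alpha) proposal else x[i - 1]
}

# In practice one would discard an initial burn-in; omitted here for brevity
mean(x < 0.5)  # MCMC estimate of P(Z < 0.5)
pnorm(0.5)     # exact value, about 0.691
```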
189 | 2 | null | 23 | 12 | null | Let $F(x)$ denote the cdf; then you can always approximate the pdf of a continuous random variable by calculating $$ \frac{F(x_2) - F(x_1)}{x_2 - x_1},$$ where $x_1$ and $x_2$ are on either side of the point where you want to know the pdf and the distance $|x_2 - x_1|$ is small.
| null | CC BY-SA 3.0 | null | 2010-07-20T00:59:34.643 | 2014-12-03T01:21:36.467 | 2014-12-03T01:21:36.467 | 5339 | 173 | null |
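A quick numeric check of this finite-difference approximation in R, using the normal cdf and pdf as a known case:

```
x1 <- 0.995; x2 <- 1.005
(pnorm(x2) - pnorm(x1)) / (x2 - x1)  # finite-difference approximation of the pdf at 1
dnorm(1)                             # exact density at x = 1, about 0.242
```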
190 | 2 | null | 170 | 14 | null | [A New View of Statistics](http://www.sportsci.org/resource/stats/) by Will G. Hopkins is great! It is designed to help you understand how to understand the results of statistical analyses, not how to prove statistical theorems.
| null | CC BY-SA 3.0 | null | 2010-07-20T01:07:38.383 | 2015-03-02T00:01:35.840 | 2015-03-02T00:01:35.840 | 25 | 25 | null |
191 | 2 | null | 128 | 12 | null | The Bland-Altman plot is more widely known as the Tukey Mean-Difference Plot (one of many charts devised by John Tukey [http://en.wikipedia.org/wiki/John_Tukey](http://en.wikipedia.org/wiki/John_Tukey)).
The idea is that the x-axis is the mean of your two measurements, which is your best guess as to the "correct" result, and the y-axis is the difference between the two measurements. The chart can then highlight certain types of anomalies in the measurements. For example, if one method always gives too high a result, then you'll get all of your points above or all below the zero line. It can also reveal, for example, that one method over-estimates high values and under-estimates low values.
If you see the points on the Bland-Altman plot scattered all over the place, above and below zero, then the suggests that there is no consistent bias of one approach versus the other (of course, there could be hidden biases that this plot does not show up).
Essentially, it is a good first step for exploring the data. Other techniques can be used to dig into more particular sorts of behaviour of the measurements.
| null | CC BY-SA 2.5 | null | 2010-07-20T01:17:17.377 | 2010-07-20T01:17:17.377 | null | null | 173 | null |
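A small R sketch of constructing the plot as described above (mean of the two methods on the x-axis, their difference on the y-axis, with the conventional mean ± 1.96 SD limits of agreement added); the two measurement vectors are simulated.

```
set.seed(11)
true <- runif(50, 80, 120)
m1   <- true + rnorm(50, sd = 3)      # method 1
m2   <- true + 2 + rnorm(50, sd = 3)  # method 2, with a constant bias

avg <- (m1 + m2) / 2
dif <- m1 - m2

plot(avg, dif, xlab = "Mean of the two methods",
     ylab = "Difference (method 1 - method 2)")
abline(h = mean(dif))                                      # average bias
abline(h = mean(dif) + c(-1.96, 1.96) * sd(dif), lty = 2)  # limits of agreement
```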
192 | 1 | 293 | null | 5 | 4107 | I'm aware that this one is far from yes or no question, but I'd like to know which techniques do you prefer in categorical data analysis - i.e. cross tabulation with two categorical variables.
I've come up with:
- χ² test - well, this is quite self-explanatory
  - Fisher's exact test - when n < 40,
  - Yates' continuity correction - when n > 40,
- Cramer's V - measure of association for tables which have more than 2 x 2 cells,
- Φ coefficient - measure of association for 2 x 2 tables,
- contingency coefficient (C) - measure of association for n x n tables,
- odds ratio - independence of two categorical variables,
- McNemar marginal homogeneity test,
And my question here is: Which statistical techniques for cross-tabulated data (two categorical variables) do you consider relevant (and why)?
| Cross tabulation of two categorical variables: recommended techniques | CC BY-SA 2.5 | null | 2010-07-20T01:18:11.523 | 2020-11-05T10:10:06.100 | 2020-10-30T16:05:03.157 | 930 | 1356 | [
"categorical-data",
"contingency-tables",
"association-measure"
] |
193 | 2 | null | 166 | 9 | null | Suppose that you want to know what percentage of people would vote for a particular candidate (say, $\pi$, note that by definition $\pi$ is between 0 and 100). You sample $N$ voters at random to find out how they would vote and your survey of these $N$ voters tells you that the percentage is $p$. So, you would like to establish a confidence interval for the true percentage.
If you assume that $p$ is normally distributed (an assumption that may or may not be justified depending on how 'big' $N$ is) then your confidence interval for $\pi$ would be of the following form:
$$
CI = [ p - k * sd(p),~~ p + k * sd(p)]
$$
where $k$ is a constant that depends on the extent of confidence you want (i.e., 95% or 99% etc).
From a polling perspective, you want the width of your confidence interval to be 'low'. Usually, pollsters work with the margin of error which is basically one-half of the CI. In other words, $\text{MoE} = k * sd(p)$.
Here is how we would go about calculating $sd(p)$: By definition, $p = \sum X_i / N$, where $X_i = 1$ if voter $i$ votes for the candidate and $0$ otherwise.
Since we sampled the voters at random, we can assume that the $X_i$ are i.i.d. Bernoulli random variables. Therefore,
$$
Var(P) = V\left( \sum\frac{X_i}{N}\right) = \frac{\sum V(X_i)}{N^2} = \frac{N \pi (1-\pi)}{N^2} = \frac{\pi (1-\pi)}{N}.
$$
Thus,
$$
sd(p) = \sqrt{\frac{\pi * (1-\pi)}{N}}
$$
Now to estimate margin of error we need to know $\pi$ which we do not know obviously. But, an inspection of the numerator suggests that the 'worst' estimate for $sd(p)$ in the sense that we get the 'largest' standard deviation is when $\pi = 0.5$. Therefore, the worst possible standard deviation is:
$$
sd(p) = \sqrt{0.5 * 0.5 / N } = 0.5 / \sqrt{N}
$$
So, you see that the margin of error falls off like $1/\sqrt{N}$, and thus you really do not need very big samples to reduce your margin of error; in other words, $N$ need not be very large for you to obtain a narrow confidence interval.
For example, for a 95 % confidence interval (i.e., $k= 1.96$) and $N = 1000$, the confidence interval is:
$$
\left[p - 1.96 \frac{0.5}{\sqrt{1000}},~~ p + 1.96 \frac{0.5}{\sqrt{1000}}\right] = [p - 0.03,~~ p + 0.03]
$$
As we increase $N$ the costs of polling go up linearly but the margin of error shrinks only like $1/\sqrt{N}$, so the gains diminish. That is the reason why pollsters usually cap $N$ at around 1000, as that gives them a reasonable margin of error under the worst possible assumption of $\pi = 50\%$.
| null | CC BY-SA 3.0 | null | 2010-07-20T01:45:12.020 | 2016-04-08T20:00:37.107 | 2016-04-08T20:00:37.107 | -1 | null | null |
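A quick simulation in R of the interval above ($N = 1000$, worst case $\pi = 0.5$), checking that roughly 95% of such intervals cover the true value; the 0.03 margin matches the hand calculation.

```
set.seed(99)
N   <- 1000
pi0 <- 0.5
moe <- 1.96 * 0.5 / sqrt(N)  # about 0.031, as derived above

covered <- replicate(10000, {
  p <- mean(rbinom(N, 1, pi0))  # one simulated poll of N voters
  abs(p - pi0) <= moe           # did the interval p +/- moe cover pi0?
})
mean(covered)  # close to 0.95
```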
194 | 1 | 200 | null | 9 | 1387 | I am sure that everyone who's trying to find patterns in historical stock market data or betting history would like to know about this. Given a huge sets of data, and thousands of random variables that may or may not affect it, it makes sense to ask any patterns that you extract out from the data are indeed true patterns, not statistical fluke.
A lot of patterns are only valid when they are tested in the samples. And even those that are patterns that are valid out of samples may cease to become valid when you apply it in the real world.
I understand that it is not possible to completely 100% make sure a pattern is valid all the time, but besides in and out of samples tests, are their any tests that could establish the validness of a pattern?
| Data Mining-- how to tell whether the pattern extracted is meaningful? | CC BY-SA 4.0 | null | 2010-07-20T01:47:36.197 | 2022-05-15T06:03:20.027 | 2022-05-15T06:03:20.027 | 175 | 175 | [
"data-mining"
] |
195 | 1 | 2872 | null | 8 | 1408 | I am looking at fitting distributions to data (with a particular focus on the tail) and am leaning towards Anderson-Darling tests rather than Kolmogorov-Smirnov. What do you think are the relative merits of these or other tests for fit (e.g. Cramer-von Mises)?
| What do you think is the best goodness of fit test? | CC BY-SA 2.5 | null | 2010-07-20T02:01:05.727 | 2010-09-20T00:29:57.047 | null | null | 173 | [
"hypothesis-testing",
"fitting"
] |
196 | 1 | 197 | null | 31 | 14239 | Besides [gnuplot](http://en.wikipedia.org/wiki/Gnuplot) and [ggobi](http://www.ggobi.org/), what open source tools are people using for visualizing multi-dimensional data?
Gnuplot is more or less a basic plotting package.
Ggobi can do a number of nifty things, such as:
- animate data along a dimension or among discrete collections
- animate linear combinations varying the coefficients
- compute principal components and other transformations
- visualize and rotate 3 dimensional data clusters
- use colors to represent a different dimension
What other useful approaches are based in open source and thus freely reusable or customizable?
Please provide a brief description of the package's abilities in the answer.
| Open source tools for visualizing multi-dimensional data? | CC BY-SA 3.0 | null | 2010-07-20T02:17:24.800 | 2016-07-29T02:59:10.510 | 2012-11-21T06:25:07.173 | 9007 | 87 | [
"data-visualization",
"open-source"
] |
197 | 2 | null | 196 | 13 | null | How about R with [ggplot2](http://had.co.nz/ggplot2/)?
Other tools that I really like:
- Processing
- Prefuse
- Protovis
| null | CC BY-SA 2.5 | null | 2010-07-20T02:24:38.993 | 2010-07-20T02:42:01.603 | 2010-07-20T02:42:01.603 | 5 | 5 | null |
198 | 2 | null | 156 | 4 | null | Start with the distribution of $\bar{y}$, show that since $v$ is normal, $\bar{y}$ is multivariate normal and that consequently $u$ must also be normal; also show that the covariance matrix of $\bar{y}$ is of the form $\sigma^2\cdot(X^T X)^{-1}$ and thus -- if $\sigma^2$ were known -- the variance of $u$ would be $\sigma^2 c^T (X^T X)^{-1} c$. Show that the distribution of $\bar{e}^T \bar{e}$ must be chi-squared and (carefully) find the degrees of freedom. Think about what the operation $\bar{e}^T \bar{e} c^T (X^T X)^{-1} c$ must therefore produce, and what its distribution and degrees of freedom are.
The result follows (almost) immediately from the definition of the t-distribution.
| null | CC BY-SA 3.0 | null | 2010-07-20T02:32:43.653 | 2012-05-15T04:52:05.677 | 2012-05-15T04:52:05.677 | 183 | 61 | null |
199 | 2 | null | 194 | 6 | null | You could try:
- Bagging http://en.m.wikipedia.org/wiki/Bootstrap_aggregating
- Boosting http://en.m.wikipedia.org/wiki/Boosting
- Cross validation http://en.m.wikipedia.org/wiki/Cross-validation_(statistics)
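For instance, a minimal k-fold cross-validation sketch in R (my illustration; the model and data are arbitrary):

```r
# Estimate out-of-sample error of a linear model via 5-fold cross-validation
set.seed(1)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(mtcars)))
cv_mse <- sapply(1:k, function(i) {
  fit  <- lm(mpg ~ wt + hp, data = mtcars[folds != i, ])
  pred <- predict(fit, newdata = mtcars[folds == i, ])
  mean((mtcars$mpg[folds == i] - pred)^2)   # held-out mean squared error
})
mean(cv_mse)
```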
| null | CC BY-SA 2.5 | null | 2010-07-20T02:32:53.093 | 2010-07-20T02:32:53.093 | null | null | 5 | null |
200 | 2 | null | 194 | 16 | null | If you want to know that a pattern is meaningful, you need to show what it actually means. Statistical tests do not do this. Unless your data can be said to be in some sense "complete", inferences drawn from the data will always be provisional.
You can increase your confidence in the validity of a pattern by testing against more and more out-of-sample data, but that doesn't protect you from it turning out to be an artefact. The broader your range of out-of-sample data -- e.g., in terms of how it is acquired and what sort of systematic confounding factors might exist within it -- the better the validation.
Ideally, though, you need to go beyond identifying patterns and come up with a persuasive theoretical framework that explains the patterns you've found, and then test that by other, independent means. (This is called "science".)
| null | CC BY-SA 2.5 | null | 2010-07-20T02:48:45.177 | 2012-08-20T10:05:15.320 | 2012-08-20T10:05:15.320 | 174 | 174 | null |
201 | 2 | null | 7 | 10 | null | Start R and type `data()`. This will show all datasets in the search path.
Many additional datasets are available in add-on packages.
For example, there are some interesting real-world social science datasets in the `AER` package.
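A short sketch of how you might explore this (the `Affairs` dataset is my pick and, as far as I know, ships with `AER`):

```r
data()                    # datasets in currently attached packages
data(package = "AER")     # datasets in the AER package (install.packages("AER") first)
library(AER)
data("Affairs")
head(Affairs)
```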
| null | CC BY-SA 2.5 | null | 2010-07-20T03:11:36.027 | 2010-07-20T03:11:36.027 | null | null | 183 | null |
202 | 2 | null | 138 | 13 | null |
- If you like learning through videos, I collated a list of R training videos.
- I also prepared a general post on learning R with suggestions on books, online manuals, blogs, videos, user interfaces, and more.
| null | CC BY-SA 3.0 | null | 2010-07-20T03:13:22.953 | 2011-05-27T03:38:47.843 | 2011-05-27T03:38:47.843 | 183 | 183 | null |
203 | 1 | null | null | 23 | 27540 | Following on from [this question](https://stats.stackexchange.com/questions/10/under-what-conditions-should-likert-scales-be-used-as-ordinal-or-interval-data):
Imagine that you want to test for differences in central tendency between two groups (e.g., males and females)
on a 5-point Likert item (e.g., satisfaction with life: Dissatisfied to Satisfied).
I think a t-test would be sufficiently accurate for most purposes,
but that a bootstrap test of the difference between group means would often provide more accurate confidence intervals.
What statistical test would you use?
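For concreteness, here is a hedged sketch in R of the two approaches I have in mind (the simulated 5-point responses are made up for illustration):

```r
set.seed(42)  # fake 5-point responses for two groups
male   <- sample(1:5, 100, replace = TRUE, prob = c(.10, .20, .30, .25, .15))
female <- sample(1:5, 100, replace = TRUE, prob = c(.15, .25, .30, .20, .10))

t.test(male, female)                       # Welch two-sample t-test

# Percentile bootstrap CI for the difference in group means
boot_diff <- replicate(5000,
  mean(sample(male, replace = TRUE)) - mean(sample(female, replace = TRUE)))
quantile(boot_diff, c(.025, .975))
```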
| Group differences on a five point Likert item | CC BY-SA 2.5 | null | 2010-07-20T03:31:45.820 | 2018-10-11T11:08:24.087 | 2017-04-13T12:44:32.747 | -1 | 183 | [
"t-test",
"ordinal-data",
"likert",
"scales"
] |
204 | 2 | null | 196 | 11 | null | The lattice package in R.
>
Lattice is a powerful and elegant high-level data visualization
system, with an emphasis on multivariate data, that is sufficient for
typical graphics needs, and is also flexible enough to handle most
nonstandard requirements.
[Quick-R has a quick introduction](http://www.statmethods.net/advgraphs/trellis.html).
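A minimal sketch (my example, not from the lattice documentation) showing the conditioning and grouping that make lattice useful for multivariate data:

```r
library(lattice)
# One panel per number of cylinders, point groups by transmission type
xyplot(mpg ~ wt | factor(cyl), data = mtcars,
       groups = factor(am), auto.key = TRUE,
       xlab = "Weight", ylab = "Miles per gallon")
```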
| null | CC BY-SA 3.0 | null | 2010-07-20T03:35:58.693 | 2012-11-21T06:38:02.093 | 2012-11-21T06:38:02.093 | 183 | 183 | null |
205 | 1 | 353 | null | 23 | 2446 | I'm curious about why we treat fitting GLMS as though they were some special optimization problem. Are they? It seems to me that they're just maximum likelihood, and that we write down the likelihood and then ... we maximize it! So why do we use Fisher scoring instead of any of the myriad of optimization schemes that has been developed in the applied math literature?
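To make the point concrete, here is a hedged sketch (my own illustration, not an authoritative benchmark): handing the Bernoulli log-likelihood to a generic optimiser and comparing the result with `glm()`, which uses Fisher scoring / IRLS.

```r
# Logistic regression by direct maximum likelihood with a generic optimiser
negloglik <- function(beta, X, y) {
  eta <- X %*% beta
  -sum(y * eta - log(1 + exp(eta)))        # negative Bernoulli log-likelihood, logit link
}
X <- cbind(1, mtcars$wt)                   # intercept + one predictor
y <- mtcars$am
fit_optim <- optim(c(0, 0), negloglik, X = X, y = y, method = "BFGS")
fit_glm   <- glm(am ~ wt, data = mtcars, family = binomial)
rbind(optim = fit_optim$par, glm = coef(fit_glm))   # essentially the same estimates
```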
| Why do we make a big fuss about using Fisher scoring when we fit a GLM? | CC BY-SA 2.5 | null | 2010-07-20T03:51:24.050 | 2021-11-18T13:19:22.933 | 2021-11-18T13:19:22.933 | 11887 | 187 | [
"generalized-linear-model",
"optimization",
"history",
"fisher-scoring"
] |
206 | 1 | 209 | null | 73 | 1110763 | What is the difference between discrete data and continuous data?
| What is the difference between discrete data and continuous data? | CC BY-SA 3.0 | null | 2010-07-20T03:53:54.767 | 2020-01-30T17:24:38.353 | 2011-05-27T03:35:28.263 | 183 | 188 | [
"continuous-data",
"discrete-data"
] |
207 | 2 | null | 165 | 256 | null | First, we need to understand what is a Markov chain. Consider the following [weather](http://en.wikipedia.org/wiki/Examples_of_Markov_chains#A_very_simple_weather_model) example from Wikipedia. Suppose that weather on any given day can be classified into two states only: sunny and rainy. Based on past experience, we know the following:
$P(\text{Next day is Sunny}\,\vert \,\text{Given today is Rainy})=0.50$
Since the next day's weather is either sunny or rainy, it follows that:
$P(\text{Next day is Rainy}\,\vert \,\text{Given today is Rainy})=0.50$
Similarly, let:
$P(\text{Next day is Rainy}\,\vert \,\text{Given today is Sunny})=0.10$
Therefore, it follows that:
$P(\text{Next day is Sunny}\,\vert \,\text{Given today is Sunny})=0.90$
The above four numbers can be compactly represented as a transition matrix which represents the probabilities of the weather moving from one state to another state as follows:
$P = \begin{bmatrix}
& S & R \\
S& 0.9 & 0.1 \\
R& 0.5 & 0.5
\end{bmatrix}$
We might ask several questions whose answers follow:
---
Q1: If the weather is sunny today then what is the weather likely to be tomorrow?
A1: Since, we do not know what is going to happen for sure, the best we can say is that there is a $90\%$ chance that it is likely to be sunny and $10\%$ that it will be rainy.
---
Q2: What about two days from today?
A2: One day prediction: $90\%$ sunny, $10\%$ rainy. Therefore, two days from now:
The first day it can be sunny and the next day it can also be sunny. The chance of this happening is $0.9 \times 0.9$.
Or
The first day it can be rainy and the second day it can be sunny. The chance of this happening is $0.1 \times 0.5$.
Therefore, the probability that the weather will be sunny in two days is:
$P(\text{Sunny 2 days from now}) = 0.9 \times 0.9 + 0.1 \times 0.5 = 0.81 + 0.05 = 0.86$
Similarly, the probability that it will be rainy is:
$P(\text{Rainy 2 days from now}) = 0.1 \times 0.5 + 0.9 \times 0.1 = 0.05 + 0.09 = 0.14$
---
In linear algebra (transition matrices) these calculations correspond to all the permutations in transitions from one step to the next (sunny-to-sunny ($S_2S$), sunny-to-rainy ($S_2R$), rainy-to-sunny ($R_2S$) or rainy-to-rainy ($R_2R$)) with their calculated probabilities:
[](https://i.stack.imgur.com/gNcxV.png)
On the lower part of the image we see how to calculate the probability of a future state ($t+1$ or $t+2$) given the probabilities (probability mass function, $PMF$) for every state (sunny or rainy) at time zero (now or $t_0$) as simple matrix multiplication.
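In R, for example, the same two-day calculation is a single matrix product (a small sketch of mine, not part of the original answer):

```r
P <- matrix(c(0.9, 0.1,
              0.5, 0.5), nrow = 2, byrow = TRUE,
            dimnames = list(c("S", "R"), c("S", "R")))
P %*% P   # row "S": 0.86 and 0.14 = P(sunny / rainy two days from now | sunny today)
```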
If you keep forecasting weather like this you will notice that eventually the $n$-th day forecast, where $n$ is very large (say $30$), settles to the following 'equilibrium' probabilities:
$P(\text{Sunny}) = 0.833$
and
$P(\text{Rainy}) = 0.167$
In other words, your forecast for the $n$-th day and the $(n+1)$-th day remain the same. In addition, you can also check that the 'equilibrium' probabilities do not depend on the weather today. You would get the same forecast for the weather whether you start off by assuming that the weather today is sunny or rainy.
The above example will only work if the state transition probabilities satisfy several conditions which I will not discuss here. But, notice the following features of this 'nice' Markov chain (nice = transition probabilities satisfy conditions):
Irrespective of the initial starting state we will eventually reach an equilibrium probability distribution of states.
Markov Chain Monte Carlo exploits the above feature as follows:
We want to generate random draws from a target distribution. We then identify a way to construct a 'nice' Markov chain such that its equilibrium probability distribution is our target distribution.
If we can construct such a chain then we arbitrarily start from some point and iterate the Markov chain many times (like how we forecast the weather $n$ times). Eventually, the draws we generate would appear as if they are coming from our target distribution.
We then approximate the quantities of interest (e.g., the mean) by taking the sample average of the draws after discarding a few initial draws; this is the Monte Carlo component.
There are several ways to construct 'nice' Markov chains (e.g., Gibbs sampler, Metropolis-Hastings algorithm).
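A small sketch of these ideas in R (my illustration using the weather chain above; a real MCMC sampler would construct the chain from the target distribution instead):

```r
P <- matrix(c(0.9, 0.1,
              0.5, 0.5), nrow = 2, byrow = TRUE)

# Equilibrium: the 30-step transition matrix has (0.833, 0.167) in every row
Pn <- diag(2)
for (i in 1:30) Pn <- Pn %*% P
Pn

# Monte Carlo: simulate the chain, discard a burn-in, and take sample averages
set.seed(1)
n <- 1e5
state <- 1                                  # 1 = Sunny, 2 = Rainy; arbitrary start
draws <- integer(n)
for (i in 1:n) {
  state    <- sample(1:2, 1, prob = P[state, ])   # move using the current row of P
  draws[i] <- state
}
mean(draws[-(1:100)] == 1)                  # about 0.833, the equilibrium P(Sunny)
```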
| null | CC BY-SA 3.0 | null | 2010-07-20T04:00:14.387 | 2016-08-18T18:05:16.030 | 2016-08-18T18:05:16.030 | 67822 | null | null |
208 | 2 | null | 206 | 13 | null | Temperature is continuous. It can be 23 degrees, 23.1 degrees, 23.100004 degrees.
Sex is discrete. You can only be male or female (in classical thinking, anyway). It is something you could represent with a whole number like 1, 2, etc.
The difference is important, as many statistical and data mining algorithms can handle one type but not the other. For example, in ordinary regression the Y must be continuous, whereas in logistic regression the Y is discrete.
| null | CC BY-SA 3.0 | null | 2010-07-20T04:07:11.093 | 2017-06-28T19:16:14.737 | 2017-06-28T19:16:14.737 | 74 | 74 | null |
209 | 2 | null | 206 | 64 | null | Discrete data can only take particular values. There may potentially be an infinite number of those values, but each is distinct and there's no grey area in between. Discrete data can be numeric -- like numbers of apples -- but it can also be categorical -- like red or blue, or male or female, or good or bad.
Continuous data are not restricted to defined separate values, but can occupy any value over a continuous range. Between any two continuous data values, there may be an infinite number of others. Continuous data are always essentially numeric.
It sometimes makes sense to treat discrete data as continuous and the other way around:
- For example, something like height is continuous, but often we don't really care too much about tiny differences and instead group heights into a number of discrete bins -- e.g. measuring only to the nearest centimetre.
- Conversely, if we're counting large amounts of some discrete entity -- e.g. grains of rice, or termites, or pennies in the economy -- we may choose not to think of 2,000,006 and 2,000,008 as crucially different values but instead as nearby points on an approximate continuum.
It can also sometimes be useful to treat numeric data as categorical, e.g. underweight, normal, obese. This is usually just another kind of binning.
| null | CC BY-SA 4.0 | null | 2010-07-20T04:16:52.663 | 2020-01-30T17:24:38.353 | 2020-01-30T17:24:38.353 | 272284 | 174 | null |
210 | 2 | null | 206 | 27 | null | Data is always discrete. Given a sample of `n` values on a variable, the sample contains at most `n` distinct values. See this quote:
>
All actual sample spaces are discrete, and all observable random
variables have discrete distributions. The continuous distribution is
a mathematical construction, suitable for mathematical treatment,
but not practically observable. E.J.G. Pitman (1979, p. 1).
Data on a variable are typically assumed to be drawn from a random variable.
The random variable is continuous over a range if there is an infinite number of possible values that the variable can take between any two different points in the range.
For example, height, weight, and time are typically assumed to be continuous.
Of course, any measurement of these variables will be finitely accurate and in some
sense discrete.
It is useful to distinguish between ordered (i.e., ordinal), unordered (i.e., nominal),
and binary discrete variables.
Some introductory textbooks confuse a continuous variable with a numeric variable.
For example, a score on a computer game is discrete even though it is numeric.
Some introductory textbooks confuse ratio variables with continuous variables. A count variable is a ratio variable, but it is not continuous.
In actual practice, a variable is often treated as continuous when it can take on a sufficiently large number of different values.
### References
- Pitman, E. J. G. 1979. Some basic theory for statistical inference. London: Chapman and Hall. Note: I found the quote in the introduction of Chapter 2 of Murray Aitkin's book Statistical Inference: An Integrated Bayesian/Likelihood Approach
| null | CC BY-SA 3.0 | null | 2010-07-20T04:19:12.023 | 2012-03-30T05:31:05.047 | 2012-03-30T05:31:05.047 | 183 | 183 | null |
211 | 2 | null | 138 | 7 | null | I have written a document that is freely available at my website and on CRAN. See the linked page:
[icebreakeR](http://www.ms.unimelb.edu.au/~andrewpr/r-users/)
The datasets that are used in the document are also linked from that page. Feedback is welcome and appreciated!
Andrew
| null | CC BY-SA 2.5 | null | 2010-07-20T04:49:07.680 | 2010-07-20T04:49:07.680 | null | null | 187 | null |
212 | 1 | 5001 | null | 5 | 1609 | I have 2 ASR (Automatic Speech Recognition) models that provide me with text transcriptions for my test data. The error measure I use is Word Error Rate.
What methods do I have to test for statistical significance of my new results?
An example:
I have an experiment with 10 speakers, each with the same 100 sentences, totalling 900 words per speaker. Method A has a WER (word error rate) of 19.0%, Method B 18.5%.
How do I test whether Method B is significantly better?
| What method to use to test Statistical Significance of ASR results | CC BY-SA 2.5 | null | 2010-07-20T04:54:20.793 | 2010-11-29T18:25:11.713 | 2010-07-21T06:19:29.143 | 190 | 190 | [
"statistical-significance"
] |
213 | 1 | 532 | null | 103 | 68364 | Suppose I have a large set of multivariate data with at least three variables. How can I find the outliers? Pairwise scatterplots won't work as it is possible for an outlier to exist in 3 dimensions that is not an outlier in any of the 2 dimensional subspaces.
I am not thinking of a regression problem, but of true multivariate data. So answers involving robust regression or computing leverage are not helpful.
One possibility would be to compute the principal component scores and look for an outlier in the bivariate scatterplot of the first two scores. Would that be guaranteed to work? Are there better approaches?
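To illustrate the kind of thing I mean, a hedged sketch in R (the simulated data and cutoff are my own, and the classical, non-robust estimates used here can themselves be distorted by outliers):

```r
set.seed(1)
x <- matrix(rnorm(300), ncol = 3)          # 100 points in 3 dimensions
x[1, ] <- c(3, 3, -3)                      # plant one outlying point

# Proposed approach: look at the first two principal component scores
pc <- prcomp(x, scale. = TRUE)
plot(pc$x[, 1:2])

# A common alternative: Mahalanobis distances compared to a chi-squared(3) cutoff
d2 <- mahalanobis(x, colMeans(x), cov(x))
which.max(d2)                              # the planted point has the largest distance
which(d2 > qchisq(0.975, df = 3))
```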
| What is the best way to identify outliers in multivariate data? | CC BY-SA 2.5 | null | 2010-07-20T05:02:33.793 | 2019-05-16T14:50:42.977 | 2016-08-20T15:26:22.127 | 28666 | 159 | [
"multivariate-analysis",
"outliers"
] |
214 | 2 | null | 170 | 8 | null | Some free Stats textbooks are also available [here](http://www.e-booksdirectory.com/mathematics.php).
| null | CC BY-SA 2.5 | null | 2010-07-20T05:02:42.573 | 2010-07-20T05:02:42.573 | null | null | 40 | null |
215 | 2 | null | 195 | 2 | null | I'm not sure about these tests, so this answer may be off-topic. Apologies if so. But, are you sure that you want a test? It really depends on what the purpose of the exercise is. Why are you fitting the distributions to the data, and what will you do with the fitted distributions afterward?
If you want to know what distribution fits best just because you're interested, then a test may help.
On the other hand, if you want to actually do something with the distribution, then you'd be better off developing a loss function based on your intentions, and using the distribution that gives you the most satisfactory value for the loss function.
It sounds to me from your description (particular focus on the tail) that you want to actually do something with the distribution. If so, it's hard for me to imagine a situation where an existing test will provide better guidance than comparing the effects of the fitted distributions in situ, somehow.
| null | CC BY-SA 2.5 | null | 2010-07-20T05:03:12.730 | 2010-07-20T05:03:12.730 | null | null | 187 | null |
216 | 1 | 217 | null | 10 | 719 | What are some good visualization libraries for online use? Are they easy to use and is there good documentation?
| Web visualization libraries | CC BY-SA 3.0 | null | 2010-07-20T05:04:40.840 | 2017-11-23T14:22:40.880 | 2017-11-23T08:47:55.583 | 11887 | 191 | [
"data-visualization",
"protovis"
] |
217 | 2 | null | 216 | 7 | null | IMO, [Protovis](http://vis.stanford.edu/protovis/) is the best and is very well documented and supported. It is the basis for my [webvis](http://cran.r-project.org/web/packages/webvis/index.html) R package.
These are also very good, although they have more of a learning curve:
- Processing
- Prefuse
| null | CC BY-SA 2.5 | null | 2010-07-20T05:10:08.383 | 2010-07-20T05:15:51.977 | 2010-07-20T05:15:51.977 | 5 | 5 | null |