Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1195 | 1 | null | null | 7 | 431 | Many income surveys (especially older ones) truncate key variables, such as household income, at some arbitrary point, to protect confidentiality. This point changes over time. This reduces inequality measures associated with the variable. I am interested in fitting a Pareto tail to the truncated distribution, replacing truncated values with imputed values to mimic the actual distribution. What's the best way to do this?
| How can I apply a Pareto tail to a truncated distribution? | CC BY-SA 2.5 | null | 2010-08-03T20:23:34.770 | 2011-05-12T17:12:20.963 | 2010-08-03T21:44:30.103 | null | 687 | [
"pareto-distribution",
"data-imputation"
] |
1196 | 2 | null | 1164 | 23 | null | Statistics is a tool for non-statistical-minded researchers, and they just don't care.
I once tried to help with a Medicine article my ex-wife was co-authoring. I wrote several pages describing the data, what it suggested, why certain observations had been excluded from the study... and the lead researcher, a doctor, threw it all away and asked someone to compute a p-value, which is all she (and just about everyone who would read the article) cared about.
| null | CC BY-SA 2.5 | null | 2010-08-03T20:26:23.000 | 2010-08-03T20:26:23.000 | null | null | 666 | null |
1197 | 2 | null | 1194 | 8 | null | I haven't read her work beyond the abstract of the linked paper, but my sense is that the distinction between "explanation" and "prediction" should be thrown away and replaced with the distinction between the aims of the practitioner, which are either "causal" or "predictive". In general, I think "explanation" is such a vague word that it means nearly nothing. For example, is Hooke's Law explanatory or predictive? On the other end of the spectrum, are predictively accurate recommendation systems good causal models of explicit item ratings? I think we all share the intuition that the goal of science is explanation, while the goal of technology is prediction; and this intuition somehow gets lost in consideration of the tools we use, like supervised learning algorithms, that can be employed for both causal inference and predictive modeling, but are really purely mathematical devices that are not intrinsically linked to "prediction" or "explanation".
Having said all of that, maybe the only word that I would apply to a model is interpretable. Regressions are usually interpretable; neural nets with many layers are often not so. I think people sometimes naively assume that a model that is interpretable is providing causal information, while uninterpretable models only provide predictive information. This attitude seems simply confused to me.
| null | CC BY-SA 2.5 | null | 2010-08-03T20:39:20.213 | 2010-08-03T20:39:20.213 | null | null | 303 | null |
1198 | 2 | null | 652 | 7 | null | This book is dynamite:
[George E. P. Box, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building](http://rads.stackoverflow.com/amzn/click/0471093157)
It starts from zero knowledge of Statistics but it doesn't insult the reader's intelligence. It's incredibly practical but with no loss of rigour; in fact, it underscores the danger of ignoring underlying assumptions (which are often false in real life) of common tests.
It's out of print but it's very easy to find a copy. Follow the link for a few options.
| null | CC BY-SA 2.5 | null | 2010-08-03T21:19:22.083 | 2010-08-03T21:39:31.527 | 2010-08-03T21:39:31.527 | 666 | 666 | null |
1199 | 2 | null | 1194 | 5 | null | as others have already said, the distinction is somewhat meaningless, except in so far as the aims of the researcher are concerned.
Brad Efron, one of the commentators on [The Two Cultures](http://www.stat.osu.edu/~bli/dmsl/papers/Breiman.pdf) paper, made the following observation (as discussed [in my earlier question](https://stats.stackexchange.com/questions/6/the-two-cultures-statistics-vs-machine-learning)):
>
Prediction by itself is
only occasionally sufficient. The post
office is happy with any method that
predicts correct addresses from
hand-written scrawls. Peter
Gregory undertook his study for
prediction purposes, but also to
better understand the medical basis of
hepatitis. Most statistical surveys
have the identification of causal
factors as their ultimate goal.
Certain fields (e.g. medicine) place a heavy weight on model fitting as an explanatory process (the distribution, etc.), as a means of understanding the underlying process that generates the data. Other fields are less concerned with this, and will be happy with a "black box" model that has very high predictive success. This can work its way into the model-building process as well.
| null | CC BY-SA 2.5 | null | 2010-08-03T21:30:22.343 | 2010-08-03T21:46:47.963 | 2017-04-13T12:44:28.813 | -1 | 5 | null |
1200 | 2 | null | 1194 | 7 | null | I am still a bit unclear as to what the question is. Having said that, to my mind the fundamental difference between predictive and explanatory models is the difference in their focus.
Explanatory Models
By definition explanatory models have as their primary focus the goal of explaining something in the real world. In most instances, we seek to offer simple and clean explanations. By simple I mean that we prefer parsimony (explain the phenomena with as few parameters as possible) and by clean I mean that we would like to make statements of the following form: "the effect of changing $x$ by one unit changes $y$ by $\beta$ holding everything else constant". Given these goals of simple and clear explanations, explanatory models seek to penalize complex models (by using appropriate criteria such as AIC) and prefer to obtain orthogonal independent variables (either via controlled experiments or via suitable data transformations).
Predictive Models
The goal of predictive models is to predict something. Thus, they tend to focus less on parsimony or simplicity and more on their ability to predict the dependent variable.
However, the above is somewhat of an artificial distinction as explanatory models can be used for prediction and sometimes predictive models can explain something.
| null | CC BY-SA 2.5 | null | 2010-08-03T21:32:40.813 | 2010-08-03T21:32:40.813 | null | null | null | null |
1201 | 2 | null | 11 | 3 | null | Sorry, no quick answer. There are thick books dedicated to answering this question. Here's a 600-page long example: [Harrell's Regression Modeling Strategies](http://rads.stackoverflow.com/amzn/click/1441929185)
| null | CC BY-SA 2.5 | null | 2010-08-03T21:50:09.007 | 2010-08-03T21:50:09.007 | null | null | 666 | null |
1202 | 1 | null | null | 3 | 693 | The question in short: What methods can be used to quantify distributional relationships between data when the distribution is unknown?
Now the longer story: I have a list of distributions and would like to rank them based on their similarity to a given base-line distribution. Correlation jumps into my mind in such a case and the Spearman correlation coefficient in particular given that it does not make any distributional assumptions. However, I would actually need to create the coefficient based on binned data (as this is done for histograms or densities) rather than the raw data, and I don't know if this is actually a valid step or if I am just manufacturing data.
In other words, if I have a 10000-point data set for each distribution, I would first create a binned distribution for each, where each bin is of equal width and contains the frequency of points falling in it, just the way this is done for density plots or histograms. Each bin is on a discrete scale. The data is actually computer screen coordinate data and values are between 1 and 1024. Each pixel position could represent a bin (but larger bins are possible, e.g. every 5 pixels being one bin). I would then compare the sequence of bins with each other rather than the raw data. The data set would look like this.
```
bins:      1 2 3 4 .... 1024
dist#base: 1 2 2 3 ....    3
dist#1:    1 4 5 5 ....    3
dist#2:    2 2 3 5 ....    6
...
dist#1000: 1 2 4 6 ....    6
```
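For illustration, one way to build such bins in R (the 5-pixel bin width and the names `v`, `dist_base`, `dist_1` are just assumptions for the example):
```
breaks <- seq(0.5, 1025.5, by = 5)   # 5-pixel bins covering coordinates 1..1024
bin_counts <- function(v) hist(v, breaks = breaks, plot = FALSE)$counts
# e.g. cor(bin_counts(dist_base), bin_counts(dist_1), method = "spearman")
```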
Does this make sense? Are there better ways of doing this? Are there better statistical methods? The goal of all this is, first, to test how close the distributions from measure A are to those from measure B, and second, to see whether I can predict one if the other is missing.
| Ranking distributional data by similarity | CC BY-SA 3.0 | null | 2010-08-03T22:09:33.840 | 2018-02-11T00:49:37.040 | 2018-02-11T00:07:14.130 | 186294 | 608 | [
"distributions",
"relative-distribution"
] |
1203 | 2 | null | 608 | 6 | null | In an attempt to partially answer my own question, I read [Wikipedia's](https://stats.stackexchange.com/questions/577/is-there-any-reason-to-prefer-the-aic-or-bic-over-the-other) description of leave-one-out cross validation
>
involves using a single observation
from the original sample as the
validation data, and the remaining
observations as the training data.
This is repeated such that each
observation in the sample is used once
as the validation data.
In R code, I suspect that that would mean something like this...
```
resid <- rep(NA, Nobs)
for (lcv in 1:Nobs)
{
data.loo <- data[-lcv,] #drop the data point that will be used for validation
loo.model <- lm(y ~ a+b,data=data.loo) #construct a model without that data point
resid[lcv] <- data[lcv,"y"] - (coef(loo.model)[1] + coef(loo.model)[2]*data[lcv,"a"]+coef(loo.model)[3]*data[lcv,"b"]) #compare the observed value to the value predicted by the loo model for each possible observation, and store that value
}
```
... is supposed to yield values in resid that are related to the AIC. In practice, the sum of squared residuals from each iteration of the LOO loop detailed above is a good predictor of the AIC for the notable.seeds, r^2 = .9776. But [elsewhere](https://stats.stackexchange.com/questions/577/is-there-any-reason-to-prefer-the-aic-or-bic-over-the-other) a contributor suggested that LOO should be asymptotically equivalent to the AIC (at least for linear models), so I'm a little disappointed that r^2 isn't closer to 1. Obviously this isn't really an answer - more like additional code to encourage someone to provide a better answer.
Addendum: Since AIC and BIC for models of fixed sample size only vary by a constant, the correlation of BIC with squared residuals is the same as the correlation of AIC with squared residuals, so the approach I took above appears to be fruitless.
| null | CC BY-SA 2.5 | null | 2010-08-03T22:23:31.813 | 2010-08-03T22:58:09.447 | 2017-04-13T12:44:35.347 | -1 | 196 | null |
1204 | 2 | null | 1202 | 0 | null | Edit: I misunderstood the question at first. Your observations are actually paired. Sample1 Bin1 to Baseline Bin1 etc.
What you could do is take the difference between sample and baseline for each bin, then use the Wilcoxon signed rank statistic on the differences.
[http://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test](http://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test)
If
- D is your sequence of differences, then
- R is the ranks of |D|, and
- psi = 0 if D < 0, 1 if D > 0.
- W = Sum(psi*R)
In R
```
wilcox.test(sample1-baseline)$statistic
```
| null | CC BY-SA 2.5 | null | 2010-08-03T22:25:14.913 | 2010-08-03T22:39:31.763 | 2010-08-03T22:39:31.763 | 287 | 287 | null |
1205 | 1 | null | null | 15 | 15603 | I want to perform a two-sample T-test to test for a difference between two independent samples which each sample abides by the assumptions of the T-test (each distribution can be assumed to be independent and identically distributed as Normal with equal variance). The only complication from the basic two-sample T-test is that the data is weighted. I am using weighted means and standard deviations, but weighted N's will artificially inflate the size of the sample, hence bias the result. Is it simply a case of replacing the weighted Ns with the unweighted Ns?
| Two-sample T-test with weighted data | CC BY-SA 2.5 | null | 2010-08-03T22:56:48.420 | 2010-08-04T15:57:30.210 | null | null | 366 | [
"t-test"
] |
1206 | 2 | null | 1194 | 21 | null | One practical issue that arises here is variable selection in modelling. A variable can be an important explanatory variable (e.g., is statistically significant) but may not be useful for predictive purposes (i.e., its inclusion in the model leads to worse predictive accuracy). I see this mistake almost every day in published papers.
Another difference is in the distinction between principal components analysis and factor analysis. PCA is often used in prediction, but is not so useful for explanation. FA involves the additional step of rotation which is done to improve interpretation (and hence explanation). There is a [nice post today on Galit Shmueli's blog about this](http://www.bzst.com/2010/08/pca-debate.html).
Update: a third case arises in time series when a variable may be an important explanatory variable but it just isn't available for the future. For example, home loans may be strongly related to GDP but that isn't much use for predicting future home loans unless we also have good predictions of GDP.
| null | CC BY-SA 3.0 | null | 2010-08-03T23:36:08.593 | 2015-07-28T11:25:31.220 | 2015-07-28T11:25:31.220 | 68299 | 159 | null |
1207 | 1 | null | null | 61 | 37684 | This post is the continuation of another post related to a [generic method for outlier detection in time series](https://stats.stackexchange.com/questions/1142/simple-algorithm-for-online-outlier-detection-of-a-generic-time-series).
Basically, at this point I'm interested in a robust way to discover the periodicity/seasonality of a generic time series affected by a lot of noise.
From a developer point of view, I would like a simple interface such as:
`unsigned int discover_period(vector<double> v);`
Where `v` is the array containing the samples, and the return value is the period of the signal.
The main point is that, again, I can't make any assumption regarding the analyzed signal.
I already tried an approach based on the signal autocorrelation (detecting the peaks of a correlogram), but it's not as robust as I would like.
| Period detection of a generic time series | CC BY-SA 3.0 | null | 2010-08-04T00:32:13.360 | 2021-07-06T18:38:45.747 | 2017-04-13T12:44:56.303 | -1 | 667 | [
"time-series",
"algorithms",
"frequency",
"real-time"
] |
1210 | 2 | null | 1133 | 12 | null | You should look into "partitioning chi-squared". This is similar in logic to performing post-hoc tests in ANOVA. It will allow you to determine whether your significant overall test is primarily attributable to differences in particular categories or groups of categories.
A quick google turned up this presentation, which at the end discusses methods for partitioning chi-squared.
[http://www.ed.uiuc.edu/courses/EdPsy490AT/lectures/2way_chi-ha-online.pdf](http://www.ed.uiuc.edu/courses/EdPsy490AT/lectures/2way_chi-ha-online.pdf)
| null | CC BY-SA 2.5 | null | 2010-08-04T01:26:59.243 | 2010-08-04T01:26:59.243 | null | null | 485 | null |
1211 | 2 | null | 770 | 4 | null | It's like that old joke. When asked for directions the philosopher said "Well, if I wanted to go there, I wouldn't start from here ..."
While I think each "culture" should be open to learning from the other, they have different ways of looking at the world.
I think the problem with learning statistics through studying machine learning algorithms is that, whilst ML algorithms start with statistical concepts, statistics doesn't start with algorithms, but probability models.
| null | CC BY-SA 2.5 | null | 2010-08-04T02:05:29.027 | 2010-08-04T02:05:29.027 | null | null | 521 | null |
1212 | 2 | null | 1174 | 22 | null | First of all my advice is you must refrain from trying out a Poisson distribution just as it is to the data. I suggest you must first make a theory as to why should Poisson distribution fit a particular dataset or a phenomenon.
Once you have established this, the next question is whether the distribution is homogeneous or not. This means whether all parts of the data are handled by the same poisson distribution or is there a variation in this based on some aspect like time or space. Once you have convinced of these aspects, try the following three tests:
- likelihood ratio test using a chi-square variable
- use of the conditional chi-square statistic, also called the Poisson dispersion test or variance test (a sketch of this one is given below)
- use of the Neyman-Scott statistic, which is based on a variance-stabilizing transformation of the Poisson variable
Search for these and you will find them easily on the net.
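For illustration, here is a minimal R sketch of the dispersion (variance) test mentioned in the second item; using `x` for your vector of observed counts is an assumption for the example:
```
# Poisson dispersion (variance) test: under the Poisson hypothesis the
# statistic is approximately chi-square with n - 1 degrees of freedom.
x <- rpois(50, lambda = 4)                            # example counts; replace with your data
n <- length(x)
D <- sum((x - mean(x))^2) / mean(x)                   # dispersion statistic
p.value <- pchisq(D, df = n - 1, lower.tail = FALSE)  # upper tail flags overdispersion
c(statistic = D, p.value = p.value)
```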
| null | CC BY-SA 3.0 | null | 2010-08-04T03:23:21.247 | 2015-09-03T22:17:07.177 | 2015-09-03T22:17:07.177 | 56216 | 25692 | null |
1213 | 2 | null | 138 | 57 | null |
## Some useful R links (find out the link that suits you):
Intro:
- for R basics http://cran.r-project.org/doc/contrib/usingR.pdf
- for data manipulation http://had.co.nz/plyr/plyr-intro-090510.pdf
- http://portal.stats.ox.ac.uk/userdata/ruth/APTS2012/APTS.html
- Interactive intro to R programming language https://www.datacamp.com/courses/introduction-to-r
- Application focused R tutorial https://www.teamleada.com/tutorials/introduction-to-statistical-programming-in-r
- In-browser learning for R http://tryr.codeschool.com/
with a focus on economics:
- lecture notes with R code http://www.econ.uiuc.edu/~econ472/e-Tutorial.html
- A brief guide to R and Economics http://people.su.se/~ma/R_intro/R_intro.pdf
Graphics: plots, maps, etc.:
- tutorial with info on plots http://cran.r-project.org/doc/contrib/Rossiter-RIntro-ITC.pdf
- a graph gallery of R plots and charts with supporting code http://addictedtor.free.fr/graphiques/
- A tutorial for Lattice http://osiris.sunderland.ac.uk/~cs0her/Statistics/UsingLatticeGraphicsInR.htm
- Ggplot R graphics http://had.co.nz/ggplot2/
- Ggplot Vs Lattice @ http://had.co.nz/ggplot/vs-lattice.html
- Multiple tutorials for using ggplot2 and Lattice http://learnr.wordpress.com/tag/ggplot2/
- Google Charts with R http://www.iq.harvard.edu/blog/sss/archives/2008/04/google_charts_f_1.shtml
- Introduction to using RGoogleMaps @ http://cran.r-project.org/web/packages/RgoogleMaps/vignettes/RgoogleMaps-intro.pdf
- Thematic Maps with R https://stackoverflow.com/questions/1260965/developing-geographic-thematic-maps-with-r
- geographic maps in R http://smartdatacollective.com/Home/22052
GUIs:
- Poor Man GUI for R http://wiener.math.csi.cuny.edu/pmg/
- R Commander is a robust GUI for R http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/installation-notes.html
- JGR is a Java-based GUI for R http://jgr.markushelbig.org/Screenshots.html
Time series & finance:
- a good beginner’s tutorial for Time Series http://www.stat.pitt.edu/stoffer/tsa2/index.html
- Interesting time series packages in R http://robjhyndman.com/software
- advanced time series in R http://www.wise.xmu.edu.cn/2007summerworkshop/download/Advanced%20Topics%20in%20Time%20Series%20Econometrics%20Using%20R1_ZongwuCAI.pdf
- provides a great analysis and visualization framework for quantitative trading http://www.quantmod.com/
- Guide to Credit Scoring using R http://cran.r-project.org/doc/contrib/Sharma-CreditScoring.pdf
- an Open Source framework for Financial Analysis http://www.rmetrics.org/
Data / text mining:
- A Data Mining tool in R http://rattle.togaware.com/
- An online e-book for Data Mining with R http://www.liaad.up.pt/~ltorgo/DataMiningWithR/
- Introduction to the Text Mining package in R http://cran.r-project.org/web/packages/tm/vignettes/tm.pdf
Other statistical techniques:
- Quick-R http://www.statmethods.net/
- annotated guides for a variety of models http://www.ats.ucla.edu/stat/r/dae/default.htm
- Social Network Analysis http://www.r-project.org/conferences/useR-2008/slides/Bojanowski.pdf
Editors:
- Komodo Edit R editor http://www.sciviews.org/SciViews-K/index.html
- Tinn-R makes for a good R editor http://www.sciviews.org/Tinn-R/
- An Eclipse plugin for R @ http://www.walware.de/goto/statet
- Instructions to install StatET in Eclipse http://www.splusbook.com/Rintro/R_Eclipse_StatET.pdf
- RStudio http://rstudio.org/
- Emacs Speaks Statistics, a statistical language package for Emacs http://ess.r-project.org/
Interfacing w/ other languages / software:
- to embed R data frames in Excel via multiple approaches http://learnr.wordpress.com/2009/10/06/export-data-frames-to-multi-worksheet-excel-file/
- provides a tool to make R usable from Excel http://www.statconn.com/
- Connect to MySQL from R http://erikvold.com/blog/index.cfm/2008/8/20/how-to-connect-to-mysql-with-r-in-wndows-using-rmysql
- info about pulling data from SAS, STATA, SPSS, etc. http://www.statmethods.net/input/importingdata.html
- Latex http://www.stat.uni-muenchen.de/~leisch/Sweave/
- R2HTML http://www.feferraz.net/en/P/R2HTML
Blogs, newsletters, etc.:
- A very informative blog http://blog.revolutionanalytics.com/
- A blog aggregator for posts about R http://www.r-bloggers.com/
- R mailing lists http://www.r-project.org/mail.html
- R newsletter (old) http://cran.r-project.org/doc/Rnews/
- R journal (current) http://journal.r-project.org/
Other / uncategorized: (as of yet)
- Web Scraping in R http://www.programmingr.com/content/webscraping-using-readlines-and-rcurl
- a very interesting list of packages that is seriously worth a look http://www.omegahat.org/
- Commercial versions of R @ http://www.revolutionanalytics.com/
- Red R for R tasks http://code.google.com/p/r-orange/
- KNIME for R (worth a serious look) http://www.knime.org/introduction/screenshots
- R Tutorial for Titanic https://statsguys.wordpress.com/
| null | CC BY-SA 3.0 | null | 2010-08-04T03:28:53.913 | 2015-04-01T21:11:30.723 | 2020-06-11T14:32:37.003 | -1 | 25692 | null |
1214 | 2 | null | 1207 | 57 | null | If you really have no idea what the periodicity is, probably the best approach is to find the frequency corresponding to the maximum of the spectral density. However, the spectrum at low frequencies will be affected by trend, so you need to detrend the series first. The following R function should do the job for most series. It is far from perfect, but I've tested it on a few dozen examples and it seems to work ok. It will return 1 for data that have no strong periodicity, and the length of period otherwise.
Update: Version 2 of function. This is much faster and seems to be more robust.
```
find.freq <- function(x)
{
n <- length(x)
spec <- spec.ar(c(x),plot=FALSE)
if(max(spec$spec)>10) # Arbitrary threshold chosen by trial and error.
{
period <- round(1/spec$freq[which.max(spec$spec)])
if(period==Inf) # Find next local maximum
{
j <- which(diff(spec$spec)>0)
if(length(j)>0)
{
nextmax <- j[1] + which.max(spec$spec[j[1]:500])
period <- round(1/spec$freq[nextmax])
}
else
period <- 1
}
}
else
period <- 1
return(period)
}
```
| null | CC BY-SA 4.0 | null | 2010-08-04T05:41:02.697 | 2021-07-06T18:38:45.747 | 2021-07-06T18:38:45.747 | 11887 | 159 | null |
1215 | 1 | 1220 | null | 2 | 181 | I have a list of sold items by size. Shoes in this case
```
Size Qty
35 2
36 1
37 4
38 4
39 32
40 17
41 23
42 57
43 95
44 90
45 98
46 33
47 16
48 4
total: 476
```
I have to tell the owner how many of each size to buy. The problem is, I can't just tell him:
- You should buy 95 shoes of size 43 for every one of size 36...
The usual practice is to buy the whole size curve and buy extras for the most selling sizes.
This is about a year worth of data.
How should I present this information in an easy to understand way?
What I want to present is a general rule, something like "for every size curve, you should buy x additional shoes of size y".
The idea would be to later apply this approach to other clothing items.
| Analysis of sales (what to buy) | CC BY-SA 4.0 | null | 2010-08-04T05:41:51.087 | 2020-01-01T17:57:19.090 | 2020-01-01T17:57:19.090 | 92235 | 698 | [
"data-visualization"
] |
1216 | 2 | null | 1215 | 3 | null | I suggest he stocks $y_s$ pairs of shoes of size $s$ where $y_s$ is chosen so that the probability of running out of stock before the next delivery is set to some acceptable level (e.g., 5%).
It seems reasonable to assume $y_s$ is Poisson with rate $\lambda_s$. You can estimate $\lambda_s$ as the average sales of that size over the last few delivery periods. Then all that remains is to find the 95th percentile of the Poisson distribution with mean equal to the estimated rate for each size.
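As a rough illustration, a minimal R sketch of this rule (the column names, the four delivery periods, and the 5% stock-out level are assumptions for the example):
```
# Stock the 95th percentile of a Poisson demand model, per size.
# 'sales' is assumed to be a data frame with columns Size and Qty,
# where Qty is total sales over (say) 4 past delivery periods.
lambda <- sales$Qty / 4                      # estimated demand rate per period
stock  <- qpois(0.95, lambda)                # 95th percentile of Poisson(lambda)
data.frame(Size = sales$Size, Stock = stock)
```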
| null | CC BY-SA 2.5 | null | 2010-08-04T06:12:22.350 | 2010-08-04T08:13:41.147 | 2010-08-04T08:13:41.147 | 159 | 159 | null |
1217 | 2 | null | 1194 | 12 | null | Example: A classic example that I have seen is in the context of predicting human performance.
Self-efficacy (i.e., the degree to which a person thinks that they can perform a task well) is often a strong predictor of task performance. Thus, if you put self-efficacy into a multiple regression along with other variables such as intelligence and degree of prior experience, you often find that self-efficacy is a strong predictor.
This has led some researchers to suggest that self-efficacy causes task performance, and that effective interventions are those which focus on increasing a person's sense of self-efficacy.
However, the alternative theoretical model sees self-efficacy largely as a consequence of task performance. I.e., If you are good, you'll know it. In this framework interventions should focus on increasing actual competence and not perceived competence.
Thus, including a variable like self-efficacy might increase prediction, but assuming you adopt the self-efficacy-as-consequence model, it should not be included as a predictor if the aim of the model is to elucidate causal processes influencing performance.
This of course raises the issue of how to develop and validate a causal theoretical model. This clearly relies on multiple studies, ideally with some experimental manipulation, and a coherent argument about dynamic processes.
Proximal versus distal: I've seen similar issues when researchers are interested in the effects of distal and proximal causes. Proximal causes tend to predict better than distal causes. However, theoretical interest may be in understanding the ways in which distal and proximal causes operate.
Variable selection: Finally, a huge issue in social science research is variable selection.
In any given study, there is an infinite number of variables that could have been measured but weren't. Thus, interpretation of models needs to consider the implications of this when making theoretical interpretations.
| null | CC BY-SA 2.5 | null | 2010-08-04T06:16:41.453 | 2010-08-04T06:16:41.453 | null | null | 183 | null |
1218 | 2 | null | 459 | 1 | null | I would strongly enjoin you to avoid red as an indicator: there are many sorts of colour-deficiency that make this choice problematic (see eg [http://en.wikipedia.org/wiki/Color_blindness#Design_implications_of_color_blindness](http://en.wikipedia.org/wiki/Color_blindness#Design_implications_of_color_blindness) ).
The high-contrast option is I believe the best choice.
| null | CC BY-SA 2.5 | null | 2010-08-04T06:51:12.213 | 2010-08-04T06:51:12.213 | null | null | null | null |
1219 | 2 | null | 764 | 10 | null | A very good article explaining the general approach of LMMs and their advantage over ANOVA is:
- Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59, 390-412.
Linear mixed-effects models (LMMs) generalize regression models to have residual-like components, random effects, at the level of, e.g., people or items and not only at the level of individual observations. The models are very flexible, for instance allowing the modeling of varying slopes and intercepts.
LMMs work by using a likelihood function of some kind, the probability of your data given some parameter, and a method for maximizing this (Maximum Likelihood Estimation; MLE) by fiddling around with the parameters. MLE is a very general technique allowing lots of different models, e.g., those for binary and count data, to be fitted to data, and is explained in a number of places, e.g.,
- Agresti, A. (2007). An Introduction to Categorical Data Analysis (2nd Edition). John Wiley & Sons.
LMMs, however, can't deal with non-Gaussian data like binary data or counts; for that you need Generalized Linear Mixed-effects Models (GLMMs). One way to understand these is first to look into GLMs; also see Agresti (2007).
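As a concrete (hypothetical) illustration of the crossed random effects design discussed by Baayen et al., here is a minimal sketch using the lme4 package in R; the data frame `d` and its variable names are made up for the example:
```
library(lme4)
# Random intercepts for subjects and items, fixed effect of condition
m <- lmer(rt ~ condition + (1 | subject) + (1 | item), data = d)
summary(m)
# For binary outcomes a GLMM is needed, e.g.:
# g <- glmer(correct ~ condition + (1 | subject) + (1 | item),
#            data = d, family = binomial)
```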
| null | CC BY-SA 2.5 | null | 2010-08-04T08:20:14.020 | 2010-08-04T08:20:14.020 | null | null | 702 | null |
1220 | 2 | null | 1215 | 4 | null | Following on from Rob's answer, you could estimate an acceptable level using a Normal approximation. So if an acceptable rate of running out is 5%, then use the following rule:
Qty + 1.644 * Qty^(0.5)
The value 1.644 comes from the [Normal](http://en.wikipedia.org/wiki/Normal_distribution) or Gaussian distribution.
Other acceptable rates could be:
- 1%: change 1.644 to 2.326
- 10%: change 1.644 to 1.282
Further Rationale
Rob suggested that you could model your data using a Poisson distribution. When the rate (or in your case Qty) is large (say 10), then the Normal distribution gives a good approximation. The following graph shows the Poisson 95th percentile (red) and the Normal approximation in green. As you can see they are fairly close. The raw data is shown as points.
![Normal approximation](http://img821.imageshack.us/img821/3636/tmps.jpg)
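For what it's worth, a small R sketch comparing the exact Poisson 95th percentile with the Normal approximation above (the example quantities are made up):
```
qty    <- c(10, 30, 60, 95)           # example quantities
exact  <- qpois(0.95, qty)            # exact Poisson 95th percentile
approx <- qty + 1.644 * sqrt(qty)     # Normal approximation from the rule above
round(cbind(qty, exact, approx), 1)
```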
| null | CC BY-SA 2.5 | null | 2010-08-04T08:20:54.580 | 2010-08-04T08:33:07.973 | 2010-08-04T08:33:07.973 | 8 | 8 | null |
1221 | 2 | null | 534 | 10 | null | In the twins example it is not just the correlation that suggests causality, but also the associated information or prior knowledge.
Suppose I add one further piece of information. Assume that the diligent twin spent 6 hours studying for a stats exam, but due to an unfortunate error the exam was in history. Would we still conclude the study was the cause of the superior performance?
Determining causality is as much a philosophical question as a scientific one, hence the tendency to invoke philosophers such as David Hume and Karl Popper when causality is discussed.
Not surprisingly medicine has made significant contributions to establishing causality through heuristics, such as Koch's postulates for establishing the causal relationship between microbes and disease.
These have been extended to "molecular Koch's postulates" required to show that a gene in a pathogen encodes a product that contributes to the disease caused by the pathogen.
Unfortunately I can't post hyperlinks, supposedly beCAUSE I'm a new user (not true) and don't have enough "reputation points". The real reason is anybody's guess.
| null | CC BY-SA 2.5 | null | 2010-08-04T08:39:21.830 | 2010-08-04T08:44:43.833 | 2010-08-04T08:44:43.833 | 521 | 521 | null |
1222 | 2 | null | 886 | 9 | null | I can't post a comment (the appropriate place for this comment) as I don't have enough reputation, but the answer accepted as the best answer by the question owner misses the point.
"If statistics is all about maximizing likelihood, then machine learning is all about minimizing loss."
The likelihood is a loss function. Maximising likelihood is the same as minimising a loss function: the deviance, which is just -2 times the log-likelihood function. Similarly finding a least squares solution is about minimising the loss function describing the residual sum of squares.
Both ML and stats use algorithms to optimise the fit of some function (in the broadest terms) to data. Optimisation necessarily involves minimising some loss function.
| null | CC BY-SA 2.5 | null | 2010-08-04T09:07:44.193 | 2010-08-04T09:07:44.193 | null | null | 521 | null |
1223 | 1 | null | null | 18 | 9771 | I'm looking for some robust techniques to remove outliers and errors (whatever the cause) from financial time-series data (i.e. tickdata).
Tick-by-tick financial time-series data is very messy. It contains huge (time) gaps when the exchange is closed, and makes huge jumps when the exchange opens again. When the exchange is open, all kinds of factors introduce trades at price levels that are wrong (they did not occur) and/or not representative of the market (a spike because of an incorrectly entered bid or ask price, for example). [This paper by tickdata.com](http://www.tickdata.com/pdf/Tick_Data_Filtering_White_Paper.pdf) (PDF) does a good job of outlining the problem, but offers few concrete solutions.
Most papers I can find online that mention this problem either ignore it (the tickdata is assumed filtered) or include the filtering as part of some huge trading model which hides any useful filtering steps.
Is anybody aware of more in-depth work in this area?
Update: [this question](https://stats.stackexchange.com/questions/1142/simple-algorithm-for-online-outlier-detection-of-a-generic-time-series) seems similar on the surface, but:
- Financial time series is (at least at the tick level) non-periodic.
- The opening effect is a big issue because you can't simply use the last day's data as initialisation even though you'd really like to (because otherwise you have nothing). External events might cause the new day's opening to differ dramatically both in absolute level, and in volatility from the previous day.
- Wildly irregular frequency of incoming data. Near open and close of the day the amount of datapoints/second can be 10 times higher than the average during the day. The other question deals with regularly sampled data.
- The "outliers" in financial data exhibit some specific patterns that could be detected with specific techniques not applicable in other domains and I'm -in part- looking for those specific techniques.
- In more extreme cases (e.g. the flash crash) the outliers might amount to more than 75% of the data over longer intervals (> 10 minutes). In addition, the (high) frequency of incoming data contains some information about the outlier aspect of the situation.
| Robust outlier detection in financial timeseries | CC BY-SA 2.5 | null | 2010-08-04T10:02:35.090 | 2022-01-11T16:19:56.830 | 2017-04-13T12:44:46.083 | -1 | 127 | [
"time-series",
"outliers"
] |
1224 | 1 | null | null | 2 | 186 | With data from two centres I want to account for potential heterogeneity or confounders between two centers. So the analysis will initially be stratified by clinical center and a chi square test performed with one degree of freedom. Is this appropriate with just two centres? Or is there an alternative?
| Heterogeneity with two studies | CC BY-SA 2.5 | null | 2010-08-04T10:38:03.740 | 2010-10-19T06:44:37.850 | 2010-10-18T21:55:09.667 | 930 | null | [
"hypothesis-testing",
"clinical-trials"
] |
1225 | 2 | null | 652 | 3 | null | Statistics as Principled Argument by Abelson is a good side book to learning statistics, particularly if your substantive field is in the social sciences. It won't teach you how to do analysis, but it will teach you about statistical thinking.
I reviewed this book [here](http://www.statisticalanalysisconsulting.com/book-review-statistics-as-principled-argument-by-robert-abelson/)
| null | CC BY-SA 3.0 | null | 2010-08-04T10:55:31.480 | 2015-07-20T16:35:32.303 | 2015-07-20T16:35:32.303 | 686 | 686 | null |
1226 | 2 | null | 30 | 3 | null | There are two parts to testing a random number generator. If you're only concerned with testing a uniform generator, then yes, something like the DIEHARD test suite is a good idea.
But often you need to test a transformation of a uniform generator. For example, you might use a uniform generator to create exponentially or normally distributed values. You may have a high-quality uniform generator -- say you have a trusted implementation of a well-known algorithm such as Mersenne Twister -- but you need to test whether the transformed output has the right distribution. In that case you need to do some sort of goodness of fit test such as Kolmogorov-Smirnov. But for starters, you could verify that the sample mean and variance have the values you expect.
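For example, a quick R sketch of checking a Box-Muller transform of a uniform generator; here `runif` merely stands in for whatever uniform source you are testing:
```
u1 <- runif(1e5); u2 <- runif(1e5)
z  <- sqrt(-2 * log(u1)) * cos(2 * pi * u2)  # Box-Muller transform to Normal(0, 1)
ks.test(z, "pnorm")                          # Kolmogorov-Smirnov goodness of fit
c(mean = mean(z), var = var(z))              # should be close to 0 and 1
```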
Most people don't -- and shouldn't -- write their own uniform random number generator from scratch. It's hard to write a good generator and easy to fool yourself into thinking you've written a good one when you haven't. For example, Donald Knuth tells the story in [TAOCP volume 2](http://rads.stackoverflow.com/amzn/click/0201896842) of a random number generator he wrote that turned out to be awful. But it's common for people to have to write their own code to produce random values from a new distribution.
| null | CC BY-SA 2.5 | null | 2010-08-04T11:48:13.410 | 2010-08-04T11:48:13.410 | null | null | 319 | null |
1228 | 1 | 1273 | null | 10 | 1192 | I am using a control chart to try to work on some infection data, and will raise an alert if the infection is considered "out of control".
Problems arise when I come to a set of data where most of the time points have zero infections, with only a few occasions of one or two infections, but these already exceed the control limit of the chart and raise an alert.
How should I construct the control chart if the data set has very few positive infection counts?
| How to interpret a control chart containing a majority of zero values? | CC BY-SA 4.0 | null | 2010-08-04T12:05:30.000 | 2022-11-27T06:33:46.727 | 2022-11-27T06:33:46.727 | 362671 | 588 | [
"data-visualization",
"quality-control"
] |
1229 | 2 | null | 652 | 3 | null | The best intro in my eyes is the following one:
>
David Howell - Statistical Methods for
Psychology
It is the BEST at making statistical concepts understandable for non-mathematicians so that they get the math afterwards!
Unfortunately it is updated every year and, hence, pricey.
| null | CC BY-SA 2.5 | null | 2010-08-04T12:32:08.780 | 2010-08-04T12:32:08.780 | null | null | 442 | null |
1230 | 2 | null | 652 | 5 | null | I am a big fan of [Statistical Models - Theory and Practice](http://rads.stackoverflow.com/amzn/click/0521671051) by David Friedman. It succeeds remarkably well to introduce and motivate the different concepts of statistical modeling through concrete, and historically important problems (cholera in London, Yule on the causes of poverty, Political repression in the McCarty era..).
Friedman illustrates the principles of modeling, and the pitfalls. In some sense, the discussion shows how to think about the critical issues and is honest about the connection between the statistical models and the real world phenomena.
| null | CC BY-SA 2.5 | null | 2010-08-04T12:32:12.103 | 2010-08-04T12:32:12.103 | null | null | 358 | null |
1231 | 2 | null | 1228 | 2 | null | You are asking quite a tricky question!
This is outside my area of expertise, but I know that [Prof Farrington](https://web.archive.org/web/20100507070649/http://www.mcs.open.ac.uk/People/c.p.farrington) does some work on this problem. So I would look at some of his papers and follow a few of his references. To get you started, this [report](http://stats-www.open.ac.uk/TechnicalReports/Cusum.pdf) looks relevant.
| null | CC BY-SA 4.0 | null | 2010-08-04T12:52:05.560 | 2022-11-27T06:33:10.127 | 2022-11-27T06:33:10.127 | 362671 | 8 | null |
1232 | 2 | null | 1228 | 1 | null | Would it make sense to plot the control chart based on an average of the weekly infections or another similar floating average? Would this then 'damp' out spikes due to daily high values whilst ensuring that changes in trends are picked up in a relatively timely manner.
| null | CC BY-SA 2.5 | null | 2010-08-04T12:56:10.217 | 2010-08-04T12:56:10.217 | null | null | 210 | null |
1233 | 2 | null | 1228 | 1 | null | Perhaps, you can build an edge case in your routine/software to deal with the situation. If you detect several zeros in the dataset then you set a separate control for that particular situation. This is obviously a hack and not a principled solution but may serve your present needs till you can come up with something better.
| null | CC BY-SA 2.5 | null | 2010-08-04T13:18:12.700 | 2010-08-04T13:18:12.700 | null | null | null | null |
1234 | 2 | null | 1223 | 5 | null | I have (with some delay) changed my answer to reflect your concern about the lack of 'adaptability' of the unconditional mad/median.
You can address the problem of time varying volatility with the robust statistics framework. This is done by using a robust estimator of the conditional variance (instead of the robust estimator of the unconditional variance I was suggesting earlier): the M-estimation of the GARCH model. Then you will have a robust, time varying estimate of $(\hat{\mu}_t,\hat{\sigma}_t)$ which are not the same as those
produced by the usual GARCH fit. In particular, they are not driven by a few far-away outliers. Because these estimates are not driven by them, you can use them to reliably flag the outliers using the historical distribution of the
$$\frac{x_t-\hat{\mu}_t}{\hat{\sigma}_t}$$
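As a rough sketch of the flagging step only (this assumes you already have vectors of robust conditional estimates `mu` and `sigma` from whichever M-estimation fit you use; it is not the fit itself):
```
z <- (x - mu) / sigma               # robustly standardized observations
cutoff <- quantile(abs(z), 0.995)   # or a fixed threshold such as 4
outliers <- which(abs(z) > cutoff)  # indices flagged as outliers
```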
You can find more information (and a link to an R package) in this [paper](http://www.econ.kuleuven.ac.be/public/ndbae06/PDF-FILES/MGARCH.pdf):
>
Boudt, K. and Croux, C. (2010). Robust M-Estimation of Multivariate
GARCH Models.
| null | CC BY-SA 3.0 | null | 2010-08-04T13:32:07.383 | 2013-11-08T15:43:39.733 | 2013-11-08T15:43:39.733 | 603 | 603 | null |
1235 | 2 | null | 1223 | 8 | null | I'll add some paper references when I'm back at a computer, but here are some simple suggestions:
Definitely start by working with returns. This is critical to deal with the irregular spacing where you can naturally get big price gaps (especially around weekends). Then you can apply a simple filter to remove returns well outside the norm (eg. vs a high number of standard deviations). The returns will adjust to the new absolute level so large real changes will result in the loss of only one tick. I suggest using a two-pass filter with returns taken from 1 step and n steps to deal with clusters of outliers.
Edit 1: Regarding the usage of prices rather than returns: asset prices tend not to be stationary, so IMO that can pose some additional challenges. To account for the irregularity and power law effects, I would advise some kind of adjustment if you want to include them in your filter. You can scale the price changes by the time interval or by volatility. You can refer to the "realized volatility" literature for some discussion on this. Also discussed in Dacorogna et al.
To account for the changes in volatility, you might try basing your volatility calculation from the same time of the day over the past week (using the seasonality).
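A very crude base-R sketch of such a return-based filter, just to make the idea concrete (the threshold `k`, the use of the MAD, and the price vector `p` are assumptions for the example, not a recommendation):
```
# Flag ticks whose 1-step log return is extreme relative to a robust scale.
r <- c(0, diff(log(p)))                  # 1-step log returns, padded so lengths match
k <- 10                                  # arbitrary threshold
bad <- abs(r - median(r)) > k * mad(r)   # logical flag per tick
clean <- p[!bad]
# A second pass on n-step returns (as suggested above) would follow the same pattern.
```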
| null | CC BY-SA 2.5 | null | 2010-08-04T13:47:36.963 | 2010-08-04T15:45:37.880 | 2010-08-04T15:45:37.880 | 5 | 5 | null |
1236 | 2 | null | 652 | 1 | null | As a biologist, I found the Sokal and Rohlf text to be quite readable, despite its voluminous-ness. It's not so great as a quick reference, but does walk one through statistical theory.
R. R. Sokal and F. J. Rohlf, Biometry the principles and practice of statistics in biological research, Third. (New York: W.J. Freeman and Company, 1995).
| null | CC BY-SA 2.5 | null | 2010-08-04T14:02:21.880 | 2010-08-04T14:02:21.880 | null | null | 124 | null |
1237 | 2 | null | 109 | 1 | null | I usually like to run simulations to answer questions like this, but without confirmed details of the algorithm the question asker wants evaluated and no obvious implementation of the Holm-Sidak procedure available in R, that is not possible. For my answer I eyeballed the code provided [here](http://www.mathworks.com/matlabcentral/fileexchange/12786). Assuming that is the right procedure, and assuming the null hypothesis is that all group means are equal:
Feel free to correct me. I usually like to run simulations on such things. But, without the ability to readily do that, I can't check my answer. So, I might be entirely wrong here. My answer is that usually the Holm-Sidak will demonstrate greater power, but that the answer in a strict sense is "it depends". Both methods use the pooled error term and assume homogeneity of variance, so there is no difference in the procedures there. However, since Holm-Sidak adjusts in a stepwise manner, early comparisons are more likely to pass a threshold of significance than later comparisons, especially since the freedom to assess later comparisons is dependent on the outcome of previous comparisons. Thus, it seems likely that in situations where the differences in means between the groups to be compared are roughly equal (and meet the Tukey HSD threshold for significance) and the number of groups is sufficiently large (purposely vague without a simulation), Holm-Sidak will fail to reach significance for the later comparisons. Thus, in these situations Tukey's HSD will have more power than Holm-Sidak.
| null | CC BY-SA 2.5 | null | 2010-08-04T14:32:12.320 | 2010-08-04T14:38:22.450 | 2010-08-04T14:38:22.450 | 196 | 196 | null |
1238 | 2 | null | 30 | 2 | null | The [NIST publishes a list of statistical tests](http://csrc.nist.gov/groups/ST/toolkit/rng/stats_tests.html) with a reference implementation in C.
There is also [TestU01](http://www.iro.umontreal.ca/~simardr/testu01/tu01.html) by some smart folks, including respected PRNG researcher Pierre L'Ecuyer. Again, there is a reference implementation in C.
As pointed out by other commenters, these are for testing the generation of pseudo random bits. If you transform these bits into a different random variable (e.g. Box-Muller transform from uniform to Normal), you'll need additional tests to confirm the correctness of the transform algorithm.
| null | CC BY-SA 2.5 | null | 2010-08-04T14:47:27.783 | 2010-08-04T14:49:15.003 | 2010-08-04T14:49:15.003 | null | 729 | null |
1240 | 2 | null | 30 | 4 | null | Small correction to Colin's post: the CRAN package
[RDieHarder](http://cran.r-project.org/package=RDieHarder) is an interface to
[DieHarder](http://www.phy.duke.edu/~rgb/General/dieharder.php), the Diehard rewrite / extension / overhaul done by [Robert G. Brown](http://www.phy.duke.edu/~rgb/) (who kindly lists me as a coauthor based on my RDieHarder wrappers) with recent contribution by David Bauer.
Among other things, DieHarder includes the [NIST battery of tests](http://csrc.nist.gov/groups/ST/toolkit/rng/stats_tests.html) mentioned in Mark's post as well as some new ones. This is ongoing research and has been for a while. I gave a talk at useR! 2007 about RDieHarder which you can get from [here](http://dirk.eddelbuettel.com/presentations.html).
| null | CC BY-SA 2.5 | null | 2010-08-04T15:11:13.447 | 2010-08-04T15:11:13.447 | null | null | 334 | null |
1241 | 1 | 1264 | null | 12 | 5184 | When would you tend to use ROC curves over some other tests to determine the predictive ability of some measurement on an outcome?
When dealing with discrete outcomes (alive/dead, present/absent), what makes ROC curves more or less powerful than something like a chi-square?
| What do ROC curves tell you that traditional inference wouldn't? | CC BY-SA 2.5 | null | 2010-08-04T15:13:27.610 | 2011-10-25T18:17:27.177 | 2010-08-27T02:42:25.167 | 159 | 684 | [
"regression",
"roc"
] |
1243 | 2 | null | 1174 | 2 | null | I think the main point is the one sidmaestro raises...does the experimental setup or data generation mechanism support the premise that the data might arise from a Poisson distribution.
I'm not a big fan of testing for distributional assumptions, since those tests typically aren't very useful. What seems more useful to me is to make distributional or model assumptions that are flexible and reasonably robust to deviations from the model, typically for purposes of inference. In my experience, it is not that common to see mean=variance, so often the negative binomial model seems more appropriate, and includes the Poisson as a special case.
Another point that is important in distributional testing, if that's what you want to do, is to make sure that there aren't strata involved which would make your observed distribution a mixture of other distributions. Individual stratum-specific distributions might appear Poisson, but the observed mixture might not be. An analogous situation arises in regression, which only assumes that the conditional distribution of Y|X is normally distributed, not the distribution of Y itself.
| null | CC BY-SA 2.5 | null | 2010-08-04T15:24:11.960 | 2010-08-04T15:24:11.960 | null | null | 732 | null |
1244 | 2 | null | 1149 | 7 | null | The way I think about this really is in terms of information. Say each of $X_{1}$ and $X_{2}$ has some information about $Y$. The more correlated $X_{1}$ and $X_{2}$ are with each other, the more the information content about $Y$ from $X_{1}$ and $X_{2}$ are similar or overlapping, to the point that for perfectly correlated $X_{1}$ and $X_{2}$, it really is the same information content. If we now put $X_{1}$ and $X_{2}$ in the same (regression) model to explain $Y$, the model tries to "apportion" the information that ($X_{1}$,$X_{2}$) contains about $Y$ to each of $X_{1}$ and $X_{2}$, in a somewhat arbitrary manner. There is no really good way to apportion this, since any split of the information still leads to keeping the total information from ($X_{1}$,$X_{2}$) in the model (for perfectly correlated $X$'s, this really is a case of non-identifiability). This leads to unstable individual estimates for the individual coefficients of $X_{1}$ and $X_{2}$, though if you look at the predicted values $b_{1}X_{1}+b_{2}X_{2}$ over many runs and estimates of $b_{1}$ and $b_{2}$, these will be quite stable.
| null | CC BY-SA 3.0 | null | 2010-08-04T15:37:40.830 | 2012-08-20T18:30:25.480 | 2012-08-20T18:30:25.480 | 13091 | 732 | null |
1245 | 2 | null | 1223 | 14 | null | The problem is definitely hard.
Mechanical rules like +/- N1 times standard deviations, or +/- N2 times MAD, or +/- N3 times IQR, ... will fail because there are always some series that are different, for example:
- fixings like interbank rate may be constant for some time and then jump all of a sudden
- similarly for e.g. certain foreign exchanges coming off a peg
- certain instrument are implicitly spreads; these may be near zero for periods and all of a sudden jump manifold
Been there, done that, ... in a previous job. You could try to bracket each series using arbitrage relationships (e.g. assuming USD/EUR and EUR/JPY are good, you can work out bands around what USD/JPY should be; likewise for derivatives off an underlying, etc.).
Commercial data vendors expend some effort on this, and those of us who are clients of theirs know ... it still does not exclude errors.
| null | CC BY-SA 2.5 | null | 2010-08-04T15:51:53.803 | 2010-08-04T15:51:53.803 | null | null | 334 | null |
1246 | 2 | null | 1241 | 6 | null | An ROC curve is used when the predictor is continuous and the outcome is discrete, so a chi-square test would not be applicable. In fact, ROC analysis is in some sense equivalent to the Mann-Whitney test: the area under the curve is P(X>Y) which is the quantity being tested by the M-W test. However Mann-Whitney analysis does not emphasize selecting a cutoff, while that is the main point of the ROC analysis. Additionally, ROC curves are often used as just a visual display of the predictive ability of a covariate.
| null | CC BY-SA 2.5 | null | 2010-08-04T15:55:11.867 | 2010-08-04T15:55:11.867 | null | null | 279 | null |
1247 | 2 | null | 1241 | 6 | null | The shortest answer is that traditional tests of signal detection only give you a single point on the ROC (receiver operating characteristic) while the curve allows you to see responses through a range of values. It's possible that the criteria and d' shift as one shifts throughout the curve. It's like the difference between a t-test generated by selecting two classes of predictor variables and two regression lines generated by looking at parametric manipulations of each predictor variable.
| null | CC BY-SA 2.5 | null | 2010-08-04T15:56:39.150 | 2010-08-04T15:56:39.150 | null | null | 601 | null |
1248 | 2 | null | 1205 | 8 | null | Use regression methods. A simple linear regression with group coded as 0-1 (or 1-2, etc) is equivalent to a t-test, but regression software usually has the capability to incorporate weigths correctly.
| null | CC BY-SA 2.5 | null | 2010-08-04T15:57:30.210 | 2010-08-04T15:57:30.210 | null | null | 279 | null |
1249 | 1 | null | null | 7 | 575 | How to find a non-trivial upper bound on $E[\exp(Z^2)]$ when $Z \sim {\rm Bin}(n, n^{-\beta})$ with $\beta \in (0,1)$? A trivial bound is obtained for substituting $Z$ with $n$.
A background on this question. In the paper by Baraud, 2002 -- Non-asymptotic minimax rates of testing in signal detection, if one is to substitute the model in Eq. (1), by a random effects model, then the above quantity appears in the computation of a lower bound.
| Non-trivial bound for $E[\exp(Z^2)]$ when $Z \sim {\rm Bin}(n, n^{-\beta})$ with $\beta \in (0,1)$ | CC BY-SA 2.5 | null | 2010-08-04T16:04:56.337 | 2011-04-29T00:25:44.380 | 2011-04-29T00:25:44.380 | 3911 | 168 | [
"probability",
"binomial-distribution",
"mathematical-statistics"
] |
1251 | 2 | null | 1173 | 16 | null | Thanks for all you answers. For completeness I thought I should include what I usually do. I tend to do a combination of the suggestions given: dots, boxplots (when n is large), and se (or sd) ranges.
(Removed by moderator because the site hosting the image no longer appears to work correctly.)
From the dot plot, it is clear that the data is far more spread out than the "handle bar" plots suggest. In fact, there is a negative value in A3!
---
I've made this answer a CW so I don't gain rep
| null | CC BY-SA 3.0 | null | 2010-08-04T16:29:26.283 | 2015-01-20T22:43:57.627 | 2015-01-20T22:43:57.627 | 919 | 8 | null |
1252 | 1 | 1349 | null | 47 | 17341 | I was wondering if there is a statistical model "cheat sheet(s)" that lists any or more information:
- when to use the model
- when not to use the model
- required and optional inputs
- expected outputs
- has the model been tested in different fields (policy, bio, engineering, manufacturing, etc)?
- is it accepted in practice or research?
- expected variation / accuracy / precision
- caveats
- scalability
- deprecated model, avoid or don't use
- etc ..
I've seen hierarchies before on various websites, and some simplistic model cheat sheets in various textbooks; however, it would be nice if there were a larger one that encompasses various types of models based on different types of analysis and theories.
| Statistical models cheat sheet | CC BY-SA 3.0 | null | 2010-08-04T16:39:49.250 | 2021-05-10T18:32:04.360 | 2016-08-04T12:59:35.907 | 22468 | 59 | [
"references",
"modeling"
] |
1253 | 1 | null | null | 6 | 443 | I'm trying to compute item-item similarity using [Jaccard (specifically Tanimoto)](http://en.wikipedia.org/wiki/Jaccard_index#Tanimoto_coefficient_.28extended_Jaccard_coefficient.29) on a large list of data in the format
```
(userid, itemid)
```
An item is considered rated if I have a userid-itemid pair. I have about 800k users and 7900 items, and 3.57 million 'ratings'. I've restricted my data to users who have rated at least n items (usually 10). However, I'm wondering if I should place an upper limit on the number of items rated. When users rate 1000 or more items, each user generates 999000 pairwise combinations of items to use in my calculation, assuming the calculation
```
n! / (n-r)!
```
Adding this much input data slows the calculation down tremendously, even when the workload is distributed (using Hadoop). I'm thinking that the users who rate many, many items are not my core users and might be diluting my similarity calculations.
My gut tells me to limit the data to customers who have rated between 10 and 150-200 items, but I'm not sure if there is a better way to statistically determine these boundaries.
Here are some more details about my source data's distribution. Please feel free to enlighten me on any statistical terms that I might have butchered!
The distribution of my users' itemCounts:
![Distribution of the users' itemCounts](http://www.neilkodner.com/images/littlesnapper/itemsRated.png)
```
> summary(raw)
itemsRated
Min. : 1.000
1st Qu.: 1.000
Median : 1.000
Mean : 4.466
3rd Qu.: 3.000
Max. :2069.000
> sd(raw)
itemsRated
16.46169
```
If I limit my data to users who have rated at least 10 items:
```
> above10<-raw[raw$itemsRated>=10,]
> summary(above10)
Min. 1st Qu. Median Mean 3rd Qu. Max.
10.00 13.00 19.00 34.04 35.00 2069.00
> sd(above10)
[1] 48.64679
> length(above10)
[1] 64764
```
If I further limit my data to users who have rated between 10 and 150 items:
```
> above10less150<-above10[above10<=150]
> summary(above10less150)
Min. 1st Qu. Median Mean 3rd Qu. Max.
10.00 13.00 19.00 28.17 33.00 150.00
> sd(above10less150)
[1] 24.32098
> length(above10less150)
[1] 63080
```
Edit: I don't think this is an issue of outliers so much as the data being positively skewed.
| How to limit my input data for Jaccard item-item similarity calculation? | CC BY-SA 3.0 | null | 2010-08-04T16:53:04.197 | 2012-07-08T08:54:17.417 | 2012-07-08T08:54:17.417 | 930 | 738 | [
"distributions",
"data-mining"
] |
1254 | 2 | null | 1252 | 22 | null | Do you mean a statistical analysis decision tree? ([google search](http://www.google.com/images?um=1&hl=en&biw=1024&bih=628&tbs=isch:1&sa=1&q=statistical+analysis+decision+tree&aq=f&aqi=&aql=&oq=&gs_rfai=)), like this (only with extensions):
[](https://i.stack.imgur.com/VM7XX.gif)
(source: [processma.com](http://www.processma.com/resource/images/htest.gif))
?
BTW, notice that the chart is wrong in that the tests it offers for the median are not really tests of the median but of ranks (they would be tests of the median if the distribution were symmetrical).
| null | CC BY-SA 4.0 | null | 2010-08-04T17:30:03.033 | 2019-02-18T20:52:18.257 | 2019-02-18T20:52:18.257 | 253 | 253 | null |
1255 | 2 | null | 1253 | 2 | null | I'm confused: shouldn't you only need the 7900^2 item similarities, for which you use ratings from all users, which is still quite sparse?
UPDATE
I still think there's a more efficient way to do this, but maybe I'm just being dense. Specifically, consider item A and item B. For item A, generate a U-dimensional vector of 0's and 1's, where U is the number of users in your data set, and there's a 1 in dimension i if and only if user i rated item A. Do the same thing for item B. Then you can easily generate the AB, A and B terms for your equation from these vectors. Importantly, these vectors are very sparse, so they can produce a very small data set if encoded properly.
- Iterate over the item ID's to generate their cross product: (ItemAID, ItemBID)
- Map this pair to this n-tuple: (ItemAID, ItemBID, ItemAVector, ItemBVector)
- Reduce this n-tuple to your similarity measure: (ItemAID,ItemBID,SimilarityMetric)
If you set up a cache of the ItemXVector's at the start, this computation should be very fast.
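To make the similarity step concrete, here is a rough R sketch (the `ratings` data frame and the item ids are placeholders) of the Tanimoto/Jaccard value for one item pair, computed from binary user vectors:
```
# ratings is assumed to be a data frame with columns userid and itemid
item_vector <- function(ratings, item, users) {
  as.integer(users %in% ratings$userid[ratings$itemid == item])
}
tanimoto <- function(a, b) {
  ab <- sum(a * b)                       # users who rated both items
  ab / (sum(a) + sum(b) - ab)            # |A and B| / |A or B|
}
users <- unique(ratings$userid)
tanimoto(item_vector(ratings, "itemA", users),
         item_vector(ratings, "itemB", users))
```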
| null | CC BY-SA 2.5 | null | 2010-08-04T17:35:42.507 | 2010-08-04T18:26:22.043 | 2010-08-04T18:26:22.043 | 303 | 303 | null |
1256 | 1 | 1263 | null | 6 | 2895 | I have a colleague who calculates correlations in which one set of scores for a subject (e.g. 100 scores) is correlated with another set of scores for that same subject. The resulting correlation reflects the degree to which those sets of scores are associated for that subject. He needs to do this for N subjects. Consider the following dataset:
```
ncol <- 100
nrow <- 100
x <- matrix(rnorm(ncol*nrow),nrow,ncol)
y <- matrix(rnorm(ncol*nrow),nrow,ncol)
```
The correct output vector of correlations would be:
```
diag(cor(t(x),t(y)))
```
Is there a faster way to do this without using a multicore package in R?
| How can one speed up this correlation calculation in R without multicore? | CC BY-SA 2.5 | null | 2010-08-04T17:39:29.803 | 2010-08-05T10:00:43.613 | null | null | 196 | [
"r",
"correlation",
"efficiency"
] |
1257 | 1 | 1312 | null | 2 | 200 | Consider the following sequential, adaptive data generating process for $Y_1$, $Y_2$, $Y_3$. (By sequential I mean that we generate $Y_1$, $Y_2$, $Y_3$ in sequence and by adaptive I mean that $Y_3$ is generated depending on the observed values of $Y_1$ and $Y_2$.):
$Y_1 = X_1\ \beta + \epsilon_1$
$Y_2 = X_2\ \beta + \epsilon_2$
$Y_3 = X_3\ \beta + \epsilon_3$
$
X_3 =
\begin{cases}
X_{31} & \mbox{if }Y_1 Y_2 \gt 0 \\ X_{32} & \mbox{if }Y_1 Y_2 \le 0
\end{cases}$
where,
$X_1$, $X_2$, $X_{31}$ and $X_{32}$ are all 1 x 2 vectors.
$\beta$ is a 2 x 1 vector
$\epsilon_i \sim N(0,\sigma^2)$ for $i$ = 1, 2, 3
Suppose we observe the following sequence: {$Y_1 = y_1,\ Y_2 = y_2,\ X_3 = X_{31},\ Y_3 = y_3$} and wish to estimate the parameters $\beta$ and $\sigma$.
In order to write down the likelihood function note that we have four random variables: $Y_1$, $Y_2$, $X_3$ and $Y_3$. Therefore, the joint density of $Y_1$, $Y_2$, $X_3$ and $Y_3$ is given by:
$f(Y_1, Y_2, X_3, Y_3 |-) = f(Y_1|-)\ f(Y_2|-)\ [\ f(Y_3|X_{31},-)\ P(X_3=X_{31}|-) \ + \ f(Y_3|X_{32},-)\ P(X_3=X_{32}|-)\ ]$
(Note: I am suppressing the dependency of the density on $\beta$ and $\sigma$.)
The likelihood conditions on the observed data, and our sequence is such that $y_1 y_2 >0$. Therefore, we have:
$L(\beta,\ \sigma | X_1,\ X_2,\ X_{31}, y_1, y_2, y_3) = f(Y_1|-)\ f(Y_2|-)\ f(Y_3|X_{31},-)\ P(X_3=X_{31}) $
Is the above the correct likelihood function for this data generating process?
| What is the correct likelihood function for a sequential, adaptive data generation process? | CC BY-SA 2.5 | null | 2010-08-04T18:04:52.290 | 2017-04-26T18:56:56.637 | 2017-04-26T18:56:56.637 | 113090 | null | [
"time-series",
"likelihood"
] |
1258 | 2 | null | 1164 | 12 | null | As someone who has learned a little bit of statistics for my own research, I'll guess that the reasons are pedagogical and inertial.
I've observed within my own field that the order in which topics are taught reflects the history of the field. Those ideas which came first are taught first, and so on. For people who only dip into stats for cursory instruction, this means they'll learn classical stats first, and probably last. Then, even if they learn more, the classical stuff will stick with them better due to primacy effects.
Also, everyone knows what a two sample t-test is. Less than everyone knows what a Mann-Whitney or Wilcoxon Rank Sum test is. This means that I have to exert just a little bit of energy on explaining what my robust test is, versus not having to exert any with a classical test. Such conditions will obviously result in fewer people using robust methods than should.
| null | CC BY-SA 2.5 | null | 2010-08-04T18:12:23.507 | 2010-08-04T18:12:23.507 | null | null | 287 | null |
1259 | 2 | null | 726 | 11 | null | >
All information looks like noise until you break the code.
Hiro in Neal Stephenson's Snow Crash (1992)
| null | CC BY-SA 3.0 | null | 2010-08-04T18:29:57.000 | 2015-08-19T15:43:58.967 | 2015-08-19T15:43:58.967 | 49500 | 743 | null |
1260 | 2 | null | 1256 | 1 | null | It might be one of those cases where using a different [BLAS engine](http://cran.r-project.org/bin/windows/base/rw-FAQ.html#Can-I-use-a-fast-BLAS_003f) would help. But I am not sure of it - it needs testing (and depends on your machine)
| null | CC BY-SA 2.5 | null | 2010-08-04T18:35:22.340 | 2010-08-04T18:35:22.340 | null | null | 253 | null |
1261 | 1 | 1282 | null | 4 | 271 | Suppose I have a table of counts that look like this
```
A B C
Success 1261 230 3514
Failure 381 161 4012
```
I have a hypothesis that there is some probability $p$ such that $P(Success_A) = p^i$, $P(Success_B) = p^j$ and $P(Success_C) = p^k$.
Is there some way to produce estimates for $p$, $i$, $j$ and $k$? The idea I have is to iteratively try values for $p$ between 0 and 1, and values for $i$, $j$ and $k$ between 1 and 5. Given the column totals, I could produce expected values, then calculate $\chi^2$ or $G^2$.
This would produce a best fit, but it wouldn't give any confidence interval for any of the values. It's also not particularly computationally efficient.
As a side question, if I wanted to test the goodness of fit of a particular set of values for $i$, $j$ and $k$ (specifically 1, 2, and 3), once I've calculated $\chi^2$ or $G^2$, I'd want to calculate significance on the $\chi^2$ distribution with 1 degree of freedom, correct? This isn't a normal contingency table since the relationship of each column to the others is fixed to a single value. Given $p$, $i$, $j$ and $k$, filling in a single value in a cell fixes what the values of the other cells must be.
| Fitting a fixed, exponential relationship between categories with categorical data | CC BY-SA 2.5 | null | 2010-08-04T19:06:50.250 | 2010-08-05T10:19:24.293 | null | null | 287 | [
"modeling",
"categorical-data",
"chi-squared-test"
] |
1262 | 2 | null | 1261 | 2 | null | You could write the likelihood function like so:
$L(p,i,j,k|-) \propto (p^i)^{1261} (1-p^i)^{381} (p^j)^{230} (1-p^j)^{161} (p^k)^{3514} (1-p^k)^{4012}$
Maximize the above likelihood function to estimate your parameters. Constructing confidence intervals and hypothesis tests should be straightforward once you have the estimates.
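As a rough sketch of how that maximization could be done in R: the likelihood above leaves all four parameters free, but p and (i, j, k) are only identified up to a common rescaling of the exponents, so the sketch below fixes i = 1.
```
succ <- c(1261, 230, 3514)
fail <- c(381, 161, 4012)
negll <- function(par) {
  p  <- plogis(par[1])              # keep p in (0, 1)
  e  <- c(1, exp(par[2:3]))         # exponents (i = 1, j, k), all positive
  ps <- p^e                         # success probabilities for A, B, C
  -sum(succ * log(ps) + fail * log(1 - ps))
}
fit <- optim(c(0, 0, 0), negll, hessian = TRUE)
c(p = plogis(fit$par[1]), j = exp(fit$par[2]), k = exp(fit$par[3]))
sqrt(diag(solve(fit$hessian)))      # approximate SEs on the transformed scale
```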
| null | CC BY-SA 2.5 | null | 2010-08-04T19:19:23.767 | 2010-08-05T10:19:24.293 | 2010-08-05T10:19:24.293 | null | null | null |
1263 | 2 | null | 1256 | 3 | null | While making a call to `diag` you throw out a lot of information, so you can save time by simply not calculating it. Your code is equivalent to:
```
sapply(1:100,function(i) cor(x[i,],y[i,]))
```
Extended to reflect comments: this code will be slower for small matrices, since it does not use the full "vectorization power" of `cor`. So, if you'd like to make fast calculations on small matrices, write it as a C chunk. If you'd like to parallelize it (again, this will be profitable only for large matrices), you may use this code replacing `sapply` with `mclapply` or something like it.
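For large matrices, a fully vectorised version (a sketch, using only `rowMeans`/`rowSums`) avoids both the 100 x 100 cross-correlation matrix and the explicit loop:
```
row_cor <- function(x, y) {
  xc <- x - rowMeans(x)
  yc <- y - rowMeans(y)
  rowSums(xc * yc) / sqrt(rowSums(xc^2) * rowSums(yc^2))
}
all.equal(row_cor(x, y), diag(cor(t(x), t(y))))   # should be TRUE
```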
| null | CC BY-SA 2.5 | null | 2010-08-04T19:30:30.623 | 2010-08-05T10:00:43.613 | 2010-08-05T10:00:43.613 | null | null | null |
1264 | 2 | null | 1241 | 13 | null | The ROC function (it is not necessarily a curve) allows you to assess the discrimination ability provided by a specific statistical model (comprised of a predictor variable or a set of them).
A main consideration of ROCs is that model predictions do not stem only from the model's ability to discriminate/make predictions based on the evidence provided by the predictor variables. Also operating is a response criterion that defines how much evidence is necessary for the model to predict a response, and what the outcome of these responses is. The value that is established for the response criterion will greatly influence the model predictions, and ultimately the type of mistakes that it will make.
Consider a generic model with predictor variables and a response criterion. This model is trying to predict the Presence of X, by responding Yes or No.
So you have the following confusion matrix:
```
**X present X absent**
**Model Predicts X Present** Hit False Alarm
**Model Predicts X Absent** Miss Correct Rejection
```
In this matrix, you only need to consider the proportion of Hits and False Alarms (the others can be derived from these, given that they have to sum to 1). For each response criterion, you will have a different confusion matrix. The errors (Misses and False Alarms) are negatively related, which means that a response criterion that minimizes False Alarms maximizes Misses, and vice versa. The message is: there is no free lunch.
So, in order to understand how well the model discriminates cases/makes predictions, independently of the response criterion established, you plot the Hit and False Alarm rates produced across the range of possible response criteria.
What you get from this plot is the ROC function. The area under the function provides an unbiased and non-parametric measure of the discrimination ability of the model. This measure is very important because it is free of any confounds that could have been produced by the response criterion.
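A small simulated sketch in R of how the ROC function is traced out by sweeping the response criterion, and how the area under it can be obtained:
```
set.seed(1)
present  <- rep(c(1, 0), each = 200)                        # X present / absent
evidence <- rnorm(400, mean = ifelse(present == 1, 1, 0))   # model's evidence
criteria <- sort(unique(evidence), decreasing = TRUE)
hits <- sapply(criteria, function(thr) mean(evidence[present == 1] >= thr))
fas  <- sapply(criteria, function(thr) mean(evidence[present == 0] >= thr))
plot(fas, hits, type = "l", xlab = "False Alarm rate", ylab = "Hit rate")
abline(0, 1, lty = 2)                                       # chance performance
# area under the ROC function (trapezoidal rule)
sum(diff(c(0, fas)) * (head(c(0, hits), -1) + tail(c(0, hits), -1)) / 2)
```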
A second important aspect is that by analyzing the function, one can define which response criterion is better for your objectives: what types of errors you want to avoid, and which errors are OK. For instance, consider an HIV test: it is a test that looks for some sort of evidence (in this case antibodies) and makes a discrimination/prediction based on the comparison of the evidence with a response criterion. This response criterion is usually set very low, so that you minimize Misses. Of course this will result in more False Alarms, which have a cost, but a cost that is negligible when compared to the Misses.
With ROCs you can assess a model's discrimination ability independently of the response criterion, and also establish the optimal response criterion, given the needs and constraints of whatever it is you are measuring.
Tests like chi-square cannot help at all here, because even if you are testing whether the predictions are at chance level, many different Hit/False Alarm pairs are consistent with chance level.
Some frameworks, like signal detection theory, assume a priori that the evidence available for discrimination has a specific distribution (e.g., a normal or gamma distribution). When these assumptions hold (or are pretty close), some really nice measures are available that make your life easier.
Hope this helps to clarify the advantages of ROCs.
| null | CC BY-SA 2.5 | null | 2010-08-04T19:31:22.247 | 2010-08-04T19:31:22.247 | null | null | 447 | null |
1265 | 2 | null | 1256 | 2 | null | This really depends on the relative numbers of "scores" and "subjects". The method you use calculates lots of cross-correlations which are not required. However, if there are relatively few "subjects" relative to "scores", then this probably doesn't matter too much, and the method you suggest is probably as good as anything, as it uses a small number of efficient blas operations. However, if there are a large number of "subjects" relative to scores, then it may well be quicker to loop over the rows computing the correlation for each pair separately, using the code suggested by "mbq".
| null | CC BY-SA 2.5 | null | 2010-08-04T19:35:08.557 | 2010-08-04T19:35:08.557 | null | null | 643 | null |
1266 | 1 | 2338 | null | 16 | 17516 | The following question has been one of my holy grails for some time now; I hope someone might be able to offer good advice.
I wish to perform a non-parametric repeated measures multiway anova using R.
I have been doing some online searching and reading for some time, and so far was able to find solutions for only some of the cases: the Friedman test for one-way nonparametric repeated-measures ANOVA, ordinal regression with the {car} Anova function for multi-way nonparametric ANOVA, and so on. These partial solutions are NOT what I am looking for in this question thread. I have summarized my findings so far in a post I published some time ago (titled: [Repeated measures ANOVA with R (functions and tutorials)](http://www.r-statistics.com/2010/04/repeated-measures-anova-with-r-tutorials/), in case it would help anyone).
---
If what I read online is true, this task might be achieved using a mixed Ordinal Regression model (a.k.a: Proportional Odds Model).
I found two packages that seem relevant, but I couldn't find any vignette on the subject:
- http://cran.r-project.org/web/packages/repolr/
- http://cran.r-project.org/web/packages/ordinal/
So being new to the subject matter, I was hoping for some directions from people here.
Are there any tutorials/suggested-reading on the subject? Even better, can someone suggest a simple example code for how to run and analyse this in R (e.g: "non-parametric repeated measures multiway anova") ?
| A non-parametric repeated-measures multi-way Anova in R? | CC BY-SA 4.0 | null | 2010-08-04T20:01:07.787 | 2019-05-28T08:27:05.370 | 2019-05-28T08:27:05.370 | 11887 | 253 | [
"r",
"anova",
"repeated-measures",
"nonparametric",
"manova"
] |
1268 | 1 | 1393 | null | 14 | 6317 | I am using Singular Value Decomposition as a dimensionality reduction technique.
Given `N` vectors of dimension `D`, the idea is to represent the features in a transformed space of uncorrelated dimensions, which condenses most of the information of the data in the eigenvectors of this space in a decreasing order of importance.
Now I am trying to apply this procedure to time series data. The problem is that not all the sequences have the same length, thus I can't really build the `num-by-dim` matrix and apply SVD. My first thought was to pad the matrix with zeros by building a `num-by-maxDim` matrix and filling the empty spaces with zeros, but I'm not so sure if that is the correct way.
My question is: how do you apply the SVD approach of dimensionality reduction to time series of different lengths? Alternatively, are there any other similar methods of eigenspace representation usually used with time series?
Below is a piece of MATLAB code to illustrate the idea:
```
X = randn(100,4); % data matrix of size N-by-dim
X0 = bsxfun(@minus, X, mean(X)); % center the data (subtract column means)
[U S V] = svd(X0,0); % SVD
variances = diag(S).^2 / (size(X,1)-1); % variances along eigenvectors
KEEP = 2; % number of dimensions to keep
newX = U(:,1:KEEP)*S(1:KEEP,1:KEEP); % reduced and transformed data
```
(I am coding mostly in MATLAB, but I'm comfortable enough to read R/Python/.. as well)
| SVD dimensionality reduction for time series of different length | CC BY-SA 2.5 | null | 2010-08-04T20:51:04.053 | 2010-12-17T07:59:12.010 | 2010-12-17T07:59:12.010 | 223 | 170 | [
"time-series",
"machine-learning",
"pca",
"data-transformation",
"multivariate-analysis"
] |
1269 | 2 | null | 1268 | 2 | null | Filling with zero is bad. Try filling with resampling using observations from the past.
| null | CC BY-SA 2.5 | null | 2010-08-04T21:10:02.473 | 2010-08-04T21:10:02.473 | null | null | 223 | null |
1270 | 1 | 1276 | null | 5 | 834 | If I have a (financial) time series, and I sample it with two different periods, at 5 and at 60 minute intervals, can I create an exponential moving average on the 5 minute sampled data which is the same as an exponential moving average on the 60 minute sampled data?
Something like this:
e1 = EMA(a1) applied on sampled_data(60 min)
e2 = EMA(a2) applied on sampled_data(5 min)
a1 and a2 are the smoothing factors of the exponential moving average (the period)
Can I compute the a2 value for any a1 value, such that e1 = e2?
When I say that e1 = e2 I mean that if I graph the values of the EMA computed from 5 min data on top of the 60 min data chart and EMA, the two EMAs should be superposed. This means that in between two data points for EMA(60 min) there will be 60/5=12 data points for EMA(5 min).
| Exponential moving averages of a time series with varying sampling | CC BY-SA 2.5 | null | 2010-08-04T21:12:03.857 | 2010-08-04T23:36:10.817 | null | null | 749 | [
"time-series"
] |
1271 | 2 | null | 1266 | 6 | null | When in doubt, bootstrap! Really, I don't know of a canned procedure to handle such a scenario.
Bootstrapping is a generally applicable way of generating some error parameters from the data at hand. Rather than relying on the typical parametric assumptions, bootstrap procedures capitalize on the characteristics of the sample to generate an empirical distribution against which your sample estimates can be compared.
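As a rough skeleton of the idea in R (only the simplest paired case; the same subject-level resampling extends to more complex designs), assuming a wide data frame `d` with one row per subject and columns `condA` and `condB`:
```
set.seed(1)
boot_stat <- replicate(5000, {
  idx <- sample(nrow(d), replace = TRUE)       # resample whole subjects
  median(d$condB[idx] - d$condA[idx])          # statistic of interest
})
quantile(boot_stat, c(0.025, 0.975))           # percentile interval
```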
Google scholar is gold...it's been done before...at least once.
Lunneborg, Clifford E., and James P. Tousignant (1985). "Efron's Bootstrap with Application to the Repeated Measures Design." Multivariate Behavioral Research, 20(2), p. 161 (18 pp.).
| null | CC BY-SA 2.5 | null | 2010-08-04T21:16:37.787 | 2010-08-04T21:27:49.517 | 2010-08-04T21:27:49.517 | 485 | 485 | null |
1272 | 1 | null | null | 0 | 574 | I have a set of data which consists of many different types (measurable, categorical)
For example:
name measurable_attribute_1 categorical_attribute_1 measurable_attribute_2 categorical_attribute_2 ...
The number of attributes may grow quite quickly during my study: in my spreadsheet I can add new attributes as easily as new entries. I have about a hundred entries in this classification scheme and about 70 attributes so far, and I am at the beginning of my data collection.
I would like to perform statistical analysis of this data set. For example: what are the common features of the entries that have a similar categorical_attribute and a given range of values of measurable_attribute?
In short, I would like to identify relationships between attributes in order to create training images.
However, I am not sure how to organize the data prior to classification. Should I organize the data at all? (referring to this [question](https://stats.stackexchange.com/questions/47/clustering-of-large-heavy-tailed-dataset))
Also, I find it hard to group the entries into classes.
I do not want to introduce any bias obviously.
I am also quite new to statistical analysis (but eager to learn).
| How to organize a dataset with many attributes | CC BY-SA 2.5 | null | 2010-08-04T21:30:19.030 | 2010-08-05T16:33:34.410 | 2017-04-13T12:44:24.667 | -1 | null | [
"algorithms",
"categorical-data",
"classification"
] |
1273 | 2 | null | 1228 | 6 | null | Change the variable. Run a control chart for the "time between infections" variable. That way, instead of a discrete variable with a very small range of values, you have a continuous variable with an adequate range of values. If the interval between infections gets too small, the chart will give an "out of control" indication.
This procedure was recommended by Donald Wheeler in [Understanding Variation: The Key to Managing Chaos](http://rads.stackoverflow.com/amzn/click/0945320531).
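A minimal base-R sketch of such a chart (the gap data below are made up): an individuals (XmR) chart for the days between infections, with limits based on the average moving range.
```
gaps <- c(12, 5, 9, 22, 3, 15, 8, 30, 11, 6)   # days between successive infections
mr   <- abs(diff(gaps))                        # moving ranges
cl   <- mean(gaps)
ucl  <- cl + 2.66 * mean(mr)                   # 3-sigma limits via mean moving range
lcl  <- max(0, cl - 2.66 * mean(mr))
plot(gaps, type = "b", ylim = range(0, gaps, ucl),
     xlab = "Infection number", ylab = "Days since previous infection")
abline(h = c(lcl, cl, ucl), lty = c(2, 1, 2))
```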
| null | CC BY-SA 2.5 | null | 2010-08-04T21:54:28.677 | 2010-08-04T23:48:08.457 | 2010-08-04T23:48:08.457 | 666 | 666 | null |
1274 | 1 | 1277 | null | 6 | 768 | [0,1,0,2,4,1,0,1,5,1,4,2,1,3,1,1,1,1,0,1,1,0,2,0,2,0,0,1,0,1,2,2,1,2,4,1,4,1,0,0,4,1,0,1,0,1,1,2,1,1,0,0]
What is the best way to convince myself that these data are correlated? that no univariate discrete distribution would approximate them well? that a time series model is necessary to better estimate the future distribution of counts?
| Best way to show these or similar count data are not independent? | CC BY-SA 2.5 | null | 2010-08-04T22:17:59.627 | 2010-08-07T08:40:07.287 | null | null | 273 | [
"correlation",
"count-data"
] |
1275 | 2 | null | 1268 | 1 | null | You could estimate univariate time series models for the 'short' series and extrapolate them into the future to 'align' all the series.
| null | CC BY-SA 2.5 | null | 2010-08-04T22:46:30.457 | 2010-08-04T22:46:30.457 | null | null | null | null |
1276 | 2 | null | 1270 | 5 | null | The trivial and non-helpful answer is "Yes, downsample your 5-minute data to 60-minute data."
More practically, without throwing out 90% of your data, the answer is generally "No, unless you get extremely lucky with sampling your five-minute data." You should get an answer that's close (and under most noise models I suspect they'll be equal in expectation) just by rescaling your smooth by a factor of 12, but any source of randomness in your data is going to cause some difference in the two curves on a point-by-point basis.
| null | CC BY-SA 2.5 | null | 2010-08-04T23:36:10.817 | 2010-08-04T23:36:10.817 | null | null | 61 | null |
1277 | 2 | null | 1274 | 5 | null | You could just plot the ACF and check if the first coefficient is inside the critical values. The critical values are ok for non-Gaussian time series (at least asymptotically).
Alternatively, fit a simple count time series model such as the INAR(1) and see if the coefficient is significantly different from zero.
| null | CC BY-SA 2.5 | null | 2010-08-04T23:46:28.333 | 2010-08-04T23:46:28.333 | null | null | 159 | null |
1278 | 1 | null | null | 4 | 7892 | (This is part-2 of my long question, you can have a look at part-1 [here](https://stats.stackexchange.com/questions/1099/how-to-handle-count-data-categorical-data-when-it-has-been-converted-to-a-rate))
I am going to do a quasi-experiment: measuring the baseline of a sample (actually not quite a sample, but a ward with a high patient turnover rate), then doing an intervention, and measuring the variables (i.e. infection rate) again.
I googled a bit and found that this is something called a single-case experiment, and it was said that single-case experiments don't have very solid statistics because, without a control, you can't draw solid conclusions about causality.
I googled a bit more and found that I can compare the incidence rates (or call them infection rates) by computing something like an "incidence rate difference" (IRD) or "incidence rate ratio" (IRR). (I found it [here](http://www.statsdirect.com/help/rates/incidence_rates.htm).)
What is the difference between the IRD and a t-test? And is there a corresponding statistical test for the IRR?
But most importantly, is it appropriate for me to use this test (does it have a name?) for a single-case experiment? The patients in the ward keep changing, and this is what I am worried about.
| Is calculating Incidence Rate Difference/Ratio appropriate for single case experimental design? | CC BY-SA 4.0 | null | 2010-08-05T01:24:22.440 | 2020-03-06T12:38:57.593 | 2020-03-06T12:38:57.593 | 11887 | 588 | [
"epidemiology",
"incidence-rate-ratio"
] |
1279 | 2 | null | 485 | 3 | null | [UCCS mathematics video archive](http://www.uccs.edu/~math/vidarchive.html) has
archived videos from a range of courses in mathematics. Several subjects called Mathematical Statistics I and Mathematical Statistics II are available. The main site requires a free registration to access.
Slightly more accessible are the videos for a subset of the courses on the [UCCS MathOnline YouTube page](http://www.youtube.com/user/UCCSMathOnline/videos?view=1). Two instances of this are as follows.
The lecture style often involves Dr. Morrow working through problems on the whiteboard.
## Linear Models
>
Taught by Dr. Greg Morrow, Math 483 from UCCS. Methods and results of
linear algebra are developed to formulate and study a fundamental and
widely applied area of statistics. Topics include generalized
inverses, multivariate normal distribution and the general linear
model. Applications focus on model building, design models, and
computing methods. The Statistical Analysis System (software) is
introduced as a tool for doing computations.
[Course info](http://cmes.uccs.edu/Summer2006/Math483/courseinfo.php): Seems to
use Introduction to Linear Regression by Montgomery, Peck, and Vining.
## Mathematical Statistics 1
>
Greg Morrow's Math 481 course from Math Online at the University of
Colorado in Colorado Springs
Course Description: Exponential, Beta, Gamma, Student, Fisher and Chi-square
distributions are covered in this course, along with joint and conditional
distributions, moment generating techniques, transformations of random
variables and vectors.
- Course info
- Syllabus from one year
- Mathematical Statistics and Data Analysis, 3rd ed., by John A. Rice.
| null | CC BY-SA 3.0 | null | 2010-08-05T02:36:08.460 | 2012-08-21T07:26:40.843 | 2012-08-21T07:26:40.843 | 183 | 183 | null |
1280 | 2 | null | 726 | 140 | null | >
87% of statistics are made up on the spot
-Unknown

[Dilbert.com](http://dilbert.com/strips/comic/2008-05-08/)
| null | CC BY-SA 2.5 | null | 2010-08-05T02:42:11.413 | 2011-01-14T09:38:16.137 | 2011-01-14T09:38:16.137 | 442 | 553 | null |
1281 | 2 | null | 1272 | 0 | null | You could look at Exploratory Factor Analysis. It will tell you which attributes are the most similar to each other.
| null | CC BY-SA 2.5 | null | 2010-08-05T02:57:24.893 | 2010-08-05T02:57:24.893 | null | null | 74 | null |
1282 | 2 | null | 1261 | 5 | null | Following up on my comment, this question would be very simple if i, j, and k were not restricted to be integers. The reason is as follows: pA, pB, and pC denote the observed probability of success in the three groups. Then let p=pA, i=1, j=log(pB)/log(pA), and k=log(pC)/log(pA). These will easily satisfy the required conditions (except for j and k being between 1 and 5, but that looks like an ad-hoc simplifying assumption instead of a real constraint).
In fact, if you do this with the given data, you get j=2.009 and k=2.884 which I think prompted the original question.
It is even possible to get standard errors for these quantities (or rather their logarithms). Note that if pB = p^j, then log(-log(pB)) = log(j) + log(-log(p)), so one can use a binomial regression with a complementary log-log link for the number of failures (the complementary log-log function is log(-log(1-x)) and this link is built in for most statistical software such as R or SAS). Then one could check whether the 95% CIs include integers, or perhaps run a likelihood-ratio (or other) test comparing the fit of the unrestricted model to one where j and k are rounded to the nearest integer.
The above assumes that i=1. Something similar could probably be done for other integer i's (probably by having an offset of log(i) in the model - I have not thought it through).
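A sketch of that model in R, using the counts from the question (group A is the reference level, so i = 1 and the group coefficients estimate log(j) and log(k)):
```
counts <- data.frame(group   = c("A", "B", "C"),
                     success = c(1261, 230, 3514),
                     failure = c(381, 161, 4012))
# model P(failure); cloglog(P(fail)) = log(-log(p^j)) = log(j) + log(-log(p))
fit <- glm(cbind(failure, success) ~ group,
           family = binomial(link = "cloglog"), data = counts)
exp(coef(fit)[-1])        # estimates of j and k (about 2.01 and 2.88)
exp(confint(fit)[-1, ])   # 95% CIs; check whether they include integers
```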
In the end, I want to note that you should make sure that your hypothesis is meaningful by itself and did not come from playing with the data. Otherwise any statistical test is biased, because you picked a form of the null hypothesis (out of all the possible weird forms that you could have imagined) that is likely to fit.
| null | CC BY-SA 2.5 | null | 2010-08-05T03:06:36.013 | 2010-08-05T03:06:36.013 | null | null | 279 | null |
1285 | 2 | null | 1272 | 1 | null | One option would involve using optimal scaling principal components analysis.
The approach allows you to state your measurement assumptions about each variable (e.g., nominal, ordinal, numeric).
I've used it in SPSS: see the Categories Add-On module (i.e., Analyze - Dimension Reduction - Optimal Scaling).
I'm not sure, but the homals package in R may also implement the procedure.
A quick Google ( [http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=optimal+scaling+principal+components](http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=optimal+scaling+principal+components))
revealed this reference:
[http://takane.brinkster.net/Yoshio/p009.pdf](http://takane.brinkster.net/Yoshio/p009.pdf)
| null | CC BY-SA 2.5 | null | 2010-08-05T03:55:13.897 | 2010-08-05T04:14:12.710 | 2010-08-05T04:14:12.710 | 183 | 183 | null |
1286 | 1 | 1335 | null | 8 | 9247 | Sometimes I want to do an exact test by examining all possible combinations of the data to build an empirical distribution against which I can test my observed differences between means. To find the possible combinations I'd typically use the combn function. The choose function can show me how many possible combinations there are. It is very easy for the number of combinations to get so large that it is not possible to store the result of the combn function, e.g. combn(28,14) requires a 2.1 Gb vector. So I tried writing an object that stepped through the same logic as the combn function in order to provide the values off an imaginary "stack" one at a time. However, this method (as I instantiated it) is easily 50 times slower than combn at reasonable combination sizes, leading me to think it will also be painfully slow for larger combination sizes.
Is there a better algorithm for doing this sort of thing than the algorithm used in combn? Specifically, is there a way to generate and pull the Nth possible combination without calculating through all previous combinations?
| How can I obtain some of all possible combinations in R? | CC BY-SA 2.5 | null | 2010-08-05T04:54:46.270 | 2016-08-25T20:43:28.717 | 2016-08-25T20:43:28.717 | 101426 | 196 | [
"r",
"nonparametric",
"combinatorics"
] |
1287 | 2 | null | 1252 | 9 | null | Reading "Using Multivariate Statistics (4th Edition) Barbara G. Tabachnick"
I found these decision trees, organized by the major research question, and I think they are quite useful. Following this link you'll find an extract of the book:
[http://www.psychwiki.com/images/d/d8/TF2.pdf](http://www.psychwiki.com/images/d/d8/TF2.pdf)
see pages 29 to 31
| null | CC BY-SA 2.5 | null | 2010-08-05T04:58:19.663 | 2010-08-05T04:58:19.663 | null | null | 10229 | null |
1288 | 2 | null | 165 | 13 | null | So there are plenty of answers here paraphrased from statistics/probability textbooks, Wikipedia, etc. I believe we have "laypersons" where I work; I think they are in the marketing department. If I ever have to explain anything technical to them, I apply the rule "show don't tell." With that rule in mind, I would probably show them something like this.
The idea here is to try to code an algorithm that I can teach to spell--not by learning all of the hundreds (thousands?) of rules like When adding an ending to a word that ends with a silent e, drop the final e if the ending begins with a vowel. One reason that won't work is I don't know those rules (i'm not even sure the one I just recited is correct). Instead I am going to teach it to spell by showing it a bunch of correctly spelled words and letting it extract the rules from those words, which is more or less the essence of Machine Learning, regardless of the algorithm--pattern extraction and pattern recognition.
The success criterion is correctly spelling a word the algorithm has never seen before (i realize that can happen by pure chance, but that won't occur to the marketing guys, so i'll ignore--plus I am going to have the algorithm attempt to spell not one word, but a lot, so it's not likely we'll be deceived by a few lucky guesses).
An hour or so ago, I downloaded (as a plain text file) from the excellent Project Gutenberg Site, the Herman Hesse novel Siddhartha. I'll use the words in this novel to teach the algorithm how to spell.
So I coded the algorithm below that scanned this novel, three letters at a time (each word has one additional character at the end, which is 'whitespace', or the end of the word). Three-letter sequences can tell you a lot--for instance, the letter 'q' is nearly always followed by 'u'; the sequence 'ty' usually occurs at the end of a word; z rarely does, and so forth. (Note: I could just as easily have fed it entire words in order to train it to speak in complete sentences--exactly the same idea, just a few tweaks to the code.)
None of this involves MCMC, though; that happens after training, when we give the algorithm a few random letters (as a seed) and it begins forming 'words'. How does the algorithm build words? Imagine that it has the block 'qua'; what letter does it add next? During training, the algorithm constructed a massive *letter-sequence frequency matrix* from all of the thousands of words in the novel. Somewhere in that matrix is the three-letter block 'qua' and the frequencies for the characters that could follow the sequence. The algorithm selects one of the letters that could possibly follow, with probability based on those frequencies. So the letter that the algorithm selects next depends on--and solely on--the last three letters in its word-construction queue.
So that's a Markov Chain Monte Carlo algorithm.
I think perhaps the best way to illustrate how it works is to show the results based on different levels of training. Training level is varied by changing the number of passes the algorithm makes through the novel--the more passes through, the greater the fidelity of its letter-sequence frequency matrices. Below are the results--in the form of 100-character strings output by the algorithm--after training on the novel 'Siddhartha'.
---
A single pass through the novel, Siddhartha:
>
then whoicks ger wiff all mothany stand ar you livid theartim mudded
sullintionexpraid his sible his
(Straight away, it's learned to speak almost perfect Welsh; I hadn't expected that.)
---
After two passes through the novel:
>
the ack wor prenskinith show wass an twor seened th notheady theatin land
rhatingle was the ov there
---
After 10 passes:
>
despite but the should pray with ack now have water her dog lever pain feet
each not the weak memory
---
And here's the code (in Python, i'm nearly certain that this could be done in R using an MCMC package, of which there are several, in just 3-4 lines)
```
import re
from random import sample

def create_words_string(raw_string):
    """ in case I wanted to use training data in sentence/paragraph form;
        this function will parse a raw text string into a nice list of words;
        filtering: keep only words having more than 3 letters and remove
        punctuation, etc.
    """
    pattern = r'\b[A-Za-z]{3,}\b'
    pat_obj = re.compile(pattern)
    words = [word.lower() for word in pat_obj.findall(raw_string)]
    # drop words that look like roman numerals (chapter headings etc.)
    pattern = r'\b[vixlm]+\b'
    pat_obj = re.compile(pattern)
    return " ".join([word for word in words if not pat_obj.search(word)])

def create_markov_dict(words_string):
    # initialize variables: start from three word-boundary (space) characters
    wb1, wb2, wb3 = " ", " ", " "
    l1, l2, l3 = wb1, wb2, wb3
    dx = {}
    # for every three-letter window, record which character follows it
    for ch in words_string:
        dx.setdefault((l1, l2, l3), []).append(ch)
        l1, l2, l3 = l2, l3, ch
    return dx

def generate_newtext(markov_dict):
    simulated_text = ""
    l1, l2, l3 = " ", " ", " "
    for c in range(100):
        # sample the next letter given only the last three characters
        next_letter = sample(markov_dict[(l1, l2, l3)], 1)[0]
        simulated_text += next_letter
        l1, l2, l3 = l2, l3, next_letter
    return simulated_text

if __name__ == "__main__":
    # read the Project Gutenberg plain-text file downloaded earlier
    # (the file name here is just a placeholder)
    with open("siddhartha.txt") as f:
        raw_str = f.read()
    # n = number of passes through the training text
    n = 1
    q1 = create_words_string(n * raw_str)
    q2 = create_markov_dict(q1)
    q3 = generate_newtext(q2)
    print(q3)
```
| null | CC BY-SA 3.0 | null | 2010-08-05T09:15:34.467 | 2013-10-02T12:17:36.667 | 2013-10-02T12:17:36.667 | 24521 | 438 | null |
1289 | 1 | null | null | 9 | 3209 | I am having difficulty selecting the right way to visualize data. Let's say we have bookstores that sell books, and every book has at least one category.
For a bookstore, if we count all the categories of books, we acquire a histogram that shows the number of books that fall into a specific category for that bookstore.
I want to visualize bookstore behavior: I want to see whether a bookstore favors some categories over others. I don't want to see whether, in aggregate, bookstores favor sci-fi; I want to see whether each bookstore treats every category equally or not.
I have ~1M bookstores.
I have thought of 4 methods:
- Sample the data, showing only 500 bookstores' histograms. Show them on 5 separate pages using a 10x10 grid. Example of a 4x4 grid:
- Same as #1. But this time sort x axis values according to their count desc, so if there is a favoring it will be seen easily.
- Imagine putting the histograms in #2 together like a deck and showing them in 3D. Something like this:
- Instead of using a third axis, use color to represent the counts, i.e. a heatmap (2D histogram):
If bookstores generally prefer some categories to others, it will show up as a nice gradient from left to right.
Do you have any other visualization ideas/tools to represent multiple histograms?
| Visualizing multiple "histograms" (bar-charts) | CC BY-SA 4.0 | null | 2010-08-05T10:03:34.997 | 2019-08-25T23:50:08.677 | 2020-06-11T14:32:37.003 | -1 | 760 | [
"pca",
"data-visualization",
"histogram",
"barplot"
] |
1290 | 2 | null | 1286 | 1 | null | Generating combinations is pretty easy, see for instance [this](http://compprog.wordpress.com/2007/10/17/generating-combinations-1/); write this code in R and then process each combination as it is generated, one at a time.
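For example, here is a minimal R version of the usual lexicographic "next combination" step, which lets you process one combination at a time without ever storing them all:
```
next_combination <- function(comb, n) {
  k <- length(comb)
  i <- k
  while (i >= 1 && comb[i] == n - k + i) i <- i - 1
  if (i < 1) return(NULL)                       # comb was the last combination
  comb[i] <- comb[i] + 1
  if (i < k) comb[(i + 1):k] <- comb[i] + seq_len(k - i)
  comb
}
# iterate over all choose(5, 3) combinations; the same loop works for
# combn(28, 14)-sized problems without ever storing the full set
comb <- 1:3
while (!is.null(comb)) {
  # ... compute your statistic for this combination here ...
  comb <- next_combination(comb, 5)
}
```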
| null | CC BY-SA 2.5 | null | 2010-08-05T10:17:47.227 | 2010-08-05T10:17:47.227 | null | null | null | null |
1291 | 2 | null | 1289 | 12 | null | As you have found out there are no easy answers to your question!
I presume that you are interested in finding strange or different bookstores? If this is the case then you could try things like [PCA](https://stats.stackexchange.com/questions/53/pca-on-correlation-or-covariance/78#78) (see the wikipedia [cluster analysis](http://en.wikipedia.org/wiki/Cluster_analysis) page for more details).
To give you an idea, consider this example. You have 26 bookshops (with names A, B,..Z). All bookshops are similar, except:
- Shop Z sells only a few History books.
- Shops O-Y sell more romance books than average.
A principal components plot highlights these shops for further investigation.
Here's some sample R code:
```
> d = data.frame(Romance = rpois(26, 50), Horror = rpois(26, 100),
Science = rpois(26, 75), History = rpois(26, 125))
> rownames(d) = LETTERS
#Alter a few shops
> d[15:25,][1] = rpois(11,150)
> d[26,][4] = rpois(1, 10)
#look at the data
> head(d, 2)
Romance Horror Science History
A 36 107 62 139
B 47 93 64 118
> books.PC.cov = prcomp(d)
> books.scores.cov = predict(books.PC.cov)
# Plot of PC1 vs PC2
> plot(books.scores.cov[,1],books.scores.cov[,2],
xlab="PC 1",ylab="PC 2", pch=NA)
> text(books.scores.cov[,1],books.scores.cov[,2],labels=LETTERS)
```
This gives the following plot:
Notice that:
- Shop Z is an outlying point.
- The other shops form two distinct groups.
Other possibilities
You could also look at [GGobi](http://www.ggobi.org/); I've never used it, but it looks interesting.
| null | CC BY-SA 2.5 | null | 2010-08-05T10:31:54.357 | 2010-08-05T12:58:40.073 | 2017-04-13T12:44:33.237 | -1 | 8 | null |
1292 | 1 | 1297 | null | 36 | 47849 | [Decision trees](http://en.wikipedia.org/wiki/Decision_tree) seem to be a very understandable machine learning method.
Once created, a tree can be easily inspected by a human, which is a great advantage in some applications.
What are the practical weak sides of Decision Trees?
| What is the weak side of decision trees? | CC BY-SA 2.5 | null | 2010-08-05T10:42:44.327 | 2019-04-05T16:31:35.390 | null | null | 217 | [
"machine-learning",
"nonparametric",
"cart"
] |
1293 | 1 | null | null | 2 | 740 | My girlfriend (B.B.A.) is really interested in actuarial science. She's looking at teaching herself. She's good with basic math (Calculus 1 and 2) and stats.
What are some of the essential sources she needs to read in order to learn and excel in the field?
| Essential reads for people interested in Actuary and Actuarial science | CC BY-SA 2.5 | null | 2010-08-05T10:47:12.837 | 2010-08-07T17:47:26.950 | 2010-08-07T17:47:26.950 | null | 59 | [
"references"
] |
1294 | 2 | null | 1289 | 3 | null | I would suggest something that hasn't got a defined name (probably "parallel plot") and looks like this:

Basically you plot all counts for all bookstores as points over the categories listed on the x axis and connect the results from each bookstore with a line. This may be too tangled for 1M lines, though. The concept comes from GGobi, which was already mentioned by csgillespie.
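A minimal base-R sketch of such a plot with simulated counts (one line per bookstore):
```
set.seed(1)
counts <- matrix(rpois(50 * 6, lambda = 30), nrow = 50,
                 dimnames = list(NULL, c("Romance", "Horror", "SciFi",
                                         "History", "Science", "Other")))
matplot(t(counts), type = "l", lty = 1, col = rgb(0, 0, 0, 0.2),
        xaxt = "n", xlab = "Category", ylab = "Number of books")
axis(1, at = 1:ncol(counts), labels = colnames(counts))
```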
| null | CC BY-SA 4.0 | null | 2010-08-05T11:22:54.870 | 2019-01-07T11:15:31.740 | 2019-01-07T11:15:31.740 | 79696 | null | null |
1295 | 2 | null | 1292 | 27 | null | One disadvantage is that all terms are assumed to interact. That is, you can't have two explanatory variables that behave independently. Every variable in the tree is forced to interact with every variable further up the tree. This is extremely inefficient if there are variables that have no or weak interactions.
| null | CC BY-SA 2.5 | null | 2010-08-05T11:58:42.133 | 2010-08-05T11:58:42.133 | null | null | 159 | null |
1296 | 1 | 1353 | null | 10 | 4144 | I have detector which will detect an event with some probability p. If the detector says that an event occured, then that is always the case, so there are not false-positives. After I run it for some time, I get k events detected. I would like to calculate what the total number of events that occured was, detected or otherwise, with some confidence, say 95%.
So for example, let's say I get 13 events detected. I would like to be able to calculate that there were between 13 and 19 events with 95% confidence based on p.
Here's what I've tried so far:
The probability of detecting k events if there were n total is:
`binomial(n, k) * p^k * (1 - p)^(n - k)`
The sum of that over n from k to infinity is:
`1/p`
Which means, that the probability of there being n events total is:
`f(n) = binomial(n, k) * p^(k + 1) * (1 - p)^(n - k)`
So if I want to be 95% sure I should find the first partial sum `f(k) + f(k+1) + f(k+2) ... + f(k+m)` which is at least 0.95 and the answer is `[k, k+m]`. Is this the correct approach? Also is there a closed formula for the answer?
| How to find a confidence interval for the total number of events | CC BY-SA 2.5 | null | 2010-08-05T11:59:25.080 | 2011-04-29T00:26:13.040 | 2011-04-29T00:26:13.040 | 3911 | 762 | [
"probability",
"confidence-interval"
] |
1297 | 2 | null | 1292 | 40 | null | Here are a couple I can think of:
- They can be extremely sensitive to small perturbations in the data: a slight change can result in a drastically different tree.
- They can easily overfit. This can be negated by validation methods and pruning, but this is a grey area.
- They can have problems out-of-sample prediction (this is related to them being non-smooth).
Some of these are related to the problem of [multicollinearity](http://en.wikipedia.org/wiki/Multicollinearity): when two variables both explain the same thing, a decision tree will greedily choose the best one, whereas many other methods will use them both. Ensemble methods such as random forests can negate this to a certain extent, but you lose the ease of understanding.
However the biggest problem, from my point of view at least, is the lack of a principled probabilistic framework. Many other methods have things like confidence intervals, posterior distributions etc., which give us some idea of how good a model is. A decision tree is ultimately an ad hoc heuristic, which can still be very useful (they are excellent for finding the sources of bugs in data processing), but there is the danger of people treating the output as "the" correct model (from my experience, this happens a lot in marketing).
| null | CC BY-SA 2.5 | null | 2010-08-05T12:08:23.523 | 2010-08-05T13:31:59.900 | 2010-08-05T13:31:59.900 | 495 | 495 | null |
1298 | 2 | null | 1296 | 2 | null | I think you misunderstood the purpose of confidence intervals. Confidence intervals allow you to assess where the true value of the parameter is located. So, in your case, you can construct a confidence interval for $p$. It does not make sense to construct an interval for the data.
Having said that, once you have an estimate of $p$ you can calculate the probability that you will observe different realizations such as 14, 15 etc using the binomial pdf.
| null | CC BY-SA 2.5 | null | 2010-08-05T12:12:16.980 | 2010-08-05T12:12:16.980 | null | null | null | null |
1299 | 2 | null | 1292 | 12 | null | My answer is directed to CART (the C 4.5/C 5 implementations) though i don't think are limited to it. My guess is that this is what the OP has in mind--it's usually what someone means when they say "Decision Tree."
Limitations of Decision Trees:
---
Low-Performance
By 'performance' i don't mean resolution, but execution speed. The reason why it's poor is that you need to 'redraw the tree' every time you wish to update your CART model--data classified by an already-trained Tree, which you then want to add to the Tree (i.e., use as a training data point), requires that you start over--training instances cannot be added incrementally, as they can for most other supervised learning algorithms. Perhaps the best way to state this is that Decision Trees cannot be trained in online mode, only in batch mode. Obviously you won't notice this limitation if you don't update your classifier, but then i would expect to see a drop in resolution.
This is significant because for Multi-Layer Perceptrons for instance, once it's trained, then it can begin classifying data; that data can also be used to 'tune' the already-trained classifier, though with Decision Trees, you need to retrain with the entire data set (original data used in training plus any new instances).
---
Poor Resolution on Data With Complex Relationships Among the Variables
Decision Trees classify by step-wise assessment of a data point of unknown class, one node at a time, starting at the root node and ending with a terminal node. And at each node, only two outcomes are possible (left or right branch), hence there are some variable relationships that Decision Trees just can't learn.
---
Practically Limited to Classification
Decision Trees work best when they are trained to assign a data point to a class--preferably one of only a few possible classes. I don't believe i have ever had any success using a Decision Tree in regression mode (i.e., continuous output, such as price, or expected lifetime revenue). This is not a formal or inherent limitation but a practical one. Most of the time, Decision Trees are used for prediction of factors or discrete outcomes.
---
Poor Resolution With Continuous Expectation Variables
Again, in principle, it's ok to have independent variables like "download time" or "number of days since previous online purchase"--just change your splitting criterion to variance (it's usually Information Entropy or Gini Impurity for discrete variables)--but in my experience Decision Trees rarely work well in these instances. Exceptions are cases like "student's age", which looks continuous but in practice has quite a small range of values (particularly if ages are reported as integers).
| null | CC BY-SA 2.5 | null | 2010-08-05T12:47:45.727 | 2010-08-05T12:47:45.727 | null | null | 438 | null |
1302 | 2 | null | 1293 | 2 | null | sources:
1) statistics up to ANOVA
2) probability up to the Central Limit Theorem
3) Basic programming and data analysis skills
4) Familiarity with business economics
5) Financial mathematics
This is a very basic smattering of things that she needs to read.
| null | CC BY-SA 2.5 | null | 2010-08-05T13:04:27.533 | 2010-08-05T13:04:27.533 | null | null | null | null |
1304 | 2 | null | 1296 | 3 | null | If you measure $k$ events and know your detection efficiency is $p$ you can automatically correct your measured result up to the "true" count $k_\mathrm{true} = k/p$.
Your question is then about finding the range of $k_\mathrm{true}$ where 95%
of the observations will fall. You can use the [Feldman-Cousins method](http://arxiv.org/abs/physics/9711021) to estimate this interval. If you have access to [ROOT](http://root.cern.ch/drupal/) there is a class to do this calculation for you.
You would calculate the upper and lower limits with Feldman-Cousins from the
uncorrected number of events $k$ and then scale them up to 100% with $1/p$.
This way the actual number of measurements determines your uncertainty, not
some scaled number that wasn't measured.
```
{
gSystem->Load("libPhysics");
const double lvl = 0.95;
TFeldmanCousins f(lvl);
const double p = 0.95;
const double k = 13;
const double k_true = k/p;
const double k_bg = 0;
const double upper = f.CalculateUpperLimit(k, k_bg) / p;  // scale limits by 1/p
const double lower = f.GetLowerLimit() / p;
std::cout << "["
          << lower << "..."
          << k_true << "..."
          << upper
          << "]" << std::endl;
}
```
| null | CC BY-SA 2.5 | null | 2010-08-05T13:35:59.983 | 2010-08-05T16:34:40.807 | 2010-08-05T16:34:40.807 | 56 | 56 | null |
1306 | 2 | null | 856 | 10 | null | I have a suspicion that 'exact' tests and sampling weights are essentially incompatible concepts. I checked in Stata, which has good facilities for sample surveys and reasonable ones for exact tests, and its 8 possible test statistics for a crosstab with sample weights don't include any 'exact' tests such as Fisher's.
The relevant Stata manual entry (for svy: tabulate twoway) advises using its default test in all cases. This default method is based on the usual Pearson's chi-squared statistic. To quote:
"To account for the survey design, the statistic is turned into an F statistic with noninteger degrees of freedom by using a second-order Rao and Scott (1981, 1984) correction".
Refs:
- Rao, J. N. K., and A. J. Scott. 1981. The analysis of categorical data from complex sample surveys: Chi-squared tests for goodness of fit and independence in two-way tables. Journal of the American Statistical Association 76:221–230.
- Rao, J. N. K., and A. J. Scott. 1984. On chi-squared tests for multiway contingency tables with cell proportions estimated from survey data. Annals of Statistics 12: 46–60.
| null | CC BY-SA 2.5 | null | 2010-08-05T13:56:03.143 | 2010-08-05T13:56:03.143 | null | null | 449 | null |
1307 | 2 | null | 1293 | 4 | null | And for bedtime reading, "Against the Gods: The Remarkable Story of Risk" by Peter L. Bernstein. She'll find out how Lloyd's of London started, among many other interesting bits. Highly recommended and only $13 from Amazon.
| null | CC BY-SA 2.5 | null | 2010-08-05T13:57:42.843 | 2010-08-05T13:57:42.843 | null | null | 247 | null |