Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1415 | 2 | null | 973 | 2 | null | Prostate (gene expression array)
- k=2
- n=48+52
- p=6033
Available via (among others) the R package [spls](http://cran.r-project.org/web/packages/spls/)
name of the dataset: prostate
error rate = 3/102 (see [here](http://www.stat.wisc.edu/%7Ekeles/Papers/C_SPLS.pdf)); I also think there are papers which report a 1/102 error rate. I would say this is an easy test case.
| null | CC BY-SA 2.5 | null | 2010-08-08T19:02:55.487 | 2010-11-09T08:29:57.363 | 2010-11-09T08:29:57.363 | 223 | 223 | null |
1416 | 2 | null | 1413 | 3 | null | Douglas Bates addressed this on the sig-ME list a while back:
- using glmer with user-defined link function
I'm not aware of significant changes since, but his recommendation (using a quasi family with specified link and variance) might be of use. Hopefully this addresses your first and third questions. I'm not aware of other packages - sorry.
| null | CC BY-SA 2.5 | null | 2010-08-08T19:26:40.020 | 2010-08-08T19:26:40.020 | null | null | 251 | null |
1417 | 2 | null | 485 | 6 | null | This site from the École normale supérieure de Paris contains a lot of very interesting videos
[http://www.diffusion.ens.fr/index.php?res=themes&idtheme=30](http://www.diffusion.ens.fr/index.php?res=themes&idtheme=30)
I greatly encourage you to visit this site!
Among other things, you will find there all the video presentations from the conference "Mathematical Foundations of Learning Theory" that was held in 2006.
| null | CC BY-SA 2.5 | null | 2010-08-08T19:59:34.217 | 2010-08-08T19:59:34.217 | null | null | 223 | null |
1418 | 2 | null | 1195 | 6 | null | The following paper describes a couple of approaches for imputing right-censored data in the same domain (i.e. topcoded wage data). They use a truncated normal distribution and describe a single imputation model assuming homoscedasticity, and a multiple imputation model assuming heteroscedasticity. A second paper of interest, which assumes a generalized beta distribution, might be closer to what you want.
- Multiple Imputation Approaches for Right-Censored Wages in the German IAB-Employment Register
- Measuring Inequality Using Censored Data: A Multiple Imputation Approach
| null | CC BY-SA 2.5 | null | 2010-08-08T19:59:40.893 | 2010-08-08T19:59:40.893 | null | null | 251 | null |
1419 | 2 | null | 485 | 7 | null | The folks at SLAC put videos of their lecture series online. Given that their audience is mostly physicists, they tend to be fairly mathematical.
- SLUO Lecture Series (see the "Stat" links)
| null | CC BY-SA 2.5 | null | 2010-08-08T20:22:58.713 | 2010-08-08T20:22:58.713 | null | null | 251 | null |
1420 | 2 | null | 1202 | 4 | null | I think relative distribution methods are a good candidate for the question you pose.
Since you're comparing data based on binning, this is very similar to the method of constructing a probability-probability plot. Taking it a step further, you can actually construct a relative CDF/PDF for two distributions based on empirical data. From there, you can apply graphical and statistical techniques to explore the relative differences and perform inference on the relative distribution.
Handcock and Morris have an [interesting book](http://csde.washington.edu/~handcock/RelDist/) devoted to this topic and there's the [reldist package](http://cran.r-project.org/web/packages/reldist/index.html) in R available for applying these methods. The following might be worth skimming to see if this is of any interest:
- Relative Distribution Methods
- Applying Relative Distribution Methods in R
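If it helps, here is a minimal usage sketch (mine, not part of the answer above); the `reldist()` call and its `y`/`yo` argument names are assumptions based on the package documentation, and the two simulated samples are purely illustrative:
```
# Hypothetical example: relative distribution of a comparison sample vs. a reference
library(reldist)
set.seed(1)
reference  <- rnorm(500, mean = 0,   sd = 1)    # reference group
comparison <- rnorm(500, mean = 0.3, sd = 1.2)  # comparison group
# Plots the relative density of `comparison` with respect to `reference`
reldist(y = comparison, yo = reference)
```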
| null | CC BY-SA 2.5 | null | 2010-08-08T20:38:09.240 | 2010-08-08T20:38:09.240 | null | null | 251 | null |
1421 | 2 | null | 1412 | 1 | null | I don't see how the question in your example is sensible. The slope of the values is the slope of the values. Using a logistic link function, you get the slope of the logit of the values. There's no under- or overestimating.
The more interesting case in your (our) field is that of interactions in accuracy. You might want to read [Dixon (2008)](http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WK4-4RH8SFS-1&_user=10&_coverDate=11%2F30%2F2008&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=1424726112&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=47db7bad266e5fff07e6fc8c0ceeac42) as one of the more recent papers on this problem. It also addresses many of your fundamental concerns.
In general, in cognitive and perceptual psychology a logit link function is better than any other standard link. If you want to know the true effects of your independent variables (i.e. whether they interact or are additive, whether they are linear or curvilinear), then you would need to know the true underlying model better. Since you probably don't know that, logistic regression is probably better than almost anything else, and vastly better than just analyzing mean accuracy scores.
The primary consequence of doing this is contradicting other findings where mean accuracy scores were put into an ANOVA or regression.
*EDIT*
Now that you've added some data it looks like you're trying to model a floor effect that shouldn't be there. At some point the task becomes impossible. It looks like that already happened at your level 4 difficulty. Modelling level 5 is useless. What if you had a level 6 or 7 difficulty?
It looks like a logistic will fit points 1-4 pretty well.
And, you should be looking at residuals to assess fit, not just the curves overlaid.
| null | CC BY-SA 3.0 | null | 2010-08-08T21:57:57.290 | 2017-12-21T02:09:02.167 | 2017-12-21T02:09:02.167 | 601 | 601 | null |
1423 | 2 | null | 726 | 119 | null | >
Prediction is very difficult, especially about the future.
-- Niels Bohr
| null | CC BY-SA 2.5 | null | 2010-08-08T23:05:17.700 | 2010-08-08T23:05:17.700 | null | null | 749 | null |
1424 | 1 | 1425 | null | 6 | 1553 | This definitely sounds like homework, but I assure you that it's not. You're probably familiar with the [Risk](http://en.wikipedia.org/wiki/Risk_%28game%29) game. Now, a friend of mine rolled 3 aces in one hand. I reckon that the probability of such an event is
$$C(n,k) = \frac{(n-1+k)!}{(n-1)!k!}$$
so that's $8!/(5!*3!) = 56$, so the probability is $1/56$. Am I correct?
Problem starts here: he rolled 3 aces in his 2nd attack, so he asked me: now, tell me 'bout the odds, you do statistics! And I must admit that I'm still stunned by his tremendous attacks (he lost 3 tanks both times).
Is the probability of such two consecutive events $1/56^{2}$?
| The "Risk" game dice problem | CC BY-SA 3.0 | null | 2010-08-09T00:00:13.560 | 2016-08-17T17:37:49.293 | 2016-08-17T17:37:49.293 | 24669 | 1356 | [
"probability",
"games",
"dice"
] |
1425 | 2 | null | 1424 | 13 | null | You should not do a calculation of probability for an event deemed surprising post hoc as if it were an event specified before it was rolled (observed).
It's very difficult to do a proper calculation of post hoc probability, because what other events would have been deemed at least as surprising depends on what the context is, and also on the person doing the deeming.
Would three ones twice in a row at an earlier or later stage of the game have been as surprising? Would you rolling three ones have been as surprising as him rolling them? Would three sixes be as surprising as three ones? And so on... What is the totality of all the events that would have been surprising enough to generate a post like this one?
To take an extreme example, imagine a wheelbarrow-full-of-dice (ten thousand, say), each with a tiny individualized serial number. We tip the barrow out and exclaim "Whoah, what are the chances of getting this?" -- and if we work it out, $P(d_1=3)\cdot P(d_2=6)\cdots P(d_{10000}=2)$ is $6^{-10000}$. Astronomically small. If we repeat the experiment, we get an equally unusual event. In fact, every single time we do it, we get an event so astronomically unbelievably small that we could almost [power a starship](https://en.wikipedia.org/wiki/Infinite_Improbability_Drive) with it. The problem is that the calculation is meaningless, because we specified the event post-hoc.
(Even if it were legitimate to do the calculation as if it were a pre-specified event, it looks like you have that calculation incorrect. Specifically, the probability (for an event specified before the roll) of taking three dice and rolling $(1,1,1)$ is $(1/6)^3 = 1/216$, because the three rolls are independent, not $1/56$, and the probability of doing it twice out of a total of two rolls is the square of that - but neither the condition of being pre-specified nor the "out of two rolls" actually hold)
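As a side note (added here, not part of the original answer), a quick Monte Carlo check of the pre-specified-event probability $(1/6)^3 = 1/216$ in R:
```
# Simulate many throws of three dice and count how often all three show a one
set.seed(123)
n <- 1e6
rolls <- matrix(sample(1:6, 3 * n, replace = TRUE), ncol = 3)
mean(rowSums(rolls == 1) == 3)   # should be close to 1/216 ~ 0.00463
```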
| null | CC BY-SA 3.0 | null | 2010-08-09T00:37:45.090 | 2013-12-20T07:46:37.133 | 2013-12-20T07:46:37.133 | 805 | 805 | null |
1426 | 2 | null | 726 | 64 | null | >
"It's easy to lie with statistics; it is easier to lie without them."
-- Frederick Mosteller
| null | CC BY-SA 3.0 | null | 2010-08-09T01:44:41.210 | 2015-03-06T04:35:44.930 | 2015-03-06T04:35:44.930 | 59319 | 319 | null |
1427 | 1 | 1428 | null | 2 | 705 | I'm exploring the use of changepoint detection or other methods (am slowly becoming aware of wavelet transformation, etc. but have tons to learn in this area) to identify key shifts in health care performance patterns over time. However, many of the metrics I'm seeking to analyze (e.g., health care quality metrics) are both generally calculated and more reasonably interpreted as rolling-12 month aggregates. For example, it's important to me to track on a monthly basis the proportion of patients who are up-to-date on a certain lab test within the 12-month period ending that month, but I'm not particularly concerned with how many of these tests occurred specifically in August. So it's something sort of like having a moving average to work with as a raw starting point.
That said, there are also reasons why this rolling-12 aggregation does not result in a stationary process either.
My thought was to account for the data structure and seasonality by modeling it as a function of a 1-month and 12-month lag. Is this the proper way to think about this data? Is there anything else or a better approach I should be doing/considering? Again, my general goal is surveillance of the general trend as well as breaks -- so if it affects the answer, I'm looking at this in the context of using the R strucchange package, CUSUM statistics, or some other approach to identify good and bad anomalies.
Thanks,
Shelby
| Time-series data pre-aggregated into non-stationary rolling 12-month periods: are there special considerations for modeling? | CC BY-SA 2.5 | null | 2010-08-09T03:57:50.097 | 2011-04-14T14:58:25.980 | 2011-04-14T14:58:25.980 | 919 | 394 | [
"time-series",
"modeling",
"change-point"
] |
1428 | 2 | null | 1427 | 5 | null | The 12-month rolling aggregation will remove seasonality which makes the task easier. For non-seasonal time series, the methods in the [strucchange](http://cran.r-project.org/package=strucchange) package for R are excellent.
For seasonal time series, you might look at the BFAST (Breaks For Additive Seasonal and Trend) method which is implemented in the [bfast](http://cran.r-project.org/package=bfast) package for R. This method involves applying strucchange to the trend and seasonal components obtained from a decomposition of the data (applied iteratively to allow for the breaks discovered). You could apply bfast on the original data (without the 12-month aggregation).
Neither of these methods requires stationarity.
I would think that a direct modelling approach such as the one you propose would be less capable of finding general breaks due to the additional assumptions being made.
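For illustration only (this sketch is mine, not the answerer's), here is a minimal strucchange run on a simulated non-seasonal series with a single mean shift:
```
library(strucchange)
set.seed(42)
# Simulated non-seasonal series with one mean shift at t = 60
y <- ts(c(rnorm(60, mean = 0), rnorm(40, mean = 2)))
bp <- breakpoints(y ~ 1)   # structural change in the mean only
summary(bp)                # number and location of the estimated breaks
breakdates(bp)             # estimated break date(s)
confint(bp)                # confidence interval around the break date(s)
```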
| null | CC BY-SA 2.5 | null | 2010-08-09T04:05:27.520 | 2010-08-09T04:13:46.500 | 2010-08-09T04:13:46.500 | 159 | 159 | null |
1429 | 2 | null | 1268 | 1 | null | I'm somewhat confused by your example code, as it seems you drop the `V` variable from the computation of `newX`. Are you looking to model `X` as a reduced-rank product, or are you interested in a reduced column space of `X`? In the latter case, I think an EM-PCA approach would work. You can find MATLAB code under the title [Probabilistic PCA with missing values](http://lear.inrialpes.fr/~verbeek/software.php).
hth,
| null | CC BY-SA 2.5 | null | 2010-08-09T04:22:42.533 | 2010-08-09T04:22:42.533 | null | null | 795 | null |
1430 | 1 | 1431 | null | 11 | 4927 | Does anybody have a nice example of a stochastic process that is 2nd-order stationary, but is not strictly stationary?
| Example of a process that is 2nd order stationary but not strictly stationary | CC BY-SA 3.0 | null | 2010-08-09T06:50:42.803 | 2015-09-29T23:19:18.367 | 2015-09-29T23:19:18.367 | 22228 | 352 | [
"time-series",
"stochastic-processes",
"stationarity"
] |
1431 | 2 | null | 1430 | 7 | null | Take any process $(X_t)_t$ with independent components that has constant first and second moments and a varying third moment.
It is second-order stationary because the first two moments are constant and $E[ X_t X_{t+h} ]=0$ for $h \neq 0$, and it is not strictly stationary
because $P( X_t \geq x_t, X_{t+1} \geq x_{t+1})$ depends upon $t$
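A small simulation sketch of this construction (added for illustration; not part of the original answer): alternate between a standard normal and a standardized exponential, which share their first two moments but differ in the third.
```
# Odd-indexed X_t ~ N(0,1); even-indexed X_t ~ Exp(1) - 1.
# Both have mean 0 and variance 1 (and independence gives zero autocovariance),
# but the third moments differ, so the process is not strictly stationary.
set.seed(1)
n <- 1e5
x_odd  <- rnorm(n)        # skewness 0
x_even <- rexp(n) - 1     # skewness 2
round(c(mean(x_odd),  var(x_odd),  mean(x_odd^3)),  2)
round(c(mean(x_even), var(x_even), mean(x_even^3)), 2)  # third moment differs
```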
| null | CC BY-SA 2.5 | null | 2010-08-09T07:05:20.680 | 2010-08-09T08:29:24.173 | 2010-08-09T08:29:24.173 | 223 | 223 | null |
1432 | 1 | 1435 | null | 92 | 96453 | In answering [this](https://stats.stackexchange.com/questions/1412/consequences-of-an-improper-link-function-in-n-alternative-forced-choice-procedur) question John Christie suggested that the fit of logistic regression models should be assessed by evaluating the residuals. I'm familiar with how to interpret residuals in OLS, they are in the same scale as the DV and very clearly the difference between y and the y predicted by the model. However for logistic regression, in the past I've typically just examined estimates of model fit, e.g. AIC, because I wasn't sure what a residual would mean for a logistic regression. After looking into [R's help files](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/glm.summaries.html) a little bit I see that in R there are five types of glm residuals available, `c("deviance", "pearson", "working","response", "partial")`. The help file refers to:
- Davison, A. C. and Snell, E. J. (1991) Residuals and diagnostics. In: Statistical Theory and Modelling. In Honour of Sir David Cox, FRS, eds. Hinkley, D. V., Reid, N. and Snell, E. J., Chapman & Hall.
I do not have a copy of that. Is there a short way to describe how to interpret each of these types? In a logistic context will sum of squared residuals provide a meaningful measure of model fit or is one better off with an Information Criterion?
| What do the residuals in a logistic regression mean? | CC BY-SA 4.0 | null | 2010-08-09T07:32:32.767 | 2020-09-04T09:39:43.583 | 2018-08-01T15:02:20.500 | 7290 | 196 | [
"r",
"logistic",
"generalized-linear-model",
"residuals",
"aic"
] |
1433 | 2 | null | 6 | 69 | null | What enforces more separation than there should be is each discipline's lexicon.
There are many instances where ML uses one term and Statistics uses a different term--but both refer to the same thing--fine, you would expect that, and it doesn't cause any permanent confusion (e.g., features/attributes versus explanatory variables, or neural network/MLP versus projection-pursuit).
What's much more troublesome is that both disciplines use the same term to refer to completely different concepts.
A few examples:
Kernel Function
In ML, kernel functions are used in classifiers (e.g., SVM) and of course in kernel machines. The term refers to a simple function (cosine, sigmoidal, RBF, polynomial) used to map non-linearly separable data to a new input space, so that the data is linearly separable in this new input space (versus using a non-linear model to begin with).
In statistics, a kernel function is a weighting function used in density estimation to smooth the density curve.
Regression
In ML, predictive algorithms (or implementations of those algorithms) that return class labels--"classifiers"--are sometimes referred to as machines, e.g., support vector machine, kernel machine. The counterpart to machines are regressors, which return a score (continuous variable)--e.g., support vector regression.
Rarely do the algorithms have different names based on mode--e.g., an MLP is the term used whether it returns a class label or a continuous variable.
In Statistics, if you are attempting to build a model based on empirical data to predict some response variable based on one or more explanatory variables, then you are doing regression analysis. It doesn't matter whether the output is a continuous variable or a class label (e.g., logistic regression). So, for instance, least-squares regression refers to a model that returns a continuous value; logistic regression, on the other hand, returns a probability estimate which is then discretized to a class label.
Bias
In ML, the bias term in the algorithm is conceptually identical to the intercept term used by statisticians in regression modeling.
In Statistics, bias is non-random error--i.e., some phenomenon influenced the entire data set in the same direction, which in turn means that this kind of error cannot be removed by resampling or increasing the sample size.
| null | CC BY-SA 3.0 | null | 2010-08-09T10:12:35.817 | 2016-08-16T18:12:10.847 | 2016-08-16T18:12:10.847 | 438 | 438 | null |
1434 | 2 | null | 421 | 4 | null | "[How to Tell the Liars from the Statisticians](http://rads.stackoverflow.com/amzn/click/0824718178)" by Hooke. I am fond of its way of explaining the concepts of statistics to laypersons.
As for explaining the motivations of statisticians, "The Lady Tasting Tea" is good reading.
| null | CC BY-SA 2.5 | null | 2010-08-09T10:23:41.673 | 2010-08-11T08:37:11.667 | 2010-08-11T08:37:11.667 | 509 | 830 | null |
1435 | 2 | null | 1432 | 47 | null | The easiest residuals to understand are the deviance residuals as when squared these sum to -2 times the log-likelihood. In its simplest terms logistic regression can be understood in terms of fitting the function $p = \text{logit}^{-1}(X\beta)$ for known $X$ in such a way as to minimise the total deviance, which is the sum of squared deviance residuals of all the data points.
The (squared) deviance of each data point is equal to (-2 times) the logarithm of the difference between its predicted probability $\text{logit}^{-1}(X\beta)$ and the complement of its actual value (1 for a control; a 0 for a case) in absolute terms. A perfect fit of a point (which never occurs) gives a deviance of zero as log(1) is zero. A poorly fitting point has a large residual deviance as -2 times the log of a very small value is a large number.
Doing logistic regression is akin to finding a beta value such that the sum of squared deviance residuals is minimised.
This can be illustrated with a plot, but I don't know how to upload one.
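As a small added check (not from the original answer), the following sketch fits a logistic regression to simulated binary data and verifies that the squared deviance residuals sum to the model deviance, i.e. $-2$ times the log-likelihood:
```
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(0.5 + x))     # simulated binary outcome
fit  <- glm(y ~ x, family = binomial)
dres <- residuals(fit, type = "deviance")
c(sum(dres^2), deviance(fit), -2 * as.numeric(logLik(fit)))  # all three agree
```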
| null | CC BY-SA 3.0 | null | 2010-08-09T10:26:35.913 | 2014-03-28T13:31:33.143 | 2014-03-28T13:31:33.143 | 22311 | 521 | null |
1436 | 2 | null | 1337 | 72 | null | A mathematician, a physicist and a statistician went hunting for deer. When they chanced upon one buck lounging about, the mathematician fired first, missing the buck's nose by a few inches. The physicist then tried his hand, and missed the tail by a wee bit. The statistician started jumping up and down saying "We got him! We got him!"
| null | CC BY-SA 2.5 | null | 2010-08-09T10:29:38.373 | 2010-08-09T10:29:38.373 | null | null | 830 | null |
1437 | 1 | null | null | 1 | 1113 | Do you think it can be used instead of k means? I obtained a correlation with the first 2 components as they carry over 90% of the weight. Would you agree on the technique?
| Can Principal Component Analysis be used alone to infer major patterns within data instead of k means clustering? | CC BY-SA 2.5 | null | 2010-08-09T10:37:59.673 | 2010-08-09T12:45:05.153 | 2010-08-09T10:59:13.217 | 8 | null | [
"pca"
] |
1438 | 2 | null | 1437 | 2 | null | I think it depends on your data set and what you want to do with it. If you look at my answer to this [question](https://stats.stackexchange.com/questions/1289/visualizing-multiple-histograms/1291#1291), you will see that it indicates groups/differences. However, it certainly doesn't prove differences - it just gives you an idea of where differences may lie.
How long would it take you to run a quick k-means analysis on your data? When I have a large multivariate data set, I try many different techniques to get a handle on it.
| null | CC BY-SA 2.5 | null | 2010-08-09T10:47:18.197 | 2010-08-09T10:47:18.197 | 2017-04-13T12:44:44.530 | -1 | 8 | null |
1439 | 2 | null | 103 | 2 | null | I can't pick just one :)
Check out this great blog post by flowingdata: [37 Data-ish blogs you should know about](http://flowingdata.com/2009/05/06/37-data-ish-blogs-you-should-know-about/)
| null | CC BY-SA 2.5 | null | 2010-08-09T11:14:34.503 | 2010-08-09T11:14:34.503 | null | null | 665 | null |
1440 | 2 | null | 1437 | 0 | null | It's possible that my background in psychological research is disguising some understanding of the broader application of PCA and K-means, but I'd say the following:
- PCA is used to reduce a set of variables to a smaller number of dimensions
- k-means is used to group cases.
For example, take a study of 1000 participants who have completed 10 different ability tests (verbal, mathematics, spatial, etc.).
I could use PCA (or factor analysis) to group tests in order to identify the main underlying dimensions.
I could use k-means to identify types of cases.
As a side point, I often find one approach is [theoretically more interesting](http://jeromyanglim.blogspot.com/2009/09/cluster-analysis-and-single-dominant.html). If the cluster analysis is just grouping in terms of high and low on the first principal component, then I find PCA to be the more meaningful analysis, and the whole concept of clusters as an arbitrary dichotomisation (or categorisation) of a continuous variable.
| null | CC BY-SA 2.5 | null | 2010-08-09T12:45:05.153 | 2010-08-09T12:45:05.153 | null | null | 183 | null |
1441 | 1 | 337429 | null | 10 | 1983 | This question follows from [my previous question](https://stats.stackexchange.com/questions/1430/example-of-a-2nd-order-stationary-but-not-strictly-stationary-process), where Robin answered the question in the case of weak stationary processes. Here, I am asking a similar question for (strong?) stationary processes. I'll define what this means (the definition can also be found [here](http://en.wikipedia.org/wiki/Stationary_process)).
>
Let $X(t)$ be a stochastic process. We say that $X(t)$ is Nth-order stationary if, for every $t_1, t_2, \dots, t_N$ we have that the joint cumulative distribution functions
$$F_{X(t_1),X(t_2),\dots,X(t_N)} = F_{X(t_1 + \tau),X(t_2 + \tau),\dots,X(t_N + \tau)}$$
for all $\tau$.
This is quite a strong condition, it says that the joint statistics don't change at all as time shifts.
For example, a 1st order stationary process is such that $F_{X(t_1)} = F_{X(t_2)}$ for all $t_1$ and $t_2$. That is, the $X(t)$ are all identically distributed. It is quite easy to see that a 1st order stationary process need not be 2nd order stationary. Simply assign a correlation structure to say $X(t)$, $X(t+1)$, $X(t+2)$ that does not correspond to a (symmetric) Toeplitz matrix. That is, in vector form, the covariance matrix of $[ X(t), X(t+1), X(t+2)]$ could be given as
$$\left[\begin{array}{ccc}
\sigma^2 & a & b \\
a & \sigma^2 & c \\
b & c & \sigma^2
\end{array}\right]$$
for $a,b,c$ distinct. This is now not 2nd order stationary because $E[X(t)X(t+1)] = a$ and, time shifting by 1 we have $E[X(t+1)X(t+2)] = c \neq a$.
In a similar way (presumably), a process that is 1st and 2nd order stationary need not be 3rd order stationary and this leads to my question:
>
Does somebody have a nice example of a stochastic process that is both 1st and 2nd order stationary, but not 3rd order stationary?
| Example of a stochastic process that is 1st and 2nd order stationary, but not strictly stationary (Round 2) | CC BY-SA 3.0 | null | 2010-08-09T12:50:55.720 | 2018-03-29T15:33:21.537 | 2018-03-29T15:33:21.537 | 161461 | 352 | [
"time-series",
"stochastic-processes",
"stationarity",
"example"
] |
1442 | 2 | null | 173 | 15 | null | To assess the historical trend, I'd use a gam with trend and seasonal components. For example
```
require(mgcv)
require(forecast)
x <- ts(rpois(100, 1 + sin(seq(0, 3*pi, l = 100))), f = 12)  # simulated monthly counts
tt <- 1:100                                                  # time index for the trend
season <- seasonaldummy(x)                                   # seasonal dummy variables
fit <- gam(x ~ s(tt, k = 5) + season, family = "poisson")    # smooth trend + seasonal effects
plot(fit)                                                    # plot the estimated smooth trend
```
Then `summary(fit)` will give you a test of significance of the change in trend and the plot will give you some confidence intervals. The assumptions here are that the observations are independent and the conditional distribution is Poisson. Because the mean is allowed to change smoothly over time, these are not particularly strong assumptions.
To forecast is more difficult as you need to project the trend into the future. If you are willing to accept a linear extrapolation of the trend at the end of the data (which is certainly dodgy but probably ok for a few months), then use
```
fcast <- predict(fit,se.fit=TRUE,
newdata=list(tt=101:112,season=seasonaldummyf(x,h=12)))
```
To see the forecasts on the same graph:
```
plot(x,xlim=c(0,10.5))
lines(ts(exp(fcast$fit),f=12,s=112/12),col=2)
lines(ts(exp(fcast$fit-2*fcast$se),f=12,s=112/12),col=2,lty=2)
lines(ts(exp(fcast$fit+2*fcast$se),f=12,s=112/12),col=2,lty=2)
```
You can spot the unusual months by looking for outliers in the (deviance) residuals of the fit.
| null | CC BY-SA 2.5 | null | 2010-08-09T13:36:43.330 | 2010-08-13T13:01:56.437 | 2010-08-13T13:01:56.437 | 159 | 159 | null |
1443 | 2 | null | 6 | 15 | null | Ideally one should have a thorough knowledge of both statistics and machine learning before attempting to answer this question. I am very much a neophyte to ML, so forgive me if what I say is naive.
I have limited experience in SVMs and regression trees. What strikes me as lacking in ML from a stats point of view is a well developed concept of inference.
Inference in ML seems to boil down almost exclusively to predictive accuracy, as measured by (for example) mean classification error (MCE), balanced error rate (BER) or similar. ML is in the very good habit of dividing data randomly (usually 2:1) into a training set and a test set. Models are fit using the training set and performance (MCE, BER, etc.) is assessed using the test set. This is an excellent practice and is only slowly making its way into mainstream statistics.
ML also makes heavy use of resampling methods (especially cross-validation), whose origins appear to be in statistics.
However, ML seems to lack a fully developed concept of inference - beyond predictive accuracy. This has two results.
1) There does not seem to be an appreciation that any prediction (parameter estimation etc.) is subject to random error and perhaps systematic error (bias). Statisticians will accept that this is an inevitable part of prediction and will try to estimate the error. Statistical techniques will try to find an estimate that has minimum bias and random error. Their techniques are usually driven by a model of the data process, but not always (e.g. the bootstrap).
2) There does not seem to be a deep understanding in ML of the limits of applying a model to new data, i.e. to a new sample from the same population (in spite of what I said earlier about the training/test data set approach). Various statistical techniques, among them cross-validation and penalty terms applied to likelihood-based methods, guide statisticians in the trade-off between parsimony and model complexity. Such guidelines in ML seem much more ad hoc.
I've seen several papers in ML where cross-validation is used to optimise the fitting of many models on a training dataset - producing better and better fits as the model complexity increases. There appears to be little appreciation that the tiny gains in accuracy are not worth the extra complexity, and this naturally leads to over-fitting. Then all these optimised models are applied to the test set as a check on predictive performance and to prevent over-fitting. Two things have been forgotten (above). First, the predictive performance will have a stochastic component. Secondly, multiple tests against a test set will again result in over-fitting. The "best" model will be chosen by the ML practitioner without a full appreciation that he/she has cherry-picked from one realisation of many possible outcomes of this experiment. The best of several tested models will almost certainly not reflect its true performance on new data.
Anyway, my 2 cents' worth. We have much to learn from each other.
| null | CC BY-SA 2.5 | null | 2010-08-09T13:51:29.007 | 2010-08-09T13:56:39.603 | 2010-08-09T13:56:39.603 | 521 | 521 | null |
1444 | 1 | 1446 | null | 236 | 196934 | If I have highly skewed positive data I often take logs. But what should I do with highly skewed non-negative data that include zeros? I have seen two transformations used:
- $\log(x+1)$ which has the neat feature that 0 maps to 0.
- $\log(x+c)$ where c is either estimated or set to be some very small positive value.
Are there any other approaches? Are there any good reasons to prefer one approach over the others?
| How should I transform non-negative data including zeros? | CC BY-SA 3.0 | null | 2010-08-09T13:57:51.753 | 2022-10-20T16:28:12.010 | 2015-08-11T08:45:22.567 | 49647 | 159 | [
"data-transformation",
"large-data"
] |
1445 | 2 | null | 1444 | 11 | null | I assume you have continuous data.
If the data include zeros, this means you have a spike at zero which may be due to some particular aspect of your data. It appears, for example, in wind energy: wind below 2 m/s produces zero power (this is called cut-in) and wind over (roughly) 25 m/s also produces zero power (for safety reasons; this is called cut-off). While the distribution of produced wind energy seems continuous, there is a spike at zero.
My solution: In this case, I suggest treating the zeros separately, by working with a mixture of the spike at zero and the model you planned to use for the part of the distribution that is continuous (with respect to Lebesgue measure).
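A rough sketch of this two-part idea (my illustration, not the answerer's code; the covariate `x` and the simulated data are hypothetical): model the probability of the zero spike with a logistic regression and model the continuous positive part separately.
```
set.seed(1)
n <- 500
x <- runif(n)
is_zero <- rbinom(n, 1, plogis(-1 + 2 * x))              # 1 = observation sits in the spike
y <- ifelse(is_zero == 1, 0, rlnorm(n, meanlog = x))     # positive part: log-normal
part1 <- glm(as.numeric(y == 0) ~ x, family = binomial)  # probability of the spike at zero
part2 <- lm(log(y) ~ x, subset = y > 0)                  # model for the continuous part
summary(part1); summary(part2)
```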
| null | CC BY-SA 3.0 | null | 2010-08-09T14:05:50.187 | 2013-05-28T21:09:50.753 | 2013-05-28T21:09:50.753 | 22047 | 223 | null |
1446 | 2 | null | 1444 | 67 | null | It seems to me that the most appropriate choice of transformation is contingent on the model and the context.
The '0' point can arise from several different reasons each of which may have to be treated differently:
- Truncation (as in Robin's example): Use appropriate models (e.g., mixtures, survival models etc)
- Missing data: Impute data / Drop observations if appropriate.
- Natural zero point (e.g., income levels; an unemployed person has zero income): Transform as needed
- Sensitivity of measuring instrument: Perhaps, add a small amount to data?
I am not really offering an answer as I suspect there is no universal, 'correct' transformation when you have zeros.
| null | CC BY-SA 2.5 | null | 2010-08-09T14:22:11.460 | 2010-08-09T14:22:11.460 | null | null | null | null |
1447 | 1 | 1450 | null | 24 | 13547 | I want to fully grasp the notion of $r^2$ describing the amount of variation between variables. Every web explanation is a bit mechanical and obtuse. I want to "get" the concept, not just mechanically use the numbers.
E.g.: Hours studied vs. test score
$r$ = .8
$r^2$ = .64
- So, what does this mean?
- 64% of the variability of test scores can be explained by hours?
- How do we know that just by squaring?
| Coefficient of Determination ($r^2$): I have never fully grasped the interpretation | CC BY-SA 3.0 | null | 2010-08-09T14:52:42.430 | 2017-04-19T18:33:23.947 | 2016-03-04T16:01:26.207 | 485 | 6967 | [
"regression",
"correlation",
"variance"
] |
1448 | 2 | null | 1447 | 6 | null | A mathematical demonstration of the relationship between the two is here: [Pearson's correlation and least squares regression analysis](http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient#Pearson.27s_correlation_and_least_squares_regression_analysis).
I am not sure if there is a geometric or any other intuition that can be offered apart from the math but if I can think of one I will update this answer.
Update: Geometric Intuition
Here is a geometric intuition I came up with. Suppose that you have two variables $x$ and $y$ which are mean centered. (Assuming mean centered lets us ignore the intercept which simplifies the geometrical intuition a bit.) Let us first consider the geometry of linear regression. In linear regression, we model $y$ as follows:
$y = x\ \beta + \epsilon$.
Consider the situation when we have two observations from the above data generating process given by the pairs ($y_1,y_2$) and ($x_1,x_2$). We can view them as vectors in two-dimensional space as shown in the figure below:
![two observations of y and x shown as vectors in the plane](http://a.imageshack.us/img202/669/linearregression1.png)
Thus, in terms of the above geometry, our goal is to find a $\beta$ such that the vector $x\ \beta$ is the closest possible to the vector $y$. Note that different choices of $\beta$ scale $x$ appropriately. Let $\hat{\beta}$ be the value of $\beta$ that is our best possible approximation of $y$ and denote $\hat{y} = x\ \hat{\beta}$. Thus,
$y = \hat{y} + \hat{\epsilon}$
From a geometrical perspective we have three vectors. $y$, $\hat{y}$ and $\hat{\epsilon}$. A little thought suggests that we must choose $\hat{\beta}$ such that three vectors look like the one below:
![y decomposed into the fitted vector and the residual vector](http://a.imageshack.us/img19/9524/intuitionlinearregressi.png)
In other words, we need to choose $\beta$ such that the angle between $x\ \beta$ and $\hat{\epsilon}$ is $90^\circ$.
So, how much variation in $y$ have we explained with this projection of $y$ onto the vector $x$? Since the data are mean centered, the variation in $y$ equals $y_1^2+y_2^2$, which is the squared distance between the point $y$ and the origin. The variation in $\hat{y}$ is similarly the squared distance between the point $\hat{y}$ and the origin, and so on.
By the Pythagorean theorem, we have:
$y^2 = \hat{y}^2 + \hat{\epsilon}^2$
Therefore, the proportion of the variance explained by $x$ is $\frac{\hat{y}^2}{y^2}$. Notice also that $\cos(\theta) = \frac{\hat{y}}{y}$, and the wiki tells us that the [geometrical interpretation of correlation](http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient#Geometric_interpretation) is that correlation equals the cosine of the angle between the mean-centered vectors.
Therefore, we have the required relationship:
(Correlation)$^2$ = Proportion of variation in $y$ explained by $x$.
Hope that helps.
| null | CC BY-SA 2.5 | null | 2010-08-09T15:09:34.683 | 2010-08-09T19:58:51.157 | 2010-08-09T19:58:51.157 | null | null | null |
1449 | 2 | null | 726 | 56 | null | >
My greatest concern was what to call it. I thought of calling it 'information,' but the word was overly used, so I decided to call it 'uncertainty.' When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.'
Claude Elwood Shannon
| null | CC BY-SA 2.5 | null | 2010-08-09T15:36:26.553 | 2010-08-09T15:36:26.553 | null | null | 223 | null |
1450 | 2 | null | 1447 | 31 | null | Start with the basic idea of variation. Your beginning model is the sum of the squared deviations from the mean. The R^2 value is the proportion of that variation that is accounted for by using an alternative model. For example, R-squared tells you how much of the variation in Y you can get rid of by summing up the squared distances from a regression line, rather than the mean.
I think this is made perfectly clear if we think about the simple regression problem plotted out. Consider a typical scatterplot where you have a predictor X along the horizontal axis and a response Y along the vertical axis.
The mean is a horizontal line on the plot where Y is constant. The total variation in Y is the sum of squared differences between the mean of Y and each individual data point. It's the distance between the mean line and every individual point squared and added up.
You can also calculate another measure of variability after you have the regression line from the model. This is the difference between each Y point and the regression line. Rather than each (Y - the mean) squared we get (Y - the point on the regression line) squared.
If the regression line is anything but horizontal, we're going to get less total distance when we use this fitted regression line rather than the mean--that is, there is less unexplained variation. The ratio of the variation explained to the original variation is your R^2. It's the proportion of the original variation in your response that is explained by fitting that regression line.
[](https://i.stack.imgur.com/Fbzzy.png)
Here is some R code for a graph with the mean, the regression line, and segments from the regression line to each point to help visualize:
```
library(ggplot2)
data(faithful)
plotdata <- aggregate( eruptions ~ waiting , data = faithful, FUN = mean)
linefit1 <- lm(eruptions ~ waiting, data = plotdata)
plotdata$expected <- predict(linefit1)
plotdata$sign <- residuals(linefit1) > 0
p <- ggplot(plotdata, aes(y=eruptions, x=waiting, xend=waiting, yend=expected) )
p + geom_point(shape = 1, size = 3) +
geom_smooth(method=lm, se=FALSE) +
geom_segment(aes(y=eruptions, x=waiting, xend=waiting, yend=expected, colour = sign),
data = plotdata) +
theme(legend.position="none") +
geom_hline(yintercept = mean(plotdata$eruptions), size = 1)
```
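As an added numeric check (not part of the original answer), using the objects created above:
```
# R^2 from the fitted model equals the squared correlation and 1 - SS_residual / SS_total
r2_lm  <- summary(linefit1)$r.squared
r2_cor <- cor(plotdata$waiting, plotdata$eruptions)^2
ss_res <- sum(residuals(linefit1)^2)
ss_tot <- sum((plotdata$eruptions - mean(plotdata$eruptions))^2)
c(r2_lm, r2_cor, 1 - ss_res / ss_tot)   # all three agree
```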
| null | CC BY-SA 3.0 | null | 2010-08-09T15:44:50.163 | 2017-04-19T18:33:23.947 | 2017-04-19T18:33:23.947 | 485 | 485 | null |
1451 | 2 | null | 942 | 19 | null | The list in the presentation that you reference seems fairly arbitrary to me, and the technique that would be used will really depend on the specific problem. You will note however that it also includes [Kalman filters](http://en.wikipedia.org/wiki/Kalman_filter), so I suspect that the intended usage is as a filtering technique. Wavelet transforms generally fall under the subject of [signal processing](http://en.wikipedia.org/wiki/Category%3aSignal_processing), and will often be used as a pre-processing step with very noisy data. An example is the "[Multi-scale anomaly detection](http://portal.acm.org/citation.cfm?id=1343118.1343331)" paper by Chen and Zhan (see below). The approach would be to run an analysis on the different spectrum rather than on the original noisy series.
Wavelets are often compared to the continuous-time Fourier transform, although they have the benefit of being localized in both time and frequency. Wavelets can be used both for signal compression and for smoothing (wavelet shrinkage). Ultimately, it could make sense to apply a further statistical analysis after the wavelet transform has been applied (by looking at the auto-correlation function, for instance). One further aspect of wavelets that could be useful for anomaly detection is the effect of localization: namely, a discontinuity will only influence the wavelets that are near it (unlike a Fourier transform). One application of this is finding locally stationary time series (using an LSW).
[Guy Nason](http://www.stats.bris.ac.uk/~magpn/) has a nice book that I would recommend if you want to delve further into the practical statistical application: "[Wavelet Methods in Statistics with R](http://rads.stackoverflow.com/amzn/click/0387759603)". It specifically targets the application of wavelets to statistical analysis, and he provides many real-world examples along with all the code (using the [wavethresh package](http://cran.r-project.org/web/packages/wavethresh/index.html)). Nason's book does not address "anomaly detection" specifically, although it does do an admirable job of providing a general overview.
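For a flavour of what that looks like in practice, here is a minimal wavelet-shrinkage sketch (mine, not Nason's); it uses the standard `wd()`/`threshold()`/`wr()` workflow from wavethresh, with the default thresholding settings treated as an assumption:
```
library(wavethresh)
set.seed(1)
n <- 512                                   # wavethresh expects a power-of-two length
signal <- sin(2 * pi * (1:n) / 128)
y <- signal + rnorm(n, sd = 0.3)           # noisy observations
wds  <- wd(y)                              # discrete wavelet decomposition
wtd  <- threshold(wds)                     # shrink the small (noise) coefficients
yhat <- wr(wtd)                            # reconstruct the smoothed series
plot(y, type = "l", col = "grey"); lines(yhat, col = "red")
```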
Lastly, [the wikipedia article](http://en.wikipedia.org/wiki/Wavelet) does provide many good introductory references, so it is worth going through it in detail.
- Xiao-yun Chen, Yan-yan Zhan "Multi-scale anomaly detection algorithm based on infrequent pattern of time series" Journal of Computational and Applied Mathematics Volume 214 , Issue 1 (April 2008)
- G.P. Nason "Wavelet Methods in Statistics with R" Springer, 2008
[As a side note: if you are looking for a good modern technique for change point detection, I would suggest trying a HMM before spending too much time with wavelet methods, unless you have good reason to be using wavelets in your particular field. This is based on my personal experience. There are of course many other nonlinear models that could be considered, so it really depends on your specific problem.]
| null | CC BY-SA 2.5 | null | 2010-08-09T15:51:47.087 | 2010-08-09T15:51:47.087 | null | null | 5 | null |
1452 | 2 | null | 1444 | 43 | null | The log transforms with shifts are special cases of the [Box-Cox transformations](http://en.wikipedia.org/wiki/Box-Cox_transformation):
$y(\lambda_{1}, \lambda_{2}) =
\begin{cases}
\frac {(y+\lambda_{2})^{\lambda_1} - 1} {\lambda_{1}} & \mbox{when } \lambda_{1} \neq 0 \\ \log (y + \lambda_{2}) & \mbox{when } \lambda_{1} = 0
\end{cases}$
This is the extended form for negative values, but it is also applicable to data containing zeros. Box and Cox (1964) present an algorithm to find appropriate values for the $\lambda$'s using maximum likelihood. This gives you the ultimate transformation.
A reason to prefer Box-Cox transformations is that they're developed to ensure the assumptions of the linear model. There is some work showing that even if your data cannot be transformed to normality, the estimated $\lambda$ still leads to a symmetric distribution.
I'm not sure how well this addresses your data, since it could be that $\lambda = (0, 1)$, which is just the log transform you mentioned, but it may be worth estimating the required $\lambda$'s to see if another transformation is appropriate.
In R, the [boxcox.fit](http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/geoR/html/boxcox.fit.html) function in package `geoR` will compute the parameters for you.
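If you prefer not to rely on a particular package's interface, a rough grid-search sketch of the two-parameter profile log-likelihood (my illustration; the simulated data and grid values are arbitrary) looks like this:
```
# Profile log-likelihood of the two-parameter Box-Cox transform (up to a constant):
# -n/2 * log(sigma_hat^2 of the transformed data) + (lambda1 - 1) * sum(log(y + lambda2))
boxcox_loglik <- function(y, lambda1, lambda2) {
  z <- if (abs(lambda1) < 1e-8) log(y + lambda2)
       else ((y + lambda2)^lambda1 - 1) / lambda1
  -length(y) / 2 * log(mean((z - mean(z))^2)) + (lambda1 - 1) * sum(log(y + lambda2))
}
set.seed(1)
y <- c(rep(0, 20), rlnorm(200))                      # skewed non-negative data with zeros
grid <- expand.grid(l1 = seq(-1, 1, by = 0.1), l2 = c(0.01, 0.1, 0.5, 1))
grid$ll <- mapply(boxcox_loglik, lambda1 = grid$l1, lambda2 = grid$l2,
                  MoreArgs = list(y = y))
grid[which.max(grid$ll), ]                           # rough estimate of (lambda1, lambda2)
```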
| null | CC BY-SA 2.5 | null | 2010-08-09T16:43:48.870 | 2010-08-10T01:59:44.693 | 2010-08-10T01:59:44.693 | 251 | 251 | null |
1453 | 2 | null | 1447 | 3 | null | The [Regression By Eye](http://onlinestatbook.com/stat_sim/reg_by_eye/index.html) applet could be of use if you're trying to develop some intuition.
It lets you generate data then guess a value for R, which you can then compare with the actual value.
| null | CC BY-SA 2.5 | null | 2010-08-09T16:49:45.147 | 2010-08-09T16:49:45.147 | null | null | 251 | null |
1454 | 1 | 1457 | null | 2 | 125 | I have a data set that contains two types of points. The first type of points comes from an N(0,1) distribution. The second type of points comes from an N(m,v) distribution for some real m and some positive real v. The objective is to classify each point as type 1 or type 2, and to identify m & v. We have no a priori information about m & v. Any ideas?
| How to identify points and an unknown distribution in a two type clustering problem? | CC BY-SA 2.5 | null | 2010-08-09T17:43:33.807 | 2011-04-29T00:21:42.403 | 2011-04-29T00:21:42.403 | 3911 | 247 | [
"clustering",
"mixture-distribution"
] |
1455 | 1 | 1456 | null | 17 | 15364 | I have a bunch of articles presenting "OR" (odds ratio) estimates with a 95% CI (confidence interval).
I want to estimate from the articles the P value for the observed OR. For that, I need an assumption regarding the OR distribution. What distribution can I safely assume/use?
| What is the distribution of OR (odds ratio)? | CC BY-SA 2.5 | null | 2010-08-09T17:47:12.227 | 2014-03-31T16:41:03.707 | 2011-04-29T00:22:12.450 | 3911 | 253 | [
"distributions",
"odds-ratio"
] |
1456 | 2 | null | 1455 | 13 | null | The log odds ratio has an asymptotically Normal distribution:
$\log(\hat{OR}) \sim N(\log(OR), \sigma_{\log(OR)}^2)$
with $\sigma$ estimated from the contingency table. See, for example, page 6 of the notes:
- Asymptotic Theory for Parametric Models
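To make that concrete for the original question, here is a small sketch (added, not from the answer) that recovers a Wald p-value from a reported OR and its 95% CI via the normal approximation on the log scale; the numbers in the example call are hypothetical:
```
or_p_value <- function(or, ci_low, ci_high) {
  log_or <- log(or)
  se     <- (log(ci_high) - log(ci_low)) / (2 * qnorm(0.975))  # back out the SE of log(OR)
  z      <- log_or / se
  2 * pnorm(-abs(z))                                           # two-sided Wald p-value
}
or_p_value(1.80, 1.10, 2.95)   # e.g., a published OR of 1.80 with 95% CI (1.10, 2.95)
```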
| null | CC BY-SA 2.5 | null | 2010-08-09T18:00:24.507 | 2010-08-13T06:44:20.513 | 2010-08-13T06:44:20.513 | 251 | 251 | null |
1457 | 2 | null | 1454 | 1 | null | You could use a [mixture model](http://en.wikipedia.org/wiki/Mixture_model) to separate out the components. The data generating process can be represented as follows:
Let:
$z_i$: be the type (1 or 2) for the $i^{th}$ observation,
$y_i$ be the $i^{th}$ observation.
Then you have:
$f(y_i|z_i=1) \sim N(0,1)$
$f(y_i|z_i=2) \sim N(m,v)$
$P(z_i=1) = \pi$ and
$P(z_i=2) = 1-\pi$.
The likelihood function is given by:
$L(m,v,\pi) = \prod_{i=1}^{n} \left[ \pi f(y_i|z_i=1) + (1-\pi) f(y_i|z_i=2) \right]$
You can then use either [EM or MCMC](http://en.wikipedia.org/wiki/Mixture_model#Common_approaches_for_estimation_in_mixture_models) to estimate the model parameters.
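A bare-bones EM sketch for this particular mixture (added for illustration; it is not the answerer's code, and the simulated data are hypothetical), with the $N(0,1)$ component held fixed:
```
em_fit <- function(y, iters = 200) {
  m <- mean(y); v <- var(y); p <- 0.5           # crude starting values; p = P(type 1)
  for (i in 1:iters) {
    # E-step: posterior probability that each point is of type 2
    d1 <- dnorm(y, 0, 1); d2 <- dnorm(y, m, sqrt(v))
    w  <- (1 - p) * d2 / (p * d1 + (1 - p) * d2)
    # M-step: update the unknown parameters
    p <- 1 - mean(w)
    m <- sum(w * y) / sum(w)
    v <- sum(w * (y - m)^2) / sum(w)
  }
  list(m = m, v = v, pi_type1 = p, posterior_type2 = w)
}
set.seed(1)
y <- c(rnorm(300), rnorm(200, mean = 3, sd = 2))   # simulated mixed sample
fit <- em_fit(y)
fit[c("m", "v", "pi_type1")]    # classify points as type 2 where posterior_type2 > 0.5
```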
| null | CC BY-SA 2.5 | null | 2010-08-09T18:03:31.257 | 2010-08-09T18:36:17.977 | 2010-08-09T18:36:17.977 | null | null | null |
1458 | 1 | null | null | 46 | 6887 | I find it hard to understand what really is the issue with multiple comparisons. With a simple analogy, it is said that a person who will make many decisions will make many mistakes. So very conservative precaution is applied, like Bonferroni correction, so as to make the probability that, this person will make any mistake at all, as low as possible.
But why do we care about whether the person has made any mistake at all among all decisions he/she has made, rather than the percentage of the wrong decisions?
Let me try to explain what confuses me with another analogy. Suppose there are two judges, one is 60 years old, and the other is 20 years old. Then Bonferroni correction tells the one which is 20 years old to be as conservative as possible, in deciding for execution, because he will work for many more years as a judge, will make many more decisions, so he has to be careful. But the one at 60 years old will possibly retire soon, will make fewer decisions, so he can be more careless compared to the other. But actually, both judges should be equally careful or conservative, regardless of the total number of decisions they will make. I think this analogy more or less translates to the real problems where Bonferroni correction is applied, which I find counterintuitive.
| Why is multiple comparison a problem? | CC BY-SA 2.5 | null | 2010-08-09T18:03:54.360 | 2020-02-08T06:34:30.300 | 2010-12-17T07:48:12.923 | 223 | 148 | [
"hypothesis-testing",
"multiple-comparisons"
] |
1459 | 1 | null | null | 9 | 9312 | I am trying to build a time series regression forecasting model for an outcome variable, in dollar amounts, in terms of other predictors/input variables and autocorrelated errors. This kind of model is also called a dynamic regression model. I need to learn how to identify transfer functions for each predictor and would love to hear from you about ways to do just that.
| How to identify transfer functions in a time series regression forecasting model? | CC BY-SA 2.5 | null | 2010-08-09T18:10:39.633 | 2013-08-28T17:05:23.470 | 2010-08-13T13:04:38.100 | 159 | 833 | [
"time-series",
"forecasting",
"dynamic-regression"
] |
1460 | 1 | 1994 | null | 2 | 1571 | I am trying to figure out the best transformation of my consumption variable. I am running a probit regression to look at whether or not a household enrolls in health insurance. Consumption per capita is an independent variable and in my current model I use both consumption and consumption squared (two separate variables) to show that consumption increases but with diminishing returns. This makes for fairly straightforward interpretation. However, using the log of consumption is a slightly better fit because it normalizes the distribution and contributes a bit more to the overall R2 for the model but it is more difficult to interpret. Which would you suggest I use - log of consumption or consumption plus the quadratic function? My research is focused on health economics so I'm not sure what the preference is in that discipline. Any insight would be much appreciated. Thank you!
| Ideal transformation for consumption variable in a probit model | CC BY-SA 2.5 | null | 2010-08-09T18:16:21.917 | 2010-09-16T14:17:36.323 | 2010-09-16T06:56:30.077 | null | 834 | [
"data-transformation",
"econometrics"
] |
1461 | 2 | null | 1460 | 0 | null | It seems to me that you already have a 'partial' statistical answer (the better $R^2$ points to what to choose: log vs. quadratic). You could use other data-driven metrics (e.g., out-of-sample hit rates, whether the parameters are reasonable, etc.) to judge which model structure is 'better'.
PS: By the way, are these two models consistent with economic theory? Just asking as I do not know this area. Another way to select a model is to check its consistency with theory.
| null | CC BY-SA 2.5 | null | 2010-08-09T18:27:59.960 | 2010-08-09T18:27:59.960 | null | null | null | null |
1462 | 1 | 1467 | null | 4 | 3795 | Let's say I want to make a football simulator based on real-life data.
Say I have a player who averages 5.3 yards per carry with a SD of 1.7 yards.
I'd like to generate a random variable that simulates the next few plays.
eg: 5.7, 4.9, 5.3, etc.
What stats terms do I need to look up to pursue this idea? Density function? The normal curve estimates what boundaries the data generally fall within, but how do I translate that into a simulation of subsequent data points?
Thanks for any guidance!
| Using Std.Dev and Mean to generate hypothetical/additional data points? | CC BY-SA 2.5 | null | 2010-08-09T18:53:30.313 | 2010-08-13T16:15:41.417 | null | null | 6967 | [
"standard-deviation"
] |
1463 | 2 | null | 1458 | 41 | null | You've stated something that is a classic counter argument to Bonferroni corrections. Shouldn't I adjust my alpha criterion based on every test I will ever make? This kind of ad absurdum implication is why some people do not believe in Bonferroni style corrections at all. Sometimes the kind of data one deals with in their career is such that this is not an issue. For judges who make one, or very few decisions on each new piece of evidence this is a very valid argument. But what about the judge with 20 defendants and who is basing their judgment on a single large set of data (e.g. war tribunals)?
You're ignoring the kicks at the can part of the argument. Generally scientists are looking for something — a p-value less than alpha. Every attempt to find one is another kick at the can. One will eventually find one if one takes enough shots at it. Therefore, they should be penalized for doing that.
The way you harmonize these two arguments is to realize they are both true. The simplest solution is to treat testing of differences within a single dataset as a kicks-at-the-can kind of problem, while recognizing that expanding the scope of correction beyond that would be a slippery slope.
This is a genuinely difficult problem in a number of fields, notably FMRI where there are thousands of data points being compared and there are bound to be some come up as significant by chance. Given that the field has been historically very exploratory one has to do something to correct for the fact that hundreds of areas of the brain will look significant purely by chance. Therefore, many methods of adjustment of criterion have been developed in that field.
On the other hand, in some fields one might at most be looking at 3 to 5 levels of a variable and always just test every combination if a significant ANOVA occurs. This is known to have some problems (type 1 errors) but it's not particularly terrible.
It depends on your point of view. The FMRI researcher recognizes a real need for a criterion shift. The person looking at a small ANOVA may feel that there's clearly something there from the test. The proper conservative point of view on the multiple comparisons is to always do something about them but only based on a single dataset. Any new data resets the criterion... unless you're a Bayesian...
| null | CC BY-SA 3.0 | null | 2010-08-09T18:55:55.957 | 2016-03-10T00:06:53.230 | 2016-03-10T00:06:53.230 | 601 | 601 | null |
1464 | 2 | null | 1462 | 4 | null | You need a random number generator for the normal distribution. Either you can supply the mean and standard deviation as arguments to the function, or you can scale a standard normal draw yourself by multiplying by the latter for the variability and adding the former for the central location.
Here is a quick example of the former approach:
```
> set.seed(42)
> x <- rnorm(1000, 5.3, 1.7) # 1000 draws of N(5.3, 1.7)
> print(c(mean=mean(x), sd=sd(x)))
mean sd
5.2561 1.7043
>
```
| null | CC BY-SA 2.5 | null | 2010-08-09T19:02:57.657 | 2010-08-09T19:02:57.657 | null | null | 334 | null |
1465 | 2 | null | 1460 | 3 | null | I am not sure I understand your interpretation: a log-transformed predictor would imply that the effect is increasing with diminishing returns, while a quadratic function as a predictor would imply the existence of a peak in the effect (for ax^2+bx+c the peak is at -b/(2a)). I would assume the latter is less realistic in your context, and is also not supported by the better R-squared for the log-transformed predictor.
| null | CC BY-SA 2.5 | null | 2010-08-09T19:13:45.493 | 2010-08-09T19:13:45.493 | null | null | 279 | null |
1466 | 2 | null | 1462 | 5 | null | If you want a realistic simulation you need to find a distribution that describes the real process well enough (a model).
When a real player makes a move he will on average (e.g.) throw `X` yards, with a standard deviation of `Y`. This does not, however, mean that the distribution of throws is a normal distribution. You should plot the (histogrammed) `throw` distribution, plot your fit (a Normal distribution with mean `X` and σ=`Y`), and determine whether it fits well enough. If not, find a distribution that describes the real data better.
Once you have that down you need to generate random numbers from the distribution you determined.
EDIT
If your data are complete enough, you could maybe avoid having to create a full model. You would create a [frequency distribution](http://en.wikipedia.org/wiki/Frequency_distribution) of your data (a histogram) and then do [rejection sampling](http://en.wikipedia.org/wiki/Rejection_sampling) directly from it.
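For example, a simple way to draw new plays from the histogram (my sketch, not the answerer's; `carries` is a hypothetical vector of observed yardages, and strictly speaking this samples bins in proportion to their counts rather than doing textbook rejection sampling):
```
set.seed(1)
carries <- rnorm(200, mean = 5.3, sd = 1.7)   # stand-in for the real observed data
h <- hist(carries, breaks = 20, plot = FALSE)
bins <- sample(seq_along(h$counts), size = 10, replace = TRUE, prob = h$counts)
sim  <- runif(10, min = h$breaks[bins], max = h$breaks[bins + 1])  # jitter within each bin
round(sim, 1)   # ten simulated plays
```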
| null | CC BY-SA 2.5 | null | 2010-08-09T19:13:48.053 | 2010-08-10T06:53:15.300 | 2010-08-10T06:53:15.300 | 56 | 56 | null |
1467 | 2 | null | 1462 | 8 | null | Of course you can use rnorm() in R, but it may be easier to understand how drawing from a pdf works by using the [probability integral transform](http://en.wikipedia.org/wiki/Normal_distribution#Generating_values_from_normal_distribution).
Basically, once we specify the structure of the pdf, we can transform it into a cdf (empirically, so we can ignore what the equation is), and because the cdf takes unique values from 0 to 1, we can back-calculate a draw from the original pdf by matching uniform random draws from 0 to 1 against the cdf.
This way, you only need to have a RNG from 0 to 1, and the function of the pdf, and you're set. Here is the R code:
```
x <- seq(-4, 4, len = 1000)                     # grid on which the pdf is evaluated
f <- function(x, mu = 0, sigma = 1) {           # normal pdf
  out <- 1 / sqrt(2*pi*sigma^2) * exp(-(x - mu)^2 / (2*sigma^2))
  out
}
x.ecdf <- cumsum(f(x)) / sum(f(x))              # discretized cdf on the grid
out <- vector()
y <- runif(100)                                 # uniform draws on [0, 1]
for (i in 1:length(y)) {
  # invert the cdf: find the grid point whose cdf value is closest to y[i]
  out[i] <- which((y[i] - x.ecdf)^2 == min((y[i] - x.ecdf)^2))
}
par(mfrow = c(1,2))
plot(x, x.ecdf)                                 # left: the cdf
hist(x[out], breaks = 20)                       # right: histogram of the simulated draws
```
![The empirical cdf and a histogram of the resulting draws](http://probabilitynotes.files.wordpress.com/2010/08/rnormish.png)
| null | CC BY-SA 2.5 | null | 2010-08-09T19:53:30.360 | 2010-08-09T20:28:40.977 | 2010-08-09T20:28:40.977 | 291 | 291 | null |
1468 | 2 | null | 1458 | 10 | null | To fix ideas: I will take the case where you observe $n$ independent random variables $(X_i)_{i=1,\dots,n}$ such that for $i=1,\dots,n$, $X_i$ is drawn from $\mathcal{N}(\theta_i,1)$. I assume that you want to know which ones have non-zero mean; formally, you want to test:
$H_{0i} : \theta_i=0$ Vs $H_{1i} : \theta_i\neq 0$
Definition of a threshold: You have $n$ decisions to make and you may have different aims. For a given test $i$ you are certainly going to choose a threshold $\tau_i$ and decide not to accept $H_{0i}$ if $|X_i|>\tau_i$.
Different options: You have to choose the thresholds $\tau_i$ and for that you have two options:
- choose the same threshold for everyone;
- choose a different threshold for each test (most often a data-driven threshold, see below).
Different aims: These options can be driven by different aims, such as
- Controlling the probability of wrongly rejecting $H_{0i}$ for one or more $i$.
- Controlling the expectation of the false alarm ratio (or False Discovery Rate)
Whatever your aim is in the end, it is a good idea to use a data-driven threshold.
My answer to your question: your intuition is related to the main heuristic for choosing a data-driven threshold. It is the following (and it is at the origin of Holm's procedure, which is more powerful than Bonferroni):
Imagine you have already taken a decision for the $p$ lowest $|X_{i}|$ and the decision is to accept $H_{0i}$ for all of them. Then you only have to make $n-p$ comparisons and you haven't taken any risk of rejecting $H_{0i}$ wrongly! Since you haven't used up your budget, you may take a little more risk for the remaining tests and choose a larger threshold.
In the case of your judges: I assume (and I guess you should do the same) that both judges have the same budget of false accusations for their lifetime. The 60-year-old judge may be less conservative if, in the past, he has not accused anyone! But if he has already made a lot of accusations he will be more conservative, maybe even more so than the youngest judge.
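In R, Holm's procedure is available through `p.adjust(..., method = "holm")`; a small sketch with made-up p-values:
```
p <- c(0.001, 0.008, 0.020, 0.041, 0.27, 0.64)  # hypothetical raw p-values

# Bonferroni spends the same budget on every test;
# Holm re-spends the unused budget on the remaining tests, step by step.
cbind(raw        = p,
      bonferroni = p.adjust(p, method = "bonferroni"),
      holm       = p.adjust(p, method = "holm"))
```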
| null | CC BY-SA 2.5 | null | 2010-08-09T21:18:01.033 | 2010-08-10T05:10:38.933 | 2020-06-11T14:32:37.003 | -1 | 223 | null |
1469 | 1 | 2084 | null | 5 | 249 | Can someone recommend a text with derivations of classical estimator efficiency results? I'm particularly interested in likelihood and pseudo-likelihood estimators for multi-variate discrete models
| A Primer on Estimator Efficiency? | CC BY-SA 2.5 | null | 2010-08-09T21:55:08.393 | 2022-11-29T18:35:55.963 | 2019-01-28T08:02:44.497 | 11887 | 511 | [
"estimation",
"references",
"efficiency"
] |
1470 | 2 | null | 1458 | 13 | null | Related to the comment earlier, what the fMRI researcher should remember is that clinically important outcomes are what matter, not the density shift of a single pixel on an fMRI of the brain. If it doesn't result in a clinical improvement/detriment, it doesn't matter. That is one way of reducing the concern about multiple comparisons.
See also:
- Bauer, P. (1991). Multiple testing in clinical trials. Stat Med, 10(6), 871-89; discussion 889-90.
- Proschan, M. A. & Waclawiw, M. A. (2000). Practical guidelines for multiplicity adjustment in clinical trials. Control Clin Trials, 21(6), 527-39.
- Rothman, K. J. (1990). No adjustments are needed for multiple comparisons. Epidemiology (Cambridge, Mass.), 1(1), 43-6.
- Perneger, T. V. (1998). What's wrong with bonferroni adjustments. BMJ (Clinical Research Ed.), 316(7139), 1236-8.
| null | CC BY-SA 2.5 | null | 2010-08-09T22:18:22.997 | 2010-08-09T22:18:22.997 | null | null | 561 | null |
1471 | 1 | 1473 | null | 13 | 35087 | I need to analyze with R the data from a medical survey (with 100+ coded columns) that comes in a CSV. I will use [rattle](http://rattle.togaware.com/) for some initial analysis but behind the scenes it's still R.
If I read.csv() the file, columns with numerical codes are treated as numerical data. I'm aware I could create categorical columns from them with factor() but doing it for 100+ columns is a pain.
I hope there is a better way to tell R to import the columns directly as factors, or at least a way to convert them in place afterwards.
Thank you!
| Is it possible to directly read CSV columns as categorical data? | CC BY-SA 2.5 | null | 2010-08-09T22:25:11.207 | 2010-08-10T00:25:34.777 | null | null | 840 | [
"r",
"categorical-data",
"data-transformation"
] |
1472 | 2 | null | 1459 | 7 | null | The classic approach, described in [Box, Jenkins & Reinsel (4th ed, 2008)](http://rads.stackoverflow.com/amzn/click/0470272848), involves looking at the cross-correlation function and the various auto-correlation functions, and making a lot of subjective decisions about the orders and lags for the various terms. The approach works ok for a single predictor, but is not really suitable for multiple predictors.
An alternative approach, described in [Pankratz (1991)](http://rads.stackoverflow.com/amzn/click/0471615285), involves fitting lagged regressions with AR errors and determining the appropriate rational lag structure from the fitted coefficients (also a relatively subjective process). Then refitting the entire model with the supposed lag structures and extracting the residuals. The order of the ARMA error process is determined from these residuals (using AIC for example). Then the final model is re-estimated. This approach works well for multiple predictors, and is considerably simpler to apply than the classic approach.
I wish I could say there was this neat automated procedure that did it all for you, but I can't. At least not yet.
| null | CC BY-SA 2.5 | null | 2010-08-09T22:27:53.513 | 2010-08-09T22:27:53.513 | null | null | 159 | null |
1473 | 2 | null | 1471 | 17 | null | You can use the `colClasses` argument to specify the classes of your data columns. For example:
```
data <- read.csv('foo.csv', colClasses=c('numeric', 'factor', 'factor'))
```
will assign numeric to the first column, factor to the second and third. Since you have so many columns, a shortcut might be:
```
data <- read.csv('foo.csv', colClasses=c('numeric', rep('factor', 37), 'character'))
```
or some such variation (i.e. assign numeric to first column, factor to next 37 columns, then character to the last one).
| null | CC BY-SA 2.5 | null | 2010-08-09T22:31:23.070 | 2010-08-10T00:25:34.777 | 2010-08-10T00:25:34.777 | 251 | 251 | null |
1474 | 2 | null | 1471 | 3 | null | or just do it after you read the data
```
dat <- read.csv("kdfjdkf")
dat[] <- lapply(dat, factor)  # lapply keeps the data frame; apply() would coerce it to a character matrix
```
though this type of Q is probably more fit for [Stack Overflow](https://stackoverflow.com/questions/tagged/r).
edit: see below.
| null | CC BY-SA 2.5 | null | 2010-08-09T22:33:37.870 | 2010-08-09T23:48:44.260 | 2017-05-23T12:39:26.523 | -1 | 291 | null |
1475 | 1 | 1491 | null | 14 | 11365 | I want to cluster ~22000 points. Many clustering algorithms work better with higher quality initial guesses. What tools exist that can give me a good idea of the rough shape of the data?
I do want to be able to choose my own distance metric, so a program I can feed a list of pairwise distances to would be just fine. I would like to be able to do something like highlight a region or cluster on the display and get a list of which data points are in that area.
Free software preferred, but I do already have SAS and MATLAB.
| Visualization software for clustering | CC BY-SA 2.5 | null | 2010-08-09T22:33:40.163 | 2015-07-31T02:31:24.627 | 2010-11-13T20:32:11.607 | 930 | null | [
"data-visualization",
"clustering",
"software"
] |
1476 | 2 | null | 1458 | 26 | null | Well-respected statisticians have taken a wide variety of positions on multiple comparisons. It's a subtle subject. If someone thinks it's simple, I'd wonder how much they've thought about it.
Here's an interesting Bayesian perspective on multiple testing from Andrew Gelman: [Why we don't (usually) worry about multiple comparisons](http://www.stat.columbia.edu/%7Ecook/movabletype/archives/2008/03/why_i_dont_usua_1.html).
| null | CC BY-SA 2.5 | null | 2010-08-09T23:39:55.867 | 2010-08-09T23:39:55.867 | 2020-06-11T14:32:37.003 | -1 | 319 | null |
1477 | 2 | null | 1385 | 1 | null | If you have a reasonable hunch about the data-generating process that is responsible for the data in question, then you could use Bayesian ideas to estimate the missing data. Under the Bayesian approach you would simply assume that the missing data are also random variables and construct the posterior for the missing data conditional on the observed data. The posterior means would then be used as a substitute for the missing data.
The use of Bayesian models may qualify as imputation in a broad sense of the term, but I thought I would mention it as it did not appear on your list.
| null | CC BY-SA 2.5 | null | 2010-08-10T00:32:44.947 | 2010-08-10T00:32:44.947 | null | null | null | null |
1478 | 1 | 1479 | null | 3 | 1410 | I am trying to write unit tests for a whole mess of statistics code. Some of the unit tests take the form: generate a sample following a null hypothesis, use the code to get a p-value under that null, repeat hundreds of times, then look at all the p-values: if they are reasonably uniform, then the code passes. I usually check whether the proportion of p-values < $\alpha$ is near $\alpha$ for $\alpha = 0.01, 0.05, 0.1$. But I am usually also interested in whether the p-values output deviate from uniformity. I usually test this with the Anderson-Darling test.
Here is where I have a circularity problem: how can I unit test my Anderson-Darling code? I can easily feed it uniformly generated variables and get a p-value, repeat hundreds of times, but then I just have a bunch of p-values. I can q-q plot them, but I'm more interested in an automatic unit test I can run. What are some basic sanity checks I can implement automatically? There is the naive check of the proportion of p-values < $\alpha$ noted above. I can also implement a Kolmogorov-Smirnov test. What else can I easily check for?
(I realize this question may seem hopelessly pedantic or naive or subject to infinite regress...)
edit some additional ideas:
- test the code on $\frac{i}{n}$ for $i = 1,2,...,n$, for different values of $n$. Presumably I can compute, by hand, the p-value for the A-D test in this case.
- compute $n$ p-values by feeding many uniform samples to the code $n$ times, then regress the order statistics of the p-values, $p_{(i)}$ vs $i/n$, to get $p_{(i)} = \beta_1 i/n + \beta_0$ and test the null $\beta_1 = 1, \beta_0 = 0$. Presumably I can simplify this test by hand in such a way that inspection reveals it to be correct.
- make sure the code is invariant with respect to permutation of the input. (duh)
| Testing implementation of Anderson-Darling test for uniform RV | CC BY-SA 2.5 | null | 2010-08-10T00:33:26.960 | 2018-08-27T08:44:52.570 | 2018-08-27T08:44:52.570 | 11887 | 795 | [
"hypothesis-testing",
"uniform-distribution"
] |
1479 | 2 | null | 1478 | 1 | null | You could test your Anderson-Darling code using data that is generated from an external library. However, you then run into the issue of how to test/trust the external library. At some point you have to trust that well established libraries are error free and that their output can be relied on.
Once you have the Anderson-Darling code tested against data generated from an external library, the circularity will be broken and you can rely on your own code if it passes the Anderson-Darling tests. The same will hold for the K-S (I presume Kolmogorov–Smirnov) test.
| null | CC BY-SA 2.5 | null | 2010-08-10T00:41:01.463 | 2010-08-10T00:41:01.463 | null | null | null | null |
1480 | 2 | null | 1385 | 2 | null | I might be a little unorthodox here, but what the heck. Please note: this line of thought comes from my own philosophy for classification, which is that I use it when my purpose is squarely on pure prediction -- not explanation, conceptual coherence, etc. Thus, what I'm saying here contradicts how I'd approach building a regression model.
Different classification approaches vary in their capability to handle missing data, and depending on some other factors^, I might just try #5: use a classifier that won't choke on those NAs. Part of the decision to go that route might also include thinking about how likely a similar proportion of NAs are to occur in the future data to which you'll be applying the model. If NAs for certain variables are going to be par for the course, then it might make sense to just roll with them (i.e., don't build a predictive model that assumes more informative data than what you'll actually have, or you'll be kidding yourself about how predictive it really is going to be). In fact, if I'm not convinced that NAs are missing at random, I'd be inclined to recode a new variable (or a new level if it's missing in a categorical variable) just to see if the missingness itself is predictive.
If I had a good reason to use a classifier that did not take missing data very well, then my approach would be #1 (multiple imputation), seeking to find a classification model that behaved similarly well across imputed data sets.
^Including: how much missingness you have in your predictors, whether there are systematic patterns (if there are, it would be worth taking a closer look and thinking through the implications for your analysis), and how much data you have to work with overall.
| null | CC BY-SA 2.5 | null | 2010-08-10T00:43:08.527 | 2010-08-10T00:43:08.527 | null | null | 394 | null |
1481 | 2 | null | 1462 | 1 | null | If you have all the relevant data rather than just summary data such as mean, SD etc. you could create your own distribution model from the real life data you have. Sort the data (y values) from lowest to highest and equally space them between 0 and 1 (x values). Then solve to find the coefficients of an nth order polynomial curve fit to the data (or several in piecewise fashion over parts of the data if necessary). Once you have these, it would be a simple matter to use a uniform random number generator to generate an x value between 0 and 1, and to plug this value into the polynomial equation to get a random y value that would approximate a draw from your distribution.
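A rough R sketch of this recipe; the lognormal sample is just a stand-in for real observed data, and the polynomial order (5 here) is an arbitrary choice:
```
set.seed(7)
y <- sort(rlnorm(200, meanlog = 1, sdlog = 0.4))  # stand-in for the observed data, sorted
x <- seq(0, 1, length.out = length(y))            # equally spaced values between 0 and 1

fit <- lm(y ~ poly(x, 5))                         # polynomial fit to the empirical quantile curve

u   <- runif(1000)                                # uniform draws between 0 and 1
sim <- predict(fit, newdata = data.frame(x = u))  # approximate draws from the fitted distribution

hist(sim, breaks = 30)
```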
| null | CC BY-SA 2.5 | null | 2010-08-10T01:20:52.150 | 2010-08-10T01:20:52.150 | null | null | 226 | null |
1482 | 2 | null | 1475 | 1 | null | Take a look at [Cluster 3.0](http://bonsai.hgc.jp/~mdehoon/software/cluster/). I'm not sure if it will do all you want, but it's pretty well documented and lets you choose from a few distance metrics. The visualization piece is through a separate program called [Java TreeView](http://jtreeview.sourceforge.net/) ([screenshot](http://sourceforge.net/dbimage.php?id=20379)).
| null | CC BY-SA 2.5 | null | 2010-08-10T02:54:09.990 | 2010-08-10T02:54:09.990 | null | null | 251 | null |
1483 | 2 | null | 1405 | 10 | null | Assuming the odds ratios are independent, you can proceed as you would with any estimate in general, except that you have to work with the log odds.
Take the difference of the log odds, $\delta$. The standard error of $\delta$ is $\sqrt{SE_{1}^2 + SE_{2}^2}$. Then you can obtain a p-value for the ratio $z = \delta/SE(\delta)$ from the standard normal.
UPDATE
The standard error of $\log OR$ is the square root of the sum of the reciprocals of the frequencies:
$SE(\log OR) = \sqrt{ {1 \over n_1} + {1 \over n_2} + {1 \over n_3} + {1 \over n_4} }$
In your case, each $n_i$ corresponds to one of TP, FP, TN, FN.
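A small R sketch of this calculation, using made-up confusion-matrix counts for two classifiers:
```
# log odds ratio and its standard error from one set of counts
log_or <- function(tp, fp, fn, tn) {
  est <- log((tp * tn) / (fp * fn))
  se  <- sqrt(1/tp + 1/fp + 1/fn + 1/tn)
  c(logOR = est, SE = se)
}

a <- log_or(tp = 40, fp = 10, fn = 15, tn = 85)  # hypothetical classifier A
b <- log_or(tp = 30, fp = 20, fn = 25, tn = 75)  # hypothetical classifier B

delta    <- a["logOR"] - b["logOR"]
se_delta <- sqrt(a["SE"]^2 + b["SE"]^2)
z        <- delta / se_delta
2 * pnorm(-abs(z))                               # two-sided p-value from the standard normal
```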
| null | CC BY-SA 2.5 | null | 2010-08-10T04:19:05.090 | 2010-08-17T18:56:31.063 | 2010-08-17T18:56:31.063 | 251 | 251 | null |
1484 | 2 | null | 1475 | 5 | null | Exploring clustering results in high dimensions can be done in [R](http://www.r-project.org/) using the packages [clusterfly](http://had.co.nz/model-vis/) and [gcExplorer](http://cran.r-project.org/web/packages/gcExplorer/index.html). Look for more [here](http://cran.r-project.org/web/views/Cluster.html).
| null | CC BY-SA 2.5 | null | 2010-08-10T06:19:14.900 | 2010-08-10T06:19:14.900 | null | null | 339 | null |
1485 | 1 | 1732 | null | 6 | 1100 | In a particular application I was in need of machine learning (I know the things I studied in my undergraduate course). I used Support Vector Machines and got the problem solved. It's working fine.
Now I need to improve the system. Problems here are
- I get additional training examples every week. Right now the system starts training from scratch with the updated examples (old examples + new examples). I want to make the learning incremental: use the previous knowledge (instead of the previous examples) together with the new examples to obtain a new model (knowledge).
- Right now my training examples have 3 classes, so every training example is fitted into one of these 3 classes. I want the functionality of an "Unknown" class: anything that doesn't fit these 3 classes must be marked as "unknown". But I can't treat "Unknown" as a new class and provide examples for it too.
- Assuming the "unknown" class is implemented: when the class is "unknown", the user of the application inputs what he thinks the class might be. Now I need to incorporate the user input into the learning, and I have no idea how to do this either. Would it make any difference if the user inputs a new class (i.e., a class that is not already in the training set)?
Do I need to choose a new algorithm or Support Vector Machines can do this?
PS: I'm using libsvm implementation for SVM.
| Few machine learning problems | CC BY-SA 2.5 | null | 2010-08-10T06:41:06.993 | 2010-08-16T12:54:51.460 | 2010-08-16T12:54:51.460 | null | 851 | [
"machine-learning",
"svm"
] |
1486 | 2 | null | 97 | 14 | null | Conventional practice is to use the non-parametric statistics rank sum and mean rank to describe ordinal data.
Here's how they work:
Rank Sum
- assign a rank to each member in each group;
- e.g., suppose you are looking at goals for each player on two opposing football teams, then rank each member on both teams from first to last;
- calculate the rank sum by adding the ranks per group;
- the magnitude of the rank sum tells you how close together the ranks are for each group.
Mean Rank
M/R is a more sophisticated statistic than R/S because it compensates for unequal sizes in the groups you are comparing. Hence, in addition to the steps above, you divide each sum by the number of members in the group.
Once you have these two statistics, you can, for instance, z-test the rank sum to see if the difference between the two groups is statistically significant (I believe that's known as the Wilcoxon rank sum test, which is interchangeable, i.e., functionally equivalent to the Mann-Whitney U test).
R Functions for these statistics (the ones I know about, anyway):
- wilcox.test in the standard R installation
- meanranks in the cranks package
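For example, a quick call to `wilcox.test` with made-up goal counts (the ties will trigger a warning that an exact p-value cannot be computed):
```
goals_a <- c(0, 1, 1, 2, 3, 0, 1)  # hypothetical goals per player, team A
goals_b <- c(1, 2, 2, 3, 4, 1)     # hypothetical goals per player, team B

wilcox.test(goals_a, goals_b)      # Wilcoxon rank-sum / Mann-Whitney U test
```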
| null | CC BY-SA 3.0 | null | 2010-08-10T06:42:10.703 | 2012-03-19T12:02:50.270 | 2012-03-19T12:02:50.270 | 1036 | 438 | null |
1487 | 1 | 1489 | null | 1 | 161 | We have performed a microarray screening of about 200 samples. In each sample we measure about 100 different variables. For technical reasons the screening of these 200 samples was divided into two batches with a couple of weeks' interval between them. When all the data had been collected, I performed principal component analysis (PCA) on the 200 x 100 table.
When we look at a linear projection onto the first 4 components (responsible for ~70% of the variability), we see a clear division between the two experimental batches. An illustration of what I have can be seen here: [http://img594.imageshack.us/img594/3687/pca.png](http://img594.imageshack.us/img594/3687/pca.png)
What are the accepted techniques to address this issue?
| Correcting experiment results | CC BY-SA 2.5 | null | 2010-08-10T06:55:21.597 | 2010-09-16T10:02:46.633 | 2010-09-16T09:41:11.077 | 8 | 213 | [
"pca",
"experiment-design",
"normalization",
"microarray"
] |
1488 | 2 | null | 1475 | 1 | null | [Weka](http://www.cs.waikato.ac.nz/ml/weka/) is an open source program for data mining (written and extensible in Java), and [Orange](http://www.ailab.si/orange/) is an open source program and library for data mining and machine learning (written in Python). They both allow convenient and efficient visual exploration of multidimensional data.
| null | CC BY-SA 3.0 | null | 2010-08-10T06:59:13.557 | 2011-12-06T09:35:54.143 | 2011-12-06T09:35:54.143 | 930 | 213 | null |
1489 | 2 | null | 1487 | 2 | null | I would think that the first step would be to examine the component loadings and the actual variables to see if you can identify why the two batches yielded discernible differences. Depending on the reasons for the differences you may or may not be able to use a statistical control to "correct" the results.
However, if you have every reason to believe samples were randomly assigned to batches, that your testing methods have not reached floors or ceilings in their ability to assess the variables of interest, and that the actual scores themselves are not of interest but only their relative position, perhaps you could standardize scores along each variable by batch.
Edit: It looks like csgillespie understands your research area and has provided you some good links. All I was suggesting was that for each batch you could calculate a Z score for each observed variable. This would have the effect of eliminating batch differences since for each variable in each batch the mean would be the same (0) and the standard deviation would be the same (1).
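A minimal R sketch of that per-batch standardization, using a made-up data frame with a batch indicator and two measured variables:
```
set.seed(1)
df <- data.frame(batch = rep(c("A", "B"), each = 100),
                 var1  = c(rnorm(100, 5), rnorm(100, 7)),    # hypothetical batch shift
                 var2  = c(rnorm(100, 2), rnorm(100, 2.5)))

vars <- c("var1", "var2")
df[vars] <- lapply(df[vars], function(v)
  ave(v, df$batch, FUN = function(z) as.numeric(scale(z))))  # z-score within each batch

aggregate(df[vars], by = list(batch = df$batch), FUN = mean)  # means are now 0 in both batches
```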
| null | CC BY-SA 2.5 | null | 2010-08-10T07:02:57.383 | 2010-08-10T15:31:41.193 | 2010-08-10T15:31:41.193 | 196 | 196 | null |
1491 | 2 | null | 1475 | 11 | null | GGobi (http://www.ggobi.org/), along with the R package rggobi, is perfectly suited to this task.
See the related presentation for examples: [http://www.ggobi.org/book/2007-infovis/05-clustering.pdf](http://www.ggobi.org/book/2007-infovis/05-clustering.pdf)
| null | CC BY-SA 2.5 | null | 2010-08-10T07:15:35.670 | 2010-08-10T07:15:35.670 | null | null | 5 | null |
1492 | 2 | null | 1149 | 36 | null | I was eating sushi once and thought that it might make a good intuitive demonstration of ill-conditioned problems. Suppose you wanted to show someone a plane using two sticks touching at their bases.
You'd probably hold the sticks orthogonal to each other. The effect of any kind of shakiness of your hands on the plane causes it to wobble a little around what you were hoping to show people, but after watching you for a while they get a good idea of what plane you were intending to demonstrate.
But let's say you bring the sticks' ends closer together and watch the effect of your hands shaking. The plane they form will pitch far more wildly. Your audience will have to watch longer to get a good idea of what plane you are trying to demonstrate.
| null | CC BY-SA 2.5 | null | 2010-08-10T08:04:22.340 | 2010-08-10T08:04:22.340 | null | null | 167 | null |
1493 | 1 | null | null | 4 | 275 | $\chi^n_k=\sum_{i=1}^kx_i^n$ where $x_i$ are Gaussian variables and $n>2$?
| What is the distribution of $\chi^n_k$? | CC BY-SA 2.5 | null | 2010-08-10T08:45:44.720 | 2011-04-29T00:22:23.557 | 2011-04-29T00:22:23.557 | 3911 | 852 | [
"distributions",
"probability",
"stochastic-processes"
] |
1494 | 2 | null | 1487 | 5 | null | How did you normalise your microarray data? Standard ways are:
- Robust Multichip Average (RMA)
- Genechip RMA - this can be a bit slow for lots of samples.
This [presentation](http://www.ogic.ca/projects/SCNcourse/course_units/unit1/lecture/Introduction%20to%20Affymetrix%20Microarrays.ppt) gives a good overview of the two techniques.
---
There are two R microarray tutorial papers which may also help:
- A microarray analysis for differential gene expression in the soybean genome using Bioconductor and R (link to paper)
- Analysing yeast time course microarray data using Bioconductor (link to paper).
Both these papers provide the data and R commands.
Competing interest: I'm an author on the second paper.
| null | CC BY-SA 2.5 | null | 2010-08-10T08:50:49.683 | 2010-08-10T08:50:49.683 | null | null | 8 | null |
1495 | 1 | 1504 | null | 13 | 412 | Does anyone know of research which investigates the effectiveness (understandability?) of different visualization techniques?
For example, how quickly do people understand one form of visualization over another? Does interactivity with the visualization help people recall the data? Anything along those lines. Examples of visualizations might be: scatter plots, graphs, timelines, maps, interactive interfaces (like Parallel Coordinates), etc.
I'm particularly interested in research within a lay-person population.
| Cognitive processing/interpretation of data visualizations techniques | CC BY-SA 2.5 | null | 2010-08-10T09:02:02.700 | 2021-02-03T15:11:14.310 | 2021-02-03T15:11:14.310 | 101426 | 665 | [
"data-visualization",
"presentation"
] |
1496 | 2 | null | 1444 | 22 | null | I'm presuming that zero != missing data, as that's an entirely different question.
When thinking about how to handle zeros in multiple linear regression, I tend to consider how many zeros do we actually have?
Only a couple of zeros
If I have a single zero in a reasonably large data set, I tend to:
- Remove the point, take logs and fit the model
- Add a small $c$ to the point, take logs and fit the model
Does the model fit change? What about the parameter values? If the model is fairly robust to the removal of the point, I'll go for the quick and dirty approach of adding $c$.
You could make this procedure a bit less crude and use the boxcox method with shifts described in ars' answer.
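As a rough sketch of that quick check in R, on simulated data with a couple of zeros (the choice of $c$ below is just one common convention):
```
set.seed(1)
x <- rexp(100, rate = 0.5)
x[sample(100, 2)] <- 0                               # a couple of zeros
y <- 1 + 0.8 * log(x + 0.5) + rnorm(100, sd = 0.3)   # hypothetical response
d <- data.frame(x, y)

c_small <- 0.5 * min(x[x > 0])                       # half the smallest positive value

fit_drop <- lm(y ~ log(x), data = subset(d, x > 0))  # remove the zero points
fit_add  <- lm(y ~ log(x + c_small), data = d)       # shift everything by c

coef(fit_drop)
coef(fit_add)
```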
Large number of zeros
If my data set contains a large number of zeros, then this suggests that simple linear regression isn't the best tool for the job. Instead I would use something like mixture modelling (as suggested by Srikant and Robin).
| null | CC BY-SA 4.0 | null | 2010-08-10T09:29:15.230 | 2019-09-19T15:35:09.693 | 2019-09-19T15:35:09.693 | 22047 | 8 | null |
1497 | 2 | null | 1495 | 8 | null | Cleveland reports on a lot of this research in his 1994 book [The Elements of Graphing Data](http://rads.stackoverflow.com/amzn/click/0963488414) (2nd ed). It is very readable and extremely useful.
| null | CC BY-SA 2.5 | null | 2010-08-10T10:24:28.057 | 2010-08-10T10:24:28.057 | null | null | 159 | null |
1498 | 2 | null | 1142 | 2 | null | What I do is group the measurements by hour and day of week and compare the standard deviations of those groups. This still doesn't correct for things like holidays and summer/winter seasonality, but it's correct most of the time.
The downside is that you really need to collect a year or so of data to have enough that the standard deviation starts making sense.
| null | CC BY-SA 2.5 | null | 2010-08-10T10:54:56.550 | 2010-08-10T10:54:56.550 | null | null | 94 | null |
1499 | 2 | null | 1495 | 4 | null | Couple of thoughts:
- I think Bertin's Semiology of Graphics is a classic position in this area.
- As Rob pointed out, Cleveland has done some interesting work in the area, example here.
- Some examples of poor design from Stephen Few.
- Recently I stumbled upon an interesting [and quite provocative, especially for Tufte/Cleveland fanatics such as myself ;)] piece of research from the Human-Computer Interaction Lab at the University of Saskatchewan about the usefulness of chart junk.
| null | CC BY-SA 2.5 | null | 2010-08-10T10:58:52.050 | 2010-08-10T10:58:52.050 | 2017-04-13T12:44:41.493 | -1 | 22 | null |
1500 | 2 | null | 1475 | 2 | null | I've had good experience with [KNIME](http://www.knime.org/) during one of my project. It 's an excellent solution for quick exploratory mining and graphing. On top of that it provides R and Weka modules seamless integration.
| null | CC BY-SA 2.5 | null | 2010-08-10T11:06:46.970 | 2010-08-10T11:06:46.970 | null | null | 22 | null |
1501 | 1 | 1503 | null | 4 | 301 | I'm interested in the process of testing or validating a particular implementation of a statistical method, and what datasets and/or published analysis exist that could be used to do this in practice.
For instance, if I write an algorithm to implement simple linear regression, I might feed in some numbers and check that the result looks right, or I might feed numbers into my code and some other system and compare. In some cases, people seem to have already done this and then published the numbers and results, which could be defined as reference data.
To start off, the best one I know is the [NIST Statistical Reference Datasets](http://www.itl.nist.gov/div898/strd/index.html) page, which publishes a wide range of datasets and calculations covering areas such as Analysis of Variance, Linear Regression, Markov Chain Monte Carlo simulation and Non-Linear Regression.
Are there any other good / notable ones out there?
Edit: I reworded to make clear that I'm not just looking for open datasets, but I'm interested in datasets and solutions to specific statistical problems that could be used to test an implementation of a technique.
| What resources/methods exist for testing/validation or evaluation of Statistical Methods | CC BY-SA 3.0 | null | 2010-08-10T13:47:10.963 | 2013-09-11T19:15:32.583 | 2013-09-11T19:15:32.583 | 22311 | 114 | [
"dataset",
"validation",
"hypothesis-testing"
] |
1502 | 2 | null | 1475 | 1 | null | GGobi does look interesting for this. Another approach could be to treat your similarity/inverse-distance matrices as network adjacency matrices and feed them into a network analysis routine (e.g., either igraph in R or perhaps Pajek). With this approach I would experiment with cutting the node distances into binary ties at various cutpoints.
| null | CC BY-SA 2.5 | null | 2010-08-10T14:02:22.253 | 2010-08-10T14:02:22.253 | null | null | 394 | null |
1503 | 2 | null | 1501 | 2 | null | See the stackoverflow question on this subject: [Datasets for Running Statistical Analysis on](https://stackoverflow.com/questions/2252144/datasets-for-running-statistical-analysis-on/).
I would reiterate [my answer](https://stackoverflow.com/questions/2252144/datasets-for-running-statistical-analysis-on/2252450#2252450), that R contains (in packages) many of the canonical datasets for specific statistical problems.
| null | CC BY-SA 2.5 | null | 2010-08-10T14:59:06.187 | 2010-08-10T14:59:06.187 | 2017-05-23T12:39:26.593 | -1 | 5 | null |
1504 | 2 | null | 1495 | 5 | null | This subject matter is often discussed under the discipline of HCI ([human-computer interaction](http://en.wikipedia.org/wiki/Human%E2%80%93computer_interaction)) which has [its own journal](http://www.informaworld.com/smpp/title~db=all~content=t775653648).
There is a lot of great work being done on this at Stanford under [Jeffrey Heer](http://hci.stanford.edu/jheer/) (the creator of [protovis](http://vis.stanford.edu/protovis/) and [prefuse](http://prefuse.org/) amongst other things) and in the Stanford [Human-Computer Interaction Group](http://hci.stanford.edu/). As an example, read the ["Sizing the Horizon: The Effects of Chart Size and Layering on the Graphical Perception of Time Series Visualizations"](http://hci.stanford.edu/publications/2009/heer-horizon-chi09.pdf) paper. The material on the [CS147: Introduction to Human-Computer Interaction Design](http://hci.stanford.edu/courses/cs147/2010/) and [cs448b Data Visualization](https://graphics.stanford.edu/wikis/cs448b-09-fall) homepages may also be of interest.
You can also look at the projects list in [the CMU HCI Institute](http://www.hcii.cmu.edu/research/projects).
| null | CC BY-SA 2.5 | null | 2010-08-10T15:11:18.593 | 2010-08-10T15:25:14.560 | 2010-08-10T15:25:14.560 | 5 | 5 | null |
1505 | 2 | null | 1495 | 2 | null | Stephen Kosslyn studies human visual processing, and has written a book called [Graph Design for the Eye and Mind](http://rads.stackoverflow.com/amzn/click/0195306627). There's useful stuff in there, but he also suggests funny things sometimes. For example, he suggests truncating the y-axis on bar graphs at some point, so that they don't really start at 0, which seems to be a deep sin to me.
| null | CC BY-SA 2.5 | null | 2010-08-10T15:29:25.843 | 2010-08-10T15:29:25.843 | null | null | 287 | null |
1506 | 2 | null | 1495 | 2 | null | Specifically on color and perception, I liked the papers below by Bergman, Rogowitz and Treinish.
- Why Should Engineers and Scientists Be Worried About Color?
- A Rule-based Tool for Assisting Colormap Selection
- How NOT to Lie with Visualization
- Lloyd Treinish's home page (for links to related work)
| null | CC BY-SA 2.5 | null | 2010-08-10T15:33:16.217 | 2010-08-10T15:33:16.217 | null | null | 251 | null |
1507 | 1 | 1545 | null | 1 | 1821 | Q: Is my approach correct?
Event: You toss 5 coins at once.
A student of mine claimed he got 4T & 1H in 39 out of 40 trials (!!)
I decided to calculate the odds of this...
First, P(4T & 1H) = 5C4 * (1/2)^4 * (1/2)^1 = .16
I did this 2 ways:
---
1) Binomial Probability
n = 40
r = 39
p = .16
q = .84
P(Exactly 39) = 40 C 39 * (.16)^39 * (.84)^1 = 0%
---
2) Binomial Distribution:
n = 40
r = 39 (or more)
p = .16
q = .84
E(X) = u = np = (.16)(40) = 6.4
SD = SQRT(npq) = 3.16
Z(39) = (observed - expected) / SD = (39 - 6.4) / 3.16 = 10.3
p = P( Z > 10.3) = 0%
---
Conclusion: The odds of getting 4T & 1H in 39 out of 40 trials are negligible.
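For reference, the same numbers can be checked with the exact binomial in R:
```
p <- choose(5, 4) * (1/2)^5   # P(4T & 1H) on one toss of 5 coins = 5/32 ≈ 0.156

dbinom(39, size = 40, prob = p)                       # P(exactly 39 of 40), essentially zero
pbinom(38, size = 40, prob = p, lower.tail = FALSE)   # P(39 or more), also essentially zero
```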
Student was on drugs at the time.
| Example of using binomial distribution | CC BY-SA 2.5 | null | 2010-08-10T16:03:57.047 | 2010-08-11T14:06:35.990 | null | null | 6967 | [
"binomial-distribution"
] |
1508 | 2 | null | 1507 | 0 | null | Technically, your case 1 and 2 calculations are not correct as they are not independent trials. You are tossing the same 5 coins 40 times. So, those events are dependent.
If you ignore the above issue, then the calculations seem OK.
On some more reflection I think you can ignore the issue of dependency. Here is my reasoning: The probability of observing 4T & 1H is 0.16. In your case 1 and case 2 calculations you are using this probability across all 40 trials which implicitly accounts for the dependence in the trials.
Another way to think about the issue is: if you observe 4T and 1H in the first trial, what can you say about the probability that you would observe 4T and 1H in the second? It clearly equals 0.16, and thus there is no dependency. Knowledge of the outcome of one trial does not give us any additional information about the events that are likely to happen in the subsequent trial. Thus, the trials are independent.
I think the calculation is fine as it stands.
| null | CC BY-SA 2.5 | null | 2010-08-10T16:23:37.183 | 2010-08-10T17:08:25.473 | 2010-08-10T17:08:25.473 | null | null | null |
1509 | 2 | null | 1207 | 4 | null | You could use the Hilbert Transformation from DSP theory to measure the instantaneous frequency of your data. The site [http://ta-lib.org/](http://ta-lib.org/) has open source code for measuring the dominant cycle period of financial data; the relevant function is called HT_DCPERIOD; you might be able to use this or adapt the code to your purposes.
| null | CC BY-SA 2.5 | null | 2010-08-10T17:29:28.280 | 2010-08-10T17:29:28.280 | null | null | 226 | null |
1510 | 2 | null | 1478 | 1 | null | >
I can q-q plot them, but I'm more interested in an automatic unit test I can run.
If a visual inspection of a q-q plot would suffice, then you could calculate an entropy measure such as the [Gini coefficient](http://en.wikipedia.org/wiki/Gini_coefficient) and accept the test after allowing for some tolerance for deviation from the 45 degree line. For more on comparing distributions in general, you could look to the [reldist](http://cran.r-project.org/web/packages/reldist/index.html) package in R for ideas.
Though not directly applicable, I think you'll also find the method presented by Cook, Gelman & Rubin interesting:
- Validation of Software for Bayesian Models Using Posterior Quantiles
| null | CC BY-SA 2.5 | null | 2010-08-10T17:51:38.350 | 2010-08-10T17:51:38.350 | null | null | 251 | null |
1511 | 2 | null | 1507 | 5 | null | The probability of observing 4 heads and 1 tail 39 times out of 40 after observing 4 heads and 1 tail 39 times out of 40 is 1.0.
:)
| null | CC BY-SA 2.5 | null | 2010-08-10T18:25:52.837 | 2010-08-10T18:25:52.837 | null | null | 601 | null |
1512 | 2 | null | 1207 | 10 | null | If you expect the process to be stationary -- the periodicity/seasonality will not change over time -- then something like a Chi-square periodogram (see e.g. Sokolove and Bushell, 1978) might be a good choice. It's commonly used in analysis of circadian data which can have extremely large amounts of noise in it, but is expected to have very stable periodicities.
This approach makes no assumption about the shape of the waveform (other than that it is consistent from cycle to cycle), but does require that any noise be of constant mean and uncorrelated to the signal.
```
chisq.pd <- function(x, min.period, max.period, alpha) {
  N <- length(x)
  variances <- NULL
  periods <- seq(min.period, max.period)
  rowlist <- NULL
  for (lc in periods) {
    # fold the series into a matrix with one candidate period per row,
    # truncating any leftover observations
    ncol <- lc
    nrow <- floor(N / ncol)
    rowlist <- c(rowlist, nrow)
    x.trunc <- x[1:(ncol * nrow)]
    x.reshape <- t(array(x.trunc, c(ncol, nrow)))
    # variance of the column means: large when the candidate period
    # lines up with the true periodicity
    variances <- c(variances, var(colMeans(x.reshape)))
  }
  # Qp is compared to a chi-square distribution with (period - 1) df
  Qp <- (rowlist * periods * variances) / var(x)
  df <- periods - 1
  pvals <- 1 - pchisq(Qp, df)
  pass.periods <- periods[pvals < alpha]
  pass.pvals <- pvals[pvals < alpha]
  #return(cbind(pass.periods, pass.pvals))
  return(cbind(periods[pvals == min(pvals)], pvals[pvals == min(pvals)]))
}

x <- cos((2*pi/37) * (1:1000)) + rnorm(1000)
chisq.pd(x, 2, 72, .05)
```
The last two lines are just an example, showing that it can identify the period of a pure trigonometric function, even with lots of additive noise.
As written, the last argument (`alpha`) in the call is superfluous, the function simply returns the 'best' period it can find; uncomment the first `return` statement and comment out the second to have it return a list of all periods significant at the level `alpha`.
This function doesn't do any sort of sanity checking to make sure that you've put in identifiable periods, nor does it (can it) work with fractional periods, nor is there any sort of multiple comparison control built in if you decide to look at multiple periods. But other than that it should be reasonably robust.
| null | CC BY-SA 3.0 | null | 2010-08-10T18:41:10.997 | 2016-08-01T19:09:59.097 | 2016-08-01T19:09:59.097 | 53690 | 61 | null |
1513 | 2 | null | 1462 | 2 | null | If you decide to generate your distribution from the data you have observed, your model will never spit out a "tail value", i.e., something outside the range of what you have observed.
Your example data:
average 5.3 yards per carry with a SD of 1.7 yards
will have a max and a min, say 10 and 2. In that case, your calculated distribution will not have any weight in the tails, and will be unable to generate a value of 11 or 1. Maybe this is OK, but it prevents you from ever generating one of those super-human events that everyone loves to watch.
The functional form of the normal distribution has tails that go to infinity, so if you assume the distribution is normal, you will be able to generate simulated values (at very small probabilities) that are higher or lower than your observed data.
| null | CC BY-SA 2.5 | null | 2010-08-10T20:20:04.203 | 2010-08-10T20:20:04.203 | null | null | 125 | null |
1514 | 2 | null | 1444 | 17 | null | If you want something quick and dirty, why not use the square root?
| null | CC BY-SA 2.5 | null | 2010-08-10T20:48:11.953 | 2010-08-10T20:48:11.953 | null | null | 856 | null |
1515 | 2 | null | 1493 | 2 | null | Well, as a bound: if $n$ is even, $\chi_k^n$ will be bounded from below by a chi-square, and $(\chi_k^n)^{1/n}$ should be bounded from above by the maximum of $k$ half-normals, or thereabouts.
| null | CC BY-SA 2.5 | null | 2010-08-10T21:34:55.513 | 2010-08-10T21:34:55.513 | null | null | 795 | null |
1516 | 2 | null | 118 | 69 | null | The reason that we calculate standard deviation instead of absolute error is that we are assuming error to be normally distributed. It's a part of the model.
Suppose you were measuring very small lengths with a ruler; then the standard deviation is a bad metric for error because you know you will never accidentally measure a negative length. A better metric would be one that helps fit a Gamma distribution to your measurements:
$\log(E(x)) - E(\log(x))$
Like the standard deviation, this is also non-negative and differentiable, but it is a better error statistic for this problem.
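As a rough numerical illustration with a made-up gamma-distributed sample:
```
set.seed(1)
x <- rgamma(1000, shape = 3, scale = 2)  # hypothetical positive-valued measurements
log(mean(x)) - mean(log(x))              # the proposed error statistic (non-negative by Jensen's inequality)
```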
| null | CC BY-SA 3.0 | null | 2010-08-10T22:34:01.363 | 2016-01-27T22:28:06.527 | 2016-01-27T22:28:06.527 | 858 | 858 | null |