Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4923 | 2 | null | 4920 | 7 | null | B is a linear transform of V. E represents an interaction between V and D. Have you considered specifying a model that is Y = Intercept + V + D + V:D? As @euphoria83 suggests, it seems likely that there is little variation in D, so it may not solve your problem; however it should at least make the independent contributions of V and D clear. Be sure to center both V and D beforehand.
| null | CC BY-SA 2.5 | null | 2010-11-26T03:36:26.110 | 2010-11-26T19:16:07.317 | 2010-11-26T19:16:07.317 | 196 | 196 | null |
4924 | 2 | null | 4360 | 4 | null | The conclusion you draw will be VERY dependent on the prior you choose for the probability of cheating and the prior probability that, given the flipper is lying, x heads are reported.
Putting the most mass on P(10000 heads reported|lying) is a little counterintuitive in my opinion. Unless the reporter is naive, I can't imagine anyone reporting that kind of falsified data (largely for the reasons you mentioned in the original post; it's too suspicious to most people). If the coin really is unfair and the flipper were to report falsified data, then I think a more reasonable (and very approximate) prior on the reported results might be a discrete uniform prior P(x heads reported|lying) = 1/201 for the integers {9900, ..., 10100} and P(x heads reported|lying) = 0 for all other x. Suppose that you think the prior probability of lying is 0.5. Then some posterior probabilities are:
P(lying|9900 heads reported) = P(lying|10100 heads reported) = 0.70;
P(lying|9950 heads reported) = P(lying|10050 heads reported) = 0.54;
P(lying|10000 heads reported) = 0.47.
Most reasonable numbers of reported heads from a fair coin will result in suspicion. Just to show how sensitive the posterior probabilities are to your priors, if the prior probability of cheating is lowered to 0.10, then the posterior probabilities become:
P(lying|9900 heads reported) = P(lying|10100 heads reported) = 0.21;
P(lying|9950 heads reported) = P(lying|10050 heads reported) = 0.11;
P(lying|10000 heads reported) = 0.09.
So I think the original (and highly rated) answer could be expanded a little bit; in no way should you conclude that the data is falsified without thoroughly considering prior information. Also, just thinking about this intuitively, it seems that the posterior probabilities of lying are likely to be influenced more by the prior probability of lying than by the prior distribution of heads reported given that the flipper is lying (except for priors that put all their mass on a small number of heads reported given the flipper is lying, such as in my example).
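If you want to reproduce these numbers, here is a minimal R sketch of the calculation (assuming, as above, 20,000 flips of a fair coin under "not lying", a prior probability of lying of 0.5, and the discrete uniform prior on 9900–10100 reported heads given lying):
```
post_lying <- function(x, prior_lying = 0.5) {
  lik_lying <- ifelse(x >= 9900 & x <= 10100, 1 / 201, 0)  # uniform prior on reports given lying
  lik_fair  <- dbinom(x, size = 20000, prob = 0.5)         # fair-coin likelihood of the report
  lik_lying * prior_lying / (lik_lying * prior_lying + lik_fair * (1 - prior_lying))
}
round(post_lying(c(9900, 9950, 10000)), 2)   # approximately 0.70, 0.53, 0.47
```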
| null | CC BY-SA 2.5 | null | 2010-11-26T04:13:38.217 | 2010-11-26T05:06:35.820 | 2010-11-26T05:06:35.820 | 2144 | 2144 | null |
4925 | 2 | null | 4918 | 5 | null | I'm not sure if this is really what you are asking for, so please clarify or state otherwise if this isn't what you had in mind. In the mean time, here in an approach to calculate a 10% threshold and then also select the elements of that list which are above that threshold. This is using [R](http://www.r-project.org/).
```
#Generate list of "random numbers" with normal distribution, mean = 0, sd = 3
x <- rnorm(n = 1000, mean = 0, sd = 3)
#Returns the 90% quantile of the bunch of random numbers above
quantile(x, .9)
#Subset x and only return those which are above the 90% threshold.
x[x > quantile(x, .9)]
```
| null | CC BY-SA 2.5 | null | 2010-11-26T04:34:08.287 | 2010-11-26T16:35:47.747 | 2010-11-26T16:35:47.747 | 696 | 696 | null |
4926 | 2 | null | 4920 | 32 | null | Both B and E are derived from V. B and E are clearly not truly "independent" variables from each other. The underlying variable that really matters here is V. You should probably disgard both B and E in this case and keep V only.
In a more general situation, when you have two independent variables that are very highly correlated, you definitely should remove one of them because you run into the multicollinearity conundrum and your regression model's regression coefficients related to the two highly correlated variables will be unreliable. Also, in plain English if two variables are so highly correlated they will obviously impart nearly exactly the same information to your regression model. But, by including both you are actually weakening the model. You are not adding incremental information. Instead, you are infusing your model with noise. Not a good thing.
One way you could keep highly correlated variables within your model is to use instead of regression a Principal Component Analysis (PCA) model. PCA models are made to get rid off multicollinearity. The trade off is that you end up with two or three principal components within your model that are often just mathematical constructs and are pretty much incomprehensible in logical terms. PCA is therefore frequently abandoned as a method whenever you have to present your results to an outside audience such as management, regulators, etc... PCA models create cryptic black boxes that are very challenging to explain.
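As a rough illustration of the PCA idea (simulated data; the variable names here are made up, not the B, E, V of the question):
```
# Replace two nearly collinear predictors by their uncorrelated principal components
set.seed(1)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.1)   # almost a copy of x1
y  <- x1 + rnorm(100)
pc <- prcomp(cbind(x1, x2), scale. = TRUE)
summary(lm(y ~ pc$x[, 1]))        # regress on the first principal component only
```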
| null | CC BY-SA 2.5 | null | 2010-11-26T05:43:31.740 | 2010-11-26T05:43:31.740 | null | null | 1329 | null |
4927 | 2 | null | 4868 | 1 | null | I'm not 100% clear on the question, but I have a few points to add:
I'm assuming that the error you are trying to estimate is the prediction error. If so, I agree that 10-fold cross validation would be a good (and likely unbiased) approximation of the true prediction error IF your training sets are sufficiently large. Large in this case means that the training sets provide enough information to build a "good" SVM (one that, in a sense, captures most of the underlying relationships between the predictors and the response). Training sets of size 900 are more than likely large enough. In fact, unless the SVM you are fitting is extremely complex, I would recommend using 5-fold cross validation in order to get a more precise estimate of prediction error (and yes, you can average the error estimates of the 5 folds to obtain a final estimate).
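As a rough sketch (not your actual setup), 5-fold cross validation with an SVM could look like this in R, assuming a data frame `dat` with a factor response `y` and the e1071 package:
```
library(e1071)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(dat)))   # random fold assignment
err <- sapply(1:k, function(i) {
  fit <- svm(y ~ ., data = dat[folds != i, ])       # train on k-1 folds
  mean(predict(fit, dat[folds == i, ]) != dat$y[folds == i])  # error on the held-out fold
})
mean(err)   # average the k fold-wise error estimates
```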
With regards to the question:
"Would events tested using separately trained svms be comparable? i.e. through this technique could I then use my entire dataset instead of setting aside a certain fraction for training, or is this a statistically unwise thing to do?"
I don't understand this question, but since the phrase "entire dataset" is in a post about CV, I just want to warn you that estimating prediction error from models fit to all available data is generally a bad idea. For cross validation to make sense, each training set/test set pair should have no points in common. Otherwise, the true error will likely be underestimated.
| null | CC BY-SA 2.5 | null | 2010-11-26T05:57:08.417 | 2010-11-26T05:57:08.417 | null | null | 2144 | null |
4928 | 2 | null | 4908 | 1 | null | Sounds like you'll need a HMM to do that.
Have you read
Lawrence R. Rabiner (February 1989). ["A tutorial on Hidden Markov Models and selected applications in speech recognition"](http://www.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/tutorial%20on%20hmm%20and%20applications.pdf)
There are a few examples of models parameters discovery on page 259.
The question is how do you plan to update the probabilites? Maybe you can use a kind of success ratio? (if the player has not managed to win much recently using one of the 3 strategies, he may use it less and try more the two others...)
| null | CC BY-SA 2.5 | null | 2010-11-26T06:24:56.867 | 2010-11-26T06:47:18.580 | 2010-11-26T06:47:18.580 | 1709 | 1709 | null |
4929 | 2 | null | 4920 | 7 | null | Here is an answer from the point of view of a machine learner, although I am afraid that I'll be beaten by real statisticians for it.
Is it possible for me to just "throw away" one of the variables?
Well, the question is what type of model you want to use for prediction. It depends e.g. on ...
- can the model with correlated predictors ? E.g. although NaiveBayes theoretically has problems with correlated variables, experiments have shown that it still can perform well.
- how does the model process the predictor variables ? E.g. the difference between B and V will be normalized out in a probability density estimation, maybe the same for E and V depending on the variance of D (as euphoria already said)
- which usage combination of B and E (one, none, both) delivers the best result, estimated by a mindful crossvalidation + a test on a holdout set ?
Sometimes we machine learners even perform genetic optimization to find the best arithmetic combination of a set of predictors.
| null | CC BY-SA 2.5 | null | 2010-11-26T08:13:43.497 | 2010-11-26T08:21:31.757 | 2010-11-26T08:21:31.757 | 264 | 264 | null |
4930 | 1 | 4931 | null | 11 | 12283 | Disclaimer: I posted this question on Stackoverflow, but I thought maybe this is better suited for this platform.
How do you test your own k-means implementation for multidimensional data sets?
I was thinking of running an already existing implementation (i.e., Matlab) on the data and comparing the results with my algorithm's. But this would require both algorithms to produce more than just roughly the same results, and the mapping between the two results is probably no piece of cake.
Do you have a better idea?
| How do you test an implementation of k-means? | CC BY-SA 2.5 | null | 2010-11-26T08:54:39.863 | 2015-09-14T14:47:57.047 | 2010-11-26T09:43:22.517 | 930 | 2147 | [
"clustering",
"algorithms"
]
|
4931 | 2 | null | 4930 | 10 | null | The k-means includes a stochastic component, so it is very unlikely you will get the same result unless you have exactly the same implementation and use the same starting configuration. However, you could see if your results are in agreement with well-known implementations (don't know about Matlab, but implementation of k-means algorithm in R is well explained, see [Hartigan & Wong, 1979](http://jmlr.csail.mit.edu/proceedings/papers/v9/telgarsky10a/telgarsky10a.pdf)).
As for comparing two series of results, there still is an issue with label switching if it is to be run multiple times. Again, in the [e1071](http://cran.r-project.org/web/packages/e1071/index.html) R package, there is a very handy function (;`matchClasses()`) that might be used to find the 'best' mapping between two categories in a two-way classification table. Basically, the idea is to rearrange the rows so as to maximise their agreement with columns, or use a greedy approach and permute rows and columns until the sum of on the diagonal (raw agreement) is maximal. Coefficient of agreement like the [Kappa](http://en.wikipedia.org/wiki/Cohen%27s_kappa) statistic are also provided.
Finally, about how to benchmark your implementation, there are a lot of freely available data, or you can simulate a dedicated data set (e.g., through a finite mixture model, see the [MixSim](http://cran.r-project.org/web/packages/MixSim/index.html) package).
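For instance, with toy labels (the permutation below is just made up for illustration):
```
library(e1071)
set.seed(42)
a <- sample(1:3, 100, replace = TRUE)   # labels from your implementation
b <- c(2, 3, 1)[a]                      # same partition, labels permuted
tab <- table(a, b)
matchClasses(tab, method = "exact")     # recovers the label mapping
classAgreement(tab)                     # raw agreement, Kappa, Rand indices
```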
| null | CC BY-SA 2.5 | null | 2010-11-26T09:15:39.287 | 2010-11-26T09:15:39.287 | null | null | 930 | null |
4932 | 1 | 4933 | null | 2 | 58 | I have a data frame like this
```
User OS
A Windows
A Linux
B MacOS
C Linux
C FreeBSD
C Windows
D Windows
```
What I want to do is plot two types of statistics.
- The share of users with different numbers of OSs. So, it would be the fraction of the users with 1 OS, with 2 OSs, etc.
- The share of users with only one OS and more than one OS.
For this I tried using `tapply(OS, User, unique)`. But I don't know how to go about plotting the results.
I was wondering if this is the right way and what more I would need to do to get the plots I want.
| R getting share of users with multiple of an element | CC BY-SA 2.5 | null | 2010-11-26T10:34:05.437 | 2010-11-26T10:51:53.453 | null | null | 2101 | [
"r",
"distributions",
"aggregation"
]
|
4933 | 2 | null | 4932 | 4 | null | You could try something like
```
> df <- data.frame(User=sample(LETTERS[1:10], 100, rep=T),
OS=sample(c("Win","Lin","Mac"), 100, rep=T))
> (res <- with(df, tapply(OS, User, function(x) length(unique(x)))))
A B C D E F G H I J
2 3 3 3 3 3 3 3 3 3
> barplot(table(res)) # for counts
> barplot(table(ifelse(res==1, "1", "2+")))
```
Replace `table()` by `prop.table()` if you want proportions instead of counts, as suggested by [@Chase](https://stats.stackexchange.com/users/696/chase) in a comment to your preceding question.
| null | CC BY-SA 2.5 | null | 2010-11-26T10:45:18.043 | 2010-11-26T10:51:53.453 | 2017-04-13T12:44:20.840 | -1 | 930 | null |
4934 | 1 | 4935 | null | 14 | 6950 | Are there any good papers or books dealing with the use of coordinate descent for L1 (lasso) and/or elastic net regularization for linear regression problems?
| Coordinate descent for the lasso or elastic net | CC BY-SA 2.5 | null | 2010-11-26T11:00:49.113 | 2022-11-27T07:02:56.493 | 2019-02-03T01:00:12.907 | 11887 | 439 | [
"regression",
"references",
"lasso",
"regularization",
"elastic-net"
]
|
4935 | 2 | null | 4934 | 16 | null | I earlier suggested the recent paper by Friedman and coll., [Regularization Paths for Generalized Linear Models via Coordinate Descent](http://www.jstatsoft.org/v33/i01/paper), published in the Journal of Statistical Software (2010). Here are some other references that might be useful:
- Pathwise coordinate optimization, by Friedman and coll.
- Fast Regularization Paths via Coordinate Descent, by Hastie (UseR! 2009)
- Coordinate descent algorithms for lasso penalized regression, by Wu and Lange (Ann. Appl. Stat. 2(1): 224-244, 2008; also available on arXiv.org)
- Coordinate Descent for Sparse Solutions of Underdetermined Linear Systems of Equations, by Yagle (a bit too complex for me)
| null | CC BY-SA 2.5 | null | 2010-11-26T11:21:34.733 | 2010-11-26T11:21:34.733 | null | null | 930 | null |
4936 | 1 | 4938 | null | 14 | 4436 | What are the pros and cons of using LARS [1] versus using coordinate descent for fitting L1-regularized linear regression?
I am mainly interested in performance aspects (my problems tend to have `N` in the hundreds of thousands and `p` < 20.) However, any other insights would also be appreciated.
edit: Since I've posted the question, chl has kindly pointed out a paper [2] by Friedman et al where coordinate descent is shown to be considerably faster than other methods. If that's the case, should I as a practitioner simply forget about LARS in favour of coordinate descent?
>
[1] Efron, Bradley; Hastie, Trevor; Johnstone, Iain and Tibshirani, Robert (2004). "Least Angle Regression". Annals of Statistics 32 (2): pp. 407–499.
>
[2] Jerome H. Friedman, Trevor Hastie, Rob Tibshirani, "Regularization Paths for Generalized Linear Models via Coordinate Descent", Journal of Statistical Software, Vol. 33, Issue 1, Feb 2010.
| LARS vs coordinate descent for the lasso | CC BY-SA 2.5 | null | 2010-11-26T11:28:53.567 | 2010-11-27T09:46:13.680 | 2020-06-11T14:32:37.003 | -1 | 439 | [
"regression",
"lasso",
"regularization"
]
|
4937 | 2 | null | 4930 | 0 | null | Since k-means contains decisions that are randomly chosen (the initialization part only), I think the best way to try your algorithm is to select the initial points and let them fixed in your algorithm first and then choose another source code of the algorithm and fix the points in the same way. Then you can compare for real the results.
| null | CC BY-SA 2.5 | null | 2010-11-26T11:53:26.917 | 2010-11-26T11:53:26.917 | null | null | 1808 | null |
4938 | 2 | null | 4936 | 13 | null | In scikit-learn the implementation of [Lasso with coordinate descent](http://scikit-learn.sourceforge.net/modules/linear_model.html#lasso) tends to be faster than our implementation of LARS although for small p (such as in your case) they are roughly equivalent (LARS might even be a bit faster with the latest optimizations available in the master repo). Furthermore coordinate descent allows for efficient implementation of elastic net regularized problems. This is not the case for LARS (that solves only Lasso, aka L1 penalized problems).
Elastic Net penalization tends to yield a better generalization than Lasso (closer to the solution of ridge regression) while keeping the nice sparsity inducing features of Lasso (supervised feature selection).
For large N (and large p, sparse or not) you might also give a [stochastic gradient descent](http://scikit-learn.sourceforge.net/modules/sgd.html) (with L1 or elastic net penalty) a try (also implemented in scikit-learn).
Edit: here are some [benchmarks comparing LassoLARS and the coordinate descent implementation](http://picasaweb.google.com/olivier.grisel/MachineLearning#5543914944364375506) in scikit-learn
| null | CC BY-SA 2.5 | null | 2010-11-26T13:15:57.283 | 2010-11-27T09:46:13.680 | 2010-11-27T09:46:13.680 | 2150 | 2150 | null |
4939 | 1 | null | null | 9 | 2850 | 
I am given 15 points. The control limits are at +/- 3 $\sigma$. Points 1, 4, 5, 6, 7, 8, 9, 10, 11, 13, and 15 fall within the control limits. Points 2, 3, 12, and 14 are outside of the control limits, with 2 being below the lower control limit and 3, 12, and 14 being above the upper control limit.
How do I know if points 2, 3, 12, and 14 are out of control caused by chance causes or caused by assignable causes?
| Given a control chart that shows the mean and upper/lower control limits, how do I tell if the cause of out of control points is assignable or not? | CC BY-SA 2.5 | null | 2010-11-26T14:09:15.643 | 2010-11-30T11:58:59.917 | 2010-11-26T16:26:40.237 | 110 | 110 | [
"control-chart",
"engineering-statistics"
]
|
4940 | 2 | null | 4901 | 21 | null | Cox and Wermuth (1996) or Cox (1984) discussed some methods for detecting interactions. The problem is usually how general the interaction terms should be. Basically, we
(a) fit (and test) all second-order interaction terms, one at a time, and (b) plot their corresponding p-values (i.e., the No. terms as a function of $1-p$). The idea is then to look if a certain number of interaction terms should be retained: Under the assumption that all interaction terms are null the distribution of the p-values should be uniform (or equivalently, the points on the scatterplot should be roughly distributed along a line passing through the origin).
Now, as [@Gavin](https://stats.stackexchange.com/questions/4901/identifying-interaction-effects/4903#4903) said, fitting many (if not all) interactions might lead to overfitting, but it is also useless in a certain sense (some high-order interaction terms often have no sense at all). However, this has to do with interpretation, not detection of interactions, and a good review was already provided by Cox in [Interpretation of interaction: A review](http://arxiv.org/PS_cache/arxiv/pdf/0712/0712.1106v1.pdf) (The Annals of Applied Statistics 2007, 1(2), 371–385)--it includes references cited above. Other lines of research worth looking at are the study of [epistatic effects](http://en.wikipedia.org/wiki/Epistasis) in genetic studies, in particular methods based on graphical models (e.g., [An efficient method for identifying statistical interactors in gene association networks](http://biostatistics.oxfordjournals.org/content/early/2009/07/22/biostatistics.kxp025.abstract)).
### References
- Cox, DR and Wermuth, N (1996). Multivariate Dependencies: Models, Analysis and Interpretation. Chapman and Hall/CRC.
- Cox, DR (1984). Interaction. International Statistical Review, 52, 1–31.
| null | CC BY-SA 4.0 | null | 2010-11-26T14:58:38.717 | 2018-07-14T18:05:53.047 | 2018-07-14T18:05:53.047 | 79696 | 930 | null |
4941 | 2 | null | 4934 | 5 | null | I've just come across this [lecture](https://web.archive.org/web/20100904040041/http://videolectures.net/kdd08_hastie_rpcd/) by Hastie and thought that others might find it interesting.
| null | CC BY-SA 4.0 | null | 2010-11-26T15:07:41.943 | 2022-11-27T07:02:56.493 | 2022-11-27T07:02:56.493 | 362671 | 439 | null |
4942 | 1 | 4951 | null | 3 | 5051 | The fisher linear classifier for two classes is a classifier with this discriminant function:
$h(x) = V^{T}X + v_0$
where
$V = \left[ \frac{1}{2}\Sigma_1 + \frac{1}{2}\Sigma_2\right]^{-1}(M_2-M_1)$
and $M_1$, $M_2$ are means and $\Sigma_1$,$\Sigma_2$ are covariances of the classes.
$V$ can be calculated easily, but the Fisher criterion cannot give us the optimum $v_0$.
How, then, can we find the optimum $v_0$ analytically if we are using the Fisher classifier?
| Threshold for Fisher linear classifier | CC BY-SA 2.5 | null | 2010-11-26T15:22:47.893 | 2010-11-26T20:33:56.237 | null | null | 2148 | [
"classification",
"optimization"
]
|
4943 | 2 | null | 4939 | 5 | null | My understanding of control charts is a little bit different... After the first signal at observation 2, wouldn't the process would be stopped and checked for problems, and then restarted?
In any case, you could use a p-value argument. The probability of observing 4 or more observations (out of 15) beyond their control limits is VERY tiny if the process is actually in control. Let's say the probability of an observation going outside of the control limits while the process is actually in control is about 0.01 (this exact probability depends on the in control distribution of the data), so if the process is in control, we expect a false alarm (ie out of control signal caused by random chance) every 100 observations or so. The probability of observing 4 or more out of control signals (out of 15) while the process is in control is about 0.000012, so it's very unlikely that the signals are due to random chance.
While an actual diagnosis would require you to look at the chart and possibly actually investigate the physical process, because the out of control points are both below and above the control limits, I'm betting there was a scale shift (i.e. increase in variance.)
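For the record, the binomial calculation behind that figure (using the assumed 0.01 per-observation false-alarm rate) is a one-liner in R:
```
1 - pbinom(3, size = 15, prob = 0.01)   # P(4 or more signals out of 15) is about 1.2e-05
```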
| null | CC BY-SA 2.5 | null | 2010-11-26T15:45:12.343 | 2010-11-26T19:39:31.937 | 2010-11-26T19:39:31.937 | 2144 | 2144 | null |
4944 | 2 | null | 4939 | 7 | null | Yes, you should find and assignable cause for every point that's outside the limits. But things are a little more complicated.
First you have to determine if the process is in control, since a control chart is meaningless when the process is out of control. Nearly 1/4 of your observations falling outside the limits is a strong sign that the process may be out of control. Looking at the chart would be useful to determine whether the process is under control or not.
Besides falling outside the control limits, there are other potential reasons for needing to look for assignable causes for certain observations. For example, if you have several observations in a row falling on the same side of the mean -- especially if they're near the control limit -- they may need to be assigned a special cause.
I might be able to be more specific if you'd post the chart itself.
If you want to learn more about control charts, [SPC Press](http://www.spcpress.com/reading_room.php) has a number of useful free resources. You might also want to look at [this book](http://rads.stackoverflow.com/amzn/click/0945320531): it's short, concise and very informative.
(Edit:)
I assumed we were talking about real-world data, not an exam question. In this case, the correct answer really is the first one: the points outside the control limits are (probably) caused by assignable causes.
The exam is a little sloppy in its terminology, though: you can't actually tell with 100% certainty that the points outside the control limits are not caused by chance. You can only say that there is a 99.7% probability that a particular point outside the limits is not caused by chance.
| null | CC BY-SA 2.5 | null | 2010-11-26T16:24:12.580 | 2010-11-30T11:58:59.917 | 2010-11-30T11:58:59.917 | 666 | 666 | null |
4945 | 1 | null | null | 10 | 22288 | I [posted a question earlier](https://stats.stackexchange.com/q/4914/919) but failed miserably in trying to explain what I am looking for (thanks to those who tried to help me anyway). Will try again, starting with a clean sheet.
Standard deviations are sensitive to scale. Since I am trying to perform a statistical test where the best result is predicted by the lowest standard deviation amongst different data sets, is there a way to "normalize" it for scale, or use a different standard-deviation-type test altogether?
Unfortunately dividing the resulting standard deviation by the mean in my case does not work, as the mean is almost always close to zero.
Thanks
| "Normalized" standard deviation | CC BY-SA 2.5 | null | 2010-11-26T16:34:00.810 | 2017-03-09T21:24:29.600 | 2017-04-13T12:44:37.420 | -1 | 2137 | [
"standard-deviation"
]
|
4946 | 2 | null | 4930 | 6 | null | The mapping between two sets of results is easy to compute, because the information you obtain in a test can be represented as a set of three-tuples: the first component is a (multidimensional) point, the second is an (arbitrary) cluster label supplied by your algorithm, and the third is an (arbitrary) cluster label supplied by a reference algorithm. Construct the $k$ by $k$ classification table for the label pairs: if the results agree, it will be a multiple of a permutation matrix. That is, each row and each column must have exactly one nonzero cell. That's a simple check to program. It's also straightforward to track small deviations from this ideal back to individual data points so you can see precisely how the two answers differ if they differ at all. I wouldn't bother to compute statistical measures of agreement: either there is perfect agreement (up to permutation) or there is not, and in the latter case you need to track down all points of disagreement to understand how they occur. The results either agree or they do not; any amount of disagreement, even at just one point, needs checking.
You might want to use several kinds of datasets for testing: (1) published datasets with published k-means results; (2) synthetic datasets with obvious strong clusters; (3) synthetic datasets with no obvious clustering. (1) is a good discipline to use whenever you write any math or stats program. (2) is easy to do in many ways, such as by generating some random points to serve as centers of clusters and then generating point clouds by randomly displacing the cluster centers relatively small amounts. (3) provides some random checks that potentially uncover unexpected behaviors; again, that's a good general testing discipline.
In addition, consider creating datasets that stress the algorithm by lying just on the boundaries between extreme solutions. This will require creativity and a deep understanding of your algorithm (which presumably you have!). One example I would want to check in any event would be sets of vectors of the form $i \mathbb{v}$ where $\mathbb{v}$ is a vector with no zero components and $i$ takes on sequential integral values $0, 1, 2, \ldots, n-1$. I would also want to check the algorithm on sets of vectors that form equilateral polygons. In either situation, cases where $n$ is not a multiple of $k$ are particularly interesting, including where $n$ is less than $k$. What is common to these situations is that (a) they use all the dimensions of the problem, yet (b) the correct solutions are geometrically obvious, and (c) there are multiple correct solutions.
(Form random equilateral polygons in $d \ge 2$ dimensions by starting with two nonzero vectors $\mathbb{u}$ and $\mathbb{v}$ chosen at random. (A good way is to let their $2d$ components be independent standard normal variates.) Rescale them to have unit length; let's call these $\mathbb{x}$ and $\mathbb{z}$. Remove the $\mathbb{x}$ component from $\mathbb{z}$ by means of the formula
$$\mathbb{w} = \mathbb{z} - ( \mathbb{z} \cdot \mathbb{x} ) \mathbb{x}.$$
Obtain $\mathbb{y}$ by rescaling $\mathbb{w}$ to have unit length. If you like, uniformly rescale both $\mathbb{x}$ and $\mathbb{y}$ randomly. The vectors $\mathbb{x}$ and $\mathbb{y}$ form an orthogonal basis for a random 2D subspace in $d$ dimensions. An equilateral polygon of $n$ vertices is obtained as the set of $\cos(2 \pi k / n) \mathbb{x} + \sin(2 \pi k / n) \mathbb{y}$ as the integer $k$ ranges from $0$ through $n-1$.)
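Here is a small R sketch of that construction (my own translation of the recipe above, so treat it as illustrative rather than canonical):
```
# Random equilateral n-gon lying in a random 2D subspace of d-dimensional space
random_polygon <- function(n, d) {
  u <- rnorm(d); v <- rnorm(d)          # two random nonzero vectors
  x <- u / sqrt(sum(u^2))               # rescale to unit length
  w <- v - sum(v * x) * x               # remove the x component from v
  y <- w / sqrt(sum(w^2))               # orthonormal partner of x
  k <- 0:(n - 1)
  outer(cos(2 * pi * k / n), x) + outer(sin(2 * pi * k / n), y)   # n x d matrix of vertices
}
vertices <- random_polygon(n = 7, d = 5)   # e.g., 7 vertices in 5 dimensions
```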
| null | CC BY-SA 2.5 | null | 2010-11-26T16:46:09.033 | 2010-11-26T19:55:50.543 | 2010-11-26T19:55:50.543 | 930 | 919 | null |
4947 | 2 | null | 4914 | 5 | null | Smoothing, rolling averages, running means... are all nice ways (perhaps) to display data. But using the results of smoothed data as an input to any statistical analysis is likely to give misleading results, especially when done by novices. William Briggs emphasizes this point in his excellent blog in [this article](http://wmbriggs.com/blog/?p=195) and [this one](http://wmbriggs.com/blog/?p=735).
| null | CC BY-SA 2.5 | null | 2010-11-26T16:47:25.097 | 2010-11-26T16:53:01.127 | 2010-11-26T16:53:01.127 | 25 | 25 | null |
4948 | 2 | null | 4912 | 6 | null | You are conducting a one-sided test of a difference of proportions. Because all four outcomes--A, not A, B, not B--occur often (70 times or more in this case), the Normal approximation to the Binomial distribution will be just fine. Let $a$ be the number of occurrences of A, $b$ the number of occurrences of B, and $n$ the total sample (about 350). Under the null hypothesis $a = b$ the variance of a single observation is estimated with the combined data, $s^2 = (a+b)/(2n) \cdot (1 - (a+b)/(2n))$. The variances of A and B are estimated as $s^2/n$. The test statistic therefore is
$$z = \frac{a/n - b/n}{\sqrt{s^2/n + s^2/n}}.$$
The p-value equals $1 - \Phi(z)$ where $\Phi$ is the cumulative distribution function of the standard normal distribution.
For example, with $n = 350$, $a = 280$, and $b = 70$, we estimate $a/n$ as 0.8, $b/n$ as 0.2, and $s^2$ as 0.25, giving $z = 15.87$. Obviously that's not due to chance. The result will be equally strong and obvious for any values of $a$, $b$, and $n$ anywhere close to these.
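A quick check of the example in R (same numbers as above):
```
n <- 350; a <- 280; b <- 70
s2 <- (a + b) / (2 * n) * (1 - (a + b) / (2 * n))   # pooled variance estimate
z  <- (a / n - b / n) / sqrt(2 * s2 / n)
z               # about 15.87
1 - pnorm(z)    # one-sided p-value, effectively 0
```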
| null | CC BY-SA 2.5 | null | 2010-11-26T17:23:03.043 | 2010-11-28T19:19:27.373 | 2010-11-28T19:19:27.373 | 919 | 919 | null |
4949 | 1 | 5858 | null | 12 | 19995 | If two classes $w_1$ and $w_2$ have normal distribution with known parameters ($M_1$, $M_2$ as their means and $\Sigma_1$,$\Sigma_2$ are their covariances) how we can calculate error of the Bayes classifier for them theorically?
Also suppose the variables are in N-dimensional space.
Note: A copy of this question is also available at [https://math.stackexchange.com/q/11891/4051](https://math.stackexchange.com/q/11891/4051) that is still unanswered. If any of these question get answered, the other one will be deleted.
| Calculating the error of Bayes classifier analytically | CC BY-SA 2.5 | null | 2010-11-26T19:36:32.340 | 2019-08-06T12:29:57.810 | 2017-04-13T12:19:38.800 | -1 | 2148 | [
"probability",
"self-study",
"normality-assumption",
"naive-bayes",
"bayes-optimal-classifier"
]
|
4950 | 2 | null | 4939 | 2 | null | I found something interesting tucked away in a study document from the IEEE geared toward this exam:
>
Data points falling within the UCL and LCL range are considered to be in control and caused by chance causes.
Outliers falling above the UCL or below the LCL are considered to be out of control and caused by assignable causes.
If a number of points fall systematically above or below the mean (but are within the UCL and LCL) this may indicate a nonrandom out-of-control state.
The goal of a control chart is to detect out-of-control states quickly.
The chart, alone, will not indicate the root causes of the event, but it will provide investigative leads.
Apparently, if you cross either the UCL or LCL, there has to be an assignable cause.
This makes sense, given the [Wikipedia definition of characteristics of assignable (special) cause](https://secure.wikimedia.org/wikipedia/en/wiki/Assignable_cause#Special-cause_variation):
>
New, unanticipated, emergent or previously neglected phenomena within the system;
Variation inherently unpredictable, even probabilistically;
Variation outside the historical experience base; and
Evidence of some inherent change in the system or our knowledge of it.
| null | CC BY-SA 2.5 | null | 2010-11-26T20:16:29.420 | 2010-11-26T20:16:29.420 | null | null | 110 | null |
4951 | 2 | null | 4942 | 3 | null | When $X$ is normally distributed with known mean $M_1$ and covariance $\Sigma_1$ or with mean $M_2$ and covariance $\Sigma_2$, as indicated in comments to the question, then $V^{\ '}X$ is normally distributed either with mean $\mu_1 = V^{\ '} M_1$ and covariance $\sigma_1^2 = V^{\ '} \Sigma_1 V$ or with mean $\mu_2 = V^{\ '} M_2$ and covariance $\sigma_2^2 = V^{\ '} \Sigma_2 V$; $\mu_2 \gt \mu_1$. We might then care to optimize the chance of correct classification. This can be done provided we stipulate a prior distribution for the two classes. Letting $\pi_1$ be the chance of class 1 and $\pi_2$ the chance of class 2 and $\phi$ the standard normal pdf, then the posterior probabilities of the classes are equal (and therefore $x$ is at the threshold) when
$$f(x) = \frac{\pi_1}{\sigma_1} \phi\left(\frac{x - \mu_1}{\sigma_1}\right) - \frac{\pi_2}{\sigma_2} \phi\left(\frac{x - \mu_2}{\sigma_2}\right) = 0.$$
There will be at most one zero of $f$ between $x = \mu_1$ and $x = \mu_2$. (When the zeros lie outside this interval we might question the utility of this classifier.) Assuming one exists and choosing $v_0$ to be the negative of this zero gives a linear classifier $X \to V^{\ '}X + v_0$ that, when negative, indicates class 1 is more likely than class 2 and, when positive, indicates class 2 is more likely than class 1.
A simple case arises when the two classes are taken to be equally likely, $\pi_1 = \pi_2 = 1/2,$ for then it is clear from the symmetry and unimodality of $\phi$ that $v_0 = -(\mu_1 + \mu_2)/2$. Note, though, that in general it is not the case that the zero equals $\pi_1 \mu_1 + \pi_2 \mu_2$ (although that might be a good starting guess in a systematic search for the zero).
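In practice the zero can be located numerically with a one-dimensional root finder; here is a minimal R sketch with made-up parameter values:
```
mu1 <- 0; sigma1 <- 1; mu2 <- 3; sigma2 <- 2     # hypothetical projected means and sds
pi1 <- 0.5; pi2 <- 0.5                           # prior class probabilities
f <- function(x) pi1 * dnorm(x, mu1, sigma1) - pi2 * dnorm(x, mu2, sigma2)
root <- uniroot(f, interval = c(mu1, mu2))$root  # search between the projected means
v0 <- -root                                      # threshold term of the classifier
```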
| null | CC BY-SA 2.5 | null | 2010-11-26T20:33:56.237 | 2010-11-26T20:33:56.237 | null | null | 919 | null |
4952 | 2 | null | 4939 | 4 | null | (Sorry for posting a new answer, I can't reply to comments directly yet)
I don't really agree with the statement:
"Apparently, if you cross either the UCL or LCL, there has to be an assignable cause"
To keep things simple, if your in-control distribution is N(0,1), then you will still obtain false alarms once every 370 observations, on average, using a UCL of 3 and an LCL of -3. When the chart signals, the process needs to be investigated. Only then can a reason for the signal be assigned (i.e., process change or random error). Setting the UCL and LCL requires the user to balance the desired false alarm/missed detection rates (analogous to the Type I/Type II error trade-off in hypothesis testing).
You can also wait until a few signals to actually stop and investigate the process, but in that case, you may detect the shift too late if it really occurred at the first signal. Again, you can't have something for nothing and the user must use their judgment to decide on how to set up the control chart and monitor the process.
| null | CC BY-SA 2.5 | null | 2010-11-26T20:46:32.820 | 2010-11-26T20:46:32.820 | null | null | 2144 | null |
4953 | 2 | null | 4901 | 8 | null | I'll preface this response as I entirely agree with Gavin, and if you're interested in fitting any type of model it should be reflective of the phenomenon under study. What the problem is with the logic of identifying any and all effects (and what Gavin refers to when he says data dredging) is that you could fit an infinite number of interactions, or quadratic terms for variables, or transformations to your data, and you would inevitably find "significant" effects for some variation of your data.
As chl states, these higher order interaction effects don't really have any interpretation, and frequently even the lower order interactions don't make any sense. If your interested in developing a causal model you should only include terms you believe could be pertinent to your dependent variable A priori to fitting your model.
If you believe they can increase predictive power of your model, you should look up resources on model selection techniques to prevent over-fitting your model.
| null | CC BY-SA 3.0 | null | 2010-11-27T03:38:45.647 | 2016-04-21T09:54:50.017 | 2016-04-21T09:54:50.017 | 17230 | 1036 | null |
4954 | 1 | null | null | 1 | 591 | I am trying to estimate parameters of a two dimensional Normal distribution using Gibbs sampling. While it was very easy transform the posterior equation for mean vector to a single dimension normal function for sampling, I am not able to same for sigma(covariance).
Do I need to use the Wishart distribution as prior and then convert the posterior into a single dimensional gamma function ?
| Posterior expression for Gibbs sampling | CC BY-SA 2.5 | null | 2010-11-27T06:11:33.187 | 2010-11-27T12:27:28.083 | 2010-11-27T11:42:17.817 | 449 | 2157 | [
"markov-chain-montecarlo",
"gibbs"
]
|
4956 | 2 | null | 4817 | 3 | null | O.k,
I found that there is an unpaired solution to a sign test (A test of medians). It is called "Median test" And you can read about it [in Wikipedia](http://en.wikipedia.org/wiki/Median_test).
| null | CC BY-SA 2.5 | null | 2010-11-27T07:53:59.150 | 2010-11-27T07:53:59.150 | null | null | 253 | null |
4957 | 2 | null | 4949 | 0 | null | Here you might find several clues for your question, maybe is not there the full response but certainly very valuable parts of it.
[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2766788/](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2766788/)
| null | CC BY-SA 2.5 | null | 2010-11-27T12:13:06.333 | 2010-11-27T12:13:06.333 | null | null | 1808 | null |
4958 | 2 | null | 4954 | 3 | null | The Wishart distribution is the conjugate prior for the likelihood that comes from assuming a normally distributed error term for the linear regression model. You have to assume that either the covariance matrix is an inverted-wishart or the precision matrix (i.e., the inverse of the covariance matrix) is a wishart distribution.
As far as sampling is concerned it is possible to sample directly from the wishart and even from the multivariate normal. Thus, I do not think it is needed to sample from univariate normals or univariate gammas. Look around in the software you are using to see if it has samplers for the multivariate normal and the wishart.
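For example, in R both samplers are readily available (a minimal sketch; the parameter values are arbitrary):
```
library(MASS)                                        # for mvrnorm()
Sigma <- matrix(c(1, 0.5, 0.5, 2), 2, 2)
y <- mvrnorm(n = 100, mu = c(0, 0), Sigma = Sigma)   # multivariate normal draws
W <- rWishart(1, df = 5, Sigma = diag(2))[, , 1]     # one Wishart draw (base R)
```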
| null | CC BY-SA 2.5 | null | 2010-11-27T12:27:28.083 | 2010-11-27T12:27:28.083 | null | null | null | null |
4959 | 1 | 4960 | null | 12 | 2509 | If $X_i$ is exponentially distributed $(i=1,...,n)$ with parameter $\lambda$ and $X_i$'s are mutually independent, what is the expectation of
$$ \left(\sum_{i=1}^n {X_i} \right)^2$$
in terms of $n$ and $\lambda$ and possibly other constants?
Note: This question has gotten a mathematical answer on [https://math.stackexchange.com/q/12068/4051](https://math.stackexchange.com/q/12068/4051). The readers would take a look at it too.
| How do you calculate the expectation of $\left(\sum_{i=1}^n {X_i} \right)^2$? | CC BY-SA 3.0 | null | 2010-11-27T14:54:04.023 | 2017-05-30T08:11:13.273 | 2017-04-13T12:19:38.853 | -1 | 2148 | [
"expected-value",
"exponential-distribution",
"gamma-distribution"
]
|
4960 | 2 | null | 4959 | 31 | null | If $x_i \sim Exp(\lambda)$, then (under independence), $y = \sum x_i \sim Gamma(n, 1/\lambda)$, so $y$ is gamma distributed (see [wikipedia](http://en.wikipedia.org/wiki/Exponential_distribution)). So, we just need $E[y^2]$. Since $Var[y] = E[y^2] - E[y]^2$, we know that $E[y^2] = Var[y] + E[y]^2$. Therefore, $E[y^2] = n/\lambda^2 + n^2/\lambda^2 = n(1+n)/\lambda^2$ (see [wikipedia](http://en.wikipedia.org/wiki/Gamma_distribution) for the expectation and variance of the gamma distribution).
| null | CC BY-SA 2.5 | null | 2010-11-27T15:46:40.613 | 2010-11-27T15:46:40.613 | null | null | 1934 | null |
4961 | 1 | 18765 | null | 82 | 47507 | Unlike other articles, I found the [wikipedia](http://en.wikipedia.org/wiki/Regularization_%28mathematics%29) entry for this subject unreadable for a non-math person (like me).
I understood the basic idea, that you favor models with fewer rules. What I don't get is how you get from a set of rules to a 'regularization score' which you can use to sort the models from least to most overfit.
Can you describe a simple regularization method?
I'm interested in the context of analyzing statistical trading systems. It would be great if you could describe if/how I can apply regularization to analyze the following two predictive models:
Model 1 - price going up when:
- exp_moving_avg(price, period=50) > exp_moving_avg(price, period=200)
Model 2 - price going up when:
- price[n] < price[n-1] 10 times in a row
- exp_moving_avg(price, period=200) going up
But I'm more interested in getting a feeling for how you do regularization. So if you know better models for explaining it please do.
| What is regularization in plain english? | CC BY-SA 2.5 | null | 2010-11-27T16:24:53.693 | 2021-01-31T09:41:52.540 | null | null | 749 | [
"regularization"
]
|
4962 | 2 | null | 423 | 16 | null | From [xkcd](http://xkcd.com/815/):
Almost a Chi square...

>
As the CoKF approaches 0, productivity goes negative as you pull OTHER people into chair-spinning contests.
| null | CC BY-SA 3.0 | null | 2010-11-27T18:14:10.090 | 2011-10-25T20:59:20.577 | 2011-10-25T20:59:20.577 | 5880 | 253 | null |
4964 | 1 | 4965 | null | 32 | 23306 | All of my variables are continuous. There are no levels. Is it possible to even have interaction between the variables?
| Is interaction possible between two continuous variables? | CC BY-SA 2.5 | null | 2010-11-27T19:42:14.197 | 2010-11-28T11:40:27.143 | 2010-11-27T19:51:05.120 | 1894 | 1894 | [
"regression",
"modeling",
"interaction"
]
|
4965 | 2 | null | 4964 | 40 | null | Yes, why not? The same consideration as for categorical variables would apply in this case: The effect of $X_1$ on the outcome $Y$ is not the same depending on the value of $X_2$. To help visualize it, you can think of the values taken by $X_1$ when $X_2$ takes high or low values. Contrary to categorical variables, here interaction is just represented by the product of $X_1$ and $X_2$. Of note, it's better to center your two variables first (so that the coefficient for say $X_1$ reads as the effect of $X_1$ when $X_2$ is at its sample mean).
As kindly suggested by @whuber, an easy way to see how $X_1$ varies with $Y$ as a function of $X_2$ when an interaction term is included, is to write down the model $\mathbb{E}(Y|X)=\beta_0+\beta_1X_1+\beta_2X_2+\beta_3X_1X_2$.
Then, it can be seen that the effect of a one-unit increase in $X_1$ when $X_2$ is held constant may be expressed as:
$$
\begin{eqnarray*}
\mathbb{E}(Y|X_1+1,X_2)-\mathbb{E}(Y|X_1,X_2)&=&\beta_0+\beta_1(X_1+1)+\beta_2X_2+\beta_3(X_1+1)X_2\\
&&-\big(\beta_0+\beta_1X_1+\beta_2X_2+\beta_3X_1X_2\big)\\
&=& \beta_1+\beta_3X_2
\end{eqnarray*}
$$
Likewise, the effect when $X_2$ is increased by one unit while holding $X_1$ constant is $\beta_2+\beta_3X_1$. This demonstrates why it is difficult to interpret the effects of $X_1$ ($\beta_1$) and $X_2$ ($\beta_2$) in isolation. This will even be more complicated if both predictors are highly correlated. It is also important to keep in mind the linearity assumption that is being made in such a linear model.
You can have a look at [Multiple regression: testing and interpreting interactions](http://books.google.fr/books?id=LcWLUyXcmnkC&pg=PA130&lpg=PA130&dq=regression+interaction+between+two+continuous+variable&source=bl&ots=flkc0hSY1d&sig=QyoUq4jd0AJPZC7Px0JpAUZh3jg&hl=fr&ei=b27xTOTQEcGChQeAzJGmCw&sa=X&oi=book_result&ct=result&resnum=2&ved=0CB8Q6AEwATgK#v=onepage&q=regression%20interaction%20between%20two%20continuous%20variable&f=false), by Leona S. Aiken, Stephen G. West, and Raymond R. Reno (Sage Publications, 1996), for an overview of the different kind of interaction effects in multiple regression. (This is probably not the best book, but it's available through Google)
Here is a toy example in R:
```
library(mvtnorm)
set.seed(101)
n <- 300 # sample size
S <- matrix(c(1,.2,.8,0,.2,1,.6,0,.8,.6,1,-.2,0,0,-.2,1),
nr=4, byrow=TRUE) # cor matrix
X <- as.data.frame(rmvnorm(n, mean=rep(0, 4), sigma=S))
colnames(X) <- c("x1","x2","y","x1x2")
summary(lm(y~x1+x2+x1x2, data=X))
pairs(X)
```
where the output actually reads:
```
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.01050 0.01860 -0.565 0.573
x1 0.71498 0.01999 35.758 <2e-16 ***
x2 0.43706 0.01969 22.201 <2e-16 ***
x1x2 -0.17626 0.01801 -9.789 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.3206 on 296 degrees of freedom
Multiple R-squared: 0.8828, Adjusted R-squared: 0.8816
F-statistic: 743.2 on 3 and 296 DF, p-value: < 2.2e-16
```
And here is what the simulated data look like:

To illustrate @whuber's second comment, you can always look at the variations of $Y$ as a function of $X_2$ at different values of $X_1$ (e.g., terciles or deciles); trellis displays are useful in this case. With the data above, we would proceed as follows:
```
library(Hmisc)
X$x1b <- cut2(X$x1, g=5) # consider 5 quantiles (60 obs. per group)
coplot(y~x2|x1b, data=X, panel = panel.smooth)
```

| null | CC BY-SA 2.5 | null | 2010-11-27T20:47:34.893 | 2010-11-28T11:40:27.143 | 2010-11-28T11:40:27.143 | 930 | 930 | null |
4966 | 2 | null | 4945 | 4 | null | If all your measurements are using the same units, then you've already addressed the scale problem; what's bugging you is degrees of freedom and precision of your estimates of standard deviation. If you recast your problem as comparing variances, then there are plenty of standard tests available.
For two independent samples, you can use the F test; its null distribution follows the (surprise) F distribution which is indexed by degrees of freedom, so it implicitly adjusts for what you're calling a scale problem. If you're comparing more than two samples, either Bartlett's or Levene's test might be suitable. Of course, these have the same problem as one-way ANOVA, they don't tell you which variances differ significantly. However, if, say, Bartlett's test did identify inhomogeneous variances, you could do follow-up pairwise comparisons with the F test and make a Bonferroni adjustment to maintain your experimentwise Type I error (alpha).
You can get details for all of this stuff in the [NIST/SEMATECH e-Handbook of Statistical Methods](http://www.itl.nist.gov/div898/handbook/).
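For what it's worth, these tests are all one-liners in R (toy data):
```
x <- rnorm(30, sd = 1); y <- rnorm(30, sd = 2); z <- rnorm(30, sd = 1.5)
var.test(x, y)                 # F test for two variances
bartlett.test(list(x, y, z))   # Bartlett's test for several groups
```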
| null | CC BY-SA 2.5 | null | 2010-11-27T20:58:14.037 | 2010-11-27T20:58:14.037 | null | null | 5792 | null |
4967 | 2 | null | 134 | 6 | null | If you maintain a length-k window of data as a sorted doubly linked list then, by means of a binary search (to insert each new element as it gets shifted into the window) and a circular array of pointers (to immediately locate elements that need to be deleted), each shift of the window requires O(log(k)) effort for inserting one element, only O(1) effort for deleting the element shifted out of the window, and only O(1) effort to find the median (because every time one element is inserted or deleted into the list you can update a pointer to the median in O(1) time). The total effort for processing an array of length N therefore is O((n-k)log(k)) <= O(n log(k)). This is better than any of the other methods proposed so far and it is not an approximation, it is exact.
| null | CC BY-SA 2.5 | null | 2010-11-27T21:45:52.897 | 2010-11-27T21:45:52.897 | null | null | 919 | null |
4968 | 2 | null | 423 | 116 | null | 
>
By the third trimester, there will be hundreds of babies inside you.
Also from [XKCD](http://xkcd.com/605/)
| null | CC BY-SA 2.5 | null | 2010-11-27T22:27:41.770 | 2010-11-29T14:53:40.757 | 2010-11-29T14:53:40.757 | 442 | 2166 | null |
4970 | 1 | null | null | 8 | 5694 | The matrix could be as large as $2500\times 2500$, what is the best algorithm to do that, is there some algorithm that is easy to write a program, is there any convenient packages for that?
| How to diagonalize a large sparse symmetric matrix, to get the eigen values and eigenvectors? | CC BY-SA 2.5 | null | 2010-11-28T04:38:02.957 | 2011-07-19T12:56:45.977 | 2010-11-28T11:55:30.167 | 930 | 2141 | [
"algorithms",
"matrix-decomposition"
]
|
4971 | 2 | null | 4970 | 5 | null | I don't know much about eigenvalues or what they are applicable to, but R seems to have a built in function for this purpose named `eigen()`. Calculating the eigenvalues & eigenvectors for a 2500 * 2500 matrix took ~ 1 minute on my machine.
```
> sampData <- runif(6250000, 0, 2)
> x <- matrix(sampData, ncol = 2500, byrow = TRUE)
> system.time(eigen(x))
user system elapsed
79.74 2.90 65.69
```
This question has also come up on [Stack Overflow](https://stackoverflow.com/questions/713878/how-expensive-is-it-to-compute-the-eigenvalues-of-a-matrix).
| null | CC BY-SA 2.5 | null | 2010-11-28T05:09:44.260 | 2010-11-28T05:09:44.260 | 2017-05-23T12:39:26.167 | -1 | 696 | null |
4972 | 1 | 4973 | null | 6 | 623 | I am a novice in statistics so please correct me if I am doing something fundamentally wrong. After wrestling for a long time with R in trying to fit my data to a good distribution, I figured out that it fits the Cauchy distribution with the following parameters:
```
location scale
37.029894 18.678936
( 3.405665) ( 2.779136)
```
The data is from a survey where 100 people were asked how many friends they talked to over a period of 20 days, and I am trying to see if it fits a known distribution. I generated the QQ-plot with the reference line and it looks like the image given below. From what I have been reading on the web, if the points fall close to the reference line then that is good evidence that the data comes from this distribution.
So, is this good evidence to say that the distribution is Cauchy, or do I need to run any more tests? If so, can someone tell me the physical interpretation of this result? I mean, I read that if the data falls into a Cauchy distribution, then it will not have a mean and standard deviation, but can someone help me understand this in plain English? If it does not have a mean then, from what I understand, I cannot sample from this distribution. What is one supposed to infer about the population based on this result? Or should I be looking at other models?
UPDATE: What am I trying to achieve?
I am trying to evaluate how much time it takes for some arbitrary piece of information to propagate for a population of size X. As this depends on the communication patterns of people, what I was trying to do was to build a model that could use the information from the 100 people I surveyed to give me patterns for the X number where X could be 500 or 1000.
QQ-Plot

Density Distribution of my data

Cauchy Distribution

QQ-Plot when trying to fit a Normal distribution to my data

UPDATE:
From all the suggestions, I think I now understand why this cannot be a Cauchy distribution. Thanks to everyone. @HairyBeast suggested that I look at a negative binomial distribution so I plotted the following as well:
QQ-Plot when Negative Binomial Distribution was used

Negative Binomial Distribution

| How to fit a model to self-reported number of friend interactions over a 20 day period? | CC BY-SA 3.0 | null | 2010-11-28T06:29:15.097 | 2011-10-07T02:15:16.517 | 2011-10-07T02:15:16.517 | 183 | 2164 | [
"r",
"distributions",
"dataset"
]
|
4973 | 2 | null | 4972 | 12 | null | First off, your response variable is discrete. The Cauchy distribution is continuous. Second, your response variable is non-negative. The Cauchy distribution with the parameters you specified puts about 1/5 of its mass on negative values. Whatever you have been reading about the QQ norm plot is false. Points falling close to the line is evidence of normality, not evidence in favor of being Cauchy distributed (EDIT: Disregard these last 2 sentences; a QQ Cauchy plot - not a QQ norm plot - was used, which is fine.) The Poisson distribution, used for modeling count data, is inappropriate since the variance is much larger than the mean. The Binomial distribution is also inappropriate since theoretically, your response variable has no upper bound. I'd look into the negative binomial distribution.
As a final note, your data does not necessarily have to come from a well known, "named" distribution. It may have come from a mixture of distributions, or may have a "true" distribution whose mass function is not a nice transformation of x to P(X=x). Don't try too hard to "force" a distribution to the data.
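If you go down the negative binomial route, a possible first step (assuming your counts are in a vector called `friends`) is a maximum likelihood fit:
```
library(MASS)
fit <- fitdistr(friends, "negative binomial")   # ML estimates of size and mu
fit$estimate
```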
| null | CC BY-SA 2.5 | null | 2010-11-28T07:13:23.237 | 2010-11-28T22:37:06.110 | 2010-11-28T22:37:06.110 | 2144 | 2144 | null |
4974 | 2 | null | 3856 | 2 | null | Typical measures of autocorrelation, such as Moran's I, are global estimates of clumpiness and could be masked by a trend or by "averaging" of clumpiness. There are two ways you could handle this:
1) Use a local measure of autocorrelation - but the drawback is you don't get a single number for clumpiness. An example of this would be Local Moran's I*
Here is a document (from a google search) that at least introduces the terms and gives some derivations
[http://onlinelibrary.wiley.com/doi/10.1111/0022-4146.00224/abstract](http://onlinelibrary.wiley.com/doi/10.1111/0022-4146.00224/abstract)
2) Use a statistic specifically geared towards point distributions and their clumpiness at various spatial scales, such as Ripley's K
[http://scholar.google.com/scholar?q=Ripley%27s+K&hl=en&as_sdt=0&as_vis=1&oi=scholart](http://scholar.google.com/scholar?q=Ripley%27s+K&hl=en&as_sdt=0&as_vis=1&oi=scholart)
| null | CC BY-SA 2.5 | null | 2010-11-28T08:03:09.037 | 2010-11-28T08:03:09.037 | null | null | 787 | null |
4975 | 2 | null | 4972 | 6 | null | Agree with HairyBeast (+1) that Cauchy is not appropriate here (it's symmetric for one thing) and that negative binomial might well be better.
Disagree about QQ-plot though. You can do a QQ-plot for any distribution, not just normal. What you say about interpretation of a QQ-plot is correct, but note that 2 of your points lie very far indeed from the straight line.
On the Cauchy's lack of moments: this doesn't affect sampling. Once you know the parameters of the distribution, sampling from it is easy (as the quantile function has a closed form) and the lack of moments is irrelevant. But the fact that the Cauchy distribution doesn't even have a mean does indicate that it's inappropriate here, as it is clearly meaningful to ask about the expected number of friends with whom a person has a conversation in a 20-day period.
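To illustrate the point about sampling: the Cauchy quantile function is $Q(p) = x_0 + \gamma\tan(\pi(p - 1/2))$, so inverse-transform sampling is trivial (here using the location/scale estimates quoted in the question):
```
u <- runif(1e4)
x <- 37.03 + 18.68 * tan(pi * (u - 0.5))   # same as rcauchy(1e4, location = 37.03, scale = 18.68)
```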
| null | CC BY-SA 2.5 | null | 2010-11-28T08:50:51.503 | 2010-11-28T08:50:51.503 | null | null | 449 | null |
4976 | 2 | null | 4970 | 6 | null | Take a look at [A Survey of Software for Sparse Eigenvalue Problems](http://www.grycap.upv.es/slepc/documentation/reports/str6.pdf) by Hernández et al.
| null | CC BY-SA 2.5 | null | 2010-11-28T12:05:28.813 | 2010-11-28T15:40:35.627 | 2010-11-28T15:40:35.627 | 439 | 439 | null |
4977 | 2 | null | 4961 | 12 | null | Put in simple terms, regularization is about benefiting the solutions you'd expect to get. As you mention, for example you can benefit "simple" solutions, for some definition of simplicity. If your problem has rules, one definition can be fewer rules. But this is problem-dependent.
You're asking the right question, however. For example in Support Vector Machines this "simplicity" comes from breaking ties in the direction of "maximum margin". This margin is something that can be clearly defined in terms of the problem. There is a very good geometric derivation in the [SVM article in Wikipedia](http://en.wikipedia.org/wiki/Support_vector_machine). It turns out that the regularization term is, arguably at least, the "secret sauce" of SVMs.
How do you do regularization? In general that comes with the method you use: if you use SVMs you're doing L2 regularization, if you're using [LASSO you're doing L1 regularization](http://en.wikipedia.org/wiki/Lasso_%28statistics%29#LASSO_method) (see what hairybeast is saying). However, if you're developing your own method, you need to know how to tell desirable solutions from non-desirable ones, and have a function that quantifies this. In the end you'll have a cost term and a regularization term, and you want to optimize the sum of both.
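As a toy illustration of that last point (a sketch, not tied to any particular library): an L2-regularized least-squares objective is just the data-fit cost plus a penalty, with `lambda` controlling how strongly "simple" (small-coefficient) solutions are favored.
```
ridge_loss <- function(beta, X, y, lambda) {
  sum((y - X %*% beta)^2) + lambda * sum(beta^2)   # cost term + regularization term
}
# e.g., minimize it numerically for given X and y:
# optim(rep(0, ncol(X)), ridge_loss, X = X, y = y, lambda = 1)
```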
| null | CC BY-SA 2.5 | null | 2010-11-28T12:51:05.570 | 2010-11-28T12:51:05.570 | null | null | 1540 | null |
4978 | 1 | 4984 | null | 22 | 5000 | I'm in no way a statistician (I've had a course in mathematical statistics but nothing more than that), and recently, while studying information theory and statistical mechanics, I met this thing called "uncertainty measure"/"entropy". I read Khinchin derivation of it as a measure of uncertainty and it made sense to me. Another thing that made sense was Jaynes description of MaxEnt to get a statistic when you know the arithmetic mean of one or more function/s on the sample (assuming you accept $-\sum p_i\ln p_i$ as a measure of uncertainty of course).
So I searched on the net to find the relationship with other methods of statistical inference, and God was I confused. For example, [this](http://www.mdpi.org/entropy/papers/e3020058.pdf) paper suggests, assuming that I got it right, that you just get an ML estimator under a suitable reformulation of the problem; MacKay, in his book, says that MaxEnt can give you weird things, and you shouldn't use it even for a starting estimate in a Bayesian inference; etc. I'm having trouble finding good comparisons.
My question is, could you provide an explanation and/or good references on the weak and strong points of MaxEnt as a statistical inference method, with quantitative comparisons to other methods (when applied to toy models, for example)?
| Comparison between MaxEnt, ML, Bayes and other kind of statistical inference methods | CC BY-SA 2.5 | null | 2010-11-28T13:12:18.827 | 2011-02-17T16:03:19.943 | null | null | 2171 | [
"entropy",
"inference"
]
|
4979 | 2 | null | 4978 | 19 | null | For an entertaining critique of maximum entropy methods, I'd recommend reading some old newsgroup posts on sci.stat.math and sci.stat.consult, particularly the ones by Radford Neal:
- How informative is the Maximum Entropy method? (1994)
- Maximum Entropy Imputation (2002)
- Explanation of Maximum Entropy (2004)
I'm not aware of any comparisons between maxent and other methods: part of the problem seems to be that maxent is not really a framework, but an ambiguous directive ("when faced with an unknown, simply maximise the entropy"), which is interpreted in different ways by different people.
| null | CC BY-SA 2.5 | null | 2010-11-28T15:27:54.147 | 2010-11-28T22:05:12.887 | 2010-11-28T22:05:12.887 | 495 | 495 | null |
4980 | 1 | null | null | 11 | 5877 | What open-source implementations -- in any language -- exist out there that can compute lasso regularisation paths for linear regression by coordinate descent?
So far I am aware of:
- glmnet
- scikits.learn
Anything else out there?
| Lasso fitting by coordinate descent: open-source implementations? | CC BY-SA 2.5 | null | 2010-11-28T15:34:13.590 | 2018-05-20T19:33:03.783 | null | null | 439 | [
"regression",
"lasso",
"regularization"
]
|
4981 | 2 | null | 4766 | 5 | null | We have implemented this (along with a power iteration refinement) in the [scikit-learn](http://scikit-learn.sourceforge.net) python package.
Our [implementation](https://github.com/scikit-learn/scikit-learn/blob/master/scikits/learn/utils/extmath.py#L97) is able to find the exact same singular values and vectors if k + p > rank(M) as demonstrated in the [tests](https://github.com/scikit-learn/scikit-learn/blob/master/scikits/learn/utils/tests/test_svd.py#L14).
If you cut (k + p) before reaching near-zero singular values (i.e. in the k + p < rank(M) case), then the singular vectors are indeed different from the ones you get with the un-truncated version, but they might still be very useful in practice for feature extraction in machine learning: for instance, 'truncated' eigenfaces at 150 work as well for a face recognition task with an SVM as the first 150 singular vectors of the full decomposition, even though the rank of my faces dataset seems to be much higher.
This randomized / truncated SVD method looks really interesting in practice: it can really cut down the computation time as shown in this [benchmark](https://github.com/scikit-learn/scikit-learn/blob/fc5da45c9e94bc18a55cd592a70bf1c21da75852/benchmarks/plot_bench_svd.py):

| null | CC BY-SA 2.5 | null | 2010-11-28T15:54:33.683 | 2010-11-28T16:01:55.277 | 2010-11-28T16:01:55.277 | 2150 | 2150 | null |
4982 | 2 | null | 4970 | 4 | null | 2500x2500 is not such a large problem. Even without leveraging the sparsity the SVD implementation of scipy.linalg is able to decompose it in less than a minute. See [my answer](https://stats.stackexchange.com/questions/4766/randomized-svd-and-singular-values/4981#4981) to a related questions for more details.
For larger problems you will need to use the sparsity explicitly. The [gensim](http://nlp.fi.muni.cz/projekty/gensim/) project may help you for medium-sized problems that fit on a single computer but not in RAM, and the [mahout](http://mahout.apache.org) implementation is able to deal with sparse matrices that don't even fit on a single hard drive.
| null | CC BY-SA 2.5 | null | 2010-11-28T16:10:35.137 | 2010-11-28T16:10:35.137 | 2017-04-13T12:44:40.807 | -1 | 2150 | null |
4983 | 1 | 5053 | null | 3 | 1914 | I am using [SVM-light](http://svmlight.joachims.org/) with Matlab, for linear SVM. I would like to understand the output model, but I cannot find any documentation or help about it.
Here is the output:
```
sv_num: 639
upper_bound: 547
b: 1.4023
totwords: 576
totdoc: 2000
loo_error: -1
loo_recall: -1
loo_precision: -1
xa_error: 14.9500
xa_recall: 82.6000
xa_precision: 86.8559
maxdiff: 9.9611e-004
r_delta_sq: 575.0052
r_delta_avg: 23.9792
model_length: 0.4847
loss: 357.5542
vcdim: 136.1071
alpha: [2000x1 double]
lin_weights: []
index: [2000x1 double]
supvec: [639x576 double]
kernel_parm: [1x1 struct]
example_length: 23.9792
a: [2000x1 double]
```
So far, I understand there are `sv_num=639` support vectors. However, there are 638 positive indices in the `index` vector; ditto for the positive values of the `alpha` vector. The first value of the `alpha` vector is zero, and the non-zero values are then between 2 and 639. Same remark about the `supvec` vector. So it seems there are indeed 638 support vectors.
Actually, I would like to get the hyperplane `W` and the parameter `b` myself to see if I could understand the output model correctly. Since the kernel is linear, I thought of a sum over some coefficients times some support vectors.
For information, `a(i)` "should" store the values `alpha_i` times `y_i` for training samples `x_i`. And there are 638 non-zero values.
Has anyone used svmlight before? Can you understand the output model? Or is there a documentation anywhere?
---
I manage to get one thing right:
```
MyCoeff = model.a ./ y;
MyIndex = 1+max(model.index,0); % between 1 and 639
```
The support vector corresponding to any non-zero `MyCoeff(i)` is:
- x(i,:)
- model.supvec(MyIndex(i),:)
And `MyIndex` is 1 iff `MyCoeff` is zero. And `supvec(1,:)` is the zero vector.
---
Here are four pieces of code I have made:
```
MyCoeff = model.a ./ y;
W1 = zeros(1, h*w);
for i=1:length(MyCoeff)
W1 = W1 + MyCoeff(i)*x(i,:);
end;
W2 = zeros(1, h*w);
for i=1:model.sv_num
W2 = W2 + model.alpha(i)*model.supvec(i,:);
end;
eq((MyCoeff~=0),(model.index>0))
MesIndices = 1+max(model.index,0); % between 1 and 639
W3 = zeros(1, h*w);
for i=1:length(MyCoeff)
W3 = W3 + MyCoeff(i)*model.supvec(MesIndices(i),:);
end;
W4 = zeros(1, h*w);
for i=1:length(MyCoeff)
W4 = W4 + model.alpha(MesIndices(i))*x(i,:);
end;
```
Here are the outputs:

I have understood the `index` link between the support vectors `supvec` and the matrix `x`. But I have not found the link between `alpha` and `a`.
---
I got it: apart from the difference of indices, `model.a` and `model.alpha` are the same.
```
for i=1:length(MesIndices)
W1 = W1 + model.a(i)*x(i,:);%*y(i);
W2 = W2 + model.alpha(MesIndices(i))*model.supvec(MesIndices(i),:);%*y(i);
W3 = W3 + model.a(i)*model.supvec(MesIndices(i),:);%*y(i);
W4 = W4 + model.alpha(MesIndices(i))*x(i,:);%*y(i);
end;
```
The question now is whether I should multiply by `y(i)`. I believe I should NOT since `model.a` is already `y` times the weight of the corresponding support vector, and $y\in\{-1,1\}$. So the good result should be the ORANGE face, which you can get by using one of the four lines above. Too bad I could not find any documentation. :')
| Output of linear SVM model in Matlab using SVM-light | CC BY-SA 2.5 | null | 2010-11-28T16:13:12.200 | 2012-11-23T15:08:49.377 | 2012-11-23T15:08:49.377 | 919 | 1351 | [
"svm",
"matlab"
]
|
4984 | 2 | null | 4978 | 19 | null | MaxEnt and Bayesian inference methods correspond to different ways of incorporating information into your modeling procedure. Both can be put on axiomatic ground (John Skilling's ["Axioms of Maximum Entropy"](http://yaroslavvb.com/papers/skilling-axioms.pdf) and Cox's ["Algebra of Probable Inference"](http://yaroslavvb.com/papers/cox-algebra.pdf)).
Bayesian approach is straightforward to apply if your prior knowledge comes in a form of a measurable real-valued function over your hypothesis space, so called "prior". MaxEnt is straightforward when the information comes as a set of hard constraints on your hypothesis space. In real life, knowledge comes neither in "prior" form nor in "constraint" form, so success of your method depends on your ability to represent your knowledge in the corresponding form.
On a toy problem, Bayesian model averaging will give you [lowest](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.27.153) average log-loss (averaged over many model draws) when the prior matches the true distribution of hypotheses. MaxEnt approach will give you [lowest](http://www.cwi.nl/~pdg/ftp/AOS231.pdf) worst-case log-loss when its constraints are satisfied (worst taken over all possible priors)
E.T.Jaynes, considered a father of "MaxEnt" methods also relied on Bayesian methods. On [page 1412](http://omega.albany.edu:8008/ETJ-PS/cc14g.ps) of his [book](http://omega.albany.edu:8008/JaynesBook.html), he gives an example where Bayesian approach resulted in a good solution, followed by an example where MaxEnt approach is more natural.
Maximum likelihood essentially takes the model to lie inside some pre-determined model space and trying to fit it "as hard as possible" in a sense that it'll have the highest sensitivity to data out of all model-picking methods restricted to such model space. Whereas MaxEnt and Bayesian are frameworks, ML is a concrete model fitting method, and for some particular design choices, ML can end up the method coming out of Bayesian or MaxEnt approach. For instance, MaxEnt with equality constraints is equivalent to Maximum Likelihood fitting of a certain exponential family. Similarly, an approximation to Bayesian Inference can lead to regularized Maximum Likelihood solution. If you choose your prior to make your conclusions maximally sensitive to data, result of Bayesian inference will correspond to Maximum Likelihood fitting. For instance, when inferring $p$ over Bernoulli trials, such prior [would be](http://www.amstat.org/publications/jse/v12n2/zhu.pdf) the limiting distribution Beta(0,0)
Real-life Machine Learning successes are often a mix of various philosophies. For instance, "Random Fields" were [derived](http://www.stat.ucla.edu/~yuille/courses/Stat231-Fall08/Pietra.pdf) from MaxEnt principles. Most popular implementation of the idea, regularized CRF, involves adding a "prior" on the parameters. As a result, the method is not really MaxEnt nor Bayesian, but influenced by both schools of thought.
I've collected some links on philosophical foundations of Bayesian and MaxEnt approaches [here](http://yaroslavvb.blogspot.com/2010/09/maxent-or-bayesian.html) and [here](http://yaroslavvb.blogspot.com/2005/03/joy-of-scanning.html).
Note on terminology: sometimes people call their method Bayesian [simply if](http://www.google.com/search?q=%22not+really+bayesian%22) it uses Bayes rule at some point. Likewise, "MaxEnt" is sometimes used for some method that favors high entropy solutions. This is not the same as "MaxEnt inference" or "Bayesian inference" as described above
| null | CC BY-SA 2.5 | null | 2010-11-28T22:05:16.413 | 2010-11-29T02:32:55.783 | 2010-11-29T02:32:55.783 | 511 | 511 | null |
4985 | 2 | null | 4908 | 1 | null | I may be misunderstanding your question (and falling into the same misunderstanding as fRed despite your explanation), in which case I apologize. It seems to me like you are saying you already know the various P_A, P_B, and P_C values for each context. I assume P_A + P_B + P_C = 1? Given those priors and the actual outcome you want to characterize the accuracy of the already established P values?
One approach might be simulation. Given the various P_A, P_B, and P_C values for each context you could generate guesses from the model at the probabilities indicated by each P value and then compare that to the obtained data. Averaging over many simulations you should get a sense of the average hit-rate of the model.
| null | CC BY-SA 2.5 | null | 2010-11-28T23:09:43.057 | 2010-11-28T23:09:43.057 | null | null | 196 | null |
4986 | 2 | null | 4980 | 5 | null | I have a MATLAB and C/C++ implementation [here](http://www.emakalic.org/blog/?page_id=7).
Let me know if you find it useful.
| null | CC BY-SA 2.5 | null | 2010-11-29T00:23:57.847 | 2010-11-29T00:23:57.847 | null | null | 530 | null |
4987 | 1 | 4990 | null | 3 | 4671 | I must have generated at least 5 Q-Q plots by now while trying to fit my data to a known distribution, but I just noticed something that I could not understand. In the figure shown below, from what I've read on the wiki, the X-axis is supposed to read "Negative Binomial Theoretical Quantiles" and the Y-axis is supposed to read "Data quantiles". Agreed that this makes perfect sense. But when I looked at the figure, the X and Y axes go beyond 100; how can there be quantiles beyond 100? What do they mean, if they exist? Or is the graph produced by R's qqplot something totally different? Can someone help me understand this?
The way I was generating this data was using the following script:
```
library(MASS)
# Define the data
data <- c(67, 81, 93, 65, 18, 44, 31, 103, 64, 19, 27, 57, 63, 25, 22, 150,
31, 58, 93, 6, 86, 43, 17, 9, 78, 23, 75, 28, 37, 23, 108, 14, 137,
69, 58, 81, 62, 25, 54, 57, 65, 72, 17, 22, 170, 95, 38, 33, 34, 68,
38, 117, 28, 17, 19, 25, 24, 15, 103, 31, 33, 77, 38, 8, 48, 32, 48,
26, 63, 16, 70, 87, 31, 36, 31, 38, 91, 117, 16, 40, 7, 26, 15, 89,
67, 7, 39, 33, 58)
# Fit the data to a model
params = fitdistr(data, "Negative Binomial")
#using the answer from params create a set of theoretical values
plot(qnbinom(ppoints(data), size=2.3539444, mu=50.7752809), sort(data))
abline(0,1)
```

| Understanding a Quantile-Quantile Plot | CC BY-SA 2.5 | 0 | 2010-11-29T01:02:22.203 | 2017-11-12T17:22:04.010 | 2017-11-12T17:22:04.010 | 11887 | 2164 | [
"r",
"distributions",
"quantiles",
"qq-plot"
]
|
4988 | 1 | null | null | 5 | 270 | Suppose we have a distribution for $\mathbf{x}\in \{-1,1\}^n$ and are interested in representing $p(\mathbf{x})$ as a linear exponential family with sufficient statistics of the form $1,x_1,x_2,\ldots,x_1 x_2,\ldots,x_1 x_2 \cdots x_n$
Suppose we know that some measure of complexity of the target distribution (its entropy, say) is bounded. We can view the parameter vector as a probability distribution $D$. Can we say anything about how the mass in $D$ is distributed among terms of various orders for some interesting complexity measure? Are there complexity measures that force mass onto low-order terms?
Restricting attention to graphical models means discarding 3rd and higher order terms. I'm interested in some explanation why this works so well
This was motivated by a similar [question](https://cstheory.stackexchange.com/q/3332/434) on CSTheory
Edit 11/29
Below is the probability simplex of distributions over $\{-1,1\}$-valued $x_1,x_2$, in "probability" coordinates and in "log-linear" coordinates. In the second picture, the x and y axes correspond to the coefficients in front of $x_1$ and $x_2$, and the z axis is the coefficient of the $x_1 x_2$ term. You can see that the first- and second-order dimensions are symmetric, so bounding entropy (or any other symmetric function of the distribution) is not enough to force mass onto low-order coefficients...
[](https://i.stack.imgur.com/LRqQi.png)
Mathematica [source](http://yaroslavvb.com/upload/save/stats.SE.simplex.nb)
| Asymmetry between high order and low order interaction terms | CC BY-SA 4.0 | null | 2010-11-29T02:08:24.257 | 2019-01-14T23:15:12.813 | 2019-01-14T23:15:12.813 | 79696 | 511 | [
"model-selection",
"entropy",
"graphical-model"
]
|
4989 | 1 | 5248 | null | 4 | 2784 | My data consists of individual level observations nested within countries over time. I would like to use multilevel models along with some sort of selection model.
I have three related questions.
1) Are there any issues or concerns with using a Heckman selection model twice?
I have a model with two selection stages. While I can't conceive of any reason why this shouldn't work, I also haven't seen any examples of a heckman selection model being used twice. Assume I also have the necessary exclusion restrictions.
2) My primary tool is R. There exists a package, sampleSelection, but my understanding is that the algorithm it uses may not be appropriate for panel data / multilevel models.
In Stata, my understanding is that the aforementioned is possible, but I'm not sure an R package exists for doing Heckman selection with multilevel models.
3) Are there better ways for accomplishing what I am trying to do? Perhaps using instruments alone would be better? If so, are there any R packages you could point me to? If there are no R packages, I am still interested in this answer since I would be happy to code something if necessary.
Thank you.
| Multi-stage selection model with panel data in R | CC BY-SA 2.5 | null | 2010-11-29T02:23:14.403 | 2010-12-08T10:22:15.833 | 2010-11-29T12:42:38.483 | null | 2050 | [
"r",
"panel-data",
"multilevel-analysis"
]
|
4990 | 2 | null | 4987 | 5 | null | I think R is doing perfectly what you want it to do.
You are plotting:
>
x = qnbinom(ppoints(data), size=2.3539444, mu=50.7752809)

which is:

>
 [1]   3   5   7   9  10  11  12  13  14  15  16  17  18  19  20
[16]  21  21  22  23  24  25  25  26  27  28  28  29  30  31  31
[31]  32  33  34  35  35  36  37  38  39  39  40  41  42  43  44
[46]  45  45  46  47  48  49  50  51  52  53  54  55  56  57  59
[61]  60  61  62  63  65  66  68  69  71  72  74  76  77  79  81
[76]  84  86  89  91  94  97 101 105 110 116 123 132 146 175

with respect to

>
y = sort(data)

which is:

>
 [1]   6   7   7   8   9  14  15  15  16  16  17  17  17  18  19
[16]  19  22  22  23  23  24  25  25  25  26  26  27  28  28  31
[31]  31  31  31  31  32  33  33  33  34  36  37  38  38  38  38
[46]  39  40  43  44  48  48  54  57  57  58  58  58  62  63  63
[61]  64  65  65  67  67  68  69  70  72  75  77  78  81  81  86
[76]  87  89  91  93  93  95 103 103 108 117 117 137 150 170

Therefore, you have values above 100 on both axes. If you want to plot quantiles, you need to tell R to do so by doing this:

>
plot(pnbinom(sort(data), size=2.3539444, mu=50.7752809), ppoints(data))
| null | CC BY-SA 2.5 | null | 2010-11-29T06:04:07.037 | 2010-11-29T06:15:40.893 | 2010-11-29T06:15:40.893 | 1307 | 1307 | null |
4991 | 1 | null | null | 12 | 3130 | I want to implement (in R) the following very simple Dynamic Linear Model for which I have 2 unknown time varying parameters (the variance of the observation error $\epsilon^1_t$ and the variance of the state error $\epsilon^2_t$).
$
\begin{matrix}
Y_t & = & \theta_t + \epsilon^1_t\\
\theta_{t+1} & = & \theta_{t}+\epsilon^2_t
\end{matrix} $
I want to estimate these parameters at each time point, without any look ahead bias.
From what I understand, I can use either a MCMC (on a rolling window to avoid the look ahead bias), or a particle filter (or Sequential Monte Carlo - SMC).
Which method would you use, and
What are the pros and cons of these two methods?
Bonus question: In these methods, how do you select the speed of change of the parameters? I guess we have to provide some information here, because there is a trade-off between using a lot of data to estimate the parameters and using less data to react more quickly to a change in the parameter.
| Estimating parameters of a dynamic linear model | CC BY-SA 3.0 | null | 2010-11-29T07:43:32.537 | 2015-04-27T06:02:33.610 | 2015-04-27T06:02:33.610 | 8336 | 1709 | [
"r",
"markov-chain-montecarlo",
"dlm",
"particle-filter"
]
|
4993 | 2 | null | 4961 | 27 | null | Suppose you perform learning via empirical risk minimization.
More precisely:
- you have a non-negative loss function $L(\text{actual value},\text{ predicted value})$ which characterizes how bad your predictions are
- you want to fit your model in such a way that its predictions minimize the mean of the loss function, calculated only on the training data (the only data you have)
Then the aim of the learning process is to find $\text{Model} = \text{argmin} \sum L(\text{actual}, \text{predicted}(\text{Model}))$ (this method is called empirical risk minimization).
But if you haven't got enough data and there is a huge number of variables in your model, it is very likely that you will find a model that explains not only the patterns but also the random noise in your data. This effect is called overfitting, and it leads to a degradation of the generalization ability of your model.
In order to avoid overfitting, a regularization term is introduced into the target function:
$\text{Model} = \text{argmin} \sum L(\text{actual}, \text{predicted}(\text{Model})) + \lambda R(\text{Model})$
Usually, this term $R(\text{Model})$ imposes a special penalty on complex models. For instance, on models with large coefficients (L2 regularization, $R$ = sum of squares of the coefficients) or with a lot of non-zero coefficients (L1 regularization, $R$ = sum of absolute values of the coefficients). If we are training a decision tree, $R$ can be its depth.
Another point of view is that $R$ introduces our prior knowledge about the form of the best model ("it doesn't have too large coefficients", "it is almost orthogonal to $\bar a$")
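As a toy illustration of the target function above, here is a short R sketch (made-up data and an arbitrary $\lambda$) that minimizes squared-error loss plus an L2 penalty numerically; in practice a closed-form or specialized solver would be used, this is only meant to make the two terms visible:
```
set.seed(2)
n <- 50; p <- 5
X <- matrix(rnorm(n * p), n, p)
beta_true <- c(1, -2, 0, 0, 3)
y <- drop(X %*% beta_true + rnorm(n))
lambda <- 0.5
objective <- function(beta) {
  loss <- sum((y - drop(X %*% beta))^2)  # empirical risk on the training data
  penalty <- lambda * sum(beta^2)        # R(Model): penalize large coefficients
  loss + penalty
}
fit <- optim(rep(0, p), objective, method = "BFGS")
fit$par  # regularized coefficient estimates
```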
| null | CC BY-SA 3.0 | null | 2010-11-29T10:32:29.997 | 2011-11-22T13:41:56.597 | 2011-11-22T13:41:56.597 | null | 1725 | null |
4995 | 2 | null | 4945 | 0 | null | Harvey:
You're absolutely right that the F and Bartlett's tests won't work to compare raw data with smoothed data! Once the data has been smoothed, there's all manner of autocorrelation in there, and the testing becomes much more complicated. Better to compare separate--and hopefully independent--sequences.
| null | CC BY-SA 2.5 | null | 2010-11-29T15:19:15.583 | 2010-11-29T16:53:58.290 | 2010-11-29T16:53:58.290 | 8 | 5792 | null |
4996 | 2 | null | 4988 | 3 | null | Under these assumptions any permutation of the monomials leads to exactly the same results, implying there is no inherent distinction between the low order terms and other terms.
---
If this doesn't seem convincing, let's look at a simple example. Pick any one of the monomials and set $f(\mathbf{x}, \theta)$ proportional to $\exp(\theta t(\mathbf{x}))$ where $t$ evaluates to $1$ for the selected monomial and otherwise equals $0$. In other words, letting $C(\theta) = \left(\exp(\theta) + 2^n - 1\right)^{-1}$ be the normalizing coefficient,
$$\eqalign{
f &= C(\theta) e^{\theta} &\text{ for this monomial}\cr
&= C(\theta) &\text{ for all other monomials.}
}$$
The entropy of this distribution equals
$$\eqalign{
H(\theta) &= -\mathbb{E}\left[\log(f)\right] \cr
&= \frac{\log(2^n-1) + (1+e^x)\log(1+e^x) - x e^x}{1 + e^x}
}$$
(writing $x = \theta - \log(2^n-1)$ to better isolate the dependence on $n$).
As $\theta$ ranges from $0$ to $\infty$, the entropy descends from $n\log(2)$, which is the maximum attainable for any distribution on this space, to $0$ (asymptotically), making the transition near $\theta = n \log(2)$. At this value of $\theta$ half the mass is concentrated on the chosen monomial and the entropy is still more than half of its maximum possible value.
We can play a more complicated game by focusing various amounts of the probability on any subset of the monomials we choose and putting vanishing amounts of probability on the remaining monomials. With such an approach we can meet any feasible lower bound on the entropy with a distribution that is as heavily concentrated on the chosen subset as possible. There is nothing to distinguish one subset from another. Thus, nothing informative can be said in general concerning the concentration of the distribution on the low order terms.
| null | CC BY-SA 2.5 | null | 2010-11-29T15:51:33.780 | 2010-11-29T16:19:52.817 | 2010-11-29T16:19:52.817 | 919 | 919 | null |
4997 | 1 | 5003 | null | 38 | 15682 | I'm using AIC (Akaike's Information Criterion) to compare non-linear models in R. Is it valid to compare the AICs of different types of model? Specifically, I'm comparing a model fitted by glm versus a model with a random effect term fitted by glmer (lme4).
If not, is there a way such a comparison can be done? Or is the idea completely invalid?
| Can AIC compare across different types of model? | CC BY-SA 2.5 | null | 2010-11-29T16:08:13.153 | 2022-03-19T09:26:30.813 | 2019-02-19T07:50:34.397 | 128677 | 2182 | [
"lme4-nlme",
"model-selection",
"aic"
]
|
4999 | 1 | 7276 | null | 6 | 657 | I recently came across the following paper: "[Stochastic Methods for $\ell_1$ Regularized Loss Minimization](http://www.cs.huji.ac.il/~shais/papers/ShalevTewari09.pdf)" by Shai Shalev-Shwartz and Ambuj Tewari, ICML 2009.
In the paper, the authors propose a modification of the coordinate descent algorithm for the LASSO in which the coordinates (the $\beta$s) are updated in a random order. This modification appears to have better run-time performance than deterministic coordinate descent.
Can you offer any intuition for why such a modification would make the algorithm faster in practice?
| Stochastic coordinate descent for $\ell_1$ regularization | CC BY-SA 2.5 | null | 2010-11-29T17:26:43.070 | 2014-12-31T05:33:49.130 | 2010-11-29T23:35:42.850 | 439 | 439 | [
"regression",
"lasso",
"regularization"
]
|
5000 | 2 | null | 4698 | 0 | null | Model evidence $P(D|M_i)$ can be viewed as an expectation of $P(D|w, M_i)$ with respect to distribution $P(w|M_i)$. You can then use Monte-Carlo methods to estimate it with required precision.
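As a hedged toy example of the Monte-Carlo route (Bernoulli data with a uniform prior on the parameter, chosen only because the exact evidence is available as a check; all specific numbers are illustrative):
```
# Evidence = E_prior[ P(D | w) ], estimated by averaging the likelihood over prior draws.
set.seed(3)
n <- 20; k <- 13                       # data D: 13 successes in 20 trials
w <- runif(1e5)                        # draws of w from the (uniform) prior
evidence_mc <- mean(w^k * (1 - w)^(n - k))
evidence_exact <- 1 / ((n + 1) * choose(n, k))   # exact integral for this toy case
c(evidence_mc, evidence_exact)
```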
Other suitable options include using Laplace Approximation and then finding closed-form solution for evidence (as they do in RVM).
| null | CC BY-SA 2.5 | null | 2010-11-29T17:30:31.260 | 2010-11-29T17:30:31.260 | null | null | null | null |
5001 | 2 | null | 212 | 3 | null | The Sclite tool from [NIST](http://www.itl.nist.gov/iad/mig//tools/) offers statistical tests to compare two ASR systems on the same test set.
For the test you described, several of the tests offered would be suitable (including the sign test), but not all are equally powerful.
| null | CC BY-SA 2.5 | null | 2010-11-29T17:49:27.980 | 2010-11-29T18:25:11.713 | 2010-11-29T18:25:11.713 | 919 | null | null |
5003 | 2 | null | 4997 | 20 | null | It depends. AIC is a function of the log likelihood. If both types of model compute the log likelihood the same way (i.e. include the same constant) then yes you can, if the models are nested.
I'm reasonably certain that `glm()` and `lmer()` don't use comparable log likelihoods.
The point about nested models is also up for discussion. Some say AIC is only valid for nested models as that is how the theory is presented/worked through. Others use it for all sorts of comparisons.
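If one does go ahead, here is a minimal R sketch of the mechanics (simulated Poisson data, purely illustrative); the caveat above still applies, so check how each function defines its log likelihood before trusting a head-to-head AIC comparison:
```
library(lme4)
set.seed(4)
dat <- data.frame(x = rnorm(200), group = factor(rep(1:20, each = 10)))
dat$y <- rpois(200, lambda = exp(0.2 + 0.5 * dat$x))
fit_glm  <- glm(y ~ x, family = poisson, data = dat)
fit_glmm <- glmer(y ~ x + (1 | group), family = poisson, data = dat)
logLik(fit_glm); logLik(fit_glmm)  # inspect the two log likelihoods
AIC(fit_glm); AIC(fit_glmm)        # only meaningful if the likelihoods are comparable
```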
| null | CC BY-SA 2.5 | null | 2010-11-29T18:01:09.517 | 2010-11-29T20:43:10.753 | 2010-11-29T20:43:10.753 | 1390 | 1390 | null |
5004 | 1 | 5040 | null | 3 | 2587 | What is the Unscented Kalman Filter and when is it used in preference to other types of filters?
edit: I find the Wikipedia explanation a bit too technical to be readily understood.
| What is the Unscented Kalman Filter? | CC BY-SA 2.5 | null | 2010-11-29T19:08:24.390 | 2022-09-03T19:15:12.147 | 2010-11-29T20:22:17.710 | 439 | 439 | [
"kalman-filter"
]
|
5005 | 1 | null | null | 3 | 397 | The following piece of Perl code randomly maps a set of ranges onto a circumference of a circle. In the example, the circumference is of length 1000 and legal ranges are e.g. (0,8)=0,1,2,...,8 and (995,2)=995,996,...,999,0,1,2 (i.e. zero-based coordinates; both start and end are inclusive).
I take some arbitrary position on the circumference (e.g. 36) and count how many ranges cover it in each simulation.
Finally, I calculate the mean and variance of this statistic.
```
use strict;
use warnings;
use Statistics::Descriptive;
my $n_simulations = 1000;
my $circumference = 1000;
my @lengths_distrib = (100) x 100; # distribution of range lengths
my $some_pos = 36; # arbitrary position
my $stat = Statistics::Descriptive::Full->new();
foreach my $sim ( 1 .. $n_simulations ) {
# randomly map ranges onto circumference
my @random_ranges =
map { my $start = int( rand($circumference) ); [ $start, ( $start + $_ -1 ) % $circumference ] }
@lengths_distrib;
# count how many ranges contain $some_pos
my $num_covering_ranges = scalar(
grep { ( $_->[0] <= $some_pos and $_->[1] >= $some_pos ) or ( $_->[1] < $_->[0] and $_->[1] > $some_pos ) }
@random_ranges
);
$stat->add_data($num_covering_ranges);
}
print $stat->mean, ' ', $stat->variance, "\n";
```
To the best of my knowledge, this kind of random variable should follow a Poisson distribution (law of rare events and so on). Hence, the mean and variance should be equal. However, the variance seems to be systematically a bit lower than the mean.
What am I missing?
UPDATE
Following whuber response, a couple of notes and additions:
- the Poisson distribution is a good approximation of the binomial distribution if n is at least 20 and p is smaller than or equal to 0.05, and an excellent approximation if n ≥ 100 and np ≤ 10. source: wikipedia. The example I gave here is a toy one. I usually use much larger circumferences (length ~ 3M), many more ranges (around 25k) and relatively small ranges (size ~ 2.5k), so I think in general a Poisson distribution should be appropriate. I will take another look at this with some real data.
- In my real data, I have e.g. some 25k ranges of different sizes. Currently, I'm simulating random mappings of these clones onto the circumference, counting the coverage of some point of interest in each simulation (this could be an arbitrary fixed point since all points are equivalent), and then taking the mean of this coverage over some 100 or 1000 simulations.
Now, I use a CDF of a Poisson distribution with this mean to get a P-value for the actual observation (number of ranges covering a specific point of interest). I take lower tail since I'm interested in under-covered positions.
How would you suggest going about this? Should I use a Binomial distribution? How can I evaluate its parameters? Note this is not exactly a Binomial RV: while I do have n independent trials, the chance of success is not the same for each trial (it depends on the range size). Perhaps I should simply take the average success probability (average length) and not even have to use simulations, right? In other words, I guess what I'm asking is: what does the sum of independent Bernoulli RVs with different p's look like?
I should also note that while this example only deals with the number of ranges covering a single position, I'm also interested in the number of ranges covering some range. For now I use the simulations scheme exactly the same way described before.
Thanks!
| Unexpected under-dispersion in Perl simulations of Poisson RV | CC BY-SA 2.5 | null | 2010-11-29T20:35:43.210 | 2010-11-30T07:01:38.970 | 2010-11-30T07:01:38.970 | 634 | 634 | [
"distributions",
"binomial-distribution",
"random-variable",
"poisson-distribution",
"simulation"
]
|
5006 | 2 | null | 423 | 39 | null | [Here](http://phd.kt.pri.ee/2009/08/11/captain-statistics-episode-1/)'s a somewhat more technical one.

| null | CC BY-SA 2.5 | null | 2010-11-29T20:45:50.423 | 2010-11-29T23:10:51.197 | 2010-11-29T23:10:51.197 | 930 | null | null |
5007 | 1 | 34252 | null | 59 | 127292 | I have a plot I'm making in ggplot2 to summarize data that are from a 2 x 4 x 3 celled dataset. I have been able to make panels for the 2-leveled variable using `facet_grid(. ~ Age)` and to set the x and y axes using `aes(x=4leveledVariable, y=DV)`. I used `aes(group=3leveledvariable, lty=3leveledvariable)` to produce the plot so far. This gives me a visualization that is paneled by the 2-leveled variable, with the X axis representing the 4 leveled variable and different lines plotted within the panels for the 3-leveled variable. But the key for the 3-leveled variable is titled with the 3-leveled variable's name and I want it to be a title that has a character space in it. How can I rename the title of the legend?
Things I've tried that don't seem to work (where `abp` is my ggplot2 object):
```
abp <- abp + opts(legend.title="Town Name")
abp <- abp + scale_fill_continuous("Town Name")
abp <- abp + opts(group="Town Name")
```
Example data:
```
ex.data <- data.frame(DV=rnorm(2*4*3), V2=rep(1:2,each=4*3), V4=rep(1:4,each=3), V3=1:3)
```
| How can I change the title of a legend in ggplot2? | CC BY-SA 3.0 | null | 2010-11-29T20:54:10.447 | 2013-08-04T15:52:12.873 | 2013-08-04T15:52:12.873 | 7290 | 196 | [
"r",
"data-visualization",
"ggplot2"
]
|
5009 | 2 | null | 5007 | 39 | null | You can change the title of the legend by modifying the scale for that legend. Here's an example using the CO2 dataset
```
library(ggplot2)
p <- qplot(conc, uptake, data = CO2, colour = Type) + scale_colour_discrete(name = "Fancy Title")
p <- p + facet_grid(. ~ Treatment)
p
```
EDIT:
Using the example data from above, here is a working solution. I think this mimics the plot that @drknexus is trying to create. As a side note, if anyone can explain why we have to treat V3 as a factor for it to be mapped to the legend, I'd appreciate it.
```
p <- qplot(V4, DV, data = ex.data, geom = "line", group = V3, lty = factor(V3))
p <- p + scale_linetype_discrete(name = "Fancy Title") + facet_grid(. ~ V2)
p
```

| null | CC BY-SA 2.5 | null | 2010-11-29T21:08:00.030 | 2010-11-29T22:54:18.910 | 2010-11-29T22:54:18.910 | 696 | 696 | null |
5010 | 2 | null | 4997 | 4 | null | This is a great question that I've been curious about for a while.
For models in the same family (ie. auto-regressive models of order k or polynomials) AIC/BIC makes a lot of sense. In other cases it's less clear. Computing the log-likelihood exactly (with the constant terms) should work, but using more complicated model comparison such as Bayes Factors is probably better (http://www.jstor.org/stable/2291091).
If the models have the same loss/error-function one alternative is to just compare the cross-validated log-likelihoods. That's usually what I try to do when I'm not sure AIC/BIC makes sense in a certain situation.
| null | CC BY-SA 2.5 | null | 2010-11-29T21:48:11.017 | 2010-11-29T21:48:11.017 | null | null | 2077 | null |
5011 | 1 | 5137 | null | 8 | 1791 | Kernel density estimator is given by
$$\hat{f}(x,h)=\frac{1}{nh}\sum_{i=1}^{n}K(\frac{x-X_{i}}{h})$$
where $X_1,\ldots,X_n$ are i.i.d. with some unknown density $f$, $h$ is the bandwidth,
and $K$ is a kernel function (
$\int_{-\infty}^{\infty}K(x)dx=1$,
$\int_{-\infty}^{\infty}K(x)xdx=0$,
$\int_{-\infty}^{\infty}K(x)x^2dx<\infty$).
The bias can be computed using Taylor expansion:
$$\int_{-\infty}^{\infty}\frac{1}{h}K\left(\frac{x-y}{h}\right)f(y)dy-f(x)=\int_{-\infty}^{\infty}K(y)\left(f(x-hy)-f(x)\right)dy$$ $$=\int_{-\infty}^{\infty}K(y)\left(-f'(x)hy+\frac{1}{2}f''(x)(hy)^{2}+o(h^{2})\right)dy=\frac{1}{2}f''(x)h^{2}\int_{-\infty}^{\infty}y^{2}K(y)dy+o(h^{2})$$
How do I deal with a periodic kernel and a periodic $f$ ($\int_{0}^{1}K(x)dx=1$, $\int_{0}^{1}K(x)xdx=0$, $\int_{0}^{1}K(x)x^2dx<\infty$)?
How can I use a Taylor expansion? ($\int_{0}^{1}\frac{1}{h}K(\frac{y-x}{h})f(y)dy=\int_{-\frac{x}{h}}^{1-\frac{x}{h}}K(y)f(x-yh)dy\neq\int_{0}^{1}K(y)f(x-yh)dy$, so I can't use the kernel properties directly.)
Could you recommend a good book about kernel smoothing for circular data?
| Bias for kernel density estimator (periodic case) | CC BY-SA 2.5 | null | 2010-11-29T22:35:10.797 | 2015-04-23T05:53:56.053 | 2015-04-23T05:53:56.053 | 9964 | 2189 | [
"kernel-smoothing"
]
|
5012 | 2 | null | 5005 | 3 | null | If I understood correctly, a "range" of length $k$ (such as $k=100$) within a circumference of length $n$ (such as $n = 1000$) has a chance of $(2k+1)/n$ of covering a given point on the circumference, and all the chances in a simulation are independent. Therefore in a simulation of $N$ trials (such as $N=1000$) the count of ranges that cover a given point should follow a $\text{Binomial}(\frac{2k+1}{n}, N)$ distribution. This distribution has mean $(2k+1)\frac{N}{n}$ and variance $(2k+1)(1 - \frac{2k+1}{n})\frac{N}{n}$. The variance of your data (the simulation trial outcomes) will deviate from this latter value due to chance.
These considerations identify two reasons for your observations:
- The variance ought to be a fraction of the mean; specifically, it should only be about $1 - \frac{2k+1}{n}$ of the mean. In the case $k=100$ and $n=1000$, the variance should only be around 79.9% of the mean.
- When you conduct a small number of trials (and $N=1000$ is quite small for simulations of this type), expect noticeable chance deviations between the results and the expectations.
| null | CC BY-SA 2.5 | null | 2010-11-29T23:23:02.660 | 2010-11-29T23:23:02.660 | null | null | 919 | null |
5013 | 1 | 5030 | null | 6 | 3447 | 
N=2762
I've been exploring a data set that seems to give rise to this kind of plot rather frequently. Would you say this is a single population with a non-normal distribution? Or are two populations mixed together, confounding the normal distribution?
It used matplotlib and scipy.stats:
```
(x,y), (slope, inter, cor) = stats.probplot(data, dist='norm')
osmf = x.take([0, -1]) # endpoints
osrf = slope * osmf + inter
self.ax.plot(x, y, ',', osmf, osrf, '-', **self.kwargs)
```
| Quantile-Quantile Plot with Unknown Distribution? | CC BY-SA 2.5 | null | 2010-11-29T23:53:09.483 | 2010-12-03T03:44:23.953 | null | null | 2191 | [
"distributions"
]
|
5014 | 2 | null | 5013 | -1 | null | You may want to take a look at the [Anderson-Darling](http://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test#Test_for_Normality) test for normality which empirically tests whether or not your data comes from a given distribution. @chl recommends looking at the `scipy` toolkit, specifically `anderson()` in `morestats.py` for an implementation.
| null | CC BY-SA 2.5 | null | 2010-11-30T02:33:35.903 | 2010-12-03T03:44:23.953 | 2010-12-03T03:44:23.953 | 696 | 696 | null |
5015 | 1 | 25081 | null | 19 | 29920 | I am interested in learning (and implementing) an alternative to polynomial interpolation.
However, I am having trouble finding a good description of how these methods work, how they relate, and how they compare.
I would appreciate your input on the pros/cons/conditions under which these methods or alternatives would be useful, but some good references to texts, slides, or podcasts would be sufficient.
| What are the advantages / disadvantages of using splines, smoothed splines, and gaussian process emulators? | CC BY-SA 3.0 | null | 2010-11-30T02:36:57.163 | 2018-06-14T03:20:00.983 | 2012-03-22T16:30:04.380 | 1381 | 1381 | [
"interpolation",
"splines"
]
|
5016 | 1 | null | null | 4 | 619 | I'm trying to compute some p-values for samples from a distribution of sums of ~1000 random variables. The exact distribution of these random variables isn't known, but I have empirical estimates that I think are pretty accurate.
So far I've been using the central limit theorem to produce a normal approximation for this sum, which does OK since n is relatively large, but not great. Computing the exact distribution of the sum via convolutions is too slow, but I only really care about having high accuracy in the tails; everywhere else a rough approximation is fine.
Are there any methods that will allow me to improve my estimates in the tails (or just one tail) without having to compute the entire convolutions? I'm not sure if this would be a variant on the CLT implementation or something completely different. My gut feeling is that there isn't any way to do this, so I'd be open to any kind of solution at all!
| Central Limit Theorem Tails | CC BY-SA 2.5 | null | 2010-11-30T04:34:16.530 | 2010-11-30T05:21:01.157 | null | null | 2111 | [
"normal-distribution",
"approximation",
"central-limit-theorem"
]
|
5017 | 2 | null | 5016 | 3 | null | $n=1000$ actually gets you extremely close to normal if you do not go to the extreme tails ($n=10$ is often close enough to normal in the central region), but if you need to go there, estimating a PDF might well be impossible.
To get an event very far out in the tails, extreme fluctuations in your summands need to happen. Since you say your distributions were "empirical estimates", you should ask yourself whether you have enough input data to understand the tails of your summand distributions. In many applications empirical distributions are truncated in the tails, since the tails are not well understood, even though they might be the only contributors in some parameter regions.
No matter if I truncate my distributions or not, I often use [Monte Carlo simulations](http://de.wikipedia.org/wiki/Monte-Carlo-Simulation) to compute convolutions of (not parametrized) empirical distributions since computing power is cheap.
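A minimal sketch of that Monte Carlo approach in R, with placeholder exponential samples standing in for the ~1000 empirical estimates (every number here is an assumption chosen only for illustration):
```
set.seed(5)
n_vars <- 1000
n_sim  <- 1e4
emp_samples <- replicate(n_vars, rexp(500), simplify = FALSE)  # stand-ins for the empirical estimates
draws <- vapply(emp_samples, function(s) sample(s, n_sim, replace = TRUE), numeric(n_sim))
sums  <- rowSums(draws)                  # n_sim simulated values of the sum
quantile(sums, c(0.999, 0.9999))         # far-tail quantiles of the simulated sum
mean(sums > 1100)                        # tail probability for an arbitrary threshold
```
Whether the far tail of such a simulation can be trusted still depends on how well the tails of the individual empirical distributions are known, as discussed above.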
| null | CC BY-SA 2.5 | null | 2010-11-30T05:21:01.157 | 2010-11-30T05:21:01.157 | null | null | 56 | null |
5018 | 2 | null | 4551 | 12 | null | My old stats prof had a "rule of thumb" for dealing with outliers: If you see an outlier on your scatterplot, cover it up with your thumb :)
| null | CC BY-SA 2.5 | null | 2010-11-30T06:18:04.063 | 2010-11-30T06:18:04.063 | null | null | 74 | null |
5019 | 2 | null | 4551 | 65 | null | Reporting p-values when you did data-mining (hypothesis discovery) instead of statistics (hypothesis testing).
| null | CC BY-SA 3.0 | null | 2010-11-30T06:19:58.040 | 2012-08-02T03:12:36.993 | 2012-08-02T03:12:36.993 | 74 | 74 | null |
5020 | 2 | null | 4551 | 48 | null | A few mistakes that bother me:
- Assuming unbiased estimators are always better than biased estimators.
- Assuming that a high $R^2$ implies a good model, low $R^2$ implies a bad model.
- Incorrectly interpreting/applying correlation.
- Reporting point estimates without standard error.
- Using methods which assume some sort of Multivariate Normality (such as Linear Discriminant Analysis) when more robust, better performing, non/semiparametric methods are available.
- Using p-value as a measure of strength between a predictor and the response, rather than as a measure of how much evidence there is of some relationship.
| null | CC BY-SA 3.0 | null | 2010-11-30T06:54:37.230 | 2011-09-22T15:01:04.260 | 2011-09-22T15:01:04.260 | 919 | 2144 | null |
5022 | 2 | null | 4551 | 29 | null | Being exploratory but pretending to be confirmatory. This can happen when one is modifying the analysis strategy (i.e. model fitting, variable selection and so on) data driven or result driven but not stating this openly and then only reporting the "best" (i.e. with smallest p-values) results as if it had been the only analysis. This also pertains to the point if multiple testing that Chris Beeley made and results in a high false positive rate in scientific reports.
| null | CC BY-SA 2.5 | null | 2010-11-30T08:02:28.100 | 2010-12-01T15:50:26.957 | 2010-12-01T15:50:26.957 | 1573 | 1573 | null |
5023 | 1 | null | null | 4 | 317 | Is there any comprehensive reference on (or introduction to) how people have tried to model non-independent random variables? I already know about mixing processes, which express in various ways according to various coefficients how "future" events depend on "past" events, but that's about it...
| Modelling dependence between random variables | CC BY-SA 2.5 | null | 2010-11-30T08:37:51.447 | 2010-11-30T16:00:56.360 | null | null | 2197 | [
"random-variable",
"non-independent"
]
|
5024 | 2 | null | 4698 | 2 | null | Yes, you can do that. However, I'd like to play with your formulas a little bit.
If the model is determined by the parameters, then $P(D|M_i)=\int P(D|w,M_i)P(M_i|w)*P(w)dw$ should be more appropriate. Since I guess the model is determined by the parameters in a deterministic (instead of stochastic) way, the formula can be abbreviated to $P(D|M_i)=\int P(D|w,M_i)*P(w)dw$.
Given this and only a finite uniform sample over the parameter space, your approximation is indeed correct.
| null | CC BY-SA 2.5 | null | 2010-11-30T09:33:50.900 | 2010-11-30T09:33:50.900 | null | null | 264 | null |
5025 | 1 | 5032 | null | 10 | 1794 | Suppose we have a simple linear regression model $Z = aX + bY$ and would like to test the null hypothesis $H_0: a=b=\frac{1}{2}$ against the general alternative.
I think one can use the estimate of $\hat{a}$ and $SE(\hat{a})$ and further apply a $Z$-test to get the confidence interval around $\frac{1}{2}$. Is this ok?
The other question is strongly related to this one. Suppose that we have a sample $\{(x_1,y_1,z_1),\ldots ,(x_n,y_n,z_n) \}$ and we compute $\chi^2$ statistics
\begin{equation}
\sum_{i=1}^n \frac{(z_i-\frac{x_i+y_i}{2})^2}{\frac{x_i+y_i}{2}}.
\end{equation}
Can these statistics be used to test the same null hypothesis?
| How to test if the slopes in the linear model are equal to a fixed value? | CC BY-SA 2.5 | null | 2010-11-30T10:07:53.553 | 2010-11-30T19:25:30.990 | 2010-11-30T16:45:51.890 | 8 | 1215 | [
"hypothesis-testing",
"regression"
]
|
5026 | 1 | null | null | 222 | 202047 | What is the difference between data mining, statistics, machine learning and AI?
Would it be accurate to say that they are 4 fields attempting to solve very similar problems but with different approaches? What exactly do they have in common and where do they differ? If there is some kind of hierarchy between them, what would it be?
Similar questions have been asked previously but I still don't get it:
- Data Mining and Statistical Analysis
- The Two Cultures: statistics vs. machine learning?
| What is the difference between data mining, statistics, machine learning and AI? | CC BY-SA 2.5 | null | 2010-11-30T11:26:15.473 | 2019-10-23T17:27:44.153 | 2017-04-13T12:44:33.550 | -1 | 2199 | [
"machine-learning",
"data-mining"
]
|
5028 | 2 | null | 5026 | 22 | null | We can say that they are all related, but they are all different things.
They do have things in common, though; for example, both statistics and data mining make use of clustering methods.
Let me try to briefly define each:
- Statistics is a very old discipline, mainly based on classical mathematical methods, which can be used for some of the same purposes that data mining is sometimes used for, namely classifying and grouping things.
- Data mining consists of building models in order to detect the patterns that allow us to classify or predict situations given an amount of facts or factors.
- Artificial intelligence (check Marvin Minsky) is the discipline that tries to emulate how the brain works with programming methods, for example building a program that plays chess.
- Machine learning is the task of building knowledge and storing it in some form in the computer; that form can be mathematical models, algorithms, etc... Anything that can help detect patterns.
| null | CC BY-SA 3.0 | null | 2010-11-30T12:05:06.173 | 2013-10-23T15:54:59.260 | 2013-10-23T15:54:59.260 | 28740 | 1808 | null |
5030 | 2 | null | 5013 | 7 | null | There are a variety of different possibilities. For example, a chi-square distribution with degrees of freedom in the range of 30-40 would give rise to such a qq-plot. In R:
```
x <- rchisq(10000, df=35)
qqnorm(x)
qqline(x)
```
looks like this:

A mixture of two normals with different means doesn't apply though.
```
x <- c(rnorm(10000/2, mean=0), rnorm(10000/2, mean=2))
qqnorm(x)
qqline(x)
```
looks like this:

Note how the points cross the line, which is a different pattern than the one you observe.
| null | CC BY-SA 2.5 | null | 2010-11-30T15:35:14.227 | 2010-11-30T15:35:14.227 | null | null | 1934 | null |
5031 | 2 | null | 5023 | 2 | null | OK, I think that, of what exists now, the thing that comes closest to what you are looking for is coalescent theory. [Quoting from wikipedia](http://en.wikipedia.org/wiki/Coalescence_%28genetics%29):
>
In genetics, coalescent theory is a retrospective model of population genetics. It employs a sample of individuals from a population to trace all alleles of a gene shared by all members of the population to a single ancestral copy, known as the most recent common ancestor (MRCA; sometimes also termed the coancestor to emphasize the coalescent relationship[1]). The inheritance relationships between alleles are typically represented as a gene genealogy, similar in form to a phylogenetic tree. This gene genealogy is also known as the coalescent; understanding the statistical properties of the coalescent under different assumptions forms the basis of coalescent theory. The coalescent runs models of genetic drift backward in time to investigate the genealogy of antecedents.[2] In the most simple case, coalescent theory assumes no recombination, no natural selection, and no gene flow or population structure. Advances in coalescent theory, however, allow extension to the basic coalescent, and can include recombination, selection, and virtually any arbitrarily complex evolutionary or demographic model in population genetic analysis. The mathematical theory of the coalescent was originally developed in the early 1980s by John Kingman[3].
The citations list of the wikipedia article mentions a primer on coalescence, looks like a good place to start.
[This paper gives a review of coalescence and natural selection.](http://www.oeb.harvard.edu/faculty/wakeley/John/Reprints/WakeleyChapterLongDraft.pdf)
[This paper gives a relatively lowbrow example of how coalescence theory is used in the case of neutral selection.](http://onlinelibrary.wiley.com/doi/10.1111/j.1461-0248.2004.00572.x/pdf) This can help you get the feeling for the ideas. It also contains references to seminal papers in the field.
| null | CC BY-SA 2.5 | null | 2010-11-30T16:00:56.360 | 2010-11-30T16:00:56.360 | null | null | 2036 | null |
5032 | 2 | null | 5025 | 8 | null | In linear regression the assumption is that $X$ and $Y$ are not random variables. Therefore, the model
$$Z = a X + b Y + \epsilon$$
is algebraically the same as
$$Z - \frac{1}{2} X - \frac{1}{2} Y = (a - \frac{1}{2})X + (b - \frac{1}{2})Y + \epsilon = \alpha X + \beta Y + \epsilon.$$
Here, $\alpha = a - \frac{1}{2}$ and $\beta =b - \frac{1}{2}$. The error term $\epsilon$ is unaffected. Fit this model, estimating the coefficients as $\hat{\alpha}$ and $\hat{\beta}$, respectively, and test the hypothesis $\alpha = \beta = 0$ in the usual way.
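A short R sketch of both routes (simulated data; the use of `car::linearHypothesis` is a tooling choice on my part, not part of the argument above):
```
library(car)
set.seed(6)
X <- rnorm(100); Y <- rnorm(100)
Z <- 0.5 * X + 0.5 * Y + rnorm(100, sd = 0.2)
fit <- lm(Z ~ 0 + X + Y)                        # model Z = aX + bY (no intercept)
linearHypothesis(fit, c("X = 0.5", "Y = 0.5"))  # joint test of a = b = 1/2
# Equivalent shifted model: test alpha = beta = 0 via the overall F-statistic
fit0 <- lm(I(Z - 0.5 * X - 0.5 * Y) ~ 0 + X + Y)
summary(fit0)
```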
---
The statistic written at the end of the question is not a chi-squared statistic, despite its formal similarity to one. A chi-squared statistic involves counts, not data values, and must have expected values in its denominator, not covariates. It's possible for one or more of the denominators $\frac{x_i+y_i}{2}$ to be zero (or close to it), showing that something is seriously wrong with this formulation. If even that isn't convincing, consider that the units of measurement of $Z$, $X$, and $Y$ could be anything (such as drams, parsecs, and pecks), so that a linear combination like $z_i - (x_i+y_i)/2$ is (in general) meaningless. It doesn't test anything.
| null | CC BY-SA 2.5 | null | 2010-11-30T16:25:00.833 | 2010-11-30T19:25:30.990 | 2010-11-30T19:25:30.990 | 919 | 919 | null |
5033 | 2 | null | 5013 | 5 | null | Your dataset clearly is not normal. (With this much data, any goodness of fit test will tell you that.) But you can read much more than that from the normal probability plot:
- The generally smooth curvature does not hint at a mixture structure.
- The upper tail is too stretched out (values too high compared to the reference distribution).
- The lower tail is too compressed (values also too high).
This suggests that a mild [Box-Cox transformation](http://en.wikipedia.org/wiki/Power_transform) will produce nearly-normal, or at least symmetric, data. To find it, consider some key values on this plot: the median, found above the x-value of 0, is about 0.90; +2 standard deviations is about 0.99; and -2 standard deviations is about 0.825. The nonlinearity is apparent from the simple calculations 0.99 - 0.90 = 0.09 whereas 0.90 - 0.825 = 0.075: the rise from the median to the upper tail is greater than the rise from the lower tail to the median. We can equalize the slopes by trying out some simple re-expressions of these three values only. For example, taking the reciprocals of the three key data values (Box-Cox power of -1) gives
```
1/0.825 = 1.21
1/0.90 = 1.11; 1.21 - 1.11 = 0.10 (new slope is 0.050 per SD)
1/0.99 = 1.01; 1.11 - 1.01 = 0.10 (0.050 per SD)
```
Because the slopes of the re-expressed values are now equal, we know the plot of reciprocals of the data will be approximately linear between -2 and +2 SDs. As a check, let's pick more points further out into the tails and see what the reciprocal does to them. I estimate that the value in the plot at -3 SD from the mean is around 0.79 and the value +3 SD from the mean is 1.05. The two slopes in question equal 0.053 and 0.052 per SD: close enough to each other and to the slopes found between -2 and +2 SD.
My estimates--based on the plot as shown on a monitor--are crude, so you will want to repeat these (simple, quick) calculations with the actual data. Nevertheless, there is considerable evidence that your data when suitably re-expressed with a simple transformation will be close to normally distributed.
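If the raw values are available in R, the hand calculation can be cross-checked with a profile-likelihood search over Box-Cox powers. The sketch below uses stand-in data (reciprocals of a normal sample, mimicking the shape described above) purely to show the mechanics; with the real data you would pass your own vector:
```
library(MASS)
set.seed(7)
x <- 1 / rnorm(2762, mean = 1.11, sd = 0.05)     # stand-in positive data whose reciprocal is normal
bc <- boxcox(lm(x ~ 1), lambda = seq(-3, 1, 0.1))
bc$x[which.max(bc$y)]                            # profile-likelihood estimate of the power (near -1 here)
```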
| null | CC BY-SA 2.5 | null | 2010-11-30T17:08:52.140 | 2010-11-30T17:08:52.140 | null | null | 919 | null |