Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6355 | 1 | 6359 | null | 13 | 16642 | I understand the premise of the kNN algorithm for spatial data. And I know I can extend that algorithm to be used on any continuous data variable (or nominal data with Hamming distance). However, what strategies are used when dealing with higher-dimensional data?
For example, say I have a table of data (x[1], x[2], x[3], ..., x[n]) and I want to build a set of classifiers to predict one of those columns (say x[n]). Using the kNN algorithm I would pick any two columns from the remaining columns (x[1]-x[n-1]) to train against. So say I could pick x[1] and x[2] and build a classifier off those. Or I could pick x[1] and x[4], or I could pick x[5] and x[8], etc. I could even pick just a single column and build a classifier off that, or 3 columns and build a classifier off that. Is there an advantage to using higher dimensions (2D, 3D, etc) or should you just build x-1 single-dimension classifiers and aggregate their predictions in some way?
Since building all of these classifiers from all potential combinations of the variables would be computationally expensive, how could I optimize this search to find the best kNN classifiers from that set? And, once I find a series of classifiers, what's the best way to combine their output into a single prediction? Voting might be the simplest answer to this question, or weighting each vote by error rates from the training data for each classifier.
How do most implementations apply kNN to a more generalized learning?
| Help understand kNN for multi-dimensional data | CC BY-SA 2.5 | null | 2011-01-18T20:52:59.467 | 2011-01-18T23:06:15.697 | null | null | 1929 | [
"machine-learning",
"k-nearest-neighbour"
]
|
6356 | 2 | null | 726 | 6 | null | We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Pierre-Simon de Laplace. Also known as Laplace's demon
| null | CC BY-SA 3.0 | null | 2011-01-18T21:34:07.010 | 2011-08-14T15:03:21.607 | 2011-08-14T15:03:21.607 | 930 | 2392 | null |
6357 | 2 | null | 726 | 12 | null | One sees, from this Essay, that the theory of probabilities is basically just common sense reduced to calculus; it makes one appreciate with exactness that which accurate minds feel with a sort of instinct, often without being able to account for it.
Another one from Laplace
| null | CC BY-SA 2.5 | null | 2011-01-18T22:11:11.050 | 2011-01-18T22:11:11.050 | null | null | 2392 | null |
6358 | 1 | 6361 | null | 11 | 4816 | Thanks in advance for bearing with me, I am not a statistician of any kind and don't know how to describe what I'm imagining, so Google isn't helping me here...
I'm including a rating system in a web application I'm working on. Each user can rate each item exactly once.
I was imagining a scale with 4 values: "strongly dislike", "dislike", "like", and "strongly like", and I had planned on assigning these values of -5, -2, +2, and +5 respectively.
Now, if every item was going to have the same number of ratings, then I would be quite comfortable with this scoring system as clearly differentiating the most liked and least liked items. However, the items will not have the same number of ratings, and the disparity between the number of votes on different photos may be quite dramatic.
In that case, comparing the cumulative scores on two items means that an old item with a lot of mediocre ratings is going to have a much higher score than an exceptional new item with many fewer votes.
So, the first obvious thing I thought of was to take an average... but now if an item has only one rating of "+5" it has a better average than an item that has a score of 99 "+5" ratings and 1 "+2" rating. Intuitively that isn't an accurate representation of the popularity of an item.
I imagine this problem is common and you guys don't need me to belabor it with more examples, so I'll stop at this point and elaborate in comments if needed.
My questions are:
- What is this kind of problem called, and is there a term for the techniques used to solve it? I'd like to know this so I can read up on it.
- If you happen to know of any lay-friendly resources on the subject, I'd very much appreciate a link.
- Finally, I'd appreciate any other suggestions about how to effectively collect and analyze this kind of data.
| Weight a rating system to favor items rated highly by more people over items rated highly by fewer people? | CC BY-SA 3.0 | null | 2011-01-18T22:46:06.467 | 2017-11-09T14:20:04.097 | 2017-11-09T14:20:04.097 | 11887 | 2832 | [
"scales",
"rating"
]
|
6359 | 2 | null | 6355 | 5 | null | >
Is there an advantage to using higher dimensions (2D, 3D, etc) or should you just build x-1 single dimension classifiers and aggregate their predictions in some way?
This depends on whether your features are informative or not. Do you suspect that some features will not be useful in your classification task? To gain a better idea of your data, you can also try to compute pairwise correlation or mutual information between the response variable and each of your features.
To combine all (or a subset) of your features, you can try computing the L1 (Manhattan), or L2 (Euclidean) distance between the query point and each 'training' point as a starting point.
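For instance, a minimal R sketch of that idea (made-up data of my own, not from the question): the L1 and L2 distances from a single query point to every training point, followed by a majority vote among the k nearest neighbours.
```
set.seed(1)
train  <- matrix(rnorm(100 * 4), ncol = 4)      # 100 'training' points, 4 features
labels <- sample(c("A", "B"), 100, replace = TRUE)
query  <- rnorm(4)                              # a single query point
dist_L2 <- sqrt(colSums((t(train) - query)^2))  # Euclidean distance to each point
dist_L1 <- colSums(abs(t(train) - query))       # Manhattan distance to each point
k <- 5
table(labels[order(dist_L2)[1:k]])              # majority vote among the k nearest
```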
>
Since building all of these classifiers from all potential combinations of the variables would be computationally expensive, how could I optimize this search to find the best kNN classifiers from that set?
This is the problem of feature subset selection. There is a lot of academic work in this area (see Guyon, I., & Elisseeff, A. (2003). An Introduction to Variable and Feature Selection. Journal of Machine Learning Research, 3, 1157-1182. for a good overview).
>
And, once I find a series of classifiers what's the best way to combine their output to a single prediction?
This will depend on whether or not the selected features are independent. In the case that features are independent, you can weight each feature by its mutual information (or some other measure of informativeness) with the response variable (whatever you are classifying on). If some features are dependent, then a single classification model will probably work best.
>
How do most implementations apply kNN to a more generalized learning?
By allowing the user to specify their own distance matrix between the set of points. kNN works well when an appropriate distance metric is used.
| null | CC BY-SA 2.5 | null | 2011-01-18T23:06:15.697 | 2011-01-18T23:06:15.697 | null | null | 1913 | null |
6360 | 2 | null | 6350 | 10 | null | Standard classical one-way ANOVA can be viewed as an extension of the classical "2-sample T-test" to an "n-sample T-test". This can be seen by comparing a one-way ANOVA with only two groups to the classical 2-sample T-test.
I think where you are getting confused is that (under the assumptions of the model) the residuals and the raw data are BOTH normally distributed. However the raw data consist of normal distributions with different means (unless all the effects are exactly the same) but the same variance. The residuals on the other hand have the same normal distribution. This comes from the third assumption of homoscedasticity.
This is because the normal distribution is decomposable into mean and variance components. If $Y_{ij}$ has a normal distribution with mean $\mu_{j}$ and variance $\sigma^2$, it can be written as $Y_{ij}=\mu_{j}+\sigma\epsilon_{ij}$ where $\epsilon_{ij}$ has a standard normal distribution.
While ANOVA is derivable from the assumption of normality, I think (but am unsure) it can be replaced by an assumption of linearity (along the Best Linear Unbiased Estimator (BLUE) lines of estimation, where "BEST" is interpreted as minimum mean square error). I believe this basically involves replacing the distribution for $\epsilon_{ij}$ with any mutually independent distribution (over all i and j) which has mean 0 and variance 1.
In terms of looking at your raw data, it should look normal when plotted separately for each factor level in your model. This means plotting $Y_{ij}$ for each j on a separate graph.
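A small simulated illustration of that check in R (my own sketch, not part of the original answer): the raw responses are inspected per factor level, while the residuals are pooled across levels.
```
set.seed(1)
g   <- gl(3, 50)                                 # 3 factor levels, 50 obs each
y   <- rnorm(150, mean = c(0, 2, 5)[g], sd = 1)  # same variance, different means
fit <- aov(y ~ g)
par(mfrow = c(1, 4))
for (j in levels(g)) qqnorm(y[g == j], main = paste("Level", j))
qqnorm(residuals(fit), main = "Pooled residuals")
```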
| null | CC BY-SA 2.5 | null | 2011-01-18T23:06:31.050 | 2011-01-18T23:06:31.050 | null | null | 2392 | null |
6361 | 2 | null | 6358 | 17 | null | One way you can combat this is to use proportions in each category, which does not require you to put numbers in for each category (you can leave it as 80% rated as "strongly likes"). However, proportions do suffer from the small-number-of-ratings issue. This shows up in your example: the photo with one +5 rating would get a higher average score (and proportion) than the one with 99 +5 ratings and one +2 rating. This doesn't fit well with my intuition (and I suspect most people's).
One way to get around this small sample size issue is to use a Bayesian technique known as "[Laplace's rule of succession](http://en.wikipedia.org/wiki/Rule_of_succession)" (searching this term may be useful). It simply involves adding 1 "observation" to each category before calculating the probabilities. If you wanted to take an average for a numerical value, I would suggest a weighted average where the weights are the probabilities calculated by the rule of succession.
For the mathematical form, let $n_{sd},n_{d},n_{l},n_{sl}$ denote the number of responses of "strongly dislike", "dislike", "like", and "strongly like" respectively (in the two examples, $n_{sl}=1,n_{sd}=n_{d}=n_{l}=0$ and $n_{sl}=99,n_{l}=1,n_{sd}=n_{d}=0$). You then calculate the probability (or weight) for strongly like as
$$Pr(\text{"Strongly Like"}) = \frac{n_{sl}+1}{n_{sd}+n_{d}+n_{l}+n_{sl}+4}$$
For the two examples you give, they give probabilities of "strongly like" as
$\frac{1+1}{1+0+0+0+4}=\frac{2}{5}$ and $\frac{99+1}{99+1+0+0+4}=\frac{100}{104}$ which I think agree more closely with "common sense". Removing the added constants give $\frac{1}{1}$ and $\frac{99}{100}$ which makes the first outcome seem higher than it should be (at least to me anyway).
The respective scores are just given by the weighted average, which I have written below as:
$$Score= 5\frac{n_{sl}+1}{n_{sd}+n_{d}+n_{l}+n_{sl}+4}+2\frac{n_{l}+1}{n_{sd}+n_{d}+n_{l}+n_{sl}+4} - 2\frac{n_{d}+1}{n_{sd}+n_{d}+n_{l}+n_{sl}+4} - 5\frac{n_{sd}+1}{n_{sd}+n_{d}+n_{l}+n_{sl}+4}$$
Or more succinctly as
$$Score=\frac{5 n_{sl}+ 2 n_{l} - 2 n_{d} - 5 n_{sd}}{n_{sd}+n_{d}+n_{l}+n_{sl}+4}$$
Which gives scores in the two examples of $\frac{5}{5}=1$ and $\frac{497}{104}\sim 4.8$. I think this shows an appropriate difference between the two cases.
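As a quick numerical check (my own R transcription of the succinct formula above, not part of the original answer), with counts in the order strongly dislike, dislike, like, strongly like:
```
score <- function(n_sd, n_d, n_l, n_sl) {
  (5 * n_sl + 2 * n_l - 2 * n_d - 5 * n_sd) / (n_sd + n_d + n_l + n_sl + 4)
}
score(0, 0, 0, 1)   # one +5 rating: 1
score(0, 0, 1, 99)  # 99 +5 ratings and one +2 rating: ~4.78
```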
This may have been a bit "mathsy" so let me know if you need more explanation.
| null | CC BY-SA 2.5 | null | 2011-01-18T23:42:31.533 | 2011-01-19T00:02:12.053 | 2011-01-19T00:02:12.053 | 449 | 2392 | null |
6362 | 2 | null | 6358 | 2 | null | I'd take a graphical approach. The x-axis could be average rating and the y could be number of ratings. I used to do this with sports statistics to compare the contribution of young phenoms with that of veteran stars. The nearer a point is to the upper right corner, the closer to the ideal. Of course, deciding on the "best" item would still be a subjective decision, but this would provide some structure.
If you want to plot average rating against another variable, then you could set up number of ratings as the third variable using bubble size, in a bubble plot--e.g., in XL or SAS.
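A rough R sketch of both ideas (the numbers are invented purely for illustration):
```
avg_rating <- c(4.8, 3.9, 4.2, 2.5, 4.9)
n_ratings  <- c(120, 450, 80, 300, 3)
plot(avg_rating, n_ratings, xlab = "Average rating",
     ylab = "Number of ratings")   # items near the upper right are the best bets
# bubble-plot variant: encode the number of ratings as point size
plot(avg_rating, seq_along(avg_rating), cex = sqrt(n_ratings) / 4, pch = 21,
     bg = "grey", xlab = "Average rating", ylab = "Item")
```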
| null | CC BY-SA 2.5 | null | 2011-01-18T23:59:50.293 | 2011-01-18T23:59:50.293 | null | null | 2669 | null |
6363 | 2 | null | 6353 | 2 | null | You definitely can, by following the same method you'd use for the first categorical predictor. Create dummy variables just as you would for the first such variable. But it's often easier to use SPSS's Unianova command. You can look this up in any printed or pdf'd Syntax Guide, or you can access it through Analyze...General Linear Model...Univariate.
Despite being a little more complicated, the Regression command has a number of advantages over Unianova. The chief one is that you can choose 'missing pairwise' (you don't have to lose a case simply because it's missing a value for one or two predictors). You can also get many valuable diagnostics such as partial plots and influence statistics.
| null | CC BY-SA 2.5 | null | 2011-01-19T00:09:20.683 | 2011-01-19T00:09:20.683 | null | null | 2669 | null |
6364 | 1 | null | null | 9 | 15179 | Motivation: I'm writing a state estimator in MATLAB (the unscented Kalman filter), which calls for the update of the (upper-triangular) square-root of a covariance matrix $S$ at every iteration (that is, for a covariance matrix $P$, it is true that $P=SS^{T}$). In order for me to perform the requisite calculations, I need to do a Rank-1 Cholesky Update and Downdate using MATLAB's `cholupdate` function.
Problem: Unfortunately, during the course of the iterations, this matrix $S$ can sometimes lose positive definiteness. The Cholesky downdate fails on non-PD matrices.
My question is: are there any simple and reliable ways in MATLAB to make $S$ positive-definite?
(or more generally, is there a good way of making any given covariance $X$ matrix positive-definite?)
---
Notes:
- $S$ is full rank
- I've tried the eigendecomposition approach (which did not work). This basically involved finding $S = VDV^{T}$, setting all negative elements of $V$ and $D$ to $1 \times 10^{-8}$, and reconstructing a new $S' = V' D' V'^{T}$, where $V',D'$ are matrices with only positive elements.
- I am aware of the Higham approach (which is implemented in R as nearPD), but it seems to only project to the nearest PSD matrix. I need a PD matrix for the Cholesky update.
| Making square-root of covariance matrix positive-definite (Matlab) | CC BY-SA 2.5 | null | 2011-01-19T01:00:36.910 | 2014-01-06T20:26:08.720 | 2011-01-19T01:54:42.757 | 2833 | 2833 | [
"matlab",
"covariance-matrix",
"numerics"
]
|
6365 | 2 | null | 6364 | 2 | null | in Matlab:
```
help cholupdate
```
I get
```
CHOLUPDATE Rank 1 update to Cholesky factorization.
If R = CHOL(A) is the original Cholesky factorization of A, then
R1 = CHOLUPDATE(R,X) returns the upper triangular Cholesky factor of A + X*X',
where X is a column vector of appropriate length. CHOLUPDATE uses only the
diagonal and upper triangle of R. The lower triangle of R is ignored.
R1 = CHOLUPDATE(R,X,'+') is the same as R1 = CHOLUPDATE(R,X).
R1 = CHOLUPDATE(R,X,'-') returns the Cholesky factor of A - X*X'. An error
message reports when R is not a valid Cholesky factor or when the downdated
matrix is not positive definite and so does not have a Cholesky factorization.
[R1,p] = CHOLUPDATE(R,X,'-') will not return an error message. If p is 0
then R1 is the Cholesky factor of A - X*X'. If p is greater than 0, then
R1 is the Cholesky factor of the original A. If p is 1 then CHOLUPDATE failed
because the downdated matrix is not positive definite. If p is 2, CHOLUPDATE
failed because the upper triangle of R was not a valid Cholesky factor.
CHOLUPDATE works only for full matrices.
See also chol.
```
| null | CC BY-SA 2.5 | null | 2011-01-19T01:40:09.367 | 2011-01-19T01:40:09.367 | null | null | 795 | null |
6366 | 1 | 6369 | null | 7 | 503 | I have a question about the AR(1) model. Expressed mathematically as:
$$ Z_{t} = \rho Z_{t-1} + \epsilon_{t}, t=1,..,T$$
$$ \epsilon_{t} \sim iid \ N(0,1) $$
My question is about the "transformation group" method of creating non-informative priors, which I believe initially was suggested by Edwin Jaynes (and is discussed in Chapter 12 of his book Probability Theory: The Logic of Science).
One possible suggestion for a transformation group is to consider "reversing" the time series and then rescaling. Thus my transformation group is the following:
$$\rho^{(1)} = \rho^{-1}$$
$$Z_{t}^{(1)} = \rho^{(1)}Z_{T-t+1}$$
Using the original AR distribution, you can show that this transformation basically just "shuffles" the $\epsilon_{t}$ terms, which by definition of the model are exchangeable. So, estimating $\rho$ using $Z_{t}$ is equivalent to estimating $\rho^{(1)}$ using $Z_{t}^{(1)}$ (i.e., the joint distribution of the noise is the same in both cases). Thus the prior for $\rho^{(1)}$ must be the probability transform of the prior for $\rho$. Or, in mathematical terms, the prior must satisfy the following functional equation:
$$f(\rho)=|{\frac{\partial \rho^{-1}}{\partial \rho}}| f(\rho^{-1})=\rho^{-2}f(\rho^{-1})$$
Unfortunately this does not describe a unique function. In fact, any function with the following form will satisfy the above functional equation:
$$
f(\rho) = \text{constant} \times \begin{cases}
\rho^{2b} (1-\rho^{2})^{a} & |\rho|<1 \\
\rho^{-2(b+a+1)} (\rho^{2}-1)^{a} & |\rho|>1
\end{cases}$$
For $a > -1$ and $b>-\frac{1}{2}$ this distribution is proper, with the normalizing constant being the reciprocal of $2\beta(b+\frac{1}{2},a+1)$ where $\beta(a,b)$ is the "beta integral". Note that this class includes the "symmetric reference prior" recommended in Berger, J. O. and Yang, R. (1994). Noninformative priors and Bayesian testing for the AR(1) model. Econometric Theory 10 461–482.
Usually specifying a transformation group makes the solution unique, so I am perplexed as to how this group of transformations does not produce a unique solution. Have I done something wrong in the process of creating the transformation group?
UPDATE: Perhaps there is no transformation group which uniquely determines the prior in this case?
| Non-informative priors for the AR(1) model | CC BY-SA 3.0 | null | 2011-01-19T01:49:58.940 | 2017-11-27T17:49:06.920 | 2017-11-27T17:49:06.920 | 128677 | 2392 | [
"time-series",
"bayesian",
"prior"
]
|
6367 | 2 | null | 6364 | 4 | null | Here is code I've used in the past (using the SVD approach). I know you said you've tried it already, but it has always worked for me so I thought I'd post it to see if it was helpful.
```
function [sigma] = validateCovMatrix(sig)
% [sigma] = validateCovMatrix(sig)
%
% -- INPUT --
% sig: sample covariance matrix
%
% -- OUTPUT --
% sigma: positive-definite covariance matrix
%
EPS = 10^-6;
ZERO = 10^-10;
sigma = sig;
[r err] = cholcov(sigma, 0);
if (err ~= 0)
% the covariance matrix is not positive definite!
[v d] = eig(sigma);
% set any of the eigenvalues that are <= 0 to some small positive value
for n = 1:size(d,1)
if (d(n, n) <= ZERO)
d(n, n) = EPS;
end
end
% recompose the covariance matrix, now it should be positive definite.
sigma = v*d*v';
[r err] = cholcov(sigma, 0);
if (err ~= 0)
disp('ERROR!');
end
end
```
| null | CC BY-SA 2.5 | null | 2011-01-19T02:43:49.377 | 2011-01-19T02:43:49.377 | null | null | 1913 | null |
6368 | 1 | 6378 | null | 4 | 1626 | Please reference this question (stats.stackexchange.com) [http://bit.ly/got8Bs](http://bit.ly/got8Bs) for study design and initial reply from caracal.
My question now is: How can I test for an interaction contrast between two groups as (s)he suggested? I'm on the science side of things (minimal stats expertise) and use Graphpad Prism for basic analyses. I also have a primitive understanding of R. Is it possible to test $H_1^{\prime}: (\mu_{12}-\mu_{11})>(\mu_{22}-\mu_{21})$ using either of these programs?
Thanks again!
| Test for interaction contrasts using Prism or R | CC BY-SA 2.5 | null | 2011-01-19T04:09:38.677 | 2011-01-19T13:06:52.697 | 2011-01-19T09:33:06.637 | 449 | 2816 | [
"r",
"interaction",
"contrasts"
]
|
6369 | 2 | null | 6366 | 4 | null | This transformation group is discrete and finite: it contains exactly two elements, the identity and inverting $\rho$. It's simply not big enough to determine a prior. In fact, you can choose any measurable function $f$ defined on $[-1,1]$ provided (a) it is integrable and (b) $\rho^{-2}f(1/\rho)d\rho$ is integrable on $[1, \infty]$. The latter restricts $f$ only in a neighborhood of $0$.
BTW, for this model to be practical you need to introduce a nuisance parameter $\sigma$: $\epsilon_t \sim N(0, \sigma^2)$.
| null | CC BY-SA 2.5 | null | 2011-01-19T04:29:23.700 | 2011-01-19T04:29:23.700 | null | null | 919 | null |
6370 | 2 | null | 6364 | 2 | null | One alternative way to compute the Cholesky factorisation is by fixing the diagonal elements of S to 1, and then introducing a diagonal matrix D, with positive elements.
This avoids the need to take square roots when doing the computations, which can cause problems when dealing with "small" numbers (i.e. numbers small enough so that the rounding which occurs due to floating point operations matters). The [wikipedia page](http://en.wikipedia.org/wiki/Cholesky_decomposition#Avoiding_taking_square_roots) has what this adjusted algorithm looks like.
So instead of $P=SS^T$ you get $P=RDR^T$ with $S=RD^{\frac{1}{2}}$
Hope this helps!
| null | CC BY-SA 2.5 | null | 2011-01-19T04:35:06.207 | 2011-01-19T04:35:06.207 | null | null | 2392 | null |
6371 | 1 | null | null | 7 | 671 | I have a finite-state, time-independent Markov chain with two absorbing states which models educational outcomes (the absorbing states are completing and not completing). The transition probabilities are estimated by taking the proportions of people who move from one state to another at successive time points (based on a census of the population at two successive time points), and I have calculated the stationary vector.
However, since this needs to be done with several different cuts of the data, I would like to know if there is any way of associating a confidence interval to the entries of the stationary vector, to aid in identifying significant differences.
The article
>
Karson, M. J. and Wrobleski, W. J. (1976),
CONFIDENCE INTERVALS FOR ABSORBING MARKOV CHAIN PROBABILITIES APPLIED TO LOAN PORTFOLIOS. Decision Sciences, 7: 10–17. doi: 10.1111/j.1540-5915.1976.tb00653.x
looks helpful, but I'm not sure if it is what I need. So my questions are:
- Is there a way to estimate confidence intervals for the stationary vector?
- If yes, and it is in the cited article, do I just need to grit my teeth and push through it, or is there a more modern treatment?
- (a long shot, but could save me some work) Is there a macro or similar for SAS to estimate said confidence intervals?
| Can we get confidence intervals for entries in stationary vector for an absorbing, time-independent Markov chain? | CC BY-SA 2.5 | null | 2011-01-19T05:52:52.100 | 2011-01-29T21:39:40.393 | 2011-01-24T03:09:08.333 | 1144 | 1144 | [
"confidence-interval",
"sas",
"markov-process"
]
|
6373 | 2 | null | 2356 | 12 | null | Keith Winstein,
EDIT: Just to clarify, this answer describes the example given in Keith Winstein Answer on the King with the cruel statistical game. The Bayesian and Frequentist answers both use the same information, which is to ignore the information on the number of fair and unfair coins when constructing the intervals. If this information is not ignored, the frequentist should use the integrated Beta-Binomial Likelihood as the sampling distribution in constructing the Confidence interval, in which case the Clopper-Pearson Confidence Interval is not appropriate, and needs to be modified. A similar adjustment should occur in the Bayesian solution.
EDIT: I have also clarified the initial use of the clopper Pearson Interval.
EDIT: alas, my alpha is the wrong way around, and my clopper pearson interval is incorrect. My humblest apologies to @whuber, who correctly pointed this out, but who I initially disagreed with and ignored.
The CI Using the Clopper Pearson method is very good
If you only get one observation, then the Clopper Pearson Interval can be evaluated analytically. Suppose the coin comes up as "success" (heads); you need to choose $\theta$ such that
$$[Pr(Bi(1,\theta)\geq X)\geq\frac{\alpha}{2}] \cap [Pr(Bi(1,\theta)\leq X)\geq\frac{\alpha}{2}]$$
When $X=1$ these probabilities are $Pr(Bi(1,\theta)\geq 1)=\theta$ and $Pr(Bi(1,\theta)\leq 1)=1$, so the Clopper Pearson CI implies that $\theta\geq\frac{\alpha}{2}$ (and the trivially always true $1\geq\frac{\alpha}{2}$) when $X=1$. When $X=0$ these probabilities are $Pr(Bi(1,\theta)\geq 0)=1$ and $Pr(Bi(1,\theta)\leq 0)=1-\theta$, so the Clopper Pearson CI implies that $1-\theta \geq\frac{\alpha}{2}$, or $\theta\leq 1-\frac{\alpha}{2}$ when $X=0$. So for a 95% CI we get $[0.025,1]$ when $X=1$, and $[0,0.975]$ when $X=0$.
Thus, one who uses the Clopper Pearson Confidence Interval will never ever be beheaded. Upon observing the interval, it is basically the whole parameter space. But the C-P interval is doing this by giving 100% coverage to a supposedly 95% interval! Basically, the frequentist "cheats" by giving a 95% confidence interval more coverage than he/she was asked to give (although who wouldn't cheat in such a situation? If it were me, I'd give the whole [0,1] interval). If the king asked for an exact 95% CI, this frequentist method would fail regardless of what actually happened (perhaps a better one exists?).
What about the Bayesian Interval? (specifically the Highest Posterior Density (HPD) Bayesian Interval)
Because we know a priori that both heads and tails can come up, the uniform prior is a reasonable choice. This gives a posterior distribution of $(\theta|X)\sim Beta(1+X,2-X)$. All we need to do now is create an interval with 95% posterior probability. Similar to the Clopper-Pearson CI, the cumulative Beta distribution is analytic here also, so that $Pr(\theta \geq \theta^{e} | x=1) = 1-(\theta^{e})^{2}$ and $Pr(\theta \leq \theta^{e} | x=0) = 1-(1-\theta^{e})^{2}$; setting these to 0.95 gives $\theta^{e}=\sqrt{0.05}\approx 0.224$ when $X=1$ and $\theta^{e}= 1-\sqrt{0.05}\approx 0.776$ when $X=0$. So the two credible intervals are $(0,0.776)$ when $X=0$ and $(0.224,1)$ when $X=1$.
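These endpoints are easy to verify numerically in R (my addition, not part of the original argument):
```
binom.test(1, 1)$conf.int   # Clopper-Pearson after one head: [0.025, 1]
binom.test(0, 1)$conf.int   # Clopper-Pearson after one tail: [0, 0.975]
qbeta(0.05, 2, 1)           # lower end of the X = 1 credible interval, ~0.224
qbeta(0.95, 1, 2)           # upper end of the X = 0 credible interval, ~0.776
```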
Thus the Bayesian will be beheaded for his HPD credible interval in the case where he gets the bad coin and it comes up tails, which will occur with a chance of $\frac{1}{10^{12}+1}\times\frac{1}{10}\approx 0$.
First observation, the Bayesian Interval is smaller than the confidence interval. Another thing is that the Bayesian would be closer to the actual coverage stated, 95%, than the frequentist. In fact, the Bayesian is just about as close to the 95% coverage as one can get in this problem. And contrary to Keith's statement, if the bad coin is chosen, 10 Bayesians out of 100 will on average lose their head (not all of them, because the bad coin must come up heads for the interval to not contain $0.1$).
Interestingly, if the CP-interval for 1 observation was used repeatedly (so we have N such intervals, each based on 1 observation), and the true proportion was anything between $0.025$ and $0.975$, then coverage of the 95% CI will always be 100%, and not 95%! This clearly depends on the true value of the parameter! So this is at least one case where repeated use of a confidence interval does not lead to the desired level of confidence.
To quote a genuine 95% confidence interval, then by definition there should be some cases (i.e. at least one) of the observed interval which do not contain the true value of the parameter. Otherwise, how can one justify the 95% tag? Would it not be just as valid or invalid to call it a 90%, 50%, 20%, or even 0% interval?
I do not see how simply stating "it actually means 95% or more" without a complementary restriction is satisfactory. This is because the obvious mathematical solution is the whole parameter space, and the problem is trivial. Suppose I want a 50% CI? If it only bounds the false negatives then the whole parameter space is a valid CI using only this criterion.
Perhaps a better criterion is (and this is what I believe is implicit in the definition by Keith) "as close to 95% as possible, without going below 95%". The Bayesian Interval would have a coverage closer to 95% than the frequentist (although not by much), and would not go under 95% in the coverage ($\text{100%}$ coverage when $X=0$, and $100\times\frac{10^{12}+\frac{9}{10}}{10^{12}+1}\text{%} > \text{95%}$ coverage when $X=1$).
In closing, it does seem a bit odd to ask for an interval of uncertainty, and then evaluate that interval by using the true value which we were uncertain about. A "fairer" comparison, for both confidence and credible intervals, to me seems to be the truth of the statement of uncertainty given with the interval.
| null | CC BY-SA 4.0 | null | 2011-01-19T08:05:54.043 | 2018-08-21T11:28:34.830 | 2018-08-21T11:28:34.830 | 53690 | 2392 | null |
6374 | 2 | null | 6181 | 9 | null | The multiple correlation in standard linear regression cannot be negative (the maths to show this is easy), although it depends on what "multiple correlation" is taken to mean. The usual way you would calculate $R^{2}$ is:
$$R^2=\frac{SSR}{TSS}$$
where
$$ SSR = \sum_{i} (\hat{Y_i}-\bar{Y})^2$$
and
$$ TSS = \sum_{i} (Y_i-\bar{Y})^2$$
Since sums of squares can never be negative, neither can the $R^2$ value, as long as its calculated this way.
However, $R^2$ calculated this way can be greater than 1 if you use an estimator for which the observed residuals do not sum to zero. Mathematically, $R^2$ will necessarily be bounded by 1 if
$$\sum_{i} (\hat{Y_i}-Y_i)=0$$
and
$$\sum_{i} (\hat{Y_i}-Y_i)\hat{Y_i}=0$$
Or in words, the average of the residuals is equal to 0, and the fitted values are uncorrelated with the residuals over the whole data set.
This is because you can expand TSS as follows
$$ TSS = \sum_{i} (Y_i-\bar{Y})^2 = \sum_{i} ([Y_i-\hat{Y_i}]-[\bar{Y}-\hat{Y_i}])^2$$
$$=\sum_{i} (Y_i-\hat{Y_i})^2-2\sum_{i} [Y_i-\hat{Y_i}][\bar{Y}-\hat{Y_i}]+\sum_{i} (\bar{Y}-\hat{Y_i})^2$$
$$=\sum_{i} (Y_i-\hat{Y_i})^2-2\bar{Y}\sum_{i} [Y_i-\hat{Y_i}]+2\sum_{i} [Y_i-\hat{Y_i}]\hat{Y_i}+\sum_{i} (\bar{Y}-\hat{Y_i})^2$$
$$=\sum_{i} (Y_i-\hat{Y_i})^2+\sum_{i} (\bar{Y}-\hat{Y_i})^2$$
$$\implies TSS=SSR+\sum_{i} (Y_i-\hat{Y_i})^2 \geq SSR \geq 0$$
$$\implies 1 \geq \frac{SSR}{TSS}=R^2 \geq 0$$
The constraints listed are always satisfied by the usual OLS estimators (in fact they form part of the equations that define OLS estimation)
$R^2$ can go negative if it is calculated by $1-\frac{SSE}{TSS}$, where $SSE=\sum_{i} (Y_i-\hat{Y_i})^2$, instead of the way I described.
As a (silly) example of $R^2>1$, you can put as the estimate $\hat{Y_i}=\bar{Y}+TSS$, so that $SSR=n(TSS)^2$ and $R^2=n(TSS)$, which will exceed 1 for big enough $n$ or $TSS$. To make $R^2$ go negative, set $\hat{Y_i}=Y_i+TSS$, so that $SSE=n(TSS)^2$ and $R^2=1-n(TSS)$, which will be less than 0 for big enough $n$ and $TSS$.
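A quick numerical illustration of the two definitions (simulated data, my addition):
```
set.seed(1)
y   <- rnorm(30); x <- rnorm(30)
fit <- lm(y ~ x)
TSS <- sum((y - mean(y))^2)
sum((fitted(fit) - mean(y))^2) / TSS   # SSR/TSS: always between 0 and 1 for OLS
bad <- mean(y) + 10                    # a deliberately awful constant 'fit'
1 - sum((y - bad)^2) / TSS             # 1 - SSE/TSS: negative for this estimator
```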
| null | CC BY-SA 2.5 | null | 2011-01-19T09:30:53.440 | 2011-01-19T09:30:53.440 | null | null | 2392 | null |
6375 | 1 | 6376 | null | 14 | 569 | I would like to learn a bit more on interactive data visualization (zooming, pointing, brushing, point-mapping and so on). I would welcome any:
- Tutorial/guide/book(?)/video on how to use such methods for statistical exploration.
- Pointers for good/interesting interactive data-viz packages (in R, and outside of it)
Just to start the ball rolling, I know that in R there are various ways to get interactive visualization, like [rggobi](http://www.ggobi.org/rggobi/), the new [googleViz R package](http://www.r-bloggers.com/r-and-google-visualization-api/), the [animation package](http://cran.r-project.org/web/packages/animation/index.html) and some others. But if there are other packages worth exploring (offering things that R doesn't), I would be glad to know about them (like JMP, Matlab, SPSS, SAS, Excel, and so on).
p.s: this is the first question to use the tag "interactive-visualization"
| Resources for learning to use (/create) dynamic (/interactive) statistical visualization | CC BY-SA 3.0 | null | 2011-01-19T10:07:22.923 | 2017-04-15T02:07:13.583 | 2017-04-15T02:07:13.583 | 11887 | 253 | [
"data-visualization",
"interactive-visualization"
]
|
6376 | 2 | null | 6375 | 13 | null | Apart from [Protovis](http://vis.stanford.edu/protovis/) (HTML+JS) or [Mayavi](http://mayavi.sourceforge.net/) (Python), I would recommend [Processing](http://processing.org/) which is
>
an open source programming language and environment for people who want to create images, animations, and interactions. Initially developed to serve as a software sketchbook and to teach fundamentals of computer programming within a visual context.
There are a lot of open-source scripts on [http://www.openprocessing.org/](http://www.openprocessing.org/), and a lot of [related books](http://processing.org/learning/books/) that deal with Processing but also data visualization.
I know there is a project to provide an R interface, [rprocessing](https://r-forge.r-project.org/projects/rprocessing/), but I don't know how it goes. There's also an interface with clojure/incanter (see e.g., [Creating Processing Visualizations with Clojure and Incanter](http://data-sorcery.org/2009/08/30/processing-intro/)).
There are many online resources, among which Stanford class notes, e.g. [CS448B](https://graphics.stanford.edu/wikis/cs448b-10-fall), or [7 Classic Foundational Vis Papers You Might not Want to Publicly Confess you Don’t Know](http://fellinlovewithdata.com/guides/7-classic-foundational-vis-papers).
| null | CC BY-SA 2.5 | null | 2011-01-19T10:43:48.173 | 2011-01-19T10:52:57.817 | 2011-01-19T10:52:57.817 | 930 | 930 | null |
6377 | 1 | null | null | 3 | 268 | I have hourly data for a variable for several years. I want to analyse each month separately. How do I arrange the data for the same month of different years?
For example, suppose I have Jan 1997, Jan 1998, and Jan 1999. Do I have to find a mean for, say, 18 Jan 1997, 18 Jan 1998 and 18 Jan 1999?
Update
```
Hours   Jan96   Jan97   Jan98   Jan99
01:00   3       2       4       7
02:00   5       2       6       5
...     ...     ...     ...     ...
23:00   NA      5       7       3
00:00   ...     ...     ...     ...
01:00   ...     ...     ...     ...
```
The dots each represent a value and the data is for all days of the month. NA means that the value is missing.
| Arranging hourly data for several years | CC BY-SA 2.5 | null | 2011-01-19T12:41:46.500 | 2011-03-03T20:54:19.110 | 2011-03-03T20:54:19.110 | null | 2959 | [
"time-series"
]
|
6378 | 2 | null | 6368 | 3 | null | Since you've got a 2x2 design, there is only one (unique) interaction contrast. In that case, the interaction-test in the two-factorial ANOVA is equivalent to the two-sided test of the given contrast. Since you've got a one-sided a-priori hypothesis, you can take the interaction p-value of the ANOVA output and divide it by 2. If you want to test other (non-interaction) contrasts as well, you may have to think about p-value adjustment. In R:
```
> Nj <- c(41, 37, 42, 40) # generate some data in a 2x2 design
> DVa <- rnorm(Nj[1], 0, 1)
> DVb <- rnorm(Nj[2], 0.3, 1)
> DVc <- rnorm(Nj[3], 0.6, 1)
> DVd <- rnorm(Nj[4], 1.0, 1)
> DV <- c(DVa, DVb, DVc, DVd)
> IV1 <- factor(rep(1:2, c(sum(Nj[1:2]), sum(Nj[3:4]))))
> IV2 <- factor(rep(c(1:2, 1:2), Nj))
> summary(aov(DV ~ IV1*IV2)) # ANOVA with interaction test
Df Sum Sq Mean Sq F value Pr(>F)
IV1 1 20.959 20.9593 21.8302 6.392e-06 ***
IV2 1 2.358 2.3577 2.4556 0.11913
IV1:IV2 1 3.172 3.1722 3.3040 0.07103 .
Residuals 156 149.777 0.9601
# test single contrast in associated one-way design
> IV <- interaction(IV1, IV2) # factor for associated one-way design
> cc <- c(-1/2, 1/2, 1/2, -1/2) # contrast coefficients
> library(multcomp) # for glht()
# test single contrast, p-value = 0.5 * that of ANOVA interaction test
> summary(glht(aov(DV ~ IV), linfct=mcp(IV=cc), alternative="less"))
Simultaneous Tests for General Linear Hypotheses
Multiple Comparisons of Means: User-defined Contrasts
Fit: aov(formula = DV ~ IV)
Linear Hypotheses:
Estimate Std. Error t value Pr(<t)
1 >= 0 -0.2819 0.1551 -1.818 0.0355 *
```
| null | CC BY-SA 2.5 | null | 2011-01-19T12:52:07.167 | 2011-01-19T13:06:52.697 | 2011-01-19T13:06:52.697 | 1909 | 1909 | null |
6379 | 1 | 6393 | null | 15 | 2206 | I have a set of players. They play against each other (pairwise). Pairs of players are chosen randomly. In any game, one player wins and another one loses. The players play with each other a limited number of games (some players play more games, some less). So, I have data (who wins against whom and how many times). Now I assume that every player has a ranking that determines the probability of winning.
I want to check if this assumption is actually true. Of course, I can use the [Elo rating system](http://en.wikipedia.org/wiki/Elo_rating_system) or the [PageRank algorithm](http://en.wikipedia.org/wiki/PageRank) to calculate a rating for every player. But by calculating ratings I do not prove that they (ratings) actually exist or that they mean anything.
In other words, I want to have a way to prove (or to check) that players do have different strengths. How can I do it?
ADDED
To be more specific, I have 8 players and only 18 games. So, there are a lot of pairs of players who did not play against each other and a lot of pairs that played only once with each other. As a consequence, I cannot estimate the probability of a win for a given pair of players. I also see, for example, that there is a player who won 6 times in 6 games. But maybe it is just a coincidence.
| How to prove that Elo rating or Page ranking have a meaning for my set? | CC BY-SA 3.0 | null | 2011-01-19T13:07:36.177 | 2022-12-11T17:49:29.033 | 2011-11-09T15:00:12.230 | 919 | 2407 | [
"goodness-of-fit",
"ranking",
"rating"
]
|
6380 | 1 | 6383 | null | 16 | 15223 | I am using GNU R on an Ubuntu Lucid PC which has 4 CPUs.
In order to use all 4 CPUs, I installed the "r-cran-multicore" package.
As the package's manual lacks practical examples that I understand, I need advice on how to optimize my script in order to make use of all 4 CPUs.
My dataset is a data.frame (called P1) that has 50,000 rows and 1600 columns. For each row, I'd like to calculate the maximum, sum and mean. My script looks as follows:
```
p1max <- 0
p1mean <- 0
p1sum <-0
plength <- length(P1[,1])
for(i in 1:plength){
p1max <- c(p1max, max(P1[i,]))
p1mean <- c(p1mean, mean(P1[i,]))
p1sum <- c(p1sum, sum(P1[i,]))
}
```
Could anyone please tell me how to modify and run the script in order to use all 4 CPUs?
| How to optimize my R script in order to use "multicore" | CC BY-SA 2.5 | null | 2011-01-19T13:25:00.700 | 2011-01-19T21:44:00.163 | 2011-01-19T21:44:00.163 | null | 2838 | [
"r"
]
|
6381 | 2 | null | 6380 | 1 | null | Have a look at the [snow](http://cran.r-project.org/web/packages/snow/index.html) and [snowfall](http://cran.r-project.org/web/packages/snowfall/index.html) packages. Plenty of examples with those...
If you want to speed up that specific code rather than learning about R and parallelism you should do this
```
P1 = matrix(rnorm(100), ncol=10, nrow=10)
apply(P1, 1, max)
apply(P1, 1, mean)
apply(P1, 1, sum)
```
| null | CC BY-SA 2.5 | null | 2011-01-19T13:37:31.400 | 2011-01-19T14:20:03.513 | 2011-01-19T14:20:03.513 | 300 | 300 | null |
6382 | 2 | null | 6353 | 9 | null |
- If this is an SPSS syntax question, the answer is just put the categorical variable, coded appropriately, into the variable list for "independent variables" along with the continuous one.
- On the statistics: Is your categorical variable binary? If so, you need to use a dummy or other valid contrast code. If it is not binary, is your categorical variable ordinal or nominal? If nominal, then again, you must use some contrasting code strategy--in effect modeling the impact of each level of the variable on the outcome or "dependent" variable. If the categorical variable is ordinal, then most likely the sensible thing to do is to enter it as-is into the model, just as you would with a continuous predictor (i.e., "independent") variable. You would be assuming, in that case, that the increments between levels of the categorical predictor ("independent") variable are equal; only rarely will this be a mistake, but when it is, you should again use a contrast code & model the impact of each level. This question comes up in this forum quite often -- here is a good analysis
- How to handle missing data is, in my view, a completely separate matter. My understanding is that pairwise deletion is not viewed as a valid approach for multivariate regression. Listwise is pretty common but can also bias results & certainly is a shame. Multiple imputation is a thing of beauty.
| null | CC BY-SA 2.5 | null | 2011-01-19T14:03:26.643 | 2011-01-19T14:03:26.643 | 2017-04-13T12:44:20.730 | -1 | 11954 | null |
6383 | 2 | null | 6380 | 11 | null | Use [foreach](http://cran.r-project.org/web/packages/foreach/index.html) and [doMC](http://cran.r-project.org/web/packages/doMC/index.html). The detailed explanation can be [found here](http://cran.r-project.org/web/packages/doMC/vignettes/gettingstartedMC.pdf). Your script will change very little, the line
```
for(i in 1:plength){
```
should be changed to
```
foreach(i=1:plength) %dopar% {
```
The prerequisites for any multitasking script using these packages are
```
library(foreach)
library(doMC)
registerDoMC()
```
A note of caution: according to the documentation, you cannot use this in a GUI.
As for your problem, do you really need multitasking? Your data.frame takes about 1.2GB of RAM, so it should fit into your memory. So you can simply use apply:
```
p1smry <- apply(P1,1,summary)
```
The result will be a matrix with summaries of each row.
You can also use function mclapply which is in the package multicore. Then your script might look like this:
```
loopfun <- function(i) {
summary(P1[i,])
}
res <- mclapply(1:nrow(P1),loopfun)
```
This will return a list, where the i-th element will be the summary of the i-th row. You can convert it to a matrix using sapply
```
mres <- sapply(res,function(x)x)
```
| null | CC BY-SA 2.5 | null | 2011-01-19T14:25:28.853 | 2011-01-19T14:34:42.153 | 2011-01-19T14:34:42.153 | 2116 | 2116 | null |
6384 | 2 | null | 6380 | 15 | null | You've already got an answer as to how to use more than one core, but the real problem is with the way you have written your loops. Never extend your result vector/object at each iteration of a loop. If you do this, you force R to copy your result vector/object and extend it which all takes time. Instead, preallocate enough storage space before you start the loop and fill in as you go along. Here is an example:
```
set.seed(1)
p1 <- matrix(rnorm(10000), ncol=100)
system.time({
p1max <- p1mean <- p1sum <- numeric(length = 100)
for(i in seq_along(p1max)){
p1max[i] <- max(p1[i,])
p1mean[i] <- mean(p1[i,])
p1sum[i ]<- sum(p1[i,])
}
})
user system elapsed
0.005 0.000 0.005
```
Or you can do these things via `apply()`:
```
system.time({
p1max2 <- apply(p1, 1, max)
p1mean2 <- apply(p1, 1, mean)
p1sum2 <- apply(p1, 1, sum)
})
user system elapsed
0.007 0.000 0.006
```
But note that this is no faster than doing the loop properly and sometimes slower.
However, always be on the lookout for vectorised code. You can do row sums and means using `rowSums()` and `rowMeans()` which are quicker than either the loop or `apply` versions:
```
system.time({
p1max3 <- apply(p1, 1, max)
p1mean3 <- rowMeans(p1)
p1sum3 <- rowSums(p1)
})
user system elapsed
0.001 0.000 0.002
```
If I were a betting man, I would have money on the third approach I mention beating `foreach()` or the other multi-core options in a speed test on your matrix because they would have to speed things up considerably to justify the overhead incurred in setting up the separate processes that are farmed out to the different CPU cores.
Update: Following the comment from @shabbychef, is it faster to do the sums once and reuse them in the computation of the mean?
```
system.time({
p1max4 <- apply(p1, 1, max)
p1sum4 <- rowSums(p1)
p1mean4 <- p1sum4 / ncol(p1)
})
user system elapsed
0.002 0.000 0.002
```
Not in this test run, but this is far from exhaustive...
| null | CC BY-SA 2.5 | null | 2011-01-19T14:54:32.927 | 2011-01-19T21:32:27.480 | 2011-01-19T21:32:27.480 | 1390 | 1390 | null |
6385 | 2 | null | 6379 | 0 | null | You want to test the hypothesis that the probability of a result depends on the matchup. $H_0$, then, is that every game is essentially a coin flip.
A simple test for this would be calculating the proportion of times the player with more previous games played will win, and comparing that to the binomial cumulative distribution function. That should show the existence of some kind of effect.
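A hedged sketch of that comparison in R, with hypothetical counts `w` (wins by the putatively stronger player) out of `m` such games:
```
w <- 13; m <- 18          # made-up counts for illustration
binom.test(w, m, p = 0.5) # exact test against the coin-flip null
```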
If you're interested in the quality of the Elo rating system for your game, a simple method would be to run a 10-fold cross-validation on the predictive performance of the Elo model (which actually assumes outcomes aren't iid, but I'll ignore that) and compare that to a coin flip.
| null | CC BY-SA 2.5 | null | 2011-01-19T15:34:31.023 | 2011-01-19T15:34:31.023 | null | null | 2456 | null |
6386 | 2 | null | 6377 | 3 | null | Could you give an example of what your dataset looks like? You're probably looking for the [reshape command](http://gbi.agrsci.dk/~shd/misc/Rdocs/reshape.pdf).
Let's say you have a data frame with 2 columns, Date and X, where X is the time series you want to analyze. This dataset is in "long" form, with one observation per row. To convert it to "wide" form, where each month is a separate column, do something like this:
```
Date<-c('2009-01-01','2009-02-01','2010-01-01','2010-02-01')
X<-c(1,2,3,4)
Year<-as.numeric(format(as.POSIXct(Date), "%Y"))
Month<-as.numeric(format(as.POSIXct(Date), "%m"))
df<-as.data.frame(cbind(Year,Month,X))
df
df<-reshape(df,v.names='X',idvar="Year",timevar='Month',direction='wide')
df
mean(df$X.1) #January mean
mean(df$X.2) #Feb mean
```
| null | CC BY-SA 2.5 | null | 2011-01-19T15:40:54.390 | 2011-01-19T15:40:54.390 | null | null | 2817 | null |
6387 | 1 | 6397 | null | 5 | 793 | I would be very glad if someone can point me out to statistics and econometrics distance learning courses like [http://www2.statistics.com](http://www2.statistics.com).
Thanks in advance
| Statistics and econometrics distance learning | CC BY-SA 2.5 | null | 2011-01-19T16:20:10.837 | 2011-01-19T19:10:37.907 | 2011-01-19T19:10:37.907 | 930 | 1251 | [
"teaching",
"econometrics"
]
|
6388 | 2 | null | 6379 | 4 | null | If you want to test the null hypothesis that each player is equally likely to win or lose each game, I think you want a test of symmetry of the contingency table formed by tabulating winners against losers.
Set up the data so that you have two variables, 'winner' and 'loser' containing the ID of the winner and loser for each game, i.e. each 'observation' is a game. You can then construct a contingency table of winner vs loser. Your null hypothesis is that you'd expect this table to be symmetric (on average over repeated tournaments). In your case, you'll get an 8×8 table where most of the entries are zero (corresponding to players that never met), ie. the table will be very sparse, so an 'exact' test will almost certainly be necessary rather than one relying on asymptotics.
Such an exact test is available in Stata with the [symmetry command](http://www.stata.com/help.cgi?symmetry). In this case, the syntax would be:
```
symmetry winner loser, exact
```
No doubt it's also implemented in other statistics packages that I'm less familiar with.
| null | CC BY-SA 2.5 | null | 2011-01-19T16:23:40.973 | 2011-01-19T16:23:40.973 | null | null | 449 | null |
6389 | 2 | null | 6387 | 3 | null | Stanford has some good programs as part of [SCPD](http://scpd.stanford.edu/). In particular, you might be interested in the ["Data Mining and Applications Graduate Certificate"](http://scpd.stanford.edu/public/category/courseCategoryCertificateProfile.do?method=load&certificateId=1209602#searchResults).
| null | CC BY-SA 2.5 | null | 2011-01-19T16:24:23.583 | 2011-01-19T16:24:23.583 | null | null | 5 | null |
6390 | 2 | null | 5559 | 4 | null | One approach would be to use [v-optimal histograms](http://en.wikipedia.org/wiki/V-optimal_histograms), which are histograms that select bin sizes in order to minimize the sum of squared errors of the representation. Another alternative is equi-depth histograms which may be useful if you'd like each bin to contain (approximately) the same number of elements.
As Whuber mentioned, it would be helpful to first identify what it is that qualifies as a "good" binning.
From the revised question, it sounds like slotishtype is looking for an equi-depth histogram. Here is some example Matlab code for computing the bins, hope this helps.
```
% INPUT:
% v input vector of values (sorted)
% n number of bins youd like to split the data into
%
% OUTPUT:
% domain provides the representative value for each bin
% rep provides the range of each bin
%
% sample input from a skewed distribution
v = sort(exprnd(1, 1000, 1));
% number of bins
n = 5;
N = length(v);
d = zeros(N, 1);
[sv ix] = sort(v);
% figure out how many elements we'll put into each bin
bin_size = round(N / n);
rep = zeros(n, 3);
s = 1;
for i = 1:n
e = min(N, s + bin_size - 1);
% compute the representative of this bin
% (I use the mean, but you could change this depending on your problem)
rep(i, 1) = mean(sv(s:e));
% this is just to show the range of values each bin takes on
rep(i, 2) = min(sv(s:e));
rep(i, 3) = max(sv(s:e));
% d builds up a representation of the original data using the bin
% representatives for each value (vector quantization)
d(ix(s:e)) = rep(i, 1);
s = e + 1;
end
domain = unique(rep(:, 1));
fprintf('Bin representative values:\n');
rep
fprintf('Range of each bin:\n');
rep(:, 3) - rep(:, 2)
```
| null | CC BY-SA 2.5 | null | 2011-01-19T17:28:12.457 | 2011-01-20T18:06:28.053 | 2011-01-20T18:06:28.053 | 1913 | 1913 | null |
6391 | 1 | null | null | 3 | 294 | I wonder how to reason when selecting samples. I am doing a panel data regression analysis about how euro membership correlates with the budget deficit in the member countries.
Using R that is:
```
BUDGETDEFICIT ~ EURODUMMY + some_controll_variables
```
EURODUMMY is the variable of interest. Using panel data, I have annual observations for many countries. My question is how selecting which sample to use affects the regression analysis.
Should I only use countries that have introduced the euro?
Is adding other European or similarly Western countries beneficial, as they add more data points and/or act as a reference group?
Or is it just bad for the data to add countries that have not introduced the euro, as they have no variation in the variable of interest (EURODUMMY)?
Other comments regarding this problem are also welcome.
| Sample selection and variation in the variable of interest when using panel data | CC BY-SA 2.5 | null | 2011-01-19T17:33:49.987 | 2011-01-20T10:01:57.660 | null | null | 2724 | [
"r",
"sampling",
"panel-data"
]
|
6392 | 1 | 6394 | null | 3 | 120 | I am in the process of modeling the random variable X as the days from issuing an invoice until its payment. This variable is dependent on the days of credit for the invoice, so there are distinct X's depending on the days of credit; let's call them X_c. I'm starting with the most common, X_30, which should be centered around 30, with a very heavy tail and a very rapid ramp-up of the density as it approaches 30.
A basic histogram looks like this:

I have lots of data to try and fit, but I would like to get some pointers on which distributions might model this well from a conceptual point of view.
| Where to start in estimating the days until payment of an invoice? | CC BY-SA 2.5 | null | 2011-01-19T17:55:22.683 | 2011-01-19T19:17:37.303 | 2011-01-19T19:17:37.303 | 449 | 2808 | [
"distributions",
"modeling",
"model-selection",
"survival",
"goodness-of-fit"
]
|
6393 | 2 | null | 6379 | 11 | null | You need a probability model.
The idea behind a ranking system is that a single number adequately characterizes a player's ability. We might call this number their "strength" (because "rank" already means something specific in statistics). We would predict that player A will beat player B when strength(A) exceeds strength(B). But this statement is too weak because (a) it is not quantitative and (b) it does not account for the possibility of a weaker player occasionally beating a stronger player. We can overcome both problems by supposing the probability that A beats B depends only on the difference in their strengths. If this is so, then we can re-express all the strengths if necessary so that the difference in strengths equals the log odds of a win.
Specifically, this model is
$$\mathrm{logit}(\Pr(A \text{ beats } B)) = \lambda_A - \lambda_B$$
where, by definition, $\mathrm{logit}(p) = \log(p) - \log(1-p)$ is the log odds and I have written $\lambda_A$ for player A's strength, etc.
This model has as many parameters as players (but there is one less degree of freedom, because it can only identify relative strengths, so we would fix one of the parameters at an arbitrary value). It is a kind of [generalized linear model](http://en.wikipedia.org/wiki/Generalized_linear_model) (in the Binomial family, with logit link).
The parameters can be estimated by [Maximum Likelihood](http://en.wikipedia.org/wiki/Maximum_likelihood). The same theory provides a means to erect confidence intervals around the parameter estimates and to test hypotheses (such as whether the strongest player, according to the estimates, is significantly stronger than the estimated weakest player).
Specifically, the likelihood of a set of games is the product
$$\prod_{\text{all games}}{\frac{\exp(\lambda_{\text{winner}} - \lambda_{\text{loser}})}{1 + \exp(\lambda_{\text{winner}} - \lambda_{\text{loser}})}}.$$
After fixing the value of one of the $\lambda$, the estimates of the others are the values that maximize this likelihood. Thus, varying any of the estimates reduces the likelihood from its maximum. If it is reduced too much, it is not consistent with the data. In this fashion we can find confidence intervals for all the parameters: they are the limits in which varying the estimates does not overly decrease the log likelihood. General hypotheses can similarly be tested: a hypothesis constrains the strengths (such as by supposing they are all equal), this constraint limits how large the likelihood can get, and if this restricted maximum falls too far short of the actual maximum, the hypothesis is rejected.
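In practice this maximum likelihood fit can be obtained with standard logistic-regression software. A rough R sketch (my addition, with invented game results rather than the asker's data):
```
set.seed(1)
players <- paste0("P", 1:8)
winner  <- sample(players, 18, replace = TRUE)
loser   <- sapply(winner, function(w) sample(setdiff(players, w), 1))
# signed design matrix: +1 for the winner's strength, -1 for the loser's
X <- matrix(0, length(winner), length(players), dimnames = list(NULL, players))
X[cbind(seq_along(winner), match(winner, players))] <- 1
X[cbind(seq_along(loser),  match(loser,  players))] <- -1
X <- X[, -1]                  # fix player 1's strength at 0 (the reference)
y <- rep(1, length(winner))   # each game is coded from the winner's side
fit <- glm(y ~ X - 1, family = binomial)
summary(fit)                  # expect huge standard errors (or separation
                              # warnings) with only 18 games and 8 players
```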
---
In this particular problem there are 18 games and 7 free parameters. In general that is too many parameters: there is so much flexibility that the parameters can be quite freely varied without changing the maximum likelihood much. Thus, applying the ML machinery is likely to prove the obvious, which is that there likely are not enough data to have confidence in the strength estimates.
| null | CC BY-SA 3.0 | null | 2011-01-19T18:05:02.633 | 2011-11-09T18:03:12.597 | 2011-11-09T18:03:12.597 | 2970 | 919 | null |
6394 | 2 | null | 6392 | 4 | null | As this is an example of [time to event analysis](http://en.wikipedia.org/wiki/Time_to_event_analysis), the obvious starting point would be the [Weibull distribution](http://en.wikipedia.org/wiki/Weibull_distribution).
| null | CC BY-SA 2.5 | null | 2011-01-19T18:30:35.307 | 2011-01-19T18:30:35.307 | null | null | 449 | null |
6396 | 2 | null | 4840 | 2 | null | Try the forecast package for R. Specifically, the auto.arima() and ets() functions will model the seasonality and trend in the ozone data and allow you to make monthly predictions for the future.
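For example, assuming `ozone` is a monthly `ts` object, something along these lines:
```
library(forecast)
fit <- auto.arima(ozone)      # or ets(ozone)
plot(forecast(fit, h = 12))   # point forecasts and intervals for the next year
```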
| null | CC BY-SA 2.5 | null | 2011-01-19T18:52:37.563 | 2011-01-19T18:52:37.563 | null | null | 2817 | null |
6397 | 2 | null | 6387 | 3 | null | Here are a few links:
[http://www.stat.tamu.edu/dist/](http://www.stat.tamu.edu/dist/)
[http://www.stat.colostate.edu/distance_degree.html](http://www.stat.colostate.edu/distance_degree.html)
[http://www.worldcampus.psu.edu/AppliedStatisticsCertificate.shtml](http://www.worldcampus.psu.edu/AppliedStatisticsCertificate.shtml)
| null | CC BY-SA 2.5 | null | 2011-01-19T19:03:46.913 | 2011-01-19T19:03:46.913 | null | null | 2775 | null |
6398 | 2 | null | 6268 | 4 | null | Here are some suggestions (about your plot, not about how I would illustrate correlation/regression analysis):
- The two univariate plots you show in the right and left margins may be simplified with a call to rug();
- I find it more informative to show a density plot or a boxplot of $X$ and $Y$, at the risk of being evocative of a bi-normality assumption, which makes no sense in this context;
- In addition to the regression line, it is worth showing a non-parametric estimate of the trend, like a loess (this is good practice and highly informative about possible local non-linearities);
- Points might be highlighted (with varying color or size) according to leverage or Cook's distances, i.e. any of those measures that show how influential individual values are on the estimated regression line. I'll second @DWin's comment: I think it is better to highlight how individual points "degrade" goodness-of-fit or induce some kind of departure from the linearity assumption.
Of note, this graph assumes X and Y are non-paired data, otherwise I would stick to a Bland-Altman plot ($(X-Y)$ against $(X+Y)/2$), in addition to scatterplot.
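A rough R sketch of the suggestions above (made-up data; point size scaled by Cook's distance):

```r
# made-up data: scatterplot with marginal rugs, least-squares and loess fits,
# and point size scaled by Cook's distance
set.seed(1)
x <- rnorm(100); y <- 0.6 * x + rnorm(100)
fit <- lm(y ~ x)

plot(x, y)
rug(x, side = 1); rug(y, side = 2)               # marginal distributions
abline(fit, col = "blue")                        # least-squares line
lines(lowess(x, y), col = "red", lty = 2)        # non-parametric trend
points(x, y, cex = 1 + 5 * cooks.distance(fit))  # influential points drawn larger
```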
| null | CC BY-SA 2.5 | null | 2011-01-19T19:08:24.507 | 2011-01-19T19:08:24.507 | null | null | 930 | null |
6400 | 1 | 6402 | null | 6 | 2613 | I am conducting a multiple first-order regression analysis of genetic data. The vectors of y-values do not all follow a normal distribution; therefore I need to implement a non-parametric regression using ranks.
Is the `lm()` function in R suitable for this, i.e.,
```
lin.reg <- lm(Y~X*Z)
```
where Y, X and Z are vectors of ordinal categorical variables?
I am interested in the p-value assigned to the coefficient of the interaction term in the first order model. The `lm()` function obtains this from a t-test, i.e., is the interaction coefficient significantly different from zero.
Is the automatic implementation of a t-test to determine this p-value appropriate when the regression model is carried out on data as described?
Thanks.
EDIT
Sample data for clarity:
```
Y <- c(4, 1, 2, 3) # A vector of ranks
X <- c(0, 2, 1, 1) # A vector of genotypes (0 = aa, 1 = ab, 2 = bb)
Z <- c(2, 2, 1, 0)
```
| Non-parametric regression | CC BY-SA 2.5 | null | 2011-01-19T20:48:08.387 | 2011-01-21T17:19:22.187 | 2011-01-19T21:10:30.230 | 2842 | 2842 | [
"r",
"regression",
"nonparametric",
"genetics"
]
|
6401 | 1 | 6403 | null | 3 | 2758 | I am currently estimating a bunch of ARMA models, and using them to predict subsets of my data. In order to evaluate their predictive accuracy I would like to make some ROC plots, however since all of my variables are continuous, I wonder how this could be done in R.
Best, Thomas
P.S: I have looked at the ROCR package, but this seems to only work for dichotomous variables.
| ROC plot for continuous data in R | CC BY-SA 2.5 | null | 2011-01-20T00:58:16.340 | 2018-08-23T11:45:24.727 | 2011-01-20T03:58:52.230 | 159 | 2704 | [
"r",
"forecasting",
"predictive-models",
"roc"
]
|
6402 | 2 | null | 6400 | 2 | null | If your response variable is ordinal, you may want to consider an "ordered logistic regression". This is basically where you model the cumulative probabilities {in the simple example, you would model $Pr(Y\leq 1),Pr(Y\leq 2),Pr(Y\leq 3)$}. This incorporates the ordering of the response into the model, without the need for an arbitrary assumption which transforms the ordered response into a numerical one (although having said that, this can be a useful first step in exploratory analysis, or in selecting which $X$ and $Z$ variables are not necessary).
There is a way that you can get the glm() function in R to give you the MLEs for this model (otherwise you would need to write your own algorithm to get the MLEs). You define a new set of variables, say $W$, defined as
$$W_{1jk} = \frac{Y_{1jk}}{\sum_{i=1}^{i=I} Y_{ijk}}$$
$$W_{2jk} = \frac{Y_{2jk}}{\sum_{i=2}^{i=I} Y_{ijk}}$$
$$...$$
$$W_{I-1,jk} = \frac{Y_{I-1,jk}}{\sum_{i=I-1}^{i=I} Y_{ijk}}$$
Where $i=1,\dots,I$ indexes the $Y$ categories, $j=1,\dots,J$ indexes the $X$ categories, and $k=1,\dots,K$ indexes the $Z$ categories. Then fit a glm() of $W$ on $X$ and $Z$ using the complementary log-log link function. Denoting $\theta_{ijk}=Pr(Y_{ijk}\leq i)$ as the cumulative probability, the MLEs of the thetas (assuming a multinomial distribution for the $Y_{ijk}$ values) are then
$$\hat{\theta}_{ijk}=\hat{W}_{ijk}+\hat{\theta}_{(i-1)jk}(1-\hat{W}_{ijk}) \ \ \ i=1,\dots ,I-1$$
Where $\hat{\theta}_{0jk}=0$ and $\hat{\theta}_{Ijk}=1$ and $\hat{W}_{ijk}$ are the fitted values from the glm.
You can then use the deviance table (use the anova() function on the glm object) to assess the significance of the regressor variables.
EDIT: one thing I forgot to mention in my original answer was that in the glm() function, you need to specify weights when fitting the model to $W$, which are equal to the denominators in the respective fractions defining each $W$.
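As an aside, here is a sketch of a more standard off-the-shelf route, a proportional-odds (cumulative logit) model via `polr()` from MASS rather than the glm() construction above, shown on simulated data standing in for the question's toy vectors:

```r
library(MASS)   # for polr()
set.seed(42)
n <- 200
X <- sample(0:2, n, replace = TRUE)              # genotype coded 0/1/2
Z <- sample(0:2, n, replace = TRUE)
latent <- 0.5 * X + 0.3 * Z + 0.4 * X * Z + rnorm(n)
Y <- cut(latent, quantile(latent, 0:4 / 4), include.lowest = TRUE,
         labels = 1:4, ordered_result = TRUE)    # ordinal response with 4 levels

full    <- polr(Y ~ X * Z, Hess = TRUE)
reduced <- polr(Y ~ X + Z, Hess = TRUE)
summary(full)                                    # t-values for each coefficient
lr <- 2 * as.numeric(logLik(full) - logLik(reduced))
pchisq(lr, df = 1, lower.tail = FALSE)           # likelihood-ratio p-value for the X:Z term
```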
You could also try a Bayesian approach, but you would most likely need to use sampling techniques to get your posterior, and using the multinomial likelihood (but parameterised with respect to $\theta_{ijk}$, so the likelihood function will have differences of the form $\theta_{ijk}-\theta_{i-1,jk}$), the MLE's are a good "first crack" at genuinely fitting the model, and give an approximate Bayesian solution (as you may have noticed, I prefer Bayesian inference)
This method is in my lecture notes, so I'm not really sure how to reference it (there are no references given in the notes) apart from what I've just said.
Just another note: I won't harp on it, but p-values are not all they are cracked up to be. A good post discussing this can be found [here](https://stats.stackexchange.com/questions/6308/article-about-misuse-of-statistical-method-in-nytimes). I like Harold Jeffreys's quote about p-values (from his book Probability Theory): "A null hypothesis may be rejected because it did not predict something that was not observed" (this is because p-values ask for the probability of events more extreme than what was observed).
| null | CC BY-SA 2.5 | null | 2011-01-20T03:11:20.827 | 2011-01-21T17:19:22.187 | 2017-04-13T12:44:24.677 | -1 | 2392 | null |
6403 | 2 | null | 6401 | 3 | null | Well, that is the basis for ROC curves. You see what proportion of correct predictions (i.e. yes or no) are at a variety of predictor levels. The analog of an ROC curve for continuous outcomes would be a validation plot. You develop a prediction score on a training set and validate it on a test set. Or you develop it on the full set and then use bootstrap methods to create neo-samples for validation.
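A minimal sketch of that idea, with made-up data and a simple linear model standing in for the ARMA-based predictions:

```r
# made-up data: fit on a training set, compare predictions with observations on a test set
set.seed(1)
n <- 200
x <- rnorm(n); y <- 2 + 0.8 * x + rnorm(n)
train <- sample(n, 150)                       # training set; the rest is the test set

fit  <- lm(y ~ x, subset = train)
pred <- predict(fit, newdata = data.frame(x = x[-train]))

plot(pred, y[-train], xlab = "Predicted", ylab = "Observed")
abline(0, 1)                                  # points on this line are perfectly predicted
```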
| null | CC BY-SA 2.5 | null | 2011-01-20T03:23:58.857 | 2011-01-20T03:23:58.857 | null | null | 2129 | null |
6404 | 1 | 6411 | null | 13 | 1487 | Anyone that follows baseball has likely heard about the out-of-nowhere MVP-type performance of Toronto's Jose Bautista. In the four years previous, he hit roughly 15 home runs per season. Last year he hit 54, a number surpassed by only 12 players in baseball history.
In 2010 he was paid 2.4 million and he's asking the team for 10.5 million for 2011. They're offering 7.6 million. If he can repeat that in 2011, he'll be easily worth either amount. But what are the odds of him repeating? How hard can we expect him to regress to the mean? How much of his performance can we expect was due to chance? What can we expect his regression-to-the-mean adjusted 2010 totals to be? How do I work it out?
I've been playing around with the Lahman Baseball Database and squeezed out a query that returns home run totals for all players in the previous five seasons who've had at least 50 at-bats per season.
The table looks like this (notice Jose Bautista in row 10)
```
first last hr_2006 hr_2007 hr_2008 hr_2009 hr_2010
1 Bobby Abreu 15 16 20 15 20
2 Garret Anderson 17 16 15 13 2
3 Bronson Arroyo 2 1 1 0 1
4 Garrett Atkins 29 25 21 9 1
5 Brad Ausmus 2 3 3 1 0
6 Jeff Baker 5 4 12 4 4
7 Rod Barajas 11 4 11 19 17
8 Josh Bard 9 5 1 6 3
9 Jason Bartlett 2 5 1 14 4
10 Jose Bautista 16 15 15 13 54
```
and the full result (232 rows) is available [here](http://datalove.org/files/regress_hr.csv).
I really don't know where to start. Can anyone point me in the right direction? Some relevant theory, and R commands would be especially helpful.
Thanks kindly
Tommy
Note: The example is a little contrived. Home runs definitely aren't the best indicator of a player's worth, and home run totals don't consider the varying number of chances per season that a batter has to hit home runs (plate appearances). Nor do they reflect that some players play in more favourable stadiums, or that league-average home runs change year over year. Etc. Etc. If I can grasp the theory behind accounting for regression to the mean, I can use it on more suitable measures than HRs.
| Measuring Regression to the Mean in Hitting Home Runs | CC BY-SA 2.5 | null | 2011-01-20T03:49:06.957 | 2014-09-28T17:20:22.787 | null | null | 827 | [
"r",
"regression",
"modeling"
]
|
6405 | 2 | null | 5734 | 1 | null | Gretl is a good, well-established open-source econometrics package with strong support for time series.
| null | CC BY-SA 2.5 | null | 2011-01-20T06:28:47.500 | 2011-01-20T06:28:47.500 | null | null | null | null |
6406 | 2 | null | 6404 | 2 | null | You need additional data on the players and their characteristics over the time span for which you have home-run data. As a first step, add some time-varying characteristics such as the player's age or experience. Then you can use HLM or panel data models. You will need to prepare the data in the form:
```
First Last Year HR Experience Age
1. Bobby Abreu 2005 15 6 26
```
The simplest model would then be (the function `lme()` is from the package nlme):
```
library(nlme)
lme(HR ~ Experience, random = ~ Experience | Year, data = your_data)
```
This model relies heavily on the assumption that each player's home-run number depends only on experience, allowing for some variability. It probably will not be very accurate, but you will at least get a feel for how unlikely Jose Bautista's numbers are compared to an average player. The model can be further improved by adding other player characteristics.
| null | CC BY-SA 2.5 | null | 2011-01-20T09:36:25.980 | 2011-01-20T09:36:25.980 | null | null | 2116 | null |
6407 | 2 | null | 6391 | 2 | null | Model and data source
As in any other application, you should start from a plausible model that would fit most of the countries in your cross-sectional dimension. Suppose you do have a fairly universal theory that fits most of the countries in the world. Then you could try to include as many countries as possible, using either [World Bank](http://data.worldbank.org/), [IFS](http://www.imfstatistics.org/imf/logon.aspx) statistics or simply [Penn World Table](http://pwt.econ.upenn.edu/php_site/pwt_index.php) data. However, you may also limit yourself to more homogeneous samples, like EU data taken from [Eurostat](http://epp.eurostat.ec.europa.eu/portal/page/portal/eurostat/home/) or [OECD](http://www.oecd.org/document/39/0,3746,en_2649_201185_46462759_1_1_1_1,00.html) statistics.
Cross-section of countries
In panel data models (you may also consider [linear mixed effects](http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-mixed-models.pdf) models as an option, since you also include interaction terms), under your (mostly) quantitative hypothesis, "was there a significant (both statistically and economically) impact of belonging to the euro-zone club on the budget deficit?", you have to balance the number of countries that are in the euro-zone, that adopted the euro in a certain year, and that are outside the club.
>
Should I only use countries that have introduced the Euro?
Is adding other European countries or western like countries beneficial as they add more data points and/or acts like a reference group?
Regarding these particular questions, in my opinion the inclusion of EU countries and some OECD rivals to France, Germany and Italy would be sufficient for your analysis. You should not limit yourself to the euro-zone club only, because of selection bias. Some countries adopted the euro within the time dimension, but it is better to compare them with countries whose fiscal policy is not restricted by the strict Maastricht criteria that euro members are obliged to follow.
>
Or is id only bad for the data to add countries that has not introduced the euro as they have no variation in the variable of interest (EURODUMMY)?
A dummy is a variable that takes only two values, $0$ and $1$, so if your sample included only countries with the dummy equal to $1$ (not the case here, though) you would face a pure multicollinearity problem with the common intercept term that is present by default in a `plm` model.
| null | CC BY-SA 2.5 | null | 2011-01-20T10:01:57.660 | 2011-01-20T10:01:57.660 | 2020-06-11T14:32:37.003 | -1 | 2645 | null |
6408 | 2 | null | 6354 | 0 | null | Just from reading the link, it seems to me that there is probably some packages that do what you want in the machine learning view on CRAN [http://cran.r-project.org/web/views/MachineLearning.html](http://cran.r-project.org/web/views/MachineLearning.html)
Hope this helps.
| null | CC BY-SA 2.5 | null | 2011-01-20T13:29:45.717 | 2011-01-20T13:29:45.717 | null | null | 656 | null |
6409 | 2 | null | 1475 | 4 | null | (Months later,)
a nice way to picture k-clusters and to see the effect of various k
is to build a
[Minimum Spanning Tree](http://en.wikipedia.org/wiki/Minimum_spanning_tree)
and look at the longest edges. For example,

Here there are 10 clusters, with 9 longest edges
855 899 942 954 1003 1005 1069 1134 1267.
For 9 clusters, collapse the cyan 855 edge; for 8, the purple 899;
and so on.
>
The single-link k-clustering algorithm ... is precisely Kruskal's algorithm
... equivalent to finding an MST and deleting the k-1 most expensive edges.
— Wayne,
[Greedy Algorithms](http://www.cs.princeton.edu/~wayne/kleinberg-tardos/04greedy-2x2.pdf).
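A minimal R sketch of that equivalence on made-up 2-D data (single-link hierarchical clustering cut into k groups):

```r
# made-up 2-D data: single-link clustering is the MST with the k-1 longest edges removed
set.seed(1)
X <- matrix(rnorm(200), ncol = 2)
hc <- hclust(dist(X), method = "single")   # single linkage ~ Kruskal's algorithm
cl <- cutree(hc, k = 10)                   # 10 clusters
plot(X, col = cl, pch = 19)
```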
22000 points, 242M pairwise distances, take ~ 1 gigabyte (float32):
might fit.
To view a high-dimensional tree or graph in 2d,
see Multidimensional Scaling (also from Kruskal),
and the huge literature on dimension reduction.
However, in dim > 20 say, most distances will be near the median,
so I believe dimension reduction cannot work there.
| null | CC BY-SA 2.5 | null | 2011-01-20T14:06:55.660 | 2011-01-20T15:30:01.123 | 2011-01-20T15:30:01.123 | 557 | 557 | null |
6410 | 1 | null | null | 8 | 3752 | I've a question which is probably going to show my ignorance about statistics :). I have a large set of machines that produce iron bars of certain lengths. For each machine, I have run experiments and have a list of lengths. From those I can calculate a mean and a sample standard deviation. I don't really care about the means; I am mainly focused on the variation. Therefore, I basically only record a sample standard deviation per machine. I think the results of each machine follow a normal distribution. So far so good :)
I now want to combine these variations into a single number. Therefore, I calculate the quadratic average of the machine variations; let's call it X. In the next step, I would also like to give an estimate of the spread around X. What is this number called and what's the best way to calculate it? I'm not sure it's related to the confidence interval of a standard deviation, and I don't know whether the measurements are independent (a design fault would show up in all machines, a construction fault maybe only in some).
Example. I'll try to clarify with an example. Suppose I measure 3 machines and find that they produce lengths of
M1: 100 +/- 7
M2: 120 +/- 8
M3: 130 +/- 9
where the numbers behind the +/-'s are the sample standard deviations of the observed values on that single machine. As said before, I don't care about the means but only about the spread, so I define {X_1, X_2, X_3} = {7,8,9}. Their quadratic average is X = RMS(X_i) = $\sqrt{194/3}$ and I think of X as an indication of the average spread of a machine in my park.
Suppose that I had instead found {X_1, X_2, X_3} = {3,8,11}. Their quadratic average is the same $\sqrt{194/3}$, but the spread around it is obviously bigger. My confidence in the correctness of $\sqrt{194/3}$ as the average spread of a machine should therefore be lower (I'd like to test some more machines, for instance) and I would like to express this in a number.
Questions
Some answers to the questions raised: the machines aren't identical; if some machine really misbehaves I could see it directly from that machine's test (i.e. I would see a large X_i), but I wouldn't detect a small misbehavior. Furthermore, the sample size for each machine can be different (I have more tests on my old machines than on my new machines).
| Reliability of mean of standard deviations | CC BY-SA 2.5 | null | 2011-01-20T15:40:31.200 | 2011-01-22T07:16:28.483 | 2011-01-22T07:16:28.483 | 2848 | 2848 | [
"standard-deviation",
"reliability"
]
|
6411 | 2 | null | 6404 | 3 | null | I think that there's definitely a Bayesian shrinkage or prior correction that could help prediction but you might want to also consider another tack...
Look up players in history, not just the last few years, who've had breakout seasons after a couple in the majors (dramatic increases perhaps 2x) and see how they did in the following year. It's possible the probability of maintaining performance there is the right predictor.
There's a variety of ways to look at this problem but as mpiktas said, you're going to need more data. If you just want to deal with recent data then you're going to have to look at overall league stats, the pitchers he's up against, it's a complex problem.
And then there's just considering Bautista's own data. Yes, that was his best year, but it was also the first time since 2007 that he had over 350 ABs (569). You might want to consider computing the percentage increase in performance.
| null | CC BY-SA 2.5 | null | 2011-01-20T15:55:04.220 | 2011-01-20T15:55:04.220 | null | null | 601 | null |
6412 | 1 | null | null | 3 | 1051 | What is your favorite free tool on Linux for multivariate logistic regression?
Possibilities I've seen:
- R (see paper). This question says use design.
- Can you use SciPy?
Other choices?
Do people have experience with large data?
| Free tools on Linux for multivariate logistic regression? | CC BY-SA 2.5 | null | 2011-01-20T15:59:21.423 | 2011-10-06T18:42:03.443 | 2017-05-23T12:39:26.150 | -1 | 2849 | [
"r",
"logistic",
"python"
]
|
6413 | 2 | null | 6412 | 0 | null | I'm not sure, but I think PSPP does logistic regression. I'm not entirely certain whether it can handle multivariate logistic regression.
| null | CC BY-SA 2.5 | null | 2011-01-20T16:41:35.723 | 2011-01-20T16:41:35.723 | null | null | 656 | null |
6414 | 2 | null | 1812 | 3 | null | I find myself routinely using parallel analysis (O'Connor, 2000). This solves the problem of how many factors to extract nicely.
See: [https://people.ok.ubc.ca/brioconn/nfactors/nfactors.html](https://people.ok.ubc.ca/brioconn/nfactors/nfactors.html)
O'Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test. Behavior Research Methods, Instrumentation, and Computers, 32, 396-402.
| null | CC BY-SA 2.5 | null | 2011-01-20T17:27:34.687 | 2011-01-20T17:27:34.687 | null | null | null | null |
6415 | 2 | null | 6412 | 4 | null | If you want to use Python, you can use the [scikit-learn](http://scikit-learn.sourceforge.net/), which relies on Scipy. See the documentation for an example of the logistic regression:
[http://scikit-learn.sourceforge.net/auto_examples/linear_model/plot_logistic_path.html](http://scikit-learn.sourceforge.net/auto_examples/linear_model/plot_logistic_path.html)
This implementation is based on liblinear, thus it scales reasonably well. In addition, it implements L1 and L2 penalization, for sparse or shrunk regression, when dealing with high-dimensional data.
| null | CC BY-SA 3.0 | null | 2011-01-20T17:50:00.817 | 2011-10-06T18:42:03.443 | 2011-10-06T18:42:03.443 | 6689 | 1265 | null |
6416 | 2 | null | 6410 | 1 | null | If you want to test whether the variance of any machine deviates from the other variances, combining them into an average will not help you. The problem is that these differing variances will skew your average. To test whether the variances differ you can use [Bartlett's test](http://en.wikipedia.org/wiki/Bartlett%27s_test). It is sensitive to departures from normality, but since you said that your data is normal this should not be a problem, though it would be a good idea to test that.
Now if you can assume that all the machines are similar, in the sense that they can have different means but comparable variances, the problem is very simple. If you assume that the machines are independent, treat the variances (or standard deviations) from each machine as a random sample. Then estimate the mean and standard deviation of this sample. For a large number of machines the normal approximation will kick in, so it will not matter much whether you use the standard deviations or the variances. In both cases the sample mean will estimate the average of the statistic of your choice, and the standard deviation of the sample will estimate the spread of the statistic of your choice. The 95% confidence interval will then be $\mu\pm 1.96\sigma$.
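A small R sketch of the first step (Bartlett's test), with made-up measurements from three machines:

```r
set.seed(1)
bars <- data.frame(
  machine = rep(c("M1", "M2", "M3"), each = 30),
  length  = c(rnorm(30, 100, 7), rnorm(30, 120, 8), rnorm(30, 130, 9))
)
bartlett.test(length ~ machine, data = bars)  # do the machines share a common variance?
```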
| null | CC BY-SA 2.5 | null | 2011-01-20T20:18:45.753 | 2011-01-20T20:18:45.753 | null | null | 2116 | null |
6417 | 2 | null | 3519 | 2 | null | In my opinion, Matlab is an ugly language. Perhaps it's gotten default arguments and named arguments in its core by now, but many examples you find online do the old "If there are 6 arguments, this, else if there are 5 arguments this and that..." and named arguments are just vectors with alternating strings (names) and values. That's so 1970's that I simply can't use it.
R may have its issues, and it is also old, but it was built on a foundation (Scheme/Lisp) that was forward-looking and has held up rather well in comparison.
That said, Matlab is much faster if you like to code with loops, etc. And it has much better debugging facilities. And more interactive graphics. On the other hand, what passes for documenting your code/libraries is rather laughable compared to R and you pay a pretty penny to use Matlab.
All IMO.
| null | CC BY-SA 2.5 | null | 2011-01-20T21:11:26.137 | 2011-01-20T21:11:26.137 | null | null | 1764 | null |
6418 | 1 | null | null | 10 | 6154 | I need an idea for a new rating system. The problem with the ordinary one (just averaging votes) is that it does not take into account how many votes were cast...
For example consider these 2 cases:
3 people voted 5/5
500 people voted 4/5
Ordinary voting systems just take the average, leading to the first one being rated higher.
However, I want the second one to get a higher rating, because many people have voted for it...
Any help?
| Rating system taking account of number of votes | CC BY-SA 2.5 | null | 2011-01-20T21:52:48.840 | 2021-10-23T16:20:06.607 | 2011-01-22T14:04:39.037 | null | 2599 | [
"rating"
]
|
6419 | 2 | null | 6412 | 3 | null | R is a full statistical computing environment, featuring a programming language especially designed and optimized for this purpose and an enormous, high-quality library tightly covering the whole area of data science.
SciPy is just a BLAS for Python with some support for basic statistics.
| null | CC BY-SA 2.5 | null | 2011-01-20T21:55:22.140 | 2011-01-20T21:55:22.140 | null | null | null | null |
6420 | 2 | null | 6418 | 6 | null | You could use a system like [reddit's "best" algorithm](https://web.archive.org/web/20210525100237/https://redditblog.com/2009/10/15/reddits-new-comment-sorting-system/) for sorting comments:
>
This algorithm treats the vote count
as a statistical sampling of a
hypothetical full vote by everyone,
much as in an opinion poll. It uses
this to calculate the 95% confidence
score for the comment. That is, it
gives the comment a provisional
ranking that it is 95% sure it will
get to. The more votes, the closer the
95% confidence score gets to the
actual score
So in the case of 3 people voting 5/5, you might be 95% sure the "actual" rating is at least a 1, whereas in the case of 500 people voting you might be 95% sure the "actual" rating is at least a 4/5.
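A rough R sketch of the underlying Wilson lower-bound idea; note that treating each vote as positive/negative (so that a star rating must first be mapped to a proportion) is my own simplification:

```r
# 95% Wilson lower bound on the "true" proportion of positive votes
wilson_lower <- function(pos, n, z = 1.96) {
  if (n == 0) return(0)
  p <- pos / n
  (p + z^2 / (2 * n) - z * sqrt((p * (1 - p) + z^2 / (4 * n)) / n)) / (1 + z^2 / n)
}
wilson_lower(3, 3)      # 3 votes, all positive: lower bound ~0.44
wilson_lower(400, 500)  # 400 of 500 positive: lower bound ~0.76
```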
| null | CC BY-SA 4.0 | null | 2011-01-20T22:00:51.767 | 2021-10-23T15:51:21.103 | 2021-10-23T15:51:21.103 | 338663 | 2817 | null |
6421 | 1 | null | null | 57 | 5642 | I still remember the Annals of Statistics paper on Boosting by Friedman-Hastie-Tibshirani, and the comments in that same issue by other authors (including Freund and Schapire). At that time, clearly Boosting was viewed as a breakthrough in many respects: computationally feasible, an ensemble method, with excellent yet mysterious performance. Around the same time, SVM came of age, offering a framework underpinned by solid theory and with plenty of variants and applications.
That was in the marvelous 90s. In the past 15 years, it seems to me that a lot of Statistics has been a cleaning and detailing operation, but with few truly new views.
So I'll ask two questions:
- Have I missed some revolutionary/seminal paper?
- If not, are there new approaches
that you think have the potential to
change the viewpoint of statistical
inference?
Rules:
- One answer per post;
- References or links welcome.
P.S.: I have a couple of candidates for promising breakthroughs. I will post them later.
| What are the breakthroughs in Statistics of the past 15 years? | CC BY-SA 2.5 | null | 2011-01-21T02:53:54.717 | 2019-07-06T21:41:41.143 | 2015-12-30T15:19:28.637 | 3277 | 30 | [
"mathematical-statistics",
"history"
]
|
6422 | 2 | null | 6418 | 1 | null | You may run into problems implied by the [Gibbard Satterthwaite](http://en.wikipedia.org/wiki/Gibbard-Satterthwaite_theorem) Theorem or [Arrow's Impossibility Theorem](http://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem), or any of the results of voting theory...
| null | CC BY-SA 2.5 | null | 2011-01-21T03:18:02.573 | 2011-01-21T03:18:02.573 | null | null | 795 | null |
6423 | 2 | null | 6418 | 13 | null | You are talking about a [shrinkage estimator](http://en.wikipedia.org/wiki/Shrinkage_estimator). Imdb is possibly the most famous example of this, how they calculate which movies will make it onto the [top 250](http://www.imdb.com/chart/top). It relies on the equation,
weighted rating (WR) = (v ÷ (v+m)) × R + (m ÷ (v+m)) × C , where:
```
* R = average for the movie (mean) = (Rating)
* v = number of votes for the movie = (votes)
* m = minimum votes required to be listed in the Top 250 (currently 3000)
* C = the mean vote across the whole report (currently 6.9)
```
They call this a "true bayesian rating" and that's true in the sense that our prior for the parameter "average rating" is that it is the same as for all other movies. This prior is then updated based on the "likelihood," which is the average rating for that movie, which has more strength if it has more votes. But I'm not sure whether this technically qualifies as bayesian, because neither the prior nor the posterior is a distribution... Can anyone clarify this?
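Written out as a small R function, with the constants as given above:

```r
weighted_rating <- function(R, v, m = 3000, C = 6.9) {
  (v / (v + m)) * R + (m / (v + m)) * C
}
weighted_rating(R = 9.2, v = 500)     # few votes: pulled strongly towards C = 6.9
weighted_rating(R = 9.2, v = 500000)  # many votes: stays close to the raw mean
```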
| null | CC BY-SA 2.5 | null | 2011-01-21T03:48:51.013 | 2011-01-21T03:48:51.013 | null | null | 2073 | null |
6424 | 2 | null | 6421 | 14 | null | I'm not sure if you would call it a "breakthrough" per se, but the publication of Probability Theory: The Logic of Science by Edwin Jaynes (edited by Larry Bretthorst) may be noteworthy. Some of the things it does are:
1) show equivalence between some iterative "seasonal adjustment" schemes and Bayesian "nuisance parameter" integration.
2) resolved the so called "Marginalisation Paradox" - thought to be the "death of bayesianism" by some, and the "death of improper priors" by others.
3) the idea that probability describes a state of knowledge about a proposition being true or false, as opposed to describing a physical property of the world.
The first three chapters of this book are available for free [here](http://bayes.wustl.edu/etj/prob).
| null | CC BY-SA 2.5 | null | 2011-01-21T08:38:17.833 | 2011-01-21T08:38:17.833 | null | null | 2392 | null |
6425 | 2 | null | 6421 | 14 | null | As an applied statistician and occasional minor software author, I'd say:
[WinBUGS](http://en.wikipedia.org/wiki/WinBUGS) (released 1997)
It's based on BUGS, which was released more than 15 years ago (1989), but it's WinBUGS that made Bayesian analysis of realistically complex models available to a far wider user base. See e.g. [Lunn, Spiegelhalter, Thomas & Best (2009)](http://dx.doi.org/10.1002/sim.3680) (and the discussion on it in [Statistics in Medicine vol. 28 issue 25](http://onlinelibrary.wiley.com/doi/10.1002/sim.v28:25/issuetoc)).
| null | CC BY-SA 3.0 | null | 2011-01-21T09:42:03.843 | 2015-12-30T15:12:24.560 | 2015-12-30T15:12:24.560 | 22047 | 449 | null |
6426 | 2 | null | 6421 | 8 | null | The Expectation-Propagation algorithm for Bayesian inference, especially in Gaussian Process classification, was arguably a significant breakthrough, as it provides an efficient analytic approximation method that works almost as well as computationally expensive sampling based approaches (unlike the usual Laplace approximation). See the work of Thomas Minka and others on the [EP roadmap](http://research.microsoft.com/en-us/um/people/minka/papers/ep/roadmap.html)
| null | CC BY-SA 2.5 | null | 2011-01-21T09:55:15.793 | 2011-01-21T14:03:33.117 | 2011-01-21T14:03:33.117 | 887 | 887 | null |
6427 | 2 | null | 6321 | 1 | null | As an alternative, [this beautiful paper](http://www2.warwick.ac.uk/fac/sci/statistics/crism/research/2005/05-15/05-15wv1.pdf) (a bit technical/mathsy, so beware) allows you to test for any correlation (not just $\rho=0$) using a Non-informative Bayesian Decision Theoretic approach.
| null | CC BY-SA 2.5 | null | 2011-01-21T10:05:07.917 | 2011-01-21T10:05:07.917 | null | null | 2392 | null |
6428 | 2 | null | 6404 | 3 | null | You can fit a model to this data alone and get predictions that account for regression to the mean by using mixed (multilevel) models. Even knowing next to nothing about baseball, I don't find the results I got terribly believable, since, as you say, the model really needs to take account of other factors, such as plate appearances.
I think a Poisson mixed-effects model would be more suitable than a linear mixed model, as the number of home runs is a count. Looking at the [data you provided](http://datalove.org/files/regress_hr.csv), a histogram of `hr` shows it is strongly positively skewed and includes a fairly large number of zeroes, suggesting a linear mixed model isn't going to work well, with or without log-transforming `hr` first.
Here's some code using the `lmer` function from the [lme4](http://cran.r-project.org/package=lme4) package. Having created an ID variable to identify each player and reshaped the data to 'long' format as mpiktas indicated in his answer (I did that in Stata as I'm no good at data management in R, but you could do it in a spreadsheet package):
```
baseball.long$Year.c <- baseball.long$Year - 2008  # centering year eases computation and interpretation
(M1 <- lmer(HR ~ Year.c + (Year.c|ID), data=baseball.long, family=poisson(log), nAGQ=5))
```
This fits a model with a log-link giving an exponential dependence of hit-rate on year, which is allowed to vary between players. Other link functions are possible, though the identity link gave an error due to negative fitted values. A sqrt link worked ok though, and has lower BIC and AIC than the model with the log link, so it may be a better fit. The predictions for hit-rate in 2011 are sensitive to the chosen link function, particularly for players such as Bautista whose hit-rate has changed a lot recently.
I'm afraid I haven't managed to actually get such predictions out of `lme4` though. I'm more familiar with Stata, which makes it very easy to get predictions for observations with missing values for the outcome, although [xtmelogit](http://www.stata.com/help.cgi?xtmelogit) doesn't appear to offer any choice of link function other than log, which gave a prediction of 50 for Bautista's home runs in 2011. As I said, I don't find that terribly believable. I'd be grateful if someone could show how to generate predictions for 2011 from the `lmer` models above.
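For what it's worth, in current versions of lme4 the Poisson model above is fitted with `glmer()` rather than `lmer(..., family=)`, and predictions can then be obtained with `predict()`. A sketch, assuming the long-format `baseball.long` data frame described above (the ID value below is just a placeholder):

```r
library(lme4)
M1 <- glmer(HR ~ Year.c + (Year.c | ID), data = baseball.long,
            family = poisson(link = "log"))
new <- data.frame(ID = "Jose Bautista",          # placeholder: use the ID coding in your data
                  Year.c = 2011 - 2008)
predict(M1, newdata = new, type = "response")    # predicted home-run count for 2011
```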
An [autoregressive model](http://en.wikipedia.org/wiki/AR%281%29) such as AR(1) for the player-level errors might be interesting too, but I don't know how to combine such a structure with a Poisson mixed model.
| null | CC BY-SA 2.5 | null | 2011-01-21T10:44:27.050 | 2011-01-21T10:44:27.050 | null | null | 449 | null |
6430 | 2 | null | 1812 | 3 | null | I would have to second chl's suggestion of the psych package; it's extremely useful and has implementations of the MAP and parallel analysis criteria for the number of factors. In my own experience, I have found that if you create factor analysis solutions for all the numbers between those returned by MAP and parallel analysis, you can normally find a near-optimal solution.
I would also second the use of OpenMx for confirmatory factor analysis, as it seems to give the best results of all of them, and is much, much better for badly behaved matrices (as mine tend to be). The syntax is also quite nice, once you get used to it. The only issue that I have with it is that the optimiser is not open source, and thus it is not available on CRAN. Apparently they are working on an open source implementation of the optimiser, so that may not be an issue for much longer.
| null | CC BY-SA 2.5 | null | 2011-01-21T11:18:22.393 | 2011-01-21T11:18:22.393 | null | null | 656 | null |
6431 | 2 | null | 2356 | 65 | null | I said earlier that I would have a go at answering the question, so here goes...
Jaynes was being a little naughty in his paper in that a frequentist confidence interval isn't defined as an interval where we might expect the true value of the statistic to lie with high (specified) probability, so it isn't unduly surprising that contradictions arise if they are interpreted as if they were. The problem is that this is often the way confidence intervals are used in practice, as an interval highly likely to contain the true value (given what we can infer from our sample of data) is what we often want.
The key issue for me is that when a question is posed, it is best to have a direct answer to that question. Whether Bayesian credible intervals are worse than frequentist confidence intervals depends on what question was actually asked. If the question asked was:
(a) "Give me an interval where the true value of the statistic lies with probability p", then it appears a frequentist cannot actually answer that question directly (and this introduces the kind of problems that Jaynes discusses in his paper), but a Bayesian can, which is why a Bayesian credible interval is superior to the frequentist confidence interval in the examples given by Jaynes. But this is only becuase it is the "wrong question" for the frequentist.
(b) "Give me an interval where, were the experiment repeated a large number of times, the true value of the statistic would lie within p*100% of such intervals" then the frequentist answer is just what you want. The Bayesian may also be able to give a direct answer to this question (although it may not simply be the obvious credible interval). Whuber's comment on the question suggests this is the case.
So essentially, it is a matter of correctly specifying the question and properly interpreting the answer. If you want to ask question (a) then use a Bayesian credible interval; if you want to ask question (b) then use a frequentist confidence interval.
| null | CC BY-SA 2.5 | null | 2011-01-21T11:21:13.240 | 2011-01-21T11:21:13.240 | null | null | 887 | null |
6432 | 2 | null | 6421 | 12 | null | The introduction of the "intrinsic discrepancy" loss function and other "parameterisation free" loss functions into decision theory. It has many other "nice" properties, but I think the best one is as follows:
if the best estimate of $\theta$ using the intrinsic discrepancy loss function is $\theta^{e}$, then the best estimate of any one-to-one function of $\theta$, say $g(\theta)$ is simply $g(\theta^{e})$.
I think this is very cool! (e.g. best estimate of log-odds is log(p/(1-p)), best estimate of variance is square of standard deviation, etc. etc.)
The catch? The intrinsic discrepancy can be quite difficult to work out! (It involves a min() function, a likelihood ratio, and integrals!)
The "counter-catch"? you can "re-arrange" the problem so that it is easier to calculate!
The "counter-counter-catch"? figuring out how to "re-arrange" the problem can be difficult!
Here are some references I know of which use this loss function. While I very much like the "intrinsic estimation" parts of these papers/slides, I have some reservations about the "reference prior" approach that is also described.
[Bayesian Hypothesis Testing:A Reference Approach](http://www.stat.duke.edu/research/conferences/valencia/IntStatRev.pdf)
[Intrinsic Estimation](http://www.uv.es/~bernardo/2005Test.pdf)
[Comparing Normal Means: New Methods for an Old Problem](http://www.uv.es/~bernardo/2007BayesAnalysis.pdf)
[Integrated Objective Bayesian Estimation and Hypothesis Testing](http://www.uv.es/~bernardo/JMBSlidesV9.pdf)
| null | CC BY-SA 2.5 | null | 2011-01-21T13:40:53.907 | 2011-01-21T13:40:53.907 | null | null | 2392 | null |
6433 | 1 | null | null | 3 | 1030 | I have samples of a Bernoulli-distributed variable that are neither the first nor the second i in i.i.d. My goal is to model their sum.
I got from Wikipedia that I can use the Poisson binomial distribution to make up for one of the i's, but then I have to keep all the individual probabilities.
It would probably also be possible to throw the central limit theorem against it somehow to model it as a Gaussian, but I wonder if I can do better.
Are there any results on how well a binomial distribution fits the sum of non-identically, non-independently distributed Bernoulli samples? Especially whether I can get some bounds on the accuracy with respect to the correlation of the samples, or something like that.
| Approximate a poisson binomial distribution with a binomial distribution? | CC BY-SA 2.5 | null | 2011-01-21T15:15:09.900 | 2014-04-14T16:37:59.887 | 2014-04-14T16:37:59.887 | 27403 | 2860 | [
"binomial-distribution",
"poisson-binomial-distribution"
]
|
6434 | 2 | null | 6404 | 2 | null | You might want to check out [The Book Blog.](http://www.insidethebook.com/ee/)
Tom Tango and the other authors of "The Book: Playing the Percentages in Baseball" are probably the best sources of sabermetrics out there. In particular, they love regression to the mean. They came up with a forecasting system designed to be the most basic acceptable system (Marcel), and it relies almost exclusively on regression to the mean.
Off the top of my head, I suppose one method would be to use such a forecast to estimate true talent, and then find an appropriate distribution around that mean talent. Once you have that, each plate appearance will be like a Bernoulli trial, so the binomial distribution could take you the rest of the way.
| null | CC BY-SA 2.5 | null | 2011-01-21T15:32:30.207 | 2011-01-21T15:32:30.207 | null | null | 2485 | null |
6438 | 1 | 6439 | null | 6 | 424 | Suppose you wanted to estimate the $q$ quantile of a distribution by observing $n$ independent draws from that distribution, but with $q < \frac{1}{n}$. What methods are available, and for what ranges of the 'undersampling coefficient' $q n$ are they expected to work?
I can imagine some parametric approaches: a fully parametric approach would assume the form of the distribution, then fit e.g. the scale and location, and estimate the quantile from the fitted distribution. A slightly less parametric form would try to estimate the form (e.g. fit to a power law) of the left tail, and use that to extrapolate. Are either of these sensible? Do they work? What are some other approaches?
| Quantile extrapolation? | CC BY-SA 2.5 | null | 2011-01-21T17:57:41.657 | 2020-09-27T06:28:21.957 | 2020-09-27T06:28:21.957 | 7290 | 795 | [
"estimation",
"quantiles",
"extreme-value"
]
|
6439 | 2 | null | 6438 | 1 | null | If you are also interested in quantiles with $q>1-1/n$ there is no definitive answer. You need to supply more details, since for distributions with heavy tails the estimation of such quantiles involves quite complicated mathematics. Try a Google search for tail index estimation and you will get a plethora of links.
| null | CC BY-SA 2.5 | null | 2011-01-21T18:30:10.430 | 2011-01-21T18:30:10.430 | null | null | 2116 | null |
6440 | 2 | null | 6354 | 2 | null | I've just found out that the package [symbolicDA](http://keii.ae.jgora.pl/symbolicDA/index.html) is about to appear on CRAN in a few months' time.
| null | CC BY-SA 2.5 | null | 2011-01-21T18:46:58.413 | 2011-01-21T18:46:58.413 | null | null | 2831 | null |
6441 | 1 | null | null | 1 | 760 | I have a question regarding regression analysis on a dataset where the input values generate different results over time:
e.g.
```
1 2
2 2
3 5
4 1
2 5
3 8
```
How would I go about doing the regression on such a dataset, since the values change?
| Regression with repeated data | CC BY-SA 2.5 | null | 2011-01-21T20:07:12.527 | 2011-01-24T14:11:27.283 | 2011-01-21T20:43:16.063 | 449 | 2865 | [
"regression",
"repeated-measures"
]
|
6442 | 1 | 6443 | null | 4 | 1388 | ```
AIC(lm(Fertility ~ ., data=swiss))
[1] 326.0716
```
ok, since AIC is calculated as
```
-2*logLik(lm(Fertility ~ ., data=swiss)) + 2*7
```
Why does stepAIC return a smaller AIC?
```
stepAIC(lm(Fertility ~ ., data=swiss))
Start: AIC=190.69
```
| Question on AIC and stepAIC | CC BY-SA 2.5 | null | 2011-01-21T20:10:12.493 | 2011-01-21T20:58:46.413 | null | null | 339 | [
"r",
"regression",
"aic"
]
|
6443 | 2 | null | 6442 | 6 | null | Read `?AIC`. From the Details section we have:
```
The log-likelihood and hence the AIC is only defined up to an
additive constant. Different constants have conventionally be
used for different purposes and so ‘extractAIC’ and ‘AIC’ may give
different values (and do for models of class ‘"lm"’: see the help
for ‘extractAIC’).
```
So you are seeing the effect of different additive constants, as `stepAIC()` (in the MASS package) is using `extractAIC()` to compute the AIC of the models.
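For example, reproducing the numbers from the question:

```r
fit <- lm(Fertility ~ ., data = swiss)
AIC(fit)                         # 326.07, based on the full log-likelihood
extractAIC(fit)                  # edf and 190.69, the scale that stepAIC() reports
AIC(fit) - extractAIC(fit)[2]    # an additive constant that depends only on n
```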
| null | CC BY-SA 2.5 | null | 2011-01-21T20:58:46.413 | 2011-01-21T20:58:46.413 | null | null | 1390 | null |
6444 | 1 | 6446 | null | 6 | 337 | Thus can we have random variables $X_1 , ... , X_n$ , each with zero mean and unit variance, such that for a sizeable representative sample:
(1) in the least-squares regression equation for $X_1$ against $X_2 ,..., X_n$ , the variable $X_n$ has a large coefficient, and its contribution is rated highly significant, while the other variables have markedly smaller coefficients and lower significance;
(2) in the least-squares regression for $X_n$ against $X_1 ,..., X_{n-1}$ , the variable $X_1$ has a small coefficient and low significance?
The reason for this question is that this effect appears in some real data (using SPSS), with five variables. But the result seems paradoxical; so perhaps we are doing something wrong.
| Can statistical prediction be asymmetric? | CC BY-SA 2.5 | null | 2011-01-21T21:14:02.597 | 2011-02-03T11:10:49.133 | 2011-01-22T15:50:09.937 | 449 | 2866 | [
"regression",
"spss"
]
|
6445 | 2 | null | 6441 | 3 | null | What do you mean by "the input values generate different results over time?" This happens a lot in regression analysis, and isn't typically a problem.
You could start by [loading your data into R](http://www.r-project.org/), and running a simple linear model.
```
x<-c(1,2,3,4,2,3)
y<-c(2,2,5,1,5,8)
model<-lm(y~x)
summary(model)
plot(x,y)
lines(x,fitted(model))
```
In the case of your example data, a simple linear model is terrible. Can you be more specific about what you're trying to do?
/edit: In response to suncoolso: once you've fit a simple linear model, you can use the "gls" command from the "nlme" package to fit a simple linear model with an autoregressive correlation structure.
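A sketch of that, continuing the example above and assuming the observations are in time order:

```r
library(nlme)
dat <- data.frame(x = x, y = y, time = seq_along(y))
model.ar1 <- gls(y ~ x, data = dat,
                 correlation = corAR1(form = ~ time))  # AR(1) error structure
summary(model.ar1)
```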
| null | CC BY-SA 2.5 | null | 2011-01-21T21:40:08.813 | 2011-01-24T14:11:27.283 | 2011-01-24T14:11:27.283 | 2817 | 2817 | null |
6446 | 2 | null | 6444 | 7 | null | The coefficient of $X_n$ (and its significance) in the regression of $X_1$ on $X_2, \ldots, X_n$ can be computed by first obtaining the residuals $Y_1$ for the regression of $X_1$ on $X_2, \ldots, X_{n-1}$ and obtaining the residuals $Y_n$ for the regression of $X_n$ on $X_2, \ldots, X_{n-1}$. Then you regress $Y_1$ on $Y_n$.
Similarly, the coefficient of $X_1$ in the regression of $X_n$ on $X_1, \ldots, X_{n-1}$ is computed by regressing $Y_n$ on $Y_1$.
This reduces the question to one of ordinary regression (of one variable against another). The coefficients are not related to one another in a simple manner, because the variance of $Y_1$ does not have to equal the variance of $Y_n$, but the t-statistics will be identical.
It seems the software has not done what you expected.
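A quick simulation sketch of the identical-t-statistics fact, with made-up data and $X_5$ playing the role of $X_n$:

```r
set.seed(1)
n  <- 200
X2 <- rnorm(n); X3 <- rnorm(n); X4 <- rnorm(n)
X5 <- rnorm(n)
X1 <- 0.5 * X5 + 0.3 * X2 + rnorm(n)

t1 <- summary(lm(X1 ~ X2 + X3 + X4 + X5))$coefficients["X5", "t value"]
t5 <- summary(lm(X5 ~ X2 + X3 + X4 + X1))$coefficients["X1", "t value"]
c(t1, t5)   # the coefficients differ, but these two t-statistics agree
```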
| null | CC BY-SA 2.5 | null | 2011-01-22T00:44:35.250 | 2011-01-22T00:44:35.250 | null | null | 919 | null |
6447 | 2 | null | 1966 | 1 | null | A nice short paper by Jose Bernardo [here](http://www.uv.es/~bernardo/Kernel.pdf) gives a useful Bayesian method to estimate a density. But as with most things Bayesian, the computational cost must be paid for this method.
| null | CC BY-SA 2.5 | null | 2011-01-22T00:50:41.847 | 2011-01-22T00:50:41.847 | null | null | 2392 | null |
6448 | 1 | 6450 | null | 10 | 4218 | I think this is a rather basic question, but I just realized that I don't quite understand the term continuity correction.
I use R and found the same syntax `correct=TRUE` for both `chisq.test` and `mcnemar.test`. Are they referring to different continuity correction methods?
I have heard that Yates' continuity correction for the Pearson chi-square test is not very popular now, as it may "over-adjust" the result. How about the correction used for McNemar's chi-square test?
Thanks.
| Continuity correction for Pearson and McNemar's chi-square test | CC BY-SA 4.0 | null | 2011-01-22T03:43:13.820 | 2019-12-28T19:02:42.727 | 2019-12-28T19:02:42.727 | 92235 | 588 | [
"r",
"chi-squared-test",
"yates-correction"
]
|
6449 | 2 | null | 6444 | 7 | null | You are trying to estimate the model
\begin{align}
X_1&=\alpha_0+\alpha_2X_2+...+\alpha_nX_n+\varepsilon,\\
X_n&=\beta_0+\beta_1X_1+...+\beta_{n-1}X_{n-1}+\eta.
\end{align}
For such model ordinary least squares will give biased estimates. Assuming that $X_2,...,X_{n-1}$ are either deterministic or independent from $\varepsilon$ and $\eta$ we have
\begin{align}
EX_n\varepsilon&=E\left(\beta_0+\beta_1X_1+...+\beta_{n-1}X_{n-1}+\eta\right)\varepsilon\\
&=\beta_1EX_1\varepsilon+E\varepsilon\eta\\
&=\beta_1E(\alpha_0+\alpha_2X_2+...+\alpha_nX_n+\varepsilon)\varepsilon+E\varepsilon\eta\\
&=\alpha_n\beta_1EX_n\varepsilon+\beta_1E\varepsilon^2 +E\varepsilon\eta.
\end{align}
So
\begin{align}
EX_n\varepsilon=\frac{\beta_1E\varepsilon^2+E\varepsilon\eta}{1-\alpha_n\beta_1}
\end{align}
The main assumption of linear regression is that the error is not correlated to the regressors. Without this assumption the estimates are biased and inconsistent. As we see in this case the assumption is violated, so it is entirely natural to expect unexpected results.
There are ways to get statistically sound estimates of the coefficients. This is an extremely common problem in econometrics called [endogeneity](http://en.wikipedia.org/wiki/Endogeneity_%28economics%29). The most popular solution is [two-stage least squares](http://en.wikipedia.org/wiki/Two-stage_least_squares). In general these types of models are called [simultaneous equations](http://en.wikipedia.org/wiki/Simultaneous_equations_model).
Update
In the comments below the attention was drawn to the fact that the OP is just trying to fit two regressions and the model given above might be inappropriate. @onestop provided excellent reference about data modelling vs algorithmic modelling cultures. Nonetheless there is an important point I want to make in this case.
Taking into account @whuber's answer, we can restrict ourselves to the case where we have only two variables, $Y$ and $X$. Suppose that, having the sample $(y_i,x_i)$, we fit two linear regressions (without intercepts), $Y$ on $X$ and $X$ on $Y$. The least squares estimates are the following:
\begin{align}
\beta_{YX}&=\frac{\sum_{i=1}^nx_iy_i}{\sum_{i=1}x_i^2} \\
\beta_{XY}&=\frac{\sum_{i=1}^nx_iy_i}{\sum_{i=1}y_i^2}
\end{align}
Now from the formulas it is clear that $\beta_{XY}$ and $\beta_{YX}$ coincide only in the case when $\sum_{i=1}^ny_i^2=\sum_{i=1}^nx_i^2$.
The corresponding $t$-statistics are then
\begin{align}
t_{YX}&=\frac{\beta_{YX}}{\sigma_{\beta_{YX}}}, \quad \sigma_{\beta_{YX}}^2=\frac{\sigma_{YX}^2}{\sum_{i=1}^nx_i^2}, \quad \sigma_{YX}^2=\frac{1}{n-1}\sum_{i=1}^n(y_i-\beta_{YX}x_i)^2 \\
t_{XY}&=\frac{\beta_{XY}}{\sigma_{\beta_{XY}}}, \quad \sigma_{\beta_{XY}}^2=\frac{\sigma_{XY}^2}{\sum_{i=1}^ny_i^2}, \quad \sigma_{XY}^2=\frac{1}{n-1}\sum_{i=1}^n(x_i-\beta_{XY}y_i)^2
\end{align}
After a bit of manipulation we get the amazing fact (in my opinion) that
\begin{align}
t_{XY}=t_{YX}=\frac{\sum_{i=1}^nx_iy_i}{\sqrt{\frac{1}{n-1}\left(\sum_{i=1}^ny_i^2\sum_{i=1}^nx_i^2-\left(\sum_{i=1}^nx_iy_i\right)^2\right)}}
\end{align}
So this illustrates that OP clearly has a problem, since the $t$-statistics must coincide in this case. This of course was pointed out by @whuber.
So far so good. The problem arises when we want to get the distribution of this statistic. This is needed since the $t$-statistic formally tests the hypothesis that the coefficient $\beta_{XY}$ is zero. Without assuming any model, suppose that we want to test the null hypothesis that $cov(X,Y)=0$. We can also assume that we have $EX=EY=0$. Rewrite the $t$ statistic as follows:
\begin{align}
t=\frac{\frac{1}{\sqrt{n-1}}\sum_{i=1}^nx_iy_i}{\sqrt{\frac{1}{n-1}\sum_{i=1}^ny_i^2\frac{1}{n-1}\sum_{i=1}^nx_i^2-\left(\frac{1}{n-1}\sum_{i=1}^nx_iy_i\right)^2}}
\end{align}
Now since we have a sample due to law of large numbers we have
\begin{align*}
\frac{1}{n-1}\sum_{i=1}^ny_i^2\frac{1}{n-1}\sum_{i=1}^nx_i^2\xrightarrow{P}var(X)var(Y)\\
\frac{1}{n-1}\sum_{i=1}^nx_iy_i\xrightarrow{P}cov(X,Y),
\end{align*}
where $\xrightarrow{P}$ denotes convergence in probability.
So the denominator of the $t$-statistic converges to $\sqrt{var(X)var(Y)}$ under the null hypothesis $cov(X,Y)=0$.
Under the null hypothesis, by the central limit theorem, we have that
\begin{align}
\frac{1}{\sqrt{n-1}}\sum_{i=1}^nx_iy_i\xrightarrow{D}N(0,var(XY)),
\end{align}
where $\xrightarrow{D}$ denotes convergence in distribution. So we get that under null-hypothesis of no correlation the $t$-statistic converges to
\begin{align}
t\xrightarrow{D}N\left(0,\frac{var(XY)}{var(X)var(Y)}\right)
\end{align}
Now if $(X,Y)$ are bivariate normal we have that $\frac{var(XY)}{var(X)var(Y)}=1$, under null hypothesis of $cov(X,Y)=0$, since zero correlation implies independence for normal random variables. The quantity $\frac{var(XY)}{var(X)var(Y)}$ is one, when $X$ is deterministic as is usually the case in linear regression, but then the null hypothesis $cov(X,Y)=0$ makes no sense.
Now if we assume that we have a model $Y=X\beta+\varepsilon$, under null hypothesis that $\beta=0$ and usual assumption $E(\varepsilon^2|X)=\sigma^2$ we get that
\begin{align}
t\xrightarrow{D}N(0,1),
\end{align}
since
\begin{align}
\frac{var(\varepsilon X)}{var(X)var(\varepsilon)}=\frac{\sigma^2var(X)}{\sigma^2var(X)}=1
\end{align}
But in the OP question we have two regressions, so if we allow model for one, we must allow model for the other and we arrive at the problem which I described in my initial answer.
So I hope that my lengthy update illustrates that if we do not assume usual model in doing regression, we cannot assume that usual statistics will have the same distributions.
| null | CC BY-SA 2.5 | null | 2011-01-22T04:44:22.490 | 2011-02-03T11:10:49.133 | 2011-02-03T11:10:49.133 | 2116 | 2116 | null |
6450 | 2 | null | 6448 | 11 | null | Another reason that continuity corrections for contingency tables have gone out of fashion is that they only make a noticeable difference when the cell counts are small, and modern computing power has made it feasible to calculate 'exact' p-values for such tables.
Exact tests for 2×2 tables for R are provided by the [exact2x2](http://cran.r-project.org/web/packages/exact2x2/) package written by [Michael Fay](http://www.niaid.nih.gov/about/organization/dcr/BRB/staff/Pages/michael.aspx). There's an accompanying [vignette about the exact McNemar's test and matching confidence intervals](http://cran.r-project.org/web/packages/exact2x2/vignettes/exactMcNemar.pdf).
| null | CC BY-SA 2.5 | null | 2011-01-22T10:20:15.813 | 2011-01-23T00:15:16.413 | 2011-01-23T00:15:16.413 | 449 | 449 | null |
6451 | 2 | null | 6418 | 1 | null | There is a simple heuristic (also simple to implement): first seed the pool of votes with a small number of dummy votes at the average rating, and let incoming real votes gradually replace their influence.
So, for instance, a new object appears and you give it a few votes rating it 2.5/5 (this is the best you can tell about it at the zero-knowledge point). Then the first vote comes, let's say 5/5, but it is somewhat tempered by the rest of the initial pool and the object's mean is only slightly above 2.5. Then more votes come and the mean gradually moves from the initial guess to the real average, which then has time to stabilize. Finally this algorithm converges to the plain vote mean.
| null | CC BY-SA 2.5 | null | 2011-01-22T14:13:48.103 | 2011-01-22T14:13:48.103 | null | null | null | null |
6453 | 1 | 6456 | null | 6 | 1606 | I have run experiments on a group of users under two conditions, measuring the time it took users to finish the experiment. I used a cross-over design where half of the users started with the first condition and ended with the second, and the other half went the other way around.
I analyze the data provided in a few different ANOVAs and find different p-values for my hypotheses. Some are below 0.05, some are below 0.01, some are over 0.05.
Do I need to fix an alpha level of statistical significance to be used in all my analysis, or can I report something like 'Hypothesis A is proven true at alpha level 0.05, while Hypothesis B is true at alpha level 0.01 (thus, possibly a stronger proof)'?
I don't know if I am being clear enough here. Let me know and I'll add details if needed.
Thanks.
| How to fix the threshold for statistical validity of p-values produced by ANOVAs? | CC BY-SA 2.5 | null | 2011-01-22T17:18:05.033 | 2011-01-22T19:35:29.030 | 2011-01-22T19:35:29.030 | 1320 | 1320 | [
"hypothesis-testing",
"anova",
"mixed-model",
"statistical-significance",
"p-value"
]
|
6454 | 1 | null | null | 6 | 5944 | Is it possible to do a meta-analysis of only two studies? What would the limitations of such an analysis be?
| Is it possible to do meta analysis of only two studies | CC BY-SA 2.5 | null | 2011-01-22T17:36:57.363 | 2011-01-22T23:56:52.097 | null | null | null | [
"meta-analysis"
]
|
6455 | 1 | null | null | 27 | 61932 | After an ARMA model is fit to a time series, it is common to check the residuals via the Ljung-Box portmanteau test (among other tests). The Ljung-Box test returns a p value. It has a parameter, h, which is the number of lags to be tested. Some texts recommend using h=20; others recommend using h=ln(n); most do not say what h to use.
Rather than using a single value for h, suppose that I do the Ljung-Box test for all h<50, and then pick the h which gives the minimum p value. Is that approach reasonable? What are the advantages and disadvantages? (One obvious disadvantage is increased computation time, but that is not a problem here.) Is there literature on this?
To elaborate slightly.... If the test gives p>0.05 for all h, then obviously the time series (residuals) pass the test. My question concerns how to interpret the test if p<0.05 for some values of h and not for other values.
| How many lags to use in the Ljung-Box test of a time series? | CC BY-SA 2.5 | null | 2011-01-22T18:02:39.427 | 2020-11-10T10:19:59.763 | 2011-01-23T19:37:13.097 | 2875 | 2875 | [
"time-series"
]
|
6456 | 2 | null | 6453 | 8 | null | Hey, but it seems you already looked at the results!
Usually, the risk of falsely rejecting the null (Type I error, or $\alpha$) should be decided before starting the analysis. Power might also be fixed to a given value (e.g., 0.80). At least, this is the "Neyman-Pearson" approach. For example, you might consider a risk of 5% ($\alpha=0.05$) for all your hypotheses, and if the tests are not independent you should consider correcting for multiple comparisons, using any single-step or step-down methods you like.
When reporting your results, you should indicate the Type I (and II, if applicable) error you considered (before seeing the results!), corrected or not for multiple comparisons, and give your p-values as p<.001 or p=.0047 for example.
Finally, I would say that your tests allow you to reject a given null hypothesis, not to prove Hypothesis A or B. Moreover, what you describe (a smaller p-value, say 0.01, being a somewhat stronger indication of an interesting deviation from the null than 0.05) is more in line with the Fisher approach to [statistical hypothesis testing](http://en.wikipedia.org/wiki/Statistical_hypothesis_testing).
| null | CC BY-SA 2.5 | null | 2011-01-22T18:09:44.463 | 2011-01-22T18:09:44.463 | null | null | 930 | null |
6457 | 2 | null | 6453 | 6 | null | My advice would be to tread carefully with p-values if you didn't have a specific hypothesis in mind before you started the experiment. Adjusting p-values for multiple and "vaguely specified" hypothesis (e.g. not specifying the alternative hypothesis) is difficult.
I suppose the "purist" would tell you that this should be fixed prior to looking at the data (one of my lecturers calls not doing this intellectual dishonesty), but I would only say this is appropriate for "confirmatory analysis", where a well-defined model (or set of models) has been specified before the data are seen.
If the analysis is more "exploratory", then I would not worry about the precise level so much; rather, try to find relationships and try to explain why they may be there (i.e. use the analysis to build a model). Tentative hypothesis testing may be useful as an initial guide, but you would need to get more data to confirm your hypotheses.
A useful way to "get more data" without running another experiment is to "lock up" some portion of your data, use the rest to "explore", and then, once you are confident of a potentially useful model, "test" your theory with the data you "locked up". NOTE: you can only do the "test" once!
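A rough sketch of that split in R (the 70/30 ratio and the data frame `d` are only placeholders):

```r
set.seed(1)                                        # so the split is reproducible
idx      <- sample(nrow(d), floor(0.7 * nrow(d)))  # d is your full data set
explore  <- d[idx, ]                               # use this part to build the model
lockedup <- d[-idx, ]                              # touch this part only once, at the end
```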
| null | CC BY-SA 2.5 | null | 2011-01-22T18:30:38.843 | 2011-01-22T18:30:38.843 | null | null | 2392 | null |
6458 | 2 | null | 6454 | 8 | null | Yes, it is [possible](https://stats.stackexchange.com/q/3652/1381), but whether it is appropriate depends on the intent of your analysis.
Meta-analysis is a method of combining information from different sources, so it is technically possible to do a meta-analysis of only two studies - even of multiple results within a single paper. The key concern is not whether you can do this, but whether the method is appropriate for the questions that you have and the conclusions that you want to draw, and whether you acknowledge the limitations of your analysis.
For example, the typical use of meta-analysis is to quantitatively synthesize previous studies on a particular subject, such as the effects of some medical intervention. In this context, it is important to set your study-selection criteria before the analysis and then find all available studies that meet those criteria. These criteria might limit the scope of your search to publications in English, in a particular journal or set of journals, those that use particular methods, etc. In practice, it is necessary to be familiar with the studies you are interested in to state these criteria. However, non-randomly selecting two papers from among the many that have been published would introduce bias into your study. If only two studies have been published, it might be hard to justify any conclusions from a meta-analysis, but it could still be done.
On the other hand, I have used the meta-analytical approach to synthesize data from a single study, for example if summary statistics are reported for subgroups but I am interested in finding the overall mean and variance. I don't always call this a meta-analysis in mixed company, so as not to confuse this application of the method with the more common use of meta-analysis as a comprehensive review sensu stricto.
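Purely as a sketch, pooling two effect sizes takes only a few lines with, for example, the `metafor` package (the estimates and variances below are invented):

```r
library(metafor)
yi <- c(0.42, 0.15)          # hypothetical effect estimates from the two studies
vi <- c(0.04, 0.09)          # their sampling variances
rma(yi, vi, method = "FE")   # fixed-effect pooling; with only two studies the
                             # between-study variance is very poorly estimated
```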
| null | CC BY-SA 2.5 | null | 2011-01-22T19:45:21.480 | 2011-01-22T22:42:40.807 | 2017-04-13T12:44:37.583 | -1 | 1381 | null |
6459 | 2 | null | 6455 | 3 | null | Before you zero-in on the "right" h (which appears to be more of an opinion than a hard rule), make sure the "lag" is correctly defined.
[http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm](http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm)
Quoting the section below Issue 4 in the above link:
"....The p-values shown for the Ljung-Box statistic plot are incorrect because the degrees of freedom used to calculate the p-values are lag instead of lag - (p+q). That is, the procedure being used does NOT take into account the fact that the residuals are from a fitted model. And YES, at least one R core developer knows this...."
Edit (01/23/2011): Here's an article by Burns that might help:
[http://lib.stat.cmu.edu/S/Spoetry/Working/ljungbox.pdf](http://lib.stat.cmu.edu/S/Spoetry/Working/ljungbox.pdf)
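In plain R you can also apply the correction yourself through the `fitdf` argument of `Box.test` (here `p` and `q` stand for your own model orders; this is only a sketch):

```r
fit <- arima(x, order = c(p, 0, q))   # x, p and q as in your fitted model
Box.test(residuals(fit), lag = 20, type = "Ljung-Box", fitdf = p + q)
```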
| null | CC BY-SA 2.5 | null | 2011-01-22T20:35:47.630 | 2011-01-23T17:48:49.720 | 2011-01-23T17:48:49.720 | 2775 | 2775 | null |
6461 | 2 | null | 6454 | 1 | null | If you compute a [likelihood ratio](http://pbr.psychonomic-journals.org/content/11/5/791.full.pdf) for the effect of interest in each study, you can simply multiply them together to obtain the aggregate weight of evidence for the effect.
| null | CC BY-SA 2.5 | null | 2011-01-22T23:56:52.097 | 2011-01-22T23:56:52.097 | null | null | 364 | null |
6462 | 1 | null | null | 5 | 163 | I have tested the [fuzzy C-means (FCM)](http://en.wikipedia.org/wiki/Cluster_Analysis#Fuzzy_c-means_clustering) algorithm using the R function `fanny` from the `cluster` package, and I have also written my own FCM implementation to have more control over the distance function. The problem is that my final membership matrix often converges to 1/K, i.e. almost all of the membership values end up close to 1/K.
I am wondering what conditions influence the final result. I am using the fuzziness parameter m=2.
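For reference, the standard FCM membership update I am trying to reproduce looks roughly like this (a simplified sketch, with `d` an n x K matrix of point-to-centre distances):

```r
update_memberships <- function(d, m = 2) {
  expo <- 2 / (m - 1)
  t(apply(d, 1, function(di) {
    # u_j = 1 / sum_k (d_j / d_k)^(2/(m-1));
    # note that as m grows, expo -> 0 and every u_j tends to 1/K
    1 / rowSums(outer(di, di, "/")^expo)
  }))
}
```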
| In which case does FCM membership converge to 1/K? | CC BY-SA 2.5 | null | 2011-01-23T04:07:55.287 | 2020-09-23T00:09:00.727 | 2020-09-23T00:09:00.727 | 11887 | 2325 | [
"clustering",
"fuzzy",
"fuzzy-c-means-clustering"
]
|
6463 | 1 | 6465 | null | 4 | 256 | What software does one use to create a map that describes conditions over an arbitrary area? The example I am thinking of is a map of the US where states that voted one way or another are coloured red or blue. Obviously I could get a map of the US and use a graphics program to manually paint in the states, but I figure there should be some software that does this in a somewhat automated fashion. Apart from a for-pay web app, I haven't been able to find anything. I am very new to the field, so I'm not familiar with the available tools.
| What software is used for maps of the US (or other arbitrary areas)? | CC BY-SA 2.5 | null | 2011-01-23T07:41:38.150 | 2011-01-24T19:35:16.437 | 2011-01-23T10:03:27.403 | 930 | 2878 | [
"data-visualization"
]
|