Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2599 | 2 | null | 2597 | 19 | null | The following Wikipedia entry actually does a pretty good job of explaining the most popular and relatively simple methods:
- Determining the number of clusters in a data set
The [Elbow Method](http://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set#The_Elbow_Method) heuristic described there is probably the most popular due to its simple explanation (amount of variance explained by number of clusters) coupled with the visual check. The [information theoretic method](http://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set#An_Information_Theoretic_Approach.5B6.5D) isn't hard to implement either, and the page has some pseudocode you could use to start. The latter is analogous to a penalized likelihood based on model complexity, as in the well-known information criteria such as AIC, BIC, etc.
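For illustration, here is a minimal R sketch of the elbow heuristic (a toy example of mine, not code from the Wikipedia page): plot the total within-cluster sum of squares against the number of clusters and look for the bend.
```
# Elbow heuristic: within-cluster sum of squares vs. number of clusters
set.seed(1)
X <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 4), ncol = 2))   # toy data with two groups
wss <- sapply(1:10, function(k) kmeans(X, centers = k, nstart = 20)$tot.withinss)
plot(1:10, wss, type = "b", xlab = "Number of clusters k",
     ylab = "Total within-cluster sum of squares")
```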
| null | CC BY-SA 2.5 | null | 2010-09-12T20:26:40.637 | 2010-09-12T20:26:40.637 | null | null | 251 | null |
2600 | 5 | null | null | 0 | null | Psychometrics has evolved as a subfield of psychology to become the science of measurement of unobservable individual characteristics.
Unlike psychophysics which mostly focus on perceptual attributes, psychometrics aims at providing reproducible methods for assessing high-level or cognitive skills through the scientific development and evaluation of reliable and valid mental tests or questionnaires. Common traits of interest are knowledge, attitudes, personality, and abilities. Psychometric techniques developed for assessing intelligence or proficiency are now used in political science, social sciences, or health surveys. The [Psychometrika](http://www.psychometrika.org/) journal certainly reflects the current state of the art in this field, but other reviews now offer well-acknowledged articles on multivariate/multilevel data analysis, structural equation, or other latent variable models. Recommended books are [Nunally and Bernstein](http://rads.stackoverflow.com/amzn/click/007047849X) (1994), [Kline](http://rads.stackoverflow.com/amzn/click/1606238760) (1998), [Boomsma et al.](http://rads.stackoverflow.com/amzn/click/0387951474) (2001), [De Boeck and Wilson](http://rads.stackoverflow.com/amzn/click/0387402756) (2004), [Skrondal and Rabe-Hesketh](http://rads.stackoverflow.com/amzn/click/1584880007) (2004), and [Rao and Sinharay](http://rads.stackoverflow.com/amzn/click/0444521038) (2007).
Many R packages provide dedicated functions for applied psychometrics and item analysis, including exploratory and confirmatory factor analysis, multidimensional scaling, and other methods belonging to Classical Test Theory, Item Response Theory, and Structural Equation Modeling (See the [Psychometrics](http://cran.r-project.org/web/views/Psychometrics.html) task view and the [Journal of Statistical Software](http://www.jstatsoft.org/v20)). For Matlab users, there is the [IRTm](http://ppw.kuleuven.be/okp/software/irtm/) toolbox. For Stata users, there are [gllamm](http://www.gllamm.org/) and [raschtest](http://www.stata-journal.com/article.html?article=st0119), among others (Check out the [FreeIRT](http://www.freeirt.org/) project).
| null | CC BY-SA 3.0 | null | 2010-09-12T20:54:55.157 | 2015-08-29T00:06:49.490 | 2015-08-29T00:06:49.490 | 7290 | 930 | null |
2601 | 4 | null | null | 0 | null | Psychometrics has evolved as a subfield of psychology to become the science of measurement of unobservable individual characteristics. | null | CC BY-SA 2.5 | null | 2010-09-12T20:54:55.157 | 2010-09-12T20:54:55.157 | 2010-09-12T20:54:55.157 | 930 | 930 | null |
2602 | 1 | 2607 | null | 4 | 1477 | I often find parts of financial correlation matrices not statistically significantly different from zero. Sometimes, these correlations have a tangible effect on results - low correlations lead to high diversification benefit. Also, the sign of correlations can skew the results.
Would you consider it appropriate to make these correlations equal to zero to at least get rid of the directionality and, if necessary, fix the negative eigenvalues that might arise due to such corrections?
| How to deal with correlations not statistically significantly different from zero? | CC BY-SA 2.5 | null | 2010-09-12T21:38:14.563 | 2010-09-17T21:30:52.620 | null | null | 1250 | [
"correlation",
"statistical-significance"
] |
2603 | 2 | null | 2598 | 4 | null | Is the location of the treasure changed after each attempt? Wouldn't it be $(10\%)^5$?
| null | CC BY-SA 2.5 | null | 2010-09-12T21:54:02.223 | 2010-09-13T12:38:17.643 | 2010-09-13T12:38:17.643 | null | 1250 | null |
2604 | 2 | null | 2580 | 8 | null | You can't compare them directly since the Negative Binomial has more parameters. Indeed the Poisson is "nested" within the Negative Binomial in the sense that it's a limiting case, so the NegBin will always fit better than the Poisson. However, that makes it possible to consider something like a likelihood ratio test but the fact that the Poisson is at the boundary of the parameter space for the negative binomial may affect the distribution of the test statistic.
In any case, even if the difference in the number of parameters weren't a problem, you can't do K-S tests directly because you have estimated parameters, and the K-S test is specifically for the case where all parameters are specified. Your idea of using the bootstrap deals with this issue, but not with the first one (the difference in the number of parameters).
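To make the bootstrap idea concrete, here is a rough R sketch of a parametric bootstrap of the K-S statistic for the Poisson fit (simulated counts stand in for your data; the same recipe applies to the negative binomial fit):
```
# Parametric bootstrap of the K-S statistic when lambda is estimated
x <- rpois(200, 3)                          # placeholder for your count data
lambda.hat <- mean(x)
ks.obs <- ks.test(x, "ppois", lambda.hat)$statistic
ks.boot <- replicate(999, {
  xb <- rpois(length(x), lambda.hat)        # simulate under the fitted model
  ks.test(xb, "ppois", mean(xb))$statistic  # re-estimate lambda each time
})
mean(ks.boot >= ks.obs)                     # bootstrap p-value
# (ks.test warns about ties with discrete data; the bootstrap p-value is still usable)
```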
I'd also be considering smooth tests of goodness of fit (e.g. see Rayner and Best's book), which, for example, can lead to a partition of the chi-square goodness-of-fit test into components of interest (measuring deviations from the Poisson model in this case) - taken out to, say, fourth or sixth order, this should lead to a test with good power for the NegBin alternative.
(Edit: You could compare your Poisson and NegBin fits via a chi-squared test, but it will have low power. Partitioning the chi-square and looking only at, say, the first 4-6 components, as is done with smooth tests, might do better.)
| null | CC BY-SA 3.0 | null | 2010-09-12T23:31:55.703 | 2013-03-22T00:29:55.287 | 2013-03-22T00:29:55.287 | 805 | 805 | null |
2605 | 2 | null | 2572 | 4 | null | Of course, there are limits to stress testing due to the requirement of the correlation matrix to remain positive semidefinite.
Finding these limits is, IMHO, the critical part of what you are trying to do.
The answer to this problem depends on how many of the correlation coefficients you want to stress test simultaneously.
If you want to stress test only one coefficient at a time (say the entry $a_{ij}, i\neq j$, of your correlation matrix), then you can use the [Gershgorin](http://en.wikipedia.org/wiki/Diagonally_dominant_matrix) theorem to place explicit bounds on the values that $a_{ij}$ can take. Let $a_{ij}^+$ and $a_{ij}^-$ be these bounds. Then you can stress test by computing your measure of risk for a grid of values in $(a_{ij}^-,a_{ij}^+)$.
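For instance, a crude R sketch of that grid idea (my own illustration; instead of the Gershgorin bounds it simply checks positive semidefiniteness numerically for each candidate value):
```
# Scan admissible values for the (i,j) entry of a correlation matrix C,
# keeping only those for which all eigenvalues stay non-negative.
admissible <- function(C, i, j, grid = seq(-0.99, 0.99, by = 0.01)) {
  ok <- sapply(grid, function(a) {
    Cs <- C
    Cs[i, j] <- Cs[j, i] <- a
    min(eigen(Cs, symmetric = TRUE, only.values = TRUE)$values) >= 0
  })
  range(grid[ok])   # empirical lower/upper bounds for that entry
}
```
You would then evaluate your risk measure over that admissible range.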
If you want to stress test more than one (say $k>1$) coefficient simultaneously, then there is no closed-form solution for the bounds on these correlation coefficients. An exact solution for this problem exists, however (i.e. the $k$-dimensional ellipse inside which your $k$ correlation coefficients are allowed to reside), but it requires a higher level of mathematical sophistication. If this is the case you are interested in, let me know in the comments.
Edit:
Exact solution for $k>1$: this can be recast as an [SDP](http://stanford.edu/~boyd/papers/sdp.html) problem.
Say $a=(a_1,...,a_k)$ are the $k$ correlation coefficients you want to vary, $w=(w_1,...,w_k)$ are strictly positive numbers, and $C(a)$ is the $p$ by $p$ correlation matrix with $p\geq k$. Then,
$\underset{a}{\min}\; a'w$ (for a given $w$)
$\text{s.t. } C(a)\in \mathbb{S}_{+}$
where $\mathbb{S}_{+}$ is the set of symmetric positive semidefinite matrices.
Imposing this constraint requires SDP programming. It can be shown (via the Schur complement of $C(a)$) that this is equivalent to imposing $p$ linear inequalities and $p$ quadratic inequalities on the values of $a$.
Now, we know the set of all solutions is a $k$-dimensional ellipse. Such an ellipse is defined by $k+1$ points on its boundary. Each solution to this SDP (corresponding to a given vector $w$) will be one point on the boundary of this ellipse.
Finally, each run of the minimization problem has time complexity $O(p^3)$ where $p$ is the number of assets.
There is a nice package for solving SDPs in [Matlab](http://cvxr.com/cvx/). I think there is also one in R, but last I checked it was not as nice.
| null | CC BY-SA 2.5 | null | 2010-09-13T00:13:30.363 | 2010-09-13T10:36:05.453 | 2010-09-13T10:36:05.453 | 603 | 603 | null |
2606 | 2 | null | 2580 | 4 | null | You can use a metric such as [Mean Squared Error](http://en.wikipedia.org/wiki/Mean_squared_error) between actual vs predicted values to compare the two models.
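For example, a minimal R sketch along those lines (toy count data; `glm.nb` comes from the MASS package):
```
library(MASS)
# Compare Poisson and negative binomial fits via MSE of observed vs fitted counts
set.seed(1)
d <- data.frame(x = runif(200))
d$y <- rnbinom(200, mu = exp(1 + d$x), size = 2)   # toy overdispersed counts
fit.pois <- glm(y ~ x, family = poisson, data = d)
fit.nb   <- glm.nb(y ~ x, data = d)
mse <- function(fit) mean((d$y - fitted(fit))^2)
c(poisson = mse(fit.pois), negbin = mse(fit.nb))
```
Note that MSE only compares the fitted means, so it says little about how well each model captures the variance, which is where the Poisson and negative binomial really differ.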
| null | CC BY-SA 2.5 | null | 2010-09-13T00:45:03.360 | 2010-09-13T00:45:03.360 | null | null | null | null |
2607 | 2 | null | 2602 | 2 | null | > I often find parts of financial correlation matrices not statistically significantly different from zero
You might want to be careful with that. First of all, statistically significant and large enough to be meaningful are not the same [thing](https://stats.stackexchange.com/questions/2516/are-large-data-sets-inappropriate-for-hypothesis-testing/). Second, be careful that the test you use to assess the significance of the correlation coefficients does not assume the underlying data to be Gaussian, because if you are dealing with financial series they are most likely fat-tailed (and this is a case where it would make a difference, i.e. render your test void). Third, testing whether a single correlation coefficient is statistically significant is not the same as testing whether some of your correlation coefficients are statistically [significant](http://en.wikipedia.org/wiki/Multiple_comparisons).
> Also, the sign of correlations can skew the results.
Presumably not, if the correlations are small to begin with (i.e. if you only fudge with small correlation coefficients this should not be a concern).
See Edward's comment below.
> fix the negative eigenvalues that might arise due to such corrections?
Decreasing the absolute value of a correlation coefficient (as you consider doing) will never make a PSD matrix become indefinite (i.e., it will not change the sign of any of your eigenvalues).
See Whuber's comment below.
| null | CC BY-SA 2.5 | null | 2010-09-13T00:56:45.050 | 2010-09-13T16:19:52.300 | 2017-04-13T12:44:39.283 | -1 | 603 | null |
2608 | 2 | null | 2602 | 1 | null | I am not sure if this is feasible in your context, but one thing you can do to avoid these issues is to use Bayesian estimation and to compute the expected values of different investing decisions based on the posterior distribution of the parameters.
| null | CC BY-SA 2.5 | null | 2010-09-13T01:48:17.720 | 2010-09-13T01:48:17.720 | null | null | null | null |
2609 | 2 | null | 2597 | 5 | null | I recently became fond of the [clustergram visualization](http://www.r-statistics.com/2010/06/clustergram-visualization-and-diagnostics-for-cluster-analysis-r-code/) method (implemented in R).
I use it as an extra method to assess a "good" number of clusters. Extending it to other clustering methods is not so hard (I actually did it, I just didn't get around to publishing the code).
| null | CC BY-SA 2.5 | null | 2010-09-13T05:22:06.293 | 2010-09-13T05:22:06.293 | null | null | 253 | null |
2610 | 2 | null | 2597 | 17 | null | It is rather difficult to provide a clear-cut solution about how to choose the "best" number of clusters in your data, whatever the clustering method you use, because Cluster Analysis seeks to isolate groups of statistical units (whether it be individuals or variables) for exploratory or descriptive purpose, essentially. Hence, you also have to interpret the output of your clustering scheme and several cluster solutions may be equally interesting.
Now, regarding the usual statistical criteria used to decide when to stop aggregating data, as pointed out by @ars most are visually guided criteria, including the analysis of the dendrogram or the inspection of cluster profiles, also called [silhouette](http://en.wikipedia.org/wiki/Silhouette_(clustering)) plots (Rousseeuw, 1987). Several numerical criteria, also known as validity indices, were also proposed, e.g. Dunn’s validity index, Davies-Bouldin validity index, C index, Hubert’s gamma, to name a few. Hierarchical clustering is often run together with k-means (in fact, several instances of k-means, since it is a stochastic algorithm), so that it adds support to the clustering solutions found. I don't know if all of this stuff is readily available in Python, but a huge number of methods are available in R (see the [Cluster](http://cran.r-project.org/web/views/Cluster.html) task view, already cited by @mbq for a related question, [What tools could be used for applying clustering algorithms on MovieLens?](https://stats.stackexchange.com/questions/183/what-tools-could-be-used-for-applying-clustering-algorithms-on-movielens/518#518)). Other approaches include [fuzzy clustering](http://en.wikipedia.org/wiki/Fuzzy_clustering) and [model-based clustering](http://www.stat.washington.edu/mclust/) (also called latent trait analysis, in the psychometric community) if you seek a more robust way to choose the number of clusters in your data.
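As a small illustration of one such index in R (using the `silhouette` function from the cluster package on toy data), the average silhouette width can be tracked over a range of cluster counts:
```
library(cluster)
# Average silhouette width for k-means solutions with k = 2..8
set.seed(1)
X <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 4), ncol = 2))
d <- dist(X)
avg.sil <- sapply(2:8, function(k) {
  cl <- kmeans(X, centers = k, nstart = 20)$cluster
  mean(silhouette(cl, d)[, "sil_width"])
})
plot(2:8, avg.sil, type = "b", xlab = "k", ylab = "Average silhouette width")
```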
BTW, I just came across this webpage, [scipy-cluster](http://code.google.com/p/scipy-cluster/), which is an extension to Scipy for generating, visualizing, and analyzing hierarchical clusters. Maybe it includes other functionalities? I've also heard of [PyChem](http://pychem.sourceforge.net/) which offers pretty good stuff for multivariate analysis.
The following reference may also be helpful:
Steinley, D., & Brusco, M. J. (2008). Selection of variables in cluster analysis: An empirical comparison of eight procedures. Psychometrika, 73, 125-144.
| null | CC BY-SA 2.5 | null | 2010-09-13T08:26:14.417 | 2010-09-13T08:26:14.417 | 2017-04-13T12:44:48.803 | -1 | 930 | null |
2611 | 1 | null | null | 21 | 3506 | The logic of multiple imputation (MI) is to impute the missing values not once but several (typically M=5) times, resulting in M completed datasets. The M completed datasets are then analyzed with complete-data methods upon which the M estimates and their standard errors are combined using Rubin's formulas to obtain the "overall" estimate and its standard error.
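For a scalar parameter, Rubin's formulas amount to the following (a minimal R sketch, where `est` and `se` are the length-M vectors of per-imputation estimates and standard errors):
```
# Rubin's rules for combining M complete-data estimates of a scalar parameter
pool.rubin <- function(est, se) {
  M <- length(est)
  qbar <- mean(est)            # overall point estimate
  W <- mean(se^2)              # within-imputation variance
  B <- var(est)                # between-imputation variance
  Tvar <- W + (1 + 1/M) * B    # total variance
  c(estimate = qbar, se = sqrt(Tvar))
}
```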
Great so far, but I'm not sure how to apply this recipe when variance components of a mixed-effects model are concerned. The sampling distribution of a variance component is asymmetrical - therefore the corresponding confidence interval can't be given in the typical "estimate ± 1.96*se(estimate)" form. For this reason the R packages lme4 and nlme don't even provide the standard errors of the variance components, but only provide confidence intervals.
We can therefore perform MI on a dataset and then get M confidence intervals per variance component after fitting the same mixed-effect model on the M completed datasets. The question is how to combine these M intervals into one "overall" confidence interval.
I guess this should be possible - the authors of an article (Yucel & Demirtas (2010), Impact of non-normal random effects on inference by MI) seem to have done it, but they don't explain exactly how.
Any tips would be much obliged!
Cheers, Rok
| How to combine confidence intervals for a variance component of a mixed-effects model when using multiple imputation | CC BY-SA 2.5 | null | 2010-09-13T11:47:27.230 | 2010-10-06T14:31:47.217 | 2010-09-13T12:20:07.773 | 930 | 1266 | [
"modeling",
"confidence-interval",
"mixed-model",
"data-imputation"
] |
2612 | 2 | null | 2611 | 8 | null | This is a great question! Not sure this is a full answer, however, I drop these few lines in case it helps.
It seems that Yucel and Demirtas (2010) refer to an older paper published in the JCGS, [Computational strategies for multivariate linear mixed-effects models with missing values](http://sci2s.ugr.es/keel/pdf/specific/articulo/schafer_yucel02.pdf), which uses an hybrid EM/Fisher scoring approach for producing likelihood-based estimates of the VCs. It has been implemented in the R package [mlmmm](http://cran.r-project.org/web/packages/mlmmm/index.html). I don't know, however, if it produces CIs.
Otherwise, I would definitely check the [WinBUGS](http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/contents.shtml) program, which is largely used for multilevel models, including those with missing data. I seem to remember it only works if your MVs are in the response variable, not in the covariates, because we generally have to specify the full conditional distributions (if MVs are present in the independent variables, it means that we must give a prior to the missing Xs, and that will be considered as a parameter to be estimated by WinBUGS...). It seems to apply to R as well, if I refer to the following thread on r-sig-mixed, [missing data in lme, lmer, PROC MIXED](https://stat.ethz.ch/pipermail/r-sig-mixed-models/2008q3/001192.html). Also, it may be worth looking at the [MLwiN](http://www.cmm.bristol.ac.uk/MLwiN/) software.
| null | CC BY-SA 2.5 | null | 2010-09-13T12:57:32.943 | 2010-09-13T14:14:58.480 | 2010-09-13T14:14:58.480 | 930 | 930 | null |
2613 | 1 | 2618 | null | 7 | 864 | I am advising a small medical study (two groups, treatment is a dummy variable), i.e. a 2x2 contingency table. I'm comparing the value of the [Pearson's $\chi^2$ test](http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test) and a non parametric competitor [McNemar's $\chi^2$ test](http://en.wikipedia.org/wiki/McNemar%27s_test).
Edit
The answer brings an additional questions:
they have matched each case in group1 with 4 controls in group2 (matched according to all the variables they deem important, such as sex, age, hemisf) except treatment. Setting aside the question of whether the matching is well done (i.e. whether all important variables have indeed been identified), doesn't the fact that each case corresponds to four controls (rather than one) artificially inflate the significance of the results? (They still have to send me part of the data, which is why the table below does not reflect this 4-to-1 ratio.)
I obtain the following (very different) results (n=116) (R CrossTable() function):
```
[,1] [,2]
[1,] 39 9
[2,] 49 19
```
```
Statistics for All Table Factors

## Pearson's Chi-squared test with Yates' continuity correction
Chi^2 = 0.844691      d.f. = 1      p = 0.3580586

## McNemar's Chi-squared test with continuity correction
Chi^2 = 26.22414      d.f. = 1      p = 3.039988e-07

## Fisher's Exact Test for Count Data
Sample estimate odds ratio: 1.672924
Alternative hypothesis: true odds ratio is not equal to 1
p = 0.279528
95% confidence interval: 0.6356085 4.692326
```
The McNemar is the approximate version, but the exact version gives the same conclusions (strong rejection of the null).
My question is: how can I understand such a large difference between $\chi^2$ and McNemar?
| How to explain such a big difference between parametric and non parametric test (and other questions)? | CC BY-SA 2.5 | null | 2010-09-13T13:14:51.620 | 2010-09-13T16:28:22.580 | 2010-09-13T16:28:22.580 | 603 | 603 | [
"hypothesis-testing",
"nonparametric"
] |
2614 | 2 | null | 372 | 1 | null | Understanding the causes of over-fitting, and how it can be avoided, is absolutely vital in data mining. If you mine for statistical associations in data, you will always be able to find them (even if the data are purely random), but that doesn't mean they are of any predictive value. The more you mine, the more likely you are to dig up a spurious rule. I know that may sound rather obvious, but that doesn't mean that spurious rules never get presented to clients! ;o)
| null | CC BY-SA 2.5 | null | 2010-09-13T13:16:33.200 | 2010-09-13T13:16:33.200 | null | null | 887 | null |
2615 | 1 | 2664 | null | 7 | 613 | Suppose I have some initial correlation matrix. I want to stress each correlation in the matrix by the same constant simultaneously (except the diagonal; lets call this global parallel stress since it affects the entire matrix by the same constant at the same time). I start with adding 0.01% to each correlation and check if the matrix is still PSD. I continue increasing the stress levels by small increments. Eventually I encounter a stress level above which the matrix would no longer be PSD. Let's call this the upper stress boundary. I repeat the same procedure for negative stresses and find the lower stress boundary.
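A rough R sketch of that incremental search (my own illustration; `C` is the initial correlation matrix, and I only check positive semidefiniteness, ignoring the additional constraint that entries stay within [-1, 1]):
```
# Increase a uniform off-diagonal stress until the matrix stops being PSD
upper.stress <- function(C, step = 1e-4, tol = -1e-12) {
  J <- matrix(1, nrow(C), ncol(C)); diag(J) <- 0   # all-ones off-diagonal pattern
  s <- 0
  repeat {
    e.min <- min(eigen(C + (s + step) * J, symmetric = TRUE,
                       only.values = TRUE)$values)
    if (e.min < tol) break   # next increment would break positive semidefiniteness
    s <- s + step
  }
  s   # upper stress boundary (an analogous loop with negative steps gives the lower one)
}
```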
My general observation was that the minimum eigenvalue of the initial correlation matrix is roughly equal to the upper stress boundary. Is there an analytical solution for this problem?
Generally, I am more interested in the upper stress boundary. I stress the entire matrix at the same time because it is a simple way to reduce diversification benefit across all market variables. However, from a theoretical point of view it would be interesting to find out how to analytically calculate both lower and upper stress boundaries.
EDIT: please see a previous thread, which might be relevant here (some brilliant ideas from kwak): [Correlation stress testing](https://stats.stackexchange.com/questions/2572/correlation-stress-testing)
| Analytical solutions to limits of correlation stress testing | CC BY-SA 2.5 | null | 2010-09-13T13:35:11.777 | 2010-09-20T01:50:20.380 | 2017-04-13T12:44:33.977 | -1 | 1250 | [
"correlation"
] |
2616 | 2 | null | 485 | 1 | null | I just came across this website, [CensusAtSchool -- Informal inference](http://www.censusatschool.org.nz/2009/informal-inference/). Maybe worth looking at the videos and handouts...
| null | CC BY-SA 2.5 | null | 2010-09-13T14:07:11.150 | 2010-09-13T14:07:11.150 | null | null | 930 | null |
2617 | 1 | null | null | 4 | 4144 | I'm trying to take a normal distribution of points, and force them to become a uniform distribution. I've had little success on [S.O.](https://stackoverflow.com/questions/3696747/generation-of-uniformly-distributed-random-noise), so I thought I'd ask here.
Basically, I have a hash function which takes an `X`, `Y`, and seed value and generates numbers in a normal distribution with a mean of 0. These numbers vary only slightly from the numbers on all sides of them. When mapped out with the `X` and `Y` and the hashed value being `Z`, it creates a terrain map. The map has some large peaks far past 1 and -1, but the middle 50% of the values lie between (-0.4,0.4). I'm trying to smooth this map out so that it retains its shape, but has a more-or-less uniform (rather than normal) distribution.
As I said, the middle 50% of the values lie between (-0.4,0.4). The theoretical bounds for the hash function is (-2.25,2.25), though after generating a billion samples, the range of numbers found were about (-1.75,1.75).
I think I need to take the above information to determine the best fit normal distribution, then use it to transform each value. As I found out on S.O., it's a really tough problem for me to explain. Hopefully someone here can at least point me in the right direction explaining it, or understands what I'm trying to do.
---
The probability densities of my distribution, in blue, and `Normal(0,.72)`, in red: [-3.8,3.8]
---
Their cumulative probabilities: [-3.8,3.8]
---
The probability density after inverse probability transform: [0,1]
---
And the cumulative probability compared to `Uniform`: [0,1]
| How can I determine the best fit normal distribution from this information? | CC BY-SA 2.5 | null | 2010-09-13T14:07:17.403 | 2010-09-16T05:33:58.557 | 2017-05-23T12:39:26.593 | -1 | 1267 | [
"distributions",
"algorithms",
"data-transformation"
] |
2618 | 2 | null | 2613 | 11 | null | Because the null hypothesis your McNemar test tests, is not the same as the one tested by the $\chi^2$ test. The McNemar test actually tests whether the probability of 1-2 equals that of 2-1 (say first number is the row and second the column). If you switch columns, then you get a completely different outcome. The $\chi^2$ test just tests whether the frequencies can be calculated from the marginal frequencies, which means both categorical variables are independent.
To illustrate in R :
```
> x <- matrix(c(39,49,9,19),ncol=2)
> y <-x[,2:1]
> x
[,1] [,2]
[1,] 39 9
[2,] 49 19
> y
[,1] [,2]
[1,] 9 39
[2,] 19 49
> mcnemar.test(x)
McNemar's Chi-squared test with continuity correction
data: x
McNemar's chi-squared = 26.2241, df = 1, p-value = 3.04e-07
> mcnemar.test(y)
McNemar's Chi-squared test with continuity correction
data: y
McNemar's chi-squared = 6.2241, df = 1, p-value = 0.01260
> chisq.test(x)
Pearson's Chi-squared test with Yates' continuity correction
data: x
X-squared = 0.8447, df = 1, p-value = 0.3581
> chisq.test(y)
Pearson's Chi-squared test with Yates' continuity correction
data: y
X-squared = 0.8447, df = 1, p-value = 0.3581
```
It is obvious the result of the McNemar test is completely different depending on which column comes first, whereas the $\chi^2$ test gives exactly the same outcome. Now why is this one non-significant? Well, take a look at the expected values :
```
> m1 <- margin.table(x,1)/116
> m2 <- margin.table(x,2)/116
> outer(m1,m2)*116
[,1] [,2]
[1,] 36.41379 11.58621
[2,] 51.58621 16.41379
```
Pretty close to the table you have.
So both tests are not disagreeing at all. The $\chi^2$ rightfully concludes that both variables are independent, i.e. the counts in one variable are not influenced by the other and vice versa, and the McNemar test rightfully concludes that the probability of being first row-second column (0.07) is not the same as being second row-first column (0.42).
| null | CC BY-SA 2.5 | null | 2010-09-13T14:58:53.460 | 2010-09-13T14:58:53.460 | null | null | 1124 | null |
2619 | 1 | 2622 | null | 10 | 21601 | I have three variables:
- distance (continuous, variable range negative infinity to positive infinity)
- isLand (discrete categorical/ Boolean, variable range 1 or 0)
- occupants (discrete categorical, variable range 0-7)
I want to answer the following statistical questions:
- How do I compare distributions that involve both a categorical and a continuous variable? For example, I'd like to determine whether the distribution of distance vs occupants varies depending on the value of isLand.
- Given two of the three variables, can I predict the third using some equation?
- How can I determine independence with more than two variables?
| Continuous and Categorical variable data analysis | CC BY-SA 2.5 | null | 2010-09-13T14:59:18.623 | 2019-07-01T08:39:16.593 | 2010-09-16T06:31:27.430 | null | 559 | [
"categorical-data",
"continuous-data"
] |
2620 | 2 | null | 2617 | 3 | null | The [inverse probability transform](http://en.wikipedia.org/wiki/Probability_integral_transform) seems to be what you want.
To get a better uniform fit, the CDF $F(x)$ in the inverse probability transform needs to be the actual distribution generating the data. If this isn't actually normal, then that could account for the relatively poor fit. You might try a [Q-Q plot](http://en.wikipedia.org/wiki/Qq_plot) or [normality test](https://stats.stackexchange.com/questions/1645/appropriate-normality-tests-for-small-samples) first. Or your data may very well be normal, just with some noise, in which case you can obtain a best fit. I would suggest trying in order:
- Standardize your data to N(0, 1) and see if the transformation improves.
- After a normality test, fit a normal distribution to your data, then apply the transformation.
| null | CC BY-SA 2.5 | null | 2010-09-13T15:12:28.100 | 2010-09-13T19:54:36.780 | 2017-04-13T12:44:54.643 | -1 | 251 | null |
2621 | 2 | null | 2573 | 25 | null | To clarify Dirk's point:
Say your data is a sample of a normal distribution. You could construct the following plot:
The red line is the empirical density estimate, the blue line is the theoretical pdf of the underlying normal distribution. Note that the histogram is expressed in densities and not in frequencies here. This is done for plotting purposes; in general, frequencies are used in histograms.
So to answer your question : you use the empirical distribution (i.e. the histogram) if you want to describe your sample, and the pdf if you want to describe the hypothesized underlying distribution.
Plot is generated by following code in R :
```
x <- rnorm(100)
y <- seq(-4,4,length.out=200)
hist(x,freq=F,ylim=c(0,0.5))
lines(density(x),col="red",lwd=2)
lines(y,dnorm(y),col="blue",lwd=2)
```
| null | CC BY-SA 2.5 | null | 2010-09-13T15:14:47.923 | 2010-09-13T19:52:04.797 | 2010-09-13T19:52:04.797 | 1124 | 1124 | null |
2622 | 2 | null | 2619 | 5 | null | I would recommend reading about logistic or log-linear models in particular, and methods of categorical data analysis in general. The notes on the following course are pretty good for a start: [Analysis of Discrete Data](http://www.stat.psu.edu/online/courses/stat504/). The [textbook](http://rads.stackoverflow.com/amzn/click/0471226181) by Agresti is quite good. You might also consider [Kleinbaum](http://rads.stackoverflow.com/amzn/click/1441917411) for a quick start.
| null | CC BY-SA 2.5 | null | 2010-09-13T16:18:06.563 | 2010-09-13T16:18:06.563 | null | null | 251 | null |
2623 | 1 | 2627 | null | 16 | 13322 | Does the autocorrelation function have any meaning with a non-stationary time series?
The time series is generally assumed to be stationary before autocorrelation is used for Box and Jenkins modeling purposes.
| Autocorrelation in the presence of non-stationarity? | CC BY-SA 2.5 | null | 2010-09-13T16:56:42.960 | 2016-09-21T13:32:55.927 | 2016-05-24T09:43:57.127 | 1352 | 284 | [
"time-series",
"autocorrelation",
"box-jenkins"
] |
2624 | 2 | null | 2615 | 3 | null | This is not a complete answer but it was too long to fit in as a comment and hopefully gives you some ideas.
What I am about to say is not really a proof but more of an intuitive sketch as to why the minimum eigenvalue may matter as far as the upper stress boundary is concerned.
Every correlation matrix can be decomposed into a set of eigenvectors and a corresponding set of eigenvalues. The eigenvectors correspond to the axes of the ellipsoid associated with the correlation matrix, and the eigenvalues give us a 'sense' of the length of the axis associated with the corresponding eigenvector.
Now, when you stress a correlation matrix you are essentially performing one or more of the following two operations simultaneously: 1. You are rotating the eigenvectors and 2. You are changing the eigenvalues. Notice that rotating eigenvectors simply changes the orientation of the ellipsoid but if the eigenvalues remain positive then we have a psd correlation matrix. However, as eigenvalues drop the ellipsoid shrinks in the direction of the corresponding eigenvector and eventually as the eigenvalue reaches zero the ellipsoid collapses completely in that direction resulting in a matrix that is not psd.
As you stress the correlation matrix there is less room along the axis that is shortest (i.e., the axis which has the lowest eigenvalue) to shrink and thus the minimum eigenvalue has a special role to play in stress testing.
| null | CC BY-SA 2.5 | null | 2010-09-13T17:27:13.523 | 2010-09-13T18:23:25.273 | 2010-09-13T18:23:25.273 | null | null | null |
2625 | 2 | null | 2617 | 4 | null | Interpreting your question as asking for the "inverse probability transform," as ars has indicated and you have confirmed in comments to the question, gives a simple solution: perform a sort to rank all the $z$ values in ascending order from $1$ to $N$ (the amount of data), then convert each $z$ into $2 Rank(z)/(N+1) - 1$. If you want to be really careful, scan over the ranked $z$'s to look for ties and assign each value within a group of tied values the average of their ranks.
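In R, for example, this is essentially a one-liner, since `rank()` can average ties for you:
```
z <- rnorm(1000)   # placeholder for your values
u <- 2 * rank(z, ties.method = "average") / (length(z) + 1) - 1   # maps into (-1, 1)
```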
| null | CC BY-SA 2.5 | null | 2010-09-13T17:37:30.807 | 2010-09-13T17:37:30.807 | null | null | 919 | null |
2626 | 2 | null | 2623 | 8 | null | In its alternative form as a variogram, the rate at which the function grows with large lags is roughly the square of the average trend. This can sometimes be a useful way to decide whether you have adequately removed any trends.
You can think of the variogram as the squared correlation multiplied by an appropriate variance and flipped upside down.
(This result is a direct consequence of the analysis presented at [Why does including latitude and longitude in a GAM account for spatial autocorrelation?](https://stats.stackexchange.com/a/35524/919), which shows how the variogram includes information about the expected squared difference between values at different locations.)
| null | CC BY-SA 3.0 | null | 2010-09-13T17:44:20.733 | 2016-05-24T11:43:50.740 | 2017-04-13T12:44:44.767 | -1 | 919 | null |
2627 | 2 | null | 2623 | 15 | null | @whuber gave a nice answer. I would just add, that you can simulate this very easily in R:
```
op <- par(mfrow = c(2,2), mar = .5 + c(0,0,0,0))
N <- 500
# Simulate a Gaussian noise process
y1 <- rnorm(N)
# Turn it into integrated noise (a random walk)
y2 <- cumsum(y1)
plot(ts(y1), xlab="", ylab="", main="", axes=F); box()
plot(ts(y2), xlab="", ylab="", main="", axes=F); box()
acf(y1, xlab="", ylab="", main="", axes=F); box()
acf(y2, xlab="", ylab="", main="", axes=F); box()
par(op)
```
Which ends up looking somewhat like this:
So you can easily see that the ACF function trails off slowly to zero in the case of a non-stationary series. The rate of decline is some measure of the trend, as @whuber mentioned, although this isn't the best tool to use for that kind of analysis.
| null | CC BY-SA 2.5 | null | 2010-09-13T17:55:10.243 | 2010-09-13T20:00:10.513 | 2010-09-13T20:00:10.513 | 5 | 5 | null |
2628 | 1 | 2629 | null | 57 | 30797 | Imagine you have to do reporting on the numbers of candidates who yearly take a given test. It seems rather difficult to generalize the observed % of success, for instance, to a wider population due to the specificity of the target population. So you may consider that these data represent the whole population.
Are results of tests indicating that the proportions of males and females are different really correct? Does a test comparing observed and theoretical proportions appear to be a correct one, since you consider a whole population (and not a sample)?
| Statistical inference when the sample "is" the population | CC BY-SA 2.5 | null | 2010-09-13T18:35:23.810 | 2021-08-19T20:40:06.823 | 2010-09-14T07:59:54.997 | 183 | 1154 | [
"hypothesis-testing",
"population",
"sampling"
] |
2629 | 2 | null | 2628 | 39 | null | There may be varying opinions on this, but I would treat the population data as a sample and assume a hypothetical population, then make inferences in the usual way. One way to think about this is that there is an underlying data generating process responsible for the collected data, the "population" distribution.
In your particular case, this might make even more sense since you will have cohorts in the future. Then your population is really cohorts who take the test even in the future. In this way, you could account for time based variations if you have data for more than a year, or try to account for latent factors through your error model. In short, you can develop richer models with greater explanatory power.
| null | CC BY-SA 2.5 | null | 2010-09-13T19:30:09.810 | 2010-09-13T19:30:09.810 | null | null | 251 | null |
2630 | 2 | null | 2628 | 31 | null | Actually, if you're really positive you have the whole population, there's no need to go into statistics at all. Then you know exactly how big the difference is, and there is no reason whatsoever to test it any more. A classical mistake is using statistical significance as "relevant" significance. If you sampled the population, the difference is what it is.
On the other hand, if you reformulate your hypothesis, then the candidates can be seen as a sample of possible candidates, which would allow for statistical testing. In this case, you'd test in general whether male and female differ on the test at hand.
As ars said, you can use tests of multiple years and add time as a random factor. But if your interest really is in the differences between these candidates on this particular test, you cannot use the generalization and testing is senseless.
| null | CC BY-SA 2.5 | null | 2010-09-13T20:12:57.343 | 2010-09-13T21:52:22.893 | 2010-09-13T21:52:22.893 | 1124 | 1124 | null |
2631 | 1 | 2633 | null | 9 | 3902 | This is a followup to [this](https://stats.stackexchange.com/q/2597/977) question. I am currently trying to implement the C-Index in order to find a near-optimal number of clusters from a hierarchy of clusters. I do this by calculating the C-Index for every step of the (agglomerative) hierarchical clustering. The problem is that the C-Index is minimal (0 to be exact) for very degenerated clusterings. Consider this:
$c = \frac{S-S_{min}}{S_{max}-S_{min}}$
In this case $S$ is the sum of all distances between pairs of observations in the same cluster over all clusters. Let $n$ be the number of these pairs. $S_{min}$ and $S_{max}$ are the sums of the $n$ lowest/highest distances across all pairs of observations. In the first step of the hierarchical clustering, the two closest observations (minimal distance) are merged into a cluster. Let $d$ be the distance between these observations. Now there is one pair of observations in the same cluster, so $n=1$ (all other clusters are singletons). Consequently $S=d$. The problem is that $S_{min}$ also equals $d$, because $d$ is the smallest distance (that is why the observations were merged first). So for this case, the C-Index is always 0. It stays 0 as long as only singleton clusters are merged. This means the optimal clustering according to the C-Index would always consist of a bunch of clusters containing two observations, and the rest singletons. Does this mean that the C-Index is not applicable to hierarchical clustering? Am I doing something wrong? I have searched a lot, but could not find any suitable explanation. Can someone refer me to some resource that is freely available on the internet? Or, if not, at least a book I may try to get at my university's library?
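For concreteness, here is an R sketch of the computation as I understand it (`d` is the distance matrix and `cl` the vector of cluster labels at a given step of the hierarchy):
```
c.index <- function(d, cl) {
  D <- as.matrix(d)
  same <- outer(cl, cl, "==") & upper.tri(D)  # pairs lying in the same cluster
  n <- sum(same)                              # number of such pairs
  S <- sum(D[same])
  dsort <- sort(D[upper.tri(D)])
  Smin <- sum(head(dsort, n))                 # sum of the n smallest distances
  Smax <- sum(tail(dsort, n))                 # sum of the n largest distances
  (S - Smin) / (Smax - Smin)                  # 0 in the degenerate case described above
}

# example: cut an average-linkage hierarchy at k = 3 clusters
X <- matrix(rnorm(60), ncol = 2)
d <- dist(X)
cl <- cutree(hclust(d, method = "average"), k = 3)
c.index(d, cl)
```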
Thanks in advance!
| Can someone explain the C-Index in the context of hierarchical clustering? | CC BY-SA 2.5 | null | 2010-09-13T20:20:30.583 | 2010-09-13T21:04:03.930 | 2017-04-13T12:44:24.947 | -1 | 977 | [
"clustering"
] |
2632 | 2 | null | 2617 | 3 | null | OK, your hash function is generating numbers according to some distribution. You're thinking of it as normal, but it's really not.
There's a simple way to convert between any univariate distribution and a uniform distribution. That is to use the cumulative distribution function CDF, which is a simple monotonic function, S-shaped, going from y = 0 to y = 1.
So to convert a number x from your hash distribution into a uniform number y, simply take y = CDF(x). To convert from uniform y to hash number x, just invert the CDF function.
To get the CDF function, just make a table lookup. Generate a large number N of your hash function numbers, put them in a big array, and sort them in ascending order. That is your table. Then to calculate CDF(x), just lookup x in the table, by binary search. Then its index i in the table tells what y is, by y = i/N. (Actually, I'm cheating a bit. You will be more accurate if you do interpolation between two adjacent entries.)
If you want to invert the CDF function, just take your uniform number y, and get i = N*y. That gives you an index in the table, where you find x. Of course, you should interpolate, but if N is big enough, and you're not really fussy about accuracy, you don't really need to.
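Here's a rough sketch of that lookup-table idea in R (draws from a normal stand in for your hash values; `findInterval` does the binary search):
```
# Build the lookup table from a large sample of hash values, sorted ascending
N <- 1e5
table.x <- sort(rnorm(N))   # placeholder: draws from your hash function

# CDF(x): fraction of table entries <= x, i.e. y = i/N found by binary search
cdf <- function(x) findInterval(x, table.x) / N

# Inverse CDF: given uniform y in (0, 1), read the table at index N*y (no interpolation)
inv.cdf <- function(y) table.x[pmax(1, pmin(N, round(N * y)))]
```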
P.S. I'm glossing over some details, like what to do at the very ends of the table, or what if your table contains duplicate values, but this should get you going.
| null | CC BY-SA 2.5 | null | 2010-09-13T20:53:17.000 | 2010-09-13T21:01:50.457 | 2010-09-13T21:01:50.457 | 1270 | 1270 | null |
2633 | 2 | null | 2631 | 2 | null | This may be one of the cases where there's more art than science to clustering. I would suggest that you let your clustering algorithm run for a short time before letting the C-Index calculations kick in. "Short time" may be after processing a few pairs, just when it starts to exceed 0, or some other heuristic. (After all you don't expect to stop at 1 or 2 clusters, otherwise a different separation algorithm may have been deployed.)
For a book recommendation, I can suggest:
- Cluster Analysis by Brian Everitt, Sabine Landau, Morven Leese
You can scan/search the available contents on google books to see if it might meet your needs. It's worked as a reference for me in the past.
| null | CC BY-SA 2.5 | null | 2010-09-13T21:04:03.930 | 2010-09-13T21:04:03.930 | null | null | 251 | null |
2634 | 2 | null | 2619 | 2 | null |
- To examine the relationship between a continuous and categorical factor, a good start is to use side-by-side box plots, continuous on the left, categorical on the bottom. Are the means different? Use ANOVA to check.
- To examine the relationship between categorical factors, a good start is to use a mosaic plot, as well as a contingency table. You could group first then make separate plots.
- To predict occupants, ordinal logistic regression is probably the best way to go.
- To predict isLand, (binomial) logistic regression should do the trick.
- To predict distance, OLS regression will work.
| null | CC BY-SA 2.5 | null | 2010-09-13T23:32:13.713 | 2010-09-13T23:32:13.713 | null | null | 74 | null |
2635 | 1 | null | null | 12 | 27355 | I need to update the failure rate (given as deterministic) based on new rate of failure about the same system (it is a deterministic one too). I read about conjugate priors and Gamma distribution as a conjugate for the Poisson process.
Also, I can equate the mean value of the Gamma dist. ($\beta/\alpha$) to the new rate (as a mean value), but I do not have any other information such as the standard deviation, coefficient of variation, 90th percentile value, etc. Is there a magic way to manipulate that and find parameters for the prior Gamma so that I get a posterior which is Gamma too?
| Conjugate prior for a Gamma distribution | CC BY-SA 3.0 | null | 2010-09-14T00:20:44.673 | 2022-02-23T01:10:27.043 | 2013-08-02T05:12:13.260 | 805 | 1272 | [
"bayesian"
] |
2636 | 2 | null | 2547 | 2 | null | You're getting good answers here, but let me just add my 2 cents. I work in pharmacometrics, which deals in things like blood volume, elimination rate, base level of drug effect, maximum drug effect, and parameters like that.
We make a distinction between variables that can take on any value plus or minus, versus values that can only be positive. An example of a variable that can take on any value, plus or minus, would be drug effect, which could be positive, zero, or negative. An example of a variable that can only realistically be positive is blood volume or drug elimination rate.
We model these things with distributions that are typically either normal or lognormal, normal for the any-valued ones, and lognormal for the only-positive ones. A lognormal number is the number E taken to the power of a normally distributed number, and that is why it can only be positive.
For a normally distributed variable, the median, mean, and mode are the same number, so it doesn't matter which you use. However, for a lognormally distributed variable, the mean is larger than both the median and the mode, so it is not really very useful. In fact, the median is where the underlying normal has its mean, so it is a much more attractive measure.
Since age (presumably) can never be negative, a lognormal distribution is probably a better description of it than normal, so median (E to the mean of the underlying normal) is more useful.
| null | CC BY-SA 2.5 | null | 2010-09-14T00:23:26.010 | 2010-09-14T00:23:26.010 | null | null | 1270 | null |
2637 | 2 | null | 166 | 2 | null | As a rough generalization, any time you sample a fraction of the people in a population, you're going to get a different answer than if you sample the same number again (but possibly different people).
So if you want to find out how many people in Australia are >= 30 years old, and if the true fraction (God told us) just happened to be precisely 0.4, and if we ask 100 people, the average number we can expect to say they are >= 30 is 100 x 0.4 = 40, and the standard deviation of that number is +/- sqrt(100 * 0.4 * 0.6) = sqrt(24) ~ 4.9 or 4.9% (Binomial distribution).
Since that square root is in there, when the sample size goes up by 100 times, the standard deviation goes down by 10 times. So in general, to reduce the uncertainty of a measurement like this by a factor of 10, you need to sample 100 times as many people. So if you ask 100 x 100 = 10000 people, the standard deviation would go up to 49 or, as a percent, down to 0.49%.
| null | CC BY-SA 3.0 | null | 2010-09-14T00:55:16.510 | 2016-04-09T12:49:45.397 | 2016-04-09T12:49:45.397 | 5739 | 1270 | null |
2638 | 2 | null | 155 | 3 | null | Fun question.
Someone found out I work in biostatistics, and they asked me (basically) "Isn't statistics just a way of lying?"
(Which brings back the Mark Twain quote about Lies, Damn Lies, and Statistics.)
I tried to explain that statistics allows us to say with 100 percent precision that, given assumptions, and given data, that the probability of such-and-so was exactly such-and-such.
She wasn't impressed.
| null | CC BY-SA 2.5 | null | 2010-09-14T01:02:43.727 | 2010-09-14T18:11:45.067 | 2010-09-14T18:11:45.067 | 1270 | 1270 | null |
2639 | 1 | 2766 | null | 2 | 1377 | I have values from 4 different methods stored in the 4 matrices. Each of the 4 matrices contains values from a different method as:
```
Matrix_1 = 1 row x 20 column
Matrix_2 = 100 rows x 20 columns
Matrix_3 = 100 rows x 20 columns
Matrix_4 = 100 rows x 20 columns
```
The number of columns indicates the number of years; one row contains the values corresponding to the 20 years. The other 99 rows of matrices 2, 3 and 4 are just different realizations (or simulation runs). So basically the other 99 rows of matrices 2, 3 and 4 are repeat cases (but not with exactly the same values, because of the random numbers).
Consider `Matrix_1` as the reference truth (or base case). Now I want to compare the other 3 matrices with `Matrix_1` to see which one among those three matrices (each with 100 repeats) compares best with, or most closely imitates, `Matrix_1`.
How can this be done in Matlab?
I know, manually, that we use confidence interval (CI) by plotting the `mean of Matrix_1`, and drawing each distribution of `mean of Matrix_2`, `mean of Matrix_3` and `mean of Matrix_4`. The largest CI among matrix 2, 3 and 4 which contains the reference truth (or `mean of Matrix_1`) will be the answer.
```
mean of Matrix_1 = (1 row x 1 column)
mean of Matrix_2 = (100 rows x 1 column)
mean of Matrix_3 = (100 rows x 1 column)
mean of Matrix_4 = (100 rows x 1 column)
```
I hope the question is clear and relevant to SA. Otherwise please feel free to edit/suggest anything in question. Thanks!
| How to compare different distributions with reference truth value in Matlab? | CC BY-SA 3.0 | 0 | 2010-09-14T01:11:32.053 | 2016-07-11T23:24:31.623 | 2016-07-11T23:24:31.623 | null | null | [
"distributions",
"confidence-interval",
"multiple-comparisons",
"matlab"
] |
2641 | 1 | 2647 | null | 648 | 471077 | The [wikipedia page](http://en.wikipedia.org/wiki/Likelihood_function) claims that likelihood and probability are distinct concepts.
>
In non-technical parlance, "likelihood" is usually a synonym for "probability," but in statistical usage there is a clear distinction in perspective: the number that is the probability of some observed outcomes given a set of parameter values is regarded as the likelihood of the set of parameter values given the observed outcomes.
Can someone give a more down-to-earth description of what this means? In addition, some examples of how "probability" and "likelihood" disagree would be nice.
| What is the difference between "likelihood" and "probability"? | CC BY-SA 2.5 | null | 2010-09-14T03:24:01.207 | 2021-11-28T21:59:44.893 | 2021-08-22T20:05:49.100 | 5679 | 386 | [
"probability",
"terminology",
"likelihood",
"intuition"
] |
2642 | 2 | null | 2641 | 76 | null | If I have a fair coin (parameter value) then the probability that it will come up heads is 0.5. If I flip a coin 100 times and it comes up heads 52 times then it has a high likelihood of being fair (the numeric value of likelihood potentially taking a number of forms).
| null | CC BY-SA 2.5 | null | 2010-09-14T03:44:11.397 | 2010-09-14T03:44:11.397 | null | null | 601 | null |
2644 | 2 | null | 2617 | 1 | null | Why does your transformation need to be a smooth deformation? Just take the binary representation of each number in 64-bit two's complement fixed point, or 64-bit IEEE floating point, and throw that into SHA-1. Boom, instant hash with uniform distribution of the outputs (assuming no duplicates).
| null | CC BY-SA 2.5 | null | 2010-09-14T04:40:27.327 | 2010-09-14T04:40:27.327 | null | null | 1122 | null |
2645 | 2 | null | 2641 | 79 | null | I will give you the perspective from the view of Likelihood Theory which originated with [Fisher](http://en.wikipedia.org/wiki/Ronald_Fisher) -- and is the basis for the statistical definition in the cited Wikipedia article.
Suppose you have random variates $X$ which arise from a parameterized distribution $F(X; \theta)$, where $\theta$ is the parameter characterizing $F$. Then the probability of $X = x$ would be: $P(X = x) = F(x; \theta)$, with known $\theta$.
More often, you have data $X$ and $\theta$ is unknown. Given the assumed model $F$, the likelihood is defined as the probability of observed data as a function of $\theta$: $L(\theta) = P(\theta; X = x)$. Note that $X$ is known, but $\theta$ is unknown; in fact the motivation for defining the likelihood is to determine the parameter of the distribution.
Although it seems like we have simply re-written the probability function, a key consequence of this is that the likelihood function does not obey the laws of probability (for example, it's not bound to the [0, 1] interval). However, the likelihood function is proportional to the probability of the observed data.
This concept of likelihood actually leads to a different school of thought, "likelihoodists" (distinct from frequentist and bayesian) and you can google to search for all the various historical debates. The cornerstone is the [Likelihood Principle](http://en.wikipedia.org/wiki/Likelihood_principle) which essentially says that we can perform inference directly from the likelihood function (neither Bayesians nor frequentists accept this since it is not probability based inference). These days a lot of what is taught as "frequentist" in schools is actually an amalgam of frequentist and likelihood thinking.
For deeper insight, a nice start and historical reference is Edwards' [Likelihood](http://rads.stackoverflow.com/amzn/click/0801844436). For a modern take, I'd recommend Richard Royall's wonderful monograph, [Statistical Evidence: A Likelihood Paradigm](http://rads.stackoverflow.com/amzn/click/0412044110).
| null | CC BY-SA 3.0 | null | 2010-09-14T05:16:17.883 | 2015-10-23T10:09:59.643 | 2015-10-23T10:09:59.643 | 48595 | 251 | null |
2646 | 2 | null | 2641 | 61 | null | Suppose you have a coin with probability $p$ to land heads and $(1-p)$ to land tails. Let $x=1$ indicate heads and $x=0$ indicate tails. Define $f$ as follows
$$f(x,p)=p^x (1-p)^{1-x}$$
$f(x,2/3)$ is the probability of $x$ given $p=2/3$, and $f(1,p)$ is the likelihood of $p$ given $x=1$. Basically, likelihood vs. probability tells you which argument of the density is considered to be the variable.
[](https://i.stack.imgur.com/1Dmpu.png)
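A quick R sketch along the lines of the figure: probability as a function of $x$ with $p$ fixed at $2/3$, versus likelihood as a function of $p$ with $x$ fixed at $1$.
```
f <- function(x, p) p^x * (1 - p)^(1 - x)
par(mfrow = c(1, 2))
# Probability: p fixed at 2/3, x varies over the two possible outcomes
barplot(f(c(0, 1), 2/3), names.arg = c("x = 0", "x = 1"),
        ylab = "f(x, 2/3)", main = "probability")
# Likelihood: x fixed at 1, p varies over (0, 1)
p <- seq(0, 1, length.out = 100)
plot(p, f(1, p), type = "l", ylab = "f(1, p)", main = "likelihood")
```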
| null | CC BY-SA 4.0 | null | 2010-09-14T06:04:27.093 | 2019-01-10T12:08:06.180 | 2019-01-10T12:08:06.180 | 79696 | 511 | null |
2647 | 2 | null | 2641 | 470 | null | The answer depends on whether you are dealing with discrete or continuous random variables. So, I will split my answer accordingly. I will assume that you want some technical details and not necessarily an explanation in plain English.
Discrete Random Variables
Suppose that you have a stochastic process that takes discrete values (e.g., outcomes of tossing a coin 10 times, number of customers who arrive at a store in 10 minutes etc). In such cases, we can calculate the probability of observing a particular set of outcomes by making suitable assumptions about the underlying stochastic process (e.g., probability of coin landing heads is $p$ and that coin tosses are independent).
Denote the observed outcomes by $O$ and the set of parameters that describe the stochastic process as $\theta$. Thus, when we speak of probability we want to calculate $P(O|\theta)$. In other words, given specific values for $\theta$, $P(O|\theta)$ is the probability that we would observe the outcomes represented by $O$.
However, when we model a real life stochastic process, we often do not know $\theta$. We simply observe $O$ and the goal then is to arrive at an estimate for $\theta$ that would be a plausible choice given the observed outcomes $O$. We know that given a value of $\theta$ the probability of observing $O$ is $P(O|\theta)$. Thus, a 'natural' estimation process is to choose that value of $\theta$ that would maximize the probability that we would actually observe $O$. In other words, we find the parameter values $\theta$ that maximize the following function:
$L(\theta|O) = P(O|\theta)$
$L(\theta|O)$ is called the likelihood function. Notice that by definition the likelihood function is conditioned on the observed $O$ and that it is a function of the unknown parameters $\theta$.
Continuous Random Variables
In the continuous case the situation is similar with one important difference. We can no longer talk about the probability that we observed $O$ given $\theta$ because in the continuous case $P(O|\theta) = 0$. Without getting into technicalities, the basic idea is as follows:
Denote the probability density function (pdf) associated with the outcomes $O$ as: $f(O|\theta)$. Thus, in the continuous case we estimate $\theta$ given observed outcomes $O$ by maximizing the following function:
$L(\theta|O) = f(O|\theta)$
In this situation, we cannot technically assert that we are finding the parameter value that maximizes the probability that we observe $O$ as we maximize the PDF associated with the observed outcomes $O$.
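To make this concrete, here is a small R sketch (a toy example of my own) of maximizing a likelihood numerically, estimating $p$ from ten Bernoulli coin flips:
```
# Observed outcomes: 7 heads out of 10 independent tosses
x <- c(1, 1, 0, 1, 1, 1, 0, 1, 0, 1)

# Log-likelihood of p given the observed data
loglik <- function(p) sum(dbinom(x, size = 1, prob = p, log = TRUE))

# Maximize over p in (0, 1); the MLE is the sample proportion, 0.7
optimize(loglik, interval = c(0.001, 0.999), maximum = TRUE)$maximum
```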
| null | CC BY-SA 4.0 | null | 2010-09-14T06:08:02.933 | 2019-03-24T14:27:29.807 | 2019-03-24T14:27:29.807 | null | null | null |
2648 | 1 | null | null | 7 | 420 | Imagine
- having a dependent variable $Y$ that is a proportion (i.e., the proportion of observations made at the given time point that satisfy a condition, where each time point involves 50 to 250 observations)
- $Y$ is measured at a series of time time points $X$, where $X = 1, 2, 3, ...$, typically to around 400.
- At initial time points, $Y$ typically equals zero or close to zero
- After an extended period of time $Y$ typically equals one or close to one
- At some point in between a transition occurs where values of $Y$ increase
- Throughout there is considerable time point to time point variability and given that Y is a proportion, the distribution of errors is not normal. Note also that the values of zero and one are common.
Properties of the data vary across studies, such as:
- the initial value of $Y$
- the time point when the value of $Y$ starts to increase
- the duration of transition from values mainly around 0 to mainly around 1
Questions
- What would be a good modelling approach to such data?
- How could the onset of the transition from values close to zero to values close to one be detected, especially given the non-normal errors?
| Predicting proportions from time with a discontinuity | CC BY-SA 2.5 | null | 2010-09-14T08:43:15.420 | 2017-11-01T11:31:57.867 | 2017-11-01T11:31:57.867 | 28666 | 183 | [
"time-series",
"proportion",
"change-point",
"sigmoid-curve"
] |
2649 | 2 | null | 2641 | 155 | null | I'll try and minimise the mathematics in my explanation as there are some good mathematical explanations already.
As Robin Girard comments, the difference between probability and likelihood is closely related to the [difference between probability and statistics](https://stats.stackexchange.com/questions/665/whats-the-difference-between-probability-and-statistics/675). In a sense probability and statistics concern themselves with problems that are opposite or inverse to one another.
Consider a coin toss. (My answer will be similar to [Example 1 on Wikipedia](http://en.wikipedia.org/wiki/Likelihood_function#Example_1).) If we know the coin is fair ($p=0.5$) a typical probability question is: What is the probability of getting two heads in a row. The answer is $P(HH) = P(H)\times P(H) = 0.5\times0.5 = 0.25$.
A typical statistical question is: Is the coin fair? To answer this we need to ask: To what extent does our sample support our hypothesis that $P(H) = P(T) = 0.5$?
The first point to note is that the direction of the question has reversed. In probability we start with an assumed parameter ($P(head)$) and estimate the probability of a given sample (two heads in a row). In statistics we start with the observation (two heads in a row) and make INFERENCE about our parameter ($p = P(H) = 1- P(T) = 1 - q$).
[Example 1 on Wikipedia](http://en.wikipedia.org/wiki/File:LikelihoodFunctionAfterHH.png) shows us that the maximum likelihood estimate of $P(H)$ after 2 heads in a row is $p_{MLE} = 1$. But the data in no way rule out the true parameter value $p(H) = 0.5$ (let's not concern ourselves with the details at the moment). Indeed only very small values of $p(H)$, and particularly $p(H)=0$, can be reasonably eliminated after $n = 2$ (two throws of the coin). After the [third throw](http://en.wikipedia.org/wiki/File:LikelihoodFunctionAfterHHT.png) comes up tails, we can now eliminate the possibility that $P(H) = 1.0$ (i.e. it is not a two-headed coin), but most values in between can be reasonably supported by the data. (An exact binomial 95% confidence interval for $p(H)$ is 0.094 to 0.992.)
After 100 coin tosses and (say) 70 heads, we now have a reasonable basis for the suspicion that the coin is not in fact fair. An exact 95% CI on $p(H)$ is now 0.600 to 0.787 and the probability of observing a result as extreme as 70 or more heads (or tails) from 100 tosses given $p(H) = 0.5$ is 0.0000785.
Although I have not explicitly used likelihood calculations, this example captures the concept of likelihood: likelihood is a measure of the extent to which a sample provides support for particular values of a parameter in a parametric model.
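For reference, the numbers quoted above can be reproduced in R with the binomial likelihood and exact binomial tests (a quick sketch):
```
# likelihood of p after two heads in two tosses (maximised at p = 1)
p <- seq(0, 1, by = 0.01)
plot(p, dbinom(2, size = 2, prob = p), type = "l", ylab = "likelihood")

binom.test(2, 3)$conf.int            # exact 95% CI after HHT: ~0.094 to 0.992
binom.test(70, 100)$conf.int         # exact 95% CI after 70/100 heads: ~0.600 to 0.787
binom.test(70, 100, p = 0.5)$p.value # P(result as extreme as 70+ heads or tails) ~ 0.0000785
```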
| null | CC BY-SA 4.0 | null | 2010-09-14T08:45:32.590 | 2020-07-28T09:27:25.707 | 2020-07-28T09:27:25.707 | 125268 | 521 | null |
2650 | 1 | 2651 | null | 2 | 1826 | I'm working with a panel dataset. I've used several models,
homogeneous (fixed effects, pooled OLS, and Driscoll and Kraay) and
heterogeneous (Swamy random coefficients), and would like to do a
post-estimation check to select the model that best fits my regression.
Is there any method or command that may allow me to do this?
Any hint will be highly appreciated.
| Panel data and selection models issue | CC BY-SA 4.0 | null | 2010-09-14T09:05:22.790 | 2019-07-14T08:29:01.283 | 2019-07-14T08:29:01.283 | 11887 | 1251 | [
"model-selection",
"econometrics",
"panel-data"
] |
2651 | 2 | null | 2650 | 3 | null | In my experience, model selection is not so much about seeking the model that best fits the regression. The first question to ask is: which model reflects my experimental design best? The second question would then be: which model reflects the covariance structure in my data best?
Only then can you start worrying about which model fits the data best. There are a number of approaches to evaluate and compare non-nested models. A naive approach would be to do a cross-validation and compare those results. One could, e.g., do a leave-one-out cross-validation, get the SS values for each run, and treat these SS values as sample data for model comparison.
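A rough sketch of that naive cross-validation idea in R, with `lm` standing in for whatever panel estimator you are actually comparing, and `dat`, `y`, `x1`, `x2` as placeholder names:
```
# leave-one-out cross-validation for two candidate specifications;
# the squared prediction errors are then compared as paired samples
loo_sqerr <- function(formula, dat) {
  sapply(seq_len(nrow(dat)), function(i) {
    fit <- lm(formula, data = dat[-i, ])
    (dat$y[i] - predict(fit, newdata = dat[i, ]))^2
  })
}
e1 <- loo_sqerr(y ~ x1, dat)
e2 <- loo_sqerr(y ~ x1 + x2, dat)
c(mean(e1), mean(e2))               # average out-of-sample squared error
wilcox.test(e1, e2, paired = TRUE)  # treat the per-run errors as sample data
```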
There's a whole set of literature on comparing non-nested models, but that's always within the same framework. There's little possibilities for comparing models originating from different frameworks other than working with some general loss function.
| null | CC BY-SA 2.5 | null | 2010-09-14T10:02:54.613 | 2010-09-14T10:02:54.613 | null | null | 1124 | null |
2652 | 2 | null | 2648 | 3 | null | Ignoring the "change point", your description suggests to me a (non-linear) mixed effects model of the following form:
```
g(E(Yi)) = Xi*beta + Zi*U
```
where the betas are fixed effects, the U's are random effects, and g(E(Yi)) is the (logit) link applied to a binomial mean.
This will deal with the longitudinal correlation of the data and the non-Gaussian distribution issues.
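For what it's worth, a minimal sketch of the mixed-model part in R with `lme4` (the variable names `y`, `n`, `time`, `subject`, `dat` are placeholders, and the change-point component is deliberately left out):
```
library(lme4)
# y = successes out of n trials at each time point, for each subject;
# the logit link handles the binomial mean, the random intercept handles
# the within-subject (longitudinal) correlation
fit <- glmer(cbind(y, n - y) ~ time + (1 | subject),
             family = binomial, data = dat)
summary(fit)
```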
This must be coupled with some form of change point model, probably a Hidden Markov Model (HMM).
[http://en.wikipedia.org/wiki/Hidden_markov_model](http://en.wikipedia.org/wiki/Hidden_markov_model)
It may be necessary to set up the model as a Directed Acyclic Graph (DAG) in MCMC format, or even to specify it fully in a Bayesian framework using software such as WinBUGS.
See:
[http://en.wikipedia.org/wiki/Directed_Acyclic_Graph](http://en.wikipedia.org/wiki/Directed_Acyclic_Graph)
[http://en.wikipedia.org/wiki/MCMC](http://en.wikipedia.org/wiki/MCMC)
[http://en.wikipedia.org/wiki/WinBUGS](http://en.wikipedia.org/wiki/WinBUGS)
| null | CC BY-SA 2.5 | null | 2010-09-14T11:59:07.703 | 2010-09-14T12:07:16.107 | 2010-09-14T12:07:16.107 | 521 | 521 | null |
2653 | 2 | null | 2579 | 4 | null | I think you use Stata, given your other post about [Panel data and selection models issue](https://stats.stackexchange.com/questions/2650/panel-data-and-selection-models-issue). Did you look at the following paper, [From the help desk: Swamy’s random-coefficients model](http://www.stata-journal.com/sjpdf.html?articlenum=st0046) from the Stata Journal (2003 3(3))? It seems that the command `xtrchh2` (available through `findit xtrchh` in Stata command line) includes an option about time, but I'm afraid it only allow to estimate the panel-specific coefficients. Looking around, I only found this article, [Estimation and testing of fixed-effect panel-data systems](http://www.stata-journal.com/article.html?article=st0084) (SJ 2005 5(2)), but it doesn't seem to address your question.
So maybe it is better to use the [xtreg](http://www.stata.com/help.cgi?xtreg) command directly.
If you have more than one random coefficient, then it may be better to use [gllamm](http://www.gllamm.org/).
Otherwise, I would suggest trying the [plm](http://cran.r-project.org/web/packages/plm/index.html) R package (it has a lot of dependencies, but it mainly relies on the `nlme` and `survival` packages). The `effect` argument passed to `plm()` seems to return individual, time, or both (for balanced designs) kinds of effects; there's also a function named `plmtest()`. I'm not a specialist in econometrics (I only used it for clinical trials in the past), but quoting the online help, it seems you will be able to get fixed effects for your time covariate (expressed as deviations from the overall mean or as deviations from the first value of the index):
```
library(plm)
data("Grunfeld", package = "plm")
gi <- plm(inv ~ value + capital, data = Grunfeld,
model = "within", effect = "twoways")
summary(gi)
fixef(gi,effect = "time")
```
where the data looks like (or see the plot below to get a rough idea):
```
firm year inv value capital
1 1 1935 317.6 3078.5 2.8
2 1 1936 391.8 4661.7 52.6
3 1 1937 410.6 5387.1 156.9
...
198 10 1952 6.00 74.42 9.93
199 10 1953 6.53 63.51 11.68
200 10 1954 5.12 58.12 14.33
```
For more information, check the accompanying vignette or this paper, [Panel Data Econometrics in R: The plm Package](http://www.jstatsoft.org/v27/i02/paper), published in the JSS (2008 27(2)).

| null | CC BY-SA 2.5 | null | 2010-09-14T13:02:36.380 | 2010-09-14T13:28:24.557 | 2017-04-13T12:44:45.640 | -1 | 930 | null |
2654 | 2 | null | 2648 | 4 | null | Sounds to me that Y(X) is a [sigmiodal](http://en.wikipedia.org/wiki/Sigmoid_function) process. Thus logistic regression should be suitable for this data. If you model this in R with:
```
glm(Y~X,family=binomial)
```
you will find that the "sharpness" of the transition is determined by the magnitude of the X coefficient, and the point of transition (technically the mid-point) is at the ratio of the intercept coefficient to the X coefficient times -1. I made an image to illustrate this but cannot seem to upload it for some reason.
| null | CC BY-SA 2.5 | null | 2010-09-14T13:05:45.937 | 2010-09-14T13:05:45.937 | null | null | 229 | null |
2657 | 2 | null | 2628 | 2 | null | If you consider whatever it is that you are measuring to be a random process, then yes, statistical tests are relevant. Take, for example, flipping a coin 10 times to see if it is fair. You get 6 heads and 4 tails - what do you conclude?
| null | CC BY-SA 2.5 | null | 2010-09-14T14:58:42.770 | 2010-09-14T14:58:42.770 | null | null | 229 | null |
2658 | 2 | null | 2648 | 2 | null | James' approach looks good: each observation, according to your description, might have a Binomial(n[i], p[i]) distribution where n[i] is known and--to be fully general--p[i] is a completely unknown function that rises from near 0 to near 1 as i increases. A logistic regression (GLM with binomial response and logistic link) against X[i]==i alone as the explanatory variable might even work. If it's a poor fit, you can introduce additional terms, such as higher powers or (better yet, given the nonparametric spirit) splines. This readily allows for incorporating any covariates into the model, too.
In effect, what appears to be an abrupt change in the response might really just be a natural linear (or nearly linear) progression of logit(p) on which is superimposed Binomial variability. It is this possibility that leads me slightly away from the direction indicated by Thylacoleo, whose approach clearly is valid and likely to be effective. I'm just suspecting (hoping?) that your situation might be amenable to this somewhat simpler analysis.
A complicating possibility concerns the possible autocorrelation of the responses, but that would need to be investigated only if the logistic residuals look strongly over- or under-dispersed.
As a matter of EDA, you could smooth successive observations in a natural way and plot their logits against i. For instance, to smooth observations y[i] and y[i+1] you would construct (n[i]*y[i] + n[i+1]*y[i+1])/(n[i] + n[i+1]), effectively pooling two successive batches; longer smoothing windows can be constructed the same way. (This would automatically cancel out any negative short-term temporal correlation, too.) The fit to the smooth wouldn't be quite right--it would be less steep than appropriate--but it would suggest choices for the general form of the covariates (i.e., functions of i) to use in the regression.
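A hedged R sketch of that smoothing (here `y` and `n` are the vectors of observed proportions and batch sizes; the window width `w` is arbitrary):
```
# pooled proportion over a sliding window of w successive batches,
# plotted on the logit scale against the time index
w <- 5
k <- length(y) - w + 1
pooled <- sapply(seq_len(k), function(i) {
  idx <- i:(i + w - 1)
  sum(n[idx] * y[idx]) / sum(n[idx])
})
# windows that are entirely 0 or 1 give infinite logits and are simply not drawn
plot(seq_len(k) + (w - 1) / 2, qlogis(pooled),
     xlab = "time index", ylab = "logit of pooled proportion")
```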
This is, of course, only one of many possible models. For example, the observations at each time point might be a binomial mixture, which would allow both for overdispersion and another way of getting nonlinear fits on the logit scale.
| null | CC BY-SA 2.5 | null | 2010-09-14T15:28:33.867 | 2010-09-14T15:28:33.867 | null | null | 919 | null |
2659 | 2 | null | 2641 | 203 | null | This is the kind of question that just about everybody is going to answer and I would expect all the answers to be good. But you're a mathematician, Douglas, so let me offer a mathematical reply.
A statistical model has to connect two distinct conceptual entities: data, which are elements $x$ of some set (such as a vector space), and a possible quantitative model of the data behavior. Models are usually represented by points $\theta$ on a finite dimensional manifold, a manifold with boundary, or a function space (the latter is termed a "non-parametric" problem).
The data $x$ are connected to the possible models $\theta$ by means of a function $\Lambda(x, \theta)$. For any given $\theta$, $\Lambda(x, \theta)$ is intended to be the probability (or probability density) of $x$. For any given $x$, on the other hand, $\Lambda(x, \theta)$ can be viewed as a function of $\theta$ and is usually assumed to have certain nice properties, such as being continuously second differentiable. The intention to view $\Lambda$ in this way and to invoke these assumptions is announced by calling $\Lambda$ the "likelihood."
It's quite like the distinction between variables and parameters in a differential equation: sometimes we want to study the solution (i.e., we focus on the variables as the argument) and sometimes we want to study how the solution varies with the parameters. The main distinction is that in statistics we rarely need to study the simultaneous variation of both sets of arguments; there is no statistical object that naturally corresponds to changing both the data $x$ and the model parameters $\theta$. That's why you hear more about this dichotomy than you would in analogous mathematical settings.
| null | CC BY-SA 3.0 | null | 2010-09-14T15:45:09.347 | 2019-07-30T12:19:12.380 | 2019-07-30T12:19:12.380 | 919 | 919 | null |
2660 | 2 | null | 2639 | 2 | null | This is two questions; (1) has good standard explanations available anywhere, so I'll address (2). You can compare distributions using a Kolmogorov-Smirnov statistic: it is the maximum difference between their empirical distribution functions. (The value of the edf at a number $x$ is the proportion of data less than or equal to $x$.) Its distribution depends on the base distribution and on the size of the sample distribution, so in this case about the only option is to find the distribution of the KS statistic through simulation: draw several thousand (at least) samples of size 100 from the reference distribution in a way that emulates how batches B, C, and D were originally created. Compute the KS statistic for each sample: this is your simulation, which you can summarize with a histogram. Also compute the KS statistic for B, C, and D. Where they are situated with respect to the simulation results informs you about how close B, C, and D are to the reference.
NB: This answer responds to the question as originally posed: "1) Explain the concept of CI in brief. 2) How do I use CI to find which one among `B, C or D` imitates or is close to the `A`?"
| null | CC BY-SA 2.5 | null | 2010-09-14T16:43:26.823 | 2010-09-28T19:19:41.043 | 2010-09-28T19:19:41.043 | 919 | 919 | null |
2661 | 1 | 2665 | null | 7 | 18082 | I am doing some basic fitting of data and exploring different fits. I understand that the residual is the difference between the sample and the estimated function value. The norm of the residuals is a measure of the deviation between the correlation and the data. A lower norm signifies a better fit.
Suppose a cubic fit has a norm of residuals of 0.85655 and a linear fit has a norm of residuals of 0.89182. What if the norms are 0.17113 and 0.24916? Is the difference between these two significant? Is a norm of residual less than 1 considered good? If not what is generally regarded as "acceptable" norm of residual.
| Difference between Norm of Residuals and what is a "good" Norm of Residual | CC BY-SA 2.5 | null | 2010-09-14T17:18:10.680 | 2010-10-14T17:12:45.643 | 2010-10-14T17:12:45.643 | 930 | 559 | [
"regression",
"residuals"
] |
2663 | 2 | null | 2628 | 19 | null | Traditionally, statistical inference is taught in the context of probability samples and the nature of sampling error. This model is the basis for the test of significance. However, there are other ways to model systematic departures from chance and it turns out that our parametric (sampling based) tests tend to be good approximations of these alternatives.
Parametric tests of hypotheses rely on sampling theory to produce estimates of likely error. If a sample of a given size is taken from a population, knowledge of the systematic nature of sampling makes testing and confidence intervals meaningful. With a population, sampling theory is simply not relevant and tests are not meaningful in the traditional sense. Inference is useless; there is nothing to infer to, there is just the thing...the parameter itself.
Some get around this by appealing to super-populations that the current census represents. I find these appeals unconvincing--parametric tests are premised on probability sampling and its characteristics. A population at a given time may be a sample of a larger population over time and place. However, I don't see any way that one could legitimately argue that this is a random (or, more generally, any form of a probability) sample. Without a probability sample, sampling theory and the traditional logic of testing simply do not apply. You may just as well test on the basis of a convenience sample.
Clearly, to accept testing when using a population, we need to dispense with the basis of those tests in sampling procedures. One way to do this is to recognize the close connection between our sample-theoretic tests--such as t, Z, and F--and randomization procedures. Randomization tests are based on the sample at hand. If I collect data on the income of males and females, the probability model and the basis for our estimates of error are repeated random allocations of the actual data values. I could compare observed differences across groups to a distribution based on this randomization. (We do this all the time in experiments, by the way, where the random sampling from a population model is rarely appropriate.)
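A minimal R sketch of such a randomization test for the income example (`income` and `sex` are placeholder vectors of equal length; the number of re-allocations is arbitrary):
```
obs <- diff(tapply(income, sex, mean))   # observed difference in group means
perm <- replicate(10000, {
  s <- sample(sex)                       # random re-allocation of the labels
  diff(tapply(income, s, mean))
})
mean(abs(perm) >= abs(obs))              # two-sided randomization p-value
```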
Now, it turns out that sample-theoretic tests are often good approximations of randomization tests. So, ultimately, I think tests on populations are useful and meaningful within this framework and can help to distinguish systematic from chance variation--just as with sample-based tests. The logic used to get there is a little different, but it doesn't have much effect on the practical meaning and use of tests. Of course, it might be better to just use randomization and permutation tests directly, given that they are easily available with all our modern computing power.
| null | CC BY-SA 2.5 | null | 2010-09-14T18:15:20.813 | 2010-09-14T18:15:20.813 | null | null | 485 | null |
2664 | 2 | null | 2615 | 2 | null | As kwak has pointed out, my question was answered on another forum:
[http://www.or-exchange.com/questions/695/analytical-solutions-to-limits-of-correlation-stress-testing](http://www.or-exchange.com/questions/695/analytical-solutions-to-limits-of-correlation-stress-testing)
I did several quick calculations and the suggested analytical solution is consistent with the numerical ones. I still need to check the proof.
| null | CC BY-SA 2.5 | null | 2010-09-14T19:18:26.480 | 2010-09-14T19:18:26.480 | null | null | 1250 | null |
2665 | 2 | null | 2661 | 11 | null | So, I would recommend using a standard method for comparing nested models. In your case, you consider two alternative models, the cubic fit being the more "complex" one. An F- or $\chi^2$-test tells you whether the residual sum of squares or deviance decreases significantly when you add further terms. It is very much like comparing a model including only the intercept (in this case, you have residual variance only) vs. another one which includes one meaningful predictor: does this added predictor account for a sufficient part of the variance in the response? In your case, it amounts to saying: modeling a cubic relationship between X and Y decreases the unexplained variance (equivalently, the $R^2$ will increase), and thus provides a better fit to the data compared to a linear fit.
It is often used as a test of linearity between the response variable and the predictor, and this is the reason why Frank Harrell advocates the use of [restricted cubic splines](http://lib.stat.cmu.edu/S/Harrell/help/Design/html/Design.trans.html) instead of assuming a strict linear relationship between Y and the continuous Xs (e.g. age).
The following example comes from a book I was reading some months ago ([High-dimensional data analysis in cancer research](http://www.springer.com/biomed/cancer/book/978-0-387-69763-5), Chap. 3, p. 45), but it may well serve as an illustration. The idea is just to fit different kinds of models to a simulated data set, which clearly highlights a non-linear relationship between the response variable and the predictor. The true generative model is shown in black. The other colors are for different models (restricted cubic spline, B-spline close to yours, and CV smoothed spline).
```
library(rms)
library(splines)
set.seed(101)
f <- function(x) sin(sqrt(2*pi*x))
n <- 1000
x <- runif(n, 0, 2*pi)
sigma <- rnorm(n, 0, 0.25)
y <- f(x) + sigma
plot(x, y, cex=.4)
curve(f, 0, 6, lty=2, add=TRUE)
# linear fit
lm00 <- lm(y~x)
# restricted cubic spline, 3 knots (2 Df)
lm0 <- lm(y~rcs(x,3))
lines(seq(0,6,length=1000),
predict(lm0,data.frame(x=seq(0,6,length=1000))),
col="red")
# use B-spline and a single knot at x=1.13 (4 Df)
lm1 <- lm(y~bs(x, knots=1.13))
lines(seq(0,6,length=1000),
predict(lm1,data.frame(x=seq(0,6,length=1000))),
col="green")
# cross-validated smoothed spline (approx. 20 Df)
xy.spl <- smooth.spline(x, y, cv=TRUE)
lines(xy.spl, col="blue")
legend("bottomleft", c("f(x)","RCS {rms}","BS {splines}","SS {stats}"),
col=1:4, lty=c(2,rep(1,3)),bty="n", cex=.6)
```

Now, suppose you want to compare the linear fit (`lm00`) and the model relying on a B-spline (`lm1`): you just have to do an F-test to see that the latter provides a better fit:
```
> anova(lm00, lm1)
Analysis of Variance Table
Model 1: y ~ x
Model 2: y ~ bs(x, knots = 1.13)
Res.Df RSS Df Sum of Sq F Pr(>F)
1 998 309.248
2 995 63.926 3 245.32 1272.8 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Likewise, it is quite usual to compare [GLM](http://en.wikipedia.org/wiki/Generalized_linear_model) with [GAM](http://en.wikipedia.org/wiki/Generalized_additive_model) based on the results of a $\chi^2$-test.
| null | CC BY-SA 2.5 | null | 2010-09-14T20:21:34.290 | 2010-10-08T07:22:07.983 | 2010-10-08T07:22:07.983 | 930 | 930 | null |
2667 | 1 | 2674 | null | 7 | 786 | When given samples of a discrete random variable, the entropy of the distribution may be estimated by $- \sum \hat{P_i} \log{\hat{P_i}}$, where $\hat{P_i}$ is the sample estimate of the frequency of the $i$th value. (this is up to a constant determined by the base of the log.) This estimate should not be applied to observations from a continuous distribution, at least naively, because it would yield a value which depends only on the sample size.
[Beirlant et al](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.87.5281) describe a number of approaches for the continuous problem, including estimates based on empirical CDF, nearest neighbor distances and the $m$-spacing estimate, which is given by $$\frac{1}{n}\sum_{i=1}^{n-m}\log{(\frac{n}{m}(X_{(i+m)} - X_{(i)}))}$$, where $X_{(i)}$ is the $i$th order statistic of the sample, and $m$ varies in a certain way with $n$. It is not clear how this estimate is to be computed in the presence of ties, i.e. it does not appear to be applicable to discrete distributions. (a naive correction for ties (drop terms which have $\log{0}$) appears to give an estimator which does not depend on the relative frequency of the classes, only their values, which seems wrong.)
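For concreteness, here is a naive R transcription of the $m$-spacing formula for a tie-free continuous sample (it does not resolve the ties issue raised above):
```
mspacing_entropy <- function(x, m) {
  n  <- length(x)
  xs <- sort(x)
  spacings <- xs[(m + 1):n] - xs[1:(n - m)]   # X_(i+m) - X_(i)
  sum(log(n / m * spacings)) / n
}
# e.g. mspacing_entropy(rnorm(10000), m = 50) should land in the neighbourhood of
# 0.5 * log(2 * pi * exp(1)) ~ 1.42, the differential entropy of a standard normal
```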
The question: is there an 'all-purpose' estimator which can deal with both discrete and continuous (or even mixed) distributions?
| All-Purpose Sample Entropy | CC BY-SA 2.5 | null | 2010-09-14T21:43:41.883 | 2010-09-15T09:34:46.873 | null | null | 795 | [
"discrete-data",
"continuous-data",
"entropy"
] |
2668 | 2 | null | 665 | 8 | null | Probability is the embrace of uncertainty, while statistics is an empirical, ravenous pursuit of the truth (damned liars excluded, of course).
| null | CC BY-SA 2.5 | null | 2010-09-14T21:56:16.450 | 2010-09-14T21:56:16.450 | null | null | null | null |
2670 | 1 | null | null | 4 | 5839 | There are 100 people in a room.
What are the odds at least 1 person is born in January?
What is the best way to calculate this?
I used Binomial Distribution.
$E(x) = np = 100 \times \frac{1}{12} = 8.3$
$std.dev. = \sqrt{npq} = 2.76$
$Z(\text{1 person}) = (1 - 8.3) / 2.76 = -2.64$
$p = P(Z<-2.64) = .004$, or $0.4\%$ odds that less than 1 person is born in Jan
Can a confidence interval calc apply here? If so, how?
(With 95% certainty, the number of people born in Jan is b/w x & y)
| Odds at least 1 person is born in January? | CC BY-SA 2.5 | null | 2010-09-15T00:45:11.290 | 2022-12-06T01:00:14.010 | 2010-09-16T12:29:04.900 | null | 1279 | [
"probability",
"confidence-interval",
"binomial-distribution"
] |
2671 | 2 | null | 2670 | 12 | null | Do you mean [odds or probability](http://en.wikipedia.org/wiki/Odds#Gambling_odds_versus_probabilities)?
$$p(\text{at least 1 person born in January}) = 1 - p(\text{no one born in January})$$
Assuming Independence:
\begin{align}
p(\text{no one born in January}) &= P(\text{person 1 not born in Jan}) \\
&\cdot P(\text{person 2 not born in Jan})\\
&\cdot \dotsm P(\text{person 100 not born in Jan})
\end{align}
Assume $P(\text{not born in Jan}) = \frac{365-31}{365}$
$$p(\text{no one born in January}) = \left(\frac{334}{365}\right)^{100} = 0.00013975$$
So, $p(\text{at least 1 person born in January}) = 0.99986025$
This is a binomial calculation.
A few points.
- Your question is about probability, not statistics, so it is not meaningful to ask for a confidence interval on a probability calculation. A confidence interval is a measure of uncertainty about a parameter estimate. By framing the question as you have, you enable a direct calculation of the parameter value (p = 334/365).
- Notwithstanding comment (1), you have attempted to use a normal approximation to the binomial distribution for your calculation. This will produce a serious error when p is close to 0 or 1 (as here) and is not necessary, as exact binomial solutions are trivial in this case.
- Your calculation of the mean or expected value (and Std. dev.) of the binomial distribution is correct. We can use this and the binomial distribution to estimate the probability of seeing a range of results. I will use p(born Jan) = 31/365 ~ 0.085.
We expect to see 8.5 people born in January in a class of 100 people. Of course this is impossible. But the probability of seeing 9 people born in January is:
$$\begin{align}
p(K = k|n = 100)&= \binom{n}{k} * p^k*(1-p)^{(n-k)} \\\\
&= \binom{100}{9} * (0.085)^9(1 - 0.085)^{(100-9)} \\\\
&= 1.902*10^{12}*(2.316*10^{-10})*(0.00030855) \\\\
&= 0.13591726 \\\\
\end{align}$$
Where $\binom{n}{k}$ is the [binomial coefficient](http://en.wikipedia.org/wiki/Binomial_coefficient).
Similarly the probability of seeing 8 people born in January is: 0.14315621. These two outcomes cover nearly 28% of the probability.
We can construct an approximately symmetrical bound on the mean of 0.085 by using the cumulative binomial distribution functions. I'll use one in Stata, but I think Excel has them. It turns out the probability of seeing 3 or fewer January birthdays is 2.53% and the probability of seeing 15 or more January birthdays is 2.19%. So the probability of seeing 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 or 14 birthdays is 95.3% which is a reasonable estimate of a 95% "coverage probability", although [coverage probability](http://en.wikipedia.org/wiki/Coverage_probability) has a slightly different meaning in statistics.
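These figures can be reproduced with R's binomial functions (a quick check, using p = 31/365):
```
p <- 31/365
1 - (1 - p)^100          # P(at least one January birthday) ~ 0.99986
dbinom(9, 100, p)        # P(exactly 9)  ~ 0.136
dbinom(8, 100, p)        # P(exactly 8)  ~ 0.143
pbinom(3, 100, p)        # P(3 or fewer) ~ 0.025
1 - pbinom(14, 100, p)   # P(15 or more) ~ 0.022
```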
| null | CC BY-SA 4.0 | null | 2010-09-15T01:07:21.113 | 2022-12-06T01:00:14.010 | 2022-12-06T01:00:14.010 | 11887 | 521 | null |
2672 | 2 | null | 2670 | 6 | null | I will shamelessly propose using simulation (in R) to answer your questions.
`rbinom(n = 100000, size = 100, prob = 1/12)` draws 100 trials from the binomial distribution 100,000 times and returns a vector of the number of 'successes' (i.e., January births, using the naive 1/12 probability).
You should be able to approximate whatever you want from the results.
| null | CC BY-SA 2.5 | null | 2010-09-15T06:30:41.257 | 2010-09-15T06:30:41.257 | null | null | 71 | null |
2673 | 2 | null | 2670 | 8 | null | Just to add a few comments to Thylacoleo's excellent answer. From the CDC’s National Vital Statistics System [recent report](http://www.cdc.gov/nchs/data/nvsr/nvsr54/nvsr54_02.pdf), we can estimate the probability of being born in January with a bit more precision.
Month
From page 50 of the report, we get the probability of being born in a certain month:
```
#R code
#Month totals - page 50
m_totals = c(329803, 307248, 336920, 330106, 346754, 337425,
364226, 360103, 359644, 354048, 320094, 343579)
m_probs = m_totals/sum(m_totals)
```
This gives the probability of being born in January as 0.0806, which is a bit lower than 31/365 = 0.085. For info, Feb and Nov have probabilities around 0.076(ish).
Day of the week
For fun (and since it was on the next page), from page 51 of the report we get the probability of being born on a certain day of the week:
```
#Starting on a Sunday - page 51
> d_totals = c(7563, 11733, 13001, 12598, 12514, 12396, 8605)
> d_probs = d_totals/sum(d_totals)
> round(d_probs,4)
[1] 0.0965 0.1496 0.1658 0.1607 0.1596 0.1581 0.1097
```
Looks like Sunday is not a popular day to be born; Tuesday is the most common!
Notes
- This page pointed me in the direction of the CDC report.
| null | CC BY-SA 2.5 | null | 2010-09-15T06:49:48.500 | 2010-09-15T06:49:48.500 | null | null | 8 | null |
2674 | 2 | null | 2667 | 6 | null | Entropy is entropy with respect to a measure
As noticed in the answer to this question [https://mathoverflow.net/questions/33088/entropy-of-a-general-prob-measure/33090#33090](https://mathoverflow.net/questions/33088/entropy-of-a-general-prob-measure/33090#33090), entropy is only defined with respect to a given measure. For example, discrete entropy is entropy with respect to the counting measure.
Sample entropy should be an estimate of a predefined entropy.
I think the idea of sample entropy can be generalised to any type of entropy but you need to know which entropy you are trying to estimate before estimating it.
Example of entropy with respect to counting + Lebesgue measure
For example, if you are trying to estimate the entropy with respect to the sum of the Lebesgue measure on $[0,1]$ and the counting measure on $\{0,1\}$, a good estimate is certainly (my intuition) a sum of the two estimates you mention in your question (with $i=0,1$ in the first sum).
| null | CC BY-SA 2.5 | null | 2010-09-15T09:34:46.873 | 2010-09-15T09:34:46.873 | 2017-04-13T12:58:32.177 | -1 | 223 | null |
2675 | 1 | 2701 | null | 3 | 1801 | We want to work out a value for "Average time in service" for our users.
However, as we have many subscribers still active, how can we do that? If we just look at average time of deactivated subscribers, that won't show the picture.
Is there some "correct" way to do this? We know our churn rate by month, so maybe we need to work out from those still on the service, when they're likely to leave? and use that?
| How to calculate average time in service | CC BY-SA 3.0 | null | 2010-09-15T10:49:45.213 | 2011-05-24T15:30:25.013 | 2011-05-24T15:30:25.013 | 919 | 12517 | [
"survival"
] |
2678 | 2 | null | 2675 | 5 | null | Sounds like [Survival analysis](http://en.wikipedia.org/wiki/Time_to_event_analysis) to me. The still active subscribers have censored times.
| null | CC BY-SA 2.5 | null | 2010-09-15T11:50:29.130 | 2010-09-15T11:50:29.130 | null | null | 449 | null |
2679 | 1 | 2680 | null | 4 | 3549 | Car analogy:
Assume the traffic (number of cars per hour) on a road has a Poisson distribution, and the time between cars has the matching exponential distribution. If the chance of each car being red is independent from both time and the color of other cars, will the number of red cars per hour also have a Poisson distribution?
I strongly suspect so. Furthermore I expect E(red cars)=p(red) * E(cars), and because of the Poisson distribution σ(red cars) = p(red) * σ(cars). But how would I (dis)prove this?
| Is a random sample of a Poisson distribution also Poisson distributed? | CC BY-SA 2.5 | null | 2010-09-15T14:18:53.813 | 2010-09-18T05:07:02.720 | null | null | 1287 | [
"poisson-distribution"
] |
2680 | 2 | null | 2679 | 5 | null | The answer to your first question is yes. If the sum of two independent variables is Poisson, then the individual variables are also Poisson. See [Raikov's Theorem](http://en.wikipedia.org/wiki/Raikov%27s_theorem).
| null | CC BY-SA 2.5 | null | 2010-09-15T14:28:02.933 | 2010-09-15T18:15:34.653 | 2010-09-15T18:15:34.653 | 919 | null | null |
2681 | 2 | null | 2679 | 4 | null | What you have described is a standard and important result from the theory of stochastic processes. Pick up any good book in the library on those, flip to the "Poisson Processes" chapter, and look for "decomposition" or sometimes "thinning".
Two of my favorites are Introduction to Stochastic Processes by Cinlar and Essentials of Stochastic Processes by Durrett.
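If you want a quick numerical sanity check before reaching for the books, here is a small R sketch of the thinning result (the rate and the probability of red are arbitrary): the thinned counts have mean and variance both close to p·λ, as a Poisson variable should.
```
set.seed(7)
lambda <- 20; p <- 0.3
cars <- rpois(1e5, lambda)                  # cars per hour
red  <- rbinom(1e5, size = cars, prob = p)  # each car independently red with prob p
c(mean(red), var(red), p * lambda)          # mean ~ variance ~ p * lambda, as for a Poisson
```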
| null | CC BY-SA 2.5 | null | 2010-09-15T14:34:29.747 | 2010-09-15T14:34:29.747 | null | null | null | null |
2682 | 2 | null | 2679 | 1 | null | I think you need additional assumptions to draw this conclusion.
Imagine this: the cars on the road are originally unpainted, but just before you observe them a demon decides what color they will become: red or some other color (of which there could be many). The demon does not have a clock (to ensure that whatever he does is independent of time), but he does have a fair coin. Ordinarily he flips the coin to determine whether the car will be red or some other color. However, his can of red paint can color ten cars and needs to be used as quickly as possible. So, if the coin decides red, the demon just goes ahead and paints the next ten cars red. Once the paint is exhausted, he goes back to flipping the coin. Other colors are painted in the same fashion.
Obviously the car count per unit time is unaffected by this procedure, but I hope it's just as obvious that the count of red cars per unit time is not a Poisson process, because it will be overdispersed: there will be too many high counts, due to the temporal clumping of red cars, and too many low counts, due to the temporal clumping of non-red cars.
It could be that this scenario violates your sense of "the chance of red being independent of the color of the other cars," but it's difficult to know exactly what this statement means, since it's open to various interpretations. In the present case, the probability that the next car to be observed is red, given that the previous car was non-red ("the other" colors), is independent of the color of the previous car.
| null | CC BY-SA 2.5 | null | 2010-09-15T14:39:11.517 | 2010-09-15T14:39:11.517 | null | null | 919 | null |
2684 | 1 | 2740 | null | 5 | 17763 | I am using G*Power to calculate the a priori power for a 2-group repeated measures ANOVA with a continuous dependent variable and 3 time points. I think that higher values for the "correlation among rep measures" should indicate a better test (i.e., high test-retest correlation) and thus yield better power and a lower total sample size; however, the opposite seems to be true:
Test family: F tests; Statistical test: ANOVA: repeated measures, between factors
Effect size f: 0.25
alpha error probability: 0.05
Power: 0.8
Number of groups: 2
Number of measurements: 3
Correlation among rep measures: 0.8 (a reasonable value for these neuropsychological assessments of cognitive function) -> total sample size: 112
Correlation among rep measures: 0.5 (a poor test would have a value this low) -> 86
What am I not understanding? Thanks!
Henry
| How does "correlation among repeated measures" work for repeated measures power analysis in g*power? | CC BY-SA 2.5 | null | 2010-09-15T16:40:39.463 | 2016-03-04T05:59:37.200 | 2010-09-16T12:27:41.330 | null | 1290 | [
"hypothesis-testing",
"anova"
] |
2685 | 2 | null | 1799 | 2 | null | I don't think that repeated measures is appropriate here - as I understand your protocol, the five temperatures (either hot or cold) are not repeated measures as they occur at different temperatures.
However, it does feel like you need some data reduction across the five levels in each temperature group. You might be well-served to consider some exploratory analysis to decide what the general shape of the relationship of pain perception to ordinal temperature input is across the hot and cold groups; then reduce that to a linear slope or exponential function; then model that single variable as your dependent variable with gender and A/B as factors.
Latent growth curve modeling might also be appropriate, but that is beyond me.
| null | CC BY-SA 2.5 | null | 2010-09-15T16:53:28.813 | 2010-09-15T16:53:28.813 | null | null | 1290 | null |
2686 | 1 | null | null | 9 | 335 | Help me here, please. Perhaps before even giving me an answer you may need to help me ask the question. I have never learned about time series analysis and do not know if that is indeed what I need. I have never learned about time smoothed averages and do not know if that is indeed what I need. My statistics background: I have 12 credits in biostatistics (multiple linear regression, multiple logistic regression, survival analysis, multi-factorial anova but never repeated measures anova).
So please look at my scenarios below. What are the buzzwords I should be searching for and can you suggest a resource to learn what I need to learn?
I want to look at several different data sets for totally different purposes but common to all of them is that there are dates as one variable. So a couple of examples spring to mind: clinical productivity over time (as in how many surgeries or how many office visits) or electric bill over time (as in money paid to electricity company per month).
For both of the above, the near-universal way to do it is to create a spreadsheet with month or quarter in one column and, in the other column, something such as the electricity payment or the number of patients seen in the clinic. However, counting per month leads to a lot of noise that has no meaning. For instance, if I usually pay the electricity bill on the 28th of every month but on one occasion I forget and only pay it 5 days later, on the 3rd of the next month, then one month will appear to have zero expense and the next month will show a ginormous expense. Since one has the actual dates of payment, why would one purposefully throw away the very granular data by boxing it into expenses by calendar month?
Similarly, if I am out of town for 6 days at a conference, then that month will appear to be very unproductive, and if those 6 days fell near the end of the month, the next month will be uncharacteristically busy since there will be a whole waiting list of people who wanted to see me but had to wait till I returned.
Then of course there are the obvious seasonal variations. Air conditioners use a lot of electricity so obviously one has to adjust for summer heat. Billions of children are referred to me for recurrent acute otitis media in the winter and hardly any in the summer and early fall. No child of school age gets scheduled for elective surgery in the first 6 weeks that schools return following the long summer vacation. Seasonality is just one independent variable that affects the dependent variable. There must be other independent variables some of which can be guessed and others that are not known.
A whole bunch of different issues crop up when looking at enrollment in a longstanding clinical study.
What branch of statistics lets us look at this over time by simply looking at events and their actual dates, without creating artificial boxes (months/quarters/years) that do not really exist?
I thought of making the weighted average count for any event. For instance number of patients seen this week is equal to 0.5*nr seen this week + 0.25*nr seen last week + 0.25*nr seen next week.
I want to learn more about this. What buzzwords should I be searching for?
| Understanding productivity or expenses over time without falling victim to stochastic interruptions | CC BY-SA 2.5 | null | 2010-09-15T18:42:29.020 | 2010-09-24T17:57:25.143 | null | null | 104 | [
"time-series",
"finance",
"count-data"
] |
2687 | 2 | null | 2684 | 1 | null | Higher correlation among repeated measures means that each repeat adds less information to the first measurement. If the correlation were 1, the repeats would add no extra information beyond the original measurement, and the sample size could be calculated ignoring the repeats. If the correlation were 0, each repeated measurement of a subject would add the same amount of information as an extra subject, so the sample size could be calculated treating every observation as independent.
| null | CC BY-SA 2.5 | null | 2010-09-15T19:03:14.480 | 2010-09-15T19:03:14.480 | null | null | 449 | null |
2688 | 1 | null | null | 11 | 2399 | I have a dataset where I need to do linear regression. Unfortunately there is a problem with heteroscedasticity. I've rerun the analysis using robust regression with the HC3 estimator for the variance and also done bootstrapping with the bootcov function in Hmisc for R. The results are quite close. What is generally recommended?
| Whether to use robust linear regression or bootstrapping when there is heteroscedasticity? | CC BY-SA 3.0 | null | 2010-09-15T19:20:45.563 | 2011-09-28T00:02:30.217 | 2011-09-28T00:02:30.217 | 183 | 1291 | [
"regression",
"bootstrap",
"heteroscedasticity"
] |
2689 | 1 | 2707 | null | 7 | 3183 | I'm doing this only for learning purposes. I've no intention of reverse-engineering the methods of IMDB.
I asked myself: if I owned IMDB or a similar website, how would I compute the movie rating?
All I can think of is the arithmetic mean.
For the movie data provided below, the computation would be
>
(38591*10 + 27994*9 + 32732*8 + 17864*7 + 7361*6 + 2965*5 + 1562*4 + 1073*3 + 891*2 + 3401*1) / 134434 = 8.17055953

---
My rating is 8.17055953, which is close to IMDB's arithmetic mean.
- What's wrong with my score? Why is it not ideal (because IMDB didn't use it)?
- If you had to compute it, how would you have done it? What factors would you consider?
Note:
I'm not asking about the ranking mechanism for the Top/Bottom listing (that is shown here : [http://www.imdb.com/chart/top](http://www.imdb.com/chart/top)). This question is about how IMDb turns the arithmetic mean into the weighted mean. Take "Manos: The Hands of Fate" — Unweighted mean 2.1, weighted mean 1.5. (Or "The Godfather", another example of ballot stuffing.)
| How would YOU compute IMDB movie rating? | CC BY-SA 2.5 | null | 2010-09-15T19:54:28.173 | 2010-09-17T02:08:35.420 | 2010-09-15T22:39:07.443 | null | 851 | [
"rating"
] |
2690 | 2 | null | 2684 | 1 | null | I would guess the explanation is something like this: the higher the correlation between the time points, the lower the proportion of unshared variance. That is, the lower the proportion of variance that can contain the effect (i.e., the unshared variance). Given that within this unshared variance the ratio of effect to non-effect is the same (i.e., your effect size), the total amount of variance that contains the effect is smaller the higher the correlation between the time points. Hence, the higher the correlation, the larger the sample size needed to find the expected effect.
However, I think there is another issue in your question you have to think about. What does the effect size stand for? I am not a user of G*Power, but I guess it is the effect size for the interaction of group by time points. In this case my explanation should apply, but you should ask yourself if this is the effect you are interested in.
Furthermore, you shouldn't confuse test-retest reliability with the correlation between time points. Generally you want a high test-retest reliability, but in most cases you want a small correlation between time points and an effect between the time points (i.e., a rather low correlation).
Hope this is all clear; I'm currently in a hurry, but I can extend/clarify tomorrow.
| null | CC BY-SA 2.5 | null | 2010-09-15T20:02:18.693 | 2010-09-15T20:02:18.693 | null | null | 442 | null |
2691 | 1 | 140579 | null | 1313 | 894129 | In today's pattern recognition class my professor talked about PCA, eigenvectors and eigenvalues.
I understood the mathematics of it. If I'm asked to find eigenvalues etc. I'll do it correctly like a machine. But I didn't understand it. I didn't get the purpose of it. I didn't get the feel of it.
I strongly believe in the following quote:
>
You do not really understand something unless you can explain it to your grandmother. -- Albert Einstein
Well, I can't explain these concepts to a layman or grandma.
- Why PCA, eigenvectors & eigenvalues? What was the need for these concepts?
- How would you explain these to a layman?
| Making sense of principal component analysis, eigenvectors & eigenvalues | CC BY-SA 4.0 | null | 2010-09-15T20:05:55.993 | 2022-12-04T10:39:59.730 | 2019-09-27T13:25:06.037 | 919 | 851 | [
"pca",
"intuition",
"eigenvalues",
"faq"
] |
2692 | 2 | null | 2689 | 8 | null | Partial answer. See the help page entitled: [The vote average for film "X" should be Y! Why are you displaying another rating?](http://www.imdb.com/help/show_leaf?voteaverage)
In short, IMDb uses:
>
a complex voter weighting system to make sure
that the final rating is
representative of the general voting
population and not subject to over
influence from individuals who are not
regular participants in the poll.
Also note that:
>
In order to avoid leaving the scheme open to abuse, [IMDb does not] disclose the exact methods used.
| null | CC BY-SA 2.5 | null | 2010-09-15T20:42:28.683 | 2010-09-15T20:42:28.683 | null | null | 938 | null |
2693 | 2 | null | 2688 | 5 | null | In economics, the Eicker-White or "robust" standard errors are typically reported. Bootstrapping (unfortunately, I'd say) is less common. I'd say that the robust estimates are the standard version.
| null | CC BY-SA 2.5 | null | 2010-09-15T20:45:01.947 | 2010-09-15T20:45:01.947 | null | null | 401 | null |
2695 | 2 | null | 2691 | 56 | null | Alright, I'll give this a try. A few months back I dug through a good amount of literature to find an intuitive explanation I could explain to a non-statistician. I found the derivations that use Lagrange multipliers the most intuitive.
Let's say we have high dimension data - say 30 measurements made on an insect. The bugs have different genotypes and slightly different physical features in some of these dimensions, but with such high dimension data it's hard to tell which insects belong to which group.
PCA is a technique to reduce dimension by:
- Taking linear combinations of the original variables.
- Each linear combination explains the most variance in the data it can.
- Each linear combination is uncorrelated with the others
Or, in mathematical terms:
- For $Y_j = a_j' x$ (linear combination for jth component)
- For $k > j$, $V(Y_k) < V(Y_j)$ (first components explain more variation)
- $a_k' a_j = 0$ (orthogonality)
Finding linear combinations that satisfy these constraints leads us to eigenvalues. Why?
I recommend checking out the book An Introduction to Multivariate Data Analysis for the full derivation (p. 50), but the basic idea is successive optimization problems (maximizing variance), constrained such that a'a = 1 for the coefficients a (to prevent the case where the variance could be infinite) and constrained to make sure the coefficients are orthogonal.
This leads to optimization with Lagrange multipliers, which in turn reveals why eigenvalues are used. I am too lazy to type it out (sorry!) but [this PDF](https://web.archive.org/web/20140329080447/http://web.ipac.caltech.edu/staff/fmasci/home/statistics_refs/PrincipalComponentAnalysis.pdf) goes through the proof pretty well from this point.
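If code helps, here is a small R check (on the built-in `iris` measurements, chosen only for convenience) that the PCA loadings are the eigenvectors of the covariance matrix and the component variances are its eigenvalues:
```
X <- scale(iris[, 1:4], center = TRUE, scale = FALSE)  # centre the four measurements
e <- eigen(cov(X))
pc <- prcomp(X)
round(e$values, 4); round(pc$sdev^2, 4)                # eigenvalues = component variances
round(abs(e$vectors), 3); round(abs(pc$rotation), 3)   # same loadings, up to sign
```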
I would never try to explain this to my grandmother, but if I had to talk generally about dimension reduction techniques, I'd point to this trivial projection example (not PCA). Suppose you have a Calder mobile that is very complex. Some points in 3-d space are close to each other, others aren't. If we hung this mobile from the ceiling and shined light on it from one angle, we get a projection onto a lower dimension plane (a 2-d wall). Now, if this mobile is mainly wide in one direction, but skinny in the other direction, we can rotate it to get projections that differ in usefulness. Intuitively, a skinny shape in one dimension projected on a wall is less useful - all the shadows overlap and don't give us much information. However, if we rotate it so the light shines on the wide side, we get a better picture of the reduced dimension data - points are more spread out. This is often what we want. I think my grandmother could understand that :-)
| null | CC BY-SA 4.0 | null | 2010-09-15T21:07:58.990 | 2022-12-04T10:39:59.730 | 2022-12-04T10:39:59.730 | 362671 | 7 | null |
2696 | 2 | null | 1 | 10 | null | I'd recommend letting the subject expert specify the mean or mode of the prior but I'd feel free to adjust what they give as a scale. Most people are not very good at quantifying variance.
And I would definitely not let the expert determine the distribution family, in particular the tail thickness. For example, suppose you need a symmetric distribution for a prior. No one can specify their subjective belief so finely as to distinguish a normal distribution from, say, a Student-t distribution with 5 degrees of freedom. But in some contexts the t(5) prior is much more robust than the normal prior. In short, I think the choice of tail thickness is a technical statistical matter, not a matter of quantifying expert opinion.
| null | CC BY-SA 2.5 | null | 2010-09-15T21:08:26.077 | 2010-09-15T21:08:26.077 | null | null | 319 | null |
2697 | 2 | null | 2691 | 127 | null | Hmm, here goes for a completely non-mathematical take on PCA...
Imagine you have just opened a cider shop. You have 50 varieties of cider and you want to work out how to allocate them onto shelves, so that similar-tasting ciders are put on the same shelf. There are lots of different tastes and textures in cider - sweetness, tartness, bitterness, yeastiness, fruitiness, clarity, fizziness etc etc. So what you need to do to put the bottles into categories is answer two questions:
1) What qualities are most important for identifying groups of ciders? e.g. does classifying based on sweetness make it easier to cluster your ciders into similar-tasting groups than classifying based on fruitiness?
2) Can we reduce our list of variables by combining some of them? e.g. is there actually a variable that is some combination of "yeastiness and clarity and fizziness" and which makes a really good scale for classifying varieties?
This is essentially what PCA does. Principal components are variables that usefully explain variation in a data set - in this case, that usefully differentiate between groups. Each principal component is one of your original explanatory variables, or a combination of some of your original explanatory variables.
| null | CC BY-SA 2.5 | null | 2010-09-15T21:14:37.067 | 2010-09-15T21:14:37.067 | null | null | 266 | null |
2698 | 2 | null | 2691 | 161 | null | Let's do (2) first. PCA fits an ellipsoid to the data. An ellipsoid is a multidimensional generalization of distorted spherical shapes like cigars, pancakes, and eggs. These are all neatly described by the directions and lengths of their principal (semi-)axes, such as the axis of the cigar or egg or the plane of the pancake. No matter how the ellipsoid is turned, the eigenvectors point in those principal directions and the eigenvalues give you the lengths. The smallest eigenvalues correspond to the thinnest directions having the least variation, so ignoring them (which collapses them flat) loses relatively little information: that's PCA.
(1) Apart from simplification (above), we have needs for pithy description, visualization, and insight. Being able to reduce dimensions is a good thing: it makes it easier to describe the data and, if we're lucky to reduce them to three or less, lets us draw a picture. Sometimes we can even find useful ways to interpret the combinations of data represented by the coordinates in the picture, which can afford insight into the joint behavior of the variables.
---
The figure shows some clouds of $200$ points each, along with ellipsoids containing 50% of each cloud and axes aligned with the principal directions. In the first row the clouds have essentially one principal component, comprising 95% of all the variance: these are the cigar shapes. In the second row the clouds have essentially two principal components, one about twice the size of the other, together comprising 95% of all the variance: these are the pancake shapes. In the third row all three principal components are sizable: these are the egg shapes.

Any 3D point cloud that is "coherent" in the sense of not exhibiting clusters or tendrils or outliers will look like one of these. Any 3D point cloud at all--provided not all the points are coincident--can be described by one of these figures as an initial point of departure for identifying further clustering or patterning.
The intuition you develop from contemplating such configurations can be applied to higher dimensions, even though it is difficult or impossible to visualize those dimensions.
| null | CC BY-SA 3.0 | null | 2010-09-15T21:33:12.360 | 2014-11-11T16:53:10.420 | 2014-11-11T16:53:10.420 | 919 | 919 | null |
2699 | 2 | null | 2689 | 5 | null | >
Whats wrong with my score? Why is it not ideal (because IMDB didn't
use it)?
If the score was only for your use, then nothing is wrong with your calculation.
However, IMDB try to make it difficult for people to obviously influence the final score.
>
If you had to compute. How would you have done it? What factors you
would consider?
Here are a few factors that you could consider (but will be unable to check):
- The final score may be weighted according to how many votes have been cast.
- Votes may be weighted by a time variable. For example, votes cast last year are less important than votes cast today.
- Votes cast by users who have voted for other movies have more weight, i.e., a reputation coefficient.
- Perhaps they incorporate data from other sites.
| null | CC BY-SA 2.5 | null | 2010-09-15T21:39:44.023 | 2010-09-15T21:39:44.023 | null | null | 8 | null |
2700 | 2 | null | 2691 | 439 | null | The manuscript ["A tutorial on Principal Components Analysis" by Lindsay I Smith](https://ourarchive.otago.ac.nz/bitstream/handle/10523/7534/OUCS-2002-12.pdf?sequence=1&isAllowed=y) really helped me grok PCA. I think it's still too complex for explaining to your grandmother, but it's not bad. You should skip the first few bits on calculating eigens, etc. Jump down to the example in chapter 3 and look at the graphs.
I have some examples where I worked through some toy examples so I could understand PCA vs. OLS linear regression. I'll try to dig those up and post them as well.
edit:
You didn't really ask about the difference between Ordinary Least Squares (OLS) and PCA but since I dug up my notes I did a [blog post about it](http://www.cerebralmastication.com/2010/09/principal-component-analysis-pca-vs-ordinary-least-squares-ols-a-visual-explination/). The very short version is OLS of y ~ x minimizes error perpendicular to the independent axis like this (yellow lines are examples of two errors):

If you were to regress x ~ y (as opposed to y ~ x in the first example) it would minimize error like this:

and PCA effectively minimizes error orthogonal to the model itself, like so:

More importantly, as others have said, in a situation where you have a WHOLE BUNCH of independent variables, PCA helps you figure out which linear combinations of these variables matter the most. The examples above just help visualize what the first principal component looks like in a really simple case.
In my blog post I have the R code for creating the above graphs and for calculating the first principal component. It might be worth playing with to build your intuition around PCA. I tend to not really own something until I write code that reproduces it.
| null | CC BY-SA 4.0 | null | 2010-09-15T21:42:36.200 | 2019-01-06T11:54:14.150 | 2019-01-06T11:54:14.150 | -1 | 29 | null |
2701 | 2 | null | 2675 | 2 | null | onestop is right, you're looking to do survival analysis. In general, you can use a nonparametric [Kaplan-Meier estimator](http://en.wikipedia.org/wiki/Kaplan-Meier_estimator) to plot the survival curve and then derive the "average time in service". The area underneath the survival curve works if you don't have any censored data (i.e. subscribers who are still active). Michael Berry had a nice and clear explanation of this in a blog post for a business scenario similar to yours:
- Why is the area under the survival curve equal to the average tenure?
In your case, where you have censored data, the median time (0.5 quantile) is available as the "average time in service" -- as noted by @onestop.
Harvey Motulsky's book has a nice discussion of this:
- Survival Curves
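For completeness, here is a small sketch of what this looks like in R with the survival package. The data below are simulated; substitute your own tenure and cancellation indicator:

```r
# Sketch with simulated data: tenure = months of service,
# event = 1 if the subscriber cancelled, 0 if still active (censored).
library(survival)

set.seed(42)
tenure <- rexp(500, rate = 1 / 24)   # simulated tenures, mean about 24 months
event  <- rbinom(500, 1, 0.7)        # roughly 30% still active

fit <- survfit(Surv(tenure, event) ~ 1)   # Kaplan-Meier estimate
print(fit)                                # reports the median time in service
plot(fit, xlab = "Months of service", ylab = "Proportion still subscribed")
```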
| null | CC BY-SA 2.5 | null | 2010-09-16T00:20:19.813 | 2010-09-16T16:23:10.463 | 2010-09-16T16:23:10.463 | 251 | 251 | null |
2702 | 2 | null | 665 | 12 | null | Probability answers questions about what will happen; statistics answers questions about what did happen.
| null | CC BY-SA 2.5 | null | 2010-09-16T00:54:49.907 | 2010-09-16T00:54:49.907 | null | null | 9426 | null |
2703 | 2 | null | 2691 | 18 | null | I might be a bad person to answer this because I'm the proverbial grandmother who has had the concept explained to me and not much more, but here goes:
Suppose you have a population. A large portion of the population is dropping dead of heart attacks. You are trying to figure out what causes the heart attacks.
You have two pieces of data: height and weight.
Now, it's clear that there's SOME relationship between weight and heart attacks, but the correlation isn't really strong. There are some heavy people who have a lot of heart attacks, but some don't.
Now, you do a PCA, and it tells you that weight divided by height ('body mass') is a much more likely predictor of heart attacks than either weight or height alone, because, lo and behold, the "reality" is that it's body mass that causes the heart attacks.
Essentially, you do PCA because you are measuring a bunch of things and you don't really know whether those are the components that matter most, or whether there's some deeper underlying component that you didn't measure.
[Please feel free to edit this if it's completely off base. I really don't understand the concept any more deeply than this].
| null | CC BY-SA 3.0 | null | 2010-09-16T02:04:43.360 | 2013-06-20T10:05:32.863 | 2013-06-20T10:05:32.863 | 22047 | 239 | null |
2705 | 2 | null | 2684 | 2 | null |
- All else being equal, bigger correlations in a repeated measures design mean more power (see the sketch after this list)
- A larger correlation means that more of the variance in the dependent variable is explained by systematic effects of cases/individuals
- Prior research or previous datasets are useful for estimating the correlation. Test-retest reliability estimates for your measures are a starting point. Interestingly, I've found that greater changes in the means (i.e., the hypothesis of interest) are associated with slightly lower correlations than would otherwise be the case. Thus, as a very rough rule of thumb, you might want to reduce your estimate of the correlation a little from the published test-retest reliability estimates, perhaps by .1 or .2. You might also want to adjust the estimate if the time between your measurements differs from that used in the published test-retest studies (i.e., increasing it if your time frame is shorter, and decreasing it if it is longer).
- Yes, more reliable tests will generally lead to higher test-retest reliability and thus stronger correlations between levels of the repeated measures factor and more power. They should also lead to larger observed effect sizes (i.e., Cohen's ds) because there is less attenuation due to low reliability. Of course, theoretically, the more you are interested in traits rather than states, the harder it will be to actually create meaningful changes.
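To make the first point concrete, here is a small sketch (the numbers are made up) showing how the assumed correlation feeds into a paired power calculation via the standard deviation of the difference scores:

```r
# Sketch (made-up numbers): the sd of difference scores is sd * sqrt(2 * (1 - r)),
# so a higher correlation r between the repeated measures shrinks it and raises power.
sd_raw <- 10   # assumed sd of the outcome at each time point
delta  <- 4    # assumed true change in means

for (r in c(0.3, 0.6, 0.9)) {
  sd_diff <- sd_raw * sqrt(2 * (1 - r))
  out <- power.t.test(n = 30, delta = delta, sd = sd_diff, type = "paired")
  cat(sprintf("r = %.1f -> power = %.2f\n", r, out$power))
}
```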
| null | CC BY-SA 3.0 | null | 2010-09-16T03:26:35.713 | 2011-06-07T04:24:49.023 | 2011-06-07T04:24:49.023 | 183 | 183 | null |
2706 | 2 | null | 2691 | 8 | null | Basically, PCA finds new variables which are linear combinations of the original variables such that the data can be described with fewer dimensions in the new space. Think of a data set consisting of points in 3 dimensions lying on the surface of a flat plate held up at an angle. In the original x, y, z axes you need 3 dimensions to represent the data, but with the right linear transformation you only need 2.
Basically what @Joel said, but only linear combinations of the input variables.
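A quick sketch of that flat-plate picture, using simulated data:

```r
# Sketch: points in 3-D that (up to a little noise) lie on a tilted plane,
# so two principal components capture essentially all of the variance.
set.seed(1)
u <- rnorm(300)
v <- rnorm(300)
plate <- cbind(x = u, y = v, z = 0.5 * u + 0.2 * v + rnorm(300, sd = 0.01))

summary(prcomp(plate))   # the first two PCs explain nearly 100% of the variance
```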
| null | CC BY-SA 2.5 | null | 2010-09-16T04:10:44.910 | 2010-09-16T04:10:44.910 | null | null | 1303 | null |
2707 | 2 | null | 2689 | 7 | null | First, define the theoretical construct of interest. There are many ways that a rating can be defined:
- What is the theoretical target population? The entire world, English speakers, people who visit IMDB, people who have seen the movie in question?
- What is the target time frame? Is it the rating of the movie now, or averaged over its entire release period?
- Is it a democratic rating or an expert rating? Some people are more knowledgeable about the worth of movies. Some people are better able to differentiate a good from a bad movie. Some people are more consistent in their ratings over time. Should ratings from people who are "better" at rating movies be given more weight? This relates to a philosophical question of aesthetics and the meaning of intersubjective goodness.
- Assuming you could get honest ratings from the entire target population over the entire time frame, weighted or not by expertise, what is the mapping between these ratings and the composite rating? This could be the arithmetic mean. Alternatively, there are many other ways of combining individual ratings; for example, you could use an interpolated median (see the sketch after this list). Some alternatives would have minimal effect on the rank order of films, but would have a major influence on the absolute value of the rating.
- Is the number of people interested in the movie relevant to the rating?
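As an aside on the interpolated median mentioned above, here is a small sketch. It treats each discrete rating as a unit-width interval and assumes the ordinary median falls on an observed rating value (as it will for an odd number of votes):

```r
# Sketch: interpolated median for discrete ratings (e.g., 1-10 stars), treating
# each rating k as the interval [k - 0.5, k + 0.5).
interp_median <- function(ratings) {
  m     <- median(ratings)
  below <- mean(ratings < m)    # proportion strictly below the median rating
  at    <- mean(ratings == m)   # proportion exactly at the median rating
  (m - 0.5) + (0.5 - below) / at
}

votes <- c(10, 10, 9, 9, 9, 8, 7, 3, 1)
median(votes)         # 9
interp_median(votes)  # about 8.67, reflecting the skew toward lower votes
```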
Second, use all the available information to estimate the theoretical construct.
This is where the issues discussed by others would be important.
- the role of demographic adjustments would depend on your definition of the target population
- a weight for trust could be incorporated. Many indicators could be used:
  - the number of previous ratings (more ratings would suggest someone who is more engaged with the site)
  - the degree to which previous ratings are consistent with other raters, or at least a subset of raters (greater consistency would suggest thoughtful and honest responding)
  - the degree to which responses are distributed over an extended period of time (this would suggest that the person is less likely to be attempting to game the system)
  - the degree of engagement with the site in general, e.g., accessing the site or contributing to discussion boards (more engagement, more trust)
- as mentioned by @csgillespie, you could give more recent votes greater weight if you wanted to estimate current attitudes toward the film
- You could weight for expertise in ratings. This would be correlated with the trust weighting, but there is a difference.
Third, validate and monitor the estimation process using external trusted data sources.
| null | CC BY-SA 2.5 | null | 2010-09-16T05:11:05.977 | 2010-09-17T02:08:35.420 | 2010-09-17T02:08:35.420 | 183 | 183 | null |
2708 | 2 | null | 2691 | 9 | null | I view PCA as a geometric tool. If you are given a bunch of points in 3-space which are pretty much all on a straight line, and you want to figure out the equation of that line, you get it via PCA (take the first component). If you have a bunch of points in 3-space which are mostly planar, and want to discover the equation of that plane, do it via PCA (take the least significant component vector and that should be normal to the plane).
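A quick sketch of the plane case with simulated data: the loading vector of the least significant component recovers the plane's normal.

```r
# Sketch: recover the normal of a noisy plane from the least significant PC.
set.seed(2)
u <- rnorm(500)
v <- rnorm(500)
# points near the plane z = 2x - y, whose normal direction is (2, -1, -1)
pts <- cbind(u, v, 2 * u - v) + matrix(rnorm(1500, sd = 0.05), ncol = 3)

p <- prcomp(pts)
p$rotation[, 3]   # roughly proportional to c(2, -1, -1), up to sign
```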
| null | CC BY-SA 2.5 | null | 2010-09-16T05:15:42.383 | 2010-09-16T05:15:42.383 | null | null | 795 | null |
2709 | 2 | null | 2617 | 1 | null | To recap, you have a function $f(x,y)$ which is continuous, pseudo-random uniform at integer $(x,y)$, but nearly normal over the plane. You wish to make $f$ pseudo-random uniform over the whole plane. Suppose you could somehow parametrize $f$ by some parameter $\lambda$, say by using $\lambda$ to twiddle the pseudo-random uniform numbers generated at the lattice points. Then if you took $g(x,y) = \sum_{i} f(x,y,\lambda_i)$ for a whole bunch of $\lambda_i$ values, $g(x,y)$ would look much more normal, via the Central Limit Theorem. You could then push $g$ through the normal CDF to get an approximately uniform field. (I was inspired by Mike Dunlavey's comment on adding 12 uniforms to get a normal.)
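The building block of that argument is easy to check numerically. This only illustrates the sum-then-transform step, not the full spatial construction:

```r
# Sketch of the building block only: a sum of 12 independent U(0,1) draws is
# approximately N(6, 1), and pushing it through the normal CDF maps it back
# to something close to U(0,1).
set.seed(3)
z <- replicate(10000, sum(runif(12)))   # approximately normal, mean 6, sd 1
u <- pnorm(z, mean = 6, sd = 1)         # approximately uniform again

hist(u, breaks = 40, main = "Approximately uniform after pnorm")
```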
| null | CC BY-SA 2.5 | null | 2010-09-16T05:33:58.557 | 2010-09-16T05:33:58.557 | null | null | 795 | null |
2710 | 2 | null | 2688 | 3 | null | You could use generalized least squares, e.g., the gls() function from the nlme package, which allows you to specify a variance function via its weights argument.
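For example, a minimal sketch with hypothetical data, letting the residual standard deviation grow as a power of a covariate:

```r
# Sketch (hypothetical data): residual sd grows with x, modelled via varPower().
library(nlme)

set.seed(4)
dat <- data.frame(x = runif(200, 1, 10))
dat$y <- 2 + 3 * dat$x + rnorm(200, sd = 0.5 * dat$x)

fit <- gls(y ~ x, data = dat, weights = varPower(form = ~ x))
summary(fit)   # the estimated variance power should come out close to 1 here
```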
| null | CC BY-SA 2.5 | null | 2010-09-16T06:36:14.457 | 2010-09-16T06:36:14.457 | null | null | 187 | null |