Id: string (1–6 chars)
PostTypeId: string (7 classes)
AcceptedAnswerId: string (1–6 chars)
ParentId: string (1–6 chars)
Score: string (1–4 chars)
ViewCount: string (1–7 chars)
Body: string (0–38.7k chars)
Title: string (15–150 chars)
ContentLicense: string (3 classes)
FavoriteCount: string (3 classes)
CreationDate: string (23 chars)
LastActivityDate: string (23 chars)
LastEditDate: string (23 chars)
LastEditorUserId: string (1–6 chars)
OwnerUserId: string (1–6 chars)
Tags: list
5788
1
5789
null
7
4923
I am attempting to examine the change in slope between a predictor and response over two years. In year 1, it is definitely positive. (Linear regression, the 95% CI of the slope doesn't overlap 0). In year 2, the point estimate of the slope is close to 0 (0.002) and the CI overlaps 0. This is what I would expect if the slope was, well, actually 0. And given that any test of the slope will suggest that I cannot reject that it is 0 - great! Although, yes, I know, that doesn't confirm that the slope is 0. Not falling into that trap. But, with a slope value very close to 0, and a CI that overlaps 0...it all seems like the slope has declined. However, I now have a colleague who thinks I cannot say that the value of the slope declined between year 1 and year 2. Rather, they think that I can only say that there is no relationship between the predictor and response in year 2. Am I correct in interpreting my results that the slope of the year 2 relationship is indeed ~0? Or is there a problem with my interpretation?
How to tell if the slope of a line is 0 or there is just no relationship?
CC BY-SA 2.5
null
2010-12-28T17:00:55.580
2010-12-29T09:44:26.790
2010-12-29T09:44:26.790
null
101
[ "regression", "statistical-significance" ]
5789
2
null
5788
8
null
Well, your colleague definitely is incorrect, as many standard examples will show. For example, for the data $(x, x(1-x))$ with values of $x$ equal to $0, 1/n, 2/n, \ldots, 1$, the slope is exactly zero but obviously there's a strong (quadratic) relationship between "predictor" ($x$) and "response" ($x(1-x)$). Your colleague would be correct to say "there is no significant evidence of a linear relationship in year 2." But you can do more than that, as you know. You can assess the change between the two years.

Asserting that the "slope has declined" compares two estimates to each other. As such, you need to account for the uncertainty in both estimates, not just in the one for year two. From the mere fact that one slope is significantly different from zero and the other slope is not, you cannot legitimately conclude their difference is significant.

However, a valid comparison is simple to do: associated with each slope is a standard error (a routine part of regression software output) and a degree of freedom (usually equal to the number of data values minus the number of parameters). Assuming no temporal correlation in the error terms, you can compute the standard error for the difference in slopes as usual: take the root of the sum of squares of their SEs. Refer the t-statistic (i.e., the ratio of the slope difference to the SE of their difference) to Student's t distribution. The degree of freedom to use equals the sum of the DFs of the individual slopes.

The usual caveats and assumptions about tests hold, of course. But presumably, because you're already using linear regressions in each of the years separately, you have checked that suitable assumptions are reasonable for these data. About the only additional thing you might want to check is that the variances of the residuals are approximately the same in the two years. (You could use an F-test for this.) If not, that's worth exploring further in order to understand just how the change came about. Assuming it was abrupt and there's no sign of temporal trends in the residuals (no [heteroscedasticity](http://en.wikipedia.org/wiki/Heteroscedasticity)), you might consider using a [t-test for unequal variances](http://en.wikipedia.org/wiki/Student%27s_t-test#Unequal_sample_sizes.2C_unequal_variance) instead of the simpler test described above.
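A minimal R sketch of this comparison, assuming `fit1` and `fit2` are the separate `lm` fits for the two years and the predictor is named `x` (both names are illustrative):

```
s1 <- summary(fit1)$coefficients["x", c("Estimate", "Std. Error")]
s2 <- summary(fit2)$coefficients["x", c("Estimate", "Std. Error")]

slope_diff <- s1["Estimate"] - s2["Estimate"]
se_diff    <- sqrt(s1["Std. Error"]^2 + s2["Std. Error"]^2)  # root of the sum of squared SEs
df         <- fit1$df.residual + fit2$df.residual            # sum of the individual DFs

t_stat <- slope_diff / se_diff
2 * pt(-abs(t_stat), df)                                      # two-sided p-value
```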
null
CC BY-SA 2.5
null
2010-12-28T17:54:58.717
2010-12-28T17:54:58.717
null
null
919
null
5790
2
null
5782
42
null
Update: I've underestimated Taylor expansions. They actually work. I assumed that the integral of the remainder term can be unbounded, but with a little work it can be shown that this is not the case.

The Taylor expansion works for functions on a bounded closed interval. For random variables with finite variance the [Chebyshev inequality](http://en.wikipedia.org/wiki/Tchebycheff_inequality) gives $$P(|X-EX|>c)\le \frac{\operatorname{Var}(X)}{c^2}.$$ So for any $\varepsilon>0$ we can find a large enough $c$ so that $$P(X\in [EX-c,EX+c])=P(|X-EX|\le c)\ge 1-\varepsilon.$$

First let us estimate $Ef(X)$. We have \begin{align} Ef(X)=\int_{|x-EX|\le c}f(x)dF(x)+\int_{|x-EX|>c}f(x)dF(x) \end{align} where $F(x)$ is the distribution function of $X$. Since the domain of the first integral is the interval $[EX-c,EX+c]$, which is a bounded closed interval, we can apply the Taylor expansion: \begin{align} f(x)=f(EX)+f'(EX)(x-EX)+\frac{f''(EX)}{2}(x-EX)^2+\frac{f'''(\alpha)}{3!}(x-EX)^3 \end{align} where $\alpha\in [EX-c,EX+c]$, and the equality holds for all $x\in[EX-c,EX+c]$. I took only $4$ terms in the Taylor expansion, but in general we can take as many as we like, as long as the function $f$ is smooth enough.

Substituting this formula into the previous one we get \begin{align} Ef(X)&=\int_{|x-EX|\le c}f(EX)+f'(EX)(x-EX)+\frac{f''(EX)}{2}(x-EX)^2dF(x)\\\\ &+\int_{|x-EX|\le c}\frac{f'''(\alpha)}{3!}(x-EX)^3dF(x) +\int_{|x-EX|>c}f(x)dF(x) \end{align} Now we can increase the domain of the integration to get the following formula \begin{align} Ef(X)&=f(EX)+\frac{f''(EX)}{2}E(X-EX)^2+R_3\\\\ \end{align} where \begin{align} R_3&=\frac{f'''(\alpha)}{3!}E(X-EX)^3\\\\ &+\int_{|x-EX|>c}\left(f(x)-f(EX)-f'(EX)(x-EX)-\frac{f''(EX)}{2}(x-EX)^2\right)dF(x). \end{align}

Now under some moment conditions we can show that the second term of this remainder is of the same order as $P(|X-EX|>c)$, which is small. Unfortunately the first term remains, so the quality of the approximation depends on $E(X-EX)^3$ and on the behaviour of the third derivative of $f$ on bounded intervals. Such an approximation should work best for random variables with $E(X-EX)^3=0$.

Now for the variance we can use the Taylor approximation for $f(x)$, subtract the formula for $Ef(X)$ and square the difference. Then $E(f(X)-Ef(X))^2=(f'(EX))^2\operatorname{Var}(X)+T_3$ where $T_3$ involves the moments $E(X-EX)^k$ for $k=4,5,6$. We can arrive at this formula also by using only a first-order Taylor expansion, i.e. using only the first and second derivatives. The error term would be similar.

Another way is to expand $f^2(x)$: \begin{align} f^2(x)&=f^2(EX)+2f(EX)f'(EX)(x-EX)\\\\ &+[(f'(EX))^2+f(EX)f''(EX)](x-EX)^2+\frac{(f^2(\beta))'''}{3!}(x-EX)^3 \end{align} Similarly we then get \begin{align*} Ef^2(X)=f^2(EX)+[(f'(EX))^2+f(EX)f''(EX)]\operatorname{Var}(X)+\tilde{R}_3 \end{align*} where $\tilde{R}_3$ is similar to $R_3$. The formula for the variance then becomes \begin{align} \operatorname{Var}(f(X))=[f'(EX)]^2\operatorname{Var}(X)-\frac{[f''(EX)]^2}{4}\operatorname{Var}^2(X)+\tilde{T}_3 \end{align} where $\tilde{T}_3$ involves only third moments and above.
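As a quick sanity check of these second-order formulas, here is a small simulation sketch in R with an illustrative choice of $f$ and distribution ($f=\log$, $X\sim\Gamma(10,1)$, so $EX=10$ and $\operatorname{Var}(X)=10$):

```
set.seed(1)
x  <- rgamma(1e6, shape = 10, rate = 1)
mu <- 10; v <- 10
f1 <- 1 / mu       # f'(EX)  for f = log
f2 <- -1 / mu^2    # f''(EX)

c(sim = mean(log(x)), approx = log(mu) + f2 / 2 * v)        # E f(X)
c(sim = var(log(x)),  approx = f1^2 * v - f2^2 / 4 * v^2)   # Var f(X)
```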
null
CC BY-SA 4.0
null
2010-12-28T19:23:58.687
2020-03-01T13:35:12.273
2020-03-01T13:35:12.273
5654
2116
null
5791
1
5823
null
9
2964
## Background

I have a model with 17 parameters, and I currently use the coefficient of variation ($\text{CV}=\sigma/\mu$) to summarize the prior and posterior distributions of each parameter. All of the parameters are > 0.

I would also like to summarize these pdfs on a normalized scale (in this case standard deviation normalized by the mean) so that they can be compared to each other, and with other statistics presented in similar adjacent plots (sensitivity, explained variance). I will include density plots for each parameter separately, but I would like to summarize them here. However, the sensitivity of the CV to $\mu$ causes the following confusion that, although easily explained in text, would be preferable to avoid.

- The posterior CV of one parameter is greater than the prior because the mean has decreased more than the variance (parameter O in the figure).
- One of the parameters (N) is in units of temperature. It has a 95% prior CI of (8, 12) Celsius ($\simeq$ 281–285 K); when I present the data in Kelvin, which is only defined for positive values, the CV is <1%, whereas in Celsius the CV is closer to 40%.

To me, it seems that neither of these CVs provides an intuitive representation of the CI.

## Question

Are there better ways to present this information, either as a CV or as another statistic?

## Figure

As an example, this is the type of plot that I am planning to present, with posterior CV in black and prior CV in grey. For scale, the CV of parameter `O` is 1.6.

![alt text](https://i.stack.imgur.com/CVeO1.png)
Alternatives to using Coefficient of Variation to summarize a set of parameter distributions?
CC BY-SA 2.5
null
2010-12-28T19:53:11.963
2017-06-12T17:00:41.810
2017-06-12T17:00:41.810
11887
1381
[ "distributions", "data-visualization", "descriptive-statistics", "winsorizing", "l-moments" ]
5792
1
5793
null
9
10031
How can I extract the R-square from a fitted [rpart](http://cran.r-project.org/web/packages/rpart/index.html) model? `rsq.rpart(fit)` plots the two graphs, but I simply want to extract the R-square value for the full tree. I assume this is fairly obvious, but numerous searches didn't turn up anything useful. Any help you can provide will be greatly appreciated.
R-square from rpart model
CC BY-SA 2.5
null
2010-12-28T19:56:05.327
2010-12-28T20:31:25.117
2010-12-28T20:31:25.117
930
569
[ "r", "rpart" ]
5793
2
null
5792
9
null
The advantage of R is that most of the time you can easily access the source code. So in your case, start with

```
> rsq.rpart
```

(without parentheses) to see what the function actually does. The $R^2$ values are obtained as

```
tmp <- printcp(fit)
rsq.val <- 1 - tmp[, c(3, 4)]
```

where for each row (i.e., number of splits) we have the "apparent" and "relative" (w.r.t. cross-validation) statistics.
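A minimal self-contained sketch, using the example data shipped with rpart (the last row of the cp table corresponds to the full tree):

```
library(rpart)
fit <- rpart(Mileage ~ Weight, data = car.test.frame)
tmp <- printcp(fit)
1 - tmp[nrow(tmp), c(3, 4)]   # apparent and cross-validated R^2 for the full tree
```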
null
CC BY-SA 2.5
null
2010-12-28T20:28:57.350
2010-12-28T20:28:57.350
null
null
930
null
5795
2
null
5786
1
null
I guess you know that kernel logistic regression is a nonparametric method, so first of all you have that restriction. Regarding the R package, the one I know and that works pretty well is [np](http://cran.r-project.org/web/packages/np/index.html): Nonparametric kernel smoothing methods for mixed data types.

> This package provides a variety of nonparametric (and semiparametric) kernel methods that seamlessly handle a mix of continuous, unordered, and ordered factor data types.

Regarding state-of-the-art kernels, I can recommend experimenting with the ones described in this [paper](http://www.cs.mcgill.ca/~icml2009/papers/505.pdf) from 2009. Read it carefully to choose the one that is best and most current for you.
null
CC BY-SA 2.5
null
2010-12-28T22:01:12.497
2010-12-28T22:01:12.497
null
null
1808
null
5796
2
null
5786
8
null
Gaussian process classification (using Expectation Propagation) is probably the state-of-the-art in machine learning. There is an excellent [book](http://www.gaussianprocess.org/gpml/chapters/) by Rasmussen and Williams (downloadable for free), the [website](http://www.gaussianprocess.org/gpml/) for which has a very good MATLAB implementation. More software, books, papers etc. [here](http://www.gaussianprocess.org/). However, in practice, KLR will probably work just as well for most problems, the major difficulty is in selecting the kernel and regularisation parameters, which is probably best done by cross-validation, although leave-one-out cross-validation can be approximated very efficiently, see [Cawley and Talbot (2008).](http://dx.doi.org/10.1007/s10994-008-5055-9)
null
CC BY-SA 2.5
null
2010-12-28T22:30:00.733
2010-12-28T22:30:00.733
null
null
887
null
5797
1
5824
null
6
360
Given $K$ possible 'treatments' of some kind, and independent observations of some response under those treatments, say $X_{i,k}$ for $i=1,\ldots,n_k$ and $k=1,\ldots,K$, I am faced with the classical data-mining dilemma. The task is to simultaneously:

- Find the treatment, say $k^*$, which maximizes effect, and
- Estimate the effect size of treatment $k^*$.

Here you can think of effect size as a measure of location, say the mean. The naive approach to this problem is as follows:

- Estimate the effect size of each treatment. For example, if the effect size is the mean, let $\bar{X_k} = (\sum_i X_{i,k}) / n_k$.
- Pick the treatment which maximizes the estimated effect, i.e. let $k^* = \arg\max_k \bar{X_k}$.
- Estimate the effect size of $k^*$ by $\bar{X_{k^*}}$.

The problem, of course, is that the estimate of the optimal effect size is positively biased because the choice of optimal treatment is not independent of the estimated effect sizes, which are unconditionally unbiased (the small simulation at the end of this question illustrates the bias). Additionally, the goals are often somewhat broader: instead of picking the treatment which maximizes effect, we are tasked with sorting the treatments by estimated effect size and estimating the effect sizes. Because of this, I am looking for some kind of function $f$ which 'unbiases' the estimated effect sizes, but preserves monotonicity. That is, $f$ takes a $K$-vector to a $K$-vector, and

- $f$ preserves monotonicity: if $\bar{X_{i_1}} \le \bar{X_{i_2}} \le \ldots \le \bar{X_{i_K}}$ and $(Y_1,Y_2,\ldots,Y_K) = f(\bar{X_1},\bar{X_2},\ldots,\bar{X_K})$, then $Y_{i_1} \le Y_{i_2} \le \ldots \le Y_{i_K}$.
- $f$ (approximately, possibly under weird assumptions) unbiases the effect size estimates: if $(Y_1,Y_2,\ldots,Y_K) = f(\bar{X_1},\bar{X_2},\ldots,\bar{X_K})$, and $Y_{i_1} \le Y_{i_2} \le \ldots \le Y_{i_K}$, then $Y_{i_j}$ is a (nearly) unbiased estimate of the effect size of the $j$th smallest effect treatment.

Probably such a function would have to be aware of some additional information, like the number of observations $n_k$ and some measure of the spread of the response under each treatment. Probably the monotonicity requirement is questionable if the standard errors of the effect estimates vary wildly. Sweep that under the rug if possible.

I am also interested in approaches to this problem when the response observations are not independent, rather they are paired.
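To make the positive bias concrete, here is a small simulation sketch (purely illustrative: all $K$ treatments have true mean effect zero, yet the naive estimate for the selected 'best' treatment is clearly positive):

```
set.seed(1)
K <- 10; n <- 20
naive_best <- replicate(5000, {
  xbar <- colMeans(matrix(rnorm(n * K), n, K))  # all true effects are 0
  max(xbar)                                     # naive estimate for the selected treatment
})
mean(naive_best)   # roughly 0.34 rather than 0
```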
Is there a bias correction for effect size in a data mining context?
CC BY-SA 2.5
null
2010-12-28T22:45:59.357
2015-09-22T03:07:38.077
2015-09-22T03:07:38.077
9007
795
[ "multivariate-analysis", "data-mining", "multiple-comparisons", "unbiased-estimator", "bias-correction" ]
5799
2
null
5750
8
null
Perhaps you could do a preliminary analysis on a random subset of the data to form hypotheses, and then test those few hypotheses of interest using the rest of the data. That way you would not have to correct for nearly as many multiple tests. (I think...) Of course, if you use such a procedure you will be reducing the size of the dataset used for the final analysis and so reduce your power to find real effects. However, corrections for multiple comparisons reduce power as well and so I'm not sure that you would necessarily lose anything.
null
CC BY-SA 2.5
null
2010-12-29T01:22:48.120
2010-12-29T01:22:48.120
null
null
1679
null
5801
1
5805
null
1
314
Sorry if this is a n00b question, I'm just trying to wrap my head around the problem. I am trying to refute conventional wisdom here, so any help is greatly appreciated. And now for the question: For example, I flip a coin 100 times and I end up with 50 consecutive heads followed by 50 consecutive tails. How would I calculate the probability or likelihood that the next time I replay the scenario that I will end up with the exact same result? Would the equation be different if we were using a ternary or quaternary event instead of a binary event? Conventional, non-statistical, wisdom would assume that coming to the same result on consecutive attempts would be incredibly remote, but that the odds of eventually getting an identical result would increase as more attempts are made, i.e. 50/50, 50/50 would be implausible, but 50/50, 49/51, 51/49 ... 50/50 would be plausible.
Is it possible to predict the likelihood of an order of random events?
CC BY-SA 2.5
null
2010-12-29T06:09:32.537
2010-12-29T09:30:18.017
2010-12-29T06:13:55.593
183
2131
[ "probability", "randomness" ]
5802
2
null
4356
17
null
Rpart only provides univariate splits. I believe, based upon your question, that you are not entirely familiar with the difference between a univariate partitioning method and a multivariate partitioning method. I have done my best to explain this below, as well as provide some references for further research and to suggest some R packages to implement these methods.

Rpart is a tree based classifier that uses recursive partitioning. With partitioning methods you must define the points within your data at which a split is to be made. The rpart algorithm in R does this by finding the variable and the point which best splits (and thus reduces) the RSS. Because the splits only happen along one variable at a time, these are univariate splits. A multivariate split is typically defined as a simultaneous partitioning along multiple axes (hence multivariate), i.e. the first rpart node might split along Age > 35, the second node might split along Income > 25,000, and the third node might split along Cities west of the Mississippi. The second and third nodes are split on smaller subsets of the overall data, so in the second node the income criterion best splits the RSS only for those people who have an age of over 35; it does not apply to observations not found in this node, and the same applies for the Cities criterion. One could continue doing this until there is a node for each observation in your dataset (rpart uses a minimum bucket size function in addition to a minimum node size criterion and a cp parameter which is the minimum the r-squared value must increase in order to continue fitting).

A multivariate method, such as the Patient Rule Induction Method (the prim package in R), would simultaneously split by selecting, for example, all observations where Income was greater than 22,000, Age > 32, and Cities west of Atlanta. The reason why the fit might be different is because the calculation for the fit is multivariate instead of univariate: the fit of these three criteria is calculated based upon the simultaneous fit of the three variables on all observations meeting these criteria, rather than iteratively partitioning based upon univariate splits (as with rpart).

There are varying beliefs in regards to the effectiveness of univariate versus multivariate partitioning methods. Generally what I have seen in practice is that most people prefer univariate partitioning (such as rpart) for explanatory purposes (it is only used in prediction when dealing with a problem where the structure is very well defined and the variation among the variables is fairly constant, which is why these are often used in medicine). Univariate tree models are typically combined with ensemble learners when used for prediction (i.e. a Random Forest). People who do use multivariate partitioning or clustering (which is very closely related to multivariate partitioning) often do so for complex problems that univariate methods fit very poorly, and do so mainly for prediction, or to group observations into categories.

I highly recommend Julian Faraway's book Extending the Linear Model with R. Chapter 13 is dedicated entirely to the use of trees (all univariate). If you're interested further in multivariate methods, Elements of Statistical Learning by Hastie et al. provides an excellent overview of many multivariate methods, including PRIM (although Friedman at Stanford has his original article on the method posted on his website), as well as clustering methods.

In regards to R packages to utilize these methods, I believe you're already using the rpart package, and I've mentioned the prim package above. There are various built-in clustering routines, and I am quite fond of the party package mentioned by another person in this thread, because of its implementation of conditional inference in the decision tree building process. The optpart package lets you perform multivariate partitioning, and the mvpart package (also mentioned by someone else) lets you perform multivariate rpart trees; however I personally prefer using partDSA, which lets you combine nodes further down in your tree to help prevent partitioning of similar observations, if I feel rpart and party are not adequate for my modeling purposes.

Note: In my example of an rpart tree in paragraph 2, I describe how partitioning works with node numbers. If one were to draw out this tree, the partitioning would proceed to the left if the rule for the split was true; however in R I believe the split actually proceeds to the right if the rule is true.
null
CC BY-SA 2.5
null
2010-12-29T06:58:48.683
2010-12-30T06:06:56.613
2010-12-30T06:06:56.613
2166
2166
null
5805
2
null
5801
2
null
You have some event with a certain probability of success, so the probability that this event happens the next time is just that success probability. In your case you have flipped a coin 100 times and observed 50 consecutive heads followed by 50 consecutive tails. The probability of this event happening is $1/2^{100}$. If you repeat this experiment independently of the first one, the probability that it will happen again remains the same. Although you generated 100 random [Bernoulli](http://en.wikipedia.org/wiki/Bernoulli_distribution) variables with probability $p=1/2$, your random variable of interest is a Bernoulli with probability $p=1/2^{100}$.

The probability that you eventually get the same result increases with the number of trials. The number of trials needed for success is a random variable with a [geometric distribution](http://en.wikipedia.org/wiki/Geometric_distribution); for example, the expected number of repetitions of the 100-flip experiment before the exact sequence recurs is $1/p = 2^{100}$.

If you use a ternary or quaternary event the same holds, as long as you are looking for a specific event to happen. Then calculate the probability of this event and again you have a Bernoulli random variable with this probability.
null
CC BY-SA 2.5
null
2010-12-29T09:29:59.850
2010-12-29T09:29:59.850
null
null
2116
null
5806
2
null
5801
3
null
The probability of 50 tails followed by 50 heads, assuming balanced i.i.d. Bernoulli coin flips, is simply $\prod_{i=1}^{100} P(\text{Event}_i=\text{Predicted}_i)=\left(\tfrac{1}{2}\right)^{50} \times \left(\tfrac{1}{2}\right)^{50} = 2^{-100}$. 50/50 is just as unlikely as 49/51, if order must be preserved.
null
CC BY-SA 2.5
null
2010-12-29T09:30:18.017
2010-12-29T09:30:18.017
null
null
2456
null
5807
1
null
null
6
1782
Colleagues of mine recently presented a work where they calibrate boosted regression tree (BRT) models on small data sets ($n= 30$). They validated the models using leave-one-out cross-validation (LOOCV) with R2, RMSPE and RPD indices. They also provided these indices computed by training and validating the model on the full dataset. The R2, RMSPE and RPD values obtained through LOOCV were almost exactly equal to the R2, RMSPE and RPD values obtained when validating on the training data set.

My questions are:

- Is such a result expected for LOOCV on BRT?
- Is this because BRT is relatively insensitive to outliers (and to single individuals?), so that excluding one individual during LOOCV does not make a difference, producing nearly identical calibrated models with the same performance metrics on the excluded individuals?
- In that case does LOOCV for BRT make any sense, compared to repeated k-fold CV with $k < n$?

Thank you in advance
Leave-one-out cross validation and boosted regression trees
CC BY-SA 2.5
null
2010-12-29T14:31:10.063
2010-12-29T21:20:27.750
2010-12-29T17:28:11.153
null
2561
[ "regression", "machine-learning", "cross-validation" ]
5808
1
5809
null
7
9136
One of the ANCOVA assumptions, as I read it [here](http://www.google.gr/url?sa=t&source=web&cd=2&sqi=2&ved=0CCoQFjAB&url=http%3A%2F%2F161.111.161.171%2Festadistica2004%2FANOVA%2FANCOVA.pdf&rct=j&q=ancova%20assumptions%20covariates%20independent%20randomization%20&ei=0UobTcOkKoey8gOq9rCeBQ&usg=AFQjCNFzfXuU154mlSPmMA8oN48fxXhRiw&sig2=6udO2dbErcUJeB_qi4kd_w&cad=rja), says that:

> Independent variables orthogonal to covariates. In traditional ANCOVA, the independents are assumed to be orthogonal to the factors. If the covariate is influenced by the categorical independents, then the control adjustment ANCOVA makes on the dependent variable prior to assessing the effects of the categorical independents will be biased since some indirect effects of the independents will be removed from the dependent.

See also the third assumption [here](http://en.wikiversity.org/wiki/Advanced_ANOVA/ANCOVA).

Could you tell me how I should verify that this assumption is met, in a given model containing three factors and one covariate? You can refer to SPSS or R ways of doing this. Thank you very much
Checking factor / covariate independence in ANCOVA
CC BY-SA 2.5
null
2010-12-29T15:14:45.243
2010-12-29T17:12:47.310
null
null
339
[ "r", "spss", "ancova", "assumptions" ]
5809
2
null
5808
10
null
One approach is to see if the covariate is correlated with the predictor variables. That is, if the ANCOVA is given by:

```
predicted ~ covariate + predictor1*predictor2*predictor3
```

Then first assess whether the covariate and the various predictor effects/interactions are correlated:

```
covariate ~ predictor1*predictor2*predictor3
```

If you find that the covariate is correlated with any of the predictor variables or their interaction, then you're violating the assumption you cite. If the covariate is categorical with more than 2 levels, you'll have to assess the correlations via multinomial regression.
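For a continuous covariate, a concrete version of this check in R might look like the following (the data-frame and variable names are illustrative):

```
chk <- lm(covariate ~ predictor1 * predictor2 * predictor3, data = dat)
anova(chk)   # significant predictor terms suggest the covariate is not independent of them
```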
null
CC BY-SA 2.5
null
2010-12-29T16:02:23.433
2010-12-29T16:35:50.477
2010-12-29T16:35:50.477
364
364
null
5810
2
null
5808
1
null
EDIT: As pointed out in the comments below, this is an answer to a different question, one about ensuring that the effect of the covariate is the same in all groups.

You have to check for interactions of the covariate and the factors. So if the ANCOVA model was

```
lm1 <- lm(outcome ~ covariate + factor1*factor2*factor3)
```

then you can add all the interactions and look at

```
lm2 <- lm(outcome ~ covariate*factor1*factor2*factor3)
```

followed by an F-test:

```
anova(lm1, lm2)
```

It might also make sense to add fewer interactions (especially if you don't have a lot of data), so that the large number of high-order interactions don't eat up the power.
null
CC BY-SA 2.5
null
2010-12-29T16:39:58.993
2010-12-29T17:12:47.310
2010-12-29T17:12:47.310
279
279
null
5812
2
null
5807
4
null
It is hard to tell without data, but the set may be "too homogeneous" to make LOO work -- imagine you have a set $X$ and you duplicate all objects to make a set $X_d$ -- while BRT usually has very good accuracy on its training set, it is pretty obvious that LOO on $X_d$ will probably give results identical to test-on-train. So if the accuracy is good, I would even try resampling CV on this data to verify this result (on each of, let's say, 10 folds you make a train set of a size equal to the full set by sampling objects with replacement, and test on the objects that were not placed in the train set -- this should split them in about a 1:2 proportion).

EDIT: More precise algorithm of resampling CV. Given a dataset with $N$ objects and $M$ attributes:

- The training set is made by randomly selecting $N$ objects from the original set with replacement
- The objects that were not selected in step 1 form the test set (this is roughly $\frac{1}{3}N$ objects)
- The classifier is trained on the train set and tested on the test set, and the measured error is gathered
- Steps 1-3 are repeated $T$ times, where $T$ is more or less arbitrary, say 10, 15 or 30
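A rough R sketch of this resampling scheme (the fitting and error functions are placeholders to be swapped for the BRT model and error measure actually used):

```
resample_cv <- function(data, fit_fun, err_fun, T = 15) {
  n <- nrow(data)
  replicate(T, {
    train_idx <- sample(n, n, replace = TRUE)               # step 1: resampled train set
    test      <- data[-unique(train_idx), , drop = FALSE]   # step 2: left-out objects (~1/3 N)
    model     <- fit_fun(data[train_idx, , drop = FALSE])   # step 3: train...
    err_fun(model, test)                                    # ...and record the test error
  })
}
```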
null
CC BY-SA 2.5
null
2010-12-29T17:27:19.787
2010-12-29T21:20:27.750
2010-12-29T21:20:27.750
null
null
null
5813
1
5816
null
1
212
I am a beginner level programmer preparing for the interview in medical research company. Job sounds damn interesting and I would like to get there. To show my skills and interest, I want to write a program related to the topic. I think, statistical analysis is quite used in that field, isn't it? What would you suggest, as idea for demo program? Calculation of basic statistical parameters of input data? UPDATE: @chl: my background: beginner Silverlight programmer, with basic (college) level of statistics; my intentions: small application, that shows I know Silverlight & I have interest in medical research @onestop: company's area of activity: research in proteins, metabolics, central nervous system; job specification: I have just basic requirements, and there is Silverlight knowledge listed
Demo for bioinformatics
CC BY-SA 2.5
0
2010-12-29T17:48:41.317
2010-12-29T22:09:28.347
2010-12-29T22:09:28.347
2564
2564
[ "bioinformatics" ]
5814
1
null
null
7
327
[Traffic light synchronization](http://ops.fhwa.dot.gov/publications/fhwahop08024/chapter6.htm) nowadays is not a tedious project. Image:

![image](https://i.stack.imgur.com/KtnTF.png)

For a personal research project I'm trying to build a statistical model to solve such problems in two and higher dimensions. I would like to synchronize `x` 4-legged (approaches) intersections such that:

- The intersections are all located within the same area in a grid-type format. I'd like to think of this as a 1/0 matrix, where 1 indicates a signalized intersection and 0 doesn't. Matrix example:

  [1 1 1 1]
  [0 1 0 1]
  [1 1 1 1]

  This assumes that all nodes are connected in a vertical or horizontal but not diagonal manner and that they could have a direction of either one or two way.
- Each intersection has a traffic light on each approach.
- The allowable movements per leg are left, right, and through (lrt) and no U-turns.
- Each leg can be one lane, i.e. one lane that allows for lrt; or approaches can have a combined through-and-right lane (1 lane) and a left lane (1 lane); or dedicated lanes, such as alpha left lanes, beta through lanes, and gamma right lanes. Some intersections should allow for restricted or no movement, as in an intersection could be set up as "no left turns allowed".
- Each lane should have a distinct vehicle flow (vehicles/minute) per movement, so a two-lane (left + through, and right) approach could have the following movement: left + through lane: 10 vehicles/minute (through) and 5 vehicles/minute turning left; right lane: 15 vehicles/minute.
- Assume the green cycle time (time from the beginning of one green to the beginning of the second green) to depend on the vehicle volumes, yet it could only be between t1 and t2 seconds (green time = red time + yellow time). Yellow time should be assumed constant at either 1, 2, or 3 seconds (depending on the jurisdiction). If pedestrian crossing is warranted, the red time should be assumed to be "white go ped: 4 seconds x lane" + "blinking ped: 4 seconds x lane". Remember that the number of lanes would be from the opposing approaches combined.
- The system should optimize green time so that maximum flow is passed through the system at all intersections, depending on the existing volumes and on historical values.
- Although there are historical data on the number of vehicles that enter the green network, I assume it is best to use a Monte Carlo simulation to simulate vehicle entry into the system, calibrated to historical data.
- The end result should be an equation or system of equations that will give you the green time for each intersection depending on volumes and what to expect from the other intersections.

I know this might not be the clearest question, but I'm happy to answer any questions and or comments.
Help in setting up and solving a transportation / traffic problem
CC BY-SA 2.5
null
2010-12-29T18:22:44.217
2011-06-25T12:45:38.133
null
null
59
[ "monte-carlo", "transportation" ]
5816
2
null
5813
1
null
Write a script that aligns short DNA fragments (25-100 basepairs) to an existing genome.
null
CC BY-SA 2.5
null
2010-12-29T20:03:22.763
2010-12-29T20:03:22.763
null
null
1795
null
5817
1
5822
null
3
652
I want to know if a covariate for each subject interacts with three types of trials, and the difficulty of those trials. My dependent measures are accuracy and response times (RT). For this question, I’d like to focus on RTs. Traditionally, people in my field have dichotomized the covariate of interest and used ANOVAs for analysis. I would like to treat the covariate as the continuous variable it is, and treat the subjects as random effects. I want to analyze this using mixed-models in `R` (`nlme`). The first 2 trial types can be either easy or hard and the third trial type is a combination of the first 2. These trials can be easy-easy, easy-hard, hard-easy, hard-hard. I expect people who have higher scores on the covariate to show a smaller difference between hard and easy RTs for at least 1 trial type. This is a repeated-measures design with each subject completing 3 blocks of 40 trials of each of the trial types (for trialtypes 1 & 2: 20 easy, 20 hard; for trialtype3, 10 easy-easy, 10 easy-hard, 10 hard-hard, 10 hard-easy). Stated differently, each subject completes 3 blocks of 120 trials with the various trialtypes randomly ordered. Only RTs for correct trials will be analyzed (resulting in an unbalanced design for RT data). Besides a counter-balancing of response keys, this is a completely within-subjects design. To summarize, what is the model (or models) that will allow me to test for interactions between trialtypes, difficulty, and the covariate using `nlme` in `R`?
What is my model statement for mixed models (nlme) in R?
CC BY-SA 3.0
null
2010-12-29T20:17:43.350
2013-09-04T14:36:18.880
2013-09-04T14:36:18.880
21599
2322
[ "r", "mixed-model", "lme4-nlme" ]
5818
2
null
5696
2
null
Here's a couple of ways to think about this.

1. A) You could treat each full sequence of codings as an ordered set of events (i.e. ["head nod", "head shake", "head nod", "eyebrow raised"] and ["head nod", "head shake", "eyebrow raised"]), then align the sequences using an algorithm that made sense to you ( [http://en.wikipedia.org/wiki/Sequence_alignment](http://en.wikipedia.org/wiki/Sequence_alignment) ). You could then compute inter-coder reliability for the entire sequence. B) Then, again using the aligned sequences, you could compare when they said an event happened, given that they both observed the event.

2. Alternately, you could model this as a Hidden Markov Model, and use something like the Baum-Welch algorithm to impute the probabilities that, given some actual event, each coder actually coded the data correctly. [http://en.wikipedia.org/wiki/Baum-Welch_algorithm](http://en.wikipedia.org/wiki/Baum-Welch_algorithm)
null
CC BY-SA 2.5
null
2010-12-29T20:25:54.147
2010-12-30T21:38:28.320
2010-12-30T21:38:28.320
82
82
null
5819
1
5820
null
10
10924
I am using [scipy.stats.gaussian_kde](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.html) to estimate a pdf for some data. The problem is that the resulting pdf takes values larger than 1. As far as I understand, this should not happen. Am I mistaken? If so why?
Kernel density estimate takes values larger than 1
CC BY-SA 2.5
null
2010-12-29T20:27:46.593
2010-12-30T15:28:38.317
2010-12-30T15:28:38.317
919
977
[ "distributions", "probability", "estimation", "python" ]
5820
2
null
5819
17
null
You are mistaken. The CDF should not be greater than 1, but the PDF may be. Think, for example, of the PDF of a Gaussian random variable with mean zero and standard deviation $\sigma$: $$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{x^2}{2\sigma^2}\right)$$ If you make $\sigma$ very small, then for $x = 0$ the PDF is arbitrarily large!
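For instance, evaluating a normal density with a small $\sigma$ in R:

```
dnorm(0, mean = 0, sd = 0.1)   # ≈ 3.99, a perfectly valid density value above 1
```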
null
CC BY-SA 2.5
null
2010-12-29T20:33:48.510
2010-12-29T20:33:48.510
null
null
795
null
5822
2
null
5817
1
null
(This isn't an answer, simply an attempt at clarification of the question. I would have made this a comment to the question, but I haven't had success putting code in a comment) Am I correct in interpreting your design as captured by the data frame resulting from the following (ignoring the arbitrary number of subjects and trials): ``` a = data.frame(expand.grid( s = factor(1:10) , trial = 1:20 , task1 = c('absent','easy','hard') , task2 = c('absent','easy','hard') )); a = a[!(a$task1=='absent' & a$task2=='absent'),] ``` Thus, ``` table(a$task1,a$task2) ``` yields ``` easy absent hard easy 200 200 200 absent 200 0 200 hard 200 200 200 ``` I'm more familiar with lme4 than nlme, but I know that the absent/absent cell will cause lmer to fail if you try a model with a task1:task2 interaction. Somehow we need to get lmer to recognize that this is a nested model with two tasks that are presented either singly or combined...
null
CC BY-SA 2.5
null
2010-12-29T22:11:52.237
2010-12-29T22:11:52.237
null
null
364
null
5823
2
null
5791
6
null
It seems to me that CV is inappropriate here. I think you may be better off separating the change in location from the change in dispersion. In addition, the distributions you mention in your comment to the question are, for most parameter values, skewed (positively, except for the beta distribution). That makes me question whether the standard deviation is the best choice for a measure of dispersion; perhaps the interquartile range (IQR) might be better, or possibly the median absolute deviation? Similarly, rather than the mean, I might consider the median or the mode as the measure of location. The choice might in practice be determined by ease of computation as well as the field of application, the details of the model... Say you choose to use the IQR and the mode. You could summarise the change in dispersion using the ratio of posterior to prior IQRs, probably plotted on a log scale as that's usually appropriate for ratios. You could summarise the change in location using the ratio of the difference between posterior and prior modes to the prior IQR, or to the posterior IQR, or perhaps to the geometric mean of the posterior and prior IQRs. These are just some quick ideas that came to mind. I can't claim any strong underpinnings for them, or even any great personal attachment to them.
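A sketch of these summaries in R, assuming `prior` and `post` are vectors of draws for one parameter (names are illustrative, and the median is used as an easy stand-in for the mode):

```
disp_change <- log(IQR(post) / IQR(prior))                  # change in dispersion: log ratio of IQRs
loc_change  <- (median(post) - median(prior)) / IQR(prior)  # change in location, scaled by the prior IQR
```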
null
CC BY-SA 2.5
null
2010-12-29T22:49:05.940
2010-12-29T22:49:05.940
null
null
449
null
5824
2
null
5797
0
null
Ok, duh, one approach would be to use the James-Stein shrinkage. This will not, I believe, unbias the estimates, but will reduce the mean squared error.
null
CC BY-SA 2.5
null
2010-12-29T23:55:41.287
2010-12-29T23:55:41.287
null
null
795
null
5825
2
null
5791
6
null
Some alternatives, which have the same 'flavor' as CV, are:

- The coefficient of L-variation, see e.g. Viglione. The second sample L-moment takes the place of the sample standard deviation.
- The $\gamma$-Winsorized standard deviation divided by the $\gamma$-trimmed mean. You probably want to divide this by $1 - 2\gamma$ to get a consistent scaling across different values of $\gamma$. I would try $\gamma = 0.2$.
- The square root of the number of samples times the McKean-Schrader estimate of the standard error of the median, divided by the sample median.
null
CC BY-SA 2.5
null
2010-12-30T04:50:05.367
2010-12-30T04:50:05.367
null
null
795
null
5826
1
5830
null
10
14609
I was wondering if anybody had experience in how to set the knot points when using cubic regression splines. Some background: I have a response and predictor variable, and I want to determine the trend relationship between the two. To see what it looks like without making too many assumptions, I've fit a smoothing spline curve using the `gam` function in R. The trend is obviously not linear, but otherwise well-behaved: smooth, and not too wiggly. I'd now like to model this trend using a simple, cubic regression spline (there are various practical issues with using the `gam` fit, or I'd just use that). Of course, using a regression spline requires the knots to be specified in advance. It's not too hard to do that with linear splines: I'd insert a knot where the slope of the smooth fit changes substantially, eg around local minima/maxima. However, cubic splines appear to be a more complicated story. Any guidance on where I should put the knots would be much appreciated.
Knot selection for cubic regression splines
CC BY-SA 2.5
null
2010-12-30T05:53:33.917
2010-12-30T10:11:57.400
null
null
1569
[ "regression", "splines" ]
5827
1
null
null
5
2474
We work on cervical cancer detection and we started accumulating data in 2006 by performing HPV testing on all our samples. We have accumulated a data set with almost 400,000 women. Data enters the database on a daily basis. Most women come for a yearly smear, so most women appear in the database more than once (2-13x). Most women are HPV negative (approx. 85%) but some become HPV positive during this observation period. I would be very interested in looking at the rate of infection (incidence) of HPV16. I suppose I will have to look at the women with at least 2 samples and restrict to women going from a negative sample to a positive sample.

I have a few questions:

- How do I start? Do I use a survival analysis and how do I do this specifically in Stata or Prism5?
- Do I censor some women and how (those women who are HPV16+ on the first sample, we don't know when they acquired this)?
- What do I do with all the HPV negative women, do they stay in the analysis?
- How would I compare the incidence rate according to age? I would specifically like to know if young women (<30 years) have a different pattern when compared to women of >30 years.
How do I calculate and compare incidence of infection between age-groups
CC BY-SA 2.5
null
2010-12-30T06:07:23.883
2011-08-15T13:15:32.400
2010-12-30T09:49:09.903
930
2571
[ "epidemiology", "stata", "disease" ]
5828
2
null
4356
1
null
Multivariate splits as defined in the CART book aren't implemented in `rpart`. The CART software package from Salford Systems has this feature, but AFAIK it uses a proprietary algorithm licensed from Breiman, Friedman et al.
null
CC BY-SA 2.5
null
2010-12-30T06:31:03.853
2010-12-30T06:31:03.853
null
null
1569
null
5829
2
null
5776
0
null
This is a job for Bayes' Theorem, not for null-hypothesis testing. You've given us some information, so we should now simply determine how much more or less consistent that information is with the hypothesis than with its negation, and adjust our priors accordingly. Based on what you've told us I conclude that the likelihood "Bill is skilled" is low -- somewhere in the neighborhood of 10% (that is, I'll bet he is "lucky" unless you offer me odds better than 9:1 against "skilled"). Here's why:

- Neither the failure to make hypotheses in advance of the 2 games, the lack of information about how many games have been played or will be played after the 2 in question, nor the possibility that someone is cheating, etc. really matter. Having been supplied no information bearing on the situation, we should assume an 8-player poker game drawn at random from the universe of such games being played. That deals w/ all the "what ifs" & "could bes" that Whuber draws our attention to: b/c the proportion of games that are fixed, or involve 1 avg player competing w/ 7 imbeciles, or are played w/ unshuffled decks etc., is tiny in comparison to the proportion of games that are fair & involve "run of the mill players," you will just be making your life complicated if you assume this game involves anything other than a "normal game" (if you have a different sense of how the universe of 8-person games is populated, then modify this part of the analysis accordingly; I'm just trying to demonstrate how to think about this problem!).
- Based on billions of games & meticulous record-keeping, I put the likelihood at about 0.10 that any randomly selected player in a normal game of poker is "skilled" (again, if you have a different sense of what the talent distribution looks like, substitute your own estimate here).
- In a normal game of poker, the likelihood that a skilled poker player will win any 2 consecutive hands against 7 randomly selected players is only a scintilla higher than the likelihood that an unskilled one will. Poker is a game of skill, yes, but variance is super high (if you think otherwise, fine, but in that case you definitely are not an experienced poker player). If you had said "Bill made 2 consecutive 40-foot jump shots," or "won 2 consecutive olympic marathons," in contrast, then my priors about how much better he is than the average basketball player or the average marathon runner would shift much much more dramatically -- those are lower-variance indicators of those types of skill.
- Because the likelihood ratio for the hypothesis and the negation of the hypothesis is thus very close to 1, you won't do much better here than going with your priors. Again, mine is that there is a 1/10 chance that Bill is skilled.
null
CC BY-SA 2.5
null
2010-12-30T06:36:57.283
2010-12-30T06:36:57.283
null
null
11954
null
5830
2
null
5826
13
null
This is a tricky problem and most people just select the knots by trial and error. One approach which is growing in popularity is to use penalized regression splines instead. Then knot selection has little effect provided you have lots of knots. The coefficients are constrained to avoid any coefficient being too large. It turns out that this is equivalent to a mixed effects model where the spline coefficients are random. Then the whole problem can be solved using REML without worrying about knot selection or a smoothing parameter. Since you use R, you can fit such a model using the `spm()` function in the [SemiPar package](http://cran.r-project.org/web/packages/SemiPar/index.html).
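A minimal sketch of what that looks like (assuming `x` and `y` are numeric vectors; the function and package are as named above):

```
library(SemiPar)
fit <- spm(y ~ f(x))   # penalized spline; the amount of smoothing is chosen via REML
summary(fit)
plot(fit)
```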
null
CC BY-SA 2.5
null
2010-12-30T09:08:39.313
2010-12-30T09:08:39.313
null
null
159
null
5831
1
5883
null
18
748
I think it is fair to say statistics is an applied science, so when averages and standard deviations are calculated it is because someone is looking to make some decisions based on those numbers. Part of being a good statistician, then, I would hope, is being able to "sense" when the sample data can be trusted and when some statistical test is completely misrepresenting the true data we're interested in. Being a programmer that is interested in analysis of big data sets, I'm relearning some statistics and probability theory, but I can't shake this nagging feeling that all the books I've looked at are kind of like politicians that get up on stage, say a whole bunch of things, and then append the following disclaimer at the end of their speech:

> Now, I'm not saying that this is good or bad but the numbers say it's good so you should vote for me anyway.

Maybe you get that but maybe you don't, so here's a question. Where do I go to find war stories by statisticians where some decisions were based on some statistical information that later turned out to be completely wrong?
War stories where wrong decisions were made based on statistical information?
CC BY-SA 3.0
null
2010-12-30T10:11:11.710
2016-12-20T19:25:02.097
2016-12-20T18:32:46.517
22468
null
[ "references", "inference", "descriptive-statistics", "history", "fallacy" ]
5832
2
null
5826
7
null
It depends what you mean by "not too wiggly", but you might like to take a look at fractional polynomials for a simpler approach to fitting smooth curves that are not linear but not 'wiggly'. See [Royston & Altman 1994](http://www.jstor.org/stable/2986270) and the [mfp package](http://cran.r-project.org/web/packages/mfp/index.html) in R or the [fracpoly command](http://www.stata.com/help.cgi?fracpoly) in Stata.
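A rough sketch of the R route (the data-frame and variable names are illustrative):

```
library(mfp)
fit <- mfp(y ~ fp(x), data = dat, family = gaussian)  # fractional polynomial in x
summary(fit)
```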
null
CC BY-SA 2.5
null
2010-12-30T10:11:57.400
2010-12-30T10:11:57.400
null
null
449
null
5833
2
null
3589
0
null
Well, a lot depends on the assumptions you make. If you assume that the data are stationary, then more data for series one will give you a better estimate of its volatility. This estimate can be used to improve the correlation estimate. So the following statement is incorrect: "The history of Y's price before X went public is useless for assessing their subsequent correlation"
null
CC BY-SA 2.5
null
2010-12-30T10:58:21.897
2010-12-30T10:58:21.897
null
null
null
null
5834
1
null
null
2
904
In a Markov chain, a state $j$ is transient if $f_{jj}<1$ ($f_{jj}$ is the probability of ever visiting state $j$ starting from state $j$). Suppose I have an irreducible transient DTMC (meaning all states are transient). Now, I want to prove that for any $i,j$ in $S$ ($S$ is the DTMC state space), $f_{ij}$ (i.e. the probability of ever reaching state $j$ starting from state $i$) is less than 1. It is clear that $f_{ii}<1$ and $f_{jj}<1$. But how to prove that $f_{ij}<1$ for any $i,j$?

Thanks, Prasenjit
First passage time distribution in an irreducible transient discrete-time Markov chain (DTMC)
CC BY-SA 2.5
null
2010-12-30T10:58:50.857
2010-12-31T16:46:13.443
2010-12-30T17:17:09.033
449
2516
[ "markov-process" ]
5835
2
null
5827
0
null
I don't use Stata or Prism 5, so I can't help with that part, but a survival analysis does sound appropriate, at least from what you've said. The most commonly used method is Cox proportional hazard, which is certainly available in Stata. Instructions on how to code the data are program specific, so I will leave it to a Stata expert to help with that. Survival analysis can deal with various kinds of censoring. In particular, you have two types to deal with: 1) Some women were positive when you first saw them, and 2) Some women were negative when you last saw them. But, again, how to do this will be dependent on the program you use. You can include age as a time varying covariate.
null
CC BY-SA 2.5
null
2010-12-30T12:58:23.283
2010-12-30T16:21:13.550
2010-12-30T16:21:13.550
449
686
null
5836
1
5838
null
7
1238
This is more of a "how to use R" question than an actual hardcore statistics question, but I think the concentration of R masters here makes this a good forum for it. I'm refreshing a time series graphing package that currently uses gnuplot. The first step is getting somewhere close to the current graphs, and then I hope to be able to add more statistical analysis from R later. Currently, the gnuplot-generated graphs look like this: ![alt text](https://i.stack.imgur.com/EkuiN.png) The corresponding graph from R looks like this: ![alt text](https://i.stack.imgur.com/mqWEr.png) I'm reasonably happy with this. What's missing is the legend stuff from below the gnuplot graph. I can add a legend saying "In" and "Out", but how to best present the average, maximal and minimal values? I could add horizontal lines to the graph and print the values inside the graph area, but the placement might interfere with the actual graph.
Annotating graphs in R
CC BY-SA 2.5
null
2010-12-30T15:06:34.093
2011-01-02T22:56:50.777
2010-12-30T16:27:12.743
930
2443
[ "r", "time-series", "data-visualization", "ggplot2" ]
5837
2
null
5827
3
null
Survival analysis is a good idea, but:

- Do not discard anybody - even those with only one data point. The fact that they did or did not have HPV at that age is informative.
- Your data is interval censored: left-censored for those who are positive at the first measurement, right-censored for those who stay negative, and censored between the two measurement times for those who change status. I would set up age 0 (or some other fixed value before the earliest measurement) as the starting point for time, and then estimate the hazard of the infection non-parametrically - Stata probably has a routine for that, but if not, R certainly does. The result would be the age-dependent instantaneous probability of infection - pretty much what you are looking for.

If you want to estimate how covariates affect this hazard, then interval-censored regression is needed - standard Cox regression does not handle this situation.
null
CC BY-SA 2.5
null
2010-12-30T15:34:32.293
2010-12-30T15:34:32.293
null
null
279
null
5838
2
null
5836
5
null
A quick and dirty way to paste some text and numerical results along the labels of your legend is to simply rename the factor levels. For instance,

```
library(ggplot2)

df <- data.frame(x=rnorm(100), y=rnorm(100), f=gl(2,50))
df$f2 <- df$f
levels(df$f2) <- paste(levels(df$f), tapply(df$y, df$f, mean), sep=": ")
p <- ggplot(data=df) + geom_point(aes(x=x, y=y, color=f2))
p + opts(legend.position = 'bottom', legend.title=NULL)  # opts() is the ggplot2 syntax of this era
```

You can add whatever you want into the new labels, such as mean, min, max, etc. (e.g., create a custom function, inspired from `summary()`, that returns the values you want, and append them to `c("In","Out")`).
null
CC BY-SA 2.5
null
2010-12-30T16:23:03.247
2010-12-30T16:23:03.247
null
null
930
null
5839
2
null
5836
6
null
I'm sure that there's a more elegant way to do this, but you can try this:

```
library(ggplot2)
library(reshape)   # for melt()

Data <- data.frame(serie1 = cumsum(rnorm(100)),
                   serie2 = rnorm(100),
                   temps = as.Date(1:100, origin = "2000-01-01"))
label <- c("In traffic [max 2.0G bps] [Avg 1.1G bps] [95% 1.8G bps] [Min 569.4M bps]",
           "Out traffic [max 2.0G bps] [Avg 672M bps] [95% 2.1G bps] [Min 154.3M bps]")
p <- ggplot(melt(Data, id = "temps"), aes(temps, value, group = variable, colour = variable))
p <- p + geom_line() + xlab("") + opts(legend.position = "bottom", legend.title = NULL)
p <- p + scale_colour_manual(values = c("red", "blue"),
                             breaks = c("serie1", "serie2"),
                             labels = label)
print(p)
```
null
CC BY-SA 2.5
null
2010-12-30T16:59:30.710
2010-12-30T16:59:30.710
null
null
2028
null
5841
2
null
5327
4
null
@Firefeather: Does your data contain (and can only really ever contain) only positive values? If so, model it using a generalized linear model with gamma error and log link. If it contains zeros then you could consider a two stage (logistic regression for probability of zero and gamma regression for the positive values). This latter scenario can also be modeled as a single regression using a zero inflated gamma. Some great explanations of this were given on a SAS list a few years ago. Start here if interested and search for follow-ups. [link text](http://listserv.uga.edu/cgi-bin/wa?A2=ind0805A&L=sas-l&P=R20779) Might help point you in another direction if the truncated regression turns out implausible.
null
CC BY-SA 2.5
null
2010-12-30T17:17:29.253
2010-12-30T17:17:29.253
null
null
2040
null
5842
1
5846
null
6
2700
I have a load of point data in 3D, and I wish to cluster them like so:

- Each cluster contains points all of which are at most distance $d$ from another point in the cluster.
- All points in two distinct clusters are at least distance $d$ from each other.

The distances used are Euclidean distances. Is this a well known clustering algorithm? If so, what is it called?

Thanks.
Cluster data points by distance between clusters
CC BY-SA 2.5
null
2010-12-30T17:59:52.847
2010-12-30T20:04:33.040
null
null
null
[ "clustering" ]
5844
1
null
null
4
1879
Working on probabilistic outputs of kernel methods, I found the formulation of the SVM as a penalized method using the binomial deviance (described for example in "[The Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/), 2nd edition" by Hastie et al., pp. 426-430). The model is called "Kernel Logistic Regression (KLR)". References to "the well studied KLR" lead to literature on smoothing splines, but KLR is never mentioned there. What is the connection between KLR and smoothing splines? Thank you in advance for your answer!
What is the connection between Kernel Logistic Regression and Smoothing Splines?
CC BY-SA 2.5
null
2010-12-30T19:53:26.203
2015-04-14T19:18:40.800
2015-04-14T19:18:40.800
9964
2549
[ "machine-learning", "logistic", "svm", "smoothing", "kernel-trick" ]
5845
1
5849
null
11
949
Can the group recommend a good introductory text/resource on applied resampling techniques? Specifically, I am interested in alternatives to classical parametric tests (e.g. t tests, ANOVA, ANCOVA) for comparing groups when assumptions such as normality are clearly violated.

An example problem type I would like to educate myself on a better way to solve:

I) 2 Groups: Treatment and Control. Dependent Var: Change in account balance dollars after intervention. Covariate: Pre-intervention account balance dollars. Issue with applying ANCOVA: Many subjects will not have any change (many zeros).

II) 2 Groups: Treatment and Control. Dependent Var: New accounts added. Covariate: Pre-intervention number of accounts. Issue: Many subjects will not have any added account (many zeros).

Can I use a bootstrap? A permutation test? This is the type of analysis I would like to apply nonparametric resampling methods to.
Good text for resampling?
CC BY-SA 3.0
null
2010-12-30T19:53:29.647
2023-04-30T05:55:45.193
2017-02-25T18:00:27.267
11887
2040
[ "references", "bootstrap", "resampling" ]
5846
2
null
5842
4
null
Create a graph in which the points are nodes and two points are connected with an edge if and only if they lie within distance $d$ of each other. Stated in these terms, your criteria become:

- Every node in a cluster of two or more nodes is connected to at least one other node in that cluster.
- No two points in any disjoint clusters can be connected to each other.

In short, you want to compute the connected components of this graph. Linear-time algorithms are well known and simple to execute, as described in the [Wikipedia article](http://en.wikipedia.org/wiki/Connected_component_%28graph_theory%29). To do this efficiently for a lot of points in 3D, you want to limit the amount of distance calculation you perform. Data structures such as [octrees](http://en.wikipedia.org/wiki/Octree), or even simpler ones (such as exploiting a [lexicographic sorting](http://en.wikipedia.org/wiki/Lexicographical_order) of the points) work well.
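In R, one quick (if brute-force) way to get exactly these clusters is to note that the connected components of the threshold graph coincide with single-linkage clusters cut at height $d$; this ignores the octree-style speedups mentioned above, but is convenient for moderate numbers of points (`pts` is an n-by-3 coordinate matrix and `d` the threshold, both illustrative names):

```
cl <- cutree(hclust(dist(pts), method = "single"), h = d)
table(cl)   # cluster sizes
```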
null
CC BY-SA 2.5
null
2010-12-30T20:04:33.040
2010-12-30T20:04:33.040
null
null
919
null
5847
1
5864
null
4
2454
So, working in web analytics, it's common to use a chi-square test for A/B tests to test statistical significance. I often use a calculator like this: [http://www.usereffect.com/split-test-calculator](http://www.usereffect.com/split-test-calculator), so if each variation gets 1000 impressions, and A has 100 conversions and B has 200 conversions, I know, with 99.9% confidence (assuming randomized samples), that B is the winning variation. But occasionally I have values that create percentages over 100%, which seem to throw these calculations off. So, if I'm testing two donation pages, and I want to use the $ values for conversions, how would I go about that? My goal is to make a statistically informed decision about which page is performing better, in terms of its ability to raise more money. If I have 1000 impressions of each variation, and one variation raises `$2000` off of 200 contributions, and the other raises `$2500` off of 100 contributions, what calculation can I use to know if the difference is meaningful?
Statistical significance (chi squared) for percentages over 100%?
CC BY-SA 2.5
null
2010-12-30T20:08:04.423
2010-12-31T12:10:01.110
2010-12-30T20:42:34.797
null
142
[ "chi-squared-test", "hypothesis-testing" ]
5848
2
null
5845
5
null
Phillip Good, Permutation, Parametric, and Bootstrap Tests of Hypotheses (3rd Edition). Springer, 2005. This book is mathematically easy, accessible, and covers a wide range of applications.
null
CC BY-SA 2.5
null
2010-12-30T20:13:12.627
2010-12-30T20:13:12.627
null
null
919
null
5849
2
null
5845
7
null
As for a good reference, I would recommend Philip Good, [Resampling Methods: A Practical Guide to Data Analysis](http://www.springer.com/birkhauser/applied+probability+and+statistics/book/978-0-8176-4386-7) (Birkhäuser Boston, 2005, 3rd ed.) as an applied companion textbook. And here is [An Annotated Bibliography for Bootstrap Resampling](http://www-stat.wharton.upenn.edu/%7Estine/mich/bibliography.pdf). [Resampling methods: Concepts, Applications, and Justification](http://www.creative-wisdom.com/pub/mirror/resampling_methods.pdf) also provides a good start. There are many R packages that facilitate the use of resampling techniques: - boot, for bootstrapping -- but see also P. Burns, The Statistical Bootstrap and Other Resampling Methods, for illustrations - coin, for permutation tests (but see the accompanying vignette which includes extensive help) (There are many other packages...)
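As a complement to the references, here is a small base-R sketch (my own illustration, not taken from the packages above) of the two resampling ideas applied to the kind of zero-inflated treatment/control comparison described in the question; the simulated outcome `y` and group labels `g` are made up. The `boot` and `coin` packages wrap the same ideas with many more options.

```r
set.seed(42)
y <- c(rpois(50, 0.5), rpois(50, 1))                 # outcome with many zeros
g <- rep(c("control", "treatment"), each = 50)

obs_diff <- mean(y[g == "treatment"]) - mean(y[g == "control"])

## Bootstrap: resample within each group and recompute the difference in means
boot_diff <- replicate(9999, {
  mean(sample(y[g == "treatment"], replace = TRUE)) -
    mean(sample(y[g == "control"], replace = TRUE))
})
quantile(boot_diff, c(0.025, 0.975))                 # percentile bootstrap CI

## Permutation test: shuffle the group labels under the null of no difference
perm_diff <- replicate(9999, {
  gp <- sample(g)
  mean(y[gp == "treatment"]) - mean(y[gp == "control"])
})
mean(abs(perm_diff) >= abs(obs_diff))                # two-sided permutation p-value
```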
null
CC BY-SA 4.0
null
2010-12-30T20:18:40.273
2023-04-30T05:55:45.193
2023-04-30T05:55:45.193
56940
930
null
5850
1
5851
null
4
601
Given a random variable $X_1$ drawn from a distribution with cdf $F$, and random variables $X_2, \cdots,X_n$ drawn from another distribution with cdf $G$, what is the formula for the probability that $X_1$ is the $k$-th out of the $n$ variables, when they are sorted in order. If it helps, assume that $F$ and $G$ are both defined on [0,1], with positive density everywhere.
Order statistics: probability random variable is k-th out of n when ordered
CC BY-SA 2.5
null
2010-12-30T21:57:40.627
2010-12-31T10:33:41.713
2010-12-31T10:33:41.713
930
2582
[ "order-statistics" ]
5851
2
null
5850
6
null
By computing the probability that $k-1$ of the $G$-distributed $X$'s are less than or equal to $X_1$, conditional on the value of $X_1 = t$, and integrating over all $t$, we obtain $$\int_{-\infty}^{\infty}{\binom{n-1}{k-1} G(t)^{k-1}(1-G(t))^{n-k}dF(t)}.$$ This of course assumes independence of all the $X_i$. Given no assumptions about any relationship between $F$ and $G$, this expression cannot be expected to simplify in general.
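A quick numerical sanity check of this formula (my own sketch, not part of the original answer), taking $F$ to be Uniform(0,1) and $G$ to be Beta(2,1) so that both live on $[0,1]$:

```r
n <- 5; k <- 2
p_formula <- integrate(function(t) {
  choose(n - 1, k - 1) * pbeta(t, 2, 1)^(k - 1) *
    (1 - pbeta(t, 2, 1))^(n - k) * dunif(t)   # dF(t) = f(t) dt with F uniform
}, 0, 1)$value

set.seed(1)
sims <- replicate(1e5, {
  x1 <- runif(1)                              # X1 ~ F
  rest <- rbeta(n - 1, 2, 1)                  # X2, ..., Xn ~ G
  rank(c(x1, rest))[1] == k                   # is X1 the k-th smallest?
})
c(formula = p_formula, simulation = mean(sims))   # the two should agree closely
```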
null
CC BY-SA 2.5
null
2010-12-30T22:57:51.933
2010-12-30T22:57:51.933
null
null
919
null
5852
2
null
5844
3
null
There is a book on "Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach" by [Green and Silverman](http://rads.stackoverflow.com/amzn/click/0412300400) that is probably a good start, but my copy is in my office, so I can't get it until the new year. Essentially, IIRC, the link between kernel methods and smoothing splines is that a regularisation term is used that penalises particular properties of the function implemented by the model, commonly (as the name suggests) the roughness (as measured by the second derivative or curvature). For kernel methods, the regularisation operator depends on the choice of kernel, but it is not dependent on the particular sample of data, which is the connection with splines rather than other non-parametric models. Smoothing splines can be used with more or less any loss function (including the logistic loss), just as kernel methods can use more or less any (convex) loss. Often the squared error or hinge losses are not the best ones, but they get most of the attention, and you can sometimes incorporate useful expert knowledge about the task via the loss, just as in GLMs. Hopefully I can give a better answer once I have a chance to refer to my books! P.S. there is a [paper](http://dx.doi.org/10.1109/5.58326) on regularisation networks by Poggio and Girosi that may well be quite relevant as well.
null
CC BY-SA 2.5
null
2010-12-31T00:02:24.833
2010-12-31T00:02:24.833
null
null
887
null
5853
1
5876
null
9
18341
(This relates to my programming question on Stack Overflow: [Bell Curve Gaussian Algorithm (Python and/or C#)](https://stackoverflow.com/questions/4560554/).) On Answers.com, I found this simple example: - Find the arithmetic mean (average) => Sum of all values in the set, divided by the number of elements in the set - Find the sum of the squares of all values in the set - Divide the output of (2) by the number of elements in the set - Subtract the square of the mean (1) from the output of (3) - Take the square root of the outcome of (4) Example: Set A={1,3,4,5,7} - (1+3+4+5+7)/5 = 4 - (1*1+3*3+4*4+5*5+7*7)=1+9+16+25+49 = 100 - 100 / 5 = 20 - 20 - 4*4=20-16 = 4 - SQRT(4) = 2 (This comes from a post on [wiki.answers.com](http://wiki.answers.com/Q/How_to_find_the_variance_of_a_set_of_numbers#ixzz19fIdujyG).) Now, given all that, how can I fit the above data to a bell curve (such as a credit score) ranging from 200 to 800? Obviously the number 5 in the above set would be 500. But then what is the formula for determining what 3 should be on the same scale? Even though the original set Set A={1,3,4,5,7} is not a bell curve, I want to force it into a bell curve. Imagine these are the scores of 5 people. Next month the scores might change as follows: Set `A2={1,2,4,5,9}` (one guy loses a point, and the top guy gains two more points - the rich get richer and the poor get poorer). Then perhaps a new guy comes into the set: Set `A3={1,2,4,5,8,9}`.
Forcing a set of numbers to a gaussian bell-curve
CC BY-SA 3.0
null
2010-12-31T05:49:57.543
2016-10-15T20:52:19.757
2017-05-23T12:39:26.167
-1
2585
[ "algorithms", "normal-distribution" ]
5854
1
null
null
23
36273
I am working on creating a website which displays census data for user-selected polygons & would like to graphically show the distribution of various parameters (one graph per parameter). The data usually has the following properties: - The sample size tends to be large (say around 10,000 data points) - The range in values tends to be quite large (for example, the minimum population can be less than 100 & the maximum can be something like 500,000) - q1 usually is close to the minimum (say 200) while q2 & q3 will be within 10,000 - It doesn't look anything like a normal distribution I am not a statistician and hence my description might not be exactly clear. I would like to show this distribution on a graph, which will be seen by citizens (the layman, if you like). I would have best liked to use a histogram, but it is not possible due to the large range of values, which makes choosing bins neither easy nor straightforward. From what little I know about statistics, a box plot is what is often used to show this kind of data, but I feel that for a layperson, deciphering a box plot is not easy. What are my options to show this data in an easy-to-understand manner?
What are some alternatives to a boxplot?
CC BY-SA 2.5
null
2010-12-31T06:38:50.327
2022-11-24T16:23:39.150
2010-12-31T09:59:05.407
930
2586
[ "distributions", "data-visualization", "boxplot" ]
5856
2
null
5854
5
null
Here is a [matlab function](http://www.mathworks.com/matlabcentral/fileexchange/23661-distribution-plot) for plotting multiple histograms side-by-side in 2D as an alternative to the box plot; see the picture at the top of that page. And here is [another one](http://www.mathworks.com/matlabcentral/fileexchange/26508-notboxplot-alternative-to-box-plots). The density strip is another alternative to the box plot: it is a shaded monochrome strip whose darkness at a point is proportional to the probability density of the quantity at that point. Here is an [R implementation](https://rdrr.io/cran/denstrip/man/denstrip.html) of the density strip.
null
CC BY-SA 4.0
null
2010-12-31T09:18:51.613
2019-01-12T16:46:05.810
2019-01-12T16:46:05.810
339
339
null
5857
2
null
5854
4
null
How about using quantiles? It is not necessary to present a graph then, only a table. For a village census I think the users will be most interested in how many villages there are of a certain size, so giving for example deciles will tell them information such as: $x\%$ of all the villages are smaller than a certain number. For deciles $x=0,10,20,...,100$. You can also graph this table, with the percentages on the x-axis and the deciles on the y-axis.
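For what it's worth, such a table is a one-liner in R (illustrative data only; `pop` stands in for the village populations):

```r
set.seed(1)
pop <- round(rlnorm(10000, meanlog = 7, sdlog = 1.5))   # skewed, like village sizes
quantile(pop, probs = seq(0, 1, by = 0.1))              # deciles 0%, 10%, ..., 100%
```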
null
CC BY-SA 2.5
null
2010-12-31T09:37:33.170
2010-12-31T09:37:33.170
null
null
2116
null
5858
2
null
4949
33
null
There's no closed form, but you could do it numerically. As a concrete example, consider two Gaussians with following parameters $$\mu_1=\left(\begin{matrix} -1\\\\ -1 \end{matrix}\right), \mu_2=\left(\begin{matrix} 1\\\\ 1 \end{matrix}\right)$$ $$\Sigma_1=\left(\begin{matrix} 2&1/2\\\\ 1/2&2 \end{matrix}\right),\ \Sigma_2=\left(\begin{matrix} 1&0\\\\ 0&1 \end{matrix}\right)$$ Bayes optimal classifier boundary will correspond to the point where two densities are equal ![](https://i.stack.imgur.com/lPhju.png) Since your classifier will pick the most likely class at every point, you need to integrate over the density that is not the highest one for each point. For the problem above, it corresponds to volumes of following regions ![](https://i.stack.imgur.com/Z4ftE.png) ![](https://i.stack.imgur.com/ydUAQ.png) You can integrate two pieces separately using some numerical integration package. For the problem above I get `0.253579` using following Mathematica code ``` dens1[x_, y_] = PDF[MultinormalDistribution[{-1, -1}, {{2, 1/2}, {1/2, 2}}], {x, y}]; dens2[x_, y_] = PDF[MultinormalDistribution[{1, 1}, {{1, 0}, {0, 1}}], {x, y}]; piece1 = NIntegrate[dens2[x, y] Boole[dens1[x, y] > dens2[x, y]], {x, -Infinity, Infinity}, {y, -Infinity, Infinity}]; piece2 = NIntegrate[dens1[x, y] Boole[dens2[x, y] > dens1[x, y]], {x, -Infinity, Infinity}, {y, -Infinity, Infinity}]; piece1 + piece2 ```
null
CC BY-SA 3.0
null
2010-12-31T09:41:57.923
2013-06-25T07:10:08.940
2013-06-25T07:10:08.940
2148
511
null
5859
1
null
null
21
11997
I am confused about how to decide whether to treat time as continuous or discrete in survival analysis. Specifically, I want to use survival analysis to identify child- and household-level variables that have the largest discrepancy in their impact on boys' versus girls' survival (up to age 5). I have a dataset of child ages (in months) along with an indicator for whether the child is alive, the age at death (in months), and other child- and household-level variables. Since time is recorded in months and all children are under age 5, there are many tied survival times (often at half-year intervals: 0mos, 6mos, 12mos, etc). Based on what I have read about survival analysis, having many tied survival times makes me think I should be treating time as discrete. However, I have read several other studies where survival time is in, for example, person-years (and so surely there are tied survival times) and continuous-time methods like Cox proportional hazards are used. What are the criteria I should use to decide whether to treat time as continuous or discrete? For my data and question, using some continuous-time model (Cox, Weibull, etc) makes intuitive sense to me, but the discrete nature of my data and the amount of tied survival times seem to suggest otherwise.
Survival analysis: continuous vs discrete time
CC BY-SA 2.5
null
2010-12-31T10:10:49.467
2019-06-29T08:14:36.320
2019-06-29T08:14:36.320
3277
2587
[ "survival", "ties" ]
5860
2
null
5854
5
null
I rather like [violin plots](http://en.wikipedia.org/wiki/Violin_plot) myself, as they give an idea of the shape of the distribution. However, if the large range of values is the issue, then it may be best to plot the log of the data rather than the raw values, which would also make choosing the bin sizes for histograms etc. easier. As the display is for laymen, don't mention logs and mark the axis 10, 100, 1000, 10000, 100000, 1000000 etc.
null
CC BY-SA 2.5
null
2010-12-31T10:12:59.793
2010-12-31T11:57:20.967
2010-12-31T11:57:20.967
930
887
null
5861
2
null
5854
14
null
A boxplot isn't that complicated. After all, you just need to compute the three [quartiles](http://en.wikipedia.org/wiki/Quartile), and the min and max which define the range; a subtlety arises when we want to draw the whiskers, and various methods have been proposed. For instance, in a [Tukey boxplot](http://mathworld.wolfram.com/Box-and-WhiskerPlot.html) values outside 1.5 times the inter-quartile range from the first or third quartile would be considered as outliers and displayed as simple points. See also [Methods for Presenting Statistical Information: The Box Plot](https://www.sci.utah.edu/%7Ekpotter/publications/potter-2006-MPSI.pdf), by Kristin Potter, for a good overview. The [R](http://cran.r-project.org/) software implements a slightly different rule but the source code is available if you want to study it (see the `boxplot()` and `boxplot.stats()` functions). However, it is not very useful when the interest is in identifying outliers from a very skewed distribution (but see [An adjusted boxplot for skewed distributions](https://www.sciencedirect.com/science/article/abs/pii/S0167947307004434), by Hubert and Vandervieren, CSDA 2008 52(12)). As far as online visualization is concerned, I would suggest taking a look at [Protovis](https://web.archive.org/web/20100401031355/http://vis.stanford.edu/protovis/) which is a plugin-free js toolbox for interactive web displays. The [examples](https://web.archive.org/web/20110717074112/http://mbostock.github.com/protovis/ex) page has many nice illustrations of what can be achieved with it, in very few lines.
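For reference, a tiny R illustration of the rule mentioned above (simulated skewed data; the `boxplot.stats()` call shows exactly which points the default 1.5 times IQR rule would draw as outliers):

```r
set.seed(1)
x <- c(rlnorm(200), 50)   # skewed data plus one extreme value
b <- boxplot.stats(x)
b$stats                   # lower whisker, Q1, median, Q3, upper whisker
b$out                     # points that boxplot() would draw individually as outliers
```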
null
CC BY-SA 4.0
null
2010-12-31T10:17:27.987
2022-11-24T16:23:39.150
2022-11-24T16:23:39.150
362671
930
null
5862
2
null
5854
8
null
I'd suggest you persevere with histograms. They're much more widely understood than the alternatives. Use a log scale to cope with the large range of values. Here's an example I cooked up in a couple of minutes in Stata: ![Histogram with log scale on the value axis](https://i.stack.imgur.com/kCbit.png) I admit that the x-axis numerical labels weren't entirely straightforward or automatic, but as you're building a website I'm sure your programming skills are up to the challenge!
null
CC BY-SA 2.5
null
2010-12-31T11:13:59.307
2010-12-31T11:13:59.307
2020-06-11T14:32:37.003
-1
449
null
5863
2
null
5854
2
null
If you are targeting the general population (i.e. a non-statistically-savvy audience) you should focus on eye-candy rather than statistical accuracy. Forget about boxplots, let alone violin plots (I personally find them very difficult to read)! If you asked the average person in the street what a quantile is, you would mostly get some wide-eyed silence... You should use barplots, bubble charts, maybe some pie charts (brrrr). Forget about error bars (although I would put the SD in text somewhere where applicable). Use colors, shapes, thick lines, 3D. You should make each chart unique and immediately easy to understand, even without having to read all the legends/axes etc. Make smart use of maps by coloring them. [Information is beautiful](http://www.informationisbeautiful.net/) is a very good resource to get ideas. Look at this chart for instance: [Caffeine and Calories](http://www.informationisbeautiful.net/visualizations/caffeine-and-calories/): anyone can understand it, and it's pleasing to the eye. And, of course, have a look at Edward Tufte's work.
null
CC BY-SA 2.5
null
2010-12-31T11:14:35.397
2010-12-31T11:14:35.397
null
null
582
null
5864
2
null
5847
1
null
I assume you have data on the amount of each donation, not just the two totals for each variation. You don't want to use [Pearson's chi-square test](http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test); that's only appropriate for counts or proportions, not for amounts. You want an independent (unpaired) two-sample test for equality of location. The classic such '[parametric](http://en.wikipedia.org/wiki/Parametric_statistics)' test is [Student's t-test](http://en.wikipedia.org/wiki/T-test#Independent_two-sample_t-test). As it's quite plausible that the variances differ as well as the means, I'd suggest you use [Welch's t-test](http://en.wikipedia.org/wiki/Welch%27s_t-test), which is a slight variation that allows for unequal variances. You might also consider '[non-parametric](http://en.wikipedia.org/wiki/Non-parametric_statistics)' ('distribution-free') tests, of which the obvious one is the [Mann–Whitney U test](http://en.wikipedia.org/wiki/Mann-Whitney_U_test), but as your outcome of interest is clearly the difference in mean donation and your sample size is reasonably large, you're almost certainly better off sticking with parametric tests such as the above t-tests that use the mean as the location parameter and so directly address the null hypothesis of equality of means of the two samples. By the way, don't forget to include the people who saw either variation of the donation page but didn't donate anything, as donations of value zero.
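In R the three tests mentioned here are one-liners. The data below are made up purely for illustration (they roughly mimic the question's totals: 200 donors averaging $10 versus 100 donors averaging $25, with zeros for non-donors); `wilcox.test` will warn about the ties created by the zeros.

```r
set.seed(1)
a <- c(rep(0, 800), rexp(200, rate = 1/10))     # variation A: 200 smaller donations
b <- c(rep(0, 900), rexp(100, rate = 1/25))     # variation B: 100 larger donations

t.test(a, b)                     # Welch's t-test (unequal variances is the default)
t.test(a, b, var.equal = TRUE)   # classic Student's t-test
wilcox.test(a, b)                # Mann-Whitney U test (non-parametric alternative)
```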
null
CC BY-SA 2.5
null
2010-12-31T12:10:01.110
2010-12-31T12:10:01.110
null
null
449
null
5865
2
null
4551
4
null
Using Analysis of Covariance (ANCOVA) to try to "control for" or "regress out" the influence of a covariate that is known to be correlated with, or affect the influence of, other predictor variables. More discussion at this question: [Checking factor/covariate independence in ANCOVA](https://stats.stackexchange.com/questions/5808/checking-factor-covariate-independence-in-ancova)
null
CC BY-SA 2.5
null
2010-12-31T13:17:04.073
2010-12-31T13:17:04.073
2017-04-13T12:44:46.433
-1
364
null
5866
2
null
5859
4
null
I suspect that if you use continuous-time models you will want to use interval censoring, reflecting the fact that you don't know the exact time of failure, just an interval in which the failure occurred. If you fit parametric regression models with interval censoring using maximum likelihood, the tied survival times are not an issue, IIRC.
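A hedged sketch of what such a fit might look like with the R `survival` package, using the `"interval2"` coding in which an `NA` upper bound means right-censored and an `NA` lower bound means left-censored; the data are simulated stand-ins for the child-mortality setting of the question, not real values.

```r
library(survival)

set.seed(7)
n <- 200
girl   <- rbinom(n, 1, 0.5)
t_true <- rweibull(n, shape = 1.2, scale = 40 * exp(0.3 * girl))  # latent age at death (months)

band <- floor(t_true / 6) * 6                               # we only learn the 6-month band
lo <- ifelse(t_true > 60, 60, ifelse(band == 0, NA, band))  # NA lower bound = left-censored
hi <- ifelse(t_true > 60, NA, band + 6)                     # NA upper bound = right-censored at 60

fit <- survreg(Surv(lo, hi, type = "interval2") ~ girl, dist = "weibull")
summary(fit)   # tied, interval-censored times are handled directly by the likelihood
```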
null
CC BY-SA 2.5
null
2010-12-31T14:30:55.177
2010-12-31T14:30:55.177
null
null
887
null
5867
2
null
2234
13
null
There are around 100 classification and regression models which are trainable via the [caret package](https://cran.r-project.org/web/packages/caret/). Any of the classification models will be an option for you (as opposed to regression models, which require a continuous response). For example to train a random forest: ``` library(caret) train(response~., data, method="rf") ``` See the caret model training vignette which comes with the distribution for a full list of the models available. It is split into dual-use and classification models (both of which you can use) and regression-only (which you can't). caret will automatically train the parameters for your chosen model for you.
null
CC BY-SA 3.0
null
2010-12-31T14:56:21.053
2016-05-23T06:14:51.047
2016-05-23T06:14:51.047
75266
2589
null
5868
1
5869
null
3
177
With real-valued $X_1, X_2, \ldots$, define $Max_n := \max(X_1,\ldots,X_n)$ (the record value or high-water mark), $NextMax_n :=$ the next greater high-water mark, i.e. the first $Max_{n+m} > Max_n$, and $Up_n := NextMax_n - Max_n$. Can anyone suggest an introduction to estimating $NextMax$ and $Up$, for - $X$ i.i.d. from a known distribution, - nonparametric, estimate-as-you-go? Either a book chapter or an online stats course would do nicely.
Basics of extreme values / high-water marks?
CC BY-SA 2.5
null
2010-12-31T15:07:55.817
2011-01-01T15:44:40.887
2011-01-01T15:44:40.887
557
557
[ "references", "extreme-value" ]
5869
2
null
5868
2
null
Stuart Coles' [book](http://rads.stackoverflow.com/amzn/click/1852334592) on extreme value statistics would probably be a good place to start.
null
CC BY-SA 2.5
null
2010-12-31T15:19:49.277
2010-12-31T15:19:49.277
null
null
887
null
5870
1
5875
null
4
293
Lets say we have three independent variables: $$\eqalign{ X_{1}\sim &B(n,\frac{1}{2}+\beta) \cr X_{2}\sim &B(n,\frac{1}{2}) \cr X_{3}\sim &B(n,\frac{1}{2}-\beta). }$$ I'm looking for the distribution of $$\max(X_{1},X_{2},X_{3})-\min(X_{1},X_{2},X_{3}).$$ For sufficiently large $n$ we have $$\Pr(\max(X_{1},X_{2},X_{3})=k)\approx \Pr(X_{1}=k)$$ and similarly $$\Pr(\min(X_{1},X_{2},X_{3})=k)\approx \Pr(X_{3}=k).$$ So using those two we have: $$\eqalign{ &\Pr(\max(X_{1},X_{2},X_{3})-\min(X_{1},X_{2},X_{3})=k) \cr &\approx \Pr(X_{1}-X_{3}=k) \cr &=(\frac{1}{2}+\beta)^k (\frac{1}{2}-\beta)^{2n+k}\sum\limits_{i=k}^{n}\binom{n}{i}\binom{n}{i-k} \left(\frac{1+2\beta}{1-2\beta} \right)^{2i} }$$ but this doesn't seem to be easy to approximate... I was thinking about another approach. Binomial variables can be approximated by normal distributions and the sum of normally distributed variables is well known, so it would be easy to calculate. The question: Are there any inequalities that could give a better approximation? I'm aware only of the Chernoff bound but it is not very helpful here.
The range of non-identically distributed binomial variables
CC BY-SA 2.5
null
2010-12-31T15:25:01.957
2010-12-31T21:45:34.447
2010-12-31T21:45:34.447
919
1643
[ "distributions", "random-variable", "approximation" ]
5871
2
null
5834
3
null
Your claim is false: there exist transient Markov chains such that $f_{ij}=1$ for some (but not all) states $i$ and $j$. For example, assume that the state space is the union of the discrete halfline $\mathbb{Z}_+$ and of a discrete circle $\mathbb{Z}/N\mathbb{Z}$ with $N\ge3$, the halfline and the circle meeting at $0$. Write $c(k)$ for the $k$th state on the circle, counted clockwise and starting from $0$, thus $c(0)=c(N)=0$ but $c(k)$ for $1\le k\le N-1$ is not on the halfline $\mathbb{Z}_+$. The transitions are as follows. If one is at $i$ in $\mathbb{Z}_+$ with $i\ne0$, one moves to $i+1$ or to $i-1$ with probability $p$ or $1-p$, respectively. If one is at $0$, one moves to $1$ or to $c(1)$, both with positive probability. If one is at $c(k)$ with $1\le k\le N-1$, one moves to $c(k+1)$ with probability $1$. In words, while on the halfline, one performs a biased random walk, and while on the circle, one moves clockwise and deterministically until one is back at $0$. For every $p>1/2$, this Markov chain is transient. Nevertheless, for every $k$ and $\ell$ such that $1\le k<\ell\le N-1$, starting from $c(k)$, one hits $c(\ell)$ with full probability, hence $f_{c(k)c(\ell)}=1$.
null
CC BY-SA 2.5
null
2010-12-31T15:42:51.813
2010-12-31T16:46:13.443
2010-12-31T16:46:13.443
2592
2592
null
5872
2
null
5870
3
null
Call $R_n=\max(X_1,X_2,X_3)-\min(X_1,X_2,X_3)$. The usual LLN shows that $R_n/n$ converges to $2\beta$ almost surely and in $L^1$. (The distribution of $R_n$ for a given fixed $n$ is an altogether different story.)
null
CC BY-SA 2.5
null
2010-12-31T15:58:09.770
2010-12-31T15:58:09.770
null
null
2592
null
5873
1
null
null
6
1147
I would like to test a hypothesis that a database contains data that are correct in 95% or more of the samples. I found this [Hypothesis test for a proportion](http://stattrek.com/Lesson5/Proportion.aspx?Tutorial=AP) experiment design that looked appropriate. I will randomly sample one-tenth of the population and mark each datum as correct or incorrect. Here is a summary of the design: - State the hypotheses: (Null Hypothesis: P >= 0.95, Alternative: P < 0.95) - Formulate an analysis plan (Significance level is 0.05) - Analyze sample data (calculate the σ and compute the z-score test statistic (z)) - Interpret results (Reject the null hypothesis iff the P-value is less than the significance level (0.05).) I have a handful of questions: - Is this an appropriate test to use if there is no magnitude of error (only that the datum is correct or not)? - The StatTrek page says to stop after 10 successes and 10 failures, but to sample no more than 10% of the population. Are 10, 10, and 10 just helpful rules of thumb, or is there some other guiding principle?
Experiment design for proportion
CC BY-SA 2.5
null
2010-12-31T16:02:54.980
2011-01-07T20:05:42.093
null
null
2591
[ "experiment-design", "proportion" ]
5874
1
5878
null
3
689
The greatest gain of the statistics classes in both school and university seems to be that I now have an inkling of which QA site to use for this question. :) I'm a programmer and I'm making a questionnaire with a financial background. There will be many (probably over 100) questions, which will be presented to the user one-by-one, like in a [wizard](http://en.wikipedia.org/wiki/Wizard_(software)). Since answering them all will be quite time-consuming (likely over an hour), it would be best if the user interface could show the estimated time remaining - both for the current question and for all remaining questions together. As there is no way of determining beforehand how long a question will take, my idea was to gather statistical data about it (especially during the alpha/beta stages). The naive approach would be to just average the times taken for each question, but (thanks again to the above-mentioned statistics classes) I'm aware that this isn't really the best way to do it. There are all kinds of normal distribution curves, error calculations, approximation methods, and other arcane mathematical artifacts that I could not even begin to dream about. Some more information: - Questions are of very varying lengths. There are simple yes/no questions, and there are questions that will make you scratch your head. There are questions that ask you to enter your contact details, and there are questions that require you to meticulously fill in a 30-row table. Some questions will ask you to calculate stuff. Some questions will require you to fetch some paperwork from your employer or something. Which brings up another point: - The whole process can be suspended/saved and resumed at a later time. This shouldn't happen very often, but given the length of the process, and that some people don't pay attention to the pre-requirements, it can happen. - Some questions may not be applicable based on answers to previous questions. For example, there might be a row of questions which are only applicable to businesses and not private persons. - Some questions are essentially duplicated based on previous questions. For example, if you entered that in the past year you worked for 4 employers, you will need to enter the same data for each of them. - I can detect: when the user starts/finishes a question (hence, I can calculate the duration); when the user goes idle (this is rather imprecise - I can only detect when there is no keyboard/mouse activity, which can be because the user has gone away, but can also mean that he is digging through the papers looking for numbers, or, since it will be a webpage, that he simply switched to another program or another tab in the browser); which questions have been answered, which definitely will need to be answered, which definitely won't need to be answered, and which might need to be answered. For the last type of questions, I'm not sure if it might be possible to calculate a meaningful probability based on previous answers. Perhaps. So... any ideas on how to tackle this estimate? Remember - I'm going for maximum accuracy (or is it precision? better make it both), and would rather have better estimates than lower resource usage.
How to best estimate the time remaining for a variable-length questionnaire?
CC BY-SA 2.5
null
2010-12-31T16:15:37.393
2017-06-12T16:58:22.613
2017-06-12T16:58:22.613
11887
2590
[ "estimation", "winsorizing" ]
5875
2
null
5870
4
null
For $n \le 20$ just compute the distribution by brute force. For example, with $n=3$ the pdf is $$\begin{array}{} k=0: &-\frac{7}{64} \left(-1+4 \beta ^2\right)^3 \cr k=1: &\frac{39}{256} \left(1-4 \beta ^2\right)^2 \left(3+4 \beta ^2\right) \cr k=2: &-\frac{3}{128} \left(-15-92 \beta ^2+560 \beta ^4+192 \beta ^6\right) \cr k=3: &\frac{1}{256} \left(21+564 \beta ^2+1392 \beta ^4+448 \beta ^6\right) \end{array}$$ This was produced by the Mathematica code ``` Clear[f, i, q]; f[n_, p_, k_] := Binomial[n, k] p^k (1 - p)^(n - k); i[k1_, k2_, k3_, k_] := If[Max[{k1, k2, k3}] - Min[{k1, k2, k3}] == k, 1, 0]; q[n_, k_, p_] := Sum[f[n, 1/2 + p, k1] f[n, 1/2, k2] f[n, 1/2 - p, k3] i[k1, k2, k3, k], {k1, 0, n}, {k2, Max[0, k1 - k], Min[n, k1 + k]}, {k3, Max[0, k1 - k, k2 - k], Min[n, k1 + k, k2 + k]}]; With[{n=3}, Table[q[n, k, \[Beta]], {k,0,n}]] ``` Calculations of the entire distribution for $n=20$ take four seconds. Here are plots for $\beta = 0, 1/6, 1/3$. (As $\beta$ increases, the distribution shifts to the right.) ![alt text](https://i.stack.imgur.com/W9ewt.png)
null
CC BY-SA 2.5
null
2010-12-31T17:11:08.333
2010-12-31T17:11:08.333
null
null
919
null
5876
2
null
5853
13
null
A scaled range, like 200 to 800 (for SATs, e.g.), is just a change of units of measurement. (It works exactly like changing temperatures in Fahrenheit to those in Celsius.) The middle value of 500 is intended to correspond to the average of the data. The range is intended to correspond to about 99.7% of the data when the data do follow a Normal distribution ("Bell curve"). It is guaranteed to include 8/9 of the data ([Chebyshev's Inequality](http://en.wikipedia.org/wiki/Chebyshev%27s_inequality)). In this case, the formula 1-5 computes the standard deviation of the data. This is simply a new unit of measurement for the original data. It needs to correspond to 100 units in the new scale. Therefore, to convert an original value to the scaled value, - Subtract the average. - Divide by the standard deviation. - Multiply by 100. - Add 500. If the result lies beyond the range $[200, 800]$ you can either use it as-is or "clamp" it to the range by rounding up to 200, down to 800. In the example, using data $\{1,3,4,5,7\}$, the average is $4$ and the SD is $2$. Therefore, upon rescaling, $1$ becomes $(1 - 4)/2 * 100 + 500 = 350$. The entire rescaled dataset, computed similarly, is $\{350, 450, 500, 550, 650\}$. When the original data are distributed in a distinctly non-normal way, you need another approach. You no longer compute an average or SD. Instead, put all the scores in order, from 1st (smallest) up to $n$th (largest). These are their ranks. Convert any rank $i$ into its percentage $(i-1/2)/n$. (In the example, $n=5$ and data are already in rank order $i=1,2,3,4,5$. Therefore their percentages are $1/10, 3/10, 5/10, 7/10, 9/10$, often written equivalently as $10\%, 30\%$, etc.) Corresponding to any percentage (between $0$ and $1$, necessarily) is a normal quantile. It is computed with the [normal quantile function](http://en.wikipedia.org/wiki/Quantile_function), which is closely related to the [error function](http://en.wikipedia.org/wiki/Normal_distribution#Quantile_function). (Simple [numerical approximations](http://en.wikipedia.org/wiki/Normal_distribution#Numerical_approximations_for_the_normal_CDF) are straightforward to code.) Its values, which typically will be between -3 and 3, have to be rescaled (just as before) to the range $[200, 800]$. Do this by first multiplying the normal quantile by 100 and then adding 500. The normal quantile function is available in many computing platforms, including spreadsheets (Excel's normsinv, for instance). For example, the normal quantiles (or "normal scores") for the data $\{1,3,4,5,7\}$ are $\{372, 448, 500, 552, 628\}$. This "[normal scoring](http://en.wikipedia.org/wiki/Normal_score)" approach will always give scores between 200 and 800 when you have 370 or fewer values. When you have 1111 or fewer values, all but the highest and lowest will have scores between 200 and 800.
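Both recipes are short in R; here is a sketch (my own code, using the example data from the question):

```r
x <- c(1, 3, 4, 5, 7)

## 1) Linear ("z-score") scaling to the 200-800 scale
m <- mean(x)
s <- sqrt(mean((x - m)^2))          # population SD, matching the recipe in the question
linear_score <- (x - m) / s * 100 + 500
linear_score                        # 350 450 500 550 650

## 2) Normal scoring for distinctly non-normal data
p <- (rank(x) - 0.5) / length(x)    # percentages (i - 1/2)/n
normal_score <- qnorm(p) * 100 + 500
round(normal_score)                 # 372 448 500 552 628

## Optionally clamp to [200, 800]
pmin(pmax(normal_score, 200), 800)
```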
null
CC BY-SA 2.5
null
2010-12-31T17:47:11.673
2011-01-09T16:55:51.487
2011-01-09T16:55:51.487
919
919
null
5877
2
null
5853
4
null
You could try this approach - normalise your data set to range between the values -1 and +1 thus: $$ \left(\frac{\text{individual_value} - \text{min_of_all_values}}{\text{max_of_all_values} - \text{min_of-all_values}}-0.5\right)*2. $$ This will convert every value in your data set to a value between -1 and +1, with the actual maximum and minimum values being set to +1 and -1 respectively, and then reset these +1 and -1 values to be +0.9999 and -0.9999 (necessary for following calculations.) Then apply the [Fisher Transformation](http://en.wikipedia.org/wiki/Fisher_transformation) to each of the above normalised values to "force it" to approximately conform to a normal distribution, and then "un-normalise" each of these Fisher Transform values to range in value between 200 and 800 thus: $$ \frac{\text{Fish_value} - \text{min_all_Fish_values}}{\text{max_all_Fish_values} - \text{min_all_Fish_values}}*600 + 200 $$ The maximum Fisher Transform value will be set to exactly 800, the minimum Fisher Transform value will be set to exactly 200, and all the other values will lie between these two extremes, according to an approximate normal distribution. Referencing your original question on SO and the issue of scalability, the advantage of this approach is that provided any new data point is not itself a new maximum or minimum for the data set as a whole you can apply the above calculations to the new data point to get its score between 200 and 800 without affecting any of the existing scores of the original data set. If a new data point is a new maximum or minimum you will have to recalculate the scores for the whole data set with this new "normalising" maximum or minimum value.
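A short R sketch of this recipe, using the question's example data (my own code, shown only to make the arithmetic concrete):

```r
x <- c(1, 3, 4, 5, 7)

z <- ((x - min(x)) / (max(x) - min(x)) - 0.5) * 2   # normalise to [-1, 1]
z <- pmin(pmax(z, -0.9999), 0.9999)                 # pull the endpoints in

f <- atanh(z)                                       # the Fisher transformation
score <- (f - min(f)) / (max(f) - min(f)) * 600 + 200
round(score)                                        # values between 200 and 800
```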
null
CC BY-SA 3.0
null
2010-12-31T18:58:15.357
2012-12-13T15:17:01.830
2012-12-13T15:17:01.830
7290
226
null
5878
2
null
5874
3
null
I do not see why averages should not work. You might want to use [winsorized](http://en.wikipedia.org/wiki/Winsorized_mean) or [trimmed](http://en.wikipedia.org/wiki/Trimmed_mean) averages to guard against outliers, which are likely, due to the inability to tell exactly whether the user is idle or not. Then, for the estimated time remaining, show the sum of the averages for each question. You could build more complicated and precise methods if you collected data on the completion times together with user characteristics. For that, though, you will probably need another questionnaire, so unless you are expecting a lot of feedback in the alpha and beta phases, I suggest using the simple averages. Sometimes the simplest methods work best.
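For illustration, both robust averages are one-liners in base R (the timing values below are made up; the 602-second observation plays the role of a user who went idle):

```r
times <- c(31, 35, 41, 38, 33, 29, 44, 36, 40, 602)   # seconds per attempt at one question

mean(times)                    # ordinary mean, dragged upwards by the idle user
mean(times, trim = 0.1)        # 10% trimmed mean (drops one value from each end here)
q <- quantile(times, c(0.10, 0.90))
mean(pmin(pmax(times, q[1]), q[2]))   # 10% winsorized mean: extremes pulled to the deciles
```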
null
CC BY-SA 2.5
null
2010-12-31T19:06:34.287
2010-12-31T20:34:31.280
2010-12-31T20:34:31.280
2116
2116
null
5879
2
null
5874
3
null
As @mpiktas points out, simple is good. But I can't help thinking there will be substantial variation among users. Roughly, we can conceive of the time to complete each question as depending on the question and on the user, so the trick is to tease out the question-specific timing factors from the experience of previous users and to estimate the user-specific timing factor from the user's experience with the questions they have already tackled. This opens up the possibilities greatly. A good answer would rely on exploratory analysis of preliminary data. In general, though, a combination of (1) determining a suitable way to express question completion times (logarithms? square roots?) and (2) robust analysis of variance techniques (such as [median polish](http://www.itl.nist.gov/div898/software/dataplot/refman1/ch3/median_p.pdf)) or [IRLS](http://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares) regression is likely to do far better than ignoring the user-specific factors ("simple averaging"). Once you have estimated question-specific timing factors, which could be as simple as computing median or mean times per question, you could easily estimate the timing factor for a new user by comparing their times to typical times for the questions they have already attempted. For instance (this is truly simplistic, but gives the flavor of the idea) if they take twice as long to complete the first five questions as the sum of the typical completion times for those questions, you could estimate that this user will take twice as long as the average to complete the next 95 questions. You can then add the average completion times for those 95 questions and double the sum.
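As a toy illustration of the median-polish idea (my own sketch; the timing matrix is simulated and, unlike real data, complete):

```r
set.seed(1)
q_effect <- c(10, 30, 60, 20)                   # typical seconds per question
u_effect <- c(1, 1.5, 0.7)                      # user speed multipliers
times <- outer(u_effect, q_effect) * exp(rnorm(12, sd = 0.1))   # 3 users x 4 questions

fit <- medpolish(log(times))                    # additive decomposition on the log scale
fit$row    # user-specific (log) timing factors
fit$col    # question-specific (log) timing factors
```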
null
CC BY-SA 2.5
null
2010-12-31T21:35:13.707
2010-12-31T21:35:13.707
null
null
919
null
5880
1
5881
null
7
2068
The probability that a number between 1 and $x$ is prime is $\frac{1}{\ln{x}}$ as per the prime number theorem, and the total number of prime numbers between $1$ and $x$ will be approximately $\frac{x}{\ln{x}}$. But if we select $n$ random (32-bit) numbers, what is the probability that $p$ of them are prime? Or, simply put: what is the probability of picking $p$ primes from $n$ random (32-bit) numbers? TIA.
What is the probability of picking p primes from n random numbers?
CC BY-SA 2.5
null
2011-01-01T09:48:45.877
2011-01-27T23:45:44.303
2011-01-01T18:07:23.237
null
2597
[ "probability" ]
5881
2
null
5880
16
null
There are 203,280,221 primes less than $2^{32}$. ([Source](http://www.ieeta.pt/~tos/primes.html)). So the probability that a random 32-bit number is prime is $203,280,221 / 2^{32} \approx 0.04733$. Assuming you want selection with replacement, i.e. the same number can be picked more than once, the probability of picking $p$ primes from $n$ 32-bit random numbers is, from the [probability mass function of the binomial distribution](http://en.wikipedia.org/wiki/Binomial_distribution#Probability_mass_function), $$\frac {n!}{p!(n-p)!} 0.04733^p (1-0.04733)^{n-p} . $$
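In R, for example (illustrative values of $n$ and $p$):

```r
prime_prob <- 203280221 / 2^32           # ~ 0.04733
n <- 100; p <- 5
dbinom(p, size = n, prob = prime_prob)   # P(exactly 5 primes among 100 random 32-bit numbers)
pbinom(p, size = n, prob = prime_prob)   # P(at most 5 primes)
```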
null
CC BY-SA 2.5
null
2011-01-01T10:20:34.620
2011-01-27T23:45:44.303
2011-01-27T23:45:44.303
919
449
null
5882
2
null
1980
4
null
Robert Gentleman wrote a paper called ["Reproducible Research: A Bioinformatics Case Study"](http://www.bepress.com/sagmb/vol4/iss1/art2/) It implements a short set of analyses as an R Package and uses Sweave. It also discusses the use of Sweave more generally. See the ["Related Files"](http://www.bepress.com/sagmb/vol4/iss1/art2/) section of the article page for an archive file of all files and folders used. ### Reference: - Gentleman, Robert (2005) "Reproducible Research: A Bioinformatics Case Study," Statistical Applications in Genetics and Molecular Biology: Vol. 4 : Iss. 1, Article 2. - DOI: 10.2202/1544-6115.1034 - Available at: http://www.bepress.com/sagmb/vol4/iss1/art2
null
CC BY-SA 2.5
null
2011-01-01T10:35:17.683
2011-01-01T10:35:17.683
null
null
183
null
5883
2
null
5831
7
null
This isn't exactly what you're asking for, but I think you'd like David Freedman's work. He calls BS on misapplication of statistical tests, etc... See here: [http://www.stat.berkeley.edu/~freedman/](http://www.stat.berkeley.edu/~freedman/). One of my favorites is “What is the probability of an earthquake?”.
null
CC BY-SA 2.5
null
2011-01-01T22:49:18.977
2011-01-01T22:49:18.977
null
null
2489
null
5885
1
null
null
36
15971
Any suggestions for a good source to learn MCMC methods?
Good sources for learning Markov chain Monte Carlo (MCMC)
CC BY-SA 3.0
null
2011-01-02T03:16:43.820
2021-05-18T15:56:24.433
2015-01-16T12:17:58.473
22228
2601
[ "references", "markov-chain-montecarlo" ]
5886
2
null
5885
6
null
[Gilks W.R., Richardson S., Spiegelhalter D.J. Markov Chain Monte Carlo in Practice. Chapman & Hall/CRC, 1996.](http://books.google.co.uk/books?id=TRXrMWY_i2IC) A relative oldie now, but still a goodie.
null
CC BY-SA 2.5
null
2011-01-02T07:36:06.087
2011-01-02T07:36:06.087
null
null
449
null
5887
1
5888
null
9
453
We have a set of biological samples that was pretty expensive to obtain. We put these samples through a series of tests to generate data that are used for building a predictive model. For this purpose we have divided the samples into training (70%) and testing (30%) sets. We have successfully created a model and applied it to the testing set, only to discover that the performance was "less than optimal". The experimentalists now want to improve the biological tests in order to create a better model. Provided that we cannot obtain new samples, would you suggest re-shuffling the samples to create new training and validation sets, or sticking with the original division? (We don't have any indication that the division was a problematic one.)
Should I re-shuffle my data?
CC BY-SA 2.5
null
2011-01-02T08:31:58.303
2012-01-22T20:51:15.340
2012-01-22T20:51:15.340
930
2604
[ "classification", "modeling", "experiment-design", "cross-validation", "bootstrap" ]
5888
2
null
5887
12
null
As you already use a holdout sample, I would say you should keep it and build your new models on the same training sample, so that all models will consider the same relationships between features. In addition, if you perform feature selection, the held-out samples must be excluded before any of these filtering stages; that is, feature selection must be included in the cross-validation loop. Of note, there are more powerful methods than a 0.67/0.33 split for model selection, namely k-fold cross-validation or leave-one-out. See e.g. [The Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) (§7.10, pp. 241-248), [www.modelselection.org](http://www.modelselection.org/) or [A survey of cross-validation procedures for model selection](http://www.i-journals.org/ss/viewarticle.php?id=54&layout=abstract) by Arlot and Celisse (more advanced mathematical background required).
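For concreteness, here is a small sketch of 10-fold cross-validation with feature selection kept inside the loop (my own illustration on simulated data; substitute your actual model and selection rule):

```r
set.seed(1)
n <- 100; p <- 50
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - X[, 2] + rnorm(n)

k <- 10
fold <- sample(rep(1:k, length.out = n))   # random fold assignment
err <- numeric(k)
for (i in 1:k) {
  tr <- fold != i
  ## feature selection using the training rows only
  sel <- order(abs(cor(X[tr, ], y[tr])), decreasing = TRUE)[1:5]
  Xsel <- X[, sel, drop = FALSE]
  colnames(Xsel) <- paste0("V", seq_along(sel))
  fit  <- lm(y ~ ., data = data.frame(y = y[tr], Xsel[tr, , drop = FALSE]))
  pred <- predict(fit, newdata = data.frame(Xsel[!tr, , drop = FALSE]))
  err[i] <- mean((y[!tr] - pred)^2)
}
mean(err)   # cross-validated estimate of prediction error
```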
null
CC BY-SA 2.5
null
2011-01-02T09:22:46.873
2011-01-02T20:40:16.190
2011-01-02T20:40:16.190
919
930
null
5889
2
null
5885
16
null
For online tutorials, there are - A tutorial in MCMC, by Sahut (2000) - Tutorial on Markov Chain Monte Carlo, by Hanson (2000) - Markov Chain Monte Carlo for Computer Vision, by Zhu et al. (2005) - Introduction to Markov Chain Monte Carlo simulations and their statistical analysis, by Berg (2004). - A Tutorial on Markov Chain Monte-Carlo and Bayesian Modeling by Martin B. Haugh (2021). [Practical Markov Chain Monte Carlo](https://projecteuclid.org/journals/statistical-science/volume-7/issue-4/Practical-Markov-Chain-Monte-Carlo/10.1214/ss/1177011137.full), by Geyer (Stat. Science, 1992), is also a good starting point, and you can look at the [MCMCpack](http://cran.r-project.org/web/packages/MCMCpack/index.html) or [mcmc](http://cran.r-project.org/web/packages/mcmc/) R packages for illustrations.
null
CC BY-SA 4.0
null
2011-01-02T09:32:02.837
2021-05-18T15:56:24.433
2021-05-18T15:56:24.433
296197
930
null
5890
2
null
5885
9
null
I haven't read it (yet), but if you're into R, there is Christian P. Robert's and George Casella's book: [Introducing Monte Carlo Methods with R (Use R)](http://rads.stackoverflow.com/amzn/click/1441915753) I know of it from following his (very good) [blog](http://xianblog.wordpress.com/)
null
CC BY-SA 3.0
null
2011-01-02T10:17:49.443
2015-01-16T07:23:13.243
2015-01-16T07:23:13.243
7224
253
null
5891
2
null
5885
4
null
[Dani Gamerman & Hedibert F. Lopes. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference (2nd ed.). Boca Raton, FL: Champan & Hall/CRC, 2006. 344 pp. ISBN 0-412-81820-5](http://books.google.co.uk/books?id=yPvECi_L3bwC). -- a more recently updated book than Gilks, Richardson & Spiegelhalter. I haven't read it myself, but it was well [reviewed in Technometrics in 2008](http://pubs.amstat.org/doi/abs/10.1198/tech.2008.s542), and the first edition also got a good [review in The Statistician](http://www.jstor.org/stable/2988381) back in 1998.
null
CC BY-SA 2.5
null
2011-01-02T11:20:10.533
2011-01-02T12:06:00.227
2011-01-02T12:06:00.227
930
449
null
5893
1
null
null
12
1812
A name: first, possibly a middle, and surname. I'm curious about how much information you can mine out of a name, using publicly available datasets. I know that you can get the following with anywhere from a low to a high probability (depending on the input) using US census data: 1) Gender. 2) Race. Facebook, for instance, used exactly that to find out, with a decent level of accuracy, the racial distribution of users of their site (https://www.facebook.com/note.php?note_id=205925658858). What else can be mined? I'm not looking for anything specific; this is a very open-ended question to assuage my curiosity. My examples are US-specific, so we'll assume that the name is the name of someone located in the US; but if someone knows of publicly available datasets for other countries, I'm more than open to them too. I'm not quite sure if this is the right place for this; if it's not, I'd appreciate it if someone could point me to a more appropriate place. I hope this is an interesting question, and that this is the appropriate place!
How much information can you mine out of a name?
CC BY-SA 2.5
null
2011-01-02T17:04:44.293
2023-05-01T07:05:12.343
null
null
null
[ "dataset", "data-mining", "census" ]
5894
1
5897
null
4
133
I am new to statistics so I apologize for the simplicity of my question. I have a bag with 27 numbers inside; I will pull one number out until I get number 17. The numbers will be returned to the bag after every try, so the probability of success will always be 1/27. I am trying to determine, on average, how many pulls I will need.
Number of tries for a favorable result
CC BY-SA 2.5
null
2011-01-02T17:52:18.390
2011-01-03T00:10:55.017
null
null
2610
[ "binomial-distribution", "negative-binomial-distribution" ]
5895
2
null
5894
2
null
It is easier to think backwards. What is the chance of NOT choosing 17? That chance is 26/27. What is the chance of not choosing 17 twice in a row? The answer is (26/27)(26/27). How about three times in a row? The answer is (26/27)(26/27) *(26/27). More generally, the answer is (26/27)^K, where K is the number of times you try and "^" means "to the power of". You can compute that probability for various values of K, and find a value of K that gives an answer close to 0.50.
null
CC BY-SA 2.5
null
2011-01-02T17:59:57.440
2011-01-02T17:59:57.440
null
null
25
null
5896
2
null
5894
3
null
On average, you need 27 tries.
null
CC BY-SA 2.5
null
2011-01-02T19:07:29.217
2011-01-02T19:07:29.217
null
null
2592
null
5897
2
null
5894
2
null
Harvey explained the logic. But the name of the distribution you're interested in is the geometric distribution.
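A quick simulation check in R (illustrative): the number of pulls until 17 first appears is geometric, and its mean is $1/p = 27$.

```r
set.seed(1)
pulls <- rgeom(1e5, prob = 1/27) + 1   # rgeom counts failures before the first success
mean(pulls)                            # close to 27, the theoretical mean 1/p
dgeom(0, prob = 1/27)                  # P(getting 17 on the very first pull) = 1/27
```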
null
CC BY-SA 2.5
null
2011-01-02T19:14:42.883
2011-01-02T19:14:42.883
null
null
1028
null
5898
2
null
5893
4
null
From the first name you could predict region, age, and first-generation immigrant status. From the last name you could predict the geographical origin of the patronym. For the full name you could predict social and economic status (Thurston Howell III).
null
CC BY-SA 2.5
null
2011-01-02T20:43:26.670
2011-01-02T20:43:26.670
null
null
82
null
5899
1
5905
null
5
398
Imagine a committee of people in charge of hiring a CEO. A committee member can vote "No hire" (+0), "Maybe" (+1), or "Hire" (+2) for a potential CEO candidate. Each candidate is scored based on the votes, and the candidate with the highest score is hired. Now imagine a situation where two candidates are tied... both of them have a score of 5. However, candidate 1 had 2 Hires and 1 Maybe (i.e., 2+2+1), whereas candidate 2 had 5 Maybes (i.e., 1+1+1+1+1). Here is my question: - Intuitively, I think we can say that we are more confident about candidate 1 even though the scores are tied. What specific domain of statistics am I dealing with here (so that I can investigate on my own)? - In statistics, how does one go about resolving such situations? And I am not merely talking about resolving a tie... but rather, how to resolve this in a generic manner?
What to do about ties in voting results?
CC BY-SA 3.0
null
2011-01-02T21:14:13.577
2019-06-29T08:21:26.367
2019-06-29T08:21:26.367
3277
2605
[ "ties", "voting-system" ]
5900
2
null
5885
6
null
[Handbook of Markov Chain Monte Carlo, Steve Brooks, Andrew Gelman, Galin Jones and Xiao-Li Meng, eds. 2011 CRC Press.](http://www.crcpress.com/product/isbn/9781420079418;jsessionid=AYSwK8euwh4-aVy7BXE2wQ%2a%2a) [Chapter 4](http://www.stat.columbia.edu/~gelman/research/published/GelmanMCMCchapter4.pdf), 'Inference from simulations and monitoring convergence' by Gelman and Shirley, is available online.
null
CC BY-SA 2.5
null
2011-01-02T21:39:14.060
2011-01-02T21:39:14.060
null
null
1381
null