Columns:
Id: string (length 1-6)
PostTypeId: string (7 classes)
AcceptedAnswerId: string (length 1-6)
ParentId: string (length 1-6)
Score: string (length 1-4)
ViewCount: string (length 1-7)
Body: string (length 0-38.7k)
Title: string (length 15-150)
ContentLicense: string (3 classes)
FavoriteCount: string (3 classes)
CreationDate: string (length 23)
LastActivityDate: string (length 23)
LastEditDate: string (length 23)
LastEditorUserId: string (length 1-6)
OwnerUserId: string (length 1-6)
Tags: list
3924
1
null
null
1
229
Sorry if this is a very basic question, my statistics knowledge is very low! I've got a dataset which is basically a set of greyscale values in a greyscale texture (i.e. 0-255). What I want to do is take the mean value, and find out what the values would be at a specified standard deviation to the left and right of the mean. Is this possible? I presume I'd have to assume a normal distribution? I hope that makes sense!
Value based on a supplied standard-deviation
CC BY-SA 3.0
null
2010-10-23T16:31:41.513
2017-11-12T17:24:01.703
2017-11-12T17:24:01.703
11887
1683
[ "standard-deviation" ]
3925
2
null
3921
14
null
I think chl's answer is great. One thing I might add is for the case where you'd want to compare the correlations between the items. For that you can use something like a [Correlation scatter-plot matrix for ordered-categorical data](http://www.r-statistics.com/2010/04/correlation-scatter-plot-matrix-for-ordered-categorical-data/) ![alt text](https://i.stack.imgur.com/hCzCn.png) (That code still needs some tweaking - but it gives the general idea...)
null
CC BY-SA 2.5
null
2010-10-23T18:34:23.693
2010-10-23T18:34:23.693
null
null
253
null
3926
1
null
null
2
169
I'm looking at two plant categories (all species fall into one type or the other) at several different types of locations. Within each type of location, I have several samples, so I have an average of each type of plant as it is found in each type of location. Within each type of location, I want to compare the average number of one type to the average number of the other type. The number of one type of plant is not necessarily independent of the other (there is only a certain amount of resources after all). I feel as though there must be a simple test to look at this, but I can't remember what it is. For the difference in average number of ONE type across the different locations, I used an ANOVA; if there were only two locations I could have used a t-test. If I had one population type and one type of plant, and looked at it before and after some manipulation, then I would use a paired/dependent t-test... Any suggestions would be phenomenally helpful. Thanks!
How to look at two different (but related) averages within a population
CC BY-SA 2.5
null
2010-10-23T19:21:33.553
2010-12-23T06:44:25.977
null
null
null
[ "categorical-data", "mean" ]
3927
2
null
3926
0
null
Assuming that you are looking at proportions you need to do a [One-proportion z-test](http://en.wikipedia.org/wiki/Statistical_hypothesis_testing#Common_test_statistics). The link takes you to the wiki which has common test statistics used in various situations, including the one-proportion z-test. If I have understood your situation correctly you have the following set-up. Let $n$ be the total number of trees in a location and $n_a$ be the number of trees in that location of type 'a'. Thus, your hypothesis potentially is: $H_0: \pi_a = \pi_0$, $H_A: \pi_a \ne \pi_0$, where $\pi_a$ is the true proportion of trees of type $a$. You are approximating $\pi_a$ with $p_a$, where $p_a = \frac{n_a}{n}$, and $\pi_0$ is your hypothesis about the true proportion for that type.
null
CC BY-SA 2.5
null
2010-10-23T20:50:56.860
2010-10-23T20:50:56.860
null
null
null
null
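The answer in post 3927 above describes a one-proportion z-test. A minimal R sketch of that test, using made-up counts (the numbers and the hypothesised proportion are assumptions for illustration, not values from the thread):
```
# Hypothetical counts: 120 plants in a location, 75 of type 'a'
n   <- 120
n_a <- 75
p_a <- n_a / n     # observed proportion
pi0 <- 0.5         # hypothesised true proportion

# z statistic for H0: pi_a = pi0 (two-sided)
z <- (p_a - pi0) / sqrt(pi0 * (1 - pi0) / n)
p_value <- 2 * pnorm(-abs(z))
c(z = z, p.value = p_value)

# Built-in alternative (chi-square form of the same test, no continuity correction)
prop.test(n_a, n, p = pi0, correct = FALSE)
```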
3928
1
3951
null
7
424
I have developmental data collected across several grades (1-6), where each child in each grade is measured many times. I would like to be able to assess whether there are any linear or non-linear trends in the response variable across grade. Does it make sense to run a first lmer treating grade as continuous, obtain the residuals, then run a second lmer treating grade as a factor? That is: ``` fit1 = lmer( formula = response ~ (1|child)+grade_as_numeric , data = my_data , family = gaussian ) my_data$resid = residuals(fit1) fit2 = lmer( formula = resid ~ (1|child)+grade_as_factor , data = my_data , family = gaussian ) ``` As I understand it, fit1 will tell me if there are any linear trends in the data, while fit2 will tell me if there are any non-linear trends in the data in addition to the linear trends obtained in fit1. If this is sensible, how might I apply it to a second binomial response variable given that the residuals from a binomial model are not 0/1?
Assessing linearity in a mixed effects model
CC BY-SA 2.5
null
2010-10-23T21:24:24.377
2010-10-24T17:16:01.820
null
null
364
[ "mixed-model", "binomial-distribution" ]
3929
2
null
3926
0
null
For a particular plot we estimate "% cover" per plant species (summed across all species within a type to give the % cover for the plant type); in a forest canopy (as one example of the locations we used) this means that the total will definitely be greater than 100%, so a statistic using proportions won't work... When I say 'plants', I mean all the plants there (trees, herbs, shrubs, etc). The two categories were "native" and "introduced/invasive" so all plants will fall into one or the other.
null
CC BY-SA 2.5
null
2010-10-23T21:55:19.983
2010-10-23T21:55:19.983
null
null
null
null
3930
1
null
null
32
35850
Most standard distributions in R have a family of commands - pdf/pmf, cdf/cmf, quantile, random deviates (for example- dnorm, pnorm, qnorm, rnorm). I know it's easy enough to make use of some standard commands to reproduce these functions for the discrete uniform distributions, but is there already a preferred built-in family of functions for modeling discrete uniform distributions in R that I'm unaware of?
Are there default functions for discrete uniform distributions in R?
CC BY-SA 2.5
null
2010-10-23T22:02:20.557
2021-10-25T07:37:19.767
null
null
null
[ "r", "distributions", "uniform-distribution" ]
3931
1
3934
null
229
247343
I was asked today in class why you divide the sum of square error by $n-1$ instead of with $n$, when calculating the standard deviation. I said I am not going to answer it in class (since I didn't wanna go into unbiased estimators), but later I wondered - is there an intuitive explanation for this?!
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
CC BY-SA 3.0
null
2010-10-23T22:04:57.160
2023-04-27T08:59:40.013
2021-01-11T14:06:29.100
919
253
[ "standard-error", "intuition", "teaching", "bessels-correction", "faq" ]
3932
2
null
3931
66
null
A common one is that the definition of variance (of a distribution) is the second moment recentered around a known, definite mean, whereas the estimator uses an estimated mean. This loss of a degree of freedom (given the mean, you can reconstitute the dataset with knowledge of just $n-1$ of the data values) requires the use of $n-1$ rather than $n$ to "adjust" the result. Such an explanation is consistent with the estimated variances in ANOVA and variance components analysis. It's really just a special case. The need to make some adjustment that inflates the variance can, I think, be made intuitively clear with a valid argument that isn't just ex post facto hand-waving. (I recollect that Student may have made such an argument in his 1908 paper on the t-test.) Why the adjustment to the variance should be exactly a factor of $n/(n-1)$ is harder to justify, especially when you consider that the adjusted SD is not an unbiased estimator. (It is merely the square root of an unbiased estimator of the variance. Being unbiased usually does not survive a nonlinear transformation.) So, in fact, the correct adjustment to the SD to remove its bias is not a factor of $\sqrt{n/(n-1)}$ at all! Some introductory textbooks don't even bother introducing the adjusted sd: they teach one formula (divide by $n$). I first reacted negatively to that when teaching from such a book but grew to appreciate the wisdom: to focus on the concepts and applications, the authors strip out all inessential mathematical niceties. It turns out that nothing is hurt and nobody is misled.
null
CC BY-SA 2.5
null
2010-10-23T22:21:58.407
2010-10-23T22:21:58.407
null
null
919
null
3934
2
null
3931
146
null
The standard deviation calculated with a divisor of $n-1$ is a standard deviation calculated from the sample as an estimate of the standard deviation of the population from which the sample was drawn. Because the observed values fall, on average, closer to the sample mean than to the population mean, the standard deviation which is calculated using deviations from the sample mean underestimates the desired standard deviation of the population. Using $n-1$ instead of $n$ as the divisor corrects for that by making the result a little bit bigger. Note that the correction has a larger proportional effect when $n$ is small than when it is large, which is what we want because when $n$ is larger the sample mean is likely to be a good estimator of the population mean. When the sample is the whole population we use the standard deviation with $n$ as the divisor because the sample mean is the population mean. (I note parenthetically that nothing that starts with "second moment recentered around a known, definite mean" is going to fulfil the questioner's request for an intuitive explanation.)
null
CC BY-SA 3.0
null
2010-10-24T03:46:26.027
2020-10-07T12:25:39.747
2020-10-07T12:25:39.747
919
1679
null
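As a numeric companion to the $n-1$ discussion in posts 3931, 3932 and 3934: the following simulation sketch (my own illustration, not taken from the answers) shows that dividing the sum of squared deviations by $n$ underestimates the population variance on average, while dividing by $n-1$ does not.
```
set.seed(1)
n <- 5                     # small samples from N(0, sd = 2), true variance 4
sims <- replicate(100000, {
  x <- rnorm(n, mean = 0, sd = 2)
  c(div_n = sum((x - mean(x))^2) / n,
    div_n_minus_1 = var(x))       # var() already divides by n - 1
})
rowMeans(sims)   # div_n is near 3.2 (= 4 * (n-1)/n); div_n_minus_1 is near 4
```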
3935
2
null
3911
7
null
You are correct in saying that the 95% confidence intervals are things that result from using a method that works in 95% of cases, rather than any individual interval having a 95% likelihood of containing the expected value. "The logical basis and interpretation of confidence limits are, even now, a matter of controversy." {David Colquhoun, 1971, Lectures on Biostatistics} That quotation is taken from a statistics textbook published in 1971, but I would contend that it is still true in 2010. The controversy is probably most extreme in the case of confidence intervals for binomial proportions. There are many competing methods for calculating those confidence intervals, but they are all inaccurate in one or more senses and even the worst performing method has proponents among textbook authors. Even so-called ‘exact’ intervals fail to yield the properties expected of confidence intervals. In a paper written for surgeons (widely known for their interest in statistics!), John Ludbrook and I argued for the routine use of confidence intervals calculated using a uniform Bayesian prior because such intervals have frequentist properties as good as any other method (on average exactly 95% coverage over all true proportions) but, importantly, much better coverage over all observed proportions (exactly 95% coverage). The paper, because of its target audience, is not terribly detailed and so it may not convince all statisticians, but I am working on a follow-up paper with the full set of results and justifications. This is a case where the Bayesian approach has frequentist properties as good as the frequentist approach, something that happens fairly often. The assumption of a uniform prior is not problematical because a uniform distribution of population proportions is built into every calculation of frequentist coverage that I've come across. You ask: "Are there ways of looking at confidence intervals, at least in some circumstances, which would be meaningful to users of statistics?" My answer, then, is that for binomial confidence intervals one can get intervals that contain the population proportion exactly 95% of the time for all observed proportions. That is a yes. However, the conventional use of confidence intervals expects coverage for all population proportions and for that the answer is "No!" The length of the answers to your question, and the various responses to them, suggest that confidence intervals are widely misunderstood. If we change our objective from coverage for all true parameter values to coverage of the true parameter value for all sample values, it might get easier because the intervals will then be shaped to be directly relevant to the observed values rather than for the performance of the method per se.
null
CC BY-SA 3.0
null
2010-10-24T04:23:55.957
2011-06-07T05:09:43.653
2011-06-07T05:09:43.653
1679
1679
null
3936
2
null
3924
2
null
I suspect that you want to re-scale the values so that they have a specified standard deviation. If so, then it is easy. Let's assume that you have a mean of x and a standard deviation of y, and the desired standard deviation is z. Subtract x from every observation, then divide them by y so that you have a mean of zero and a standard deviation of one. Now multiply by z and add x to each value. You will end up with the original mean, x, and a standard deviation of z. This will work whatever the distribution of the data as it is simply linearly re-scaling the x-axis of the distribution.
null
CC BY-SA 2.5
null
2010-10-24T04:47:40.530
2010-10-24T04:47:40.530
null
null
1679
null
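Post 3936 above describes re-scaling data to a desired standard deviation by standardising and multiplying back. A short R sketch of that recipe; the simulated greyscale-like values and the target standard deviation z = 20 are assumptions for illustration:
```
set.seed(42)
values <- pmin(pmax(rnorm(1000, mean = 128, sd = 35), 0), 255)  # fake greyscale data

x <- mean(values)   # original mean
y <- sd(values)     # original standard deviation
z <- 20             # desired standard deviation (arbitrary choice)

rescaled <- (values - x) / y * z + x
c(mean = mean(rescaled), sd = sd(rescaled))   # mean unchanged, sd now ~20
```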
3938
2
null
3926
1
null
A location (Between) x Plant Type (Within) mixed ANOVA may do what you want. The disadvantage of this approach is that there is no clear post-hoc to perform at each location, so you'll have to do a bonferroni correction or take some other approach (e.g. [question 1](https://stats.stackexchange.com/questions/575/post-hocs-for-within-subjects-tests) or [question 2](https://stats.stackexchange.com/questions/1266/a-non-parametric-repeated-measures-multi-way-anova-in-r)). Since your data is sourced from percentages and is bounded at the upper and lower extremities you may want to transform the raw data first (see this [issue/approach](http://www.sportsci.org/resource/stats/counts.html)).
null
CC BY-SA 2.5
null
2010-10-24T06:20:31.360
2010-10-24T06:20:31.360
2017-04-13T12:44:39.283
-1
196
null
3939
2
null
3930
15
null
The [CRAN Task View: Probability Distributions](http://finzi.psych.upenn.edu/views/Distributions.html) page says: > The discrete uniform distribution can be easily obtained with the basic functions. I guess something along the lines of this should do: ``` a <- round(runif(1000, min=0, max=100)) ``` EDIT As csgillespie pointed out, this is not correct... ``` a <- ceiling(runif(1000, min=0, max=100)) ``` will work though (note that the example will generate values between 1 and 100, not 0 and 100)
null
CC BY-SA 2.5
null
2010-10-24T07:12:40.887
2010-10-25T08:34:59.653
2010-10-25T08:34:59.653
582
582
null
3940
2
null
3930
36
null
As nico wrote, they're not implemented in R. Assuming we work in 1..k, those functions should look like: For random generation: ``` rdu<-function(n,k) sample(1:k,n,replace=T) ``` PDF: ``` ddu<-function(x,k) ifelse(x>=1 & x<=k & round(x)==x,1/k,0) ``` CDF: ``` pdu<-function(x,k) ifelse(x<1,0,ifelse(x<=k,floor(x)/k,1)) ```
null
CC BY-SA 2.5
null
2010-10-24T07:36:16.800
2010-10-24T07:36:16.800
null
null
null
null
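Post 3940 defines random generation, the PDF and the CDF for the discrete uniform on 1..k but not the quantile function. A sketch in the same style (my addition, not part of the original answer):
```
# Quantile function: smallest m in 1..k whose CDF is at least p
qdu <- function(p, k) ifelse(p < 0 | p > 1, NaN, pmax(1, ceiling(p * k)))

qdu(c(0, 0.25, 0.5, 1), k = 10)   # 1 3 5 10
```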
3941
2
null
3931
20
null
This is pure intuition, but the simplest answer is that it is a correction made to make the standard deviation of a one-element sample undefined rather than 0.
null
CC BY-SA 2.5
null
2010-10-24T10:28:15.323
2010-10-24T10:28:15.323
null
null
null
null
3942
2
null
3931
16
null
You can gain a deeper understanding of the $n-1$ term through geometry alone, not just why it's not $n$ but why it takes exactly this form, but you may first need to build up your intuition to cope with $n$-dimensional geometry. From there, however, it's a small step to a deeper understanding of degrees of freedom in linear models (i.e. model df & residual df). I think there's little doubt that [Fisher](http://en.wikipedia.org/wiki/Ronald_Fisher) thought this way. Here's a book that builds it up gradually: Saville DJ, Wood GR. Statistical methods: the geometric approach. 3rd edition. New York: Springer-Verlag; 1991. 560 pages. [9780387975177](http://en.wikipedia.org/wiki/Special:BookSources/9780387975177) (Yes, 560 pages. I did say gradually.)
null
CC BY-SA 3.0
null
2010-10-24T11:01:31.550
2017-08-25T17:17:04.040
2017-08-25T17:17:04.040
168153
449
null
3943
1
3946
null
94
93163
In which cases should one prefer the one over the other? I found someone who claims an advantage for Kendall, [for pedagogical reasons](http://web.archive.org/web/20090207060710/http://www.rsscse.org.uk/ts/bts/noether/text.html); are there other reasons?
Kendall Tau or Spearman's rho?
CC BY-SA 4.0
null
2010-10-24T13:15:49.687
2018-11-04T01:16:49.227
2018-11-04T01:16:49.227
75266
253
[ "correlation", "nonparametric", "spearman-rho", "kendall-tau" ]
3944
1
3948
null
21
34088
Reading about methods and results of statistical analysis, especially in epidemiology, I very often [hear](https://www.acpjournals.org/doi/10.7326/0003-4819-136-2-200201150-00009) about adjustment or controlling of the models. How would you explain, to a non-statistician, the purpose of that? How do you interpret your results after controlling for a certain variable? A small walk-through in Stata or R, or a pointer to one online, would be a true gem.
Explain model adjustment, in plain English
CC BY-SA 4.0
null
2010-10-24T13:58:01.527
2022-11-28T05:11:09.387
2022-11-28T05:11:09.387
362671
22
[ "regression", "modeling", "epidemiology" ]
3945
2
null
3943
37
null
I refer the honorable gentleman to [my previous answer](https://stats.stackexchange.com/questions/3730/pearsons-or-spearmans-correlation-with-non-normal-data/3744#3744): "...confidence intervals for Spearman’s rS are less reliable and less interpretable than confidence intervals for Kendall’s τ-parameters", according to Kendall & Gibbons (1990).
null
CC BY-SA 2.5
null
2010-10-24T14:24:27.680
2010-10-24T14:24:27.680
2017-04-13T12:44:33.550
-1
449
null
3946
2
null
3943
48
null
I found that Spearman correlation is mostly used in place of the usual linear correlation when working with integer-valued scores on a measurement scale, when it has a moderate number of possible scores, or when we don't want to rely on assumptions about the bivariate relationships. As compared to the Pearson coefficient, the interpretation of Kendall's tau seems to me less direct than that of Spearman's rho, in the sense that it quantifies the difference between the % of concordant and discordant pairs among all possible pairwise events. In my understanding, Kendall's tau more closely resembles [Goodman-Kruskal Gamma](http://en.wikipedia.org/wiki/Gamma_test_(statistics)). I just browsed an article from Larry Winner in the J. Statistics Educ. (2006) which discusses the use of both measures, [NASCAR Winston Cup Race Results for 1975-2003](http://www.amstat.org/publications/jse/v14n3/datasets.winner.html). I also found [@onestop](https://stats.stackexchange.com/questions/3730/pearsons-or-spearmans-correlation-with-non-normal-data/3744#3744)'s answer about [Pearson's or Spearman's correlation with non-normal data](https://stats.stackexchange.com/questions/3730/pearsons-or-spearmans-correlation-with-non-normal-data) interesting in this respect. Of note, Kendall's tau (the a version) has a connection to Somers' D (and Harrell's C) used for predictive modelling (see e.g., [Interpretation of Somers’ D under four simple models](http://www.imperial.ac.uk/nhli/r.newson/miscdocs/intsomd1.pdf) by RB Newson and reference [6](http://www.jstatsoft.org/v15/i01/paper) therein, and articles by Newson published in the Stata Journal 2006). An overview of rank-sum tests is provided in [Efficient Calculation of Jackknife Confidence Intervals for Rank Statistics](http://www.jstatsoft.org/v15/i01/paper), which was published in the JSS (2006).
null
CC BY-SA 2.5
null
2010-10-24T14:26:35.747
2010-10-31T10:29:50.693
2017-04-13T12:44:37.583
-1
930
null
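For readers following the Kendall-versus-Spearman thread (posts 3943, 3945, 3946 and 3949): both coefficients are available from base R's cor() and cor.test(). A minimal sketch on simulated ordinal-style scores (the data are made up for illustration):
```
set.seed(7)
x <- sample(1:5, 100, replace = TRUE)                # fake ordinal scores
y <- pmin(5, pmax(1, x + sample(-1:1, 100, TRUE)))   # noisy, related scores

cor(x, y, method = "spearman")     # Spearman's rho
cor(x, y, method = "kendall")      # Kendall's tau
cor.test(x, y, method = "kendall") # test with p-value
```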
3947
1
3954
null
84
8590
I understand the basics of what a Support Vector Machines' aim is in terms of classifying an input set into several different classes, but what I don't understand is some of the nitty-gritty details. For starters, I'm a bit confused by the use of Slack Variables. What is their purpose? I'm doing a classification problem where I've captured pressure readings from sensors I've placed on the insole of a shoe. A subject will sit, stand, and walk for a couple of minutes while pressure data is recorded. I want to train a classifier to be able to determine whether a person is sitting, standing or walking and be able to do that for any future test data. What classifier type do I need to try? What is the best way for me to train a classifier from the data I've captured? I have 1000 entries for sitting, standing and walking (3x1000=3000 total), and they all have the following feature vector form. (pressurefromsensor1, pressurefromsensor2, pressurefromsensor3, pressurefromsensor4)
Help me understand Support Vector Machines
CC BY-SA 2.5
null
2010-10-24T15:11:52.427
2015-07-31T07:16:17.830
2010-10-26T19:45:29.510
null
1224
[ "machine-learning", "classification", "svm" ]
3948
2
null
3944
38
null
Easiest to explain by way of an example: Imagine a study finds that people who watched the World Cup final were more likely to suffer a heart attack during the match or in the subsequent 24 hours than those who didn't watch it. Should the government ban football from TV? But men are more likely to watch football than women, and men are also more likely to have a heart attack than women. So the association between football-watching and heart attacks might be explained by a third factor such as sex that affects both. (Sociologists would distinguish here between gender, a cultural construct that is associated with football-watching, and sex, a biological category that is associated with heart-attack incidence, but the two are clearly very strongly correlated so I'm going to ignore that distinction for simplicity.) Statisticians, and especially epidemiologists, call such a third factor a confounder, and the phenomenon [confounding](http://en.wikipedia.org/wiki/Confounding). The most obvious way to remove the problem is to look at the association between football-watching and heart-attack incidence in men and women separately, or in the jargon, to stratify by sex. If we find that the association (if there still is one) is similar in both sexes, we may then choose to combine the two estimates of the association across the two sexes. The resulting estimate of the association between football-watching and heart-attack incidence is then said to be adjusted or controlled for sex. We would probably also wish to control for other factors in the same way. Age is another obvious one (in fact epidemiologists either stratify or adjust/control almost every association by age and sex). Socio-economic class is probably another. Others can get trickier, e.g. should we adjust for beer consumption while watching the match? Maybe yes, if we're interested in the effect of the stress of watching the match alone; but maybe no, if we're considering banning broadcasting of World Cup football and that would also reduce beer consumption. Whether a given variable is a confounder or not depends on precisely what question we wish to address, and this can require very careful thought and get quite tricky and even contentious. Clearly then, we may wish to adjust/control for several factors, some of which may be measured in several categories (e.g. social class) while others may be continuous (e.g. age). We could deal with the continuous ones by splitting into (age-)groups, thereby turning them into categorical ones. So say we have 2 sexes, 5 social class groups and 7 age groups. We can now look at the association between football-watching and heart-attack incidence in 2×5×7 = 70 strata. But if our study is fairly small, so some of those strata contain very few people, we're going to run into problems with this approach. And in practice we may wish to adjust for a dozen or more variables. An alternative way of adjusting/controlling for variables that is particularly useful when there are many of them is provided by [regression analysis](http://en.wikipedia.org/wiki/Regression_model) with multiple independent (explanatory) variables, sometimes known as multivariable regression analysis. (There are different types of regression models depending on the type of outcome variable: least squares regression, logistic regression, proportional hazards (Cox) regression...).
In observational studies, as opposed to experiments, we nearly always want to adjust for many potential confounders, so in practice adjustment/control for confounders is often done by regression analysis, though there are other alternatives too, such as standardization, weighting, propensity score matching...
null
CC BY-SA 2.5
null
2010-10-24T15:20:40.330
2010-10-24T15:20:40.330
null
null
449
null
3949
2
null
3943
24
null
Again, a somewhat philosophical answer; the basic difference is that Spearman's rho is an attempt to extend the R^2 (="variance explained") idea to nonlinear interactions, while Kendall's tau is rather intended to be a test statistic for a nonlinear correlation test. So tau should be used for testing nonlinear correlations, and rho as an extension of R (or for people familiar with R^2 -- explaining tau to an unsuspecting audience in limited time is painful).
null
CC BY-SA 2.5
null
2010-10-24T15:57:33.700
2010-10-24T15:57:33.700
null
null
null
null
3950
1
3956
null
7
1134
I am currently working with feature-vectors that are made up of continuous attributes, so I can use the euclidean distance for things like KNN-classification and clustering. Now I want to add a nominal attribute that has a special distance function defined. What options do I have of combining these distance functions, so I still obtain one distance for two vectors?
What options are there to combine different distance functions?
CC BY-SA 2.5
null
2010-10-24T16:34:57.410
2010-10-24T20:03:13.467
null
null
977
[ "distance-functions" ]
3951
2
null
3928
5
null
Not really an answer, but I was interested in trying it out... I assume that the pattern is not easily recognisable just by plotting? So I tried to make up some data that might behave this way: ``` library(lme4) library(lattice) set.seed(69) id<- rep(1:20, each=6) x<-rep(1:6, 20) y<-jitter(x+id/5, factor=5) + jitter(sin(x), factor=5) df1<-data.frame(id, x, y) plot(y~x) xyplot(y~x|id, data=df1, type="l") ``` For me, if I had this data without knowing how it was made, I think I would have trouble picking out the overlaid signal and perhaps assume it was linear. Resid vs fitted of lmer1 (below) doesn't show much, but the resid vs x is more suggestive. ``` lmer1<-lmer(y~x+(1|id), data=df1, REML=F) xyplot(resid(lmer1)~fitted(lmer1), type=c("p", "smooth")) xyplot(resid(lmer1)~x, type=c("p", "smooth")) ``` Using your suggestion of conducting an ANOVA on the residuals gives a significant effect, indicating perhaps some kind of systematic difference: ``` lmer2<-lmer(resid(lmer1)~factor(x)+(1|id), data=df1, REML=F) #sd attributed to id is 0 anova(lmer2) ``` So perhaps this method may be useful to determine whether you need to include higher order terms by using, maybe, increasing order polynomials: ``` lmer3.1 <- lmer(y~poly(x,2)+(1|id), data=df1, REML=F) lmer3.2 <- lmer(y~poly(x,3)+(1|id), data=df1, REML=F) lmer3.3 <- lmer(y~poly(x,4)+(1|id), data=df1, REML=F) anova(lmer1, lmer3.1, lmer3.2, lmer3.3) ``` In this method the cubic function 'wins' and might be a useful approximation of what's going on and comes quite close to the generated model: ``` lmer4<-lmer(y~x+sin(x)+(1|id), data=df1) anova(lmer3.2, lmer4) ``` I don't really know if this helps or not, but hopefully this simulated data mirrors your problem and somebody can use this to give a more exact answer to your question. I'm not sure about the binomial part of your question.
null
CC BY-SA 2.5
null
2010-10-24T17:16:01.820
2010-10-24T17:16:01.820
null
null
966
null
3953
2
null
3944
12
null
Onestop explained it pretty well, I'll just give a simple R example with made up data. Say x is weight and y is height, and we want to find out if there's a difference between males and females: ``` set.seed(69) x <- rep(1:10,2) y <- c(jitter(1:10, factor=4), (jitter(1:10, factor=4)+2)) sex <- rep(c("f", "m"), each=10) df1 <- data.frame(x,y,sex) with(df1, plot(y~x, col=c(1,2)[sex])) lm1 <- lm(y~sex, data=df1) lm2 <- lm(y~sex+x, data=df1) anova(lm1); anova(lm2) ``` You can see that without controlling for weight (in anova(lm1)) there's very little difference between the sexes, but when weight is included as a covariate (controlled for in lm2) then the difference becomes more apparent. ``` #In case you want to add the fitted lines to the plot coefs2 <- coef(lm2) abline(coefs2[1], coefs2[3], col=1) abline(coefs2[1]+coefs2[2], coefs2[3], col=2) ```
null
CC BY-SA 2.5
null
2010-10-24T17:43:38.477
2010-10-24T17:43:38.477
null
null
966
null
3954
2
null
3947
111
null
I think you are trying to start from a bad end. What one should know about SVM to use it is just that this algorithm finds a hyperplane in the hyperspace of attributes that separates the two classes best, where best means with the biggest margin between the classes (the knowledge of how it is done is your enemy here, because it blurs the overall picture), as illustrated by a famous picture like this: ![alt text](https://i.stack.imgur.com/v5DDG.png) Now, there are some problems left. First of all, what to do with those nasty outliers lying shamelessly in the center of a cloud of points of a different class? ![alt text](https://i.stack.imgur.com/WWNkm.png) To this end we allow the optimizer to leave certain samples mislabelled, yet punish each such example. To avoid multiobjective optimization, penalties for mislabelled cases are merged with the margin size through an additional parameter C which controls the balance among those aims. Next, sometimes the problem is just not linear and no good hyperplane can be found. Here we introduce the kernel trick -- we just project the original, nonlinear space to a higher dimensional one with some nonlinear transformation, of course defined by a bunch of additional parameters, hoping that in the resulting space the problem will be suitable for a plain SVM: ![alt text](https://i.stack.imgur.com/A0wtW.png) Yet again, with some math we can see that this whole transformation procedure can be elegantly hidden by modifying the objective function, replacing the dot product of objects with a so-called kernel function. Finally, this all works for 2 classes, and you have 3; what to do with it? Here we create 3 two-class classifiers (sitting -- no sitting, standing -- no standing, walking -- no walking) and in classification combine those with voting. OK, so the problems seem solved, but we have to select a kernel (here we consult our intuition and pick RBF) and fit at least a few parameters (C + kernel). And we must have an overfit-safe objective function for it, for instance error approximation from cross-validation. So we leave the computer working on that, go for a coffee, come back and see that there are some optimal parameters. Great! Now we just start nested cross-validation to have an error approximation and voila. This brief workflow is of course too simplified to be fully correct, but it shows the reasons why I think you should first try a [random forest](http://en.wikipedia.org/wiki/Random_forest), which is almost parameter-independent, natively multiclass, provides an unbiased error estimate and performs almost as well as a well-fitted SVM.
null
CC BY-SA 2.5
null
2010-10-24T17:44:34.170
2010-10-24T17:44:34.170
null
null
null
null
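Post 3954 above suggests trying both an SVM and a random forest on the three-class pressure data. A hedged sketch of how that comparison might look in R, using simulated sensor readings in place of the real data; the e1071 and randomForest packages are my choice, not something specified in the thread:
```
library(e1071)         # svm()
library(randomForest)  # randomForest()

set.seed(1)
# Fake 4-sensor pressure data for three activities
make_class <- function(label, shift) {
  data.frame(matrix(rnorm(300 * 4, mean = shift), ncol = 4), activity = label)
}
d <- rbind(make_class("sit", 0), make_class("stand", 1), make_class("walk", 2))
d$activity <- factor(d$activity)

train <- sample(nrow(d), 0.7 * nrow(d))
fit_svm <- svm(activity ~ ., data = d[train, ], kernel = "radial", cost = 1)
fit_rf  <- randomForest(activity ~ ., data = d[train, ])

mean(predict(fit_svm, d[-train, ]) == d$activity[-train])  # SVM accuracy
mean(predict(fit_rf,  d[-train, ]) == d$activity[-train])  # random forest accuracy
```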
3955
1
null
null
15
580
### Context: I have a group of websites where I record the number of visits on a daily basis: ``` W0 = { 30, 34, 28, 30, 16, 13, 8, 4, 0, 5, 2, 2, 1, 2, .. } W1 = { 1, 3, 21, 12, 10, 20, 15, 43, 22, 25, .. } W2 = { 0, 0, 4, 2, 2, 5, 3, 30, 50, 30, 30, 25, 40, .. } ... Wn ``` ### General Question: - How do I determine which sites are the most active? By this I mean receiving more visits or having a sudden increase in visits during the last few days. For illustration purposes, in the small example above W0 would be initially popular but is starting to show signs of abandonment, W1 is showing a steady popularity (with some isolated peaks), and W2 an important rise after a quiet start. ### Initial thoughts: I found this [thread on SO](https://stackoverflow.com/questions/1635703/understanding-algorithms-for-measuring-trends) where a simple formula is described: ` // pageviews for most recent day y2 = pageviews[-1] // pageviews for previous day y1 = pageviews[-2] // Simple baseline trend algorithm slope = y2 - y1 trend = slope * log(1.0 +int(total_pageviews)) error = 1.0/sqrt(int(total_pageviews)) return trend, error ` This looks good and easy enough, but I'm having a problem with it. The calculation is based on slopes. This is fine and is one of the features I'm interested in, but IMHO it has problems for non-monotonic series. Imagine that during some days we have a constant number of visits (so the slope = 0); then the above trend would be zero. ### Questions: - How do I handle both cases (monotonic increase/decrease) and a large number of hits? - Should I use separate formulas?
Determining whether a website is active using daily visits
CC BY-SA 2.5
null
2010-10-24T19:55:39.883
2011-02-26T14:46:45.880
2017-05-23T12:39:26.167
-1
1694
[ "time-series", "forecasting" ]
3956
2
null
3950
5
null
I can think of three: - Combine them in a linear manner ($d=d_1+\alpha d_2$) and find the best $\alpha$ by some optimization, let's say minimizing the CV error for kNN or minimizing the silhouette for clustering. - Train separate classifiers / cluster the data a few times based on both distances and then blend the results. This may not work too well because you have only 2 base methods. - For classification only, you may use "klNN" -- get $k$ neighbors based on the first metric and $l$ based on the second.
null
CC BY-SA 2.5
null
2010-10-24T20:03:13.467
2010-10-24T20:03:13.467
null
null
null
null
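The first option in post 3956 (a weighted linear combination of distances) can be sketched in a few lines of R. The nominal-attribute distance used here (a simple 0/1 mismatch) and the value of alpha are assumptions for illustration; as the answer says, alpha would normally be tuned by cross-validation or a clustering criterion:
```
# Combined distance d = d_euclid + alpha * d_nominal for two feature vectors
combined_dist <- function(x_num1, x_num2, x_nom1, x_nom2, alpha = 1) {
  d_euclid  <- sqrt(sum((x_num1 - x_num2)^2))
  d_nominal <- as.numeric(x_nom1 != x_nom2)   # 0 if same category, 1 otherwise
  d_euclid + alpha * d_nominal
}

combined_dist(c(1.2, 0.5), c(0.9, 1.1), "red", "blue", alpha = 0.5)
```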
3958
1
null
null
2
915
> Possible Duplicate: What are some valuable Statistical Analysis open source projects? I understand the basics of statistical analysis, but I am not good at math. What is the best website or software package (preferably FOSS) that can analyze the data, find correlations, levels of confidence, etc. automagically/with a great UI?
What is your favorite, easy to use statistical analysis website or software package?
CC BY-SA 2.5
null
2010-10-24T20:23:32.503
2010-10-24T23:15:48.183
2017-04-13T12:44:46.680
-1
1689
[ "software" ]
3959
2
null
3958
5
null
Your objectives seem rather vague, but I think the open-source [R statistical package](http://www.R-project.org/) should fit your needs, and beyond. Although primarily a command line driven software, you will find several useful GUIs, e.g. [Rcommander](http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/) or [deducer](http://www.deducer.org/) to help you start with. The [CRAN](http://cran.r-project.org/) website contains everything you need to start with R, including a lot of [official](http://cran.r-project.org/manuals.html) and [contributed](http://cran.r-project.org/other-docs.html) documentation. R is made of several additional packages (a kind of extensions to the core statistical functions), and you will find interesting pointers on these related questions: [What R packages do you find most useful in your daily work?](https://stats.stackexchange.com/questions/73/what-r-packages-do-you-find-most-useful-in-your-daily-work), [I just installed the latest version of R. What packages should I obtain?](https://stats.stackexchange.com/questions/1676/i-just-installed-the-latest-version-of-r-what-packages-should-i-obtain).
null
CC BY-SA 2.5
null
2010-10-24T20:48:31.153
2010-10-24T20:48:31.153
2017-04-13T12:44:52.277
-1
930
null
3960
2
null
3958
1
null
[RapidMiner](http://rapidminer.com/) is a nice GUI based, workflow data mining tool. It's open source and runs on mac, linux, windows. I think R and RapidMiner will end up as the predominant tools, with R being for people that like command-line style linux-like work, and RapidMiner for Windows and Mac types that prefer GUI based work.
null
CC BY-SA 2.5
null
2010-10-24T21:15:53.470
2010-10-24T21:15:53.470
null
null
74
null
3961
1
3971
null
6
1051
Suppose you have an $n$-vector $X$. For a fixed real number, $r$ between $-1$ and $1$, can one generate a random permutation of the integers $1,2,\ldots,n$, call it $i_1,i_2,\ldots,i_n$ such that the vector $X$ and the vector $\tilde{X}$ defined by $\tilde{X_j} = X_{i_j}$ have expected sample correlation of $r$? I am looking for a process that generates such permutations. Without loss of generality, I believe one may assume $X$ has zero sample mean, and unit sample standard deviation.
Random permutation of a vector with a fixed expected sample correlation to the original?
CC BY-SA 3.0
null
2010-10-24T23:27:39.733
2017-02-15T22:50:52.443
2017-02-15T22:50:52.443
28666
795
[ "correlation", "monte-carlo", "combinatorics" ]
3963
1
3969
null
4
158
Below is part of the proof of a large deviations result. K is the cumulant generating function. Can anyone explain how the last step follows? [](https://i.stack.imgur.com/iHYax.png) This is page 157 of McCullagh's "Tensor Methods in Statistics"
Large deviations proof question
CC BY-SA 4.0
null
2010-10-25T03:31:56.330
2019-01-13T11:43:46.177
2019-01-13T11:43:46.177
79696
511
[ "probability", "mathematical-statistics" ]
3965
1
4001
null
3
2429
I am working with a set of distributions. I have so far analyzed several probability distributions with respect to each other, that means I have made a t-test, to see how probable it is that two events happen at the same time according to those distributions. That means I have computed: ``` let x1 and x2 be probability distributions prob(x1=Z, x2=Z), z be any value ``` That's what the t-test gave me, though I used something similar so the distributions can have different variances. I have now computed the overlap of both distributions with respect to each other. What I want to do now is look at the whole thing backwards. I want to look at a certain range and be able to give a probability that there is one or two of those occurrences. Perhaps another way to look at it is using dice. I have two dice with differing probability distributions (probably Gaussian for me, but let's assume one throws 6 more often, the other one 3 more often). The first way to look at it would be to tell whether both dice throw the same number. What I want to do is to tell if both dice throw a number in a certain range, for example 5 and 6. ``` let x1 and x2 be probability distributions prob(x1=Z, x2=Z), z be a value in a certain range ``` I think it would help if you could tell me what the name of the thing is that I am trying to do, so I can read up. But I would also be happy to have a solution that works out of the box. Any pointers on how I can compute this? Cheers and thanks for the advice. Edit: provided an example.
Overlapping probability distributions
CC BY-SA 2.5
null
2010-10-25T08:52:48.763
2015-03-26T22:53:30.063
2010-10-26T11:16:36.720
1516
1516
[ "distributions", "probability" ]
3966
2
null
3931
8
null
Sample variance can be thought of as the exact mean of the pairwise "energy" $(x_i-x_j)^2/2$ between all sample points. The definition of sample variance then becomes $$ s^2 = \frac{2}{n(n-1)}\sum_{i< j}\frac{(x_i-x_j)^2}{2} = \frac{1}{n-1}\sum_{i=1}^n(x_i-\bar{x})^2 .$$ This also agrees with defining the variance of a random variable as the expectation of the pairwise energy, i.e. let $X$ and $Y$ be independent random variables with the same distribution, then $$ V(X) = E\left(\frac{(X-Y)^2}{2}\right) = E((X-E(X))^2) . $$ To go from the random-variable definition of variance to the definition of sample variance is a matter of estimating an expectation by a mean, which can be justified by the philosophical principle of typicality: the sample is a typical representation of the distribution. (Note, this is related to, but not the same as, estimation by moments.)
null
CC BY-SA 2.5
null
2010-10-25T09:51:10.803
2010-10-25T09:51:10.803
null
null
null
null
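The identity in post 3966, that the sample variance equals the mean pairwise "energy" $(x_i-x_j)^2/2$ over all $n(n-1)/2$ pairs, is easy to verify numerically; a quick R check (my own, just confirming the formula):
```
set.seed(3)
x <- rnorm(10)
n <- length(x)

# Mean of (x_i - x_j)^2 / 2 over all pairs i < j
pairwise_energy <- mean(combn(n, 2, function(ij) (x[ij[1]] - x[ij[2]])^2 / 2))
c(pairwise = pairwise_energy, var_n_minus_1 = var(x))   # the two agree
```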
3967
1
null
null
17
1785
I'm a pharmacologist and, in my experience, almost all papers in basic biomedical research use Student's t-test (either to support inference or to conform to expectations...). A couple of years ago it came to my attention that Student's t-test is not the most efficient test that might be used: sequential tests offer much more power for any sample size, or a far smaller sample size on average for equivalent power. Sequential procedures of varying complexity are used in clinical research but I've never seen one used in a basic biomedical research publication. I note that they are also absent from the introductory level statistics textbooks that are all that most basic scientists are likely to see. My question is three-fold: - Given the very substantial efficiency advantage of sequential tests, why are they not more widely used? - Is there a drawback associated with the use of sequential methods that would mean that their use by non-statisticians is to be discouraged? - Are statistics students taught about sequential testing procedures?
Sequential hypothesis testing in basic science
CC BY-SA 2.5
null
2010-10-25T10:14:17.670
2022-11-23T09:52:15.920
2010-10-26T00:58:33.383
1679
1679
[ "hypothesis-testing", "teaching", "statistical-significance" ]
3969
2
null
3963
10
null
Think back to the proof of Chebychev's inequality, and you'll be home free. The RHS of the next-to-the-last equality is $P[\exp (\xi (X - x)) \geq 1]$. Now think of when $X$ has a PDF $f$ (but this is not required); the probability is $$ \int_{\exp (\xi (u - x))\geq 1} 1 \cdot f(u) \mathrm{d}u $$ Now, over the region of integration the quantity $\exp (\xi (u - x)) \geq 1$, so by monotonicity we get that the above integral is at most $$ \int_{\exp (\xi (u - x))\geq 1} \exp (\xi (u - x)) \cdot f(u) \mathrm{d}u $$ and since the integrand is nonnegative this last integral is at most $$ \int \exp (\xi (u - x)) \cdot f(u) \mathrm{d}u $$ which is exactly $\exp[ -\xi x]$ multiplied by the moment generating function of $X$, denoted $M(\xi)$. Finally, recognize $M = \exp (K)$ and we're finished. By the way, the infimum holds because the inequality is true for every $\xi >0$.
null
CC BY-SA 2.5
null
2010-10-25T14:33:53.893
2010-10-25T14:33:53.893
null
null
null
null
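The bound derived in post 3969 can be checked numerically for a standard normal, where $K(\xi) = \xi^2/2$ and the infimum over $\xi > 0$ of $K(\xi) - \xi x$ is attained at $\xi = x$, giving $\exp(-x^2/2)$. A small R comparison (my own illustration, not part of the proof):
```
x <- c(1, 2, 3)
chernoff_bound <- exp(-x^2 / 2)                 # exp(inf_xi {K(xi) - xi * x})
true_tail      <- pnorm(x, lower.tail = FALSE)  # P(X >= x) for X ~ N(0, 1)
rbind(true_tail, chernoff_bound)                # the bound sits above the true tail
```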
3970
2
null
3893
1
null
I guess this is an intuitive way to think about the relation between CV and causal inference: (please correct if I am wrong) I always think about CV as a way to evaluate the performance of a model in predictions. However, in causal inference we are more concerned with something equivalent to Occam's Razor (parsimony), hence CV won't help. Thanks.
null
CC BY-SA 2.5
null
2010-10-25T14:39:37.197
2010-10-25T14:39:37.197
null
null
1307
null
3971
2
null
3961
7
null
The answers are no, not for all $r$ in general; yes, for a restricted range of $r$ that is readily computed; but there remain a wide set of choices to be made. I will use a standard notation where the action of a permutation $\sigma$ is written $ X^\sigma_i = X_{\sigma (i)}$ and the set of all permutations of the $n$ coordinates is $S_n$. As you note in the question, upon standardizing $X$ it suffices to investigate $\mathbb{E}[{X^\sigma}'X]$. Because $X'X = 1$, a correlation of $r = 1$ is certainly attainable by means of the identity permutation $\epsilon$ (where $\epsilon(i) = i$ for all $i$). However, for any given $X$ there is a minimum attainable correlation: it is realized by associating the $k^\text{th}$ smallest component of $X^\sigma$ with the $k^\text{th}$ largest component of $X$. For example, with $X = (-2,1,1)/\sqrt{6}$ the smallest possible correlation of $-1/2$ is achieved by $X^\sigma = (1,1,-2)/\sqrt{6}$. Let's call this minimum correlation $r_{min}(X)$ and let $\sigma_{min}(X)$ be any permutation achieving this minimum value. Every possible expected correlation of value between $r_{min}(X)$ and $1$ is attainable by means of a distribution supported on just $\sigma_{min}$ and $\epsilon$. Specifically, set $$p = \frac{r - r_{min}}{1 - r_{min}}$$ and generate the permutation $\sigma_{min}$ with probability $1 - p$ and the permutation $\epsilon$ with probability $p$. (If $r_{min} = 1$ this formula is undefined but there's nothing to do anyway.) I suspect you would like a more "interesting" distribution of permutations than this. To create this you will need to add more conditions. Here's one way to frame your problem: to every permutation $\sigma$ corresponds the number $f(\sigma) = {X^\sigma}'X$. An arbitrary probability distribution over the permutations assigns a non-negative value $p(\sigma)$ to each permutation according to the axioms of probability. The expectation of $f$, which is the expected correlation between $X$ and $X^\sigma$, of course equals $$\sum_{\sigma \in S_n}{p(\sigma)f(\sigma)}.$$ Given a desired expected correlation $r$, you therefore have freedom to choose the $n!$ values $p(\sigma)$ subject to the conditions $$\sum_{\sigma \in S_n}{p(\sigma)} = 1,$$ $$\sum_{\sigma \in S_n}{p(\sigma)f(\sigma)} = r,$$ $$p(\sigma) \ge 0 \text{ for all } \sigma \in S_n.$$ I have merely demonstrated that this linear program is feasible if and only if $r_{min} \le r \le 1$. You are free to choose among the solutions (a convex set of distributions) in any way you like. For instance, you might prefer to use as uniform a choice of permutations as possible, in which case you might seek to minimize the variance of the $p(\sigma)$ (thought of just as a set of numbers) subject to the preceding conditions. That's a [quadratic program](http://en.wikipedia.org/wiki/Quadratic_programming), for which there are many good solution methods and much available software. Solving this (exactly) will become problematic once $n$ exceeds about $8$ or so, because it involves $n!$ variables and you'll just overwhelm the software. In such cases you might want to restrict the distributions further, such as requiring them to be only cyclic and anti-cyclic permutations of the sorted coordinates (just $2n$ variables). Another possibility is to choose a bunch of permutations randomly--making sure to include the order-reversing permutation among them so the minimum correlation can be included--and then finding an approximately uniform distribution among them.
null
CC BY-SA 2.5
null
2010-10-25T14:46:32.913
2010-10-25T14:52:57.817
2010-10-25T14:52:57.817
919
919
null
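The simplest construction in post 3971, mixing the order-reversing permutation (which attains $r_{min}$) with the identity, can be sketched directly in R. This only implements the two-point mixture, not the quadratic-programming refinement described later in the answer, and it assumes no ties in $X$:
```
permute_with_target_cor <- function(X, r, n_draws = 10000) {
  X <- scale(as.numeric(X))[, 1]            # standardise
  rev_perm <- order(X)[rank(-X)]            # pairs k-th largest with k-th smallest
  r_min <- cor(X, X[rev_perm])              # minimum attainable correlation
  stopifnot(r >= r_min, r <= 1)
  p <- (r - r_min) / (1 - r_min)            # weight on the identity permutation
  draws <- replicate(n_draws, if (runif(1) < p) X else X[rev_perm])
  mean(apply(draws, 2, cor, y = X))         # expected correlation, approximately r
}

permute_with_target_cor(rnorm(20), r = 0.3)
```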
3972
2
null
3857
2
null
I just came across this webpage hosted by ICPSR on [data management plans](http://www.icpsr.umich.edu/icpsrweb/ICPSR/dmp/index.jsp). Although I think the goals of ICPSR will be somewhat different than your business (e.g. they are heavily interested in making the data readily able to be disseminated without violating confidentiality), I imagine they have useful information to businesses. Particularly advice on creating metadata seems to me to be universal.
null
CC BY-SA 2.5
null
2010-10-25T14:59:22.130
2010-10-25T14:59:22.130
null
null
1036
null
3973
2
null
3857
2
null
In the case of much smaller scales, I have used Dropbox for sharing/syncing a copy of the data files (and scripts and results) with other researchers/collaborators (I wrote about it [here](http://www.r-statistics.com/2010/05/syncing-files-across-computers-using-dropbox/)). The other tool I have used is Google Docs for collecting and sharing data (about which I [wrote here](http://www.r-statistics.com/2010/03/google-spreadsheets-google-forms-r-easily-collecting-and-importing-data-for-analysis/)).
null
CC BY-SA 2.5
null
2010-10-25T15:54:27.227
2010-10-25T15:54:27.227
null
null
253
null
3974
1
null
null
6
587
I have (trivariate: multivariate with three variables) data that appears to be good empirical and reasonable theoretical fit for a (univariate) convolution of an exponential and a normal distribution (some times called exp-norm or exGauss distributions). My data are samples from the joint distribution: J(R,G,B) It appears that the marginals of R,G,B follow: $R=R_N+R_E$ with $R_N \sim N(\mu_R,\sigma^2_R)$ and $R_E \sim Exp(\lambda_R)$ (and likewise for G,B). I would like to effectively summarize the marginal, conditional, and joint distributions. The main purpose of the summary is to compare these distributions (marginal, conditional, and joint) with other distributions generated in a like manner. My difficulty is that (1) I don't know a form for the joint (or the conditional) distribution and (2) following from that, I have no parameters to estimate. Thus, I need either a distribution free (non-parametric) approach to working with this data or I need to figure out a multivariate distribution that matches the data and the form of the marginals. Or should I think about other options?
From Marginal Exp-Norm Distributions to What Conditionals and Joint?
CC BY-SA 2.5
null
2010-10-25T16:39:19.197
2013-06-28T22:51:39.283
2013-06-28T22:51:39.283
22468
1704
[ "distributions", "multivariable", "joint-distribution" ]
3975
2
null
3921
32
null
I like the centered count view. This particular version removes the neutral answers (effectively treating neutral and n/a as the same) to show only the amount of agree/disagree opinions. The 0 point is where red and blue meet. The count axis is clipped out. ![alt text](https://i.stack.imgur.com/3lV2o.png) For comparison, here are the same five responses as stacked percentages, showing both neutral (gray) and no answer (white). ![alt text](https://i.stack.imgur.com/A6Mmx.png) Update: Paper suggesting a similar method: [Plotting Likert and Other Rating Scales (PDF)](https://www.amstat.org/membersonly/proceedings/2011/papers/300784_64164.pdf)
null
CC BY-SA 3.0
null
2010-10-25T17:40:25.237
2013-08-11T20:36:17.530
2013-08-11T20:36:17.530
1191
1191
null
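Post 3975's centred-count view of Likert responses can be approximated with a diverging bar chart. A rough ggplot2 sketch; the percentages and the package choice are mine, the original figures came from other tooling:
```
library(ggplot2)

likert <- data.frame(
  item    = rep(paste("Item", 1:3), each = 2),
  side    = rep(c("agree", "disagree"), times = 3),
  percent = c(55, -20, 40, -35, 70, -10)   # disagree plotted as negative
)

ggplot(likert, aes(x = item, y = percent, fill = side)) +
  geom_col() +
  coord_flip() +
  geom_hline(yintercept = 0) +
  labs(y = "% disagreeing (left) vs agreeing (right)", x = NULL)
```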
3976
2
null
3974
4
null
Would [copulas](http://en.wikipedia.org/wiki/Copula_%28statistics%29) be any use here? I don't know enough about them, or your problem, to be sure.
null
CC BY-SA 2.5
null
2010-10-25T17:50:35.663
2010-10-25T17:50:35.663
null
null
449
null
3977
1
3978
null
0
198
M or MM? I prefer "M" -- it's shorter, but someone here was pushing for "MM".
How do you abbreviate "Millions"?
CC BY-SA 2.5
null
2010-10-25T20:17:25.713
2015-01-11T19:41:03.320
2015-01-11T19:41:03.320
28666
1531
[ "terminology", "abbreviation" ]
3978
2
null
3977
1
null
I would use M, as it is the SI prefix for million (mega-).
null
CC BY-SA 2.5
null
2010-10-25T21:13:34.243
2010-10-25T21:13:34.243
null
null
582
null
3979
2
null
3977
1
null
I would use "million", or 10^6, because M stands for molar (concentration in moles per litre) in my discipline.
null
CC BY-SA 2.5
null
2010-10-25T21:39:42.333
2010-10-25T21:39:42.333
null
null
1679
null
3980
1
4021
null
4
170
I have some data corresponding to changes in a binomial variable - i.e, in month 1 there were n1 trials and k1 successes, and in month 2 there were n2 trials and k2 successes. Say I have M of these cases, and in between month 1 and month 2 there were performed a number of different operations (so for case 1 we might have tried treatments a and b, and for case 2 b,c,and d), each of which could have increased or decreased the success rate. I would like to examine the effects of these treatments by regressing on the categorical covariates corresponding to the presence or absence of a,b,c,d,etc - what is the best way to go about this? I suppose I am looking for something analogous to a binomial ancova, but using change in the dependent variable.
What distribution to use to model changes in ratios?
CC BY-SA 2.5
null
2010-10-26T00:24:40.997
2010-10-27T14:04:15.370
2010-10-26T06:21:20.107
449
null
[ "time-series", "regression", "binomial-distribution", "ancova" ]
3981
2
null
3749
11
null
I think a few methods that can be used, but not designed specifically for you, are as follows: Modeling approaches: - Topic Models (used to find patterns in a set of documents and/or for information retrieval) a. The simplest one is LDA b. Dynamic topic models (IMHO, most suited for your case, without much domain knowledge) c. Correlated topic models (IMHO, if 2. is not good, it makes sense to try this) These approaches are not used in finance (not that I am aware of, as I don't work specifically in finance), but they have very general applicability. They use the latent variable formulation, which is very similar to that of HMMs. They have been shown to be state-of-the-art in topic modeling. You can watch a nice presentation by David Blei (great presenter, apart from his awesome!! research) here. The specific references, the slides for the presentation, and more complicated models can be accessed from his website. He is doing some great work which is very general, so it might not be surprising if he has already done something in finance. Another great reference in the same field is his advisor, Michael Jordan's, website. It's hard to find specific references there as he publishes so much! - Time series and sequential data models (HMM specifically) Apart from Jordan and Blei, the other prolific researcher is Zoubin Ghahramani (and his coauthor Beal). You can find here the specific HMM models that you require. A few impressive ones are: The Infinite Hidden Markov Model and Time-Sensitive Dirichlet Process Mixture Models. - Software There are R packages called lda and topicmodels for most of the "good" models. Blei and Ghahramani maintain C and Matlab code on their websites as well. Good luck!
null
CC BY-SA 2.5
null
2010-10-26T05:40:40.813
2010-10-26T06:19:55.590
2010-10-26T06:19:55.590
null
1307
null
3982
1
null
null
1
681
``` var w = 810, h = 400, mapMargin = 30; geo = pv.Geo.scale().range(w, h); var vis = new pv.Panel() .width(w) .height(h) .top(50) .bottom(30) .def("i", -1); var dot = vis.add(pv.Dot) .data(geoPopList) .left(function(d) {return geo(d.center).x}) .top(function(d) {return geo(d.center).y}) .radius(function(d) {return areaToRadius(d.speakers)}) .fillStyle(function(d){return col(d.speakers)}) .strokeStyle("#ffffff"); vis.render(); ``` I'm using above code to make Dot Chart in Protovis. How at add zoom and pan to this? I saw an example at [http://vis.stanford.edu/protovis/ex/transform.html](http://vis.stanford.edu/protovis/ex/transform.html). But problem with my code is how to transform on geo scale.
Adding Zoom & Pan for Protovis Dot Chart with GeoScale
CC BY-SA 2.5
null
2010-10-26T05:56:12.743
2010-10-26T13:52:10.313
2010-10-26T13:52:10.313
5
1706
[ "data-visualization", "protovis", "javascript" ]
3984
1
null
null
14
34332
I am running a logistic model. The actual model dataset has more than 100 variables but I am choosing a test data set in which there are around 25 variables. Before that I also made a dataset which had 8-9 variables. I am being told that AIC and SC values can be used to compare models. I observed that the model had higher SC values even when the variables had low p-values (ex. 0053). To my intuition, a model whose variables have good significance levels should result in low SC and AIC values. But that isn't happening. Can someone please clarify this? In short I want to ask the following questions: - Does the number of variables have anything to do with SC and AIC? - Should I concentrate on p-values or on low SC and AIC values? - What are the typical ways of reducing SC and AIC values?
Understanding AIC and Schwarz criterion
CC BY-SA 3.0
null
2010-10-26T07:28:19.323
2012-03-24T09:05:12.993
2012-03-24T09:05:12.993
930
1763
[ "model-selection", "logistic", "aic" ]
3986
2
null
3984
15
null
It is quite difficult to answer your question in a precise manner, but it seems to me you are comparing two criteria (information criteria and p-value) that don't give the same information. For all information criteria (AIC, or Schwarz criterion), the smaller they are the better the fit of your model is (from a statistical perspective) as they reflect a trade-off between the lack of fit and the number of parameters in the model; for example, the Akaike criterion reads $-2\log(\ell)+2k$, where $k$ is the number of parameters. However, unlike AIC, SC is consistent: the probability of choosing incorrectly a bigger model converges to 0 as the sample size increases. They are used for comparing models, but you can well observe a model with significant predictors that provide poor fit (large residual deviance). If you can achieve a different model with a lower AIC, this is suggestive of a poor model. And, if your sample size is large, $p$-values can still be low which doesn't give much information about model fit. At least, look if the AIC shows a significant decrease when comparing the model with an intercept only and the model with covariates. However, if your interest lies in finding the best subset of predictors, you definitively have to look at methods for variable selection. I would suggest to look at penalized regression, which allows to perform variable selection to avoid overfitting issues. This is discussed in Frank Harrell's Regression Modeling Strategies (p. 207 ff.), or Moons et al., [Penalized maximum likelihood estimation to directly adjust diagnostic and prognostic prediction models for overoptimism: a clinical example](http://www.jclinepi.com/article/S0895-4356%2804%2900171-4/abstract), J Clin Epid (2004) 57(12). See also the [Design](http://cran.r-project.org/web/packages/Design/index.html) (`lrm`) and [stepPlr](http://cran.r-project.org/web/packages/stepPlr/index.html) (`step.plr`) R packages, or the [penalized](http://cran.r-project.org/web/packages/penalized/index.html) package. You may browse related questions on variable selection on this SE.
null
CC BY-SA 2.5
null
2010-10-26T08:15:47.120
2010-10-27T09:44:59.743
2010-10-27T09:44:59.743
930
930
null
3987
2
null
3967
5
null
I don't know much about sequential tests and their application outside of interim analysis (Jennison and Turnbull, 2000) and computerized adaptive testing (van der Linden and Glas, 2010). One exception is some fMRI studies that are associated with large costs and difficulty in enrolling subjects. Basically, in this case sequential testing primarily aims at stopping the experiment earlier. So, I am not surprised that these very tailored approaches are not taught in usual statistics classes. Sequential tests are not without their pitfalls, though (type I and II errors have to be specified in advance, the choice of the stopping rule and multiple looks at the results should be justified, p-values are not uniformly distributed under the null as in a fixed-sample design, etc.). In most designs, we work with a pre-specified experimental setting, or a preliminary power study was carried out to optimize some kind of cost-effectiveness criterion, in which case standard testing procedures apply. I found, however, the following paper from Maik Dierkes about fixed vs. open sample designs very interesting: [A claim for sequential designs of experiments](https://web.archive.org/web/20141222173202/http://www.wiwi.uni-muenster.de/fcm/downloads/forschen/2008_A_claim_for_sequential_designs_of_experiments.pdf).
null
CC BY-SA 4.0
null
2010-10-26T09:13:52.530
2022-11-23T09:52:15.920
2022-11-23T09:52:15.920
362671
930
null
3989
1
3992
null
13
5373
Let $Q_n = C_n.\{|X_i-X_j|;i < j\}_{(k)}$, so for a very short sample like $\{1,3,6,2,7,5\}$ it can be calculated by finding the $k$th order statistic of the pairwise differences:
```
  7 6 5 3 2 1
1 6 5 4 2 1
2 5 4 3 1
3 4 3 2
5 2 1
6 1
7
```
$h=[n/2]+1=4$
$k=h(h-1)/2=6$
Thus $Q_n=C_n \cdot 2$. Obviously, for large samples, say consisting of 80,000 records, we need very large memory. Is there any way to calculate $Q_n$ in 1D space instead of 2D? A link to the answer: [ftp://ftp.win.ua.ac.be/pub/preprints/92/Timeff92.pdf](ftp://ftp.win.ua.ac.be/pub/preprints/92/Timeff92.pdf), although I cannot fully understand it.
How to calculate Rousseeuw’s and Croux’ (1993) Qn scale estimator for large samples?
CC BY-SA 3.0
null
2010-10-26T09:34:08.073
2019-12-02T14:52:47.783
2016-03-16T02:51:58.517
805
1637
[ "data-transformation", "scales", "robust", "optimal-scaling" ]
3992
2
null
3989
15
null
Update: The crux of the problem is that in order to achieve the $O(n\log(n))$ time complexity, one needs in the order of $O(n)$ storage.

---

No, $O(n\log(n))$ is the lower theoretical bound for the time complexity of (see (1)) selecting the $k^{th}$ element among all $\frac{n(n-1)}{2}$ possible $|x_i - x_j|: 1 \leq i \lt j \leq n$. You can get $O(1)$ space, but only by naively checking all combinations of $x_i-x_j$ in time $O(n^2)$.

The good news is that you can use the $\tau$ estimator of scale (see (2) and (3) for an improved version and some timing comparisons), implemented in the function `scaleTau2()` in the `R` package [robustbase](http://cran.r-project.org/web/packages/robustbase/index.html). The univariate $\tau$ estimator is a two-step (i.e. re-weighted) estimator of scale. It has 95 percent Gaussian efficiency, 50 percent breakdown point, and complexity of $O(n)$ time and $O(1)$ space (plus it can easily be made 'online', shaving off half the computational costs in repeated use -- although you will have to dig into the `R` code to implement this option, it is rather straightforward to do).

1. The complexity of selection and ranking in X + Y and matrices with sorted columns. G. N. Frederickson and D. B. Johnson, Journal of Computer and System Sciences, Volume 24, Issue 2, April 1982, Pages 197-208.
2. Yohai, V. and Zamar, R. (1988). High breakdown point estimates of regression by means of the minimization of an efficient scale. Journal of the American Statistical Association 83, 406–413.
3. Maronna, R. and Zamar, R. (2002). Robust estimates of location and dispersion for high-dimensional data sets. Technometrics 44, 307–317.

Edit

To use this:

- Fire up R (it's free and can be downloaded from here)
- Install the package by typing:
```
install.packages("robustbase")
```
- Load the package by typing:
```
library("robustbase")
```
- Load your data file and run the function:
```
mydatavector <- read.table("address to my file in text format", header=T)
scaleTau2(mydatavector)
```
null
CC BY-SA 4.0
null
2010-10-26T10:20:01.583
2019-12-02T14:52:47.783
2019-12-02T14:52:47.783
66885
603
null
3993
2
null
3893
13
null
It seems to me that your question more generally addresses different flavour of validation for a predictive model: Cross-validation has somewhat more to do with internal validity, or at least the initial modelling stage, whereas drawing causal links on a wider population is more related to external validity. By that (and as an update following @Brett's nice remark), I mean that we usually build a model on a working sample, assuming an hypothetical conceptual model (i.e. we specify the relationships between predictors and the outcome(s) of interest), and we try to obtain reliable estimates with a minimal classification error rate or a minimal prediction error. Hopefully, the better the model performs, the better it will allow us to predict outcome(s) on unseen data; still, CV doesn't tell anything about the "validity" or adequacy of the hypothesized causal links. We could certainly achieve decent results with a model where some moderation and/or mediation effects are neglected or simply not known in advance. My point is that whatever the method you use to validate your model (and holdout method is certainly not the best one, but still it is widely used in epidemiological study to alleviate the problems arising from stepwise model building), you work with the same sample (which we assume is representative of a larger population). On the contrary, generalizing the results and the causal links inferred this way to new samples or a plausibly related population is usually done by replication studies. This ensures that we can safely test the predictive ability of our model in a "superpopulation" which features a larger range of individual variations and may exhibit other potential factors of interest. Your model might provide valid predictions for your working sample, and it includes all potential confounders you may have think of; however, it is possible that it will not perform as well with new data, just because other factors appear in the intervening causal path that were not identified when building the initial model. This may happen if some of the predictors and the causal links inferred therefrom depend on the particular trial centre where patients were recruited, for example. In genetic epidemiology, many [genome-wide association studies](http://en.wikipedia.org/wiki/Genome-wide_association_study) fail to replicate just because we are trying to model complex diseases with an oversimplified view on causal relationships between DNA markers and the observed phenotype, while it is very likely that gene-gene (epistasis), gene-diseases (pleiotropy), gene-environment, and population substructure all come into play, but see for example [Validating, augmenting and refining genome-wide association signals](http://www.nature.com/nrg/journal/v10/n5/abs/nrg2544.html) (Ioannidis et al., Nature Reviews Genetics, 2009 10). So, we can build-up a performant model to account for the observed cross-variations between a set of genetic markers (with very low and sparse effect size) and a multivariate pattern of observed phenotypes (e.g., volume of white/gray matter or localized activities in the brain as observed through fMRI, responses to neuropsychological assessment or personality inventory), still it won't perform as expected on an independent sample. As for a general reference on this topic, can recommend chapter 17 and Part III of [Clinical Prediction Models](http://www.clinicalpredictionmodels.org/), from EW Steyerberg (Springer, 2009). 
I also like the following article from Ioannidis:

> Ioannidis, JPA, Why Most Published Research Findings Are False. PLoS Med. 2005 2(8): e124
null
CC BY-SA 2.5
null
2010-10-26T10:20:31.147
2010-10-27T20:34:55.987
2010-10-27T20:34:55.987
930
930
null
3997
1
4022
null
8
29971
I have a dataset with a very simple logfile-like structure. I want to subset the data according to date ranges but can only do so on one parameter. My data looks like this:
```
            date_time loc_id node  energy   kgco2
1 2009-02-27 00:11:08     87  103 0.00000 0.00000
2 2009-02-27 01:05:05     87  103 7.00000 3.75900
3 2009-02-27 02:05:05     87  103 6.40039 3.43701
4 2009-02-27 03:05:05     87  103 4.79883 2.57697
5 2009-02-27 04:05:05     87  103 4.10156 2.20254
6 2009-02-27 05:05:05     87  103 2.59961 1.39599
```
The file includes data for a whole year; I want to create summary plots for every month and perhaps week. I am processing the date_time as follows:
```
> dt <- as.POSIXlt(ae$date_time)
> ae$dt <- dt
> names(ae$dt)
[1] "sec"   "min"   "hour"  "mday"  "mon"   "year"  "wday"  "yday"  "isdst"
```
Now I'm trying to subset the data as:
```
> x <- ae$energy[ae$dt$year=="110" & ae$dt$mon=="10"]
> x
numeric(0)
```
"110" is because of the following:
```
> range(ae$dt$year)
[1] 109 110
```
I have also tried the following with no luck:
```
> d <- subset(ae, (dt$year=="110" & dt$mon=="10"), select=energy)
```
These, however, do work:
```
> d <- subset(ae, dt$year=="110", select=energy)
```
and so does this:
```
> d <- subset(ae, dt$mon=="10", select=energy)
```
Any ideas on how I can subset by selecting both year and month? Thanks,
Subsetting a data-frame in R based on dates
CC BY-SA 2.5
null
2010-10-26T15:10:55.440
2010-10-27T14:34:10.650
2010-10-26T15:22:53.140
449
1716
[ "time-series", "r" ]
3998
2
null
3997
4
null
A few points:
- I'm not sure why that's happening. Clearly the POSIXlt slots are wrong. I typically use POSIXct unless I absolutely need to adjust the slots.
- One option is to use the dates directly rather than messing with the slots, and use <= and >= to subset. Something like ae[ae$date >= as.POSIXlt("2009-10-01") & ae$date < as.POSIXlt("2009-11-01"),]
- You should consider using a time series for this, since that's the exact purpose of that data structure (and time-series classes provide many useful functions for dealing with data over time). One of the most common is zoo; xts also includes a number of functions that can help with this kind of thing (see the sketch below).
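To sketch the time-series route (my addition, not part of the original answer; column names follow the question, but the code is only illustrative):
```
library(xts)
# assuming ae$date_time is (or can be converted to) POSIXct
ae.xts <- ae.xts <- xts(ae$energy, order.by = as.POSIXct(ae$date_time))
oct <- ae.xts["2009-10"]                      # everything in October 2009
monthly.means <- apply.monthly(ae.xts, mean)  # monthly summaries for plotting
```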
null
CC BY-SA 2.5
null
2010-10-26T15:35:24.063
2010-10-26T15:35:24.063
null
null
5
null
3999
1
4003
null
5
28590
I have 5 categories, and each category is divided into the subcategories low, medium and high. An object can belong to one or more of these categories, with a number between 1 and 100 in each subcategory, but the sum for each category cannot exceed 100. Is there a way to summarise this into one single number? Any hints or directions are very welcome.
How to create an index
CC BY-SA 2.5
null
2010-10-26T15:39:43.867
2013-03-14T06:46:42.387
null
null
791
[ "descriptive-statistics" ]
4001
2
null
3965
4
null
I can't make sense of any of the statements in the question, but you're asking for terminology and that request at least I can understand and appreciate, so here's some to get you started. I will italicize technical terms and key concepts to draw them to your attention. The upward face of a die after it is thrown or the side of a coin that shows after it is flipped are outcomes. The assignment of numbers to outcomes, such as labeling the faces of a die with 1, 2, ..., 6 or the faces of a coin 0 and 1, is called a random variable. A range (or, generally, a set) of values for which we would like to know a probability is called an event. With two dice you have two random quantities and are using two random variables to analyze them. This is a multivariate setting. Now the outcomes consist of ordered pairs: one outcome for the first variable, one outcome for the second. These outcomes have joint probabilities. The most fundamental issue concerns whether the outcomes of one are somehow connected with the outcomes of the other. When they are not, the two variables are independent. Throws of two dice are usually physically unrelated, so we expect them to be independent. If you are using dice as a model for some other phenomenon, though, watch out! You need to determine whether it is appropriate to treat your variables as independent. With two independent variables $X$ and $Y$, the probabilities multiply. Specifically, let $E$ be an event for $X$ (that is, a set of numbers that $X$ might attain) and let $F$ be an event for $Y$. Then the joint probability of $E$ and $F$, meaning the probability that $X$ has a value in $E$ (written $\Pr_X[E]$) and $Y$ has a value in $F$, equals the product of the probabilities, $\Pr_X[E] \quad \Pr_Y[F]$. When the variables are not independent, the joint probability of $E$ and $F$ may differ (considerably) from this product. It sounds to me like you are asking about either the theoretical computation of joint probabilities or about how to estimate joint probabilities from observations. An excellent place to get up to speed quickly on all this stuff, as well as sort out what t-tests really do, what probability distributions are, and what "gaussian" really means, is to read through Gonick & Smith's [A Cartoon Guide to Statistics](http://rads.stackoverflow.com/amzn/click/0062731025) (seriously!).
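A quick worked illustration of the multiplication rule above (my addition): with two fair dice, let $E$ be "the first die shows 6" and $F$ be "the second die shows 6". If the throws are independent, then $\Pr[E \text{ and } F] = \Pr_X[E]\,\Pr_Y[F] = \frac{1}{6}\cdot\frac{1}{6} = \frac{1}{36}$; without independence, the joint probability could be anything between $0$ and $\frac{1}{6}$.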
null
CC BY-SA 2.5
null
2010-10-26T17:17:14.607
2010-10-26T17:17:14.607
null
null
919
null
4002
2
null
3907
1
null
Since the individuals were born with the 'genetic change', I would use their birth as the starting time instead of the time at which they entered the registry. The following is my reasoning: First, ignore the effect of other variables on survival, such as gender, income levels, exercise levels etc. For the sake of illustration we will assume that these variables have no differential impact on the time at which they have a cardio event. Second, I am assuming that you wish to investigate the following question: Does the difference in gene types (broadly speaking) result in a differential impact on the time it takes for a person to have a cardio event? I suppose that there is an underlying theory which states that the answer to the above question is 'yes'. Now, consider two individuals, both of whom had the gene change. One enters the registry just before a cardio event and the other has a cardio event several years after entering the registry. However, for the sake of discussion, assume that the ages of both people are the same. Thus, the time to cardio event is technically the same (their age). However, if you use the starting time as the time at which they enter the registry, you would draw different conclusions about how long it takes for the cardio event to happen, which would bias your results. I am assuming that my interpretation of your goals and the situation is correct. Please correct me if I am wrong about some aspect.
null
CC BY-SA 2.5
null
2010-10-26T18:18:51.977
2010-10-26T18:29:36.550
2010-10-26T18:29:36.550
null
null
null
4003
2
null
3999
6
null
Your solution should depend on how you plan to use the information. If, for instance, you intend to use these data as potential explanatory variables in a model, then you are better off without an index, because that is likely to cause a loss of explanatory power. Just use the original variables. If, on the other hand, you would like to make a map to portray degrees of urbanization in a simple manner, then an index makes sense. What remains in the second case is to make "urbanization" operational. Suppose, for the sake of exploring this issue, that instead of using the word "urbanization" you used some term that was completely unintelligible to me. How would I go about finding out what it meant? There is nothing in the data that will reveal the answer. What you need is either a quantitative definition of urbanization in terms of these 15 variables or else you need a 16th variable that correctly captures the degree of urbanization in some "test" or "calibration" cases. Then you could statistically explore the correlations among the 15 original variables and the degree of urbanization with the aim of finding a combination of those 15 that is reliably associated with urbanization. This can be done using [canonical correlation analysis](http://en.wikipedia.org/wiki/Canonical_correlation). An alternative that (I suspect) is frequently used is just to dodge these fundamental considerations and make up an answer. Some people invent "weights" for each of the categories, form the weighted sum of the values, and declare that value to be whatever they would like it to be, whether it's "urbanization" or "environmental impact" or whatever. The problems of that approach in an objective or scientific context ought to be obvious, but it is an easy solution. An intermediate approach remains agnostic about what "urbanization" might mean and merely seeks a parsimonious description of the variables you have. In a special situation they might all be a mixture of two extremes, allowing the vector of 15 values to be described by a single parameter (the mixture proportions). You could take that parameter to be an "index" of the data and then go on to explore the extent to which it seems to agree with your idea of urbanization. This approach is carried out using [Principal Components Analysis](http://en.wikipedia.org/wiki/Principal_component_analysis) (PCA) or [Factor Analysis](http://en.wikipedia.org/wiki/Factor_analysis). All these methods (canonical correlation, PCA, FA) are typically available in full-featured statistical software. PCA has been discussed at length on [this site](https://stats.stackexchange.com/q/2691/919): explore the "PCA" tag for more information.
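If the PCA route fits your purpose, here is a minimal R sketch (my addition; `vars` is a hypothetical data frame holding the 15 subcategory values):
```
pc <- prcomp(vars, scale. = TRUE)  # standardize, then extract principal components
index <- pc$x[, 1]                 # scores on the first component: a candidate one-number index
summary(pc)                        # how much of the total variance that component captures
```
Whether that first component deserves to be called an "index" of anything still depends on the substantive considerations discussed above.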
null
CC BY-SA 2.5
null
2010-10-26T20:57:01.053
2010-10-26T20:57:01.053
2017-04-13T12:44:41.493
-1
919
null
4004
1
null
null
3
1458
Does anyone know of a step-by-step guide for the practical implementation of Gifi Methods for Optimal Scaling in R: The Package homals? Although I have an OK theoretical understanding (thanks chl for directing me to articles), I'm a tech novice and some of the language/technical information is a bit too complicated for me. On that note, can I do the same thing within SPSS??
How to understand optimal Scaling in R: The Package homals for novices
CC BY-SA 2.5
null
2010-10-26T21:40:51.110
2021-12-09T17:39:05.927
2021-12-09T17:39:05.927
11887
1647
[ "r", "pca", "spss", "ordinal-data", "optimal-scaling" ]
4005
1
null
null
2
388
I am trying to work out the variance of the population estimate, var(R), where R = X/Y (X = sum(x) and Y = sum(y)). For each member of the population I know y, and I have stratified the population and taken a random sample of each stratum. For each member of the sample I need to estimate x. x is not known with certainty, and the estimate of x has its own standard error (which I can estimate from a pre-sample calibration exercise). However, while I know enough to know that this error should be taken into account, my knowledge of sample statistics is limited to the situation where x is a quantity that can be exactly determined. If anyone can help with an answer or point me to an appropriate reference source it would be appreciated.
Stratified Random Sampling of Ratio (X/Y) where X itself is an estimate
CC BY-SA 2.5
null
2010-10-26T22:58:33.633
2010-10-27T01:36:31.307
null
null
null
[ "standard-error", "sampling" ]
4006
2
null
4004
2
null
Check out homals [vignette](http://cran.r-project.org/web/packages/homals/vignettes/homals.pdf).
null
CC BY-SA 2.5
null
2010-10-26T23:17:16.707
2010-10-26T23:17:16.707
null
null
null
null
4007
2
null
4004
3
null
With regards to SPSS, I believe you need the [Categories add-on module](http://www.spss.com/software/statistics/categories/). - SPSS White Paper has further information including lots of references. - PPT presentation by Richard Bell
null
CC BY-SA 2.5
null
2010-10-27T00:46:03.703
2010-10-27T07:48:02.677
2010-10-27T07:48:02.677
183
183
null
4008
2
null
4005
2
null
I am going to translate what you said into math and answer your question. If my translation into math is not correct, please clarify via the comments and I will edit my response as appropriate. You have:

$R = \frac{\sum_i{X_i}}{\sum_i{Y_i}}$

where $i = 1, 2, \ldots, n$ indexes members of the population, $Y_i$ is a constant, and $X_i$ is a random variable with an estimated $Var(X_i)$ (this is what you call the standard error). If we assume the $X_i$ are independent with a common variance, then $Var(\sum_i X_i) = \sum_i Var(X_i) = n \, Var(X_i)$, and thus

$Var(R) = \frac{n \, Var(X_i)}{(\sum_i{Y_i})^2}$
null
CC BY-SA 2.5
null
2010-10-27T01:36:31.307
2010-10-27T01:36:31.307
null
null
null
null
4009
1
null
null
6
11467
### Initial Question: I am trying to calculate inter-rater reliability. Previous researchers in this area have used intraclass correlation. SPSS has options for two-way random, mixed random and one-way random models. SPSS help says choose the right one based on whether the 'people effects are random' and 'item effects are random', can anybody please explain to me what these terms mean? Also I wanted to know how to structure my data file - I think I need to put each raters' rating of each item in a separate column (rather than each rater on a separate row), is this right? ### UPDATED question: Thanks to the help I have received, I have visited the websites, which were useful. I have decided that I need to use the mixed model, and I have transposed my files so that each rater is a column. ### Additivity: The websites also mentioned some assumptions, one of which is additivity, tested with Tukey's test of non-additivity. Apparently, this tests the hypothesis that there is no multiplicative interaction between the cases and the items. I think maybe I have too many variables in my calculation, because the examples others use have a lot less, e.g. each rater making only one rating, as described below, my raters have made many more ratings than that. - What does the additivity assumption mean? - What should be done when the test of non-additivity is statistically significant? ### Inter-rater reliability with multiple objects and properties I will explain in more detail what I have done. I have five raters and they have rated facial and vocal expressions of participants watching different emotional films. Raters make 18 ratings per film, and there are 16 films, so there is a total of 288 variables per rater, per participant rated. Each rater has rated the same 4 participants each, and I have placed these participants into different files. I thought about making seperate files for each of the 16 films, but that would be so many files. - How should I calculate ICC and organise my files in this specific situation? - Should I calculate a mean of individual ICCs or perhaps select a random sample?
Inter-rater reliability using Intra-class correlation with ratings for multiple objects on multiple properties
CC BY-SA 3.0
null
2010-10-27T02:29:31.147
2016-07-05T14:35:54.150
2011-08-24T22:48:45.273
5739
2025
[ "spss", "reliability", "agreement-statistics", "intraclass-correlation" ]
4010
1
null
null
1
260
I have a sequence of numbers, e.g. $x_1, x_2,\dots , x_N$, ($0\leq x_i\leq 1$), which may represent the correlations between a couple of measurements. I want to set a threshold for this sequence, such that the $x_i$ greater than the threshold differ significantly from the $x_j$ smaller than the threshold. Or, in another form, assuming these values are obtained from an unknown distribution, I want to determine a threshold such that $\Pr(x_i > \text{threshold}) = 0.1$. Can any non-parametric method determine this threshold?
Set a threshold for a sequence
CC BY-SA 2.5
null
2010-10-27T04:05:47.197
2023-04-04T08:00:30.070
2010-10-27T09:21:47.860
930
null
[ "correlation", "nonparametric" ]
4011
2
null
3907
0
null
@Srikant you lost me on your last comment, but I agree with Whuber that by using birth as the start time you are distorting your sample, as it doesn't take into account people with that gene change who already had the event or died from it. On top of that, once people enter the registry there is a chance that they change their behaviour to compensate for the higher risk. I suggest you use entry into the registry as the start time and age (at entry) as a covariate, or age at event as a time-dependent covariate. Let me know how you go.
null
CC BY-SA 2.5
null
2010-10-27T04:32:45.037
2010-10-27T04:32:45.037
null
null
10229
null
4012
2
null
3907
2
null
You need to be careful to distinguish two different "start times" in studies such as this: - The origin for the time variable, i.e. the point which you're calling t=0 for each participant - The time at which an individual enters the study, i.e. the time from which you would record an event if one happened In the simple cases first taught in survival analysis, these times are assumed to be the same. For long-term cohort studies, it's usually much better to allow them to differ. The most suitable time origin for cohort studies of chronic diseases (such as cardiovascular disease here) is usually date of birth, as Srikant suggests above. That's because for chronic diseases the baseline hazard varies strongly with age. But (unless it's a birth cohort) individuals don't enter the study when they are born. If they have an event before they enter the study you wouldn't record this, and that could cause bias if you don't handle it properly by distinguishing entry time from time origin. This is sometimes called delayed-entry.
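An illustrative sketch of this setup in R (my addition; variable names are hypothetical), using age as the time scale with delayed entry via the counting-process form of `Surv()`:
```
library(survival)
# age_at_entry: age when the person entered the registry (delayed entry / left truncation)
# age_at_exit:  age at the event or at censoring
fit <- coxph(Surv(age_at_entry, age_at_exit, event) ~ gene_change + sex,
             data = registry)
summary(fit)
```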
null
CC BY-SA 2.5
null
2010-10-27T06:40:00.510
2010-10-27T06:40:00.510
null
null
449
null
4013
1
4015
null
20
10365
I've read through [this thread](https://stats.stackexchange.com/questions/665/whats-the-difference-between-probability-and-statistics), and it looks to me like it can be said that: - statistics = induction ? - probability = deduction ? But I am wondering if there might be some more details on the comparison that I am missing. For example, is statistics equal to induction, or is it just a particular case of it? It does seem that probability is a sub case of deduction (since it is a sub case of mathematical thinking). I know this is a picky question, but in a sense this is why I am asking it - because I want to be sure how these terms can be compared accurately.
Can you say that statistics and probability is like induction and deduction?
CC BY-SA 2.5
null
2010-10-27T07:47:44.047
2020-03-17T19:18:40.160
2017-04-13T12:44:45.783
-1
253
[ "probability", "mathematical-statistics" ]
4014
2
null
4013
3
null
Yes! Maybe statistics isn't strictly equal to induction, but statistics is the solution to the [problem of induction](http://en.wikipedia.org/wiki/Problem_of_induction) in my opinion.
null
CC BY-SA 2.5
null
2010-10-27T08:18:14.943
2010-10-27T08:18:14.943
null
null
449
null
4015
2
null
4013
18
null
I think it is the best to quickly recap the meaning of inductive and deductive reasoning before answering your question. - Deductive Reasoning: "Deductive arguments are attempts to show that a conclusion necessarily follows from a set of premises. A deductive argument is valid if the conclusion does follow necessarily from the premises, i.e., if the conclusion must be true provided that the premises are true. A deductive argument is sound if it is valid and its premises are true. Deductive arguments are valid or invalid, sound or unsound, but are never false or true." (quoted from wikipedia, emphasis added). - "Inductive reasoning, also known as induction or inductive logic, or educated guess in colloquial English, is a kind of reasoning that allows for the possibility that the conclusion is false even where all of the premises are true. The premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they do not ensure its truth." (from wikipedia, emphasis added) To stress the main difference: Whereas deductive reasoning transfers the truth from premises to conclusions, inductive reasoning does not. That is, whereas for deductive reasoning you never broaden your knowledge (i.e., everything is in the premises, but sometimes hidden and needs to be demonstrated via proofs), inductive reasoning allows you to broaden your knowledge (i.e., you may gain new insights that are not already contained in the premises, however, for the cost of not knowing their truth). How does this relate to probability and statistics? In my eyes, probability is necessarily deductive. It is a branch of math. So based on some axioms or ideas (supposedly true ones) it deduces theories. However, statistics is not necessarily inductive. Only if you try to use it for generating knowledge about unobserved entities (i.e., pursuing inferential statistics, see also onestop's answer). However, if you use statistics to describe the sample (i.e., decriptive statistics) or if you sampled the whole population, it is still deductive as you do not get any more knowledge or information as that is already present in the sample. So, if you think about statistics as being the heroic endeavor of scientists trying to use mathematical methods to find regularities that govern the interplay of the empirical entities in the world, which is in fact never successful (i.e., we will never really know if any of our theories is true), then, yeah, this is induction. It's also the Scientific Method as articulated by Francis Bacon, upon which modern empirical science is founded. The method leads to inductive conclusions which are at best highly probable, though not certain. This in turn leads to misunderstanding among non-scientists about the meaning of a scientific theory and a scientific proof. --- Update: After reading Conjugate Prior's answer (and after some overnight thinking) I would like to add something. I think the question on whether or not (inferential) statistical reasoning is deductive or inductive depends on what exactly it is that you are interested in, i.e., what kind of conclusion you are striving for. If you are interested in probabilistic conclusions, then statistical reasoning is deductive. This means, if you want to know if e.g., in 95 out of 100 cases the population value is within a certain interval (i.e., confidence interval) , then you can get a truth value (true or not true) for this statement. 
You can say (if the assumptions are true) that in 95 out of 100 cases the population value is within the interval. However, in no empirical case will you know if the population value is in your obtained CI. Either it is or it isn't, but there is no way to be sure. The same reasoning applies to probabilities in classical p-value and Bayesian statistics. You can be sure about probabilities. However, if you are interested in conclusions about empirical entities (e.g., where the population value is), you can only argue inductively. You can use all available statistical methods to accumulate evidence that supports certain propositions about empirical entities or the causal mechanisms with which they interact. But you will never be certain about any of these propositions. To recap: the point I want to make is that it matters what you are looking at. Probabilities you can deduce, but for any definite proposition about things you can only find evidence in its favor. Not more.
null
CC BY-SA 3.0
null
2010-10-27T08:28:20.517
2016-03-19T22:34:10.047
2016-03-19T22:34:10.047
109158
442
null
4016
2
null
4009
10
null
These are distinct ways of accounting for raters or items variance in overall variance, following [Shrout and Fleiss (1979)](http://www3.uta.edu/faculty/ricard/COED/Shrout%20&%20Fleiss%20(1979)%20Intraclass%20correlations.pdf) (cases 1 to 3 in Table 1): - One-way random effects model: raters are considered as sampled from a larger pool of potential raters, hence they are treated as random effects; the ICC is then interpreted as the % of total variance accounted for by subjects/items variance. This is called the consistency ICC. - Two-way random effects model: both factors -- raters and items/subjects -- are viewed as random effects, and we have two variance components (or mean squares) in addition to the residual variance; we further assume that raters assess all items/subjects; the ICC gives in this case the % of variance attributable to raters + items/subjects. - Two-way mixed model: contrary to the one-way approach, here raters are considered as fixed effects (no generalization beyond the sample at hand) but items/subjects are treated as random effects; the unit of analysis may be the individual or the average ratings. I would say raters have to be entered as columns, although I'm not a specialist of SPSS. Dave Garson's [dedicated website](http://faculty.chass.ncsu.edu/garson/PA765/reliab.htm) is worth looking at for those working with SPSS. There is also a complete on-line tutorial on [reliability analysis](http://www.nyu.edu/its/statistics/Docs/intracls.html) (Robert A. Yaffee) [[archived version](http://archive.is/sT5Tj)]. For theoretical consideration about the mixed-effect approach, please consider reading my answer to this related question: [Reliability in Elicitation Exercise](https://stats.stackexchange.com/questions/1015/reliability-in-elicitation-exercise/3667#3667).
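Purely as an illustration (my addition, not part of the original answer), the three cases above can be computed in R with the `irr` package, with raters in columns and subjects in rows; `ratings` is a hypothetical subjects × raters matrix:
```
library(irr)
icc(ratings, model = "oneway", type = "consistency", unit = "single")  # roughly case 1
icc(ratings, model = "twoway", type = "agreement",   unit = "single")  # roughly case 2
icc(ratings, model = "twoway", type = "consistency", unit = "single")  # roughly case 3
```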
null
CC BY-SA 3.0
null
2010-10-27T09:13:38.807
2015-04-21T22:45:53.847
2017-04-13T12:44:23.203
-1
930
null
4017
1
4018
null
10
11258
I would like to learn to use Rcpp. I went through the docs on the package's CRAN website, but I feel working on a practical example (a second practical one, after convolve3) would be more helpful. I propose the following code, from the robustbase package, because it's neither too long nor too short, uses a combination of R types and R functions, and has one of those small arithmetic iterations that are too slow in R. How would you go about Rcpp-ing it?
```
scaleTau2 <- function(x, c1 = 4.5, c2 = 3, consistency = TRUE, mu.too = FALSE) {
    n <- length(x)
    medx <- median(x)
    x. <- abs(x - medx)
    sigma0 <- median(x.)
    mu <- if (c1 > 0) {
        x. <- x./(sigma0 * c1)
        w <- 1 - x. * x.
        w <- ((abs(w) + w)/2)^2
        sum(x * w)/sum(w)
    } else medx
    x <- (x - mu)/sigma0
    rho <- x^2
    rho[rho > c2^2] <- c2^2
    if (!identical(consistency, FALSE)) {
        Erho <- function(b) 2*((1 - b^2)*pnorm(b) - b*dnorm(b) + b^2) - 1
        Es2 <- function(c2) Erho(c2*qnorm(3/4))
        nEs2 <- ifelse(consistency == "finiteSample", n - 2, n)*Es2(c2)
    } else nEs2 <- n
    c(if (mu.too) mu, sigma0 * sqrt(sum(rho)/nEs2))
}
```
Please explain as much as you can.

EDIT: It's really the idea of a step-by-step explanation of how you would go about converting well-written (and documented) R code (so at least the foundations are okay) into an efficient implementation. The choice of the code is arguably a bit random, but I think it reflects the archetypal script on our blueprints (calls R functions that one doesn't want to translate, uses arithmetic loops, ...).

EDIT 2: From the comments I realize this may actually be a big job in C++ (I didn't realize it when posting the code). In light of this, using individual pieces as pedagogical devices is OK. I'll eventually parse the pieces together by editing the question.
Translate R to C++ (eventually with Rcpp)
CC BY-SA 2.5
null
2010-10-27T09:59:40.793
2018-04-11T15:45:35.220
2010-10-27T19:04:31.570
930
603
[ "r", "c++" ]
4018
2
null
4017
14
null
Interesting question, but quite possibly too challenging to be discussed briefly:
- You would need a C++-side implementation of median() (see the sketch below)
- The cited code from package robustbase is highly 'R-optimised', which may not be the best starting point.
- Rcpp is not an 'R compiler' that you toss any such function at to 'make it faster'. It is more about connecting existing C++ code, or writing new C++ code.
- Of course the above can be translated (Turing-equivalence and all that), but that may not be the best way to learn about using Rcpp. I think we have simpler examples on the mailing list.
Lastly, isn't this a programming question for SO? ;-)
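To make the first point concrete, here is a rough sketch (my addition, not part of the original answer) of a C++-side median exposed to R; it uses Rcpp's `cppFunction()` interface, and the function name is my own choice:
```
library(Rcpp)
cppFunction(includes = "#include <algorithm>", code = '
double cppMedian(NumericVector x) {
  NumericVector y = clone(x);              // work on a copy so the caller data is not reordered
  int n = y.size();
  std::nth_element(y.begin(), y.begin() + n / 2, y.end());
  double m = y[n / 2];
  if (n % 2 == 0) {                        // even length: average the two middle values
    std::nth_element(y.begin(), y.begin() + n / 2 - 1, y.end());
    m = (m + y[n / 2 - 1]) / 2.0;
  }
  return m;
}')
cppMedian(c(1, 3, 6, 2, 7, 5))  # should agree with median()
```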
null
CC BY-SA 2.5
null
2010-10-27T12:17:28.647
2010-10-27T12:17:28.647
null
null
334
null
4019
1
4020
null
7
4647
Let us consider a comparison of two machine learning algorithms (A and B) on some dataset. Results (root mean squared error) of both algorithms depend on a randomly generated initial approximation (parameters). Questions:
- When I use the same parameters for both algorithms, "usually" A slightly outperforms B. How many different experiments (with different parameters / updated /) do I have to perform to be "sure" that A is better than B?
- How do I measure the significance of my results? (To what extent am I "sure"?)
Relevant links are welcome!

PS. I've seen papers in which authors use the t-test and p-values, but I'm not sure if it is OK to use them in such a situation.

UPDATE. The problem is that A (almost) always outperforms B if the initial params and learning/validation/testing sets are the same, but it doesn't necessarily hold if they differ. I see the following approaches here:
- split data into disjoint sets D_1, D_2, ...; generate parameters params_1; compare A(params_1, D_2, ...,) and B(params_1, D_2, ...,) on D_1; generate params_2; compare A(params_2, D_1, D_3,...) and B(params_2, D_1, D_3,...) on D_2 and so on. Remember how often A outperforms B.
- split data into disjoint sets D_1, D_2, ...; generate parameters params_1a and params_1b; compare A(params_1a, D_2, ...,) and B(params_1b, D_2, ...,) on D_1; .... Remember how often A outperforms B.
- first, do cross-validation for A. Then, independently, for B. Compare results.
Which approach is better? How do I find the significance of the result in this best case?
Measuring statistical significance of machine learning algorithms comparison
CC BY-SA 2.5
null
2010-10-27T12:56:45.837
2010-10-27T20:00:28.423
2010-10-27T20:00:28.423
1725
1725
[ "machine-learning", "statistical-significance" ]
4020
2
null
4019
4
null
- You have two biases to remove here -- the selection of the initial parameter set and the selection of train/test data. So, I don't think it is good to compare algorithms based on the same initial parameter set; I would just run the evaluation over a few different initial sets for each of the algorithms to get a more general approximation. The next step is something that you are probably doing already, so using some kind of cross-validation.
- The t-test is a way to go (I assume that you are getting this RMSE as a mean from cross-validation [and evaluation over a few different starting parameter sets, supposing you decided to use my first suggestion], so you can also calculate the standard deviation); a fancier method is to use the Mann-Whitney-Wilcoxon test (see the sketch below). The [Wikipedia article about cross validation](http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29) is quite nice and has some references worth reading.

UPDATE AFTER UPDATE: I still think that making a paired test (Dikran's way) looks suspicious.
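A minimal sketch of the tests mentioned in the second point (my addition; `rmse_A` and `rmse_B` are hypothetical vectors of per-run errors for the two algorithms):
```
# rmse_A, rmse_B: errors of A and B over several runs (different initial sets / CV splits)
t.test(rmse_A, rmse_B)       # two-sample t-test on the mean error
wilcox.test(rmse_A, rmse_B)  # Mann-Whitney-Wilcoxon, no normality assumption
```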
null
CC BY-SA 2.5
null
2010-10-27T13:17:40.497
2010-10-27T16:48:32.687
2010-10-27T16:48:32.687
null
null
null
4021
2
null
3980
1
null
You are still looking for binomial regression. You have to rearrange the data so that each observation is one set of binomial trials (separately for pre- and post), adding a "time" and "subject" variable. The effect of "time" on the probability of success is like the pre-post difference (on the logit scale), its interactions with the type of treatment are the differential effects of each treatment. The model will need a subject effect as well to capture the baseline of each subject. This could be a fixed or random effect. You should also watch out for overdispersion, and use a quasi-binomial model. If you don't understand the above, you might need to consult a statistician in real life - there is no simple solution to your problem, and it is easy to mess up.
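A rough sketch of the model described above (my addition, not a drop-in solution; variable names are hypothetical). Each row holds one subject at one time point, with that subject's counts of successes and failures over their binomial trials:
```
fit <- glm(cbind(successes, failures) ~ time * treatment + subject,
           family = quasibinomial, data = long_data)  # subject as a fixed effect here
summary(fit)
# A random subject effect would need a mixed model instead, e.g. lme4::glmer with family = binomial.
```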
null
CC BY-SA 2.5
null
2010-10-27T14:04:15.370
2010-10-27T14:04:15.370
null
null
279
null
4022
2
null
3997
6
null
OK, time series seem to have done the trick:
```
aets <- as.xts(read.zoo("n8_energy_actual2009_2010.csv", header=TRUE,
                        sep=",", FUN=as.POSIXct))
eats.2010 <- aets["2010-01::2010-10"]
```
null
CC BY-SA 2.5
null
2010-10-27T14:34:10.650
2010-10-27T14:34:10.650
null
null
1716
null
4023
1
null
null
10
4677
I am comparing two groups of mutants, each of which can have only one out of 21 different phenotypes. I would like to see whether the distribution of these outcomes is similar between the two groups. I found an [online test](http://www.fon.hum.uva.nl/Service/Statistics/EqualDistribX2.html) that calculates the "Chi-square test for equality of distributions" and gives me some plausible results. However, I have quite a few zeroes in this table, so can I use chi-square in this case at all? Here is the table with the two groups and counts of particular phenotypes:
```
 2  1
 2  3
 1  6
 1  4
13 77
 7 27
 0  1
 0  4
 0  2
 2  7
 2  3
 1  5
 1  9
 2  6
 0  3
 3  0
 1  3
 0  3
 1  0
 1  2
 0  1
```
Chi-square test for equality of distributions: how many zeroes does it tolerate?
CC BY-SA 2.5
null
2010-10-27T14:39:11.707
2010-10-28T13:00:07.440
2010-10-27T18:22:28.543
449
1727
[ "distributions", "chi-squared-test", "contingency-tables" ]
4024
2
null
4023
5
null
The usual guidelines are that the expected counts should be greater than 5, but this can be somewhat relaxed, as discussed in the following article:

> Campbell, I, Chi-squared and Fisher–Irwin tests of two-by-two tables with small sample recommendations, Statistics in Medicine (2007) 26(19): 3661–3675.

See also Ian Campbell's [homepage](http://www.iancampbell.co.uk/twobytwo/twobytwo.htm). Note that in R, there's always the possibility of computing the $p$-value by a Monte Carlo approach (`chisq.test(..., sim=TRUE)`), instead of relying on the asymptotic distribution. In your case, it appears that about 80% of the expected counts are below 5, and 40% are below 1. Would it make sense to aggregate some of the observed phenotypes?
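For instance (my illustration, re-using the counts from the question):
```
counts <- matrix(c( 2,  1,  2,  3,  1,  6,  1,  4, 13, 77,  7, 27,  0,  1,
                    0,  4,  0,  2,  2,  7,  2,  3,  1,  5,  1,  9,  2,  6,
                    0,  3,  3,  0,  1,  3,  0,  3,  1,  0,  1,  2,  0,  1),
                 ncol = 2, byrow = TRUE)   # 21 phenotypes x 2 groups
chisq.test(counts, simulate.p.value = TRUE, B = 10000)
```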
null
CC BY-SA 2.5
null
2010-10-27T15:11:25.390
2010-10-27T15:28:30.650
2010-10-27T15:28:30.650
930
930
null
4025
1
null
null
2
1395
I am using the `naiveBayes` function of the [e1071](http://cran.r-project.org/web/packages/e1071/index.html) library. Some example commands are:
```
model = naiveBayes(Species ~ ., data = iris)
pred = predict(model, iris[,])
```
My question is: how can I obtain the maximum likelihood estimate for the conditional probability distribution of this model?
MLE for Naive Bayes in R
CC BY-SA 2.5
null
2010-10-27T15:45:20.203
2010-11-26T22:44:24.210
2010-10-27T22:08:34.000
null
null
[ "r", "machine-learning", "e1071", "naive-bayes" ]
4026
2
null
4025
1
null
This seems a bit ambiguous... What's wrong with `model$table`?
null
CC BY-SA 2.5
null
2010-10-27T16:26:56.013
2010-10-27T16:26:56.013
null
null
null
null
4029
2
null
4023
8
null
Perfectly feasible these days to do [Fisher's 'exact' test](http://en.wikipedia.org/wiki/Fisher%27s_exact_test) on such a table. I just got p = 0.087 using Stata (`tabi 2 1 \ 2 3 \ .... , exact`; execution took 0.19 seconds).

EDIT after chl's comment below (tried adding as a comment but can't format): It works in R 2.12.0 for me, though I had to increase the 'workspace' option over its default value of 200000:
```
> fisher.test(x)
Error in fisher.test(x) : FEXACT error 7.
LDSTP is too small for this problem.
Try increasing the size of the workspace.
> system.time(result<-fisher.test(x, workspace = 400000))
   user  system elapsed 
   0.11    0.00    0.11 
> result$p.value
[1] 0.0866764
```
(The execution time is slightly quicker than in Stata, but that's of dubious relevance given the time taken to work out the meaning of the error message, which uses 'workspace' to mean something different from R's usual meaning, despite the fact that fisher.test is part of R's core 'stats' package.)
null
CC BY-SA 2.5
null
2010-10-27T18:28:19.503
2010-10-28T13:00:07.440
2010-10-28T13:00:07.440
449
449
null
4030
1
null
null
7
2865
I want to extend the standard [VECM](http://en.wikipedia.org/wiki/Error_correction_model) form:

Δx[t] = δ0 + ... + Π x[t-1] + Φ1 Δx[t-1] + Φ2 Δx[t-2] + ... + ε

to include exogenous variables (i.e. variables that participate in describing the relationships amongst `Δx[t]`, but are not in the dependent set `Δx[t]`). Let's call these new variables `z`. I would like to determine an MLE of the coefficients in the revised VECM:

Δx[t] = δ0 + ... + Π x[t-1] + Φ1 Δx[t-1] + ... + θ1 z[t-1] + ... + ε

I am familiar with the Johansen method for determining the MLE coefficients of the first form. I'm not sure how I can modify the method to determine the second form. I could use an optimiser to find the coefficients for the VECM with exogenous vars, but I thought there is probably a more direct approach.
Finding coefficients for VECM + exogenous variables
CC BY-SA 2.5
null
2010-10-27T18:35:57.930
2011-04-26T18:27:01.343
2010-10-27T18:54:53.467
930
1127
[ "time-series", "econometrics" ]
4031
2
null
3984
8
null
Grouping SC and AIC together IS WRONG. They are very different things, even though people heavily misuse them. AIC is meaningful when you are predicting things; using SC in this scenario can lead (though not always) to wrong results. Similarly, if you are interested in doing model selection with the principle of parsimony (Occam's razor), SC is better. I don't want to go into the theoretical details, but in a nutshell: SC is good for parsimonious models, when you want something equivalent to the simplest possible model that explains your data; AIC is good when you want to predict. AIC doesn't assume that your true model lies in the model space, whereas SC does. Secondly, using p-values and information criteria together can also be misleading, as explained by chl.
null
CC BY-SA 2.5
null
2010-10-27T20:31:24.907
2010-10-27T20:31:24.907
null
null
1307
null
4033
2
null
2909
4
null
This is an alternating sum. Each successive pair almost cancels; such pair-sums eventually decrease monotonically. One approach, then, is to compute the sum in pairs where $n$ = {1,2}, {3,4}, {5,6}, etc. (Doing so eliminates a lot of floating point error, too.) Some more tricks can help: (1) To solve $\tan(t) = t / \alpha$ for a positive constant $\alpha$, a good starting value to search--and an excellent approximation for the $n^\text{th}$ largest root-- is $t = (n + 1/2)\pi - \frac{\alpha}{(n + 1/2)\pi}$. I suspect Newton-Raphson should work really well. (2) After a small number of initial terms, the sums of pairs start decreasing in size very, very consistently. The logarithms of the absolute values of exponentially-spaced pairs quickly decrease almost linearly. This means you can interpolate among a very small number of calculated pair-sums to estimate all the pair-sums you did not compute. For example, by computing the values for only pairs (2,3), (4,5), (8,9), (16,17), ..., (16384, 16385) and constructing the interpolating polynomial for these (thought of as the values of a function at 1, 2, ..., 14) and using the arguments $h = \mu = \sigma = 1$, I was able to achieve six-figure precision for the worst-case errors. (Even nicer, the errors oscillate in sign, suggesting the precision in the summed interpolated values might be quite a bit better than six figures.) You could probably estimate the limiting sum to good precision by extrapolating linearly off the end of these values (which translates to a power law) and integrating the extrapolating function out to infinity. To complete this example calculation you also need the first term. That gives six-figure precision by means of only 29 computed terms in the summation. (3) Note that the function really depends on $h/\sigma$ and $\mu/\sigma$, not on all three of these variables independently. The dependence on $T$ is weak (as it should be); you might be content to fix its value throughout all your calculations. (4) On top of all this, consider using some [series-acceleration methods](http://en.wikipedia.org/wiki/Series_acceleration), like [Aitken's method](http://en.wikipedia.org/wiki/Aitken%27s_delta-squared_process). A good accounting of this appears in [Numerical Recipes](http://www.nr.com/). Added (5) You can estimate the tail of the sum with an integral. Upon writing $\theta_n = (n + 1/2)\pi - 1/t_n$, the equation $\tan(\theta_n) = \theta_n / \alpha$ (with $\alpha = \mu h / \sigma^2$) can be solved for $t_n$, which is small, and then for $\theta_n$ by substituting back. Expanding the tangent in a Taylor series in $t_n$ gives the approximate solution $$\theta_n = z - \frac{\alpha}{ z} - \frac{\alpha^2 - \alpha^3/3 }{z^3} + O\left((\frac{\alpha}{n})^5\right)$$ where $z = (n + 1/2)\pi$. Provided $n$ is sufficiently large, the exponential factors of the form $1 - \exp(\frac{-\sigma^2 \theta_n^2 T}{2 h^2}) \exp(\frac{-\mu^2 T}{2 \sigma^2})$ become extremely close to 1 so you can neglect them. Typically these terms can be neglected even for small $n$ because $\theta_n^2$ is $\Theta\left(n^2\right)$, making the first exponential go to zero extremely quickly. (This happens once $n$ substantially exceeds $\alpha / T^{1/2}$. Do your calculations for large $T$ if you can!) 
Using this expression for $\theta_n$ to sum the terms for $n$ and $n+1$ lets us approximate them (once all the smoke clears) as $$\frac{2}{\pi n^2}-\frac{4}{\pi n^3}+\frac{13 \pi ^2+6 (4-3 \alpha ) \alpha }{2 \pi ^3 n^4}+O\left(\frac{1}{n^5}\right) \text{.}$$ Replacing the sum starting at $n = 2N$ by an integral over $N$ starting at $N - 1/4$ approximates the tail. (The integral has to be multiplied by a common factor of $\exp(-\alpha)$.) The error in the integral is $O(1/n^4)$. Thus, to achieve three significant figures you will typically need to compute around eight or so of the terms in the sum and then add this tail approximation.
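As a footnote to point (1), here is a rough numerical sketch of the suggested Newton-Raphson iteration (my addition, not part of the original answer):
```
# Solve tan(t) = t/alpha near its n-th branch, starting from the value given in point (1).
root_n <- function(n, alpha, iters = 8) {
  z <- (n + 0.5) * pi
  t <- z - alpha / z                  # suggested starting value
  for (i in seq_len(iters)) {
    f  <- tan(t) - t / alpha          # function whose root we want
    fp <- 1 / cos(t)^2 - 1 / alpha    # its derivative
    t  <- t - f / fp                  # Newton step
  }
  t
}
```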
null
CC BY-SA 2.5
null
2010-10-27T22:39:50.370
2010-10-28T20:12:38.843
2010-10-28T20:12:38.843
919
919
null
4034
1
4038
null
13
3269
This may be a naive question, but here goes. If I have a set of empirical data and fit a kernel density to it, and then obtain a new single value which possibly comes from the same process which generated the original data set, can I assign a probability that this new value belongs to the set/process by simply reading the value off the y axis where the new value on the x axis intersects the kernel density line and dividing by the area under the density line?
Interpretation/use of kernel density
CC BY-SA 2.5
null
2010-10-27T23:56:53.503
2010-10-28T08:22:03.600
2010-10-28T08:22:03.600
null
226
[ "kde" ]
4036
5
null
null
0
null
- stata.com (official website) - Wikipedia article Some other sources of Stata help: - Stata's built-in search facilities - Statalist FAQ - Resources from UCLA 'Stata' is an invented word, not an acronym, and therefore should not have all its letters capitalized (i.e. 'STATA' is considered incorrect). See the [last item of the Statalist FAQ](http://www.stata.com/support/faqs/res/statalist.html#spell).
null
CC BY-SA 2.5
null
2010-10-28T07:26:22.207
2010-10-28T07:46:12.183
2010-10-28T07:46:12.183
449
449
null
4037
4
null
null
0
null
A statistical software package. Use this tag for any on-topic question that (a) involves Stata either as a critical part of the question or expected answer, & (b) is not just about how to use Stata.
null
CC BY-SA 4.0
null
2010-10-28T07:26:22.207
2019-10-08T17:20:58.670
2019-10-08T17:20:58.670
121522
449
null
4038
2
null
4034
13
null
No, I'm afraid not. The kernel density estimand is the [probability density function](http://en.wikipedia.org/wiki/Probability_density_function). The y-value is an estimate of the probability density at that value of x, so the area under the curve between x1 and x2 estimates the probability of the random variable X  falling between x1 and x2, assuming that X was generated by the same process that generated the data which you fed into the kernel density estimate. The kernel density estimate doesn't say anything about the probability a new value was generated by the same process.
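To illustrate the first point (my addition, with simulated data): once you have a kernel density estimate, probabilities come from areas under it, not from the curve's height at a single x.
```
set.seed(1)
x   <- rnorm(200)                # sample from the data-generating process
kde <- density(x)                # kernel density estimate
f   <- approxfun(kde$x, kde$y)   # interpolate the estimated density
# Estimated probability that a value from the same process falls between 0 and 1:
integrate(f, lower = 0, upper = 1)$value
```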
null
CC BY-SA 2.5
null
2010-10-28T07:55:48.317
2010-10-28T07:55:48.317
null
null
449
null
4039
2
null
3715
5
null
When applying the formula for P(B|A) for [Agresti-Coull](http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Agresti-Coull_Interval), it seems important to me to use, for the denominator (ñ), a number with uncertainty. The formula ñ = P(A)*N + 4 (where N is the size of your sample) gives you this number, after you calculate P(A) with an uncertainty. With the [uncertainties package](http://packages.python.org/uncertainties/), this would be:
```
# Calculation of P(A):
P_A = ufloat(…, …)

# Calculation of P(B|A):
P_B_A = …/(P_A*N+4)
```
Thus, P(B|A) is automatically correlated to P(A). Furthermore, you must make sure that you feed `ufloat()` with standard deviations. This means using a particular z_{1-alpha/2} value in the Agresti-Coull formula. Hope this helps!
null
CC BY-SA 3.0
null
2010-10-28T08:15:50.047
2017-02-12T09:15:31.103
2017-02-12T09:15:31.103
1735
1735
null
4040
1
4041
null
17
91925
In R, I have a data frame comprising a class label C (a factor) and two measurements, M1 and M2. How do I compute the correlation between M1 and M2 within each class? Ideally, I'd get back a data frame with one row for each class and two columns: the class label C and the correlation.
R: compute correlation by group
CC BY-SA 2.5
null
2010-10-28T08:24:14.630
2015-10-08T21:14:58.500
2010-10-28T09:13:26.317
8
439
[ "r", "correlation" ]
4041
2
null
4040
20
null
The package plyr is the way to go. Here is a simple solution:
```
xx <- data.frame(group = rep(1:4, 100),
                 a = rnorm(400),
                 b = rnorm(400))
head(xx)

require(plyr)
func <- function(xx) {
  return(data.frame(COR = cor(xx$a, xx$b)))
}
ddply(xx, .(group), func)
```
The output will be:
```
  group         COR
1     1  0.05152923
2     2 -0.15066838
3     3 -0.04717481
4     4  0.07899114
```
null
CC BY-SA 2.5
null
2010-10-28T08:40:21.723
2010-10-28T08:40:21.723
null
null
253
null
4042
2
null
4040
12
null
If you are inclined to use functions in the base package, you can use the `by` function, then reassemble the data:
```
xx <- data.frame(group = rep(1:4, 100),
                 a = rnorm(400),
                 b = rnorm(400))
head(xx)

# This returns a "by" object
result <- by(xx[, 2:3], xx$group, function(x) {cor(x$a, x$b)})

# You get pretty close to what you want if you coerce it into a data frame via a matrix
result.dataframe <- as.data.frame(as.matrix(result))

# Add the group column from the row names
result.dataframe$C <- rownames(result)
```
null
CC BY-SA 2.5
null
2010-10-28T10:46:38.670
2010-10-28T10:46:38.670
null
null
1195
null
4043
1
null
null
8
1270
I have a few time series that were (for technical reasons) acquired with slightly different time intervals, ranging between 19 and 21 seconds. Now, I would like to average the values of these different time series over time, so I thought that I could do some sort of interpolation of the values at some regular interval (e.g. each 20 seconds). Could someone point out a good way to do this?
Averaging time series with different sampling interval
CC BY-SA 4.0
null
2010-10-28T11:29:57.120
2020-01-31T00:36:39.107
2020-01-31T00:36:39.107
11887
582
[ "time-series", "interpolation", "unevenly-spaced-time-series" ]