Column schema (as reported by the dataset viewer):

| Column | Type | Range / values |
|---|---|---|
| Id | stringlengths | 1-6 |
| PostTypeId | stringclasses | 7 values |
| AcceptedAnswerId | stringlengths | 1-6 |
| ParentId | stringlengths | 1-6 |
| Score | stringlengths | 1-4 |
| ViewCount | stringlengths | 1-7 |
| Body | stringlengths | 0-38.7k |
| Title | stringlengths | 15-150 |
| ContentLicense | stringclasses | 3 values |
| FavoriteCount | stringclasses | 3 values |
| CreationDate | stringlengths | 23-23 |
| LastActivityDate | stringlengths | 23-23 |
| LastEditDate | stringlengths | 23-23 |
| LastEditorUserId | stringlengths | 1-6 |
| OwnerUserId | stringlengths | 1-6 |
| Tags | list |  |
6681
2
null
726
9
null
> [Statistics are] the only tools by which an opening can be cut through the formidable thicket of difficulties that bars the path of those who pursue the science of man. -- Sir Francis [Galton](http://galton.org/)
null
CC BY-SA 2.5
null
2011-01-28T20:23:49.323
2011-01-28T20:23:49.323
null
null
2592
null
6682
2
null
6680
3
null
Here is a possible solution, taking the clarification in the comments into account, but I still think the question is invalid as it stands. ``` ##This gives the table A with both columns in character, which we will use df<-rbind(c("Y",10),c("Y",12),c("Y",18),c("X",22), c("X",12), c("Z",11), c("Z",15)) > df [,1] [,2] [1,] "Y" "10" [2,] "Y" "12" [3,] "Y" "18" [4,] "X" "22" [5,] "X" "12" [6,] "Z" "11" [7,] "Z" "15" ##break the table into the list l <-tapply(df[,2],df[,1],function(l)l) ##calculate the maximum length of the list elements n <- max(sapply(l,length)) ##pad the elements of the list with empty strings tbB<-t(sapply(l,function(x){res <- rep("",n);res[1:length(x)]<-x;res})) > tbB [,1] [,2] [,3] X "22" "12" "" Y "10" "12" "18" Z "11" "15" "" ##Comma separated file write.table(tbB,file="tableB.csv",quote=FALSE,col.names=FALSE,sep=",") ##Tab delimited write.table(tbB,file="tableB.txt",quote=FALSE,col.names=FALSE,sep="\t") ``` If we read table A from a file, then `df` is a data.frame with the first column a `factor` and the second column `numeric` (this is the default R behaviour). To apply the code above it is necessary to transform the columns to `character`: ``` df <- sapply(df,as.character) ```
null
CC BY-SA 2.5
null
2011-01-28T20:47:51.733
2011-01-28T20:47:51.733
null
null
2116
null
6683
2
null
6670
6
null
I asked a [similar question](https://stats.stackexchange.com/questions/168/choosing-a-bandwidth-for-kernel-density-estimators) a few months ago. Rob Hyndman provided an excellent [answer](https://stats.stackexchange.com/questions/168/choosing-a-bandwidth-for-kernel-density-estimators/179#179) that recommends the Sheather-Jones method. One additional point: in R, for the `density` function, you set the bandwidth explicitly via the `bw` argument. However, I often find that the `adjust` argument is more helpful. The `adjust` argument scales the value of the bandwidth, so `adjust=2` means double the bandwidth.
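As a quick illustration of the `bw` and `adjust` arguments mentioned above (a minimal sketch with simulated data, not part of the original answer):

```r
set.seed(1)
x <- rnorm(200)

d1 <- density(x)                 # default bandwidth (bw.nrd0)
d2 <- density(x, adjust = 2)     # double the default bandwidth
d3 <- density(x, bw = 0.1)       # bandwidth set explicitly

c(default = d1$bw, doubled = d2$bw, explicit = d3$bw)
```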
null
CC BY-SA 2.5
null
2011-01-28T21:58:45.647
2011-01-28T21:58:45.647
2017-04-13T12:44:41.967
-1
8
null
6684
1
6718
null
4
17116
Note: I've updated the example case code; there were some errors in the previous version. Cross-posted to R-help, because I half suspect this is 'unexpected behaviour'. I want to predict values from an existing lm (linear model, e.g. lm.obj) result in R using a new set of predictor variables (e.g. newdata). Specifically, I am interested in the predicted y value at the mean, 1 SD above the mean, and 1 SD below the mean for each predictor. However, it seems that because my linear model was made by calling scale() on the target predictors, predict exits with an error, "Error in scale(xxA, center = 9.7846094491829, scale = 0.959413568556403) : object 'xxA' not found". By debugging predict, I can see that the error occurs in a call to model.frame. By debugging model.frame I can see the error occurs with this command: variables <- eval(predvars, data, env); it seems likely that the error is because predvars looks like this: list(scale(xxA, center = 10.2058714830537, scale = 0.984627257169526), scale(xxB, center = 20.4491690881149, scale = 1.13765718273923)) An example case: ``` dat <- data.frame(xxA = rnorm(20,10), xxB = rnorm(20,20)) dat$out <- with(dat,xxA+xxB+xxA*xxB+rnorm(20,20)) lm.res.scale <- lm(out ~ scale(xxA)*scale(xxB),data=dat) my.data <- lm.res.scale$model #load the data from the lm object newdata <- expand.grid(X1=c(-1,0,1),X2=c(-1,0,1)) names(newdata) <- c("scale(xxA)","scale(xxB)") newdata$Y <- predict(lm.res.scale,newdata) ``` Is there something I could do before passing newdata or lm.obj to predict() that would prevent the error? I tried the following: from the help file it looks like I might be able to do something with the `terms` argument, but I haven't quite figured out what I would need to do. Alternatively, is there a fix for model.frame that would prevent the error? Should predict() behave this way? Additional details: I really want a solution that, in one step, will provide values like: ``` coef(lm.res.scale)[1]+ coef(lm.res.scale)[2]*newdata[,1]+ coef(lm.res.scale)[3]*newdata[,2]+ coef(lm.res.scale)[4]*newdata[,1]*newdata[,2] ``` I think that should be exactly what predict() should do. That is, I think my example code should be equivalent to: ``` dat <- data.frame(xxA = rnorm(20,10), xxB = rnorm(20,20)) dat$out <- with(dat,xxA+xxB+xxA*xxB+rnorm(20,20)) #rescaling outside of lm X1 <- with(dat,as.vector(scale(xxA))) X2 <- with(dat,as.vector(scale(xxB))) y <- with(dat,out) lm.res.correct <- lm(y~X1*X2) my.data <- lm.res.correct$model #load the data from the lm object newdata <- expand.grid(X1=c(-1,0,1),X2=c(-1,0,1)) #No need to rename newdata as it matches my lm object already newdata$Y <- predict(lm.res.correct,newdata) ``` Notably, adjusting my formula to include as.vector() does not solve the problem with my attempt to use predict() directly with newdata.
How can one use the predict function on a lm object where the IVs have been dynamically scaled?
CC BY-SA 2.5
null
2011-01-28T22:32:16.647
2011-01-30T20:04:27.090
2011-01-29T02:00:13.550
196
196
[ "r", "regression" ]
6685
2
null
6680
3
null
Similar to what @mpiktas proposed, but using `aggregate` ``` # Construct the data frame let <- c("Y", "Y", "Y", "X", "X", "Z", "Z") num <- c(10, 12, 18, 22, 12, 11, 15) df <- data.frame(let, num) # Aggregate data by the first column. # Do not apply any transformation to the data (use the identity function) ag <- aggregate(df$num, FUN=identity, by=list(let)) # Find the line with the maximum number of elements maxlen <- max(sapply(ag$x, length)) # Transform the list to a matrix res <- t(sapply(ag$x, function(x){c(x, rep(NA, maxlen-length(x)))})) row.names(res) <- ag[,1] ``` Outputs: ``` 1> res [,1] [,2] [,3] X 22 12 NA Y 10 12 18 Z 11 15 NA ```
null
CC BY-SA 2.5
null
2011-01-28T23:08:18.223
2011-01-28T23:08:18.223
null
null
582
null
6686
2
null
6684
1
null
See if this console scrape shows what you might have wanted. I changed the names of the new data columns: ``` > dat <- data.frame(xxA = rnorm(20,10), xxB = rnorm(10,20)) > dat$out <- with(dat,xxA+xxB+xxA*xxB+rnorm(20,20)) > xVar <- "scale(xxA)" > traceVar <- "scale(xxB)" > DVname <- "out" > lm.obj <- lm.res.scale <- lm(out ~ scale(xxA)*scale(xxB),data=dat) > my.data <- lm.obj$model #load the data from the lm object > X1 <- my.data[,xVar] > X2 <- my.data[,traceVar] > DV <- lm.obj$model[,DVname] > newdata <- expand.grid(X1=c(-1,0,1),X2=c(-1,0,1)) > newdata$xxA <- newdata$X1 * sd(my.data[,xVar]) > newdata$xxB <- newdata$X2 * sd(my.data[,traceVar]) > names(newdata) <- names(dat) #have to rename to original variable names for predict to work > newdata$Y <- predict(lm.obj,newdata) > str(newdata) 'data.frame': 9 obs. of 5 variables: $ xxA: num -1 0 1 -1 0 1 -1 0 1 $ xxB: num -1 -1 -1 0 0 0 1 1 1 $ out: num -1 0 1 -1 0 1 -1 0 1 $ NA : num -1 -1 -1 0 0 0 1 1 1 $ Y : num 47.7 44.7 41.8 46.5 44.7 ... - attr(*, "out.attrs")=List of 2 ..$ dim : Named int 3 3 .. ..- attr(*, "names")= chr "X1" "X2" ..$ dimnames:List of 2 .. ..$ X1: chr "X1=-1" "X1= 0" "X1= 1" .. ..$ X2: chr "X2=-1" "X2= 0" "X2= 1" ``` EDIT With your requested revisions, this is what I would suggest: ``` newdat <- expand.grid(X1=c(-1,0,1),X2=c(-1,0,1)) newdat$xxA <- mean(dat$xxA) + newdat$X1*sd(dat$xxA) newdat$xxB <- mean(dat$xxB) + newdat$X2*sd(dat$xxB) newdat$Y <- predict(lm.res.scale,newdat[, c("xxA","xxB") ] ) newdat X1 X2 xxA xxB Y 1 -1 -1 8.931712 18.59229 214.0096 2 0 -1 9.830117 18.59229 231.4839 3 1 -1 10.728522 18.59229 248.9583 4 -1 0 8.931712 19.77820 225.3516 5 0 0 9.830117 19.77820 243.9916 6 1 0 10.728522 19.77820 262.6315 7 -1 1 8.931712 20.96410 236.6936 8 0 1 9.830117 20.96410 256.4992 9 1 1 10.728522 20.96410 276.3047 ``` Notice you don't need to do all those back-flips to match the model terms inside the lm-object. You only need to provide it with properly named data on the scale of the original data. The predict machinery will take care of the scaling.
null
CC BY-SA 2.5
null
2011-01-29T00:28:42.120
2011-01-29T12:04:57.473
2011-01-29T12:04:57.473
2129
2129
null
6688
1
null
null
15
1388
Suppose I want to build a binary classifier. I have several thousand features and only a few 10s of samples. From domain knowledge, I have a good reason to believe that the class label can be accurately predicted using only a few features, but I have no idea which ones. I also want the final decision rule to be easy to interpret/explain, further necessitating a small number of features. Certain subsets of my features are highly correlated, so selecting the most predictive few independently wouldn't work. I also want to be able to meaningfully do hypothesis testing on my features. Is the following stepwise regression procedure reasonable under these conditions: - Given the features already in the model (or just the intercept on the first iteration), select the feature that produces the largest log likelihood ratio when added to the model. Use the likelihood ratio chi-square test to calculate a nominal P-value for each hypothesis test performed in this selection. The null here is that adding the extra variable to the model provides no additional predictive ability. The alternative is that it does increase predictive ability. - Treat the hypotheses tested in Step 1 of each iteration as a family and calculate the false discovery rate for the smallest P-value (for the feature selected) using something like Benjamini-Hochberg. - Goto 1 unless some stopping criteria are met. - Report the false discovery rates for the individual features, but not the P-value for the model as a whole (since this will be massively inflated). Each of these multiple testing corrected P-values represents the statistical significance of that feature given all of the features previously added to the model. Does doing something like this under these circumstances successfully avoid all of the typical criticisms of stepwise regression? Are the false discovery rates calculated in this way reasonable?
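A rough sketch of one iteration of the proposed procedure (one forward step chosen by likelihood-ratio tests, with a Benjamini-Hochberg adjustment within that step's family of tests). This is an illustration added for concreteness, not code from the question; the simulated data, sample sizes, and variable names are made up.

```r
set.seed(1)
n <- 40; p <- 20
X <- as.data.frame(matrix(rnorm(n * p), n, p))
names(X) <- paste0("x", 1:p)
y <- rbinom(n, 1, plogis(2 * X$x1 - 2 * X$x2))   # only x1 and x2 matter
dat <- data.frame(y = y, X)

fit   <- glm(y ~ 1, family = binomial, data = dat)   # intercept-only starting model
scope <- reformulate(names(X))                       # ~ x1 + ... + x20
tab   <- add1(fit, scope = scope, test = "Chisq")    # one LRT per candidate feature
pvals <- tab[-1, ncol(tab)]                          # nominal p-values (drop the <none> row)
p_bh  <- p.adjust(pvals, method = "BH")              # FDR adjustment within this family
names(X)[which.min(p_bh)]                            # feature selected at this step
```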
Sane stepwise regression?
CC BY-SA 2.5
null
2011-01-29T04:44:13.400
2018-04-27T14:12:24.700
2011-01-29T14:07:37.390
1347
1347
[ "regression", "logistic", "multiple-comparisons", "stepwise-regression" ]
6689
2
null
6421
13
null
[LARS](http://en.wikipedia.org/wiki/Least-angle_regression) gets my vote. It combines linear regression with variable selection. Algorithms to compute it usually give you a collection of $k$ linear models, the $i$th one of which has nonzero coefficients for only $i$ regressors, so you can easily look at models of different complexity.
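For reference, a minimal sketch of fitting a LARS path in R; it assumes the CRAN package `lars` is installed, and the data are simulated purely for illustration:

```r
library(lars)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- x[, 1] - 2 * x[, 2] + rnorm(100)

fit <- lars(x, y, type = "lar")   # least-angle regression
coef(fit)                         # one row per step: models with 0, 1, 2, ... nonzero coefficients
plot(fit)                         # coefficient paths
```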
null
CC BY-SA 2.5
null
2011-01-29T05:40:49.240
2011-01-29T05:40:49.240
null
null
795
null
6690
1
6691
null
4
488
Looking at `Rmath.h` in `/usr/share/R/include`, the signature of the function `dpois` or `Rf_dpois` is ``` double dpois(double, double, int); ``` However, if I do `?dpois` in `R`, I see: ``` dpois(x, lambda, log = FALSE) ``` Are these both the same thing? If yes, can someone please clarify? Also I tried to find the code for `dpois` at [R-svn](http://svn.r-project.org/R/trunk/src/library/stats/). Can someone please tell me how to find the code for functions like these?
What is the difference between Rf_dpois in Rmath.h and the dpois that I use directly in R?
CC BY-SA 4.0
null
2011-01-29T06:45:35.563
2018-10-19T06:59:29.070
2018-10-19T06:59:29.070
128677
1307
[ "r" ]
6691
2
null
6690
3
null
Yes, they are the same. I am guessing that `x` is declared as double because of [type conversions](http://en.wikipedia.org/wiki/Type_conversion): if an `integer` is multiplied by a `double` in C, the `integer` is converted to `double`. There are checks in the code that `x` is really an integer, though it is of type `double`. The code for this function is in `src/nmath/dpois.c`. I found it by doing `grep -R "dpois"` in the directory with the extracted R source code. This will work from the terminal in Linux and in Mac OS X. In Windows you will need to install grep. I suggest installing [Rtools](http://www.murdoch-sutherland.com/Rtools/index.html). Note. The code is very short, but I intentionally do not post it here, since I do not know for sure whether posting it would break some licence.
null
CC BY-SA 2.5
null
2011-01-29T06:55:11.320
2011-01-29T08:11:23.197
2011-01-29T08:11:23.197
2116
2116
null
6692
2
null
6421
42
null
The answer is so simple that I have to write all this gibberish to make CV let me post it: [R](http://www.r-project.org)
null
CC BY-SA 2.5
null
2011-01-29T08:53:30.500
2011-01-29T10:15:28.620
2011-01-29T10:15:28.620
704
704
null
6693
2
null
6690
3
null
In reference to the question on how to find the source for headers of other functions, see this article in R News by Uwe Ligges: Uwe Ligges. R Help Desk: [Accessing the sources](http://cran.r-project.org/doc/Rnews/Rnews_2006-4.pdf). R News, 6(4):43-45, October 2006.
null
CC BY-SA 2.5
null
2011-01-29T14:05:27.887
2011-01-29T14:05:27.887
null
null
1390
null
6694
2
null
6688
1
null
For the purposes of my answer, I will denote the binary variable of interest as $Y_i \text{ ;}(i=1,\dots,n)$ and the predictors $X_{ij} \text{ ;} (j=1,\dots,p)$ and assume that $Y$ has values of $Y=0$ and $Y=1$. It will also be convenient to define $\gamma_m$ to indicate the model $m \text{ ;}(m=1,\dots,M)$, such that $\gamma_m^TX_{ij}$ is equal to $X_{ij}$ if the $j$th variable is in the $m$th model, and $0$ otherwise. I would make a modification to your method, and give a rationale. You are using a classifier model, which means you want to predict the value of a categorical variable into the future - so you should really be defining a prediction rule (given a new set of predictors $X_{j}$, how will you predict whether $Y=1$ or $Y=0$). So I would suggest evaluating the prediction directly, rather than the likelihood ratio. However, the observation predicted should not be included in the estimation of the model (because this is exactly the situation you will face when actually using your model). So have a new step 1) (bold is my suggested change). 1) Given the features already in the model (or just the intercept on the first iteration), select the feature that produces the best predictions when added to the model. Now you need to decide - what you want "best" to mean mathematically - how to split your data into "fitting" and "predicting" parts I will make a suggestion for each: - An intuitive definition for a "good" classifier (and also computationally simple) is the proportion of correct classifications it makes. However, you may have some additional knowledge of the specific consequences of making a correct or incorrect classification (e.g. predicting correctly when $Y=1$ is twice as important as when $Y=0$). In this case you should incorporate this knowledge into the definition of "good". But for the equations in my answer I will use $F=\frac{C}{C+I}$ as the criterion ($F$="fraction" or "frequency", $C$="correct", $I$="incorrect") - Because you don't have a lot of data, you need as much as possible to fit the model, so a simple drop-one jackknife procedure can be used. You leave observation $1$ out, fit the model with observations $2,\dots,n$, and use this to predict observation $1$. Then you leave observation $2$ out, fit the model with observations $1,3,\dots,n$, and use this to predict observation $2$; and so on until each observation has been "left out" and predicted. You will then have $n$ predictions, and you can now calculate $F=\frac{C}{n}$, the fraction of correctly predicted values for the particular model. Subscript this for the particular model $F_m$. You would then calculate $F_m$ for each model $(m=1,\dots,M)$, and pick the model which predicts the best, $m=\text{argmax}_{m\in M} F_m$. Note that the good thing about the above method is that you do not need to worry about how many variables are in your model or how correlated these variables are (unless it makes it impossible to actually fit the model). This is because the model is fit separately from the prediction, so bias due to over-fitting, or degradation due to numerical instability, will show up in poorer predictions. In a step-wise situation the selection is done sequentially, so at the $s$th step you have $M_s=p+1$ models to choose between: one for "removing" each $X_{j}$ which is in the model, one for "adding" each $X_{j}$ which is not in the model, and one for keeping the model unchanged (you stop the procedure when you choose this model, and this is your final model).
If there is a tie, you need an additional criterion to split the winners (or you could let your algorithm "branch" off, and see where each "branch" ends up, then take the "branch" which had the best predictions at its final step). Step-wise selection can be risky because you may find "local maximums" instead of "global maximums", especially because you have such a large number of predictors (this is a big "space" to optimise over, and is probably multi-modal - meaning there are many "best" models). The good thing about this is that the model you choose has a clear, directly relevant interpretation: the model which predicted the highest proportion of results correctly, out of the alternatives considered. And you have a clear measure of exactly how good your binary classifier is (it classified $100F$ percent correctly). I think you will find it a lot easier to justify your choice of final model to a non-statistician this way, rather than trying to explain why a p-value indicates the model is good. And for hypothesis testing, you can declare any effect remaining in your final model as "significant" in that the relationships contained in this model were able to reproduce the data ($Y$) the most effectively. Two final remarks: - You could also use this machinery to decide if step-wise selection is better than forward selection (only add variables) or backward selection (start from the full model, and only remove variables). - You could fit the full model (or any model with $p\geq n$) by "ridging" the model, which amounts to adding a small number to the diagonal elements of the $X^TX$ matrix, or $X^TWX$ for GLMs, before inverting when calculating your betas, to give $(X^TX+\lambda I)^{-1}X^TY$ or $(X^TWX+\lambda I)^{-1}X^TWY$. Basically, $\lambda$ constrains the sum of squares of the betas to be less than a particular value, and increasing the value of $\lambda$ decreases this bound (which is a "smooth" model selection procedure in its own right, if you think about it).
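To make the drop-one jackknife estimate of $F$ concrete, here is a minimal sketch for a single candidate logistic model; the simulated data are my own illustration rather than the answerer's code:

```r
set.seed(1)
n <- 30
x <- rnorm(n)
y <- rbinom(n, 1, plogis(1.5 * x))
dat <- data.frame(y, x)

# leave observation i out, fit the model, then predict observation i
pred <- sapply(seq_len(n), function(i) {
  fit <- glm(y ~ x, family = binomial, data = dat[-i, ])
  as.numeric(predict(fit, newdata = dat[i, , drop = FALSE], type = "response") > 0.5)
})

F_m <- mean(pred == dat$y)   # fraction of correctly predicted values for this model
F_m
```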
null
CC BY-SA 2.5
null
2011-01-29T16:25:14.063
2011-01-29T16:25:14.063
null
null
2392
null
6695
2
null
6690
5
null
You could always test things, courtesy of [inline](http://cran.r-project.org/packages=inline) and [Rcpp](http://cran.r-project.org/packages=Rcpp). Here we pass three lines of code to C++ to receive the two parameters, and then pass those to `Rf_dpois` (while setting the `log` argument to false): ``` R> library(Rcpp) R> library(inline) R> src <- "double x = Rcpp::as<double>(xs); + double y = Rcpp::as<double>(ys); + return(Rcpp::wrap(Rf_dpois(x, y, false)));" R> fun <- cxxfunction(signature(xs="numeric",ys="numeric"), src, plugin = "Rcpp") R> fun(2, 0.5) [1] 0.0758163 R> dpois(2, 0.5) [1] 0.0758163 R> fun(1.0, 0.5) [1] 0.303265 R> dpois(1.0, 0.5) [1] 0.303265 R> ``` The code to `dpois()` is in `R-2.12.1/src/nmath/dpois.c`. And yes, it can be freely copied.
null
CC BY-SA 2.5
null
2011-01-29T16:31:29.507
2011-01-29T16:31:29.507
null
null
334
null
6697
2
null
6421
11
null
Just falling within the 15 year window, I believe, are the algorithms for controlling [False Discovery Rate](http://en.wikipedia.org/wiki/False_discovery_rate). I like the 'q-value' approach.
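A small base-R illustration of FDR control via the Benjamini-Hochberg adjustment (the 'q-value' approach itself lives in the separate `qvalue` package); the simulated p-values below are made up for the example:

```r
set.seed(1)
p_alt  <- rbeta(100, 0.5, 10)        # 100 "real" effects: mostly small p-values
p_null <- runif(900)                 # 900 true nulls
p      <- c(p_alt, p_null)

p_bh <- p.adjust(p, method = "BH")   # Benjamini-Hochberg adjusted p-values
sum(p_bh < 0.05)                     # number of discoveries at a 5% false discovery rate
```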
null
CC BY-SA 2.5
null
2011-01-29T18:37:00.597
2011-01-29T18:37:00.597
null
null
795
null
6698
1
6699
null
3
2925
Let's say you regress monthly stock returns, and the regression model has a standard error of 2% around those monthly stock returns. Next, you forecast next year's annual stock return based on the regressed monthly returns. What is the standard error around this annual stock return? Some may think this is not a very good example. Don't get hung up on the proper way to model stock returns. This has nothing to do with the question. The question is simply figuring out the calculation to convert a monthly standard error into an annual one.
How do you compute the annual standard error of a regression model when the model itself is based on monthly observations?
CC BY-SA 2.5
null
2011-01-29T20:30:28.627
2011-04-28T23:46:12.220
2011-04-28T22:57:34.777
2970
1329
[ "standard-error" ]
6699
2
null
6698
2
null
The general rule is $\sqrt N$. So from daily data, annual volatility would be estimated as `sd(v)* sqrt(255)`. Hence for your example, multiply the estimated error by the square root of twelve. Edit: In response to the follow-up comment, here is a complete worked example with real data. The annual volatility comes out to be around 16% which seems about right. You could modify this for rolling volatilities, or weekly data, or .... ``` R> library(tseries) Loading required package: quadprog Loading required package: zoo ‘tseries’ version: 0.10-22 ‘tseries’ is a package for time series analysis and computational finance. See ‘library(help="tseries")’ for details. R> SP500 <- get.hist.quote("^GSPC", "2000-01-01", "2011-01-29", quote="Close", compression="m") trying URL 'http://chart.yahoo.com/table.csv?s=^GSPC&a=0&b=01&c=2000&d=0&e=29&f=2011&g=m&q=q&y=0&z=^GSPC&x=.csv' Content type 'text/csv' length unknown opened URL ....... downloaded 8162 bytes time series starts 2000-01-03 time series ends 2011-01-03 R> head(SP500) Close 2000-01-03 1394.46 2000-02-01 1366.42 2000-03-01 1498.58 2000-04-03 1452.43 2000-05-01 1420.60 2000-06-01 1454.60 R> sd(diff(log(SP500))) Close 0.0478781 R> sd(diff(log(SP500)))*sqrt(12) Close 0.165855 R> ```
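A quick numerical check of the $\sqrt{N}$ rule (my own sketch, not from the answer): the standard deviation of a sum of 12 independent monthly errors is about $\sqrt{12}$ times the monthly standard deviation.

```r
set.seed(1)
monthly <- matrix(rnorm(12 * 1e5, sd = 0.02), ncol = 12)  # 12 months per simulated year
sd(rowSums(monthly))                                      # simulated annual SD
0.02 * sqrt(12)                                           # ~ 0.0693, as the rule predicts
```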
null
CC BY-SA 2.5
null
2011-01-29T20:32:23.547
2011-01-29T21:53:52.430
2011-01-29T21:53:52.430
334
334
null
6700
2
null
6658
2
null
If I'm reading you right (and changing Tal's 4 to a 5), then at [http://en.wikipedia.org/wiki/Statistical_hypothesis_testing](http://en.wikipedia.org/wiki/Statistical_hypothesis_testing) if you scroll halfway down you'll find the formula for "Two-proportion z-test, pooled for d0 = 0." I would think you'd want to do such a test for each of the five years, then choose a meta-analytic method of pooling the results. (You can also use an online calculator for each test. [http://www.dimensionresearch.com/resources/calculators/ztest.html](http://www.dimensionresearch.com/resources/calculators/ztest.html) and [http://www.surveystar.com/our_services/ztest.htm](http://www.surveystar.com/our_services/ztest.htm) are not perfect but each looks serviceable.) --- In light of further comments...From the research question you've posed, it sounds as if regional differences per se are not important. Therefore you could simplify a great deal by collapsing across thorough-treatment regions and not-thorough-treatment regions, yielding two sets of regions for which to test the difference in proportions. You could do this for each of the years on which you have a substantial amount of data. Then you could pool the different years' test results using a standard meta-analytic method, and you would have a single answer to your question of whether the two levels of implementation show significantly different results.
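For the per-year two-proportion test, base R's `prop.test` can be used instead of an online calculator; the counts below are hypothetical, purely for illustration:

```r
# events and region totals for one year: thorough-treatment regions vs. the rest (made-up numbers)
prop.test(x = c(120, 95), n = c(400, 410))
```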
null
CC BY-SA 2.5
null
2011-01-29T20:49:17.667
2011-01-31T18:48:45.210
2011-01-31T18:48:45.210
2669
2669
null
6701
2
null
949
9
null
This post was originally intended as a long comment rather than a complete answer to the question at hand. From the question, it's a little unclear if the interest lies only in the binary case or, perhaps, in more general cases where they may be continuous or take on other discrete values. One example that doesn't quite answer the question, but is related, and which I like, deals with item-preference rankings obtained via paired comparisons. The Bradley–Terry model can be expressed as a logistic regression where $$ \mathrm{logit}( \Pr(Y_{ij} = 1) ) = \alpha_i - \alpha_j , $$ and $\alpha_i$ is an "affinity", "popularity", or "strength" parameter of item $i$ with $Y_{ij} = 1$ indicating item $i$ was preferred over item $j$ in a paired comparison. If a full round-robin of comparisons is performed (i.e., a pairwise preference is recorded for each unordered $(i,j)$ pair), then it turns out that the rank order of the MLEs $\hat{\alpha}_i$ correspond to the rank order of $S_i = \sum_{j \neq i} Y_{ij}$, the sum total of times each object was preferred over another. To interpret this, imagine a full round-robin tournament in your favorite competitive sport. Then, this result says that the Bradley–Terry model ranks the players/teams according to their winning percentage. Whether this is an encouraging or disappointing result depends on your point of view, I suppose. NB This rank-ordering result does not hold, in general, when a full round-robin is not played.
null
CC BY-SA 3.0
null
2011-01-29T22:03:58.937
2012-09-20T12:47:09.693
2012-09-20T12:47:09.693
2970
2970
null
6702
1
274715
null
11
12855
Here's what I want to do, but there seem to be no `predict` method for the mlogit. Any ideas? ``` library(mlogit) data("Fishing", package = "mlogit") Fish <- mlogit.data(Fishing, varying = c(2:9), shape = "wide", choice = "mode") Fish_fit<-Fish[-1,] Fish_test<-Fish[1,] m <- mlogit(mode ~price+ catch | income, data = Fish_fit) predict(m,newdata=Fish_test) ```
Predict after running the mlogit function in R
CC BY-SA 2.5
null
2011-01-29T22:51:20.317
2017-04-20T03:43:53.080
2011-01-30T14:15:55.020
null
2817
[ "r", "logistic", "logit", "multinomial-distribution" ]
6703
2
null
6609
2
null
One way is to build an SPSS PLUM or NOMREG model that checks for an interaction between each predictor and a binary predictor, “time.” In that scenario you'd use just a single column for all the values of your outcome variable. For 1/2 the data set, time would be marked 0, and for the other half it'd be marked 1. Essentially you’d be treating time as if it were like gender or any other binary predictor that potentially could interact with other predictors.
null
CC BY-SA 2.5
null
2011-01-29T23:59:29.663
2011-02-25T03:13:15.720
2011-02-25T03:13:15.720
2669
2669
null
6704
2
null
6601
13
null
Conditional probability probably leads to most mistakes in everyday experience. There are many harder concepts to grasp, of course, but people usually don't have to worry about them--this one they can't get away from & is a source of rampant misadventure.
null
CC BY-SA 2.5
null
2011-01-30T00:05:34.997
2011-01-30T00:05:34.997
null
null
11954
null
6705
1
null
null
7
3174
I'm doing an autocorrelation analysis for a spatially distributed collection of observations. To perform my analysis, I am using Moran's I statistic. My questions are: (1) What are the implications and benefits of using different weighting functions, i.e. $d^{-1}$, $d^{-2}$, $\exp(-d)$, and (2) Is there any (perhaps informal) answer to which of the possible weighting functions is used most frequently in the geo-statistics literature (and for what purposes)? As for why I care: I am trying to explore whether there is clustering in my data set at different scales of structure, following some of the methodology of [Fauchald 2000](http://www.esajournals.org/doi/abs/10.1890/0012-9658%282000%29081%5B0773%3ASDPPIT%5D2.0.CO%3B2). I am plotting Moran's I versus aggregation scale. The interesting thing is that the resulting correlation curves show very different qualitative behavior when calculated using the $d^{-1}$ and $d^{-2}$ weighting functions ($d^{-1}$ has a discontinuity point, for example). I'm having a hard time understanding why this would be true -- does anyone have experience with this who may be able to point me to some references?
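To make the comparison of weighting functions concrete, here is a minimal base-R sketch of Moran's I computed with the three weight choices on the same simulated data (an added illustration, not part of the question; self-weights are set to zero):

```r
set.seed(1)
n  <- 50
xy <- cbind(runif(n), runif(n))            # point locations
x  <- rnorm(n)                             # attribute values
d  <- as.matrix(dist(xy))
diag(d) <- Inf                             # so that every weight w(d) is 0 on the diagonal

morans_I <- function(x, W) {
  z <- x - mean(x)
  (length(x) / sum(W)) * sum(W * outer(z, z)) / sum(z^2)
}

c(inv_d  = morans_I(x, 1 / d),
  inv_d2 = morans_I(x, 1 / d^2),
  exp_d  = morans_I(x, exp(-d)))
```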
Choice of weight function in Moran's I
CC BY-SA 2.5
null
2011-01-30T01:20:12.840
2020-02-05T14:33:56.367
null
null
1283
[ "clustering", "autocorrelation", "spatial", "scale-invariance" ]
6706
1
6708
null
7
7118
I am doing the following to fit my data using an exponential function: ``` # Define the data data <- c(67, 81, 93, 65, 18, 44, 31, 103, 64, 19, 27, 57, 63, 25, 22, 150, 31, 58, 93, 6, 86, 43, 17, 9, 78, 23, 75, 28, 37, 23, 108, 14, 137, 69, 58, 81, 62, 25, 54, 57, 65, 72, 17, 22, 170, 95, 38, 33, 34, 68, 38, 117, 28, 17, 19, 25, 24, 15, 103, 31, 33, 77, 38, 8, 48, 32, 48, 26, 63, 16, 70, 87, 31, 36, 31, 38, 91, 117, 16, 40, 7, 26, 15, 89, 67, 7, 39, 33, 58) # Fit the data to a model params = fitdistr(data, "exponential") ``` I got the following for the params: ``` > params rate 0.019694623 (0.002087626) ``` I want to draw a QQ-plot to see how good the fit was. I am guessing I need to generate exponentially distributed data using the parameter generated and then use some function to draw the QQ plot but am not sure how to go about doing this. Can someone tell me how to do this in R?
How do I generate a QQ-Plot for data fitted using fitdistr?
CC BY-SA 2.5
null
2011-01-30T01:23:50.887
2014-11-21T01:17:12.637
2014-11-21T01:17:12.637
805
2164
[ "r", "distributions", "goodness-of-fit", "exponential-distribution", "qq-plot" ]
6707
1
6727
null
5
1925
I've seen at least 3 sources on time series* state that the component of a series that is variously called random, stochastic, or noise (something clearly separate from any deterministic, patterned component) itself consists of 2 parts, a systematic part and an unsystematic part. I can't for the life of me figure out how the random part can in turn have a systematic part. I can see how the series can have unsystematic and systematic parts, but not how the random component of the series could have both of these. *For example, Burns and Grove, The Practice of Nursing Research (2005, p. 475-6).
A "systematic" part of a random time series component?
CC BY-SA 2.5
null
2011-01-30T01:50:47.830
2011-01-30T22:16:38.797
null
null
2669
[ "time-series" ]
6708
2
null
6706
9
null
Try the following code: ``` simdata <- qexp(ppoints(length(data)), rate = params$estimate) qqplot(data, simdata) ``` (Inspired by the base R implementation of `qqnorm`) PS When using non-base R functions, you should state what library they come from. I had to Google to discover that `fitdistr` is from MASS.
null
CC BY-SA 2.5
null
2011-01-30T01:53:42.750
2011-01-30T01:53:42.750
null
null
2975
null
6709
2
null
6655
2
null
I would suggest that the type of estimator depends on a few things: - What are the consequences of getting the estimate wrong? (e.g. is it less bad if your estimator is too high, compared to being too low? or are you indifferent about the direction of error? if an error is twice as big, is this twice as bad? is it percentage error or absolute error that is important? Is the estimation only an intermediate step required for prediction? is large sample behaviour more or less important than small sample behaviour?) - What is your prior information about the quantity you are estimating? (e.g. how is the data functionally related to your quantity? do you know if the quantity is positive? discrete? have you estimated this quantity before? how much data do you have? Is there any "group invariance" structure in your data?) - What software do you have? (e.g. no good suggesting MCMC if you don't have the software to do it, or using a GLMM if you don't know how to do it.) The first two points are context specific, and by thinking about your specific application, you will generally be able to define certain properties that you would like your estimator to have. You then choose the estimator which you can actually calculate, and which has as many of those properties as possible. I think the lack of context that a teaching course has with estimation means that often "default" criteria are used, similarly for prior information (the most obvious "default" being that you know the sampling distribution of your data). Having said that, some of the default methods are good, especially if you don't know enough about the context. But if you do know the context, and you have the tools to incorporate that context, then you should, for otherwise you may get counter-intuitive results (because of what you ignored). I'm not a big fan of MVUE as a general rule, because you often need to sacrifice too much variance to get unbiasedness. For example, imagine you are throwing darts at a dartboard, and you want to hit the bulls-eye. Suppose that the maximum deviation from the bulls-eye is 6cm for a particular throwing strategy, but the center of the dart points is 1cm above the bulls-eye. This is not MVUE, because the center should be on the bulls-eye. But suppose that in order to shift the distribution down 1cm (on the average), you have to increase your radius to at least 10cm (so the maximum error is now 10cm, and not 6cm). This is the kind of thing that can happen with MVUE, unless the variance is already small. Suppose I was a much more accurate thrower, and could narrow my error to 0.1cm. Now the bias really matters, because I will never hit the bulls-eye! In short, for me, bias only matters when it is small compared to the variance. And you will usually only get small variances when you have a large sample.
null
CC BY-SA 2.5
null
2011-01-30T03:41:02.227
2011-01-30T03:41:02.227
null
null
2392
null
6710
2
null
6421
9
null
Adding my own 5 cents, I believe the most significant breakthrough of the past 15 years has been Compressed Sensing. LARS, LASSO, and a host of other algorithms fall in this domain, in that Compressed Sensing explains why they work and extends them to other domains.
null
CC BY-SA 2.5
null
2011-01-30T04:04:07.940
2011-01-30T04:04:07.940
null
null
30
null
6712
2
null
6670
1
null
What I usually do is calculate the plug-in bandwidth using Silverman's formula ($h_p$) and then cross-validate over the range $[h_p/5,\ 5h_p]$ to find the optimal bandwidth. The cross-validation can be done either by leave-one-out least-squares cross-validation or by leave-one-out likelihood cross-validation.
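A minimal sketch of the second variant (leave-one-out likelihood cross-validation over a grid around Silverman's plug-in bandwidth), using only base R and a Gaussian kernel; this is an added illustration of the idea, not the poster's code:

```r
set.seed(1)
x <- rnorm(200)

h_p  <- bw.nrd0(x)                              # Silverman's rule-of-thumb bandwidth
grid <- seq(h_p / 5, 5 * h_p, length.out = 50)  # search range [h_p/5, 5*h_p]

# leave-one-out log-likelihood of a Gaussian kernel density estimate with bandwidth h
loo_loglik <- function(h, x) {
  sum(sapply(seq_along(x), function(i) {
    log(mean(dnorm(x[i], mean = x[-i], sd = h)))
  }))
}

ll <- sapply(grid, loo_loglik, x = x)
grid[which.max(ll)]                             # cross-validated bandwidth
```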
null
CC BY-SA 2.5
null
2011-01-30T04:40:55.097
2011-01-30T04:40:55.097
null
null
null
null
6714
2
null
6652
21
null
I wouldn't call the definition of CIs wrong, but they are easy to misinterpret, due to there being more than one definition of probability. CIs are based on the following definition of probability (frequentist or ontological): (1) probability of a proposition = long-run proportion of times that proposition is observed to be true, conditional on the data-generating process. Thus, in order to be conceptually valid in using a CI, you must accept this definition of probability. If you don't, then your interval is not a CI, from a theoretical point of view. This is why the definition used the word proportion and NOT the word probability, to make it clear that the "long run frequency" definition of probability is being used. The main alternative definition of probability (epistemological, or probability as an extension of deductive logic, or Bayesian) is: (2) probability of a proposition = rational degree of belief that the proposition is true, conditional on a state of knowledge. People often intuitively get both of these definitions mixed up, and use whichever interpretation happens to appeal to their intuition. This can get you into all kinds of confusing situations (especially when you move from one paradigm to the other). That the two approaches often lead to the same result means that in some cases we have: rational degree of belief that the proposition is true, conditional on a state of knowledge = long-run proportion of times that proposition is observed to be true, conditional on the data-generating process. The point is that this equality does not hold universally, so we cannot expect the two different definitions to always lead to the same results. So, unless you actually work out the Bayesian solution, and then find it to be the same interval, you cannot interpret the interval given by the CI as a probability of containing the true value. And if you do, then the interval is not a confidence interval, but a credible interval.
null
CC BY-SA 2.5
null
2011-01-30T06:15:56.497
2011-02-01T11:04:32.597
2011-02-01T11:04:32.597
2392
2392
null
6715
1
6716
null
8
155
Please pardon me if this question is not clear. I am not sure if I am using the right terminology. I have conducted an experiment in different environments multiple times. So my data looks something like this: ``` Environment1 1.2 2.1 1.1 1.5 1.6 Environment2 4.2 2.6 3.5 2.5 2.9 Environment3 7.2 4.6 5.3 4.5 1.6 Environment4 0.0 0.0 1.2 15.0 0.0 Environment5 3.2 2.4 7.2 5.5 6.6 Environment6 23.2 32.1 18.1 1.5 19.6 ``` I can clearly see (or maybe my intuition says) that the experiment was not conducted properly in Environment4 (too low and fluctuating a lot) and Environment5 (way too high) but I am not sure how to prove this. Am I supposed to rely on hypothesis-testing with the hypothesis: > The experiment was not conducted properly in Environments 4 and 6. and then use some procedure to prove this? Or is there a standard way of showing this? Can someone please help me understand how to approach this kind of problem? I am using R.
Can someone help me understand what type of problem I am looking at? Not sure if this classifies as hypothesis-testing
CC BY-SA 2.5
null
2011-01-30T06:23:56.603
2011-01-30T06:43:32.580
2011-01-30T06:32:25.987
2164
2164
[ "r", "distributions", "hypothesis-testing", "statistical-significance", "experiment-design" ]
6716
2
null
6715
2
null
You can do a [Student's t-test](http://en.wikipedia.org/wiki/Student%27s_t-test) to see if the mean is different between groups 4 and 6 and the rest. Even with your small sample size, you should find a difference. Note that it will tell you that groups 4 and 6 are significantly different on average from the rest, but it won't tell you that "The experiment was not conducted properly in Environments 4 and 6", which can't be answered without knowing what "properly" means for the observations.
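In R this is a one-liner with `t.test`, using the numbers from the question and the grouping the answer suggests (Environments 4 and 6 against the rest):

```r
env4_6 <- c(0.0, 0.0, 1.2, 15.0, 0.0,        # Environment4
            23.2, 32.1, 18.1, 1.5, 19.6)     # Environment6
rest   <- c(1.2, 2.1, 1.1, 1.5, 1.6,         # Environment1
            4.2, 2.6, 3.5, 2.5, 2.9,         # Environment2
            7.2, 4.6, 5.3, 4.5, 1.6,         # Environment3
            3.2, 2.4, 7.2, 5.5, 6.6)         # Environment5
t.test(env4_6, rest)                         # Welch two-sample t-test
```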
null
CC BY-SA 2.5
null
2011-01-30T06:43:32.580
2011-01-30T06:43:32.580
null
null
223
null
6717
2
null
6707
3
null
"random" is often used as if it was a real property of the data under study, where it should be replaced with "uncertain". To give an example, if I ask you what how much money you earned over the past month, and you don't tell me, it is not "random", but just uncertain. However, treating the uncertainty as if it was random allows you to make some useful conclusions. The noise is not "random" per se, but given that we usually have limited knowledge of how each particular piece of "noise" is generated, assuming that it is random can be useful. Now whenever you fit a model to some data, you will have residuals from that model. And if treating the "noise" as if it was random was a good idea, then the residuals from the model should be consistent with whatever definition of "randomness" you have used in fitting the model. If they are not, then basically the time series is telling you that the "randomness" you assumed is not a good description of what it actually happening, and it gives you a clue as to what a better description might be. For example, if I fit a linear relationship for the systematic part, but it is actually quadratic, then the so-called "random" noise will not look random at all, rather it will contain the squared component of the systemtatic part. To make it even more concrete, suppose that your response $Y$ is a deterministic function of $X$, say $Y=3+2X+X^2$. Now, because you don't know this function you suppose that $Y=\alpha+\beta X + error$, and because you have no reason to doubt the model prior to seeing the data, you assume that the error is just "random noise" (usually $N(0,\sigma^2)$). However, once you actually fit your data and look at the residuals, they will all line up as an exact quadratic function of the residuals. Thus there is a systematic component to the "noise" (in fact the "noise" is entirely systematic). This is basically "Nature" telling "You" that you model is wrong, and gives a clue as to how it could be improved. The same kind of thing is happening in the time series. you could just replace the model above with $Y_{1}=1,Y_{2}=6,Y_{3}=0.5,Y_{4}=10,Y_{5}=3,Y_{6}=10$ and for $t\geq7$ have $Y_t=10+2 Y_{t-1} -5Y_{t-1}^3 + 2Y_{t-5}$ and the same kind of thing would happen.
null
CC BY-SA 2.5
null
2011-01-30T07:00:30.623
2011-01-30T07:00:30.623
null
null
2392
null
6718
2
null
6684
5
null
When you use `predict` with the `newdata` argument you must supply a data.frame with the same column names. In your code you have ``` newdata <- expand.grid(X1=c(-1,0,1),X2=c(-1,0,1)) names(newdata) <- c("scale(xxA)","scale(xxB)") ``` But the formula supplied to the lm object is ``` out ~ scale(xxA)*scale(xxB) ``` So when you call predict, it tries to find objects `xxA` and `xxB` in your data and apply the function `scale`, as per your initial request. But all R finds are objects `scale(xxA)` and `scale(xxB)`. So naturally it produces the error. Now if you supply correctly named `newdata` ``` newdata <- expand.grid(xxA=c(-1,0,1),xxB=c(-1,0,1)) ``` and try to use it for prediction ``` newdata$Y <- predict(lm.res.scale,newdata) ``` R will remember how it scaled the original data and apply the same scaling to your new data. In this case the supplied value -1 for `xxA` will have the original mean of `xxA` subtracted and be divided by the original standard deviation of `xxA`. If you want to get the prediction 1 SD below the mean, you will need to supply that value. In your case then newdata should look like this: ``` newdata <- expand.grid(xxA=mean(dat$xxA)+sd(dat$xxA)*c(-1,0,1),xxB=mean(dat$xxB)+sd(dat$xxB)*c(-1,0,1)) ``` I gathered all the solutions in one place to compare: ``` ##Load rms for Glm and Predict library(rms) ##Prepare data set.seed(1) dat <- data.frame(xxA = rnorm(20,10), xxB = rnorm(20,20)) dat$out <- with(dat,xxA+xxB+xxA*xxB+rnorm(20,20)) dat <- within(dat,{ X1 <- as.numeric(scale(xxA)) X2 <- as.numeric(scale(xxB)) }) ##Estimate the models lm.res.scale <- lm(out ~ scale(xxA)*scale(xxB),data=dat) lm.res.correct <- lm(out~X1*X2,data=dat) lm.mod <- lm(out ~ I(scale(xxA))*I(scale(xxB)), data=dat) rms.res <- Glm(out ~ scale(xxA)*scale(xxB),data=dat) ##Build data for prediction newdata <- expand.grid(xxA=c(-1,0,1),xxB=c(-1,0,1)) newdata$X1<-newdata$xxA newdata$X2<-newdata$xxB ##Gather the predictions newdata$Yscaled <- predict(lm.res.scale,newdata) newdata$Ycorrect <- predict(lm.res.correct,newdata) newdata$YwithI <- predict(lm.mod,newdata) newdata$Ywithrms <- Predict(rms.res,xxA=c(-1,0,1),xxB=c(-1,0,1),conf.int=FALSE)[,3] ##Build alternative data for prediction newdata2 <- expand.grid(xxA=mean(dat$xxA)+sd(dat$xxA)*c(-1,0,1),xxB=mean(dat$xxB)+sd(dat$xxB)*c(-1,0,1)) #Predict newdata$Yorigsc <- predict(lm.res.scale,newdata2) ``` I used `set.seed` so the results should be the same if you try to repeat it. The newdata looks like this: ``` > newdata xxA xxB X1 X2 Yscaled Ycorrect YwithI Ywithrms Yorigsc 1 -1 -1 -1 -1 25.79709 225.9562 221.7517 221.7517 225.9562 2 0 -1 0 -1 25.63030 244.5181 243.0404 243.0404 244.5181 3 1 -1 1 -1 25.46351 263.0800 264.3291 264.3291 263.0800 4 -1 0 -1 0 25.36341 234.6981 231.7012 231.7012 234.6981 5 0 0 0 0 26.21499 254.0704 254.0704 254.0704 254.0704 6 1 0 1 0 27.06657 273.4427 276.4396 276.4396 273.4427 7 -1 1 -1 1 24.92972 243.4400 241.6507 241.6507 243.4400 8 0 1 0 1 26.79967 263.6227 265.1004 265.1004 263.6227 9 1 1 1 1 28.66962 283.8054 288.5501 288.5501 283.8054 ``` As expected, `Yscaled` does not produce the result we need, since the original scaling is applied. When we scale the data before `lm` (`Ycorrect`) and when we supply alternative unscaled values (`Yorigsc`), the results coincide and are the ones needed. Now the other prediction methods give different results. This happens since R is forced to forget the original scaling by the formula ``` out ~ I(scale(xxA))*I(scale(xxB)) ``` or by package `rms`.
But when we use predict, the values are still scaled, but now according to the supplied values of `xxA` and `xxB`. This is best illustrated by the following statement, which in some way mimics what predict does with the data: ``` > eval(expression(cbind(scale(xxA),scale(xxB))),env=as.list(newdata)) [,1] [,2] [1,] -1.154701 -1.154701 [2,] 0.000000 -1.154701 [3,] 1.154701 -1.154701 [4,] -1.154701 0.000000 [5,] 0.000000 0.000000 [6,] 1.154701 0.000000 [7,] -1.154701 1.154701 [8,] 0.000000 1.154701 [9,] 1.154701 1.154701 ``` We can see that in this case scaling does not change the original values too much, but this is even worse, since the values from predict look reasonable, when in fact they are wrong.
null
CC BY-SA 2.5
null
2011-01-30T07:50:33.400
2011-01-30T20:04:27.090
2011-01-30T20:04:27.090
2116
2116
null
6719
2
null
6707
3
null
Systematic and unsystematic are rather ambiguous terms. One of the possible explanations is given by @probabilityislogic. Another may be given [here.](http://en.wikipedia.org/wiki/Decomposition_of_time_series) Since the context you gave is time series, I think this might be related to [Wold's theorem](http://en.wikipedia.org/wiki/Wold%27s_theorem). Unfortunately the Wikipedia text captures the essence, but does not go into the details of which part is systematic and which is non-systematic. I did not manage to find an appropriate link to refer to, so I will try to give some explanation based on the [book](http://www.setbook.com.ua/books/120541.html?PHPSESSID=i41mmv16i1t034pb3u0o0pkkf1) I have. This subject is also discussed [in this book](http://books.google.com/books?id=BjaHPwAACAAJ&dq=cramer+leadbetter&hl=fr&ei=pGpFTdDQOZChOtzGuJEC&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCkQ6AEwAA). I will not give precise and rigorous definitions, since they involve Hilbert spaces and other graduate mathematics stuff, which I think is not really necessary to get the point across. Each [covariance-stationary](http://en.wikipedia.org/wiki/Stationary_process) process $\{X_t,t\in \mathbb{Z}\}$ can be uniquely decomposed into two stationary processes: $X_t=M_t+N_t$, singular $M_t$ and regular $N_t$. Singular and regular processes are defined via their prediction properties. In stationary process theory the prediction of the process $X_t$ at time $t$ is formed from the linear span of its history $(X_s,s<t)$. Singular processes are processes for which the prediction error: $$E(\hat{X}_t-X_t)^2$$ is zero. Such processes are sometimes called deterministic, and in your context can also be called systematic. The simplest example of such a process is $X_t=\eta$ for all $t$ and $\eta$ some random variable. Then the linear prediction of $X_t$ based on its history will always be $\eta$. The error of such a prediction as defined above would be zero. Regular stationary processes on the other hand cannot be predicted without error from their history. It can be shown that the stationary process $N_t$ is regular if and only if it admits an $MA(\infty)$ decomposition. This means that there exists a [white-noise](http://en.wikipedia.org/wiki/White_noise) sequence $(\varepsilon_t)$ such that $$N_t=\sum_{n=0}^{\infty}c_n\varepsilon_{t-n},$$ where the coefficients $c_n$ are such that the equality holds. These processes are sometimes called non-deterministic, or probably non-systematic in your case.
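A small simulation sketch of the decomposition (an added illustration): the singular part below is a single random level shared by every time point, which is perfectly predictable from its own past, while the regular part is an MA(1) process driven by white noise.

```r
set.seed(1)
n   <- 500
eta <- rnorm(1)                               # one draw shared by all t
M   <- rep(eta, n)                            # singular (deterministic) component
N   <- arima.sim(model = list(ma = 0.8), n)   # regular component: MA(1) in white noise
X   <- M + N                                  # stationary process with both parts
```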
null
CC BY-SA 2.5
null
2011-01-30T09:21:36.527
2011-01-30T13:51:17.730
2011-01-30T13:51:17.730
2116
2116
null
6720
1
6726
null
10
4740
An exponential model is a model described by the following equation: $$\hat{y_{i}}=\beta_{0}\cdot e^{\beta_{1}x_{1i}+\ldots+\beta_{k}x_{ki}}$$ The most common approach used to estimate such a model is linearization, which can be done easily by taking logarithms of both sides. What are the other approaches? I'm especially interested in those which can handle $y_{i}=0$ in some observations. Update 31.01.2011 I'm aware of the fact that this model can't produce zero. I'll elaborate a bit on what I'm modeling and why I chose this model. Let's say we want to predict how much money a client spends in a shop. Of course many clients are just looking and don't buy anything, which is why there are zeros. I didn't want to use a linear model because it produces a lot of negative values, which don't make any sense. The other reason is that this model works really well, much better than the linear one. I've used a genetic algorithm to estimate the parameters, so it wasn't a 'scientific' approach. Now I'd like to know how to deal with the problem using more scientific methods. It can also be assumed that most, or even all, of the variables are binary variables.
Estimation of exponential model
CC BY-SA 2.5
null
2011-01-30T11:56:51.913
2011-01-31T14:48:08.047
2011-01-31T09:50:09.067
1643
1643
[ "estimation", "nonlinear-regression" ]
6721
2
null
6720
10
null
This is a [generalized linear model](http://en.wikipedia.org/wiki/Generalized_linear_model) (GLM) with a log [link function](http://en.wikipedia.org/wiki/Generalized_linear_model#Link_function). Any probability distribution on $[0,\infty)$ with non-zero density at zero will handle $y_i=0$ in some observations; the most common would be the Poisson distribution, resulting in [Poisson regression](http://en.wikipedia.org/wiki/Poisson_regression), a.k.a. log-linear modelling. Another choice would be a [negative binomial distribution](http://en.wikipedia.org/wiki/Negative_binomial_distribution). If you don't have count data, or if $y_i$ takes non-integer values, you can still use the framework of generalized linear models without fully specifying a distribution for $\operatorname{P}(y_i|\bf{x})$ but instead only specifying the relationship between its mean and variance using [quasi-likelihood](http://en.wikipedia.org/wiki/Quasi-likelihood).
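A minimal sketch of fitting such a log-link GLM in R with simulated data (the names and numbers are made up for illustration); the quasi-Poisson fit only specifies the mean-variance relationship, as described above:

```r
set.seed(1)
n  <- 200
x1 <- rbinom(n, 1, 0.5)                          # binary predictors, as in the updated question
x2 <- rbinom(n, 1, 0.3)
y  <- rpois(n, exp(0.5 + 1.2 * x1 - 0.8 * x2))   # counts, with plenty of zeros

fit_pois  <- glm(y ~ x1 + x2, family = poisson(link = "log"))
fit_quasi <- glm(y ~ x1 + x2, family = quasipoisson(link = "log"))
exp(coef(fit_pois))                              # multiplicative effects on the mean
```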
null
CC BY-SA 2.5
null
2011-01-30T12:30:39.593
2011-01-30T12:50:51.010
2011-01-30T12:50:51.010
449
449
null
6722
2
null
6720
3
null
You can always use [non-linear least squares](http://en.wikipedia.org/wiki/Non-linear_least_squares). Then your model will be: $$y_i=\beta_0\exp(\beta_1x_{1i}+...+\beta_kx_{ki})+\varepsilon_i$$ The zeroes in $y_i$ then will be treated as deviations from the non-linear trend.
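A minimal `nls` sketch of the model this answer describes (the simulated data and starting values are my own; `nls` needs reasonable start values to converge):

```r
set.seed(1)
x1 <- runif(100)
x2 <- rbinom(100, 1, 0.5)
y  <- 2 * exp(1.5 * x1 - 1.0 * x2) + rnorm(100, sd = 0.5)
y  <- pmax(y, 0)                                  # keep a few exact zeros, as in the question

fit <- nls(y ~ b0 * exp(b1 * x1 + b2 * x2),
           start = list(b0 = 1, b1 = 1, b2 = 0))
summary(fit)
```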
null
CC BY-SA 2.5
null
2011-01-30T14:04:34.183
2011-01-30T14:04:34.183
null
null
2116
null
6723
1
6752
null
6
8473
Given the dataset [cars.txt](http://dl.dropbox.com/u/138632/statistics/R%20data/cars.txt), we want to formulate a good regression model for the Midrange Price using the variables Horsepower, Length, Luggage, Uturn, Wheelbase, and Width. Both: - using all possible subsets selection, and - using an automatic selection technique. For the first part, we do in R: ``` cars <- read.table(file=file.choose(), header=TRUE) names(cars) #regression attach(cars) leap <- leaps(x=cbind(cars$Horsepower, cars$Length, cars$Luggage, cars$Uturn, cars$Wheelbase, cars$Width), y=cars$MidrangePrice, method=c("r2"), nbest=3) combine <- cbind(leap$which,leap$size, leap$r2) n <- length(leap$size) dimnames(combine) <- list(1:n,c("horsep","length","Luggage","Uturn","Wheelbase","Width","size","r2")) round(combine, digits=3) leap.cp <- leaps(x=cbind(cars$Horsepower, cars$Length, cars$Luggage, cars$Uturn, cars$Wheelbase, cars$Width), y=cars$MidrangePrice, nbest=3) combine.cp <- cbind(leap.cp$which,leap.cp$size, leap.cp$Cp) dimnames(combine.cp) <- list(1:n,c("horsep","length","Luggage","Uturn","Wheelbase","Width","size","cp")) round(combine.cp, digits=3) plot(leap.cp$size, leap.cp$Cp, ylim=c(1,7)) abline(a=0, b=1) ``` Am I correct in my interpretation that the most adequate model is one with 4 parameters (the three variables Horsepower, Wheelbase and Width) because it has the lowest Mallows' Cp value? For the second part, we can choose between the forward, backward or stepwise selection models: ``` #stepwise selection methods #forward slm.foward <- step(lm(cars$MidrangePrice ~1, data=cars), scope=~cars$Horsepower + cars$Length + cars$Luggage + cars$Uturn + cars$Wheelbase + cars$Horsepower+ cars$Width, direction="forward") #backward reg.lm1 <- lm(cars$MidrangePrice ~ cars$Horsepower + cars$Length + cars$Luggage + cars$Uturn + cars$Wheelbase + cars$Horsepower + cars$Width) slm.backward <- step(reg.lm1, direction="backward") #stepwise reg.lm1 <- lm(cars$MidrangePrice ~ cars$Horsepower + cars$Length + cars$Luggage + cars$Uturn + cars$Wheelbase + cars$Horsepower + cars$Width) slm.stepwise <- step(reg.lm1,direction="both") ``` How do I interpret the results I get from this R code?
Regression selection using all possible subsets selection and automatic selection techniques
CC BY-SA 3.0
null
2011-01-30T15:14:03.557
2011-05-27T04:02:47.707
2011-05-27T04:02:47.707
159
2980
[ "r", "regression", "model-selection", "stepwise-regression" ]
6724
2
null
4556
10
null
The "mechanical" result of just plugging in $z = 1$ into the transfer response is essentially a product of two facts. The steady-state gain is (usually, I believe) defined as the (magnitude of the) limiting response as $t \to \infty$ of the system to a unit-step input. The so-called final-value theorem states that, if the limit $\lim_{n \to \infty} y(n)$ exists, then $\lim_{n \to \infty} y(n) = \lim_{z \to 1} (1-z^{-1}) Y(z)$ where $y(n)$ is the time-domain output and $Y(z)$ is its corresponding $z$-transform. Now, for the steady-state gain, the input is a unit-step function, so $x(n) = 1$ for each $n \geq 0$. Hence, $$ X(z) = \sum_{n=0}^\infty x(n) z^{-n} = \sum_{n=0}^\infty z^{-n} = \frac{1}{1-z^{-1}} . $$ Using the transfer equation, we get that the $z$-transform of the output is $$ Y(z) = X(z) H(z) = \frac{H(z)}{1-z^{-1}} . $$ (Assuming that the limit $\lim_{n\to\infty} y(n)$ exists) we have that $$ \lim_{n \to \infty} y(n) = \lim_{z \to 1} (1-z^{-1}) Y(z) = \lim_{z \to 1}\, H(z) . $$ The left-hand side is the steady-state value of a step-response (i.e., it is the value of the response as time goes to $\infty$ of a one-unit constant input), and so the steady-state gain is $|\lim_{n \to \infty} y(n)| = \lim_{n \to \infty} |y(n)|$. Technically, you need to check that the limit exists (which I've tried to emphasize). It seems to me that a sufficient condition would be that all the poles of the transfer response be strictly inside the unit circle. (Caveat lector: I haven't checked that closely at all.) If this does not sufficiently clarify things, you might try doing Google searches on terms like "dc gain" and "final-value theorem", which are closely related to what you want.
null
CC BY-SA 2.5
null
2011-01-30T17:39:27.313
2011-01-30T18:04:18.460
2011-01-30T18:04:18.460
2970
2970
null
6725
1
null
null
4
266
I have to implement a genetic algorithm and therefore select the "fittest" of all possible members of a generation. I have the following arrays of fitnesses, their weighed equivalents and a cumulative array. ``` Fitnesses: [0.2, 0.0, 0.2, 0.0, 0.0, 0.0] Weighed Fitnesses: [0.5, 0.0, 0.5, 0.0, 0.0, 0.0] Cumulative Fitnesses: [0.5, 0.5, 1.0, 1.0, 1.0, 1.0] ``` Now I have to choose five new members, with a higher probability of the chosen member being fitter than the others, i.e. the probability of being chosen proportional to the fitness. How would I go about doing this by using a random number?
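One standard way (roulette-wheel selection) is to draw a uniform random number and take the first index whose cumulative fitness reaches it; here is a base-R sketch using the arrays from the question:

```r
weighted <- c(0.5, 0.0, 0.5, 0.0, 0.0, 0.0)   # weighed fitnesses from the question
cum      <- cumsum(weighted)                  # cumulative fitnesses

pick_one <- function(cum) which(cum >= runif(1))[1]
replicate(5, pick_one(cum))                   # five draws, probability proportional to fitness

# equivalently, in one call (sampling with replacement, probability ~ fitness):
sample(seq_along(weighted), 5, replace = TRUE, prob = weighted)
```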
Pick a random item with the help of an array of cumulative probabilities
CC BY-SA 2.5
null
2011-01-30T17:56:44.463
2011-01-31T00:08:48.627
null
null
1205
[ "random-variable", "random-generation" ]
6726
2
null
6720
11
null
There are several issues here. (1) The model needs to be explicitly probabilistic. In almost all cases there will be no set of parameters for which the lhs matches the rhs for all your data: there will be residuals. You need to make assumptions about those residuals. Do you expect them to be zero on the average? To be symmetrically distributed? To be approximately normally distributed? Here are two models that agree with the one specified but allow drastically different residual behavior (and therefore will typically result in different parameter estimates). You can vary these models by varying assumptions about the joint distribution of the $\epsilon_{i}$: $$\text{A:}\ y_{i} =\beta_{0} \exp{\left(\beta_{1}x_{1i}+\ldots+\beta_{k}x_{ki} + \epsilon_{i}\right)}$$ $$\text{B:}\ y_{i} =\beta_{0} \exp{\left(\beta_{1}x_{1i}+\ldots+\beta_{k}x_{ki}\right)} + \epsilon_{i}.$$ (Note that these are models for the data $y_i$; there usually is no such thing as an estimated data value $\hat{y_i}$.) (2) The need to handle zero values for the y's implies the stated model (A) is both wrong and inadequate, because it cannot produce a zero value no matter what the random error equals. The second model above (B) allows for zero (or even negative) values of y's. However, one should not choose a model solely on such a basis. To reiterate #1: it is important to model the errors reasonably well. (3) Linearization changes the model. Typically, it results in models like (A) but not like (B). It is used by people who have analyzed their data enough to know this change will not appreciably affect the parameter estimates and by people who are ignorant of what is happening. (It is hard, many times, to tell the difference.) (4) A common way to handle the possibility of a zero value is to propose that $y$ (or some re-expression thereof, such as the square root) has a strictly positive chance of equaling zero. Mathematically, we are mixing a point mass (a "delta function") in with some other distribution. These models look like this: $$\eqalign{ f(y_i) &\sim F(\mathbf{\theta}); \cr \theta_j &= \beta_{j0} + \beta_{j1} x_{1i} + \cdots + \beta_{jk} x_{ki} }$$ where $\Pr_{F_\theta}[f(Y) = 0] = \theta_{j+1} \gt 0$ is one of the parameters implicit in the vector $\mathbf{\theta}$, $F$ is some family of distributions parameterized by $\theta_1, \ldots, \theta_j$, and $f$ is the reexpression of the $y$'s (the "link" function of a generalized linear model: see onestop's reply). (Of course, then, $\Pr_{F_\theta}[f(Y) \le t]$ = $(1 - \theta_{j+1})F_\theta(t)$ when $t \ne 0$.) Examples are the [zero-inflated Poisson and Negative Binomial models](http://rss.acs.unt.edu/Rdoc/library/pscl/html/zeroinfl.html). (5) The issues of constructing a model and fitting it are related but different. As a simple example, even an ordinary regression model $Y = \beta_0 + \beta_1 X + \epsilon$ can be fit in many ways by means of least squares (which gives the same parameter estimates as Maximum Likelihood and almost the same standard errors), [iteratively reweighted least squares](http://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares), various other forms of "[robust least squares](http://en.wikipedia.org/wiki/Robust_regression)," etc.
The choice of fitting is often based on convenience, expedience (e.g., availability of software), familiarity, habit, or convention, but at least some thought should be given to what is appropriate for the assumed distribution of the error terms $\epsilon_i$, to what the [loss function](http://en.wikipedia.org/wiki/Loss_function) for the problem might reasonably be, and to the possibility of exploiting additional information (such as a [prior distribution](http://en.wikipedia.org/wiki/Prior_probability) for the parameters).
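As a concrete (and entirely hypothetical) illustration of the zero-inflated models mentioned in point (4), here is a minimal sketch using `pscl::zeroinfl` on simulated data; the variable names and coefficient values are invented for illustration only, not taken from the question:
```
library(pscl)   # provides zeroinfl()

set.seed(1)
n <- 500
x <- rnorm(n)
# A structural-zero indicator mixed with a Poisson count that depends on x
zero <- rbinom(n, 1, plogis(-1 + 0.8 * x))
y    <- ifelse(zero == 1, 0, rpois(n, exp(0.5 + 0.7 * x)))
d    <- data.frame(y = y, x = x)

# The count model and the zero-inflation model are separated by "|"
fit <- zeroinfl(y ~ x | x, data = d, dist = "poisson")
summary(fit)
```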
null
CC BY-SA 2.5
null
2011-01-30T17:57:06.893
2011-01-31T14:48:08.047
2011-01-31T14:48:08.047
919
919
null
6727
2
null
6707
6
null
The Burns reference that you are quoting seems to dividing the stochastic part into autocorrelation error, which is a byproduct of any time series analysis (and is systematic), vs. truly random error which is uncontrollable. -Ralph Winters
null
CC BY-SA 2.5
null
2011-01-30T22:16:38.797
2011-01-30T22:16:38.797
null
null
3489
null
6728
1
6729
null
43
32671
I was trying to fit my data into various models and figured out that the `fitdistr` function from library `MASS` of `R` gives me `Negative Binomial` as the best-fit. Now from the [wiki](http://en.wikipedia.org/wiki/Negative_binomial_distribution) page, the definition is given as: > NegBin(r,p) distribution describes the probability of k failures and r successes in k+r Bernoulli(p) trials with success on the last trial. Using `R` to perform model fitting gives me two parameters `mean` and `dispersion parameter`. I am not understanding how to interpret these because I cannot see these parameters on the wiki page. All I can see is the following formula: ![Negative Binomial Distribution Formula](https://i.stack.imgur.com/Tpnyi.png) where `k` is the number of observations and `r=0...n`. Now how do I relate these with the parameters given by `R`? The help file does not provide much information either. Also, just to say a few words about my experiment: In a social experiment that I was conducting, I was trying to count the number of people each user contacted in a period of 10 days. The population size was 100 for the experiment. Now, if the model fits the Negative Binomial, I can blindly say that it follows that distribution but I really want to understand the intuitive meaning behind this. What does it mean to say that the number of people contacted by my test subjects follows a negative binomial distribution? Can someone please help clarify this?
Understanding the parameters inside the Negative Binomial Distribution
CC BY-SA 2.5
null
2011-01-30T23:22:12.240
2019-03-08T00:18:15.763
null
null
2164
[ "r", "distributions", "modeling", "negative-binomial-distribution" ]
6729
2
null
6728
49
null
You should look [further down the Wikipedia article on the NB](https://en.wikipedia.org/wiki/Negative_binomial_distribution#Gamma%E2%80%93Poisson_mixture), where it says "gamma-Poisson mixture". While the definition you cite (which I call the "coin-flipping" definition since I usually define it for classes as "suppose you want to flip a coin until you get $k$ heads") is easier to derive and makes more sense in an introductory probability or mathematical statistics context, the gamma-Poisson mixture is (in my experience) a much more generally useful way to think about the distribution in applied contexts. (In particular, this definition allows non-integer values of the dispersion/size parameter.) In this context, your dispersion parameter describes the distribution of a hypothetical Gamma distribution that underlies your data and describes unobserved variation among individuals in their intrinsic level of contact. In particular, it is the shape parameter of the Gamma, and it may be helpful in thinking about this to know that the coefficient of variation of a Gamma distribution with shape parameter $\theta$ is $1/\sqrt{\theta}$; as $\theta$ becomes large the latent variability disappears and the distribution approaches the Poisson.
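As a quick illustration of the gamma-Poisson reading of the `mu`/`size` parameterisation (the numbers below are arbitrary, not from the experiment): drawing a Gamma-distributed rate per individual and then a Poisson count given that rate reproduces `rnbinom(mu = , size = )`.
```
set.seed(1)
mu <- 4; theta <- 1.5    # "mu" and "size" (dispersion) as reported by fitdistr
n  <- 1e5

x1 <- rnbinom(n, mu = mu, size = theta)                # direct NB draws
lambda <- rgamma(n, shape = theta, rate = theta / mu)  # latent individual rates
x2 <- rpois(n, lambda)                                 # Poisson count given the rate

c(mean(x1), mean(x2))   # both close to mu
c(var(x1),  var(x2))    # both close to mu + mu^2/theta
```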
null
CC BY-SA 4.0
null
2011-01-30T23:58:00.913
2019-03-08T00:18:15.763
2019-03-08T00:18:15.763
2126
2126
null
6730
2
null
6725
2
null
Whuber's answer is of course very good and simple, but if you are looking for something faster (the binary search approach needs $O(\log n)$ steps) then you can look at Walker's alias method, which is described well in Knuth's Art of Computer Programming; unfortunately I can't tell you the exact page because I only have the Polish version of the book. You can also read [this article](http://www.cs.utk.edu/~vose/Publications/random.pdf). This method allows you to draw a random number from any finite discrete distribution in constant time.
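For what it's worth, here is a minimal R sketch of the Walker/Vose alias method (the function and variable names are my own); after the table is built, each draw costs one uniform draw and one integer draw:
```
# Build the alias table from a vector of non-negative (possibly unnormalized) weights
build_alias <- function(w) {
  n <- length(w)
  p <- w * n / sum(w)            # scaled probabilities, average value 1
  alias <- integer(n)
  small <- which(p < 1)
  large <- which(p >= 1)
  while (length(small) > 0 && length(large) > 0) {
    s <- small[1]; small <- small[-1]
    l <- large[1]; large <- large[-1]
    alias[s] <- l                # excess mass of column s is covered by column l
    p[l] <- p[l] + p[s] - 1
    if (p[l] < 1) small <- c(small, l) else large <- c(large, l)
  }
  p[c(small, large)] <- 1        # mop up numerical leftovers
  list(prob = p, alias = alias)
}

sample_alias <- function(tbl, m) {
  n <- length(tbl$prob)
  i <- sample.int(n, m, replace = TRUE)      # pick a column uniformly, O(1) per draw
  u <- runif(m)
  ifelse(u < tbl$prob[i], i, tbl$alias[i])   # keep the column or take its alias
}

# Quick check against the weights in the question
tbl <- build_alias(c(0.5, 0, 0.5, 0, 0, 0))
table(sample_alias(tbl, 1e5)) / 1e5          # roughly 0.5 each for items 1 and 3
```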
null
CC BY-SA 2.5
null
2011-01-31T00:08:48.627
2011-01-31T00:08:48.627
null
null
1643
null
6731
1
6732
null
11
5103
I have a design matrix of p regressors, n observations, and I am trying to compute the sample variance-covariance matrix of the parameters. I am trying to directly calculate it using svd. I am using R, when I take svd of the design matrix, I get three components: a matrix $U$ which is $n \times p$, a matrix $D$ which is $1\times 3$ (presumably eigenvalues), and a matrix $V$ which is $3\times 3$. I diagonalized $D$, making it a $3\times 3$ matrix with 0's in the off-diagonals. Supposedly, the formula for covariance is: $V D^2 V'$, however, the matrix does not match, nor is it even close to R's built in function, `vcov`. Does anyone have any advice/references? I admit that I am a bit unskilled in this area.
Using Singular Value Decomposition to Compute Variance Covariance Matrix from linear regression model
CC BY-SA 2.5
null
2011-01-31T01:11:51.263
2011-01-31T07:17:26.280
2011-01-31T07:17:26.280
2116
2984
[ "r", "regression" ]
6732
2
null
6731
16
null
First, recall that under assumptions of multivariate normality of the linear-regression model, we have that $$ \hat{\beta} \sim \mathcal{N}( \beta, \sigma^2 (X^T X)^{-1} ) . $$ Now, if $X = U D V^T$ where the right-hand side is the SVD of X, then we get that $X^T X = V D U^T U D V^T = V D^2 V^T$. Hence, $$ (X^T X)^{-1} = V D^{-2} V^T . $$ We're still missing the estimate of the variance, which is $$ \hat{\sigma}^2 = \frac{1}{n - p} (y^T y - \hat{\beta}^T X^T y) . $$ Though I haven't checked, hopefully vcov returns $\hat{\sigma}^2 V D^{-2} V^T$. Note: You wrote $V D^2 V^T$, which is $X^T X$, but we need the inverse for the variance-covariance matrix. Also note that in $R$, if `d` is the vector of singular values returned by `svd`, you can do this computation as ``` vcov.matrix <- var.est * (v %*% diag(1/d^2) %*% t(v)) ``` observing that for matrix multiplication we use `%*%` instead of just `*`, and that `diag(1/d^2)` is what forms $D^{-2}$ from the vector of singular values. `var.est` above is the estimate of the variance of the noise. (Also, I've made the assumptions that $X$ is full-rank and $n \geq p$ throughout. If this is not the case, you'll have to make minor modifications to the above.)
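A small self-contained check (simulated data, arbitrary coefficients) that $\hat{\sigma}^2 V D^{-2} V^T$ matches `vcov` for an ordinary `lm` fit:
```
set.seed(1)
n <- 50
X <- cbind(1, rnorm(n), rnorm(n))          # design matrix including an intercept column
y <- drop(X %*% c(1, 2, -1)) + rnorm(n)

fit <- lm(y ~ X - 1)
s   <- svd(X)
sigma2 <- sum(residuals(fit)^2) / (n - ncol(X))

vc <- sigma2 * s$v %*% diag(1 / s$d^2) %*% t(s$v)
all.equal(unname(vc), unname(vcov(fit)))   # should be TRUE
```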
null
CC BY-SA 2.5
null
2011-01-31T03:04:22.890
2011-01-31T03:04:22.890
null
null
2970
null
6733
1
6735
null
4
251
Say I have a list with $n$ elements (say number $1$'s) and I want to do 1000 random changes to them, such as a $+1$. If I (1) picked a random element, (2) changed it, and then (3) did these two steps (1)+(2) another 999 times, this would probably be completely random, right? Now, what if, whenever I change an element, I take it from the list and reinsert the changed element at the end of the list (i.e. this is the new step (2)) – would the changes also be completely random then? I have the feeling that it might decrease the probability for already changed elements to get changed again, but can't quite put my finger on the cause. Also, consider this variant: instead of inserting them in the end, I insert them in a random new location after the change. How would that influence the probability for picking any element? (PS. If someone knows good tags for this, please add them and delete this paragraph...)
Picking random elements from a list – still random if they are reinserted at the end?
CC BY-SA 2.5
null
2011-01-31T03:08:41.140
2011-01-31T04:16:40.627
null
null
2440
[ "probability" ]
6734
1
6739
null
10
4229
I have been reading the description of ridge regression in [Applied Linear Statistical Models](http://rads.stackoverflow.com/amzn/click/007310874X), 5th Ed chapter 11. The ridge regression is done on body fat data available [here](http://www.cst.cmich.edu/users/lee1c/spss/V16_materials/DataSets_v16/BodyFat-TxtFormat.txt). The textbook matches the output in SAS, where the back transformed coefficients are given in the fitted model as: $$ Y=-7.3978+0.5553X_1+0.3681X_2-0.1917X_3 $$ This is shown from SAS as: ``` proc reg data = ch7tab1a outest = temp outstb noprint; model y = x1-x3 / ridge = 0.02; run; quit; proc print data = temp; where _ridge_ = 0.02 and y = -1; var y intercept x1 x2 x3; run; Obs Y Intercept X1 X2 X3 2 -1 -7.40343 0.55535 0.36814 -0.19163 3 -1 0.00000 0.54633 0.37740 -0.13687 ``` But R gives very different coefficients: ``` data <- read.table("http://www.cst.cmich.edu/users/lee1c/spss/V16_materials/DataSets_v16/BodyFat-TxtFormat.txt", sep=" ", header=FALSE) data <- data[,c(1,3,5,7)] colnames(data)<-c("x1","x2","x3","y") ridge<-lm.ridge(y ~ ., data, lambda=0.02) ridge$coef coef(ridge) > ridge$coef x1 x2 x3 10.126984 -4.682273 -3.527010 > coef(ridge) x1 x2 x3 42.2181995 2.0683914 -0.9177207 -0.9921824 > ``` Can anyone help me understand why?
Difference between ridge regression implementation in R and SAS
CC BY-SA 3.0
null
2011-01-31T03:14:14.757
2014-07-09T23:37:13.357
2014-07-09T23:37:13.357
7290
2040
[ "r", "sas", "ridge-regression" ]
6735
2
null
6733
9
null
It's necessary to be a little bit careful about what you mean by "random" and "completely random" here. What you are describing, at least in your first scheme is a set of random draws such that the resulting vector is multinomially distributed with n = 1000 and probabilities $p_1 = p_2 = \cdots = p_n = 1/n$. Even in your first scheme, the resulting variables are not independent. Marginally, they do all have the same distribution and, intuitively (though I didn't check carefully), it seems that the vector of counts would have a property called exchangeability which, in some sense, is "almost independent and identically distributed". In the second scheme, where you move each new selection to the end of the list after incrementing it, the reordered variables are not even exchangeable any more, nor do they have the same marginal distribution. Intuitively, this is because the ones being moved to the end of the list will tend to have larger values than ones at the beginning of the list. This is related to, but not quite, what are called order statistics. I would have to think about the last scheme (i.e., reinserting in a random location) more carefully. On the surface, I believe you would essentially end up with the same distribution as in the first case. But, I've not checked that even remotely formally, so take that with a hefty dose of salt.
null
CC BY-SA 2.5
null
2011-01-31T03:24:39.737
2011-01-31T04:16:40.627
2011-01-31T04:16:40.627
2970
2970
null
6736
1
null
null
6
389
What properties do these measures have and how can I determine which one is better for a given purpose? What are extreme cases where they differ a lot?
Properties of Battacharyya distance vs Kullback-Leibler divergence
CC BY-SA 2.5
null
2011-01-31T05:39:54.900
2021-03-03T12:01:20.113
null
null
2440
[ "distributions", "probability", "kullback-leibler" ]
6737
2
null
2356
30
null
This is a "fleshed out" example given in a book written by Larry Wasserman [All of statistics](https://www.ic.unicamp.br/~wainer/cursos/1s2013/ml/livro.pdf) on Page 216 (12.8 Strengths and Weaknesses of Bayesian Inference). I basically provide what Wasserman doesn't in his book: 1) an explanation for what is actually happening, rather than a throw-away line; 2) the frequentist answer to the question, which Wasserman conveniently does not give; and 3) a demonstration that the equivalent confidence interval calculated using the same information suffers from the same problem. In this example, he states the following situation - An observation, X, with a Sampling distribution: $(X|\theta)\sim N(\theta,1)$ - Prior distribution of $(\theta)\sim N(0,1)$ (he actually uses a general $\tau^2$ for the variance, but his diagram specialises to $\tau^2=1$) He then goes on to show that a Bayesian 95% credible interval in this set-up eventually has 0% frequentist coverage when the true value of $\theta$ becomes arbitrarily large. For instance, he provides a graph of the coverage (p218), and checking by eye, when the true value of $\theta$ is 3, the coverage is about 35%. He then goes on to say: ...What should we conclude from all this? The important thing is to understand that frequentist and Bayesian methods are answering different questions. To combine prior beliefs with data in a principled way, use Bayesian inference. To construct procedures with guaranteed long run performance, such as confidence intervals, use frequentist methods... (p217) He then moves on without any dissection or explanation of why the Bayesian method apparently performed so badly. Further, he does not give an answer from the frequentist approach, just a broad-brush statement about "the long-run" - a classical political tactic (emphasise your strength + others' weakness, but never compare like for like). I will show how the problem as stated (with $\tau=1$) can be formulated in frequentist/orthodox terms, and then show that the result using confidence intervals gives precisely the same answer as the Bayesian one. Thus any defect in the Bayesian (real or perceived) is not corrected by using confidence intervals. Okay, so here goes. The first question I ask is what state of knowledge is described by the prior $\theta\sim N(0,1)$? If one was "ignorant" about $\theta$, then the appropriate way to express this is $p(\theta)\propto 1$. Now suppose that we were ignorant, and we observed $Y\sim N(\theta,1)$, independently of $X$. What would our posterior for $\theta$ be? $$p(\theta|Y)\propto p(\theta)p(Y|\theta)\propto exp\Big(-\frac{1}{2}(Y-\theta)^2\Big)$$ Thus $(\theta|Y)\sim N(Y,1)$. This means that the prior distribution given in Wasserman's example is equivalent to having observed an iid copy of $X$ equal to $0$. Frequentist methods cannot deal with a prior, but the problem can be thought of as having made 2 observations from the sampling distribution, one equal to $0$, and one equal to $X$. Both problems are entirely equivalent, and we can actually give the frequentist answer for the question. Because we are dealing with a normal distribution with known variance, the mean is a sufficient statistic for constructing a confidence interval for $\theta$. 
The mean is equal to $\overline{x}=\frac{0+X}{2}=\frac{X}{2}$ and has a sampling distribution $$(\overline{x}|\theta)\sim N(\theta,\frac{1}{2})$$ Thus a $(1-\alpha)\text{%}$ CI is given by: $$\frac{1}{2}X\pm Z_{\alpha/2}\frac{1}{\sqrt{2}}$$ But, using the results of example 12.8 from Wasserman, he shows that the posterior $(1-\alpha)\text{%}$ credible interval for $\theta$ is given by: $$cX\pm \sqrt{c}Z_{\alpha/2}$$ where $c=\frac{\tau^{2}}{1+\tau^{2}}$. Thus, plugging in the value at $\tau^{2}=1$ gives $c=\frac{1}{2}$ and the credible interval becomes: $$\frac{1}{2}X\pm Z_{\alpha/2}\frac{1}{\sqrt{2}}$$ This is exactly the same as the confidence interval! So any defect in the coverage exhibited by the Bayesian method is not corrected by using the frequentist confidence interval! [If the frequentist chooses to ignore the prior, then to be a fair comparison, the Bayesian should also ignore this prior, and use the ignorance prior $p(\theta)\propto 1$, and the two intervals will still be equal - both $X \pm Z_{\alpha/2}$]. So what the hell is going on here? The problem is basically one of non-robustness of the normal sampling distribution, because the problem is equivalent to having already observed an iid copy, $X=0$. If you have observed $0$, then this is extremely unlikely to have occurred if the true value is $\theta=4$ (probability that $X\leq 0$ when $\theta=4$ is 0.000032). This explains why the coverage is so bad for large "true values", because they effectively make the implicit observation contained in the prior an outlier. In fact you can show that this example is basically equivalent to showing that the arithmetic mean has an unbounded influence function. Generalisation. Now some people may say "but you only considered $\tau=1$, which may be a special case". This is not true: any value of $\tau^2=\frac{1}{N}$ $(N=0,1,2,3,\dots)$ can be interpreted as observing $N$ iid copies of $X$ which were all equal to $0$, in addition to the $X$ of the question. The confidence interval will have the same "bad" coverage properties for large $\theta$. But this becomes increasingly unlikely if you keep observing values of $0$ (and no rational person would continue to worry about large $\theta$ when you keep seeing $0$).
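A quick simulation sketch of the coverage point (the true value 3 and the 95% level are just the ones discussed above): the interval $\frac{1}{2}X \pm Z_{0.025}/\sqrt{2}$ covers $\theta = 3$ far less often than 95% of the time.
```
set.seed(1)
theta <- 3
X  <- rnorm(1e5, mean = theta)          # one observation per replication
lo <- X/2 - qnorm(0.975)/sqrt(2)        # the interval derived above
hi <- X/2 + qnorm(0.975)/sqrt(2)
mean(lo <= theta & theta <= hi)         # roughly 0.4, well below 0.95
```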
null
CC BY-SA 4.0
null
2011-01-31T06:44:48.373
2018-08-21T11:24:31.280
2018-08-21T11:24:31.280
53690
2392
null
6738
2
null
6736
1
null
First properties are explained competently [here](http://en.wikipedia.org/wiki/Kullback-Leibler_divergence) and [there](http://en.wikipedia.org/wiki/Bhattacharyya_distance). Which one is better suited to a given purpose will depend on said given purpose so you might think about rephrasing this part of your question.
null
CC BY-SA 2.5
null
2011-01-31T07:43:50.690
2011-01-31T07:43:50.690
null
null
2592
null
6739
2
null
6734
6
null
Though ridge regression looks [at first like a simple algorithm](http://en.wikipedia.org/wiki/Ridge_regression), the devil is in the details. Apparently the original variables are scaled, and the parameter $\lambda$ is not the parameter you would think it is, given the original description. From what I gathered reading [the reference](http://books.google.com/books?id=zWpegyjze34C&lpg=PP1&dq=Measurement%2C%20Regression%20and%20Calibration&hl=fr&pg=PA51#v=onepage&q&f=false) given in the R help page of `lm.ridge`, there is no single agreed way of doing ridge regression. So the difference in results can only be explained by the different algorithms used by R and SAS. Hopefully someone more knowledgeable can give a more detailed answer. You can see what kind of algorithm R applies by looking at the source of `lm.ridge`: just type `lm.ridge` at the R prompt.
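To illustrate why the scaling matters (this is only a sketch of the "textbook" ridge estimator, not a reconstruction of what either lm.ridge or SAS actually does internally): the penalty $\lambda$ acts on whatever scale the predictors happen to have, so two programs that standardize differently (divisor $n$ versus $n-1$, whether the response is centred, how coefficients are back-transformed) will generally disagree for the same nominal $\lambda$.
```
# Hypothetical helper for illustration only
ridge_coef <- function(X, y, lambda) {
  Z  <- scale(X)                        # centre and scale the predictors
  yc <- y - mean(y)
  b  <- solve(crossprod(Z) + diag(lambda, ncol(Z)), crossprod(Z, yc))
  drop(b / attr(Z, "scaled:scale"))     # back-transform slopes to the original scale
}
```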
null
CC BY-SA 2.5
null
2011-01-31T08:18:17.773
2011-01-31T08:18:17.773
null
null
2116
null
6740
1
6742
null
6
2921
I am reading the second edition of Categorical Data Analysis by Alan Agresti, and somehow stuck in the following second paragraph: ![enter image description here](https://i.stack.imgur.com/ahQDT.png) I don't quite understand why $\beta\pi(\hat{x})(1 - \pi(\hat{x}))$ will give the probability when $x = 26.3$, can anyone enlighten me? Thanks.
Alternative ways for interpretation of odds
CC BY-SA 2.5
null
2011-01-31T08:43:05.747
2011-01-31T14:50:46.843
2011-01-31T14:50:46.843
930
588
[ "logistic" ]
6742
2
null
6740
6
null
The answer is near the bottom of [p166](http://books.google.co.uk/books?id=hpEzw4T0sPUC&pg=PA166). It's using a linear approximation (what social scientists would call a 'marginal effect'). A small change $\delta x$ in $x$ gives a change in probability of: $$\delta\pi \approx \frac{\partial \pi(x)}{\partial x} \delta x.$$ With $\operatorname{logit}(\pi(x)) = \alpha + \beta x$, it's straightforward to show that $ \partial \pi(x) / \partial x = \beta \pi(x)(1-\pi(x))$.
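A quick numerical check with made-up coefficients (these are not the values from Agresti's example): the analytic expression $\beta\pi(x)(1-\pi(x))$ agrees with a finite-difference derivative of the fitted probability.
```
alpha <- -12; beta <- 0.5; x <- 26.3     # hypothetical values, for illustration only
p <- plogis(alpha + beta * x)            # pi(x) under the logit model

beta * p * (1 - p)                                  # analytic marginal effect
(plogis(alpha + beta * (x + 1e-4)) - p) / 1e-4      # finite-difference check
```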
null
CC BY-SA 2.5
null
2011-01-31T11:19:42.163
2011-01-31T11:27:32.697
2011-01-31T11:27:32.697
449
449
null
6744
1
6750
null
4
794
I have no math or stats background, so this may be a real dumbo question. I'm developing a time estimation app which would let user enter her worst case and best case estimates, and then select a probability curve which she feels looks realistic. I needed to draw some kind of curve that sums up to 1, so I came up with a normal distribution. I'm not sure that it captures how we give time estimates but this is not what question is about. I want user to be able to move finger on the chart, so the curve would bend accordingly. I assume it would feel nice if curve's peak followed the finger, but then I have the following question: is it possible to get normal distribution curve by arbitrary peak point? If not, which curve is good for this? UPDATE: I see that I can obtain μ from x location. Can I get σ for an arbitrary y though (combined with known μ)?
Get distribution curve characteristics by selecting peak point?
CC BY-SA 2.5
null
2011-01-31T13:12:42.683
2011-01-31T15:35:37.323
2011-01-31T13:25:01.897
2989
2989
[ "distributions", "estimation", "normal-distribution" ]
6746
1
null
null
11
605
I'm working with a two-state process with $x_t$ in $\{1, -1\}$ for $t = 1, 2, \ldots$ The autocorrelation function is indicative of a process with long-memory, i.e. it displays a power law decay with an exponent < 1. You can simulate a similar series in R with: ``` > library(fArma) > x<-fgnSim(10000,H=0.8) > x<-sign(x) > acf(x) ``` My question: is there a canonical way to optimally predict the next value in the series given just the autocorrelation function? One way to predict is simply to use $\hat{x}(t) = x(t-1)$ which has a classification rate of $(1 + \rho_1) / 2$, where $\rho$ is the lag-1 autocorrelation, but I feel like it must be possible to do better by taking into account the long-memory structure.
Predicting long-memory processes
CC BY-SA 2.5
null
2011-01-31T14:03:05.300
2019-10-31T10:59:54.217
2019-10-31T10:59:54.217
11887
2425
[ "time-series", "predictive-models", "autocorrelation", "long-range-dependence" ]
6747
1
6749
null
5
700
I want to measure the correlation between two 1D point processes $x$ and $y$. Ordinarily I could use the bivariate K-function $K(t) = \frac{T}{n_xn_y} \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} w(x_i,y_j) I[d(x_i,y_j)<t]$ where $n_x$ is the number of observations in $x$ and $n_y$ is the number of observations in $y$. Deviation from $K(t)=t$ is an indication of correlation between the two point processes. However, my analysis is complicated by the fact that the unconditional distributions of the point processes are non-uniform. They each have a characteristic distribution of intervals between observations, which could be different for $x$ and $y$. To make things more complicated, the intensities vary over the time period (for example, the intensity might be low around midnight and higher during daylight hours). Assuming I have enough data to get good estimates of the unconditional distributions of $x$ and $y$, and of the way that intensities vary across time, is there a way to take this into account?
Correlation of two 1d point processes with non-uniform unconditional distribution
CC BY-SA 2.5
null
2011-01-31T14:10:18.723
2011-02-01T12:09:07.593
2011-01-31T15:14:46.450
919
2425
[ "correlation", "cross-correlation", "point-process" ]
6748
2
null
6705
3
null
Although not within the domain of geostatistics, for question #2, I would casually say the most frequent weighting function used in my field (Criminology) would be a binary weighting scheme. Although I have rarely seen a good theoretical or empirical argument for using one weighting scheme over another (or for how one defines a neighbor in a binary weighting scheme either). It may simply be because of historical preference and convenience that such a scheme is typically used. There is a distinction that should be drawn between data-driven approaches to constructing spatial weights and the theory-based approach to deriving spatial weights. You are currently performing the former, and in this approach you are implicitly treating the estimation of spatial weights as a problem of measurement error, and hence should use techniques to validate your measurements (which is considerably complicated due to the endogeneity of the spatial weights). Using a weighting scheme based on some of the chance variation in the data and using it in subsequent causal models is synonymous with other fallacies related to inference and data snooping. Unfortunately I have no good references of spatial weight models validated in any meaningful way besides the extent of the auto-correlation, which to be frank isn't all that convincing of an empirical argument. Spatial dependence can be the result of either causal processes (i.e. the value at one point in space affects the value at another point in space), or it can be the result of other measurement errors (i.e. the measured support of the data does not match the support of the processes that generate those phenomena). This is opposed to theory-based construction of spatial weights (or "model-driven" in Luc Anselin's terminology), in which one specifies the weight matrix a priori, before estimating a model. I did not read the Fauchald paper you cited, but it appears in the abstract they have plausible theoretical explanations for the observed patterns based on some optimal foraging strategy. For readings I would suggest Luc Anselin's book, [Spatial Econometrics: Methods and Models](http://books.google.com/books?id=3dPIXClv4YYC&dq=anselin&source=gbs_navlinks_s) (1988); chapters 2 and 3 in particular will be of most interest. Another work with a similar viewpoint to mine (although it will likely be of less interest) is an essay by Gary King, "[Why context should not count](http://dx.doi.org/10.1016/0962-6298%2895%2900079-8)". I would also suggest another paper as it appears they had similar goals to yours, and defined the weights for a lattice system based on variogram estimates ([Negreiros, 2010](http://dx.doi.org/10.1080/17538940903253898)).
null
CC BY-SA 2.5
null
2011-01-31T14:37:55.117
2011-01-31T14:43:16.333
2011-01-31T14:43:16.333
1036
1036
null
6749
2
null
6747
4
null
Yes. The key is that how you measure correlation does not change, but the expected value does depend on the hypothesized underlying process. $K$ (and cross-$K$, which is what you are looking at) is defined generally for inhomogeneous point processes. For a good discussion with examples see [Philip Dixon's analysis](http://www.stat.iastate.edu/preprint/articles/2001-18.pdf). Because all his examples are 2D, the specific formulas for expectations do not apply directly to your 1D data, but all the general ideas do apply. Note that with non-uniform distributions the definition (and computation) of $K$ do not change, but the expectation does change. With non-stationary distributions you also have to take care to distinguish the two forms of cross-K between your two series of data.
null
CC BY-SA 2.5
null
2011-01-31T15:14:21.927
2011-01-31T15:14:21.927
null
null
919
null
6750
2
null
6744
5
null
Because normal distributions are symmetric, and it's natural to think of "best" and "worst" cases symmetrically (e.g., the chance of the best case is no greater than 5% and the chance of the worst case is no less than the same percent), they are not likely to work well in this application. In effect you have three degrees of freedom to specify a distribution. Pick any family that requires three parameters and is suited to distributions of "time". If you are estimating time to complete a task, people have often used [scaled Beta distributions](http://en.wikipedia.org/wiki/Beta_distribution) for this purpose. Another good choice is the [Weibull distribution](http://en.wikipedia.org/wiki/Weibull_distribution), which often arises in such applications, and possibly the [Gamma distribution](http://en.wikipedia.org/wiki/Gamma_distribution). The latter two require only two parameters, but many people have introduced a third location parameter that shifts the distribution (usually to the right), so-called "three-parameter Weibull" and "three-parameter Gamma". You can also embed both these distributions into a larger three-parameter family that includes the normal, called a "[generalized Gamma](http://www.weibull.com/LifeDataWeb/generalized_gamma_distribution.htm)". Typically some latitude is allowed for the time to exceed the "worst" case, so you would interpret the worst case as, say, somewhere between the 80th and 99th percentile of the distribution rather than its maximum (and similarly, mutatis mutandis, for the best case). Thus, whatever family you pick, you need to be able to find its usual parameters (such as $\alpha$, $\beta$, and the scale factor for the scaled Beta distribution), given the values of a lower percentile (the best case), an upper percentile (the worst case), and the mode. This can always be done (provided the percentiles and modes can be modeled by the chosen distributional family at all), but the amount of coding work--and the details of the mathematics--will vary with the distributional family. Because this can challenge one's numerical programming skills, many applications use approximations, often gross ones (given the qualitative nature of the elicited information). A common one occurs in [task duration modeling](http://en.wikipedia.org/wiki/Beta_distribution#Task_duration_modeling), where scaled Beta distributions are estimated by means of formulas that are not exactly right but are extremely simple (they are linear combinations of the three values).
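As one concrete instance of the simple approximations mentioned at the end, here is a sketch of the common PERT parameterisation of a scaled Beta (here the "best" and "worst" cases are treated as the endpoints of the support rather than as percentiles, and the function name is my own):
```
# Beta-PERT shape parameters for minimum a, mode m, maximum b
pert_params <- function(a, m, b, lambda = 4) {
  c(alpha = 1 + lambda * (m - a) / (b - a),
    beta  = 1 + lambda * (b - m) / (b - a))
}

# Example: best case 2, most likely 3, worst case 8 (say, days)
p <- pert_params(2, 3, 8)
curve(dbeta((x - 2) / (8 - 2), p["alpha"], p["beta"]) / (8 - 2),
      from = 2, to = 8, xlab = "time", ylab = "density")
```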
null
CC BY-SA 2.5
null
2011-01-31T15:34:01.507
2011-01-31T15:34:01.507
null
null
919
null
6751
2
null
6744
2
null
The normal curve is not going to work for you: - it is not constrained by a minimum/maximum - it is symmetric around the peak - it has a very restrictive shape There are loads of other statistical distributions that would probably fit your needs better. I would especially look at distributions confined to the 0-1 interval (and then rescaled to your min-max interval) such as the [Beta distribution](http://en.wikipedia.org/wiki/Beta_distribution). The "peak" of the distribution is called "mode" in stat-speak. The formula for the mode is quite straightforward for the Beta. Note, however, that this distribution has two parameters, and you probably want to allow some way of changing both.
null
CC BY-SA 2.5
null
2011-01-31T15:35:37.323
2011-01-31T15:35:37.323
null
null
279
null
6752
2
null
6723
5
null
For the second part, you must interpret the output as the steps towards your final model. For example, in the forward case you begin with `Start: AIC=377.95` for the constant-only model `cars$MidrangePrice ~ 1`:
```
                  Df Sum of Sq    RSS    AIC
+ cars$Horsepower  1    4979.3 3054.9 300.66
+ cars$Wheelbase   1    3172.3 4862.0 338.76
+ cars$Length      1    2448.8 5585.4 350.14
+ cars$Width       1    1969.2 6065.0 356.89
+ cars$Uturn       1    1450.2 6584.0 363.63
+ cars$Luggage     1    1079.6 6954.7 368.12
<none>                          8034.2 377.95
```
Your current model is only considering the constant `cars$MidrangePrice ~ 1`. Each row in the table indicates that, if you add that variable (for example, Horsepower), you will get the shown values of `Sum of Sq`, `RSS` (Residual Sum of Squares) and `AIC` (Akaike Information Criterion). In the other cases you must read the output in the same way. Hope this helps :)
null
CC BY-SA 2.5
null
2011-01-31T15:40:37.687
2011-01-31T15:40:37.687
null
null
2902
null
6753
1
32356
null
6
6630
... instead of e.g. the popular Equal-Width-Histograms. Additional question: What is a good/robust rule of the thumb to calculate the number of bins for equal frequency histograms (like the [Freedmann-Diaconis-Rule](https://stats.stackexchange.com/questions/798/calculating-optimal-number-of-bins-in-a-histogram-for-n-where-n-ranges-from-30-3/862#862) for equal-width).
When to use Equal-Frequency-Histograms
CC BY-SA 2.5
null
2011-01-31T16:41:29.663
2012-07-19T15:38:19.587
2017-04-13T12:44:29.013
-1
264
[ "histogram", "rule-of-thumb" ]
6754
1
null
null
1
175
I am a beginner in statistics, and I want to learn machine learning :). Therefore, I have gathered some sample data to practice. But, the problem is I want to create a feature (or attribute), which is common for every entity. But, this feature is observed and measured for entity X in Y meters and Z seconds, and for entity W in M meters and L seconds, and there are many observations like described before. Which algorithm should I apply to create a common feature for all entities? BTW, I am using WEKA.
What should I do to compare different sets of data?
CC BY-SA 2.5
null
2011-01-31T16:46:08.220
2011-01-31T18:21:57.650
2011-01-31T18:21:57.650
930
2170
[ "multiple-comparisons", "feature-selection", "normalization", "weka" ]
6755
1
6757
null
7
2767
The data files from [http://www.csie.ntu.edu.tw/~cjlin/libsvm/](http://www.csie.ntu.edu.tw/~cjlin/libsvm/) are in 'svm' format. I am trying to read this in to sparse matrix representation in R. Is there an easy/efficient way to do this? Here is what I am doing now: read in file line by line (800,000 lines), for each line separate classes, values, and cols. Store the classes as a list and the features as a .csr sparse matrix (1 row), then rbind the feature row with all previous rows. This is terribly inefficient and basically won't finish (12 minutes for 1000 lines). I think it comes from rbinding the sparse matrices once the number of rows starts to get large. Note: the matrix (800000*48000) is too big to build and then convert to sparse format. Thanks!
Reading in SVM files in R (libsvm)
CC BY-SA 2.5
null
2011-01-31T17:46:43.547
2011-02-01T18:58:10.837
2011-01-31T18:40:30.193
2310
2310
[ "svm", "matrix", "dataset" ]
6757
2
null
6755
4
null
The `e1071` package has a means for exporting to the libsvm "svm" format in [the write.svm function](http://ugrad.stat.ubc.ca/R/library/e1071/html/write.svm.html). But to the best of my knowledge, there is no `read.svm` function.
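In the absence of a ready-made reader, one way to avoid the repeated `rbind` is to parse all lines first and then build the sparse matrix in a single call to `Matrix::sparseMatrix`. This is only a sketch (the function name is my own) and it assumes a well-formed file: one label per line, followed by `index:value` pairs separated by single spaces.
```
library(Matrix)

read_libsvm <- function(path) {
  lines  <- readLines(path)
  tokens <- strsplit(lines, " ", fixed = TRUE)
  y      <- as.numeric(vapply(tokens, `[`, character(1), 1))  # first token = label
  feats  <- lapply(tokens, `[`, -1)                           # remaining "index:value" pairs
  ni     <- vapply(feats, length, integer(1))
  iv     <- do.call(rbind, strsplit(unlist(feats), ":", fixed = TRUE))
  X <- sparseMatrix(i = rep(seq_along(lines), ni),            # row index per feature
                    j = as.integer(iv[, 1]),                  # column index
                    x = as.numeric(iv[, 2]))                  # value
  list(y = y, X = X)
}
```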
null
CC BY-SA 2.5
null
2011-01-31T19:01:59.317
2011-01-31T19:01:59.317
null
null
5
null
6758
1
6775
null
15
18173
I've fit a model with several independent variables, one of which is the lag of the dependent variable, using the dynlm package. Assuming I have 1-step-ahead forecasts for my independent variables, how do I get 1-step-ahead forecasts for my dependent variable? Here is an example: ```
library(dynlm)
y<-arima.sim(model=list(ar=c(.9)),n=10) #Create AR(1) dependent variable
A<-rnorm(10) #Create independent variables
B<-rnorm(10)
C<-rnorm(10)
y<-y+.5*A+.2*B-.3*C #Add relationship to independent variables
data=cbind(y,A,B,C)

#Fit linear model
model<-dynlm(y~A+B+C+L(y,1),data=data)

#Forecast
A<-c(A,rnorm(1)) #Assume we already have 1-step forecasts for A,B,C
B<-c(B,rnorm(1))
C<-c(C,rnorm(1))
y=window(y,end=end(y)+c(1,0),extend=TRUE)
newdata<-cbind(y,A,B,C)
predict(model,newdata)
``` And here is an example using the dyn package, which works. ```
library(dyn)

#Fit linear model
model<-dyn$lm(y~A+B+C+lag(y,-1),data=data)

#Forecast
predict(model,newdata)
```
1-step-ahead predictions with dynlm R package
CC BY-SA 2.5
null
2011-01-31T19:58:00.220
2020-04-15T18:09:19.697
2011-01-31T23:33:00.067
2817
2817
[ "r", "dynamic-regression" ]
6759
1
6800
null
71
472242
How can I remove duplicate rows from this example data frame? ``` A 1 A 1 A 2 B 4 B 1 B 1 C 2 C 2 ``` I would like to remove the duplicates based on both the columns: ``` A 1 A 2 B 4 B 1 C 2 ``` Order is not important.
Removing duplicated rows data frame in R
CC BY-SA 3.0
null
2011-01-31T19:58:19.363
2015-08-30T20:20:35.677
2015-08-30T20:20:35.677
62183
2725
[ "r" ]
6760
2
null
6759
51
null
You are looking for `unique()`. ``` a <- c(rep("A", 3), rep("B", 3), rep("C",2)) b <- c(1,1,2,4,1,1,2,2) df <-data.frame(a,b) unique(df) > unique(df) a b 1 A 1 3 A 2 4 B 4 5 B 1 7 C 2 ```
null
CC BY-SA 2.5
null
2011-01-31T20:22:27.047
2011-01-31T20:22:27.047
null
null
307
null
6761
2
null
6544
2
null
It's clear the solution I posted previously is inadequate and inelegant. Here is my second attempt, which 100% solves my problem. Please let me know if you spot any bugs! I will cross post to stack overflow, if you all think that would be a better place to get comments on my code. ``` #A function to iteratively predict a time series ipredict <-function(model, newdata, interval = "none", level = 0.95, na.action = na.pass, weights = 1) { P<-predict(model,newdata=newdata,interval=interval, level=level,na.action=na.action,weights=weights) for (i in seq(1,dim(newdata)[1])) { if (is.na(newdata[i])) { if (interval=="none") { P[i]<-predict(model,newdata=newdata,interval=interval, level=level,na.action=na.action,weights=weights)[i] newdata[i]<-P[i] } else{ P[i,]<-predict(model,newdata=newdata,interval=interval, level=level,na.action=na.action,weights=weights)[i,] newdata[i]<-P[i,1] } } } P_end<-end(P)[1]*frequency(P)+(end(P)[2]-1) #Convert (time,period) to decimal time P<-window(P,end=P_end-1*frequency(P)) #Drop last observation, which is NA return(P) } #Example usage: library(dyn) y<-arima.sim(model=list(ar=c(.9)),n=10) #Create AR(1) dependant variable A<-rnorm(10) #Create independant variables B<-rnorm(10) C<-rnorm(10) Error<-rnorm(10) y<-y+.5*A+.2*B-.3*C+.1*Error #Add relationship to independant variables data=cbind(y,A,B,C) #Fit linear model model.dyn<-dyn$lm(y~A+B+C+lag(y,-1),data=data) summary(model.dyn) #Forecast linear model A<-c(A,rnorm(5)) B<-c(B,rnorm(5)) C<-c(C,rnorm(5)) y=window(y,end=end(y)+c(5,0),extend=TRUE) newdata<-cbind(y,A,B,C) P1<-ipredict(model.dyn,newdata) P2<-ipredict(model.dyn,newdata,interval="prediction") #Plot plot(y) lines(P1,col=2) ```
null
CC BY-SA 2.5
null
2011-01-31T21:04:56.823
2011-01-31T21:04:56.823
null
null
2817
null
6762
1
6766
null
4
805
We have n actors. Each actor chooses from n*2 actions. How can I calculate the probability that at least one actor will choose a unique outcome? For example, say we have 5 pickup artists in a town with 10 bars. Each PUA chooses a bar at random. What are the odds that at least one PUA will have a bar to himself/herself? This is relevant to some work I'm doing on scheduling in a distributed system. I know that the relevant equation should probably start with $\binom{n + 2n -1}{n}$, but then what?
Probability of at least one unique outcome
CC BY-SA 2.5
0
2011-01-31T22:49:27.690
2016-08-25T20:44:44.043
2016-08-25T20:44:44.043
101426
2995
[ "probability", "combinatorics" ]
6763
1
null
null
2
490
I have 80 chemical batches, each of which has 8 associated measurements like pH, viscosity, etc. Is there a way to use Principal Components Analysis or Factor Analysis to group the similar batches together? I know I can use cluster analysis, but I want a different visual representation of the batch groups than the dendrogram. I'd like to have a biplot. I tried to perform a PCA, but got an error message that there cannot be more variables than observations. My matrix has the 80 batches as the column names, and the 8 measurements as the row names. When this matrix is transposed, PCA just groups the like measurements, not the batches.
Statistically group chemical batches
CC BY-SA 2.5
null
2011-01-31T23:06:13.850
2011-05-02T13:52:06.720
2011-02-01T00:14:02.047
null
2473
[ "pca", "factor-analysis" ]
6766
2
null
6762
5
null
This is not a simple calculation in general, though the probability will be almost 1 if $n$ is of any substantial size. To take your example of five people and 10 bars, there are $10^5$, i.e. 100,000 possible equally-likely distributions. All of these will have at least one person alone, except the 10 cases where all five are in a single bar or the 900 cases where three are in a particular bar and two in another. That leaves 99090 distributions which satisfy your condition, more than 99%. Note that 99090 is not divisible by $\binom{5 + 10 - 1}{5} = 2002$. For $n=1$ to $6$, the fractions seem to be $\frac{2}{2},$ $\frac{12}{16},$ $\frac{210}{216},$ $\frac{3920}{4096},$ $\frac{99090}{100000}$ and $\frac{2962872}{2985984}$, where the denominator in each case is $(2n)^n$. I would expect the probabilities then to keep rising as $n$ increases beyond 6.
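A quick Monte Carlo sketch of the $n=5$ case (people and bars labelled arbitrarily) agrees with the exact count of 99090/100000:
```
set.seed(1)
alone <- replicate(1e5, {
  bars <- sample.int(10, 5, replace = TRUE)  # each person picks a bar at random
  any(table(bars) == 1)                      # is at least one person alone?
})
mean(alone)                                  # close to 0.9909
```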
null
CC BY-SA 2.5
null
2011-02-01T00:44:41.830
2011-02-01T14:56:57.243
2011-02-01T14:56:57.243
919
2958
null
6767
2
null
6763
0
null
You need to be more specific about what you mean by "similar", and provide some type of example. Maybe the following will help as an example of 8 batches with 3 measurements. Feel free to change it if you think it will help. ``` texinp <- " Batch M1 M2 M3 1 0.01 14 -123 2 0.03 23 30 3 0.02 12 23 4 0.01 18 -107 5 0.04 49 110 6 0.01 8 -101 7 0.02 11 11 8 0.01 14 -115" #Read the data into a data frame df <- read.table(textConnection(texinp), header=TRUE) df #Plot some stuff plot(df$M1, df$M2) plot(df$M2, df$M3) plot(df$M1, df$M3) ``` If you're not using R, show the example in whatever you're using.
null
CC BY-SA 2.5
null
2011-02-01T00:49:31.910
2011-02-01T02:04:59.757
2011-02-01T02:04:59.757
2775
2775
null
6768
2
null
6763
1
null
You'll be hard-pressed to show an 8-D representation of sets of similar batches using anything but a dreary table. But, along the lines of Bill's point I think, if you're willing to select the 3 most interesting or most discriminating dimensions, you could show where each batch falls within a cube defined by those 3. Perhaps better is to first do a cluster analysis and then show where each cluster falls within such a cube. Some software (such as SPSS) will allow you to assign each batch a color or a symbol according to its cluster, and you could draw spikes from each point to its cluster's centroid to create a nice, vivid effect.
null
CC BY-SA 2.5
null
2011-02-01T01:03:26.380
2011-02-01T01:03:26.380
null
null
2669
null
6770
1
7149
null
5
6478
I'm writing a program for an assignment for my Probability and Statistics course. This program performs several millions of simulated coin tosses, and then analyses various properties of the generated sequence. One of the things it does is count how many runs does the generated sequence contain, and their lengths. One result for 10.000.000 tosses is: Number of runs of length 1 is: 2500986 Number of runs of length 2 is: 1246647 Number of runs of length 3 is: 625656 Number of runs of length 4 is: 311689 Number of runs of length 5 is: 156673 Number of runs of length 6 is: 78464 Number of runs of length 7 is: 39253 Number of runs of length 8 is: 19547 Number of runs of length 9 is: 9866 Number of runs of length 10 or more is: 9818 Now, my problem is following: I need to compare the result acquired by simulation to theoretical expectation of sorts. Basically, I need that same result as above, calculated on paper. A pattern in the results obviously exists, but I just cant figure out a formula. Anyone, help please?
Expected number of runs of length n in coin toss
CC BY-SA 3.0
null
2011-02-01T01:51:58.023
2020-09-06T18:49:39.783
2013-08-28T19:32:19.017
27581
2996
[ "self-study", "binomial-distribution", "simulation" ]
6771
2
null
6728
11
null
As I mentioned in my earlier post to you, I'm working on getting my head around fitting a distribution to count data also. Here's among what I've learned: When the variance is greater than the mean, overdispersion is evident and thus the negative binomial distribution is likely appropriate. If the variance and mean are the same, the Poisson distribution is suggested, and when the variance is less than the mean, it's the binomial distribution that's recommended. With the count data you're working on, you're using the "ecological" parameterization of the Negative Binomial function in R. Section 4.5.1.3 (Page 165) of the following freely-available book speaks to this specifically (in the context of R, no less!) and, I hope, might address some of your questions: [http://www.math.mcmaster.ca/~bolker/emdbook/book.pdf](http://www.math.mcmaster.ca/~bolker/emdbook/book.pdf) If you come to conclude that your data are zero-truncated (i.e., the probability of 0 observations is 0), then you might want to check out the zero-truncated flavor of the NBD that's in the R [VGAM package](http://cran.r-project.org/web/packages/VGAM/index.html). Here's an example of its application: ``` library(VGAM) someCounts = data.frame(n = c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16), freq = c(182479,76986,44859,24315,16487,15308,5736, 2843,1370,1115,1127,49,100,490,106,2)) fit = vglm(n ~ 1, posnegbinomial, control = vglm.control(maxit = 1000), weights=freq, data=someCounts) Coef(fit) pdf2 = dposnegbin(x=with(someCounts, n), munb=0.8344248, size=0.4086801) print( with(someCounts, cbind(n, freq, fitted=pdf2*sum(freq))), dig=9) ``` I hope this is helpful.
null
CC BY-SA 3.0
null
2011-02-01T04:54:02.557
2012-10-07T23:25:50.940
2012-10-07T23:25:50.940
7290
2730
null
6772
1
8644
null
7
1586
Not sure if you can help me but here's the problem. I have two binomial proportions A and B (95% CI) - $A = 2\% \pm 0.2\%,\quad B = 3\% \pm 0.2\%.$ In other words, B's proportion is 50% higher than A's. Business people tend to prefer speaking about this difference in terms of a percentage increase. How do I go about calculating a confidence interval for the 50% increase of B's proportion over A? I've done some research into Fieller’s formula, which is said to provide an exact percent effect. However, besides not understanding Fieller's theorem, I don't know how I would go about calculating this in R. Can anyone help? Edit: E.g., if the CI for those intervals is 95%, the CI for the 50% increase = 98.52%.
How to compute Confidence Interval associated to a Binomial proportion's increase?
CC BY-SA 2.5
null
2011-02-01T05:58:26.000
2011-03-22T22:11:13.873
2011-03-22T22:11:13.873
930
2595
[ "r", "confidence-interval", "binomial-distribution", "proportion" ]
6773
1
6872
null
6
1386
I have a logistic regression model in GeoBugs to estimate predictors of prevalence of a disease. Can anyone tell me if there is a simple way to determine the lower and upper bounds of phi for spatial.exp. This is part of the model: ``` Infected[i] ~ dbin(p[i], tested[i]) logit(p[i]) <- alpha + beta1*covar1[i] + beta2*covar2[i] + u[i] mu[i] <- 0 u[1:N] ~ spatial.exp(mu[], x[], y[], tau, phi, 1) ``` This how I have specified the priors: ``` phi ~ dunif(0.25, 20) ``` At the moment my selection has been arbitrary. The GeoBugs manual only goes as far as saying that the bounds of the distribution are determined by the data. What I would like to know is there a dummies version of the best way to select the values.
Geostatistical analysis using spatial.exp in WinBugs
CC BY-SA 2.5
null
2011-02-01T06:35:55.837
2011-02-04T04:27:12.527
2011-02-04T04:23:15.950
1834
1834
[ "bayesian", "logistic", "spatial", "bugs" ]
6774
2
null
6763
0
null
I'm not entirely sure, but I think that if you have more variables than observations, Partial Least Squares (PLS) regression is the right tool for you; it's commonly used by chemometricians. As you know this site is R-biased and I use R too, so I'll recommend taking a look at the pls package, which can also do principal component regression and which has a biplot.mvr function (an S3 method for biplot) to make a biplot. A good starting point is [http://mevik.net/work/software/pls.html](http://mevik.net/work/software/pls.html), the website of the package's author, which has some useful links such as the Journal of Statistical Software article about the R package pls.
null
CC BY-SA 2.5
null
2011-02-01T07:30:18.610
2011-02-01T07:55:29.913
2011-02-01T07:55:29.913
2028
2028
null
6775
2
null
6758
13
null
Congratulations, you have found a bug. Prediction for `dynlm` with new data is broken if lagged variables are used. To see why look at the output of ``` predict(model) predict(model,newdata=data) ``` The results should be the same, but they are not. Without `newdata` argument, the `predict` function basically grabs `model` element from the `dynlm` output. With `newdata` argument `predict` tries to form new model matrix from `newdata`. Since this involves parsing formula supplied to `dynlm` and the formula has function `L`, which is defined only internaly in function `dynlm`, the incorrect model matrix is formed. If you try to debug, you will see, that the lagged dependent variable is not being lagged in the case of `newdata` argument is supplied. What you can do is to lag the dependent variable and include it in the `newdata`. Here is the code illustrating this approach. I use `set.seed` so it would be easily reproducible. ``` library(dynlm) set.seed(1) y<-arima.sim(model=list(ar=c(.9)),n=10) #Create AR(1) dependant variable A<-rnorm(10) #Create independant variables B<-rnorm(10) C<-rnorm(10) y<-y+.5*A+.2*B-.3*C #Add relationship to independant variables data=cbind(y,A,B,C) #Fit linear model model<-dynlm(y~A+B+C+L(y,1),data=data) ``` Here is the buggy behaviour: ``` > predict(model) 2 3 4 5 6 7 8 9 10 3.500667 2.411196 2.627915 2.813815 2.468595 1.733852 2.114553 1.423225 1.470738 > predict(model,newdata=data) 1 2 3 4 5 6 7 8 9 10 2.1628335 3.7063579 2.9781417 2.1374301 3.2582376 1.9534558 1.3670995 2.4547626 0.8448223 1.8762437 ``` Form the `newdata` ``` #Forecast fix. A<-c(A,rnorm(1)) #Assume we already have 1-step forecasts for A,B,C B<-c(B,rnorm(1)) C<-c(C,rnorm(1)) newdata<-ts(cbind(A,B,C),start=start(y),freq=frequency(y)) newdata<-cbind(lag(y,-1),newdata) colnames(newdata) <- c("y","A","B","C") ``` Compare forecast with model fit: ``` > predict(model) 2 3 4 5 6 7 8 9 10 3.500667 2.411196 2.627915 2.813815 2.468595 1.733852 2.114553 1.423225 1.470738 > predict(model,newdata=newdata) 1 2 3 4 5 6 7 8 9 10 11 NA 3.500667 2.411196 2.627915 2.813815 2.468595 1.733852 2.114553 1.423225 1.470738 1.102367 ``` As you can see for historical data the forecast coincides and the last element contains the 1-step ahead forecast.
null
CC BY-SA 3.0
null
2011-02-01T07:39:48.617
2012-02-17T09:20:06.207
2012-02-17T09:20:06.207
2116
2116
null
6776
1
null
null
5
1151
I would appreciate some help getting some EM stuff straight. So, say I generate data in R as follows: ``` N <- 100 epsilon <- rnorm(N) X <- 10*runif(N) beta.0 <- 10 beta.1 <- 3 sigma <- 2 Y <- beta.0 + beta.1 * X + sigma * epsilon epsilon2 <- rnorm(N) X2 <- 10*runif(N) Y2 <- 3 - X2 + 0.25 * epsilon2 Y.mix <- c(Y, Y2) X.mix <- c(X, X2) ``` Now, in expectation maximization, in the first step, I have some prior probability, say 0.5, of the data being from either one or the other distribution. So, using EM I know I can estimate the mean and variance of the two mixtures. From looking at a density plot, it seems like the means are at about -2 and 30 for the data I simulated. But, at what stage in EM do I back out the betas? I want to recover the slope, intercept, and sd deviation parameters for the 2 regression-type equations. Thanks for an clarification.
Understanding expectation maximization for simple 2 linear mixture case
CC BY-SA 2.5
null
2011-02-01T09:07:09.633
2011-08-24T23:34:27.057
2011-03-27T16:04:19.703
919
2693
[ "r", "machine-learning", "mixture-distribution" ]
6777
1
null
null
4
195
I would like to apply a Bayesian network to some data. However, some of the variables are related to time, e.g. the number of times a person visits the library. The value can be defined as the total number of visits by this person, the average weekly number of visits, the daily number of visits, or even the number of visits in the last 3 days, etc. I do not want to include all these different variations in my network. How should I treat a feature like this properly? Many thanks for your help.
How should I deal with features with time related values in Bayesian network?
CC BY-SA 2.5
null
2011-02-01T10:56:25.047
2016-05-01T20:19:23.587
2016-05-01T20:19:23.587
7290
2454
[ "time-series", "bayesian" ]
6778
2
null
6747
0
null
It's a little weird to answer my own question, but... To address the non-stationarity issue, it seems as though it's not particularly tricky to compute the expected $K$-function under the hypothesis of independent event times. In particular, let's say that we're looking at point processes on the interval $I=[0,T]$, and our processes $X$ and $Y$ have unconditional distributions $f_X(x)$ and $f_Y(y)$. We'll also denote the unit ball in one dimension by $B(x,t) = \{y\in I:d(x,y)<t\}$. Then $K_{XY}(t) = \mathbb{E}[\textrm{number of Y points in a neighbourhood t of a randomly chosen X point}]$ $K_{XY}(t) = \mathbb{E}_X[\mathbb{E}_Y[\textrm{number of Y points within t of x}]]$ $K_{XY}(t) = \int_I dx \int_{B(x,t)} dy \, f_X(x) f_Y(y)$ So deviations from this function in the estimated $\hat{K}_{XY}(t)$ will be an indicator of correlation between the point processes. Does this sound reasonable? --- Edit: In fact, I think this all holds if you take $I\subseteq\mathbb{R}^n$ and interpret $x$, $y$ as vectors.
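If the unconditional densities are available as functions, the double integral above can be approximated crudely by Monte Carlo; this is just a sketch with made-up names, assuming known densities $f_X$ and $f_Y$ on $[0, T]$:
```
# Expected cross-K under independence on [0, Tmax], for known densities f_X and f_Y
K_expected <- function(t, f_X, f_Y, Tmax, n = 1e5) {
  x <- runif(n, 0, Tmax)
  y <- runif(n, 0, Tmax)
  Tmax^2 * mean(f_X(x) * f_Y(y) * (abs(x - y) < t))   # MC estimate of the double integral
}
```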
null
CC BY-SA 2.5
null
2011-02-01T12:09:07.593
2011-02-01T12:09:07.593
null
null
2425
null
6779
1
6782
null
7
3392
Apologies in advance - I'm a bit of a newbie with R but I've been Google-ing this for weeks and I haven't found a straight answer. The problem Basically, I'm looking to plot proportions with their confidence intervals on a box and whiskers plot using the R environment. i.e. Plot the results of binom.test(100,1000) and binom.test(125,1000) What is the best package (I don't have a preference either way) and way to go about doing this?
Plotting binomial proportions on a box & whiskers plot using R
CC BY-SA 2.5
null
2011-02-01T12:51:07.040
2011-02-01T14:57:09.207
2011-02-01T14:57:09.207
null
2595
[ "r", "data-visualization", "boxplot" ]
6780
1
6781
null
30
19889
I have several query frequencies, and I need to estimate the coefficient of Zipf's law. These are the top frequencies: ``` 26486 12053 5052 3033 2536 2391 1444 1220 1152 1039 ```
How to calculate Zipf's law coefficient from a set of top frequencies?
CC BY-SA 2.5
null
2011-02-01T13:24:57.997
2021-11-30T18:52:21.957
2011-03-15T19:04:43.443
8
2998
[ "distributions", "estimation", "pareto-distribution", "zipf" ]
6781
2
null
6780
24
null
Update I've updated the code with a maximum likelihood estimator as per @whuber's suggestion. Minimizing the sum of squared differences between log theoretical probabilities and log frequencies does give an answer, but it would only be a proper statistical procedure if it could be shown to be some kind of M-estimator; unfortunately I could not think of one that would give the same results. Here is my attempt. I calculate logarithms of the frequencies and try to fit them to logarithms of theoretical probabilities [given by this formula](http://en.wikipedia.org/wiki/Zipf's_law). The final result seems reasonable. Here is my code in R. ```
fr <- c(26486, 12053, 5052, 3033, 2536, 2391, 1444, 1220, 1152, 1039)
p <- fr/sum(fr)

lzipf <- function(s,N) -s*log(1:N)-log(sum(1/(1:N)^s))
opt.f <- function(s) sum((log(p)-lzipf(s,length(p)))^2)

opt <- optimize(opt.f,c(0.5,10))

> opt
$minimum
[1] 1.463946

$objective
[1] 0.1346248
``` The best quadratic fit is then $s\approx 1.46$. Maximum likelihood in R can be performed with the `mle` function (from the `stats4` package), which helpfully calculates standard errors (provided the correct negative log-likelihood function is supplied): ```
ll <- function(s) sum(fr*(s*log(1:10)+log(sum(1/(1:10)^s))))

fit <- mle(ll,start=list(s=1))

> summary(fit)
Maximum likelihood estimation

Call:
mle(minuslogl = ll, start = list(s = 1))

Coefficients:
  Estimate  Std. Error
s 1.451385 0.005715046

-2 log L: 188093.4
``` Here is the graph of the fit in log-log scale (again as @whuber suggested): ```
s.sq <- opt$minimum
s.ll <- coef(fit)

plot(1:10,p,log="xy")
lines(1:10,exp(lzipf(s.sq,10)),col=2)
lines(1:10,exp(lzipf(s.ll,10)),col=3)
``` The red line is the sum-of-squares fit, the green line the maximum-likelihood fit. ![Log-log graph of fits](https://i.stack.imgur.com/ChSRC.png)
null
CC BY-SA 3.0
null
2011-02-01T14:12:29.077
2015-01-19T17:58:47.520
2015-01-19T17:58:47.520
-1
2116
null
6782
2
null
6779
6
null
This really does look to be more appropriate to an [r] tagged question in SO but there seems to be a surprisingly wide degree of tolerance for such questions in CV, so here goes. The answer also assumes you did not really want to use the existing `boxplot()` function which is what would be used to construct a true box-and-whiskers plot (and which doesn't really use binomial CI's, but rather uses the Tukey defined `fivenum()` function ). The plotrix and gplots packages both have what appears to be the same plotCI function; ``` est1 <- binom.test(100,1000) ; est2 <- binom.test(125,1000) estvals <- rbind( c(est1$estimate, est1$conf.int[1], est1$conf.int[2]), c(est2$estimate, est2$conf.int[1], est2$conf.int[2])) plotCI(x = c(1,2), estvals[,1], ui=estvals[,3], li=estvals[,2], xlim=c(0.5,2.5), ylim=c(0, 0.15) , xaxt="n", xlab="Groups") ``` The basic function is the `arrows()` plotting function with an argument that splays the points of the arrows to 90 degrees in both directions.
null
CC BY-SA 2.5
null
2011-02-01T14:28:40.207
2011-02-01T14:34:04.300
2011-02-01T14:34:04.300
2129
2129
null
6784
2
null
6544
2
null
One more method, which has been suggested in other topics, is to just use the arima function with xregs. Arima seems to be able to make forecasts from a new set of xregs just fine.
null
CC BY-SA 2.5
null
2011-02-01T16:19:03.493
2011-02-01T16:19:03.493
null
null
2817
null
6785
2
null
6777
2
null
I am afraid you won't like this answer, but it depends on what you want to do. Non-Time-Series-Modelling Point of View Here is an example: suppose you want to predict whether a certain person will visit the library the next day. The starting point for your data is hence the date + person_id + label (visit=true/false). Now the question is which predictors are needed. Can you tell me, without performing any modelling, which of these predictors is meaningful? - average-number-of-visits-per-week: persons visiting the library more often have a higher chance of visiting the next day - number-of-visits-in-the-last-few-days/hours: some people go to the library only once per week, but not always on the same day. So depending on the total visit count per week plus whether the customer has visited the library recently, one could say whether the same person will come in again the next day. - daily-number-of-visits: multiple visits every day? Or only multiple visits on the one weekday when the person comes in? and so on ... I think you get the idea. I would try to generate a good number of predictors and then apply subset selection techniques to find the best combination (yes, I am a machine learner); a rough sketch of constructing such predictors is given at the end of this answer. Time-Series-Modelling Point of View I admit I do not know that much about time-series analysis. I can only imagine that drawing one timeline per customer (e.g. in days or hours on the time/x-axis, plus a peak every time the customer visits) and classifying these timelines (in the context of the example in the first section) might help. PS: I am aware that this answer is not as complete as it should be. However, I hope to stimulate the discussion so that someone can come up with a fuller answer.
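As referenced above, here is a rough, hedged sketch of how some of these predictors could be built in R. The data frame `visits` (columns `person_id` and `visit_date` of class `Date`, one row per visit) is hypothetical and not part of the original question:

```
per.person <- split(visits$visit_date, visits$person_id)

# average number of visits per week over each person's observed history
avg.visits.per.week <- sapply(per.person, function(d)
  length(d) / max(1, as.numeric(difftime(max(d), min(d), units = "weeks"))))

# number of visits in the last 7 days of the observation window
visits.last.7.days <- sapply(per.person, function(d)
  sum(d >= max(visits$visit_date) - 7))

# maximum number of visits on any single day
max.daily.visits <- sapply(per.person, function(d) max(table(d)))

predictors <- data.frame(person_id = names(per.person),
                         avg.visits.per.week,
                         visits.last.7.days,
                         max.daily.visits)
```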
null
CC BY-SA 2.5
null
2011-02-01T17:08:20.320
2011-02-01T17:08:20.320
null
null
264
null
6786
2
null
6780
26
null
There are several issues before us in any estimation problem: - Estimate the parameter. - Assess the quality of that estimate. - Explore the data. - Evaluate the fit. For those who would use statistical methods for understanding and communication, the first should never be done without the others. For estimation it is convenient to use maximum likelihood (ML). The frequencies are so large we can expect the well-known asymptotic properties to hold. ML uses the assumed probability distribution of the data. [Zipf's Law](https://en.wikipedia.org/wiki/Zipf's_law) supposes the probabilities for $i=1,2,\ldots,n$ are proportional to $i^{-s}$ for some constant power $s$ (usually $s\gt 0$). Because these probabilities must sum to unity, the constant of proportionality is the reciprocal of the sum $$H_s(n)=\frac{1}{1^s} + \frac{1}{2^s} + \cdots + \frac{1}{n^s}.$$ Consequently, the logarithm of the probability for any outcome $i$ between $1$ and $n$ is $$\log(\Pr(i)) = \log\left(\frac{i^{-s}}{H_s(n)}\right) = -s\log(i) - \log(H_s(n)).$$ For independent data summarized by their frequencies $f_i, i=1,2,\ldots, n$, the probability is the product of the individual probabilities, $$\Pr(f_1,f_2,\ldots,f_n) = \Pr(1)^{f_1}\Pr(2)^{f_2}\cdots\Pr(n)^{f_n}.$$ Thus the log probability for the data is $$\Lambda(s) = -s \sum_{i=1}^n{f_i \log(i)} - \left(\sum_{i=1}^n{f_i}\right) \log\left(H_s(n)\right).$$ Considering the data as fixed, and expressing this explicitly as a function of $s$, makes it the log [Likelihood](https://stats.stackexchange.com/questions/2641). Numerical maximization of the log Likelihood with the data given in the question yields $\hat{s} = 1.45041$ and $\Lambda(\hat{s}) = -94046.7$. This is significantly better (but just barely so) than the least squares solution (based on log frequencies) of $\hat{s}_{ls} = 1.463946$ with $\Lambda(\hat{s}_{ls}) = -94049.5$. (The optimization can be done with a minor change to the elegant, clear R code provided by mpiktas; a sketch is included at the end of this answer.) ML will also estimate confidence limits for $s$ in the usual ways. The chi-square approximation gives $[1.43922, 1.46162]$ (if I did the calculations correctly :-). Given the nature of Zipf's law, the right way to graph this fit is on a log-log plot, where the fit will be linear (by definition): ![enter image description here](https://i.stack.imgur.com/G1kEQ.png) To evaluate the goodness of fit and explore the data, look at the residuals (data/fit, log-log axes again): ![enter image description here](https://i.stack.imgur.com/mYmkx.png) This is not too great: although there's no evident serial correlation or heteroscedasticity in the residuals, they typically deviate from 1.0 by around 10%. With frequencies in the thousands, we wouldn't expect deviations by more than a few percent. The goodness of fit is readily tested with chi square. We obtain $\chi^2 = 656.476$ with 10 - 1 = 9 degrees of freedom; this is highly significant evidence of departures from Zipf's Law. --- Because the residuals appear random, in some applications we might be content to accept Zipf's Law (and our estimate of the parameter) as an acceptable albeit rough description of the frequencies. This analysis shows, though, that it would be a mistake to suppose this estimate has any explanatory or predictive value for the dataset examined here.
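For reference, here is one hedged way to carry out the computations described above in R. This is a sketch based on the data in the question, not the code actually used for the numbers reported; in particular, the confidence interval here is obtained by inverting the likelihood-ratio (chi-square) test, which is only one of the "usual ways":

```
fr <- c(26486, 12053, 5052, 3033, 2536, 2391, 1444, 1220, 1152, 1039)
n  <- length(fr)

# negative log-likelihood, -Lambda(s), as derived above
nll <- function(s) s * sum(fr * log(1:n)) + sum(fr) * log(sum((1:n)^(-s)))

fit   <- optimize(nll, c(0.5, 10))
s.hat <- fit$minimum                      # ML estimate of s

# approximate 95% CI by inverting the likelihood-ratio test
ci.f  <- function(s) 2 * (nll(s) - nll(s.hat)) - qchisq(0.95, 1)
lower <- uniroot(ci.f, c(s.hat - 0.1, s.hat))$root
upper <- uniroot(ci.f, c(s.hat, s.hat + 0.1))$root

# chi-square goodness-of-fit statistic against the fitted Zipf probabilities
p.hat <- (1:n)^(-s.hat) / sum((1:n)^(-s.hat))
chisq <- sum((fr - sum(fr) * p.hat)^2 / (sum(fr) * p.hat))
```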
null
CC BY-SA 3.0
null
2011-02-01T17:22:39.173
2016-11-30T16:33:14.047
2017-04-13T12:44:23.203
-1
919
null
6787
1
null
null
3
252
I've identified a Single Nucleotide Polymorphism (SNP) that is associated with a variable of interest (let's just call it height) using a Genome Wide Association Study (GWAS). The SNP can take the values A/A, A/G or G/G in each of the 69 samples. I also have gene expression data for the 69 samples, and it appears that the SNP is affecting the expression of a large number (~500) of genes. I'm wondering if there is a convenient statistical package/technique available to assess whether the variable (height) becomes independent of genotype (A/A, A/G or G/G) when conditioning on the expression of each of the ~500 genes whose expression is associated with the SNP (i.e. assessing conditional independence for each of the ~500 genes)?
Assessing conditional independence of genes in Trans-eQTL cluster
CC BY-SA 2.5
null
2011-02-01T17:54:27.250
2011-02-01T22:17:04.600
2011-02-01T22:17:04.600
930
null
[ "regression", "independence", "genetics" ]
6788
2
null
6770
2
null
You have $n$ independent Bernoulli rvs, $X_1, X_2, ..., X_n$ with $P(X_i=1)=P(X_i=0)=p=1/2$. You generate $n$ bits $b_1, b_2, ..., b_n$ as observations on this sequence. Let $N$ be the rv giving the number of runs of ones (say) in the simulated sequence of bits. Then $N=\sum_{i=1}^n Y_i$, where $Y_i$ is the random variable that takes the value 1 if $X_i$ is the starting bit of a run of ones and 0 otherwise. $E(Y_1)=P(Y_1=1)=p$, while for $i=2,...,n$, $P(Y_i=1)=P(X_{i-1}=0, X_i=1)=(1-p)p.$ Hence the expected value of $N$ is $E(N)=p+(n-1)(1-p)p.$
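A quick hedged simulation check of this formula (not part of the original derivation), assuming $p = 1/2$ and $n = 20$:

```
set.seed(1)
n <- 20; p <- 0.5; reps <- 1e5

# a run of ones starts wherever the sequence steps from 0 (or the start) to 1
count.runs <- function(bits) sum(diff(c(0, bits)) == 1)

sim <- replicate(reps, count.runs(rbinom(n, 1, p)))
mean(sim)                     # simulated E(N)
p + (n - 1) * (1 - p) * p     # theoretical E(N): 5.25 for n = 20, p = 1/2
```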
null
CC BY-SA 4.0
null
2011-02-01T18:12:51.627
2020-09-06T18:49:39.783
2020-09-06T18:49:39.783
2376
2376
null
6789
2
null
6755
4
null
I found a way that is at least now feasible. Instead of the sparseM package I use the Matrix package to build the sparse matrices. Store the entries and columns in separate lists and then build the matrix by: ``` library(Matrix) data <- sparseMatrix(i = rep(1, length(columns[[1]])), j = columns[[1]], x = entries[[1]], dims = c(120000, 47235)) for (loop in 2:120000) { if (loop %% 1000 == 0) { print(loop) print(Sys.time()) } data[loop, columns[[loop]]] <- entries[[loop]] } ``` This still takes a while (about 2 hours), but at least it works. I store the corresponding classes in another list, not in the sparse matrix.
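A hedged alternative sketch (assuming, as above, that `entries` and `columns` are lists with one element per row): the whole matrix can also be built in a single `sparseMatrix()` call, which avoids the repeated subassignment in the loop and is usually much faster.

```
library(Matrix)

# one row index per non-zero entry, repeated to match the length of each row
row.index <- rep(seq_along(columns), sapply(columns, length))

data <- sparseMatrix(i = row.index,
                     j = unlist(columns),
                     x = unlist(entries),
                     dims = c(120000, 47235))
```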
null
CC BY-SA 2.5
null
2011-02-01T18:58:10.837
2011-02-01T18:58:10.837
null
null
2310
null
6790
2
null
6787
4
null
I would recommend looking at the [snpMatrix](http://www.bioconductor.org/packages/2.3/bioc/html/snpMatrix.html) R package. Within the `snp.lhs.tests()` function, height will be the phenotype (or outcome), your SNP data will be stored in a vector (`snp.data`), and your gene expression levels will enter the model as covariates. I have never used this kind of covariate (in the GWAS I am familiar with we adjust for population stratification and subject-specific covariates), so I can't give an example of use. Just give it a try to see if it suits your needs. Otherwise, a larger modeling framework is available in the [GGtools](http://www.bioconductor.org/packages/2.2/bioc/html/GGtools.html) package, also from the [Bioconductor](http://www.bioconductor.org) project. It was developed by Vince J. Carey for dealing specifically with SNP and gene expression data in GWAS or eQTL studies. There are numerous examples of use on the web (search for tutorials by VJ Carey on `GGtools`). However, it is mainly intended for modeling gene expression as a function of observed genotypes (chromosome- or genome-wide). Caution: please note that the `GGtools` package has a lot of dependencies, in particular `GGBase` (which provides specific environments for holding genetic data) and `snpMatrix` (which provides efficient storage of SNP data). You will also need to install the base packages from the Bioconductor repository (this is not part of CRAN); detailed instructions are available [here](http://www.bioconductor.org/install/index.html), and a minimal installation sketch follows.
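A hedged sketch of the installation step, using the `biocLite` route Bioconductor provided at the time of writing (check the linked instructions for the current procedure):

```
# installs GGtools together with its Bioconductor dependencies (GGBase, snpMatrix, ...)
source("http://bioconductor.org/biocLite.R")
biocLite(c("snpMatrix", "GGtools"))
```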
null
CC BY-SA 2.5
null
2011-02-01T22:15:50.637
2011-02-01T22:15:50.637
null
null
930
null
6791
1
6797
null
8
11146
Let Year 1 be last year's data and Year 2 be this year's data. Suppose that in Year 1 you had a Likert scale that was 1-9 (categorical/ordinal) and that in Year 2, for the same question, you had a Likert scale that was 1-5 (categorical/ordinal). What would be some of the things you would try (if anything) to compare the two years' worth of data? What I've done so far: - Compared the distributions (shape, skew, and kurtosis are statistically equal) - Rescaled 1-9 to 1-5; the year-over-year changes in frequencies match logical expectations derived from industry news/events and qualitative research findings. Note: This is not homework. It also may not have a definite answer. But, I need a hand! Thanks in advance!
How to compare Likert scales with varying number of categories across time?
CC BY-SA 2.5
null
2011-02-01T23:36:42.177
2023-04-13T11:04:09.670
2011-03-30T18:42:33.780
930
776
[ "scales", "likert" ]
6793
2
null
6762
5
null
Below, I briefly sketch out one "back of the envelope" calculation, which is in no way optimal, but does at least show that the probability converges to one exponentially fast. Before getting into the (gory) details, we give the answer. $$ \Pr(\text{at least one actor chooses a unique outcome}) \geq 1 - (e/4)^{n/2} . $$ As mentioned before, comparing this result to the above for small $n$ shows that the result is not super sharp. But, it does provide a guarantee for all $n$ and shows that as $n$ grows, the probability goes to 1 at least exponentially fast. (The actual rate is probably more like $e^{b n \log n + o(n)}$). --- The idea is to turn the problem into an occupancy or coupon-collecting problem. If you want a more precise answer, you can refer to the (extensive) literature on these subjects, but to prove a faster rate takes messier analysis. Let $U_n$ be the proportion of actors that choose a unique outcome (when we have $n$ total actors), which we use below. (Technically, the analysis below is for even $n$, but that's a minor point. The same result holds for odd $n$; just replace $n/2$ by $(n-1)/2$ and $3n/2$ by $(3n+1)/2$ in the relevant places.) Restating the problem in terms of balls and bins. We have $n$ balls and $2n$ bins. The "balls" are the "actors" and the "bins" are the possible outcomes. We want to know the probability that at least one bin has only a single ball in it. The probability that any particular ball is alone in its bin (i.e., that the corresponding actor chooses a unique outcome) is $\left(1 - \frac{1}{2n}\right)^{n-1}$. So, the expected proportion of balls in a bin by themselves (i.e., actors that choose a unique outcome) is $$ \mathbb{E}(U_n) = \left(1 - \frac{1}{2n}\right)^{n-1} \to e^{-1/2} \approx 60.65\% $$ This gives us pretty good hope that the probability of at least one ball being in a bin by itself is (very) high. To get a bound on this probability, note that if each of the occupied bins has at least two balls in it, then at most $n/2$ bins can be occupied. So $$ \Pr(\text{no bin has exactly 1 ball}) \leq \Pr(\text{no more than n/2 bins are occupied}) $$ Now, if no more than $n/2$ bins are occupied, then there must be some subset of $3n/2$ bins that are all empty. There are ${2n \choose 3n/2} = {2n \choose n/2}$ such subsets, and so by the union bound, we get $$ \Pr(\text{some subset of 3n/2 bins is empty}) \leq {2n \choose n/2} \left(1 - \frac{3n/2}{2n}\right)^{n} = {2n \choose n/2} 4^{-n} $$ To finish up, we use the elementary inequality $$ {2n \choose n/2} \leq \left(\frac{2n e}{n/2}\right)^{n/2} = (4e)^{n/2} . $$ Putting it all together, we get $$ \Pr(\text{no bin has exactly 1 ball}) \leq (4 e)^{n/2} 4^{-n} = (e/4)^{n/2} . $$ Hence, $$ \Pr(\text{at least one actor chooses a unique outcome}) \geq 1 - (e/4)^{n/2} . $$
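A short hedged Monte Carlo check (not part of the argument above) comparing the simulated probability with the bound $1-(e/4)^{n/2}$:

```
sim.unique <- function(n, reps = 1e4) {
  mean(replicate(reps, {
    picks <- sample.int(2 * n, n, replace = TRUE)   # n actors, 2n possible outcomes
    any(table(picks) == 1)                          # at least one unique choice?
  }))
}

sapply(c(2, 4, 8, 16), function(n)
  c(n = n, simulated = sim.unique(n), bound = 1 - (exp(1) / 4)^(n / 2)))
```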
null
CC BY-SA 2.5
null
2011-02-02T02:44:08.977
2011-02-06T14:48:56.133
2011-02-06T14:48:56.133
2970
2970
null