Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6464 | 2 | null | 6463 | 3 | null | You should try the R software with the package [spdep](http://cran.r-project.org/web/packages/spdep/index.html); see the vignette [here](http://cran.r-project.org/web/packages/spdep/vignettes/sids.pdf).
| null | CC BY-SA 2.5 | null | 2011-01-23T08:13:06.243 | 2011-01-23T09:26:32.053 | 2011-01-23T09:26:32.053 | 930 | 223 | null |
6465 | 2 | null | 6463 | 13 | null | In addition to the package that @robin pointed to, you should look at the [Spatial](http://cran.r-project.org/web/views/Spatial.html) Task View on CRAN. What you are describing is known as a [choropleth map](http://en.wikipedia.org/wiki/Choropleth_map), as illustrated here: [Choropleth Maps of Presidential Voting](http://www.r-bloggers.com/choropleth-maps-of-presidential-voting/), or [U.S. Unemployment Data: Animated Choropleth Maps](http://www.r-bloggers.com/u-s-unemployment-data-animated-choropleth-maps/).
In R, they can be handled using (a minimal base-graphics sketch follows this list):
- the base graphics routines (with maps),
- the lattice way (see mapplot() in latticeExtra),
- the ggplot2 way (see e.g., Choropleth Challenge Results for example code).
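As a rough illustration of the base-graphics route with the maps package, here is a minimal sketch; the state-level values are made up purely to show the mechanics of filling polygons by value.
```
library(maps)
set.seed(1)
# one made-up value per polygon of the "state" database, in plotting order
nm   <- map("state", plot = FALSE, namesonly = TRUE)
vals <- runif(length(nm))                        # hypothetical state-level statistic
map("state", fill = TRUE, col = gray(1 - vals))  # darker shading = larger value
title("Toy choropleth with base graphics + maps")
```
In a real application you would, of course, match each polygon name in `nm` to the corresponding value in your data before colouring.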
Other software that can handle geographical maps includes [Mondrian](http://rosuda.org/mondrian/Mondrian.html) and [Quantum GIS](http://www.qgis.org/) (and I guess many other GIS programs), but data formats may vary from one program to another.
| null | CC BY-SA 2.5 | null | 2011-01-23T09:26:00.510 | 2011-01-23T09:26:00.510 | null | null | 930 | null |
6466 | 2 | null | 577 | 6 | null | AIC should rarely be used, as it is really only valid asymptotically. It is almost always better to use AICc (AIC with a correction for finite sample size). AIC tends to overparameterize: that problem is greatly lessened with AICc. The main exception to using AICc is when the underlying distributions are heavily leptokurtic. For more on this, see the book Model Selection by Burnham & Anderson.
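For reference, the usual form of the finite-sample correction, for a model with $k$ estimated parameters fitted to $n$ observations, is
$$\mathrm{AIC}_c = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1},$$
which converges to AIC as $n$ grows relative to $k$.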
| null | CC BY-SA 2.5 | null | 2011-01-23T14:11:09.733 | 2011-01-23T14:11:09.733 | null | null | 2875 | null |
6467 | 2 | null | 6463 | 2 | null | I've not tried [CrackMaps](http://www.crack4mac.com/) but it looks very interesting. Only available for the Mac, I presume.
| null | CC BY-SA 2.5 | null | 2011-01-23T14:31:32.623 | 2011-01-23T14:31:32.623 | null | null | 2617 | null |
6468 | 1 | null | null | 3 | 934 | This is a question that landed on my desk, and I don't have the requisite experience to reply.
A researcher has been asked (by a journal reviewer) to perform a two-way factorial ANCOVA on some microarray data (here a single array is a single measurement of ~30,000 entities from a single sample).
The suggestion was to use ethnicity and age as covariates. I understand that ANCOVA requires at least one categorical and one continuous predictor variable.
The researcher has 3 ethnic groupings in his samples - 6 of group A, 2 of group B and 1 of group C. These samples are spread across three treatments (X, Y and Z), with 3 samples per treatment. This means the treatments cannot be split evenly by ethnicity; group C will appear in only one of the treatments.
Is this sample size/grouping issue a barrier to running a two-way factorial ANCOVA? Is there another test that might be more appropriate under the circumstances?
| Requirements to perform a two-way factorial ANCOVA | CC BY-SA 2.5 | null | 2011-01-23T15:16:47.363 | 2011-01-23T17:48:31.583 | null | null | null | [
"ancova"
]
|
6469 | 1 | 6470 | null | 21 | 24393 | How do I fit a linear model with autocorrelated errors in R? In Stata I would use the `prais` command, but I can't find an R equivalent...
| Simple linear model with autocorrelated errors in R | CC BY-SA 2.5 | null | 2011-01-23T15:36:47.640 | 2015-02-17T08:58:47.903 | 2011-01-23T17:35:50.843 | null | 2817 | [
"r",
"time-series",
"autocorrelation"
]
|
6470 | 2 | null | 6469 | 24 | null | Have a look at `gls` (generalized least squares) from the package [nlme](http://cran.r-project.org/web/packages/nlme/index.html)
You can set a correlation profile for the errors in the regression, e.g. ARMA, etc:
```
gls(Y ~ X, correlation=corARMA(p=1,q=1))
```
for ARMA(1,1) errors.
| null | CC BY-SA 2.5 | null | 2011-01-23T15:52:41.160 | 2011-01-23T15:52:41.160 | null | null | 300 | null |
6471 | 2 | null | 6469 | 7 | null | Use the function gls from the package nlme. Here is an example.
```
##Generate data frame with regressor and AR(1) error. The error term is
## \eps_t=0.3*\eps_{t-1}+v_t
df <- data.frame(x=rnorm(100), err=filter(rnorm(100)/5, filter=0.3, method="recursive"))
## Create the response
df$y <- 1 + 2*df$x + df$err
###Fit the model
gls(y~x, data=df, corr=corAR1(0.5,form=~1))
Generalized least squares fit by REML
Model: y ~ x
Data: df
Log-restricted-likelihood: 9.986475
Coefficients:
(Intercept) x
1.040129 2.001884
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.2686271
Degrees of freedom: 100 total; 98 residual
Residual standard error: 0.2172698
```
Since the model is fitted using maximum likelihood, you need to supply starting values. The default starting value is 0, but as always it is good to try several values to ensure convergence.
As Dr. G pointed out, you can also use other correlation structures, such as ARMA.
Note that, in general, least squares estimates remain consistent even if the covariance matrix of the regression errors is not a multiple of the identity matrix, so if you fit a model with a specific covariance structure, you should first test whether that structure is appropriate.
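One hedged way to do this check, continuing the example above, is to compare the AR(1) fit against an independent-errors fit and to inspect the residual autocorrelation:
```
require(nlme)
fit.iid <- gls(y ~ x, data = df)
fit.ar1 <- gls(y ~ x, data = df, corr = corAR1(0.5, form = ~1))
anova(fit.iid, fit.ar1)                   # AIC/BIC and a likelihood ratio test
acf(resid(fit.ar1, type = "normalized"))  # remaining autocorrelation, if any
```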
| null | CC BY-SA 2.5 | null | 2011-01-23T16:11:36.907 | 2011-01-23T18:56:29.413 | 2011-01-23T18:56:29.413 | 2116 | 2116 | null |
6472 | 2 | null | 6468 | 2 | null | A total sample size of nine is simply too small to do a two-way factorial ANCOVA or a two way-factorial ANOVA. I would have thought it's stretching things to even do a one-way ANOVA on nine observations, especially with 30,000 different outcomes. I think you need a bigger sample if you wish to make any statistical inferences.
| null | CC BY-SA 2.5 | null | 2011-01-23T17:48:31.583 | 2011-01-23T17:48:31.583 | null | null | 449 | null |
6473 | 1 | 6474 | null | 8 | 2822 | I was wondering if anyone has ported the examples from Durbin & Koopman "Time Series Analysis by State Space Methods" to R?
You can find RATS code for the examples online, and obviously SsfPack/Ox, but no sign of an R companion for this book...
| R examples for Durbin & Koopman "Time Series Analysis by State Space Methods" | CC BY-SA 2.5 | null | 2011-01-23T18:44:29.417 | 2018-05-15T08:16:24.717 | 2018-05-15T07:26:50.410 | 128677 | 300 | [
"r",
"time-series",
"references",
"state-space-models"
]
|
6474 | 2 | null | 6473 | 15 | null | There is a package in CRAN ([KFAS](http://cran.r-project.org/web/packages/KFAS/index.html)) which implements a good portion of the algorithms described in Durbin & Koopman, including "exact" non-informative distributions for the state vector, or parts of it.
Although it is not particularly tied to Durbin & Koopman's book, you might also be interested in package [dlm](http://cran.r-project.org/web/packages/dlm/index.html) and the companion book [Dynamic Linear Models with R](http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Dstripbooks&field-keywords=Petris).
| null | CC BY-SA 2.5 | null | 2011-01-23T18:53:28.767 | 2011-01-23T18:53:28.767 | null | null | 892 | null |
6475 | 1 | 6491 | null | 4 | 6768 | I have used a three-way ANOVA to analyze the effect of genes, transcription factors, and different conditions on gene expression. Now I have the elements SSa, SSb, SSc, SSab, SSbc, SSac, SSabc, and SSe. How can I interpret these values? I would appreciate it if you could point me to some resources, or share your own experience with interpreting a 3-way ANOVA.
Best
| How to interpret a three-way ANOVA? | CC BY-SA 2.5 | null | 2011-01-23T19:48:59.000 | 2011-01-24T16:01:01.520 | 2011-01-23T19:55:49.120 | 930 | 2885 | [
"anova",
"interpretation"
]
|
6476 | 2 | null | 6469 | 30 | null | In addition to the `gls()` function from `nlme`, you can also use the `arima()` function in the `stats` package using MLE. Here is an example with both functions.
```
x <- 1:100
e <- 25*arima.sim(model=list(ar=0.3),n=100)
y <- 1 + 2*x + e
###Fit the model using gls()
require(nlme)
(fit1 <- gls(y~x, corr=corAR1(0.5,form=~1)))
Generalized least squares fit by REML
Model: y ~ x
Data: NULL
Log-restricted-likelihood: -443.6371
Coefficients:
(Intercept) x
4.379304 1.957357
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.3637263
Degrees of freedom: 100 total; 98 residual
Residual standard error: 22.32908
###Fit the model using arima()
(fit2 <- arima(y, xreg=x, order=c(1,0,0)))
Call:
arima(x = y, order = c(1, 0, 0), xreg = x)
Coefficients:
ar1 intercept x
0.3352 4.5052 1.9548
s.e. 0.0960 6.1743 0.1060
sigma^2 estimated as 423.7: log likelihood = -444.4, aic = 896.81
```
The advantage of the arima() function is that you can fit a much larger variety of ARMA error processes. If you use the auto.arima() function from the forecast package, you can automatically identify the ARMA error:
```
require(forecast)
fit3 <- auto.arima(y, xreg=x)
```
| null | CC BY-SA 2.5 | null | 2011-01-23T21:22:19.533 | 2011-01-24T04:48:03.363 | 2011-01-24T04:48:03.363 | 159 | 159 | null |
6477 | 1 | 6486 | null | 13 | 36377 | I have run Levene's and Bartlett's test on groups of data from one of my experiments to validate that I am not violating ANOVA's assumption of homogeneity of variances. I'd like to check with you guys that I'm not making any wrong assumptions, if you don't mind :D
The p-value returned by both of those tests is the probability that my data, if it were generated again using equal variances, would be the same. Thus, using those tests, to be able to say that I do not violate ANOVA's assumption of homogeneity of variances, I would only need a p-value that is higher than a chosen alpha level (say 0.05)?
E.g., with the data I am currently using, Bartlett's test returns p=0.57, while Levene's test (well, they call it a Brown-Forsythe Levene-type test) gives p=0.95. That means that, no matter which test I use, I can say that my data meet the assumption. Am I making any mistake?
Thanks.
| Interpretation of the p-values produced by Levene's or Bartlett's test for homogeneity of variances | CC BY-SA 2.5 | null | 2011-01-23T21:42:19.447 | 2011-01-24T15:20:28.567 | 2011-01-23T23:15:38.383 | 1320 | 1320 | [
"anova",
"heteroscedasticity",
"levenes-test"
]
|
6478 | 1 | 6485 | null | 30 | 68051 | When building a CART model (specifically classification tree) using rpart (in R), it is often interesting to know what is the importance of the various variables introduced to the model.
Thus, my question is: What common measures exist for ranking/measuring the variable importance of the participating variables in a CART model? And how can this be computed using R (for example, when using the rpart package)?
For example, here is some dummy code, created so you might show your solutions on it. This example is structured so that it is clear that variables x1 and x2 are "important" while (in some sense) x1 is more important than x2 (since x1 should apply to more cases, and thus have more influence on the structure of the data, than x2).
```
set.seed(31431)
n <- 400
x1 <- rnorm(n)
x2 <- rnorm(n)
x3 <- rnorm(n)
x4 <- rnorm(n)
x5 <- rnorm(n)
X <- data.frame(x1,x2,x3,x4,x5)
y <- sample(letters[1:4], n, T)
y <- ifelse(X[,2] < -1 , "b", y)
y <- ifelse(X[,1] < 0 , "a", y)
require(rpart)
fit <- rpart(y~., X)
plot(fit); text(fit)
info.gain.rpart(fit) # your function - telling us how important each variable is
```
(references are always welcomed)
| How to measure/rank "variable importance" when using CART? (specifically using {rpart} from R) | CC BY-SA 2.5 | null | 2011-01-23T22:06:03.373 | 2023-03-20T01:50:42.053 | 2011-01-27T10:42:22.663 | 8 | 253 | [
"r",
"classification",
"model-selection",
"cart",
"rpart"
]
|
6479 | 2 | null | 6477 | 5 | null | You're on "the right side of the p-value." I'd just adjust your statement slightly to say that, IF the groups had equal variances in their populations, this result of p=0.95 indicates that random sampling using these n-sizes would produce variances this far apart or farther 95% of the time. In other words, strictly speaking it's correct to phrase the result in terms of what it says about the null hypothesis, but not in terms of what it says about the future.
| null | CC BY-SA 2.5 | null | 2011-01-23T22:20:05.120 | 2011-01-23T22:39:18.393 | 2011-01-23T22:39:18.393 | 2669 | 2669 | null |
6480 | 1 | 6482 | null | 7 | 323 | I have run 10,000 random samples (910 data points each) on a data set of about 75,000 data points. I would like to make a continuous distribution out of this so that I can test the probability of getting the results of a particular non-random sample which I made based on theoretical concerns.
For each random sample (and for the "real" sample), I collected the number of hits, the number of hits + misses (this number varies somewhat for reasons which I don't think are important), and the relative frequency of hits (hits / hits+misses).
Ideally, I'd like to take the relative frequencies and turn them into a continuous distribution (I assume it will be roughly normal), so that I can then see how likely the "real" relative frequency would be (using something simple like a t-test). But I'm not sure how to go about doing that.
On the other hand, is there an easier way to test the probability of obtaining my actual results just given a long file of the results of each random sample?
I assume there's some kind of R function that would make this fairly straightforward. Any hints?
| How can I make a continuous distribution out of simulation results in R? | CC BY-SA 2.5 | null | 2011-01-24T00:56:13.530 | 2011-01-24T18:42:45.843 | 2011-01-24T09:37:48.097 | 144 | 52 | [
"r",
"distributions",
"simulation"
]
|
6481 | 1 | 6510 | null | 3 | 349 | As the title suggests, I am hesitating about whether to use ordinal logistic regression or not. I don't think I have the time to understand it and figure out how to implement it in R, so can I just ignore the ordering? Will the consequences be serious (i.e., seriously under- or over-estimating the effect size)?
Thanks.
| Consequence of ignoring the order of a categorical variable with different levels in logistic regression | CC BY-SA 3.0 | 0 | 2011-01-24T06:40:36.760 | 2015-05-20T14:38:44.167 | 2015-05-20T14:38:44.167 | 28740 | 588 | [
"logistic",
"ordinal-data"
]
|
6482 | 2 | null | 6480 | 3 | null | It sounds like what you're describing is a bootstrap simulation in order to estimate the distribution of a statistic (the relative frequencies).
The package I'd suggest you look into is the [boot package](http://cran.r-project.org/web/packages/boot/index.html):
> functions and datasets for bootstrapping from the book "Bootstrap Methods and Their Applications" by A. C. Davison and D. V. Hinkley (1997, CUP).
It should provide many of the functions needed for what (it seems to me) you are trying to do.
Here are some [tutorials on the boot package](http://www.r-bloggers.com/bootstrapping-and-the-boot-package-in-r/).
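A minimal sketch of the idea (the numbers and the statistic are made up; your own data would be a vector of hits/misses per sample):
```
library(boot)
set.seed(42)
hits <- rbinom(910, 1, 0.3)                      # hypothetical 0/1 hit indicators for one sample
rel.freq <- function(data, idx) mean(data[idx])  # statistic: relative frequency of hits
b <- boot(hits, rel.freq, R = 10000)             # 10,000 bootstrap replicates
boot.ci(b, type = "perc")                        # percentile confidence interval
```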
Best,
Tal
| null | CC BY-SA 2.5 | null | 2011-01-24T07:42:57.160 | 2011-01-24T07:42:57.160 | null | null | 253 | null |
6483 | 1 | null | null | 7 | 377 | I have items that have a geospatial position and a temporal origin. So far, I have built clusters for each of these dimensions separately.
I'm now looking for a way to merge these different clusters into spatio-temporal clusters. Of course, I want to avoid calculating completely new clusters from scratch and would rather reuse the information from the previous clustering.
Is there any algorithm for building 3D clusters by merging the results of a previous 1D and 2D clustering process?
Thanks!
| Merging spatial and temporal clusters | CC BY-SA 2.5 | null | 2011-01-24T07:48:41.997 | 2011-06-02T22:31:51.410 | 2011-01-24T19:33:04.137 | 919 | 2880 | [
"time-series",
"clustering",
"spatial"
]
|
6484 | 1 | null | null | 1 | 2537 | I am using binary logistic regression to test this model:
$\text{logit}(\rho_1) = \alpha + \beta_1(\text{INDDIR}) + \beta_2(\text{INDCHAIR}) + \beta_3(\text{BOARDSIZE}) +\\ \beta_4(\text{DIRSHIP}) + \beta_5(\text{MEETING}) + \beta_6(\text{EXPERT}) + \beta_7(\text{INSTI}) +\\ \beta_8(\text{DEBT}) + \beta_9(\text{LnSIZE}) + \beta_{10}(\text{BIG4})$
I have some doubts about the casewise list (outliers). The first time I ran the analysis, there were 44 outliers. I deleted all 44 cases. Then I re-ran the analysis. Again, 11 outliers appeared. I deleted those 11 cases and re-ran it. Again and again: this time 6 outliers were found!
So, should I keep deleting all the outliers, or would this be a never-ending story?
| Problem with casewise list in logistic regression | CC BY-SA 4.0 | null | 2011-01-24T09:23:24.853 | 2020-01-30T07:30:17.283 | 2020-01-30T07:30:17.283 | 143489 | 2793 | [
"logistic",
"spss"
]
|
6485 | 2 | null | 6478 | 48 | null | Variable importance might generally be computed based on the corresponding reduction of predictive accuracy when the predictor of interest is removed (with a permutation technique, like in Random Forest) or some measure of decrease of node impurity, but see (1) for an overview of available methods. An obvious alternative to CART is RF of course ([randomForest](http://cran.r-project.org/web/packages/randomForest/index.html), but see also [party](http://cran.r-project.org/web/packages/party/index.html)). With RF, the Gini importance index is defined as the averaged Gini decrease in node impurities over all trees in the forest (it follows from the fact that the Gini impurity index for a given parent node is larger than the value of that measure for its two daughter nodes, see e.g. (2)).
I know that Carolin Strobl and coll. have contributed a lot of simulation and experimental studies on (conditional) variable importance in RFs and CARTs (e.g., (3-4), but there are many other ones, or her thesis, [Statistical Issues in Machine Learning – Towards Reliable Split Selection and Variable Importance Measures](http://edoc.ub.uni-muenchen.de/8904/1/Strobl_Carolin.pdf)).
To my knowledge, the [caret](http://caret.r-forge.r-project.org/Classification_and_Regression_Training.html) package (5) only considers a loss function for the regression case (i.e., mean squared error). Maybe it will be added in the near future (anyway, an example with a classification case by k-NN is available in the on-line help for `dotPlot`).
However, Noel M O'Boyle seems to have some R code for [Variable importance in CART](http://www.redbrick.dcu.ie/~noel/R_classification.html).
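As a minimal, hedged sketch using the dummy data from the question: recent versions of rpart expose an importance vector directly, and randomForest reports the permutation (MeanDecreaseAccuracy) and Gini (MeanDecreaseGini) measures discussed above.
```
fit$variable.importance          # rpart's own importance scores (recent rpart versions)

library(randomForest)
rf <- randomForest(x = X, y = factor(y), importance = TRUE)
importance(rf)                   # permutation and Gini importance for each variable
varImpPlot(rf)
```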
References
- Sandri and Zuccolotto. A bias correction algorithm for the Gini variable importance measure in classification trees. 2008
- Izenman. Modern Multivariate Statistical Techniques. Springer 2008
- Strobl, Hothorn, and Zeilis. Party on!. R Journal 2009 1/2
- Strobl, Boulesteix, Kneib, Augustin, and Zeilis. Conditional variable importance for random forests. BMC Bioinformatics 2008, 9:307
- Kuhn. Building Predictive Models in R Using the caret Package. JSS 2008 28(5)
| null | CC BY-SA 2.5 | null | 2011-01-24T10:47:25.223 | 2011-01-24T10:47:25.223 | null | null | 930 | null |
6486 | 2 | null | 6477 | 8 | null | The p-value of your significance test can be interpreted as the probability of observing a value of the relevant statistic as extreme as, or more extreme than, the value you actually observed, given that the null hypothesis is true. (Note that the p-value makes no reference to what values of the statistic are likely under the alternative hypothesis.)
EDIT: in mathematical terminology, this can be written as:
$$\text{p-value} = \Pr(T \geq T_{obs} \mid H_{0})$$
where $T$ is some function of the data (the "statistic") and $T_{obs}$ is the actual value of $T$ observed; $H_{0}$ denotes the conditions implied by the null hypothesis on the sampling distribution of $T$.
You can never be sure that your assumptions hold true, only whether or not the data you observed are consistent with your assumptions. A p-value gives a rough measure of this consistency.
A p-value does not give the probability that the same data will be observed, only the probability that the value of the statistic is as or more extreme than the value observed, given the null hypothesis.
| null | CC BY-SA 2.5 | null | 2011-01-24T12:53:41.863 | 2011-01-24T14:56:01.570 | 2011-01-24T14:56:01.570 | 2392 | 2392 | null |
6487 | 1 | null | null | 1 | 2501 | Looking to estimate a `VARX(p,q)` type VECM in R if possible.
I'd like to estimate a VECM with p lags (lags relative to the level, not diff of the vars) on the endogenous variables and q lags on the exogenous components. Any ideas? At the moment I'm using the `ca.jo()` function and adding the contemporaneous and lagged exogenous variables to the `dumvar=` option. I do not feel confident about this approach.
Can I do this in R? Should I switch software?
| Lagged Exogenous Variables in VECM with R | CC BY-SA 2.5 | null | 2011-01-24T13:12:50.933 | 2011-01-24T15:23:46.473 | 2011-01-24T15:23:46.473 | null | null | [
"r",
"time-series",
"cointegration"
]
|
6488 | 2 | null | 6487 | 4 | null | If your variables are exogenous and their lags are exogenous, then there is no problem in treating them as dummy variables. So you can add them to `dumvar` option. My answer [to similar question](https://stats.stackexchange.com/questions/4030/finding-coefficients-for-vecm-exogenous-variables/5291#5291) applies.
| null | CC BY-SA 2.5 | null | 2011-01-24T13:23:25.360 | 2011-01-24T13:23:25.360 | 2017-04-13T12:44:52.277 | -1 | 2116 | null |
6489 | 2 | null | 2950 | 0 | null | I wouldn't cluster or classify the data at all. Since you've got ratio-scaled data (isotopes), my method of choice would be PCA (Principal Component Analysis). By colouring your points in the PCA diagram according to your "eco-type" you could see their dispersal within the isotope ratio variation. Further, you will see the influence of each parameter (isotope ratios) in the biplot - the result will contain much more information than a classification.
| null | CC BY-SA 2.5 | null | 2011-01-24T15:19:25.240 | 2011-01-24T15:19:25.240 | null | null | null | null |
6490 | 2 | null | 6477 | 3 | null | While the previous comments are 100% correct, the plots produced for model objects in R provide a graphic summary of this question. Personally, i always find the plots much more useful than the p value, as one can transform the data afterward and spot the changes immediately in the plot.
| null | CC BY-SA 2.5 | null | 2011-01-24T15:20:28.567 | 2011-01-24T15:20:28.567 | null | null | 656 | null |
6491 | 2 | null | 6475 | 2 | null | Just a (not so) brief note on the question as posed before I give a way to interpret the results: you must be careful about which sums of squares you have calculated, for they relate to different kinds of null hypotheses. For example, the term $SSa$ may be for just the effect $a$ assuming everything else is in the model, or it may be any effect including $a$, which is $a+ab+ac+abc$, or some other null hypothesis which refers to effect $a$. The standard literature refers to four different types of sums of squares, corresponding to different ways of analysing the data (e.g. total contribution, sequential contribution, marginal contribution). You should investigate whatever software you are using to see what sums of squares it gives you. If you are using R, it is most likely sequential, which means basically "in the order specified when you wrote the model".
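For instance, here is a minimal sketch with made-up balanced data showing how R's aov() reports one (sequential) sum of squares per term named in the question:
```
set.seed(1)
d <- expand.grid(a = gl(2, 1), b = gl(3, 1), c = gl(2, 1), rep = 1:5)
d$y <- rnorm(nrow(d))                  # purely artificial response
summary(aov(y ~ a * b * c, data = d))  # rows: a, b, c, a:b, a:c, b:c, a:b:c, Residuals
```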
There probably is no "simple" way to interpret a 3-way (or higher) ANOVA, but here's how I do it. I think of it like a contingency table (or "contingency cube" in 3-way case). You have a factor for the row, a factor for the column, and a factor for the "depth". As you move around the cells in the table, the mean changes and we can observe patterns in how the mean varies as we move around the table in a "structured way".
If the factors are "independent" in their association with the response, then this indicates that the mean value only depends on which row, column, and depth we are in. For instance, if you change the row (but leave the column and depth fixed) then the effect of the column is unchanged, and the effect of the depth is unchanged.
If all factors are present, then it matters which particular cell the observation is in (i.e. each cell has a unique mean under the model). For instance, if you change the row (but leave the column and depth fixed) then the effect of the column always changes, and the effect of the depth always changes.
For the "in-between" interpretation where 2-way interactions are present but 3-way interaction are not, it is a bit more tricky to interpret. This goes along the lines of if you change the row (but leave the column and depth fixed) then the effect column-by-row always changes, and the effect of the depth-by-row always changes, but the effect of the depth-by-column remains the same.
So like I said, probably not "simple", but this is how I would interpret interactions in a 3-way ANOVA. And finally, the $SSe$ term is simply the variation which could not be explained by the full 3-way model (residual or error variance). The sums of squares are devices to make testing a "null hypothesis" more straight forward. But in multi-way ANOVA, there are many null hypothesis one can test, and so no "automatic" procedure know what the user will want in their particular case, so they give a default. It is important to understand what the default sums of squares can be used to test, and then if any of those tests are the one you want to do.
| null | CC BY-SA 2.5 | null | 2011-01-24T16:01:01.520 | 2011-01-24T16:01:01.520 | null | null | 2392 | null |
6492 | 1 | 6679 | null | 16 | 3184 | I want to use a t distribution to model short-interval asset returns in a Bayesian model. I'd like to estimate the degrees of freedom of the distribution (along with the other parameters in my model). I know that asset returns are quite non-normal, but I don't know too much beyond that.
What is an appropriate, mildly informative prior distribution for the degrees of freedom in such a model?
| What's a good prior distribution for degrees of freedom in a t distribution? | CC BY-SA 2.5 | null | 2011-01-24T17:55:40.930 | 2020-02-26T16:49:59.363 | 2011-01-24T18:03:05.840 | 1146 | 1146 | [
"distributions",
"bayesian",
"modeling",
"prior"
]
|
6493 | 1 | 6506 | null | 24 | 6795 | I have been using log-normal distributions as prior distributions for scale parameters (for normal distributions, t distributions etc.) when I have a rough idea about what the scale should be, but want to err on the side of saying I don't know much about it. I use it because that choice makes intuitive sense to me, but I haven't seen others use it. Are there any hidden dangers to this?
| Weakly informative prior distributions for scale parameters | CC BY-SA 3.0 | null | 2011-01-24T18:02:29.257 | 2020-05-11T11:12:51.243 | 2013-01-24T18:08:12.653 | 1146 | 1146 | [
"distributions",
"bayesian",
"modeling",
"prior",
"maximum-entropy"
]
|
6494 | 2 | null | 6480 | 2 | null | Sounds like you want something like a [Kernel Density Estimate](http://en.wikipedia.org/wiki/Kernel_density_estimation) of the distribution. In R, I think you want the [density](http://stat.ethz.ch/R-manual/R-patched/library/stats/html/density.html) function.
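A minimal sketch (the numbers stand in for the 10,000 simulated relative frequencies, and 0.36 stands in for the observed one):
```
rel.freqs <- rnorm(10000, mean = 0.3, sd = 0.02)   # hypothetical simulation results
plot(density(rel.freqs))                           # kernel density estimate of the distribution
mean(rel.freqs >= 0.36)                            # empirical tail probability of the observed value
```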
| null | CC BY-SA 2.5 | null | 2011-01-24T18:42:45.843 | 2011-01-24T18:42:45.843 | null | null | 1146 | null |
6496 | 2 | null | 6463 | 3 | null | Self promotion: [JMP](http://www.jmp.com) (commercial desktop software) does choropleths of US states, US counties, world countries and regions of select other countries.
| null | CC BY-SA 2.5 | null | 2011-01-24T19:35:16.437 | 2011-01-24T19:35:16.437 | null | null | 1191 | null |
6498 | 1 | 6503 | null | 26 | 6698 | This may be hard to find, but I'd like to read a well-explained ARIMA example that
- uses minimal math
- extends the discussion beyond building a model into using that model to forecast specific cases
- uses graphics as well as numerical results to characterize the fit between forecasted and actual values.
| Seeking certain type of ARIMA explanation | CC BY-SA 2.5 | null | 2011-01-25T02:19:35.050 | 2017-07-14T08:51:15.390 | 2017-07-14T08:51:15.390 | 11887 | 2669 | [
"time-series",
"arima",
"intuition"
]
|
6499 | 1 | 6548 | null | 7 | 1366 | Assume I have a random vector $X = \{x_1, x_2, ..., x_N\}$, composed of i.i.d. binomially distributed values. If it would simplify the problem substantially, we can approximate them as normally distributed. Given that all other parameters are fixed, I want to know how $E[min(X)]$ (the expected value of the smallest number in the vector $X$) scales with $N$.
I don't care about a precise answer. I just want to know how it scales, i.e. linearly (obviously not), exponentially, power law, etc.
| How do extreme values scale with sample size? | CC BY-SA 2.5 | null | 2011-01-25T04:16:28.640 | 2011-01-27T20:44:19.810 | 2011-01-25T04:23:44.263 | 1347 | 1347 | [
"binomial-distribution",
"normal-distribution",
"approximation",
"extreme-value"
]
|
6500 | 2 | null | 6498 | 15 | null | I tried to do that in chapter 7 of my [1998 textbook](http://robjhyndman.com/forecasting/) with Makridakis & Wheelwright. Whether I succeeded or not I'll leave others to judge. You can read some of the chapter online via [Amazon](http://rads.stackoverflow.com/amzn/click/0471532339) (from p311). Search for "ARIMA" in the book to persuade Amazon to show you the relevant pages.
Update: I have a new book which is free and online. The [ARIMA chapter is here](http://otexts.com/fpp/8/).
| null | CC BY-SA 3.0 | null | 2011-01-25T04:27:55.573 | 2012-05-17T22:14:08.437 | 2012-05-17T22:14:08.437 | 159 | 159 | null |
6501 | 2 | null | 6353 | 1 | null | A simple way to turn categorical variables into a set of dummy variables for use in models in SPSS is using the do repeat syntax. This is the simplest to use if your categorical variables are in numeric order.
```
*making vector of dummy variables.
vector dummy(3,F1.0).
*looping through dummy variables using do repeat, in this example category would be the categorical variable to recode.
do repeat dummy = dummy1 to dummy3 /#i = 1 to 3.
compute dummy = 0.
if category = #i dummy = 1.
end repeat.
execute.
```
Otherwise you can simply run a set of if statements to make your dummy variables. My current version (16) has no native ability to specify a set of dummy variables automatically in the regression command (like you can in Stata using the [xi command](http://www.stata.com/support/faqs/data/dummy.html)), but I wouldn't be surprised if this is available in some newer version. Also take note of dmk38's point #2: this coding scheme assumes nominal categories. If your variable is ordinal, more discretion can be used.
I also agree with dmk38 that the talk about regression being better because of its ability to handle missing data in a particular manner is a completely separate issue.
| null | CC BY-SA 2.5 | null | 2011-01-25T04:29:36.703 | 2011-01-25T04:29:36.703 | null | null | 1036 | null |
6502 | 1 | 6512 | null | 18 | 6193 | In a LASSO regression scenario where
$y= X \beta + \epsilon$,
and the LASSO estimates are given by the following optimization problem
$ \min_\beta ||y - X \beta||_2^2 + \tau||\beta||_1$
Are there any distributional assumptions regarding the $\epsilon$?
In an OLS scenario, one would expect that the $\epsilon$ are independent and normally distributed.
Does it make any sense to analyze the residuals in a LASSO regression?
I know that the LASSO estimate can be obtained as the posterior mode under independent double-exponential priors for the $\beta_j$. But I haven't found any standard "assumption checking phase".
Thanks in advance (:
| LASSO assumptions | CC BY-SA 2.5 | null | 2011-01-25T04:55:04.823 | 2012-10-18T07:53:30.350 | null | null | 2902 | [
"regression",
"lasso",
"assumptions",
"residuals"
]
|
6503 | 2 | null | 6498 | 7 | null | My suggested reading for an intro to ARIMA modelling would be
[Applied Time Series Analysis for the Social Sciences](http://books.google.com/books?id=D6-CAAAAIAAJ&q=Applied+Time+Series+Analysis+for+the+Social+Sciences&dq=Applied+Time+Series+Analysis+for+the+Social+Sciences&hl=en&ei=QaLNTKD-HcH88Aabt-Al&sa=X&oi=book_result&ct=result&resnum=1&ved=0CC8Q6AEwAA) (1980) by R. McCleary, R. A. Hay, E. E. Meidinger, and D. McDowall.
This is aimed at social scientists so the mathematical demands are not too rigorous. Also for shorter treatments I would suggest two Sage Green Books (although they are entirely redundant with the McCleary book),
- Interrupted Time Series Analysis by David McDowall, Richard McCleary, Errol Meidinger, and Richard A. Hay, Jr.
- Time Series Analysis by Charles W. Ostrom
The Ostrom text is only ARMA modelling and does not discuss forecasting. I don't think they would meet your requirement for graphing forecast error either. I'm sure you could dig up more useful resources by examining questions tagged with time-series on this forum as well.
| null | CC BY-SA 2.5 | null | 2011-01-25T05:14:30.753 | 2011-01-25T05:24:27.843 | 2011-01-25T05:24:27.843 | 1036 | 1036 | null |
6504 | 2 | null | 5831 | 4 | null | You might check out a recent presentation on SSRN by Bernard Black, "Bloopers: How (Mostly) Smart People Get Causal Inference Wrong."
[http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1663404](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1663404)
I will say that I also admire David Freedman and appreciate his work. Though I was a UC Berkeley grad student while he was here, he passed away before I had a chance to take his course. You might have a look at his collected works edited by a few other Berkeley professors: "Statistical Models and Causal Inference: A Dialogue with the Social Sciences."
| null | CC BY-SA 3.0 | null | 2011-01-25T05:38:56.410 | 2016-12-20T18:33:39.523 | 2016-12-20T18:33:39.523 | 22468 | 401 | null |
6505 | 1 | 6511 | null | 34 | 183614 | Suppose I am going to do a univariate logistic regression on several independent variables, like this:
```
mod.a <- glm(x ~ a, data=z, family=binomial("logit"))
mod.b <- glm(x ~ b, data=z, family=binomial("logit"))
```
I did a model comparison (likelihood ratio test) to see if the model is better than the null model by this command
```
1-pchisq(mod.a$null.deviance-mod.a$deviance, mod.a$df.null-mod.a$df.residual)
```
Then I built another model with all variables in it
```
mod.c <- glm(x ~ a+b, data=z, family=binomial("logit"))
```
In order to see if the variable is statistically significant in the multivariate model, I used the `lrtest` command from `epicalc`
```
lrtest(mod.c,mod.a) ### see if variable b is statistically significant after adjustment of a
lrtest(mod.c,mod.b) ### see if variable a is statistically significant after adjustment of b
```
I wonder whether the `pchisq` method and the `lrtest` method are equivalent for doing the log-likelihood ratio test, as I don't know how to use `lrtest` for a univariate logistic model.
| Likelihood ratio test in R | CC BY-SA 3.0 | null | 2011-01-25T05:51:13.933 | 2019-08-25T13:42:30.850 | 2015-12-17T13:52:34.487 | 25741 | 588 | [
"r",
"logistic",
"diagnostic"
]
|
6506 | 2 | null | 6493 | 23 | null | I would recommend using a "Beta distribution of the second kind" (Beta2 for short) for a mildly informative distribution, and to use the conjugate inverse gamma distribution if you have strong prior beliefs. The reason I say this is that the conjugate prior is non-robust in the sense that, if the prior and data conflict, the prior has an unbounded influence on the posterior distribution. Such behaviour is what I would call "dogmatic", and not justified by mild prior information.
The property which determines robustness is the tail-behaviour of the prior and of the likelihood. A very good article outlining the technical details is [here](https://projecteuclid.org/download/pdf_1/euclid.ba/1340371079). For example, a likelihood can be chosen (say a t-distribution) such that as an observation $y_i \rightarrow \infty$ (i.e. becomes arbitrarily large) it is discarded from the analysis of a location parameter (much in the same way that you would intuitively do with such an observation). The rate of "discarding" depends on how heavy the tails of the distribution are.
Some slides which show an application in the hierarchical modelling context can be found [here](http://www-stat.wharton.upenn.edu/statweb/Conference/OBayes09/slides/Luis%20Pericchi.pdf) (shows the mathematical form of the Beta2 distribution), with a paper [here](http://www-stat.wharton.upenn.edu/statweb/Conference/OBayes09/AbstractPapers/Pericchi_Paper.pdf).
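For concreteness (my note, not taken from the linked material): the unscaled two-parameter form of this density, usually called the beta prime distribution, is
$$p(x \mid \alpha, \beta) = \frac{x^{\alpha-1}(1+x)^{-(\alpha+\beta)}}{B(\alpha,\beta)}, \qquad x>0;$$
the slides linked above present the form actually used in that work, which may include an additional scale parameter.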
If you are not in the hierarchical modeling context, then I would suggest comparing the posterior (or whatever results you are creating) but use the Jeffreys prior for a scale parameter, which is given by $p(\sigma)\propto\frac{1}{\sigma}$. This can be created as a limit of the Beta2 density as both its parameters converge to zero. For an approximation you could use small values. But I would try to work out the solution analytically if at all possible (and if not a complete analytical solution, get the analytical solution as far progressed as you possibly can), because you will not only save yourself some computational time, but you are also likely to understand what is happening in your model better.
A further alternative is to specify your prior information in the form of constraints (mean equal to $M$, variance equal to $V$, IQR equal to $IQR$, etc. with the values of $M,V,IQR$ specified by yourself), and then use the [maximum entropy distribution](http://en.wikipedia.org/wiki/Principle_of_maximum_entropy) (search any work by Edwin Jaynes or Larry Bretthorst for a good explanation of what Maximum Entropy is and what it is not) with respect to Jeffreys' "invariant measure" $m(\sigma)=\frac{1}{\sigma}$.
MaxEnt is the "Rolls Royce" version, while the Beta2 is more a "sedan" version. The reason for this is that the MaxEnt distribution "assumes the least" subject to the constraints you have put into it (e.g., no constraints means you just get the Jeffreys prior), whereas the Beta2 distribution may contain some "hidden" features which may or may not be desirable in your specific case (e.g., if the prior information is more reliable than the data, then Beta2 is bad).
The other nice property of the MaxEnt distribution is that if there are no unspecified constraints operating in the data-generating mechanism, then the MaxEnt distribution is overwhelmingly the most likely distribution that you will see (we're talking odds way over billions and trillions to one). Therefore, if the distribution you see is not the MaxEnt one, then there are likely additional constraints which you have not specified operating on the true process, and the observed values can provide a clue as to what those constraints might be.
| null | CC BY-SA 4.0 | null | 2011-01-25T06:05:53.627 | 2020-05-11T11:12:51.243 | 2020-05-11T11:12:51.243 | 58887 | 2392 | null |
6507 | 2 | null | 6499 | 1 | null | The distribution of the minimum of any set of N iid random variables is:
$$f_{min}(x)=Nf(x)[1-F(x)]^{N-1}$$
Where $f(x)$ is the pdf and $F(x)$ is the cdf (this is sometimes called a $Beta$-$F$ distribution, because it is a compound of a Beta distribution and an arbitrary distribution). Hence the expectation (in this particular case) is given by:
$$E[min(X)] = N\sum_{x=0}^{x=n} xf(x)[1-F(x)]^{N-1}$$
Which means that $E[min(X)]=NE(x_1[1-F(x_1)]^{N-1})$. Using the "delta method" to approximate this expectation, $E[g(X)]\approx g(E[X])$, gives
$$E[min(X)]=NE(x_1[1-F(x_1)]^{N-1})\approx N(E(x_1)[1-F(E(x_1))]^{N-1})$$
Substituting $np=E[x_1]$ then gives the approximation:
$$E[min(X)]\approx Nnp[1-F(np)]^{N-1}$$
Note that $F(np)\approx \frac{1}{2}$ (via normal approx.) to give
$$E[min(X)]\approx \frac{Nnp}{2^{N-1}}$$
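(Separately from the derivation above, a quick Monte Carlo sketch can be used to check how the expected minimum actually scales with $N$; the binomial parameters $n=100$, $p=0.5$ here are made up.)
```
emin <- function(N, reps = 20000) mean(replicate(reps, min(rbinom(N, size = 100, prob = 0.5))))
sapply(c(2, 5, 10, 20, 50, 100), emin)   # simulated E[min(X)] for increasing N
```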
| null | CC BY-SA 2.5 | null | 2011-01-25T07:00:26.590 | 2011-01-25T23:25:12.873 | 2011-01-25T23:25:12.873 | 2392 | 2392 | null |
6508 | 2 | null | 5327 | 7 | null | If we go for a simple answer, the excerpt from the [Wooldridge book](http://books.google.com/books?id=cdBPOJUP4VsC&printsec=frontcover&dq=wooldridge+econometrics&hl=fr&ei=1HM-TfauIIfqOZqBqY8L&sa=X&oi=book_result&ct=result&resnum=2&ved=0CC8Q6AEwAQ#v=onepage&q=wooldridge%20econometrics&f=false) (page 533) is very appropriate:
> ... both heteroskedasticity and nonnormality result in the Tobit estimator $\hat{\beta}$ being inconsistent for $\beta$. This inconsistency occurs because the derived density of $y$ given $x$ hinges crucially on $y^*|x\sim\mathrm{Normal}(x\beta,\sigma^2)$. This nonrobustness of the Tobit estimator shows that data censoring can be very costly: in the absence of censoring ($y=y^*$) $\beta$ could be consistently estimated under $E(u|x)=0$ [or even $E(x'u)=0$].
The notation in this excerpt comes from the Tobit model:
\begin{align}
y^{*}&=x\beta+u, \quad u|x\sim N(0,\sigma^2)\\
y^{*}&=\max(y^*,0)
\end{align}
where $y$ and $x$ are observed.
To sum up, the difference between least squares and Tobit regression is the inherent assumption of normality in the latter.
Also, I always thought that the [original article of Amemiya](http://www.jstor.org/pss/1914031) was quite nice in laying out the theoretical foundations of Tobit regression.
| null | CC BY-SA 2.5 | null | 2011-01-25T07:03:46.943 | 2011-01-25T07:03:46.943 | null | null | 2116 | null |
6509 | 2 | null | 6499 | 2 | null | The table on [this page](http://books.google.com/books?id=dVP-RTea5wcC&lpg=PP1&dq=a%20first%20course%20in%20order%20statistics&hl=fr&pg=PA69#v=onepage&q&f=false) of [this book](http://books.google.com/books?id=dVP-RTea5wcC&printsec=frontcover&dq=a+first+course+in+order+statistics&hl=fr&ei=93s-TaLkGZOWsgOvyOWkBQ&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCcQ6AEwAA#v=onepage&q&f=false) might help you. The explicit formula for the expectation of the minimum of a sample of binomial random variables is given [on the page before](http://books.google.com/books?id=dVP-RTea5wcC&lpg=PP1&dq=a%20first%20course%20in%20order%20statistics&hl=fr&pg=PA68#v=onepage&q&f=false).
| null | CC BY-SA 2.5 | null | 2011-01-25T07:31:29.650 | 2011-01-25T07:31:29.650 | null | null | 2116 | null |
6510 | 2 | null | 6481 | 2 | null | If you need to quickly use an ordinal variable as an input, I recommend using it as a factor. The order of the factor is important for the interpretation of the results, but it's rather simple to reorder the factor (using the factor command). The way the results are reported is that the first group in the factor is absorbed into the intercept, and the coefficients for the subsequent groups are the differences between each of those groups and the reference group (the first level of the factor). For example, suppose you have three races coded: White, Black, and Asian. Since R does things alphabetically by default, Asian will become your reference group if you use "+factor(race)", which means you won't see a coefficient for Asian, but you will see a coefficient for Black and for White. These coefficients will be for the difference between White and Asian and the difference between Black and Asian.
An easy way to think of this is that each level in the factor is treated as a binary variable. Each level is assigned a coefficient, but the input is binary: if your observation is Asian, then Asian=1, Black=0, and White=0, so it doesn't matter what the coefficients are for Black or White, because any coefficient multiplied by zero is still zero. Thus they're all mutually exclusive.
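A minimal sketch of this recoding (the data frame `z` with a character column `race` and a binary outcome `x` is hypothetical):
```
z$race <- factor(z$race, levels = c("Asian", "Black", "White"))  # "Asian" is the reference level
mod <- glm(x ~ race, data = z, family = binomial)
summary(mod)                               # coefficients: Black vs Asian, White vs Asian
z$race <- relevel(z$race, ref = "White")   # switch the reference group without retyping all levels
```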
This also works for ordinal things, such as high, medium and low income. Depending on what your independent variable is and how the data was collected this may be completely suitable for your needs. It is important to note that this does not work well for hierarchical variables. If your input(s) are hierarchical in nature, I highly recommend using a nested mixed random effects model (lme4 package).
| null | CC BY-SA 2.5 | null | 2011-01-25T07:48:25.753 | 2011-01-25T07:54:33.577 | 2011-01-25T07:54:33.577 | 2166 | 2166 | null |
6511 | 2 | null | 6505 | 30 | null | Basically, yes, provided you use the correct difference in log-likelihood:
```
> library(epicalc)
> model0 <- glm(case ~ induced + spontaneous, family=binomial, data=infert)
> model1 <- glm(case ~ induced, family=binomial, data=infert)
> lrtest (model0, model1)
Likelihood ratio test for MLE method
Chi-squared 1 d.f. = 36.48675 , P value = 0
> model1$deviance-model0$deviance
[1] 36.48675
```
and not the deviance for the null model which is the same in both cases. The number of df is the number of parameters that differ between the two nested models, here df=1. BTW, you can look at the source code for `lrtest()` by just typing
```
> lrtest
```
at the R prompt.
| null | CC BY-SA 2.5 | null | 2011-01-25T08:01:14.260 | 2011-01-25T08:01:14.260 | null | null | 930 | null |
6512 | 2 | null | 6502 | 16 | null | I am not an expert on LASSO, but here is my take.
First note that OLS is pretty robust to violations of independence and normality. Then, judging from Theorem 7 and the discussion above it in the article [Robust Regression and Lasso](http://arxiv.org/pdf/0811.1790.pdf) (by H. Xu, C. Caramanis and S. Mannor), I guess that in LASSO regression we are concerned not with the distribution of $\varepsilon_i$, but with the joint distribution of $(y_i,x_i)$. The theorem relies on the assumption that $(y_i,x_i)$ is a sample, so this is comparable to the usual OLS assumptions. But LASSO is less restrictive: it does not constrain $y_i$ to be generated from the linear model.
To sum up, the answer to your first question is no. There are no distributional assumptions on $\varepsilon$; all distributional assumptions are on $(y,X)$. Furthermore they are weaker, since in LASSO nothing is postulated about the conditional distribution $(y|X)$.
Having said that, the answer to the second question is then also no. Since $\varepsilon$ does not play any role, it does not make much sense to analyse the residuals the way you analyse them in OLS (normality tests, heteroscedasticity, Durbin-Watson, etc.). You should, however, analyse them in the context of how good the model fit is.
| null | CC BY-SA 3.0 | null | 2011-01-25T08:13:53.123 | 2012-10-18T07:53:30.350 | 2012-10-18T07:53:30.350 | 2116 | 2116 | null |
6513 | 1 | 6539 | null | 7 | 5984 | I want to predict intraday electricity load. My data are electricity loads for 11 months, sampled in 30-minute intervals. I also have weather data from a meteorological station (temperature, relative humidity, wind direction, wind speed, sunlight). From this, I want to predict the electricity load until the end of the day.
I can run my algorithm until 10:00 of the present day, and after that it should give predictions of the load in 30-minute intervals. So, it should give the load at 10:30, 11:00, 11:30 and so on until 24:00.
My first attempt was to create a linear model in R.
```
BP.TS <- ts(Buying.power, frequency = 48)
a <- data.frame(
Time, BP.TS, Weekday, Pressure, Temperature, RelHumidity, AvgWindSpeed, AvgWindDirection, MaxWindSpeed, MaxWindDirection, SunLightTime,
m, Buying.2dayago, AfterHolidayAndBPYesterday8, MovingAvgLast7DaysMidnightTemp
)
a <- a[(6*48+1):nrow(a),]
start = 9716
steps.ahead = 21
par(mfrow=c(5,2))
for (i in 1:10) {
train <- a[1:(start+(i-1)*48),]
test <- a[((i-1)*48+start+1):((i-1)*48+start+steps.ahead),]
summary(reg <- lm(log(BP.TS)~., data=train, na.action=NULL))
pred <- exp(predict(reg, test))
plot(test$BP.TS, type="o")
lines(pred, col=2)
cat("MAE", mean(abs(test$BP.TS - pred)), "\n")
}
```
This is not very successful. Now I am trying to model the data with ARIMA. I used auto.arima() from the [forecast package](http://cran.r-project.org/web/packages/forecast/index.html). These are the results I got:
```
> auto.arima(BP.TS)
Series: BP.TS
ARIMA(2,0,1)(1,1,2)[48]
Call: auto.arima(x = BP.TS)
Coefficients:
ar1 ar2 ma1 sar1 sma1 sma2
1.1816 -0.2627 -0.554 0.4381 -1.2415 0.3051
s.e. 0.0356 0.0286 0.033 0.0952 0.0982 0.0863
sigma^2 estimated as 256118: log likelihood = -118939.7
AIC = 237893.5 AICc = 237893.5 BIC = 237947
```
Now if I try something like:
```
reg = arima(train$BP.TS, order=c(2,0,1), xreg=cbind(
train$Time,
train$Weekday,
train$Pressure,
train$Temperature,
train$RelHumidity,
train$AvgWindSpeed,
train$AvgWindDirection,
train$MaxWindSpeed,
train$MaxWindDirection,
train$SunLightTime,
train$Buying.2dayago,
train$MovingAvgLastNDaysLoad,
train$X1, train$X2, train$X3, train$X4, train$X5, train$X6, train$X7, train$X8, train$X9,
train$X11, train$X12, train$X13, train$X14, train$X15, train$X16, train$X17, train$X18,
train$MovingAvgLast7DaysMidnightTemp
))
p <- predict(reg, n.ahead=21, newxreg=cbind(
test$Time,
test$Weekday,
test$Pressure,
test$Temperature,
test$RelHumidity,
test$AvgWindSpeed,
test$AvgWindDirection,
test$MaxWindSpeed,
test$MaxWindDirection,
test$SunLightTime,
test$Buying.2dayago,
test$MovingAvgLastNDaysLoad,
test$X1, test$X2, test$X3, test$X4, test$X5, test$X6, test$X7, test$X8, test$X9,
test$X11, test$X12, test$X13, test$X14, test$X15, test$X16, test$X17, test$X18,
test$MovingAvgLast7DaysMidnightTemp
))
plot(test$BP.TS, type="o", ylim=c(6300,8300))
par(new=T)
plot(p$pred, col=2, ylim=c(6300,8300))
cat("MAE", mean(p$se), "\n")
```
I get even worse results. Why? I ran out of ideas, so please help. If there is additional information I need to give, please ask.
| Predicting daily electricity load - fitting time series | CC BY-SA 2.5 | null | 2011-01-25T08:17:54.077 | 2015-01-14T12:33:46.350 | null | null | 2889 | [
"r",
"time-series",
"regression",
"predictive-models",
"arima"
]
|
6515 | 2 | null | 6505 | 29 | null | An alternative is the `lmtest` package, which has an `lrtest()` function which accepts a single model. Here is the example from `?lrtest` in the `lmtest` package, which is for an LM but there are methods that work with GLMs:
```
> require(lmtest)
Loading required package: lmtest
Loading required package: zoo
> ## with data from Greene (1993):
> ## load data and compute lags
> data("USDistLag")
> usdl <- na.contiguous(cbind(USDistLag, lag(USDistLag, k = -1)))
> colnames(usdl) <- c("con", "gnp", "con1", "gnp1")
> fm1 <- lm(con ~ gnp + gnp1, data = usdl)
> fm2 <- lm(con ~ gnp + con1 + gnp1, data = usdl)
> ## various equivalent specifications of the LR test
>
> ## Compare two nested models
> lrtest(fm2, fm1)
Likelihood ratio test
Model 1: con ~ gnp + con1 + gnp1
Model 2: con ~ gnp + gnp1
#Df LogLik Df Chisq Pr(>Chisq)
1 5 -56.069
2 4 -65.871 -1 19.605 9.524e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>
> ## with just one model provided, compare this model to a null one
> lrtest(fm2)
Likelihood ratio test
Model 1: con ~ gnp + con1 + gnp1
Model 2: con ~ 1
#Df LogLik Df Chisq Pr(>Chisq)
1 5 -56.069
2 2 -119.091 -3 126.04 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
| null | CC BY-SA 2.5 | null | 2011-01-25T08:49:04.917 | 2011-01-25T08:49:04.917 | null | null | 1390 | null |
6516 | 1 | null | null | 5 | 296 | I'm currently working on audio data, trying to perform recognition of given classes (for example, grinding coffee). However, I have some trouble distinguishing the null class from interesting sound segments.
Currently, I simply look at the audio intensity. As I have a limited, known number of classes I want to detect, I thought about saving some aggregate of the signals (e.g., the mean FFT) and comparing the unclassified signal to it. If it is close enough to one of the saved aggregates, do a classification; if not, just drop it.
My approach seems quite naive to me, so input/ideas are appreciated ;)
| Ideas for Segmenting Audio Data | CC BY-SA 2.5 | null | 2011-01-25T09:44:42.360 | 2011-01-25T15:11:37.393 | null | null | 2904 | [
"machine-learning"
]
|
6517 | 2 | null | 6516 | 2 | null | There are interesting ideas you have already started to dig into. You should just dig a little more.
- Using a decomposition on a meaningful basis like the FFT or a wavelet transform is interesting. This is an alternative representation, and you can think of a lot of alternative representations... wavelets are my favorite for audio data.
- Using the whole signal "directly" to try to discriminate whether it is from the null class is not a good idea; as you have already started to do, you need to build up statistical summaries (a minimal sketch follows this list). This can be one summary, but there can be a few more. You have to think about whether a given statistical summary contains interesting information for the discrimination you want to do or not. For example, I doubt that the mean of the FFT is interesting in your case. This is dimensionality reduction.
- There exist automatic ways to find meaningful statistical summaries. For example, you can automatically build up a large number of candidates (for example, linear combinations of your wavelet coefficients, or simply the coefficients themselves, or some other combination, or wavelet and FFT coefficients together, ...), measure their discrimination power somehow, and keep the ones that have the best discrimination power.
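As promised above, a minimal sketch in base R of such per-window summaries of the spectrum; the vector x (one window of samples) and the sample rate fs are hypothetical inputs:
```
window_features <- function(x, fs) {
  sp    <- Mod(fft(x))[seq_len(length(x) %/% 2)]       # magnitude spectrum, positive frequencies
  freqs <- seq(0, fs / 2, length.out = length(sp))
  c(energy   = sum(sp^2),
    centroid = sum(freqs * sp) / sum(sp),              # spectral centroid
    flatness = exp(mean(log(sp + 1e-12))) / mean(sp))  # spectral flatness
}
## apply this to consecutive windows and feed the resulting feature rows to any classifier
```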
| null | CC BY-SA 2.5 | null | 2011-01-25T10:33:14.700 | 2011-01-25T10:33:14.700 | null | null | 223 | null |
6518 | 1 | 6523 | null | 17 | 394 | Has anyone written a brief survey of the various approaches to statistics? To a first approximation you have frequentist and Bayesian statistics. But when you look closer you also have other approaches like likelihoodist and empirical Bayes. And then you have subdivisions within groups, such as subjective Bayes and objective Bayes within Bayesian statistics, etc.
A survey article would be good. It would be even better if it included a diagram.
| Statistical landscape | CC BY-SA 2.5 | null | 2011-01-25T11:40:59.857 | 2011-01-27T21:06:54.677 | 2011-01-27T21:06:54.677 | 319 | 319 | [
"bayesian",
"frequentist",
"philosophical"
]
|
6519 | 1 | null | null | 3 | 253 | We are measuring data at a high sample rate (20 kHz) and calculated a large standard error due to our system setup. Currently we are only interested in slow signals (on the order of Hz). Is it valid to use averaging (once per 20,000 samples) and thus lower our standard error by a factor on the order of 20,000?
| What does averaging do for the noise level of my signal | CC BY-SA 2.5 | null | 2011-01-25T12:11:54.427 | 2011-01-25T16:32:18.687 | null | null | 2907 | [
"standard-error",
"measurement"
]
|
6520 | 1 | 6522 | null | 5 | 439 | I have 100 geographical regions in a country. For each region the total number of houses and the number of vacant houses have been collected yearly over 20 years. I have also some other economic indicators at the country level (GDP, interest rate etc.). Now, given the forecasts for these indicators for next year, I want to forecast next year vacancy rate.
I have first used an auto-regressive mixed-effect model in `R` (package lme4) where the vacancy rate (computed as the ratio of vacant houses over the total number of houses) in a region depends on the last year's vacancy rate, the mean vacancy rate of neighboring regions, GDP and interest rate.
The problem with this model is that the vacancy rate can go outside the range [0,1], which obviously does not make sense. I need to restrict the range of the vacancy rate: a simple fix is restricting ex post.
Does anybody have experience with such models? I think that I can use some mixed multinomial logit model probably.
I would appreciate if you could provide guidance along with some `R` code.
Regards
| Modeling vacancy rate | CC BY-SA 2.5 | null | 2011-01-25T12:26:40.787 | 2011-02-07T09:08:55.787 | 2011-02-07T09:08:55.787 | 1443 | 1443 | [
"time-series",
"mixed-model",
"forecasting"
]
|
6521 | 2 | null | 6520 | 3 | null | It would seem to make sense to use a generalized linear mixed model with `family=binomial` and a logit or probit link. This would restrict your fitted values to the range (0,1). I don't know whether you can combine that with an autoregressive error structure in `lme4` though.
| null | CC BY-SA 2.5 | null | 2011-01-25T12:43:29.727 | 2011-01-25T12:43:29.727 | null | null | 449 | null |
6522 | 2 | null | 6520 | 4 | null | One of the tricks in modelling percentages is to use the [logit transformation](http://en.wikipedia.org/wiki/Logit). Then, instead of modelling the percentage $p_i$ as a linear function, you model its logit transform:
\begin{align}
y_i=\log\frac{p_i}{1-p_i}
\end{align}
In R you will need to create a new transformed variable and use it as the dependent variable in `lmer`.
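For instance, a rough sketch of that step (column names are hypothetical, and the rate must lie strictly between 0 and 1 for the logit to be finite):
```
library(lme4)
dat$logit_rate <- qlogis(dat$rate)   # log(p/(1-p)), i.e. the logit transform
fit <- lmer(logit_rate ~ lag_rate + gdp + interest + (1 | region), data = dat)
pred <- plogis(predict(fit))         # back-transform predictions to the (0, 1) scale
```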
You might also look into directly modelling the number of empty houses instead of percentages; then you will not have a problem with nonsensical values. I suggest using a log transformation for that. This of course means that you might predict more vacant houses than there are houses, but this can be used as an indicator of model inadequacies. If, on the other hand, some regions show full occupancy in the historical data, meaning that demand was larger than supply, you might want to look into censored regression models.
| null | CC BY-SA 2.5 | null | 2011-01-25T12:53:36.730 | 2011-01-25T12:53:36.730 | null | null | 2116 | null |
6523 | 2 | null | 6518 | 5 | null | Here's one I found via a Google Image search, but maybe it's too brief, and the diagram too simple: [http://labstats.net/articles/overview.html](http://labstats.net/articles/overview.html)
| null | CC BY-SA 2.5 | null | 2011-01-25T13:11:39.397 | 2011-01-25T13:11:39.397 | null | null | 449 | null |
6524 | 1 | null | null | -1 | 2200 | $f_T(t;B,C) = \frac{\exp(-t/C)-\exp(-t/B)}{C-B}$, where our mean is $C+B$ and $t>0$.
So far I have found my log-likelihood function and differentiated it as follows:
$$\frac{dl}{dB} = \sum_i \frac{t_i \exp(t_i/C)}{B^2\left(\exp(t_i/C)-\exp(t_i/B)\right)} + \frac{n}{C-B} = 0$$
I have also found a similar expression for $dl/dC$.
I have now been asked to comment on what can be found in the way of sufficient statistics for estimating these parameters, and on why there is no simple way of using maximum likelihood for estimation in this problem. I am simply unsure as to what to comment upon. Any help would be appreciated. Thanks, Rachel
| Maximum likelihood and sufficient statistics | CC BY-SA 2.5 | null | 2011-01-25T13:28:53.477 | 2018-03-15T23:58:18.083 | 2011-01-25T14:36:03.493 | 449 | 2908 | [
"self-study",
"maximum-likelihood"
]
|
6525 | 1 | null | null | 2 | 545 | I'm very new to R programming, so please excuse such a basic question.

I want to plot the above graph. The x & y values are sequences from 0 to 2560. I want to plot a curve through the points where x*y = 10^6.
What lines of R code are required?
| How can I plot this simple graph (Refer Image) in R? | CC BY-SA 2.5 | 0 | 2011-01-25T13:59:38.873 | 2011-01-25T15:18:53.783 | 2011-01-25T14:54:24.167 | 1706 | 1706 | [
"r"
]
|
6526 | 2 | null | 6518 | 7 | null | Here's a longer article. No diagram and isn't exactly a survey, more, as the abstract puts it, of 'an idiosyncratic walk through some of these issues':
M. J. Bayarri and J. O. Berger (2004). ["The Interplay of Bayesian and Frequentist Analysis"](http://www.jstor.org/stable/4144373). Statistical Science 19 (1):58-80.
(access requires a JSTOR subscription)
| null | CC BY-SA 2.5 | null | 2011-01-25T14:00:23.943 | 2011-01-25T14:00:23.943 | null | null | 449 | null |
6527 | 2 | null | 6525 | 7 | null | I think all you need is:
```
curve(1e6/x,0,2560)
```
EDIT in light of comments:
Or perhaps:
```
plot(...<your data>...)
curve(1e6/x, 1e6/2560,2560, add=TRUE)
```
| null | CC BY-SA 2.5 | null | 2011-01-25T14:08:35.037 | 2011-01-25T15:18:53.783 | 2011-01-25T15:18:53.783 | 449 | 449 | null |
6528 | 2 | null | 6525 | 0 | null | The easiest way is to compute the y value at every given x value, like:
```
df <- data.frame(x=1:2560)
df$y <- 10^6/df$x
# the latter equivalent to:
# df <- within(df,y<-10^6/x)
```
And after plot the dataframe:
```
plot(df, type="l", main=expression(f(x) == frac(10^{6},x)))
```

| null | CC BY-SA 2.5 | null | 2011-01-25T14:12:36.200 | 2011-01-25T15:10:03.390 | 2011-01-25T15:10:03.390 | 2714 | 2714 | null |
6529 | 2 | null | 6455 | 12 | null | The answer definitely depends on:
What are you actually trying to use the $Q$ test for?
The common reason is: to be more or less confident about the joint statistical significance of the null hypothesis of no autocorrelation up to lag $h$ (alternatively, assuming that you have something close to a weak white noise) and to build a parsimonious model, with as few parameters as possible.
Usually time series data has a natural seasonal pattern, so a practical rule of thumb is to set $h$ to twice the seasonal period. Another choice is the forecasting horizon, if you use the model for forecasting. Finally, if you find significant departures at later lags, try to think about corrections (could this be due to seasonal effects, or was the data not corrected for outliers?).
>
Rather than using a single value for h, suppose that I do the Ljung-Box test for all h<50, and then pick the h which gives the minimum p value.
It's a joint significance test, so if the choice of $h$ is data-driven, then why should I care about some small (occasional?) departures at any lag less than $h$, supposing of course that $h$ is much less than $n$ (the power issue of the test that you mentioned)? To find a simple yet relevant model, I suggest the information criteria described below.
>
My question concerns how to interpret the test if $p<0.05$ for some values of $h$ and not for other values.
So it will depend on how far from the present it happens. Disadvantages of far departures: more parameters to estimate, fewer degrees of freedom, worse predictive power of the model.
Try to estimate the model including the MA and/or AR parts at the lag where the departure occurs, AND additionally look at one of the information criteria (either AIC or BIC, depending on the sample size); this will give you more insight into which model is more parsimonious. Any out-of-sample prediction exercises are also welcome here.
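As an illustration on simulated data (the lag of 24 and the ARMA orders are arbitrary choices), the joint test and an information criterion might be checked along these lines:
```
set.seed(1)
y   <- arima.sim(list(ar = 0.7, ma = 0.2), n = 200)
fit <- arima(y, order = c(1, 0, 1))
# fitdf is set to the number of estimated ARMA parameters (here p + q = 2)
Box.test(residuals(fit), lag = 24, type = "Ljung-Box", fitdf = 2)
AIC(fit)  # compare against alternative specifications before adding more terms
```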
| null | CC BY-SA 2.5 | null | 2011-01-25T14:15:47.943 | 2011-01-25T14:15:47.943 | null | null | 2645 | null |
6530 | 2 | null | 6519 | 2 | null | Averaging is just a low-pass filter, so if you want to filter out the high-frequency components of your signal, it will do that.
Obviously different averaging techniques and different parameters will filter out high frequency components in a different way. See for instance [this illustration of the frequency response of a simple moving average](http://ptolemy.eecs.berkeley.edu/eecs20/week12/freqResponseRA.html).
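A small sketch (with an arbitrary window length) of that frequency response: the gain of an $N$-point moving average falls off with frequency.
```
# Magnitude response |sin(pi*f*N) / (N*sin(pi*f))| of an N-point moving average,
# where f is the normalized frequency in cycles per sample.
N <- 20
f <- seq(0.001, 0.5, length.out = 500)
H <- abs(sin(pi * f * N) / (N * sin(pi * f)))
plot(f, 20 * log10(H), type = "l",
     xlab = "normalized frequency (cycles/sample)", ylab = "gain (dB)")
```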
| null | CC BY-SA 2.5 | null | 2011-01-25T14:57:37.033 | 2011-01-25T14:57:37.033 | null | null | 300 | null |
6531 | 2 | null | 6516 | 1 | null | I think the R package [rggobi](http://www.ggobi.org/rggobi/introduction.pdf) is exactly what you're looking for. Audio recognition is even their [example problem](http://www.ggobi.org/book/2007-infovis/05-clustering.pdf)!
| null | CC BY-SA 2.5 | null | 2011-01-25T15:11:37.393 | 2011-01-25T15:11:37.393 | null | null | 2817 | null |
6532 | 2 | null | 6483 | 2 | null | I don’t know if I understood your question correctly, but I’ll give it a try.
Any hierarchical agglomerative algorithm can do the job. Remember that agglomerative algorithms proceed by “pasting” observations to a cluster and treating the cluster as a single unit.
I would suggest this:
- Substitute your 1d or 2d observations with the clusters’ centroids.
- Use a distance to assign the temporal observations to the nearest 1d/2d cluster (Euclidean distance might work); see the sketch below.
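A rough sketch of these two steps on made-up 2-d data (the clustering method, number of clusters and data are all placeholders):
```
set.seed(1)
X  <- matrix(rnorm(200), ncol = 2)               # the 1d/2d observations
cl <- cutree(hclust(dist(X)), k = 3)             # agglomerative clustering
centers <- rowsum(X, cl) / as.vector(table(cl))  # step 1: cluster centroids

newpts  <- matrix(rnorm(20), ncol = 2)           # the temporal observations
nearest <- function(p) which.min(colSums((t(centers) - p)^2))
labels  <- apply(newpts, 1, nearest)             # step 2: Euclidean assignment
```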
Hope this works =)
| null | CC BY-SA 2.5 | null | 2011-01-25T16:14:11.083 | 2011-01-25T16:19:54.587 | 2011-01-25T16:19:54.587 | 2902 | 2902 | null |
6533 | 2 | null | 6519 | 2 | null | Yes, you can reduce the noise by averaging or, in general, by reducing the sampling rate of the signal. This is a very common technique in signal processing, commonly referred to as [oversampling](http://en.wikipedia.org/wiki/Oversampling). It is widely used in sigma-delta converters. Oversampling basically means that the sampling rate of the signal is much higher than the required Nyquist rate (i.e. 2 times the bandwidth of the signal). By low-pass filtering and decimating the signal, you reduce the noise power roughly by the decimation factor, i.e. the standard deviation by its square root (assuming the noise power is spread uniformly over the band).
Having said that, I doubt that in your example you can downsample by a factor of 20,000. From your description it seems that your "signal" includes frequencies up to a few hertz. Notice that to get the best/correct results you should keep the sampling rate above the Nyquist limit. For example, if you want to preserve everything below 20 Hz, you should not decimate by a factor of more than 500 (i.e. 20000 / 500 = 40 Hz = 2 x 20 Hz). If you are not much into signal processing, to do the downsampling correctly I suggest using a ready-made package so that you know the low-pass filter is implemented correctly. In MATLAB you can use the [resample](http://www.mathworks.com/help/toolbox/signal/resample.html) function in the signal-processing toolbox. In R you may use the [decimate](http://rgm2.lab.nig.ac.jp/RGM2/R_man-2.9.0/library/signal/man/decimate.html) function from the signal package.
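As a hedged illustration, assuming a vector `x` holding the raw 20 kHz recording (note that in practice large factors are often applied in several smaller stages):
```
library(signal)
# Anti-aliasing low-pass filter followed by keeping every 500th sample:
# 20 kHz / 500 = 40 Hz, so content below 20 Hz is preserved.
x_slow <- decimate(x, 500)
```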
| null | CC BY-SA 2.5 | null | 2011-01-25T16:32:18.687 | 2011-01-25T16:32:18.687 | null | null | 2020 | null |
6534 | 1 | 6536 | null | 35 | 226650 | So, I have a data set of percentages like so:
```
100 / 10000 = 1% (0.01)
2 / 5 = 40% (0.4)
4 / 3 = 133% (1.3)
1000 / 2000 = 50% (0.5)
```
I want to find the standard deviation of the percentages, but weighted for their data volume, i.e., the first and last data points should dominate the calculation.
How do I do that? And is there a simple way to do it in Excel?
| How do I calculate a weighted standard deviation? In Excel? | CC BY-SA 2.5 | null | 2011-01-25T16:44:25.457 | 2020-05-27T18:11:44.207 | 2011-01-25T23:42:29.083 | 142 | 142 | [
"standard-deviation",
"excel",
"weighted-mean"
]
|
6535 | 1 | null | null | 4 | 354 | Is there any downside to using a repeated-measure ANOVA when you have a within-subjects 2-level factor where 1 level is represented by more trials than the other. I know in ANOVA we are comparing means, but does the the tighter standard error on the one level throw off the ANOVA? For example, if my IV is stimulus congruency and I present congruent trials 80 times while only presenting incongruent trials 20 times, can I just run the the ANOVA including all of the trials or should I run the analysis by selecting the same amount of trials for each factor level (i.e. select only 20 of the congruent trials to pair with the 20 incongruent trials)? If this is OK for a strictly within-subjects design, does it cause problems if a balanced or unbalanced between-subjects variable is added to the design?
| Does a within-subjects factor with unequally represented levels mess up a repeated-measures ANOVA? | CC BY-SA 2.5 | null | 2011-01-25T17:06:41.823 | 2011-01-25T19:52:40.277 | 2011-01-25T19:52:40.277 | 2322 | 2322 | [
"anova",
"repeated-measures",
"standard-error"
]
|
6536 | 2 | null | 6534 | 47 | null | The [formula for weighted standard deviation](http://www.itl.nist.gov/div898/software/dataplot/refman2/ch2/weightsd.pdf) is:
$$ \sqrt{ \frac{ \sum_{i=1}^N w_i (x_i - \bar{x}^*)^2 }{ \frac{(M-1)}{M} \sum_{i=1}^N w_i } },$$
where
$N$ is the number of observations.
$M$ is the number of nonzero weights.
$w_i$ are the weights
$x_i$ are the observations.
$\bar{x}^*$ is the weighted mean.
Remember that the formula for weighted mean is:
$$\bar{x}^* = \frac{\sum_{i=1}^N w_i x_i}{\sum_{i=1}^N w_i}.$$
Use the appropriate weights to get the desired result. In your case I would suggest using $\frac{\mbox{Number of cases in segment}}{\mbox{Total number of cases}}$.
To do this in Excel, you need to calculate the weighted mean first. Then calculate $(x_i - \bar{x}^*)^2$ in a separate column. The rest is straightforward.
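For reference, a small sketch of the same computation in R, taking the weights to be the raw segment sizes from the question (rescaling the weights does not change the result):
```
x <- c(100/10000, 2/5, 4/3, 1000/2000)  # the observed proportions
w <- c(10000, 5, 3, 2000)               # cases per segment, used as weights
xbar <- sum(w * x) / sum(w)             # weighted mean
M <- sum(w != 0)                        # number of nonzero weights
sqrt(sum(w * (x - xbar)^2) / (((M - 1) / M) * sum(w)))
```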
| null | CC BY-SA 3.0 | null | 2011-01-25T18:22:20.437 | 2013-07-31T11:13:57.223 | 2013-07-31T11:13:57.223 | 28661 | 2902 | null |
6537 | 2 | null | 6535 | 2 | null | I think there are ways to do unbalanced repeated measures ANOVA; maybe look at the books "Variance Components" by Searle, Casella, and McCulloch, and "Components of Variance" by Cox and Solomon. It also seems to be in some books about multilevel / hierarchical / mixed effect models (e.g. Raudenbush and Bryk), where it is treated as a special case of a hierarchical / multilevel / mixed effect model.
I'm not sure what programs support unbalanced repeated measures anova (["Unbalanced designs create special difficulties for the analysis of variance."](http://www.ats.ucla.edu/stat/sas/library/repeated_ut.htm)), but most statistical programs support mixed effect models. Is there any reason that would be insufficient?
| null | CC BY-SA 2.5 | null | 2011-01-25T18:58:46.707 | 2011-01-25T18:58:46.707 | null | null | 2739 | null |
6538 | 1 | null | null | 86 | 13373 | I know people love to close duplicates so I am not asking for a reference to start learning statistics (as [here](https://stats.stackexchange.com/questions/414/introduction-to-statistics-for-mathematicians)).
I have a doctorate in mathematics but never learned statistics. What is the shortest route to knowledge equivalent to a top-notch BS statistics degree, and how do I measure when I have achieved that?
If a list of books would suffice (assuming I do the exercises, let's say), that's terrific. Yes, I expect working out problems to be an implicit part of learning it, but I want to fast-track as much as realistically possible. I am not looking for an insanely rigorous treatment unless that is part of what statistics majors generally learn.
| Mathematician wants the equivalent knowledge to a quality stats degree | CC BY-SA 3.0 | null | 2011-01-25T19:03:49.097 | 2018-09-07T10:47:23.517 | 2017-11-23T20:25:40.270 | 28666 | 2912 | [
"references",
"careers"
]
|
6539 | 2 | null | 6513 | 12 | null | I've played around with electrical demand models, and I can tell you that it's a good idea to start "zoomed out". Each region has its own characteristics, but the general idea is the same.
Electric demand is a function of many variables, starting with the slowest-moving terms.
- General Economic Activity is the slowest moving term (typically the 3 to 8 year time frame). This term is typically related to Gross Domestic Product for the area. Electrical Demand may generally grow faster than GDP, but the electrical demand "ups" during good economic times, and demand "downs" during recessions provide an obvious link to GDP. See the blue line in the first graph below.
- Next is the Seasonal Term (annual time frame). For instance, in the U.S., the Summer Peak shows up in August, the Winter Peak shows up in January, the Spring Trough shows up in April and the Fall Trough shows up in November. See the red line in the top two graphs below. In the second graph, I have shown the Seasonal Term as constant for each month, but you can easily improve that with a linear or non-linear relationship for each month (monthly time frame).
- You are now down to the daily time frame. The bottom graph shows the Electrical Demand for Texas for one 24 hour period (12/22/2010). The Day-time Peak was at 7:00PM (19:00) and the Night-time Trough was at 4:00AM (04:00). This time frame is where you want to consider holidays, weekends, weather, etc. However, keep in mind that those other variables (in 1 and 2 above) are also affecting your results.
So, from your description, you have data for 11 months. Look at the first graph below and assume that you have data for 11 months. Is that enough to get an idea of the Seasonal Term for the year? I would use a minimum of 10 years of monthly data to get a feel for the Seasonal Term. The idea here is to tweak the structure of your daily model differently during months of "rapid seasonal change" versus months of "slow seasonal change".
Next, I would play around with the size and structure of the "data window" that you will use to estimate your daily model. For example, will you get a better daily model if you include daily fall and winter data when estimating a summer daily model? Or, is it better to use 10 rescaled "summer data windows", one for each year in 10 years of data, when estimating a summer daily model?
Once you get all of the deterministic terms working well, then, and only then would I go after the ARIMA terms.



| null | CC BY-SA 2.5 | null | 2011-01-25T19:30:59.893 | 2011-01-25T19:30:59.893 | null | null | 2775 | null |
6540 | 1 | 6555 | null | 1 | 2534 | Let's say I have this kind of data:
```
1 01/1/1980
2 01/2/1999
3 03/12/2000
-1 03/6/2005
-5 07/07/2007
```
How can I calculate the Present Value (PV) for them with respect to the current date, let's say with a 5% interest rate, in a spreadsheet? `Current date` means today.
[Update] I am probably misunderstanding the terms `PV` and `FV`. I am trying to find some ready-made function similar to [this one](http://docs.google.com/support/bin/answer.py?hl=en&answer=155151). Whatever method you use, it must work with the above data. Please, stop spam.
| How do I calculate Present Value in Google Spreadsheet? | CC BY-SA 2.5 | null | 2011-01-25T19:34:09.613 | 2020-07-03T20:20:38.167 | 2020-07-03T20:20:38.167 | 35989 | 2914 | [
"econometrics"
]
|
6541 | 2 | null | 6538 | 3 | null | I come from a computer science background focusing on machine learning.
However, I really started to understand (and, more importantly, to apply) statistics after taking a Pattern Recognition course using Bishop's book
[https://www.microsoft.com/en-us/research/people/cmbishop/#!prml-book](https://www.microsoft.com/en-us/research/people/cmbishop/#!prml-book)
here are some course slides from MIT:
[http://www.ai.mit.edu/courses/6.867-f03/lectures.html](http://www.ai.mit.edu/courses/6.867-f03/lectures.html)
This will just give you the background (+ some matlab code) to use statistics for real work problems and is definitely more on the applied side.
Yet, it highly depends on what you want to do with your knowledge.
To get a measure of how good you are, you might want to browse the open courseware of some university for advanced statistics courses, to check whether you know the topics covered.
Just my 5 cents.
| null | CC BY-SA 4.0 | null | 2011-01-25T19:38:57.673 | 2018-09-07T10:47:23.517 | 2018-09-07T10:47:23.517 | 131198 | 2904 | null |
6542 | 2 | null | 6538 | 3 | null | E.T. Jaynes "Probability Theory: The Logic of Science: Principles and Elementary Applications Vol 1", Cambridge University Press, 2003 is pretty much a must-read for the Bayesian side of statistics, at about the right level. I'm looking forward to recommendations for the frequentist side of things (I have loads of monographs, but very few good general texts).
| null | CC BY-SA 3.0 | null | 2011-01-25T19:44:38.850 | 2013-10-09T21:32:11.783 | 2013-10-09T21:32:11.783 | 17230 | 887 | null |
6543 | 2 | null | 6538 | 12 | null | I can't speak for the more rigorous schools, but I am doing a B.S. in General Statistics (the most rigorous at my school) at University of California, Davis, and there is a fairly heavy amount of reliance on rigor and derivation. A doctorate in math will be helpful, insomuch as you will have a very strong background in real analysis and linear algebra--useful skills in statistics. My statistics program has about 50% of the coursework going to support the fundamentals (linear algebra, real analysis, calculus, probability, estimation), and the other 50% goes towards specialized topics that rely on the fundamentals (nonparametrics, computation, ANOVA/Regression, time series, Bayesian analysis).
Once you get the fundamentals, jumping to the specifics is usually not too difficult. Most of the individuals in my classes struggle with the proofs and real analysis, and easily grasp the statistical concepts, so coming from a math background will most definitely help. That being said, the following two texts have pretty good coverage of many topics covered in statistics. Both were recommended in the link you provided, by the way, so I wouldn't say your question and the one you linked are necessarily uncorrelated.
[Mathematical Methods of Statistics](http://rads.stackoverflow.com/amzn/click/0691080046), by Harald Cramer
[All of Statistics: A Concise Course in Statistical Inference](http://rads.stackoverflow.com/amzn/click/1441923225), by Larry Wasserman
| null | CC BY-SA 2.5 | null | 2011-01-25T19:48:44.050 | 2011-01-25T22:18:51.977 | 2011-01-25T22:18:51.977 | 1118 | 1118 | null |
6544 | 1 | 6784 | null | 7 | 8013 | I have a dataset that I want to fit a simple linear model to, but I want to include the lag of the dependent variable as one of the regressors. Then I want to predict future values of this time series using forecasts I already have for the independent variables. The catch is: how do I incorporate the lag into my forecast?
Here's an example:
```
#A function to calculate lags
lagmatrix <- function(x,max.lag){embed(c(rep(NA,max.lag),x),max.lag)}
lag <- function(x,lag) {
out<-lagmatrix(x,lag+1)[,lag]
return(out[1:length(out)-1])
}
y<-arima.sim(model=list(ar=c(.9)),n=1000) #Create AR(1) dependant variable
A<-rnorm(1000) #Create independant variables
B<-rnorm(1000)
C<-rnorm(1000)
Error<-rnorm(1000)
y<-y+.5*A+.2*B-.3*C+.1*Error #Add relationship to independant variables
#Fit linear model
lag1<-lag(y,1)
model<-lm(y~A+B+C+lag1)
summary(model)
#Forecast linear model
A<-rnorm(50) #Assume we know 50 future values of A, B, C
B<-rnorm(50)
C<-rnorm(50)
lag1<- #################This is where I'm stuck##################
newdata<-as.data.frame(cbind(A,B,C,lag1))
predict.lm(model,newdata=newdata)
```
| Predicting from a simple linear model with lags in R | CC BY-SA 2.5 | null | 2011-01-25T19:56:10.017 | 2017-09-10T20:59:55.757 | 2011-01-26T09:11:58.750 | null | 2817 | [
"r",
"time-series",
"regression",
"predictive-models"
]
|
6545 | 2 | null | 725 | 1 | null | This is a very late response, so this may no longer be of interest, but I am working on putting together an R library with various hyperspectral image processing capabilities. At the moment my focus has been on endmember detection and unmixing. If this is still something which is of interest please let me know. My hope is to publish a beta version to CRAN or R-Forge in the near future but I would be happy to send out the code itself.
Best,
Dan
| null | CC BY-SA 2.5 | null | 2011-01-25T20:40:17.520 | 2011-01-25T20:40:17.520 | null | null | null | null |
6548 | 2 | null | 6499 | 10 | null | Assume that the random variables $x_k$ are i.i.d., nonnegative, integer valued, bounded by $n$, and such that $P(x_k=0)$ and $P(x_k=1)$ are both positive. For every $N\ge1$, let
$$
X_N= \min\{x_1,\ldots,x_N\}.
$$
Then, when $N\to+\infty$,
$$
E(X_N)=c^N(1+o(1)),
$$
where $c<1$ is independent of $N$ and given by
$$
c=P(x_k\ge1).
$$
Hence $E(X_N)$ is exponentially small. When each $x_k$ is Binomial $(n,p)$ with $n\ge1$ and $p$ in $(0,1)$ fixed, the result holds with $c=1-(1-p)^n$.
---
To see this, note that $[X_N\ge i]=[x_1\ge i]\cap\cdots\cap[x_N\ge i]$ for every $i$ and that, since $X_N$ is nonnegative and integer valued, $E(X_N)$ is the sum over $i\ge1$ of $P(X_N\ge i)$, hence
$$
E(X_N)=\sum_{i\ge 1}P(x_1\ge i)^N.
$$
For every $i\ge n+1$, $P(x_1\ge i)=0$. For every $2\le i\le n$, $0\le P(x_1\ge i)\le P(x_1\ge 2)$. Hence
$$
c^N\le E(X_N)\le c^N+(n-1)d^N,
$$
with
$$
c=P(x_1\ge1),\quad d=P(x_1\ge 2).
$$
Because $P(x_k=1)$ is positive, one knows that $d<c$, hence $E(X_N)\sim c^N$ when $N\to+\infty$.
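A quick simulation sketch (the values of $n$, $p$ and $N$ are arbitrary) illustrating the result in the Binomial case:
```
# Compare the simulated E(X_N) with the leading term c^N, where c = 1 - (1-p)^n.
set.seed(1)
n <- 5; p <- 0.3; N <- 10
c_val <- 1 - (1 - p)^n
sims  <- replicate(1e5, min(rbinom(N, n, p)))
mean(sims)  # Monte Carlo estimate of E(X_N)
c_val^N     # leading-order approximation
```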
| null | CC BY-SA 2.5 | null | 2011-01-25T21:10:34.973 | 2011-01-27T20:44:19.810 | 2011-01-27T20:44:19.810 | 2592 | 2592 | null |
6549 | 1 | 6551 | null | 2 | 1461 | Suppose we are given a univariate sample $\vec X = X_1, ..., X_n$ with standard deviation 1 and a strictly monotone transformation $t: R \to R$ with the property that the standard deviation of $t(\vec X)$ is also 1 (where $t(\vec X)$ means $t$ applied to each $X_i$). If one now fits a normal distribution to $\vec X$ and to $t(\vec X)$, one observes the following fact: the likelihood at the respective MLEs for $\vec X$ and $t(\vec X)$ is the same. The reason for that is that the standard deviation is the MLE for the $\sigma$ parameter of the normal distribution, so only the mean changes, which does not change the likelihood.
In some computations with multivariate data I observed the same fact, using transformations such that $\det(cov(t(\vec X)))=1$. But I did not see a straightforward proof for that! Note that in the multivariate case, the MLEs for the variance parameters do change, even if the above determinant is always 1.
I am sure I am not the first to notice, but my (somewhat limited) literature on the multivariate normal distribution does not give me a clue how to prove the above statement, i.e. that the multivariate normal likelihood depends only on the determinant of $cov(t(\vec X))$, regardless of the choice of $t$.
A proof boils down to looking at the terms of the form $x'\Sigma x$ for $\det \Sigma = 1$.
Do you have more clue?
Thanks, Philipp
| Likelihood at MLE and transformations, the multivariate normal case | CC BY-SA 2.5 | null | 2011-01-25T22:33:24.837 | 2011-01-27T03:32:30.250 | null | null | 2916 | [
"normal-distribution",
"multivariate-analysis",
"maximum-likelihood"
]
|
6550 | 1 | 6553 | null | 3 | 1908 | I have about 500 text documents stored in a directory. I want to randomly select one to read in the contents (this process will be repeated several times). The names of the files are not sequential. By this I mean they are not named something like 1.txt, 2.txt, 3.txt,... The names are basically random characters.
How can I do this?
Thanks
| Reading random text file in R | CC BY-SA 2.5 | null | 2011-01-25T23:01:12.220 | 2011-01-26T09:11:37.347 | 2011-01-26T09:11:37.347 | null | 2310 | [
"r",
"dataset"
]
|
6551 | 2 | null | 6549 | 3 | null | This is due to the invariance property of MLEs. It is an elementary exercise in calculus to show that if $E_{MLE}$ maximises the function $F(E)$ then any monotonic function of $E$, say $g=g(E)$ will also be maximised at $g_{MLE}=g(E_{MLE})$.
$F(E)=F[g^{-1}(E)]$ and therefore the maximum occurs at $F_{MAX}=F(E_{MLE})=F[g^{-1}(E_{MLE})]$, which means that $g_{MLE}=g(E_{MLE})$.
If $g(.)$ were not monotone (1-to-1 so that $g^{-1}(.)$ is well defined), then the above argument would not hold.
EDIT: I think this is why the MLE's change for the determinant transformation, perhaps the $t(.)$ function is not a 1-to-1 function. It is all about the functional properties of your $t(.)$ function (whether it is 1-to-1 or not).
EDIT#2: After some discussion, the transformation $t(.)$ may be 1-to-1, but if the inverse transformation $t^{-1}(.)$ doesn't have a Jacobian of 1, then the value of the likelihood at its maximum will be different. Fixing the variance to be 1 may one way of achieving this property.
The class of transformations $t(X)$ has an approximate (exact for normals?) covariance of $$cov[t(X)]\approx (\frac{\partial t(x)}{\partial t})_{x=X_{MLE}}^T cov[X](\frac{\partial t(x)}{\partial t})_{x=X_{MLE}}$$.
Fixing the determinant to 1 fixes (approximately)
$$|\frac{\partial t(x)}{\partial t}|^{-2}_{x=X_{MLE}}\approx |cov[X]|$$
Now if $t(.)$ is invertible, the jacobian of the inverse transformation is just the inverse of the jacobian. SO we have:
$$|\frac{\partial t(x)}{\partial t}|^{-1}_{x=X_{MLE}}=|\frac{\partial t^{-1}(x)}{\partial t}|_{x=X_{MLE}}\approx |cov[X]|^{\frac{1}{2}}$$
Now in the normal distribution, the likelihood evaluated at the MLE, the exponential part just becomes 1. So the maximised likelihood for X is given by
$$f_X(x_{MLE}) = \frac{1}{(2\pi)^{\frac{N}{2}} |cov[X]|^{\frac{1}{2}}}$$
using the Jacobian rule for transforming to $T=t(X)$ gives
$$f_T(t_{MLE}) = \frac{|\frac{\partial t^{-1}(x)}{\partial t}|_{x=X_{MLE}}}{(2\pi)^{\frac{N}{2}} |cov(X)|} \approx\frac{|cov[X]|^{\frac{1}{2}}}{(2\pi)^{\frac{N}{2}} |cov[X]|^{\frac{1}{2}}} = (2\pi)^{-\frac{N}{2}}$$
Which is a constant, as your analysis shows (you could even check my calculations, by comparing the likelihood you get with $(2\pi)^{-\frac{N}{2}}$, and see if it is right). I would imagine that either the approximation is exact for normals, or exact for certain transformations $t(.)$, or becomes exact as $N\rightarrow\infty$, or the approximation error is in decimal places which you checked (or can't calculate), OR the computer you use to get the MLEs of $T$ uses the same approximation technique as I have (called the delta method).
| null | CC BY-SA 2.5 | null | 2011-01-25T23:39:15.153 | 2011-01-27T03:32:30.250 | 2011-01-27T03:32:30.250 | 2392 | 2392 | null |
6552 | 2 | null | 6550 | 3 | null | Do a list.files and then access each file by its index.
```
lisfil <- list.files("E:\\Data") #Replace E:\\Data with your directory name
lenlisfil <- length(lisfil)
for (i in 1:lenlisfil) {
#Do something to the current file such as:
print(lisfil[i])
}
```
| null | CC BY-SA 2.5 | null | 2011-01-25T23:39:45.370 | 2011-01-25T23:39:45.370 | null | null | 2775 | null |
6553 | 2 | null | 6550 | 6 | null | First, select one file randomly from the current directory:
```
file <- sample(list.files(), 1)
```
Then do whatever you want with it, e.g. read all lines:
```
readLines(file)
```
If you want to repeat this several times, you could do it in one step:
```
readLines(sample(list.files(), 1))
```
| null | CC BY-SA 2.5 | null | 2011-01-25T23:48:34.877 | 2011-01-25T23:48:34.877 | null | null | 2714 | null |
6554 | 1 | 6573 | null | 5 | 206 | If I have a parameter with a prior $X\sim N(\mu,\sigma)$ and three data points, $x_1,x_2,x_3$, how can I calculate the posterior in R or BUGS?
Here is an attempt at doing this in BUGS, that I have gotten stuck on:
```
data:
list(x=1.4, 2.1, 1.1, mu = 10, tau=1/10)
model {
for (i in 1:3) {
x[i] ~ dnorm(mu, tau)
}
}
```
Any help appreciated
| Getting started on BUGS, need help implementing a simple model | CC BY-SA 2.5 | null | 2011-01-25T23:48:47.747 | 2011-01-26T15:45:58.250 | null | null | 2750 | [
"r",
"bayesian",
"bugs"
]
|
6555 | 2 | null | 6540 | 2 | null | This sounds more like you want the Future Value. The Present Value is usually defined by valuing cash flows which occur in the future, not in the past. But it amounts to the same equations.
Anyways, you multiply the values in each period by $(1+r)^{t_0-t}$ and just add the results for your PV. $r$ is your interest rate (with annual compounding), $t$ is the actual time of the cash flow (measured in years), and $t_0$ is where the Present is (also measured in years).
So to update from 01/01/1980 to 01/01/2011 is just $1\times (1+0.05)^{2011-1980}$.
Now if you want to get pedantic and include the days in your calculations, you just need to count the "left over" days and divide them by $365$ (or $365.24$ if you've gone insane with details), and put this as a decimal or fraction addition to $t$.
So for updating from 03/06/2005 to 01/01/2011 we have $31+28+31+30+31+3=154$ days between 01/01/2005 and 03/06/2005 so this gives an update of $-1\times (1+0.05)^{2011-(2005+\frac{154}{365})}$
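A small R sketch of the same arithmetic (dates read as day/month/year as above, a 365-day year, and a 5% rate):
```
values <- c(1, 2, 3, -1, -5)
dates  <- as.Date(c("1980-01-01", "1999-02-01", "2000-12-03",
                    "2005-06-03", "2007-07-07"))
t0 <- as.Date("2011-01-01")            # "the present"
r  <- 0.05
years <- as.numeric(t0 - dates) / 365  # time from each cash flow to t0, in years
sum(values * (1 + r)^years)            # all cash flows brought forward to t0
```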
| null | CC BY-SA 2.5 | null | 2011-01-26T00:05:52.667 | 2011-01-26T00:05:52.667 | null | null | 2392 | null |
6556 | 2 | null | 6524 | 1 | null | The "failsafe" way to find a sufficient statistic in just about any problem: Calculate the Bayesian posterior distribution (up to a proportionality constant) using a uniform prior. If a sufficient statistic exists, this method will tell you what it is. So basically, strip all multiplicative factors (those factors which do not involve your parameters, but may involve functions of the data) from your likelihood function. I would suggest that the "sum function" stuff is your sufficient statistic (although the notation was not clear at the time I answered this question).
NOTE: the use of a uniform prior may make the posterior improper but it will show you what the functions are sufficient for your problem.
| null | CC BY-SA 2.5 | null | 2011-01-26T00:41:37.533 | 2011-01-26T00:46:55.203 | 2011-01-26T00:46:55.203 | 2392 | 2392 | null |
6557 | 1 | 6615 | null | 11 | 18711 | Let's say I have data:
```
x1 <- rnorm(100,2,10)
x2 <- rnorm(100,2,10)
y <- x1+x2+x1*x2+rnorm(100,1,2)
dat <- data.frame(y=y,x1=x1,x2=x2)
res <- lm(y~x1*x2,data=dat)
summary(res)
```
I want to plot the continuous by continuous interaction such that x1 is on the X axis and x2 is represented by 3 lines, one which represents x2 at a Z-score of 0, one at Z-score of +1, and another at a Z-score of -1, with each line a separate color and labelled. How can I do this using ggplot2?
For example, it might look something like this (though of course with different colored lines rather than different line types):

| How can one plot continuous by continuous interactions in ggplot2? | CC BY-SA 2.5 | null | 2011-01-26T01:16:56.443 | 2013-05-16T21:34:18.333 | 2011-01-26T01:48:30.457 | 196 | 196 | [
"r",
"regression",
"ggplot2",
"interaction"
]
|
6558 | 2 | null | 6513 | 2 | null | @Grega:
I ran out of room in the comment area, so I started another answer.
From your comment, that's the problem. Each system/grid has its own "signature". It's a combination of system dynamics, age, weather, local economics, cultures, traditions, etc. In Japan, it's worse. Some sections of the country run at 50 Hz and other sections run at 60 Hz. These grids connect at high voltage DC stations, which means some areas behave like electrical "islands" (totally different behavior than their neighbors just a few miles away). If you're zeroed-in on one factory, the predictability goes down even more. Fewer users means higher uncertainty.
No matter how you do this, it's going to be messy.
I would filter out the daily/weekday/weekend/holiday component to get a Seasonal Term. How? A 31 day CENTERED moving average? 51 day CMA? XX day CMA? You'll have to experiment with that, but I would make it a variable so you can tweak it later. Whatever filter you end up with, keep in mind that it stops short of either end of your data (a 31 day centered moving average will start on the 16th day of your raw data and end on the 16th day from the end of your raw data).
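A minimal sketch of such a centered moving-average filter in R (the series name `demand` and the window length are placeholders to experiment with):
```
k <- 31                                 # window length, in days
seasonal_term <- stats::filter(demand, rep(1 / k, k), sides = 2)  # centered MA
residual <- demand - seasonal_term      # NA for the first and last (k - 1)/2 days
```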
Next, the best you can do with 11 months of data is to "make up" month 12 (draw a line from your filtered series at its end to its beginning). Next, subtract your Seasonal Term (the filtered data) from your raw data to get a residual. Fit the residual to your weather data, allowing for factors for day-of-the-week and holidays.
Some factors that you may need to add are:
1) A "production run" factor. Whatever they make at this factory, if they switch from one product/category to another, the power demand required to make one product may be different than what is required for another product.
2) A "change over" factor. This is when they shift from one product/category to another. Sometimes it takes days of preparation for the switch.
3) A "work shift" factor. If they have three shifts per day, power demand for the late shift will probably be significantly different than the day shifts.
Good luck. As you probably know, this kind of a problem can get real frustrating.
====== Edit to answer Grega's first comment (01/25/2011) ====================
@Grega: Answering your first comment, I'm afraid it doesn't. The idea behind a model like this would be to have multiple "similar instances" of your 32 future points, so you can fit those points, and then predict new points. You don't have typical "similar instances" because yesterday was not the same day-of-the-week as today. You have to use last week's same day-of-the-week and the previous week's same day-of-the-week, etc. By the time you get several "similar instances" (say 20), you're typically more than 20 weeks in the past (a holiday may screw up one or more of your weeks). At that point, you're in a whole different season of the year. So, in order to use those days in another season, you need to remove the Seasonal Term from the raw data.
It's a sloppy situation, but it's the best you can do with 11 months of data.
| null | CC BY-SA 2.5 | null | 2011-01-26T01:18:49.687 | 2011-01-26T04:19:31.357 | 2011-01-26T04:19:31.357 | 2775 | 2775 | null |
6559 | 2 | null | 6544 | 0 | null | Ok, I answered my own problem, but my solution could use more testing and probably isn't perfect. Suggestions would be appreciated!
First of all I used a modified version of the parseCall function available [here](http://rguha.net/code/R/parseCall.R):
```
parseCall <- function(obj) {
if (class(obj) != 'call') {
stop("Must supply a 'call' object")
}
srep <- deparse(obj)
if (length(srep) >1) srep <- paste(srep,sep='',collapse='')
fname <- unlist(strsplit(srep,"\\("))[1]
func <- unlist(strsplit(srep, paste(fname,"\\(",sep='')))[2]
func <- unlist(strsplit(func,""))
func <- paste(func[-length(func)],sep='',collapse="")
func <- unlist(strsplit(func,","))
vals <- list()
nms <- c()
cnt <- 1
for (args in func) {
arg <- unlist(strsplit(args,"="))[1]
val <- unlist(strsplit(args,"="))[2]
arg <- gsub(" ", "", arg)
val <- gsub(" ", "", val)
vals[[cnt]] <- val
nms[cnt] <- arg
cnt <- cnt + 1
}
names(vals) <- nms
return(vals)
}
```
This function returns the dependent variable of a linear regression
```
getDepVar <- function(call) {
call<-parseCall(call)
formula<-call$formula
out<-unlist(strsplit(formula,"~")[1])
return(out[1])
}
```
And finally, this function does the magic:
```
ipredict <-function(model,newdata) {
Y<-getDepVar(model$call)
P<-predict(model,newdata=newdata)
for (i in seq(1,dim(newdata)[1])) {
if (is.na(newdata[i,Y])) {
newdata[i,Y]<-predict(model,newdata=newdata[1:i,])[i]
P[i]<-newdata[i,Y]
}
}
return(P)
}
```
Example usage (based on my question):
```
#A function to calculate lags
lagmatrix <- function(x,max.lag){embed(c(rep(NA,max.lag),x),max.lag)}
lag <- function(x,lag) {
out<-lagmatrix(x,lag+1)[,lag]
return(out[1:length(out)-1])
}
y<-arima.sim(model=list(ar=c(.9)),n=1000) #Create AR(1) dependant variable
A<-rnorm(1000) #Create independant variables
B<-rnorm(1000)
C<-rnorm(1000)
Error<-rnorm(1000)
y<-y+.5*A+.2*B-.3*C+.1*Error #Add relationship to independant variables
#Fit linear model
model<-lm(y~A+B+C+I(lag(y,1)))
summary(model)
#Forecast linear model
A<-c(A,rnorm(50)) #Assume we know 50 future values of A, B, C
B<-c(B,rnorm(50))
C<-c(C,rnorm(50))
y<-c(y,rep(NA,50))
newdata<-as.data.frame(cbind(y,A,B,C))
ipredict(model,newdata=newdata)
```
| null | CC BY-SA 2.5 | null | 2011-01-26T02:15:57.690 | 2011-01-26T02:15:57.690 | null | null | 2817 | null |
6560 | 1 | 6564 | null | 8 | 203 | I'm looking for some help devising a hypothesis test for the following situation.
- I have a radioactive source that spits out a particle every so often.
- Also, I have two particle detectors: a red particle detector, and a green particle detector. Whenever the red particle detector detects a particle, it flashes a red light; let $R$ denote the event that the particle was detected by the red detector, and $r$ the complement event that the particle was not detected by the red detector. Whenever the green particle detector detects a particle, it flashes a green light; let $G$ be the event that the green detector detects the particle, and $g$ that it does not. Thus, each emitted particle falls into one of four categories:
detected by both detectors ($RG$),
detected by the red detector but not the green detector ($Rg$),
detected by the green detector but not the red detector ($rG$), or
not detected by either detector ($rg$).
- Each time a particle is emitted, the red detector has some probability of detecting the particle, and the green detector has some probability of detecting the particle. (They will never trigger a false detection when no particle is present.) I know that each particle is handled identically and independently of all other particles, but I don't know whether the two detectors are independent of each other. It's possible that they are independent (i.e., $\Pr[RG] = \Pr[R] \Pr[G]$), or that they are correlated (i.e., $\Pr[RG] \ne \Pr[R] \Pr[G]$); I don't know which is the case, a priori.
- I keep a count of the number of $RG$-detections (i.e., number of times when both detectors detected something), number of $Rg$-detections (i.e., number of times when the red detector detected something, but not the green one), and the number of $rG$-detections. Unfortunately, I have no way to measure the number of $rg$-situations, since those particles aren't detected by either detector. At the end of the experiment, I have three non-negative integers, representing these counts.
I want to test the hypothesis $H$ that the two detectors are independent, i.e., that event $R$ is independent of event $G$. Can anyone help suggest a way to compute the $p$-value of this hypothesis, given 3 numbers from such an experiment?
I'd be perfectly satisfied with a computer algorithm/procedure to compute the $p$-value. I don't need a simple formula; something that could be computed by a computer would be sufficient.
---
Here's another way to view this. We could form a 2x2 contingency table, such as this one:
```
        G    g
  R    17   22
  r    12    ?
```
recording that we saw 17 $RG$-events, 22 $Rg$-events, and so on. Unfortunately, the lower-right cell is empty, since we don't know how many $rg$-particles were emitted. If we had counts for all four cells, presumably we could use Fisher's exact test, but we don't. Also, we aren't given $\Pr[R]$ or $\Pr[G]$ (I guess they are nuisance parameters) or the total number of particles emitted.
Any suggestions?
| Test for independence, when I'm missing one bin of a 2x2 contingency | CC BY-SA 2.5 | null | 2011-01-26T02:28:03.963 | 2011-01-27T23:42:20.253 | null | null | 2921 | [
"hypothesis-testing"
]
|
6561 | 2 | null | 6557 | 5 | null | Computing the estimates for y with Z-score of 0 (y0 column), -1 (y1m column) and 1 (y1p column):
```
dat$y0 <- res$coefficients[[1]] + res$coefficients[[2]]*dat$x1 + res$coefficients[[3]]*0 + res$coefficients[[4]]*dat$x1*0
dat$y1m <- res$coefficients[[1]] + res$coefficients[[2]]*dat$x1 + res$coefficients[[3]]*-1 + res$coefficients[[4]]*dat$x1*-1
dat$y1p <- res$coefficients[[1]] + res$coefficients[[2]]*dat$x1 + res$coefficients[[3]]*1 + res$coefficients[[4]]*dat$x1*1
```
Plotting the lines with base `plot()`:
```
plot(dat$x1, dat$y0, type="l", xlab="x1", ylab="Estimates")
lines(dat$x1, dat$y1m, col="red")
lines(dat$x1, dat$y1p, col="blue")
```

To use ggplot, you may call [geom_line](http://had.co.nz/ggplot2/geom_abline.html):
```
ggplot(dat, aes(x1, y0)) + geom_line() +
geom_line(aes(x1, y1m), color="red") +
geom_line(aes(x1, y1p), color="blue") +
theme_bw() + opts(title="") + xlab("x1") + ylab("Estimates")
```

| null | CC BY-SA 2.5 | null | 2011-01-26T02:28:45.770 | 2011-01-26T02:28:45.770 | null | null | 2714 | null |
6562 | 1 | 6566 | null | 4 | 1334 | I have read an [article](http://nlp.stanford.edu/~manning/courses/ling289/logistic.pdf) from Christopher Manning, and saw an interesting code for collapsing categorical variable in an logistic regression model:
```
glm(ced.del ~ cat + follows + I(class == 1), family=binomial("logit"))
```
Does `I(class == 1)` mean that the `class` variable has been recoded into whether it is `1` or not?
After that, I am thinking of modifying it a bit:
```
glm(ced.del ~ cat + follows + I(class %in% c(1,2)), family=binomial("logit"))
```
I am planning to merge the variable `class` from `c(1,2,3,4)` into two groups: one group contains `c(1,2)`, the other contains `c(3,4)`. Can the code above give me the result I want?
| Collapsing categorical data easily for regression in R | CC BY-SA 4.0 | null | 2011-01-26T02:41:40.690 | 2019-11-19T00:12:21.507 | 2019-11-19T00:12:21.507 | 11887 | 588 | [
"r",
"logistic",
"categorical-encoding"
]
|
6563 | 1 | null | null | 18 | 3014 | There is one variable in my data that has 80% missing data. The data is missing because of non-existence (i.e. how much bank loan the company owes). I came across an article saying that the dummy variable adjustment method is the solution to this problem. Does that mean I need to transform this continuous variable into a categorical one?
Is this the only solution? I do not want to drop this variable, as I think it is theoretically important to my research question.
| 80% of missing data in a single variable | CC BY-SA 4.0 | null | 2011-01-26T02:53:22.350 | 2020-03-03T14:41:10.503 | 2020-03-03T14:41:10.503 | 11887 | 2793 | [
"missing-data"
]
|
6564 | 2 | null | 6560 | 8 | null | If the missing count were close to 22*12/17, the table would appear independent. This is consistent with your observations. If the missing count is far from this value, the table would exhibit strong lack of dependence. This, too, is consistent with your observations. Evidently your data cannot discriminate the two cases: independence or lack thereof are unidentifiable. Therefore, your only hope is to adopt additional assumptions, such as a prior for the missing count (equivalently, for the total number of emitted particles).
| null | CC BY-SA 2.5 | null | 2011-01-26T03:22:22.153 | 2011-01-26T03:22:22.153 | null | null | 919 | null |
6565 | 2 | null | 6563 | 27 | null | Are the data "missing" in the sense of being unknown or does it just mean there is no loan (so the loan amount is zero)? It sounds like the latter, in which case you need an additional binary dummy to indicate whether there is a loan. No transformation of the loan amount is needed (apart, perhaps, from a continuous re-expression, such as a root or started log, which might be indicated by virtue of other considerations).
This works well in a regression. A simple example is a conceptual model of the form
$$\text{dependent variable (Y) = loan amount (X) + constant.}$$
With the addition of a loan indicator ($I$), the regression model is
$$Y = \beta_I I + \beta_X X + \beta_0 + \epsilon$$
with $\epsilon$ representing random errors with zero expectations. The coefficients are interpreted as:
$\beta_0$ is the expectation of $Y$ for no-loan situations, because those are characterized by $X = 0$ and $I = 0$.
$\beta_X$ is the marginal change in $Y$ with respect to the amount of the loan ($X$).
$\beta_I + \beta_0$ is the intercept for the cases with loans.
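A hedged R sketch of this setup on simulated data (the variable names and parameter values are made up):
```
set.seed(1)
n <- 200
has_loan <- rbinom(n, 1, 0.3)                 # the loan indicator I
amount   <- has_loan * rexp(n, rate = 1/50)   # loan amount X, zero when no loan
Y <- 2 + 1.5 * has_loan + 0.03 * amount + rnorm(n)
fit <- lm(Y ~ has_loan + amount)
coef(fit)  # intercept ~ beta_0, has_loan ~ beta_I, amount ~ beta_X
```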
| null | CC BY-SA 2.5 | null | 2011-01-26T03:27:01.367 | 2011-01-26T03:34:37.983 | 2011-01-26T03:34:37.983 | 919 | 919 | null |
6566 | 2 | null | 6562 | 4 | null | This shows what the first is doing:
```
> class <- sample(1:5, 10, replace = TRUE)
> class == 1
[1] TRUE FALSE FALSE FALSE TRUE TRUE TRUE FALSE TRUE TRUE
> as.numeric(class == 1)
[1] 1 0 0 0 1 1 1 0 1 1
> ## or better,
> factor(as.numeric(class == 1))
[1] 1 0 0 0 1 1 1 0 1 1
Levels: 0 1
```
`class == 1` is creating an indicator variable as you surmise. The `I(....)` bit is due to us having to insulate certain operations from R's formula parsing code.
Similarly, `class %in% c(1,2)` is doing the same thing but matching on `1` or `2`:
```
> class %in% c(1,2)
[1] TRUE FALSE FALSE TRUE TRUE TRUE TRUE FALSE TRUE TRUE
> as.numeric(class %in% c(1,2))
[1] 1 0 0 1 1 1 1 0 1 1
> ## etc
```
and it works within the modelling calls:
```
> set.seed(1)
> dat <- data.frame(Y = rnorm(100), X1 = rnorm(100),
+ class = sample(1:5, 100, replace = TRUE))
> mod <- lm(Y ~ X1 + I(class %in% c(1,4)), data = dat)
> coef(mod)
(Intercept) X1 I(class %in% c(1, 4))TRUE
0.138287085 -0.003827624 -0.072059592
```
but I think it is neater if you create a new variable for the class compression you want and use that directly in your model:
```
> dat <- within(dat, class2 <- factor(class %in% c(1, 4),
+ levels = c(FALSE,TRUE)))
> mod2 <- lm(Y ~ X1 + class2, data = dat)
> coef(mod2)
(Intercept) X1 class2TRUE
0.138287085 -0.003827624 -0.072059592
```
| null | CC BY-SA 2.5 | null | 2011-01-26T09:07:10.353 | 2011-01-26T09:07:10.353 | null | null | 1390 | null |
6567 | 2 | null | 4466 | 4 | null | I would recommend two things in addition to the excellent answers already present:
- At key points in your code, dump out the current data as a flat file, suitably named and described in comments; if a package has produced differing results, this highlights where the differences were introduced. These data files, as well as the original input and the resulting output, should be included in your 'reproducible research set'.
- Include some testing of the packages concerned within your code, for instance using something like testthat (see the sketch below). The hard part is making small, reproducible tests that are likely to highlight any changes in what a package does that relate to your analysis. This would at least highlight to another person that there is some difference in the environments.
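An illustrative sketch of the second point using the testthat package (the particular model and tolerance are placeholders):
```
library(testthat)
# A tiny regression test: a known analysis on built-in data should keep
# giving the same answer after packages are upgraded.
test_that("lm() on the 'cars' data still gives the expected slope", {
  fit <- lm(dist ~ speed, data = cars)
  expect_equal(unname(coef(fit)[2]), 3.932, tolerance = 1e-3)
})
```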
| null | CC BY-SA 2.5 | null | 2011-01-26T10:02:07.550 | 2011-01-26T10:02:07.550 | null | null | 114 | null |
6568 | 2 | null | 6176 | 16 | null | For a really short introduction (seven-page PDF), there's also this, intended to allow you to follow papers that use a bit of measure theory:
[A Measure Theory Tutorial (Measure Theory for Dummies)](http://www.ee.washington.edu/research/guptalab/publications/MeasureTheoryTutorial.pdf). Maya R. Gupta.
Dept of Electrical Engineering, University of Washington, 2006. ([archive.org](https://web.archive.org/web/20100812214640/http://www.ee.washington.edu/research/guptalab/publications/MeasureTheoryTutorial.pdf) copy)
The author gives some refs at the end and says "one of the friendliest books is Resnick’s, which teaches measure theoretic graduate level probability with the assumption that you do not have a B.A. in mathematics."
S. I. Resnick, [A probability path](http://books.google.co.uk/books?id=T9-PMDSVDNsC), Birkhäuser, 1999. 453 pages.
| null | CC BY-SA 3.0 | null | 2011-01-26T11:09:56.750 | 2017-02-27T10:55:00.140 | 2017-02-27T10:55:00.140 | 10278 | 449 | null |