Id (string 1-6) | PostTypeId (7 classes) | AcceptedAnswerId (string 1-6, nullable) | ParentId (string 1-6, nullable) | Score (string 1-4) | ViewCount (string 1-7, nullable) | Body (string 0-38.7k) | Title (string 15-150, nullable) | ContentLicense (3 classes) | FavoriteCount (3 classes) | CreationDate (string 23) | LastActivityDate (string 23) | LastEditDate (string 23, nullable) | LastEditorUserId (string 1-6, nullable) | OwnerUserId (string 1-6, nullable) | Tags (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5250 | 1 | 5325 | null | 36 | 40374 | I am trying to analyse some data using a mixed-effects model. The data I collected represent the weight of some young animals of different genotypes over time.
I am using the approach proposed here:
[https://gribblelab.wordpress.com/2009/03/09/repeated-measures-anova-using-r/](https://gribblelab.wordpress.com/2009/03/09/repeated-measures-anova-using-r/)
In particular, I'm using solution #2.
So I have something like:
```
require(nlme)
model <- lme(weight ~ time * Genotype, random = ~1|Animal/time,
data=weights)
av <- anova(model)
```
Now, I would like to have some multiple comparisons.
Using `multcomp` I can do:
```
require(multcomp)
comp.geno <- glht(model, linfct=mcp(Genotype="Tukey"))
print(summary(comp.geno))
```
And, of course, I could do the same with time.
I have two questions:
- How do I use mcp to see the interaction between Time and Genotype?
- When I run glht I get this warning:
covariate interactions found -- default contrast might be inappropriate
What does it mean? Can I safely ignore it? Or what should I do to avoid it?
EDIT:
I found [this PDF](http://cran.r-project.org/web/packages/multcomp/vignettes/generalsiminf.pdf) that says:
>
Because it is impossible to determine the parameters of interest automatically in this case, mcp() in multcomp will by default generate comparisons for the main effects only, ignoring covariates and interactions. Since version 1.1-2, one can specify to average over interaction terms and covariates using arguments interaction_average = TRUE and covariate_average = TRUE respectively, whereas versions older than 1.0-0 automatically averaged over interaction terms. We suggest to the users, however, that they write out, manually, the set of contrasts they want. One should do this whenever there is doubt about what the default contrasts measure, which typically happens in models with higher order interaction terms. We refer to Hsu (1996), Chapter~7, and Searle (1971), Chapter~7.3, for further discussions and examples on this issue.
I do not have access to those books, but maybe someone here has?
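If I read the vignette correctly, the averaging option would look something like this (just a sketch; I'm assuming `interaction_average` is an argument to `mcp()`):
```
comp.geno.avg <- glht(model, linfct = mcp(Genotype = "Tukey",
                                          interaction_average = TRUE))
summary(comp.geno.avg)
```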
| Multiple comparisons on a mixed effects model | CC BY-SA 3.0 | null | 2010-12-08T11:22:42.240 | 2015-10-28T21:18:42.730 | 2015-01-07T15:52:47.797 | 582 | 582 | [
"r",
"anova",
"mixed-model",
"multiple-comparisons",
"repeated-measures"
]
|
5251 | 2 | null | 5226 | 1 | null | If you lose the iid and normality assumptions, things can get ugly. In [Anderson's book](http://rads.stackoverflow.com/amzn/click/0471047457) you can find explicit formulas for expectations of the type
$\sum_{s,r,t,u}E(X_s-m)(X_r-m)(X_t-m)(X_u-m)$
when $X=(x_1,...,x_n)$ is a sample from a stationary process with mean $m$. In general it is not possible to express such moments using only the first and second moments. For example, $cov(X_i,X_j)=0$ does not guarantee that $cov(X_i^2,X_j^2)=0$. It does only for normal variables, for which zero correlation implies independence.
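For the special case of jointly normal variables, Isserlis' (Wick's) theorem does give such an expression in terms of covariances alone:
$$E(X_s-m)(X_r-m)(X_t-m)(X_u-m)=\sigma_{sr}\sigma_{tu}+\sigma_{st}\sigma_{ru}+\sigma_{su}\sigma_{rt},$$
where $\sigma_{ij}=cov(X_i,X_j)$; it is exactly this reduction that fails in general.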
| null | CC BY-SA 2.5 | null | 2010-12-08T11:31:14.750 | 2010-12-08T11:31:14.750 | null | null | 2116 | null |
5252 | 2 | null | 5235 | 22 | null | In the regression model:
$$y_i=\mathbf{x}_i'\beta+u_i$$
the usual assumption is that $(y_i,\mathbf{x}_i,u_i)$, $i=1,...,n$ is an iid sample. Under assumptions that $E\mathbf{x}_iu_i=0$ and $E(\mathbf{x}_i\mathbf{x}_i')$ has full rank, the ordinary least squares estimator:
$$\widehat{\beta}=\left(\sum_{i=1}^n\mathbf{x}_i\mathbf{x}_i'\right)^{-1}\sum_{i=1}^n\mathbf{x}_iy_i$$
is consistent and asymptotically normal. The expected covariance between the error term and the response variable then is:
$$Ey_iu_i=E(\mathbf{x}_i'\beta+u_i)u_i=Eu_i^2$$
If we furthermore assume that $E(u_i|\mathbf{x}_1,...,\mathbf{x}_n)=0$ and $E(u_i^2|\mathbf{x}_1,...,\mathbf{x}_n)=\sigma^2$, we can calculate the expected covariance between $y_i$ and its regression residual:
$$\begin{align*}
Ey_i\widehat{u}_i&=Ey_i(y_i-\mathbf{x}_i'\widehat{\beta})\\\\
&=E(\mathbf{x}_i'\beta+u_i)(u_i-\mathbf{x}_i'(\widehat{\beta}-\beta))\\\\
&=E(u_i^2)\left(1-E\mathbf{x}_i' \left(\sum_{j=1}^n\mathbf{x}_j\mathbf{x}_j'\right)^{-1}\mathbf{x}_i\right)
\end{align*}$$
Now to get the correlation we need to calculate $\text{Var}(y_i)$ and $\text{Var}(\hat{u}_i)$. It turns out that
$$\text{Var}(\hat u_i)=E(y_i\hat{u}_i),$$
hence
$$\text{Corr}(y_i,\hat u_i)=\sqrt{1-E\mathbf{x}_i' \left(\sum_{j=1}^n\mathbf{x}_j\mathbf{x}_j'\right)^{-1}\mathbf{x}_i}$$
Now the term $\mathbf{x}_i' \left(\sum_{j=1}^n\mathbf{x}_j\mathbf{x}_j'\right)^{-1}\mathbf{x}_i$ comes from the diagonal of the hat matrix $H=X(X'X)^{-1}X'$, where $X=[\mathbf{x}_1,...,\mathbf{x}_N]'$. The matrix $H$ is idempotent, hence it satisfies the following property
$$\text{trace}(H)=\sum_{i}h_{ii}=\text{rank}(H),$$
where $h_{ii}$ is the diagonal term of $H$. The $\text{rank}(H)$ is the number of linearly independent variables in $\mathbf{x}_i$, which is usually the number of variables. Let us call it $p$. The number of $h_{ii}$ terms is the sample size $N$. So we have $N$ nonnegative terms which should sum up to $p$. Usually $N$ is much bigger than $p$, hence a lot of the $h_{ii}$ will be close to zero, meaning that the correlation between the residual and the response variable will be close to 1 for most observations.
The term $h_{ii}$ is also used in various regression diagnostics for determining influential observations.
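As a quick hedged illustration (my own sketch, not part of the derivation above): for a fixed design one can check numerically that the per-observation correlation between $y_i$ and $\hat u_i$ over repeated simulated responses matches $\sqrt{1-h_{ii}}$.
```
set.seed(1)
n <- 30; p <- 3
X <- cbind(1, matrix(rnorm(n * (p - 1)), n, p - 1))
beta <- c(1, 2, -1)
h <- diag(X %*% solve(crossprod(X)) %*% t(X))       # the h_ii terms
R <- 5000
y_mat <- replicate(R, as.vector(X %*% beta + rnorm(n)))           # n x R responses
res_mat <- apply(y_mat, 2, function(y) residuals(lm(y ~ X - 1)))  # matching residuals
emp_cor <- sapply(1:n, function(i) cor(y_mat[i, ], res_mat[i, ]))
plot(sqrt(1 - h), emp_cor); abline(0, 1)   # points fall near the 45-degree line
```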
| null | CC BY-SA 3.0 | null | 2010-12-08T12:15:49.997 | 2013-11-07T12:15:14.593 | 2013-11-07T12:15:14.593 | 2116 | 2116 | null |
5253 | 1 | 5256 | null | 188 | 1020056 | After reading a dataset:
```
dataset <- read.csv("forR.csv")
```
- How can I get R to give me the number of cases it contains?
- Also, will the returned value include or exclude cases omitted with na.omit(dataset)?
| How do I get the number of rows of a data.frame in R? | CC BY-SA 3.0 | null | 2010-12-08T12:16:53.550 | 2013-08-22T15:46:09.177 | 2011-10-08T01:21:44.497 | 183 | 1950 | [
"r"
]
|
5255 | 2 | null | 5253 | 40 | null | Briefly:
- Run dim(dataset) to retrieve both n and k; you can also use nrow(df) and ncol(df) (and even NROW(df) and NCOL(df) -- the uppercase variants are needed for other object types too).
- If you transform, e.g. via dataset <- na.omit(dataset), then the cases are gone and are not counted. But if you do e.g. summary(dataset), the NA cases are accounted for.
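A tiny made-up example of both points:
```
df <- data.frame(x = c(1, NA, 3), y = 4:6)
dim(df)               # 3 2, i.e. n and k
nrow(df); NROW(df)    # both 3
nrow(na.omit(df))     # 2 -- the row with the NA is gone
summary(df)           # the NA is still reported here
```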
| null | CC BY-SA 2.5 | null | 2010-12-08T12:29:53.513 | 2010-12-08T14:51:02.570 | 2010-12-08T14:51:02.570 | null | 334 | null |
5256 | 2 | null | 5253 | 217 | null | `dataset` will be a data frame. As I don't have `forR.csv`, I'll make up a small data frame for illustration:
```
set.seed(1)
dataset <- data.frame(A = sample(c(NA, 1:100), 1000, rep = TRUE),
B = rnorm(1000))
> head(dataset)
A B
1 26 0.07730312
2 37 -0.29686864
3 57 -1.18324224
4 91 0.01129269
5 20 0.99160104
6 90 1.59396745
```
To get the number of cases, count the number of rows using `nrow()` or `NROW()`:
```
> nrow(dataset)
[1] 1000
> NROW(dataset)
[1] 1000
```
To count the data after omitting the `NA`, use the same tools, but wrap `dataset` in `na.omit()`:
```
> NROW(na.omit(dataset))
[1] 993
```
The difference between `NROW()` and `NCOL()` and their lowercase variants (`ncol()` and `nrow()`) is that the lowercase versions will only work for objects that have dimensions (arrays, matrices, data frames). The uppercase versions will work with vectors, which are treated as if they were a 1 column matrix, and are robust if you end up subsetting your data such that R drops an empty dimension.
Alternatively, use `complete.cases()` and `sum` it (`complete.cases()` returns a logical vector [`TRUE` or `FALSE`], one element per row, that is `TRUE` when the row contains no `NA` values):
```
> sum(complete.cases(dataset))
[1] 993
```
| null | CC BY-SA 3.0 | null | 2010-12-08T12:34:11.843 | 2013-08-22T15:46:09.177 | 2013-08-22T15:46:09.177 | 1390 | 1390 | null |
5257 | 2 | null | 3845 | 4 | null | It is a known bug in the plm package: when `effect="individual"`, pgmm crashes. The bug is fixed in plm version 1.2-7.
As for simulation, you need to calculate estimates of the individual effects, since they are not estimated in GMM. At the moment, plm does not have functions for predicting from a GMM model. I have created them and put them [here](https://github.com/mpiktas/vzplm), but I do not advise using them if you are not familiar with R. I have submitted these functions (forecast.pgmm and predict.pgmm) to the plm maintainers, but they have not yet included them in the plm package.
| null | CC BY-SA 2.5 | null | 2010-12-08T13:04:03.837 | 2010-12-08T13:04:03.837 | null | null | 2116 | null |
5258 | 2 | null | 5171 | 10 | null | [Log-linear models](http://en.wikipedia.org/wiki/Poisson_regression) might be another option to look at, if you want to study your two-way data structure.
If you assume that the two samples are matched (i.e., there is some kind of dependency between the two series of locutions) and you take into consideration that data are actually counts that can be considered as scores or ordered responses (as suggested by @caracal), then you can also look at marginal models for matched pairs, which usually involve the analysis of a square contingency table. It may not necessarily be the case that you end up with such a square table, but we can also decide on an upper bound for the number of, e.g., passive sentences. Anyway, models for matched pairs are well explained in Chapter 10 of Agresti, [Categorical Data Analysis](http://www.stat.ufl.edu/%7Eaa/cda/cda.html); relevant models for ordinal categories in square tables are testing for quasi-symmetry (the difference in the effect of a category from one case to the other follows a linear trend in the category scores), conditional symmetry ($\pi_{ab}<\pi_{ba}$ or $\pi_{ab}>\pi_{ba}$, $\forall a,b$), and quasi-uniform association (linear-by-linear association off the main diagonal, which in the case of equal-interval scores means a uniform local association). Ordinal quasi-symmetry (OQS) is a special case of a linear logit model, and it can be compared to a simpler model where only marginal homogeneity holds with an LR test, because ordinal quasi-symmetry + marginal homogeneity $=$ symmetry.
Following Agresti's notation (p. 429), we consider $u_1\leq\dots\leq u_I$ ordered scores for variable $X$ (in rows) and variable $Y$ (in columns); $a$ or $b$ denotes any row or column. The OQS model reads as the following log-linear model:
$$
\log\mu_{ab}=\lambda+\lambda_a+\lambda_b+\beta u_b +\lambda_{ab}
$$
where $\lambda_{ab}=\lambda_{ba}$ for all $a<b$. Compared to the usual QS model for nominal data which is $\log\mu_{ab}=\lambda+\lambda_a^X+\lambda_b^Y+\lambda_{ab}$, where $\lambda_{ab}=0$ would mean independence between the two variables, in the OQS model we impose $\lambda_b^Y-\lambda_b^X=\beta u_b$ (hence introducing the idea of a linear trend). The equivalent logit representation is $\log(\pi_{ab}/\pi_{ba})=\beta(u_b-u_a)$, for $a\leq b$.
If $\beta=0$, then we have symmetry as a special case of this model. If $\beta\neq 0$, then we have stochastically ordered margins, that is $\beta>0$ means that column mean is higher compared to row mean (and the greater $|\beta|$, the greater the differences between the two joint probabilities distributions $\pi_{ab}$ and $\pi_{ba}$ are, which will be reflected in the differences between row and column marginal distributions). A test of $\beta=0$ corresponds to a test of marginal homogeneity. The interpretation of the estimated $\beta$ is straightforward: the estimated probability that score on variable $X$ is $x$ units more positive than the score on $Y$ is $\exp(\hat\beta x)$ times the reverse probability. In your particular case, it means that $\hat\beta$ might allow to quantify the influence that one particular speaker exerts on the other.
Of note, all R code was made available by Laura Thompson in her [S Manual to Accompany Agresti's Categorical Data Analysis](https://web.archive.org/web/20110805121758/https://home.comcast.net/%7Elthompson221/Splusdiscrete2.pdf).
Hereafter, I provide some example R code so that you can play with it on your own data. So, let's try to generate some data first:
```
set.seed(56)
d <- as.data.frame(replicate(2, rpois(420, 1.5)))
colnames(d) <- paste("S", 1:2, sep="")
d.tab <- table(d$S1, d$S2, dnn=names(d)) # or xtabs(~S1+S2, d)
library(vcdExtra)
structable(~S1+S2, data=d)
# library(ggplot2)
# ggfluctuation(d.tab, type="color") + labs(x="S1", y="S2") + theme_bw()
```
Visually, the cross-classification looks like this:
```
S2 0 1 2 3 4 5 6
S1
0 17 35 31 8 7 3 0
1 41 41 30 23 7 2 0
2 19 43 18 18 5 0 1
3 11 21 9 15 2 1 0
4 0 3 4 1 0 0 0
5 1 0 0 2 0 0 0
6 0 0 0 1 0 0 0
```
Now, we can fit the OQS model. Unlike Laura Thompson, who used the base `glm()` function and a custom design matrix for symmetry, we can rely on the [gnm](http://cran.r-project.org/web/packages/gnm/index.html) package; we need, however, to add a vector of numerical scores to estimate $\beta$ in the above model.
```
library(gnm)
d.long <- data.frame(counts=c(d.tab), S1=gl(7,1,7*7,labels=0:6),
S2=gl(7,7,7*7,labels=0:6))
d.long$scores <- rep(0:6, each=7)
summary(mod.oqs <- gnm(counts~scores+Symm(S1,S2), data=d.long,
family=poisson))
anova(mod.oqs)
```
Here, we have $\hat\beta=0.123$, and thus the probability that Speaker B scores 4 when Speaker A scores 3 is $\exp(0.123)=1.13$ times the probability that Speaker B has a score of 3 while Speaker A has a score of 4.
I recently came across the [catspec](https://cran.r-project.org/src/contrib/Archive/catspec/) R package which seems to offer similar facilities, but I didn't try it. There was a good tutorial at UseR! 2009 about all this stuff: [Introduction to Generalized Nonlinear Models in R](http://www.r-project.org/conferences/useR-2009/tutorials/gnmCourse-3up.pdf), but see also the accompanying vignette, [Generalized nonlinear models in R: An overview of the gnm package](http://cran.r-project.org/web/packages/gnm/vignettes/gnmOverview.pdf).
If you want to grasp the idea with real data, there are a lot of examples with real data sets in the [vcdExtra](http://cran.r-project.org/web/packages/vcdExtra/index.html) package from Michael Friendly. About the OQS model, Agresti used data on Premarital Sex and Extramarital Sex (Table 10.5, p. 421). Results are discussed in §10.4.7 (p. 430), and $\hat\beta$ was estimated at -2.86. The code below (partly adapted from Thompson's textbook) reproduces these results. We would need to relevel the factors so as to set the same baseline as Agresti.
```
table.10.5 <- data.frame(expand.grid(PreSex=factor(1:4),
ExSex=factor(1:4)),
counts=c(144,33,84,126,2,4,14,29,0,2,6,25,0,0,1,5))
table.10.5$scores <- rep(1:4,each=4)
summary(mod.oqs <- gnm(counts~scores+Symm(PreSex,ExSex), data=table.10.5,
family=poisson)) # beta = -2.857
anova(mod.oqs) # G^2(5)=2.10
```
| null | CC BY-SA 4.0 | null | 2010-12-08T14:39:39.863 | 2022-12-11T17:43:57.517 | 2022-12-11T17:43:57.517 | 362671 | 930 | null |
5259 | 2 | null | 726 | 15 | null | `Torture numbers, and they'll confess to anything.` ~Gregg Easterbrook
| null | CC BY-SA 2.5 | null | 2010-12-08T15:13:53.143 | 2010-12-08T15:13:53.143 | null | null | 1371 | null |
5260 | 1 | 5273 | null | 4 | 237 | Suppose $\hat{m} = \frac{1}{N}\sum_{i=1}^{N}(X_i)$ where $X_i \sim N(m,\sigma)$.
Are the following steps correct?
\begin{align}\operatorname{Var}\left\{(\hat{m}-m)^2\right\} &= \mathrm E\left\{(\hat{m}-m)^4\right\} - \mathrm E^2\left\{(\hat{m}-m)^2\right\}\\&= 3\mathrm E^2\left\{(\hat{m}-m)^2\right\} - \mathrm E^2\left\{(\hat{m}-m)^2\right\}\\&= 2\mathrm E^2\left\{(\hat{m}-m)^2\right\}\end{align}
and I know that $ \mathrm E\left\{(\hat{m}-m)^2\right\} = \frac{1}{N^2}\sigma$.
(I was wrong here. Read the Update)
Then, $\operatorname{Var}\left\{(\hat{m}-m)^2\right\} = 2\frac{1}{N^4}\sigma^2$
---
However, the textbook says (without any proof) that
$$\operatorname{Var}\left\{(\hat{m}-m)^2\right\} \sim \frac{1}{N^2}$$
Where am I going wrong?
Update:
As [whuber](https://stats.stackexchange.com/users/919/whuber) pointed out in the comments, I was wrong about $ \mathrm E\left\{(\hat{m}-m)^2\right\} $. This expectation equals $\frac{1}{N}\sigma$, not $\frac{1}{N^2}\sigma$.
Therefore, the variance is
$$\operatorname{Var}\left\{(\hat{m}-m)^2\right\} = 2\mathrm E^2\left\{(\hat{m}-m)^2\right\} = 2\frac{1}{N^2}\sigma^2 \sim \frac{1}{N^2}$$
Anyway, the answer provided by [mpiktas](https://stats.stackexchange.com/users/2116/mpiktas) is also correct, and I prefer to choose it as the best answer.
| Calculating $\operatorname{Var}\left\{(\hat{m}-m)^2\right\}$ for a univariate normal distribution | CC BY-SA 4.0 | null | 2010-12-08T15:37:55.173 | 2023-03-12T04:46:29.670 | 2023-03-12T04:46:29.670 | 362671 | 2148 | [
"normal-distribution",
"variance",
"estimation",
"expected-value"
]
|
5261 | 2 | null | 5247 | 3 | null | Two figures of merit in control charting are (1) the expected length of time the process will appear to remain in control when in fact it is; and (2) the expected length of time it takes for an OOC condition to be detected after the process first moves out of control.
Under the usual assumptions--iid normally distributed values, no serial correlation, etc--we can reduce the first case to analyses of correlated coin flipping experiments. An accurate solution takes some work; people usually run simulations. However, each rule by itself has a simple interpretation:
Rule 1 characterizes each measurement by whether it lies beyond the interval $[-3\sigma, 3\sigma]$ (with 0.27% probability) or not. It corresponds, then, to flipping a coin with $\Pr(\text{heads})$ =0.0027 and we want to know the expected number of flips before a "heads" (OOC condition) is observed.
Rule 2 characterizes each measurement by whether it exceeds $2\sigma$ or falls below $-2\sigma$. This is like a "3-sided" coin: a multinomial distribution. One face says "above $2 \sigma$" and occurs with probability 2.28%. Let's call this "heads 1". Another face says "below $-2 \sigma$" and also occurs with probability 2.28%. Call this "heads 2". The third face says "between $-2 \sigma$ and $2 \sigma$" and occurs with probability 95.45%. The analogous question concerns the expected number of flips with this coin before a sequence of two heads of the same type is observed. The calculation might not be easy, but it's easy to see this event is fairly rare: the chance of either head appearing is just 4.55%, but given that it just appeared, the chance that a head of the same type immediately follows it is only 2.28%. Thus, if we only had a pair of throws to consider, an OOC event of this type would occur with probability 4.55% * 2.28% * 2 (multiply by 2 to account for both types of "heads") = 0.21%.
Rule 3 can be analyzed in a similar manner (but is more complicated).
Note that the rules are interrelated: a single observation can violate two or even all three rules, even though all preceding observations were in control. However, this has fairly low probability of occurring, so to a good approximation we can assume the rules are mutually exclusive (allowing us to sum their probabilities).
The purpose of rules 2 and 3 is to reduce the expected time needed to detect an OOC condition caused by a systematic change in the mean. How they accomplish this is intuitively clear: a small increase in mean, for example, only slightly increases the chance of triggering rule 1, but greatly increases the chance of triggering rule 2. For example, a one-sd increase in mean increases the chance of a rule 1 violation to $1 - \Phi(3-1) + \Phi(-3-1)$ = 2.28%, which is expected to take about 1/0.0228 ≈ 44 time steps to detect, but a rule 3 violation (four in a row above 1 sd) now has slightly greater than a $(1/2)^4$ = 6.25% chance of occurring, which will be detected almost three times quicker (around 16 time steps).
In summary, these rules can be understood by analyzing sequences of coin flips (or die rolls); each one corresponds to an event (or sequence of events) that is sufficiently rare that a process in control will go for a long time without triggering an OOC signal; and the combined set of rules is formulated to be able to detect relatively small shifts of the mean as quickly as possible.
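As a hedged illustration (a sketch of mine, not part of the original argument), the in-control expected run length for Rule 1 alone can be checked by simulation:
```
set.seed(42)
run_length <- function() {
  n <- 0
  repeat { n <- n + 1; if (abs(rnorm(1)) > 3) break }
  n
}
mean(replicate(2000, run_length()))   # roughly 1/0.0027, i.e. about 370 steps
```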
| null | CC BY-SA 2.5 | null | 2010-12-08T16:12:12.467 | 2010-12-08T16:12:12.467 | null | null | 919 | null |
5262 | 2 | null | 5249 | 9 | null | I wonder if there is a criterion collision going on here.
One complaint about file formats such as Excel, SQL, etc. is that you have to define the data types in advance to have them behave well, which runs contrary to the "something more lightweight" criterion (as I understand your restriction to be more time related than computationally related).
In contrast, the criterion that it not muck up the data, or allow the data to be mucked up, requires some error checking. Unless you let the system auto-magically figure out the data types (which is essentially where Excel is failing you), there is no way to have your cake and eat it too.
IMO, of the two, the second criterion is more important. Data integrity, once violated, makes analysis difficult or impossible. Lost observations or invalid values (if not properly checked) can mess up everything.
In regards to DIF, the actual raw text is not human readable and would be difficult (IMO) for humans to do data entry in.
IMO, you should give delimited files a fair shake. As mentioned above in the comments, the 'data mangling' is mostly the fault of a subset of the tools you are using. Well-behaved programs should not mangle delimited files. The greatest source of mangling is a poorly specified delimiter. For example, if your data might have commas, a CSV is inappropriate. If it might have tabs, TSV is inappropriate. For many (but not all) programs you can specify an alternate delimiter. For example, I've used the tilde (~) in a couple of difficult cases.
| null | CC BY-SA 2.5 | null | 2010-12-08T16:32:54.793 | 2010-12-08T16:44:01.007 | 2010-12-08T16:44:01.007 | 196 | 196 | null |
5263 | 2 | null | 5181 | 5 | null | The correct answer here is [hierarchical modeling](http://en.wikipedia.org/wiki/Hierarchical_Bayes_model) (also called multilevel modeling). What you want to do is have the variance parameters drawn from a common prior distribution who's parameters are also estimated. Something like
CountryVariance_i ~ D(Location, Scale)
Location ~ D_2(LocationPriorParams)
Scale ~ D_3(ScalePriorParams)
where D, D_2, and D_3 are whatever distributions you would like.
This formalizes the notion that the variance parameters should be similar, and will do the correct kind of shrinkage. I believe that Gelman's books ([Bayesian Data Analysis](http://rads.stackoverflow.com/amzn/click/158488388X) and [Data Analysis Using Regression and Multilevel/Hierarchical Models](http://www.stat.columbia.edu/~gelman/arm/)) talk quite a bit about this kind of thing (though perhaps not a lot for variance parameters).
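To make the structure concrete, here is a hedged generative sketch (the distributions and names are illustrative assumptions, not a prescription): each country's variance parameter is drawn from a shared prior whose hyperparameters would themselves be estimated in a full hierarchical fit.
```
set.seed(1)
n_countries <- 10
location <- 0; scale <- 0.5                               # hyperparameters
country_sd <- exp(rnorm(n_countries, location, scale))    # D = log-normal here
y <- lapply(country_sd, function(s) rnorm(50, mean = 0, sd = s))
round(sapply(y, sd), 2)   # per-country spread, scattered around exp(location)
```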
I don't have a great introduction on hand, so I recommend looking at the wikipedia page and searching for "Hierarchical modeling" or something and clicking on various introductions (there are many). Look for graphs as these are helpful for understanding hierarchical models.
| null | CC BY-SA 2.5 | null | 2010-12-08T16:51:02.557 | 2010-12-16T03:26:47.083 | 2010-12-16T03:26:47.083 | 1146 | 1146 | null |
5264 | 2 | null | 4451 | 1 | null | Check out kaggle.com , they have some contests about social networks and they give out datasets.
Also, Stanford's SNAP is a great resource. And it has research works to boot.
| null | CC BY-SA 2.5 | null | 2010-12-08T16:58:44.833 | 2010-12-08T16:58:44.833 | null | null | null | null |
5265 | 2 | null | 5238 | 3 | null | I suggest you look at google refine (http://code.google.com/p/google-refine/). I think is a very good tool for editing CSV files
| null | CC BY-SA 2.5 | null | 2010-12-08T17:19:52.813 | 2010-12-08T17:19:52.813 | null | null | null | null |
5266 | 1 | null | null | 4 | 607 | I have a need to analyze an experiment that had a binomial outcome and categorical predictors. For context, we sent out an email to our customers and encouraged them to register their account online (did register or did not register online). There are nine different groups that received the email. The groups are segmented based on their account opening date with our firm (1 week ago, 2 weeks ago, 3 weeks ago, etc.).
What method would I use to determine whether the response from each of the nine groups is significantly different? We have used R and run ANOVA in the past when our dependent variable is continuous, but I am not sure how to apply the same procedure with a binomial outcome.
| Analysis with binomial outcome and categorical predictor | CC BY-SA 3.0 | null | 2010-12-08T17:34:11.617 | 2017-01-12T15:41:15.810 | 2017-01-12T15:41:15.810 | 11887 | null | [
"r",
"logistic",
"anova",
"binomial-distribution"
]
|
5267 | 2 | null | 5266 | 2 | null | The model you're looking for is called a [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression).
Look at the `glm` function in R and use `family=binomial()`.
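A minimal sketch (the column names are made up): with one row per customer, `registered` coded 0/1, and `group` a factor with nine levels,
```
fit <- glm(registered ~ group, family = binomial(), data = emails)
summary(fit)                 # per-group contrasts against the reference level
anova(fit, test = "Chisq")   # overall test that the groups differ
```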
| null | CC BY-SA 2.5 | null | 2010-12-08T17:48:19.950 | 2010-12-09T02:28:42.037 | 2010-12-09T02:28:42.037 | 919 | 1979 | null |
5268 | 1 | null | null | 11 | 965 | I wish to cluster users together in a database, with each user represented by a number of features that are both discrete and continuous in nature. The aim is to define a small number of archetypal "users" with a specific set of features. All other users are then categorized as being similar to one or other of these archetypes. An important consideration is that I expect the features to have strong dependency structures, and I would like the method to be effective at making these explicitly visible.
For example say the features per user are:
- gender (m/f)
- location (one of 10 cities)
- favorite color (red/green/blue).
Let's say that we have N users and that favorite color is a R.V. dependent on gender and city. How are we to discover possible strong correlations with gender and/or location and favorite colors? There are a number of clustering techniques, from K-NN, k-means, matrix factorization, even PCA, but many seem to hide the underlying correlations that tie the users together.
Could anyone recommend suitable methods for this unsupervised learning task?
---
[heavily edited in an effort to revive and resolve]
| Recommended method for finding archetypes or clusters | CC BY-SA 3.0 | null | 2010-12-08T17:56:29.253 | 2023-04-12T21:09:27.230 | 2013-02-13T04:13:31.840 | 7290 | null | [
"clustering",
"categorical-data",
"unsupervised-learning"
]
|
5269 | 2 | null | 5260 | 3 | null | I think you intended to take the mean of the $x_i$, instead you took the sum in the definition of $\hat{m}$. This makes the quantity $\hat{m} - m$ look weird.
| null | CC BY-SA 2.5 | null | 2010-12-08T18:06:58.873 | 2010-12-08T18:06:58.873 | null | null | 795 | null |
5270 | 1 | 5276 | null | 3 | 2925 | I have some data which, after lots of searching, I concluded would probably benefit from using a linear mixed effects model. I think I have an interesting result here, but I am having a little trouble figuring out how to interpret all of the results. This is what I get from the summary() function in R:
```
> summary(nonzero.lmer)
Linear mixed model fit by REML
Formula: relative.sents.A ~ relative.sents.B + (1 | id.A) + (1 | abstract)
Data: nonzero
AIC BIC logLik deviance REMLdev
-698.8 -683.9 354.4 -722.6 -708.8
Random effects:
Groups Name Variance Std.Dev.
id.A (Intercept) 1.0790e-04 0.0103877
abstract (Intercept) 3.0966e-05 0.0055647
Residual 2.9675e-04 0.0172263
Number of obs: 146, groups: id.A, 97; abstract, 52
Fixed effects:
Estimate Std. Error t value
(Intercept) 0.017260 0.003046 5.667
relative.sents.B 0.428808 0.080050 5.357
Correlation of Fixed Effects:
(Intr)
rltv.snts.B -0.742
```
My question involves the relationship between the dependent variable ("relative.sents.A") and "relative.sents.B" once the random factors are factored out. I gather that the t-value of 5.357 for relative.sents.B should be significant.
But does this show what the direction of the effect is? I am thinking that because the coefficient for the slope is positive that this means that as relative.sents.B increases, so does my dependent variable. Is this correct?
The book I've been using briefly mentions that the correlation reported here is not a normal correlation, but goes into no details. Normally, I'd look there to figure out the direction and magnitude of the effect. Is that wrong?
If I'm wrong on both counts, then what is a good (hopefully reasonably straightforward) way to discover the direction and size of the effect?
| Interpreting size and direction of fixed effects in a linear mixed effect model | CC BY-SA 2.5 | null | 2010-12-08T18:27:38.503 | 2010-12-08T20:06:40.940 | null | null | 52 | [
"r",
"mixed-model"
]
|
5271 | 1 | 5272 | null | 2 | 11172 | I have fit a linear model and I am now doing a lack-of-fit test. If the model has a good fit, the residual deviance should follow a $\chi^2$ distribution. The model I just ran has a very poor fit, and now I want to show this visually. I have used the following R code to plot the value of the residual deviance and the $\chi^2$ distribution it should come from (under the hypothesis of good fit).
```
df <- 384162
resdev <- 750749
x <- seq(from=350000, to=800000, by=1)
y <- dchisq(x, df=df)
plot(x, y, type='l')
abline(v=resdev, col='red')
```
Everything is working fine, except that the X-axis values are now displayed in scientific notation. How do I control the formatting of the axis values? I understand it may be crowded if I use normal notation, so can I format the labels as something like `350k` for 350,000?
| How to change X-axis format in plot | CC BY-SA 2.5 | null | 2010-12-08T18:46:12.053 | 2010-12-08T19:08:27.073 | null | null | 1973 | [
"r"
]
|
5272 | 2 | null | 5271 | 1 | null | Look at the help page for the `axis` function. Something like the following should work:
```
axis(1, at = x <- seq(3.5e5, 8e5, by = 5e4), labels = paste(x/1000, "k", sep = "" ))
```
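For completeness, a sketch of how this fits with the plotting code from the question (suppress the default x-axis first so the two don't overlap):
```
plot(x, y, type = "l", xaxt = "n")
axis(1, at = ticks <- seq(3.5e5, 8e5, by = 5e4),
     labels = paste(ticks / 1000, "k", sep = ""))
abline(v = resdev, col = "red")
```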
| null | CC BY-SA 2.5 | null | 2010-12-08T19:08:27.073 | 2010-12-08T19:08:27.073 | null | null | 2116 | null |
5273 | 2 | null | 5260 | 7 | null | If $\hat{m}=\frac{1}{n}\sum_{i=1}^nX_i$, where the $X_i$ are an iid normal sample, then $\hat{m}\sim N\left(m,\frac{\sigma^2}{n}\right)$. Then $(\hat{m}-m)\sim N\left(0,\frac{\sigma^2}{n}\right)$ and we can apply the standard results about the normal distribution. We have
\begin{align*}
\operatorname{Var}\left(\left(\hat{m}-m\right)^2\right)&=\operatorname{Var}\left(\left(N\left(0,\frac{\sigma^2}{n}\right)\right)^2\right)\\\\
&=\mathrm E\left(N\left(0,\frac{\sigma^2}{n}\right)\right)^4-\left(\mathrm E\left(N\left(0,\frac{\sigma^2}{n}\right)\right)^2\right)^2\\\\
&=3\frac{\sigma^4}{n^2}-\frac{\sigma^4}{n^2}=2\frac{\sigma^4}{n^2}.
\end{align*}
| null | CC BY-SA 4.0 | null | 2010-12-08T19:35:22.073 | 2023-03-12T04:41:25.077 | 2023-03-12T04:41:25.077 | 362671 | 2116 | null |
5274 | 2 | null | 5233 | 3 | null | Let's take a derivative (difference) here, so instead of working with location, you work directly with what you want: distance.
Say word FOO appears in the text 30 times. Calculate the distance (number of other words) between each consecutive occurrence of FOO, creating a vector of 29 distances. Then pick your plot: histogram, density, xy with log x, etc.
This doesn't show you where clusters are, but it does show clustering.
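A hedged sketch with made-up positions:
```
set.seed(7)
positions <- sort(sample(1:10000, 30))   # 30 occurrences of FOO in a 10,000-word text
gaps <- diff(positions)                  # the 29 consecutive distances
hist(gaps)                               # or plot(density(gaps)), etc.
```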
| null | CC BY-SA 2.5 | null | 2010-12-08T19:43:07.323 | 2010-12-08T19:43:07.323 | null | null | 1764 | null |
5275 | 1 | 5277 | null | 5 | 1703 | Let $\{x_i\}_{i=1}^n$ be a sample from a multivariate Gaussian distribution ${\cal N}(0, \Sigma_X)$ and $\{y_i\}_{i=1}^m$ be a sample from ${\cal N}(0, \Sigma_Y)$.
Are there hypothesis tests for $\Sigma_X = \Sigma_Y$? Pointers to relevant literature would be very appreciated.
| Two-sample test for multivariate normal distributions under the assumption that means are the same | CC BY-SA 2.5 | null | 2010-12-08T19:53:15.887 | 2010-12-08T21:56:58.590 | 2010-12-08T20:19:25.480 | 930 | 168 | [
"hypothesis-testing",
"multivariate-analysis"
]
|
5276 | 2 | null | 5270 | 1 | null | If we assume that your model is adequate, then your reasoning is correct: if relative.sents.B increases by 1, relative.sents.A will increase by 0.43.
| null | CC BY-SA 2.5 | null | 2010-12-08T20:06:40.940 | 2010-12-08T20:06:40.940 | null | null | 2116 | null |
5277 | 2 | null | 5275 | 7 | null | [Mauchly's test](http://en.wikipedia.org/wiki/Mauchly%27s_sphericity_test) allows one to test whether a given covariance matrix is proportional to a reference (identity or other) and is available through `mauchly.test()` in R. It is mostly used in repeated-measures designs (to test (1) whether the dependent variable VC matrices are equal or homogeneous, and (2) whether the correlations between the levels of the within-subjects variable are comparable--altogether, this is known as the sphericity assumption).
Box’s M statistic is used (in MANOVA or LDA) to test for homogeneity of covariance matrices, but as it is very sensitive to normality it will often reject the null ([R code](http://finzi.psych.upenn.edu/R/Rhelp02a/archive/33330.html) not available in standard packages).
Covariance structure models as found in [Structural Equation Modeling](http://en.wikipedia.org/wiki/Structural_equation_modeling) are also an option for more complex stuff (although in multigroup analysis testing for the equality of covariances makes little sense if the variances are not equal), but I have no references to offer actually.
I guess any textbook on multivariate data analysis would have additional details on these procedures. I also found this article for the case where the normality assumption is not met:
> Aslam, S. and Rocke, D.M. A robust testing procedure for the equality of covariance matrices, Computational Statistics & Data Analysis 49 (2005) 863-874.
| null | CC BY-SA 2.5 | null | 2010-12-08T20:52:08.077 | 2010-12-08T21:56:58.590 | 2010-12-08T21:56:58.590 | 930 | 930 | null |
5278 | 1 | 5280 | null | 67 | 21480 | What way (or ways) is there to visually explain what ANOVA is?
Any references, link(s) (R packages?) will be welcomed.
| How to visualize what ANOVA does? | CC BY-SA 3.0 | null | 2010-12-08T21:45:45.977 | 2017-05-16T05:33:21.900 | 2017-03-29T07:47:18.597 | 11887 | 253 | [
"data-visualization",
"anova",
"teaching"
]
|
5279 | 2 | null | 5249 | 6 | null | In all seriousness, I would consider RData files created by R itself as it fits
- reliable (check)
- simple (call it a draw--the format is binary)
- open (check: doesn't get more open than R source code)
- interoperable (check: works everywhere R works)
Close enough for me. If by systems you mean applications rather than operating systems, then the last point is a fail.
Oh, and RData is efficient, as the files are now compressed by default (which used to be an option that was turned off by default).
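For illustration, the basic round trip with this format:
```
x <- data.frame(a = 1:3, b = letters[1:3])
save(x, file = "x.RData")    # compressed by default
load("x.RData")              # restores the object 'x' by name
# saveRDS(x, "x.rds"); x2 <- readRDS("x.rds")  # single-object alternative
```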
| null | CC BY-SA 2.5 | null | 2010-12-08T22:11:26.423 | 2010-12-08T22:26:10.397 | 2010-12-08T22:26:10.397 | 930 | 334 | null |
5280 | 2 | null | 5278 | 54 | null | Personally, I like introducing linear regression and ANOVA by showing that it is all the same and that linear models amount to partition the total variance: We have some kind of variance in the outcome that can be explained by the factors of interest, plus the unexplained part (called the 'residual'). I generally use the following illustration (gray line for total variability, black lines for group or individual specific variability) :

I also like the [heplots](http://cran.r-project.org/web/packages/heplots/index.html) R package, from Michael Friendly and John Fox, but see also [Visual Hypothesis Tests in Multivariate Linear Models: The heplots Package for R](http://socserv.mcmaster.ca/jfox/heplots-dsc-paper.pdf).
Standard ways to explain what ANOVA actually does, especially in the Linear Model framework, are really well explained in [Plane answers to complex questions](http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-95361-8), by Christensen, but there are very few illustrations. Saville and Wood's [Statistical methods: The geometric approach](http://www.springer.com/statistics/book/978-0-387-97517-7) has some examples, but mainly on regression. In Montgomery's [Design and Analysis of Experiments](http://eu.wiley.com/WileyCDA/WileyTitle/productCd-EHEP000137.html), which mostly focused on DoE, there are illustrations that I like, but see below

(these are mine :-)
But I think you have to look for textbooks on Linear Models if you want to see how sum of squares, errors, etc. translates into a vector space, as shown on [Wikipedia](http://en.wikipedia.org/wiki/Ordinary_least_squares#Geometric_approach). [Estimation and Inference in Econometrics](http://qed.econ.queensu.ca/pub/dm-book/), by Davidson and MacKinnon, seems to have nice illustrations (the 1st chapter actually covers OLS geometry) but I only browse the French translation (available [here](http://russell.vcharite.univ-mrs.fr/EIE/)). [The Geometry of Linear Regression](http://www.ceseephd.net/wp-content/uploads/2011/12/Projection1.pdf) has also some good illustrations.
Edit:
Ah, and I just remember this article by Robert Pruzek, [A new graphic for one-way ANOVA](http://www.acase.org/rpruzek/anovagraphic.pdf).
Edit 2
And now, the [granova](http://cran.r-project.org/web/packages/granova/index.html) package (mentioned by @gd047 and associated to the above paper) has been ported to ggplot, see [granovaGG](http://cran.r-project.org/web/packages/granovaGG/index.html) with an illustration for one-way ANOVA below.

| null | CC BY-SA 3.0 | null | 2010-12-08T22:24:17.513 | 2013-09-30T15:34:41.750 | 2013-09-30T15:34:41.750 | 30872 | 930 | null |
5281 | 1 | 5284 | null | 8 | 1778 | Suppose we are to estimate an expectation $E(f(X))$, where $X$ is a random variable with known distribution, by simulation and a Law of Large Numbers estimator. The [antithetic method](http://en.wikipedia.org/wiki/Antithetic_variates) is a way to reduce the variance of the estimator in such cases.
If $X$ is a 1D random variable with cdf $F$, the antithetic method is applied as follows:
draw a sample $U$ from the uniform distribution over $[0,1]$; then $X_1=F^{-1}(U)$ has $F$ as cdf, $Y_1=F^{-1}(1-F(X_1))$ also has $F$ as cdf, and $X_1$ and $Y_1$ have negative correlation. Then $E(f(X))$ is estimated by $\frac{\sum_{i=1}^N f(X_i)+f(Y_i)}{2N}$.
Here are my questions:
- If $X_1$ and $Y_1$ have negative correlation, then to reduce the variance of the estimator, is it correct that $f(X_1)$ and $f(Y_1)$ must also have negative correlation? What is the condition on $f$ for this to be true?
- If $X$ is a multivariate random variable, since its cdf $F$ has no quantile inverse $F^{-1}$, is it still possible to apply the antithetic method in general?
If we consider the special case where the components of $X$ are independent, is it possible to apply the antithetic method? If yes, how?
- I notice that the antithetic method also increases the number of samples without actually doing any more sampling from the distribution. I know that increasing the sample size will also reduce the variance of LLN estimators. If the variance of the estimator is reduced by the antithetic method, how much is contributed by the increase in sample size and how much by introducing negative correlation between samples? If we use iid samples of the same size as the antithetic-doubled sample, which estimator will have less variance?
Thanks and regards!
| Questions about antithetic variate method | CC BY-SA 2.5 | null | 2010-12-09T00:42:19.260 | 2011-05-17T20:29:54.773 | 2010-12-09T02:27:09.217 | 919 | 1005 | [
"expected-value"
]
|
5282 | 2 | null | 5160 | 8 | null | These two links provide some R (and C) code examples of implementing a DP normal mixture:
- http://ice.uchicago.edu/2008_presentations/Rossi/density_estimation_with_DP_priors.ppt
- http://www.duke.edu/~neelo003/r/DP02.r
I found another reference; Chapter 15 has DP WinBUGS code:
- http://www.ics.uci.edu/~wjohnson/BIDA/BIDABook.html
| null | CC BY-SA 2.5 | null | 2010-12-09T02:10:10.680 | 2010-12-20T14:22:03.660 | 2010-12-20T14:22:03.660 | 2310 | 2310 | null |
5283 | 2 | null | 5278 | 10 | null | Check out Hadley Wickham's presentation ([pdf](http://ggplot2.org/resources/2007-past-present-future.pdf), [mirror](http://www.webcitation.org/6K1QO80KC)) on ggplot.
Starting on pages 23-40 of this document he describes an interesting approach to visualizing ANOVAs.
*Link taken from: [http://had.co.nz/ggplot2/](http://had.co.nz/ggplot2/)
| null | CC BY-SA 3.0 | null | 2010-12-09T02:19:57.483 | 2013-09-30T15:56:55.523 | 2013-09-30T15:56:55.523 | 7290 | 995 | null |
5284 | 2 | null | 5281 | 8 | null |
- Yes. There's no simple condition. When $f$ is monotonic, $f(X_1)$ and $f(Y_1)$ will still be negatively correlated. When $f$ is not monotonic, all bets are off. For example, let $F$ be a uniform distribution on $[-1,1]$ and let $f(x) = x^2$. Then $X_1 = -Y_1$, whence $f(X_1) = f(Y_1)$, implying $f(X_1)$ and $f(Y_1)$ are perfectly correlated: you gain no additional information about the expectation from $(X_1, Y_1)$ than you do from $X_1$ alone. The cost of using the antithetic method in this extreme case is to double the sample size in order to achieve a given estimation variance.
A practical example of the problem with non-monotonic $f$ appears here.
- Yes, in some cases. Use the antithetic method on the components of $X$ separately. This ought to work provided the components are not strongly correlated or when $F$ is symmetric.
- Provided $f(X_1)$ and $f(Y_1)$ are negatively correlated, you get smaller estimation variance with the antithetic method. As an extreme example of this, consider the case where $F$ is uniform on $[-1,1]$ and $f$ is the identity. Then for any single sample, $Y_1 = -X_1$ and their mean $(X_1+Y_1)/2 = 0$ estimates the mean of $F$ exactly; whereas the mean of two independent samples $(X_1, X_2)$ has a variance of $1/6$.
This technique seems to be related, at least in spirit, to [Latin hypercube sampling](http://en.wikipedia.org/wiki/Latin_hypercube_sampling).
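To make points 1 and 3 concrete, here is a hedged numerical sketch (mine, not from the references above): estimate $E(e^U)$ with $U$ uniform on $[0,1]$, giving both estimators the same total number of function evaluations.
```
set.seed(1)
N <- 1000; reps <- 2000
plain <- replicate(reps, mean(exp(runif(2 * N))))                         # 2N iid draws
anti  <- replicate(reps, { u <- runif(N); mean((exp(u) + exp(1 - u)) / 2) })
c(var_plain = var(plain), var_antithetic = var(anti))   # antithetic is much smaller
```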
| null | CC BY-SA 3.0 | null | 2010-12-09T02:26:44.333 | 2011-05-17T20:29:54.773 | 2011-05-17T20:29:54.773 | 919 | 919 | null |
5286 | 2 | null | 5249 | 4 | null | In response to Dirk Eddelbuettel's answer, I suggest using the [HDF5 file format](http://www.hdfgroup.org/HDF5/whatishdf5.html). It is less simple than the RData format, or you might say, 'more rich', but certainly more interoperable (can be used in C, Java, Matlab, etc). I have found that I/O involving large HDF5 files is very fast.
| null | CC BY-SA 2.5 | null | 2010-12-09T05:38:20.340 | 2010-12-09T05:38:20.340 | null | null | 795 | null |
5287 | 2 | null | 5114 | 2 | null | What about the [Kumaraswamy](http://en.wikipedia.org/wiki/Kumaraswamy_distribution) distribution, which has the following pdf:
$$
f(x; a,b) = a b x^{a-1}{ (1-x^a)}^{b-1}
$$
for $a>0$, $b>0$, $0 < x < 1$. This distribution can be [rescaled](http://en.wikipedia.org/wiki/Kumaraswamy_distribution#Generalizing_to_arbitrary_range) to have the required support.
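A hedged sketch of sampling from it by inverting the cdf $F(x)=1-(1-x^a)^b$ and then rescaling the support:
```
rkumar <- function(n, a, b) (1 - (1 - runif(n))^(1 / b))^(1 / a)
x <- 2 + (5 - 2) * rkumar(1e4, a = 2, b = 5)   # support moved from [0,1] to [2,5]
hist(x, breaks = 50)
```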
| null | CC BY-SA 2.5 | null | 2010-12-09T10:07:54.657 | 2010-12-09T10:07:54.657 | null | null | 8 | null |
5288 | 1 | 5313 | null | 5 | 7743 | I'm working in a piece of software designed for satellite image classification based on various features of objects in the image. The software provides various built-in features like the mean of the values in the object, the maximum and minimum of the values etc. However, I want to use the median of the values.
I don't have access to the raw values in the object; all I have is the information below:
- Mean
- Max
- Min
- Standard Deviation
And I can do arithmetic on those values using standard operators (+, -, /, *, ^ etc).
Is there a way to calculate the median (or something closely approximating it) from just this information?
| Calculate median without access to raw data | CC BY-SA 2.5 | null | 2010-12-09T10:11:07.247 | 2010-12-09T17:34:04.427 | null | null | 261 | [
"descriptive-statistics",
"mean"
]
|
5289 | 2 | null | 5288 | 9 | null | If you know underlying distribution of the data, you can.
For example, for normal distributed data, the mean and median are same (median=mode=mean).
Or for exponential distribution with mean $\lambda^{-1}$ the median is $\lambda^{-1} ln(2)$.
it is impossible to obtain median without having raw data or knowing the actual data distribution.
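A quick numerical check of the exponential example:
```
lambda <- 2
median(rexp(1e6, rate = lambda))   # simulated median, ~0.347
log(2) / lambda                    # theoretical lambda^{-1} * ln(2)
```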
| null | CC BY-SA 2.5 | null | 2010-12-09T10:36:27.497 | 2010-12-09T10:36:27.497 | null | null | 2148 | null |
5290 | 1 | 5338 | null | 0 | 7759 | I have a time series with an exponential moving average and I want to calculate a moving return of the EMA over the last m periods (something like a smoothed moving return).
Let's say:
Y(t) is the value of the time series at time period t
S(t) is the value of an EMA of Y at time period t
Now R(t) is the return of the EMA over the last m time periods:
R(t) = S(t) / S(t-m) - 1
My question is: how many time periods should the EMA calculation use for a given m?
Precisely, if the EMA is calculated using S(t) = alpha * Y(t) + (1-alpha) * S(t-1) and alpha is set by 2/(N+1), then how should N depend on m?
I'm assuming that N should be sufficiently less than m to prevent 'overlap' of Y values that are used in the calculation of S(t) and S(t-m).
Any theories or best practices about this?
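In case it helps, a minimal sketch of the quantities defined above (toy data; N and m chosen arbitrarily):
```
ema <- function(y, N) {
  a <- 2 / (N + 1)
  s <- numeric(length(y)); s[1] <- y[1]
  for (t in 2:length(y)) s[t] <- a * y[t] + (1 - a) * s[t - 1]
  s
}
set.seed(1)
y <- 100 + cumsum(rnorm(500))
m <- 20; S <- ema(y, N = 10)
R <- S[(m + 1):length(S)] / S[seq_len(length(S) - m)] - 1   # R(t) = S(t)/S(t-m) - 1
```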
| Moving return of exponential moving average -- choice of alpha | CC BY-SA 2.5 | null | 2010-12-09T11:08:23.557 | 2021-01-31T18:15:08.157 | 2021-01-31T18:15:08.157 | 11887 | 2316 | [
"time-series",
"exponential-smoothing"
]
|
5291 | 2 | null | 4030 | 6 | null | In [Johansen article](http://www.jstor.org/stable/2938278) VECM is specified with dummy variables. If your exogenous variables are strictly exogenous I see no reason why you cannot use original Johansen VECM,
so look in the article how Johansen treats dummies.
R package `vars` implements Johansen approach, where you can supply the dummies.
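A hedged sketch with toy data (assuming the `ca.jo()` interface from the urca package, which `vars` works with; the dummy here is made up):
```
library(urca)
set.seed(1)
y <- apply(matrix(rnorm(200 * 2), 200, 2), 2, cumsum)   # two toy I(1) series
colnames(y) <- c("y1", "y2")
dum <- matrix(as.numeric(1:200 > 100), ncol = 1,
              dimnames = list(NULL, "level_shift"))
vecm <- ca.jo(y, type = "trace", ecdet = "const", K = 2, dumvar = dum)
summary(vecm)
```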
| null | CC BY-SA 2.5 | null | 2010-12-09T12:09:12.687 | 2010-12-09T12:09:12.687 | null | null | 2116 | null |
5292 | 1 | 5301 | null | 73 | 128436 | Is there any GUI for R that makes it easier for a beginner to start learning and programming in that language?
| Good GUI for R suitable for a beginner wanting to learn programming in R? | CC BY-SA 3.0 | null | 2010-12-09T13:49:10.580 | 2017-05-06T07:25:03.283 | 2017-05-06T07:25:03.283 | 28666 | 1808 | [
"r"
]
|
5293 | 1 | null | null | 6 | 3452 | My name is Tuhin. I came up with a couple of questions when I was doing an analysis in R.
I did a logistic regression analysis in R and tried to check how well the model fits the data. But I got stuck, as I could not get the pseudo R-squared value for the model, which could give me some idea about the variation explained by the model.
Could you please guide me on how to obtain this value (pseudo R-squared for a logistic regression analysis)?
It would also be helpful if you could show me a way to get the Hosmer-Lemeshow statistic for the model as well. I found a user-defined function to do it, but if there is a quicker way possible, it would be really helpful.
I would be very grateful if you can provide me the answers to my queries. Eagerly waiting for your response.
Regards
| Find out pseudo R square value for a Logistic Regression analysis | CC BY-SA 2.5 | null | 2010-12-09T13:56:21.027 | 2010-12-09T17:18:23.010 | 2010-12-09T15:33:07.360 | 930 | null | [
"r",
"logistic",
"goodness-of-fit"
]
|
5295 | 2 | null | 5249 | 2 | null | I'm not quite sure why fixed text format with the appropriate meta data does not meet your criteria. It is not as simple to read as a delimiter but you need metadata to use the information anyway. The task of writing syntax to read the program simply depends on how large and complicated the structure of the dataset is. SPSS and Excel have a GUI to help with these tasks.
There are only two errors with CSV files I have come across:
- Missing fields without delimiters (so every other field in that record is misplaced, I have also had this problem with missing tags in XML)
- A comma within a text string
(if you have encountered other problems feel free to give examples)
Two is solved with a more irregular delimiter as drnexus suggested (a pipe (|) is one I have encountered before, but a tilde (~) works just as well in that neither is likely to be included in string fields.) One is a problem not easily solved by whatever software you are using, and both are problems with the way people wrote the files to begin with, not the software used to read the files.
I'd also like to say I agree with drnexus on both this thread and his [response](https://stats.stackexchange.com/questions/5238/strategy-for-editing-comma-separated-value-csv-files/5241#5241) on your other recent thread about editing these files. You seem to be complaining about the software you use (particularly Excel) and asking to store data in a format that conforms to your ill-behaved software. Maybe the question should be how to get Excel to stop auto-formatting plain text files. Your reliability criterion, as it appears to me, is a software problem with reading plain text files. I don't use R for data management, but I have not had as hard a time reading delimited files in SPSS as you seem to be suggesting.
If the original files are not written properly what makes you expect any software to reliably read the file? And a specific file format will certainly not prevent you from incorrectly writing the data to whatever file type you choose to begin with.
| null | CC BY-SA 2.5 | null | 2010-12-09T14:13:59.967 | 2010-12-09T14:13:59.967 | 2017-04-13T12:44:29.013 | -1 | 1036 | null |
5296 | 2 | null | 5292 | 24 | null | This has been answered [several times on StackOverflow](https://stackoverflow.com/questions/1439059/best-ide-texteditor-for-r). The top selections on there seem to consistently be Eclipse with StatET or Emacs with ESS.
I wouldn't say that there are any good GUIs to make it easier to learn the language. The closest thing would be [deducer](http://www.deducer.org/pmwiki/pmwiki.php?n=Main.DeducerManual) from Ian Fellows. But there are plenty of other resources (books, papers, blogs, packages, etc.) available for learning.
| null | CC BY-SA 3.0 | null | 2010-12-09T14:32:36.933 | 2011-10-15T07:24:40.610 | 2017-05-23T12:39:26.203 | -1 | 5 | null |
5297 | 2 | null | 5266 | 0 | null | As @fabians noted, this can be done with logistic regression. You should investigate the inclusion of your predictor as an ordinal variable (not categorical) because with 9 levels you likely could model it that way and gain power. Then you can test for a linear effect or a quadratic effect etc. instead of ignoring the ordering of the "categories".
| null | CC BY-SA 2.5 | null | 2010-12-09T14:38:43.207 | 2010-12-09T14:38:43.207 | null | null | 2040 | null |
5298 | 2 | null | 5293 | 5 | null | Take a look at the `lrm()` function from the [Design](http://cran.r-project.org/web/packages/Design/index.html) package. It features everything you need for fitting GLM. The Hosmer and Lemeshow test has limited power and depends on arbitrary discretization; it is discussed in Harrell, Regression Modeling Strategies (p. 231) and on the [R-help](http://www.biostat.wustl.edu/archives/html/s-news/1999-04/msg00147.html) mailing-list. There is also a comparison of GoF tests for logistic regression in [A comparison of goodness-of-fit tests for the logistic regression model](http://www.ds.unifi.it/cipollini/GLM/Doc/HHLcL1997_GofFit.pdf), Stat. Med. 1997 16(9):965.
Here is an example of use:
```
library(Design) # depends on Hmisc
x1 <- rnorm(500)
x2 <- rnorm(500)
L <- x1+abs(x2)
y <- ifelse(runif(500)<=plogis(L), 1, 0)
f <- lrm(y ~ x1+x2, x=TRUE, y=TRUE)
resid(f, 'gof')
```
which yields something like
```
Sum of squared errors Expected value|H0 SD
100.33517914 100.37281429 0.37641975
Z P
-0.09998187 0.92035872
```
but see `help(residuals.lrm)` for additional help.
The following thread contains critical discussions that might also be helpful: Logistic Regression: [Which pseudo R-squared measure is the one to report (Cox & Snell or Nagelkerke)?](https://stats.stackexchange.com/questions/3559/logistic-regression-which-pseudo-r-squared-measure-is-the-one-to-report-cox-s/)
| null | CC BY-SA 2.5 | null | 2010-12-09T14:42:42.797 | 2010-12-09T15:04:18.770 | 2017-04-13T12:44:45.783 | -1 | 930 | null |
5299 | 1 | 51167 | null | 2 | 2215 | I am looking for applied references to data augmentation (preferably with some written code). Either online references or books would be great.
I found this book online:
[http://www.amazon.com/Bayesian-Missing-Data-Problems-Biostatistics/dp/142007749X/ref=sr_1_1?ie=UTF8&s=books&qid=1291905761&sr=1-1](http://rads.stackoverflow.com/amzn/click/142007749X)
But with no reviews, I am hesitant to purchase it.
Thanks!
Edit: I have two variables X and Y. Let's say X follows a mixture of normals and there is a logistic relationship between X and Y. There is measurement error when observing X. We observe 100 X Y pairs and need to estimate the function between the two.
In a book on measurement error (John P. Buonaccorsi) the author recommends data augmentation (I believe the introduced variables are the true X means) for estimation. However no details are given. I am looking for simple examples (R code but doesn't really matter) to get started.
| Data Augmentation Examples | CC BY-SA 2.5 | 0 | 2010-12-09T14:49:52.150 | 2017-01-24T02:55:51.520 | 2017-01-24T02:55:51.520 | 12359 | 2310 | [
"markov-chain-montecarlo",
"error",
"mixture-distribution",
"measurement",
"data-augmentation"
]
|
5300 | 1 | null | null | 2 | 125 | I have a set of data with 4 input variables and 1 result field (they're all numerical values). I want to determine the influence of variable #4 on the outcomes, so it makes sense to try to fix the other 3 inputs and then compare the results against the value of var #4. I realize I'm probably using horrendous statistics terminology, but I was just looking for some quick help--and yes, this can be quick and dirty; it doesn't have to be professional level.
| Fixing 3 variables to find the influence of a 4th? | CC BY-SA 2.5 | null | 2010-12-09T14:56:16.567 | 2010-12-09T15:34:47.920 | null | null | 2320 | [
"correlation"
]
|
5301 | 2 | null | 5292 | 37 | null | I would second @Shane's recommendation for [Deducer](http://www.deducer.org/pmwiki/pmwiki.php?n=Main.DeducerManual), and would also recommend [the R Commander](http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/) by John Fox. The CRAN package is [here](http://cran.r-project.org/web/packages/Rcmdr/index.html). It's called the R "Commander" because it returns the R commands associated with the point-and-click menu selections, which can be saved and run later from the command prompt.
In this way, if you don't know how to do something then you can find it in the menus and get an immediate response for the proper way to do something with R code. It looks like Deducer operates similarly, though I haven't played with Deducer for a while.
The base R Commander is designed for beginner-minded tasks, but there are plugins available for some more sophisticated analyses (Deducer has plugins, too). Bear in mind, however, that no GUI can do everything, and at some point the user will need to wean him/herself from pointing-and-clicking. Some people (myself included) think that is a good thing.
| null | CC BY-SA 3.0 | null | 2010-12-09T15:05:19.457 | 2011-10-15T07:24:31.043 | 2011-10-15T07:24:31.043 | -1 | null | null |
5302 | 2 | null | 5300 | 4 | null | Fit linear regression model:
$Y=\beta_0+\beta_1X_1+\beta_2X_2+\beta_3X_3+\beta_4X_4$
where $X_i$ are your input variables and $Y$ is the outcome. The interpretation of coefficient $\beta_4$ then is the size of change of Y if we change $X_4$ by one unit, holding other input variables constant.
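In R this is quick and dirty indeed (a sketch with made-up column names):
```
fit <- lm(y ~ x1 + x2 + x3 + x4, data = mydata)
summary(fit)           # the x4 row is its estimated effect holding x1-x3 constant
confint(fit)["x4", ]   # a rough uncertainty interval for that effect
```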
| null | CC BY-SA 2.5 | null | 2010-12-09T15:09:13.803 | 2010-12-09T15:34:47.920 | 2010-12-09T15:34:47.920 | 930 | 2116 | null |
5304 | 1 | 5320 | null | 52 | 39185 | Dear everyone - I've noticed something strange that I can't explain, can you? In summary: the manual approach to calculating a confidence interval in a logistic regression model, and the R function `confint()` give different results.
I've been going through Hosmer & Lemeshow's Applied logistic regression (2nd edition). In the 3rd chapter there is an example of calculating the odds ratio and 95% confidence interval. Using R, I can easily reproduce the model:
```
Call:
glm(formula = dataset$CHD ~ as.factor(dataset$dich.age), family = "binomial")
Deviance Residuals:
Min 1Q Median 3Q Max
-1.734 -0.847 -0.847 0.709 1.549
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.8408 0.2551 -3.296 0.00098 ***
as.factor(dataset$dich.age)1 2.0935 0.5285 3.961 7.46e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 136.66 on 99 degrees of freedom
Residual deviance: 117.96 on 98 degrees of freedom
AIC: 121.96
Number of Fisher Scoring iterations: 4
```
However, when I calculate the confidence intervals of the parameters, I get a different interval to the one given in the text:
```
> exp(confint(model))
Waiting for profiling to be done...
2.5 % 97.5 %
(Intercept) 0.2566283 0.7013384
as.factor(dataset$dich.age)1 3.0293727 24.7013080
```
Hosmer & Lemeshow suggest the following formula:
$$
e^{[\hat\beta_1\pm z_{1-\alpha/2}\times\hat{\text{SE}}(\hat\beta_1)]}
$$
and they calculate the confidence interval for `as.factor(dataset$dich.age)1` to be (2.9, 22.9).
This seems straightforward to do in R:
```
# upper CI for beta
exp(summary(model)$coefficients[2,1]+1.96*summary(model)$coefficients[2,2])
# lower CI for beta
exp(summary(model)$coefficients[2,1]-1.96*summary(model)$coefficients[2,2])
```
gives the same answer as the book.
However, any thoughts on why `confint()` seems to give different results? I've seen lots of examples of people using `confint()`.
| Why is there a difference between manually calculating a logistic regression 95% confidence interval, and using the confint() function in R? | CC BY-SA 2.5 | null | 2010-12-09T15:41:59.660 | 2019-04-29T00:21:04.340 | 2019-04-29T00:21:04.340 | 11887 | 1991 | [
"r",
"regression",
"logistic",
"confidence-interval",
"profile-likelihood"
]
|
5305 | 1 | null | null | 10 | 7501 | I have an irregularly spaced `XTS` time series (with `POSIXct` values as index type).
How can I build a new time series sampled at, let's say, a 10-minute interval, but with each sample moment aligned to a round time (13:00:00, 13:10:00, 13:20:00, ...)? If a resampling moment doesn't fall exactly on an original series value, I want to take the previous one.
| How to re-sample an XTS time series in R? | CC BY-SA 2.5 | null | 2010-12-09T15:55:17.107 | 2019-07-23T22:27:26.160 | 2019-07-23T22:27:26.160 | 11887 | 749 | [
"r",
"time-series",
"sampling",
"unevenly-spaced-time-series"
]
|
5306 | 1 | 5309 | null | 7 | 6326 | I'm trying to transition to `R` from using `SPSS`. In the past, I've set up my data in the wide format for my repeated measures ANOVAs in `SPSS`. How do I need to set up my data to run `nlme()`? The data is balanced. It has a within-subjects variable of trial type (3 levels), each measured at 8 time points, and a between-subjects factor with 2 levels. 2 separate analyses will be run; one with response time as the DV and another with accuracy as the DV. I know the data has to be in a long format but I'm not sure how the columns should be arranged. Should 1 column be the subject id, another the trial type, another the time, and then 1 for the DV? Does this matter? Any pointers in the right direction would be greatly appreciated. Thanks.
| How do I setup up repeated measures data for analysis with nlme()? | CC BY-SA 3.0 | null | 2010-12-09T15:55:42.223 | 2013-09-04T15:59:02.337 | 2013-09-04T15:59:02.337 | 21599 | 2322 | [
"r",
"mixed-model",
"repeated-measures",
"dataset",
"lme4-nlme"
]
|
5307 | 2 | null | 5304 | 20 | null | I believe if you look into the help file for confint() you will find that the confidence interval being constructed is a "profile" interval instead of a Wald confidence interval (your formula from HL).
| null | CC BY-SA 2.5 | null | 2010-12-09T16:17:46.617 | 2010-12-09T16:17:46.617 | null | null | 2040 | null |
5308 | 1 | 5326 | null | 0 | 3879 | For this dataset:
```
data my;
input x y;
datalines;
-122.413582861209 37.7828877716232
-122.417876547159 37.7848288325307
;
proc print;
run;
```
The output and related tables are:

How can I import, save and use these values to their maximum precision?
| Read decimal values in SAS | CC BY-SA 2.5 | null | 2010-12-09T16:34:50.800 | 2015-07-08T21:52:33.730 | 2015-07-08T21:52:33.730 | 28666 | 1077 | [
"sas"
]
|
5309 | 2 | null | 5306 | 4 | null | Yes, you need to set up your data so that each grouping factor, dependendent variable and covariate corresponds to a column and every row contains one observation (i.e. long format):
Everything that you enter into the model formulas for the random and fixed parts has to be a column in your data set.
You can use `reshape` to get your data from wide to long format and back.
If you are transitioning to R and do a lot of mixed models, try to get a copy of Pinheiro/Bates "Mixed-Effects Models in S and S-PLUS", it's a comprehensive reference from the guys who wrote `nlme` with a lot of worked examples.
| null | CC BY-SA 2.5 | null | 2010-12-09T16:41:02.197 | 2010-12-09T16:41:02.197 | null | null | 1979 | null |
5310 | 2 | null | 5305 | 5 | null | ```
library(xts)
?endpoints
```
For instance
```
tmp <- zoo(rnorm(1000), as.POSIXct("2010-02-01") + (1:1000) * 60)
tmp[endpoints(tmp, "minutes", 20)]
```
to subsample every 20 minutes. You might also want to check out `to.minutes`, `to.daily`, etc.
| null | CC BY-SA 3.0 | null | 2010-12-09T17:01:30.993 | 2015-08-05T09:21:11.167 | 2015-08-05T09:21:11.167 | 67395 | 300 | null |
5311 | 2 | null | 5292 | 13 | null | I think the command line is the best interface, especially for beginners. The sooner you start with the console, the sooner you'll find that it is the fastest, the most comfortable, and, most importantly, the least limiting way of using R.
| null | CC BY-SA 2.5 | null | 2010-12-09T17:11:31.780 | 2010-12-09T17:11:31.780 | null | null | null | null |
5312 | 2 | null | 5293 | 1 | null | Pseudo R Square is very easy to calculate manually. You just need to look up the -2LL value for the baseline based on the average probability of occurrence of the binomial event. And, you need the -2LL value for the actual Logistic regression.
Let's say the -2LL value for the baseline is 10 and for the Logistic regression model it is 5. Then, the calculation of the Pseudo R Square is: (10 - 5)/10 = 50%.
Another common Pseudo R Square measure is the McFadden R Square that generates the same result. Its calculation is: 1-(5/10) = 50%.
The Pseudo R Square measures do tell you how much your Logistic Regression model reduces the error vs. simply guessing the average probability of occurrence for every observation.
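If you have fitted the model in R with glm(), a quick sketch of the same calculation (assuming a fitted binomial model `fit`):

```
fit0 <- update(fit, . ~ 1)      # intercept-only (baseline) model
ll1  <- as.numeric(logLik(fit))
ll0  <- as.numeric(logLik(fit0))
1 - ll1 / ll0                   # McFadden pseudo R-squared, i.e. (-2*ll0 - (-2*ll1)) / (-2*ll0)
```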
| null | CC BY-SA 2.5 | null | 2010-12-09T17:18:23.010 | 2010-12-09T17:18:23.010 | null | null | 1329 | null |
5313 | 2 | null | 5288 | 6 | null | The question can be construed as requesting a nonparametric estimator of the median of a sample in the form f(min, mean, max, sd). In this circumstance, by contemplating extreme (two-point) distributions, we can trivially establish that
$$ 2\ \text{mean} - \text{max} \le \text{median} \le 2\ \text{mean} - \text{min}.$$
There might be an improvement available by considering the constraint imposed by the known SD. To make any more progress, additional assumptions are needed. Typically, some measure of skewness is essential. (In fact, skewness can be estimated from the deviation between the mean and the median relative to the sd, so one should be able to reverse the process.)
One could, in a pinch, use these four statistics to obtain a [maximum-entropy](http://en.wikipedia.org/wiki/Principle_of_maximum_entropy) solution and use its median for the estimator. Actually, the min and max probably won't be any good, but in a satellite image there are fixed upper and lower bounds (e.g., 0 and 255 for an eight-bit image); these would constrain the maximum-entropy solution nicely.
It's worth remarking that general-purpose image processing software is capable of producing far more information than this, so it could be worthwhile looking at other software solutions. Alternatively, often one can trick the software into supplying additional information. For example, if you could divide each apparent "object" into two pieces you would have statistics for the two halves. That would provide useful information for estimating a median.
| null | CC BY-SA 2.5 | null | 2010-12-09T17:34:04.427 | 2010-12-09T17:34:04.427 | null | null | 919 | null |
5314 | 2 | null | 5292 | 2 | null | I used JGR for a short while, until it became apparent it would quickly consume all the memory on my system. I have not used it since, and recommend you do not use it.
| null | CC BY-SA 2.5 | null | 2010-12-09T17:39:07.070 | 2010-12-09T17:39:07.070 | null | null | 795 | null |
5315 | 1 | 5322 | null | 2 | 405 | I am relating age to a binary outcome in a logistic model (or, more to the point I would like to). However, the distribution of the ages looks like this:
```
nn <- 1000
age <- c(rpois(nn / 3, lambda = 0.5),
rnorm(nn / 3, mean = 10, sd = 2),
runif(nn / 3, min = 0, max = 15))
age <- age[which(age > 0)]
```

How would you approach this problem?
Thanks!
| How would you deal with a bimodally distributed predictor variable in bivariate logistic regression? | CC BY-SA 2.5 | null | 2010-12-09T17:56:04.053 | 2010-12-10T02:23:20.637 | null | null | 1991 | [
"distributions",
"logistic"
]
|
5316 | 2 | null | 5306 | 2 | null | It's unlikely reshape will help you out if you're coming straight from SPSS files because you not only need it set up differently, you probably need different data than you were using in SPSS. In the standard repeated measures analysis you enter means of each individual condition for each subject into the analysis but for nlme you enter each individual data point. You'll have to go back to the raw data files. If those have a data point on each line and columns to identify the condition of each point then you can just basically concatenate each file together, identifying the separate subjects along the way. Something like the following would work in that case
```
fList <- list.files('myDataF/')
dList <- lapply(fList, function(f) {
    # read one subject's file and tag its rows with the file name
    x <- read.table(paste('myDataF', f, sep = '/'), header = TRUE)
    x$subj <- f
    return(x)
})
myData <- do.call(rbind, dList)
```
(this could be even simpler if you knew that there was exactly the same number of lines in each file)
| null | CC BY-SA 2.5 | null | 2010-12-09T17:56:56.857 | 2010-12-09T17:56:56.857 | null | null | 601 | null |
5317 | 2 | null | 5290 | 1 | null | This is actually a rather complex problem. There are a few directions you can look into. One way, typically recommended in the forecasting literature, is to optimize for the forecasting error.
If you have a specific application in mind you can define your own cost function to optimize.
A different view on this is to look at the EWMA as a state space model; the problem is then equivalent to setting up an appropriate Kalman filter, whose parameters you can estimate by MLE. See for instance [Time Series Analysis by State Space Methods](http://rads.stackoverflow.com/amzn/click/0198523548).
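In R, for instance, the built-in StructTS function fits a local level model by MLE, and its steady-state filter corresponds to an EWMA; this is only a sketch, with `x` a hypothetical univariate time series:

```
fit <- StructTS(x, type = "level")  # local level state space model, estimated by MLE
fit$coef                            # estimated level and observation variances
# their ratio (the signal-to-noise ratio) determines the implied smoothing constant
```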
There are other directions you can go, but I think this will give you an idea...
| null | CC BY-SA 2.5 | null | 2010-12-09T18:00:15.903 | 2010-12-09T18:00:15.903 | null | null | 300 | null |
5318 | 2 | null | 5306 | 2 | null | NLME relies on a "univariate" as opposed to "multi-variate" data structure. See the description below, copied from my response to another question here:
[Data manipulation in R for functional data analysis](https://stats.stackexchange.com/questions/3702/data-manipulation-in-r-for-functional-data-analysis/3705#3705)
As to how you would get the data into R and into one of these formats, we'd need to know more about what your input file looks like and the format that it is in. However, here are some general tips on formatting the type of data that you have for analysis in any system.
Singer (Applied Longitudinal Data Analysis) suggests two generally useful layouts for the statistical analysis of longitudinal data: the person-level (multivariate) structure or the person-period (univariate) structure. The latter is generally preferable for a number of reasons.
The person-level data structure (or the multivariate format) contains one row of data for each observational unit (such as persons) and a variable for each measurement period. Age would not be included in the data set and would be implicit in the levels of your time factor (e.g., in a repeated measures ANOVA). This structure can lead naturally to summaries that aren't very meaningful, is less efficient than it could be, and cannot account for your unequally spaced observations (differing age intervals between observations) or time-varying covariates.
That data setup might look something like this...
```
Multivariate
ID Gender height1 height2 height3 height4 height5
1 Boy 76.2 74.6 78.2 77.7 76
2 Boy 80.4 78.0 81.8 80.5 80
3 Boy 83.3 82.0 85.4 83.3 83
4 Girl 96.0 94.9 97.1 98.6 96
5 Girl 87.7 90.0 89.6 90.3 89
6 Girl 85.7 86.9 87.9 87.0 86
```
A preferable layout is often the person-period layout (or the univariate format) where each individual has a record for each time for which they were observed. The person-period dataset has a number of advantages. First, it leads to more natural summaries of the data, e.g. getting an average by group, by time or by group and time is now straight forward. Second, the dataset will accommodate entry of unequal intervals in the time dimension, such as you have here. In addition, if you have them, you could add columns for any other demographic covariates and these could differ over time. Also, data in this format is prepared for modern analytical techniques such as multilevel modeling. Finally, the univariate data structure is consistent with good practice in database design and normalization, increasing efficiency and making it appropriate for the typical query structure.
The univariate layout would look something like this...
```
Univariate
ID Age Gender Height
1 1 Boy 76.2
1 1.5 Boy 74.6
1 3 Boy 78.2
1 5 Boy 77.7
2 1 Girl 80.4
2 1.5 Girl 81.8
2 3 Girl 80.5
2 5 Girl 80
3 1 Boy 115.8
3 1.5 Boy 112.3
3 3 Boy 111.0
3 5 Boy 104.1
```
| null | CC BY-SA 2.5 | null | 2010-12-09T18:55:54.040 | 2010-12-09T18:55:54.040 | 2017-04-13T12:44:48.803 | -1 | 485 | null |
5319 | 2 | null | 5305 | 2 | null | I'm still not sure what you're trying to do and I still think an example would help, but I thought I'd guess that you may be interested in `align.time`.
```
# Compare this:
tmp[endpoints(tmp, "minutes", 20)]
# with this:
align.time( tmp[endpoints(tmp, "minutes", 20)], n=60*20 )
```
| null | CC BY-SA 2.5 | null | 2010-12-09T18:58:39.610 | 2010-12-09T18:58:39.610 | null | null | 1657 | null |
5320 | 2 | null | 5304 | 48 | null | After having fetched the data from the [accompanying website](http://www.ats.ucla.edu/stat/Stata/examples/alr2/), here is how I would do it:
```
chdage <- read.table("chdage.dat", header=F, col.names=c("id","age","chd"))
chdage$aged <- ifelse(chdage$age>=55, 1, 0)
mod.lr <- glm(chd ~ aged, data=chdage, family=binomial)
summary(mod.lr)
```
The 95% CIs based on profile likelihood are obtained with
```
require(MASS)
exp(confint(mod.lr))
```
This often is the default if the `MASS` package is automatically loaded. In this case, I get
```
2.5 % 97.5 %
(Intercept) 0.2566283 0.7013384
aged 3.0293727 24.7013080
```
Now, if I wanted to compare with 95% Wald CIs (based on asymptotic normality) like the one you computed by hand, I would use `confint.default()` instead; this yields
```
2.5 % 97.5 %
(Intercept) 0.2616579 0.7111663
aged 2.8795652 22.8614705
```
Wald CIs are good in most situations, although profile likelihood-based may be useful with complex sampling strategies. If you want to grasp the idea of how they work, here is a brief overview of the main principles: [Confidence intervals by the profile likelihood method, with applications in veterinary epidemiology](http://people.upei.ca/hstryhn/stryhn208.pdf). You can also take a look at Venables and Ripley's MASS book, §8.4, pp. 220-221.
| null | CC BY-SA 3.0 | null | 2010-12-09T19:00:23.177 | 2017-01-22T23:19:58.883 | 2017-01-22T23:19:58.883 | 89665 | 930 | null |
5321 | 1 | 5334 | null | 4 | 3782 | I'm implementing PCA using eigenvalue decomposition in Matlab. I know Matlab has PCA implemented, but it helps me understand all the technicalities when I write code.
I've been following the guidance from [here](http://books.google.com/books?id=bXzAlkODwa8C&lpg=PA647&dq=eigenvalue%20decomposition&pg=PA647#v=onepage&q&f=false), but I'm getting different results in comparison to built-in function `princomp`.
Could anybody look at it and point me in the right direction?
Here's the code:
```
function [mu, Ev, Val ] = pca(data)
% mu - mean image
% Ev - matrix whose columns are the eigenvectors corresponding to the eigen
% values Val
% Val - eigenvalues
if nargin ~= 1
error ('usage: [mu,E,Values] = pca_q1(data)');
end
mu = mean(data)';
nimages = size(data,2);
for i = 1:nimages
data(:,i) = data(:,i)-mu(i);
end
L = data'*data;
[Ev, Vals] = eig(L);
[Ev,Vals] = sort(Ev,Vals);
% computing eigenvector of the real covariance matrix
Ev = data * Ev;
Val = diag(Vals);
Vals = Vals / (nimages - 1);
% normalize Ev to unit length
proper = 0;
for i = 1:nimages
Ev(:,i) = Ev(:,1)/norm(Ev(:,i));
if Vals(i) < 0.00001
Ev(:,i) = zeros(size(Ev,1),1);
else
proper = proper+1;
end;
end;
Ev = Ev(:,1:nimages);
```
| Problems with implementing PCA in Matlab | CC BY-SA 3.0 | null | 2010-12-09T19:07:48.340 | 2015-02-04T14:11:22.420 | 2015-02-04T14:11:22.420 | 28666 | 1371 | [
"machine-learning",
"pca",
"matlab"
]
|
5322 | 2 | null | 5315 | 7 | null | Logistic (or other) regression does not make any assumptions about the distribution of the predictors; they are only assumed to be known exactly. So there is nothing stopping you from going ahead and using age in your regression model.
The issue you have to actually worry about is whether the effect of age is linear on the logit scale, but that has nothing to do with its distribution.
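One quick way to probe that in R (just a sketch, with a hypothetical data frame `d` containing the binary outcome `y` and `age`):

```
m1 <- glm(y ~ age, family = binomial, data = d)
m2 <- glm(y ~ poly(age, 2), family = binomial, data = d)  # allow some curvature
anova(m1, m2, test = "Chisq")  # does relaxing linearity improve the fit?
```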
| null | CC BY-SA 2.5 | null | 2010-12-09T19:35:12.803 | 2010-12-09T19:35:12.803 | null | null | 279 | null |
5323 | 2 | null | 5315 | 2 | null | @andrew: I personally like a simple approach: create a logit plot by binning age into 10 groups (deciles) and plotting the empirical logit against these bins. This gives a sense of the linearity of the variable in the logit and may suggest a transformation.
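Something along these lines in R (a rough sketch; `d` is a hypothetical data frame with a 0/1 outcome `y` and predictor `age`):

```
d$bin <- cut(d$age, quantile(d$age, seq(0, 1, 0.1)), include.lowest = TRUE)
p <- tapply(d$y, d$bin, mean)    # event proportion per decile
x <- tapply(d$age, d$bin, mean)  # mean age per decile
# qlogis(p) is the empirical logit; bins with p of 0 or 1 will plot as +/-Inf
plot(x, qlogis(p), xlab = "mean age in decile", ylab = "empirical logit")
```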
| null | CC BY-SA 2.5 | null | 2010-12-09T20:02:54.003 | 2010-12-09T20:02:54.003 | null | null | 2040 | null |
5324 | 2 | null | 5292 | 4 | null | Personally ESS, but as stated above I have found Rcmdr very easy to use.
| null | CC BY-SA 2.5 | null | 2010-12-09T21:31:56.837 | 2010-12-09T21:31:56.837 | null | null | 2262 | null |
5325 | 2 | null | 5250 | 31 | null | If `time` and `Genotype` are both categorical predictors as they appear to be, and you are interested in comparing all time/Genotype pairs to each other, then you can just create one interaction variable, and use Tukey contrasts on it:
```
weights$TimeGeno <- interaction(weights$time, weights$Genotype)
model <- lme(weight ~ TimeGeno, random = ~1|Animal/time, data=weights)
comp.timegeno <- glht(model, linfct=mcp(TimeGeno="Tukey"))
```
If you are interested in other contrasts, then you can use the fact that the `linfct` argument can take a matrix of coefficients for the contrasts - this way you can set up exactly the comparisons you want.
EDIT
There appears to be some concern in the comments that the model fitted with the `TimeGeno` predictor is different from the original model fitted with the `Time * Genotype` predictor. This is not the case: the models are equivalent. The only difference is in the parametrization of the fixed effects, which is set up to make it easier to use the `glht` function.
I have used one of the built-in datasets (it has Diet instead of Genotype) to demonstrate that the two approaches have the same likelihood, predicted values, etc:
```
> # extract a subset of a built-in dataset for the example
> data(BodyWeight)
> ex <- as.data.frame(subset(BodyWeight, Time %in% c(1, 22, 44)))
> ex$Time <- factor(ex$Time)
>
> #create interaction variable
> ex$TimeDiet <- interaction(ex$Time, ex$Diet)
>
> model1 <- lme(weight ~ Time * Diet, random = ~1|Rat/Time, data=ex)
> model2 <- lme(weight ~ TimeDiet, random = ~1|Rat/Time, data=ex)
>
> # the degrees of freedom, AIC, BIC, log-likelihood are all the same
> anova(model1, model2)
Model df AIC BIC logLik
model1 1 12 367.4266 387.3893 -171.7133
model2 2 12 367.4266 387.3893 -171.7133
Warning message:
In anova.lme(model1, model2) :
fitted objects with different fixed effects. REML comparisons are not meaningful.
>
> # the second model collapses the main and interaction effects of the first model
> anova(model1)
numDF denDF F-value p-value
(Intercept) 1 26 1719.5059 <.0001
Time 2 26 28.9986 <.0001
Diet 2 13 85.3659 <.0001
Time:Diet 4 26 1.7610 0.1671
> anova(model2)
numDF denDF F-value p-value
(Intercept) 1 24 1719.5059 <.0001
TimeDiet 8 24 29.4716 <.0001
>
> # they give the same predicted values
> newdata <- expand.grid(Time=levels(ex$Time), Diet=levels(ex$Diet))
> newdata$TimeDiet <- interaction(newdata$Time, newdata$Diet)
> newdata$pred1 <- predict(model1, newdata=newdata, level=0)
> newdata$pred2 <- predict(model2, newdata=newdata, level=0)
> newdata
Time Diet TimeDiet pred1 pred2
1 1 1 1.1 250.625 250.625
2 22 1 22.1 261.875 261.875
3 44 1 44.1 267.250 267.250
4 1 2 1.2 453.750 453.750
5 22 2 22.2 475.000 475.000
6 44 2 44.2 488.750 488.750
7 1 3 1.3 508.750 508.750
8 22 3 22.3 518.250 518.250
9 44 3 44.3 530.000 530.000
```
The only difference is that what hypotheses are easy to test. For example, in the first model it is easy to test whether the two predictors interact, in the second model there is no explicit test for this. On the other hand, the joint effect of the two predictors is easy to test in the second model, but not the first one. The other hypotheses are testable, it is just more work to set those up.
| null | CC BY-SA 3.0 | null | 2010-12-09T21:44:24.397 | 2015-10-28T21:18:42.730 | 2015-10-28T21:18:42.730 | 279 | 279 | null |
5326 | 2 | null | 5308 | 4 | null | The full-precision value is there; you just don't see it. SAS uses formats to control what it prints/shows, even if you don't notice it (there are default numeric and character formats).
Try using `FORMAT x y 17.12;` either in the data step to set it permanently, or in the `PROC PRINT` step to set it temporarily.
| null | CC BY-SA 2.5 | null | 2010-12-09T21:50:21.977 | 2010-12-10T06:47:56.540 | 2010-12-10T06:47:56.540 | 183 | 279 | null |
5327 | 1 | 6508 | null | 11 | 28089 | My (very basic) knowledge of the Tobit regression model isn't from a class, as I would prefer. Instead, I have picked up pieces of information here and there through several Internet searches. My best guess at the assumptions for truncated regression is that they are very similar to the ordinary least squares (OLS) assumptions. I have no idea if that is correct, though.
Hence my question: What are the assumptions I should check for when performing Tobit regression?
Note: The original form of this question referred to truncated regression, which was not the model I was using or asking about. I have corrected the question.
| What are the assumptions for applying a Tobit regression model? | CC BY-SA 2.5 | null | 2010-12-10T00:54:26.633 | 2017-03-13T00:07:50.723 | 2011-01-24T20:30:24.627 | 1583 | 1583 | [
"regression",
"assumptions"
]
|
5328 | 2 | null | 4451 | 1 | null | Facebook social graph, application installations and Last.fm users, events, groups at
[http://odysseas.calit2.uci.edu/research/](http://odysseas.calit2.uci.edu/research/)
Two datasets (collected April-May 2009) which contain representative samples of ~1 million users Facebook-wide, with a few annotated properties: for each sampled user, the friend list, privacy settings, and network membership are included. A third dataset ( collected Feb 2008) includes a bipartite graph that represents application installations by Facebook users. A fourth dataset with Daily Active Users and application installations over 6 months (collected Sept 2007-Feb 2008). A fifth dataset that includes a representative sample of Last.fm users obtained using multigraph sampling (collected July 2010).
| null | CC BY-SA 2.5 | null | 2010-12-10T01:12:14.670 | 2010-12-10T01:12:14.670 | null | null | null | null |
5329 | 1 | 5337 | null | 2 | 250 | I want to cluster a graph using the FCM algorithm. I used the adjacency matrix of the graph as the data and the "Euclidean" distance as the metric.
The problem is that the adjacency matrix is full of zeros (how many depends on the degree of each node) and my network is quite large, more than 2000 nodes.
In the results I get many small clusters and some large clusters that sometimes contain about 50% of the population, which is not right.
I think my problem comes from the data representation. Are there any conditions that must be satisfied apart from the FCM algorithm's own conditions?
| Are there any conditions on the data in Fuzzy c-mean clustering? | CC BY-SA 4.0 | null | 2010-12-10T01:40:31.767 | 2020-09-23T00:09:52.293 | 2020-09-23T00:09:52.293 | 11887 | 2325 | [
"clustering",
"fuzzy",
"fuzzy-c-means-clustering"
]
|
5330 | 1 | 5335 | null | 1 | 1783 | Hi, my apologies first: I'm a biologist and not so good at statistics. In my study I'm looking at the effect of feed concentration on the growth of a certain specimen. I have the different concentrations and the growth data for each concentration. What do you suggest is the best statistical test to find out the best concentration (i.e., the one giving maximum growth)? Thanks in advance.
| Which test to find out best concentration (the one having maximum effect)? | CC BY-SA 2.5 | null | 2010-12-10T01:44:12.643 | 2010-12-10T08:09:31.127 | null | null | null | [
"regression",
"mathematical-statistics"
]
|
5331 | 2 | null | 5315 | 0 | null | If binning the age variable is an option, you might want to use a classification tree to get an idea of where to place the cutoffs.
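For example, with the rpart package (a sketch; `d` is a hypothetical data frame with the binary outcome `y` and predictor `age`):

```
library(rpart)
tree <- rpart(factor(y) ~ age, data = d, method = "class")
print(tree)  # the split points suggest candidate cutoffs for binning age
```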
| null | CC BY-SA 2.5 | null | 2010-12-10T02:23:20.637 | 2010-12-10T02:23:20.637 | null | null | 1945 | null |
5332 | 2 | null | 5292 | 2 | null | I recommend [Tinn-R](http://www.sciviews.org/Tinn-R/) (the name is an acronym for "Tinn is not Notepad").
| null | CC BY-SA 2.5 | null | 2010-12-10T02:48:55.077 | 2010-12-10T07:08:08.270 | 2010-12-10T07:08:08.270 | 1709 | 1709 | null |
5333 | 1 | 5374 | null | 5 | 4640 | I have a linear mixed-effect model which I hope will answer the question of whether an increase in the frequency of use of one word leads to an increase of the frequency of use of that word by another person in a conversation, factoring out random effects of subject and topic of conversation. The basic model I've come up with looks like this:
```
Linear mixed model fit by REML
Formula: relative.sents.A ~ relative.sents.B + (1 | id.A) + (1 | abstract)
Data: nonzero
AIC BIC logLik deviance REMLdev
-698.8 -683.9 354.4 -722.6 -708.8
Random effects:
Groups Name Variance Std.Dev.
id.A (Intercept) 1.0790e-04 0.0103877
abstract (Intercept) 3.0966e-05 0.0055647
Residual 2.9675e-04 0.0172263
Number of obs: 146, groups: id.A, 97; abstract, 52
Fixed effects:
Estimate Std. Error t value
(Intercept) 0.017260 0.003046 5.667
relative.sents.B 0.428808 0.080050 5.357
Correlation of Fixed Effects:
(Intr)
rltv.snts.B -0.742
```
The "dependent" variable is relative frequency of use by one person, and the fixed variable is relative frequency of use by another. I decided to see what the R^2 would be:
```
> cor(nonzero$relative.sents.A, fitted(nonzero.lmer))^2
[1] 0.6705905
```
To see what proportion of this is due to the fixed effect, I made a new model with only the random effects:
```
> summary(r.only.lmer)
Linear mixed model fit by REML
Formula: relative.sents.A ~ 1 + (1 | id.A) + (1 | abstract)
Data: nonzero
AIC BIC logLik deviance REMLdev
-678.2 -666.3 343.1 -696.7 -686.2
Random effects:
Groups Name Variance Std.Dev.
id.A (Intercept) 1.2868e-04 0.0113435
abstract (Intercept) 7.8525e-06 0.0028022
Residual 3.7643e-04 0.0194017
Number of obs: 146, groups: id.A, 97; abstract, 52
Fixed effects:
Estimate Std. Error t value
(Intercept) 0.029149 0.002088 13.96
```
...and tried the same thing:
```
> cor(nonzero$relative.sents.A, fitted(r.only.lmer))^2
[1] 0.6882534
```
To my surprise, without that fixed effect, R^2 seems to increase!
Does this mean my model is useless? If so, any suggestions on what might be wrong? Or am I somehow misinterpreting these results?
| Interpreting correlation from two linear mixed-effect models | CC BY-SA 2.5 | null | 2010-12-10T03:04:44.617 | 2010-12-11T11:52:26.187 | 2010-12-10T18:15:33.003 | 52 | 52 | [
"r",
"mixed-model"
]
|
5334 | 2 | null | 5321 | 4 | null | The line `[Ev,Vals] = sort(Ev,Vals);` probably does not do what you think it should. The sort command would have to be applied separately. Moreover, `eig` returns a diagonal matrix of eigenvalues, which you have to strip out to a vector. Probably you want to do the following:
```
[Ev, Vals] = eig(L);
Vals = diag(Vals); %strip out the eigenvalues;
[Vals,sidx] = sort(Vals); %do the sort
Ev = Ev(:,sidx); %apply the sort to the eigenvectors as well.
```
I would guess this is actually unnecessary, because `eig` returns the eigenvalues (and corresponding eigenvectors) in ascending order anyway.
On a stylistic note, you can subtract out the mean using `bsxfun` as follows:
```
X = bsxfun(@minus,X,mu);
```
instead of calling the for-loop.
| null | CC BY-SA 2.5 | null | 2010-12-10T03:25:24.200 | 2010-12-10T03:25:24.200 | null | null | 795 | null |
5335 | 2 | null | 5330 | 2 | null | It sounds like an ANOVA (analysis of variance) setting, except that usually ANOVA is used to ask the question whether there are differences in growth between the concentrations. If you used ANOVA and found that there is a difference, then it wouldn't indicate which is the "best" one, just that they are different. There are post-hoc tests to look at all the different pairs.
I'd take a different approach though:
Can the different concentrations be considered as gradually increasing (rather than just "different" from each other)? What kind of relationship do you anticipate between the feed concentration and the growth? How many concentrations do you have? How many records per concentration? If you have more than 2-3 concentrations and sufficient data on each concentration level you could fit a parametric or nonparametric growth curve (x-axis is concentration, y-axis is growth). This would give you a clearer idea of what is going on, with fewer assumptions.
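A rough R sketch of both ideas (assuming a hypothetical data frame `feeding` with columns `conc` and `growth`):

```
# ANOVA view: do the concentrations differ at all, and which pairs differ?
fit <- aov(growth ~ factor(conc), data = feeding)
summary(fit)
TukeyHSD(fit)

# curve view: growth against concentration with a nonparametric smooth
plot(growth ~ conc, data = feeding)
lines(loess.smooth(feeding$conc, feeding$growth), col = "red")
```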
| null | CC BY-SA 2.5 | null | 2010-12-10T03:30:21.893 | 2010-12-10T03:30:21.893 | null | null | 1945 | null |
5337 | 2 | null | 5329 | 0 | null | Different types of measures may be used to fill the cells of the matrix, and the choice of measure controls how the clusters are formed. Some examples of values that can be used as measures include the minimum distance from one node to the other, the degree of connectivity of each node, and the number of nodes you need to pass through to reach the other node.
I think the minimum distance from one node to the other should be the best choice.
You can also try the Minkowski distance, which is the standard distance used in FCM. To calculate this distance you need a parameter (the order of the distance); when this parameter equals 2, the distance reduces to the Euclidean distance.
| null | CC BY-SA 2.5 | null | 2010-12-10T04:39:47.820 | 2010-12-10T04:39:47.820 | null | null | 1808 | null |
5338 | 2 | null | 5290 | 3 | null | N itself is often set in an arbitrary manner that makes sense for the application. Frankly the alpha = 2/(N+1) is also somewhat arbitrary, [http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc431.htm](http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc431.htm) suggests instead doing an optimization that minimizes the mean squared error. (Nevertheless, alpha = 2/(N+1) seems to be fairly standard practice.)
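A quick sketch of that kind of optimization in R, using one-step-ahead squared forecast errors (`x` is a hypothetical numeric series):

```
ewma.sse <- function(alpha, x) {
  sse <- 0
  s <- x[1]
  for (t in 2:length(x)) {
    sse <- sse + (x[t] - s)^2          # one-step-ahead forecast error
    s <- alpha * x[t] + (1 - alpha) * s
  }
  sse
}
optimize(ewma.sse, interval = c(0.01, 0.99), x = x)$minimum  # alpha minimizing in-sample MSE
```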
As to how n should depend on m, or vice versa, I'd say try to find something that makes sense given your particular data and that will help with interpreting your results. I wouldn't worry about overlap too much, as it's not like the data are independent--that's why you do the EMA in the first place.
| null | CC BY-SA 2.5 | null | 2010-12-10T05:49:12.900 | 2010-12-10T05:49:12.900 | null | null | 2073 | null |
5339 | 2 | null | 5330 | 1 | null | If you have growth curves from each feed concentration, with time on the x-axis, you'll first want to estimate the growth from each curve. Percent change at the end points may be sufficient for your purposes, but to use all of your data consider fitting some kind of model to each growth curve with a regression. Then you can estimate the parameters of the model and do a statistical comparison on those. Your institution might have access to GraphPad (I can't link because I don't have enough rep, but Google it), and it is very intuitive.
This approach has some benefits. For example, one concentration might lead to faster growth initially but not peak as high, for whatever reason--maybe it becomes slightly toxic with time. This analysis could potentially detect that.
For more help we'd need to know the type of growth you expect from your specimen. If it's bacteria there are a ton of ways to do growth curves, see e.g. the classic paper by Zwietering et al here: [http://aem.asm.org/cgi/reprint/56/6/1875.pdf](http://aem.asm.org/cgi/reprint/56/6/1875.pdf).
| null | CC BY-SA 2.5 | null | 2010-12-10T06:07:01.160 | 2010-12-10T06:07:01.160 | null | null | 2073 | null |
5340 | 1 | 5357 | null | 5 | 31134 | In Excel or Google Docs you can readily construct a Cumulative Distribution Function for the Normal distribution using the NORMDIST function (X, Mean, Standard Deviation). I am trying to do the same for a Lognormal distribution using the LOGNORMALDIST function that has the same parameters (X, Mean, Standard Deviation). I have played around with using X or LN(X) on the mentioned parameters. But, I get absurd results.
Do you know how to generate a Lognormal Cumulative Distribution Function using LOGNORMALDIST?
Thanks in advance for any assistance. I will readily reward anyone with a good answer with a positive vote.
| How to use the LOGNORMALDIST function to generate a Cumulative Distribution Function? | CC BY-SA 2.5 | null | 2010-12-10T06:17:00.830 | 2010-12-10T17:11:44.917 | 2010-12-10T17:11:44.917 | 919 | 1329 | [
"distributions",
"excel",
"p-value"
]
|
5341 | 2 | null | 5330 | 2 | null | Are you not looking for a method for studying the [dose-response relationship](http://en.wikipedia.org/wiki/Dose-response_relationship)? From the description you gave, I guess your growth curves are highly non-linear. Moreover, the problem of using simple ANOVA to compare two growth curves (i.e., the equivalent of time-course curves) is that it would be blind to the fact that the different doses are administered in a sequential order, though it should give a reasonable first idea of some kind of differences between the two, if any. However, multiple comparisons are likely not to answer the question you really want to ask, since we often expect very few differences at baseline and larger ones at intermediate concentration levels, whereas what you generally want to assess is the lowest concentration level that yields a significant difference between the two products.
If you are using R, the [drc](http://cran.r-project.org/web/packages/drc/index.html) package (see also [www.bioassay.dk](http://www.bioassay.dk)) allows to fit various models to DR data, and plot the resulting DR curve like the one shown below. It was generated from the on-line help with data `secalonic` whose description is:
>
Data stem from an experiment assessing
the inhibitory effect of secalonic
acids on plant growth.
Gong, X. and Zeng, R. and Luo, S. and
Yong, C. and Zheng, Q. (2004) Two new
secalonic acids from Aspergillus
Japonicus and their allelopathic
effects on higher plants, Proceedings
of International Symposium on
Allelopathy Research and Application,
27-29 April, Shanshui, Guangdong,
China (Editors: R. Zeng and S. Luo),
209-217.
Ritz, C (2009) Towards a unified
approach to dose-response modeling in
ecotoxicology To appear in Environ
Toxicol Chem.

| null | CC BY-SA 2.5 | null | 2010-12-10T08:09:31.127 | 2010-12-10T08:09:31.127 | null | null | 930 | null |
5342 | 2 | null | 5340 | 4 | null | On Microsoft Office for Mac 2008 `LOGNORMDIST(5;1;1)` gives `0,728882893` in Excel. On R `plnorm(5,1,1)` gives `0.7288829`. In R you need to supply mean and standard deviation on log scale, so it seems that in Excel you need to do the same.
| null | CC BY-SA 2.5 | null | 2010-12-10T08:19:06.807 | 2010-12-10T08:19:06.807 | null | null | 2116 | null |
5343 | 1 | null | null | 4 | 1515 | I am trying to predict the outcome of a random variable x, which is a real-valued number. In some cases I can observe another variable y1, which should approximate x. I model y1 as a Gaussian distribution with mean of 0 and an empirically estimated standard deviation, and use the probability density function of that Gaussian to predict x.
In some cases I also have a second observation y2, which I can model similarly as a Gaussian.
What is the appropriate way to combine y1 and y2 into an estimate of x? Should I add the distributions and model x as a mixture of Gaussians, or should I multiply them? Or, should I try both and pick the answer that maximizes the probability of sampling an independent set of data?
| Combining Probability Density Functions | CC BY-SA 2.5 | null | 2010-12-10T08:20:25.247 | 2010-12-12T19:04:16.523 | 2010-12-10T17:25:18.083 | 2328 | 2328 | [
"distributions",
"probability",
"normal-distribution"
]
|
5344 | 1 | 5345 | null | 97 | 59253 | I have fit a few mixed effects models (particularly longitudinal models) using `lme4` in `R` but would like to really master the models and the code that goes with them.
However, before diving in with both feet (and buying some books) I want to be sure that I am learning the right library. I have used `lme4` up to now because I just found it easier than `nlme`, but if `nlme` is better for my purposes then I feel I should use that.
I'm sure neither is "better" in a simplistic way, but I would value some opinions or thoughts. My main criteria are:
- Easy to use (I'm a psychologist by training, and not particularly versed in statistics or coding, but I'm learning)
- Good features for fitting longitudinal data (if there is a difference here- but this is what I mainly use them for)
- Good (easy to interpret) graphical summaries, again not sure if there is a difference here but I often produce graphs for people even less technical than I, so nice clear plots are always good (I'm very fond of the xyplot function in the lattice package for this reason).
| How to choose nlme or lme4 R library for mixed effects models? | CC BY-SA 4.0 | null | 2010-12-10T09:31:14.580 | 2023-01-04T20:17:15.790 | 2023-01-04T20:17:15.790 | 119468 | 199 | [
"r",
"mixed-model",
"lme4-nlme"
]
|
5345 | 2 | null | 5344 | 77 | null | Both packages use `Lattice` as the backend, but `nlme` has some nice features like `groupedData()` and `lmList()` that are lacking in `lme4` (IMO). From a practical perspective, the two most important criteria seem, however, that
- lme4 extends nlme with other link functions: in nlme, you cannot fit outcomes whose distribution is not gaussian, lme4 can be used to fit mixed-effects logistic regression, for example.
- in nlme, it is possible to specify the variance-covariance matrix for the random effects (e.g. an AR(1)); it is not possible in lme4.
Now, `lme4` can easily handle a very large number of random effects (hence, of individuals in a given study) thanks to its C code and the use of sparse matrices. The `nlme` package has somewhat been superseded by `lme4`, so I wouldn't expect people to spend much time developing add-ons on top of `nlme`. Personally, when I have a continuous response in my model, I tend to use both packages, but I'm now used to the `lme4` way of fitting GLMMs.
Rather than buying a book, take a look first at the Doug Bates' draft book on R-forge: [lme4: Mixed-effects Modeling with R](http://lme4.r-forge.r-project.org/book/).
| null | CC BY-SA 3.0 | null | 2010-12-10T10:40:25.643 | 2015-01-23T12:11:27.467 | 2015-01-23T12:11:27.467 | 35034 | 930 | null |
5346 | 1 | null | null | 8 | 6812 | I understand that in One-Way ANOVA two alternative F-ratios have been derived that are robust when homogeneity of variance has been violated. Tomarken and Serlin (1986) review, amongst other techniques, the Brown-Forsythe and Welch F-ratios and conclude that both control the type I error rate well.
So far I have only come across the Welch and Brown-Forsythe F-ratios in one-way ANOVAs. Am I able to use them in ANOVAs with two factors? And if not, why not?
Thanks
| Brown-Forsythe and Welch f-ratios in two-way ANOVAs? | CC BY-SA 2.5 | null | 2010-12-10T10:54:27.050 | 2016-09-05T14:04:50.877 | 2010-12-10T11:03:12.177 | null | null | [
"anova",
"heteroscedasticity"
]
|
5347 | 1 | 5358 | null | 43 | 13545 | I am modeling a random variable ($Y$) which is the sum of some ~15-40k independent Bernoulli random variables ($X_i$), each with a different success probability ($p_i$). Formally, $Y=\sum X_i$ where $\Pr(X_i=1)=p_i$ and $\Pr(X_i=0)=1-p_i$.
I am interested in quickly answering queries such as $\Pr(Y\leq k)$ (where $k$ is given).
Currently, I use random simulations to answer such queries. I randomly draw each $X_i$ according to its $p_i$, then sum all $X_i$ values to get $Y'$. I repeat this process a few thousand times and return the fraction of times that $Y'\leq k$.
Obviously, this is not totally accurate (although accuracy greatly increases as the number of simulations increases). Also, it seems I have enough data about the distribution to avoid the use of simulations. Can you think of a reasonable way to get the exact probability $\Pr(Y\leq k)$?
p.s.
I use Perl & R.
EDIT
Following the responses I thought some clarifications might be needed. I will shortly describe the setting of my problem. Given is a circular genome with circumference `c` and a set of `n` ranges mapped to it. For example, `c=3*10^9` and `ranges={[100,200],[50,1000],[3*10^9-1,1000],...}`. Note all ranges are closed (both ends are inclusive). Also note that we only deal with integers (whole units).
I am looking for regions on the circle that are undercovered by the given `n` mapped ranges. So to test whether a given range of length `x` on the circle is undercovered, I test the hypothesis that the `n` ranges are mapped randomly. The probability a mapped range of length `q>x` will fully cover the given range of length `x` is `(q-x)/c`. This probability becomes quite small when `c` is large and/or `q` is small. What I'm interested in is the number of ranges (out of `n`) which cover `x`. This is how `Y` is formed.
I test my null hypothesis vs. a one-sided alternative (undercoverage). Also note I am testing multiple hypotheses (different `x` lengths), and I make sure to correct for this.
| How can I efficiently model the sum of Bernoulli random variables? | CC BY-SA 4.0 | null | 2010-12-10T11:06:05.337 | 2023-04-29T06:26:57.767 | 2023-04-29T06:26:57.767 | 362671 | 634 | [
"r",
"distributions",
"binomial-distribution",
"random-variable",
"poisson-binomial-distribution"
]
|
5349 | 2 | null | 5347 | 8 | null | With different $p_i$, your best bet, I think, is a normal approximation. Let $B_n=\sum_{i=1}^np_i(1-p_i)$. Then
\begin{align*}
B_n^{-1/2}\left(\sum_{i=1}^nX_i-\sum_{i=1}^np_i\right)\to N(0,1),
\end{align*}
as $n\to\infty$, provided that for each $\varepsilon>0$
\begin{align*}
B_n^{-1}\sum_{i=1}^nE\left((X_i-p_i)^2\mathbf{1}\{|X_i-p_i|>\varepsilon B_n^{1/2}\}\right)\to 0,
\end{align*}
as $n\to\infty$, which for Bernoulli variables will hold if $B_n\to\infty$. This is the so-called Lindeberg condition, which is sufficient and necessary for convergence to the standard normal.
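In R, the resulting approximation to $\Pr(Y\le k)$ is then simply (a sketch; `p` is your vector of success probabilities and `k` the threshold):

```
mu    <- sum(p)
sigma <- sqrt(sum(p * (1 - p)))
pnorm(k + 0.5, mean = mu, sd = sigma)  # approximate P(Y <= k), with an optional continuity correction
```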
Update:
The approximation error can be calculated from the following inequality:
\begin{align*}
\sup_x|F_n(x)-\Phi(x)|\le AL_n,
\end{align*}
where
\begin{align*}
L_n=B_n^{-3/2}\sum_{i=1}^nE|X_i-p_i|^3
\end{align*}
and $F_n$ is the cdf of the scaled and centered sum of $X_i$.
As whuber pointed out, the convergence can be slow for badly behaved $p_i$. For $p_i=\frac{1}{1+i}$ we have $B_n\approx \ln n$ and $L_n\approx (\ln n)^{-1/2}$. Then taking $n=2^{16}$ we get that the maximum deviation from the standard normal cdf is a whopping 0.3.
| null | CC BY-SA 3.0 | null | 2010-12-10T12:06:50.643 | 2013-11-21T11:13:07.207 | 2013-11-21T11:13:07.207 | 2116 | 2116 | null |
5350 | 2 | null | 5344 | 56 | null | As chl pointed out, the main difference is what kind of variance-covariance structure you can specify for the random effects. In `lme4` you can specify either:
- diagonal covariance structures (i.e., enforce mutually uncorrelated random effects via syntax like ~ (1 | group)+ (0 + x1 | group) + (0 + x2 | group))
- or unstructured covariance matrices (i.e. all correlations are estimated, ~ (1 + x1 + x2 | group))
- or partially diagonal, partially unstructured covariance (y ~ (1 + x1 | group) + (0 + x2 | group), where you would estimate a correlation between the random intercept and random slope for x1, but no correlations between the random slope for x2 and the random intercept and between the random slope for x2 and the random slope for x1).
`nlme` offers a much broader class of covariance structures for the random effects. My experience is that the flexibility of `lme4` is sufficient for most applications, however.
I'd also add a third difference in capabilities that may be more relevant for many longitudinal data situations: `nlme` lets you specify variance-covariance structures for the residuals (i.e. spatial or temporal autocorrelation, heteroskedasticity, or covariate-dependent variability) in the `weights` argument (cf. `?varFunc`), while `lme4` only allows fixed prior weights for the observations.
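For example (only a sketch with hypothetical variable names, not a recommendation of any particular structure):

```
library(nlme)
# hypothetical longitudinal fit: AR(1) residuals within subject and
# a residual variance that differs by treatment group
fit <- lme(y ~ time * treat, random = ~ 1 | subject,
           correlation = corAR1(form = ~ 1 | subject),
           weights = varIdent(form = ~ 1 | treat),
           data = longdat)
```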
A fourth difference is that it can be difficult to get `nlme` to fit (partially) crossed random effects, while that's a non-issue in `lme4`.
You'll probably be fine if you stick with `lme4`.
| null | CC BY-SA 2.5 | null | 2010-12-10T13:18:17.567 | 2010-12-10T13:18:17.567 | null | null | 1979 | null |
5351 | 1 | 5428 | null | 14 | 9371 | I asked this [question](https://stackoverflow.com/questions/4403433/calculate-newy-west-standard-errors-without-an-an-lm-object-in-r) yesterday on StackOverflow, and got an answer, but we agreed that it seems a bit hackish and there may be a better way to look at it.
The question: I would like calculate the Newey-West (HAC) standard errors for a vector (in this case a vector of stock returns). The function `NeweyWest()` in the `sandwich` package does this, but takes an `lm` object as an input. The solution Joris Meys offered is to project the vector onto 1, which turns my vector into residuals to feed into `NeweyWest()`. That is:
```
as.numeric(NeweyWest(lm(rnorm(100) ~ 1)))
```
for the variance of the mean.
Should I be doing it like this? Or is there a way to more directly do what I want? Thanks!
| Calculate Newey-West standard errors without an lm object in R | CC BY-SA 2.5 | null | 2010-12-10T14:30:40.167 | 2014-01-02T14:13:55.960 | 2017-05-23T12:39:26.203 | -1 | 1445 | [
"r",
"standard-error",
"autocorrelation",
"heteroscedasticity"
]
|
5352 | 1 | 5353 | null | 4 | 363 | I have a dataset where the difficulty level of my items and the skill of my participants are unknown, but I would like to be able to extract some metric of how individual participants do on the test. What comes to mind initially is to use an item response theory model. However, I have two a priori constraints that make me question whether IRT will be able to adapt to my peculiar case, or whether there is a better approach to use. As there are two constraints, I will split them into two questions. If these questions are too simple, please refer me to an entry-level IRT text which will provide the answers. The first question is this:
I know that the difficulty of my items increases monotonically, by which I mean I know item A is harder than item B, and so on, but I do not know the extent to which item A is harder than item B. Is there a way to constrain an item response theory model so that the item difficulty parameters are forced to reflect this ordering?
P.S. I was unable to think of appropriate tags for this question; if you have some suggestions, please either make the edits yourself or suggest them in comments.
| Is there a way to place a priori constraints on item difficulty under item response theory? | CC BY-SA 2.5 | null | 2010-12-10T15:17:37.817 | 2016-10-04T11:33:14.597 | 2016-10-04T11:33:14.597 | 35989 | 196 | [
"psychometrics",
"item-response-theory"
]
|
5353 | 2 | null | 5352 | 5 | null | Off the shelf, the various software offerings from SSI may do what you want.
If I wanted to enforce the constraint, and I didn't have too much data, then I'd switch into a BUGS / JAGS MCMC framework for modelling. I'd use the LSAT example in book 2 of the worked examples (contained in the JAGS distribution or as a separate download from the BUGS people) as the code for a standard '3 parameter' IRT model. Then I'd add a line imposing the parameter constraint as described on p.35 of the current JAGS manual. I'd then curl up with a good book, since convergence on such models tends to take ages.
Or I might not bother. Standard errors for such parameters tend to be pretty tiny for a respectable number of subjects, so the ordering constraint may not make any difference to your estimation certainty. And if your parameters end up sitting on top of one another, they're probably trying to tell you your prior is wrong.
| null | CC BY-SA 2.5 | null | 2010-12-10T16:15:26.540 | 2010-12-10T16:15:26.540 | null | null | 1739 | null |
5354 | 1 | 5356 | null | 49 | 151056 | I've got some data about airline flights (in a data frame called `flights`) and I would like to see if the flight time has any effect on the probability of a significantly delayed arrival (meaning 10 or more minutes). I figured I'd use logistic regression, with the flight time as the predictor and whether or not each flight was significantly delayed (a bunch of Bernoullis) as the response. I used the following code...
```
flights$BigDelay <- flights$ArrDelay >= 10
delay.model <- glm(BigDelay ~ ArrDelay, data=flights, family=binomial(link="logit"))
summary(delay.model)
```
...but got the following output.
```
> flights$BigDelay <- flights$ArrDelay >= 10
> delay.model <- glm(BigDelay ~ ArrDelay, data=flights, family=binomial(link="logit"))
Warning messages:
1: In glm.fit(x = X, y = Y, weights = weights, start = start, etastart = etastart, :
algorithm did not converge
2: In glm.fit(x = X, y = Y, weights = weights, start = start, etastart = etastart, :
fitted probabilities numerically 0 or 1 occurred
> summary(delay.model)
Call:
glm(formula = BigDelay ~ ArrDelay, family = binomial(link = "logit"),
data = flights)
Deviance Residuals:
Min 1Q Median 3Q Max
-3.843e-04 -2.107e-08 -2.107e-08 2.107e-08 3.814e-04
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -312.14 170.26 -1.833 0.0668 .
ArrDelay 32.86 17.92 1.833 0.0668 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 2.8375e+06 on 2291292 degrees of freedom
Residual deviance: 9.1675e-03 on 2291291 degrees of freedom
AIC: 4.0092
Number of Fisher Scoring iterations: 25
```
What does it mean that the algorithm did not converge? I thought it might be because the `BigDelay` values were `TRUE` and `FALSE` instead of `0` and `1`, but I got the same error after I converted everything. Any ideas?
| Logistic regression model does not converge | CC BY-SA 2.5 | null | 2010-12-10T16:28:53.980 | 2021-07-15T01:28:11.177 | 2016-01-19T18:50:12.203 | 22311 | 1973 | [
"r",
"logistic",
"separation"
]
|
5356 | 2 | null | 5354 | 40 | null | `glm()` uses an iterative re-weighted least squares algorithm. The algorithm hit the maximum number of allowed iterations before signalling convergence. The default, documented in `?glm.control` is 25. You pass control parameters as a list in the `glm` call:
```
delay.model <- glm(BigDelay ~ ArrDelay, data=flights, family=binomial,
control = list(maxit = 50))
```
As @Conjugate Prior says, you seem to be predicting the response with the data used to generate it. You have complete separation as any `ArrDelay < 10` will predict `FALSE` and any `ArrDelay >= 10` will predict `TRUE`. The other warning message tells you that the fitted probabilities for some observations were effectively 0 or 1 and that is a good indicator you have something wrong with the model.
The two warnings can go hand in hand. The likelihood function can be quite flat when some $\hat{\beta}_i$ get large, as in your example. If you allow more iterations, the model coefficients will diverge further if you have a separation issue.
| null | CC BY-SA 4.0 | null | 2010-12-10T17:00:15.503 | 2021-07-15T01:28:11.177 | 2021-07-15T01:28:11.177 | 182175 | 1390 | null |