Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8239
|
1
| null | null |
8
|
496
|
I have just discovered, through [this answer](https://stats.stackexchange.com/questions/8232/understanding-chi-squared-distribution/8233#8233), a [nice video resource](http://www.youtube.com/user/khanacademy#grid/user/1328115D3D8A2566) to share with students in introductory statistics courses. The lectures, produced by the [Khan Academy](http://www.khanacademy.org/#Statistics), are fairly didactic and well illustrated. I could not find the author of the videos.
Is there another set of video courses which I could recommend to my students, who are social scientists taking an introductory course in statistics?
The aforementioned set is software-independent and free to watch, two conditions that I need to maintain.
|
Introductory statistics video courses for social scientists
|
CC BY-SA 3.0
| null |
2011-03-13T15:25:27.093
|
2012-12-10T02:39:58.003
|
2017-04-13T12:44:33.310
|
-1
|
3582
|
[
"references"
] |
8240
|
1
| null | null |
7
|
188
|
I have N uniform-random points $p_j$ in a box in $E^d$,
$a_i \le x_i \le b_i$,
and want to estimate the expected distance of the point nearest the origin in $L_q$:
$\quad$ nearest( points $p_j$, box $a_i .. b_i$, $q$ ) $\;\equiv\; \min_{j=1..N} \sum_{i=1..d} |p_{ji}|^q$
The box might straddle 0 in some dimensions, $a_i < 0 < b_i$.
Also I'd like the estimator to work for $0 < q < 1$ too, the "fractional metric".
(16 March)
Let's try to do a simpler case: $L_1$, unit cube 0 $\le x_i \le$ 1.
Geometrically, we want (correct me) the diagonal slice
of the unit cube with volume $\frac{1}{N+1}$.
(Does anyone have a picture or applet
of a 3d cube sliced into equal-volume slices ?)
If d is big enough for a central limit theorem to hold,
$\quad \sum_{i=1..d} \text{uniform}_i \ \sim\ \mathcal{N}\left( \frac{d}{2}, \frac{d}{12} \right)$;
so
$\mathcal{N}^{-1}( \frac{1}{N+1} )$ gives approximately the cut,
and the expected nearest distance, that I want.
In Python with scipy.stats, this is
```
import numpy as np
from scipy.stats import norm

def cutcube( dim, vol ):
    """ cut a slice of the unit cube in E^dim with volume vol
        normal approximation: cutcube( 3, 1/6 ) = .339 ~ 1/3
        vol 1/(N+1) -> E nearest of N random points in the cube ?
    """
    return norm.ppf( vol, loc=dim/2, scale=np.sqrt( dim/12 )) / dim
cutcube( dim=2, vol=1/10 ) = 0.24
cutcube( dim=4, vol=1/10 ) = 0.32
cutcube( dim=8, vol=1/10 ) = 0.37
cutcube( dim=16, vol=1/10 ) = 0.41
cutcube( dim=32, vol=1/10 ) = 0.43
```
|
Estimate the nearest of N random points in a box in E^d?
|
CC BY-SA 2.5
| null |
2011-03-13T16:10:01.597
|
2011-03-18T13:20:32.560
|
2011-03-18T13:20:32.560
|
557
|
557
|
[
"central-limit-theorem",
"extreme-value"
] |
8241
|
2
| null |
8240
|
6
| null |
First, here are some commonly known facts which will be useful. Suppose i.i.d. $X_1, \cdots, X_n$ have the cumulative distribution function $F(x) = P[X \leq x]$; then the cumulative distribution function of $\min X_i$ is $G(x) = 1-(1-F(x))^n.$ The expected value of a nonnegative random variable in terms of its cdf $G$ is $E[X] = \int_0^\infty (1 - G(x))\, dx$. The median is $G^{-1}(1/2)$.
In this problem, the cdf of the distance from the center is
$$F(r) = P[|X| \leq r] = \frac{vol(B_q(r) \cap E^d)}{vol(E^d)}$$
where $B_q(r) = \{x: |x| \leq r\}$ and $vol(S)$ denotes the volume (Lebesgue measure) of $S$.
The tricky part is evaluating $vol(B_q(r) \cap E^d)$.
We will deal with the special case that $a_i \geq 0$ for $1 \leq i \leq d$.
The general case follows from subdividing $E^d$ by quadrant.
Since we are working with the $q$-norm, it will be invaluable to do a change of coordinates, letting $y_i = x_i^q$. (Remember that we are assuming that the $x_i$ are nonnegative.) Then we have $\frac{dy_i}{dx_i} = qx_i^{q-1}$, so
$$dx_i = q^{-1} x_i^{1-q}dy_i = (1/q) y_i^{(1-q)/q} dy_i.$$
For notational clarity, we illustrate the calculation for $d=2$, and for general $d$ our methods should extend in the obvious way.
Assuming that $|(a_1, a_2)| \leq r$, we have
$$
vol(B_q(r) \cap E^d) = \int_{a_1}^{\min(b_1, r^q)} \int_{a_2}^{\min(b_2, r^q - y_1)} (1/q) y_2^{(1-q)/q} dy_2 (1/q) y_1^{(1-q)/q} dy_1
$$
$$= (1/q)^2 \int_{a_1}^{\min(b_1, r^q)} \int_{a_2}^{\min(b_2, r^q - y_1)} y_2^{(1-q)/q} dy_2 y_1^{(1-q)/q} dy_1$$
$$=(1/q)^2 \int_{a_1}^{\min(b_1, r^q)} q [\min(b_2, r^q - y_1)^{1/q} - a_2^{1/q}]y_1^{(1-q)/q} dy_1$$
$$=(1/q) \int_{a_1}^{\min(b_1, r^q)} [\min(b_2, r^q - y_1)^{1/q} - a_2^{1/q}]y_1^{(1-q)/q} dy_1$$
We will want to split the preceding integral into the cases $y_1 \leq r^q - b_2$ (where $\min(b_2, r^q - y_1) = b_2$) and $y_1 > r^q - b_2$ (where $\min(b_2, r^q - y_1) = r^q - y_1$). The integral in the first case is
$$(1/q) \int_{a_1}^{r^q - b_2} [b_2^{1/q} - a_2^{1/q}]y_1^{(1-q)/q} dy_1$$
and the integral in the second case is
$$
(1/q) \int_{r^q - b_2}^{\min(b_1, r^q)} [(r^q - y_1)^{1/q} - a_2^{1/q}]y_1^{(1-q)/q} dy_1.
$$
The reader is strongly advised to find a Computer Algebra System at this point.
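As a rough numerical check of the above (a minimal Monte Carlo sketch with an arbitrary example box, $N$, and $q$; `nearest_mc` and `reps` are names introduced here, not part of the derivation):
```
## Monte Carlo estimate of the expected nearest distance, as defined in the question
nearest_mc <- function(N, a, b, q, reps = 2000) {
  d <- length(a)
  mean(replicate(reps, {
    ## N uniform points in the box [a_1,b_1] x ... x [a_d,b_d] (column-major fill)
    p <- matrix(runif(N * d, rep(a, each = N), rep(b, each = N)), nrow = N)
    min(rowSums(abs(p)^q))   # min over points of sum_i |p_ji|^q
  }))
}
nearest_mc(N = 10, a = c(0, 0), b = c(1, 1), q = 1)
```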
| null |
CC BY-SA 2.5
| null |
2011-03-13T16:51:04.557
|
2011-03-14T02:20:23.793
|
2011-03-14T02:20:23.793
|
3567
|
3567
| null |
8242
|
1
|
8245
| null |
18
|
8784
|
I would like to use R to plot out something like this:

It would seem possible but highly complex to keep track of the coordinates, width, height, etc. Intuitively it would seem best to treat each cell as a new plot and transform the coordinates for each cell. Is there a way to do this in R?
thanks!
|
Plotting sparklines in R
|
CC BY-SA 2.5
| null |
2011-03-13T17:21:18.183
|
2016-01-24T15:15:46.383
|
2011-03-13T20:20:12.920
| null |
812
|
[
"r",
"data-visualization",
"tables"
] |
8243
|
1
| null | null |
1
|
3900
|
I need to find partial derivatives for:
$L=\frac{1}{2}\sum_{i=1}^{n} w_{i}^{2}\sigma_{i}^{2} -\lambda \left( \sum_{i=1}^{n} w_{i} \bar{r_{i}} - \bar{r} \right) -\mu \left( \sum_{i=1}^{n} w_{i}-1 \right)$
with 5 variables, I get 5 partial derivatives $\frac{\partial L}{\partial w_{1}}$, $\frac{\partial L}{\partial w_{2}}$, $\frac{\partial L}{\partial w_{3}}$, $\frac{\partial L}{\partial \lambda}$ and $\frac{\partial L}{\partial \mu}$ with which I need to solve for $w_{1}$, $w_{2}$ and $w_{3}$. For all $i$, $\sigma_{i}$ is known. I don't want to differentiate manually; it could be more cumbersome with more restrictions. Later, I need to find the point where the minimum variance is located, i.e. to find the optimal values for $\lambda$, $\mu$ and $w_{i}$ for all $i$ where $i\in\{1,2,3\}$. I need to vary the 5 variables to find the optimal solutions (a bit like in Excel, but nothing like that is currently available to me) and then show the frontier graphically. So my question is which program would you use to solve such a problem?
|
Program to compute partial derivatives
|
CC BY-SA 2.5
| null |
2011-03-13T17:25:26.993
|
2011-07-08T20:37:16.110
|
2011-03-13T20:10:10.260
|
919
|
2914
|
[
"optimization"
] |
8245
|
2
| null |
8242
|
22
| null |
I initially managed to produce something approaching your original picture with some quick and dirty R code (see this [gist](https://gist.github.com/868376)), until I discovered that the [sparkTable](http://cran.r-project.org/web/packages/sparkTable/index.html) package should do this very much better, provided you are willing to use $\LaTeX$. (In the meantime, it has also been pointed out by @Bernd!)
Here is an example, from `help(sparkEPS)`:

It should not be too difficult to arrange this the way you want.
| null |
CC BY-SA 2.5
| null |
2011-03-13T19:59:54.457
|
2011-03-13T19:59:54.457
| null | null |
930
| null |
8246
|
1
| null | null |
4
|
585
|
I am evaluating a Quadratic Discriminant Analysis (QDA) classifier on a high-dimensional feature set. The features come from highly non-Gaussian distributions. However, when I transform the features to each have a Gaussian distribution in two different ways, the resulting classifier performs worse than QDA applied to the raw features on the following three metrics:
- Accuracy (classes are balanced)
- Area under the ROC
- A probabilistic metric
The first way of transforming each feature to a Gaussian distribution disregarded the class. For each feature, it found the parameters of the corresponding distribution, used the distribution's CDF (or an approximation of it) for each data point, and then applied the Gaussian's inverse CDF.
The second way did the same thing, but included class labels and treated data from each class independently.
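For reference, a minimal sketch of a rank-based variant of such a Gaussianizing transform (the versions described above fit a parametric CDF instead; the exponential data here are made-up):
```
## map each feature through its empirical CDF, then the Gaussian inverse CDF
gaussianize <- function(x) qnorm(rank(x) / (length(x) + 1))
set.seed(1)
x <- rexp(1000)      # a made-up, highly non-Gaussian feature
z <- gaussianize(x)  # approximately N(0, 1) afterwards
c(skew_before = mean(((x - mean(x)) / sd(x))^3),
  skew_after  = mean(((z - mean(z)) / sd(z))^3))
```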
Any idea why this occurs?
I have confirmed that it is not due to ...
- A bug in the code
- Distributions changing from train set to test set
|
Using QDA for Non-Gaussian distributions
|
CC BY-SA 2.5
|
0
|
2011-03-13T20:45:44.760
|
2014-10-21T08:35:13.920
| null | null |
3595
|
[
"distributions",
"classification",
"discriminant-analysis"
] |
8247
|
2
| null |
8237
|
23
| null |
In the simplest case, when you have only one predictor (simple regression), say $X_1$, the $F$-test tells you whether including $X_1$ does explain a larger part of the variance observed in $Y$ compared to the null model (intercept only). The idea is then to test if the added explained variance (total variance, TSS, minus residual variance, RSS) is large enough to be considered as a "significant quantity". We are here comparing a model with one predictor, or explanatory variable, to a baseline which is just "noise" (nothing except the grand mean).
Likewise, you can compute an $F$ statistic in a multiple regression setting: In this case, it amounts to a test of all predictors included in the model, which under the HT framework means that we wonder whether any of them is useful in predicting the response variable. This is the reason why you may encounter situations where the $F$-test for the whole model is significant whereas some of the $t$ or $z$-tests associated to each regression coefficient are not.
The $F$ statistic looks like
$$ F = \frac{(\text{TSS}-\text{RSS})/(p-1)}{\text{RSS}/(n-p)},$$
where $p$ is the number of model parameters and $n$ the number of observations. This quantity should be referred to an $F_{p-1,n-p}$ distribution for a critical or $p$-value. It applies for the simple regression model as well, and obviously bears some analogy with the classical ANOVA framework.
Sidenote.
When you have more than one predictor, you may wonder whether considering only a subset of those predictors "reduces" the quality of model fit. This corresponds to a situation where we consider nested models. It is exactly the same situation as the one above, where we compare a given regression model with a null model (no predictors included). In order to assess the reduction in explained variance, we can compare the residual sum of squares (RSS) from both models (that is, what is left unexplained once you account for the effect of the predictors present in the model). Let $\mathcal{M}_0$ and $\mathcal{M}_1$ denote the base model (with $p$ parameters) and a model with an additional predictor ($q=p+1$ parameters); then if $\text{RSS}_{\mathcal{M}_0}-\text{RSS}_{\mathcal{M}_1}$ is small, we would consider that the smaller model performs as well as the larger one. A good statistic to use would be the ratio of such SS, $(\text{RSS}_{\mathcal{M}_0}-\text{RSS}_{\mathcal{M}_1})/\text{RSS}_{\mathcal{M}_1}$, weighted by their degrees of freedom ($q-p$ for the numerator, and $n-q$ for the denominator). As already said, it can be shown that this quantity follows an $F$ (or Fisher-Snedecor) distribution with $q-p$ and $n-q$ degrees of freedom. If the observed $F$ is larger than the corresponding $F$ quantile at a given $\alpha$ (typically, $\alpha=0.05$), then we would conclude that the larger model does a "better job". (This by no means implies that the model is correct, from a practical point of view!)
A generalization of the above idea is the [likelihood ratio test](http://en.wikipedia.org/wiki/Likelihood-ratio_test).
If you are using R, you can play with the above concepts like this:
```
df <- transform(X <- as.data.frame(replicate(2, rnorm(100))),
y = V1+V2+rnorm(100))
## simple regression
anova(lm(y ~ V1, df)) # "ANOVA view"
summary(lm(y ~ V1, df)) # "Regression view"
## multiple regression
summary(lm0 <- lm(y ~ ., df))
lm1 <- update(lm0, . ~ . -V2) # reduced model
anova(lm1, lm0) # test of V2
```
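Continuing the code above, here is a minimal check of the $F$ formula computed by hand from the two models' residual sums of squares; it should agree with the output of `anova(lm1, lm0)`:
```
## manual F statistic for the nested comparison above (lm1 nested in lm0)
rss1 <- sum(residuals(lm1)^2)                  # reduced model
rss0 <- sum(residuals(lm0)^2)                  # full model
df_num <- lm1$df.residual - lm0$df.residual    # number of parameters tested
df_den <- lm0$df.residual
Fstat <- ((rss1 - rss0) / df_num) / (rss0 / df_den)
c(F = Fstat, p = pf(Fstat, df_num, df_den, lower.tail = FALSE))
```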
| null |
CC BY-SA 2.5
| null |
2011-03-13T21:04:34.900
|
2011-03-13T21:38:19.687
|
2011-03-13T21:38:19.687
|
930
|
930
| null |
8248
|
2
| null |
8236
|
2
| null |
First, the size of the dataset. I recommend taking small, tractable samples of the dataset (either by randomly choosing N data points, or by randomly choosing several relatively small rectangles in the X-Y plane and taking all points that fall within them) and then honing your analysis techniques on this subset. Once you have an idea of the form of analysis that works, you can apply it to larger portions of the dataset.
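A minimal sketch of that subsampling idea, assuming the data sit in a data frame `pts` with columns `x`, `y`, and `type` (names are placeholders):
```
set.seed(1)
## option 1: a random subsample of, say, 100,000 rows
sub <- pts[sample(nrow(pts), 1e5), ]
## option 2: all points inside one randomly placed w-by-h rectangle
w <- 10; h <- 10
x0 <- runif(1, min(pts$x), max(pts$x) - w)
y0 <- runif(1, min(pts$y), max(pts$y) - h)
rect_sub <- subset(pts, x >= x0 & x <= x0 + w & y >= y0 & y <= y0 + h)
```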
PCA is primarily used as a dimensionality reduction technique; your dataset is only three dimensions (one of which is categorical), so I doubt it would apply here.
Try working with Matlab or R to visualize the points you are analyzing in the X-Y plane (or their relative density if working with the entire data set), both for individual types and for all types combined, and see what patterns emerge visually. That can help guide a more rigorous analysis.
| null |
CC BY-SA 2.5
| null |
2011-03-13T21:29:15.393
|
2011-03-13T21:29:15.393
| null | null |
3595
| null |
8249
|
2
| null |
8182
|
1
| null |
The following example (taken from kernlab reference manual) shows you how to access the various components of the kernel PCA:
```
library(kernlab)  # provides kpca() and the accessor functions below
data(iris)
test <- sample(1:50,20)
kpc <- kpca(~.,data=iris[-test,-5],kernel="rbfdot",kpar=list(sigma=0.2),features=2)
pcv(kpc) # returns the principal component vectors
eig(kpc) # returns the eigenvalues
rotated(kpc) # returns the data projected in the (kernel) pca space
kernelf(kpc) # returns the kernel used when kpca was performed
```
Does this answer your question?
| null |
CC BY-SA 3.0
| null |
2011-03-13T23:02:22.433
|
2014-09-19T12:33:41.923
|
2014-09-19T12:33:41.923
|
28666
|
21360
| null |
8250
|
2
| null |
8243
|
7
| null |
Just a remark: is there an inequality constraint $\sum_{i=1}^{n} w_{i} \leq 1$ in your system? I merely ask because $\mu$ is usually the KKT multiplier for inequality constraints. If indeed there is an inequality constraint, you will need to satisfy more conditions than just $\nabla L = 0$ to attain optimality (i.e. dual feasibility, complementary slackness conditions, etc.), in which case you'd be better off using a proper optimization solver. I don't know if $\bar r_{i}, \bar r$ are constants and if the Hessian is positive semidefinite, but if so, this looks like a quadratic program (QP), and you can get solvers for this type of problem (e.g. `quadprog` in MATLAB).
But, if you know what you're doing.... here are some ways to get derivatives.
- Finite differencing. You did not mention if you wanted numerical or exact derivatives. If accuracy isn't too big a deal, a finite difference perturbation is the easiest option. Choose a small enough $h$ for your application and you're all set. http://en.wikipedia.org/wiki/Numerical_differentiation#Finite_difference_formulae
- Complex methods. If you want a more accurate derivative, you can also calculate a complex derivative. http://en.wikipedia.org/wiki/Numerical_differentiation#Complex_variable_methods. Essentially, the idea is this:
$$F'(x) \approx \frac{\mathrm{Im}(F(x + ih))}{h}
$$
Any programming language that implements a complex number type (e.g. Fortran) can be used to implement this. The derivative you get will be a real number. This is far more accurate than finite differencing, and for a sufficiently small $h$, you'll get a derivative that's pretty close to the exact derivative (to the limit of your machine precision). Note that you can only get 1st order derivatives using this method; it cannot be chained. Some people do interpolation to calculate 2nd order derivatives, but all the nice properties of the complex method are lost in that approach.
- Automatic differentiation (AD). This is the fastest and most accurate technique for obtaining numerical derivatives of any order (they can be made accurate to machine precision), but it's also the most complicated from a software point of view. Most optimization modeling languages (like AMPL or GAMS) provide AD facilities to solvers. This is a whole topic unto itself, but in short, if you're using MATLAB, you can use the INTLAB toolbox to quickly and easily calculate the derivatives you need. See this page for options: http://www.autodiff.org/?module=Tools
But for a system as small and as simple as yours, I'd just do it by hand. Or use a symbolic tool like Maxima (free) or Sage (also free, front end to Maxima).
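A minimal R sketch of the complex-step idea above, on an arbitrary test function (R's elementary functions accept complex arguments), compared with a forward finite difference:
```
f  <- function(x) sin(x) * exp(x)   # arbitrary test function
x0 <- 1.0
h  <- 1e-8
finite_diff  <- (f(x0 + h) - f(x0)) / h                        # forward difference
complex_step <- Im(f(complex(real = x0, imaginary = h))) / h   # complex-step estimate
exact        <- cos(x0) * exp(x0) + sin(x0) * exp(x0)
c(finite_diff = finite_diff, complex_step = complex_step, exact = exact)
```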
| null |
CC BY-SA 2.5
| null |
2011-03-14T03:05:50.457
|
2011-03-14T03:05:50.457
| null | null |
2833
| null |
8251
|
1
|
8259
| null |
11
|
6317
|
In a paper I was reading recently I came across the following bit in their data analysis section:
>
The data table was then split into tissues and cell lines, and the two subtables were separately median polished (the rows and columns were iteratively adjusted to have median 0) before being rejoined into a single table. We finally then selected for the subset of genes whose expression varied by at least 4-fold from the median in this sample set in at least three of the samples tested
I have to say I don't really follow the reasoning here. I was wondering if you could help me answer the following two questions:
- Why is it desirable/helpful to adjust the median in the datasets? Why should it be done separately for different types of samples?
- How is this not modifying the experimental data? Is this a known way of picking a number of genes/variables from a large set of data, or is it rather ad hoc?
Thanks,
|
The use of median polish for feature selection
|
CC BY-SA 2.5
| null |
2011-03-14T07:13:51.857
|
2011-03-14T13:06:06.607
|
2011-03-14T07:34:02.277
|
930
|
3014
|
[
"feature-selection",
"median",
"genetics"
] |
8252
|
2
| null |
8251
|
4
| null |
You may find some clues in pages 4 and 5 of [this](http://www.stats.ox.ac.uk/pub/MASS4/VR4stat.pdf)
It is a method of calculating residuals for the model
$$y_{i,j} = m + a_i + b_j + e_{i,j}$$
by calculating values for $m$, $a_i$ and $b_j$ so that if the $e_{i,j}$ are tabulated, the median of each row and of each column is 0.
The more conventional approach amounts to calculating values for $m$, $a_i$ and $b_j$ so that the mean (or sum) of each row and each column of residuals is 0.
The advantage of using the median is robustness to a small number of outliers; the disadvantage is that you are throwing away potentially useful information if there are no outliers.
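As a minimal sketch (the small matrix is made-up data), base R's `medpolish()` fits exactly this decomposition:
```
set.seed(42)
## made-up 4 x 3 table: overall + row effects + column effects + noise
y <- outer(c(0, 1, 2, 3), c(0, 5, 10), "+") + rnorm(12)
fit <- medpolish(y)
fit$overall    # estimate of m
fit$row        # estimates of the a_i
fit$col        # estimates of the b_j
fit$residuals  # e_ij, with row and column medians (approximately) zero
```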
| null |
CC BY-SA 2.5
| null |
2011-03-14T08:02:07.447
|
2011-03-14T08:02:07.447
| null | null |
2958
| null |
8253
|
2
| null |
8243
|
6
| null |
Here is an example implementation in R. R is flexible when it comes to combining symbolic differentiation and numerical optimisation. This means that you can get from a symbolic expression to a function used in numerical optimisation quite fast. The cost is of course speed, since hand-written functions will certainly run faster.
So first create the expression:
```
n <- 3
wi <- paste("w",1:n,sep="")
sigma<-paste("sigma",1:n,sep="")
lfun <- paste(paste(wi,sigma,sep="^2*"),collapse="+")
rr <- paste("r",1:n,sep="")
res1 <- paste(paste(wi,rr,sep="*"),collapse="+")
res2 <- paste(wi,collapse="+")
optfun <- paste("1/2*(",lfun,")-lambda*(",res1,"-barr)-mu*(",res2,"-1)",collapse="")
vars <- c(wi,"lambda","mu")
optexpr <- parse(text=optfun)
```
Note that I only did string manipulation which is pretty readable. The result is:
```
> optexpr
expression(1/2*( w1^2*sigma1+w2^2*sigma2+w3^2*sigma3 )-lambda*( w1*r1+w2*r2+w3*r3 -barr)-mu*( w1+w2+w3 -1))
attr(,"srcfile")
<text>
```
Now use symbolic differentiation to get the equations which we need to solve:
```
> gradexpr <- lapply(vars,function(l)D(optexpr,name=l))
> gradexpr
[[1]]
1/2 * (2 * w1 * sigma1) - lambda * r1 - mu
[[2]]
1/2 * (2 * w2 * sigma2) - lambda * r2 - mu
[[3]]
1/2 * (2 * w3 * sigma3) - lambda * r3 - mu
[[4]]
-(w1 * r1 + w2 * r2 + w3 * r3 - barr)
[[5]]
-(w1 + w2 + w3 - 1)
```
Now each of the variables is a separate symbol. For numerical optimisation it makes sense to combine the indexed `w`, `r`, and `sigma` into vectors. This involves a little black magic, but it is not hard. We need the following function:
```
subvars <- function(expr, tb) {
  if (length(expr) == 1) {
    nm <- deparse(expr)
    if (nm %in% tb[, 1]) {
      e <- tb[tb[, 1] == nm, 2]
      return(parse(text = e)[[1]])
    }
    else
      return(expr)
  }
  else {
    for (i in 2:length(expr)) {
      expr[[i]] <- subvars(expr[[i]], tb)
    }
  }
  return(expr)
}
```
This function substitutes the expressions according to a substitution table. Here is the end result.
```
> subtable <- rbind(cbind(wi,paste("w[",1:n,"]",sep="")),
cbind(sigma,paste("sigma[",1:n,"]",sep="")),
cbind(rr,paste("r[",1:n,"]",sep=""))
)
> eqs <- lapply(gradexpr,function(l)subvars(l,subtable))
> eqs
[[1]]
1/2 * (2 * w[1] * sigma[1]) - lambda * r[1] - mu
[[2]]
1/2 * (2 * w[2] * sigma[2]) - lambda * r[2] - mu
[[3]]
1/2 * (2 * w[3] * sigma[3]) - lambda * r[3] - mu
[[4]]
-(w[1] * r[1] + w[2] * r[2] + w[3] * r[3] - barr)
[[5]]
-(w[1] + w[2] + w[3] - 1)
```
Now write a function which evaluates the equations.
```
eqs.fun <- function(w, sigma, r, barr, lambda, mu) {
  cl <- match.call()
  args <- as.list(cl)
  args <- args[-1]
  env <- parent.frame()
  args <- lapply(args, eval, envir = env)
  sapply(eqs, function(l) eval(l, envir = args))
}
```
This is a human readable format, where we supply all the necessary data to the equations. For optimisation we need a variant of this function where the variables and the data are separated:
```
nl.eqs.fun <- function(p, sigma, r, barr) {
  eqs.fun(w = p[1:3], sigma = sigma, r = r, barr = barr, lambda = p[4], mu = p[5])
}
```
And now we can solve the equations. For this use function `nleqslv` from the package nleqslv:
```
library(nleqslv)
nleqslv(c(0,0,0,0,0), nl.eqs.fun, sigma=c(1,1,1), r=c(1,0.5,0.5), barr=1)
$x
[1] 1.000000e+00 -1.254481e-16 2.403703e-16 2.000000e+00 -1.000000e+00
$fvec
[1] -3.330669e-16 -5.551115e-16 -1.110223e-16 0.000000e+00 -2.220446e-16
$termcd
[1] 1
$message
[1] "Function criterion near zero"
$scalex
[1] 1 1 1 1 1
$nfcnt
[1] 1
$njcnt
[1] 1
```
We can confirm that the solution is correct with `solve.QP` from the `quadprog` package, since our problem is a quadratic optimisation problem.
```
library(quadprog)
solve.QP(Dmat=diag(c(1,1,1)), dvec=rep(0,3), Amat=cbind(c(1,0.5,0.5),c(1,1,1)), bvec=c(1,1), meq=2)
$solution
[1] 1.000000e+00 0.000000e+00 -1.110223e-16
$value
[1] 0.5
$unconstrained.solution
[1] 0 0 0
$iterations
[1] 3 0
$Lagrangian
[1] 2 1
$iact
[1] 1 2
```
In this case the approach described is overkill, since there is a readily available package to solve the problem. But it is not hard to extend it to any optimisation function, and you can also calculate the Jacobian of the equations very easily.
The main advantage is that this code is much easier to debug, and if you want to change your optimisation function, you only need to change the expressions; the conversion to numerical functions is automatic (as long as the data stay the same).
| null |
CC BY-SA 2.5
| null |
2011-03-14T09:10:13.773
|
2011-03-14T09:10:13.773
| null | null |
2116
| null |
8254
|
1
|
8255
| null |
12
|
10094
|
Let's say I am regressing Y on X1 and X2, where X1 is a numeric variable and X2 is a factor with four levels (A:D). Is there any way to write the linear regression function `lm(Y ~ X1 + as.factor(X2))` so that I can choose a particular level of X2 -- say, B -- as the baseline?
|
Choose factor level as dummy base in lm() in R
|
CC BY-SA 2.5
| null |
2011-03-14T09:16:32.003
|
2011-03-18T16:17:07.760
|
2011-03-18T16:17:07.760
| null |
3671
|
[
"r"
] |
8255
|
2
| null |
8254
|
15
| null |
You can use `relevel()` to change the baseline level of your factor. For instance,
```
> g <- gl(3, 2, labels=letters[1:3])
> g
[1] a a b b c c
Levels: a b c
> relevel(g, "b")
[1] a a b b c c
Levels: b a c
```
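A minimal sketch of how this plugs into `lm()`, assuming `Y`, `X1`, and `X2` from the question exist:
```
## make "B" the reference (baseline) level, then fit as usual
X2 <- relevel(factor(X2), ref = "B")
fit <- lm(Y ~ X1 + X2)
summary(fit)   # coefficients for A, C, D are contrasts against B
```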
| null |
CC BY-SA 2.5
| null |
2011-03-14T09:21:08.567
|
2011-03-14T09:21:08.567
| null | null |
930
| null |
8257
|
2
| null |
8251
|
3
| null |
Looks like you are reading a paper that has some gene differential expression analysis. Having done some research involving microarray chips, I can share what little knowledge (hopefully correct) I have about using median polish.
Using median polish during the summarization step of microarray preprocessing is somewhat of a standard way to rid data of outliers with perfect match probe only chips (at least for RMA).
Median polish for microarray data is where you have the chip effect and probe effect as your rows and columns:
for each probe set (composed of n copies of the same probe) across x chips:
```
chip1 chip2 chip3 ... chipx
probe1 iv iv iv ... iv
probe2 iv iv iv ... iv
probe3 iv iv iv ... iv
...
proben iv iv iv ... iv
```
where iv are intensity values
Because of the variability of the probe intensities, almost all analysis of microarray data is preprocessed using some sort of background correction and normalization before summarization.
Here are some links to the bioC mailing list threads that talk about using median polish vs other methods:
[https://stat.ethz.ch/pipermail/bioconductor/2004-May/004752.html](https://stat.ethz.ch/pipermail/bioconductor/2004-May/004752.html)
[https://stat.ethz.ch/pipermail/bioconductor/2004-May/004734.html](https://stat.ethz.ch/pipermail/bioconductor/2004-May/004734.html)
Data from tissues and cell lines are usually analysed separately because when cells are cultured their expression profiles change dramatically from collected tissue samples. Without having more of the paper it is difficult to say whether or not processing the samples separately was appropriate.
Normalization, background correction, and summarization steps in the analysis pipeline are all modifications of experimental data, but in its unprocessed state, the chip effects, batch effects, and processing effects would overshadow any signal for analysis. These microarray experiments generate lists of genes that are candidates for follow-up experiments (qPCR, etc.) to confirm the results.
As far as being ad hoc, ask 5 people what fold difference is required for a gene to be considered differentially expressed and you will come up with at least 3 different answers.
| null |
CC BY-SA 2.5
| null |
2011-03-14T10:18:31.373
|
2011-03-14T10:56:54.570
|
2011-03-14T10:56:54.570
|
3704
|
3704
| null |
8258
|
2
| null |
4099
|
4
| null |
I believe Belsley said that a condition index (CI) over 10 is indicative of a possible moderate problem, while over 30 is more severe.
In addition, though, you should look at the variance shared by sets of variables in the high condition indices. There is debate (or was, last time I read this literature) on whether collinearity that involved one variable and the intercept was problematic or not, and whether centering the offending variable got rid of the problem, or simply moved it elsewhere.
| null |
CC BY-SA 2.5
| null |
2011-03-14T10:57:21.963
|
2011-03-14T10:57:21.963
| null | null |
686
| null |
8259
|
2
| null |
8251
|
11
| null |
The Tukey median polish algorithm is used in the [RMA](http://bmbolstad.com/misc/ComputeRMAFAQ/ComputeRMAFAQ.html) normalization of microarrays. As you may be aware, microarray data are quite noisy, so a more robust way of estimating the probe intensities is needed, one that takes into account the observations for all the probes and microarrays. This is a typical model used for normalizing intensities of probes across arrays.
$$Y_{ij} = \mu_{i} + \alpha_{j} + \epsilon_{ij}$$
$$i=1,\ldots,I \qquad j=1,\ldots, J$$
Here $Y_{ij}$ is the $\log$-transformed PM intensity for the $i^{th}$ probe on the $j^{th}$ array. The $\epsilon_{ij}$ are background noise and can be assumed to correspond to the noise in normal linear regression. However, a distributional assumption on $\epsilon$ may be restrictive, therefore we use Tukey median polish to get the estimates $\hat{\mu_i}$ and $\hat{\alpha_j}$. This is a robust way of normalizing across arrays, as we want to separate the signal, the intensity due to the probe, from the array effect $\alpha$. We can obtain the signal by normalizing out the array effect $\hat{\alpha_j}$ for all the arrays. Thus, we are left with only the probe effects plus some random noise.
The link that I have quoted before uses Tukey median polish to estimate the differentially expressed or "interesting" genes by ranking by the probe effect. However, the paper is pretty old, and probably at that time people were still trying to figure out how to analyze microarray data. Efron's non-parametric empirical Bayesian methods paper came out in 2001, but was probably not yet widely used.
However, now we understand a lot about microarrays (statistically) and are pretty sure about their statistical analysis.
Microarray data are pretty noisy, and RMA (which uses median polish) is one of the most popular normalization methods, maybe because of its simplicity. Other popular and more sophisticated methods are GCRMA and VSN. It is important to normalize, as the interest is in the probe effect and not the array effect.
As you expect, the analysis could have benefited from methods which take advantage of borrowing information across genes. These include Bayesian or empirical Bayesian methods. Maybe the paper that you are reading is old and these techniques were not yet available.
Regarding your second point, yes they are probably modifying the experimental data. But, I think, this modification is for a better cause, hence justifiable. The reason being
a) Microarray data are pretty noisy. When the interest is the probe effect, normalizing the data by RMA, GCRMA, VSN, etc. is necessary, and taking advantage of any special structure in the data may be good. But I would avoid overdoing the second part, mainly because if we don't know the structure in advance, it is better not to impose a lot of assumptions.
b) Most microarray experiments are exploratory in nature, that is, the researchers are trying to narrow down to a small set of "interesting" genes for further analysis or experiments. If these genes have a strong signal, modifications like normalization should not (substantially) affect the final results.
Therefore, the modifications may be justified. But I must remark, overdoing the normalizations may lead to wrong results.
| null |
CC BY-SA 2.5
| null |
2011-03-14T11:43:23.427
|
2011-03-14T13:06:06.607
|
2011-03-14T13:06:06.607
|
1307
|
1307
| null |
8260
|
1
|
8270
| null |
3
|
572
|
I know that, in an orthogonal rotation, if the rotation matrix has a determinant of -1 then a reflection is present. Otherwise the determinant is +1 and we have pure rotation. May I extend this "sign-of-determinant" rule to non-orthogonal rotations, such as orthogonal-into-oblique axes or oblique-into-orthogonal axes rotations? For example, this matrix
```
.9427 .2544 .1665 .1377
-.0451 -.0902 -.9940 -.0421
.3325 .3900 .1600 .8437
.4052 .8702 .2269 .1644
```
is an oblique-to-orthogonal rotation (I think, because sums of squares in rows, not in columns, are 1). Its determinant is -0.524. May I state that the rotation contains a reflection? Thanks in advance.
|
Detecting reflection in non-orthogonal rotation
|
CC BY-SA 2.5
| null |
2011-03-14T12:12:27.620
|
2011-03-14T15:38:45.710
|
2011-03-14T15:38:45.710
|
919
|
3277
|
[
"rotation"
] |
8261
|
1
| null | null |
7
|
1910
|
If $X$ is an $m \times n$ matrix, where $m$ is the number of measurement types (variables) and $n$ is the number of samples, would it be correct to perform a PCA on a matrix that has $m \geq n$? If not, please provide some arguments why this would be a problem.
I remember having heard that doing such an analysis would be invalid, but the [Wikipedia page for PCA](http://en.wikipedia.org/wiki/Principal_component_analysis) doesn't mention a low $n/m$ ratio as being a potential limitation for using the method.
Please note that I am a biologist and aim at a more practical answer (if possible).
|
Is the matrix dimension important for performing a valid PCA?
|
CC BY-SA 2.5
| null |
2011-03-14T12:19:15.427
|
2011-12-01T01:01:28.680
| null | null |
3467
|
[
"pca"
] |
8263
|
2
| null |
8261
|
2
| null |
I do not think that you would get any useful information from such an analysis, as lore in my subject area (psychology) suggests a 10:1 ratio in favour of n as a precondition. In some circumstances (where communalities are high) you can get away with 5 or 3 to 1, but a ratio of less than 1 is probably a recipe for disaster.
| null |
CC BY-SA 2.5
| null |
2011-03-14T12:37:29.147
|
2011-03-14T12:37:29.147
| null | null |
656
| null |
8264
|
2
| null |
8261
|
3
| null |
PCA of variables. Number of observations n is low relative to number of variables.
1) Mathematical aspect. Whenever n <= m the correlation matrix is singular, which means that some of the last principal components have zero variance, that is, they are nonexistent. This is not a problem for PCA, generally speaking, since you could just ignore those. However, many software packages (mostly those uniting PCA and Factor Analysis in one command or procedure) will not allow you to have a singular correlation matrix.
2) Statistical aspect. To have your results reliable you must have reliable correlations; that requires a considerable sample size, which should always be larger than the number of variables. They say, if you have m=20 you ought to have n=100 or so; but if you have m=100 you should have n=300 or so. As m grows, the minimal recommended n/m proportion diminishes.
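A minimal illustration of point 1 with made-up data where n < m: the correlation matrix is rank-deficient.
```
set.seed(1)
X <- matrix(rnorm(5 * 10), nrow = 5)   # n = 5 observations, m = 10 variables
R <- cor(X)
qr(R)$rank              # at most n - 1 = 4, so R is singular
tail(eigen(R)$values)   # the trailing eigenvalues are (numerically) zero
```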
| null |
CC BY-SA 2.5
| null |
2011-03-14T12:52:57.630
|
2011-03-14T12:52:57.630
| null | null |
3277
| null |
8265
|
2
| null |
7103
|
2
| null |
You could use any probabilistic time series model in combination with [arithmetic coding](http://en.wikipedia.org/wiki/Arithmetic_coding).
You'd have to quantize the data, though. Idea: the more likely an "event" is to occur, the fewer bits are needed to code it. E.g. if $p(x_t = 1| x_{1:t-1}) = 0.5$, with $x_{1:t-1}$ being the history of events seen so far, then coding that event will cost you 1 bit, while less likely events cost more bits.
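A minimal sketch of the ideal code length $-\log_2 p$ that an arithmetic coder approaches (the predicted probabilities are made-up):
```
## probabilities a predictive model assigned to the symbols that actually occurred
p_seen <- c(0.5, 0.9, 0.25, 0.7, 0.05)
bits_per_symbol <- -log2(p_seen)   # ideal code length of each symbol
round(bits_per_symbol, 2)
sum(bits_per_symbol)               # total bits an arithmetic coder would approach
```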
| null |
CC BY-SA 2.5
| null |
2011-03-14T12:55:11.880
|
2011-03-14T12:55:11.880
| null | null |
2860
| null |
8266
|
1
|
8284
| null |
4
|
6875
|
I have a lot of data on previous race history and I'm trying to predict a percentage chance of winning the next race using Regression, kNN, and SVM learning algorithms.
Say a race has 5 runners, and each runner has a previous best course time of, say $T_i$ (seconds).
I've also introduced an additional input for the RANK of previous best course time among the 5 runners, scaled to the range 0 to 1: $1 - \frac{T_i-T_{min}}{T_{max}-T_{min}}$
My question is: does introducing both the absolute best course time and rank best course time cause any problems?
I understand that these inputs are likely to correlate. However, if someone runs a world-record time they are more likely to win easily, and this information would get lost using the rank input only, which would simply assign them a rank of 1.
|
Advice on classifier input correlation
|
CC BY-SA 2.5
| null |
2011-03-14T13:43:20.683
|
2011-03-14T20:02:55.153
|
2011-03-14T15:15:59.677
| null | null |
[
"machine-learning",
"correlation"
] |
8267
|
1
|
8269
| null |
4
|
41796
|
In R, I am trying to write a function to subset and exclude observations in a data frame based on three variables. My data looks something like this:
```
'data.frame': 43 obs. of 8 variables:
$ V1: chr "ENSG00000008438" "ENSG00000048462" "ENSG00000006075" "ENSG00000049130" ...
$ V2: chr "ENST00000008938" "ENST00000053243" "ENST00000225245" "ENST00000228280" ...
$ V3: chr "ENSP00000008938" "ENSP00000053243" "ENSP00000225245" "ENSP00000228280" ...
$ V4: chr "PGLYRP1" "TNFRSF17" "CCL3" "KITLG" ...
$ V5: chr "19" "16" "17" "12" ...
$ V6: chr "q13.32" "p13.13" "q12" "q21.32" ...
$ V7: int 46522415 12058964 34415602 88886566 8276226 150285143 29138654 76424442 136871919 6664568 ...
$ V8: int 46526304 12061925 34417515 88974238 8291203 150294844 29190208 76448092 136875735 6670599 ...
>
```
What I am trying to do is this:
```
input = function(x) {
  pruned1 <- subset(x[-which(x$V5 == 1 & x$V7 > 113818477 & x$V8 < 114658477), ])
  pruned2 <- subset(pruned1[-which(pruned1$V5 == 1 & pruned1$V7 > 192461456 & pruned1$V8 < 192549912), ])
  # and so on
}
```
So I want to take my original data frame, exclude observations that fulfil all three of the subsetting conditions, make a new object of the remaining observations (pruned1), exclude any observations that fulfil a new set of conditions, and so on.
The problem I am having seems to be the third condition in the first subsetting, as this returns:
```
'data.frame': 1 obs. of 8 variables:
$ V1: chr NA
$ V2: chr NA
$ V3: chr NA
$ V4: chr NA
$ V5: chr NA
$ V6: chr NA
$ V7: int NA
$ V8: int NA
```
While only using the first two conditions returns:
```
'data.frame': 38 obs. of 8 variables:
$ V1: chr "ENSG00000008438" "ENSG00000048462" "ENSG00000006075" "ENSG00000049130" ...
$ V2: chr "ENST00000008938" "ENST00000053243" "ENST00000225245" "ENST00000228280" ...
$ V3: chr "ENSP00000008938" "ENSP00000053243" "ENSP00000225245" "ENSP00000228280" ...
$ V4: chr "PGLYRP1" "TNFRSF17" "CCL3" "KITLG" ...
$ V5: chr "19" "16" "17" "12" ...
$ V6: chr "q13.32" "p13.13" "q12" "q21.32" ...
$ V7: int 46522415 12058964 34415602 88886566 8276226 150285143 29138654 76424442 136871919 6664568 ...
$ V8: int 46526304 12061925 34417515 88974238 8291203 150294844 29190208 76448092 136875735 6670599 ...
>
```
Doing this manually is not an option, since I have a large number of input files that I would like to prune in the same manner.
Thanks in advance for any help!
|
Help on subsetting data frames using multiple logical operators in R
|
CC BY-SA 2.5
| null |
2011-03-14T14:30:10.763
|
2011-03-15T07:55:15.623
|
2011-03-14T14:41:49.677
|
3705
|
3705
|
[
"r"
] |
8268
|
2
| null |
8266
|
2
| null |
I think it depends on whether the purpose of your model is descriptive (e.g. considering variable importance or hypothesis tests in the regression) or purely predictive. If it is the former, then input features that are strongly correlated will certainly create difficulties in making inferences about how variables impact the output and relate to each other. For example, can you say variable 1 is the most important if it shares most of its variance with variable 2?
Even in regression, multicollinearity does not bias the coefficient estimates, it only inflates their standard errors (the estimates will still be those that minimize the squared error), so the predictions are ok.
I tend to consider collinearity between inputs not that big an issue when building a predictive model. The best and only way to know for certain is to build a model with both variables and then with only the one with the strongest relationship to the target variable, and see which produces the best predictions on new data.
| null |
CC BY-SA 2.5
| null |
2011-03-14T14:42:57.020
|
2011-03-14T14:42:57.020
| null | null |
2040
| null |
8269
|
2
| null |
8267
|
6
| null |
You are using `subset` incorrectly. The syntax of `subset` is
>
pruned1 <- subset(x, !( V5 == 1 & V7 > 113818477 & V8 < 114658477) )
("!" is the negation operator)
and not
>
pruned1 <- subset(x[-which(x$V5 == 1 & x$V7 > 113818477 & x$V8 < 114658477)
Hope that helps
PS: You might want to look into whether using '&' or '&&' fits your situation best; I can't remember if that will cause a problem.
| null |
CC BY-SA 2.5
| null |
2011-03-14T15:07:22.570
|
2011-03-14T17:11:35.753
|
2011-03-14T17:11:35.753
|
2807
|
2807
| null |
8270
|
2
| null |
8260
|
2
| null |
When the determinant is negative, composing with any reflection will give a positive determinant. In that sense you are correct. However, in another sense the question does not appear to be meaningful, because the matrix you give, although it is row normalized, is not orthogonal (it is not a "rotation," nor--unlike rotation matrices--can it be written as a finite product of reflections). Whether or not it "contains" a reflection depends on what group you consider the matrix to be part of and what subgroup you want to relate it to, neither of which has been specified.
| null |
CC BY-SA 2.5
| null |
2011-03-14T15:16:08.273
|
2011-03-14T15:16:08.273
| null | null |
919
| null |
8271
|
1
| null | null |
2
|
11442
|
I wanted to carry out a two-way ANCOVA for my data. However, SPSS isn't liking that I have only one IV.
I have
- One IV: Group (3 levels, i.e. 2 experimental groups and a control group),
- One DV: outcome measure (2 levels, i.e. time 1 and time 2),
- One covariate (I have checked and this demographic continuous variable moderately correlates with the DV).
Please could someone help as to whether I am missing something in terms of putting this into SPSS?
|
How to set up a two-way mixed ANOVA with a covariate in SPSS?
|
CC BY-SA 3.0
| null |
2011-03-14T15:36:51.410
|
2011-04-15T04:13:26.403
|
2011-04-15T04:13:26.403
|
183
| null |
[
"spss",
"repeated-measures",
"ancova"
] |
8272
|
2
| null |
8267
|
1
| null |
Another way to do this is
```
y <- x[
  !(x$V5 == 1 & x$V7 > 113818477 & x$V8 < 114658477) &
  !(x$V5 == 1 & x$V7 > 192461456 & x$V8 < 192549912),
]
```
Also, V5 is a character variable, so `x$V5 == "1"` might be preferable -- e.g., if you have any codes that are actually formatted as " 1", you will need to take care to specify `x$V5 == " 1"`. @ulvund `&` is necessary for this application because it returns element-wise logical AND, whereas `&&` "evaluates left to right examining only the first element of each vector. Evaluation proceeds only until the result is determined" (?Logic).
| null |
CC BY-SA 2.5
| null |
2011-03-14T16:00:19.347
|
2011-03-14T16:00:19.347
| null | null |
3432
| null |
8273
|
1
| null | null |
14
|
11500
|
I got asked something similar to this in an interview today.
The interviewer wanted to know what is the probability that an at-the-money option will end up in-the-money when volatility tends to infinity.
I said 0% because the normal distributions that underlie the Black-Scholes model and the random walk hypothesis will have infinite variance, and so I figured the probability of all values would be zero.
My interviewer said the right answer is 50% because the normal distribution will still be symmetric and almost uniform. So when you integrate from the mean to +infinity you get 50%.
I am still not convinced with his reasoning.
Who is right?
|
What is the probability that a normal distribution with infinite variance has a value greater than its mean?
|
CC BY-SA 2.5
| null |
2011-03-14T16:19:27.127
|
2018-08-24T12:23:08.450
| null | null |
2283
|
[
"normal-distribution",
"variance"
] |
8274
|
2
| null |
8273
|
14
| null |
Neither form of reasoning is mathematically rigorous--there's no such thing as a normal distribution with infinite variance, nor is there a limiting distribution as the variance grows large--so let's be a little careful.
In the Black-Scholes model, the log price of the underlying asset is assumed to be undergoing a random walk. The problem is equivalent to asking "what is the chance that the asset's (log) value at the expiration date will exceed its current (log) value?" Letting the volatility increase without limit is equivalent to letting the expiration date increase without limit. Thus, the answer should be the same as asking "what is the limit, as $t \to \infty$, that the value of a random walk at time $t$ is greater than its value at time $0$?" By symmetry (exchanging upticks and downticks), (and noting that in the continuous model the chance of being at the money is $0$) those probabilities equal $1/2$ for any $t \gt 0$, whence their limit indeed exists and equals $1/2$.
| null |
CC BY-SA 2.5
| null |
2011-03-14T17:02:26.200
|
2011-03-14T18:34:18.040
|
2011-03-14T18:34:18.040
|
919
|
919
| null |
8275
|
2
| null |
8236
|
4
| null |
The type of data you describe is usually called "marked point patterns"; R has a task view for spatial statistics that offers many good packages for this type of analysis, most of which are probably not able to deal with the kind of humongous data you have :(
>
For example, maybe events of type A usually don't occur where events of type B do. Or maybe in some area, there are mostly events of type C.
These are two fairly different type of questions:
The second asks about the positioning of one type of mark/event. Buzzwords to look for in this context are e.g. intensity estimation or K-function estimation, if you are interested in discovering patterns of clustering (events of a kind tend to group together) or repulsion (events of a kind tend to be separated).
The first asks about the correlation between different types of events. This is usually measured with mark correlation functions.
I think subsampling the data to get a more tractable data size is dangerous (see my comment on @hamner's reply), but maybe you could aggregate your data: divide the observation window into a manageable number of cells of equal size and tabulate the event counts in each. Each cell is then described by the location of its centre and a 10-vector of counts for your 10 mark types. You should be able to use the standard methods for marked point processes on this aggregated process.
| null |
CC BY-SA 2.5
| null |
2011-03-14T17:17:15.127
|
2011-03-14T17:17:15.127
| null | null |
1979
| null |
8276
|
1
|
8326
| null |
3
|
515
|
The two main packages I use at work are Palisade Risk and SPSS Clementine - they are both quite old versions, and I've been supplementing the ability to analyze properly at work with more modern tools available at home. E.g. at home I've been testing RapidMiner 5, which I really like in principle, but I have experienced a few issues with large data sets whereby the computation runs for hours and then crashes for as yet unknown reasons.
What I like about Palisade & Clementine is that they seem quite robust in their ability to handle large amounts of input data and, more importantly, in the ability to easily deploy models, i.e. once we have a model that works we just feed it raw data and the outputs are predicted. However, Palisade & SPSS are both very limited in modern stats methods and machine learning algorithms, hugely behind the methods available in R/RapidMiner (e.g. cross-validation, to name one). I would really like to convince my company to invest in R/RapidMiner, but the ease of deployment of models is really the crux of the problem. May I ask how easy or difficult it is to achieve model deployment in the following, and ask for any advice from fellow professionals?
i) RapidMiner
ii) R
iii) other recommendations?
|
Help with deploying a model
|
CC BY-SA 2.5
| null |
2011-03-14T17:26:32.603
|
2011-03-15T17:14:46.560
| null | null | null |
[
"machine-learning",
"software"
] |
8277
|
2
| null |
8130
|
1
| null |
The test you probably want is [Fisher's exact test](http://en.wikipedia.org/wiki/Fisher%27s_exact_test). Unfortunately, given the likely very low click-through rate and small expected effect size, you will need an enormous N to achieve the confidence interval you want. Let's say that the 'true' click-through rate of your best ad is .11, and your second best .1. Further, let's say you want the probability that you improperly fail to reject the null hypothesis (that there is no difference between the two ads) to be less than .20. If this is so, you will need an N on the order of 10,000.
```
> library(statmod)
> power.fisher.test(.1,.11,20000,20000,.05)
[1] 0.84
```
As a commenter suggested, you likely should not care about a ten percent difference in ad performance. For grosser differences, the necessary size of the samples decreases quickly.
```
> power.fisher.test(.1,.2,200,200,.05)
[1] 0.785
```
| null |
CC BY-SA 2.5
| null |
2011-03-14T17:32:09.420
|
2011-03-15T02:41:01.220
|
2011-03-15T02:41:01.220
|
82
|
82
| null |
8278
|
1
|
8325
| null |
3
|
162
|
I am planning a study to collect data on patients dying after brain infection versus the number of cases over the next 5 years in 2 cities where the disease is managed with different drugs.
- Will a year-wise t-test be better, or a year-wise relative risk of death combined with meta-analysis, for comparing the results of the study?
- I am a beginner in statistics. A word of explanation would benefit me.
The data would look like
```
Year City 1 Deaths City 1 cases City 2 Deaths City 2 cases
2011 237 1124 10 1226
2012 228 1030 26 1181
2013 1500 6061 10 1122
2014 528 2320 32 1173
2015 645 3024 11 1232
```
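For concreteness, a minimal sketch of the year-wise relative risk mentioned above, computed from the example table (illustrative only, not a full analysis):
```
## deaths and cases per year, taken from the example table
deaths1 <- c(237, 228, 1500, 528, 645); cases1 <- c(1124, 1030, 6061, 2320, 3024)
deaths2 <- c(10, 26, 10, 32, 11);       cases2 <- c(1226, 1181, 1122, 1173, 1232)
rr <- (deaths1 / cases1) / (deaths2 / cases2)   # relative risk of death, city 1 vs city 2
round(rr, 1)
```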
|
How to assess drug effects using annual death rate data?
|
CC BY-SA 2.5
| null |
2011-03-14T17:53:57.350
|
2011-03-15T17:10:20.340
|
2011-03-15T07:13:19.640
|
2116
|
2956
|
[
"meta-analysis",
"experiment-design",
"t-test",
"relative-risk"
] |
8279
|
2
| null |
8187
|
1
| null |
I'm not sure why time series analysis is not being accepted as a solution to your problem, since this is equally spaced chronological data. Confounding variables, although unknown in nature, can be proxied by both ARIMA structure and/or detectable interventions. Whoever is advising that time series analysis is inappropriate, I disagree with that advice.
Time series methods are not just pure autoregressive in form but easily extend to Polynomial Distributed Lags or ADL in user-specified supporting series such as the number of reported cases.
In my opinion this is an example of a pooled cross-sectional time series problem. Gregory Chow developed a test for the constancy of parameters across groups in 1960:
[http://en.wikipedia.org/wiki/Chow_test](http://en.wikipedia.org/wiki/Chow_test)
[http://en.wikipedia.org/wiki/Test_for_structural_change](http://en.wikipedia.org/wiki/Test_for_structural_change)
In this case, the test needs to be front-ended with outlier detection to ensure a Gaussian set of errors, i.e. no proven anomalous data; in other words, the error process can't be proven to be non-Gaussian within each state.
X1 is the number of cases and Y is the number of deaths. X2 is the empirically identified point of anomaly (2009, period 6, for State1 and 2004, period 1, for State2). Outlier detection led to identifying one anomalous data point for each state, reflecting some unknown background variable, thus yielding a more robust estimate of the mortality rates.
## Analysis of State1
State1: Y(T) = -.65649 + [X1(T)][.0046] CASES + [X2(T)][-1.3608] PULSE6 I~P00006STATE1 + [A(T)]
This suggests an unusually low mortality rate for 2009.
## Analysis of State2
State2: Y(T) = 123.55 + [X1(T)][+.0468] CASES + [X2(T)][+35.7590] PULSE1 I~P00001STATE2 + [A(T)]
This suggests an unusually high mortality rate for 2004.
This leads to estimating two cleansed data points: for STATE1, year 2009, Y observed 3 and Y adjusted 4.36; for STATE2, year 2004, Y observed 254 and Y adjusted 218.24.
Replacing these two observed, possibly errant values (possibly due to some unspecified concomitant factor, a "lurking variable"), one computes a rate of .0046 for STATE1 and .0468 for STATE2.
The Chow test for constancy of parameters across groups easily yields a rejection of the null hypothesis of equal coefficients; thus the states can be said to have significantly different mortality rates. Note that even though the experiment was not a controlled one, it is possible to identify deterministic effects reflecting some uncontrolled input which, left untreated, can distort the analysis.
P.S. I am fully aware that we can’t have 4.36 or 218.24 mortalities in real life !
| null |
CC BY-SA 2.5
| null |
2011-03-14T18:12:36.827
|
2011-03-14T18:12:36.827
| null | null |
3382
| null |
8280
|
2
| null |
8273
|
-2
| null |
You should be doing your analysis based on a lognormal distribution, not a normal one. Your interviewer is wrong when he states that the distribution is symmetrical; it would never be, regardless of the variance. You also need to distinguish between volatility and what you are calling infinite variance. A stock's price, for example, has no upper limit, thus it has "infinite variance".
| null |
CC BY-SA 2.5
| null |
2011-03-14T18:17:05.443
|
2011-03-14T18:17:05.443
| null | null |
3489
| null |
8281
|
1
| null | null |
8
|
320
|
The two-sample Kolmogorov-Smirnov statistic is normally defined as
$D = \max_x |A(x) - B(x)|.$
I would like to compute a variant that retains the sign of the difference between the distribution functions:
$D^\prime = A(x) - B(x) \quad\text{where}\quad x = \arg\max_x |A(x) - B(x)|.$
What is the right name of $D^\prime$?
Note that $D^\prime$ is different than
$D^+ = \max_x [A(x) - B(x)]$
and
$D^- = \max_x [B(x) - A(x)],$
which [Coberly and Lewis](http://dx.doi.org/10.1007/BF02479749) describe as “one-sided” KS-statistics.
For context, the $D^\prime$ as defined above is used by [Perlman et al.](http://dx.doi.org/10.1126/science.1100709) to profile drug-treated cultured cells. (See page 3–4 of the supplement.) They compute $D^\prime$ independently for each image feature of the sample, then standardize the $D^\prime$-values (using the mean and standard deviation of the same measurement on mock-treated samples). Correlated profiles (of standardized $D^\prime$-values) indicate drugs that have similar biological effects.
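A minimal R sketch of $D^\prime$ as defined above, using `ecdf()` (the two samples are made-up):
```
## signed difference of the ECDFs, taken at the argmax of the absolute difference
signed_ks <- function(a, b) {
  x <- sort(c(a, b))              # evaluate both ECDFs on the pooled sample
  d <- ecdf(a)(x) - ecdf(b)(x)
  d[which.max(abs(d))]
}
set.seed(1)
signed_ks(rnorm(100), rnorm(100, mean = 0.5))   # positive: A's ECDF lies above B's
```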
|
What is the right name for the variant of the Kolmogorov-Smirnov statistic that retains the sign of the difference?
|
CC BY-SA 2.5
| null |
2011-03-14T18:19:51.103
|
2012-03-09T02:49:46.733
|
2011-03-14T18:45:55.043
|
220
|
220
|
[
"kolmogorov-smirnov-test"
] |
8282
|
2
| null |
8281
|
1
| null |
It looks like a variant of [Kuiper's test](http://en.wikipedia.org/wiki/Kuiper%27s_test) to me, although Kuiper's V = D+ + D− ≠ D'.
| null |
CC BY-SA 2.5
| null |
2011-03-14T18:33:40.373
|
2011-03-14T18:33:40.373
| null | null |
3582
| null |
8283
|
1
| null | null |
2
|
789
|
I know that Mclust does the fit on its own, but I am trying to implement an optimization with the aim of generating a mixture of 2 Gaussians whose combined moments are as close as possible to the moments of my returns' distribution.
The objective is to
`Min Abs((Mean Ret - MeanFit)/Mean Fit) + Abs((Std Ret -Stdev Fit)/Stdev) + Abs((Sk Ret-Sk fit)/Sk Fit) + Abs((Kurt Ret- Kurt Fit))`
Taking into account that I fix the weights of the two Gaussians at (0.2, 0.8), I implement the code below in R:
```
library(moments)  # assumed source of skewness() and kurtosis(); e1071 also provides them

distance <- function(parameter, x) {
  u    <- mean(x)
  s    <- sd(x)
  sk   <- skewness(x)
  kurt <- kurtosis(x)
  d1   <- dnorm(x, parameter[1], parameter[2])
  d2   <- dnorm(x, parameter[3], parameter[4])
  dfit <- 0.2 * d1 + 0.8 * d2
  ufit    <- mean(dfit)
  sdfit   <- sd(dfit)
  skfit   <- skewness(dfit)
  kurtfit <- kurtosis(dfit)
  abs((u - ufit)/ufit) + abs((s - sdfit)/sdfit) +
    abs((sk - skfit)/skfit) + abs((kurt - kurtfit)/kurtfit)
}

parameter <- c(0, 0.01, 0, 0.01)  # starting point of the optimization
opp <- optim(parameter, distance, x = conv)
```
- Could anybody tell me whether this is the right approach?
- Should I add some constraints like
ufit=0.2*mean(d1)+0.8*mean(d2)...
thank you very much in advance for your time and help.
Sam
|
Fitting 4-moment distribution with mixture gaussian
|
CC BY-SA 2.5
| null |
2011-03-14T17:42:26.557
|
2011-03-27T16:03:13.370
|
2011-03-27T16:03:13.370
|
919
| null |
[
"r",
"distributions",
"optimization",
"mixture-distribution"
] |
8284
|
2
| null |
8266
|
5
| null |
It depends on the classifier. Some classifiers (such as Naive Bayes) explicitly assume feature independence, so they might behave in unexpected ways. Other classifiers (such as SVM) care about it much less. In image analysis it is routine to throw thousands of highly correlated features at SVM, and SVM seems to perform decently.
For kNN, adding more features will artificially inflate their importance. Suppose you have two features, best course time and coach experience. They will influence the distance equally. Now you add another feature 'course time multiplied by two'. This is essentially a replica of the first feature, but kNN doesn't know about it. So this feature now will influence the distance computation more significantly. Whether you want this or not will depend on the task. You probably don't want features to influence the distance more just because you thought of more "synonyms" for them.
A compromise might be to perform feature selection first and then use kNN. This way two "synonyms" of the same feature will be retained only if both are important.
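A minimal numeric sketch of the kNN point above (made-up values): duplicating a feature doubles its contribution to the squared Euclidean distance.
```
a <- c(time = 60, coach = 3)
b <- c(time = 62, coach = 5)
d_orig <- sqrt(sum((a - b)^2))              # both features weigh in equally
a2 <- c(a, time_copy = unname(a["time"]))   # add a redundant copy of 'time'
b2 <- c(b, time_copy = unname(b["time"]))
d_dup <- sqrt(sum((a2 - b2)^2))             # 'time' now counts twice
c(original = d_orig, with_duplicate = d_dup)
```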
| null |
CC BY-SA 2.5
| null |
2011-03-14T20:02:55.153
|
2011-03-14T20:02:55.153
| null | null |
3369
| null |
8285
|
1
| null | null |
2
|
5829
|
I am a novice when it comes to stats, so I apologize beforehand for the simplicity of my question.
I am trying to figure out the best way to normalize (this may be the wrong term) my data so that the maximum value is 1 and the minimum value is 0. What are my options and what methods would you suggest as the best?
My data will be non-negative and have no bound (though they may be bounded in some situations; how would this make it different?).
I am measuring content across many different dimensions and I want to be able to make comparisons in terms of how relevant a given piece of content is. Additionally, I want to display values across these dimensions that is explicable and easily understood.
Thank you in advance for helping a newbie :)
|
Normalizing data between 0 and 1
|
CC BY-SA 2.5
| null |
2011-03-14T20:48:28.090
|
2011-08-08T17:35:19.670
|
2011-08-08T17:35:19.670
|
919
|
1514
|
[
"normalization"
] |
8286
|
1
| null | null |
4
|
112
|
I have this dataset with size (rows) and shape (columns);
each cell shows #diseased / #total in that cell.
How would I answer the following questions?
Is size an indicator of disease?
Is shape an indicator of disease?
Is there a size*shape effect?
```
Size shapea shapeb shapec %a %b %c
<4 mm 5 / 5771 0 / 26 1 / 522 0.1% 0.0% 0.2%
4-6 mm 9 / 739 1 / 11 2 / 149 1.2% 9.1% 1.3%
6-8 mm 3 / 496 0 / 7 1 / 126 0.6% 0.0% 0.8%
>=8 mm 33 / 276 18 / 26 8 / 146 12.0% 69.2% 5.5%
Total 50 / 7282 19 / 70 12 / 943 0.7% 27.1% 1.3%
```
Thanks,
Luther88
Edits
@suncoolsu
In detail, these are nodule data. Size is the size of the nodule (the cutoffs are the published and accepted standards) and shape is the shape of the nodule. The numerator is the number of malignant nodules and the denominator is the number of nodules with the given size and shape combination.
@onestop
I do not have the raw data to do logistic regression, just this table.
Any further suggestions?
|
Indicators of disease
|
CC BY-SA 2.5
| null |
2011-03-14T21:04:46.997
|
2011-03-16T09:38:20.743
|
2011-03-16T09:38:20.743
|
930
| null |
[
"proportion"
] |
8287
|
2
| null |
1112
|
1
| null |
My earlier post has a method to rank between 0 and 1. [Advice on classifier input correlation](https://stats.stackexchange.com/questions/8266/advice-on-classifier-input-correlation)
However, the ranking I have used, Tmin/Tmax, uses the sample min/max; you may find the population min/max more appropriate. Also look up z-scores.
| null |
CC BY-SA 2.5
| null |
2011-03-14T21:20:55.353
|
2011-03-14T21:20:55.353
|
2017-04-13T12:44:51.217
|
-1
| null | null |
8289
|
2
| null |
8286
|
5
| null |
You can use [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression) to answer these questions. Disease is the outcome. Fitting a model with size and shape as covariates will allow you to test for a size effect and a shape effect, and adding an interaction between size and shape will allow you to test for a size*shape interaction. You can get a p-value for each from [likelihood ratio tests](http://en.wikipedia.org/wiki/Likelihood-ratio_test).
It probably makes sense to treat shape as categorical, i.e. use indicator variables, but to fit a linear effect of size, at least to start off with. To do that you could represent size by the midpoint of the two inner bounded intervals, i.e. 5 and 7, and for the outer two intervals either use the mean sizes if you know them or pick some reasonable values (you know better than I do how small the smallest and how large the largest values are, so choose something sensible).
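If only the aggregated table is available (as noted in the question's edit), the same models can still be fit from the counts. A rough R sketch follows; the counts are copied from the table in the question, while the representative sizes 3 and 9 for the two open-ended intervals are my own assumption:
```
dat <- data.frame(
  size     = rep(c(3, 5, 7, 9), times = 3),   # assumed representative sizes (mm)
  shape    = rep(c("a", "b", "c"), each = 4),
  diseased = c(5, 9, 3, 33,  0, 1, 0, 18,  1, 2, 1, 8),
  total    = c(5771, 739, 496, 276,  26, 11, 7, 26,  522, 149, 126, 146)
)
fit_main <- glm(cbind(diseased, total - diseased) ~ size + shape,
                family = binomial, data = dat)
fit_int  <- glm(cbind(diseased, total - diseased) ~ size * shape,
                family = binomial, data = dat)
anova(fit_main, fit_int, test = "Chisq")   # likelihood ratio (chi-squared) test of the size:shape interaction
```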
| null |
CC BY-SA 2.5
| null |
2011-03-14T21:42:36.733
|
2011-03-14T21:42:36.733
| null | null |
449
| null |
8290
|
1
|
8295
| null |
4
|
4831
|
I have data in this format:
```
Words,Source,Result
pestology Gomel, cheap, 0
cocreating, cheap, 1
munitioner impersonating nonextinct, cheap, 1
Kolomna, expensive, 1
Enyo's snakemouth, expensive, 0
blueberries backare farriers, cheap, 0
markets rafales, cheap, 0
...
```
[http://bit.ly/h2ynoG](http://bit.ly/h2ynoG)
How should I determine whether the phrases in column one can explain the Result value? What if I want to determine which individual words in the phrase are correlated with successful results?
Also, how should I determine whether the data in which Source is cheap is representative of all of the data?
Thanks.
|
How should I measure the relationship between a variable containing text and a binomial variable?
|
CC BY-SA 2.5
| null |
2011-03-14T22:23:47.850
|
2011-03-15T19:11:31.293
|
2011-03-15T19:11:31.293
|
3489
|
3712
|
[
"correlation",
"binomial-distribution",
"text-mining"
] |
8291
|
1
|
8294
| null |
10
|
5842
|
Stata allows estimating clustered standard errors in models with fixed effects but not in models with random effects. Why is this?
By clustered standard errors, I mean clustering as done by Stata's cluster command (and as advocated in Bertrand, Duflo and Mullainathan).
By fixed effects and random effects, I mean varying-intercept. I have not considered varying slope.
|
Clustered standard errors and multi-level models
|
CC BY-SA 2.5
| null |
2011-03-14T22:28:24.027
|
2017-02-02T22:51:09.600
|
2017-02-02T22:51:09.600
|
28666
|
3700
|
[
"mixed-model",
"multilevel-analysis",
"clustered-standard-errors"
] |
8292
|
1
| null | null |
0
|
4592
|
I have a strange problem using functions that are supposed to be in the packages lmtest and languageR (downloaded from CRAN recently), namely lrtest() and pvals.fnc(); however, they are not available. I have upgraded my R version to 2.12.2 but that has not solved the problem.
```
library(lmtest)
lrtest()
returns: Error: could not find function "lrtest"
```
Any help would be greatly appreciated.
T
|
No lrtest() function available in lmtest
|
CC BY-SA 2.5
| null |
2011-03-14T23:04:00.050
|
2015-09-07T07:01:00.433
|
2011-03-15T00:10:00.890
| null | null |
[
"r"
] |
8293
|
2
| null |
8292
|
4
| null |
Post your `sessionInfo()`.
`lmtest:::lrtest(x)` should call a generic function which then determines the appropriate code to run on `x`.
If you have not updated your packages, you may need to do so, or simply re-run `install.packages('lmtest')` for your updated R install.
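If reinstalling fixes the loading problem, a minimal usage sketch (with the built-in `cars` data, purely for illustration) would be:
```
install.packages("lmtest")              # reinstall against the upgraded R, if needed
library(lmtest)
fit0 <- lm(dist ~ 1,     data = cars)   # nested null model
fit1 <- lm(dist ~ speed, data = cars)
lrtest(fit0, fit1)                      # compare fitted model objects, not a bare call
```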
| null |
CC BY-SA 3.0
| null |
2011-03-14T23:12:21.900
|
2015-09-07T07:01:00.433
|
2015-09-07T07:01:00.433
|
84191
|
317
| null |
8294
|
2
| null |
8291
|
12
| null |
When you cluster on some observed attribute, you are making a statistical correction to the standard errors to account for some presumed similarity in the distribution of observations within clusters. When you estimate a multi-level model with random effects, you are explicitly modeling that variation, not treating it simply as a nuisance, thus clustering is not needed.
| null |
CC BY-SA 2.5
| null |
2011-03-15T01:01:45.957
|
2011-03-15T01:01:45.957
| null | null |
3265
| null |
8295
|
2
| null |
8290
|
4
| null |
I would start by transforming the phrases into numbers via a document term matrix, with a 1 denoting the presence of a word and 0 being the absence of a word. Then you can perform correlation analysis.
[http://en.wikipedia.org/wiki/Document-term_matrix](http://en.wikipedia.org/wiki/Document-term_matrix)
R Code

| null |
CC BY-SA 2.5
| null |
2011-03-15T02:13:29.387
|
2011-03-15T16:55:53.313
|
2011-03-15T16:55:53.313
|
3489
|
3489
| null |
8297
|
1
|
8313
| null |
6
|
1635
|
I am looking for a self-study site on the web that will allow me to verify my understanding of some basic probability and stats concepts and operations.
What I would like is a site with data and problems (along with the solutions) that I could use to practice my skills (such as regression methods/analysis, pairwise t-tests, computation and interpretation of confidence intervals, etc.). I've seen workbooks like this, but the data is not available for me to import into R or Excel.
Surely there must be a site like this on the internet, arranged by topics for study, but somehow I have not been able to find it. Also, if anyone knows of a good workbook that comes with data (on CD or via web) I'd be interested in it too.
I'll be using R and Excel - so pointers to these would be great too.
|
Looking for stats/probability practice problems with data and solutions
|
CC BY-SA 3.0
| null |
2011-03-15T04:06:00.550
|
2016-12-20T23:54:15.153
|
2016-12-20T23:54:15.153
|
22468
|
10633
|
[
"r",
"probability",
"self-study",
"references",
"excel"
] |
8298
|
2
| null |
8297
|
6
| null |
If you are interested in Statistical Machine Learning, which seems to be THE thing these days, [Tibshirani, Hastie, and Friedman's book](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) is an invaluable resource. It is the latest edition and has a self contained website devoted to it.
| null |
CC BY-SA 2.5
| null |
2011-03-15T04:09:32.143
|
2011-03-15T04:09:32.143
| null | null |
1307
| null |
8299
|
1
|
8302
| null |
4
|
145
|
While trying to make sense of MDL and stochastic complexity, I found this previous question: [Measures of model complexity](https://stats.stackexchange.com/questions/2828), in which Yaroslav Bulatov defines model complexity as "how hard it is to learn from limited data."
It is not clear to me how Minimal Description Length (MDL) measures this. What I am looking for is some sort of probability inequality (analogous to the VC upper bound) which relates the "code length" of a model with its worst-case behavior on fitting data generated by itself. If such a concrete result cannot be found in the literature, even an empirical example would be enlightening.
|
Relationship between MDL and "difficulty of learning from data"
|
CC BY-SA 2.5
| null |
2011-03-15T05:44:31.573
|
2011-03-16T20:23:20.790
|
2017-04-13T12:44:24.677
|
-1
|
3567
|
[
"model-selection"
] |
8300
|
1
| null | null |
5
|
2653
|
I am investigating the effect of a drug on EEG and cognition in epilepsy patients.
We have done EEG and neuropsychological (NP) tests twice, before and after medical treatment.
Some of the parameters in the EEG and NP tests were significantly altered after drug treatment (assessed by the Wilcoxon rank-sum test, as the distributions were not normal).
Now, I would like to calculate correlation between change in EEG parameters and change in NP tests.
There are 7 EEG parameters and 27 NP test parameters. I would like to run a correlation analysis on every pair and see if any pair is significantly correlated.
My questions are:
- Should I use nonparametric correlation, say Spearman, to calculate $\rho$? (as values of EEG and NP paramaters are not normal)
- Since there are 7 EEG parameters and 27 NP parameters, there will be 7×27 tests, which requires correcting the p-values for multiple comparisons. Which method should I use?
I am looking for a not-so-conservative method. I have seen in another posting that I can use a permutation test for multiple comparisons. Could someone explain how I can do this?
|
Correlation analysis and correcting $p$-values for multiple testing
|
CC BY-SA 3.0
| null |
2011-03-15T06:59:28.983
|
2012-07-04T15:47:08.500
|
2012-07-04T15:47:08.500
|
4856
| null |
[
"correlation",
"multiple-comparisons",
"permutation-test"
] |
8301
|
2
| null |
8300
|
4
| null |
In my opinion you should test whether your variables are distributed normally and choose a suitable test accordingly.
Concerning the correction for alpha inflation: What you are doing is data mining. You have experimental data and now you are digging in it to find ... anything. Do that. But also know that anything you might find is just an observation and as such not reliable. Perform that exploratory thing, pick some promising pairs of variables and conduct new experiments to test these pairs for correlation.
| null |
CC BY-SA 2.5
| null |
2011-03-15T07:31:45.437
|
2011-03-15T07:31:45.437
| null | null |
1048
| null |
8302
|
2
| null |
8299
|
4
| null |
The worst-case code length of a code induced by a probability density is just the negative log of the least likely outcome, since $L_p(x) = -\log p(x)$.
I am currently working through Peter Grünwald's book 'The MDL principle'. He defines the complexity of a model class as the log of the sum of probabilities it can assign to data.
Thus, the more different datasets a model can fit, the more complex it is.
$x^n$ here is a complete data set. $X^n$ is the set of all possible data sets of length $n$. This might seem confusing, but it is done this way to allow non-i.i.d. data.
The formal definition goes as follows: $$COMP(\mathcal{M}) := \log \sum_{x^n \in X^n} p_{x^n}(x^n)$$
where $p_{x^n}$ is the optimal $p \in \mathcal{M}$ with regard to $x^n$.
| null |
CC BY-SA 2.5
| null |
2011-03-15T07:36:50.213
|
2011-03-15T07:42:07.103
|
2011-03-15T07:42:07.103
|
2860
|
2860
| null |
8303
|
1
| null | null |
53
|
52386
|
I am fitting a binomial-family GLM in R, and I have a whole troupe of explanatory variables; I need to find the best subset of them (R-squared as a measure is fine). Short of writing a script to loop through random combinations of the explanatory variables and then recording which performs best, I really don't know what to do. And the `leaps` function from the package [leaps](http://cran.r-project.org/web/packages/leaps/index.html) does not seem to do logistic regression.
Any help or suggestions would be greatly appreciated.
|
How to do logistic regression subset selection?
|
CC BY-SA 3.0
| null |
2011-03-15T07:45:52.233
|
2020-01-18T17:07:55.563
|
2017-07-08T08:23:47.843
|
79696
|
3721
|
[
"r",
"logistic"
] |
8304
|
2
| null |
8267
|
8
| null |
If you are using `which`, then `subset` is unnecessary. If we have a logical condition these are equivalent in R:
```
data[cond,]
data[which(cond),]
subset(data,cond)
```
The benefit of the subset is that you do not need to use `$` to get to the variables you are subsetting on.
Furthermore, if you do successive subsettings it makes more sense to combine all the conditions and then do the subsetting once. So if you have condition 1 and condition 2, which may themselves be complicated conditions, then instead of
```
pruned1 <- data[cond1, ]
pruned2 <- pruned1[cond2, ]
```
Do
```
cond <- cond1 & cond2
pruned2 <- data[cond,]
```
The main benefit of the second approach is that it is faster and cleaner, since you will not be creating intermediate objects.
| null |
CC BY-SA 2.5
| null |
2011-03-15T07:55:15.623
|
2011-03-15T07:55:15.623
| null | null |
2116
| null |
8305
|
2
| null |
8303
|
17
| null |
First of all, $R^2$ is not an appropriate goodness-of-fit measure for logistic regression; take an information criterion, $AIC$ or $BIC$ for example, as a good alternative.
Logistic regression is estimated by the maximum likelihood method, so `leaps` is not used directly here. An extension of `leaps` to `glm()` functions is the [bestglm](http://cran.r-project.org/web/packages/bestglm/index.html) package (as usual, consult its vignettes for details).
You may be also interested in the article by David W. Hosmer, Borko Jovanovic and Stanley Lemeshow [Best Subsets Logistic Regression](http://www.jstor.org/pss/2531779) // Biometrics Vol. 45, No. 4 (Dec., 1989), pp. 1265-1270 (usually accessible through the university networks).
| null |
CC BY-SA 2.5
| null |
2011-03-15T08:54:26.503
|
2011-03-15T08:54:26.503
| null | null |
2645
| null |
8306
|
1
|
8308
| null |
15
|
9139
|
As title, is there anything like this? I know how to calculate CI for arithmetic mean, but how about geometric mean? Thanks.
|
Confidence interval for geometric mean
|
CC BY-SA 2.5
| null |
2011-03-15T10:02:23.260
|
2013-03-27T07:09:52.933
|
2011-03-15T12:03:24.493
| null |
588
|
[
"confidence-interval",
"mean"
] |
8307
|
2
| null |
1112
|
29
| null |
As often, my first question was going to be "why do you want to do this", then I saw you've already answered this in the comments to the question: "I am measuring content across many different dimensions and I want to be able to make comparisons in terms of how relevant a given piece of content is. Additionally, I want to display values across these dimensions that is explicable and easily understood."
There is no reason to normalize the data so that the max is 1 and the min is zero in order to achieve this, and my opinion is that this would be a bad idea in general. The max or min values could very easily be [outliers](http://en.wikipedia.org/wiki/Outlier) that are unrepresentative of the population distribution. @osknows parting remark about using [$z$-scores](http://en.wikipedia.org/wiki/Z-score) is a much better idea. $z$-scores (aka standard scores) normalize each variable using its standard deviation rather than its range. The standard deviation is less influenced by outliers. In order to use $z$-scores, it's preferable that each variable has a roughly normal distribution, or at least has a roughly symmetric distribution (i.e. isn't severely skew) but if necessary you can apply some appropriate [data transformation](http://en.wikipedia.org/wiki/Data_transformation_%28statistics%29) first in order to achieve this; which transformation to use could be determined by finding the best fitting [Box–Cox transformation](http://en.wikipedia.org/wiki/Box-Cox_transformation).
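As a small illustration of the difference (made-up numbers with one extreme value):
```
x <- c(1, 2, 3, 4, 100)
(x - min(x)) / (max(x) - min(x))   # range scaling: the outlier squeezes the rest towards 0
as.numeric(scale(x))               # z-scores: centred at the mean, scaled by the sd
```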
| null |
CC BY-SA 2.5
| null |
2011-03-15T10:04:23.910
|
2011-03-15T10:49:20.283
|
2011-03-15T10:49:20.283
|
449
|
449
| null |
8308
|
2
| null |
8306
|
20
| null |
The geometric mean $(\prod_{i=1}^n X_i)^{1/n}$ is an arithmetic mean after taking logs $1/n \sum_{i=1}^n \log X_i$, so if you do know the CI for the arithmetic mean do the same for the logarithms of your data points and take exponents of the upper and lower bounds.
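A short sketch of that recipe in R, with simulated data standing in for the real sample:
```
set.seed(1)
x  <- rlnorm(50, meanlog = 1, sdlog = 0.5)   # placeholder positive data
lx <- log(x)
n  <- length(lx)
se <- sd(lx) / sqrt(n)
ci_log <- mean(lx) + qt(c(0.025, 0.975), df = n - 1) * se   # CI for the mean of log(x)
exp(mean(lx))   # geometric mean
exp(ci_log)     # 95% CI for the geometric mean
```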
| null |
CC BY-SA 2.5
| null |
2011-03-15T10:17:39.707
|
2011-03-15T15:51:17.957
|
2011-03-15T15:51:17.957
|
919
|
2645
| null |
8309
|
1
| null | null |
11
|
8325
|
Let A be an $n \times p$ matrix of independent variables and B the corresponding $n \times 1$ vector of dependent values. In ridge regression, we define a parameter $\lambda$ so that $\beta=(A^\mathrm{T}A+\lambda I)^{-1}A^\mathrm{T}B$.
Now let [u s v] = svd(A) and let $d_{i}$ be the $i$-th diagonal entry of s. We define the degrees of freedom as df $= \sum_{i=1}^{p} \frac{d_{i}^2}{d_{i}^2+\lambda}$. Ridge regression shrinks the coefficients of low-variance components, and the parameter $\lambda$ controls the degrees of freedom. So for $\lambda=0$, which is the case of ordinary regression, df = p, and all the independent variables are kept. The problem I am facing is to find the value of $\lambda$ given df and the matrix s. I have tried to re-arrange the above equation but could not obtain a closed-form solution. Please provide any helpful pointers.
|
How to calculate regularization parameter in ridge regression given degrees of freedom and input matrix?
|
CC BY-SA 2.5
| null |
2011-03-15T10:34:15.403
|
2018-10-29T00:37:23.980
|
2011-03-15T10:47:46.733
|
3722
|
3722
|
[
"ridge-regression"
] |
8310
|
2
| null |
7939
|
3
| null |
The conditional mean is defined by:
$$E(Y|X)\equiv\int y f(y|x) dy$$
Where $f(Y|X)$ is the conditional density. Using the product rule, you can show:
$$f(y|x)=\frac{f(y,x)}{f(x)}$$
Substituting this back into the integral you get
$$E(Y|X)\equiv\frac{\int y f(y,x) dy}{f(x)}$$
Which is of the form you seek, if you use the kernel density estimator.
| null |
CC BY-SA 2.5
| null |
2011-03-15T10:42:34.337
|
2011-03-15T10:42:34.337
| null | null |
2392
| null |
8312
|
2
| null |
8303
|
31
| null |
Stepwise and "all subsets" methods are generally bad. See Stopping Stepwise: [Why Stepwise Methods are Bad and what you Should Use](http://www.nesug.org/proceedings/nesug07/sa/sa07.pdf) by David Cassell and myself (we used SAS, but the lesson applies) or Frank Harrell Regression Modeling Strategies. If you need an automatic method, I recommend LASSO or LAR. A LASSO package for logistic regression is available [here](http://www4.stat.ncsu.edu/~boos/var.select/lasso.adaptive.html), another interesting article is on the [iterated LASSO for logistic](http://www.stat.uiowa.edu/techrep/tr392.pdf)
| null |
CC BY-SA 2.5
| null |
2011-03-15T11:20:17.760
|
2011-03-15T11:20:17.760
| null | null |
686
| null |
8313
|
2
| null |
8297
|
5
| null |
The [Statistics topic area on Wikiversity](http://en.wikiversity.org/wiki/Statistics) is worth a look. It's got a long way to go before it's a comprehensive stand-alone syllabus, to be honest, but some of the Courses are more advanced than others, and when there's not much material as yet there are often links to free online resources.
| null |
CC BY-SA 2.5
| null |
2011-03-15T11:26:34.293
|
2011-03-15T11:26:34.293
| null | null |
449
| null |
8314
|
2
| null |
8297
|
4
| null |
I realise that this may not be what you are looking for, but R core and all packages come with data sets on which to practice the functionalities in each package. Many of these data sets are quite famous, and often a link is given to the paper in which the data are described. You could use these datasets in R and then after you finish your analysis look at what the authors of the paper did with the same data.
That being said, it's rare for there to be a single right answer in any real data-analysis problem; mostly one learns by realising that the techniques used were not appropriate and iterating until one reaches some level of satisfaction. Obviously this is a moving target, though: as your skills increase, an older dataset may yield new insights.
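For example, a minimal way to browse and load these bundled data sets:
```
data()                    # data sets in the currently attached packages
data(package = "MASS")    # data sets shipped with a particular package
library(MASS)
?UScrime                  # help pages usually cite the original source
```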
| null |
CC BY-SA 2.5
| null |
2011-03-15T11:37:46.567
|
2011-03-15T11:37:46.567
| null | null |
656
| null |
8315
|
2
| null |
8309
|
11
| null |
A Newton-Raphson/Fisher-scoring/Taylor-series algorithm would be suited to this.
You have the equation to solve for $\lambda$
$$h(\lambda)=\sum_{i=1}^{p}\frac{d_{i}^{2}}{d_{i}^{2}+\lambda}-df=0$$
with derivative
$$\frac{\partial h}{\partial \lambda}=-\sum_{i=1}^{p}\frac{d_{i}^{2}}{(d_{i}^{2}+\lambda)^{2}}$$
You then get:
$$h(\lambda)\approx h(\lambda^{(0)})+(\lambda-\lambda^{(0)})\frac{\partial h}{\partial \lambda}|_{\lambda=\lambda^{(0)}}=0$$
re-arranging for $\lambda$ you get:
$$\lambda=\lambda^{(0)}-\left[\frac{\partial h}{\partial \lambda}|_{\lambda=\lambda^{(0)}}\right]^{-1}h(\lambda^{(0)})$$
This sets up the iterative search. For initial starting values, assume $d^{2}_{i}=1$ in the summation, then you get $\lambda^{(0)}=\frac{p-df}{df}$.
$$\lambda^{(j+1)}=\lambda^{(j)}+\left[\sum_{i=1}^{p}\frac{d_{i}^{2}}{(d_{i}^{2}+\lambda^{(j)})^{2}}\right]^{-1}\left[\sum_{i=1}^{p}\frac{d_{i}^{2}}{d_{i}^{2}+\lambda^{(j)}}-df\right]$$
This "goes" in the right direction (increase $\lambda$ when summation is too big, decrease it when too small), and typically only takes a few iterations to solve. Further the function is monotonic (an increase/decrease in $\lambda$ will always decrease/increase the summation), so that it will converge uniquely (no local maxima).
| null |
CC BY-SA 3.0
| null |
2011-03-15T12:11:19.050
|
2011-12-06T06:57:36.580
|
2011-12-06T06:57:36.580
|
930
|
2392
| null |
8316
|
1
|
8322
| null |
5
|
267
|
I assume a simple model $\log(y_i) \sim \mathcal{N}(\mu_i,\sigma)$ with $\mu_i=\alpha + \beta x_i$ . Now if I have the estimates $\hat{\alpha}$ and $\hat{\beta}$, how do I calculate the fitted values $\hat{y}$? (Using $\hat{y}= \exp(\hat{\alpha} + \hat{\beta} x_i)$ seems to work; this would be the median of the log-normal distribution.)
|
Fitted values for a log-normal model
|
CC BY-SA 2.5
| null |
2011-03-15T13:11:49.413
|
2011-03-15T15:17:13.680
|
2011-03-15T13:16:59.297
|
2970
|
1443
|
[
"modeling"
] |
8317
|
2
| null |
8283
|
5
| null |
I am not exactly sure what your code is trying to do, but it seems like you should be using `rnorm()` (with a large $n$) instead of `dnorm()`, since the functions `mean()`, `var()`, etc. are designed to be used on samples, not densities.
In any case, using the method-of-moments is easy to do algebraically.
Let's say your sample is $X=[x_1, \cdots, x_n]$. Your sample moments are:
$$m_1(X) = (1/n) \sum_{i} x_i$$
$$m_2(X) = (1/n) \sum_{i} x_i^2$$
$$m_3(X) = (1/n) \sum_{i} x_i^3$$
$$m_4(X) = (1/n) \sum_{i} x_i^4$$
The moments of $Z \sim N(\mu, \sigma^2)$ are:
$$m_1(Z) = \mu$$
$$m_2(Z) = \mu^2 + \sigma^2$$
$$m_3(Z) = \mu^3 + 3\mu\sigma^2$$
$$m_4(Z) = \mu^4 + 6\mu^2\sigma^2 + 3(\sigma^2)^2$$
Furthermore, if your random variable $Y$ is a mixture of $Z_1, Z_2, \cdots, Z_m$, with $P[Y=Z_i]=p_i$, then
$$m_1(Y) = p_1 m_1(Z_1) + p_2 m_1 (Z_2) + \cdots + p_m m_1 (Z_m)$$
$$m_2(Y) = p_1 m_2(Z_1) + p_2 m_2 (Z_2) + \cdots + p_m m_2 (Z_m)$$
$$\cdots$$
If you use enough Gaussians you should be able to match a finite number of moments exactly. Under constraints you may need to define a distance metric on your moment vector $(m_1, m_2, m_3, m_4)$. The Euclidean distance should work fine, but you will want to weight the moments differently, since the higher-order moments can be very large or very small depending on the distribution you are working with. Also, the first and second moments are generally more important.
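A rough R sketch of this moment matching for the questioner's two-component case, with the weights fixed at 0.2/0.8 (the data vector is a placeholder, the starting values are arbitrary, and the moments are weighted equally here for simplicity):
```
mix_moments <- function(mu, sigma, p = c(0.2, 0.8)) {
  m1 <- mu
  m2 <- mu^2 + sigma^2
  m3 <- mu^3 + 3 * mu * sigma^2
  m4 <- mu^4 + 6 * mu^2 * sigma^2 + 3 * sigma^4
  c(sum(p * m1), sum(p * m2), sum(p * m3), sum(p * m4))
}
x <- rnorm(1000)                                  # placeholder for the returns series
target <- sapply(1:4, function(k) mean(x^k))      # raw sample moments
obj <- function(par) sum((mix_moments(par[1:2], exp(par[3:4])) - target)^2)
optim(c(0, 0, log(0.01), log(0.01)), obj)         # log-sd parametrisation keeps sigma > 0
```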
| null |
CC BY-SA 2.5
| null |
2011-03-15T14:04:23.587
|
2011-03-15T14:27:33.083
|
2011-03-15T14:27:33.083
|
3567
|
3567
| null |
8318
|
1
| null | null |
22
|
43805
|
One of the predictors in my logistic model has been log transformed. How do you interpret the estimated coefficient of the log transformed predictor and how do you calculate the impact of that predictor on the odds ratio?
|
Interpretation of log transformed predictors in logistic regression
|
CC BY-SA 2.5
| null |
2011-03-15T14:27:14.323
|
2021-09-14T14:34:12.503
|
2020-03-03T04:57:07.913
|
11887
| null |
[
"regression",
"logistic",
"data-transformation"
] |
8319
|
2
| null |
8309
|
6
| null |
Here is the small Matlab code based on the formula proved by probabilityislogic:
```
function [lamda] = calculate_labda(Xnormalised,df)
    [n,p] = size(Xnormalised);
    % SVD of the (normalised) data matrix
    [u, s, v] = svd(Xnormalised);
    Di  = diag(s);
    Dsq = Di.^2;
    % Newton-Raphson iteration to solve sum(Dsq./(Dsq+lamda)) = df
    lamdaPrev = (p-df)/df;               % starting value (assumes d_i^2 = 1)
    lamdaCur  = Inf;                     % arbitrary large value
    diff      = lamdaCur - lamdaPrev;
    threshold = eps(class(Xnormalised));
    while (abs(diff) > threshold)
        numerator   = sum(Dsq ./ (Dsq+lamdaPrev)) - df;
        denominator = sum(Dsq ./ ((Dsq+lamdaPrev).^2));
        lamdaCur    = lamdaPrev + (numerator/denominator);
        diff        = lamdaCur - lamdaPrev;
        lamdaPrev   = lamdaCur;
    end
    lamda = lamdaCur;
end
```
| null |
CC BY-SA 4.0
| null |
2011-03-15T14:34:44.480
|
2018-10-17T13:26:14.680
|
2018-10-17T13:26:14.680
|
7290
|
3722
| null |
8320
|
2
| null |
8318
|
18
| null |
If you exponentiate the estimated coefficient, you'll get an odds ratio associated with a $b$-fold increase in the predictor, where $b$ is the base of the logarithm you used when log-transforming the predictor.
I usually choose to take logarithms to base 2 in this situation, so I can interpet the exponentiated coefficient as an odds ratio associated with a doubling of the predictor.
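For instance (purely illustrative numbers), if the coefficient of log2(x) were 0.4:
```
b <- 0.4                 # hypothetical coefficient of log2(x)
exp(b)                   # odds ratio for a doubling of x
exp(b * log2(10))        # odds ratio for a ten-fold increase in x
```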
| null |
CC BY-SA 2.5
| null |
2011-03-15T14:40:15.637
|
2011-03-15T14:40:15.637
| null | null |
449
| null |
8321
|
1
| null | null |
5
|
5605
|
In multiple linear regression, let us say that the F test shows that the model is significant, but the t-tests for the individual beta values do not show that any of the betas is non-zero. What can we conclude in such a situation?
Does the fact that the tests for the beta values failed affect the fact that the model is significant?
|
Significant multiple linear regression model with non-significant betas?
|
CC BY-SA 2.5
| null |
2011-03-15T15:02:17.730
|
2011-06-14T16:49:43.333
|
2011-03-16T13:05:34.637
| null |
3725
|
[
"multiple-regression"
] |
8322
|
2
| null |
8316
|
3
| null |
The conditional expectation of $y_i$ at the value $x_i$ can be estimated as $\exp(\hat{\alpha} + \hat{\beta}x_i + s^2/2)$ where $s^2$ is the mean squared error in your regression (estimating $\sigma^2$). Don't trust this until you have closely looked for evidence of heteroscedasticity (i.e., changes in the true value of $\sigma$ for different values of $x$) and found nothing of importance.
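A small R sketch of the two fitted quantities, on simulated data rather than the questioner's:
```
set.seed(42)
x <- runif(100)
y <- exp(1 + 2 * x + rnorm(100, sd = 0.3))   # log-normal response by construction
fit <- lm(log(y) ~ x)
s2  <- summary(fit)$sigma^2
yhat_median <- exp(fitted(fit))              # median of the conditional distribution
yhat_mean   <- exp(fitted(fit) + s2 / 2)     # conditional mean, as given above
```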
| null |
CC BY-SA 2.5
| null |
2011-03-15T15:17:13.680
|
2011-03-15T15:17:13.680
| null | null |
919
| null |
8323
|
2
| null |
8300
|
3
| null |
You want to perform a canonical correlation analysis. This will provide information about correlations among linear combinations of your sets of parameters, potentially uncovering stronger information of the type you seek. The [Wikipedia article](http://en.wikipedia.org/wiki/Canonical_correlation) explains the theory, provides the equations, and presents the appropriate hypothesis test. It requires solving a 7 by 7 eigenvalue problem determined by the correlation and cross-correlation matrices of the variables, which is fast and straightforward. The `R` function [CCorA](http://cc.oulu.fi/~jarioksa/softhelp/vegan/html/CCorA.html) (in the vegan package) will do it, for instance. (I haven't tested this function.)
It might help first to re-express your original variables so they are each approximately symmetrically distributed. There are many ways to do this using, for example, [Box-Cox transformations](http://www.itl.nist.gov/div898/handbook/pmc/section5/pmc52.htm).
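A minimal sketch with base R's `cancor()`, using simulated stand-ins for the 7 EEG and 27 NP change scores (the `CCorA` function mentioned above should give comparable output):
```
set.seed(1)
n   <- 60
eeg <- matrix(rnorm(n * 7),  n, 7)    # placeholder for the 7 EEG change scores
np  <- matrix(rnorm(n * 27), n, 27)   # placeholder for the 27 NP change scores
cc <- cancor(eeg, np)
cc$cor                                # canonical correlations, largest first
```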
| null |
CC BY-SA 2.5
| null |
2011-03-15T15:33:39.937
|
2011-03-15T15:33:39.937
| null | null |
919
| null |
8324
|
2
| null |
8321
|
1
| null |
Read the section on tests of joint significance here: [http://www.people.fas.harvard.edu/~blackwel/ftests.pdf](http://www.people.fas.harvard.edu/~blackwel/ftests.pdf)
Your F-test addresses a different hypothesis from the t-tests: it tests the joint null that all of the betas are 0, whereas each t-test examines a single beta.
| null |
CC BY-SA 2.5
| null |
2011-03-15T17:02:20.803
|
2011-03-15T17:02:20.803
| null | null |
1893
| null |
8325
|
2
| null |
8278
|
1
| null |
Since your data are categorical, a t test is likely inappropriate for any parameter I can think of that you might want to do inference on. Here is a brief intro to a problem very similar to yours:
[http://walkerbioscience.com/word-files/Categorical%20data.doc](http://walkerbioscience.com/word-files/Categorical%20data.doc)
| null |
CC BY-SA 2.5
| null |
2011-03-15T17:10:20.340
|
2011-03-15T17:10:20.340
| null | null |
1893
| null |
8326
|
2
| null |
8276
|
0
| null |
R/Rapidminer are free. Deployment can get tricky if you are concerned about multiple GB datasets, but other than that, everything is a breeze. RStudio on a linux server has been getting great reviews, and Revolution Analytics sells an enterprise version of R at a very reasonable cost with a focus on enhanced deployability, big data, fancy UI, and data export to make R more accessible to the non-statistician.
| null |
CC BY-SA 2.5
| null |
2011-03-15T17:14:46.560
|
2011-03-15T17:14:46.560
| null | null |
1893
| null |
8327
|
2
| null |
8271
|
1
| null |
Set it up as a regular regression, they are mathematically equivalent, just make sure your non-continuous variables are defined appropriately (as categorical or whatever) to SPSS.
| null |
CC BY-SA 2.5
| null |
2011-03-15T18:43:25.903
|
2011-03-15T18:43:25.903
| null | null |
1893
| null |
8328
|
1
| null | null |
11
|
6268
|
I'm trying to implement an EM algorithm for the following factor analysis model;
$$W_j = \mu+B a_j+e_j \quad\text{for}\quad j=1,\ldots,n$$
where $W_j$ is a $p$-dimensional random vector, $a_j$ is a $q$-dimensional vector of latent variables and $B$ is a $p\times q$ matrix of parameters.
As a result of other assumptions used for the model, I know that $W_j\sim N(\mu, BB'+D)$ where $D$ is the variance covariance matrix of error terms $e_j$, $D$ = diag($\sigma_1^2$,$\sigma_2^2$,...,$\sigma_p^2$).
For the EM algorithm to work, I'm doing some iterations involving estimation of the $B$ and $D$ matrices, and at each iteration I compute the inverse of $BB'+D$ using the new estimates of $B$ and $D$. Unfortunately, during the course of the iterations $BB'+D$ loses its positive definiteness (but it shouldn't, because it is a variance-covariance matrix), and this ruins the convergence of the algorithm. My questions are:
- Does this situation show that there is something wrong with my algorithm since the likelihood should increase at every step of EM?
- What are the practical ways to make a matrix positive definite?
Edit: I'm computing the inverse by using a matrix inversion lemma which states that:
$$(BB'+D)^{-1}=D^{-1}-D^{-1}B (I_q+B'D^{-1}B)^{-1} B'D^{-1}$$
where the right side involves only the inverses of $q\times q$ matrices.
|
How to make a matrix positive definite?
|
CC BY-SA 2.5
|
0
|
2011-03-15T19:27:04.463
|
2011-03-17T20:09:34.710
|
2011-03-17T20:09:34.710
|
919
|
3499
|
[
"factor-analysis",
"expectation-maximization"
] |
8329
|
2
| null |
8321
|
5
| null |
One way that this could happen is to put two highly correlated predictors into your model. Because they are highly correlated, each of their coefficients will have relatively wide standard errors. As a result, these coefficients may not be statistically significant separately. Taken together, they may be a strong predictor of the outcome.
The covariance between the coefficient estimates would be pretty negative (increasing the estimate of one of the coefficients leads to a strong decrease in the estimate of the other coefficient because the two are highly linearly related), cancelling out the high variances for the estimators separately. As a result, your model has predictive power and your regressors are important, but their colinearity obscures this fact by creating large (though accurate) standard errors.
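A quick simulated illustration of this situation (arbitrary numbers):
```
set.seed(123)
n  <- 50
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.05)   # nearly collinear with x1
y  <- x1 + x2 + rnorm(n)
summary(lm(y ~ x1 + x2))         # overall F is highly significant,
                                 # yet the individual t-tests are typically weak
```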
| null |
CC BY-SA 2.5
| null |
2011-03-15T19:59:57.180
|
2011-03-15T19:59:57.180
| null | null |
401
| null |
8330
|
1
| null | null |
1
|
1000
|
I have a multiple response (categories) question (Q11, ..., Q15). My problem is that the data-entry clerk sometimes repeats the same answer within a case.
Example:
```
Q11 Q12 Q13 Q14 Q15
Case 1 001 003 015 001 022 (001 was typed twice)
Case 2 089 032 089 089 014 (089 was typed three times)
```
I need to find a way to identify the cases with these errors.
Thanks,
|
Multiple response question with duplicated answers
|
CC BY-SA 2.5
| null |
2011-03-15T20:31:46.950
|
2011-03-16T13:04:04.497
|
2011-03-16T13:04:04.497
| null | null |
[
"spss"
] |
8332
|
2
| null |
8328
|
3
| null |
OK, since you're doing FA I'm assuming that $B$ is of full column rank $q$ and $q<p$. We need a few more details though. This may be a numerical problem; it may also be a problem with your data.
How are you computing the inverse? Do you need the inverse explicitly, or can you re-express the calculation as the solution to a linear system? (i.e., to get $A^{-1}b$ solve $Ax=b$ for $x$, which is typically faster and more stable)
What is happening to $D$? Are the estimates really small/0/negative? In some sense it is the critical link, because $BB'$ is of course rank deficient and defines a singular covariance matrix before adding $D$, so you can't invert it. Adding the positive diagonal matrix $D$ technically makes it full rank but $BB'+D$ could still be horribly ill conditioned if $D$ is small.
Oftentimes the estimate for the idiosyncratic variances (your $\sigma^2_i$, the diagonal elements of $D$) is near zero or even negative; these are called Heywood cases. See eg [http://www.technion.ac.il/docs/sas/stat/chap26/sect21.htm](http://www.technion.ac.il/docs/sas/stat/chap26/sect21.htm) (any FA text should discuss this as well, it's a very old and well-known problem). This can result from model misspecification, outliers, bad luck, solar flares... the MLE is particularly prone to this problem, so if your EM algorithm is designed to get the MLE look out.
If your EM algorithm is approaching a mode with such estimates it's possible for $BB'+D$ to lose its positive definiteness, I think. There are various solutions; personally I'd prefer a Bayesian approach but even then you need to be careful with your priors (improper priors or even proper priors with too much mass near 0 can have the same problem for basically the same reason)
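On the practical side of question 2, one generic (if blunt) repair is to clamp the eigenvalues of the offending matrix; a small R sketch, not tied to any particular FA implementation (functions such as nearPD in the Matrix package do a more careful job):
```
make_pd <- function(S, eps = 1e-8) {
  e <- eigen(S, symmetric = TRUE)
  d <- pmax(e$values, eps)                  # floor non-positive eigenvalues
  e$vectors %*% diag(d) %*% t(e$vectors)    # reassemble a positive definite matrix
}
```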
| null |
CC BY-SA 2.5
| null |
2011-03-15T21:56:51.387
|
2011-03-17T18:21:42.010
|
2011-03-17T18:21:42.010
|
26
|
26
| null |
8333
|
2
| null |
7969
|
8
| null |
I'm answering my own question, but I found a pretty good solution, even if I do not fully understand it. Looking at the code from the GNU Scientific Library, here is how it samples gamma variables (`r` is the random number generator, `a` is $\alpha$ and `b` is $\beta$):
```
if (a < 1)
{
double u = gsl_rng_uniform_pos (r);
return gsl_ran_gamma (r, 1.0 + a, b) * pow (u, 1.0 / a);
}
```
`gsl_ran_gamma` is the function which returns a gamma random sample (so the above is a recursive call), while `gsl_rng_uniform_pos` returns a uniformly distributed number in $(0,1)$ (the `_pos` is for strictly positive as it is guaranteed to not return 0.0).
Therefore, I can take the log of the last expression and use
```
return log(gsl_ran_gamma(r, 1.0 + a, b)) + log(u)/a;
```
To get what I wanted. I now have two `log()` calls (but one less `pow()`), but the result is probably better. Before, as whuber pointed out, I had something raised to the power of $1/a$, potentially a huge number. Now, in logspace, I'm multiplying by $1/a$. So, it is less likely to underflow.
| null |
CC BY-SA 2.5
| null |
2011-03-15T22:29:01.410
|
2011-03-16T18:15:03.767
|
2011-03-16T18:15:03.767
|
2067
|
2067
| null |
8334
|
1
|
8357
| null |
8
|
2378
|
I am trying to set up an automatic screen to detect structural breaks in large numbers of time series.
The time series are weekly and represent behaviour of customers. I have set up a Chow test. I use the most recent 4 weeks and compare it to the preceding 22. I want to know if their recent behaviour has significantly diverged from the preceding behaviour.
My question is this:
>
Is the Chow Test the most appropriate test for this question?
If this is not the most appropriate test how can I determine what test is the most appropriate one?
|
Chow test or not?
|
CC BY-SA 2.5
| null |
2011-03-15T22:59:36.823
|
2015-11-17T11:07:46.493
|
2011-03-16T13:02:46.637
| null |
2997
|
[
"time-series",
"hypothesis-testing",
"chow-test",
"change-point"
] |
8335
|
1
|
8353
| null |
4
|
1725
|
I am a beginner in statistics and poor in mathematics. I am trying to to assess effect of intervention in one state versus another using annual data.
```
My data are
State 1 State 2
Cases Deaths Cases Deaths
2004 1125 5 2024 254
2005 1213 5 1978 209
2006 1003 4 2294 217
2007 1425 6 2312 249
2008 1172 4 1528 197
2009 1092 3 1683 204
2010 1316 4 2024 218
```
When I was stuck for the correct statistical procedure, one senior member "irishstat" advised me the following very convincing analysis for which I am ever grateful:
X1 is the number of cases and Y is the number of deaths. X2 is the empirically identified point of anomaly (2009, i.e. period 6, for State1 and 2004, i.e. period 1, for State2). Outlier detection led to identifying one anomalous data point for each state, reflecting some unknown background variable, thus yielding a more robust estimate of the mortality rates.
Analysis of State1
```
State1 Y(T) = -.65649
+[X1(T)][.0046)] CASES +[X2(T)][-1.3608)] PULSE6 I~P00006STATE1 + [A(T)]
Suggesting an unusually low mortality rate for 2009
Analysis of State2
State2 Y(T) = 123.55
+[X1(T)][(+ .0468)] CASES +[X2(T)][(+ 35.7590)] PULSE1 I~P00001STATE2 + [A(T)]
Suggesting an unusually high mortality rate for 2004
```
This leads to estimating two cleansed data points
```
STATE YEAR Y OBSERVED Y ADJUSTED
STATE1 2009 3 4.36
STATE2 2004 254 218.24
```
Replacing these two observed, possibly errant values (possibly due to some unspecified concomitant factor, a “lurking variable”), one computes a rate of .0046 for STATE1 and .0468 for STATE2.
My problem now is how to do the Chow test for constancy of parameters across groups, to check for rejection of the null hypothesis of equal coefficients. I have SPSS v19. Kindly advise me step by step.
|
How to do Chow test for constancy of parameters across 2 groups
|
CC BY-SA 2.5
| null |
2011-03-15T23:30:08.777
|
2011-03-17T11:37:41.620
|
2011-03-16T13:01:26.090
| null |
2956
|
[
"statistical-significance",
"spss",
"chow-test"
] |
8336
|
2
| null |
8206
|
3
| null |
Fully customizable [ggplot2](http://had.co.nz/ggplot2/) boxplot...
```
#bootstrap
data <- data.frame(value=rnorm(100,mean = 0.5, sd = 0.2),group=0)
#processing (ddply comes from the plyr package)
library(plyr)
metaData <- ddply(data,~group,summarise,
mean=mean(value),
sd=sd(value),
min=min(value),
max=max(value),
median=median(value),
Q1=0,Q3=0
)
bps <- boxplot.stats(data$value,coef=1.5)
metaData$min <- bps$stats[1] #lower wisker
metaData$max <- bps$stats[5] #upper wisker
metaData$Q1 <- bps$stats[2] # 1st Quartile
metaData$Q3 <- bps$stats[4] # 3rd Quartile
#adding outliers
out <- data.frame() #initialising storage for outliers
if(length(bps$out) > 0){
for(n in 1:length(bps$out)){
pt <-data.frame(value=bps$out[n],group=0)
out<-rbind(out,pt)
}
}
#adding labels
labels <-data.frame(value=metaData$max, label="Upper bound")
labels <-rbind(labels,data.frame(value=metaData$min, label="Lower bound"))
labels <-rbind(labels,data.frame(value=metaData$median, label="Median"))
labels <-rbind(labels,data.frame(value=metaData$Q1, label="First quartile"))
labels <-rbind(labels,data.frame(value=metaData$Q3, label="Third quartile"))
#drawing
library(ggplot2)
p <- ggplot(metaData,aes(x=group,y=mean))
p <- p + geom_segment(aes(x=c(0.1,0,0.1),y=c(0,0,1),xend=c(0,0,-0.1),yend=c(0,1,1)))
p <- p + geom_text(aes(y=c(0,1),label=c(0,1),x=0.2))
p <- p + geom_errorbar(aes(ymin=min,ymax=max),linetype = 1,width = 0.5) #main range
p <- p + geom_linerange(aes(ymin=min,ymax=max),linetype = 1,width = 0, color="white")# white line range
p <- p + geom_linerange(aes(ymin=min,ymax=max),linetype = 2) #main range dotted
p <- p + geom_crossbar(aes(y=median,ymin=Q1,ymax=Q3),linetype = 1,fill='white') #box
if(length(out) >0) p <- p + geom_point(data=out,aes(x=group,y=value),shape=4) # drawing outliers if any
p <- p + scale_x_discrete(breaks=c(0))
p <- p + scale_y_continuous(name= "Value")
p <- p + geom_text(data=labels,aes(x=0.5,y=value,label=round(value,2)),colour="black",angle=0,hjust=0.5, vjust=0.5,size=3)
p <- p + opts(panel.background = theme_rect(fill = "white",colour = NA))
p <- p + opts(panel.grid.minor = theme_blank(), panel.grid.major = theme_blank())
p <- p + opts(axis.title.x=theme_blank())
p <- p + opts(axis.text.x = theme_blank())
p <- p + opts(axis.title.y=theme_blank())
p <- p + opts(axis.text.y = theme_blank())
p + coord_flip()
```
Result:

...the code may be a bit ugly but it works the right way.
| null |
CC BY-SA 2.5
| null |
2011-03-15T23:45:40.963
|
2011-03-16T00:07:39.330
|
2011-03-16T00:07:39.330
|
3376
|
3376
| null |
8337
|
2
| null |
1112
|
0
| null |
A very simple option is dividing each number in your data by the largest number in your data. If you have many small numbers and a few very large ones, this might not convey the information well. But it's relatively easy; if you think meaningful information is lost when you graph the data like this, you could try one of the more sophisticated techniques that others have suggested.
| null |
CC BY-SA 2.5
| null |
2011-03-16T00:27:24.787
|
2011-03-16T00:27:24.787
| null | null |
3700
| null |
8338
|
1
|
8339
| null |
3
|
102
|
I'm looking at binomial data where I believe that the probability of the outcome is the product of two independent factors. If you think of it as a two step decision,
- At the first step, there is a certain probability $p$ that you will have a success, and $1-p$ that you will have a failure. If you have a failure at this step, there is no further opportunity for success.
- Then, at the second step, there is a certain probability $q$ of success.
So, the overall probability of success is $p\times q$. I'm assuming, for now, that $p$ and $q$ vary independently across individuals. Both $p$ and $q$ have some variance across individuals, but not necessarily the same one.
Due to the nature of the data, I can estimate the variance of $q$, and the variance of $p\times q$. From this, I would like to estimate the variance of $p$. I just can't wrap my mind around how.
|
Estimating variability of unseen factor
|
CC BY-SA 2.5
| null |
2011-03-16T01:37:12.077
|
2011-03-17T12:41:40.457
| null | null |
287
|
[
"probability",
"estimation",
"binomial-distribution",
"variance"
] |
8339
|
2
| null |
8338
|
4
| null |
Supposing for independent $X$, $Y$, we know $E[X]$, $E[XY]$, $var(X)$, $var(XY)$. Thus we know $E[Y] = E[XY]/E[X]$, $E[X^2]=var(X) + E[X]^2$ and $E[(XY)^2]=var(XY)+E[XY]^2$.
Since $E[(XY)^2] = E[X^2 Y^2] = E[X^2] E[Y^2]$ for independent $X$, $Y$, we have $E[Y^2] = E[(XY)^2]/E[X^2]$.
To summarize,
$$var(Y) = \frac{var(XY) + E[XY]^2}{var(X) + E[X]^2}- \left(\frac{E[XY]}{E[X]}\right)^2.$$
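A quick Monte Carlo sanity check of this identity, with arbitrary beta distributions standing in for the two independent factors:
```
set.seed(1)
x <- rbeta(1e5, 3, 3)   # plays the role of X (the observable factor)
y <- rbeta(1e5, 2, 5)   # plays the role of Y (the unseen factor)
lhs <- var(y)
rhs <- (var(x * y) + mean(x * y)^2) / (var(x) + mean(x)^2) - (mean(x * y) / mean(x))^2
c(lhs, rhs)             # should agree up to simulation error
```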
| null |
CC BY-SA 2.5
| null |
2011-03-16T01:46:27.583
|
2011-03-17T12:41:40.457
|
2011-03-17T12:41:40.457
|
2116
|
3567
| null |
8340
|
1
|
8346
| null |
2
|
670
|
Update: I wanted to clarify that this is a simulation. Sorry if I confused everyone. I have also used meaningful names for my variables.
I am not a statistician so please correct me if I make a blunder in explaining what I want. In regard to my [previous question](https://stats.stackexchange.com/questions/7996/what-is-a-good-way-of-estimating-the-dependence-of-an-output-variable-on-the-inpu), I have reproduced parts of my question here for reference.
>
I am evaluating a scenario's output
dependence on three
variables: Area, Speed and NumOfVehicles. For this, I am
conducting the following experiments:
Fix Area+Speed, Vary NumOfVehicles - Total four sets of (Area+Speed) each having 4 variations of NumOfVehicles
Fix Speed+NumOfVehicles, Vary Area - Total four sets of (Speed+NumOfVehicles) each having 3 variations of Area
Fix NumOfVehicles+Area, Vary Speed - Total four sets of (NumOfVehicles+Area) each having 6 variations of Speed
The output of any simulation is the
value of a variable over time. The output
variable I am observing is the time at which 80% of the cars crash.
I am trying to determine which
parameter(s) dominate the outcome of
the experiment. By dominate, I mean
that sometimes, the outcomes just does
not change when one of the parameters
change but when some other parameter
is changed even by a small amount, a
large change in the output is
observed. I need to capture this
effect and output some analysis from
which I can understand the dependence
of the output on the input parameters.
A friend suggested Sensitivity
Analysis but am not sure if there are
simpler ways of doing it. Can someone
please help me with a good (possibly
easy because I don't have a Stats
background) technique? It would be
great if all this can be done in R.
My previous result was not very satisfactory, judging by the regression output. So I went ahead and repeated all my experiments 20 times, each with different variations of each variable (so, for instance, instead of 4 variations of Area I now have 8, and so on). Following is the summary I obtained from R after running a linear regression:
```
Call:
lm(formula = T ~ Area + Speed + NumOfVehicles)
Residuals:
Min 1Q Median 3Q Max
-0.13315 -0.06332 -0.01346 0.04484 0.29676
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.04285 0.02953 1.451 0.148
Area 0.70285 0.02390 29.406 < 2e-16 ***
Speed -0.15560 0.02080 -7.479 2.12e-12 ***
NumOfVehicles -0.27447 0.02927 -9.376 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.08659 on 206 degrees of freedom
Multiple R-squared: 0.8304, Adjusted R-squared: 0.8279
F-statistic: 336.2 on 3 and 206 DF, p-value: < 2.2e-16
```
as opposed to my previous result:
```
lm(formula = T ~ Area + Speed + NumOfVehicles)
Residuals:
Min 1Q Median 3Q Max
-0.35928 -0.06842 -0.00698 0.05591 0.42844
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.01606 0.16437 -0.098 0.923391
Area 0.80199 0.15792 5.078 0.000112 ***
Speed -0.27440 0.13160 -2.085 0.053441 .
NumOfVehicles -0.31898 0.14889 -2.142 0.047892 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1665 on 16 degrees of freedom
Multiple R-squared: 0.6563, Adjusted R-squared: 0.5919
F-statistic: 10.18 on 3 and 16 DF, p-value: 0.0005416
```
From my understanding, my current results have lower standard errors, so that is good. In addition, the Pr values also seem quite low, which tells me that this result is better than my previous one. So can I go ahead and say that Area has the largest effect on the output, followed by Speed and NumOfVehicles, in that order? Can I make any other deductions from this result?
Also, it was suggested that I look into adding additional covariates like $A^2$, but if $A$ is the area, what does saying that "time" depends on $A^2$ actually mean?
|
Comparing and understanding my linear regression result with a previous attempt
|
CC BY-SA 2.5
| null |
2011-03-16T03:51:14.743
|
2011-03-16T16:07:19.607
|
2017-04-13T12:44:39.283
|
-1
|
2164
|
[
"r",
"regression",
"experiment-design"
] |
8342
|
1
|
8378
| null |
13
|
13242
|
I have a three level contingency table, with count data for several species, the host plant from which they were collected and whether that collection happened on a rainy day (this actually matters!). Using R, fake data might be something like this:
```
count <- rpois(8, 10)
species <- rep(c("a", "b"), 4)
host <- rep(c("c","c", "d", "d"), 2)
rain <- c(rep(0,4), rep(1,4))
my.table <- xtabs(count ~ host + species + rain)
, , rain = 0
species
host a b
c 12 15
d 10 13
, , rain = 1
species
host a b
c 11 12
d 12 7
```
Now, I want to know two things: Are species associated with host plants? Does "rain or not" affect this association? I used `loglm()` from `MASS` for this:
```
# Are species independent to host plants, given the effect of rain?
loglm(~species + host + rain + species*rain + host*rain, data=my.table)
# Given any relationship between host plants and species, does rain change it?
loglm(~species + host + rain + species*host, data=my.table)
```
This is a little outside my comfort level, and I wanted to check that I'd set the models up right and that this was the best way to approach these questions.
|
Appropriate way to deal with a 3-level contingency table
|
CC BY-SA 3.0
| null |
2011-03-16T04:49:19.827
|
2014-06-19T14:20:21.280
|
2014-06-19T14:20:21.280
|
7290
|
3732
|
[
"r",
"categorical-data",
"log-linear"
] |
8343
|
2
| null |
8340
|
1
| null |
This result looks great. One thing that I would recommend doing is looking at the residuals for each of the two models to see if there are any trends in the first model that are not present in the second. If the model has residuals with an approximately normal distribution then you won't need to add more terms like A^2. The command for this is:
```
my_model <- lm(formula = T ~ Area + Speed + NumOfVehicles)
resid(my_model)
```
| null |
CC BY-SA 2.5
| null |
2011-03-16T05:06:26.793
|
2011-03-16T05:06:26.793
| null | null |
3727
| null |
8344
|
1
|
8354
| null |
20
|
11018
|
I am trying to understand how influence functions work. Could someone explain in the context of a simple OLS regression
\begin{equation}
y_i = \alpha + \beta \cdot x_i + \varepsilon_i
\end{equation}
where I want the [influence function](http://en.wikipedia.org/wiki/Robust_statistics#Empirical_influence_function) for $\beta$.
|
Influence functions and OLS
|
CC BY-SA 2.5
| null |
2011-03-16T06:50:47.653
|
2022-11-12T15:21:33.273
|
2022-11-12T15:21:33.273
|
11887
|
2693
|
[
"regression",
"least-squares",
"influence-function"
] |
8345
|
2
| null |
8299
|
1
| null |
MDL for model selection seems to mean "choose the model with the lowest parametric complexity". Since parametric complexity is equal to the worst-case excess code length when you don't know the distribution, I bet it also gives you the number of observations needed to get close to the true member of `M` in KL-divergence. Parametric complexity is a function of the number of observations `n` and is given as `COMP` in bayer's answer.
Chapter 14 and chapter 7 of Grunwald's book give more details
| null |
CC BY-SA 2.5
| null |
2011-03-16T07:16:45.877
|
2011-03-16T07:16:45.877
| null | null |
511
| null |
8346
|
2
| null |
8340
|
5
| null |
To begin with, I think it was OK to rephrase the question from your previous post; I remember reading it, but all of that discussion is lost. I beg your pardon if some of the ideas here are repeated (I was too lazy to search for the original post).
There are some doubts regarding your experimental scheme and conclusions:
- Is the data you are working with (the 3 explanatory variables Area, Speed and NumOfVehicles and the dependent variable T; note that it is not correct to call them parameters, since parameters are the coefficients you estimate by linear regression) true empirical data that you observe in real life, or do you just simulate the data based on some likely values that you believe are true? Even with a believable profile of the attributes, what is their joint distribution? My conclusion that the data is not from real observations comes from here:
So what I did was that I went ahead and repeated all my experiments 20 times each with different variations of each parameters (so for instance, instead of 4 variations of A, I now have 8 and so on).
Thus this has (I guess) little to do with bootstrapping, other re-sampling techniques, or Monte Carlo simulation. By manually setting the experimental design this way, you draw conclusions that have little empirical backing. I hope I am wrong on this point, though.
- Impacts of the variables are measured neither by the values of the parameters NOR by smaller standard errors or p-values! If you wish to compare the impacts in terms of marginal effects you need standardized regression coefficients, also known as beta coefficients. You can calculate them manually (coef() times the sd of the regressor, divided by the sd of the dependent variable) or redo the regressions after scaling all 4 variables with the scale() function; see the sketch after this list. You may then compare the marginal effects represented by the beta coefficients. Standardized data are dimensionless, so the impacts of different explanatory variables become comparable.
- However, if the variables are slowly varying and strictly positive, you may be interested not in marginal impacts but in percent changes, which are given by the elasticities of the different variables. Take the original model's estimated parameters (coef()) and multiply each by the mean of the regressor divided by the mean of the dependent variable; in an additive model this gives the mean elasticity. Note that you can't do the same in the standardized (zero-mean) model, and since the scaling fractions are in general different, so will be the betas and the elasticities. The interpretation of an elasticity is straightforward: if the explanatory variable changes by 1 per cent, what is the per cent change of the dependent variable?
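A small sketch of point 2, assuming the simulation results sit in a data frame `dat` with columns T, Area, Speed and NumOfVehicles (the column names follow the question; `dat` itself is hypothetical):
```
fit <- lm(T ~ Area + Speed + NumOfVehicles, data = dat)
# manual standardized (beta) coefficients: slope * sd(x) / sd(y)
beta <- coef(fit)[-1] *
  sapply(dat[c("Area", "Speed", "NumOfVehicles")], sd) / sd(dat$T)
beta
# equivalently, refit on scaled variables
lm(scale(T) ~ scale(Area) + scale(Speed) + scale(NumOfVehicles), data = dat)
```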
All the other suggestions, like taking transformations or adding other variables ($A^2$ for example), come from further statistical inspection of the residuals for regression's traditional weak spots: is the linear model a good approximation (RESET), no multicollinearity (VIF, condition indices), no heteroscedasticity (White or any other test), no autocorrelation (DW, Ljung-Box, etc.), and so on.
P.S. The current result is neither good nor bad; it just looks suspicious to me.
| null |
CC BY-SA 2.5
| null |
2011-03-16T07:17:02.403
|
2011-03-16T07:17:02.403
| null | null |
2645
| null |