Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5571 | 1 | 5579 | null | 12 | 6431 | I recently fit 4 multiple regression models for the same predictor/response data. Two of the models I fit with Poisson regression.
```
model.pois <- glm(Response ~ P1 + P2 +...+ P5, family=poisson(), ...)
model.pois.inter <- glm(Response ~ (P1 + P2 +...+ P5)^2, family=poisson(), ...)
```
Two of the models I fit with negative binomial regression.
```
library(MASS)
model.nb <- glm.nb(Response ~ P1 + P2 +...+ P5, ...)
model.nb.inter <- glm.nb(Response ~ (P1 + P2 +...+ P5)^2, ...)
```
Is there a statistical test I can use to compare these models? I've been using the AIC as a measure of the fit, but AFAIK this doesn't represent an actual test.
| Comparing regression models on count data | CC BY-SA 3.0 | null | 2010-12-16T18:04:08.740 | 2014-12-10T12:22:50.943 | 2014-12-10T12:22:50.943 | 56216 | 1973 | [
"regression",
"aic",
"count-data",
"likelihood-ratio",
"model-comparison"
]
|
5572 | 1 | 5584 | null | 8 | 531 | I am planning a pre-post treatment-control design study with a large number of pre-treatment measurements. I have subjects divided into a control group and a treatment group. For both groups, I will collect hourly data for one year prior to the start of the treatment and then continue collecting data for another year. This will yield approximately 9000 pre-treatment measurements and 9000 post-treatment measurements for each subject.
The treatment is something that cannot be stopped once it is started, so a cross-over design could only be of the form AA/AB, which won't take advantage of the benefits of that type of design.
The psychological and bio-medical literature suggests using an ANCOVA model, where the pre-treatment data is used as a covariate in the model. Putting 9000 covariates in a model seems totally ridiculous. Also, reducing the pre-treatment data to a summary statistic doesn't take advantage of the large number of measurements.
I'm sure that this must have come up before, any ideas? References to published results would be especially helpful.
| Taking advantage of many pre-treatment measurements | CC BY-SA 2.5 | null | 2010-12-16T18:16:48.147 | 2010-12-17T21:50:50.030 | 2010-12-17T21:50:50.030 | null | 743 | [
"mixed-model",
"experiment-design",
"repeated-measures",
"panel-data"
]
|
5573 | 1 | 5577 | null | 7 | 769 | I would like to create an "equilibrated histogram" with roughly the same number of data points in each bin. A second restriction is that I should have only 4 bins. Given the list of numbers below, how can I achieve that?
-2.153, -1.732, -1.699, -1.559, -1.355, -1.306, -1.151, -1.129, -0.636, 0.4085, 0.5408, 0.5731, 0.5842, 0.6206, 0.8175, 0.8274, 0.8710, 1.3214, 1.5552, 2.2342
Thanks!
| How to build an "equilibrated histogram"? | CC BY-SA 2.5 | null | 2010-12-16T20:06:26.387 | 2013-06-18T18:12:49.263 | 2010-12-16T20:49:03.377 | 696 | 64 | [
"r",
"histogram"
]
|
5574 | 2 | null | 5563 | 5 | null | As @David mentioned, R is not doing symbolic calculations on its own, and such jobs should be done in other programs or with wrapper packages.
Yet, calculating a limit numerically is usually quite easy -- in most cases you just need to find an argument $n$ large enough that the difference $f(n+1)-f(n)$ is smaller than the desired accuracy -- this can be done in a simple loop.
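For example, a minimal R sketch of such a loop (my own illustration; the function `f` and the tolerance are placeholders):
```
# increase n until successive values of f(n) change by less than a tolerance
f   <- function(n) (1 + 1/n)^n   # example sequence; its limit is e
tol <- 1e-6
n   <- 1
while (abs(f(n + 1) - f(n)) > tol) n <- n + 1
f(n)  # close to exp(1); note that a small successive difference does not
      # guarantee the same accuracy for the limit itself
```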
| null | CC BY-SA 2.5 | null | 2010-12-16T20:10:54.397 | 2010-12-16T20:10:54.397 | null | null | null | null |
5575 | 2 | null | 5573 | 7 | null | You are looking for [quantiles](http://en.wikipedia.org/wiki/Quantile); in R there is a function `quantile` that will calculate them for you; `Hmisc` R package provides `cut2` function which explicitly calculates such "equilibrated bins".
| null | CC BY-SA 2.5 | null | 2010-12-16T20:14:35.177 | 2010-12-16T20:14:35.177 | null | null | null | null |
5577 | 2 | null | 5573 | 8 | null | To follow up on @mbq's suggestion, here's the code to do that with R:
```
require(Hmisc)
x <- c(-2.153, -1.732, -1.699, -1.559, -1.355
, -1.306, -1.151, -1.129, -0.636, 0.4085
, 0.5408, 0.5731, 0.5842, 0.6206, 0.8175
, 0.8274, 0.8710, 1.3214, 1.5552, 2.2342
)
eqBins <- cut2(x, g = 4)
#what are the bins and how many in each?
> as.data.frame(table(eqBins))
eqBins Freq
1 [-2.153,-1.306) 5
2 [-1.306, 0.541) 5
3 [ 0.541, 0.827) 5
4 [ 0.827, 2.234] 5
```
| null | CC BY-SA 3.0 | null | 2010-12-16T20:48:42.173 | 2013-06-18T18:12:49.263 | 2013-06-18T18:12:49.263 | 919 | 696 | null |
5578 | 2 | null | 5364 | 2 | null | I discovered a paper by Reiss et al.,
[Partial least squares confidence interval calculation for industrial end-of-batch quality prediction](http://dx.doi.org/10.1016/j.chemolab.2009.11.003), in which appears the quote:
>
The PLS prediction should be accompanied by an online confidence
interval to indicate the accuracy of the prediction. The formulation of
the confidence interval for the PLS prediction is an area of study that
has not concluded a “gold standard”.
This paper contains a reference to the 'excellent survey of such work', [Standard error of prediction for multiway PLS](http://dx.doi.org/10.1016/S0169-7439%2801%2900204-0), by Faber and Bro, and a paper by Faber and Kowalski,
[Propagation of measurement errors for the validation of predictions obtained by principal component regression and partial least squares](http://dx.doi.org/doi:10.1002/%28SICI%291099-128X%28199705%2911:3%3C181::AID-CEM459%3E3.0.CO;2-7).
I will summarize these results as they become available...
| null | CC BY-SA 2.5 | null | 2010-12-16T21:07:01.267 | 2010-12-16T22:27:47.410 | 2010-12-16T22:27:47.410 | 795 | 795 | null |
5579 | 2 | null | 5571 | 16 | null | You can compare the negative binomial model to the corresponding Poisson model with a likelihood ratio test. A Poisson model is equivalent to a negative binomial model with an overdispersion parameter of zero. Therefore they are nested models and a likelihood ratio test is valid. The complication is that the overdispersion parameter is restricted to be non-negative, i.e. it logically can't be less than zero, so the null hypothesis is on the boundary of the parameter space. This means that instead of comparing twice the difference in log-likelihoods to a chi-squared distribution with one degree of freedom, you need to compare it to a mixture distribution consisting of equal parts of a chi-squared with 1 d.f. and a point mass at zero (a chi-squared distribution with zero degrees of freedom). What that means in practice is that you can calculate the p-value using the chi-squared with 1 d.f. and then halve it. For more details and background, see Case 5 of [Self & Liang, JASA 1987; 82:605-610](http://www.jstor.org/stable/2289471).
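For illustration, a minimal R sketch of this recipe (my own sketch, not part of the original answer; `model.pois` and `model.nb` are assumed to be the fits from the question):
```
library(MASS)  # for glm.nb
# likelihood-ratio statistic: twice the difference in log-likelihoods
lr <- 2 * (as.numeric(logLik(model.nb)) - as.numeric(logLik(model.pois)))
# boundary-corrected p-value: halve the chi-squared(1) tail probability
pval <- 0.5 * pchisq(lr, df = 1, lower.tail = FALSE)
```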
Note that some statistical software packages, such as Stata, will do this all for you automatically when you fit a negative binomial model. In fact I've shamelessly cribbed much of the above from the Stata help system -- if you have Stata see `help j_chibar`.
| null | CC BY-SA 3.0 | null | 2010-12-16T21:47:11.920 | 2013-11-18T16:27:07.357 | 2013-11-18T16:27:07.357 | 17230 | 449 | null |
5580 | 2 | null | 5525 | 5 | null | A more general approach is to use the `logLik()` function. It returns an object with the attribute `df` that gives the fitted model's degrees of freedom. The benefit of this approach is that it works with many other model classes (including `glm`). In the case of ordinary linear regression (`lm`) this corresponds to the number of parameters + 1 for the estimate of the error variance.
From the `logLik` documentation:
>
For "lm" fits it is assumed that the scale has been estimated (by maximum likelihood or REML), and all the constants in the log-likelihood are included.
You can get the number of observations this way too.
```
> X1 <- rnorm(10)
> X2 <- rnorm(10)
> Y <- X1 + X2 + rnorm(10)
> model <- lm(Y~X1+X2)
> ll <- logLik(model)
> attributes(ll)
$nall
[1] 10
$nobs
[1] 10
$df
[1] 4
$class
[1] "logLik"
```
| null | CC BY-SA 2.5 | null | 2010-12-16T23:00:58.127 | 2010-12-16T23:16:31.660 | 2010-12-16T23:16:31.660 | 1670 | 1670 | null |
5581 | 2 | null | 4551 | 130 | null | Failing to look at (plot) the data.
| null | CC BY-SA 2.5 | null | 2010-12-16T23:13:28.913 | 2010-12-16T23:13:28.913 | null | null | 1670 | null |
5583 | 2 | null | 5572 | 1 | null | Excuse my previous post. I now see that you are not referring to 9000 different covariates.
What I have written does not apply to your situation.
Sincerest apologies.
Paul
There is a lot of discussion about matching and dimensionality reduction on pre-treatment covariates that may be worthwhile examining - i.e. propensity weighting via logistic regression and establishing balance on the pre-treatment covariates vis-à-vis different matching approaches.
Please refer to the following: [http://gking.harvard.edu/matchit](http://gking.harvard.edu/matchit)
This approach is easily executed in R, but with the number of variables you would be looking to use it would be very unlikely to work.
Cheers Paul
| null | CC BY-SA 2.5 | null | 2010-12-17T04:14:41.050 | 2010-12-17T05:49:56.120 | 2010-12-17T05:49:56.120 | 2238 | 2238 | null |
5584 | 2 | null | 5572 | 5 | null | This is not a complete answer, but just a few thoughts:
- More pre-treatment measures should increase the reliability of your measurement of baseline differences. Increasing reliability of measuring baseline differences should increase your statistical power in detecting group differences (assuming a real effect exists) using the pre-post control design.
- 9000 pre-treatment measures is a lot. Such a design would usually imply that you are interested in the temporal dynamics of some phenomena. Nonetheless, if you are just using measurements as an indicator of baseline differences, then there would be a number of strategies for incorporating this into your model.
The simplest strategy would be to take the mean for each participant.
If there is trend in participant data, then an estimate of the individual's score just before the intervention may be more of interest.
Even more sophisticated would be to develop a model for each individual of what their score would be on the dependent variable following the intervention based on some projection using the pre-treatment measures. This might be more relevant if there was some form of seasonal or other systematic effect operating in different ways for different individuals.
You may also want to read this [earlier question on strategies for analysing such designs](https://stats.stackexchange.com/questions/3466/best-practice-when-analysing-pre-post-treatment-control-designs).
| null | CC BY-SA 2.5 | null | 2010-12-17T05:13:59.897 | 2010-12-17T05:13:59.897 | 2017-04-13T12:44:33.310 | -1 | 183 | null |
5585 | 2 | null | 604 | 1 | null | I am having trouble following your reasoning, but here are some things you should consider.
Generally, the harder you fit a model to your training data, the worse the model will perform on independent validation data sets. By over-fitting the model to the training set, you risk capturing predictor-response relationships that are particular to the training set you are using. These relationships are likely due to random chance. When building a model for classification, you want to capture only the predictor-response relationships that are common to all training sets. This requires careful selection of the right size of model (big enough to capture the true predictor-response relationship, small enough not to overfit to your particular training set).
Also, the fact that a linear regression gives an R^2 of 1 doesn't mean much. For example, I can generate a 100 x 101 matrix of N(0,1) observations, take the first column to be the "response", and the other 100 columns to be "predictors." This will give me an R^2 value of 1, even though the "response" and "predictors" are independent (assuming the columns are linearly independent, which they are with probability 1 if they are all N(0,1) observations). So in your n=5 observation, p=20 predictor case, you can choose any 5 predictors and get a perfect fit. R^2 is generally a pretty poor model assessment metric.
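To make the point concrete, here is a small R sketch (my own illustration, not part of the original argument) in which pure noise gives R^2 = 1 once the number of predictors plus the intercept reaches the number of observations:
```
set.seed(1)
m <- matrix(rnorm(100 * 101), nrow = 100, ncol = 101)  # 100 observations of pure noise
dat <- as.data.frame(m)
names(dat)[1] <- "y"          # first column is the "response", the other 100 are "predictors"
fit <- lm(y ~ ., data = dat)
summary(fit)$r.squared        # 1 (up to numerical precision), despite no real relationship
```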
Also, unless you are certain the conditional distribution of the predictors is multivariate normal and that the predictors have a common covariance matrix, LDA may not be the best choice here. There are several better nonparametric/semiparametric methods available.
Maybe you can clarify your post a little bit to get a better response.
| null | CC BY-SA 2.5 | null | 2010-12-17T05:45:18.537 | 2010-12-17T05:45:18.537 | null | null | 2144 | null |
5586 | 1 | null | null | 3 | 3728 | I have several OLS models with robust s.e.'s that predict an outcome variable Y. For instance:
Model 1:
$Y=B_0 +B_1X_1$
Model 2:
$Y=B_0 + B_1X_1 + B_2X_2$
Model 3:
$Y=B_0 +B_1X_1 + B_2X_2 +B_3X_3$
I am interested in giving an average effect for $B_1$ across Models 1-3 with an accompanied 95% CI.
Can I just take the average of $B_1$'s across Models 1-3 and the average of standard errors to construct my confidence interval? What is this called?
| Average effect of coefficients across multiple linear models? | CC BY-SA 3.0 | null | 2010-12-17T06:31:41.993 | 2021-02-04T22:51:05.250 | 2016-08-15T16:27:59.967 | 22468 | null | [
"regression",
"confidence-interval",
"linear-model",
"regression-coefficients",
"mean"
]
|
5588 | 2 | null | 5514 | 3 | null | A conditional logit model specifies, for unit $i$ in group $j$ of size $G$, the probability $P(Y_{ij}=1|\sum_{k=1}^G Y_{kj}=M)$ for some $0<M<G$. So, suppose you have a group of size 2 in which there is one success. Then, group member number 1's contribution to the likelihood is given by $A^{Y_{1j}}B^{1-Y_{1j}}$, where,
$$
\begin{matrix}
A & = P(Y_{1j}=1|Y_{1j}+Y_{2j}=1) = \frac{P(Y_{1j}=1 \cap Y_{1j}+Y_{2j}=1)}{P(Y_{1j}+Y_{2j}=1)} \\
& = \frac{P(Y_{1j}=1 \cap Y_{2j}=0)}{P(Y_{1j}=1 \cap Y_{2j}=0)+P(Y_{1j}=0 \cap Y_{2j}=1)}.
\end{matrix}
$$
The term $B$ is derived in a similar manner. Specifying the response probabilities in terms of a logit model,
$$
\begin{matrix}
A & = \frac{\frac{\exp(\alpha + \beta'x_{1j})}{1+\exp(\alpha + \beta'x_{1j})}\frac{1}{1+\exp(\alpha + \beta'x_{2j})}}{\frac{\exp(\alpha + \beta'x_{1j})}{1+\exp(\alpha + \beta'x_{1j})}\frac{1}{1+\exp(\alpha + \beta'x_{2j})} + \frac{1}{1+\exp(\alpha + \beta'x_{1j})}\frac{\exp(\alpha + \beta'x_{2j})}{1+\exp(\alpha + \beta'x_{2j})}}\\
& = \frac{\exp(\alpha)\exp(\beta'x_{1j})}{\exp(\alpha)\exp(\beta'x_{1j})+\exp(\alpha)\exp(\beta'x_{2j})}\\
& = \frac{\exp(\beta'x_{1j})}{\exp(\beta'x_{1j})+\exp(\beta'x_{2j})},
\end{matrix}
$$
that is, the intercept cancels out due to the conditioning. In fact, all additively separable group specific effects cancel in this way, which is why conditional logit is sometimes called "fixed effects logit" model (with "fixed effects" defined in the econometrics sense, not the mixed models sense). This property is also why conditional logit is so useful for matched case control data, effectively partialling out potential sources of confounding due to heterogeneity across matching strata.
The coefficients are indeed on the "usual" logit scale.
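As a practical aside (my own sketch, not from the original answer), such a model can be fit in R with `survival::clogit`; the data here are made up purely for illustration:
```
library(survival)
set.seed(1)
d <- data.frame(group = rep(1:50, each = 2), x = rnorm(100))
# exactly one "success" per group, more likely for the member with larger x
d$y <- ave(d$x + rnorm(100), d$group, FUN = function(z) as.integer(z == max(z)))
fit <- clogit(y ~ x + strata(group), data = d)
summary(fit)  # the coefficient of x is on the usual log-odds scale
```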
| null | CC BY-SA 2.5 | null | 2010-12-17T06:57:01.517 | 2010-12-17T13:15:20.630 | 2010-12-17T13:15:20.630 | 96 | 96 | null |
5589 | 2 | null | 5586 | 4 | null | If these 3 models are estimated from independent samples, then you can assume that the $\beta_1$ estimates are independent across the 3 models, and you can average them. The standard error of the average is then the square root of the sum of the squared standard errors, divided by the number of models: $\frac{1}{3}\sqrt{\sigma_1^2+\sigma_2^2+\sigma_3^2}$.
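As a toy numerical illustration in R (the numbers below are made up):
```
b  <- c(0.52, 0.48, 0.55)   # hypothetical estimates of beta_1 from Models 1-3
se <- c(0.10, 0.12, 0.09)   # their standard errors (independent samples assumed)
b.bar  <- mean(b)
se.bar <- sqrt(sum(se^2)) / length(b)  # SE of the average of independent estimates
c(estimate = b.bar, lower = b.bar - 1.96 * se.bar, upper = b.bar + 1.96 * se.bar)
```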
However, you should check that you do not have an [omitted-variable problem](http://en.wikipedia.org/wiki/Omitted-variable_bias). If you do, then one of the $\beta_1$ estimates in your models is biased, so it should be discarded.
Of course, if you have all the data for these 3 models, I suggest estimating Model 3 on all the data and then using its coefficient $\beta_1$ with its standard error, assuming that your model is adequate.
| null | CC BY-SA 2.5 | null | 2010-12-17T07:11:15.503 | 2010-12-17T07:11:15.503 | null | null | 2116 | null |
5590 | 2 | null | 5571 | 5 | null | I believe `anova()` in R can be used for this. Despite its name, it's a likelihood ratio test. Crawley in his [The R Book](http://www.bio.ic.ac.uk/research/mjcraw/therbook/index.htm) has some examples of usage.
| null | CC BY-SA 2.5 | null | 2010-12-17T07:16:03.467 | 2010-12-17T07:16:03.467 | null | null | 144 | null |
5591 | 1 | 5592 | null | 48 | 7219 | Roughly speaking a p-value gives a probability of the observed outcome of an experiment given the hypothesis (model). Having this probability (p-value) we want to judge our hypothesis (how likely it is). But wouldn't it be more natural to calculate the probability of the hypothesis given the observed outcome?
In more details. We have a coin. We flip it 20 times and we get 14 heads (14 out of 20 is what I call "outcome of experiment"). Now, our hypothesis is that the coin is fair (probabilities of head and tail are equal to each other). Now we calculate the p-value, that is equal to the probability to get 14 or more heads in 20 flips of coin. OK, now we have this probability (0.058) and we want to use this probability to judge our model (how is it likely that we have a fair coin).
But if we want to estimate the probability of the model, why don't we calculate the probability of the model given the experiment? Why do we calculate the probability of the experiment given the model (p-value)?
| Why do people use p-values instead of computing probability of the model given data? | CC BY-SA 3.0 | null | 2010-12-17T10:36:49.853 | 2021-11-05T10:51:01.080 | 2016-08-11T20:26:39.243 | 28666 | 2407 | [
"likelihood",
"p-value"
]
|
5592 | 2 | null | 5591 | 34 | null | Computing the probability that the hypothesis is correct doesn't fit well within the frequentist definition of a probability (a long run frequency), which was adopted to avoid the supposed subjectivity of the Bayesian definition of a probability. The truth of a particular hypothesis is not a random variable, it is either true or it isn't and has no long run frequency. It is indeed more natural to be interested in the probability of the truth of the hypothesis, which is IMHO why p-values are often misinterpreted as the probability that the null hypothesis is true. Part of the difficulty is that from Bayes rule, we know that to compute the posterior probability that a hypothesis is true, you need to start with a prior probability that the hypothesis is true.
A Bayesian would compute the probability that the hypothesis is true, given the data (and his/her prior belief).
Essentially, deciding between frequentist and Bayesian approaches is a choice of whether the supposed subjectivity of the Bayesian approach is more abhorrent than the fact that the frequentist approach generally does not give a direct answer to the question you actually want to ask - but there is room for both.
In the case of asking whether a coin is fair, i.e. the probability of a head is equal to the probability of a tail, we also have an example of a hypothesis that we know in the real world is almost certainly false right from the outset. The two sides of the coin are non-symmetric, so we should expect a slight asymmetry in the probabilities of heads and tails, so if the coin "passes" the test, it just means we don't have enough observations to be able to conclude what we already know to be true - that the coin is very slightly biased!
| null | CC BY-SA 3.0 | null | 2010-12-17T11:06:10.647 | 2012-03-14T10:34:51.520 | 2012-03-14T10:34:51.520 | 887 | 887 | null |
5594 | 2 | null | 490 | 5 | null | If you are only interested in generalization performance, you are probably better off not performing any feature selection and using regularization instead (e.g. ridge regression). There have been several open [challenges](http://clopinet.com/isabelle/Projects/NIPS2003) in the machine learning community on feature selection, and methods that rely on regularization rather than feature selection generally perform at least as well, if not better.
| null | CC BY-SA 2.5 | null | 2010-12-17T11:13:27.427 | 2010-12-17T11:13:27.427 | null | null | 887 | null |
5596 | 2 | null | 5115 | 7 | null | [Adolphe Quetelet](http://en.wikipedia.org/wiki/Adolphe_Quetelet) for his work on the "average man", and for pioneering the use of statistics in the social sciences. Before him, statistics were largely confined to the physical sciences (astronomy, in particular).
| null | CC BY-SA 2.5 | null | 2010-12-17T12:20:38.070 | 2010-12-17T12:20:38.070 | null | null | null | null |
5597 | 1 | 5600 | null | 26 | 570 | As a biologist, many of the research projects I work on at some point involve collaboration with a statistician, whether it be for simple advice or for implementing and testing a model for my data. My statistics colleagues admit that they do a significant amount of collaboration, insomuch that the tenure review process only considers papers on which they are the first or last author.
What would make me (or any other scientist) a better collaborator? What would make it easier for you (as a statistician) to work with me? Specifically, what is one statistics concept you wish all of your scientist collaborators already understood?
| Statistics collaboration | CC BY-SA 2.5 | null | 2010-12-17T12:27:48.893 | 2017-01-21T09:30:29.850 | 2017-01-21T09:30:29.850 | 28666 | 1973 | [
"academia"
]
|
5598 | 2 | null | 5597 | 3 | null | Having no preconceived ideas about the method you should use solely based on papers. Their ideas, logic or methods may be faulty. You want to think about your problem and use the most appropriate set of tools. This reminds me of reproducing cited information without checking the source.
On the other hand, a paper with methods (or logic) that differ from the rest of the literature may hinder or even derail the review process because "it's not the norm".
| null | CC BY-SA 2.5 | null | 2010-12-17T13:49:05.330 | 2010-12-17T13:49:05.330 | null | null | 144 | null |
5599 | 2 | null | 5597 | 10 | null | I think the concept that few scientists grasp is this: A statistical result can really only be taken at face value when the statistical methods were chosen in advance while the experiment was being planned (or while preliminary data were collected to polish methods).
You are likely to be misled if you first analyze the data this way, then that way, then try something else, then analyze only a subset of data, then analyze only that subset after removing an obvious outlier..... and only stop when the results match your preconceptions or have lots of asterisks. That is a fine way to generate a hypothesis, but not an appropriate way to test one.
| null | CC BY-SA 2.5 | null | 2010-12-17T14:01:35.943 | 2010-12-18T03:00:48.173 | 2010-12-18T03:00:48.173 | 25 | 25 | null |
5600 | 2 | null | 5597 | 13 | null | My answer is from the point of view of an UK academic statistician. In particular, as an academic that gets judged on advances in statistical methodology.
>
What would make me (or any other
scientist) a better collaborator?
To be blunt - money. My time isn't free and I (as an academic) don't get employed to carry out standard statistical analysis. Even being first/last author on a paper that uses standard methodology is worth very little to me (in terms of promotion and my personal research). Paying for my time will buy me out of administrative or teaching duties. Payment could be through a joint grant.
In the UK, every five or so years academics have to submit their four best papers. My papers are judged on their contribution to the statistical literature. It sucks, but that's the way it is.
Now it may well be that you have a very interesting problem which would lead to advances in statistical techniques. However, just think about the size of your statistics department compared to the rest of the Uni. There probably won't be enough statisticians to go around.
In saying that, I do try and do some "statistical consultancy" once a year to broaden my interests and to help for teaching purposes. This year I did some [survival analysis](http://csgillespie.wordpress.com/2010/12/08/new-paper-survival-analysis/). However, I've never advertised this fact and I still get half dozen requests each year for help!
Sorry for being so negative :(
>
Specifically, what is one statistics
concept you wish all of your scientist
collaborators already understood?
That statisticians do statistical research. As one of my collaborators said:
>
Surely there's nothing left to solve in statistics?
| null | CC BY-SA 2.5 | null | 2010-12-17T15:00:51.533 | 2010-12-18T21:22:28.910 | 2010-12-18T21:22:28.910 | 919 | 8 | null |
5601 | 1 | 5652 | null | 5 | 6067 | I am trying to view the output from the GBM package for boosted trees in R. Below I am fitting a single tree without any sampling in order to compare the tree to the complete dataset. First, create the data set:
```
library(gbm)
set.seed(1973)
############## CREATE DATA#############################################
N <- 1000
X1 <- runif(N)
X2 <- 2*runif(N)
X3 <- ordered(sample(letters[1:4],N,replace=TRUE),levels=letters[4:1])
X4 <- factor(sample(letters[1:6],N,replace=TRUE))
X5 <- factor(sample(letters[1:3],N,replace=TRUE))
X6 <- 3*runif(N)
mu <- c(-1,0,1,2)[as.numeric(X3)]
SNR <- 10 # signal-to-noise ratio
Y <- X1**1.5 + 2 * (X2**.5) + mu
sigma <- sqrt(var(Y)/SNR)
Y <- Y + rnorm(N,0,sigma)
# introduce some missing values
X1[sample(1:N,size=500)] <- NA
X4[sample(1:N,size=300)] <- NA
data <- data.frame(Y=Y,X1=X1,X2=X2,X3=X3,X4=X4,X5=X5,X6=X6)
########################################################################
#Fit model##############################################################
gbm1 <- gbm(Y~X1+X2+X3+X4+X5+X6, # formula
data=data, # dataset
var.monotone=c(0,0,0,0,0,0), # -1: monotone decrease,
# +1: monotone increase,
# 0: no monotone restrictions
distribution="gaussian", # bernoulli, adaboost, gaussian,
# poisson, coxph, and quantile available
n.trees=1, # number of trees
shrinkage=1, # shrinkage or learning rate,
# 0.001 to 0.1 usually work
interaction.depth=1, # 1: additive model, 2: two-way interactions, etc.
bag.fraction = 1, # subsampling fraction, 0.5 is probably best
train.fraction = 1, # fraction of data for training,
# first train.fraction*N used for training
n.minobsinnode = 10, # minimum total weight needed in each node
keep.data=TRUE, # keep a copy of the dataset with the object
verbose=TRUE) # print out progress
###########################################################################
```
Next, look at the tree. This suggests, I think, a split on X2 at the value 1.5. However, it reports 522 records going one direction and 478 the other. Looking at the data, this split does not correspond to the actual counts. Any insight? Is this a bug?
```
pretty.gbm.tree(gbm1,i.tree = 1)
length(d<-subset(data, data$X2>1.50,3)[,1])
```
| How to view GBM package trees? | CC BY-SA 2.5 | null | 2010-12-17T15:25:31.870 | 2019-08-20T05:02:05.413 | 2019-08-20T05:02:05.413 | 11887 | 2040 | [
"r",
"boosting"
]
|
5602 | 1 | null | null | 2 | 243 | I have data which has several properties (metadata, as key value pairs, where the keyspace is shared over the whole dataset) per object.
I took a sample of objects and divided them in n groups according to an unknown algorithm.
What statistical methods or algorithms exist to find the relevant properties and their weights for the division, so that I can reproduce a grouping of the data similar to the one the "unknown algorithm" produced?
| Methods of grouping sets of data | CC BY-SA 2.5 | null | 2010-12-17T17:21:32.020 | 2010-12-21T14:58:31.110 | 2010-12-21T14:58:31.110 | 2423 | 2423 | [
"classification",
"data-mining"
]
|
5603 | 1 | 5640 | null | 3 | 775 | How does one approach the problem of modeling a "birth-death process" where the arrivals are dependent on the current state in the following way: if the population is above a certain point, the probability of an arrival decreases.
Basically, I'm interested in complicating (slightly) an existing model of "births" that just has Poisson-distributed arrivals, and I'm thinking of adding the idea that there's a "saturation point" above which arrivals are less likely (waiting until the population drops back below that point).
Should I be reading about nth-order Markov processes? Or should I be looking at queueing theory?
| Modeling a birth-death process that is not memoryless | CC BY-SA 2.5 | null | 2010-12-17T17:24:10.373 | 2010-12-24T02:13:25.680 | 2010-12-23T15:53:46.737 | 446 | 446 | [
"stochastic-processes"
]
|
5604 | 1 | null | null | 3 | 2797 | I have a data set that includes the number of visits to a website. Here are some descriptive statistics for my data
Median: 4
Mean: 14.1352
SD: 121.8119
Clearly, there are some huge values (individuals who have visited the site thousands of times). To remove these outliers I considered simply dropping any data that falls more than 3.5 standard deviations from the mean. The result is that I discovered there is still a significant fat tail in my data. After removing data more than 3.5 standard deviations from the mean, my descriptive statistics adjust to
Median: 4
Mean: 10.2201
SD: 19.7492
I also explored using a winsorized mean but again since my data is asymmetric I feel like my descriptive statistics are biased. Is there a method that I can use to reevaluate my data to provide descriptive statistics that would represent a ‘majority’ of the population?
As I understand the concept of bootstrapping, I could take a sample of my population and then resample it to generate thousands of populations that may each represent my population differently, based on resampling my original sample. Would this method be appropriate?
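To make that concrete, here is a rough sketch of what I have in mind in R (with simulated stand-in counts, since I can't share the real data):
```
set.seed(42)
visits <- rnbinom(5000, size = 0.3, mu = 14)  # stand-in for the real visit counts
boot.stats <- replicate(2000, {
  s <- sample(visits, replace = TRUE)
  c(median = median(s), trimmed = mean(s, trim = 0.2))
})
apply(boot.stats, 1, quantile, probs = c(0.025, 0.975))  # percentile intervals
```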
Any other ideas or direction?
Any references or examples with R would be very much appreciated as well.
| Removing outliers from asymmetric data | CC BY-SA 3.0 | null | 2010-12-17T19:04:04.553 | 2017-04-10T15:45:27.100 | 2017-04-10T15:45:27.100 | 11887 | null | [
"r",
"outliers",
"descriptive-statistics",
"winsorizing"
]
|
5605 | 2 | null | 5591 | 6 | null | "Roughly speaking p-value gives a probability of the observed outcome of an experiment given the hypothesis (model)."
but it doesn't. Not even roughly - this fudges an essential distinction.
The model is not specified, as Raskolnikov points out, but let's assume you mean a binomial model (independent coin tosses, fixed unknown coin bias). The hypothesis is the claim that the relevant parameter in this model, the bias or probability of heads, is 0.5.
"Having this probability (p-value) we want to judge our hypothesis (how likely it is)"
We may indeed want to make this judgement but a p-value will not (and was not designed to) help us do so.
"But wouldn't it be more natural to calculate the probability of the hypothesis given the observed outcome?"
Perhaps it would. See all the discussion of Bayes above.
"[...] Now we calculate the p-value, that is equal to the probability to get 14 or more heads in 20 flips of coin. OK, now we have this probability (0.058) and we want to use this probability to judge our model (how is it likely that we have a fair coin)."
'of our hypothesis, assuming our model to be true', but essentially: yes. Large p-values indicate that the coin's behaviour is consistent with the hypothesis that it is fair. (They are also typically consistent with the hypothesis being false but so close to being true we do not have enough data to tell; see 'statistical power'.)
"But if we want to estimate the probability of the model, why we do not calculate the probability of the model given the experiment? Why do we calculate the probability of the experiment given the model (p-value)?"
We actually don't calculate the probability of the experimental results given the hypothesis in this setup. After all, the probability is only about 0.176 of seeing exactly 10 heads when the hypothesis is true, and that's the most probable value. This isn't a quantity of interest at all.
It is also relevant that we don't usually estimate the probability of the model either. Both frequentist and Bayesian answers typically assume the model is true and make their inferences about its parameters. Indeed, not all Bayesians would even in principle be interested in the probability of the model, that is: the probability that the whole situation was well modelled by a binomial distribution. They might do a lot of model checking, but never actually ask how likely the binomial was in the space of other possible models. Bayesians who care about Bayes Factors are interested, others not so much.
| null | CC BY-SA 2.5 | null | 2010-12-17T19:10:47.387 | 2010-12-17T19:10:47.387 | null | null | 1739 | null |
5606 | 2 | null | 5542 | 2 | null | I propose the following solution to 2), and would appreciate feedback:
- Data include mean, $Y$, sample size $n$, and standard error $\sigma$; calculate precision ($\tau=\frac{1}{\sigma\sqrt{n}}$) because it is required for logN parameterization by BUGS
- data $Y\sim \text{N}(\beta_0,\tau)$
- precision $\tau\sim\text{Gamma}(\frac{n}{2},\frac{n}{2\tau})$
- diffuse priors
- use a $N(\mu=\beta_0, \sigma=\frac{1}{\sqrt{\tau}})$ prior
- Here is the code:
```
library(rjags)
data <- data.frame(Y = c(1.6, 2.5, 1.8, 1.8, 1.7, 2.5),
                   n = c(4, 4, 4, 3, 4, 3),
                   se = c(0.2, 0.41, 0.24, 0.27, 0.2, 0.14))
# convert se to precision
data <- transform(data, obs.prec = 1/se)[, colnames(data) != 'se']
# write the BUGS model to a file; sink() only captures printed output,
# not unevaluated code, so write the model definition as a string instead
writeLines("
model
{
  for (k in 1:length(Y)) {
    Y[k] ~ dnorm(beta.o, tau.y[k])
    tau.y[k] <- prec.y * n[k]
    u1[k] <- n[k]/2
    u2[k] <- n[k]/(2 * prec.y)
    obs.prec[k] ~ dgamma(u1[k], u2[k])
  }
  beta.o ~ dnorm(3, 0.0001)
  prec.y ~ dgamma(0.001, 0.001)
  sd.y <- 1/sqrt(prec.y)
}
", con = 'model.bug')
model <- jags.model(file = "model.bug",
                    data = data,
                    n.adapt = 500,
                    n.chains = 4)
mcmc.object <- coda.samples(model = model,
                            variable.names = c('beta.o', 'sd.y'),
                            n.iter = 10000,
                            thin = 50)
summary(mcmc.object)
```
## Update
I have revised this approach to compute a posterior predictive distribution. It required some modifications, mostly computing a posterior predictive distribution for an unobserved sample.
Details here:
David S. LeBauer, Dan Wang, Katherine T. Richter, Carl C. Davidson, and Michael C. Dietze 2013. Facilitating feedbacks between field measurements and ecosystem models. Ecological Monographs 83:133–154. [http://dx.doi.org/10.1890/12-0137.1](http://dx.doi.org/10.1890/12-0137.1) [pdf](https://github.com/PecanProject/pecan/blob/master/documentation/wang2013pys.pdf?raw=true)
Examples of this and simpler approaches here: [https://github.com/dlebauer/pecan-priors/blob/master/priors_demo.Rmd](https://github.com/dlebauer/pecan-priors/blob/master/priors_demo.Rmd)
| null | CC BY-SA 3.0 | null | 2010-12-17T19:57:51.563 | 2014-04-08T21:43:23.227 | 2014-04-08T21:43:23.227 | 1381 | 1381 | null |
5607 | 2 | null | 5534 | 7 | null | It’s much easier to simultaneously construct $X_i$ and $Y_i$ having the desired properties,
by first letting $Y_i$ be i.i.d. Uniform$[0,1]$ and then taking $X_i = F^{-1}(Y_i)$. This is the basic method for generating random variables with arbitrary distributions.
The other direction, where you are first given $X_i$ and then asked to construct $Y_i$, is more difficult, but is still possible for all distributions. You just have to be careful with how you define $Y_i$.
Attempting to define $Y_i$ as $Y_i = F(X_i)$ fails to produce uniformly distributed $Y_i$ when $F$ has jump discontinuities. You have to spread the point masses in the distribution of $X_i$ across the gaps created by the jumps.
Let $$D = \{x : F(x) \neq \lim_{z \to x^-} F(z)\}$$ denote the set of jump discontinuities of $F$. ($\lim_{z\to x^-}$ denotes the limit from the left. All distributions functions are right continuous, so the main issue is left discontinuities.)
Let $U_i$ be i.i.d. Uniform$[0,1]$ random variables, and define
$$Y_i =
\begin{cases}
F(X_i), & \text{if }X_i \notin D \\
U_i F(X_i) + (1-U_i) \lim_{z \to X_i^-} F(z), & \text{otherwise.}
\end{cases}
$$
The second part of the definition fills in the gaps uniformly.
The quantile function $F^{-1}$ is not a genuine inverse when $F$ is not 1-to-1. Note that if $X_i \in D$ then $F^{-1}(Y_i) = X_i$, because the pre-image of the gap is the corresponding point of discontinuity. For the continuous parts where $X_i \notin D$, the flat sections of $F$ correspond to intervals where $X_i$ has 0 probability so they don’t really matter when considering $F^{-1}(Y_i)$.
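A quick numerical check of the construction (my own sketch): take $X$ with an atom at $0$ (a zero-inflated exponential), fill the jump uniformly as above, and test the resulting $Y$ for uniformity.
```
set.seed(1)
n <- 1e5
X <- ifelse(runif(n) < 0.3, 0, rexp(n))              # P(X = 0) = 0.3, otherwise Exp(1)
F <- function(x) ifelse(x < 0, 0, 0.3 + 0.7 * pexp(x))
U <- runif(n)
# the left limit of F at the atom x = 0 is 0, so the gap (0, 0.3] is filled uniformly
Y <- ifelse(X == 0, U * F(0) + (1 - U) * 0, F(X))
ks.test(Y, "punif")                                  # consistent with Uniform[0,1]
```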
The second part of your question follows from similar reasoning after the first part which asserts that $X_i = F^{-1}(Y_i)$ with probability 1. The empirical CDFs are defined as
$$G_n(y) = \frac{1}{n} \sum_{i=1}^n 1_{\{Y_i \leq y\}}$$
$$F_n(x) = \frac{1}{n} \sum_{i=1}^n 1_{\{X_i \leq x\}}$$
so
$$
\begin{align}
G_n(F(x))
&= \frac{1}{n} \sum_{i=1}^n 1_{\{Y_i \leq F(x) \}}
= \frac{1}{n} \sum_{i=1}^n 1_{\{F^{-1}(Y_i) \leq x \}}
= \frac{1}{n} \sum_{i=1}^n 1_{\{X_i \leq x \}}
= F_n(x)
\end{align}
$$
with probability 1.
It should be easy to convince yourself that $Y_i$ has Uniform$[0,1]$ distribution by looking at pictures. Doing so rigorously is tedious, but can be done. We have to verify that $P(Y_i \leq u) = u$ for all $u \in (0,1)$. Fix such $u$ and let $x^* = \inf\{x : F(x) \geq u \}$ — this is just the value of quantile function at $u$. It’s defined this way to deal with flat sections. We’ll consider two separate cases.
First suppose that $F(x^*) = u$. Then
$$
Y_i \leq u
\iff Y_i \leq F(x^*)
\iff F(X_i) \leq F(x^*).
$$
Since $F$ is a non-decreasing function and $F(x^*) = u$,
$$
F(X_i) \leq F(x^*) \iff X_i \leq x^* .
$$
Thus,
$$
P[Y_i \leq u]
= P[X_i \leq x^*]
= F(x^*)
= u .
$$
Now suppose that $F(x^*) \neq u$. Then necessarily $F(x^*) > u$, and $u$ falls inside one of the gaps. Moreover, $x^* \in D$, because otherwise $F(x^*) = u$ and we have a contradiction.
Let $u^* = F(x^*)$ be the upper part of the gap. Then by the previous case,
$$
\begin{align}
P[Y_i \leq u]
&= P[Y_i \leq u^*] - P[u < Y_i \leq u^*]\\
&= u^* - P[u < Y_i \leq u^*].
\end{align}
$$
By the way $Y_i$ is defined, $P(Y_i = u^*) = 0$ and
$$
\begin{align}
P[u < Y_i \leq u^*]
&= P[u < Y_i < u^*] \\
&= P[u < Y_i < u^* , X_i = x^*] \\
&= u^* - u .
\end{align}
$$
Thus, $P[Y_i \leq u] = u$.
| null | CC BY-SA 2.5 | null | 2010-12-17T20:13:50.217 | 2010-12-17T20:13:50.217 | null | null | 1670 | null |
5608 | 2 | null | 5591 | 11 | null | As a former academic who moved into practice, I'll take a shot. People use p-values because they are useful. You can't see it in textbooky examples of coin flips. Sure they're not really solid foundationally, but maybe that is not as necessary as we like to think when we're thinking academically.
In the world of data, we're surrounded by a literally infinite number of possible things to look into next. With p-value computations, all you need is an idea of what is uninteresting and a numerical heuristic for what sort of data might be interesting (well, plus a probability model for the uninteresting). Then, individually or collectively, we can scan things pretty simply, rejecting the bulk of the uninteresting. The p-value allows us to say "If I don't put much priority on thinking about this otherwise, this data gives me no reason to change".
I agree p-values can be misinterpreted and overinterpreted, but they're still an important part of statistics.
| null | CC BY-SA 2.5 | null | 2010-12-17T20:55:13.000 | 2010-12-17T20:55:13.000 | null | null | 2134 | null |
5609 | 2 | null | 5399 | 5 | null | There are no strong results and it does not depend on Gaussianity. In the case where $x_1$ and $x_2$ are scalars, you are asking if knowing the variance of the variables implies something about their covariance. whuber’s answer is right. The Cauchy-Schwarz Inequality and positive semidefiniteness constrain the possible values.
The simplest example is that the squared covariance of a pair of variables can never exceed the product of their variances. For covariance matrices there is a generalization.
Consider the block partitioned covariance matrix of $[x_1 \ x_2]$,
$$
\left[
\begin{array}{cc}
\Sigma_{11} & \Sigma_{12} \\
\Sigma_{21} & \Sigma_{22}
\end{array}
\right].
$$
Then
$$\Vert \Sigma_{12} \Vert_q^2 \leq \Vert \Sigma_{11} \Vert_q \Vert \Sigma_{22} \Vert_q$$
for all [Schatten q-norms](http://en.wikipedia.org/wiki/Schatten_norm). Positive (semi)definiteness of the covariance matrix also provides the constraint that
$$
\Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}
$$
must be positive (semi)definite. $\Sigma_{22}^{-1}$ is the (Moore-Penrose) inverse of $\Sigma_{22}$.
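A quick numerical illustration of the positive semidefiniteness constraint (my own sketch, using a random covariance matrix):
```
set.seed(1)
A <- matrix(rnorm(25), 5, 5)
S <- crossprod(A)                                   # a valid 5 x 5 covariance matrix
S11 <- S[1:2, 1:2]; S12 <- S[1:2, 3:5]; S22 <- S[3:5, 3:5]
eigen(S11 - S12 %*% solve(S22) %*% t(S12))$values   # all non-negative
```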
| null | CC BY-SA 2.5 | null | 2010-12-17T21:44:43.637 | 2010-12-17T22:05:36.543 | 2010-12-17T22:05:36.543 | 1670 | 1670 | null |
5610 | 2 | null | 4364 | 21 | null | [Stein’s Lemma](http://en.wikipedia.org/wiki/Stein%27s_lemma) provides a very useful characterization. $Z$ is standard Gaussian iff
$$E f’(Z) = E Z f(Z)$$
for all absolutely continuous functions $f$ with $E|f’(Z)| < \infty$.
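A quick Monte Carlo sanity check of the identity (my own sketch), using $f(z) = z^3$ so that $E f'(Z) = 3EZ^2 = 3$ and $E Zf(Z) = EZ^4 = 3$:
```
set.seed(1)
z <- rnorm(1e6)
c(lhs = mean(3 * z^2), rhs = mean(z * z^3))  # both close to 3
```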
| null | CC BY-SA 2.5 | null | 2010-12-17T22:00:34.923 | 2010-12-17T23:24:38.617 | 2010-12-17T23:24:38.617 | 1670 | 1670 | null |
5611 | 2 | null | 411 | 4 | null | I think you have to consider the theoretical vs applied advantages of the different notions of distance. Mathematically natural objects don’t necessarily translate well into application.
Kolmogorov-Smirnov is the most well-known for application, and is entrenched in testing for goodness of fit. I suppose that one of the reasons for this is that when the underlying distribution $F$ is continuous the distribution of the statistic is independent of $F$.
Another is that it can be easily inverted to give confidence bands for the CDF.
But it’s often used in a different way where $F$ is estimated by $\hat{F}$, and the test statistic takes the form
$$\sup_x | F_n(x) - \hat{F}(x)|.$$
The interest is in seeing how well $\hat{F}$ fits the data and acting as if $\hat{F} = F$, even though the asymptotic theory does not necessarily apply.
| null | CC BY-SA 2.5 | null | 2010-12-17T22:33:50.943 | 2010-12-17T22:42:22.920 | 2010-12-17T22:42:22.920 | 1670 | 1670 | null |
5613 | 2 | null | 5591 | 13 | null | Your question is a great example of frequentist reasoning and is, actually quite natural. I've used this example in my classes to demonstrate the nature of hypothesis tests. I ask for a volunteer to predict the results of a coin flip. No matter what the result, I record a "correct" guess. We do this repeatedly until the class becomes suspicious.
Now, they have a null model in their head. They assume the coin is fair. Given the assumption of 50% correct guesses when everything is fair, every successive correct guess arouses more suspicion that the fair-coin model is incorrect. A few correct guesses and they accept the role of chance. After 5 or 10 correct guesses, the class always begins to suspect that the chance of a fair coin is low. Thus it is with the nature of hypothesis testing under the frequentist model.
It is a clear and intuitive representation of the frequentist take on hypothesis testing. It is the probability of the observed data given that the null is true. It is actually quite natural as demonstrated by this easy experiment. We take it for granted that the model is 50-50 but as evidence mounts, I reject that model and suspect that there is something else at play.
So, if the probability of what I observe is low given the model I assume (the p-value) then I have some confidence in rejecting my assumed model. Thus, a p-value is a useful measure of evidence against my assumed model taking into account the role of chance.
A disclaimer: I took this exercise from a long-forgotten article in what I recall was one of the ASA journals.
| null | CC BY-SA 2.5 | null | 2010-12-18T05:56:43.220 | 2010-12-18T06:11:01.040 | 2010-12-18T06:11:01.040 | 485 | 485 | null |
5614 | 1 | null | null | 8 | 1746 | While it is easier to use the Pearson chi-square/Cressie-Read type test, I would like to test the equality of proportions in $k$ categories across two groups using a Kolmogorov-Smirnov type test of the form proposed by [Pettitt & Stephens (1977)](http://www.jstor.org/stable/1268631) (see also [here](http://www4.gu.edu.au:8080/adt-root/uploads/approved/adt-QGU20031006.143823/public/03Chapter2.pdf)).
In particular as the authors of that paper point out, it may have some power against trending alternatives. So their one-sample nominal/categorical Kolmogorov-Smirnov test has the form:
$$ D_n = \sup_{\pi}\sup_{1 \leq j \leq k}\vert \sum_{i=1}^j(f_{exp,\pi(i)}-f_{obs,\pi(i)})\vert$$
where $\pi$ is a permutation of the order of the categories, $f_{.,i}$ are the observed and expected frequencies (or equivalently, proportion of observations) in category $i$. This can be written equivalently as:
$$ D_n = \frac{1}{2} \sum_{i=1}^k\vert f_{exp,i}-f_{obs,i} \vert$$
I would like to extend this to a two-sample case using a randomising/permutation procedure, such:
$$ D_n^{(r)} = \frac{1}{2} \sum_{i=1}^k\vert f^{(r)}_{\text{group1},i}-f^{(r)}_{\text{group2},i} \vert,\, r=1,\dots,R $$
where $.^{(r)}$ denotes a statistic calculated based on the $r^{\text{th}}$ permutation of the categorical variable. Reject if the value of the original statistic is larger than the value of $95\%$ of the permuted statistics.
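A rough R sketch of the procedure I have in mind (my own illustration; `perm.D`, `x` and `g` are invented names, and I assume exactly two groups, with permuting the group labels being equivalent to permuting the categorical variable across groups):
```
perm.D <- function(x, g, R = 999) {
  stat <- function(x, g) {
    tab <- prop.table(table(g, x), margin = 1)   # category proportions within each group
    0.5 * sum(abs(tab[1, ] - tab[2, ]))
  }
  obs  <- stat(x, g)
  perm <- replicate(R, stat(x, sample(g)))       # permute group labels
  c(statistic = obs, p.value = mean(c(obs, perm) >= obs))
}
```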
Any comments as to the pros/cons/validity of such a procedure are very welcome. Thanks.
| Two-sample permutation Kolmogorov-Smirnov tests | CC BY-SA 2.5 | null | 2010-12-18T06:56:55.773 | 2010-12-19T06:48:42.843 | null | null | 2399 | [
"hypothesis-testing"
]
|
5615 | 2 | null | 5604 | 6 | null | Don't remove any outliers until you explore the data a bit further. I suggest that you should do a log transform on the data and see whether it becomes more nearly symmetrical--the outliers may not be as extreme as you think. (Log values make perfect sense if there is some sort of power law at play.)
| null | CC BY-SA 2.5 | null | 2010-12-18T07:14:17.327 | 2010-12-18T07:14:17.327 | null | null | 1679 | null |
5617 | 1 | 5622 | null | 23 | 73411 | Let's say we have two factors (A and B), each with two levels (A1, A2 and B1, B2) and a response variable (y).
When performing a two-way ANOVA of the type:
```
y~A+B+A*B
```
We are testing three null hypotheses:
- There is no difference in the means of factor A
- There is no difference in the means of factor B
- There is no interaction between factors A and B
When written down, the first two hypothesis are easy to formulate (for 1 it is $H_0:\; \mu_{A1}=\mu_{A2}$)
But how should hypothesis 3 be formulated?
edit: and how would it be formulated for the case of more than two levels?
Thanks.
| What is the NULL hypothesis for interaction in a two-way ANOVA? | CC BY-SA 2.5 | null | 2010-12-18T13:50:36.403 | 2015-04-24T09:09:14.283 | 2010-12-18T16:12:53.090 | 930 | 253 | [
"hypothesis-testing",
"anova"
]
|
5618 | 2 | null | 5617 | 10 | null | An interaction tells us that the levels of factor A have different effects based on what level of factor B you're applying. So we can test this through a linear contrast. Let C = (A1B1 - A1B2) - (A2B1 - A2B2) where A1B1 stands for the mean of the group that received A1 and B1 and so on. So here we're looking at A1B1 - A1B2 which is the effect that factor B is having when we're applying A1. If there is no interaction this should be the same as the effect B is having when we apply A2: A2B1 - A2B2. If those are the same then their difference should be 0 so we could use the tests:
$H_0: C = 0\quad\text{vs.}\quad H_A: C \neq 0.$
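A minimal R sketch of this (my own illustration with simulated data): with two 2-level factors, the interaction line of the two-way ANOVA table tests exactly $H_0: C = 0$.
```
set.seed(1)
d <- expand.grid(A = factor(c("A1", "A2")), B = factor(c("B1", "B2")), rep = 1:10)
d$y <- rnorm(nrow(d)) + 0.5 * (d$A == "A2") * (d$B == "B2")  # build in an interaction
anova(lm(y ~ A * B, data = d))  # the A:B row tests the interaction contrast
```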
| null | CC BY-SA 2.5 | null | 2010-12-18T14:14:19.383 | 2010-12-18T15:16:35.160 | 2010-12-18T15:16:35.160 | 930 | 1028 | null |
5619 | 1 | null | null | 5 | 10058 | I know that $r$ is itself a measure of the effect size, but I would like to know if using Spearman's rank test I can argue that the relation between X and Y is significant with $r = 0.33$ and that the effect is medium, as I do with Pearson test.
| Effect size of Spearman's rank test | CC BY-SA 2.5 | null | 2010-12-18T15:33:30.063 | 2010-12-21T17:02:01.640 | 2010-12-18T16:07:11.407 | 930 | null | [
"correlation",
"effect-size"
]
|
5620 | 1 | 5621 | null | 9 | 8404 | I'm reading "[The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/download.html)" and early on there are references to p-vectors (page 10) and K-vectors (page 12).
What exactly is meant by a p-vector and K-vector?
| p-vector and K-vector | CC BY-SA 3.0 | null | 2010-12-18T15:34:36.363 | 2015-10-14T14:50:47.170 | 2015-10-14T14:50:47.170 | 919 | 1212 | [
"mathematical-statistics",
"terminology"
]
|
5621 | 2 | null | 5620 | 11 | null | It's merely some generic notation for a vector of $p$ attributes or variables observed on $i=1,\dots, N$ individuals, so that you can define $X^T = (X_1,X_2,\dots,X_p)$ as a vector of inputs, in the feature (or input) space (and each individual will have one such vector of observed inputs).
The $K$ notation seems to be reserved to the output space: in a classical linear regression model where $Y=X\beta$, Y is a scalar ($K=1$), whereas in a multivariate setting (say, you record weight, height, and color) it could be a $K$-vector (i.e., 3-vector with my example).
| null | CC BY-SA 2.5 | null | 2010-12-18T15:53:41.117 | 2010-12-18T15:53:41.117 | null | null | 930 | null |
5622 | 2 | null | 5617 | 19 | null | I think it's important to clearly separate the hypothesis and its corresponding test. For the following, I assume a balanced, between-subjects CRF-$pq$ design (equal cell sizes, Kirk's notation: Completely Randomized Factorial design).
$Y_{ijk}$ is observation $i$ in treatment $j$ of factor $A$ and treatment $k$ of factor $B$ with $1 \leq i \leq n$, $1 \leq j \leq p$ and $1 \leq k \leq q$. The model is $Y_{ijk} = \mu_{jk} + \epsilon_{i(jk)}, \quad \epsilon_{i(jk)} \sim N(0, \sigma_{\epsilon}^2)$
Design:
$\begin{array}{r|ccccc|l}
~ & B 1 & \ldots & B k & \ldots & B q & ~\\\hline
A 1 & \mu_{11} & \ldots & \mu_{1k} & \ldots & \mu_{1q} & \mu_{1.}\\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\
A j & \mu_{j1} & \ldots & \mu_{jk} & \ldots & \mu_{jq} & \mu_{j.}\\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\
A p & \mu_{p1} & \ldots & \mu_{pk} & \ldots & \mu_{pq} & \mu_{p.}\\\hline
~ & \mu_{.1} & \ldots & \mu_{.k} & \ldots & \mu_{.q} & \mu
\end{array}$
$\mu_{jk}$ is the expected value in cell $jk$, $\epsilon_{i(jk)}$ is the error associated with the measurement of person $i$ in that cell. The $()$ notation indicates that the indices $jk$ are fixed for any given person $i$ because that person is observed in only one condition. A few definitions for the effects:
$\mu_{j.} = \frac{1}{q} \sum_{k=1}^{q} \mu_{jk}$ (average expected value for treatment $j$ of factor $A$)
$\mu_{.k} = \frac{1}{p} \sum_{j=1}^{p} \mu_{jk}$ (average expected value for treatment $k$ of factor $B$)
$\alpha_{j} = \mu_{j.} - \mu$ (effect of treatment $j$ of factor $A$, $\sum_{j=1}^{p} \alpha_{j} = 0$)
$\beta_{k} = \mu_{.k} - \mu$ (effect of treatment $k$ of factor $B$, $\sum_{k=1}^{q} \beta_{k} = 0$)
$(\alpha \beta)_{jk} = \mu_{jk} - (\mu + \alpha_{j} + \beta_{k}) = \mu_{jk} - \mu_{j.} - \mu_{.k} + \mu$
(interaction effect for the combination of treatment $j$ of factor $A$ with treatment $k$ of factor $B$, $\sum_{j=1}^{p} (\alpha \beta)_{jk} = 0 \, \wedge \, \sum_{k=1}^{q} (\alpha \beta)_{jk} = 0)$
$\alpha_{j}^{(k)} = \mu_{jk} - \mu_{.k}$
(conditional main effect for treatment $j$ of factor $A$ within fixed treatment $k$ of factor $B$, $\sum_{j=1}^{p} \alpha_{j}^{(k)} = 0 \, \wedge \, \frac{1}{q} \sum_{k=1}^{q} \alpha_{j}^{(k)} = \alpha_{j} \quad \forall \, j, k)$
$\beta_{k}^{(j)} = \mu_{jk} - \mu_{j.}$
(conditional main effect for treatment $k$ of factor $B$ within fixed treatment $j$ of factor $A$, $\sum_{k=1}^{q} \beta_{k}^{(j)} = 0 \, \wedge \, \frac{1}{p} \sum_{j=1}^{p} \beta_{k}^{(j)} = \beta_{k} \quad \forall \, j, k)$
With these definitions, the model can also be written as:
$Y_{ijk} = \mu + \alpha_{j} + \beta_{k} + (\alpha \beta)_{jk} + \epsilon_{i(jk)}$
This allows us to express the null hypothesis of no interaction in several equivalent ways:
- $H_{0_{I}}: \sum_{j}\sum_{k} (\alpha \beta)^{2}_{jk} = 0$
(all individual interaction terms are $0$, such that $\mu_{jk} = \mu + \alpha_{j} + \beta_{k} \, \forall j, k$. This means that treatment effects of both factors - as defined above - are additive everywhere.)
- $H_{0_{I}}: \alpha_{j}^{(k)} - \alpha_{j}^{(k')} = 0 \quad \forall \, j \, \wedge \, \forall \, k, k' \quad (k \neq k')$
(all conditional main effects for any treatment $j$ of factor $A$ are the same, and therefore equal $\alpha_{j}$. This is essentially Dason's answer.)
- $H_{0_{I}}: \beta_{k}^{(j)} - \beta_{k}^{(j')} = 0 \quad \forall \, j, j' \, \wedge \, \forall \, k \quad (j \neq j')$
(all conditional main effects for any treatment $k$ of factor $B$ are the same, and therefore equal $\beta_{k}$.)
- $H_{0_{I}}$: In a diagramm which shows the expected values $\mu_{jk}$ with the levels of factor $A$ on the $x$-axis and the levels of factor $B$ drawn as separate lines, the $q$ different lines are parallel.
| null | CC BY-SA 2.5 | null | 2010-12-18T17:38:48.130 | 2010-12-20T09:24:11.373 | 2010-12-20T09:24:11.373 | 1909 | 1909 | null |
5623 | 2 | null | 5619 | 3 | null | With increasing sample size $n$, $r_{z} = \sqrt{n-1} r_{S}$ is asymptotically $N(0, 1)$ distributed (standard normal distribution). In R
```
rSz <- sqrt(n-1) * rS  # rS: observed Spearman correlation, n: sample size
(pVal <- 1-pnorm(rSz)) # one-sided p-value, test for positive rank correlation
```
| null | CC BY-SA 2.5 | null | 2010-12-18T17:55:17.180 | 2010-12-18T17:55:17.180 | null | null | 1909 | null |
5624 | 2 | null | 5604 | 2 | null | The answer you get depends on the question you ask. Dason asked what you are trying to do; I would also ask this question. You say you know some of your data is "tainted" but do you know that this taint applies to the high values? Many sorts of counts have long tails, with no bad data at all.
If you just want to summarize central tendency, you could give mean, median and various trimmed means (the median is just the 50% trimmed mean, after all). Or, perhaps better, you could supply a box plot or dot plot or density plot of the data. Perhaps you do want to take the log; that often makes sense for counts, where you are interested in multiplicative rather than additive differences (e.g. Mary viewed the site 10 times as often as Jim, who viewed it twice as much as Bob)
| null | CC BY-SA 2.5 | null | 2010-12-18T20:51:52.010 | 2010-12-18T20:51:52.010 | null | null | 686 | null |
5625 | 2 | null | 5597 | 8 | null | To get a good answer, you must write a good question. Answering a statistics question without context is like boxing blindfolded. You might knock your opponent out, or you might break your hand on the ring post.
What goes into a good question?
- Tell us the PROBLEM you are trying to solve. That is, the substantive problem, not the statistical aspects.
- Tell us what math and statistics you know. If you’ve had one course in Introductory Stat, then it won’t make sense for us to give you an answer full of mixed model theory and matrix algebra. On the other hand, if you’ve got several courses or lots of experience, then we can assume you know some basics.
- Tell us what data you have, where it came from, what is missing, how many variables, what are the Dependent Variables (DVs) and Independent Variables (IVs) – if any, and anything else we need to know about the data. Also tell us which (if any) statistical software you use.
- Are you thinking of hiring a consultant, or do you just want pointers in some direction?
- THEN, and ONLY THEN tell us what you’ve tried, why you aren’t happy, and so on.
| null | CC BY-SA 2.5 | null | 2010-12-18T20:55:08.027 | 2010-12-18T20:55:08.027 | null | null | 686 | null |
5626 | 2 | null | 452 | 7 | null | In Introductory Econometrics (Woolridge, 2009 edition page 268) this question is addressed. Woolridge says that when using robust standard errors, the t-statistics obtained only have distributions which are similar to the exact t-distributions if the sample size is large. If the sample size is small, the t-stats obtained using robust regression might have distributions that are not close to the t distribution and this could throw off inference.
| null | CC BY-SA 2.5 | null | 2010-12-19T00:59:28.370 | 2010-12-19T00:59:28.370 | null | null | null | null |
5627 | 2 | null | 4364 | 1 | null | Its characteristic function has the same form as its pdf. I am not sure of another distribution which does that.
| null | CC BY-SA 2.5 | null | 2010-12-19T02:56:46.690 | 2010-12-19T02:56:46.690 | null | null | null | null |
5628 | 2 | null | 5517 | 8 | null | Here's (at least most of) a solution with `MCMCglmm`.
First fit the equivalent intercept-variance-only model with `MCMCglmm`:
```
library(MCMCglmm)
primingHeid.MCMCglmm = MCMCglmm(fixed=RT ~ RTtoPrime * ResponseToPrime + Condition,
random=~Subject+Word, data = primingHeid)
```
Comparing fits between `MCMCglmm` and `lmer`, first retrieving my hacked version of `arm::coefplot`:
```
source(url("http://www.math.mcmaster.ca/bolker/R/misc/coefplot_new.R"))
## combine estimates of fixed effects and variance components
pp <- as.mcmc(with(primingHeid.MCMCglmm, cbind(Sol, VCV)))
## extract coefficient table
cc1 <- coeftab(primingHeid.MCMCglmm,ptype=c("fixef", "vcov"))
## strip fixed/vcov indicators to make names match with lmer output
rownames(cc1) <- gsub("(Sol|VCV).", "", rownames(cc1))
## fixed effects -- v. similar
coefplot(list(cc1[1:5,], primingHeid.lmer))
## variance components -- quite different. Worth further exploration?
coefplot(list(cc1[6:8,], coeftab(primingHeid.lmer, ptype="vcov")),
xlim=c(0,0.16), cex.pts=1.5)
```
Now try it with random slopes:
```
primingHeid.rs.MCMCglmm = MCMCglmm(fixed=RT ~ RTtoPrime * ResponseToPrime + Condition,
random=~Subject+Subject:Condition+Word,
data = primingHeid)
summary(primingHeid.rs.MCMCglmm)
```
This does give some sort of "MCMC p-values" ... you'll have to explore for yourself and see whether the whole thing makes sense ...
| null | CC BY-SA 3.0 | null | 2010-12-19T03:07:09.180 | 2013-08-24T15:05:00.227 | 2013-08-24T15:05:00.227 | 7290 | 2126 | null |
5629 | 1 | 5745 | null | 2 | 1283 | Is there a software package that supports, or could support, through operator overloading or extensions, code such as the following:
```
x = rand_arr(10) ; array 10 elements long
y = rand_arr(10)
z = x + y ; elemental addition (z[0]=x[0]+y[0];z[1]=x[1]+y[1],...)
print z
```
If the above, or something similar, is valid in this hypothetical language, does it also support, or could it support via operator overloading / custom classes / extensions / whatever, the following additional syntax:
```
x.err[*] = 0.1 % a fixed error
y.err = rand_arr(10) % a dynamic noisy error range
z = x + y
print z.err
```
Or whatever the implementation dictates, but the point being that the error is tracked for me through the `z = x+y` without work on my part.
I could imagine default support as per a standard (or customizable) function for `+-/*e^` and `log()` operators, and support for extensions or hooks to add support for additional functions (system, and user) such as `z = smooth(x,3)` and `z = my_func(x^2)`
This would be a nice feature for a language.
## Error Explanation
There is a discussion of [ERROR PROPAGATION](http://www.rit.edu/cos/uphysics/uncertainties/Uncertaintiespart2.html#propagation) here.
For the error tracking, I'm assuming that `x.err` is the error associated with each measurement, in this case, a constant value of `0.1`. `y.err` is similar, but each measurement is part of some noise. Given a standard definition of error propagation, such as the following:
```
When adding x + y with errors dx and dy, the solution has error dx + dy
```
Or in code
```
z = x + y => dz = dx + dy
```
I can add lines of code to track dz, but is there a language that supports, or could support, dz without me doing extra work?
| Implementing error propagation | CC BY-SA 2.5 | null | 2010-12-19T04:20:06.560 | 2016-09-07T11:45:57.263 | 2010-12-21T20:02:06.103 | null | 957 | [
"error-propagation"
]
|
5630 | 2 | null | 5629 | 4 | null | It's difficult to determine what you're asking for because you haven't specified the semantics of "x.err" etc., but it sounds like you might be interested in [interval arithmetic](http://en.wikipedia.org/wiki/Interval_arithmetic#Implementations). [Implementations](http://www.cs.utep.edu/interval-comp/intlang.html) are available in Lisp, Numerica, Maple, Matlab, and Mathematica. [Libraries](http://www.cs.utep.edu/interval-comp/intsoft.html) are available for ADA, Fortran, C++, etc.
### Edit
In light of comments (scattered between here and SO), it appears the OP is asking about implementing automatic [error propagation](http://www.rit.edu/cos/uphysics/uncertainties/Uncertaintiespart2.html#propagation). In principle that's no harder than implementing, say, a complex number class or interval arithmetic, which are straightforward exercises in any OO system. The data structure would be a tuple $(x, \epsilon)$ where $x$ represents a real number and $\epsilon \ge 0$ quantifies its "error". The usual numbers would be embedded into this structure via $x \to (x, 0)$. The semantics would include
$$\eqalign{
(x, \epsilon) + (y, \delta) = &(x+y, (\epsilon^2 + \delta^2)^{1/2}) \cr
(x, \epsilon) - (y, \delta) = &(x-y, (\epsilon^2 + \delta^2)^{1/2}) \cr
(x, \epsilon) \times (y, \delta) = &(x y,((y \epsilon)^2 + (x \delta)^2)^{1/2}) \cr
\text{etc.} \cr
f((x, \epsilon), (y, \delta)) = &(f(x, y), || (\frac{\partial f}{\partial x}\epsilon, \frac{\partial f}{\partial y}\delta)||). \cr
}$$
(The last generalizes everything preceding it and points the way to implementing powers, exponentials, logs, trig functions, etc.)
Such an implementation could truly be handy for those who understand deeply what's going on. For others--who might not be aware of or appreciate the importance of assumptions of independence, of relatively small errors, of differentiability of functions, etc.--it would be truly dangerous. As a simple example, when $(x, \epsilon)$ and $(y, \delta)$ are perfectly negatively correlated (implying $\epsilon = \delta$), the correct operation is $(x, \epsilon) + (y, \delta) = (x + y, 0)$.
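For concreteness, here is a minimal S3 sketch in R of such a structure, implementing only the sum, difference, product and quotient rules above under the independence assumption; the class and function names are made up for illustration:

```
## An "approximate number" is a value together with its standard error.
approx_num <- function(x, err = 0) structure(list(x = x, err = err), class = "approx_num")

Ops.approx_num <- function(e1, e2) {
  if (missing(e2)) stop("unary operators are not implemented in this sketch")
  ## coerce plain numerics to error-free approximate numbers
  if (!inherits(e1, "approx_num")) e1 <- approx_num(e1)
  if (!inherits(e2, "approx_num")) e2 <- approx_num(e2)
  value <- get(.Generic)(e1$x, e2$x)
  err <- switch(.Generic,
    "+" = , "-" = sqrt(e1$err^2 + e2$err^2),
    "*"       = sqrt((e2$x * e1$err)^2 + (e1$x * e2$err)^2),
    "/"       = sqrt((e1$err / e2$x)^2 + (e1$x * e2$err / e2$x^2)^2),
    stop("rule not implemented for ", .Generic))
  approx_num(value, err)
}

print.approx_num <- function(x, ...) cat(x$x, "+/-", x$err, "\n")

a <- approx_num(2, 0.1)
b <- approx_num(3, 0.2)
a + b   # 5 +/- sqrt(0.1^2 + 0.2^2)
a * b   # 6 +/- sqrt((3*0.1)^2 + (2*0.2)^2)
```

Anything beyond these four rules (powers, logs, correlated inputs, ...) would need the general delta-method formula given above.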
| null | CC BY-SA 2.5 | null | 2010-12-19T05:08:59.810 | 2010-12-21T19:22:49.363 | 2010-12-21T19:22:49.363 | 919 | 919 | null |
5631 | 2 | null | 5629 | 3 | null | I recommend you look into [R](http://www.r-project.org/), the open source software for statistical computing. R supports many vectorized functions to deal with element by element computation of objects. Chapter 3 of [The R Inferno](http://lib.stat.cmu.edu/S/Spoetry/Tutor/R_inferno.pdf) provides a good outline on vectorization in R. From your post above, I'm assuming you have some background in computer programming, and R's command line interface will probably feel right at home to you.
Here's your pseudo code above implemented in R:
```
set.seed(42) #Makes random numbers reproducible.
x <- sample(1:20, size = 10, replace = FALSE)
y <- sample(1:20, size = 10, replace = FALSE)
z <- x + y
z
> z
[1] 29 32 23 20 19 23 35 4 23 19
```
I'm not entirely sure what you are trying to do with the error coding, but R has the capability to support just about any mathematical or statistical computation you can dream up. If someone else hasn't already implemented a solution for your problem, you can leverage the fact R is a programming language and write your own code to solve it. It is relatively easy to interface with other languages such as C, C++, python from R so there are plenty of options there.
| null | CC BY-SA 2.5 | null | 2010-12-19T05:12:26.403 | 2010-12-19T05:12:26.403 | null | null | 696 | null |
5632 | 2 | null | 5620 | 8 | null | In mathematics and physics, the "x" in "x-vector" stands for the dimension of the vector. The meanings of $K$ and $p$ were previously established. Typically a "p-vector" is written as a column vector and a "p-covector" would be written as a row vector.
| null | CC BY-SA 2.5 | null | 2010-12-19T05:18:34.387 | 2010-12-19T05:18:34.387 | null | null | 919 | null |
5633 | 2 | null | 5614 | 3 | null | The answer depends on the nature of the data generation process and on the alternative hypothesis you have in mind.
Your test is a kind of unweighted chi-square. Because of this lack of weighting, changes that principally affect the less-populated categories will be difficult to detect. For example, your test is going to be much less powerful than the chi-square test for a uniform shift in location, which is detected primarily by noticing that almost all the probability in one tail gets shifted into the other tail.
For example, suppose your categories are integer ranges $[i, i+1)$ indexed by $i$ and you are observing normal variates of unit variance but unknown mean. 100 observations of a standard normal variate, say, will mainly occupy categories $-2$ through $1$, although you can expect a few to occupy categories $-3$ and $2$. Even for a whopping big shift of $5$ standard errors (i.e., a change in mean of $5/\sqrt{100} = 0.5$), the power of your K-D-like test is only about 50% (when $\alpha = 0.05$).
It is difficult to conceive of a setting where this test will be more powerful than the chi-square test. If you think you are in such a situation, perform some simulations to find out what the power is and how it compares to the standard alternative tests.
| null | CC BY-SA 2.5 | null | 2010-12-19T06:15:26.400 | 2010-12-19T06:48:42.843 | 2010-12-19T06:48:42.843 | 919 | 919 | null |
5634 | 1 | 5635 | null | 3 | 31346 | So I have data like:
```
Cost 20 30 10 5
Rating 5 3 2 5
```
I want to make a chart of rating vs. cost, so the points would be
```
[(5,20), (3,30), (2,10), (5,5)]
```
I can't seem to get excel to do anything other than put the two rows as independent series. Am I missing something, or I do have to pivot the data somehow to make it do that?
(Actually, I'm using an old-ish version of [Numbers.app](http://www.apple.com/iwork/numbers/) on OS X, but I'm hoping the concept will be the same. I have access to excel if need be.)
| In Excel, how do I plot two rows against each other? | CC BY-SA 2.5 | null | 2010-12-19T15:53:17.357 | 2014-07-29T14:28:00.070 | 2010-12-19T18:14:33.343 | 930 | 1531 | [
"data-visualization",
"excel"
]
|
5635 | 2 | null | 5634 | 5 | null | Select the two rows and do a scatterplot which I think is called an XY plot in Excel (sorry, I run a Linux machine, so I do not have Excel installed).
| null | CC BY-SA 2.5 | null | 2010-12-19T15:56:51.113 | 2010-12-19T15:56:51.113 | null | null | 582 | null |
5636 | 2 | null | 3497 | 4 | null | The answer to your original question is yes, because the classical theory applies under your sampling scheme. You don’t need any assumptions on the original data matrix. All of the randomness (implicitly behind standard errors and consistency) comes from your scheme for sampling $N$ rows from the data matrix.
Think of your entire dataset (100M rows) as being the population. Each estimate (assuming your sample of size $N$ is a simple random sample of the rows) is a consistent estimate of the regression coefficients (say, $\hat{\beta}_*$) computed from the entire data set. Moreover, it is approximately Normal with mean equal to $\hat{\beta}_*$ and some covariance. The usual estimate of the covariance of the estimate is also consistent. If you repeat this $M$ times and average those $M$ estimates, then the resulting estimate (say, $\hat{\beta}_{avg}$) will also be approximately Normal. You can treat those $M$ estimates as being nearly independent (uncorrelated) as long as $N$ and $M$ are small relative to 100M. That’s an important assumption. The idea being that sampling without replacement is approximately the same as sampling with replacement when the sample size is small compared to the population size.
That being said, I think that your problem really is one of how to efficiently approximate the regression estimate ($\hat{\beta}_*$) computed from the entire data set. There is a difference between (1) averaging $M$ estimates based on samples of size $N$ and (2) one estimate based on a sample of size $MN$. The MSE of (2) will generally be smaller than the MSE of (1). They would only be equal if the estimate was linear in the data, but that is not the case. I assume you are using least squares. The least squares estimate is linear in the $Y$ (response) vector, but not the $X$ (covariates) matrix. You are randomly sampling $Y$ and $X$.
(1) and (2) are both simple schemes, but not necessarily efficient. (Though it may not matter since you only have 30 variables.) There are better ways. Here is one example: [http://arxiv.org/abs/0710.1435](http://arxiv.org/abs/0710.1435)
| null | CC BY-SA 2.5 | null | 2010-12-19T18:04:59.073 | 2010-12-19T21:49:54.970 | 2010-12-19T21:49:54.970 | 1670 | 1670 | null |
5637 | 2 | null | 5418 | 2 | null | Another piece of advice might be to look at packages yours will depend on or interact with, especially if these implement some of the [items](https://stats.stackexchange.com/questions/5418/first-r-packages-source-code-to-study-in-preparation-for-writing-own-package/5433#5433) [Joshua Ulrich](https://stats.stackexchange.com/users/1657/joshua-ulrich) mentioned or have been written by renowned authors. It might be helpful to learn how things are done in your field, to ensure some compatibility. Often people will have thought about certain issues, and reading their solution might be helpful.
| null | CC BY-SA 2.5 | null | 2010-12-19T18:29:21.010 | 2010-12-19T18:29:21.010 | 2017-04-13T12:44:31.577 | -1 | 1355 | null |
5640 | 2 | null | 5603 | 3 | null | If this is for a particular application, I would first consider the question "do you need to fit such a complex model?", which is code for: what other, simpler methods have you tried prior to this one? Have you looked at a plot of the "births & deaths" data that you have?
I would advise that you look up some of the work on "state space" theory, and kalman filtering as a starter. It sounds like you basically have an autoregressive process that isn't a simple random walk. These can usually be dealt with via the Kalman filter (as long as you are willing to assume normality of the errors).
A simple way (I think) is to consider a "regression" of the births/deaths against the current state. You could just use OLS as a simple start (to figure out what's going on), but this ignores that the errors from the regression are correlated (and not independent as in OLS). This will have the effect of OLS standard errors being either too small or too big, depending on the direction of the correlation (the OLS estimates are still "unbiased")
| null | CC BY-SA 2.5 | null | 2010-12-20T03:39:18.017 | 2010-12-20T03:39:18.017 | null | null | 2392 | null |
5641 | 2 | null | 5090 | 7 | null | Here's the solution I came up with: The trick is to add NAs to the end of the observation data. When seeing NA as a response variable the Kalman filter algorithm will simply predict the next value and not update the state vector. This is exactly what we want to make our forecast.
```
library(dlm)                      # provides dlmModSeas, dlmModReg, dlmFilter
nAhead <- 12
## supply regressors for the forecast horizon too, and append NAs to the response
mod <- dlmModSeas(4) + dlmModReg(cbind(rnorm(100+nAhead), rnorm(100+nAhead)))
fi <- dlmFilter(c(rnorm(100), rep(NA, nAhead)), mod)
```
Is this correct?
| null | CC BY-SA 2.5 | null | 2010-12-20T03:50:36.193 | 2010-12-20T03:50:36.193 | null | null | 2451 | null |
5642 | 2 | null | 2182 | 6 | null | It’s easier to explain in terms of standard deviations, rather than confidence intervals.
Your friend’s conclusion is basically correct under the simplest model where you have simple random sampling and two candidates. Now the sample proportions satisfy $p_A + p_B = 1$ so that $p_B = 1 - p_A$. Thus,
$$Var(p_A - p_B) = Var(2 p_A - 1) = 4 Var(p_A)$$
and so
$$SD(p_A - p_B) = 2 SD(p_A).$$
What makes this simple relationship possible is that $p_A$ and $p_B$ are perfectly negatively correlated, because in general
$$Var(p_A - p_B) = Var(p_A) + Var(p_B) - 2 Cov(p_A, p_B).$$
Outside this simple model, if $p_A + p_B = 1$ does not hold in general, then you must take into account the correlation between $p_A$ and $p_B$ that is not included in the margin of error. It is possible for $SD(p_A - p_B) \ll 2 SD(p_A)$.
But all this nuance seems to indicate that the polling organizations should report the margin of error on the difference. Where’s Nate Silver?
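A quick numerical illustration of the two-candidate case, with made-up numbers:

```
n  <- 1000
pA <- 0.52                              # sample proportion for candidate A; pB = 1 - pA
se_pA   <- sqrt(pA * (1 - pA) / n)      # SE of a single proportion
se_diff <- 2 * se_pA                    # SE of pA - pB when pB = 1 - pA
c(moe_A = 1.96 * se_pA, moe_diff = 1.96 * se_diff)
```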
| null | CC BY-SA 2.5 | null | 2010-12-20T05:45:35.210 | 2010-12-22T06:32:25.093 | 2010-12-22T06:32:25.093 | 1670 | 1670 | null |
5644 | 2 | null | 5382 | 0 | null | I worked on it and thought this Bayesian equation will be useful.
RatingTM = SRTM/(1 + AWD/WDTM) + MSRT/(1 + WDTM/AWD)
The variables are:
SR = Team member's self rating
AWD = Average work done by team
WD = Work done by team member
MSRT = Mean self rating of team
(TM: TeamMember)
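A literal transcription of that formula into R might look like this (just restating the equation above for one team member):

```
rating <- function(SR, WD, AWD, MSRT) {
  SR / (1 + AWD / WD) + MSRT / (1 + WD / AWD)
}
```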
Please comment if you think this is not right. Thanks!
| null | CC BY-SA 2.5 | null | 2010-12-20T07:25:21.537 | 2010-12-20T07:25:21.537 | null | null | 2344 | null |
5645 | 2 | null | 2982 | 6 | null | $L_1$ penalization is part of an optimization problem. Soft-thresholding is part of an algorithm. Sometimes $L_1$ penalization leads to soft-thresholding.
For regression, $L_1$ penalized least squares (Lasso) results in soft-thresholding when the columns of the $X$ matrix are orthogonal (assuming the rows correspond to different samples). It is really straight-forward to derive when you consider the special case of mean estimation, where the $X$ matrix consists of a single $1$ in each row and zeroes everywhere else.
For the general $X$ matrix, computing the Lasso solution via cyclic coordinate descent results in essentially iterative soft-thresholding. See [http://projecteuclid.org/euclid.aoas/1196438020](http://projecteuclid.org/euclid.aoas/1196438020) .
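For reference, the soft-thresholding operator itself is a one-liner; under an orthonormal design it maps the least-squares coefficients to the Lasso solution for penalty $\lambda$:

```
soft_threshold <- function(beta_ols, lambda) {
  sign(beta_ols) * pmax(abs(beta_ols) - lambda, 0)
}
soft_threshold(c(-2, -0.3, 0.1, 1.5), lambda = 0.5)
# [1] -1.5  0.0  0.0  1.0
```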
| null | CC BY-SA 2.5 | null | 2010-12-20T07:31:41.610 | 2010-12-21T01:03:16.813 | 2010-12-21T01:03:16.813 | 1670 | 1670 | null |
5646 | 2 | null | 2854 | 3 | null | First I would like to point out that to get the object `ks3` you do not need that much book-keeping code. Use the features of `ddply`:
```
ks3 <- ddply(o,.(ai,Gs),function(d){
  temp <- ols(value ~ as.numeric(bc) + as.numeric(age), data=d, x=T, y=T)
  t2 <- bootcov(temp, B=1000, coef.reps=T)
  data.frame(t2$boot.Coef, ai=d$ai[1], gc2=d$Gs[1])
})
```
Also `scales=free` works much better if you want to see what kind of graph you intend to provide.
Concerning your first question, if you estimate the standard errors of coefficient using bootstrap, you are absolutely correct in using estimated percentiles. There is no need to use the percentiles of bivariate normal, since then you assume normality and bootstrap is used exactly for the purpose of not attaching oneself to some theoretical distribution.
For your second question, first you need to define what you mean by bias-corrected percentiles. Furthermore, your question implies that `bootcov` returns non-bias-corrected percentiles, but it does not return any percentiles at all.
Update
If we know that the coefficients are normal, we can use confidence ellipses. The mean and covariance matrix are estimated using the bootstrap. The following code checks which observations of the bivariate sample fall into the 0.95 confidence ellipse region:
```
confeps <- function(x, level=0.95, t=qchisq(level, 2)) {
  cv <- cov(x)                  # bootstrap covariance of the coefficients
  ch <- t(chol(cv))             # lower-triangular Cholesky factor of that covariance
  x <- sweep(x, 2, apply(x, 2, mean), "-")   # centre the bootstrap replicates
  y <- solve(ch, t(x))
  tt <- y[1, ]^2 + y[2, ]^2     # squared Mahalanobis distances
  tt < t                        # TRUE if inside the confidence ellipse
}
```
Here is the example how this function works:
```
bs <- cbind(a <- rnorm(1000), 2-a/3+rnorm(1000)/5)
mn <- apply(bs, 2, mean)
require(ellipse)
eps <- ellipse(cov(bs), centre=mn, npoints=1000)
plot(eps, type="l", xlim=range(bs[,1]), ylim=range(bs[, 2]))
col <- rep(1, nrow(bs))
col[!confeps(bs)] <- 2
points(bs[, 1], bs[, 2], col=col)
```
| null | CC BY-SA 2.5 | null | 2010-12-20T07:59:32.343 | 2010-12-24T11:45:34.523 | 2010-12-24T11:45:34.523 | 2116 | 2116 | null |
5647 | 1 | null | null | 2 | 2467 | I am fitting a conditional logistic regression model with 1:4 controls using `R`. I wish to obtain `AIC` from the model. How can I extract the appropriate parameters based on the object `m`?
```
library(survival)
m<-clogit(cc~exp+ factor1+ factor2 + strata(stratum),data=data1)
```
| How to obtain AIC with conditional logistic regression using R? | CC BY-SA 3.0 | null | 2010-12-20T12:30:03.800 | 2016-03-19T09:50:14.343 | 2013-09-03T11:43:32.840 | 22047 | null | [
"r",
"logistic",
"clogit"
]
|
5648 | 2 | null | 5647 | 3 | null | It seems you must do it manually, so something like this:
```
## AIC = 2k - 2*logLik; m$loglik[2] is the log-likelihood at the fitted coefficients
2*length(m$coefficients) - 2*m$loglik[2]
```
| null | CC BY-SA 2.5 | null | 2010-12-20T12:40:11.920 | 2010-12-20T12:40:11.920 | null | null | null | null |
5649 | 2 | null | 5602 | 2 | null | Since your metadata is probably discrete, I suggest using [classification trees](http://en.wikipedia.org/wiki/Classification_tree). Note that from your example it is highly likely that your unknown algorithm is random, i.e. there is no algorithm.
| null | CC BY-SA 2.5 | null | 2010-12-20T14:10:31.630 | 2010-12-20T14:10:31.630 | null | null | 2116 | null |
5650 | 1 | null | null | 9 | 976 | I am currently working on some time series data; I know I can use a LOESS/ARIMA model.
The data are written to a vector of length 1000, which acts as a queue updated every 15 minutes: the oldest value is popped off while the newest value is pushed in.
I can rerun the whole model on a scheduler, e.g. retrain the model every 15 minutes using the whole 1000-value vector to train the LOESS model. However, this is very inefficient, since each time only one value is inserted while the other 999 values are the same as last time.
So how can I achieve better performance?
Many thanks
| Incremental learning for LOESS time series model | CC BY-SA 2.5 | null | 2010-12-20T14:49:58.113 | 2013-05-07T05:59:47.533 | 2013-03-31T20:37:54.133 | 919 | 2454 | [
"time-series",
"model-evaluation"
]
|
5651 | 2 | null | 5602 | 1 | null | This isn't a clustering, it's (supervised) classification.
There are a lot of methods to do classification, such as naïve bayes for binary features and linear discriminants for continuous, or neural networks for some opaque combination of both.
For the example you mention, the feasibility of what you suggest depends a lot on the metadata - for example, if it's been tagged (=classified) properly, you could expect to be able to find some tags they like/dislike. However, the problem you stated is more general. Discovering the exact algorithm is unlikely, and in the example one doesn't even exist.
| null | CC BY-SA 2.5 | null | 2010-12-20T16:55:46.273 | 2010-12-20T16:55:46.273 | null | null | 2456 | null |
5652 | 2 | null | 5601 | 8 | null | This is not a bug. The model is stored using a 0-based index. So SplitVar=0 is X1, SplitVar=1 is X2, and SplitVar=2 is X3. So this split corresponds to a split on X3. Since X3 is an ordinal factor and the split is at 1.5, this corresponds to splitting levels 0&1 from 2&3.
```
> sum(data$X3<="c")
[1] 522
> sum(data$X3>="b")
[1] 478
```
| null | CC BY-SA 2.5 | null | 2010-12-20T18:12:43.150 | 2010-12-20T18:12:43.150 | null | null | null | null |
5653 | 2 | null | 5111 | 2 | null | Under one interpretation of your situation there is no need to modify the p values at all.
For example, let's posit that a sequence of (unknown) bivariate distributions $p_i(x,y)$ govern $A$ and $B$ for each organism $i$. That is, $\Pr(A=x, B=y) = p_i(x,y)$ for all possible outcomes $(x,y)$ of $(A,B)$. To test whether the measurement procedures $A$ and $B$ differ, a reasonable null hypothesis is that these distributions are all symmetric:
$$H_0: p_i(x,y) = p_i(y,x) \text{ for all } i, x, y.$$
The sign statistic (difference between number of $+$ and number of $-$ results) is still a reasonable one to use in this test. (It actually tests the null hypothesis $H_0: \Pr(A<B) = \Pr(B<A)$.) Its distribution depends on the chances of ties; namely on the values $t_i = \sum_{x}p_i(x,x)$ (one for each organism $i$). The question, which appears not to contemplate the possibility of ties at all, suggests their chances are fairly small. In any case, the symmetry assumption in the null implies the chance of organism $i$ yielding a $+$ sign equals the chance of organism $i$ yielding a $-$ sign and the assumption that ties are unlikely implies both these chances are close to $1/2$. This implies the distribution of the sign statistic is binomial, as usual, despite any correlation (or lack thereof) between $A$ and $B$.
If there is a substantial chance of ties, it looks like you cannot make any progress towards quantitative bounds until you specify something about those chances. For example, if you provide an upper bound for the $t_i$ you can say something about the distribution of the sign statistic.
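To make that concrete, with hypothetical counts of signs (ties dropped), the resulting test is just a binomial test:

```
n_plus  <- 18   # organisms with A > B (hypothetical)
n_minus <- 9    # organisms with A < B (hypothetical)
binom.test(n_plus, n_plus + n_minus, p = 0.5)
```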
| null | CC BY-SA 2.5 | null | 2010-12-20T19:26:32.303 | 2010-12-20T19:26:32.303 | null | null | 919 | null |
5654 | 1 | null | null | 9 | 1855 | I'm trying to help a scientist design a study for the occurrence of salmonella microbes. He would like to compare an experimental antimicrobial formulation against chlorine (bleach) at poultry farms. Because background rates of salmonella differ over time, he plans to measure % poultry w/salmonella before treatment, and after treatment. So the measurement will be the difference of before/after % salmonella for the experimental vs. chlorine formulas.
Can anyone advise on how to estimate the sample sizes necessary? Let's say the background rate is 50%; after bleach it's 20%; and we want to detect whether the experimental formulation changes the rate by +/- 10%. thank you
EDIT:
What I'm struggling with is how to incorporate the background rates. Let's call them p3 and p4, the "before" salmonella rates for bleach and experimental samples, respectively. So the statistic to be estimated is the difference of differences: Experimental(After-Before) - Bleach(After-Before) = (p0-p2) - (p3-p1). To fully account for the sampling variation of "before" rates p2 and p3 in the sample-size calculation --- is it as simple as using p0(1-p0)+p1(1-p1)+p2(1-p2)+p3(1-p3) wherever there's a variation term in the sample-size equation? Let all samples sizes be equal, n1 = n2 = n.
| Sample size for proportions in repeated measures | CC BY-SA 2.5 | null | 2010-12-20T22:24:03.600 | 2010-12-22T11:04:25.630 | 2010-12-22T11:04:25.630 | null | 2473 | [
"sample-size",
"repeated-measures",
"proportion"
]
|
5655 | 2 | null | 1875 | 0 | null | How about just binning the given predictions and taking the observed fractions as your estimate for each bin?
You can generalise this to a continuous model by weighting all the observations around your value of interest (say the prediction for tomorrow) with a Gaussian and seeing what the weighted average is.
You can guess a width to get you a given fraction of your data (or, say, never less than 100 points for a good estimate). Alternatively, use a method such as cross-validation or maximum likelihood to get the Gaussian width.
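A bare-bones version of that Gaussian weighting (the names and the bandwidth are made up):

```
## pred: past forecast probabilities, obs: the corresponding 0/1 outcomes
kernel_calibration <- function(p0, pred, obs, h = 0.05) {
  w <- dnorm(pred, mean = p0, sd = h)   # Gaussian weights centred on the prediction of interest
  sum(w * obs) / sum(w)                 # weighted average of observed outcomes
}
```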
| null | CC BY-SA 2.5 | null | 2010-12-21T00:33:47.953 | 2010-12-21T00:33:47.953 | null | null | 2067 | null |
5656 | 1 | 5662 | null | 10 | 11800 | Wondering if anyone has run across a package/function in R that will combine levels of a factor whose proportion of all the levels in a factor is less than some threshold? Specifically, one of the first steps in data preparation I conduct is to collapse sparse levels of factors together (say into a level called 'Other') that do not constitute at least, say, 2% of the total. This is done unsupervised and is done when the objective is to model some activity in marketing (not fraud detection, where those very small occurrences could be extremely important). I am looking for a function that will collapse levels until some threshold proportion is met.
UPDATE:
Thanks to these great suggestions I wrote a function pretty easily. I did realize though that it was possible to collapse levels with proportion < the minimum and still have that recoded level be < the minimum, requiring the addition of the lowest level with proportion > the minimum. Likely can be more efficient but it appears to work. The next enhancement would be to figure out how to capture the "rules" for applying the collapse logic to new data (a validation set or future data).
```
collapseFactors <- function(tableName, minPercent=5, fillIn="RECODED")
{
  for (i in 1:ncol(tableName))
  {
    if (is.factor(tableName[,i]) == TRUE)  # process just factors
    {
      sortedTable <- sort(prop.table(table(tableName[,i])))
      numberToCollapse <- length(sortedTable[sortedTable < (minPercent/100)])
      if (sum(sortedTable[1:numberToCollapse]) < (minPercent/100))
      {
        numberToCollapse = numberToCollapse + 1  # add next level if < minPercent
      }
      if (numberToCollapse > 1)  # if not >1 then nothing to collapse
      {
        lf <- names(sortedTable[1:numberToCollapse])
        levels(tableName[,i])[levels(tableName[,i]) %in% lf] <- fillIn
      }
    }  # end if a factor
  }  # end for loop
  return(tableName)
}  # end function
```
| R package for combining factor levels for datamining? | CC BY-SA 3.0 | null | 2010-12-21T01:35:25.853 | 2017-05-16T23:32:03.043 | 2017-05-16T23:32:03.043 | 11887 | 2040 | [
"r",
"many-categories"
]
|
5657 | 2 | null | 5656 | 5 | null | I wrote a quick function that will accomplish this goal. I'm a novice R user, so it may be slow with large tables.
```
Merge.factors <- function(x, p) {
#Combines factor levels in x that are less than a specified proportion, p.
t <- table(x)
y <- subset(t, prop.table(t) < p)
z <- subset(t, prop.table(t) >= p)
other <- rep("Other", sum(y))
new.table <- c(z, table(other))
new.x <- as.factor(rep(names(new.table), new.table))
return(new.x)
}
```
As an example of it in action:
```
> a <- rep("a", 100)
> b <- rep("b", 1000)
> c <- rep("c", 1000)
> d <- rep("d", 1000)
> e <- rep("e", 400)
> f <- rep("f", 100)
> x <- factor(c(a, b, c, d, e, f))
> summary(x)
a b c d e f
100 1000 1000 1000 400 100
> prop.table(table(x))
x
a b c d e f
0.02777778 0.27777778 0.27777778 0.27777778 0.11111111 0.02777778
>
> w <- Merge.factors(x, .05)
> summary(w)
b c d e Other
1000 1000 1000 400 200
> class(w)
[1] "factor"
```
| null | CC BY-SA 2.5 | null | 2010-12-21T02:33:01.607 | 2010-12-21T06:58:11.473 | 2010-12-21T06:58:11.473 | 1118 | 1118 | null |
5658 | 2 | null | 5656 | 5 | null | The only problem with Christopher answer is that it will mix up the original ordering of the factor. Here is my fix:
```
Merge.factors <- function(x, p) {
  t <- table(x)
  levt <- cbind(names(t), names(t))
  levt[t/sum(t) < p, 2] <- "Other"
  change.levels(x, levt)
}
```
where `change.levels` is the following function. I wrote it some time ago, so I suspect there might be better ways of achieving what it does.
```
change.levels <- function(f, levt) {
  ## Change the names of the factor f levels using the
  ## substitution table levt.
  ## In the first column there are the original levels, in
  ## the second column -- the substitutes
  lv <- levels(f)
  if(sum(sort(lv) != sort(levt[, 1])) > 0)
    stop("The names from the substitution table do not match the given level names")
  res <- rep(NA, length(f))
  for(i in lv) {
    res[f==i] <- as.character(levt[levt[, 1]==i, 2])
  }
  factor(res)
}
```
| null | CC BY-SA 2.5 | null | 2010-12-21T04:51:40.117 | 2010-12-21T04:51:40.117 | null | null | 2116 | null |
5659 | 2 | null | 5399 | 0 | null | Suppose $(X,Y)$ is bivariate normal with zero means and correlation $\rho$. Then
$\mathrm{E}[XY] = \mathrm{cov}(X,Y) = \rho\sigma_X\sigma_Y$.
All of the entries in the matrix $x_1x_2^T$ are of the form $XY$.
| null | CC BY-SA 2.5 | null | 2010-12-21T05:30:02.967 | 2010-12-21T05:30:02.967 | null | null | 1112 | null |
5660 | 2 | null | 5602 | 0 | null | You can use SOM, which is a kind of supervised clustering. Or, as sesqu said, there are tons of other algorithms such as support vector machines, regression or logistic regression, neural networks, etc.
| null | CC BY-SA 2.5 | null | 2010-12-21T08:36:12.760 | 2010-12-21T08:36:12.760 | null | null | 1808 | null |
5661 | 2 | null | 1875 | 2 | null | The [Brier Score](http://docs.lib.noaa.gov/rescue/mwr/078/mwr-078-01-0001.pdf) approach is very simple and the most directly applicable way to verify the accuracy of a predicted outcome against a binary event.
Don't rely on just formulas ... plot the scores for different periods of time, data, errors, [weighted] rolling averages of data, errors ... it's tough to say what visual analysis might reveal ... only after you look at the data and think you see something will you know better what kind of hypothesis test to perform.
The Brier Score inherently assumes stability of the variation/underlying distributions of the weather and of the technology driving the forecasting models, lack of linearity, no bias, lack of change in bias ... it assumes that the same general level of accuracy/inaccuracy is consistent. As climate changes in ways that are not yet understood, the accuracy of weather predictions would decrease; conversely, the scientists feeding information to the weatherman have more resources, more complete models, more computing power, so perhaps the accuracy of the predictions would increase. Looking at the errors would tell something about stability, linearity and bias of the forecasts ... you may not have enough data to see trends; you may learn that stability, linearity and bias are not an issue. You may learn that weather forecasts are getting more accurate ... or not.
| null | CC BY-SA 2.5 | null | 2010-12-21T08:54:45.653 | 2010-12-21T22:21:01.123 | 2010-12-21T22:21:01.123 | 2342 | 2342 | null |
5662 | 2 | null | 5656 | 11 | null | It seems it's just a matter of "releveling" the factor; no need to compute partial sums or make a copy of the original vector. E.g.,
```
set.seed(101)
a <- factor(LETTERS[sample(5, 150, replace=TRUE,
prob=c(.1, .15, rep(.75/3,3)))])
p <- 1/5
lf <- names(which(prop.table(table(a)) < p))
levels(a)[levels(a) %in% lf] <- "Other"
```
Here, the original factor levels are distributed as follows:
```
A B C D E
18 23 35 36 38
```
and then it becomes
```
Other C D E
41 35 36 38
```
It may be conveniently wrapped into a function. There is a `combine_factor()` function in the [reshape](http://cran.r-project.org/web/packages/reshape/index.html) package, so I guess it could be useful too.
Also, as you seem interested in data mining, you might have a look at the [caret](http://caret.r-forge.r-project.org/Classification_and_Regression_Training.html) package. It has a lot of useful features for data preprocessing, including functions like `nearZeroVar()` that allows to flag predictors with very imbalanced distribution of observed values (See the vignette, [example data, pre-processing functions, visualizations and other functions](http://cran.r-project.org/web/packages/caret/vignettes/caretMisc.pdf), p. 5, for example of use).
| null | CC BY-SA 2.5 | null | 2010-12-21T10:16:38.273 | 2010-12-21T10:16:38.273 | null | null | 930 | null |
5663 | 2 | null | 5115 | 23 | null | [Florence Nightingale](http://en.wikipedia.org/wiki/Florence_Nightingale) for being "a true pioneer in the graphical representation of statistics" and developing the polar area diagram. Yes, that Florence Nightingale!
| null | CC BY-SA 3.0 | null | 2010-12-21T11:49:21.513 | 2011-12-14T06:47:23.300 | 2011-12-14T06:47:23.300 | 183 | null | null |
5664 | 1 | 5665 | null | 6 | 409 | I'm a complete newbie to statistics (although I find it really interesting!), and I have taken the task of distributing feedback to speakers of a conference I'm co-organizing. Each speaker was given a grade on the scale 1-5 from the participants, and we combine feedback from all participants into a mean score, for instance 3.56. We can then order the speakers by mean score.
In addition to giving the speakers their mean score, we also want to give them a clue about how they did compared to the other speakers. To avoid discouraging the speakers who did the worst (we want people to try again!), we came up with giving back a more fuzzy metric. We want to divide the speakers into four groups: 10% best/20%/30%/40% worst. Does there exist a name for this?
EDIT:
I'll try an example. If I have ten talks with scores of { 1.2, 1.3, 2.1, 2.4, 2.7, 3.0, 3.2, 4.1, 4.2, 4.5}, I would divide them up like this:
10% best: 4.5
20% "next best": 4.2, 4.1
30% "next worst": 3.2, 3.0, 2.7
40% worst: 2.4, 2.1, 1.3, 1.2
What I want to know is if there is a name for these ranges.
| Is there a name for 10% best individual grades? | CC BY-SA 3.0 | null | 2010-12-21T13:18:15.667 | 2015-12-19T16:39:46.913 | 2015-12-19T16:39:46.913 | 28666 | 2470 | [
"terminology",
"quantiles"
]
|
5665 | 2 | null | 5664 | 9 | null | If I understand you correctly, you may refer to [Percentiles](http://en.wikipedia.org/wiki/Percentile), perhaps espacially Quartiles.
Perhaps you can elaborate a little more on which percentages should be enclosed in each bin, to get a more accurate answer.
UPDATE: Based on the comments below decile seems to be the term you want. For your data these can easily achieved via `R` (note the differences from your example):
```
> x = c(1.2, 1.3, 2.1, 2.4, 2.7, 3.0, 3.2, 4.1, 4.2, 4.5)
> quantile(x,c(0.9,0.8,0.7,0.6))
90% 80% 70% 60%
4.23 4.12 3.47 3.08
> x[x > quantile(x,0.9)]
[1] 4.5
> x[x > quantile(x,0.8) & x < quantile(x,0.9)]
[1] 4.2
> x[x > quantile(x,0.7) & x < quantile(x,0.8)]
[1] 4.1
> x[x < quantile(x,0.7)]
[1] 1.2 1.3 2.1 2.4 2.7 3.0 3.2
```
You can then tell the speakers they are in the 10th decile, 9th decile, 8th decile, or 7th-or-worse decile (or something like "above the 90th percentile", ...). But from my point of view, the problem will always be how to name the catch-all (i.e. worst) category.
| null | CC BY-SA 2.5 | null | 2010-12-21T13:37:09.843 | 2010-12-21T14:34:48.360 | 2010-12-21T14:34:48.360 | 442 | 442 | null |
5666 | 2 | null | 5654 | 2 | null | Let's take a stab at a first-order approximation assuming simple random sampling and a constant proportion of infection for any treatment. Assume the sample size is large enough that a normal approximation can be used in a hypothesis test on proportions so we can calculate a z statistic like so
$z = \frac{p_t - p_0}{\sqrt{p_0(1-p_0)(\frac{1}{n_1}+\frac{1}{n_2})}}$
This is the sample statistic for a two-sample test, new formula vs. bleach, since we expect the effect of bleach to be random as well as the effect of the new formula.
Then let $n = n_1 = n_2$, since balanced experiments have the greatest power, and use your specifications that $|p_t - p_0| \geq 0.1$, $p_0 = 0.2$. To attain a test statistic $|z| \geq 2$ (Type I error of about 5%), this works out to $n \approx 128$. This is a reasonable sample size for the normal approximation to work, but it's definitely a lower bound.
I'd recommend doing a similar calculation based on the desired power for the test to control Type II error, since an underpowered design has a high probability of missing an actual effect.
Once you've done all this basic spadework, start looking at the stuff whuber addresses. In particular, it's not clear from your problem statement whether the samples of poultry measured are different groups of subjects, or the same groups of subjects. If they're the same, you're into paired t test or repeated measures territory, and you need someone smarter than me to help out!
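For what it's worth, the arithmetic above takes a couple of lines, and base R's `power.prop.test()` gives the power-based version mentioned earlier:

```
p0 <- 0.2; delta <- 0.1; z <- 2
(n <- 2 * z^2 * p0 * (1 - p0) / delta^2)     # ~128 per group, matching the calculation above
power.prop.test(p1 = 0.2, p2 = 0.3, sig.level = 0.05, power = 0.8)
```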
| null | CC BY-SA 2.5 | null | 2010-12-21T14:10:44.480 | 2010-12-21T14:10:44.480 | null | null | 5792 | null |
5667 | 2 | null | 726 | 20 | null | >
The primary product of a research
inquiry is one or more measures of
effect size, not p values.
Cohen, J. (1990). [Things I have learned (so far)](http://www.cps.nova.edu/marker/whatIhavelearnedsofarcohen.pdf). American Psychologist, 45, 1304-1312.
| null | CC BY-SA 2.5 | null | 2010-12-21T15:48:06.727 | 2010-12-21T15:48:06.727 | null | null | 930 | null |
5668 | 2 | null | 5664 | 3 | null | You have the answer you asked for, but along with how to communicate this information you might also want to think about how to assess the reliability & precision of the scores. If the evaluators aren't really using the same standards, the scores will furnish a misleading measure of the quality of the speakers no matter how you decide to categorize them. Also, even if the evaluators are reliable in this sense, your rankings should be sensitive to the standard error in their measurements (likely to be large if you have only a modest number of evaluators): you don't want to imply that there are meaningful differences among speakers whose scores differ by amounts that are comparable to the level of background noise in your data. If you aren't in a position to furnish genuinely informative quantitative feedback, you are better off, in my view, picking one or two evaluators you trust to give the speakers qualitative feedback informed by the evaluators' own observations & by their assessments of whatever evidence they have on the reactions of others.
| null | CC BY-SA 2.5 | null | 2010-12-21T16:02:19.503 | 2010-12-21T18:29:28.220 | 2010-12-21T18:29:28.220 | 11954 | 11954 | null |
5669 | 2 | null | 859 | 4 | null | Look into models with spatially correlated errors (and spatially correlated covariates). A brief introduction, with references to [GeoDa](http://geodacenter.asu.edu/), is available [here](http://www.s4.brown.edu/s4/courses/SO261-John/lab9.pdf). There are plenty of texts; good ones are by [Noel Cressie](http://rads.stackoverflow.com/amzn/click/0471002550), [Robert Haining](http://rads.stackoverflow.com/amzn/click/0521774373), and [Fotheringham et al](http://www2.fiu.edu/~tardanic/spatial.pdf) (the last link goes to a summary, not a book site). Some R code has recently been emerging but I'm unfamiliar with it.
| null | CC BY-SA 2.5 | null | 2010-12-21T16:08:11.367 | 2010-12-21T16:08:11.367 | null | null | 919 | null |
5670 | 2 | null | 5619 | 5 | null | I see no obvious reason not to do so. As far as I know, we usually make a distinction between two kind of effect size (ES) measures for qualifying the strength of an observed association: ES based on $d$ (difference of means) and ES based on $r$ (correlation). The latter includes Pearson's $r$, but also Spearman's $\rho$, Kendall's $\tau$, or the multiple correlation coefficient.
As for their interpretation, I think it mainly depends on the field you are working in: A correlation of .20 would certainly not be interpreted in the same way in psychological vs. software engineering studies. Don't forget that Cohen's three-way classification--small, medium, large--was based on behavioral data, as discussed in Kraemer et al. (2003), p. 1526. In their Table 1, they made no distinction about the different types of ES measures belonging to the $r$ family. They have by no means an absolute meaning and should be interpreted with reference to established results or a literature review.
I would like to add some other references that provide useful reviews of common ES measures and their interpretation.
References
- Helena C. Kraemer, George A. Morgan, Nancy L. Leech, Jeffrey A. Gliner, Jerry J. Vaske, and Robert J. Harmon (2003). Measures of Clinical Significance. J Am Acad Child Adolesc Psychiatry, 42(12), 1524-1529.
- Christopher J. Ferguson (2009). An Effect Size Primer: A Guide for Clinicians and Researchers. Professional Psychology: Research and Practice, 40(5), 532-538.
- Edward F. Fern and Kent B. Monroe (1996). Effect-Size Estimates: Issues and Problems in Interpretation. Journal of Consumer Research, 23, 89-105.
- Daniel J. Denis (2003). Alternatives to Null Hypothesis Significance Testing. Theory and Science, 4(1).
- Paul D. Ellis (2010). The Essential Guide to Effect Sizes. Cambridge University Press. -- just browsed the TOC
| null | CC BY-SA 2.5 | null | 2010-12-21T16:44:46.750 | 2010-12-21T17:02:01.640 | 2010-12-21T17:02:01.640 | 930 | 930 | null |
5671 | 2 | null | 5207 | 3 | null | In fact, you should not do MCMC, since your problem is so much simpler. Try this algorithm:
Step 1: Generate an X from the Log Normal.
Step 2: Keeping this X fixed, generate a Y from the Singh Maddala.
Voilà! Sample Ready!!!
| null | CC BY-SA 3.0 | null | 2010-12-21T17:16:52.633 | 2018-02-24T13:37:18.813 | 2018-02-24T13:37:18.813 | 7224 | 2472 | null |
5672 | 2 | null | 5054 | 1 | null | Maybe I'm missing something here, but if you plot the number of times these hashtags are mentioned over time, shouldn't that tell you something?
Of course, maybe you need automated processing. In that case fit splines to these series, and take the derivative. (They are easy: just look up what the functional data analysts do.) Sharply trending topics will have high derivatives. How high will come from your data.
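A rough sketch of the spline-plus-derivative idea, with a made-up count series:

```
counts <- c(3, 4, 4, 5, 7, 12, 25, 60, 130, 260)        # mentions per time step for one hashtag
fit    <- smooth.spline(seq_along(counts), counts)
slope  <- predict(fit, seq_along(counts), deriv = 1)$y  # first derivative of the fitted spline
tail(slope, 1)                                          # a large value flags a sharply trending topic
```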
Do tell if this worked or not?
| null | CC BY-SA 2.5 | null | 2010-12-21T17:23:16.220 | 2010-12-21T17:23:16.220 | null | null | 2472 | null |
5675 | 1 | 5676 | null | 4 | 1001 | When computing a confidence interval of slope in linear regression, should you use the z- or t-statistic?
| Confidence interval of slope in linear regression | CC BY-SA 2.5 | null | 2010-12-21T19:37:50.770 | 2010-12-22T07:04:10.727 | 2010-12-21T20:59:47.620 | null | 1395 | [
"regression",
"confidence-interval"
]
|
5676 | 2 | null | 5675 | 5 | null | If you're doing linear regression using least squares, you should base confidence intervals on Student's t-distribution.
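For example, `confint()` on a least-squares fit uses the $t$ quantile; compare it with the normal-approximation (z) interval:

```
fit <- lm(dist ~ speed, data = cars)
confint(fit, "speed", level = 0.95)          # t-based interval
est <- coef(summary(fit))["speed", ]
est["Estimate"] + c(-1, 1) * qnorm(0.975) * est["Std. Error"]   # z-based approximation
```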
| null | CC BY-SA 2.5 | null | 2010-12-21T19:43:52.090 | 2010-12-21T19:43:52.090 | null | null | 449 | null |
5677 | 2 | null | 5675 | 5 | null | Rule of thumb: Use Student's t distribution if you must estimate the variance.
Since the distribution's variance is estimated (not known), you should use Student's t distribution rather than the standard normal distribution (z), which requires a known variance.
Although the t distribution becomes almost exactly the same as the z distribution when the degrees of freedom (think size of the sample) are large, it is (in my experience) quite rare that the z distribution is used instead of the t distribution in cases like this.
| null | CC BY-SA 2.5 | null | 2010-12-21T20:10:22.910 | 2010-12-21T21:18:12.697 | 2010-12-21T21:18:12.697 | 1583 | 1583 | null |
5678 | 2 | null | 5054 | 1 | null | As far as I can tell from my reading, a common method for determining a time series' trend is to smooth the series, perhaps in an iterated fashion, as in:
[A Pakistan SBP paper](http://www.sbp.org.pk/departments/stats/sam.pdf). In the Seasonal Adjustment Methodology section, it describes how X-12 ARIMA does it, though they also use a seasonal factor which perhaps you could also use or perhaps you could simply ignore.
Other links might include [A Bank of England web page](http://www.bankofengland.co.uk/mfsd/iadb/notesiadb/seasonal_adjustment.htm) and [A US Census Bureau paper](http://www.census.gov/ts/papers/jbes98.pdf) (pages 8-12).
| null | CC BY-SA 2.5 | null | 2010-12-21T20:21:54.227 | 2010-12-21T20:55:04.573 | 2010-12-21T20:55:04.573 | 1764 | 1764 | null |
5679 | 2 | null | 5675 | 3 | null | Depends on assumptions on your disturbances. If they are normal and homoscedastic, then yes use t-statistic. In economic applications though these assumptions rarely hold, so in that case I would suggest using z-statistic with [robust standard errors](http://en.wikipedia.org/wiki/White_standard_errors).
| null | CC BY-SA 2.5 | null | 2010-12-21T21:10:43.467 | 2010-12-22T07:04:10.727 | 2010-12-22T07:04:10.727 | 2116 | 2116 | null |
5680 | 1 | 5946 | null | 32 | 151636 | I have analyzed an experiment with a repeated measures ANOVA. The ANOVA is a 3x2x2x2x3 with 2 between-subject factors and 3 within (N = 189). Error rate is the dependent variable. The distribution of error rates has a skew of 3.64 and a kurtosis of 15.75. The skew and kurtosis are the result of 90% of the error rate means being 0. Reading some of the previous threads on normality tests here has me a little confused. I thought that if you had data that was not normally distributed it was in your best interest to transform it if possible, but it seems that a lot of people think analyzing non-normal data with an ANOVA or a T-test is acceptable. Can I trust the results of the ANOVA?
(FYI, In the future I intend to analyze this type of data in R with mixed-models with a binomial distribution)
| Can I trust ANOVA results for a non-normally distributed DV? | CC BY-SA 2.5 | null | 2010-12-21T21:38:00.003 | 2019-05-08T13:47:21.173 | null | null | 2322 | [
"anova",
"normality-assumption"
]
|
5681 | 1 | null | null | 1 | 125 | Measurement outcome is a continuous number measured in $p$ locations.
This yields a vector $X_1$ to $X_p$ per subject. These vectors are obtained on $N$ subjects.
In addition these $N$ subjects come from $k$ groups.
What method should I use to compare the average vector across the entire study population to an hypothesized null?
And how to compare between the $k$ groups the per group average vector overall and for individual $X_i$ locations?
| Comparing a vector of averages to a null one | CC BY-SA 2.5 | null | 2010-12-22T00:04:01.290 | 2010-12-22T11:47:26.107 | 2010-12-22T11:31:41.020 | null | null | [
"multiple-comparisons",
"multivariate-analysis"
]
|
5682 | 1 | 6236 | null | 7 | 3016 | I was wondering if anyone had experience with SAS/IML and R, and can give some pointers as to the relative advantages/disadvantages of the two. I've used R extensively for programming and statistical analyses, but haven't had much experience with IML. However, as this company is a SAS shop, I'm likely to have to use it (the alternative is to program everything in base SAS, which sometimes leads to true horrors).
Thanks
| SAS/IML compared to R | CC BY-SA 2.5 | null | 2010-12-22T01:19:57.423 | 2019-02-14T17:43:29.250 | null | null | 1569 | [
"r",
"sas"
]
|