Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6897 | 1 | 6898 | null | 6 | 592 | Here is my example. Suppose we evaluate a characteristic using two different methods (a and b) and we want to study whether both methods perform in the same way. We also know that these two measures have been recorded from two different groups, and the mean values for these two groups are highly different. Our data set could be as follows:
```
a <- c(22,34,56,62,27,53)
b <- c(42.5,43,58.6,55,31.2,51.75)
group <- factor(c(1,1,2,2,1,2), labels=c('bad','good'))
dat <- data.frame(a, b, group)
```
The association between a and b could be calculated as:
```
lm1 <- lm(a ~ b, data=dat)
summary(lm1)
Call:
lm(formula = a ~ b, data = dat)
Residuals:
1 2 3 4 5 6
-13.810 -2.533 -3.106 8.103 7.541 3.806
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -25.6865 19.7210 -1.302 0.2627
b 1.4470 0.4117 3.514 0.0246 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 9.271 on 4 degrees of freedom
Multiple R-squared: 0.7554, Adjusted R-squared: 0.6942
F-statistic: 12.35 on 1 and 4 DF, p-value: 0.02457
```
As we can see, there seems to be a strong association between the two measures. However, if we perform the same analysis for each group separately, this association disappears.
```
lm2 <- lm(a ~ b, data=dat, subset=dat$group=='bad')
summary(lm2)
Call:
lm(formula = a ~ b, data = dat, subset = dat$group == "bad")
Residuals:
1 2 5
-6.0992 5.8407 0.2584
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 22.9931 35.1657 0.654 0.631
b 0.1201 0.8953 0.134 0.915
Residual standard error: 8.449 on 1 degrees of freedom
Multiple R-squared: 0.01769, Adjusted R-squared: -0.9646
F-statistic: 0.01801 on 1 and 1 DF, p-value: 0.915
```
and,
```
lm3 <- lm(a ~ b, data=dat, subset=dat$group=='good')
summary(lm3)
Call:
lm(formula = a ~ b, data = dat, subset = dat$group == "good")
Residuals:
3 4 6
-2.394 5.047 -2.652
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 34.9361 70.4238 0.496 0.707
b 0.4003 1.2761 0.314 0.806
Residual standard error: 6.184 on 1 degrees of freedom
Multiple R-squared: 0.08959, Adjusted R-squared: -0.8208
F-statistic: 0.09841 on 1 and 1 DF, p-value: 0.8065
```
How should we assess the association between the two methods? Should we take the group factor into account? Maybe it is a trivial question, but I have doubts about how to deal with this problem.
| If correlation between two variables is affected by a factor, how should I evaluate this correlation? | CC BY-SA 2.5 | null | 2011-02-04T22:16:28.863 | 2011-02-05T19:53:05.150 | null | null | 221 | [
"correlation"
]
|
6898 | 2 | null | 6897 | 9 | null | This might be a case of locally uncorrelated, but globally correlated variables. The variance in each group might be limited because of group homogeneity, therefore there is no evidence for a relationship within each group. But globally, with the full variance, the relationship can be strong. A schematic illustration of the joint distribution within three groups, and the resulting global joint distribution:

Edit: Your question also seems to be whether the global correlation is still "real", even if the theoretical correlation within each group is 0. Random variables are defined on a probability space $<\Omega, P>$ where $\Omega$ is the set of all outcomes (think of different observable persons in your case), and $P$ is a probability measure. If your natural population $\Omega$ includes members from all groups, then: yes, the variables are "really" correlated. Otherwise, if the members of different groups do not form a natural common $\Omega$, but each belong to separate populations, then: no, the variables are uncorrelated.
| null | CC BY-SA 2.5 | null | 2011-02-04T22:50:54.100 | 2011-02-05T19:53:05.150 | 2011-02-05T19:53:05.150 | 1909 | 1909 | null |
6899 | 1 | null | null | 3 | 156 | [Kavosh](http://www.biomedcentral.com/1471-2105/10/318) is a recent package designed for [network motif](http://en.wikipedia.org/wiki/Network_motif) discovery. To give a comparison, Kavosh generates a collection of similar networks using an MCMC process.
The networks in consideration are directed graphs without loops or multiple edges. Similar networks have the same vertex set and the same in-degrees and out-degrees. Ideally we want to sample uniformly at random from the set of similar graphs.
As I understand it, network motif detection programs typically use a well-studied switching process: take two directed edges (a,c) and (b,d) uniformly at random, then replace with (a,d) and (b,c). Reject if a loop or multiple edge is formed. (see e.g. [http://arxiv.org/abs/cond-mat/0312028](http://arxiv.org/abs/cond-mat/0312028))
Judging from its source code, Kavosh seems to speed-up this process in the following way. For all vertices v do:
- Let a=v.
- b is a random vertex b<>a.
- c is a random out-neighbour of a.
- d is a random out-neighbour of c.
And as before, replace edges (a,c) and (b,d) with (a,d) and (b,c). Reject if a loop or multiple edge is formed. Repeat this whole process three times (so there can be up to 3|V| switches applied in total, where |V| is the number of vertices). [For some reason I'm unsure of, the last two steps are also repeated up to three times.]
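To make the standard switch concrete, here is a minimal R sketch (my own toy code, not the Kavosh implementation), assuming the network is stored as a two-column edge list; in- and out-degrees are preserved, and disallowed moves are simply rejected:
```
switch_step <- function(edges) {
  # 'edges' is a two-column matrix of directed edges (from, to), no loops/multi-edges
  i <- sample(nrow(edges), 2)                 # pick two distinct edges (a,c) and (b,d)
  a <- edges[i[1], 1]; c <- edges[i[1], 2]
  b <- edges[i[2], 1]; d <- edges[i[2], 2]
  if (a == d || b == c) return(edges)         # reject: (a,d) or (b,c) would be a loop
  key <- paste(edges[, 1], edges[, 2])
  if (paste(a, d) %in% key || paste(b, c) %in% key)
    return(edges)                             # reject: would create a multiple edge
  edges[i[1], 2] <- d                         # apply the switch; all degrees unchanged
  edges[i[2], 2] <- c
  edges
}
```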
> Question: What effects could one expect to see as a result of this change? How concerned should one be about this change?
The Kavosh paper does not say much about it:
> In our approach, similar to Milo's random model [17,18] switching operations are applied on the edges of the input network repeatedly, until the network is well randomized.
| Kavosh uses a different switching process in its MCMC; how concerned should I be? | CC BY-SA 2.5 | null | 2011-02-05T01:46:59.883 | 2011-02-05T01:46:59.883 | null | null | 386 | [
"algorithms",
"markov-chain-montecarlo",
"networks",
"graph-theory"
]
|
6902 | 2 | null | 1595 | 38 | null | I haven't seen [scikit-learn](http://scikit-learn.org/) explicitly mentioned in the answers above. It's a Python package for machine learning. It's fairly young but growing extremely rapidly (disclaimer: I am a scikit-learn developer). Its goals are to provide standard machine learning algorithmic tools in a unified interface, with a focus on speed and usability. As far as I know, you cannot find anything similar in Matlab. Its strong points are:
- A detailed documentation, with many examples
- High quality standard supervised learning (regression/classification) tools. Specifically:
very versatile SVM (based on libsvm, but with integration of external patches, and a lot of work on the Python binding)
penalized linear models (Lasso, sparse logistic regression...) with efficient implementations.
- The ability to perform model selection by cross-validation using multiple CPUs
- Unsupervised learning to explore the data or do a first dimensionality reduction, that can easily be chained to supervised learning.
- Open source, BSD licensed. If you are not in a purely academic environment (I am in what would be a national lab in the States) this matters a lot, as Matlab costs are then very high, and you might be thinking of deriving products from your work.
Matlab is a great tool, but in my own work, scipy+scikit-learn is starting to give me an edge on Matlab because Python does a better job with memory due to its view mechanism (and I have big data), and because the scikit-learn enables me to very easily compare different approaches.
| null | CC BY-SA 3.0 | null | 2011-02-05T10:49:45.910 | 2014-08-19T20:55:48.897 | 2014-08-19T20:55:48.897 | 17230 | 1265 | null |
6904 | 2 | null | 4551 | 10 | null | In similar vein to @dirkan - the use of p-values as a formal measure of evidence that the null hypothesis is true. The p-value does have some good heuristic and intuitive features, but it is essentially an incomplete measure of evidence because it makes no reference to the alternative hypothesis. While the data may be unlikely under the null (leading to a small p-value), the data may be even more unlikely under the alternative hypothesis.
The other problem with p-values, which also relates to some styles of hypothesis testing, is that there is no principle telling you which statistic you should choose, apart from the very vague "large value" $\rightarrow$ "unlikely if null hypothesis is true". Once again, you can see the incompleteness showing up, for you should also have "large value" $\rightarrow$ "likely if alternative hypothesis is true" as an additional heuristic feature of the test statistic.
| null | CC BY-SA 2.5 | null | 2011-02-05T12:26:58.653 | 2011-02-05T12:26:58.653 | null | null | 2392 | null |
6905 | 2 | null | 4551 | 9 | null | Using statistics/probability in hypothesis testing to measure the "absolute truth". Statistics simply cannot do this; it can only be of use in deciding between alternatives, which must be specified from "outside" the statistical paradigm. Statements such as "the null hypothesis is proved true by the statistics" are just incorrect; statistics can only tell you "the null hypothesis is favoured by the data, compared to the alternative hypothesis". If you then assume that either the null hypothesis or the alternative must be true, you can say "the null is proved true", but this is only a trivial consequence of your assumption, not anything demonstrated by the data.
| null | CC BY-SA 2.5 | null | 2011-02-05T12:40:27.353 | 2011-02-05T12:40:27.353 | null | null | 2392 | null |
6907 | 1 | 6919 | null | 30 | 7418 | Look at this picture:

If we draw a sample from the red density then some values are expected to be less than 0.25 whereas it is impossible to generate such a sample from the blue distribution. As a consequence, the Kullback-Leibler distance from the red density to the blue density is infinity. However, the two curves are not that distinct, in some "natural sense".
Here is my question: Does there exist an adaptation of the Kullback-Leibler distance that would allow a finite distance between these two curves?
| An adaptation of the Kullback-Leibler distance? | CC BY-SA 3.0 | null | 2011-02-05T13:01:14.260 | 2017-04-11T17:19:15.367 | 2017-04-11T17:19:15.367 | 11887 | 3019 | [
"kullback-leibler"
]
|
6909 | 2 | null | 6907 | 2 | null | Yes there does. Bernardo and Rueda defined something called the "intrinsic discrepancy", which for all purposes is a "symmetrised" version of the KL-divergence. Taking the KL divergence from $P$ to $Q$ to be $\kappa(P \mid Q)$, the intrinsic discrepancy is given by:
$$\delta(P,Q)\equiv \min \big[\kappa(P \mid Q),\kappa(Q \mid P)\big]$$
Searching intrinsic discrepancy (or bayesian reference criterion) will give you some articles on this measure.
In your case, you would just take the KL-divergence which is finite.
Another alternative measure to KL is the Hellinger distance.
EDIT: clarification. Some comments suggested that the intrinsic discrepancy will not be finite when one density is 0 where the other is not. This is not true if the operation of evaluating the zero density is carried out as a limit $Q\rightarrow 0$ or $P\rightarrow 0$. The limit is well defined, and it is equal to $0$ for one of the KL divergences, while the other one will diverge. To see this note:
$$\delta(P,Q)\equiv \min \Big[\int P \,\log \big(\frac{P}{Q}\big),\int Q \log \big(\frac{Q}{P}\big)\Big]$$
Taking limit as $P\rightarrow 0$ over a region of the integral, the second integral diverges, and the first integral converges to $0$ over this region (assuming the conditions are such that one can interchange limits and integration). This is because $\lim_{z\rightarrow 0} z \log(z) =0$. Because of the symmetry in $P$ and $Q$ the result also holds for $Q$.
| null | CC BY-SA 2.5 | null | 2011-02-05T13:23:23.520 | 2011-02-06T17:03:49.210 | 2011-02-06T17:03:49.210 | 2970 | 2392 | null |
6911 | 2 | null | 4551 | 38 | null | Maybe stepwise regression and other forms of testing after model selection.
Selecting independent variables for modelling without having any a priori hypothesis behind the existing relationships can lead to logical fallacies or spurious correlations, among other mistakes.
Useful references (from a biological/biostatistical perspective):
- Kozak, M., & Azevedo, R. (2011). Does using stepwise variable selection to build sequential path analysis models make sense? Physiologia Plantarum, 141(3), 197-200. doi:10.1111/j.1399-3054.2010.01431.x
- Whittingham, M. J., Stephens, P., Bradbury, R. B., & Freckleton, R. P. (2006). Why do we still use stepwise modelling in ecology and behaviour? The Journal of Animal Ecology, 75(5), 1182-9. doi:10.1111/j.1365-2656.2006.01141.x
- Frank Harrell, Regression Modeling Strategies, Springer 2001.
| null | CC BY-SA 3.0 | null | 2011-02-05T15:22:55.597 | 2014-12-09T12:51:56.987 | 2014-12-09T12:51:56.987 | 2126 | 2126 | null |
6912 | 1 | 6914 | null | 10 | 3254 | I came across the post "[Post-hoc Pairwise Comparisons of Two-way ANOVA](http://www.r-bloggers.com/post-hoc-pairwise-comparisons-of-two-way-anova/)" (responding to [this post](http://rtutorialseries.blogspot.com/2011/01/r-tutorial-series-two-way-anova-with.html)), which shows the following:
```
dataTwoWayComparisons <- read.csv("http://www.dailyi.org/blogFiles/RTutorialSeries/dataset_ANOVA_TwoWayComparisons.csv")
model1 <- aov(StressReduction~Treatment+Age, data =dataTwoWayComparisons)
summary(model1) # Treatment is signif
pairwise.t.test(dataTwoWayComparisons$StressReduction, dataTwoWayComparisons$Treatment, p.adj = "none")
# no signif pair
TukeyHSD(model1, "Treatment")
# mental-medical is the signif pair.
```
(Output is attached below.)
Could someone please explain why the Tukey HSD is able to find a significant pair while the pairwise (unadjusted p-value) t-test fails to do so?
Thanks.
---
Here is the code output
```
> model1 <- aov(StressReduction~Treatment+Age, data =dataTwoWayComparisons)
> summary(model1) # Treatment is signif
Df Sum Sq Mean Sq F value Pr(>F)
Treatment 2 18 9.000 11 0.0004883 ***
Age 2 162 81.000 99 1e-11 ***
Residuals 22 18 0.818
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>
> pairwise.t.test(dataTwoWayComparisons$StressReduction, dataTwoWayComparisons$Treatment, p.adj = "none")
Pairwise comparisons using t tests with pooled SD
data: dataTwoWayComparisons$StressReduction and dataTwoWayComparisons$Treatment
medical mental
mental 0.13 -
physical 0.45 0.45
P value adjustment method: none
> # no signif pair
>
> TukeyHSD(model1, "Treatment")
Tukey multiple comparisons of means
95% family-wise confidence level
Fit: aov(formula = StressReduction ~ Treatment + Age, data = dataTwoWayComparisons)
$Treatment
diff lwr upr p adj
mental-medical 2 0.92885267 3.07114733 0.0003172
physical-medical 1 -0.07114733 2.07114733 0.0702309
physical-mental -1 -2.07114733 0.07114733 0.0702309
> # mental-medical is the signif pair.
```
| How could a Tukey HSD test be more significant than the uncorrected p-value of a t-test? | CC BY-SA 2.5 | null | 2011-02-05T15:42:04.080 | 2011-02-05T16:31:07.540 | null | null | 253 | [
"r",
"multiple-comparisons",
"t-test",
"post-hoc",
"tukey-hsd-test"
]
|
6913 | 1 | 6928 | null | 21 | 5537 | I recently read the advice that you should generally use the median, not the mean, to eliminate the influence of outliers.
Example:
The following article
[http://www.amazon.com/Forensic-Science-Introduction-Scientific-Investigative/product-reviews/1420064932/](http://www.amazon.com/Forensic-Science-Introduction-Scientific-Investigative/product-reviews/1420064932/)
has 16 reviews at the moment:
```
review = c(5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 3, 2, 1, 1)
summary(review) ## "ordinary" summary
Min. 1st Qu. Median Mean 3rd Qu. Max.
1.000 3.750 5.000 4.062 5.000 5.000
```
Because they use the mean, the article gets 4 stars, but if they used the median it would get 5 stars.
Isn't the median a 'fairer' judge?
---
An experiment shows that the median's error is always bigger than the mean's. Is the median worse?
```
library(foreach)
# the overall population of bookjudgments
n <- 5
p <- 0.5
expected.value <- n*p
peoplesbelieve <- rbinom(10^6,n, p)
# 16 ratings made for 100 books
ratings <- foreach(i=1:100, .combine=rbind) %do% sample(peoplesbelieve,16)
stat <- foreach(i=1:100, .combine=rbind) %do% c(mean=mean(ratings[i,]),
median=median(ratings[i,]))
# which mean square error is bigger? Mean's or Median's?
meansqrterror.mean <- mean((stat[,"mean"]-expected.value)^2)
meansqrterror.median <- mean((stat[,"median"]-expected.value)^2)
res <- paste("mean MSE",meansqrterror.mean)
res <- paste(res, "| median MSE", meansqrterror.median)
print(res)
```
| Is median fairer than mean? | CC BY-SA 4.0 | null | 2011-02-05T16:29:45.263 | 2020-07-25T11:57:14.297 | 2020-07-25T11:57:14.297 | 7290 | 778 | [
"mean",
"median",
"types-of-averages"
]
|
6914 | 2 | null | 6912 | 10 | null | Because your pairwise $t$-test above is not adjusted for age, and age explains a lot of the variance in StressReduction.
| null | CC BY-SA 2.5 | null | 2011-02-05T16:31:07.540 | 2011-02-05T16:31:07.540 | null | null | 449 | null |
6915 | 2 | null | 6913 | 8 | null | If the only choices are integers in the range 1 to 5, can any really be considered an outlier?
I'm sure that with small sample sizes, popular outlier tests will fail, but that just points out the problems inherent in small samples. Indeed, given a sample of 5, 5, 5, 5, 5, 1, Grubbs' test reports 1 as an outlier at $\alpha = 0.05$. The same test for the data you give above does not identify the 1's as outliers.
```
Grubbs test for one outlier

data:  review
G = 2.0667, U = 0.6963, p-value = 0.2153
alternative hypothesis: lowest value 1 is an outlier
```
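For reference, those results can be reproduced along these lines (a sketch assuming the `outliers` package and the `review` vector from the question):
```
library(outliers)
review <- c(5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 3, 2, 1, 1)
grubbs.test(review)                # the 16 ratings above: the 1's are not flagged
grubbs.test(c(5, 5, 5, 5, 5, 1))   # the small-sample example discussed above
```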
| null | CC BY-SA 2.5 | null | 2011-02-05T16:46:57.890 | 2011-02-05T16:54:18.127 | 2011-02-05T16:54:18.127 | 597 | 597 | null |
6916 | 1 | null | null | 17 | 384 | I'm teaching an intro stats class and was reviewing the types of sampling, including systematic sampling where you sample every kth individual or object.
A student asked if sampling every person with a particular characteristic would accomplish the same thing.
For example, would sampling every person with a blue t-shirt be random enough and provide enough of a representation of the whole population? At least, if you're asking a question other than "What color t-shirt do you prefer wearing?" My sense is no, but I wondered if anyone here had any thoughts on this.
| Is "every blue t-shirted person" a systematic sample? | CC BY-SA 2.5 | null | 2011-02-05T16:48:59.650 | 2011-02-06T10:59:39.137 | 2011-02-06T10:59:39.137 | null | 1490 | [
"sampling"
]
|
6917 | 2 | null | 6897 | 2 | null | Therefore, it is important to evaluate whether the homogeneity of the groups is due to the small amount of data, or whether these groups really are quite homogeneous and different from each other. In the first case, we could still claim a high correlation even though it has not been observed within each group separately.
But what would happen in the second case? If, even with a large amount of data, we could not observe a correlation within each group, could we say that this correlation exists?
Perhaps the value of one of these measures would only be useful for predicting membership in one of the two groups, but not for predicting the value of the other measure.
| null | CC BY-SA 2.5 | null | 2011-02-05T17:07:20.920 | 2011-02-05T17:07:20.920 | null | null | 221 | null |
6918 | 2 | null | 6916 | 22 | null | The answer, in general, to your question is "no". Obtaining a random sample from a population (especially of humans) is notoriously difficult. By conditioning on a particular characteristic, you're by definition not obtaining a random sample. How much bias this introduces is another matter altogether.
As a slightly absurd example, you wouldn't want to sample this way at, say, a football game between the Bears and the Packers, even if your population was "football fans". (Bears fans may have different characteristics than other football fans, even when the quantity you are interested in may not seem directly related to football.)
There are many famous examples of hidden bias resulting from obtaining samples in this way. For example, in recent US elections in which phone polls have been conducted, it is believed that people owning only a cell phone and no landline are (perhaps dramatically) underrepresented in the sample. Since these people also tend to be, by and large, younger than those with landlines, a biased sample is obtained. Furthermore, younger people have very different political beliefs than older populations. So, this is a simple example of a case where, even when the sample was not intentionally conditioned on a particular characteristic, it still happened that way. And, even though the poll had nothing to do with the conditioning characteristic either (i.e., whether or not one uses a landline), the effect of the conditioning characteristic on the poll's conclusions was significant, both statistically and practically.
| null | CC BY-SA 2.5 | null | 2011-02-05T17:20:22.780 | 2011-02-05T17:33:53.097 | 2011-02-05T17:33:53.097 | 2970 | 2970 | null |
6919 | 2 | null | 6907 | 21 | null | You might look at Chapter 3 of Devroye, Gyorfi, and Lugosi, A Probabilistic Theory of Pattern Recognition, Springer, 1996. See, in particular, the section on $f$-divergences.
$f$-Divergences can be viewed as a generalization of Kullback--Leibler (or, alternatively, KL can be viewed as a special case of an $f$-Divergence).
The general form is
$$
D_f(p, q) = \int q(x) f\left(\frac{p(x)}{q(x)}\right) \, \lambda(dx) ,
$$
where $\lambda$ is a measure that dominates the measures associated with $p$ and $q$ and $f(\cdot)$ is a convex function satisfying $f(1) = 0$. (If $p(x)$ and $q(x)$ are densities with respect to Lebesgue measure, just substitute the notation $dx$ for $\lambda(dx)$ and you're good to go.)
We recover KL by taking $f(x) = x \log x$. We can get the (squared) Hellinger distance via $f(x) = (1 - \sqrt{x})^2$ and we get the total-variation or $L_1$ distance by taking $f(x) = \frac{1}{2} |x - 1|$. The latter gives
$$
D_{\mathrm{TV}}(p, q) = \frac{1}{2} \int |p(x) - q(x)| \, dx
$$
Note that this last one at least gives you a finite answer.
In another little book entitled Density Estimation: The $L_1$ View, Devroye argues strongly for the use of this latter distance due to its many nice invariance properties (among others). This latter book is probably a little harder to get a hold of than the former and, as the title suggests, a bit more specialized.
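If you want to play with these divergences numerically, a rough grid approximation in R looks like the following (the two normal densities are purely illustrative and not taken from the question):
```
x  <- seq(-10, 10, length.out = 1e5)
dx <- x[2] - x[1]
p  <- dnorm(x, mean = 0, sd = 1)
q  <- dnorm(x, mean = 1, sd = 2)
KL  <- sum(p * log(p / q)) * dx          # f(x) = x log x
Hel <- sum((sqrt(p) - sqrt(q))^2) * dx   # f(x) = (1 - sqrt(x))^2
TV  <- 0.5 * sum(abs(p - q)) * dx        # f(x) = |x - 1| / 2
c(KL = KL, Hellinger = Hel, TotalVariation = TV)
```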
---
Addendum: Via [this question](https://stats.stackexchange.com/questions/7630/clustering-jsd-or-jsd2), I became aware that it appears that the measure that @Didier proposes is (up to a constant) known as the Jensen-Shannon Divergence. If you follow the link to the answer provided in that question, you'll see that it turns out that the square-root of this quantity is actually a metric and was previously recognized in the literature to be a special case of an $f$-divergence. I found it interesting that we seem to have collectively "reinvented" the wheel (rather quickly) via the discussion of this question. The interpretation I gave to it in the comment below @Didier's response was also previously recognized. All around, kind of neat, actually.
| null | CC BY-SA 2.5 | null | 2011-02-05T18:14:56.510 | 2011-02-27T03:38:48.520 | 2017-04-13T12:44:51.217 | -1 | 2970 | null |
6920 | 1 | 6923 | null | 67 | 19874 | I'm analysing some data where I would like to perform ordinary linear regression; however, this is not possible as I am dealing with an on-line setting with a continuous stream of input data (which will quickly get too large for memory) and need to update the parameter estimates while the stream is being consumed. That is, I cannot just load it all into memory and perform linear regression on the entire data set.
I'm assuming a simple linear multivariate regression model, i.e.
$$\mathbf y = \mathbf A\mathbf x + \mathbf b + \mathbf e$$
What's the best algorithm for creating a continuously updating estimate of the linear regression parameters $\mathbf A$ and $\mathbf b$?
Ideally:
- I'd like an algorithm with at most $\mathcal O(N\cdot M)$ space and time complexity per update, where $N$ is the dimensionality of the independent variable ($\mathbf x$) and $M$ is the dimensionality of the dependent variable ($\mathbf y$).
- I'd like to be able to specify some parameter to determine how much the parameters are updated by each new sample, e.g. 0.000001 would mean that the next sample would provide one millionth of the parameter estimate. This would give some kind of exponential decay for the effect of samples in the distant past.
| Efficient online linear regression | CC BY-SA 3.0 | null | 2011-02-05T18:25:52.210 | 2022-04-04T17:10:48.880 | 2022-04-04T17:10:48.880 | 53690 | 2942 | [
"regression",
"time-series",
"algorithms",
"online-algorithms",
"real-time"
]
|
6921 | 2 | null | 1595 | 17 | null | Great overview so far. I've been using Python (specifically scipy + matplotlib) as a Matlab replacement for 3 years of work at university. I sometimes still go back because I'm familiar with specific libraries, e.g. the Matlab wavelet package is purely awesome.
I like the [http://enthought.com/](http://enthought.com/) python distribution. It's commercial, yet free for academic purposes and, as far as I know, completely open-source. As I'm working with a lot of students, before using enthought it was sometimes troublesome for them to install numpy, scipy, ipython etc. Enthought provides an installer for Windows, Linux and Mac.
Two other packages worth mentioning:
- ipython (comes already with enthought) great advanced shell. a good intro is on showmedo http://showmedo.com/videotutorials/series?name=PythonIPythonSeries
- nltk - the natural language toolkit http://www.nltk.org/ great package in case you want to do some statistics /machine learning on any corpus.
| null | CC BY-SA 2.5 | null | 2011-02-05T18:31:42.033 | 2011-02-05T18:31:42.033 | null | null | 2904 | null |
6922 | 2 | null | 6920 | 9 | null | I think recasting your linear regression model into a [state-space model](http://faculty.chicagobooth.edu/hedibert.lopes/teaching/MCMCtutorial/unit10-dynamicmodels.pdf) will give you what you are after. If you use R, you may want to use package [dlm](http://cran.r-project.org/web/packages/dlm/index.html)
and have a look at the [companion book](http://rads.stackoverflow.com/amzn/click/0387772375) by Petris et al.
| null | CC BY-SA 2.5 | null | 2011-02-05T18:52:53.013 | 2011-02-05T18:52:53.013 | null | null | 892 | null |
6923 | 2 | null | 6920 | 36 | null | Maindonald describes a sequential method based on [Givens rotations](http://en.wikipedia.org/wiki/Givens_rotation). (A Givens rotation is an orthogonal transformation of two vectors that zeros out a given entry in one of the vectors.) At the previous step you have decomposed the [design matrix](http://en.wikipedia.org/wiki/Design_matrix) $\mathbf{X}$ into a triangular matrix $\mathbf{T}$ via an orthogonal transformation $\mathbf{Q}$ so that $\mathbf{Q}\mathbf{X} = (\mathbf{T}, \mathbf{0})'$. (It's fast and easy to get the regression results from a triangular matrix.) Upon adjoining a new row $v$ below $\mathbf{X}$, you effectively extend $(\mathbf{T}, \mathbf{0})'$ by a nonzero row, too, say $t$. The task is to zero out this row while keeping the entries in the position of $\mathbf{T}$ diagonal. A sequence of Givens rotations does this: the rotation with the first row of $\mathbf{T}$ zeros the first element of $t$; then the rotation with the second row of $\mathbf{T}$ zeros the second element, and so on. The effect is to premultiply $\mathbf{Q}$ by a series of rotations, which does not change its orthogonality.
When the design matrix has $p+1$ columns (which is the case when regressing on $p$ variables plus a constant), the number of rotations needed does not exceed $p+1$ and each rotation changes two $p+1$-vectors. The storage needed for $\mathbf{T}$ is $O((p+1)^2)$. Thus this algorithm has a computational cost of $O((p+1)^2)$ in both time and space.
A similar approach lets you determine the effect on regression of deleting a row. Maindonald gives formulas; so do [Belsley, Kuh, & Welsh](http://books.google.com/books?id=GECBEUJVNe0C&printsec=frontcover&dq=Belsley,+Kuh,+%26+Welsch&source=bl&ots=b6k5lVb7E3&sig=vOuq7Ehg3OrO01CepQ8p-DZ1Bww&hl=en&ei=m7hNTeyfNsSAlAeY3eXcDw&sa=X&oi=book_result&ct=result&resnum=5&ved=0CDMQ6AEwBA#v=onepage&q&f=false). Thus, if you are looking for a moving window for regression, you can retain data for the window within a circular buffer, adjoining the new datum and dropping the old one with each update. This doubles the update time and requires additional $O(k (p+1))$ storage for a window of width $k$. It appears that $1/k$ would be the analog of the influence parameter.
For exponential decay, I think (speculatively) that you could adapt this approach to weighted least squares, giving each new value a weight greater than 1. There shouldn't be any need to maintain a buffer of previous values or delete any old data.
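For concreteness, here is a minimal R sketch of the row-update idea (my own toy code, not Maindonald's): maintain the triangular factor and the correspondingly rotated responses, rotate each incoming row in, and backsolve whenever coefficients are wanted. Exponential downweighting could, speculatively, be added by scaling both `R` and `Qty` by the square root of a forgetting factor before each update.
```
update_row <- function(R, Qty, x, y) {
  # R: p x p upper-triangular factor; Qty: rotated responses; (x, y): new observation
  p <- length(x)
  for (j in seq_len(p)) {
    if (x[j] != 0) {
      r  <- sqrt(R[j, j]^2 + x[j]^2)
      cs <- R[j, j] / r
      sn <- x[j] / r
      Rj <- R[j, j:p]; xj <- x[j:p]
      R[j, j:p] <- cs * Rj + sn * xj     # Givens rotation zeroing element j of the new row
      x[j:p]    <- -sn * Rj + cs * xj
      tmp    <- Qty[j]                   # the same rotation applied to the responses
      Qty[j] <- cs * tmp + sn * y
      y      <- -sn * tmp + cs * y
    }
  }
  list(R = R, Qty = Qty)
}

# Start from R <- matrix(0, p, p) and Qty <- rep(0, p) and feed each row through
# update_row; backsolve(R, Qty) should then reproduce (up to rounding) the coefficients
# of lm(y ~ X - 1). Include a column of ones in x if an intercept is wanted.
```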
### References
J. H. Maindonald, Statistical Computation. J. Wiley & Sons, 1984. Chapter 4.
D. A. Belsley, E. Kuh, R. E. Welsch, Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. J. Wiley & Sons, 1980.
| null | CC BY-SA 3.0 | null | 2011-02-05T19:09:20.667 | 2012-11-19T23:16:15.307 | 2020-06-11T14:32:37.003 | -1 | 919 | null |
6924 | 2 | null | 6907 | 10 | null | The [Kolmogorov distance](http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) between two distributions $P$ and $Q$ is the sup norm of their CDFs. (This is the largest vertical discrepancy between the two graphs of the CDFs.) It is used in distributional testing where $P$ is an hypothesized distribution and $Q$ is the empirical distribution function of a dataset.
It is hard to characterize this as an "adaptation" of the KL distance, but it does meet the other requirements of being "natural" and finite.
Incidentally, because the KL divergence is not a true "distance," we don't have to worry about preserving all the axiomatic properties of a distance. We can maintain the non-negativity property while making the values finite by applying any monotonic transformation $\mathbb{R_+} \to [0,C]$ for some finite value $C$. The inverse tangent will do fine, for instance.
| null | CC BY-SA 2.5 | null | 2011-02-05T19:26:34.743 | 2011-02-05T19:26:34.743 | null | null | 919 | null |
6925 | 2 | null | 5960 | 26 | null | There is a well-known paper by Silverman that deals with this issue. It employs kernel-density estimation. See
>
B. W. Silverman, Using kernel density estimates to investigate multimodality, J. Royal Stat. Soc. B, vol. 43, no. 1, 1981, pp. 97-99.
Note that there are some errors in the tables of the paper. This is just a starting point, but a pretty good one. It provides a well-defined algorithm to use, in the event that's what you're most looking for. You might look on Google Scholar at papers that cite it for more "modern" approaches.
| null | CC BY-SA 3.0 | null | 2011-02-05T20:23:18.553 | 2011-12-21T17:37:45.403 | 2011-12-21T17:37:45.403 | 2970 | 2970 | null |
6926 | 2 | null | 6916 | 6 | null | As long as the distribution of the characteristic you are using to select units into the sample is orthogonal to the distribution of the characteristic of the population you want to estimate, you can obtain an unbiased estimate of the population quantity by conditioning selection on it. The sample is not strictly a random sample. But people tend to overlook that random samples are good because the random variable used to select units into sample is orthogonal to the distribution of the population characteristic, not because it is random.
Just think about drawing selection indicators from a Bernoulli with success probability invlogit(x_i), where x_i in [-inf, inf] is a feature of unit i such that Cov(x, y) != 0, and y is the population characteristic whose mean you want to estimate. The sample is "random" in the sense that you are randomizing before selecting into the sample. But the sample does not yield an unbiased estimate of the population mean of y.
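A toy simulation of that Bernoulli(invlogit(x)) selection (all names here are mine) makes the bias visible:
```
set.seed(1)
n <- 1e5
x <- rnorm(n)
y <- x + rnorm(n)                      # Cov(x, y) != 0
sel <- rbinom(n, 1, plogis(x)) == 1    # selection probability depends on x
c(population_mean = mean(y),           # close to 0
  selected_mean   = mean(y[sel]))      # biased upward
```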
What you need is to condition selection into the sample on a variable that is as good as randomly assigned, i.e., one that is orthogonal to the variable on which the quantity of interest depends. Randomizing is good because it ensures orthogonality, not because of randomization itself.
| null | CC BY-SA 2.5 | null | 2011-02-05T21:42:45.253 | 2011-02-05T21:42:45.253 | null | null | 3069 | null |
6927 | 1 | 6936 | null | 9 | 2731 | I'm doing a simulation study which requires bootstrapping estimates obtained from a generalized linear mixed model (actually, the product of two estimates for fixed effects, one from a GLMM and one from an LMM). To do the study well would require about 1000 simulations with 1000 or 1500 bootstrap replications each time. This takes a significant amount of time on my computer (many days).
`How can I speed up the computation of these fixed effects?`
To be more specific, I have subjects who are measured repeatedly in three ways, giving rise to variables X, M, and Y, where X and M are continuous and Y is binary. We have two regression equations
$$M=\alpha_0+\alpha_1X+\epsilon_1$$
$$Y^*=\beta_0+\beta_1X+\beta_2M+\epsilon_2$$
where $Y^*$ is the underlying latent continuous variable for $Y$ and the errors are not iid.
The statistic we want to bootstrap is $\alpha_1\beta_2$. Thus, each bootstrap replication requires fitting an LMM and a GLMM. My R code is (using lme4)
```
stat <- function(dat){
  a <- fixef(lmer(M ~ X + (X | person), data = dat))["X"]
  b <- fixef(glmer(Y ~ X + M + (X + M | person), data = dat, family = "binomial"))["M"]
  return(a * b)
}
```
I realize that I get the same estimate for $\alpha_1$ if I just fit it as a linear model, so that saves some time, but the same trick doesn't work for $\beta_2$.
Do I just need to buy a faster computer? :)
| How can I speed up calculation of the fixed effects in a GLMM? | CC BY-SA 2.5 | null | 2011-02-05T21:46:17.677 | 2011-02-06T15:16:20.523 | null | null | 2739 | [
"r",
"mixed-model"
]
|
6928 | 2 | null | 6913 | 30 | null | The problem is that you haven't really defined what it means to have a good or fair rating. You suggest in a comment on @Kevin's answer that you don't like it if one bad review takes down an item. But comparing two items where one has a "perfect record" and the other has one bad review, maybe that difference should be reflected.
There's a whole (high-dimensional) continuum between median and mean. You can order the votes by value, then take a weighted average with the weights depending on the position in that order. The mean corresponds to all weights being equal, the median corresponds to only one or two entries in the middle getting nonzero weight, a trimmed average corresponds to giving all except the first and last couple the same weight, but you could also decide to weight the $k$th out of $n$ samples with weight $\frac{1}{1 + (2 k - 1 - n)^2}$ or $\exp(-\frac{(2k - 1 - n)^2}{n^2})$, to throw something random in there. Maybe such a weighted average where the outliers get less weight, but still a nonzero amount, could combine good properties of median and mean?
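As a rough illustration (my own code; the two weight functions are just the examples mentioned above, not recommendations), here is how such position-weighted averages compare with the mean and median for the review data:
```
review <- c(5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 3, 2, 1, 1)
n <- length(review)
k <- seq_len(n)
x <- sort(review)                      # order the votes by value
w1 <- 1 / (1 + (2 * k - 1 - n)^2)      # weight concentrated on the middle positions
w2 <- exp(-(2 * k - 1 - n)^2 / n^2)    # smoother downweighting of the extremes
c(mean   = mean(review),
  median = median(review),
  w1     = sum(w1 * x) / sum(w1),
  w2     = sum(w2 * x) / sum(w2))
```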
| null | CC BY-SA 2.5 | null | 2011-02-05T22:13:43.713 | 2011-02-05T22:13:43.713 | null | null | 2898 | null |
6929 | 2 | null | 6927 | 2 | null | It could possibly be a faster computer. But here is one trick which may work.
Generate a simulation of $Y^*$, but only conditional on $Y$, then just do OLS or LMM on the simulated $Y^*$ values.
Suppose your link function is $g(\cdot)$; this says how you get from the probability of $Y=1$ to the $Y^*$ value, and is most likely the logit function $g(z)=\log \Big(\frac{z}{1-z}\Big)$.
So if you assume a Bernoulli sampling distribution for $Y$, i.e. $Y\sim \text{Bernoulli}(p)$, and then use the Jeffreys prior for the probability, you get a beta posterior $p\sim \text{Beta}(Y_{obs}+\frac{1}{2},1-Y_{obs}+\frac{1}{2})$. Simulating from this should be like lightning, and if it isn't, then you need a faster computer. Further, the samples are independent, so there is no need to check any "convergence" diagnostics such as in MCMC, and you probably don't need as many samples - 100 may work fine for your case. If you have binomial $Y$'s, then just replace the $1$ in the above posterior with $n_i$, the number of trials of the binomial for each $Y_i$.
So you have a set of simulated values $p_{sim}$. You then apply the link function to each of these values, to get $Y_{sim}=g(p_{sim})$. Fit a LMM to $Y_{sim}$, which is probably quicker than the GLMM program. You can basically ignore the original binary values (but don't delete them!), and just work with the "simulation matrix" ($N\times S$, where $N$ is the sample size, and $S$ is the number of simulations).
So in your program, I would replace the `glmer()` call with `lmer()`, and $Y$ with a single simulation. You would then create some sort of loop which applies `lmer()` to each simulation and takes the average as the estimate of $b$. Something like
$$a=\dots$$
$$b=0$$
$$\text{do } s=1,\dots,S$$
$$\quad b_{est}=\text{lmer}(Y_s\dots)$$
$$\quad b=b+\tfrac{1}{s}(b_{est}-b)$$
$$\text{end}$$
$$\text{return}(a\cdot b)$$
Let me know if I need to explain anything a bit clearer
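In R (assuming `lme4` is loaded and `dat` is the current bootstrap sample, as in the question's `stat()` function), the loop above might look roughly like this; whether the approximation is adequate for your problem is, of course, the substantive question:
```
S <- 100
a <- fixef(lmer(M ~ X + (X | person), data = dat))["X"]
b <- 0
for (s in 1:S) {
  p_sim     <- rbeta(nrow(dat), dat$Y + 0.5, 1 - dat$Y + 0.5)  # Jeffreys-posterior draws
  dat$Ystar <- qlogis(p_sim)                                   # apply the logit link
  b_est     <- fixef(lmer(Ystar ~ X + M + (X + M | person), data = dat))["M"]
  b <- b + (b_est - b) / s                                     # running average
}
a * b
```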
| null | CC BY-SA 2.5 | null | 2011-02-05T23:20:50.217 | 2011-02-05T23:20:50.217 | null | null | 2392 | null |
6930 | 2 | null | 6776 | 1 | null | You would just do this as part of the algorithm. So you would
a) select observing $Y$ or $Y_2$, by your random probability
b) fit the relevant regression, depending on which part of the mixture you chose.
If you are also mixing over $X$ as well, this doesn't change the above method, you just act as if the mixing $X.mix$ corresponds to whatever you chose for $Y.mix$ (i.e. if you chose $Y$, then assume $X$, if you chose $Y_2$ then assume $X_2$)
You do this once for each Y. Then you need a way to relate your predictions back to the original probability of being in each mixture, which you haven't specified. Otherwise, there is nothing to plug back into the start of the algorithm.
One way to think about it is that an observation with a low residual is likely to match the relevant pair (i.e. you actually chose $Y$ and $X$, or $Y_2$ and $X_2$). Further, the large sampled $Y_2$ values are likely to be wrong, and the small sampled $Y$ values are likely to be wrong. Using this kind of information will help you update the mixing probabilities.
| null | CC BY-SA 2.5 | null | 2011-02-06T01:02:12.610 | 2011-02-07T10:44:03.220 | 2011-02-07T10:44:03.220 | 2116 | 2392 | null |
6931 | 2 | null | 6320 | 2 | null | If you have no reason to suspect more than one mode in your data, then a multivariate normal distribution is not a bad first go. You just calculate the mean vector, and covariance matrix, and there's your PDF.
However, this is a very rough answer, and it seems to me that with 100 observations you will probably have multi-modal data. A normal just looks like one big smooth multi-dimensional mountain. Your data would probably look more like a multi-dimensional mountain range (with lots of local humps).
| null | CC BY-SA 2.5 | null | 2011-02-06T01:23:06.167 | 2011-02-06T01:23:06.167 | null | null | 2392 | null |
6932 | 2 | null | 1595 | 12 | null | Perhaps not directly related, but R has a nice GUI environment for interactive sessions (edit: on Mac/Windows). IPython is very good but for an environment closer to Matlab's you might try Spyder or IEP. I've had better luck of late using IEP, but Spyder looks more promising.
IEP:
[http://code.google.com/p/iep/](http://code.google.com/p/iep/)
Spyder:
[http://packages.python.org/spyder/](http://packages.python.org/spyder/)
And the IEP site includes a brief comparison of related software: [http://code.google.com/p/iep/wiki/Alternatives](http://code.google.com/p/iep/wiki/Alternatives)
| null | CC BY-SA 2.5 | null | 2011-02-06T04:09:24.287 | 2011-02-06T04:09:24.287 | null | null | 26 | null |
6933 | 2 | null | 6795 | 21 | null | This is problem 3.23 on page 97 of [Hastie et al., Elements of Statistical Learning, 2nd. ed. (5th printing)](http://www.stanford.edu/~hastie/local.ftp/Springer/ESLII_print5.pdf#page=115).
The key to this problem is a good understanding of ordinary least squares (i.e., linear regression), particularly the orthogonality of the fitted values and the residuals.
Orthogonality lemma: Let $X$ be the $n \times p$ design matrix, $y$ the response vector and $\beta$ the (true) parameters. Assuming $X$ is full-rank (which we will throughout), the OLS estimates of $\beta$ are $\hat{\beta} = (X^T X)^{-1} X^T y$. The fitted values are $\hat{y} = X (X^T X)^{-1} X^T y$. Then $\langle \hat{y}, y-\hat{y} \rangle = \hat{y}^T (y - \hat{y}) = 0$. That is, the fitted values are orthogonal to the residuals. This follows since $X^T (y - \hat{y}) = X^T y - X^T X (X^T X)^{-1} X^T y = X^T y - X^T y = 0$.
Now, let $x_j$ be a column vector such that $x_j$ is the $j$th column of $X$. The assumed conditions are:
- $\frac{1}{N} \langle x_j, x_j \rangle = 1$ for each $j$, $\frac{1}{N} \langle y, y \rangle = 1$,
- $\frac{1}{N} \langle x_j, 1_N \rangle = \frac{1}{N} \langle y, 1_N \rangle = 0$ where $1_N$ denotes a vector of ones of length $N$, and
- $\frac{1}{N} | \langle x_j, y \rangle | = \lambda$ for all $j$.
Note that in particular, the last statement of the orthogonality lemma is identical to $\langle x_j, y - \hat{y} \rangle = 0$ for all $j$.
---
The correlations are tied
Now, $u(\alpha) = \alpha X \hat{\beta} = \alpha \hat{y}$. So,
$$
\langle x_j, y - u(\alpha) \rangle = \langle x_j, (1-\alpha) y + \alpha y - \alpha \hat{y} \rangle = (1-\alpha) \langle x_j, y \rangle + \alpha \langle x_j, y - \hat{y} \rangle ,
$$
and the second term on the right-hand side is zero by the orthogonality lemma, so
$$
\frac{1}{N} | \langle x_j, y - u(\alpha) \rangle | = (1-\alpha) \lambda ,
$$
as desired. The absolute value of the correlations are just
$$
\hat{\rho}_j(\alpha) = \frac{\frac{1}{N} | \langle x_j, y - u(\alpha) \rangle |}{\sqrt{\frac{1}{N} \langle x_j, x_j \rangle }\sqrt{\frac{1}{N} \langle y - u(\alpha), y - u(\alpha) \rangle }} = \frac{(1-\alpha)\lambda}{\sqrt{\frac{1}{N} \langle y - u(\alpha), y - u(\alpha) \rangle }}
$$
Note: The right-hand side above is independent of $j$ and the numerator is just the same as the covariance since we've assumed that all the $x_j$'s and $y$ are centered (so, in particular, no subtraction of the mean is necessary).
What's the point? As $\alpha$ increases the response vector is modified so that it inches its way toward that of the (restricted!) least-squares solution obtained from incorporating only the first $p$ parameters in the model. This simultaneously modifies the estimated parameters since they are simple inner products of the predictors with the (modified) response vector. The modification takes a special form though. It keeps the (magnitude of) the correlations between the predictors and the modified response the same throughout the process (even though the value of the correlation is changing). Think about what this is doing geometrically and you'll understand the name of the procedure!
---
Explicit form of the (absolute) correlation
Let's focus on the term in the denominator, since the numerator is already in the required form. We have
$$
\langle y - u(\alpha), y - u(\alpha) \rangle = \langle (1-\alpha) y + \alpha y - u(\alpha), (1-\alpha) y + \alpha y - u(\alpha) \rangle .
$$
Substituting in $u(\alpha) = \alpha \hat{y}$ and using the linearity of the inner product, we get
$$
\langle y - u(\alpha), y - u(\alpha) \rangle = (1-\alpha)^2 \langle y, y \rangle + 2\alpha(1-\alpha) \langle y, y - \hat{y} \rangle + \alpha^2 \langle y-\hat{y}, y-\hat{y} \rangle .
$$
Observe that
- $\langle y, y \rangle = N$ by assumption,
- $\langle y, y - \hat{y} \rangle = \langle y - \hat{y}, y - \hat{y} \rangle + \langle \hat{y}, y - \hat{y} \rangle = \langle y - \hat{y}, y - \hat{y}\rangle$, by applying the orthogonality lemma (yet again) to the second term in the middle; and,
- $\langle y - \hat{y}, y - \hat{y} \rangle = \mathrm{RSS}$ by definition.
Putting this all together, you'll notice that we get
$$
\hat{\rho}_j(\alpha) = \frac{(1-\alpha) \lambda}{\sqrt{ (1-\alpha)^2 + \frac{\alpha(2-\alpha)}{N} \mathrm{RSS}}} = \frac{(1-\alpha) \lambda}{\sqrt{ (1-\alpha)^2 (1 - \frac{\mathrm{RSS}}{N}) + \frac{1}{N} \mathrm{RSS}}}
$$
To wrap things up, $1 - \frac{\mathrm{RSS}}{N} = \frac{1}{N} (\langle y, y \rangle - \langle y - \hat{y}, y - \hat{y} \rangle ) \geq 0$ and so it's clear that $\hat{\rho}_j(\alpha)$ is monotonically decreasing in $\alpha$ and $\hat{\rho}_j(\alpha) \downarrow 0$ as $\alpha \uparrow 1$.
---
Epilogue: Concentrate on the ideas here. There is really only one. The orthogonality lemma does almost all the work for us. The rest is just algebra, notation, and the ability to put these last two to work.
| null | CC BY-SA 3.0 | null | 2011-02-06T05:55:09.133 | 2011-05-24T22:56:27.137 | 2011-05-24T22:56:27.137 | 2970 | 2970 | null |
6935 | 2 | null | 6927 | 4 | null | Two other possibilities to consider before buying a new computer.
- Parallel computing - bootstrapping is easy to run in parallel. If your computer is reasonably new, you probably have four cores. Take a look at the multicore library in R; a minimal sketch follows this list.
- Cloud computing is also a possibility and reasonably cheap. I have colleagues that have used the amazon cloud for running R scripts. They found that it was quite cost effective.
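A rough sketch of the parallel route, using the parallel package that now ships with R rather than multicore (`stat()` and `dat` are from the question, and the subject-level resampling scheme here is only one possible choice):
```
library(parallel)
one_boot <- function(i) {
  ids  <- sample(unique(dat$person), replace = TRUE)              # resample whole subjects
  bdat <- do.call(rbind, lapply(seq_along(ids), function(j)
            transform(dat[dat$person == ids[j], ], person = j)))  # relabel resampled clusters
  stat(bdat)
}
boot_stats <- unlist(mclapply(1:1000, one_boot, mc.cores = 4))    # use parLapply on Windows
```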
| null | CC BY-SA 2.5 | null | 2011-02-06T13:22:55.083 | 2011-02-06T13:22:55.083 | null | null | 8 | null |
6936 | 2 | null | 6927 | 4 | null | It should help to specify starting values, though it's hard to know how much. As you're doing simulation and bootstrapping, you should know the 'true' values or the un-bootstrapped estimates or both. Try using those in the `start =` option of `glmer`.
You could also consider looking into whether the tolerances for declaring convergence are stricter than you need. I'm not clear how to alter them from the `lme4` documentation though.
| null | CC BY-SA 2.5 | null | 2011-02-06T15:16:20.523 | 2011-02-06T15:16:20.523 | null | null | 449 | null |
6937 | 2 | null | 6907 | 20 | null | The Kullback-Leibler divergence $\kappa(P|Q)$ of $P$ with respect to $Q$ is infinite when $P$ is not absolutely continuous with respect to $Q$, that is, when there exists a measurable set $A$ such that $Q(A)=0$ and $P(A)\ne0$. Furthermore the KL divergence is not symmetric, in the sense that in general $\kappa(P\mid Q)\ne\kappa(Q\mid P)$. Recall that
$$
\kappa(P\mid Q)=\int P\log\left(\frac{P}{Q}\right).
$$
A way out of both these drawbacks, still based on KL divergence, is to introduce the midpoint
$$R=\tfrac12(P+Q).
$$
Thus $R$ is a probability measure, and $P$ and $Q$ are always absolutely continuous with respect to $R$. Hence one can consider a "distance" between $P$ and $Q$, still based on KL divergence but using $R$, defined as
$$
\eta(P,Q)=\kappa(P\mid R)+\kappa(Q\mid R).
$$
Then $\eta(P,Q)$ is nonnegative and finite for every $P$ and $Q$, $\eta$ is symmetric in the sense that $\eta(P,Q)=\eta(Q,P)$ for every $P$ and $Q$, and $\eta(P,Q)=0$ iff $P=Q$.
An equivalent formulation is
$$
\eta(P,Q)=2\log(2)+\int \left(P\log(P)+Q\log(Q)-(P+Q)\log(P+Q)\right).
$$
Addendum 1 The introduction of the midpoint of $P$ and $Q$ is not arbitrary in the sense that
$$
\eta(P,Q)=\min [\kappa(P\mid \cdot)+\kappa(Q\mid \cdot)],
$$
where the minimum is over the set of probability measures.
Addendum 2 @cardinal remarks that $\eta$ is also an $f$-divergence, for the convex function
$$
f(x)=x\log(x)-(1+x)\log(1+x)+(1+x)\log(2).
$$
| null | CC BY-SA 3.0 | null | 2011-02-06T15:47:14.310 | 2015-08-14T16:37:50.697 | 2015-08-14T16:37:50.697 | 2592 | 2592 | null |
6938 | 2 | null | 6913 | 23 | null | The answer you get depends on the question you ask.
Mean and median answer different questions. So they give different answers. It's not that one is "fairer" than another. Medians are often used with highly skewed data (such as income). But, even there, sometimes the mean is best. And sometimes you don't want ANY measure of central tendency.
In addition, whenever you give a measure of central tendency, you should give some measure of spread. The most common pairings are mean-standard deviation and median-interquartile range. In these data, giving just a median of 5 is, I think, misleading, or, at least, uninformative. The median would also be 5 if every single vote was a 5.
| null | CC BY-SA 2.5 | null | 2011-02-06T15:51:31.447 | 2011-02-06T15:51:31.447 | null | null | 686 | null |
6939 | 1 | 6942 | null | 2 | 2346 | Suppose I have the following simplified distribution:
```
time | value
1 | 2
2 | 4
3 | 8
4 | 16
1 | 1
2 | 3
3 | 9
4 | 27
1 | 40
2 | 20
3 | 10
4 | 5
1 | 12
2 | 1
3 | 99
4 | 23423
```
These are all part of the same dataset (so one x can have multiple values here, eg. time = 1 corresponds with value = 2,1,40,12). I separated them because the first 3 have an obvious pattern within their slope (2, 3 and 0.5) and the last does not have a pattern in its slope. Time is in days, and the values represent a quantity.
Now, how do I use SPSS to find the elements of a distribution that together form a pattern (any kind of pattern) in their slope? And how can I make it also include elements that almost follow this pattern, but have slight variations (a variation that I can set)?
Any feedback is appreciated. Thanks.
Update:
It looks like SPSS is not the best tool for the task. I am interested in the R language however.
If anyone could recommend any books regarding this field of pattern recognition in R, that would be great.
| How to detect patterns between fields of a distribution in SPSS? | CC BY-SA 2.5 | null | 2011-02-06T17:33:19.747 | 2011-02-06T18:42:53.183 | 2011-02-06T18:16:45.200 | 3040 | 3040 | [
"distributions",
"spss",
"pattern-recognition"
]
|
6940 | 1 | 6953 | null | 15 | 10831 | I hope this isn't a silly question. Let's say I have some arbitrary continuous distribution. I also have a statistic, and I'd like to use this arbitrary distribution to get a p-value for this statistic.
I realize that in R it's easy to do this as long as your distribution fits one of the built-in ones, like if it's normal. But is there an easy way to do this with any given distribution, without making that kind of assumption?
| Calculating p-value from an arbitrary distribution | CC BY-SA 2.5 | null | 2011-02-06T18:13:50.133 | 2011-02-07T10:49:56.943 | 2011-02-07T10:49:56.943 | null | 52 | [
"r",
"distributions",
"p-value"
]
|
6941 | 2 | null | 6940 | 10 | null | Yes, it is possible to use any arbitrary distribution to get a p-value for any statistic. Theoretically and practically you can calculate (one-sided) p-value by this formula.
$$\mathrm{p-value} = P[T > T_{observed} | H_0 \quad \mathrm{holds}]$$
Where $T$ is the test-statistic of interest and $T_{observed}$ is the value that you have calculated for the observed data.
If you know the theoretical distribution of $T$ under $H_0$, great! Otherwise, you can use MCMC simulation to generate from the null distribution of $T$ and calculate the Monte Carlo integral to obtain p-value. Numerical integration techniques will also work in case you don't want to use (may be) easier Monte Carlo methods (especially in R; in Mathematica integration may be easier, but I have no experience using it)
The only assumption you are making here is -- you know the null distribution of T (which may not be in the standard R random number generator formats). That's it -- as long as you know the null distribution, p-value can be calculated.
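A minimal Monte Carlo sketch in R (the null sampler here is only a stand-in for whatever your actual null distribution is):
```
set.seed(1)
r_T_null <- function(n) rgamma(n, shape = 2, rate = 1)  # pretend this samples T under H0
T_obs    <- 5.3                                         # your observed statistic
T_sim    <- r_T_null(1e5)
mean(T_sim >= T_obs)                                    # one-sided Monte Carlo p-value
```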
| null | CC BY-SA 2.5 | null | 2011-02-06T18:33:44.633 | 2011-02-06T18:39:15.453 | 2011-02-06T18:39:15.453 | 1307 | 1307 | null |
6942 | 2 | null | 6939 | 1 | null | I think I understand what you are after, but I might be wrong - just to clarify things in advance :)
If you want to find the "distribution" of your data, then [R](http://www.r-project.org/) can do this easily; I have no idea about SPSS. Though I am not sure you are really after distributions, as those would only show the probabilities of certain values in your data series without dealing with the order. For distribution fitting, `fitdistr` from [MASS](http://cran.r-project.org/web/packages/MASS/index.html) and `fitdist` from the [fitdistrplus](http://cran.r-project.org/web/packages/fitdistrplus/index.html) package will be your friends. Also, [Vito Ricci's paper](http://cran.r-project.org/doc/contrib/Ricci-distributions-en.pdf) on the issue, available on CRAN, is worth reading.
A small example assuming you have a data table (`data`) with your data, and would like to fit the first columns data to normal distribution:
```
library(fitdistrplus)
fitdist(data[,1],"norm")
```
If you would like to get the "slopes" of the pattern of the data in a row, as I suppose you are really after, then you need to set up some linear models based on your data. I am sure SPSS can do the trick also, but in R look for the `lm` and `glm` functions. See the manual of [lm](http://stat.ethz.ch/R-manual/R-devel/library/stats/html/lm.html) for fitting linear models.
A small example:
```
# make up a demo dataset from day 1 to day 10 with 10 values
data <- data.frame(time=1:10, data=c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14))
```
This would look like:
```
> data
time data
1 1 4.17
2 2 5.58
3 3 5.18
4 4 6.11
5 5 4.50
6 6 4.61
7 7 5.17
8 8 4.53
9 9 5.33
10 10 5.14
```
And fit a simple model on it:
```
> lm(data)
Call:
lm(formula = data)
Coefficients:
(Intercept) data
4.6613 0.1667
```
Which shows the slope being around 0.16667.
| null | CC BY-SA 2.5 | null | 2011-02-06T18:42:53.183 | 2011-02-06T18:42:53.183 | null | null | 2714 | null |
6943 | 1 | 6944 | null | 6 | 34597 | I am unable to get the `zscore` function to work. When I try to call it, I get the following error:
```
could not find function zscore
```
The online documentation states that it is in a basic R library.
Searching online reveals that other people had the same problem some time ago - though on macs. Does anyone know how to fix this problem or use other alternative function?
I saw somewhere they were using `scale()`. This works, but I am not sure whether it is the correct substitute for `zscore`.
| zscore function in R | CC BY-SA 2.5 | null | 2011-02-06T23:31:42.537 | 2016-09-19T20:18:39.417 | 2011-02-06T23:47:13.467 | 8 | 3008 | [
"r"
]
|
6944 | 2 | null | 6943 | 12 | null | As the `zscore` function you are looking for can be found in the R.basic package made by [Henrik Bengtsson](http://www.braju.com/R/), which cannot be found on CRAN. To install use:
```
install.packages(c("R.basic"), contriburl="http://www.braju.com/R/repos/")
```
See [this similar topic](https://stats.stackexchange.com/q/3890/2714) for more details.
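Regarding the `scale()` question: for a numeric vector, base R's `scale()` with its default arguments computes the usual (x - mean) / sd standardization (with the n - 1 denominator for the standard deviation), although I have not checked whether that matches R.basic's `zscore` exactly:
```
x <- c(2, 4, 4, 4, 5, 5, 7, 9)
as.numeric(scale(x))      # centred and scaled by the sample standard deviation
(x - mean(x)) / sd(x)     # the same values computed by hand
```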
| null | CC BY-SA 2.5 | null | 2011-02-06T23:43:55.260 | 2011-02-06T23:43:55.260 | 2017-04-13T12:44:40.883 | -1 | 2714 | null |
6945 | 1 | 6947 | null | 8 | 468 | In a lottery, 1/10 of the 50 000 000 tickets give a prize.
What is the minimum number of tickets one should buy to have at least a 50% chance of winning?
I would be very glad if you could explain your methodology in solving this. Please consider this as homework if you will.
| Minimum tickets required for specified probability of winning lottery | CC BY-SA 2.5 | null | 2011-02-07T00:15:04.510 | 2017-09-15T23:47:27.480 | 2011-02-12T05:55:21.817 | 183 | 1833 | [
"probability",
"self-study"
]
|
6946 | 1 | 6948 | null | 7 | 2793 | During an article revision the authors found, on average, 1.6 errors per page. Assuming the errors occur randomly following a Poisson process, what is the probability of finding 5 errors in 3 consecutive pages?
Please explain your methodology, as the main purpose of the question is not getting the answer but the "how to". Please consider this as homework if you will.
Thanks!
| How do you solve a Poisson process problem | CC BY-SA 2.5 | null | 2011-02-07T00:23:26.160 | 2011-02-12T13:25:08.530 | 2011-02-07T03:27:36.310 | 1833 | 1833 | [
"probability",
"self-study",
"poisson-distribution"
]
|
6947 | 2 | null | 6945 | 15 | null | I should really ask what your thoughts are so far on this. This problem is very closely related to the "birthday problem". The easiest way to do these problems is to count the possible ways of either winning or losing. Usually one is much easier to do than the other, so the key is to find the right one. Before we get into the actual calculation, let's start with some heuristics.
---
Heuristics: Let $n$ be the total number of tickets and $m$ be the number of winning tickets. In this case $n = 50\,000\,000$ and $m = 5\,000\,000$. When $n$ is very large then purchasing multiple distinguishable tickets is almost the same as sampling with replacement from the population of tickets.
Let's suppose that, instead of having to purchase $k$ separate tickets, we purchased a ticket, looked to see if it was a winner and then returned it to the lottery. We then repeat this procedure where each such draw is independent from all of the previous ones. Then the probability of winning after purchasing $k$ tickets is just
$$
\Pr( \text{we won} \mid \text{purchased $k$ tickets} ) = 1 - \left(\frac{n-m}{n}\right)^k .
$$
For our case then, the right-hand side is $1 - (9/10)^k$ and so we set this equal to $1/2$ and solve for $k$ in order to get the number of tickets.
But, we're actually sampling without replacement. Below, we'll go through the development, with the point being that the heuristics above are more than good enough for the present problem and many similar ones.
---
There are $50\,000\,000$ tickets. Of these $5\,000\,000$ are winning ones and $45\,000\,000$ are losing ones. We seek
$$
\Pr(\text{we win}) \geq 1/2 \>,
$$
or, equivalently,
$$
\Pr(\text{we lose}) \leq 1/2 .
$$
The probability that we lose is simply the probability that we hold none of the winning tickets. Let $k$ be the number of tickets we purchase. If $k = 1$, then $\Pr(\text{we lose}) = 45\,000\,000 / 50\,000\,000 = 9/10 \geq 1/2$, so that won't do. If we choose two tickets then there are $45\,000\,000 \cdot 44\,999\,999$ ways to choose two losing tickets and there are $50\,000\,000 \cdot 49\,999\,999$ ways to choose any two tickets. So,
$$
\Pr( \text{we lose} ) = \frac{45\,000\,000 \cdot 44\,999\,999}{50\,000\,000 \cdot 49\,999\,999} \approx 0.9^2 = 0.81.
$$
Let's generalize now. Let $m$ be the number of winning tickets and $n$ the total number of tickets, as before. Then, if we purchase $k$ tickets,
$$
\Pr(\text{we lose}) = \frac{(n-m) (n-m-1) \cdots (n-m-k+1)}{n (n-1) \cdots (n-k+1)} .
$$
It would be tedious, but we can now just start plugging in values for $k$ until we get the probability below $1/2$. We can use a "trick", though, to get close to the answer, especially when $n$ is very large and $m$ is relatively small with respect to $n$.
Note that $\frac{n-m-k}{n-k} \leq \frac{n-m}{n}$ for all $k < m < n$. Hence
$$
\Pr(\text{we lose}) = \frac{(n-m) (n-m-1) \cdots (n-m-k+1)}{n (n-1) \cdots (n-k+1)} \leq \left(1 - \frac{m}{n} \right)^k ,
$$
and so we need to solve the equation $\left(1 - \frac{m}{n} \right)^k = \frac{1}{2}$ for $k$. But,
$$
k = \frac{\log \frac{1}{2}}{\log (1 - \frac{m}{n})} \>,
$$
and so when $m/n = 1/10$, we get $k \approx 6.58$.
So, $k = 7$ tickets ought to do the trick. If you plug it in to the exact equation above, you'll get that
$$
\Pr( \text{we win} \mid \text{k=7} ) \approx 52.2\%
$$
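As a quick numerical check (not part of the derivation above), the exact without-replacement probability is hypergeometric and can be computed in R:
```
# P(at least one winning ticket among 7 drawn without replacement)
1 - phyper(0, m = 5e6, n = 45e6, k = 7)   # about 0.522
# the with-replacement heuristic for comparison
1 - 0.9^7                                 # also about 0.522
```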
| null | CC BY-SA 2.5 | null | 2011-02-07T01:43:02.600 | 2011-02-07T03:21:01.393 | 2011-02-07T03:21:01.393 | 2970 | 2970 | null |
6948 | 2 | null | 6946 | 10 | null | The two most important characteristics of a Poisson process with rate $\lambda$ are
- For any interval $(s, t)$, the number of arrivals within the interval follows a Poisson distribution with mean $\lambda (t-s)$.
- The number of arrivals in disjoint intervals are independent of one another. So, if $s_1 < t_1 < s_2 < t_2$, then the number of arrivals in $(s_1, t_1)$ and $(s_2, t_2)$ are independent of one another (and have means of $\lambda (t_1 - s_1)$ and $\lambda (t_2 - s_2)$, respectively).
For this problem let "time" be denoted in "pages". And so the Poisson process has rate $\lambda = 1.6 \text{ errors/page}$. Suppose we are interested in the probability that there are $x$ errors in three (prespecified!) pages. Call the random variable corresponding to the number of errors $X$. Then, $X$ has a Poisson distribution with mean $\lambda_3 = 3 \lambda = 3 \cdot 1.6 = 4.8$. And so
$$
\Pr(\text{$x$ errors in three pages}) = \Pr(X = x) = \frac{e^{-\lambda_3} \lambda_3^x}{x!},
$$
so, for $x = 5$, we get
$$
\Pr(X = 5) = \frac{e^{-\lambda_3} \lambda_3^5}{5!} = \frac{e^{-4.8} 4.8^5}{5!} \approx 0.175
$$
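For reference, the same number can be evaluated directly in R (a quick check, not part of the derivation):
```
dpois(5, lambda = 4.8)   # about 0.175
```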
| null | CC BY-SA 2.5 | null | 2011-02-07T03:06:02.280 | 2011-02-07T03:42:29.363 | 2011-02-07T03:42:29.363 | 2970 | 2970 | null |
6949 | 1 | null | null | 6 | 478 | I am looking for a book/case study etc on how to build a fraud detection model at the transaction level. Something applied rather than theoretical would be really helpful.
| Any good reference books/material to help me build a txn level fraud detection model? | CC BY-SA 2.5 | null | 2011-02-07T03:13:07.937 | 2017-11-03T15:37:02.097 | 2017-11-03T15:37:02.097 | 11887 | null | [
"machine-learning",
"references",
"anomaly-detection",
"isotonic",
"fraud-detection"
]
|
6950 | 1 | 6952 | null | 4 | 3662 | I have a large number of variables (24) with which to predict a Y/N value, and I would like help writing a procedure that automatically tries the results of the different factor-selection methods to see how good the regression turns out to be; of course I also want to save the best model for later use.
| How to do a logistic regression with the results of different factor analysis methods | CC BY-SA 2.5 | null | 2011-02-07T03:45:54.400 | 2015-12-30T22:28:22.197 | null | null | 1808 | [
"r",
"logistic",
"factor-analysis"
]
|
6951 | 2 | null | 6946 | 3 | null | If I were his instructor I'd prefer the following explanation, which obviously is equivalent to that given by @cardinal.
Let $N_t$ be the Poisson process counting the number of errors on consecutive pages, with rate $\lambda=1.6$ errors per page. One supposes that we observe the process
at integer times $t$ (here $t$ is identified with the number of pages).
Because $P(N_t=k)=e^{-\lambda t} (\lambda t)^k/k!$, we have $P(N_3=5)=e^{-(3\cdot 1.6)}(4.8)^5/5!$.
| null | CC BY-SA 2.5 | null | 2011-02-07T07:36:25.897 | 2011-02-12T13:25:08.530 | 2011-02-12T13:25:08.530 | 2376 | 2376 | null |
6952 | 2 | null | 6950 | 4 | null | First of all, consider whether factor analysis is the right way to do feature extraction. I would suggest using principal component analysis for dimension reduction first and then using the extracted components as predictor variables. Depending on your setting, you should also use an appropriate cross-validation scheme to assess your predictions.
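A minimal sketch of this idea in R, assuming a data frame `df` holding the 24 numeric predictors plus a binary column `y` coded as 0/1 or a factor (the names and the number of retained components are illustrative):
```
pcs <- prcomp(df[ , setdiff(names(df), "y")], scale. = TRUE)  # PCA on the predictors
dat <- data.frame(y = df$y, pcs$x[ , 1:5])                    # keep, say, 5 components
fit <- glm(y ~ ., data = dat, family = binomial)              # logistic regression
summary(fit)
```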
| null | CC BY-SA 2.5 | null | 2011-02-07T10:24:37.227 | 2011-02-07T10:24:37.227 | null | null | 609 | null |
6953 | 2 | null | 6940 | 12 | null | If you have a [cumulative distribution function](http://en.wikipedia.org/wiki/Cumulative_distribution_function) $F$, then calculating the $p$-value for a given statistic $T$ is simply $1-F(T)$. This is straightforward in R. If you have a [probability density function](http://en.wikipedia.org/wiki/Density_function) on the other hand, then $F(x)=\int_{-\infty}^xp(t)dt$. You can find this integral analytically or numerically. In R this will look like this:
```
dF <- function(x)dnorm(x)
pF <- function(q)integrate(dF,-Inf,q)$value
> pF(1)
[1] 0.8413448
> pnorm(1)
[1] 0.8413447
```
You can tune `integrate` for better accuracy. This of course may fail in specific cases, when the integral does not behave well, but it should work for the majority of density functions.
You can of course pass parameters into `pF`, if you have several parameter values to try out and do not want to redefine `dF` each time.
```
dF <- function(x,mean=0,sd=1)dnorm(x,mean=mean,sd=sd)
pF <- function(q,mean=0,sd=1)integrate(dF,-Inf,q,mean=mean,sd=sd)$value
> pF(1,1,1)
[1] 0.5
> pnorm(1,1,1)
[1] 0.5
```
Of course you can also use Monte-Carlo methods as detailed by @suncoolsu, this would be just another numerical method for integration.
| null | CC BY-SA 2.5 | null | 2011-02-07T10:27:29.180 | 2011-02-07T10:27:29.180 | null | null | 2116 | null |
6954 | 1 | 6959 | null | 9 | 3072 | In R there is a function [nlm()](http://stat.ethz.ch/R-manual/R-devel/library/stats/html/nlm.html) which carries out a minimization of a function f using the Newton-Raphson algorithm. In particular, that function outputs the value of the variable code defined as follows:
>
code an integer indicating why the optimization process terminated.
1: relative gradient is close to zero, current iterate is probably solution.
2: successive iterates within tolerance, current iterate is probably solution.
3: last global step failed to locate a point lower than estimate. Either estimate is an approximate local minimum of the function or steptol is too small.
4: iteration limit exceeded.
5: maximum step size stepmax exceeded five consecutive times. Either the function is unbounded below, becomes asymptotic to a finite value from above in some direction or stepmax is too small.
Can someone explain to me (maybe using a simple illustration with a function of only one variable) what situations 1-5 correspond to?
For example, situation 1 might correspond to the following picture:

Thank you in advance!
| The code variable in the nlm() function | CC BY-SA 2.5 | null | 2011-02-07T10:49:26.900 | 2011-02-26T15:47:54.760 | 2020-06-11T14:32:37.003 | -1 | 3019 | [
"r",
"extreme-value"
]
|
6955 | 1 | null | null | 1 | 169 | I create SAS output in .rtf format and the File Download warning pops up each time. I need to turn this off for all files that I create. How?
| Getting rid of file download warning in SAS | CC BY-SA 2.5 | null | 2011-02-07T11:25:49.097 | 2011-02-18T22:56:20.907 | 2011-02-07T15:16:13.880 | null | 3085 | [
"sas"
]
|
6956 | 1 | 6961 | null | 1 | 1155 | I was looking at how to figure out the optimal lag order for an ADF test when I came across [this](http://homepages.strath.ac.uk/~hbs96127/mlecture12.ppt) PPT presentation. There are two examples of an ADF-test model reduction, one on p23 and one on p36.
I have two questions:
- How do I come up with the F numbers of the reduction? For example, on p23 it says "F(1,487) = 0.92201 [0.3374]"? I understand what an F number is, but what calculation lies behind the 0.92201 number (and 0.3374)?
- How do I interpret the numbers? For example, why is model 4 the preferred one on p23?
I'm not a complete noob, but I certainly wouldn't mind a nicely laid out pedagogical answer. Thanks!
| How to interpret ADF-test optimal lag model reduction? | CC BY-SA 2.5 | null | 2011-02-07T12:01:37.240 | 2011-02-07T14:46:03.480 | null | null | 3086 | [
"optimization",
"hypothesis-testing",
"cointegration"
]
|
6957 | 2 | null | 6653 | 6 | null | I would start by simulating data from a toy model. Something like:
```
n.games <- 1000
n.slices <- 90
score.away <- score.home <- matrix(0, ncol=n.slices, nrow=n.games)
for (j in 2:n.slices) {
score.home[ ,j] <- score.home[ , j-1] + (runif(n.games)>.97)
score.away[ ,j] <- score.away[ , j-1] + (runif(n.games)>.98)
}
```
Now we have something to play with. You could also use the raw data, but I find simulating the data very helpful to think things through.
Next I would just plot the data, that is, plot time of the game versus lead home, with the color scale corresponding to the observed probability of winning.
```
score.dif <- score.home-score.away
windf <- data.frame(game=1:n.games, win=score.home[ , n.slices] > score.away[, n.slices])
library(reshape)
library(plyr)     # ddply() below comes from plyr
library(ggplot2)
dnow <- melt(score.dif)
names(dnow) <- c('game', 'time', 'dif')
dnow <- merge(dnow, windf)
res <- ddply(dnow, c('time', 'dif'), function(x) c(pwin=sum(x$win)/nrow(x)))
qplot(time, dif, fill=pwin, data=res, geom='tile') + scale_color_gradient2()
```
This will help you find the support of your data, and give you a raw idea of what the probabilities look like.

| null | CC BY-SA 2.5 | null | 2011-02-07T12:15:54.877 | 2011-02-07T13:08:39.737 | 2011-02-07T13:08:39.737 | 375 | 375 | null |
6958 | 1 | 6960 | null | 14 | 16604 | I have fitted the ARIMA models to the original time series, and the best model is ARIMA(1,1,0). Now I want to simulate the series from that model. I wrote the simple AR(1) model, but I couldn't understand how to adjust the difference within the model ARI(1,1,0).
The following R code for AR(1) series is:
```
phi= -0.7048
z=rep(0,100)
e=rnorm(n=100,0,0.345)
cons=2.1
z[1]=4.1
for (i in 2:100) z[i]=cons+phi*z[i-1]+e[i]
plot(ts(z))
```
How do I include the difference term of ARI(1,1) in the above code?
Can anyone help me in this regard?
| Simulation of ARIMA (1,1,0) series | CC BY-SA 2.5 | null | 2011-02-07T12:26:51.370 | 2022-07-20T17:45:52.827 | 2011-03-30T03:13:24.613 | 159 | 3084 | [
"r",
"time-series",
"arima"
]
|
6959 | 2 | null | 6954 | 8 | null | These situations are understood more clearly if you keep in mind what minimisation or maximisation really is and how optimisation works.
Suppose we have a function $f$ which has a local minimum at $x_0$. Optimisation methods try to construct a sequence $x_i$ which converges to $x_0$. It can be shown that, in theory, the constructed sequence converges to the point of local minimum for some class of functions $f$.
Obtaining the next candidate at iteration $i$ can be a lengthy process, so it is usual that all algorithms limit the number of iterations. This corresponds to situation 4.
Now, for each $x$ close to $x_0$ we have $f(x)>f(x_0)$. So if $f(x_i)>f(x_{i-1})$, this is an indication that we have reached the minimum. This corresponds to situation 3.
If the function $f$ has a derivative at $x_0$, then necessarily $\nabla f(x_0)=0$. The Newton-Raphson method calculates the gradient at each step, so if $\nabla f(x_i)\approx 0$, then $x_i$ is probably a solution. This corresponds to situation 1.
Every convergent sequence of real vectors is a [Cauchy](http://en.wikipedia.org/wiki/Cauchy_sequence) sequence and vice versa, roughly meaning that if $x_i$ is close to $x_0$, then $x_i$ is close to $x_{i+1}$ and vice versa, where $i$ is the iteration number. So if $|x_i-x_{i-1}|<\varepsilon$, and we know that in theory $x_i$ converges to $x_0$, then we should be close to the minimum point. This corresponds to situation 2.
Converging sequences have the property that they contract, i.e. close to convergence all the remaining elements of the sequence are contained in a small area. So if a sequence which in theory should converge starts to take large steps, this is an indication that it is probably not converging. This corresponds to situation 5.
Note: strict mathematical definitions were left out intentionally.
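To see the codes in practice, here is a tiny illustration (the exact code returned can depend on the starting value and tolerances):
```
f <- function(x) (x - 2)^2 + 1   # smooth function with its minimum at x = 2
res <- nlm(f, p = 10)
res$code       # typically 1 or 2 here: gradient near zero, or iterates within tolerance
res$estimate   # close to 2
```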
| null | CC BY-SA 2.5 | null | 2011-02-07T12:51:40.707 | 2011-02-07T12:51:40.707 | null | null | 2116 | null |
6960 | 2 | null | 6958 | 22 | null | If you want to simulate an ARIMA process you can use `arima.sim` in R; there is no need to do it by hand. This will generate the series you want.
```
e <- rnorm(100,0,0.345)
arima.sim(n=100,model=list(ar=-0.7048,order=c(1,1,0)),start.innov=4.1,n.start=1,innov=2.1+e)
```
You can look at the code of how this is achieved by typing `arima.sim` in R command line.
Alternatively, if you want to do it yourself, the function you are probably looking for is `diffinv`. It computes the inverse of lagged differences.
For recursive sequences `R` has a nice function `filter`. So instead of using the loop
```
z <- rep(NA,100)
z[1] <- 4.1
for (i in 2:100) z[i]=cons+phi*z[i-1]+e[i]
```
you can write
```
filter(c(4.1,2.1+e),filter=-0.7048,method="recursive")
```
This will give the identical result to `arima.sim` example above:
```
diffinv(filter(c(4.1,2.1+e),filter=-0.7048,method="recursive")[-1])
```
| null | CC BY-SA 2.5 | null | 2011-02-07T13:32:42.747 | 2011-02-08T19:58:59.113 | 2011-02-08T19:58:59.113 | 2116 | 2116 | null |
6961 | 2 | null | 6956 | 2 | null | For page 23 you have simple [F statistics](http://en.wikipedia.org/wiki/F-test) for regression (scroll down to regression problems in the link for the formula). So the number 0.92201 comes from this calculation (in R):
```
> (963.09-961.27)/(961.27/487)
[1] 0.922051
```
The value in the brackets is the p-value of the F-statistic with 1 and 487 degrees of freedom:
```
> 1-pf((963.09-961.27)/(961.27/487),1,487)
[1] 0.3374135
```
Now all these statistics test whether the additional parameters in the model are zero or not. Model 4 comes up as the best because, compared to it, the hypothesis that the additional parameters in models 1 to 3 are zero is not rejected, while the hypothesis that the additional parameter in model 4 is zero (the further reduction to model 5) is rejected.
Note that under the null hypothesis of a unit root, the asymptotic statistics on the coefficients of the lagged differences are the same as in the usual regression. The point of including lagged differences is to control for serial correlation; the question is how many to include. This example illustrates how you can choose that number.
In my opinion, though, it is a rather futile exercise. Why recreate the ADF test by hand when there are readily available packages to do it for you? The goal of the ADF test is to test whether we have a unit root or not. Based on that information some kind of model is built for the variable tested, which in practice never has the form used in the ADF test. This of course is my personal opinion. As always, your mileage may vary.
| null | CC BY-SA 2.5 | null | 2011-02-07T13:56:16.377 | 2011-02-07T14:46:03.480 | 2011-02-07T14:46:03.480 | 2116 | 2116 | null |
6962 | 2 | null | 6955 | 2 | null | It's under the Recycle Bin's properties. Just right-click Recycle Bin,
properties, and you can turn off the confirmation there.
| null | CC BY-SA 2.5 | null | 2011-02-07T14:48:36.677 | 2011-02-07T14:48:36.677 | null | null | 2310 | null |
6963 | 1 | 7617 | null | 17 | 1442 | It is easy to produce a random variable with Dirichlet distribution using Gamma variables with the same scale parameter. If:
$ X_i \sim \text{Gamma}(\alpha_i, \beta) $
Then:
$ \left(\frac{X_1}{\sum_j X_j},\; \ldots\; , \frac{X_n}{\sum_j X_j}\right) \sim \text{Dirichlet}(\alpha_1,\;\ldots\;,\alpha_n) $
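(For illustration, this standard construction takes one line of R; `alpha` is an arbitrary shape vector and the common rate is set to 1:)
```
alpha <- c(2, 3, 5)
x <- rgamma(length(alpha), shape = alpha, rate = 1)
x / sum(x)   # one draw from Dirichlet(alpha)
```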
Problem
What happens if the scale parameters are not equal?
$ X_i \sim \text{Gamma}(\alpha_i, \beta_i) $
Then what is the distribution of this variable?
$ \left(\frac{X_1}{\sum_j X_j},\; \ldots\; , \frac{X_n}{\sum_j X_j}\right) \sim \; ? $
For me it would be sufficient to know the expected value of this distribution.
I need an approximate closed-form algebraic formula that can be evaluated very quickly by a computer.
Let's say an approximation with accuracy of 0.01 is sufficient.
You can assume that:
$ \alpha_i, \beta_i \in \mathbb{N} $
Note In short, the task is to find an approximation of this integral:
$ f(\vec{\alpha}, \vec{\beta}) = \int_{\mathbb{R}^n_+} \;\frac{x_1}{\sum_j x_j} \cdot \prod_j \frac{\beta_j^{\alpha_j}}{\Gamma(\alpha_j)} x_j^{\alpha_j - 1} e^{-\beta_j x_j} \;\; dx_1\ldots dx_n$
| What is the expected value of modified Dirichlet distribution? (integration problem) | CC BY-SA 2.5 | null | 2011-02-07T15:53:15.410 | 2011-03-08T07:45:43.053 | 2011-02-07T17:13:02.550 | 217 | 217 | [
"expected-value",
"dirichlet-distribution",
"gamma-distribution",
"integral"
]
|
6964 | 1 | null | null | 1 | 2042 | I just ran a vendor-supplied program that converts data from their proprietary binary format to what they call a "Matlab format". There are two resulting files from the conversion, a "filename.dat" and a "filename.m".
The "filename.m" file seems to have column heading information in it. The second "filename.dat" file is >13Gigs in size and I can't load it into anything at the moment to inspect it.
How do I go about loading this "Matlab format" data into R?
| How to pull data formatted for Matlab into R? | CC BY-SA 2.5 | null | 2011-02-07T17:16:09.403 | 2012-05-24T21:21:26.637 | 2011-02-07T18:11:40.303 | 8 | 2768 | [
"r",
"matlab"
]
|
6965 | 2 | null | 6964 | 3 | null | This doesn't sound like "Matlab format" (which would be a .mat file).
Rather, the .m-file is a Matlab function or script that is most likely used to read the data in the .dat file.
Have a look at the Matlab [documentation](http://www.mathworks.com/help/techdoc/) to find out how the .m file reads the data, so that you can port the code to R.
| null | CC BY-SA 2.5 | null | 2011-02-07T17:26:07.803 | 2011-02-07T17:26:07.803 | null | null | 198 | null |
6966 | 1 | null | null | 57 | 6424 | Why continue to teach and use hypothesis testing (with all its difficult concepts, and which is the source of some of the most common statistical sins) for problems where there is an interval estimator (confidence, bootstrap, credibility or whatever)? What is the best explanation (if any) to be given to students? Only tradition? The views will be very welcome.
| Why continue to teach and use hypothesis testing (when confidence intervals are available)? | CC BY-SA 3.0 | null | 2011-02-07T18:05:50.610 | 2017-01-26T11:51:34.617 | 2017-01-26T11:51:34.617 | 28666 | 523 | [
"hypothesis-testing",
"confidence-interval",
"teaching"
]
|
6967 | 1 | 6973 | null | 2 | 789 | /edit: To clarify: The mtable function from the [memisc](http://cran.r-project.org/web/packages/memisc/index.html) package does exactly what I need, but unfortunately does not work with arima models.
Similar to [this question](https://stats.stackexchange.com/questions/6856/aggregating-results-from-linear-model-runs-r): I have multiple Arima models, some of which I've also fit with dependent variables. I'd like an easy way to make a table/graph of the coefficients in each model, as well as summary statistics about each model.
Here is some example code:
```
sim <- arima.sim(list(order = c(1,1,0), ar = 0.7), n = 200)
ar1<-arima(sim,order=c(1,1,0))
ar2<-arima(sim,order=c(2,1,0))
ar3<-arima(sim,order=c(3,1,0))
ar4<-arima(sim,order=c(2,2,1))
#Try mtable
library(memisc)
mtable("Model 1"=ar1,"Model 2"=ar2,"Model 3"=ar3,"Model 4"=ar4)
#>Error in UseMethod("getSummary") :
# no applicable method for 'getSummary' applied to an object of class "Arima"
#Try apsrtable
library(apsrtable)
apsrtable("Model 1"=ar1,"Model 2"=ar2,"Model 3"=ar3,"Model 4"=ar4)
#>Error in est/x$se : non-numeric argument to binary operator
```
| Aggregating results from Arima runs R | CC BY-SA 2.5 | null | 2011-02-07T18:20:52.983 | 2011-02-08T00:26:12.043 | 2017-04-13T12:44:25.283 | -1 | 2817 | [
"r",
"time-series",
"regression",
"arima"
]
|
6968 | 2 | null | 6966 | 29 | null | I teach hypothesis tests for a number of reasons. One is historical: they'll have to understand a large body of prior research that they read, and that requires understanding the hypothesis-testing point of view. A second is that, even in modern times, it's still used by some researchers, often implicitly, when performing other kinds of statistical analyses.
But when I teach it, I teach it in the framework of model building, that these assumptions and estimates are parts of building models. That way it's relatively easy to switch to comparing more complex and theoretically interesting models. Research more often pits theories against one another rather than a theory versus nothing.
The sins of hypothesis testing are not inherent in the math or in the proper use of those calculations. Where they primarily lie is in over-reliance and misinterpretation. If a vast majority of naïve researchers exclusively used interval estimation with no recognition of any of the relationships to these things we call hypotheses, we might call that a sin.
| null | CC BY-SA 3.0 | null | 2011-02-07T18:24:00.687 | 2014-09-29T00:52:03.693 | 2014-09-29T00:52:03.693 | 601 | 601 | null |
6969 | 2 | null | 6706 | 2 | null | You could also look at [Wessa P., (2008), Maximum-likelihood Exponential Distribution Fitting](http://www.wessa.net/rwasp_fitdistrexp.wasp), which has a nice, documented online execution of the problem with the R source code attached to the actual plot.
| null | CC BY-SA 2.5 | null | 2011-02-07T18:28:04.613 | 2011-02-07T18:28:04.613 | null | null | 2714 | null |
6971 | 1 | 6984 | null | 3 | 199 | In order to test the Matrix package in R, I ran the following piece of code:
```
library(Matrix)
p <- 50; n <- 500
a0 <- sample(1:(n*p), n*p*0.1)
d1 <- (a0-1) %% n
d2 <- floor((a0-d1)/n)
d0 <- cbind(d1+1, d2+1)
x0 <- Matrix(0, n, p, forceCheck=TRUE)
x0[d0] <- rnorm(n*p*0.1, 10, 1)
x1 <- matrix(rnorm(n*p), n, p)
fx01 <- function(ll, x0) tcrossprod(x0, rnorm(ncol(x0)))
fx02 <- function(ll, x1) crossprod(t(x1), rnorm(ncol(x1)))
system.time(lapply(1:1000, fx01, x0=x0))
system.time(lapply(1:1000, fx02, x1=x1))
```
Is it normal that the first function (which uses the sparse matrix) takes 4-5 times longer to run than the second (which does not)?
| Function timings in the Matrix package | CC BY-SA 2.5 | null | 2011-02-07T22:37:55.593 | 2011-02-08T08:59:04.167 | 2011-02-08T08:59:04.167 | 2116 | 603 | [
"r",
"optimization",
"matrix"
]
|
6972 | 1 | 6985 | null | 3 | 1583 | X is the weight of a baby when born (in grams). If the distribution of X is N(3315, 575) and Y is the number of babies with weight lower than 2719 in a random sample of 25, then P[Y ≤ 4] is about...?
I'm trying to figure out how to solve this. What kind of formula do I use? I suppose N is a normal distribution, that 3315 is the mean and that 575 is the variance. Please correct me if I'm wrong.
Thanks!
Edit: As stated in the comments, 575 should be the standard deviation. I'm not interested in just the answer, as someone suggested. If you teach me how I should approach this, that is more than enough.
| Normal distribution and weight of babies problem | CC BY-SA 2.5 | null | 2011-02-07T22:56:45.337 | 2017-09-10T08:31:15.333 | 2011-02-08T06:04:57.733 | 919 | 1833 | [
"probability",
"self-study",
"binomial-distribution",
"normal-distribution"
]
|
6973 | 2 | null | 6967 | 7 | null | You need to write a function called `getSummary.Arima` for use with the `memisc` package. Check out `getSummary.lm` for an example.
| null | CC BY-SA 2.5 | null | 2011-02-08T00:26:12.043 | 2011-02-08T00:26:12.043 | null | null | 159 | null |
6974 | 1 | 6977 | null | 5 | 569 | My problem is this:
There are 52 people divided into 5 groups: A, B, C, D, and E. From stage 1 to stage 2, they either stay in the same groups, or change groups. The following table shows how many people change from their stage 1 group to stage 2 group for all pairs.
```
Stage 1 Stage 2
Group A Group B Group C Group D Group E
---------------------------------------------------------
Group A 18 0 0 0 2
Group B 0 6 0 0 3
Group C 4 0 2 0 2
Group D 0 0 0 0 0
Group E 4 2 0 1 8
```
E.g. 18 people change from group A to group A (i.e. they retain their group); 2 + 3 + 2 + 0 = 7 people change from groups A, B, C, D to group E.
Now, my claims are these:
- 65% people (18 + 6 + 2 + 0 + 8 = 34 out of the total 52, the diagonal) do not change groups.
- 15% people (8 out of 52) change from some other group to group A.
- 13% people (7 out of 52) change from some other group to group E.
I want to attest these claims with the correct statistical test. Which one is the right one to use? I'm aware the numbers are small, so any test might be unreliable, but I'm curious to know what test it will be any way.
Any ideas?
| What is the right statistical test to use when attesting claims like "60% people switched over from group A to group B"? | CC BY-SA 2.5 | null | 2011-02-08T01:57:28.050 | 2011-02-09T04:24:16.253 | 2011-02-08T02:08:40.863 | 3097 | 3097 | [
"hypothesis-testing",
"classification",
"statistical-significance"
]
|
6975 | 1 | 7001 | null | 4 | 161 | I'm trying to figure out the best way of creating groups in a dataset with many dimensions. I have 1000 measurements, and each measurement has 40 dimensions. The measurements are of neighborhoods with physical and socio-economic data. For example I have measures like the number of people in each area, the population density, the number of buildings, average household size, average income, etc.
I'm quite new to this field, so I'm not sure what the best way of going about this is. A few thoughts I had were using CART (specifically the rpart package in R), principal component analysis, and k-means clustering. Ideally I would do all of this analysis in R.
I would like to end up with 6 to 8 "typologies" or representative groupings of the data. They do not necessarily have to include all 40 dimensions.
I would appreciate any advice on the best way of doing this.
| Creating groups with multi-dimensional data? | CC BY-SA 2.5 | null | 2011-02-08T02:23:56.847 | 2011-02-08T21:25:23.370 | 2011-02-08T16:21:20.827 | 919 | 2635 | [
"classification",
"pca",
"factor-analysis",
"cart"
]
|
6976 | 1 | null | null | 11 | 1076 | This question might sound very broad, but here is what I am looking for. I know there are many excellent books about econometric methods, and many excellent expository articles about econometric techniques. There are even excellent reproducible examples of econometrics, as described in this CrossValidated [question](https://stats.stackexchange.com/questions/1980/complete-substantive-examples-of-reproducible-research-using-r). In fact the examples in this question come very close to what I am looking for; the only thing missing in those examples is that they are only research reports, without any mention of how the results of the study fared in a real-world application.
What I am looking for are documented/reproducible examples of real-world applications of econometric theory that ideally have the following characteristics:
- They should be reproducible, i.e. contain detailed description of (and pointers to) data, econometric techniques, and code. Ideally the code would be in the R language.
- There should be detailed documentation showing that the technique succeeded in the real world, according to a well-quantified measure of success (e.g. "the technique helped increase revenues because it enabled improved forecasting of demand, and here are the numbers involved")
I am using the term econometric quite broadly here -- I mean any sort of data-mining, statistical data-analysis, prediction, forecasting or machine-learning technique.
One immediate problem in finding such examples: many successful applications of econometrics are done in a for-profit setting and are therefore proprietary, so if a technique worked well, it probably will not be published (this is especially true in the case of proprietary trading strategies), but I am nevertheless hoping there are published examples that have at least property (2) above if not both (1) and (2).
| Documented/reproducible examples of successful real-world applications of econometric methods? | CC BY-SA 2.5 | null | 2011-02-08T02:24:03.660 | 2011-02-08T12:46:40.163 | 2017-04-13T12:44:24.677 | -1 | 2544 | [
"r",
"machine-learning",
"forecasting",
"data-mining",
"econometrics"
]
|
6977 | 2 | null | 6974 | 10 | null | I think what you're probably looking for are the so-called mover-stayer models. The basic model is a loglinear model that allows for an "inflated" diagonal and usually involves some form of symmetry on the off-diagonals.
A good place to start is A. Agresti, Categorical Data Analysis, 2nd ed., pp. 423--428. (That's the big green book, not the smaller blue or white/brown one with almost the same name!)
You might also look at J. K. Lindsey, Applying generalized linear models, Springer, 1997, pp. 33--35.
---
Addendum: The log-linear version of the mover-stayer model is also often referred to as a quasi-independence model. Two forms are popular. The more general form is
$$
\log \mu_{ij} = \mu + \alpha_i + \beta_j + \delta_i \mathbf{1}_{(i = j)} ,
$$
where $\mathbf{1}_{(x \in A)}$ is the indicator function that $x \in A$.
Let's interpret this model. First off, if we drop the last term, then we recover the model for independence of the rows and columns. This would mean assuming that people change groups in such a way that the beginning and ending groups were independent of each other, even though each particular group (before and after) may have different marginal distributions.
By throwing in the $\delta_i$ into the model, we are instead assuming there are two groups in the population. There are the movers who will tend to move from one group to another distinct group and there are the stayers who tend to be happy with their first choice. When the population breaks down into two such groups, we expect to see an inflated diagonal due to the stayers.
The model is also called a quasi-independence model since it is a model for independence on the off-diagonals. This means that, among the movers, there is no interaction between the group they started in and the one they end up in ("drifters" might be a more evocative term).
Notice that if we have a separate $\delta_i$ for each $i$, then the diagonal cells will be fit perfectly by this model. A common variant is to replace $\delta_i$ by $\delta$, i.e., each group has a common affinity to "stay" versus "move".
---
Here is some $R$ code to fit such a model
```
# Mover-stayer model example
Y <- rbind( c(18,0,0,0,2),
c( 0,6,0,0,3),
c( 4,0,2,0,2),
c( 0,0,0,0,0),
c( 4,2,0,1,8))
grp <- c("A", "B", "C", "D", "E")
colnames(Y) <- grp
rownames(Y) <- grp
X <- expand.grid( row=grp, col=grp)
X <- cbind( X, sapply(grp, function(r) { X[,1]==r & X[,2]==r }) )
y <- as.vector(Y)
df <- data.frame(y,X)
# General model
move.stay <- glm(y~., family="poisson", data=df)
move.stay.summary <- summary(move.stay)
# Common diagonal term
move.stay.const <- glm(y~row+col+I(A+B+C+D+E), family="poisson", data=df)
move.stay.const.summary <- summary(move.stay.const)
```
At this point, you can do lots of things. If you need a formal hypothesis test, then a deviance test might be appropriate.
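A sketch of such a comparison, continuing the code above (the constant-diagonal fit is nested in the general one):
```
# likelihood-ratio / deviance comparison of the two quasi-independence fits
anova(move.stay.const, move.stay, test = "Chisq")
```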
Your particular data certainly has the peculiarity of a row of all zeros. It's interesting that there was a "group" choice that contained no one in it to start with.
| null | CC BY-SA 2.5 | null | 2011-02-08T02:55:24.297 | 2011-02-09T04:24:16.253 | 2011-02-09T04:24:16.253 | 2970 | 2970 | null |
6978 | 1 | null | null | 12 | 342 | This was inspired by [Efficient online linear regression](https://stats.stackexchange.com/questions/6920/efficient-online-linear-regression), which I found very interesting. Are there any texts or resources devoted to large-scale statistical computing, by which I mean computing with datasets too large to fit in main memory, and perhaps too varied to effectively subsample? For example, is it possible to fit mixed effects models in an online fashion? Has anyone looked into the effects of replacing the standard 2nd order optimization techniques for MLE with 1st order, SGD-type techniques?
| Online, scalable statistical methods | CC BY-SA 2.5 | null | 2011-02-08T04:09:23.683 | 2011-06-09T16:09:25.037 | 2017-04-13T12:44:33.977 | -1 | 1777 | [
"online-algorithms",
"computational-statistics",
"references"
]
|
6979 | 2 | null | 6966 | 4 | null | Hypothesis testing is a useful way to frame a lot of questions: is the effect of a treatment zero or nonzero? The ability to move between statements such as these and a statistical model or procedure (including the construction of an interval estimator) is important for practitioners, I think.
It also bears mentioning that a confidence interval (in the traditional sense) isn't inherently any less "sin-prone" than hypothesis testing - how many intro stats students know the real definition of a confidence interval?
Perhaps the problem isn't hypothesis testing or interval estimation as it is the classical versions of the same; the Bayesian formulation avoids these quite nicely.
| null | CC BY-SA 2.5 | null | 2011-02-08T05:07:01.333 | 2011-02-08T05:07:01.333 | null | null | 26 | null |
6980 | 2 | null | 6966 | 60 | null | This is my personal opinion, so I'm not sure it properly qualifies as an answer.
Why should we teach hypothesis testing?
One very big reason, in short, is that, in all likelihood, in the time it takes you to read this sentence, hundreds, if not thousands (or millions) of hypothesis tests have been conducted within a 10ft radius of where you sit.
Your cell phone is definitely using a likelihood ratio test to decide whether or not it is within range of a base station. Your laptop's WiFi hardware is doing the same in communicating with your router.
The microwave you used to auto-reheat that two-day old piece of pizza used a hypothesis test to decide when your pizza was hot enough.
Your car's traction control system kicked in when you gave it too much gas on an icy road, or the tire-pressure warning system let you know that your rear passenger-side tire was abnormally low, and your headlights came on automatically at around 5:19pm as dusk was setting in.
Your iPad is rendering this page in landscape format based on (noisy) accelerometer readings.
Your credit card company shut off your card when "you" purchased a flat-screen TV at a Best Buy in Texas and a $2000 diamond ring at Zales in a Washington-state mall within a couple hours of buying lunch, gas, and a movie near your home in the Pittsburgh suburbs.
The hundreds of thousands of bits that were sent to render this webpage in your browser each individually underwent a hypothesis test to determine whether they were most likely a 0 or a 1 (in addition to some amazing error-correction).
Look to your right just a little bit at those "related" topics.
All of these things "happened" due to hypothesis tests. For many of these things some interval estimate of some parameter could be calculated. But, especially for automated industrial processes, the use and understanding of hypothesis testing is crucial.
---
On a more theoretical statistical level, the important concept of statistical power arises rather naturally from a decision-theoretic / hypothesis-testing framework. Plus, I believe "even" a pure mathematician can appreciate the beauty and simplicity of the NeymanβPearson lemma and its proof.
This is not to say that hypothesis testing is taught, or understood, well. By and large, it's not. And, while I would agree thatβparticularly in the medical sciencesβreporting of interval estimates along with effect sizes and notions of practical vs. statistical significance are almost universally preferable to any formal hypothesis test, this does not mean that hypothesis testing and the related concepts are not important and interesting in their own right.
| null | CC BY-SA 3.0 | null | 2011-02-08T05:10:14.317 | 2012-02-20T01:22:18.573 | 2012-02-20T01:22:18.573 | 2970 | 2970 | null |
6981 | 2 | null | 6966 | 7 | null | In teaching Neyman Pearson hypothesis testing to early statistics students, I have often tried to locate it in its original setting: that of making decisions. Then the infrastructure of type 1 and type 2 errors all makes sense, as does the idea that you might accept the null hypothesis.
We have to make a decision, we think that the outcome of our decision can be improved by knowledge of a parameter, we only have an estimate of that parameter. We still have to make a decision. Then what is the best decision to make in the context of having an estimate of the parameter?
It seems to me that in its original setting (making decisions in the face of uncertainty) the NP hypothesis test makes perfect sense. See e.g. N & P 1933, particularly p. 291.
Neyman and Pearson. On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character (1933) vol. 231 pp. 289-337
| null | CC BY-SA 2.5 | null | 2011-02-08T05:41:56.397 | 2011-02-08T05:41:56.397 | null | null | 187 | null |
6982 | 2 | null | 6974 | 0 | null | I would start with building the sums:
```
Stage 1 Stage 2
A B C D E | Sum
-----------------------------------
Group A 18 0 0 0 2 | 20
Group B 0 6 0 0 3 | 9
Group C 4 0 2 0 2 | 8
Group D 0 0 0 0 0 | 0
Group E 4 2 0 1 8 | 15
-----------------------------------
Sum 26 8 2 1 15 | 52
```
As long as you don't tell us that this is just a sample from a bigger population, I don't see any statistics involved, just frequency distributions.
From A to B went 0 people, so there is not much to calculate here. 2 persons went from A to E. A initially had 20 members (am I right?) and 2/20 is 10%, so 10 percent of group A went to E. How many people ended up in group A later, or how many there were in total, is not of interest here.
If you ask 'How many in group E came from A?', then you would use the total of E in stage 2; there are 15 people, and two of them came from A: 2/15.
| null | CC BY-SA 2.5 | null | 2011-02-08T06:55:16.950 | 2011-02-08T06:55:16.950 | null | null | 3100 | null |
6983 | 2 | null | 6966 | 11 | null | I think it depends on which hypothesis testing you are talking about. The "classical" hypothesis testing (Neyman-Pearson) is said to be defective because it does not appropriately condition on what actually happened when you did the test. It instead is designed to work "regardless" of what you actually saw in the long run. But failing to condition can lead to misleading results in the individual case. This is simply because the procedure "does not care" about the individual case, on the long run.
Hypothesis testing can be cast in the decision theoretical framework, which I think is a much better way to understand it. You can restate the problem as two decisions:
- "I will act as if $H_0$ is true"
- "I will act as if $H_\mathrm{A}$ is true"
The decision framework is much easier to understand, because it clearly separates out the concepts of "what will you do?" and "what is the truth?" (via your prior information).
You could even apply "decision theory" (DT) to your question. But in order to stop hypothesis testing, DT says you must have an alternative decision available to you. So the question is: if hypothesis testing is abandoned, what is to take its place? I can't think of an answer to this question. I can only think of alternative ways to do hypothesis testing.
(NOTE: in the context of hypothesis testing, the data, sampling distribution, prior distribution, and loss function are all prior information because they are obtained prior to making the decision.)
| null | CC BY-SA 3.0 | null | 2011-02-08T08:12:23.203 | 2013-07-16T15:46:09.633 | 2013-07-16T15:46:09.633 | 7290 | 2392 | null |
6984 | 2 | null | 6971 | 3 | null | There is an error in the definition of `fx01`: the matrices are not compatible the way you defined them. If we define
```
fx01 <- function(ll, x0) tcrossprod(rnorm(ncol(x0)),x0)
```
Then on my Macbook Pro with 2.53 GHz Intel Core 2 Duo processor with 4GB RAM and R 2.11.1, I get
```
> system.time(lapply(1:1000, fx02, x1=x1))
user system elapsed
0.459 0.092 0.546
> system.time(lapply(1:1000, fx02, x1=x0))
user system elapsed
0.632 0.005 0.633
> system.time(lapply(1:1000, fx01, x0=x1))
user system elapsed
0.078 0.001 0.079
> system.time(lapply(1:1000, fx01, x0=x0))
user system elapsed
3.810 0.014 3.805
```
Now this might give a hint (I cropped the output to make it clearer):
```
> showMethods("crossprod")
x="dgCMatrix", y="numeric"
(inherited from: x="CsparseMatrix", y="numeric")
> showMethods("tcrossprod")
x="numeric", y="dgCMatrix"
(inherited from: x="numeric", y="Matrix")
```
So I suspect that in `tcrossprod` there is a conversion of `dgCMatrix` (the class of `x0`) to `Matrix`, meaning the sparse matrix is converted to an ordinary dense matrix. There is no such conversion in the case of `crossprod`. This might explain the difference, since unnecessary conversions are sometimes costly.
Further inspection confirms this; for `crossprod`, sparse-matrix code is used:
```
> getMethod("crossprod",signature=c(x="CsparseMatrix", y="numeric"))
Method Definition:
function (x, y = NULL)
.Call(Csparse_dense_crossprod, x, y)
<environment: namespace:Matrix>
Signatures:
x y
target "CsparseMatrix" "numeric"
defined "CsparseMatrix" "numeric"
```
For `tcrossprod`, no sparse-matrix code is used:
```
> getMethod("tcrossprod",signature=c(x="numeric", y="Matrix"))
Method Definition:
function (x, y = NULL)
{
dim(x) <- c(1L, length(x))
callGeneric(x, y)
}
<environment: namespace:Matrix>
Signatures:
x y
target "numeric" "Matrix"
defined "numeric" "Matrix"
```
Side note: this is the first time I have managed to get the code out of S4 methods (thanks for the question, which forced me to figure it out), so I may still be wrong.
| null | CC BY-SA 2.5 | null | 2011-02-08T08:29:44.753 | 2011-02-08T08:43:50.513 | 2011-02-08T08:43:50.513 | 2116 | 2116 | null |
6985 | 2 | null | 6972 | 2 | null | It may be easier to first consider the probability of $Y=y$ for general $y$ and then to "add up" for $y=0,1,2,3,4$.
Because the question as stated does not assume any (logical) dependence between babies in the sample, we can say they all have the same probability of being under the threshold (because we do not know how to differentiate them, given what information there is in the question). Call this probability $P=Pr[X<2719|I]$ (where $I$ denotes the information in the question).
Now this is basically a binomial probability that you are asking for, with $25$ samples, and the probability that you see $y$ exactly is:
$$Pr[Y=y|I]=\frac{25!}{y!(25-y)!}P^y (1-P)^{25-y}$$
And so you simply add this up for your answer:
$$Pr[Y\leq 4|I]=\sum_{y=0}^{4}\frac{25!}{y!(25-y)!}P^y (1-P)^{25-y}$$
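If you want to evaluate this numerically, the formula above is two lines of R (taking 575 as the standard deviation, per the edit in the question):
```
P <- pnorm(2719, mean = 3315, sd = 575)   # P(X < 2719)
pbinom(4, size = 25, prob = P)            # P(Y <= 4)
```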
| null | CC BY-SA 2.5 | null | 2011-02-08T08:31:43.127 | 2011-02-08T08:31:43.127 | null | null | 2392 | null |
6986 | 1 | 6992 | null | 4 | 705 | If I have a dataset of $N$ pairs ($x_i$,$y_i$) where each $y_i$ has an individual error $\sigma_{y_i}$, I can incorporate this into regression by using the inverse of this as weights.
If I now do a logarithmic regression $\ln y_i = b_0 + b_1\,\ln x_i + \epsilon_i$, how do I now incorporate the errors? Are the weights just plain the same as in the linear case, or do I have to convert them using the lognormal distribution: $Var = \mathrm{e}^{2\mu+\sigma^{2}}(\mathrm{e}^{\sigma^{2}}-1)$?
Thanks for your help.
| Logarithmic regression with individual errors | CC BY-SA 2.5 | null | 2011-02-08T12:25:54.363 | 2011-02-08T16:34:48.667 | null | null | 3104 | [
"regression",
"lognormal-distribution"
]
|
6987 | 2 | null | 6976 | 4 | null | As mentioned in the comments, travel demand forecasts often use inputs from discrete choice models (multinomial logit, nested logit, mixed logit, etc.) to assist in modelling behavioral mode choice or route assignment. DCM certainly has many applications outside travel demand forecasting, but it has been used in the transportation industry for 30+ years, so there should be lots of good examples.
As for reproducible examples:
- Biogeme is an open source piece of software that is optimized for estimating logit models. The website provides the data, code, and a paper write up describing their methods.
- travelR is a project to make travel demand forecasts with R. There was a presentation at useR! 2010 about the project, abstract here and slides here. There is also a webinar coming up in a few weeks regarding R and travel demand forecasting that I'll find the link to and update here for those interested.
- Transportation Review Board Conference has made a list of all papers available online this year. I don't have a specific paper to link to, but there are several committee's worth of papers regarding the application of choice models in the transportation context.
| null | CC BY-SA 2.5 | null | 2011-02-08T12:46:40.163 | 2011-02-08T12:46:40.163 | null | null | 696 | null |
6988 | 2 | null | 58 | 14 | null | The back propagation algorithm is a [gradient descent](http://en.wikipedia.org/wiki/Gradient_descent) algorithm for fitting a neural network model (as mentioned by @Dikran). Let me explain how.
Formally: using the gradient calculated at the end of this post in the gradient-descent iteration below, applied to the empirical loss [1], gives the back propagation algorithm as a particular case of gradient descent.
A neural network model
Formally, we fix ideas with a simple single layer model:
$$ f(x)=g(A^1(s(A^2(x)))) $$
where $g:\mathbb{R} \rightarrow \mathbb{R}$ and $s:\mathbb{R}^M\rightarrow \mathbb{R}^M$ are known, with $s(x)[m]=\sigma(x[m])$ for all $m=1,\dots,M$, and $A^1:\mathbb{R}^M\rightarrow \mathbb{R}$, $A^2:\mathbb{R}^p\rightarrow \mathbb{R}^M$ are unknown affine functions. The function $\sigma:\mathbb{R}\rightarrow \mathbb{R}$ is called the activation function in the framework of classification.
A quadratic Loss function is taken to fix ideas.
Hence the input $(x_1,\dots,x_n)$ vectors of $\mathbb{R}^p$ can be fitted to the real output $(y_1,\dots,y_n)$ of $\mathbb{R}$ (could be vectors) by minimizing the empirical loss:
$$\mathcal{R}_n(A^1,A^2)=\sum_{i=1}^n (y_i-f(x_i))^2\;\;\;\;\;\;\; [1]$$
with respect to the choice of $A^1$ and $A^2$.
[Gradient descent](http://en.wikipedia.org/wiki/Gradient_descent)
A gradient descent for minimizing $\mathcal{R}$ is an algorithm that iterates:
$$\mathbf{a}_{l+1}=\mathbf{a}_l-\gamma_l \nabla \mathcal{R}(\mathbf{a}_l),\ l \ge 0.$$
for well chosen step sizes $(\gamma_l)_l$ (also called learning rate in the framework of back propagation). It requires the calculation of the gradient of $\mathcal{R}$. In the considered case $\mathbf{a}_l=(A^1_{l},A^2_{l})$.
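In code, the iteration is nothing more than the following generic sketch (`grad` is an assumed user-supplied function returning $\nabla \mathcal{R}$, and a constant step size is used for simplicity):
```
gradient_descent <- function(a0, grad, gamma = 0.01, n_iter = 1000) {
  a <- a0
  for (l in seq_len(n_iter)) {
    a <- a - gamma * grad(a)   # a_{l+1} = a_l - gamma_l * grad R(a_l)
  }
  a
}
```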
Gradient of $\mathcal{R}$ (for the simple considered neural net model)
Let us denote by $\nabla_1 \mathcal{R}$ the gradient of $\mathcal{R}$ as a function of $A^1$, and by $\nabla_2\mathcal{R}$ the gradient of $\mathcal{R}$ as a function of $A^2$. A standard calculation (using the chain rule for the composition of functions) and the use of the notation $z_i=A^1(s(A^2(x_i)))$ give
$$\nabla_1 \mathcal{R}[1:M] =-2\times \sum_{i=1}^n z_i g'(z_i) (y_i-f(x_i))$$
for all $m=1,\dots,M$
$$\nabla_2 \mathcal{R}[1:p,m] =-2\times \sum_{i=1}^n x_i g'(z_i) z_i[m]\sigma'(A^2(x_i)[m]) (y_i-f(x_i))$$
Here I used the R notation: $x[a:b]$ is the vector composed of the coordinates of $x$ from index $a$ to index $b$.
| null | CC BY-SA 3.0 | null | 2011-02-08T12:47:57.683 | 2012-11-13T07:37:40.343 | 2012-11-13T07:37:40.343 | 223 | 223 | null |
6989 | 1 | 6994 | null | 33 | 17042 | I want to fit a multilevel GLMM with a Poisson distribution (with over-dispersion) using R. At the moment I am using [lme4](http://cran.r-project.org/web/packages/lme4/index.html) but I noticed that recently the `quasipoisson` family was removed.
I've seen elsewhere that you can model additive over-dispersion for binomial distributions by adding a random intercept with one level per observation. Does this apply to the Poisson distribution as well?
Is there a better way to do it? Are there other packages that you would recommend?
| How do I fit a multilevel model for over-dispersed poisson outcomes? | CC BY-SA 4.0 | null | 2011-02-08T13:02:55.313 | 2018-09-04T10:02:50.647 | 2018-09-04T10:02:50.647 | 128677 | 1871 | [
"r",
"mixed-model",
"poisson-distribution",
"lme4-nlme",
"overdispersion"
]
|
6990 | 1 | null | null | 3 | 668 | How do I carry out a repeated measures analysis using R?
| Repeated measures using R | CC BY-SA 2.5 | null | 2011-02-08T13:16:39.953 | 2011-02-09T06:40:13.363 | 2011-02-09T04:55:12.440 | 183 | 3107 | [
"r",
"anova",
"repeated-measures"
]
|
6991 | 2 | null | 6986 | 3 | null | It is good to remember what the purpose of the weighting is. Suppose we have the linear regression
\begin{align}
\mathbf{y}=\mathbf{X}\mathbf{\beta}+\mathbf{\varepsilon}
\end{align}
In this case $\sigma_{y_i}=\sigma_{\varepsilon_i}=\sigma_i$ and we can write that
$$\Omega=E(\varepsilon\varepsilon')=diag(\sigma_i)$$
Now weighted regression with weights inverse to $\sigma_i$ is [generalised least squares estimate](http://en.wikipedia.org/wiki/Generalized_least_squares) with $\Omega$. Generalised least squares estimates are best linear unbiased estimates if variance of errors is $\Omega$.
So you need to know the variance of the errors, not of the $y$. If you fit the logarithmic regression and you know that $y$ is log-normal, you need to weight with (the inverse of) the variance of $\log y$, which in your case will correspond to the variance of $\epsilon$. Then your weighted least squares estimates will be best linear unbiased estimates, and you retain the same property as in the linear case.
| null | CC BY-SA 2.5 | null | 2011-02-08T15:26:07.017 | 2011-02-08T15:26:07.017 | null | null | 2116 | null |
6992 | 2 | null | 6986 | 2 | null | If the variance of $y_i$ is $\sigma_i^2,$ then, by the [delta method](http://en.wikipedia.org/wiki/Delta_method), the variance of $\log(y_i)$ is approximately $\sigma_i^2 / y_i^2 .$ So if you're doing [weighted least squares](http://en.wikipedia.org/wiki/Weighted_least_squares#Weighted_least_squares) using inverse-variance weights, instead of weighting by $1/\sigma_i^2 ,$ you need to weight by $y_i^2 / \sigma_i^2 .$
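In R such a weighted fit could look like the following sketch (the names `dat`, `x`, `y` and `sigma` are illustrative; `sigma` holds the known error standard deviations of $y$):
```
fit <- lm(log(y) ~ log(x), data = dat, weights = y^2 / sigma^2)
summary(fit)
```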
| null | CC BY-SA 2.5 | null | 2011-02-08T16:34:48.667 | 2011-02-08T16:34:48.667 | null | null | 449 | null |
6993 | 2 | null | 6989 | 7 | null | I think that the glmmADMB package is exactly what you are looking for.
>
install.packages("glmmADMB",
repos="http://r-forge.r-project.org")
But from a Bayesian point of view you can use the MCMCglmm package or the BUGS/JAGS software; they are very flexible and you can fit this kind of model (and the syntax is close to the R one).
EDIT thanks to @randel
If you want to install the `glmmADMB` and `R2admb` packages it is better to do:
```
install.packages("glmmADMB", repos="http://glmmadmb.r-forge.r-project.org/repos")
install.packages("R2admb")
```
| null | CC BY-SA 3.0 | null | 2011-02-08T17:01:35.940 | 2014-08-04T20:39:07.137 | 2014-08-04T20:39:07.137 | 2028 | 2028 | null |
6994 | 2 | null | 6989 | 24 | null | You can fit a multilevel GLMM with a Poisson distribution (with over-dispersion) using R in multiple ways. A few `R` packages are: `lme4`, `MCMCglmm`, `arm`, etc. A good reference to see is [Gelman and Hill (2007)](http://www.stat.columbia.edu/~gelman/arm/)
I will give an example of doing this using `rjags` package in `R`. It is an interface between `R` and `JAGS` (like `OpenBUGS` or `WinBUGS`).
$$n_{ij} \sim \mathrm{Poisson}(\theta_{ij})$$
$$\log \theta_{ij} = \beta_0 + \beta_1 \mbox{ } \mathtt{Treatment}_{i} + \delta_{ij}$$
$$\delta_{ij} \sim N(0, \sigma^2_{\epsilon})$$
$$i=1 \ldots I, \quad j = 1\ldots J$$
$\mathtt{Treatment}_i = 0 \mbox{ or } 1, \ldots, J-1 \mbox{ if the } i^{th} \mbox{ observation belongs to treatment group } 1 \mbox{, or, } 2, \ldots, J$
The $\delta_{ij}$ part in the model above accounts for overdispersion. But there is nothing stopping you from modeling correlation between individuals (you don't believe that individuals are really independent) and within individuals (repeated measures). Also, the rate parameter may be scaled by some other constant as in `rate models`. Please see Gelman and Hill (2007) for more reference. Here is the `JAGS` code for the simple model:
```
data{
for (i in 1:I){
    nCount[i,1] <- obsTrt1[i]
    nCount[i,2] <- obsTrt2[i]
## notice I have only 2 treatments and I individuals
}
}
model{
for (i in 1:I){
nCount[i, 1] ~ dpois( means[i, 1] )
nCount[i, 2] ~ dpois( means[i, 2] )
log( means[i, 1] ) <- mu + b * trt1[i] + disp[i, 1]
log( means[i, 2] ) <- mu + b * trt2[i] + disp[i, 2]
disp[i, 1] ~ dnorm( 0, tau)
disp[i, 2] ~ dnorm( 0, tau)
}
mu ~ dnorm( 0, 0.001)
b ~ dnorm(0, 0.001)
tau ~ dgamma( 0.001, 0.001)
}
```
Here is the `R` code to use it (say the model file is named `overdisp.bug`):
```
dataFixedEffect <- list("I" = 10,
                    "obsTrt1" = obsTrt1, #vector of n_i1
                    "obsTrt2" = obsTrt2, #vector of n_i2
                    "trt1" = trt1,       #vector of 0
                    "trt2" = trt2        #vector of 1
                     )
initFixedEffect <- list(mu = 0.0 , b = 0.0, tau = 0.01)
simFixedEffect <- jags.model(file = "overdisp.bug",
data = dataFixedEffect,
inits = initFixedEffect,
n.chains = 4,
n.adapt = 1000)
sampleFixedEffect <- coda.samples(model = simFixedEffect,
variable.names = c("mu", "b", "means"),
n.iter = 1000)
meansTrt1 <- as.matrix(sampleFixedEffect[ , 2:11])
meansTrt2 <- as.matrix(sampleFixedEffect[ , 12:21])
```
You can play around with your parameters' posteriors and you can introduce more parameters to make you modeling more precise (we like to think this). Basically, you get the idea.
For more details on using `rjags` and `JAGS`, please see [John Myles White's page](http://www.johnmyleswhite.com/notebook/2010/08/20/using-jags-in-r-with-the-rjags-package/)
| null | CC BY-SA 2.5 | null | 2011-02-08T17:06:29.380 | 2011-02-08T17:15:16.543 | 2011-02-08T17:15:16.543 | 1307 | 1307 | null |
6995 | 2 | null | 6990 | 5 | null | There are [many tutorials](http://www.r-statistics.com/2010/04/repeated-measures-anova-with-r-tutorials/) you can go through.
| null | CC BY-SA 2.5 | null | 2011-02-08T17:47:25.943 | 2011-02-08T17:47:25.943 | null | null | 253 | null |
6996 | 2 | null | 6966 | 10 | null | If I were a hardcore Frequentist I would remind you that confidence intervals are quite regularly just inverted hypothesis tests, i.e. the 95% interval is simply another way of describing all the points that a test involving your data wouldn't reject at the .05 level. In these situations a preference for one over the other is a question of exposition rather than method.
Now, exposition is important of course, but I think that would be a pretty good argument. It's neat and clarifying to explain the two approaches as restatements of the same inference from different points of view. (The fact that not all interval estimators are inverted tests is then an inelegant but not particularly awkward fact, pedagogically speaking).
Much more serious implications come from the decision to condition on the observations, as pointed out above. However, even in retreat the Frequentist could always observe that there are plenty of situations (perhaps not a majority) where conditioning on the observations would be unwise or unilluminating. For those, the HT/CI setup is (not 'are') exactly what is wanted, and should be taught as such.
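As a small illustration of the inversion (the data below are made up, not from the original discussion), the 95% $t$-interval contains exactly those null values that a two-sided $t$-test would not reject at the 5% level:
```
set.seed(1)
x <- rnorm(30, mean = 1)                       # made-up sample
t.test(x)$conf.int                             # 95% confidence interval

## values of mu0 not rejected at the .05 level (on a grid)
mu0 <- seq(0, 2, by = 0.001)
kept <- sapply(mu0, function(m) t.test(x, mu = m)$p.value >= 0.05)
range(mu0[kept])                               # matches the interval up to grid resolution
```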
| null | CC BY-SA 2.5 | null | 2011-02-08T17:52:56.230 | 2011-02-08T17:52:56.230 | null | null | 1739 | null |
6997 | 2 | null | 6208 | 1 | null | This has been an eye-opening debate on the type-II/III issue for me. Thanks to everyone for the effort put into the discussion. I had come round to the view of consistently promoting type-II over type-III tests, but had a weak grasp of the argument - I just relied on the advice in John Fox's regression book (which accompanies the `car` package) that type-III tests are rarely interpretable (well, I think that is what he said...).
Anyway, ezANOVA is really useful for giving the psychology undergraduates I teach access to R functionality that would otherwise be out of reach. I provide online R modules, one featuring ezANOVA to demonstrate mixed ANOVA designs (though it seems versions before 3 may have been buggy for this ... doh!)
Try it out here:
[http://www.wessa.net/rwasp_Mixed%20Model%20ANOVA.wasp](http://www.wessa.net/rwasp_Mixed%20Model%20ANOVA.wasp)
after the module loads (~10s) find the compute button (half way down the page) and it will run ezANOVA and associated tables and plots.
Ian
| null | CC BY-SA 2.5 | null | 2011-02-08T18:23:45.137 | 2011-02-08T18:23:45.137 | null | null | null | null |
6998 | 1 | 6999 | null | 4 | 568 | The geometric mean is the appropriate measure of central tendency for log-normally distributed variables. However, the arithmetic mean still has its uses with log-normal variables - in inferring totals from survey data, for instance. My understanding is that the sample arithmetic mean is an unbiased estimator of the population arithmetic mean, with approximately normal sampling error provided the sample is large enough for the CLT to be invoked.
If you use the simple missing-data imputation method of replacing missing values with a mean, then imputing the arithmetic mean would inflate the geometric mean (and give high leverage to the imputed data), while imputing the geometric mean would deflate the arithmetic mean (and therefore the estimated total).
My question is what is most appropriate to use, the geometric mean or the arithmetic mean? Or is such a simple imputation method completely inappropriate for log-normal data?
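As a quick simulation sketch of the point above (the parameter values are chosen only for illustration): the sample arithmetic mean targets $e^{\mu+\sigma^2/2}$ while the geometric mean targets $e^{\mu}$, so the two differ systematically for log-normal data.
```
set.seed(42)
x <- rlnorm(1e5, meanlog = 0, sdlog = 1)   # log-normal with mu = 0, sigma = 1
mean(x)                                    # arithmetic mean, ~ exp(0 + 1/2) = 1.649
exp(mean(log(x)))                          # geometric mean,  ~ exp(0) = 1
```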
| Means and imputation of log-normal variables | CC BY-SA 2.5 | null | 2011-02-08T18:39:48.527 | 2011-02-08T18:48:31.850 | null | null | 229 | [
"mean",
"data-imputation",
"lognormal-distribution"
]
|
6999 | 2 | null | 6998 | 5 | null | Replacing missing data with the mean is worse than useless as a method of imputation. See e.g. [this page on missingdata.org.uk](http://missingdata.lshtm.ac.uk/index.php?option=com_content&view=article&id=68%3asimple-mean-imputation&catid=39%3asimple-ad-hoc-methods-for-coping-with-missing-data&Itemid=96).
| null | CC BY-SA 2.5 | null | 2011-02-08T18:48:31.850 | 2011-02-08T18:48:31.850 | null | null | 449 | null |
7001 | 2 | null | 6975 | 1 | null | You may want to start with dimensionality reduction, perhaps using [PCA](http://en.wikipedia.org/wiki/Principal_components_analysis) or [MDS](http://en.wikipedia.org/wiki/Multidimensional_scaling), which may give a clearer picture.
I would also try [clustering](http://en.wikipedia.org/wiki/Cluster_analysis), whether k-means or a more sophisticated method.
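A rough sketch of what that might look like in base R; `dat` is a placeholder for your numeric data matrix, and the choice of two dimensions and three clusters is arbitrary:
```
## dat: a numeric data frame or matrix of your variables (placeholder)
z   <- scale(dat)                      # standardise before PCA / MDS / k-means
pc  <- prcomp(z)                       # PCA
plot(pc$x[, 1:2], main = "First two principal components")

mds <- cmdscale(dist(z), k = 2)        # classical MDS on Euclidean distances
plot(mds, main = "Classical MDS, 2 dimensions")

km  <- kmeans(z, centers = 3)          # k-means with an arbitrary k = 3
plot(pc$x[, 1:2], col = km$cluster,
     main = "k-means clusters on the PCA plane")
```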
| null | CC BY-SA 2.5 | null | 2011-02-08T21:25:23.370 | 2011-02-08T21:25:23.370 | null | null | 2314 | null |
7004 | 1 | 7005 | null | 21 | 20073 |
## Background
I have a variable with an unknown distribution.
I have 500 samples, but I would like to demonstrate the precision with which I can estimate the variance, e.g. to argue that a sample size of 500 is sufficient. I am also interested in knowing the minimum sample size that would be required to estimate the variance with a precision of $X\%$.
## Questions
How can I calculate
- the precision of my estimate of the variance given a sample size of $n=500$ (or, more generally, $n=N$)?
- the minimum number of samples required to estimate the variance with a precision of $X\%$?
## Example
Figure 1: density estimate of the parameter based on the 500 samples.

Figure 2: sample size (x-axis) versus variance estimates (y-axis), calculated from subsamples of the 500 observations. The idea is that the estimates should converge to the true variance as $n$ increases.
However, the estimates are not independent, since the subsamples used to estimate the variance at $n \in \{10,125,250,500\}$ are not independent of each other, nor of the subsamples used at $n \in \{20,40,80\}$.

| Calculating required sample size, precision of variance estimate? | CC BY-SA 2.5 | null | 2011-02-09T00:48:49.097 | 2016-09-11T03:20:05.800 | 2020-06-11T14:32:37.003 | -1 | 2750 | [
"estimation",
"random-variable",
"variance",
"sampling",
"sample-size"
]
|
7005 | 2 | null | 7004 | 13 | null | For i.i.d. random variables $X_1, \dotsc, X_n$, the unbiased estimator for the variance $s^2$ (the one with denominator $n-1$) has variance:
$$\mathrm{Var}(s^2) = \sigma^4 \left(\frac{2}{n-1} + \frac{\kappa}{n}\right)$$
where $\kappa$ is the excess kurtosis of the distribution (reference: [Wikipedia](http://en.wikipedia.org/wiki/Sample_variance#Distribution_of_the_sample_variance)). So now you need to estimate the kurtosis of your distribution as well. You can use a quantity sometimes described as $\gamma_2$ (also from [Wikipedia](http://en.wikipedia.org/wiki/Sample_variance#Distribution_of_the_sample_variance)):
$$\gamma_2 = \frac{\mu_4}{\sigma^4} - 3$$
I would assume that if you use $s$ as an estimate for $\sigma$ and the sample $\gamma_2$ as an estimate for $\kappa$, you get a reasonable estimate for $\mathrm{Var}(s^2)$, although I don't see a guarantee that it is unbiased. See whether it matches the variance among the subsets of your 500 data points reasonably well, and if it does, don't worry about it anymore :)
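A plug-in version of that calculation might look like this in R (here `x` is a placeholder for your sample of 500 values):
```
## x: your observed sample (placeholder)
n  <- length(x)
s2 <- var(x)                               # unbiased sample variance
m2 <- mean((x - mean(x))^2)                # second central moment
m4 <- mean((x - mean(x))^4)                # fourth central moment
g2 <- m4 / m2^2 - 3                        # sample excess kurtosis (estimate of kappa)
var.s2 <- s2^2 * (2 / (n - 1) + g2 / n)    # plug-in estimate of Var(s^2)
sqrt(var.s2)                               # approximate standard error of s^2
```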
| null | CC BY-SA 3.0 | null | 2011-02-09T02:12:51.547 | 2012-06-07T00:58:13.283 | 2012-06-07T00:58:13.283 | 2750 | 2898 | null |
7007 | 1 | null | null | 3 | 5357 |
## Context:
A sample of people was asked to provide ratings of their impressions of [jeepneys](http://en.wikipedia.org/wiki/Jeepney) using a Likert scale. Age, gender, household income, and so on were also obtained.
Dangerous: strongly agree; agree; neutral; disagree; strongly disagree
Convenient: strongly agree; agree; neutral; disagree; strongly disagree
Comfortable: strongly agree; agree; neutral; disagree; strongly disagree
Age: $\quad<18\quad\quad$ $19-25\quad\quad$ $>25\quad$
Gender: male; female
Household Income: $\quad<3000\quad\quad$ $3001-5000\quad\quad$ $>5000\quad$
## Questions:
- What statistical analyses would you recommend?
- How could I implement this in stata?
| Statistical Analysis for Perception Survey | CC BY-SA 2.5 | null | 2011-02-09T03:36:52.297 | 2017-04-26T14:00:16.703 | 2011-02-12T05:47:26.930 | 183 | 3118 | [
"survey",
"stata"
]
|
7008 | 2 | null | 7004 | 18 | null | Learning a variance is hard.
It takes a (perhaps surprisingly) large number of samples to estimate a variance well in many cases. Below, I'll show the development for the "canonical" case of an i.i.d. normal sample.
Suppose $Y_i$, $i=1,\ldots,n$ are independent $\mathcal{N}(\mu, \sigma^2)$ random variables. We seek a $100(1-\alpha)\%$ confidence interval for the variance such that the width of the interval is $\rho s^2$, i.e., the width is $100\rho \%$ of the point estimate. For example, if $\rho = 1/2$, then the width of the CI is half the value of the point estimate, e.g., if $s^2 = 10$, then the CI would be something like $(8,\,13)$, having a width of 5. Note the asymmetry around the point estimate, as well. ($s^2$ is the unbiased estimator for the variance.)
"The" (rather, "a") confidence interval for $s^2$ is
$$
\frac{(n-1) s^2}{\chi_{(n-1)}^{2\;(1-\alpha/2)}} \leq \sigma^2 \leq \frac{(n-1) s^2}{\chi_{(n-1)}^{2\;(\alpha/2)}} \>,
$$
where $\chi_{(n-1)}^{2\;\beta}$ is the $\beta$ quantile of the chi-squared distribution with $n-1$ degrees of freedom. (This arises from the fact that $(n-1)s^2/\sigma^2$ is a pivotal quantity in a Gaussian setting.)
We want to minimize the width so that
$$
L(n) = \frac{(n-1) s^2}{\chi_{(n-1)}^{2\;(\alpha/2)}} - \frac{(n-1) s^2}{\chi_{(n-1)}^{2\;(1-\alpha/2)}} < \rho s^2 \>,
$$
so we are left to solve for $n$ such that
$$
(n-1) \left(\frac{1}{\chi_{(n-1)}^{2\;(\alpha/2)}} - \frac{1}{\chi_{(n-1)}^{2\;(1-\alpha/2)}} \right) < \rho .
$$
For the case of a 99% confidence interval, we get $n = 65$ for $\rho = 1$ and $n = 5321$ for $\rho = 0.1$. This last case yields an interval that is (still!) 10% as large as the point estimate of the variance.
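As a hedged illustration (not part of the original derivation), the search over $n$ can be done directly in R with `qchisq`; it should reproduce approximately the values quoted above:
```
## proportional width of the CI for sigma^2, as a multiple of s^2
width.factor <- function(n, conf = 0.99) {
  alpha <- 1 - conf
  (n - 1) * (1 / qchisq(alpha / 2, n - 1) - 1 / qchisq(1 - alpha / 2, n - 1))
}

## smallest n with width.factor(n) < rho
min.n <- function(rho, conf = 0.99, n.max = 10000) {
  n <- 2:n.max
  n[which(width.factor(n, conf) < rho)[1]]
}

min.n(1)     # about 65
min.n(0.1)   # about 5321
```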
If your chosen confidence level is less than 99%, then the same width of interval will be obtained for a lower value of $n$. But $n$ may still be larger than you would have guessed.
A plot of the sample size $n$ versus the proportional width $\rho$ shows something that looks asymptotically linear on a log-log scale; in other words, a power-law--like relationship. We can estimate the power of this power-law relationship (crudely) as
$$
\hat{\alpha} \approx \frac{\log 0.1 - \log 1}{\log 5321 - \log 65} = \frac{-\log 10}{\log \frac{5321}{65}} \approx -0.52 ,
$$
which is, unfortunately, decidedly slow!
---
This is sort of the "canonical" case to give you a feel for how to go about the calculation. Based on your plots, your data don't look particularly normal; in particular, there is what appears to be noticeable skewness.
But, this should give you a ballpark idea of what to expect. Note that to answer your second question above, it is necessary to fix some confidence level first, which I've set to 99% in the development above for demonstration purposes.
| null | CC BY-SA 2.5 | null | 2011-02-09T04:05:39.063 | 2011-02-09T13:24:07.000 | 2011-02-09T13:24:07.000 | 2970 | 2970 | null |