Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2049 | 2 | null | 1908 | 0 | null | [IEEE World Congress on Computational Intelligence](http://www.wcci2010.org/topics/ijcnn-2010). Note that the link is for the 2010 conference.
| null | CC BY-SA 2.5 | null | 2010-08-23T14:55:35.160 | 2010-08-23T14:55:35.160 | null | null | 976 | null |
2050 | 2 | null | 1908 | 0 | null | [International Conference on Artificial Neural Networks](http://delab.csd.auth.gr/icann2010/). Note that the link is for the 2010 conference.
| null | CC BY-SA 2.5 | null | 2010-08-23T14:58:22.317 | 2010-08-23T14:58:22.317 | null | null | 976 | null |
2051 | 2 | null | 1908 | 1 | null | [Artificial Intelligence In Medicine](http://aimedicine.info/aime/) (AIME), odd years starting from 1985.
| null | CC BY-SA 2.5 | null | 2010-08-23T15:35:18.580 | 2010-08-23T15:35:18.580 | null | null | 930 | null |
2053 | 2 | null | 1844 | 11 | null | I think I figured out the answer (to my own question). If the assumption of proportional hazards is true, the two methods give similar estimates of the hazard ratio. The discrepancy I found in one particular example, I now think, is due to the fact that that assumption is dubious.
If the assumption of proportional hazards is true, then a graph of log(time) vs. log(-log(St)) (where St is the proportional survival at time t) should show two parallel lines. Below is the graph created from the problem data set. It seems far from linear. If the assumption of proportional hazards is not valid, then the concept of a hazard ratio is meaningless, and so it doesn't matter which method is used to compute the hazard ratio.

I wonder if the discrepancy between the logrank and Mantel-Haenszel estimates of the hazard ratio can be used as a method to test the assumption of proportional hazards?
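For anyone who wants to reproduce the graphical check, here is a minimal sketch in R using the survival package; the data frame `mydata` and the column names `time`, `status`, and `group` are placeholders, not my actual data.
```
library(survival)
fit <- survfit(Surv(time, status) ~ group, data = mydata)
# fun = "cloglog" plots log(-log(S(t))) against log(time);
# roughly parallel curves support the proportional-hazards assumption.
plot(fit, fun = "cloglog", xlab = "log(time)", ylab = "log(-log(S(t)))")
# A formal check of the same assumption:
cox.zph(coxph(Surv(time, status) ~ group, data = mydata))
```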
| null | CC BY-SA 2.5 | null | 2010-08-23T18:03:15.097 | 2010-08-23T21:10:58.187 | 2010-08-23T21:10:58.187 | 25 | 25 | null |
2054 | 2 | null | 1927 | 6 | null | The Tukey halfspace median can be extended to >2 dimensions using DEEPLOC, an algorithm due to Struyf and Rousseeuw; see [here](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.2052) for details.
The algorithm is used to approximate the point of greatest depth efficiently; naive methods which attempt to determine this exactly usually run afoul of (the computational version of) "the curse of dimensionality", where the runtime required to calculate a statistic grows exponentially with the number of dimensions of the space.
| null | CC BY-SA 2.5 | null | 2010-08-23T18:34:05.990 | 2010-08-23T18:34:05.990 | null | null | 1068 | null |
2055 | 2 | null | 1799 | 2 | null | As to your suggestion of using Multi-level models in this and the other thread, I see no benefit of approaching your analysis in this manner over repeated measures ANOVA. MLMs are simply an extension of OLS regression, and offer an explicit framework to model group-level (often referred to as "contextual") effects on lower-level estimates. I would guess from your example that you have no specific "contexts" besides the fact that measures are repeated within individuals, and this is accounted for within the repeated-measures ANOVA design (and you're not interested in measuring effects of specific individuals anyway, only in controlling for this non-independence). Given your description, neither group nor gender is a "context" in this sense; they can only be direct effects (or at least you can only observe whether they have direct effects).
MLM just complicates things in this example, IMO. It doesn't directly solve your problem that several dependent measures are non-independent; you can't measure the effect of any group-level characteristic on your outcome because you only have two groups; and your hierarchy is cross-classified with three levels (an observation is nested within only one individual, but the gender and group nestings are not mutually exclusive). All of the things you listed as purposes of the project can be accomplished using repeated measures ANOVA by simply including group, gender, and group*gender interaction effects in the models.
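Purely as an illustration, such a repeated measures ANOVA might look like the following in R; the variable names (`score`, `subject`, `group`, `gender`, and the within-subject factor `occasion`) are hypothetical placeholders, since I don't know your actual data layout.
```
# Long-format data: one row per observation, with 'subject' identifying
# the individual the repeated measures belong to.
fit <- aov(score ~ group * gender * occasion + Error(subject/occasion),
           data = long_data)
summary(fit)
```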
It is beyond the scope of your question, but I would never agree that gender is a group observations are nested within.
| null | CC BY-SA 2.5 | null | 2010-08-23T19:47:39.243 | 2010-08-24T01:49:46.973 | 2010-08-24T01:49:46.973 | 1036 | 1036 | null |
2056 | 2 | null | 1399 | 6 | null | According to Davison and Hinkley ("Bootstrap methods and their application", 1997, Section 3.8), the third algorithm is conservative. They advocate a fourth approach: simply resampling the subjects.
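To make that fourth approach concrete, here is a rough sketch of resampling subjects (a cluster bootstrap) in R; the data frame `dat`, its `subject` column, and the `statistic()` function are placeholders made up for illustration.
```
boot_stats <- replicate(2000, {
  ids <- sample(unique(dat$subject), replace = TRUE)
  # Rebuild a data set by stacking the complete records of the drawn subjects
  resampled <- do.call(rbind, lapply(ids, function(i) dat[dat$subject == i, ]))
  statistic(resampled)
})
quantile(boot_stats, c(0.025, 0.975))  # percentile bootstrap interval
```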
| null | CC BY-SA 2.5 | null | 2010-08-23T20:39:21.760 | 2010-08-23T20:39:21.760 | null | null | 187 | null |
2057 | 2 | null | 1841 | 6 | null | I have a relatively simple solution to propose, Hugo. Because you're forthright about not being a statistician (often a plus ;-) but obviously can handle technical language, I'll take some pains to be technically clear but avoid statistical jargon.
Let's start by checking my understanding: you have six series of data (t[j,i], h[j,i]), 1 <= j <= 6, 1 <= i <= n[j], where t[j,i] is the time you measured the entropy h[j,i] for artifact j and n[j] is the number of observations made of artifact j.
We may as well assume t[j,i] <= t[j,i+1] is always the case, but it sounds like you cannot necessarily assume that t[1,i] = ... = t[6,i] for all i (synchronous measurements) or even that t[j,i+1] - t[j,i] is a constant for any given j (equal time increments). We might as well also suppose j=1 designates your special artifact.
We do need a model for the data. "Exponential" versus "sublinear" covers a lot of ground, suggesting we should adopt a very broad (non-parametric) model for the behavior of the curves. One thing that simply distinguishes these two forms of evolution is that the increments h[j,i+1] - h[j,i] in the exponential case will be increasing, whereas for concave sublinear growth the increments will be decreasing. Specifically, the increments of the increments,
d2[j,i] = h[j,i+2] - 2*h[j,i+1] + h[j,i], 1 <= i <= n[j]-2,
will either tend to be positive (for artifact 1) or negative (for the others).
A big question concerns the nature of variation: the observed entropies might not exactly fit along any nice curve; they might oscillate, seemingly at random, around some ideal curve. Because you don't want to do any statistical modeling, we aren't going to learn much about the nature of this variation, but let's hope that the amount of variation for any given artifact j is typically about the same size for all times t[j,i]. This lets us write each entropy in the form
h[j,i] = y[j,i] + e[j,i]
where y[j,i] is the "true" entropy for artifact j at time t[j,i] and e[j,i] is the difference between the observed entropy h[j,i] and the true entropy. It might be reasonable, as a first cut at this problem, to hope that the e[j,i] act randomly and appear to be statistically independent of each other and of the y[j,i] and t[j,i].
This setup and these assumptions imply that the set of second increments for artifact j, {d2[j,i] | 1 <= i <= n[j]-2}, will not necessarily be entirely positive or entirely negative, but that each such set should look like a bunch of (potentially different) positive or negative numbers plus some fluctuation:
d2[j,i] = (y[j,i+2] - 2*y[j,i+1] + y[j,i]) + (e[j,i+2] - 2*e[j,i+1] + e[j,i]).
We're still not in a classic probability context, but we're close if we (incorrectly, but perhaps not fatally) treat the correct second increments (y[j,i+2] - 2*y[j,i+1] + y[j,i]) as if they were numbers drawn randomly from some box. In the case of artifact 1 your hope is that this is a box of all positive numbers; for the other artifacts, your hope is that it is a box of all negative numbers.
At this point we can apply some standard machinery for hypothesis testing. The null hypothesis is that the true second increments are all (or most of them) negative; the alternative hypothesis covers all the other 2^6-1 possibilities concerning the signs of the six batches of second increments. This suggests running a t-test separately for each collection of actual second increments to compare them against zero. (A non-parametric equivalent, such as a sign test, would be fine, too.) Use a Bonferroni correction with these planned multiple comparisons; that is, if you want to test at a level of alpha (e.g., 5%) to attain a desired "probability value," use the alpha/6 critical value for the test. This can readily be done even in a spreadsheet if you like. It's fast and straightforward.
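A sketch of that procedure in R (purely illustrative: it assumes the entropies are stored as a list `h` of six numeric vectors, one per artifact):
```
d2 <- lapply(h, function(hj) diff(hj, differences = 2))  # second increments
alpha <- 0.05
# One-sample t-test of each artifact's second increments against zero,
# compared to the Bonferroni-corrected level alpha/6.
p_values <- sapply(d2, function(x) t.test(x, mu = 0)$p.value)
p_values < alpha / length(d2)
# A sign test, e.g. binom.test(sum(x > 0), length(x)), would be a
# non-parametric alternative.
```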
This approach is not going to be the best one among all those that could be conceived: it's one of the less powerful, and it still makes some assumptions (such as independence of the errors); but if it works--that is, if you find the second increments for j=1 to be significantly above 0 and all the others to be significantly below 0--then it will have done its job. If this is not the case, your expectations might still be correct, but it would take a greater statistical modeling effort to analyze the data. (The next phase, if needed, might be to look at the runs of increments for each artifact to see whether there's evidence that eventually each curve becomes exponential or sublinear. It should also involve a deeper analysis of the nature of variation in the data.)
| null | CC BY-SA 2.5 | null | 2010-08-23T22:04:33.300 | 2010-08-25T13:52:20.933 | 2010-08-25T13:52:20.933 | 919 | 919 | null |
2058 | 2 | null | 213 | 12 | null | You can find candidates for "outliers" among the support points of the minimum volume bounding ellipsoid. ([Efficient algorithms](http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/cluster/html/ellipsoidhull.html) to find these points in fairly high dimensions, both exactly and approximately, were invented in a spate of papers in the 1970's because this problem is intimately connected with a question in experimental design.)
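In R, the linked `cluster::ellipsoidhull()` function can be used roughly as follows; this is only a sketch, and it assumes (per the package documentation) that the returned object exposes `loc`, `cov`, and `d2` describing the fitted ellipsoid.
```
library(cluster)
X <- as.matrix(my_numeric_data)      # placeholder data matrix
hull <- ellipsoidhull(X)
# Points whose generalized squared distance reaches the ellipsoid's d2
# lie on the boundary; these are the support points / outlier candidates.
d2 <- mahalanobis(X, center = hull$loc, cov = hull$cov)
X[d2 >= hull$d2 - 1e-7, ]
```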
| null | CC BY-SA 3.0 | null | 2010-08-23T22:41:05.283 | 2012-02-17T19:04:13.153 | 2012-02-17T19:04:13.153 | 919 | 919 | null |
2059 | 1 | null | null | 6 | 597 | It seems that MLE (via EM) is widely used in machine learning / statistics to learn the parameters of a mixture of Gaussians. I'm assuming we're given random samples from the mixture.
My question is: Are there any proven quantitative bounds on the error in terms of the number of samples (and perhaps the parameters of the Gaussian)?
For example, what is the runtime required to estimate the parameters up to a certain error?
Ideally, these bounds would not assume that we start in a local neighborhood of the optimal solution or any such thing. (If EM is not the method of choice and there is a better way of doing it, please point this out, as well.)
| Learning parameters of a mixture of Gaussian using MLE | CC BY-SA 2.5 | null | 2010-08-23T23:21:15.567 | 2011-03-27T16:01:57.583 | 2011-03-27T16:01:57.583 | 919 | 1072 | [
"estimation",
"normal-distribution",
"expectation-maximization",
"mixture-distribution"
] |
2060 | 2 | null | 2059 | 4 | null | EM essentially solves the maximum likelihood problem and therefore has the same properties w.r.t. sample sizes. EM for Gaussian mixture models is known to converge asymptotically to a local maximum and exhibits first order convergence (see [this paper](http://dspace.mit.edu/bitstream/handle/1721.1/7195/AIM-1520.pdf;jsessionid=DEB41C50ECD9DA8161DC80F7057BDCBE?sequence=2)).
BTW, there are some results which quantify how good the EM solution is in terms of the parameters of the data distribution. See [this paper](http://www.cs.caltech.edu/~schulman/Papers/em-uaif.pdf), which shows that the goodness depends on the separation of the mixture components (measured by their variances). A lot of papers have analyzed mixture models using this criterion.
If you don't want to use EM for mixture models, you can take a [fully Bayesian approach](http://research.microsoft.com/en-us/um/people/cmbishop/downloads/Bishop-robust-mixture-Neurocomputing-04.pdf).
| null | CC BY-SA 2.5 | null | 2010-08-24T00:27:18.560 | 2010-08-24T01:24:08.150 | 2010-08-24T01:24:08.150 | 881 | 881 | null |
2061 | 1 | null | null | 13 | 1711 | When lecturing statistics, it can be useful to incorporate the occasional short video.
My first thoughts included:
- Animations and visualisations of statistical concepts
- Stories regarding the application of a particular technique
- Humorous videos that have relevance to a statistical idea
- Interviews with a statistician or a researcher who uses statistics
Are there any videos that you use when teaching statistics or that you think would be useful?
Please provide:
- Link to the video online
- Description of content
- To what statistical topic the video might relate
| Short online videos that could assist a lecture on statistics | CC BY-SA 2.5 | null | 2010-08-24T02:48:08.290 | 2018-10-17T15:53:02.093 | 2010-09-17T08:49:22.640 | 183 | 183 | [
"teaching",
"references"
] |
2062 | 2 | null | 2061 | 11 | null | Link:
[https://yihui.name/animation/](https://yihui.name/animation/)
A big list of animation "clips" (GIFs and other formats) created with the "[animation](http://cran.r-project.org/web/packages/animation/index.html)" package for R, covering the following topics:
Topics (each with one or more animations):
- Theory of Probability
- Mathematical Statistics
- Sampling Survey
- Linear Models
- Multivariate Statistics
- Nonparametric Statistics (no videos yet)
- Computational Statistics
- Time Series Analysis
- Data Mining, Machine Learning
- Statistical Data Analysis
- R Graphics
- Package ''animation''
- Dynamic Graphics
Through the use of the package, the animations can be reproduced in various formats (such as GIF, SWF, and others).
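As a small illustrative example (not from the site), an animation can be built and saved with the package roughly like this; `saveGIF()` assumes an external tool such as ImageMagick is available for the conversion.
```
library(animation)
saveGIF({
  # A quick central-limit-theorem style animation: histograms of sample
  # means for increasing sample sizes.
  for (n in c(2, 10, 50, 200)) {
    means <- rowMeans(matrix(runif(n * 500), ncol = n))
    hist(means, main = paste("Means of", n, "uniform draws"), xlim = c(0, 1))
  }
}, movie.name = "clt_demo.gif", interval = 1)
```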
| null | CC BY-SA 4.0 | null | 2010-08-24T02:57:23.123 | 2018-10-17T15:53:02.093 | 2018-10-17T15:53:02.093 | 103520 | 253 | null |
2063 | 2 | null | 1873 | 1 | null | Perhaps I'm misunderstanding the OP, but shouldn't the test be for 8 successes (of not stubbing the fellow's toe) and 7 failures, for a total of 15 trials? And shouldn't the comparison be to a probability of 2/3 (2000 incidents of not stubbing a toe relative to the 1000 observed ascents)? This, of course, is taking the true population probability of stubbing a toe to be 1/3.
One way to permit both quantities to be random is to bootstrap. If stubbing is 1 and not stubbing is 0, draw (3000-15) observations with replacement from the total pool. Note that we exclude Joe from the total pool in order to compare him to others. Sum the observations (i.e., count the number of toe stubs) and divide by the total number of observations, 3000-15. Draw 15 observations from Joe's sample, sum, and divide by 15. Subtract Joe's proportion of stubs from the population proportion. Simulate many (1000, 10000 maybe) times, look at the 2.5 and 97.5 percentiles (for a 95% test against the null hypothesis that Joe's stub rate is the same as that of the population; a different level or one-sided test could also be used as desired) and see whether 0 is in that interval. If it is, then you cannot reject the null hypothesis that Joe is as clumsy as everyone else.
| null | CC BY-SA 2.5 | null | 2010-08-24T06:06:54.557 | 2010-08-24T06:06:54.557 | null | null | 401 | null |
2064 | 2 | null | 1998 | 5 | null | In the social sciences, we often call this issue "post treatment bias." If you are considering the effect of some treatment (your independent variable), including variables that arise after treatment (in a causal sense), then your estimate of the treatment effect can be biased. If you include these variables, then you are, in some sense, controlling for the impact of treatment. If treatment T causes outcome Y and other variable A and A causes Y, then controlling for A ignores the impact that T has on Y via A. This bias can be positive or negative.
In the social sciences, this can be especially difficult because A might cause T, which feeds back on A, and A and T both cause Y. For example, high GDP can lead to high levels of democratization (our treatment), which leads to higher GDP, and higher GDP and higher democratization both lead to less government corruption, say. Since GDP causes democratization, if we don't control for it, then we have an endogeneity issue or "omitted variables bias." But if we do control for GDP, we have post treatment bias. Other than use randomized trials when we can, there is little else that we can do to steer our ship between Scylla and Charybdis. Gary King talks about these issues as his nomination for Harvard's "Hardest Unsolved Problems in the Social Sciences" initiative [here](http://www.slideshare.net/HardProblemsSS/gary-king-hard-unsolved-problem-in-social-sciencepost-treatment-bias).
| null | CC BY-SA 2.5 | null | 2010-08-24T06:38:43.993 | 2010-08-24T06:38:43.993 | null | null | 401 | null |
2065 | 2 | null | 1844 | 3 | null | I thought I'd stumbled across a web site and reference that deals exactly with this question:
[http://www.graphpad.com/faq/viewfaq.cfm?faq=1226](http://www.graphpad.com/faq/viewfaq.cfm?faq=1226)
Start from "The two methods compared".
The site references the Bernstein paper that ars linked (above):
[http://www.jstor.org/stable/2530564?seq=1](http://www.jstor.org/stable/2530564?seq=1)
The site summarises Bernstein et al.'s results nicely, so I'll quote it:
> The two usually give identical (or nearly identical) results. But the results can differ when several subjects die at the same time or when the hazard ratio is far from 1.0. Bernstein and colleagues analyzed simulated data with both methods (1). In all their simulations, the assumption of proportional hazards was true. The two methods gave very similar values. The logrank method (which they refer to as the O/E method) reports values that are closer to 1.0 than the true Hazard Ratio, especially when the hazard ratio is large or the sample size is large.
>
> When there are ties, both methods are less accurate. The logrank methods tend to report hazard ratios that are even closer to 1.0 (so the reported hazard ratio is too small when the hazard ratio is greater than 1.0, and too large when the hazard ratio is less than 1.0). The Mantel-Haenszel method, in contrast, reports hazard ratios that are further from 1.0 (so the reported hazard ratio is too large when the hazard ratio is greater than 1.0, and too small when the hazard ratio is less than 1.0).
>
> They did not test the two methods with data simulated where the assumption of proportional hazards is not true. I have seen one data set where the two estimates of HR were very different (by a factor of three), and the assumption of proportional hazards was dubious for those data. It seems that the Mantel-Haenszel method gives more weight to differences in the hazard at late time points, while the logrank method gives equal weight everywhere (but I have not explored this in detail). If you see very different HR values with the two methods, think about whether the assumption of proportional hazards is reasonable. If that assumption is not reasonable, then of course the entire concept of a single hazard ratio describing the entire curve is not meaningful.
The site also refers to the data set in which "the two estimates of HR were very different (by a factor of three)", and suggests that the PH assumption is a key consideration.
Then I thought, "Who authored the site?" After a little searching I found it was Harvey Motulsky. So Harvey I've managed to reference you in answering your own question. You've become the authority!
Is the "problem dataset" a publicly available dataset?
| null | CC BY-SA 2.5 | null | 2010-08-24T06:49:15.703 | 2010-08-24T06:49:15.703 | null | null | 521 | null |
2066 | 1 | 2074 | null | 15 | 7972 | Is there a numerically stable way to calculate values of a [beta distribution](http://en.wikipedia.org/wiki/Beta_distribution) for large integer alpha, beta (e.g. alpha,beta > 1000000)?
Actually, I only need a 99% confidence interval around the mode, if that somehow makes the problem easier.
Add: I'm sorry, my question wasn't as clearly stated as I thought it was. What I want to do is this: I have a machine that inspects products on a conveyor belt. Some fraction of these products is rejected by the machine. Now if the machine operator changes some inspection setting, I want to show him/her the estimated reject rate and some hint about how reliable the current estimate is.
So I thought I would treat the actual reject rate as a random variable X, and calculate the probability distribution for that random variable based on the number of rejected objects N and accepted objects M. If I assume a uniform prior distribution for X, this is a beta distribution depending on N and M. I can either display this distribution to the user directly or find an interval [l,r] so that the actual reject rate is in this interval with p >= 0.99 (using shabbychef's terminology) and display this interval. For small M, N (i.e. immediately after the parameter change), I can calculate the distribution directly and approximate the interval [l,r]. But for large M,N, this naive approach leads to underflow errors, because x^N*(1-x)^M is too small to be represented as a double precision float.
I guess my best bet is to use my naive beta-distribution for small M,N and switch to a normal distribution with the same mean and variance as soon as M,N exceed some threshold. Does that make sense?
| How can I (numerically) approximate values for a beta distribution with large alpha & beta | CC BY-SA 2.5 | null | 2010-08-24T09:48:08.093 | 2023-02-11T14:22:18.387 | 2010-10-06T14:29:31.523 | 8 | 956 | [
"confidence-interval",
"algorithms",
"beta-distribution"
] |
2067 | 1 | 2138 | null | 14 | 5404 | I am currently finishing a paper and stumbled upon [this question from yesterday](https://stats.stackexchange.com/questions/2024/plotting-standard-errors) which led me to pose the same question to myself. Is it better to provide my graph with the actual standard error from the data or the one estimated from my ANOVA?
As the question from yesterday was rather unspecific and mine is pretty specific I thought it would be appropriate to pose this follow-up question.
Details:
I have run an experiment in some cognitive psychology domain (conditional reasoning) comparing two groups (inductive and deductive instructions, i.e., a between-subjects manipulation) with two within-subjects manipulations (problem type and content of the problem, each with two factor levels).
The results look like this (left panel with SE-estimates from the ANOVA Output, right panel with SEs estimated from the data):

Note that the different lines represent the two different groups (i.e., the between-subjects manipulation) and the within-subjects manipulations are plotted on the x-axis (i.e., the 2x2 factor levels).
In the text I provide the respective results of the ANOVA and even planned comparisons for the critical cross-over interaction in the middle. The SEs are there to give the reader some hint about the variability of the data. I prefer SEs over standard deviations and confidence intervals as it is not common to plot SDs and there are severe problems when comparing within- and between-subjects CIs (as the same surely applies to SEs, it is not so common to falsely infer significant differences from them).
To repeat my question: Is it better to plot the SEs estimated from the ANOVA or should I plot the SEs estimated from the raw data?
Update:
I think I should be a little clearer about what the estimated SEs are. The ANOVA output in SPSS gives me `estimated marginal means` with corresponding SEs and CIs. This is what is plotted in the left graph. As far as I understand this, they should be the SDs of the residuals. But when I save the residuals, their SDs are nowhere near the estimated SEs. So a secondary (potentially SPSS-specific) question would be:
What are these SEs?
---
UPDATE 2: I finally managed to write a R-function that should be able to make a plot as I finally liked it (see my accepted answer) on its own. If anybody has time, I would really appreciate if you could have a look at it. [Here it is.](http://www.psychologie.uni-freiburg.de/Members/singmann/R/rm.plot)
| Follow up: In a mixed within-between ANOVA plot estimated SEs or actual SEs? | CC BY-SA 2.5 | null | 2010-08-24T10:05:42.533 | 2012-02-21T08:49:36.340 | 2017-04-13T12:44:40.807 | -1 | 442 | [
"data-visualization",
"anova",
"mixed-model",
"standard-error"
] |
2068 | 2 | null | 2067 | 8 | null | You will not find a single reasonable error bar for inferential purposes with this type of experimental design. This is an old problem with no clear solution.
It seems impossible to have the estimated SEs you have here. There are two main kinds of error in such a design, the between-S and within-S error. They are usually very different from one another and not comparable. There really is no good single error bar to represent your data.
One might argue that the raw SEs or SDs from the data are most important in a descriptive rather than inferential sense. They either tell about the quality of the central tendency estimate (SE) or the variability of the data (SD). However, even then it's somewhat disingenuous because the thing you're testing and measuring within S is not that raw value but rather the effect of the within S variable. Therefore, reporting variability of the raw values is either meaningless or misleading with respect to within S effects.
I have typically endorsed no error bars on such graphs and adjacent effects graphs indicating the variability of the effects. One might have CIs on that graph that are perfectly reasonable. See Masson & Loftus (2003) for examples of the effects graphs. Simply eliminate their (pretty much completely useless) error bars around the mean values they show and just use the effect error bars.
For your study I'd first replot the data as the 2 x 2 x 2 design it is (2-panel 2x2) and then plot immediately adjacent a graph with confidence intervals of the validity, plausibility, instruction, and interaction effects. Put SDs and SEs for the instruction groups in a table or in the text.
(waiting for expected mixed effects analysis response ;) )
UPDATE: OK, after editing it's clear the only thing you want is an SE to be used to show the quality of the estimate of the value. In that case use your model values. Both values are based on a model and there is no 'true' value in your sample. Use the ones from the model you applied to your data. BUT, make sure you warn readers in the figure caption that these SEs have no inferential value whatsoever for your within S effects or interactions.
UPDATE2: Looking back at the data you did present... that looks suspiciously like percentages which shouldn't have been analyzed with ANOVA in the first place. Whether it is or isn't, it's a variable that maxes at 100 and has reduced variances at the extremes so it still shouldn't be analyzed with ANOVA. I do very much like your rm.plot plots. I'd still be tempted to do separate plots of the between conditions, showing the raw data, and within conditions showing the data with between S variability removed.
| null | CC BY-SA 3.0 | null | 2010-08-24T10:35:37.933 | 2012-02-21T08:49:36.340 | 2012-02-21T08:49:36.340 | 601 | 601 | null |
2069 | 2 | null | 2067 | 7 | null | This looks like a very nice experiment, so congratulations!
I agree with John Christie, it is a mixed model, but provided it can be properly specified in an ANOVA design (& is balanced) I don't see why it can't be so formulated. Two factors within and one factor between subjects, but the between-subjects factor (inductive/deductive) clearly interacts with (modifies) the within-subjects effects. I assume the plotted means are from the ANOVA model (LHS) and so the model is correctly specified. Well done - this is non-trivial!
Some points:
1) The "estimated" vs "actual" "error" is a false dichotomy. Both assume an underlying model and make estimates on that basis. If the model is reasonable, I would argue it is better to use the model-based estimates (they are based on the pooling of larger samples). But as James mentions, the errors differ depending on the comparison you are making, so no simple representation is possible.
2) I would prefer to see box-plots or individual data points plotted (if there are not too many), perhaps with some sideways jitter, so points with the same value can be distinguished.
[http://en.wikipedia.org/wiki/Box_plot](http://en.wikipedia.org/wiki/Box_plot)
3) If you must plot an estimate of the error of the mean, never plot SDs - they are an estimate of the standard deviation of the sample and relate to population variability, not a statistical comparison of means. It is generally preferable to plot 95% confidence intervals rather than SEs, but not in this case (see 1 and John's point).
4) The one issue with this data that concerns me is the assumption of uniform variance is probably violated as the "MP Valid and Plausible" data are clearly constrained by the 100% limit, especially for the deductive people. I'm tossing up in my own mind how important this issue is. Moving to a mixed-effects logit (binomial probability) is probably the ideal solution, but it's a hard ask. It might be best to let others answer.
| null | CC BY-SA 2.5 | null | 2010-08-24T11:21:58.323 | 2010-08-24T11:39:14.513 | 2010-08-24T11:39:14.513 | 521 | 521 | null |
2070 | 2 | null | 2067 | 3 | null | Lately I've been using mixed effects analysis, and in attempting to develop an accompanying visual data analysis approach I've been using bootstrapping ([see my description here](https://stat.ethz.ch/pipermail/r-sig-mixed-models/2010q3/004369.html)), which yields confidence intervals that are not susceptible to the within-versus-between troubles of conventional CIs.
Also, I would avoid mapping multiple variables to the same visual aesthetic, as you have done in the graph above; you have 3 variables (MP/AC, valid/invalid, plausible/implausible) mapped to the x-axis, which makes it rather difficult to parse the design and patterns. I would suggest instead mapping, say, MP/AC to the x-axis, valid/invalid to facet columns, and plausible/implausible to facet rows. Check out ggplot2 in R to easily achieve this, eg:
```
library(ggplot2)
ggplot(
data = my_data
, mapping = aes(
y = mean_endorsement
, x = mp_ac
, linetype = deductive_inductive
, shape = deductive_inductive
    )
)+
geom_point()+
geom_line()+
facet_grid(
plausible_implausible ~ valid_invalid
)
```
| null | CC BY-SA 2.5 | null | 2010-08-24T11:32:58.983 | 2010-08-24T11:32:58.983 | null | null | 364 | null |
2071 | 2 | null | 2066 | 3 | null | A quick graphical experiment suggests that the beta distribution looks very like a normal distribution when alpha and beta are both very large. By googling "beta distribution limit normal" i found [http://nrich.maths.org/discus/messages/117730/143065.html?1200700623$^\dagger$](http://nrich.maths.org/discus/messages/117730/143065.html?1200700623%7B26a9031d-e855-4a07-858a-f00913e11d4a%7D), which gives a handwaving 'proof'.
The wikipedia page for the beta distribution gives its mean, mode (v close to mean for large alpha and beta) and variance, so you could use a normal distribution with the same mean & variance to get an approximation. Whether it's a good enough approximation for your purposes depends on what your purposes are.
---
$^\dagger$ The link is broken.
| null | CC BY-SA 4.0 | null | 2010-08-24T12:38:01.413 | 2023-02-11T10:26:41.770 | 2023-02-11T10:26:41.770 | 362671 | 449 | null |
2072 | 1 | null | null | 17 | 1633 | It isn't clear to me how to calculate cointegration with irregular time series (ideally using [the Johansen test](http://en.wikipedia.org/wiki/Johansen_test) with VECM). My initial thought would be to regularize the series and interpolate missing values, although that may bias the estimation.
Is there any literature on this subject?
| Does a cointegration model exist for irregularly spaced time series? | CC BY-SA 2.5 | null | 2010-08-24T13:07:33.350 | 2019-07-20T17:32:42.237 | 2019-07-20T17:32:42.237 | 11887 | 5 | [
"time-series",
"cointegration",
"unevenly-spaced-time-series"
] |
2073 | 2 | null | 2066 | 2 | null | I am going to infer that you want an interval $[l,r]$ such that a random draw from the Beta RV is in the interval with probability 0.99, with bonus points for $l$ and $r$ being symmetric around the mode. By [Gauss' Inequality](http://en.wikipedia.org/wiki/Gauss%27s_inequality) or the Vysochanskii-Petunin inequality, you can construct intervals that contain the interval $[l,r]$, and would be fairly decent approximations. For sufficiently large $\alpha, \beta$, you will have numerical underflow issues in even representing $l$ and $r$ as distinct numbers, so this route may be good enough.
| null | CC BY-SA 2.5 | null | 2010-08-24T15:06:00.023 | 2010-08-24T15:06:00.023 | null | null | 795 | null |
2074 | 2 | null | 2066 | 13 | null | A Normal approximation works extremely well, especially in the tails. Use a mean of $\alpha/(\alpha+\beta)$ and a variance of $\frac{\alpha\beta}{(\alpha+\beta)^{2} (1+\alpha+\beta)}$. For example, the absolute relative error in the tail probability in a tough situation (where skewness might be of concern) such as $\alpha = 10^6, \beta = 10^8$ peaks around $0.00026$ and is less than $0.00006$ when you're more than 1 SD from the mean. (This is not because beta is so large: with $\alpha = \beta = 10^6$, the absolute relative errors are bounded by $0.0000001$.) Thus, this approximation is excellent for essentially any purpose involving 99% intervals.
In light of the edits to the question, note that one does not compute beta integrals by actually integrating the integrand: of course you'll get underflows (although they don't really matter, because they don't contribute appreciably to the integral). There are many, many ways to compute the integral or approximate it, as documented in Johnson & Kotz (Distributions in Statistics). An online calculator is found [here](https://web.archive.org/web/20110218021000/http://www.danielsoper.com/statcalc/calc37.aspx). You actually need the inverse of this integral. Some methods to compute the inverse are documented on the Mathematica site at [http://functions.wolfram.com/GammaBetaErf/InverseBetaRegularized/](http://functions.wolfram.com/GammaBetaErf/InverseBetaRegularized/). Code is provided in [Numerical Recipes](http://numerical.recipes/book/book.html). A really nice online calculator is the Wolfram Alpha site ([www.wolframalpha.com](http://www.wolframalpha.com) ): enter `inverse beta regularized (.005, 1000000, 1000001)` for the left endpoint and `inverse beta regularized (.995, 1000000, 1000001)` for the right endpoint ($\alpha=1000000, \beta=1000001$, 99% interval).
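If it helps, here is a small R sketch (my own illustration, not part of the references above) comparing the exact beta quantiles with the normal approximation for the example parameters:
```
a <- 1e6; b <- 1e6 + 1
exact  <- qbeta(c(0.005, 0.995), a, b)
m <- a / (a + b)
s <- sqrt(a * b / ((a + b)^2 * (a + b + 1)))
approx <- qnorm(c(0.005, 0.995), mean = m, sd = s)
rbind(exact, approx)   # the two 99% intervals should agree very closely
```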
| null | CC BY-SA 4.0 | null | 2010-08-24T15:43:19.670 | 2023-02-11T10:24:33.733 | 2023-02-11T10:24:33.733 | 362671 | 919 | null |
2075 | 2 | null | 927 | 3 | null | I also just realized that [Freakonomics has a podcast](http://itunes.apple.com/us/podcast/freakonomics-radio/id354668519).
| null | CC BY-SA 2.5 | null | 2010-08-24T16:17:53.823 | 2010-08-24T16:17:53.823 | null | null | 253 | null |
2076 | 2 | null | 924 | 1 | null | If you wanted to check non-parametrically for significance, you could bootstrap the confidence intervals on the ratio, or you could do a permutation test on the two classes. For example, to do the bootstrap, create two arrays: one with 3 ones and 999,997 zeros, and one with 10 ones and 999,990 zeros. Then draw with replacement a sample of 1m items from the first population and a sample of 1m items from the second population. The quantity we're interested in is the ratio of the proportion of "hits" in the first group to the proportion of "hits" in the second group, i.e., (proportion of ones in the first sample) / (proportion of ones in the second sample). We do this 1000 times. I don't have MATLAB handy, but here's the R code to do it:
```
# generate the test data to sample from
v1 <- c(rep(1,3),rep(0,999997))
v2 <- c(rep(1,10),rep(0,999990))
# set up the vectors that will hold our proportions
t1 <- vector()
t2 <- vector()
# loop 1000 times each time sample with replacement from the test data and
# record the proportion of 1's from each sample
# note: this step takes a few minutes. There are ways to write it such that
# it will go faster in R (applies), but it's more obvious what's going on this way:
for(i in 1:1000) {
t1[i] <- length(which(sample(v1,1000000,replace=TRUE)==1)) / 1000000
t2[i] <- length(which(sample(v2,1000000,replace=TRUE)==1)) / 1000000
}
# what was the ratio of the proportion of 1's between each group for each random draw?
ratios <- t1 / t2
# grab the 95% confidence interval over the bootstrapped samples
quantile(ratios,c(.05,.95))
# and the 99% confidence interval
quantile(ratios,c(.01,.99))
```
The output is:
5% 95%
0.0000000 0.8333333
and:
1% 99%
0.00 1.25
Since the 95% confidence interval doesn't overlap the null hypothesis (1), but the 99% confidence interval does, I believe that it would be correct to say that this is significant at an alpha of .05 but not at .01.
Another way to look at it is with a permutation test to estimate the distribution of ratios given the null hypothesis. In this case you'd mix the two samples together and randomly divide it into two 1,000,000 item groups. Then you'd see what the distribution of ratios under the null hypothesis looks like, and your empirical p-value is how extreme the true ratio is given this distribution of null ratios. Again, the R code:
```
# generate the test data to sample from
v1 <- c(rep(1,3),rep(0,999997))
v2 <- c(rep(1,10),rep(0,999990))
v3 <- c(v1,v2)
# vectors to hold the null hypothesis ratios
t1 <- vector()
t2 <- vector()
# loop 1000 times; each time randomly divide the samples
# into 2 groups and see what those two random groups' proportions are
for(i in 1:1000) {
idxs <- sample(1:2000000,1000000,replace=FALSE)
s1 <- v3[idxs]
s2 <- v3[-idxs]
t1[i] <- length(which(s1==1)) / 1000000
t2[i] <- length(which(s2==1)) / 1000000
}
# vector of the ratios
ratios <- t1 / t2
# take a look at the distribution
plot(density(ratios))
# calculate the sampled ratio of proportions
sample.ratio <- ((3/1000000)/(10/1000000))
# where does this fall on the distribution of null proportions?
abline(v=sample.ratio)
# this ratio (r+1)/(n+1) gives the p-value of the true sample
(length(which(ratios <= sample.ratio)) + 1) / (1001)
```
The output is ~ .0412 (of course this will vary run to run since it's based on random draws). So again, you could potentially call this significant at the .05 value.
I should issue the caveats: it depends too on how your data was collected and the type of study, and I'm just a grad student so don't take my word as gold. If anyone has any criticism of my methods I'd love to hear them since I'm doing this stuff for my own work as well and I'd love to find out the methods are flawed here rather than in peer review. For more stuff like this check out Efron & Tibshirani 1993, or chapter 14 of Introduction to the Practice of Statistics by David Moore (a good general textbook for practitioners).
| null | CC BY-SA 2.5 | null | 2010-08-24T17:08:00.637 | 2010-08-24T17:08:00.637 | null | null | 1076 | null |
2077 | 1 | 2080 | null | 51 | 61162 | Besides taking differences, what are other techniques for making a non-stationary time series, stationary?
Ordinarily one refers to a series as "[integrated of order p](http://en.wikipedia.org/wiki/Order_of_integration)" if it can be made stationary through application of the lag operator, $(1-L)^p X_t$.
| How to make a time series stationary? | CC BY-SA 2.5 | null | 2010-08-24T18:40:42.633 | 2021-11-14T21:32:30.737 | null | null | 5 | [
"time-series",
"stationarity"
] |
2078 | 2 | null | 1927 | 10 | null | The [geometric median](http://en.wikipedia.org/wiki/Geometric_median) is the point with the smallest average Euclidean distance from the samples.
| null | CC BY-SA 2.5 | null | 2010-08-24T19:25:14.850 | 2010-08-24T19:25:14.850 | null | null | 511 | null |
2079 | 1 | null | null | 3 | 118 | There seems to be a large body of applied research where the distribution q is picked to minimize [KL(q,p)](http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence), where p is the empirical distribution. Are there theoretical reasons to prefer this estimator? For instance, a theoretical reason to prefer the MLE for estimating the mean of a one-dimensional Gaussian with mean between -1 and 1 is that it is admissible, and also that, as the sample size goes to infinity, it minimizes the maximum risk and the Bayes risk.
| What's good about I-projections? | CC BY-SA 2.5 | null | 2010-08-24T20:09:45.430 | 2018-10-07T20:31:51.977 | 2018-10-07T20:31:51.977 | 11887 | 511 | [
"estimation",
"kullback-leibler"
] |
2080 | 2 | null | 2077 | 23 | null | De-trending is fundamental. This includes regressing against covariates other than time.
Seasonal adjustment is a version of taking differences but could be construed as a separate technique.
Transformation of the data implicitly converts a difference operator into something else; e.g., differences of the logarithms are actually ratios.
Some EDA smoothing techniques (such as removing a moving median) could be construed as non-parametric ways of detrending. They were used as such by Tukey in his book on EDA. Tukey continued by detrending the residuals and iterating this process for as long as necessary (until he achieved residuals that appeared stationary and symmetrically distributed around zero).
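A few of these ideas, sketched in R for a hypothetical `ts` object `x` (illustrative only):
```
detrended  <- residuals(lm(x ~ time(x)))   # remove a linear time trend
deseason   <- x - decompose(x)$seasonal    # subtract a seasonal component (needs frequency(x) > 1)
log_diff   <- diff(log(x))                 # differences of logs, i.e. log ratios
smooth_out <- x - runmed(x, k = 11)        # remove a running median (EDA-style detrending)
```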
| null | CC BY-SA 2.5 | null | 2010-08-24T21:27:09.350 | 2010-08-24T21:27:09.350 | null | null | 919 | null |
2081 | 2 | null | 1681 | 2 | null | > in that notation, I'm looking for epsilon as a function of m
If I have not misunderstood, this indicates you want to solve an equation in $q~ (= p +\epsilon)$ of the form
$$q \log(p/q) + (1-q) \log((1-p)/(1-q)) = D(q\,\Vert\, p) = y;\ y \lt 0,$$
given $p$ and $y.$ ($y = \log(p)/m$ reveals the $m$-dependence.)
It is correct that we do not have a name for the solution, but that does not mean it cannot be found, and fairly easily at that. A little algebra converts this equation into one of the form
$$H(q) = \alpha + \beta\,q$$ where
$$H(q) = -q \log(q) - (1-q) \log(1-q) $$ and $\alpha$ and $\beta$ depend only on $y$ ("probability of deviation") and $m$ ("sample size").
Geometrically this asks for the intersections of a concave downward curve and a line; the curve has endpoints at $(0,0)$ and $(1,0)$ and is symmetric in the unit interval. There will therefore be up to two solutions. Newton-Raphson ought to converge rapidly after an initial set of bisection steps finds a point between 0 and the left root and another between the right root and 1 (assuming either exists).
If you need them, theoretical properties of the solution(s) could be readily derived from the definitions of $H,$ $\alpha,$ and $\beta.$
| null | CC BY-SA 4.0 | null | 2010-08-24T22:19:51.203 | 2023-02-11T12:54:06.857 | 2023-02-11T12:54:06.857 | 919 | 919 | null |
2082 | 2 | null | 2077 | 5 | null | Difference with another series; e.g., Brent oil prices are not stationary, but the Brent vs. light sweet crude spread might be. A riskier proposition for forecasting is to bet on the existence of a cointegration relationship with another time series.
| null | CC BY-SA 4.0 | null | 2010-08-24T22:38:15.263 | 2021-11-14T21:32:30.737 | 2021-11-14T21:32:30.737 | 603 | 603 | null |
2084 | 2 | null | 1469 | 3 | null | [Asymptotic Statistics](https://www.cambridge.org/core/books/asymptotic-statistics/A3C7DAD3F7E66A1FA60E9C8FE132EE1D) by A. W. van der Vaart seems to serve the purpose.
| null | CC BY-SA 4.0 | null | 2010-08-25T00:46:17.397 | 2022-11-29T18:35:55.963 | 2022-11-29T18:35:55.963 | 362671 | 511 | null |
2085 | 1 | 2115 | null | 6 | 2092 | I am modelling a nested mixed-effects model with just the intercept in the random part, of the form:
```
fit4<-lme(fixed = Stdquad~factor(LayoutN)+factor(nCHIPS.fixed), random = ~1|Class.Ordered/student)
```
When trying to check the assumption on the independent, identically distributed (iid) random effects in SPlus using `ranef()`, I keep getting the error:
```
Problem in sort.list: ordering not defined for mode "list": sort.list(x, partial)
```
I suspect the nesting is the problem since if I remove student, the plots come up fine.
I would like to know how I can check that the random effects are iid and are independent for different groups using S-Plus (or R since its very similar).
| Checking assumptions for random effects in nested mixed-effects models in R / S-Plus | CC BY-SA 2.5 | null | 2010-08-25T01:57:42.433 | 2013-07-15T23:20:40.837 | 2013-07-15T23:20:40.837 | 22468 | 108 | [
"r",
"mixed-model",
"random-effects-model",
"error-message"
] |
2086 | 1 | 2087 | null | 12 | 9538 | I'm looking for correlations between the answers to different questions in a survey ("umm, let's see if answers to question 11 correlate with those of question 78"). All answers are categorical (most of them range from "very unhappy" to "very happy"), but a few have a different set of answers. Most of them can be considered ordinal so let's consider this case here.
Since I don't have access to a commercial statistics program, I must use R.
I tried [Rattle](http://rattle.togaware.com/) (a freeware data mining package for R, very nifty) but unfortunately it doesn't support categorical data. One hack I could use is to import in R the coded version of the survey which has numbers (1..5) instead of "very unhappy" ... "happy" and let Rattle believe they are numerical data.
I was thinking of doing a scatter plot with the dot size proportional to the number of observations for each pair of values. After some googling I found [http://www.r-statistics.com/2010/04/correlation-scatter-plot-matrix-for-ordered-categorical-data/](http://www.r-statistics.com/2010/04/correlation-scatter-plot-matrix-for-ordered-categorical-data/) but it seems very complicated (to me).
I'm not a statistician (but a programmer) but have had some reading in the matter and, if I understand correctly, Spearman's rho would be appropriate here.
So the short version of the question for those in a hurry: is there a way to quickly plot
Spearman's rho in R? A plot is preferable to a matrix of numbers because it's easier to eyeball and can also be included in materials.
Thank you in advance.
PS I pondered for a while whether to post this on the main SO site or here. After searching both sites for R correlation, I felt this site is better suited for the question.
| Quickly evaluate (visually) correlations between ordered categorical data in R? | CC BY-SA 2.5 | null | 2010-08-25T03:16:23.030 | 2010-08-25T04:50:17.533 | 2010-08-25T04:05:22.290 | 840 | 840 | [
"r",
"correlation",
"categorical-data",
"data-visualization"
] |
2087 | 2 | null | 2086 | 19 | null | Another good visualization of correlation is offered by the [corrplot](http://cran.r-project.org/web/packages/corrplot/index.html) package, giving you things like this:

It is a great package.
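For the ordinal survey items in the question, a minimal usage sketch might look like this (the data frame name is a placeholder, and the items are assumed to be numerically coded 1..5):
```
library(corrplot)
M <- cor(survey_items, method = "spearman", use = "pairwise.complete.obs")
corrplot(M)   # the default circle plot; see ?corrplot for other display methods
```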
Also have a look at the [answer here](https://stats.stackexchange.com/questions/1961/can-you-use-normal-correlation-for-vectors-with-only-2-or-3-ordered-levels), it might be good for you to know.
Lastly, if you have suggestions how the code on the post you referred to could be simpler - please let me know.
| null | CC BY-SA 2.5 | null | 2010-08-25T03:32:04.410 | 2010-08-25T03:32:04.410 | 2017-04-13T12:44:49.837 | -1 | 253 | null |
2088 | 2 | null | 2086 | 3 | null | A couple of additional plotting ideas are:
- Sunflower plot
- Scatter plot with a jitter using base graphics or ggplot2
| null | CC BY-SA 2.5 | null | 2010-08-25T04:50:17.533 | 2010-08-25T04:50:17.533 | null | null | 183 | null |
2089 | 2 | null | 1595 | 97 | null | First, let me say I agree with John D Cook's answer: Python is not a Domain Specific Language like R, and accordingly, there is a lot more you'll be able to do with it further down the road. Of course, R being a DSL means that the latest algorithms published in JASA will almost certainly be in R. If you are doing mostly ad hoc work and want to experiment with the latest lasso regression technique, say, R is hard to beat. If you are doing more production analytical work, integrating with existing software and environments, and concerned about speed, extensibility and maintainability, Python will serve you much better.
Second, ars gave a great answer with good links. Here are a few more packages that I view as essential to analytical work in Python:
- matplotlib for beautiful, publication quality graphics.
- IPython for an enhanced, interactive Python console. Importantly, IPython provides a powerful framework for interactive, parallel computing in Python.
- Cython for easily writing C extensions in Python. This package lets you take a chunk of computationally intensive Python code and easily convert it to a C extension. You'll then be able to load the C extension like any other Python module but the code will run very fast since it is in C.
- PyIMSL Studio for a collection of hundreds of mathematical and statistical algorithms that are thoroughly documented and supported. You can call the exact same algorithms from Python and C, with nearly the same API and you'll get the same results. Full disclosure: I work on this product, but I also use it a lot.
- xlrd for reading in Excel files easily.
If you want a more MATLAB-like interactive IDE/console, check out [Spyder](http://code.google.com/p/spyderlib/), or the [PyDev](http://pydev.org/) plugin for [Eclipse](http://www.eclipse.org/).
| null | CC BY-SA 2.5 | null | 2010-08-25T05:19:46.727 | 2010-08-25T17:42:00.907 | 2010-08-25T17:42:00.907 | 1080 | 1080 | null |
2090 | 2 | null | 2061 | 6 | null | The ASA Sections on Statistical Computing and Statistical Graphics have a video library:
- http://stat-graphics.org/movies/
It contains a large number of interesting videos relevant to statistical computing and graphics. The videos go back as far as the 1960s.
| null | CC BY-SA 2.5 | null | 2010-08-25T05:38:05.883 | 2010-08-25T05:38:05.883 | null | null | 183 | null |
2091 | 1 | 2103 | null | 4 | 1861 | X is a uniform random variable on [0,1] and Y=1-X. How do I calculate the joint distribution function F(x,y)? I can see that Y is also uniformly distributed and can draw the intervals. But I am unable to compute the distribution function when x+y>1 and x,y in [0,1]. How do I compute it for this specific case?
| Computing probability distribution function for uniform random variables and Y=1-X | CC BY-SA 2.5 | null | 2010-08-25T08:22:06.220 | 2010-10-14T14:44:02.327 | 2010-10-14T02:31:58.780 | 919 | 862 | [
"distributions",
"probability",
"copula"
] |
2092 | 1 | 2094 | null | 105 | 255432 | The waiting times for the Poisson distribution follow an exponential distribution with parameter lambda. But I don't understand this. Poisson models the number of arrivals per unit of time, for example. How is this related to the exponential distribution? Let's say the probability of k arrivals in a unit of time is P(k) (modeled by the Poisson distribution) and the probability of k+1 arrivals is P(k+1); how does the exponential distribution model the waiting time between them?
| Relationship between poisson and exponential distribution | CC BY-SA 2.5 | null | 2010-08-25T08:33:14.610 | 2020-05-27T02:16:09.500 | 2010-09-18T21:58:05.727 | 930 | 862 | [
"distributions",
"poisson-distribution",
"exponential-distribution"
] |
2093 | 1 | 2097 | null | 3 | 4780 | I've implemented the [Marsaglia polar method](http://en.wikipedia.org/wiki/Marsaglia_polar_method) to generate random numbers that are normally distributed. Unfortunately the code as shown on this [website](http://www.taygeta.com/random/gaussian.html) returns numbers that are not within the range [0, 1).
Code:
```
double x1, y1, w;
do {
    x1 = 2.0 * ranf() - 1.0;
    y1 = 2.0 * ranf() - 1.0;
    w = x1 * x1 + y1 * y1;
} while ( w >= 1.0 || w == 0 );
w = sqrt( (-2.0 * log( w ) ) / w );  /* log() is the natural logarithm */
return x1 * w;
```
where ranf() returns values [0, 1).
For example, if (x1, y1) was (-0.43458512585358, -0.07521858582050478), this returns a value of -1.7830255550765148. Obviously not a value within my expected range.
I've seen some implementations that multiplies this by the standard deviation and adds the mean. But if I want to get numbers that range from [0, 1), what should I use as my standard deviation and mean? currently, I'm using a standard deviation of 1/8.125 ~= 0.121212... and 0.5 as the mean, but I've only stumbled on this by experimentation. What is the official way for me to get this appropriately ranged?
To answer the question in the comments below: I want to generate normal random variables within the range 0 and 1. Obviously I'd like the mean to be at 0.5 and values to be distributed around this mean.
More for my enlightenment, will the polar method really return numbers over the entire real line? If so, then it should be easy for me to scale the result down to [0, 1), assuming I don't hit floating point underflow due to the division to scale things down.
New edit:
What I'm trying to do is take Aniko's advice in his answer to my other [Power Factor Problem question](https://stats.stackexchange.com/questions/1687/applied-statistics-to-find-the-minimum-load-for-power-factor-floor). So my first thought, to simulate my distribution of velocities, was to implement my own GaussianRandom class by deriving from [Random](http://msdn.microsoft.com/en-us/library/ts6se2ek.aspx) and overriding [Random.Sample()](http://msdn.microsoft.com/en-us/library/system.random.sample.aspx) and the 3 other methods, as recommended by MSDN. Since Random.Sample() is supposed to return a double in the range [0, 1), I've hit this stumbling block.
I've come to the same conclusion as mbq's answer after staring at the ln(x) graph where x is in (0, 1] and sqrt(x) where x is in (0, 2].
From a practical/programming standpoint, I can't override Random.Sample(), because later when I actually call GaussianRandom.NextDouble(), there is no way for me to pass in the actual standard deviations for my velocities. It looks like I'll have to use the Marsaglia polar method in the raw instead of wrapped within the Random class. My "cartridge" generator will have to `return x1 * w * stdDev + mean`.
| How to make Marsaglia polar method return values [0, 1)? | CC BY-SA 2.5 | null | 2010-08-25T09:32:22.513 | 2010-08-25T21:15:38.573 | 2017-04-13T12:44:33.977 | -1 | 937 | [
"random-generation",
"normal-distribution"
] |
2094 | 2 | null | 2092 | 99 | null | I will use the following notation to be as consistent as possible with the wiki (in case you want to go back and forth between my answer and the wiki definitions for the [poisson](http://en.wikipedia.org/wiki/Poisson_distribution) and [exponential](http://en.wikipedia.org/wiki/Exponential_distribution).)
$N_t$: the number of arrivals during time period $t$
$X_t$: the time it takes for one additional arrival to arrive assuming that someone arrived at time $t$
By definition, the following conditions are equivalent:
$ (X_t > x) \equiv (N_t = N_{t+x})$
The event on the left captures the event that no one has arrived in the time interval $[t,t+x]$ which implies that our count of the number of arrivals at time $t+x$ is identical to the count at time $t$ which is the event on the right.
By the complement rule, we also have:
$P(X_t \le x) = 1 - P(X_t > x)$
Using the equivalence of the two events that we described above, we can re-write the above as:
$P(X_t \le x) = 1 - P(N_{t+x} - N_t = 0)$
But,
$P(N_{t+x} - N_t = 0) = P(N_x = 0)$
Using the Poisson pmf, where $\lambda$ is the average number of arrivals per time unit and $x$ is a quantity of time units, the above simplifies to:
$P(N_{t+x} - N_t = 0) = \frac{(\lambda x)^0}{0!}e^{-\lambda x}$
i.e.
$P(N_{t+x} - N_t = 0) = e^{-\lambda x}$
Substituting in our original eqn, we have:
$P(X_t \le x) = 1 - e^{-\lambda x}$
The above is the cdf of an exponential distribution.
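A quick simulation sketch (my addition, not part of the derivation) that illustrates the result in R: conditional on the total count, the arrival times of a Poisson process are uniform, and the resulting inter-arrival times behave like Exponential($\lambda$) draws.
```
set.seed(1)
lambda <- 2; T_end <- 5000
n <- rpois(1, lambda * T_end)          # number of arrivals in [0, T_end]
arrivals <- sort(runif(n, 0, T_end))   # arrival times of the Poisson process
waits <- diff(arrivals)                # waiting times between successive arrivals
ks.test(waits, "pexp", rate = lambda)  # should be consistent with Exp(lambda)
```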
| null | CC BY-SA 3.0 | null | 2010-08-25T09:43:51.450 | 2017-08-10T23:14:14.070 | 2017-08-10T23:14:14.070 | 44024 | null | null |
2095 | 2 | null | 2091 | 4 | null | $F(x,y) = P(X \le x, Y \le y)$
Using the fact that $Y= 1-X$ and after simplifying, the above can be re-written as:
$F(x,y) = P(1-y \le X \le x)$
Since, $X$ is a uniform between 0 and 1, it follows that:
$F(x,y) = x+y-1$
Update
The above is a bit sloppy as I did not specify the domain of the cdf where it is non-zero appropriately. Consider:
$F(x,y) = P(1-y \le X \le x)$
Thus,
$F(x,y) = x+y-1$
Two points about the above cdf:
- For the above cdf to be non-zero it must be the case that:
$x > 1-y$
- Also, note that the cdf attains the value of 1 when $x+y=2$.
Thus, the correct way to represent the cdf is:
$F(x,y) = 0$ if $x+y \le 1$
$F(x,y) = x+y-1$ if $1 \le x+y \le 2$
$F(x,y) = 1$ otherwise
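A quick Monte Carlo check of this piecewise form (a sketch in R; the particular test point is arbitrary but chosen so that $1 \le x+y \le 2$):

```
set.seed(1)
x <- 0.7; y <- 0.6              # arbitrary point with 1 <= x + y <= 2
X <- runif(1e6); Y <- 1 - X
mean(X <= x & Y <= y)           # empirical P(X <= x, Y <= y)
x + y - 1                       # the cdf derived above (0.3 here)
```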
| null | CC BY-SA 2.5 | null | 2010-08-25T10:01:31.670 | 2010-10-14T14:44:02.327 | 2020-06-11T14:32:37.003 | -1 | null | null |
2096 | 2 | null | 2092 | 57 | null | For a Poisson process, hits occur at random independent of the past, but with a known long term average rate $\lambda$ of hits per time unit. The Poisson distribution would let us find the probability of getting some particular number of hits.
Now, instead of looking at the number of hits, we look at the random variable $L$ (for Lifetime), the time you have to wait for the first hit.
The probability that the waiting time is more than a given time value is
$P(L \gt t) = P(\text{no hits in time t})=\frac{\Lambda^0e^{-\Lambda}}{0!}=e^{-\lambda t}$ (by the Poisson distribution, where $\Lambda = \lambda t$).
$P(L \le t) = 1 - e^{-\lambda t}$ (the cumulative distribution function).
We can get the density function by taking the derivative of this:
$$f(t) =
\begin{cases}
\lambda e^{-\lambda t} & \mbox{for } t \ge 0 \\ 0 & \mbox{for } t \lt 0
\end{cases}$$
Any random variable that has a density function like this is said to be exponentially distributed.
| null | CC BY-SA 4.0 | null | 2010-08-25T10:58:52.970 | 2019-10-12T11:50:39.457 | 2019-10-12T11:50:39.457 | 7290 | 339 | null |
2097 | 2 | null | 2093 | 7 | null | It is just impossible, since normal distribution has nonzero probability for the whole $R$. Of course because of the numerical issues it would be not the case for Marsaglia method (this obviously means that it is not creating normally distributed numbers), yet this numerical range is so huge that it is nonsense to scale it to [0,1].
EDIT: So, as long as I understand, you want this number to be in [0,1) to overload .net Random; you don't need to! Write your own class, say NormalRandom which will hold one Random object and will use its output to generate (using Marsaglia) normal random numbers from $-\infty$ to $\infty$ and just use them. Or rather use something already done; Google found [http://www.codeproject.com/KB/recipes/Random.aspx](http://www.codeproject.com/KB/recipes/Random.aspx) .
On margin, remember that when it is said that something which cannot be negative is "normally distributed" (for instance mass) it does mean that it is not from normal but from "almost-normal" distribution which looks pretty normal, but still has one tail cut out (nevertheless simple rejection of negative outputs from generator is a fair way to deal with it).
| null | CC BY-SA 2.5 | null | 2010-08-25T11:01:38.983 | 2010-08-25T21:15:38.573 | 2010-08-25T21:15:38.573 | null | null | null |
2098 | 2 | null | 2093 | 2 | null | Based on the update you want to generate truncated normal random variables. Scaling based on the maximum and minimum will 'destroy' the normal distribution properties of the draws. Instead, you should consider using one of the two methods described below:
[Rejection Sampling](http://en.wikipedia.org/wiki/Rejection_sampling)
You draw from the standard normal and then accept the draw only if it is between 0 and 1.
[Inverse Transform Sampling](http://en.wikipedia.org/wiki/Inverse_transform_sampling)
Let:
$\phi(x)$ be the standard normal cdf.
Then a standard normal draw that is restricted to lie between 0 and 1 is given by:
$x = \phi^{-1}(\phi(0) + (\phi(1) - \phi(0)) U )$
where
$U$ is a uniform random draw between 0 and 1.
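A sketch of both approaches in R (the function names are my own; `qnorm` and `pnorm` play the roles of $\phi^{-1}$ and $\phi$):

```
set.seed(1)
n <- 1e4

# Rejection sampling: draw standard normals, keep those falling in [0, 1]
rtnorm_reject <- function(n) {
  out <- numeric(0)
  while (length(out) < n) {
    z <- rnorm(n)
    out <- c(out, z[z >= 0 & z <= 1])
  }
  out[1:n]
}

# Inverse transform: x = Phi^{-1}( Phi(0) + (Phi(1) - Phi(0)) * U )
rtnorm_inverse <- function(n) {
  u <- runif(n)
  qnorm(pnorm(0) + (pnorm(1) - pnorm(0)) * u)
}

range(rtnorm_reject(n))   # all draws lie in [0, 1]
range(rtnorm_inverse(n))
```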
| null | CC BY-SA 2.5 | null | 2010-08-25T12:25:39.347 | 2010-08-25T12:25:39.347 | null | null | null | null |
2099 | 1 | 2100 | null | 7 | 4742 | Is there any free desktop software for [treemapping](http://en.wikipedia.org/wiki/Treemapping)? [Every one I've found](http://en.wikipedia.org/wiki/List_of_treemapping_software) appears to have a commercial license. [Many Eyes](http://manyeyes.alphaworks.ibm.com/manyeyes/) is free and has treemapping features but what I'm really looking for is downloadable desktop software.
Thanks in advance.
| Free treemapping software | CC BY-SA 2.5 | null | 2010-08-25T13:10:22.887 | 2013-10-02T05:11:19.890 | null | null | 666 | [
"data-visualization",
"software"
] |
2100 | 2 | null | 2099 | 5 | null | [Flowing Data has a tutorial](http://flowingdata.com/2010/02/11/an-easy-way-to-make-a-treemap/) on how to use the `map.market` function in the `portfolio` package in R.
| null | CC BY-SA 2.5 | null | 2010-08-25T13:15:48.387 | 2010-08-25T13:15:48.387 | null | null | 183 | null |
2101 | 2 | null | 2099 | 6 | null | Protovis has an [nice treemap layout](http://vis.stanford.edu/protovis/ex/treemap.html). I could try to add this into [webvis](http://cran.r-project.org/web/packages/webvis/index.html) if you want to create it from R, but it isn't currently an option.
| null | CC BY-SA 2.5 | null | 2010-08-25T13:31:32.787 | 2010-08-25T13:31:32.787 | null | null | 5 | null |
2102 | 2 | null | 2093 | 3 | null | Another possible interpretation is that you are confusing "normal" and "uniform". A uniform variate will have a mean of 0.5 and be evenly distributed on [0, 1). If this interpretation is correct, your code simplifies to
```
return ranf()
```
| null | CC BY-SA 2.5 | null | 2010-08-25T13:36:22.150 | 2010-08-25T13:36:22.150 | null | null | 919 | null |
2103 | 2 | null | 2091 | 3 | null | I'm interpreting your question as concerning the cdf, which by definition is a function $F$ for which
$$F(x,y) - F(x,0) - F(0,y) + F(0,0) = \Pr(X \le x, Y \le y) \text{;}$$
$$F(0,0) = 0.$$
For $x + y \lt 1$, the right hand side is zero and the left hand side becomes a statement that $F$ is a bilinear function, implying (in conjunction with some of the initial values specified below) that the graph of $F$ is part of a plane. For $x + y \gt 1$, the assumption of uniform distributions implies $F$ must be increasing at a unit rate in both $x$ and $y$, whence the graph of $F$ in this region is a part of a plane of the form $x + y = \text{constant}$. From the evident restrictions
$$F(x, 0) = F(0, y) = 0,$$
$$0 \le x \le 1, 0 \le y \le 1,$$
it is geometrically obvious that the first piece of the graph must lie in the xy plane and the second piece must intersect the first along the line segment $x + y = 1, 0 \le x \le 1$. The full solution therefore is
$$F(x,y) = 0 \quad \text{if} \quad x + y \le 1$$
$$F(x,y) = x + y - 1 \quad \text{if} \quad x + y \gt 1.$$
---
This result is the [Fréchet–Hoeffding minimum copula](http://en.wikipedia.org/wiki/Copula_%28statistics%29) $W(x,y)$. Generally, a copula expresses a multivariate distribution after the marginal variables have been subjected to a probability integral transformation; that is, the marginals have been made uniform. All 2D copulas must have values between this minimum $W$ and the maximum copula $M(x,y) = \min(x,y)$. $W$ expresses maximum anticorrelation between the variables while $M$ expresses maximum correlation between them. Follow the link (a Wikipedia article) for Mathematica plots of these copulas.
| null | CC BY-SA 2.5 | null | 2010-08-25T14:08:50.823 | 2010-10-14T02:31:24.530 | 2010-10-14T02:31:24.530 | 919 | 919 | null |
2104 | 1 | 2106 | null | 8 | 10110 | Thank you for reading. I am trying to get sphericity values for a purely within subject design. I have been unable to use ezANOVA, or Anova(). Anova works if I add a between subject factor, but I have been unable to get sphericity for a purely within subject design. Any advice?
I have already read the R newsletter, the Fox chapter appendix, the ezANOVA documentation, and whatever I could find online.
My original ANOVA
```
anova(aov(resp ~ sucrose*citral, random =~1 | subject, data = p12bl, subset = exps==1))
anova(aov(resp ~ sucrose*citral, random =~1 | subject/sucrose*citral, data = p12bl, subset = exps==1))
> str(subset(p12bl, exps==1))
'data.frame': 192 obs. of 12 variables:
$ exps : int 1 1 1 1 1 1 1 1 1 1 ...
$ Order : int 1 1 1 1 1 1 1 1 1 1 ...
$ threshold: Factor w/ 2 levels " Suprathreshold",..: 1 1 1 1 1 1 1 1 1 1 ...
$ SET : Factor w/ 2 levels " A"," B": 1 1 1 1 1 1 1 1 1 1 ...
$ subject : Factor w/ 16 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
$ stim : chr "S1C1" "S1C1" "S1C1" "S1C1" ...
$ resp : num 6.01 5.63 0 2.57 6.81 ...
$ id : int 1 2 3 4 5 6 7 8 9 10 ...
$ X1 : Factor w/ 1 level "S": 1 1 1 1 1 1 1 1 1 1 ...
$ sucrose : Factor w/ 4 levels "1","2","3","4": 1 1 1 1 1 1 1 1 1 1 ...
$ X3 : Factor w/ 1 level "C": 1 1 1 1 1 1 1 1 1 1 ...
$ citral : Factor w/ 4 levels "1","2","3","4": 1 1 1 1 1 1 1 1 1 1 ...
subset(p12b,exps==1)
exps Order threshold SET observ S1C1 S1C2 S1C3 S1C4 S2C1 S2C2 S2C3 S2C4 S3C1 S3C2 S3C3 S3C4 S4C1 S4C2 S4C3 S4C4
1 1 1 Suprathreshold A 1 6.0 7.1 7.5 8.6 15.0 15.4 15.0 13.1 16.9 13 13.1 16.5 24 16 21 20
2 1 1 Suprathreshold A 2 5.6 0.8 4.0 5.6 5.6 11.3 12.9 14.5 18.5 15 12.9 14.5 24 26 29 28
3 1 1 Suprathreshold A 3 0.0 0.0 1.7 0.0 5.0 8.4 8.4 5.0 11.7 20 18.5 16.8 29 37 37 30
4 1 1 Suprathreshold A 4 2.6 3.3 9.1 16.3 5.4 10.0 9.6 16.8 13.5 12 22.2 23.1 19 20 22 23
5 1 1 Suprathreshold A 5 6.8 5.3 15.4 14.5 11.5 8.3 14.5 14.2 8.9 17 11.2 15.1 24 23 19 19
6 1 1 Suprathreshold A 6 2.6 2.8 2.6 5.2 13.4 15.6 13.7 13.0 13.7 15 16.0 18.9 22 24 25 25
7 1 1 Suprathreshold A 7 1.3 5.8 10.2 9.8 11.9 12.3 17.7 16.7 11.4 19 19.2 21.1 16 19 18 19
8 1 1 Suprathreshold A 8 2.0 5.6 3.9 2.0 4.9 5.2 7.5 4.9 20.2 21 8.2 9.5 30 26 32 45
9 1 1 Suprathreshold A 9 9.4 11.3 11.7 12.1 14.7 13.8 12.6 14.9 15.2 15 15.9 13.9 17 18 15 18
10 1 1 Suprathreshold A 10 4.5 17.8 18.5 21.6 5.8 10.9 17.0 20.2 6.6 10 17.8 18.7 12 12 16 19
11 1 1 Suprathreshold A 11 9.8 13.0 16.1 18.0 10.5 11.6 15.4 17.3 10.1 14 15.2 16.7 13 15 15 17
12 1 1 Suprathreshold A 12 9.6 10.4 13.3 11.3 12.1 12.6 13.6 13.6 14.9 16 15.1 16.3 16 18 18 17
```
Sample output
```
ezANOVA( data = subset(p12bl, exps==1) , dv= .(resp), sid = .(observ), within = .(sucrose,citral), between = NULL, collapse_within = FALSE)
Note: model has only an intercept; equivalent type-III tests substituted.
$ANOVA
Effect DFn DFd SSn SSd F p p<.05 pes
1 sucrose 3 33 4953 3263 16.70 9.0e-07 * 0.603
2 citral 3 33 410 553 8.16 3.3e-04 * 0.426
3 sucrose:citral 9 99 56 791 0.77 6.4e-01 0.066
```
Warning messages:
1: You have removed one or more Ss from the analysis. Refactoring "observ" for ANOVA.
2: Too few Ss for Anova(), reverting to aov(). See "Warning" section of the help on ezANOVA.
| How to get Sphericity in R for a nested within subject design? | CC BY-SA 2.5 | null | 2010-08-25T14:17:44.987 | 2010-09-07T15:44:17.460 | 2010-08-27T18:36:36.293 | 1084 | 1084 | [
"r",
"anova",
"sphericity"
] |
2105 | 2 | null | 2093 | 5 | null | You also might want to consider that perhaps you don't want a normally distributed variable on [0,1) but rather a random variable with a bell-shaped distribution naturally restricted to the [0,1) interval. If so, consider the [Beta distribution](http://en.wikipedia.org/wiki/Beta_distribution) or the [Kumaraswamy distribution](http://www.johndcook.com/blog/2009/11/24/kumaraswamy-distribution/).
| null | CC BY-SA 2.5 | null | 2010-08-25T14:43:24.377 | 2010-08-25T14:43:24.377 | null | null | 279 | null |
2106 | 2 | null | 2104 | 8 | null | Try:
```
library(ez)
ezANOVA(data=subset(p12bl, exps==1),
within=.(sucrose, citral),
wid=.(subject),
dv=.(resp)
)
```
and the equivalent aov command, minus sphericity etc, is:
```
aov(resp ~ sucrose*citral + Error(subject/(sucrose*citral)),
data= subset(p12bl, exps==1))
```
Here's the equivalent using Anova from car directly:
```
library(car)
df1<-read.table("clipboard", header=T) #From copying data in the question above
sucrose<-factor(rep(c(1:4), each=4))
citral<-factor(rep(c(1:4), 4))
idata<-data.frame(sucrose,citral)
mod<-lm(cbind(S1C1, S1C2, S1C3, S1C4, S2C1, S2C2, S2C3, S2C4,
S3C1, S3C2, S3C3, S3C4, S4C1, S4C2, S4C3, S4C4)~1, data=df1)
av.mod<-Anova(mod, idata=idata, idesign=~sucrose*citral)
summary(av.mod)
```
| null | CC BY-SA 2.5 | null | 2010-08-25T15:22:37.273 | 2010-08-31T15:25:02.000 | 2010-08-31T15:25:02.000 | 966 | 966 | null |
2107 | 2 | null | 2099 | 2 | null | While I haven't used either of these, I have these two projects bookmarked:
- Treeviz
- JTreeMap
Both are implemented in Java, are free and open source (MIT and Apache 2.0 licenses, respectively). Both appear to be actually libraries, but seem to come with an example application.
| null | CC BY-SA 2.5 | null | 2010-08-25T16:31:08.917 | 2010-08-25T16:31:08.917 | null | null | 705 | null |
2108 | 1 | null | null | 5 | 5375 | I have a web service, which returns a dataset, and I would like to query it from the [R] statistical package. I'd like to know how (or if) this can be done and also how to load the first datatable, of this dataset, into memory.
Thanks!
EDIT: It is an XML web service
| Using [R] to connect to a Web Service | CC BY-SA 2.5 | null | 2010-08-25T16:44:21.330 | 2010-08-26T12:36:48.317 | 2010-08-26T09:52:27.760 | 428 | 428 | [
"r"
] |
2110 | 2 | null | 2108 | 3 | null | If your webservice amounts to calling a URL and getting back XML or json, then you can just use the `XML` or `rjson` packages directly for this. Possibly with `RCurl` for more elaborate session handling. See [the RCurl paper for an example](http://www.omegahat.org/RCurl/RCurlJSS.pdf). You can also look at [our overflowr package](http://code.google.com/p/overflowr/) for an example with `rjson`.
Edit:
Using `RCurl` is not necessary unless you have a complicated request. Otherwise, R can natively handle pulling in data over http. Just pass the url into the function (e.g. read.table). A simple example:
```
URL <- "http://ichart.finance.yahoo.com/table.csv?s=SPY"
dat <- read.csv(URL)
```
In your case, since your data is XML, you can use the `readLines` function in the same manner.
Have a look at the `getNYTCongress` function in [the nytR package](http://code.google.com/p/nytr/source/browse/trunk/R/nyt.congress.R) for an example of reading an XML webservice without `RCurl`. Also look at [this question on StackOverflow](https://stackoverflow.com/questions/1395528/scraping-html-tables-into-r-data-frames-using-the-xml-package).
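For instance, if the service returns a simple, record-style XML document, something along these lines should work (a sketch; the URL is a placeholder and the parsing step depends on the actual structure of the XML):

```
library(XML)
URL <- "http://example.com/service?query=..."   # placeholder for your web service URL
raw <- paste(readLines(URL), collapse = "\n")   # pull the XML over http
doc <- xmlTreeParse(raw, asText = TRUE, useInternalNodes = TRUE)
dat <- xmlToDataFrame(doc)                      # works for simple, flat record-style XML
head(dat)                                       # first "datatable" as an R data frame
```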
| null | CC BY-SA 2.5 | null | 2010-08-25T16:49:37.560 | 2010-08-26T12:36:48.317 | 2017-05-23T12:39:26.150 | -1 | 5 | null |
2111 | 1 | null | null | 10 | 2312 | For the selection of predictors in multivariate linear regression with $p$ suitable predictors, what methods are available to find an 'optimal' subset of the predictors without explicitly testing all $2^p$ subsets? In 'Applied Survival Analysis,' Hosmer & Lemeshow make reference to Kuk's method, but I cannot find the original paper. Can anyone describe this method, or, even better, a more modern technique? One may assume normally distributed errors.
| Computing best subset of predictors for linear regression | CC BY-SA 2.5 | null | 2010-08-25T18:15:55.980 | 2014-07-22T17:12:30.223 | 2010-08-25T22:57:35.883 | null | 795 | [
"modeling",
"regression",
"multivariable",
"model-selection",
"feature-selection"
] |
2112 | 2 | null | 2104 | 7 | null | Did you try the `car` package, from John Fox? It includes the function `Anova()` which is very useful when working with experimental designs. It should give you corrected p-value following Greenhouse-Geisser correction and Huynh-Feldt correction. I can post a quick R example if you wonder how to use it.
Also, there is a nice tutorial on the use of R with repeated measurements and mixed-effects model for [psychology experiments and questionnaires](http://www.psych.upenn.edu/~baron/rpsych/rpsych.html); see Section 6.10 about sphericity.
As a sidenote, Mauchly's test of sphericity is available in `mauchly.test()`, but it doesn't work with `aov` objects if I remember correctly. The [R Newsletter](http://cran.r-project.org/doc/Rnews/Rnews_2007-2.pdf) from October 2007 includes a brief description of this topic.
| null | CC BY-SA 2.5 | null | 2010-08-25T18:30:01.223 | 2010-08-25T18:30:01.223 | null | null | 930 | null |
2113 | 2 | null | 2032 | 4 | null | Go to [http://www.vaultanalytics.com/books](http://www.vaultanalytics.com/books)
They have written a book on what predictive models are, when to use what tests/models, and how to create them in Excel. I'm using it every day in my job. I think it's extremely useful.
| null | CC BY-SA 3.0 | null | 2010-08-25T18:32:36.633 | 2014-06-07T06:33:09.310 | 2014-06-07T06:33:09.310 | 27433 | null | null |
2114 | 2 | null | 2111 | 13 | null | I've never heard of Kuk's method, but the hot topic these days is L1 minimisation. The rationale being that if you use a penalty term of the absolute value of the regression coefficients, the unimportant ones should go to zero.
These techniques have some funny names: Lasso, LARS, Dantzig selector. You can read the papers, but a good place to start is with [Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/), Chapter 3.
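To give a flavour, the lasso path and a cross-validated choice of the penalty take only a few lines in R with the glmnet package (a sketch on simulated data; `alpha = 1` selects the lasso penalty):

```
library(glmnet)
set.seed(1)
n <- 100; p <- 20
x <- matrix(rnorm(n * p), n, p)
y <- x[, 1] - 2 * x[, 2] + rnorm(n)   # only the first two predictors truly matter

cv <- cv.glmnet(x, y, alpha = 1)      # cross-validated lasso
coef(cv, s = "lambda.min")            # coefficients of the noise predictors shrink to zero
```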
| null | CC BY-SA 2.5 | null | 2010-08-25T18:37:15.187 | 2010-08-25T18:37:15.187 | null | null | 495 | null |
2115 | 2 | null | 2085 | 6 | null | It seems you are using the `nlme` package. Maybe it would be worth trying R and the `lme4` instead, although it is not fully comparable wrt. syntax or function call.
In your case, I would suggest to specify the `level` when you called `ranef()`, see `?ranef.lme`:
level: an optional vector of positive integers giving the levels of
grouping to be used in extracting the random effects from an
object with multiple nested grouping levels. Defaults to all
levels of grouping.
This is also present in the official [documentation](http://ftp.chg.ru/pub/math/NLME/UGuide.pdf) for NLME 3.0 (e.g., p. 17).
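In other words, something like the following (a sketch with hypothetical data and grouping names; the point is only the `level` argument):

```
library(nlme)
# hypothetical nested grouping: Task within Subject
fit <- lme(score ~ time, random = ~ 1 | Subject/Task, data = dat)
ranef(fit, level = 1)    # random effects at the Subject level only
ranef(fit, level = 1:2)  # both grouping levels (the default)
```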
Check out Douglas Bates's neat handouts on GLMM. He is also writing a textbook entitled lme4: Mixed-effects modeling with R. All are available on [R-forge](http://lme4.R-forge.R-project.org/).
| null | CC BY-SA 2.5 | null | 2010-08-25T19:45:27.547 | 2010-08-25T19:45:27.547 | null | null | 930 | null |
2116 | 1 | null | null | 15 | 14183 | As someone who needs statistical knowledge but is not a formally trained statistician, I'd find it helpful to have a flowchart (or some kind of decision tree) to help me choose the correct approach to solve a particular problem (eg. "do you need this and know that and that and consider data to be normally distributed? Use technique X. If data is not normal, use Y or Z").
After some [googling](http://www.google.ca/#hl=en&q=flowchart+to+select+statistical+test), I've seen several attempts of various coverage and quality (some not available at the moment). I've also seen similar flowcharts in statistics textbooks I've consulted in libraries.
A bonus would be an interactive site that, besides just having a chart, would provide extra info (such as assumptions) and would point to how to perform those techniques in popular stat packages. "Need to do ANOVA in R? You need package X and here's a tutorial".
I'm asking as a community wiki question in the hope there are better resources I couldn't find. Since statistics is a large subject, I think such a flowchart would be suitable for techniques that can be approached by someone who has beginner or intermediate-level knowledge. Anything more complicated would need someone with formal training.
| Flowcharts to help selecting the proper analysis technique and test | CC BY-SA 3.0 | null | 2010-08-25T20:09:26.997 | 2022-07-11T09:05:44.673 | 2015-12-17T22:43:53.020 | 28666 | 840 | [
"hypothesis-testing",
"data-mining",
"exploratory-data-analysis"
] |
2117 | 2 | null | 2116 | 9 | null | These are not really interactive flowcharts, but maybe this could be useful: (1) [http://j.mp/cmakYq](http://j.mp/cmakYq), (2) [http://j.mp/aaxUsz](http://j.mp/aaxUsz), and (3) [http://j.mp/bDMyAR](http://j.mp/bDMyAR).
| null | CC BY-SA 2.5 | null | 2010-08-25T20:17:24.127 | 2010-08-25T20:17:24.127 | null | null | 930 | null |
2118 | 2 | null | 2116 | 4 | null | You can look at the solution given on the question ["Statistical models cheat sheet"](https://stats.stackexchange.com/questions/1252/statistical-models-cheat-sheet)
| null | CC BY-SA 2.5 | null | 2010-08-25T21:24:22.967 | 2010-08-25T21:24:22.967 | 2017-04-13T12:44:25.243 | -1 | 253 | null |
2119 | 1 | 2123 | null | 3 | 920 | Some guys told me that it's appropriate to use Wald-Wolfowitz Runs Test as a normality test (like Shapiro-Wilk's or Kolmogorov-Smirnov...). Do you think this is good way to test normality assumptions?
| Wald-Wolfowitz Runs Test for Normality Assumptions Testing | CC BY-SA 2.5 | null | 2010-08-25T21:56:01.473 | 2010-08-25T22:54:03.987 | null | null | 1356 | [
"normality-assumption",
"hypothesis-testing",
"nonparametric"
] |
2120 | 2 | null | 1912 | 1 | null | Let me take a stab at this...
If I understand the original sentiment and subsequent response, I think what Gary King is getting at is that noticing a scale-invariance effect in your data gives only a very gross understanding of the phenomenon that is going on. While this 'bird's eye view' of whatever phenomenon you are observing could be insightful, one might gloss over useful information at the microscopic level.
This example might not be the best, but consider Conway's 'Game of Life'. This is completely determined, in that it is a deterministic system. Consider looking at some statistic of this system, say cluster longevity, for some appropriate definition of cluster. For argument's sake let's say this follows a power law (I don't know whether it does or not, but just for this example, let's say it does). This gives a gross high-level description of the system, but you've washed out all the details of how gliders race across the board, how they collide to give glider guns, and other useful information that you might be able to use to determine some specifics about your system.
I'm not sure this is the best example, or even whether I've gotten the gist of what Gary King was trying to say, but that's my 2 cents.
| null | CC BY-SA 2.5 | null | 2010-08-25T21:58:52.960 | 2010-08-25T21:58:52.960 | null | null | null | null |
2121 | 1 | null | null | 24 | 15303 | I am using ridge regression on highly multicollinear data. Using OLS I get large standard errors on the coefficients due to the multicollinearity. I know ridge regression is a way to deal with this problem, but in all the implementations of ridge regression that I've looked at, there are no standard errors reported for the coefficients. I would like some way of estimating how much the ridge regression is helping by seeing how much it is decreasing the standard errors of specific coefficients. Is there some way to estimate them in ridge regression?
| How can I estimate coefficient standard errors when using ridge regression? | CC BY-SA 2.5 | null | 2010-08-25T22:34:45.943 | 2016-09-14T03:04:07.357 | null | null | 1090 | [
"standard-error",
"ridge-regression"
] |
2123 | 2 | null | 2119 | 6 | null | This is not a great idea. The Wald-Wolfowitz test can detect certain departures from being independently distributed (it detects whether a series of observations is 'too bunchy' or 'too jittery' with respect to the distribution of positive and negative values), which have nothing to do with normality. It is not sensitive to the shape of the empirical distribution: it will 'pass' Cauchy, shifted uniform, $t$, polluted normal, etc. random variates.
Moreover, the WW test is sensitive to the ordering of the observations. A true test for normality should be invariant to permutations.
| null | CC BY-SA 2.5 | null | 2010-08-25T22:54:03.987 | 2010-08-25T22:54:03.987 | null | null | 795 | null |
2124 | 2 | null | 2007 | 2 | null | [RapidMiner](http://rapid-i.com/content/view/181/190/lang,en/) (Java) [open source]
| null | CC BY-SA 2.5 | null | 2010-08-25T22:55:22.173 | 2010-08-25T22:55:22.173 | null | null | 170 | null |
2125 | 1 | 2128 | null | 103 | 281806 | In particular, I am referring to the Pearson product-moment correlation coefficient.
| What's the difference between correlation and simple linear regression? | CC BY-SA 3.0 | null | 2010-08-25T23:53:00.417 | 2022-11-16T01:15:25.060 | 2013-08-01T17:07:24.153 | 7290 | 74 | [
"correlation",
"regression"
] |
2126 | 2 | null | 2121 | 10 | null | Assuming that the data generating process follows the standard assumptions behind OLS the standard errors for ridge regression is given by:
$ \sigma^2 (A^T A + \Gamma^T \Gamma)^{-1} A^T A (A^T A + \Gamma^T \Gamma)^{-1}$
The notation above follows the wiki notation for [ridge regression](http://en.wikipedia.org/wiki/Tikhonov_regularization). Specifically,
$A$ is the covariate matrix,
$\sigma^2$ is the error variance.
$\Gamma$ is the Tikhonov matrix chosen suitably in ridge regression.
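A sketch of this formula in R for the common special case $\Gamma = \sqrt{\lambda}\, I$ (the simulated data, the fixed $\lambda$ and the crude plug-in estimate of $\sigma^2$ are all just for illustration):

```
set.seed(1)
n <- 50; p <- 5; lambda <- 2
A <- matrix(rnorm(n * p), n, p)
b <- A %*% c(1, -1, 0.5, 0, 0) + rnorm(n)

G      <- sqrt(lambda) * diag(p)                 # Tikhonov matrix
M      <- solve(t(A) %*% A + t(G) %*% G)
beta_r <- M %*% t(A) %*% b                       # ridge estimate
sigma2 <- sum((b - A %*% beta_r)^2) / (n - p)    # crude plug-in estimate of the error variance
V      <- sigma2 * M %*% (t(A) %*% A) %*% M      # the sandwich formula above
sqrt(diag(V))                                    # ridge standard errors
```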
| null | CC BY-SA 2.5 | null | 2010-08-26T00:05:08.597 | 2010-08-26T00:30:18.583 | 2010-08-26T00:30:18.583 | null | null | null |
2127 | 2 | null | 2111 | 4 | null | This is a huge topic. As previously mentioned, Hastie, Tibshirani and Friedman give a good intro in Ch3 of Elements of Statistical Learning.
A few points.
1) What do you mean by "best" or "optimal"? What is best in one sense may not be best in another. Two common criteria are predictive accuracy (predicting the outcome variable) and producing unbiased estimators of the coefficients. Some methods, such as Lasso & Ridge Regression inevitably produce biased coefficient estimators.
2) The phrase "best subsets" itself can be used in two separate senses. Generally to refer to the best subset among all predictors which optimises some model building criteria. More specifically it can refer to Furnival and Wilson's efficient algorithm for finding that subset among moderate (~50) numbers of linear predictors (Regressions by Leaps and Bounds. Technometrics, Vol. 16, No. 4 (Nov., 1974), pp. 499-51)
[http://www.jstor.org/stable/1267601](http://www.jstor.org/stable/1267601)
| null | CC BY-SA 2.5 | null | 2010-08-26T00:56:53.163 | 2010-08-26T00:56:53.163 | null | null | 521 | null |
2128 | 2 | null | 2125 | 121 | null | What's the difference between the correlation between $X$ and $Y$ and a linear regression predicting $Y$ from $X$?
First, some similarities:
- the standardised regression coefficient is the same as Pearson's correlation coefficient
- The square of Pearson's correlation coefficient is the same as the $R^2$ in simple linear regression
- The sign of the unstandardized coefficient (i.e., whether it is positive or negative) will be the same as the sign of the correlation coefficient.
- Neither simple linear regression nor correlation answer questions of causality directly. This point is important, because I've met people that think that simple regression can magically allow an inference that $X$ causes $Y$.
- Standard tests of the null hypothesis (i.e., "correlation = 0" or, equivalently, "slope = 0" for the regression in either order), such as those carried out by lm and cor.test in R, will yield identical p-values (see the quick check after these lists).
Second, some differences:
- The regression equation (i.e., $a + bX$) can be used to make predictions on $Y$ based on values of $X$
- While correlation typically refers to the linear relationship, it can refer to other forms of dependence, such as polynomial or truly nonlinear relationships
- While correlation typically refers to Pearson's correlation coefficient, there are other types of correlation, such as Spearman's.
- The correlation between X and Y is the same as the correlation between Y and X. In contrast, the unstandardized coefficient typically changes when moving from a model predicting Y from X to a model predicting X from Y.
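A quick numerical illustration of two of the points above, namely that the test p-values coincide and that the standardized slope equals Pearson's r (a sketch on simulated data):

```
set.seed(1)
x <- rnorm(100)
y <- 0.5 * x + rnorm(100)

cor.test(x, y)$p.value                   # p-value of the correlation test
summary(lm(y ~ x))$coefficients["x", 4]  # p-value of the slope test: identical

coef(lm(scale(y) ~ scale(x)))[2]         # standardized regression slope ...
cor(x, y)                                # ... equals Pearson's correlation
```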
| null | CC BY-SA 4.0 | null | 2010-08-26T02:48:15.233 | 2022-11-16T01:15:25.060 | 2022-11-16T01:15:25.060 | 183 | 183 | null |
2129 | 2 | null | 2125 | 36 | null | Here is an answer I [posted on the graphpad.com website](http://www.graphpad.com/faq/viewfaq.cfm?faq=1141):
Correlation and linear regression are not the same. Consider these differences:
- Correlation quantifies the degree to which two variables are related. Correlation does not fit a line through the data.
- With correlation you don't have to think about cause and effect. You simply quantify how well two variables relate to each other. With regression, you do have to think about cause and effect as the regression line is determined as the best way to predict Y from X.
- With correlation, it doesn't matter which of the two variables you call "X" and which you call "Y". You'll get the same correlation coefficient if you swap the two. With linear regression, the decision of which variable you call "X" and which you call "Y" matters a lot, as you'll get a different best-fit line if you swap the two. The line that best predicts Y from X is not the same as the line that predicts X from Y (unless you have perfect data with no scatter.)
- Correlation is almost always used when you measure both variables. It rarely is appropriate when one variable is something you experimentally manipulate. With linear regression, the X variable is usually something you experimentally manipulate (time, concentration...) and the Y variable is something you measure.
| null | CC BY-SA 2.5 | null | 2010-08-26T03:21:45.833 | 2010-08-26T03:21:45.833 | null | null | 25 | null |
2130 | 1 | 2139 | null | 3 | 342 | Consider the following exponential expansion form:
$$ \exp\left[\sum_{k=1}^\infty \gamma_k x^k\right] = \sum_{j=0}^\infty\delta_j x^j $$
where $\gamma_k$'s are known, and $\delta_0=1$,
$$\delta_{j+1} = \frac{1}{j+1}\sum_{i=1}^{j+1}i\gamma_i\delta_{j+1-i}$$
Does anyone know how to prove it?
| How to prove the exponential expansion recursive form? | CC BY-SA 3.0 | null | 2010-08-26T04:33:09.090 | 2013-01-18T17:47:57.020 | 2013-01-18T17:47:57.020 | 2970 | 1043 | [
"mathematical-statistics"
] |
2131 | 1 | 2133 | null | 10 | 8303 | for eg. if I have a class variable credit scoring with two classes good and bad, where #(good) = 700 and #(bad)= 300. I do not want to shorten my data. which technique I should use? I was using SVM but it is giving all bad as good in predictions.
| For a classification problem if class variable has unequal distribution which technique we should use? | CC BY-SA 2.5 | null | 2010-08-26T04:45:31.653 | 2022-04-20T09:45:31.183 | 2010-08-26T07:02:24.003 | null | 861 | [
"classification",
"binary-data"
] |
2132 | 2 | null | 2125 | 27 | null | In the single predictor case of linear regression, the standardized slope has the same value as the correlation coefficient. The advantage of the linear regression is that the relationship can be described in such a way that you can predict (based on the relationship between the two variables) the score on the predicted variable given any particular value of the predictor variable. In particular one piece of information a linear regression gives you that a correlation does not is the intercept, the value on the predicted variable when the predictor is 0.
In short - they produce identical results computationally, but there are more elements which are capable of interpretation in the simple linear regression. If you are interested in simply characterizing the magnitude of the relationship between two variables, use correlation - if you are interested in predicting or explaining your results in terms of particular values you probably want regression.
| null | CC BY-SA 2.5 | null | 2010-08-26T06:37:23.230 | 2010-08-26T06:37:23.230 | null | null | 196 | null |
2133 | 2 | null | 2131 | 7 | null | Your class sample sizes do not seem so unbalanced since you have 30% of observations in your minority class. Logistic regression should be well performing in your case. Depending on the number of predictors that enter your model, you may consider some kind of penalization for parameters estimation, like ridge (L2) or lasso (L1). For an overview of problems with very unbalanced class, see Cramer (1999), The Statistician, 48: 85-94 ([PDF](https://papers.tinbergen.nl/98085.pdf)).
I am not familiar with credit scoring techniques, but I found some papers that suggest that you could use SVM with weighted classes, e.g. [Support Vector Machines for Credit Scoring: Extension to Non Standard Cases](https://doi.org/10.1007/3-540-26981-9_57). As an alternative, you can look at [boosting](https://mpra.ub.uni-muenchen.de/8156/1/MPRA_paper_8156.pdf) methods with CART, or Random Forests (in the latter case, it is possible to adapt the sampling strategy so that each class is represented when constructing the classification trees). The paper by Novak and LaDue discuss the pros and cons of [GLM vs Recursive partitioning](https://ageconsearch.umn.edu/bitstream/15129/1/31010109.pdf). I also found this article, [Scorecard construction with unbalanced class sizes](https://web.archive.org/web/20100620220144/http://fic.wharton.upenn.edu/fic/handpaper.pdf) by Hand and Vinciotti.
| null | CC BY-SA 4.0 | null | 2010-08-26T07:04:58.167 | 2022-04-20T09:45:31.183 | 2022-04-20T09:45:31.183 | 79696 | 930 | null |
2134 | 1 | null | null | 15 | 15980 | I'm looking for an `R` package for estimating the coefficients of logit models with individual fixed-effect (individual intercept) using Chamberlain's 1980 estimator. It is often known as Chamberlain's fixed-effect logit estimator.
It's a classic estimator when dealing with binary outcome panel data (at least in econometrics), but I just don't find anything related to it in the CRAN.
Any clue?
| R package for fixed-effect logistic regression | CC BY-SA 3.0 | null | 2010-08-26T08:03:41.620 | 2015-04-26T12:49:09.817 | 2013-09-03T11:58:44.917 | 805 | 1091 | [
"r",
"logistic",
"panel-data",
"clogit"
] |
2135 | 2 | null | 2131 | 10 | null | A popular approach towards solving class imbalance problems is to bias the classifier so that it pays more attention to the positive instances. This can be done, for instance, by increasing the penalty associated with misclassifying the positive class relative to the negative class. Another approach is to preprocess the data by oversampling the majority class or undersampling the minority class in order to create a balanced dataset.
However, in your case, class imbalance doesn't seem to be a problem. Perhaps it is a matter of parameter tuning, since finding the optimal parameters for an SVM classifier can be a rather tedious process. There are two parameters for an RBF kernel, for example: $C$ and $\gamma$. It is not known beforehand which $C$ and $\gamma$ are best for a given problem; consequently some kind of model selection (parameter search) must be done.
In the data preprocessing phase, remember that SVM requires that each data instance is represented as a vector of real numbers. Hence, if there are categorical attributes, it's recommended to convert them into numeric data, using m numbers to represent an m-category attribute (or replacing it with m new binary variables).
Also, scaling the variables before applying SVM is crucial, in order to avoid attributes in greater numeric ranges dominating those in smaller numeric ranges.
Check out [this paper](http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf).
If you're working in R, check out the [tune](http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/e1071/html/tune.html) function (package e1071) to tune hyperparameters using a grid search over supplied parameter ranges. Then, using [plot.tune](http://bm2.genes.nig.ac.jp/RGM2/R_current/library/e1071/man/plot.tune.html), you can see visually which set of values gives the smaller error rate.
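A sketch of such a grid search (the data frame `dat` and its `class` column are hypothetical, and the ranges are just common starting values rather than recommendations):

```
library(e1071)
# dat: hypothetical data frame; 'class' is the binary outcome, the rest are scaled predictors
tuned <- tune(svm, class ~ ., data = dat,
              ranges = list(gamma = 2^(-4:2), cost = 2^(0:6)))
summary(tuned)           # cross-validated error for each (gamma, cost) pair
plot(tuned)              # visualise the error surface over the grid
best <- tuned$best.model # SVM refit with the best gamma/cost combination
```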
There is a shortcut around the time-consuming parameter search. There is an R package called "svmpath" which computes the entire regularization path for a 2-class SVM classifier in one go. Here is [a link to the paper](http://jmlr.csail.mit.edu/papers/volume5/hastie04a/hastie04a.pdf) that describes what it's doing.
P.S. You may also find this paper interesting: [Obtaining calibrated probability estimates](http://www.google.gr/url?sa=t&source=web&cd=3&ved=0CCwQFjAC&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.29.3039%26rep%3Drep1%26type%3Dpdf&ei=Byh2TMbJM4XN4Abf942yBg&usg=AFQjCNG9_KErIloNqZKPUMEFkjRpEP2Tyg&sig2=pMx4yZ01d2-pT-kt4Wp-gw)
| null | CC BY-SA 2.5 | null | 2010-08-26T08:45:35.657 | 2010-08-26T08:45:35.657 | null | null | 339 | null |
2136 | 2 | null | 2134 | 13 | null | Conditional logistic regression (I assume that this is what you refered to when talking about Chamberlain's estimator) is available through `clogit()` in the [survival](http://cran.r-project.org/web/packages/survival/index.html) package. I also found this page which contains R code to estimate [conditional logit parameters](http://course.wilkes.edu/Merrill/stories/storyReader%2424). The [survey](http://cran.r-project.org/web/packages/survey/index.html) package also includes a lot of wrapper function for GLM and Survival model in the case of complex sampling, but I didn't look at.
Try also to look at `logit.mixed` in the [Zelig](http://cran.r-project.org/web/packages/Zelig/index.html) package, or directly use the [lme4](http://cran.r-project.org/web/packages/lme4/index.html) package, which provides methods for mixed-effects models with a binomial link (see `lmer` or `glmer`).
Did you take a look at [Econometrics in R](http://cran.r-project.org/doc/contrib/Farnsworth-EconometricsInR.pdf), from Grant V. Farnsworth? It seems to provide a gentle overview of applied econometrics in R (with which I am not familiar).
| null | CC BY-SA 2.5 | null | 2010-08-26T08:59:08.873 | 2010-08-26T08:59:08.873 | null | null | 930 | null |
2137 | 2 | null | 2121 | 27 | null | I think boostrap would the best option to obtain robust SEs. This was done in some applied work using shrinkage methods, e.g. [Analysis of North American Rheumatoid Arthritis Consortium data using a penalized logistic regression approach](http://www.biomedcentral.com/1753-6561/3/S7/S61) (BMC Proceedings 2009).
There is also a nice paper from Casella on SE computation with penalized models, [Penalized Regression, Standard Errors, and Bayesian Lassos](http://j.mp/c3c4tC) (Bayesian Analysis 2010 5(2)). But they are more concerned with lasso and elasticnet penalization.
I always thought of ridge regression as a way to get better predictions than standard OLS, where the model is generally not parsimonious. For variable selection, the lasso or elasticnet criteria are more appropriate, but then it is difficult to apply a bootstrap procedure (since the selected variables would change from one sample to the other, and even within the inner $k$-fold loop used to optimize the $\ell_1$/$\ell_2$ parameters); this is not the case with ridge regression, since you always consider all variables.
I have no idea about R packages that would give this information. It doesn't seem to be available in the [glmnet](http://cran.r-project.org/web/packages/glmnet/index.html) package (see Friedman's paper in JSS, [Regularization Paths for Generalized Linear Models via Coordinate Descent](http://www.jstatsoft.org/v33/i01/paper)).
However, Jelle Goeman who authored the [penalized](http://cran.r-project.org/web/packages/penalized/index.html) package discuss this point too. Cannot find the original PDF on the web, so I simply quote his words:
> It is a very natural question to ask for standard errors of regression coefficients or other estimated quantities. In principle such standard errors can easily be calculated, e.g. using the bootstrap.
>
> Still, this package deliberately does not provide them. The reason for this is that standard errors are not very meaningful for strongly biased estimates such as arise from penalized estimation methods. Penalized estimation is a procedure that reduces the variance of estimators by introducing substantial bias. The bias of each estimator is therefore a major component of its mean squared error, whereas its variance may contribute only a small part.
>
> Unfortunately, in most applications of penalized regression it is impossible to obtain a sufficiently precise estimate of the bias. Any bootstrap-based calculations can only give an assessment of the variance of the estimates. Reliable estimates of the bias are only available if reliable unbiased estimates are available, which is typically not the case in situations in which penalized estimates are used.
>
> Reporting a standard error of a penalized estimate therefore tells only part of the story. It can give a mistaken impression of great precision, completely ignoring the inaccuracy caused by the bias. It is certainly a mistake to make confidence statements that are only based on an assessment of the variance of the estimates, such as bootstrap-based confidence intervals do.
| null | CC BY-SA 2.5 | null | 2010-08-26T09:36:21.003 | 2010-11-01T16:24:39.703 | 2010-11-01T16:24:39.703 | 930 | 930 | null |
2138 | 2 | null | 2067 | 9 | null | As a consequence of the inspiring answers and discussion to my question I constructed the following plots that do not rely on any model based parameters, but present the underlying data.
The reasons are that independent of whatever kind of standard-error I may choose, the standard-error is a model based parameter. So, why not present the underlying data and thereby transmit more information?
Furthermore, if choosing the SE from the ANOVA, two problems arise for my specific problems.
First (at least for me) it is somehow unclear what the SEs from `SPSS` ANOVA Output actually are ([see also this discussion, in the comments](http://www.analysisfactor.com/statchat/why-report-estimated-marginal-means-in-spss-glm/)). They are somehow related to the MSE but how exactly I don't know.
Second, they are only reasonable when the underlying assumptions are met. However, as the following plots show, the assumptions of homogeneity of variance is clearly violated.
The plots with boxplots:

The plots with all data points:

Note that the two groups are dislocated a little to the left or the right: deductive to the left, inductive to the right.
The means are still plotted in black and the data or boxplots in the background in grey. The differences between the plots on the left and on the right are if the means are dislocated the same as the points or boxplots or if they are presented centrally.
Sorry for the nonoptimal quality of the graphs and the missing x-axis labels.
The question that remains is, which one of the above plots is the one to choose now. I have to think about it and ask the other author of our paper. But right now, I prefer the "points with means dislocated". And I still would be very interested in comments.
---
Update: After some programming I finally managed to write a R-function to automatically create a plot like points with means dislocated. [Check it out (and send me comments)](http://www.psychologie.uni-freiburg.de/Members/singmann/R/rm.plot)!
| null | CC BY-SA 2.5 | null | 2010-08-26T10:15:37.873 | 2010-12-09T12:30:33.363 | 2010-12-09T12:30:33.363 | 442 | 442 | null |
2139 | 2 | null | 2130 | 2 | null | A brute-force approach would use a [Taylor series](http://en.wikipedia.org/wiki/Taylor_series) expansion of $\exp(\cdot)$ at 0 and group terms appropriately to demonstrate the relationship. I have not attempted to do so myself but an inspection of the first few terms does indicate that the proposed relationship holds.
Perhaps, there is a more elegant approach?
$$\exp\left[\gamma_1 x^1\right] = 1 + \frac{\gamma_1 x^1}{1!} + \frac{\left(\gamma_1 x^1\right)^2}{2!} + \ldots$$
$$\exp\left[\gamma_2 x^2\right] = 1 + \frac{\gamma_2 x^2}{1!} + \frac{\left(\gamma_2 x^2\right)^2}{2!} + \ldots$$
$$\exp\left[\gamma_3 x^3\right] = 1 + \frac{\gamma_3 x^3}{1!} + \frac{\left(\gamma_3 x^3\right)^2}{2!} + \ldots$$
| null | CC BY-SA 3.0 | null | 2010-08-26T10:36:17.803 | 2013-01-17T13:59:57.010 | 2013-01-17T13:59:57.010 | 17230 | null | null |
2140 | 1 | 2141 | null | 4 | 1973 | I'm investgating the repeatability of an experimental process (actually measuring lift in a wind tunnel), which can be measured using a single number, Q. However, I only have limited access to the tunnel so the number of experiments that I can run is restricted.
If I have 5 experiments with the tunnel in one configuration: Q1 = [1.18, -0.41, -0.66, 0.98, 0.1]
and five in another: Q2 = [-0.36, -0.73, -1.47, 0.15, -0.31]
How confident can I be that the repeatability of the tunnel in the second configuration is better than the first? i.e. the standard deviation of the data from the first configuration is 0.81 and from the second configuration is 0.60, but these values will clearly change if I do additional experiments. Can I somehow calculate the standard deviation of the standard deviation?
Any advice on how I could calculate this would be appreciated and more generally how I can determine when I have enough experiments to say something like I am 95% confident that configuration 2 has better repeatability than configuration 1.
| How do I compare the repeatability of two sets of experiments with a low number of samples? | CC BY-SA 2.5 | null | 2010-08-26T11:42:13.020 | 2010-08-29T14:41:44.697 | 2010-08-26T13:07:19.183 | 279 | 1094 | [
"hypothesis-testing",
"variability"
] |
2141 | 2 | null | 2140 | 4 | null | Look up the two-sample F-test for variances. With equal sample sizes Q, this test calculates the ratio of the two variances, and then compares the value to the F-distribution with (Q-1, Q-1) degrees of freedom. A main assumption is normality, but that might be OK for random measurement error like this.
With your values, however the difference is not statistically significant (p=0.29). In general, the standard error of the standard deviation is large unless you have lots of samples, and thus detecting differences in variability (repeatability) is difficult.
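In R this is `var.test`; with the data from the question (a sketch):

```
q1 <- c(1.18, -0.41, -0.66, 0.98, 0.10)
q2 <- c(-0.36, -0.73, -1.47, 0.15, -0.31)
var.test(q1, q2)   # two-sample F test for equality of variances
```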
| null | CC BY-SA 2.5 | null | 2010-08-26T13:06:34.753 | 2010-08-26T13:06:34.753 | null | null | 279 | null |
2142 | 1 | null | null | 9 | 12146 | When performing linear regression, it is often useful to do a transformation such as log-transformation for the dependent variable to achieve better normal distribution conformation. Often it is also useful to inspect beta's from the regression to better assess the effect size/real relevance of the results.
This raises the problem that when using e.g. log transformation, the effect sizes will be in log scale, and I've been told that because of non-linearity of the used scale, back-transforming these beta's will result in non-meaningful values that do not have any real world usage.
This far we have usually performed linear regression with transformed variables to inspect the significance and then linear regression with the original non-transformed variables to determine the effect size.
Is there a right/better way of doing this? For the most part we work with clinical data, so a real-life example would be to determine how a certain exposure affects continuous variables such as height, weight or some laboratory measurement, and we would like to conclude something like "exposure A had the effect of increasing weight by 2 kg".
| Linear regression effect sizes when using transformed variables | CC BY-SA 2.5 | null | 2010-08-26T13:07:47.100 | 2010-11-02T17:21:52.077 | null | null | null | [
"regression",
"data-transformation",
"effect-size"
] |
2143 | 2 | null | 2142 | 2 | null | SHORT ANSWER: Absolutely correct, the back transformation of the beta value is meaningless. However, you can report the non-linearity as something like. "If you weigh 100kg then eating two pieces of cake a day will increase your weight by approximately 2kg in one week. However, if you weigh 200kg your weight would increase 2.5kg. See figure 1 for a depiction of this non-linear relationship (figure 1 being a fit of the curve over the raw data)."
LONG ANSWER:
The meaningfulness of the back transformed value varies but when properly done it usually has some meaning.
If you have a regression of natural log values on two x predictors with a beta of 0.13, and an intercept of 7.0, then the back transformation of 0.13 (1.14) is pretty much meaningless. That is correct. However, the back transformation of 7.13 is going to be a value that can be interpreted with some meaning. You could then subtract out the back transformation of 7.0 and be left with a remainder value that is your effect in a meaningful scale (152.2). If you want to look at any predicted value you would need to first calculate it all out in log values and then back-transform. This would have to be done separately for every predicted value and result in a curve if graphed.
This is often reasonable to do if your transformation has a relatively small effect on your data. Log transformation of reaction times is one kind of value that can be back-transformed. When it's done correctly you'll find that the back-transformed values are close to the medians obtained from simple calculations on the raw data.
Even then though one must be careful with interactions and non-interactions. The relative values vary across the scale. The analysis was sensitive to the log value while the back transformed values may show different patterns that make interactions seem like they shouldn't be there or vice versa. In other words, you can back transform things that make small changes to the data as long as you're careful.
Some changes, like logistic transform of probability, can have quite massive impacts, especially near the end of the scale. An example of a place you should never back transform is interaction plots near the high or low end of probability.
| null | CC BY-SA 2.5 | null | 2010-08-26T13:30:32.093 | 2010-08-26T13:44:01.287 | 2010-08-26T13:44:01.287 | 601 | 601 | null |
2144 | 2 | null | 2140 | 2 | null | If you want a non-parametric approach that doesn't assume normality, you could obtain distributions of SDs for each configuration using bootstrapping, then compare these distributions. Comments below the answer to [this question](https://stats.stackexchange.com/questions/1169/ci-for-a-difference-based-on-independent-cis) suggest that non-overlapping 84% confidence intervals would indicate a difference in the SDs at an alpha of .05. Alternatively, on each iteration of the bootstrap you could find the difference between the two pseudo-estimates of the SD then compare the distribution of this difference to zero. Below is R code achieving both approaches; both suggest that you have not observed a difference that should be considered real.
```
a = c(1.18, -0.41, -0.66, 0.98, 0.1)
b = c(-0.36, -0.73, -1.47, 0.15, -0.31)
library(plyr)
iterations = 1e4
sds = ldply(
.data = 1:iterations
, .fun = function(x){
a_sd = sd(sample(a,replace=T))
b_sd = sd(sample(b,replace=T))
to_return = data.frame(
a_sd = a_sd
, b_sd = b_sd
, diff = a_sd - b_sd
)
return(to_return)
}
, .progress = 'text'
)
quantile(sds$a_sd,c(.08,.92))
quantile(sds$b_sd,c(.08,.92))
#substantial overlap
quantile(sds$diff,c(.025,.975))
#includes zero
```
| null | CC BY-SA 2.5 | null | 2010-08-26T13:49:10.053 | 2010-08-26T13:56:57.967 | 2017-04-13T12:44:23.203 | -1 | 364 | null |
2145 | 2 | null | 2130 | 6 | null | Srikant asks for a "more elegant approach." Perhaps the following will respond to this challenge.
Let the argument of the exponential be $f(x)$ (so its power series coefficients are the gammas) and let the right hand side be $g(x)$ (with deltas as its coefficients), so that by definition
$$g(x) = \exp(f(x)).$$
Differentiating both sides and replacing $\exp(f)$ with $g$ yields
$$g' = f' * g.$$
Writing this out as power series gives the desired result: the delta comes from $g'$ while the convolution of the gammas and deltas comes from $f' * g$.
You don't have to worry about convergence (and the whole machinery of Taylor series), by the way: all these calculations can be performed in the ring of formal power series.
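For a purely numerical sanity check of the identity (and of the recursion stated in the question), one can truncate both series and compare them at a small value of $x$ (a sketch in R; the gammas, the truncation order and $x$ are arbitrary):

```
gam <- c(0.3, -0.2, 0.1, 0.05)          # arbitrary gamma_1 .. gamma_4 (higher gammas = 0)
J   <- 12                               # truncation order of the delta series
del <- numeric(J + 1); del[1] <- 1      # del[j + 1] stores delta_j, with delta_0 = 1
for (j in 0:(J - 1)) {
  i <- 1:(j + 1)
  g <- ifelse(i <= length(gam), gam[i], 0)
  del[j + 2] <- sum(i * g * del[j + 2 - i]) / (j + 1)   # the recursion from the question
}
x <- 0.1
exp(sum(gam * x^seq_along(gam)))        # left-hand side
sum(del * x^(0:J))                      # truncated right-hand side: should agree closely
```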
| null | CC BY-SA 3.0 | null | 2010-08-26T14:43:02.220 | 2013-01-17T16:10:06.383 | 2013-01-17T16:10:06.383 | 919 | 919 | null |
2146 | 1 | 49881 | null | 1 | 144 | I have to compare pairs of audio strems as 1d time series. Looking at the aligned trajectories I need to either cluster them together or assume they arise from independent generators.
I remember hearing of Dirichlet processes being applied. Could try to defined a hidden Markov model on the structure, not sure yet how the emission probabilities would work out, but it must be possible.
any recommendations?
Best
EDITS - A set of 1D time series s in {1...N} and another set of trajectories q in {1...N}, all over a discrete time domain t=0...t=T with a uniform interval. Task: determine whether, under certain assumptions (e.g. iid, normality, etc.), the generator under these assumptions is a valid null hypothesis to have generated the two categories of trajectories s and q, or whether the null hypothesis should be abandoned for a more complex model.
So do we use an autoregressive model? With a Gaussian process, the inverse covariance would be hard to interpret. Some parametric model would seem to be valid. I don't know... any ideas?
| For data similar to audio, how to determine if there are 1 or 2 categories? | CC BY-SA 3.0 | null | 2010-08-26T17:26:23.483 | 2013-02-12T23:00:11.953 | 2013-02-12T22:56:08.027 | 196 | 1098 | [
"machine-learning",
"time-series",
"clustering"
] |
2147 | 2 | null | 1142 | 32 | null | A good solution will have several ingredients, including:
- Use a resistant, moving window smooth to remove nonstationarity.
- Re-express the original data so that the residuals with respect to the smooth are approximately symmetrically distributed. Given the nature of your data, it's likely that their square roots or logarithms would give symmetric residuals.
- Apply control chart methods, or at least control chart thinking, to the residuals.
As far as that last one goes, control chart thinking shows that "conventional" thresholds like 2 SD or 1.5 times the IQR beyond the quartiles work poorly because they trigger too many false out-of-control signals. People usually use 3 SD in control chart work, whence 2.5 (or even 3) times the IQR beyond the quartiles would be a good starting point.
I have more or less outlined the nature of Rob Hyndman's solution while adding to it two major points: the potential need to re-express the data and the wisdom of being more conservative in signaling an outlier. I'm not sure that Loess is good for an online detector, though, because it doesn't work well at the endpoints. You might instead use something as simple as a moving median filter (as in Tukey's resistant smoothing). If outliers don't come in bursts, you can use a narrow window (5 data points, perhaps, which will break down only with a burst of 3 or more outliers within a group of 5).
Once you have performed the analysis to determine a good re-expression of the data, it's unlikely you'll need to change the re-expression. Therefore, your online detector really only needs to reference the most recent values (the latest window) because it won't use the earlier data at all. If you have really long time series you could go further to analyze autocorrelation and seasonality (such as recurring daily or weekly fluctuations) to improve the procedure.
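A bare-bones illustration of this recipe (a sketch in R on simulated counts; the square-root re-expression, the 5-point running median and the 3-IQR threshold are just the defaults suggested above):

```
set.seed(1)
x <- rpois(200, 20); x[c(50, 120)] <- c(90, 0)   # toy counts with two gross outliers

z     <- sqrt(x)                         # re-express to symmetrize the residuals
fit   <- runmed(z, k = 5)                # resistant moving-median smooth
res   <- z - fit                         # residuals from the smooth
q     <- quantile(res, c(0.25, 0.75))
limit <- 3 * diff(q)                     # conservative, control-chart-like threshold
which(res < q[1] - limit | res > q[2] + limit)   # indices flagged as outliers
```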
| null | CC BY-SA 2.5 | null | 2010-08-26T18:02:43.510 | 2010-11-03T00:21:10.310 | 2010-11-03T00:21:10.310 | 919 | 919 | null |
2148 | 2 | null | 2142 | 5 | null | I would suggest that transformations aren't important to get a normal distribution for your errors. Normality isn't a necessary assumption. If you have "enough" data, the central limit theorem kicks in and your standard estimates become asymptotically normal. Alternatively, you can use bootstrapping as a non-parametric means to estimate the standard errors. (Homoskedasticity, a common variance for the observations across units, is required for your standard errors to be right; robust options permit heteroskedasticity).
Instead, transformations help to ensure that a linear model is appropriate. To give a sense of this, let's consider how we can interpret the coefficients in transformed models:
- outcome is units, predictors is units: A one unit change in the predictor leads to a beta unit change in the outcome.
- outcome in units, predictor in log units: A one percent change in the predictor leads to a beta/100 unit change in the outcome.
- outcome in log units, predictor in units: A one unit change in the predictor leads to a beta x 100% change in the outcome.
- outcome in log units, predictor in log units: A one percent change in the predictor leads to a beta percent change in the outcome.
If transformations are necessary to have your model make sense (i.e., for linearity to hold), then the estimate from this model should be used for inference. An estimate from a model that you don't believe isn't very helpful. The interpretations above can be quite useful in understanding the estimates from a transformed model and can often be more relevant to the question at hand. For example, economists like the log-log formulation because the interpretation of beta is an elasticity, an important measure in economics.
I'd add that the back transformation doesn't work because the expectation of a function is not the function of the expectation; the log of the expected value of beta is not the expected value of the log of beta. Hence, your estimator is not unbiased. This throws off standard errors, too.
| null | CC BY-SA 2.5 | null | 2010-08-26T20:43:29.787 | 2010-08-26T20:43:29.787 | null | null | 401 | null |
2149 | 1 | 2153 | null | 59 | 43521 | A [particle filter](http://en.wikipedia.org/wiki/Particle_filter) and [Kalman filter](http://en.wikipedia.org/wiki/Kalman_filter) are both [recursive Bayesian estimators](http://en.wikipedia.org/wiki/Recursive_Bayesian_estimation). I often encounter Kalman filters in my field, but very rarely see the usage of a particle filter.
When would one be used over the other?
| What is the difference between a particle filter (sequential Monte Carlo) and a Kalman filter? | CC BY-SA 2.5 | null | 2010-08-26T21:00:11.910 | 2020-09-06T16:19:25.457 | null | null | 5 | [
"bayesian",
"particle-filter",
"kalman-filter"
] |
2150 | 2 | null | 2032 | 21 | null | There's no need to call it Predictive Analytics :) It already has two names: statistics, and data mining.
Beginner Stats Book: Statistics in Plain English
Advanced Stats Book: Multivariate Analysis, by Hair
Data Mining Book: I still haven't found a great one, but Data Mining by Witten is okay.
Don't get too confused by all the details. There are only so many things you can accomplish in general:
- predict a real number (regression)
- predict a whole number (classification)
- modeling (same as the above two, but the model is understandable by humans)
- group similar observations (clustering)
- group similar factors (factor analysis)
- describe a single factor
- describe the relationship between multiple factors (correlation, association, etc)
- determine if a population value is different from another, based on a sample
- design experiments and calculate sample size
good luck!
| null | CC BY-SA 2.5 | null | 2010-08-26T21:24:56.327 | 2010-08-26T21:24:56.327 | null | null | 74 | null |
2151 | 1 | null | null | 97 | 163601 | In other words, instead of having a two class problem I am dealing with 4 classes and still would like to assess performance using AUC.
| How to plot ROC curves in multiclass classification? | CC BY-SA 2.5 | null | 2010-08-27T01:56:42.663 | 2021-04-12T16:53:22.913 | 2010-08-27T02:41:15.557 | 159 | null | [
"classification",
"roc"
] |
2152 | 2 | null | 2151 | 5 | null | While the math is beyond me this general review article has some references you will likely be interested in, and has a brief description of multi-class ROC graphs.
An introduction to ROC analysis by Tom Fawcett
Pattern Recognition Letters
[Volume 27, Issue 8, June 2006, Pages 861-874](http://dx.doi.org/10.1016/j.patrec.2005.10.010)
Link to [pdf](http://www.google.gr/url?sa=t&source=web&cd=1&ved=0CB8QFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.98.4088%26rep%3Drep1%26type%3Dpdf&ei=nlx3TIq-LJCk4Abn_Li3Bg&usg=AFQjCNHO-_yjWAJrRVnJms7MbcqaJkd8eg&sig2=sqERq2v68UvOhJDMviOklg) as provided by gd047- thanks
| null | CC BY-SA 2.5 | null | 2010-08-27T02:58:26.190 | 2010-08-27T13:22:46.397 | 2010-08-27T13:22:46.397 | 1036 | 1036 | null |