Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1731 | 2 | null | 1729 | 3 | null | One way that you can do the above task without using Sweave is as follows:
- Write a custom export function from R that takes the pairwise.t.test object and outputs LaTeX code.
- Use \input{...} in your LaTeX document to input this file into your LaTeX document.
With regards to step 1, there are many functions that are useful. Here are a few:
- paste() is useful for concatenating elements, including LaTeX fragments, with R objects
- formatC() is useful for formatting numbers
- write() is useful for exporting the combined LaTeX/R character vector from R to a text file
There are also a variety of LaTeX table-generating functions (e.g., see the `xtable` package).
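As a rough sketch of step 1 (the function name, file name and formatting choices here are just placeholders, and it assumes the p-value matrix of the `pairwise.t.test` object is what you want to tabulate):
```
export_pairwise <- function(ptt, file = "ptt-table.tex") {
  m <- ptt$p.value                                   # matrix of p-values
  header <- paste0("\\begin{tabular}{l",
                   paste(rep("r", ncol(m)), collapse = ""), "}")
  colrow <- paste("&", paste(colnames(m), collapse = " & "), "\\\\")
  rows <- paste(rownames(m), "&",
                apply(m, 1, function(r)
                  paste(formatC(r, format = "f", digits = 3),
                        collapse = " & ")),
                "\\\\")
  write(c(header, colrow, rows, "\\end{tabular}"), file = file)
}

# e.g. export_pairwise(pairwise.t.test(airquality$Ozone, airquality$Month))
# then \input{ptt-table} in the LaTeX document
```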
| null | CC BY-SA 2.5 | null | 2010-08-16T08:28:16.523 | 2010-08-16T08:28:16.523 | null | null | 183 | null |
1732 | 2 | null | 1485 | 3 | null | For (1), as ebony1 suggests, there are several incremental or on-line SVM algorithms you could try; the only thing I would mention is that the hyper-parameters (regularisation and kernel parameters) may also need tuning as you go along, and there are fewer algorithmic tricks to help with that. The regularisation parameter will almost certainly benefit from tuning because, as the amount of training data increases, less regularisation is normally required.
For (2) you could try fitting a one-class SVM to the training data, which would at least tell you if the data were consistent with the classes you do know about, and then classify as "unknown" if the output of the one-class SVM was sufficiently low. IIRC libsvm has an implementation of one-class SVM.
For (3) if you just use 1-v-1 class component SVMs, then you just need to make three new SVMs, one for each of the unknown-v-known class combinations, and there is no need to retrain the others.
HTH
| null | CC BY-SA 2.5 | null | 2010-08-16T09:07:50.237 | 2010-08-16T09:07:50.237 | null | null | 887 | null |
1733 | 2 | null | 602 | 3 | null | For optimisation, you don't need to perform a grid search; a Nelder-Mead simplex approach (fminsearch in MATLAB) is just as effective and generally much faster, especially if you have a lot of hyper-parameters to tune. Alternatively you can use gradient descent optimisation - if your implementation doesn't provide gradient information, you can always estimate it by finite differences (as fminunc in MATLAB does).
The Span bound is a good criterion to optimise, as it is fast, but good old cross-validation is hard to beat (but use a continuous statistic such as the squared hinge loss).
HTH
n.b. nu needs to lie in [0,1]; however, this is not a problem - just re-parameterise as theta = logit(nu) and then optimise theta instead of nu. You can then use more or less any numerical optimisation technique you like, e.g. Nelder-Mead simplex, gradient descent, local search, genetic algorithms...
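In R, a minimal sketch of that re-parameterisation trick (here `cv_error()` is a placeholder for whatever criterion you are minimising at a given value of nu):
```
logit <- function(p) log(p / (1 - p))
invlogit <- function(x) 1 / (1 + exp(-x))

obj <- function(theta) {
  nu <- invlogit(theta)  # guaranteed to lie in (0, 1)
  cv_error(nu)           # e.g. cross-validation error of the model fitted with this nu
}

# opt <- optim(par = logit(0.5), fn = obj)  # Nelder-Mead is optim's default method
# nu_opt <- invlogit(opt$par)
```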
| null | CC BY-SA 2.5 | null | 2010-08-16T09:13:42.133 | 2010-08-16T09:23:45.913 | 2010-08-16T09:23:45.913 | 887 | 887 | null |
1734 | 2 | null | 36 | 8 | null | As a generalization of 'pirates cause global warming': Pick any two quantities which are (monotonically) increasing or decreasing with time and you should see some correlation.
| null | CC BY-SA 2.5 | null | 2010-08-16T11:00:11.680 | 2010-08-16T11:00:11.680 | null | null | 961 | null |
1735 | 1 | null | null | 5 | 5195 | I am regressing two butterfly richness variables (summer and winter) against the same set of environmental variables in two separate models (all variables are continuous). The environmental variables are identical in each model. In the summer model the rank of the coefficients by weight is temp > prec > ndvi, while in the winter model it is temp > ndvi > prec. As it is almost implausible to compare the coefficients directly, please advise on any more advanced method, other than regression, to discriminate such coefficient ranks between seasons - for example canonical correlation analysis (I am unsure whether it is suitable here).
The spatial information: the richness and environmental variables comprise 2000 grid cells (continuous distribution) spanning 100 E to 130 E longitude and 18 to 25 N latitude.
| Method to compare variable coefficient in two regression models | CC BY-SA 2.5 | null | 2010-08-16T11:36:42.863 | 2010-09-25T16:17:28.180 | 2010-09-24T14:01:07.313 | 930 | 962 | [
"regression"
] |
1736 | 1 | 1741 | null | 50 | 24771 | I am wondering if there are any packages for Python that are capable of performing survival analysis. I have been using the survival package in R but would like to port my work to Python.
| Survival Analysis tools in Python | CC BY-SA 2.5 | null | 2010-08-16T12:10:29.567 | 2020-12-04T14:42:13.057 | 2020-12-04T14:42:13.057 | 11887 | 172 | [
"python",
"survival",
"mortality"
] |
1737 | 1 | 1745 | null | 11 | 9407 | I want to calculate a summary of a variable in a data.frame for each unique combination of factors in the data.frame. Should I use plyr to do this? I am OK with using loops as opposed to apply(), so just finding each unique combination would be enough.
| How to find summary statistics for all unique combinations of factors in a data.frame in R? | CC BY-SA 3.0 | null | 2010-08-16T13:23:52.747 | 2013-01-21T07:01:48.660 | 2013-01-21T07:01:48.660 | 196 | null | [
"r",
"categorical-data",
"aggregation",
"plyr"
] |
1738 | 2 | null | 1713 | 12 | null | If you want inferences specifically about the mean of the original variable, then don't use Box-Cox transformation. IMO Box-Cox transformations are most useful when the transformed variable has its own interpretation, and the Box-Cox transformation only helps you to find the right scale for analysis - this turns out to be the case surprisingly often. Two unexpected exponents that I found this way were 1/3 (when the response variable was bladder volume) and -1 (when the response variable was breaths per minute).
The log-transformation is probably the only exception to this. The mean on the log-scale corresponds to the geometric mean in the original scale, which is at least a well-defined quantity.
| null | CC BY-SA 2.5 | null | 2010-08-16T13:42:04.937 | 2010-08-16T13:42:04.937 | null | null | 279 | null |
1739 | 2 | null | 1737 | 11 | null | See `aggregate` and `by`. For example, from the help file for `aggregate`:
```
## Compute the averages according to region and the occurrence of more
## than 130 days of frost.
aggregate(state.x77,
list(Region = state.region,
Cold = state.x77[,"Frost"] > 130),
mean)
```
| null | CC BY-SA 2.5 | null | 2010-08-16T13:46:53.893 | 2010-08-16T13:46:53.893 | null | null | 279 | null |
1740 | 2 | null | 1737 | 3 | null | Here's the plyr solution, which has the advantage of returning multiple summary stats and producing a progress bar for long computes:
```
library(plyr) #for ddply
library(ez) #for a data set
data(ANT)
cell_stats = ddply(
    .data = ANT #use the ANT data
    , .variables = .(cue,flanker) #uses each combination of cue and flanker
    , .fun = function(x){ #apply this function to each combin. of cue & flanker
        to_return = data.frame(
            acc = mean(x$acc)
            , mrt = mean(x$rt[x$acc==1])
        )
        return(to_return)
    }
    , .progress = 'text'
)
```
| null | CC BY-SA 2.5 | null | 2010-08-16T14:01:14.360 | 2010-08-16T14:01:14.360 | null | null | 364 | null |
1741 | 2 | null | 1736 | 23 | null | AFAIK, there aren't any survival analysis packages in python. As mbq comments above, the only route available would be via [Rpy](http://rpy.sourceforge.net/).
Even if there were a pure python package available, I would be very careful in using it, in particular I would look at:
- How often does it get updated.
- Does it have a large user base?
- Does it have advanced techniques?
One of the benefits of R is that these standard packages get a massive amount of testing and user feedback. When dealing with real data, unexpected edge cases can creep in.
| null | CC BY-SA 2.5 | null | 2010-08-16T14:05:44.193 | 2010-08-16T14:05:44.193 | null | null | 8 | null |
1742 | 2 | null | 1668 | 4 | null | I'd suggest Christopher Bishop's "Pattern Recognition and Machine Learning". You can see some of it, including a sample chapter, at [https://www.microsoft.com/en-us/research/people/cmbishop/#!prml-book](https://www.microsoft.com/en-us/research/people/cmbishop/#!prml-book)
| null | CC BY-SA 4.0 | null | 2010-08-16T14:38:47.827 | 2018-09-07T06:27:28.540 | 2018-09-07T06:27:28.540 | 131198 | 247 | null |
1743 | 2 | null | 36 | 6 | null | A correlation on its own can never establish a causal link. [David Hume](http://en.wikipedia.org/wiki/David_Hume) (1711-1776) argued quite effectively that we cannot obtain certain knowledge of causality by purely empirical means. Kant attempted to address this; the Wikipedia page for [Kant](http://en.wikipedia.org/wiki/Kant) seems to sum it up quite nicely:
> Kant believed himself to be creating a compromise between the empiricists and the rationalists. The empiricists believed that knowledge is acquired through experience alone, but the rationalists maintained that such knowledge is open to Cartesian doubt and that reason alone provides us with knowledge. Kant argues, however, that using reason without applying it to experience will only lead to illusions, while experience will be purely subjective without first being subsumed under pure reason.
In otherwords, Hume tells us that we can never know a causal relationship exists just by observing a correlation, but Kant suggests that we may be able to use our reason to distinguish between correlations that do imply a causal link from those who don't. I don't think Hume would have disagreed, as long as Kant were writing in terms of plausibility rather than certain knowledge.
In short, a correlation provides circumstantial evidence implying a causal link, but the weight of the evidence depends greatly on the particular circumstances involved, and we can never be absolutely sure. The ability to predict the effects of interventions is one way to gain confidence (we can't prove anything, but we can disprove by observational evidence, so we have then at least attempted to falsify the theory of a causal link). Having a simple model that explains why we should observe a correlation, and that also explains other forms of evidence, is another way we can apply our reasoning as Kant suggests.
Caveat emptor: It is entirely possible I have misunderstood the philosophy, however it remains the case that a correlation can never provide proof of a causal link.
| null | CC BY-SA 2.5 | null | 2010-08-16T15:29:24.607 | 2010-08-16T15:29:24.607 | null | null | 887 | null |
1744 | 2 | null | 1737 | 1 | null | In addition to other suggestions you may find the `describe.by()` function in the `psych` package useful.
It can be used to show summary statistics on numeric variables across levels of a factor variable.
| null | CC BY-SA 2.5 | null | 2010-08-16T15:32:04.733 | 2010-08-16T15:32:04.733 | null | null | 183 | null |
1745 | 2 | null | 1737 | 7 | null | While I think `aggregate` is probably the solution you are seeking, if you want to create an explicit list of all possible factor combinations, `expand.grid` will do that for you, e.g.
```
> expand.grid(height = seq(60, 80, 5), weight = seq(100, 300, 50),
sex = c("Male","Female"))
height weight sex
1 60 100 Male
2 65 100 Male
...
30 80 100 Female
31 60 150 Female
```
You could then loop over each row in the resulting data frame to pull out records from your original data.
| null | CC BY-SA 2.5 | null | 2010-08-16T15:46:32.153 | 2010-08-16T15:46:32.153 | null | null | 729 | null |
1746 | 2 | null | 1736 | 9 | null | [python-asurv](http://sourceforge.net/projects/python-asurv/) is an effort to port the [asurv](http://www.astrostatistics.psu.edu/statcodes/sc_censor.html) software for survival methods in astronomy. Might be worth keeping an eye on, but cgillespie is right about the things to watch out for: it has a long way to go and development doesn't seem active. (AFAICT only one method exists, and even when completed, the package may be lacking for, say, biostatisticians.)
You're probably better off using the [survival](http://cran.r-project.org/web/packages/survival/index.html) package in R from Python through something like [RPy](http://rpy.sourceforge.net/rpy2.html) or [PypeR](http://www.jstatsoft.org/v35/c02). I haven't had any problems doing this myself.
| null | CC BY-SA 2.5 | null | 2010-08-16T16:30:54.480 | 2010-08-16T16:30:54.480 | null | null | 251 | null |
1747 | 2 | null | 1517 | 0 | null | 2 great answers so far and I promise I'll accept one of them. But I wanted to post this link as an answer too:
[summary of how to compare two numbers](http://www.statsoft.com/textbook/elementary-concepts-in-statistics/#Two%20basic%20features%20of%20every%20relation%20between%20variables)
Yes, for me merely comparing two numbers had some questions, so that shows you the level I'm at. While it won't be relevant to most, here is the VBA function I made based on that info ... well, not based that info exactly, I needed [something simpler](http://wiki.answers.com/Q/How_do_you_determine_Percent_of_change_between_two_numbers):
```
Private Function fxDifference(sng_A As Single, sng_B As Single) As Variant
''When calling this function, sng_B must not be zero (Error div/zero)
fxDifference = (sng_A - sng_B)
fxDifference = fxDifference / sng_B
fxDifference = Round(fxDifference * 100, 1)
End Function
```
I suppose when I said 'generic' tools, I meant 'really, really basic' ...
| null | CC BY-SA 2.5 | null | 2010-08-16T17:13:58.303 | 2010-08-16T17:13:58.303 | null | null | 857 | null |
1748 | 2 | null | 980 | 3 | null | If BPM is staying the same over many samples (or changing infinitesimally in a way you aren't concerned about) you can truncate your data to a significant digit that you actually care about and then do Run Length Encoding.
For example, in R this data:
```
0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
```
has this output
```
rle(data)
Run Length Encoding
lengths: int [1:3] 10 15 15
values : num [1:3] 0 1 2
```
| null | CC BY-SA 2.5 | null | 2010-08-16T17:20:25.063 | 2010-08-16T17:20:25.063 | null | null | 196 | null |
1749 | 1 | 1751 | null | 6 | 652 | I'm uncertain whether I should be able to intuit the answer to my question from a question that has [already been asked](https://stats.stackexchange.com/questions/1123/modeling-success-rate-with-gaussian-distribution) but I can't, so I am asking the question anyway. Thus, I am looking for a clear, easy-to-understand answer. A [recent newspaper article](http://www.dailymail.co.uk/news/article-1303442/Pregancy-The-average-couple-sex-104-times-conceive.html) reported that on average couples were able to conceive a child after 104 reproductive acts. Assuming independent binomial trials, that means for each act there was a 1/104 probability of success. I can do a quick simulation to show myself what the quantiles for this distribution look like, e.g. in R:
```
NSIM <- 10000
trialsuntilsuccess <- function(N=10000,Pr=1/104)
{
return(min(seq(1:N)[rbinom(N,1,Pr)==1]))
}
res <- rep(NA,NSIM)
for (i in 1:NSIM)
{
res[i] <- trialsuntilsuccess()
if (i %% 10 == 0) {cat(i,"\r"); flush.console()}
}
quantile(res,c(.025,.975))
```
But it seems like there should be some simple equation or approximation that could be applied, perhaps a probit or poisson? Any advice on how to get the quantiles without running a simulation? Bonus points for providing a way to do the relevant calculations in R.
| How can you approximate the number of trials to success given a particular Pr(Success)? | CC BY-SA 2.5 | null | 2010-08-16T17:27:28.967 | 2010-08-16T19:05:00.580 | 2017-04-13T12:44:29.013 | -1 | 196 | [
"probability",
"binomial-distribution",
"negative-binomial-distribution"
] |
1750 | 2 | null | 1737 | 1 | null | In [library(doBy)](http://cran.r-project.org/web/packages/doBy/vignettes/doBy.pdf) there is also the `summaryBy()` function, e.g.
```
summaryBy(DV1 + DV2 ~ Height+Weight+Sex,data=my.data)
```
| null | CC BY-SA 3.0 | null | 2010-08-16T17:31:42.483 | 2012-12-03T13:09:49.407 | 2012-12-03T13:09:49.407 | 1205 | 196 | null |
1751 | 2 | null | 1749 | 3 | null | If I understand your question correctly you want to compute the quantiles for the "No of failures before the first success" given that $p=\frac{1}{104}$.
The distribution you should be looking at is the [negative binomial distribution](http://en.wikipedia.org/wiki/Negative_binomial_distribution). The wiki discusses the negative binomial as:
>
In probability theory and statistics, the negative binomial distribution is a discrete probability distribution of the number of successes in a sequence of Bernoulli trials before a specified (non-random) number r of failures occurs.
Just inverting the interpretation of successes and failures, with a setting of r=1, would accomplish what you want. The distribution with r=1 is also called the [geometric distribution](http://en.wikipedia.org/wiki/Geometric_distribution).
You could then use the discrete distribution to compute the quantiles.
PS: I do not know R.
| null | CC BY-SA 2.5 | null | 2010-08-16T17:43:49.463 | 2010-08-16T17:43:49.463 | null | null | null | null |
1753 | 1 | 1754 | null | 2 | 3074 | I came across an example where standard deviation was being plotted on a Cartesian plot (standard 2D with X and Y axes.)
This seems like a valid thing to do but in this case the example only had a single line running across the graph to "indicate" standard deviation. This to me seems not very useful, possibly dangerous and misleading. Don't you need three lines plotted to properly visualize standard deviation on a graph? Thusly:
- The mean
- The mean plus one standard deviation value
- The mean minus one standard deviation value
P.S. I am a software developer working on a data visualization package so please take my use of stats terminology with a grain of salt. Any corrections and feedback will be sincerely appreciated.
Rephrasing the question:
If I had a set of five data points to plot on a cartesian plane:
```
X: 10 20 30 35 50
Y: 20 40 5 55 10
```
For this sample data set (the Y values) the mean is 26 and the stdev is ~21.036 (x values plotted along the X axis and y values plotted along the Y axis.)
What would a proper plotting of the mean and the stdev on top of the X/Y data set look like?
| Visualizing standard deviation on a Cartesian plot | CC BY-SA 2.5 | null | 2010-08-16T18:01:43.127 | 2010-08-16T20:03:34.477 | 2010-08-16T19:36:49.437 | 968 | 968 | [
"data-visualization",
"standard-deviation"
] |
1754 | 2 | null | 1753 | 7 | null | Probably a line for the mean and a line for +/- twice the standard deviation. That would be the "default" plot for that.
That said, I think you may be missing the point of the plot with the single line for the standard deviation. If what you're trying to represent is change in the variability of Y over X (i.e., heteroscedasticity), then a line plotting SD over X might work. It really does depend on the data and the questions that you're trying to ask. There just isn't a set of rules that you can follow to produce good plots every time, and in general the more automated the plotting system gets, the more useless I find it.
| null | CC BY-SA 2.5 | null | 2010-08-16T19:59:01.507 | 2010-08-16T19:59:01.507 | null | null | 71 | null |
1755 | 2 | null | 1753 | 3 | null | What about plotting the point with error bars, say mean +/- sd. Here's what your example data would look like:

---
Here's the R code I used to generate the plot:
```
library(ggplot2)
df = data.frame(values = c(10, 20, 30, 35, 50, 20, 40, 5, 55, 10),
                type = rep(c("X", "Y"), each = 5))
means = tapply(df$values, df$type, mean)
sds = tapply(df$values, df$type, sd)
df_summary = data.frame(means, sds, type = c("X", "Y"))
g = ggplot(data = df_summary, aes(y = means, x = type)) +
  geom_point(data = df, aes(y = values, x = type), col = 2) +
  geom_errorbar(aes(ymax = means + sds, ymin = means - sds)) +
  ylab("Values")
g
```
| null | CC BY-SA 2.5 | null | 2010-08-16T20:03:34.477 | 2010-08-16T20:03:34.477 | null | null | 8 | null |
1756 | 2 | null | 841 | 8 | null | The most naive approach I can think of is to regress $Y_i$ vs $X_i$ as $Y_i \sim \hat{m}X_i + \hat{b}$, then perform a $t$-test on the hypothesis $m = 1$. See [t-test for regression slope](http://en.wikipedia.org/wiki/T-test#Slope_of_a_regression_line).
A less naive approach is the Morgan-Pitman test. Let $U_i = X_i - Y_i, V_i = X_i + Y_i,$ then perform a test of the Pearson Correlation coefficient of $U_i$ vs $V_i$. (One can do this simply using the [Fisher R-Z transform](http://en.wikipedia.org/wiki/Fisher_transformation), which gives the confidence intervals around the sample Pearson coefficient, or via a bootstrap.)
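A minimal sketch of the Morgan-Pitman approach in R (assuming paired vectors `x` and `y`):
```
u <- x - y
v <- x + y
# var(X) = var(Y) if and only if cor(X - Y, X + Y) = 0,
# so test the Pearson correlation of u and v
cor.test(u, v)  # the reported confidence interval uses the Fisher r-to-z transform
```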
If you are using R, and don't want to have to code everything yourself, I would use `bootdpci` from Wilcox' Robust Stats package, WRS. (see [Wilcox' page](http://www-rcf.usc.edu/~rwilcox/).)
| null | CC BY-SA 2.5 | null | 2010-08-16T20:12:03.067 | 2010-08-16T20:12:03.067 | null | null | 795 | null |
1757 | 1 | null | null | 7 | 1559 | I am taking the time to learn how to analyze networks and want to test if there are differences between two networks over time. Since I am new to R and networks in general, I am hoping to get some help how to compare and analyze network graphs.
Simply, my dataset will contain information on flows between two geographic units (think Census migration data). I want to test if the flows, generally, are different with time. Since I am just starting out, I am not even sure if I am phrasing my question correctly.
I have created a few basic graphs and generated some very basic "summary statistics" on a graph in isolation in R before, so I understand how to get up and running, but I am not really sure where to go from here.
| Significant Difference between two network graphs | CC BY-SA 2.5 | null | 2010-08-16T23:36:20.887 | 2010-08-17T07:51:41.177 | 2010-08-17T07:37:41.443 | 8 | 569 | [
"r",
"networks"
] |
1758 | 2 | null | 1681 | 0 | null | I will try and answer your question for only one of the bounds. Chernoff bounds are given by:
$$Pr\left[\frac{1}{m} \sum_{i=1}^{i=m} X_i \ge p+\epsilon\right] \le e^{-D(p+\epsilon\Vert p)m}$$
where
$D(\cdot )$ is the [Kullback-Leibler divergence](http://en.wikipedia.org/wiki/Kullback-Leibler_divergence)
For the sake of convenience I will denote the rhs of the above inequality by $r(p,\epsilon,m)$. Thus, we have:
$$Pr\left[\frac{1}{m} \sum_{i=1}^{i=m} X_i \ge p+\epsilon\right] \le r(p,\epsilon,m)$$
The above can be re-written as:
$$Pr\left[ \sum_{i=1}^{i=m} X_i \ge m\ (p+\epsilon)\right] \le r(p,\epsilon,m)$$
If we let: $Y = \sum_{i=1}^{i=m} X_i$ then we know that:
$$Y \sim \textrm{Binomial}(m,p)$$
where
$p = Prob(X_i=1)$
Given the above it follows that,
$$Pr\left[ \sum_{i=1}^{i=m} X_i \ge m\ (p+\epsilon)\right] = Pr\left[ Y \ge m\ (p+\epsilon)\right]$$
But,
$$Pr[ Y \ge m\ (p+\epsilon)] = \sum_{k=\lceil m\ (p+\epsilon) \rceil}^{k=m} {m \choose k} p^k (1-p)^{m-k}$$
Thus, it follows that:
$$\sum_{k=\lceil m\ (p+\epsilon) \rceil}^{k=m} {m \choose k} p^k (1-p)^{m-k} \le r(p,\epsilon,m)$$
I am not sure if we can simplify the above to express $\epsilon$ as a function of $m$ but it may be of some help. Alternatively, you may want to explore the use of the normal approximation to the binomial.
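For reference, the normal approximation mentioned above (a standard CLT approximation, ignoring the continuity correction) gives
$$Pr\left[ Y \ge m\ (p+\epsilon)\right] \approx 1 - \Phi\left(\epsilon \sqrt{\frac{m}{p(1-p)}}\right)$$
where $\Phi$ is the standard normal CDF; fixing a target tail probability $\delta$ and solving then yields $\epsilon \approx z_{1-\delta}\sqrt{p(1-p)/m}$, with $z_{1-\delta}$ the $(1-\delta)$ quantile of the standard normal distribution.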
| null | CC BY-SA 4.0 | null | 2010-08-17T00:50:11.027 | 2023-02-11T10:21:51.907 | 2023-02-11T10:21:51.907 | 362671 | null | null |
1759 | 2 | null | 1757 | 1 | null | I am not sure I will be able to provide a complete answer but here is how I would start.
Step 1: Model the data generating process for the flows through the network
For example, you may want to model the flows from one point to another point in the network as a [Poisson distribution](http://en.wikipedia.org/wiki/Poisson_distribution). The Poisson distribution is used to model arrivals in a system over time and thus may work well for network flows. Depending on network complexity and your needs, you can model each path such that the arrival rate for each path is either different or the same (see the $\lambda$ parameter of the Poisson distribution).
Step 2: Identify a testing strategy which would let you ascertain the strength of evidence for your null model.
The challenge in this step is two-fold. First, you have to define what you mean when you say that the network flows in a network at different times are the same. Are you talking about throughput, or are you talking about the flows across each one of the paths?
The second challenge is that once you have solved the above issue, you need to find out a way to test your null hypothesis.
As an example: Suppose you want to check that the flows across each path are the same and that you are modeling the path flows for the network by a single parameter (i.e., $\lambda$ is identical for all paths). Thus, your null hypothesis would assume that $\lambda$ does not change with time. This is how you would go about testing your null hypothesis:
Null Hypothesis is True
Pool all network flows across time and estimate a common $\lambda$ using [maximum likelihood estimation](http://en.wikipedia.org/wiki/Poisson_distribution#Maximum_likelihood) for the poisson distribution.
Null Hypothesis is not true
Estimate $\lambda$ for each time period separately so that you get two different values (one for each one of the time periods).
You can then select the model (pooled or separate) that fits the data better on the basis of a criteria such as the [likelihood ratio](http://en.wikipedia.org/wiki/Likelihood-ratio_test).
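As a minimal sketch of that comparison in R (here `flows1` and `flows2` are hypothetical vectors of observed path flows at the two time points):
```
# log-likelihood of a Poisson model with a single rate
ll_pois <- function(x, lambda) sum(dpois(x, lambda, log = TRUE))

pooled   <- ll_pois(c(flows1, flows2), mean(c(flows1, flows2)))  # common lambda (MLE = sample mean)
separate <- ll_pois(flows1, mean(flows1)) + ll_pois(flows2, mean(flows2))

lr_stat <- 2 * (separate - pooled)  # approximately chi-squared with 1 df under the null
p_value <- pchisq(lr_stat, df = 1, lower.tail = FALSE)
```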
| null | CC BY-SA 2.5 | null | 2010-08-17T01:11:36.063 | 2010-08-17T01:11:36.063 | null | null | null | null |
1760 | 2 | null | 726 | 83 | null | >
He uses statistics like a drunken man uses a lamp post, more for support than illumination.
-- Andrew Lang
| null | CC BY-SA 2.5 | null | 2010-08-17T02:01:44.427 | 2010-10-02T17:11:05.487 | 2010-10-02T17:11:05.487 | 795 | 74 | null |
1761 | 1 | null | null | 28 | 3318 | There was already [a request for Mathematical Statistics Videos](https://stats.stackexchange.com/questions/485/mathematical-statistics-videos), but it explicitly asked from people for
>
videos that provide a rigorous
mathematical presentation of
statistics. i.e., videos that might
accompany a course that use a textbook
mentioned in this discussion on...
So at the same time I am wondering, what recommendation do you have for stat/prob - 101 - video courses?
| Statistics/Probability Videos for Beginners | CC BY-SA 2.5 | null | 2010-08-17T02:52:02.637 | 2013-01-23T10:05:12.513 | 2017-04-13T12:44:37.583 | -1 | 253 | [
"references"
] |
1762 | 2 | null | 1687 | 1 | null | I doubt that you can get an analytical solution to this problem. About the only step that is doable is the probability that the average of the first three velocities is under some cutoff, and it is not (probability that one velocity is under the cutoff)^3 as you stated. If $V_i \sim N (m, s^2)$, then $\bar{V}_3 \sim N (m, s^2/3)$, so the probability that $\bar{V}_3$ is less than $k$ standard deviations above the mean is $P(\bar{V}_3 < m + k s ) = \Phi(k\sqrt{3})$ where $\Phi$ is the normal distribution function. For example, for $k=0$: $P(\bar{V}_3 < m) = 0.5$, for $k=1$: $P(\bar{V}_3 < m + s) = 0.958$, etc.
The calculation involving the next three rounds is complicated by two things: it should somehow be conditional on the fact that the average was too low already, and then should only use the three highest values. While it might work out to be some triple integral, I don't think you want to use such a formula. After that the fact that you have a choice whether the bullet should be weighed or fired also probably depends on the previous values, and things get even more complicated depending on your strategy.
Even the simple multiplication of a random bullet weight and one random velocity is not as innocuous as it seems: the product will not be normal (or any other "regular" distribution). Its standard deviation can be approximated via standard [error propagation](http://en.wikipedia.org/wiki/Propagation_of_uncertainty#Example_formulas), but calculating probabilities of falling below/above some cutoff is not straightforward.
In summary, I think you should write a little simulation program - you will get better answers faster. Note that for the chronometer you will have to define more exactly what +/-4% means.
| null | CC BY-SA 2.5 | null | 2010-08-17T03:09:29.367 | 2010-08-17T03:36:03.013 | 2010-08-17T03:36:03.013 | 279 | 279 | null |
1763 | 2 | null | 1761 | 16 | null | I think a number of the suggestions put forward on the [mathematical statistics video question](https://stats.stackexchange.com/questions/485/mathematical-statistics-videos) probably fall in the stats 101 category:
- http://www.khanacademy.org/#Statistics: series of short videos on introductory statistics
- http://www.khanacademy.org/#Probability: series of short videos on introductory probability
- Math and probability for life sciences: A whole university course introducing statistics and probability.
- I also have a list of maths and statistics videos with a few others.
The Stat 579 Videos on statistical computing in R are also quite good:
- http://connect.extension.iastate.edu/p31588910/
- http://connect.extension.iastate.edu/p45341752/
- http://connect.extension.iastate.edu/p39131597/
- Factor & reshaping data
- Debugging & data aggregation
- Data aggregation & maps
- Data aggregation & maps (copy?)
- LaTeX and Sweave
- Random numbers, basic Simulations
- Permutations & data from the web
- Text & Patterns
- Databases
| null | CC BY-SA 3.0 | null | 2010-08-17T03:47:24.627 | 2011-11-20T09:44:06.357 | 2017-04-13T12:44:48.803 | -1 | 183 | null |
1764 | 1 | 1792 | null | 28 | 10588 | Users are often tempted to break axis values to present data of different orders of magnitude on the same graph (see [here](http://peltiertech.com/Excel/Charts/BrokenYAxis.html)). While this may be convenient it's not always the preferred way of displaying the data (can be misleading at best). What are alternative ways of displaying the data that are different in several orders of magnitude?
I can think of two ways, to log-transform the data or use lattice plots. What are other options?
| What are alternatives to broken axes? | CC BY-SA 2.5 | null | 2010-08-17T05:22:31.340 | 2013-03-30T20:16:35.730 | 2010-08-17T05:25:15.750 | 159 | 144 | [
"data-visualization",
"logarithm"
] |
1765 | 2 | null | 1764 | 8 | null | Maybe it can be classified as lattice, but I'll try; plot all the bars scaled to the highest in one panel and put another panel showing zoom on lower ones. I used this technique once in case of a scatterplot, and the result was quite nice.
| null | CC BY-SA 2.5 | null | 2010-08-17T06:22:29.067 | 2010-08-17T06:22:29.067 | null | null | null | null |
1766 | 2 | null | 1649 | 1 | null | You can calculate the SD for each person and its standard error (for example using the bootstrap). Then you can use the rmeta package to do the analysis. I think you should use some transformation of the SD, for example the log (or maybe better, the log of the variance).
| null | CC BY-SA 2.5 | null | 2010-08-17T06:26:59.823 | 2010-08-17T06:26:59.823 | null | null | 419 | null |
1767 | 2 | null | 1378 | 3 | null | From your description I can't see any reason to distinguish the "baseline test" from the immediately drawn "second sample". They are simply 2 baseline measurements and the variance (at baseline) can be calculated on that basis. It would be better plotting the mean of the two baseline measurements versus the third "six month" sample.
The problem is with the 6 month sample. As only one sample is taken at this point there is no way of estimating the "variability" at this point, or rather separating sampling variation from longitudinal (real) change in TB reading.
If we consider this a longitudinal data analysis problem we would probably choose a random intercept (baseline TB) and a random slope (to fit the 6 month TB). The sampling variability would be estimated from the two baseline measurements and the slope from the third 6 month measure. We can't estimate the variability at 6 months without strong distributional assumptions on the change over those six months, such as assuming no change.
| null | CC BY-SA 2.5 | null | 2010-08-17T06:29:42.563 | 2010-08-17T06:29:42.563 | null | null | 521 | null |
1769 | 2 | null | 1761 | 7 | null | I would suggest [Statistics 401](http://video.google.com/videoplay?docid=-3474013489970580510&hl=en&emb=1#) (from Iowa State University).
This is the link for Lesson 1. There are 64 lectures in total. (Here is the [last one](http://video.google.com/videoplay?docid=3835401745697888723&ei=coO0S9aOOJLF-Qb9kJnnBg&q=Statistics+401%3A+Lesson+64&hl=en&view=3#)). Unfortunately, I haven't found the links gathered together in one page. I think also that lessons 21,26 and 39 are missing. However, I don't remember having any problem in following the course). You can also find the handouts for this class posted [here](http://www.public.iastate.edu/~pcaragea/S401F07/Handouts07.html).
Some other introductory courses in Statistics are:
[Introductory Probability and Statistics for Business](http://webcast.berkeley.edu/course_details_new.php?seriesid=2009-B-87384&semesterid=2009-B)
[Basics of probability and statistics](http://videolectures.net/bootcamp07_keller_bss/)
[Lesson Videos for Statistics](http://sofia.fhda.edu/gallery/statistics/resources.html)
[2007 SLUO Lectures on Statistics](http://www-group.slac.stanford.edu/sluo/lectures/Stat2007_lectures.htm)
| null | CC BY-SA 2.5 | null | 2010-08-17T07:28:37.813 | 2010-08-17T07:36:52.710 | 2010-08-17T07:36:52.710 | 339 | 339 | null |
1771 | 2 | null | 1757 | 5 | null | I agree with Srikant, you need to model your process. You mentioned that you had already created some networks in R, what model did you assume?
The way I would tackle this problem, is to form a mathematical model, say an ODE model. For example,
\begin{equation}
\frac{dX_i(t)}{dt} = \lambda X_{i-1}(t) -\mu X_{i+1}(t)
\end{equation}
where $X_i$ depends on the population at geographic unit $i$. Since you are interested in differences in time, your parameters $\lambda$ may also depend on $t$.
You can fit both models simultaneously and determine if the rates are different.
Your problem isn't easy and I don't think there's a simple solution to it.
| null | CC BY-SA 2.5 | null | 2010-08-17T07:51:41.177 | 2010-08-17T07:51:41.177 | null | null | 8 | null |
1772 | 2 | null | 726 | 4 | null | It's not really about statistics, but I think it applies to statistics:
>
It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.
Arthur Conan Doyle
| null | CC BY-SA 2.5 | null | 2010-08-17T08:29:54.487 | 2010-08-17T08:29:54.487 | null | null | 956 | null |
1773 | 1 | 1775 | null | 29 | 26143 | Precision is defined as:
```
p = true positives / (true positives + false positives)
```
Is it correct that, as `true positives` and `false positives` approach 0, the precision approaches 1?
Same question for recall:
```
r = true positives / (true positives + false negatives)
```
I am currently implementing a statistical test where I need to calculate these values, and sometimes it happens that the denominator is 0, and I am wondering which value to return for this case.
P.S.: Excuse the inappropriate tag, I wanted to use `recall`, `precision` and `limit`, but I cannot create new Tags yet.
| What are correct values for precision and recall in edge cases? | CC BY-SA 2.5 | null | 2010-08-17T09:11:30.343 | 2017-04-25T16:38:56.470 | 2010-09-16T07:01:47.313 | null | 977 | [
"precision-recall"
] |
1774 | 2 | null | 1773 | 2 | null | That would depend on what you mean by "approach 0". If false positives and false negatives both approach zero at a faster rate than true positives, then yes to both questions. But otherwise, not necessarily.
| null | CC BY-SA 2.5 | null | 2010-08-17T09:21:16.873 | 2010-08-17T09:21:16.873 | null | null | 159 | null |
1775 | 2 | null | 1773 | 21 | null | Given a confusion matrix:
```
        predicted
        (+)  (-)
       -----------
   (+) | TP | FN |
actual -----------
   (-) | FP | TN |
       -----------
```
we know that:
```
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
```
Let's consider the cases where the denominator is zero:
- TP+FN=0 : means that there were no positive cases in the input data
- TP+FP=0 : means that all instances were predicted as negative
| null | CC BY-SA 3.0 | null | 2010-08-17T09:45:46.663 | 2016-03-05T12:13:53.743 | 2016-03-05T12:13:53.743 | 170 | 170 | null |
1776 | 1 | null | null | -2 | 323 | The purpose to run regressions for butterfly richness again 5 environmental variables is to show the importance rank of the independent variables mainly by AIC.
In non-full models, they reveal that variable A tends to be more influential than the others by delta AIC.
However, in the full model, the regression coefficient of variable A is slightly second to that of variable B. (R-square of the full model is 1.43)
The conflicting outcomes (non-full model by AIC and full model by slope) seem to make it difficult to ascertain that variable A is the most heavily weighted variable.
Please kindly suggest which criterion should be relied on for the specified purpose, or whether any further test should be carried out.
Thank you.
| Shall I trust AIC (non-full model) or slope (full model)? | CC BY-SA 2.5 | null | 2010-08-17T11:23:18.773 | 2010-08-17T11:56:05.983 | 2010-08-17T11:36:26.987 | null | 962 | [
"regression",
"aic",
"feature-selection"
] |
1778 | 2 | null | 1776 | 3 | null | R-Squared can't be 1.43... and other errors make your question hard to interpret.
Here's a sort of generic response that might eventually lead to an answer.
The AIC score tells you how good the model is, similar to R-squared, but it penalizes the model for how many components it contains (lower AIC is better). You can theoretically always get a better fit with more elements in the model, and R-squared reflects that, but at some point adding more explanatory variables doesn't increase the model accuracy enough to make up for the needless increase in complexity. Therefore, if you add a factor and the AIC goes up instead of down, that's because you're not adding enough explanatory power to make up for the increased complexity of the model, and you should err on the side of parsimony (i.e. remove that factor).
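As a small illustration of this kind of comparison in R (the data frame `butterflies` and the variable names are hypothetical):
```
fit_full    <- lm(richness ~ temp + prec + ndvi, data = butterflies)
fit_reduced <- lm(richness ~ temp + prec, data = butterflies)
AIC(fit_full, fit_reduced)  # the model with the lower AIC is preferred
```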
| null | CC BY-SA 2.5 | null | 2010-08-17T11:56:05.983 | 2010-08-17T11:56:05.983 | null | null | 601 | null |
1779 | 2 | null | 1651 | 2 | null | You can try the gamlss.cens package.
| null | CC BY-SA 2.5 | null | 2010-08-17T12:27:00.557 | 2010-08-17T12:27:00.557 | null | null | 419 | null |
1780 | 1 | 1785 | null | 8 | 1068 | (I'm a bit outside my comfort zone, so apologies if this is badly worded, or off-topic)
I have a bibliographic database, containing details of about 1200 different papers, books, web sites etc., all with various details, including keywords and an abstract. I want to somehow analyse this database and produce some graphics showing the correlations between different keywords (like "drug" is often present with either "pharmacology" or "assay").
Ideally this would be in R, but general advice would also be welcome. (I've seen [this](https://stats.stackexchange.com/questions/124/statistical-classification-of-text) question/answer which piqued my interest, and this [heatmap graphic](http://www2.warwick.ac.uk/fac/sci/moac/students/peter_cock/r/heatmap/) also seems related)
My database could be in bibtex, or could be converted to plain text.
| How to start an analysis of keywords from a bibliography and detect correlations? | CC BY-SA 2.5 | null | 2010-08-17T13:10:09.610 | 2010-11-10T20:53:29.547 | 2017-04-13T12:44:45.640 | -1 | 114 | [
"r",
"text-mining"
] |
1781 | 1 | null | null | 25 | 2761 | Chemical analyses of environmental samples are often censored below at reporting limits or various detection/quantitation limits. The latter can vary, usually in proportion to the values of other variables. For example, a sample with a high concentration of one compound might need to be diluted for analysis, resulting in proportional inflation of the censoring limits for all other compounds analyzed at the same time in that sample. As another example, sometimes the presence of a compound can alter the response of the test to other compounds (a "matrix interference"); when this is detected by the laboratory, it will inflate its reporting limits accordingly.
I am seeking a practical way to estimate the entire variance-covariance matrix for such datasets, especially when many of the compounds experience more than 50% censoring, which is often the case. A conventional distributional model is that the logarithms of the (true) concentrations are multinormally distributed, and this appears to fit well in practice, so a solution for this situation would be useful.
(By "practical" I mean a method that can reliably be coded in at least one generally available software environment like R, Python, SAS, etc., in a way that executes quickly enough to support iterative recalculations such as occur in multiple imputation, and which is reasonably stable [which is why I am reluctant to explore a BUGS implementation, although Bayesian solutions in general are welcome].)
Many thanks in advance for your thoughts on this matter.
| Unbiased estimation of covariance matrix for multiply censored data | CC BY-SA 2.5 | null | 2010-08-17T13:10:29.173 | 2012-10-20T19:34:07.840 | 2010-10-19T07:19:29.530 | 449 | 919 | [
"correlation",
"estimation",
"censoring",
"covariance-matrix",
"unbiased-estimator"
] |
1782 | 2 | null | 1651 | 2 | null | Another R package that seems to do what you want, is [pscal](http://cran.r-project.org/web/packages/pscl/). The associated [vignette](http://cran.r-project.org/web/packages/pscl/vignettes/countreg.pdf) has lots of examples.
| null | CC BY-SA 2.5 | null | 2010-08-17T13:18:59.637 | 2010-08-17T13:18:59.637 | null | null | 8 | null |
1783 | 2 | null | 1773 | 4 | null | I am familiar with different terminology. What you call precision I would call positive predictive value (PPV), and what you call recall I would call sensitivity (Sens). See:
[http://en.wikipedia.org/wiki/Receiver_operating_characteristic](http://en.wikipedia.org/wiki/Receiver_operating_characteristic)
In the case of sensitivity (recall), if the denominator is zero (as Amro points out), there are NO positive cases, so the classification is meaningless. (That does not stop either TP or FN being zero, which would result in a limiting sensitivity of 1 or 0. These points are respectively at the top right and bottom left hand corners of the ROC curve - TPR = 1 and TPR = 0.)
The limit of PPV is meaningful though. It is possible for the test cut-off to be set so high (or low) that all cases are predicted as negative. This is at the origin of the ROC curve. The limiting value of the PPV just before the cutoff reaches the origin can be estimated by considering the final segment of the ROC curve just before the origin. (This may be better to model as ROC curves are notoriously noisy.)
For example if there are 100 actual positives and 100 actual negatives and the final segment of the ROC curve approaches from TPR = 0.08, FPR = 0.02, then the limiting PPV would be PPV ~ 0.08*100/(0.08*100 + 0.02*100) = 8/10 = 0.8, i.e. an 80% probability of being a true positive.
In practice each sample is represented by a segment on the ROC curve - horizontal for an actual negative and vertical for an actual positive. One could estimate the limiting PPV by the very last segment before the origin, but that would give an estimated limiting PPV of 1, 0 or 0.5, depending on whether the last sample was a true positive, a false positive (actual negative), or an equal mix of TP and FP. A modelling approach would be better, perhaps assuming the data are binormal - a common assumption, e.g.:
[http://mdm.sagepub.com/content/8/3/197.short](http://mdm.sagepub.com/content/8/3/197.short)
| null | CC BY-SA 2.5 | null | 2010-08-17T13:20:07.530 | 2010-08-17T13:20:07.530 | null | null | 521 | null |
1784 | 2 | null | 1764 | 11 | null | Some additional ideas:
(1) You needn't confine yourself to a logarithmic transformation. Search this site for the "data-transformation" tag, for example. Some data lend themselves well to certain transformations like a root or a logit. (Such transformations--even logs--are usually to be avoided when publishing graphics for a non-technical audience. On the other hand, they can be excellent tools for seeing patterns in data.)
(2) You can borrow a standard cartographic technique of insetting a detail of a chart within or next to your chart. Specifically, you would plot the extreme values by themselves on one chart and all (or the) rest of the data on another with a more limited axis range, then graphically arrange the two along with indications (visual and/or written) of the relationship between them. Think of a map of the US in which Alaska and Hawaii are inset at different scales. (This won't work with all kinds of charts, but could be effective with the bar charts in your illustration.) [I see this is similar to mbq's recent answer.]
(3) You can show the broken plot side-by-side with the same plot on unbroken axes.
(4) In the case of your bar chart example, choose a suitable (perhaps hugely stretched) vertical axis and provide a panning utility. [This is more of a trick than a genuinely useful technique, IMHO, but it might be useful in some special cases.]
(5) Select a different schema to display the data. Instead of a bar chart that uses length to represent values, choose a chart in which the areas of symbols represent the values, for example. [Obviously trade-offs are involved here.]
Your choice of technique will likely depend on the purpose of the plot: plots created for data exploration often differ from plots for general audiences, for example.
| null | CC BY-SA 2.5 | null | 2010-08-17T13:30:03.980 | 2010-08-17T13:30:03.980 | null | null | 919 | null |
1785 | 2 | null | 1780 | 1 | null | I'm also outside my area of expertise, but assuming that you want to use R, here are a few thoughts.
- There is a bibtex package in R for importing bibtex files.
- Various character functions could be used to extract the key words.
- The data sounds a little like a two-mode network, which might mean packages like sna and igraph are useful.
- Plots of 2d multidimensional scaling can also be useful in visualising similarities (e.g., based on co-occurrence or some other measure) between words (here's a tutorial).
| null | CC BY-SA 2.5 | null | 2010-08-17T13:52:29.993 | 2010-08-17T13:52:29.993 | null | null | 183 | null |
1786 | 2 | null | 1780 | 0 | null | You may want to take a look at the [phi coefficient](http://en.wikipedia.org/wiki/Phi_coefficient) which is a measure of association for [nominal variables](http://en.wikipedia.org/wiki/Level_of_measurement#Nominal_scale).
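For example, a minimal sketch in R with a hypothetical 2x2 keyword co-occurrence table:
```
# rows: keyword A present/absent; columns: keyword B present/absent
tab <- matrix(c(30, 5, 10, 55), nrow = 2,
              dimnames = list(A = c("yes", "no"), B = c("yes", "no")))
# phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d))
phi <- (tab[1, 1] * tab[2, 2] - tab[1, 2] * tab[2, 1]) /
  sqrt(prod(rowSums(tab)) * prod(colSums(tab)))
phi
```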
| null | CC BY-SA 2.5 | null | 2010-08-17T14:01:10.333 | 2010-08-17T14:01:10.333 | null | null | null | null |
1787 | 1 | null | null | 7 | 2451 | I am fitting a GLM model (in R), and would like to get an estimation of the variability of the coefficients estimated by the model.
If I understand it correctly, the method to use in such a case is bootstrapping (not cross-validation).
Am I correct that an easy way to do this is to use the boot command from the boot package, output the coefficients at each replication, and at the end calculate their variance? Or is there something I might be missing?
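For example, something along these lines (the data frame `mydata`, the formula and the family here are just placeholders):
```
library(boot)

# the statistic refits the GLM on the resampled rows and returns its coefficients
boot_coef <- function(data, indices) {
  fit <- glm(y ~ x1 + x2, family = binomial, data = data[indices, ])
  coef(fit)
}

# b <- boot(data = mydata, statistic = boot_coef, R = 1000)
# apply(b$t, 2, var)  # bootstrap variance of each coefficient
```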
| Using bootstrap for glm coefficients variance estimation (in R) | CC BY-SA 2.5 | null | 2010-08-17T15:56:39.067 | 2010-08-18T16:38:48.443 | null | null | 253 | [
"r",
"confidence-interval",
"variance",
"generalized-linear-model",
"bootstrap"
] |
1788 | 2 | null | 726 | 10 | null | >
A man who ‘rejects’ a hypothesis provisionally, as a matter of habitual practice, when the significance is at the 1% level or higher, will certainly be mistaken in not more than 1% of such decisions. For when the hypothesis is correct he will be mistaken in just 1% of these cases, and when it is incorrect he will never be mistaken in rejection. [...] However, the calculation is absurdly academic, for in fact no scientific worker has a fixed level of significance at which from year to year, and in all circumstances, he rejects hypotheses; he rather gives his mind to each particular case in the light of his evidence and his ideas.
-- Sir Ronald A. Fisher, from Statistical Methods and Scientific Inference (1956)
Another quote as a commentary: "This passage clearly is intended as a criticism of Neyman and Pearson, although again their names are not mentioned. However, these authors never recommended a fixed level of significance that would be used in all cases. [...] Thus Fisher rather incongruously appears to be attacking his own past position rather than that of Neyman and Pearson" (from Fisher, Neyman, and the Creation of Classical Statistics by Erich Lehmann, section 4.5).
| null | CC BY-SA 3.0 | null | 2010-08-17T16:17:00.330 | 2015-02-28T18:06:46.280 | 2020-06-11T14:32:37.003 | -1 | 561 | null |
1789 | 2 | null | 1780 | 5 | null | so you have a document x keyword matrix which basically represents a bipartite graph (or two-mode network depending on your cultural background) with edges between documents and tags. If you're not interested in individual documents - as I understand you -, you can create a network of keywords by counting the number of cooccurrences between each keyword. Simply plotting this graph might already give you a neat idea of what this data looks like. You can further tweak the visualization if you, e.g., scale the size of the keywords by the number of total occurrences, or (in case you have a lot of keywords) introduce a minimum number of total occurrences for a keyword to appear in the first place.
As a tool, I can only recommend [GraphViz](http://www.graphviz.org/) which allows you to specify graphs like
```
keyword1 -- keyword2
keyword1 -- keyword3
keyword1[label="statistics", fontsize=...]
```
and "compile" them into pngs, pdfs, whatever, yielding very nice results (particularly if you play a bit with the font settings).
| null | CC BY-SA 2.5 | null | 2010-08-17T16:27:30.400 | 2010-08-17T16:27:30.400 | null | null | 979 | null |
1790 | 1 | 1791 | null | 7 | 496 | I'm trying to compare several methods by their performance on a set of synthetic data samples. For each method, I obtain a performance value between 0 and 1 for each of those samples. Then I plot a graph with the average performance per method.
The problem now is that the achievable quality per sample varies strongly between different samples (if you wonder why: I generate random graphs and evaluate community detection methods on them, and sometimes strange things happen, like elements from the same community getting disconnected due to sparseness etc.). So error bars based on the standard deviation or standard error tend to get really large.
Imagine one method yields [1, 0.5, 1], and the other one (on the same three samples) [0.5, 0.25, 0.5]. Which measure can I apply to *de*emphasize the between-sample variance in the series and emphasize the fact that method 1 always outperforms method 2? Or, to put it differently, how can I test whether method 1 is significantly better than method 2 without being misled by the different range of the individual datapoints? (Also note that I typically have more than two methods to compare, this is just for the example)
Thanks,
Nic
Update
One thing I've done is to count, for each method, how many times its performance is within 95% of the top performance. The picture pretty much speaks in favor of sample-based variance, not robust vs less robust methods. However, I'm still uncertain about how to generate a statistically valid statement from that..?
Update two years after
Just found this answer again. Simply for anyone who stumbles across this: I went with a sign test: count how many times method x is better than method y. The null hypothesis is then that, if there is no difference, each should be better than the other 50% of the time; the probability that the actual number of wins/losses stems from a 0.5 coin flip can be computed via the binomial distribution and serves as your $p$.
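In R this amounts to a binomial test on the win counts (the numbers here are hypothetical):
```
# say method 1 outperforms method 2 on 14 of 20 samples
binom.test(x = 14, n = 20, p = 0.5)  # two-sided p-value of the sign test
```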
| Meaningful deviation measure with strongly varying datapoints | CC BY-SA 3.0 | null | 2010-08-17T16:39:24.803 | 2012-12-04T06:46:40.940 | 2012-12-04T06:46:40.940 | 979 | 979 | [
"standard-deviation",
"statistical-significance",
"standard-error"
] |
1791 | 2 | null | 1790 | 3 | null | You need to use some paired test, maybe a paired t-test, or a sign test if the distribution is really weird.
| null | CC BY-SA 2.5 | null | 2010-08-17T16:49:24.107 | 2010-08-17T16:49:24.107 | null | null | null | null |
1792 | 2 | null | 1764 | 17 | null | I am very [wary of using logarithmic axes on bar graphs](http://www.graphpad.com/faq/viewfaq.cfm?faq=1477). The problem is that you have to choose a starting point of the axis, and this is almost always arbitrary. You can choose to make two bars have very different heights, or almost the same height, merely by changing the minimum value on the axis. These three graphs all plot the same data:

An alternative to discontinuous axes, that no one has mentioned yet, is to simply show a table of values. In many cases, tables are easier to understand than graphs.
| null | CC BY-SA 2.5 | null | 2010-08-17T17:35:21.270 | 2010-08-17T17:35:21.270 | null | null | 25 | null |
1794 | 2 | null | 1661 | 2 | null | From my point of view, when there are two explanatory variables and both have just two levels, we have the famous two-by-two contingency table. Fisher’s exact test can take such a matrix as its sole argument. Alternatively you can use Pearson’s chi-squared test.
If your null hypothesis is not the 25:25:25:25 distribution across the four categories (i.e. say it's 9:3:3:1), you'll have to calculate the expected frequencies explicitly.
Then perform the chi-squared test (in R) like that:
```
chisq.test(observed,p=c(9,3,3,1),rescale.p=TRUE)
# rescale.p is needed because the probabilities do not sum to 1.0
```
| null | CC BY-SA 2.5 | null | 2010-08-17T18:31:59.927 | 2010-08-17T18:31:59.927 | null | null | 339 | null |
1795 | 2 | null | 1444 | 51 | null | A useful approach when the variable is used as an independent factor in regression is to replace it by two variables: one is a binary indicator of whether it is zero and the other is the value of the original variable or a re-expression of it, such as its logarithm. This technique is discussed in [Hosmer & Lemeshow's book on logistic regression](https://rads.stackoverflow.com/amzn/click/com/0471356328) (and in [other places,](https://stats.stackexchange.com/a/4833/919) I'm sure). Truncated probability plots of the positive part of the original variable are useful for identifying an appropriate re-expression. (See the analysis at [https://stats.stackexchange.com/a/30749/919](https://stats.stackexchange.com/a/30749/919) for examples.)
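A minimal sketch of that coding in R (the variable names and the family are placeholders; the constant used for `log_x` when `x` is zero does not matter, because the indicator absorbs it):
```
is_zero <- as.numeric(x == 0)        # 1 when the original variable is zero
log_x   <- ifelse(x > 0, log(x), 0)  # re-expression of the positive part
fit <- glm(y ~ is_zero + log_x, family = binomial)
```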
When the variable is the dependent one in a linear model, censored regression (like [Tobit](https://en.wikipedia.org/wiki/Tobit_model)) can be useful, again obviating the need to produce a started logarithm. This technique is common among econometricians.
| null | CC BY-SA 4.0 | null | 2010-08-17T18:48:40.177 | 2019-09-19T14:57:03.797 | 2019-09-19T14:57:03.797 | 919 | 919 | null |
1796 | 2 | null | 1790 | 1 | null | I am not at all sure if ignoring the performance spread is a good idea. Ideally, you would want a method to be both reliable (i.e., have low spread) and be valid (i.e., give a performance measure of close to 1). Consider the following two output measures:
Method 1. [0.80, 0.60]
Method 2. [0.71, 0.69].
Unlike your example, there is no method that clearly dominates and in fact both methods perform equally well on average. Thus you may want to choose the one that is more reliable (i.e., has lower spread).
If you accept the above reasoning then your null hypothesis should be:
$$\frac{\mu_1}{\sigma_1} = \frac{\mu_2}{\sigma_2}$$
The above is analogous to the [Sharpe ratio](http://en.wikipedia.org/wiki/Sharpe_ratio) from finance and I am sure there is an extensive financial literature which discusses how to test hypotheses like the above and its extensions to more than 2 groups. Unfortunately, I am not well enough read in that literature to help you.
| null | CC BY-SA 2.5 | null | 2010-08-17T19:08:05.987 | 2010-08-17T19:08:05.987 | null | null | null | null |
1797 | 1 | 1855 | null | 14 | 3976 | I was looking at this [page](http://glmm.wikidot.com/faq) and noticed the methods for confidence intervals for lme and lmer in R. For those who don't know R, those are functions for fitting mixed-effects or multi-level models. If I have fixed effects in something like a repeated measures design, what would a confidence interval around the predicted value (similar to a mean) mean? I can understand that you can have a reasonable confidence interval for an effect, but a confidence interval around a predicted mean in such designs seems to me impossible. It could be very large, to acknowledge the fact that the random variable contributes uncertainty to the estimate, but in that case it wouldn't be useful at all in an inferential sense for comparing across values. Or it would have to be small enough to use inferentially, but then useless as an estimate of the quality of the mean (predicted) value that you would find in the population.
Am I missing something here or is my analysis of the situation correct?... [and probably a justification for why it isn't implemented in lmer (but easy to get in SAS). :)]
| What would a confidence interval around a predicted value from a mixed effects model mean? | CC BY-SA 2.5 | null | 2010-08-17T19:15:32.930 | 2012-01-08T21:32:24.613 | null | null | 601 | [
"r",
"confidence-interval",
"mixed-model",
"repeated-measures",
"sas"
] |
1798 | 2 | null | 1787 | 2 | null | Yes, you are right. What boot does is simply generate new training sets by drawing with replacement from the original set. About 2/3 of the original objects are present in each new set (some of them duplicated), but the size is the same as the original, so it does not affect model building.
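A quick R sketch of that point (purely illustrative):
```
set.seed(3)
n <- 200
# Fraction of distinct original cases appearing in each bootstrap training set
frac_unique <- replicate(1000, length(unique(sample(n, replace = TRUE))) / n)
mean(frac_unique)   # close to 1 - exp(-1), i.e. about 2/3
```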
| null | CC BY-SA 2.5 | null | 2010-08-17T19:30:14.080 | 2010-08-17T19:30:14.080 | null | null | null | null |
1799 | 1 | null | null | 9 | 2914 | I am seeking recommendations and/or best practices for analyzing non-independent data. In particular, I am curious about non-independent data that does not reflect typical repeated-measures time-based data in which data for the same question(s) or stimuli are collected at different time points. Rather the data collected is elicited from similar (but not identical) questions or stimuli that are known to be related. A specific example follows.
I have pain perception data for two groups (group A, group B) that can be further divided by gender (i.e., A-women; A-men; B-women; B-men). The pain perception dependent variables include 3 thermal threshold tests (i.e., detection, pain, tolerance) and pain magnitude estimates for a range of specific temperatures (temp X1, X2, X3, X4, X5). These pain perception variables are collected for both heat and cold stimuli (e.g., heat pain tolerance; cold pain tolerance; heat temp X1; cold temp X1). It should be expected (and indeed it is observed) that the various pain perception variables for a given individual are not independent.
The purpose of the analysis is to look for between group differences based on group membership (group A vs. group B) and sex (women vs men). It is desirable to find interactions between group membership and sex. It is also desirable to find interactions between specific pain perception variables (or types of variables; i.e., "cold stimuli") and group membership and/or sex. I have tried running a series of repeated-measures ANOVAs separately for each of the different groupings of pain perception variables (i.e., heat stimuli threshold tests; cold stimuli threshold tests; heat pain magnitude estimates; cold pain magnitude estimates); however this solution does not feel optimal or adequate. My specific questions are:
1) Is it appropriate to analyze data elicited from related (but not identical) questions/stimuli using repeated measures?
2) Is it appropriate to analyze chunks of the data (e.g., cold pain magnitude estimates; cold stimuli threshold tests) separately?
3) Would a different strategy (such as multilevel analysis, profile analysis, or some type of multivariate repeated measures ANOVA) be more appropriate?
4) General recommendations and/or best practices for analyzing data such as this.
Thank you for your feedback and input
Patrick Welch
note: I already searched the site for related questions (i.e., [ANOVA with non-independent observations](https://stats.stackexchange.com/questions/859/anova-with-non-independent-observations) ; [Parametric techniques for n-related samples](https://stats.stackexchange.com/questions/1324/parametric-techniques-for-n-related-samples)) but believe the current question to be different enough to warrant unique consideration.
| Recommendations - or best practices - for analyzing non-independent data. Specific example relating to pain perception data provided | CC BY-SA 2.5 | null | 2010-08-17T20:00:59.700 | 2010-09-16T06:34:25.647 | 2017-04-13T12:44:52.277 | -1 | 835 | [
"non-independent"
] |
1800 | 2 | null | 1023 | 5 | null | Tests and thousands of sample questions are available on the ARTIST ("Assessment Resource Tools for Improving Statistical Thinking") site, [https://app.gen.umn.edu/artist/tests/index.html](https://app.gen.umn.edu/artist/tests/index.html) . Most are appropriate for an intro stats course.
| null | CC BY-SA 2.5 | null | 2010-08-17T20:09:56.013 | 2010-08-17T20:09:56.013 | null | null | 919 | null |
1801 | 2 | null | 928 | 11 | null | John Tukey strongly and cogently argued for a proportion type of measurement in his book on EDA. One thing that makes proportions special and different from the classical "nominal, ordinal, interval, ratio" taxonomy is that frequently they enjoy an obvious symmetry: A proportion can be thought of as the average of a binary (0/1) indicator variable. Because it should not make any meaningful difference to recode the indicator, the data analysis should remain essentially unchanged when you re-express the proportion as its complement. Specifically, recoding $0\to 1$ and $1\to0$ changes the original proportion $p$ to $1-p$. For example, it should make no difference to talk about 60% of people voting "yes" or 40% voting "no" in a referendum; the two numbers 0.6 and 0.4 represent exactly the same thing. Thus, statistics, tests, decisions, summaries, etc., should give the same results (mutatis mutandis) regardless of which form of expression is used.
Accordingly, Tukey used re-expressions of proportions, and analyses based on those re-expressions, that are (almost) invariant under the conversion $p\longleftrightarrow 1-p$. They are of the form $f(p) \pm f(1-p)$ for various functions $f$. (Taking the minus sign is usually best because it continues to distinguish between $p$ and $1-p$: only their signs differ when re-expressed.) When scaled so that the differential change near $p=1/2$ equals $1$, he called these the "folded" values. Among them are the folded logarithm ("flog"), proportional to $\log(p) - \log(1-p) = \log(p/(1-p)) = \text{logit}(p)$, and the folded root ("froot"), proportional to $\sqrt{p} - \sqrt{1-p}$.
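For a quick numerical feel for these re-expressions, here is a small R sketch (unscaled versions of the formulas above; the exact scaling constant is a convention and is omitted):
```
p <- c(0.05, 0.25, 0.50, 0.75, 0.95)
flog  <- log(p) - log(1 - p)     # folded log ("flog"), i.e. the logit
froot <- sqrt(p) - sqrt(1 - p)   # folded root ("froot")
# Symmetry: re-expressing the complement 1 - p merely flips the sign
all.equal(flog,  -(log(1 - p) - log(p)))
all.equal(froot, -(sqrt(1 - p) - sqrt(p)))
```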
A mathematical exposition of this topic is less convincing than seeing the statistics in action, so I recommend reading chapter 17 of EDA and studying the examples therein.
In sum, then, I am suggesting that the question itself is too limiting and that one should be open to possibilities that go beyond those suggested by the classical taxonomy of variables.
---
### Addendum: Why "Interval" and "Ratio" are not quite correct answers
Stevens created the nominal-ordinal-interval-ratio typology in a cogently argued 1946 paper in Science (New Series, Vol. 103, No. 2684, pp 677-680). The basis for the distinctions is explicitly invariance of the "basic empirical operations" under group actions. His Table 1 describes the relationship between scale and group thus:
$$\begin{array}{ll}
\text{Scale}&\text{Mathematical Group Structure} \\
\hline\text{Nominal}&\text{Permutation Group } x^\prime = f(x);\ f(x) \text{ means any one-to-one substitution} \\
\text{Ordinal}&\text{Isotonic Group } x^\prime = f(x);\ f(x) \text{ means any monotonic increasing function} \\
\text{Interval}&\text{General Linear Group } x^\prime = ax + b \\
\text{Ratio}&\text{Similarity Group } x^\prime = ax
\end{array}$$
(This is a direct quotation, with some columns not shown.)
This must be read with some latitude, because we always have the option of choosing a model that is not exactly correct. (For example, a Normal distribution as a model of variation can be extremely useful and quite accurate even when applied to, say, the heights of people, which can never be negative even though all Normal distributions assign some probability to negative values.) Thus, for instance, data of extremely small proportions could arguably be considered as being of ratio type because the upper limit of $1$ is practically irrelevant. Data of very closely spaced proportions that approach neither of the limits $0$ or $1$ might conceivably be considered of interval type. Limiting the scope of the questions to either of these special cases would (partially) justify some of the other answers in this thread which insist that proportions are on an interval scale or ratio scale. However, when proportions in a dataset can be both large (greater than $1/2$) and small (less than $1/2$) and some of them approach $1$ or $0$, then obviously neither the general linear group nor the similarity group can apply, because they do not preserve the interval $[0,1]$. This is why Stevens' classification is incomplete and why usually it cannot be applied to proportions.
| null | CC BY-SA 4.0 | null | 2010-08-17T20:28:20.243 | 2022-03-10T12:48:17.403 | 2022-03-10T12:48:17.403 | 919 | 919 | null |
1802 | 2 | null | 887 | 7 | null | The second question seems to ask for a prediction interval for one future observation. Such an interval is readily calculated under the assumptions that (a) the future observation is from the same distribution and (b) is independent of the previous sample. When the underlying distribution is Normal, we just have to erect an interval around the difference of two Gaussian random variables. Note that the interval will be wider than suggested by a naive application of a t-test or z-test, because it has to accommodate the variance of the future value, too. This rules out all the answers I have seen posted so far, so I guess I had better quote one explicitly. Hahn & Meeker's formula for the endpoints of this prediction interval is
$$m \pm t \times \sqrt{1 + \frac{1}{n}} \times s$$
where $m$ is the sample mean, $t$ is an appropriate two-sided critical value of Student's $t$ (for $n-1$ df), $s$ is the sample standard deviation, and $n$ is the sample size. Note in particular the factor of $\sqrt{1+1/n}$ instead of $\sqrt{1/n}$. That's a big difference!
This interval is used like any other interval: the requested test simply examines whether the new value lies within the prediction interval. If so, the new value is consistent with the sample; if not, we reject the hypothesis that it was independently drawn from the same distribution as the sample. Generalizations from one future value to $k$ future values or to the mean (or max or min) of $k$ future values, etc., exist.
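As a sketch, the Hahn & Meeker interval above is straightforward to compute in R (the numbers below are made up purely for illustration):
```
pred_interval <- function(x, conf = 0.95) {
  n <- length(x); m <- mean(x); s <- sd(x)
  tcrit <- qt(1 - (1 - conf) / 2, df = n - 1)   # two-sided critical value of Student's t
  m + c(-1, 1) * tcrit * sqrt(1 + 1 / n) * s    # note the factor sqrt(1 + 1/n)
}
set.seed(4)
x <- rnorm(30, mean = 10, sd = 2)       # hypothetical sample
pi95 <- pred_interval(x)
x_new <- 14.5                           # hypothetical future observation
x_new >= pi95[1] & x_new <= pi95[2]     # consistent with the sample?
```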
There is an extensive literature on prediction intervals, especially in a regression context. Any decent regression textbook will have formulas. You could begin with the Wikipedia entry ;-). Hahn & Meeker's Statistical Intervals is still in print and is an accessible read.
The first question has an answer that is so routine nobody seems yet to have given it here (although some of the links provide details). For completeness, then, I will close by remarking that when the population has approximately a Normal distribution, the sample standard deviation is distributed as the [square root of a scaled chi-square variate](https://en.wikipedia.org/wiki/Chi_distribution) of $n-1$ df whose expectation is the population variance. That means (roughly) that we expect the sample sd to be close to the population sd, and the ratio of the two will usually be $1 + O(1/\sqrt{n-1})$. Unlike parallel statements for the sample mean (which invoke the CLT), this statement relies fairly strongly on the assumption of a Normal population.
| null | CC BY-SA 3.0 | null | 2010-08-17T20:58:12.667 | 2017-01-14T18:15:23.173 | 2017-01-14T18:15:23.173 | 919 | 919 | null |
1803 | 2 | null | 1780 | 0 | null | You could try to employ the [theory](http://www-users.cs.umn.edu/~kumar/dmbook/ch6.pdf) and [praxis](http://cran.r-project.org/web/packages/arules/vignettes/arules.pdf) of association analysis or market basket analysis to your problem (just read "items" as "keywords" / "cited reference" and "market basket" as "journal article").
Disclaimer - this is just an idea, I did not do anything like that myself. Just my 2Cents.
| null | CC BY-SA 2.5 | null | 2010-08-17T21:25:25.937 | 2010-08-17T21:25:25.937 | null | null | 573 | null |
1804 | 2 | null | 1268 | 2 | null | Just a thought: you might not need the full SVD for your problem. Let M = U S V* be the SVD of your d by n matrix (i.e., the time series are the columns). To achieve the dimension reduction you'll be using the matrices V and S. You can find them by diagonalizing M* M = V (S*S) V*. However, because you are missing some values, you cannot compute M* M. Nevertheless, you can estimate it. Its entries are sums of products of columns of M. When computing any of the SSPs, ignore pairs involving missing values. Rescale each product to account for the missing values: that is, whenever a SSP involves n-k pairs, rescale it by n/(n-k). This procedure is a "reasonable" estimator of M* M and you can proceed from there. If you want to get fancier, maybe multiple imputation techniques or Matrix Completion will help.
(This can be carried out in many statistical packages by computing a pairwise covariance matrix of the transposed dataset and applying PCA or factor analysis to it.)
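In R, for instance, a rough sketch of that last remark might look like this (a hypothetical matrix `X` with the time series in its columns and NAs marking the missing values):
```
set.seed(5)
X <- matrix(rnorm(50 * 8), ncol = 8)            # hypothetical d x n data, series in columns
X[sample(length(X), 40)] <- NA                  # knock out some values at random
C <- cov(X, use = "pairwise.complete.obs")      # pairwise-complete estimate of the n x n matrix
e <- eigen(C)                                   # eigenvectors/values play the role of V and S^2
e$values                                        # use the leading components for dimension reduction
```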
| null | CC BY-SA 2.5 | null | 2010-08-17T22:49:17.710 | 2010-08-17T22:49:17.710 | null | null | 919 | null |
1805 | 1 | 1806 | null | 35 | 77482 | I was taught to only apply Fisher's Exact Test in contingency tables that were 2x2.
Questions:
- Did Fisher himself ever envision this test to be used in tables larger than 2x2 (I am aware of the tale of him devising the test while trying to guess whether an old woman could tell if milk was added to tea or tea was added to milk)
- Stata allows me to use Fisher's exact test to any contingency table. Is this valid?
- Is it preferable to use FET when expected cell counts in a contingency table are < 5?
| Fisher's Exact Test in contingency tables larger than 2x2 | CC BY-SA 3.0 | null | 2010-08-17T23:42:12.133 | 2022-02-13T15:25:32.843 | 2013-07-21T18:24:53.760 | 7290 | 561 | [
"spss",
"stata",
"contingency-tables",
"fishers-exact-test"
] |
1806 | 2 | null | 1805 | 23 | null | The only problem with applying Fisher's exact test to tables larger than 2x2 is that the calculations become much more difficult to do. The 2x2 version is the only one which is even feasible by hand, and so I doubt that Fisher ever imagined the test in larger tables because the computations would have been beyond anything he would have envisaged.
Nevertheless, the test can be applied to any mxn table and some software including Stata and SPSS provide the facility. Even so, the calculation is often approximated using a Monte Carlo approach.
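In R, for example, `fisher.test()` accepts larger tables and offers a simulated p-value for cases where the exact computation is too heavy (hypothetical 3×4 table below):
```
tab <- matrix(c(8, 2, 1, 3,
                1, 7, 2, 2,
                2, 1, 9, 1), nrow = 3, byrow = TRUE)
fisher.test(tab)                                    # exact computation
fisher.test(tab, simulate.p.value = TRUE, B = 1e5)  # Monte Carlo approximation
```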
Yes, if the expected cell counts are small, it is better to use an exact test as the chi-squared test is no longer a good approximation in such cases.
| null | CC BY-SA 2.5 | null | 2010-08-18T00:10:56.507 | 2010-08-18T00:10:56.507 | null | null | 159 | null |
1807 | 1 | null | null | 28 | 15459 | Student's $t$-test requires the sample standard deviation $s$. However, how do I compute $s$ when only the sample size and sample average are known?
For example, if sample size is $49$ and sample average is $112$, I will then attempt to create a list of $49$ identical samples with values of $112$ each. Expectedly, the sample standard deviation is $0$. This will create a divide-by-zero problem in the $t$ test.
ADDITIONAL DATA:
The average income of ACME North Factory workers is $\$200$. It is reported that a random sample of $49$ workers in ACME South Factory had an annual income of $\$112$. Is this difference statistically significant?
Am I correct in saying that the population mean is $\$200$?
| How to perform Student's t-test when only the sample size, sample average, and population average are known? | CC BY-SA 2.5 | null | 2010-08-18T01:39:31.013 | 2015-05-16T19:36:06.987 | 2015-05-15T11:49:58.767 | 35989 | 850 | [
"t-test",
"standard-deviation",
"small-sample"
] |
1810 | 2 | null | 1807 | 2 | null | I presume you are referring to a one sample t test. Its goal is to compare the mean of your sample with a hypothetical mean. It then computes (assuming your population is Gaussian) a P value that answers this question: If the population mean really was the hypothetical value, how unlikely would it be to draw a sample whose mean is as far from that value (or further) than you observed? Of course, the answer to that question depends on sample size. But it also depends on variability. If your data have a huge amount of scatter, they are consistent with a broad range of population means. If your data are really tight, they are consistent with a smaller range of population means.
| null | CC BY-SA 2.5 | null | 2010-08-18T02:11:10.360 | 2010-08-18T06:34:34.960 | 2010-08-18T06:34:34.960 | 25 | 25 | null |
1811 | 2 | null | 1807 | 13 | null | This does look to be a slightly contrived question. 49 is an exact square of 7. The value of a t-distribution with 48 DoF for a two-sided test of p<0.05 is very nearly 2 (2.01).
We reject the null hypothesis of equality of means if |sample_mean - popn_mean| > 2*StdError, i.e. 200-112 > 2*SE so SE < 44, i.e. SD < 7*44 = 308.
It would be impossible to get a normal distribution with a mean of 112 with a standard deviation of 308 (or more) without negative wages.
Given wages are bounded below, they are likely to be skew, so assuming a log-normal distribution would be more appropriate, but it would still require highly variable wages to avoid a p<0.05 on a t-test.
| null | CC BY-SA 2.5 | null | 2010-08-18T05:40:49.263 | 2010-08-18T05:40:49.263 | null | null | 521 | null |
1812 | 1 | 2043 | null | 8 | 2261 | One of the most important issues in using factor analysis is its interpretation. Factor analysis often uses factor rotation to enhance its interpretation. After a satisfactory rotation, the rotated factor loading matrix L' will have the same ability to represent the correlation matrix and it can be used as the factor loading matrix, instead of the unrotated matrix L.
The purpose of rotation is to make the rotated factor loading matrix have some desirable properties. One of the methods used is to rotate the factor loading matrix such that the rotated matrix will have a simple structure.
L. L. Thurstone introduced the Principle of Simple Structure, as a general guide for factor rotation:
## Simple Structure Criteria:
- Each row of the factor matrix should contain at least one zero
- If there are m common factors, each column of the factor matrix should have at least m zeros
- For every pair of columns in the factor matrix, there should be several variables for which entries approach zero in the one column but not in the other
- For every pair of columns in the factor matrix, a large proportion of the variables should have entries approaching zero in both columns when there are four or more factors
- For every pair of columns in the factor matrix, there should be only a small number of variables with nonzero entries in both columns
The ideal simple structure is such that:
- each item has a high, or meaningful, loading on one factor only and
- each factor has high, or meaningful, loadings for only some of the items.
The problem is that, when trying several combinations of rotation methods along with the parameters each one accepts (especially for the oblique ones), the number of candidate matrices grows and it becomes very difficult to see which one best meets the above criteria.
When I first faced that problem I realized that I was unable to select the best match by merely 'looking' at the matrices, and that I needed an algorithm to help me decide. Under the stress of project deadlines, the most I could do was write the following MATLAB code, which accepts one rotation matrix at a time and returns (under some assumptions) whether each criterion is met or not.
A new version (if I ever get around to upgrading it) would accept a 3-d matrix (a set of 2-d matrices) as an argument, and the algorithm would return the one that best fits the above criteria.
How would you extract an algorithm out of those criteria? I am just asking for your opinions (I also think that there's been criticism over the usefulness of the method by itself) and perhaps better approaches to the rotation matrix selection problem.
Also, I would like to know what software you prefer for performing FA. If it's R, what package do you use? (I must admit that if I had to do FA, I would turn to SPSS again.) If someone wants to provide some code, I would prefer R or MATLAB.
P.S. The above [Simple Structure Criteria formulation](http://books.google.gr/books?id=5Jyaa2LQWbQC&pg=PA132&lpg=PA132&dq=%22The+criteria+of+simple+structure+was+originally+proposed%22&source=bl&ots=OXsCeU0Nzi&sig=qfiLMUyXsgwPunWBgNdSEeBNm34&hl=el&ei=huZeS5nkApTE4gaC67HsCw&sa=X&oi=book_result&ct=result&resnum=1&ved=0CAcQ6AEwAA#v=onepage&q=%22The%20criteria%20of%20simple%20structure%20was%20originally%20proposed%22&f=false) can be found in the book "Making Sense of Factor Analysis" by PETT, M., LACKEY, N., SULLIVAN, J.
P.S.2 (from the same book): "A test of successful factor analysis is the extent to which it can reproduce the original corr matrix. If you also used oblique solutions, among all select the one that generated the greatest number of highest and lowest factor loadings."
This sounds like another constraint that the algorithm could use.
P.S.3 This question has also been asked [here](https://stackoverflow.com/questions/2139703/fa-choosing-rotation-matrix-based-on-simple-structure-criteria). However, I think it fits better on this site.
```
function [] = simple_structure_criteria (my_pattern_table)
%Simple Structure Criteria
%Making Sense of Factor Analysis, page 132
disp(' ');
disp('Simple Structure Criteria (Thurstone):');
disp('1. Each row of the factor matrix should contain at least one zero');
disp( '2. If there are m common factors, each column of the factor matrix should have at least m zeros');
disp( '3. For every pair of columns in the factor matrix, there should be several variables for which entries approach zero in the one column but not in the other');
disp( '4. For every pair of columns in the factor matrix, a large proportion of the variables should have entries approaching zero in both columns when there are four or more factors');
disp( '5. For every pair of columns in the factor matrix, there should be only a small number of variables with nonzero entries in both columns');
disp(' ');
disp( '(additional by Pedhazur and Schmelkin) The ideal simple structure is such that:');
disp( '6. Each item has a high, or meaningful, loading on one factor only and');
disp( '7. Each factor have high, or meaningful, loadings for only some of the items.');
disp('')
disp('Start checking...')
%test matrix
%ct=[76,78,16,7;19,29,10,13;2,6,7,8];
%test it by giving: simple_structure_criteria (ct)
ct=abs(my_pattern_table);
items=size(ct,1);
factors=size(ct,2);
my_zero = 0.1;
approach_zero = 0.2;
several = floor(items / 3);
small_number = ceil(items / 4);
large_proportion = 0.30;
meaningful = 0.4;
some_bottom = 2;
some_top = floor(items / 2);
% CRITERION 1
disp(' ');
disp('CRITERION 1');
for i = 1 : 1 : items
count = 0;
for j = 1 : 1 : factors
if (ct(i,j) < my_zero)
count = count + 1;
break
end
end
if (count == 0)
disp(['Criterion 1 is NOT MET for item ' num2str(i)])
end
end
% CRITERION 2
disp(' ');
disp('CRITERION 2');
for j = 1 : 1 : factors
m=0;
for i = 1 : 1 : items
if (ct(i,j) < my_zero)
m = m + 1;
end
end
if (m < factors)
disp(['Criterion 2 is NOT MET for factor ' num2str(j) '. m = ' num2str(m)]);
end
end
% CRITERION 3
disp(' ');
disp('CRITERION 3');
for c1 = 1 : 1 : factors - 1
for c2 = c1 + 1 : 1 : factors
test_several = 0;
for i = 1 : 1 : items
if ( (ct(i,c1)>my_zero && ct(i,c2)<my_zero) || (ct(i,c1)<my_zero && ct(i,c2)>my_zero) ) % approach zero in one but not in the other
test_several = test_several + 1;
end
end
disp(['several = ' num2str(test_several) ' for factors ' num2str(c1) ' and ' num2str(c2)]);
if (test_several < several)
disp(['Criterion 3 is NOT MET for factors ' num2str(c1) ' and ' num2str(c2)]);
end
end
end
% CRITERION 4
disp(' ');
disp('CRITERION 4');
if (factors > 3)
for c1 = 1 : 1 : factors - 1
for c2 = c1 + 1 : 1 : factors
test_several = 0;
for i = 1 : 1 : items
if (ct(i,c1)<approach_zero && ct(i,c2)<approach_zero) % approach zero in both
test_several = test_several + 1;
end
end
disp(['large proportion = ' num2str((test_several / items)*100) '% for factors ' num2str(c1) ' and ' num2str(c2)]);
if ((test_several / items) < large_proportion)
pr = sprintf('%4.2g', (test_several / items) * 100 );
disp(['Criterion 4 is NOT MET for factors ' num2str(c1) ' and ' num2str(c2) '. Proportion is ' pr '%']);
end
end
end
end
% CRITERION 5
disp(' ');
disp('CRITERION 5');
for c1 = 1 : 1 : factors - 1
for c2 = c1 + 1 : 1 : factors
test_number = 0;
for i = 1 : 1 : items
if (ct(i,c1)>approach_zero && ct(i,c2)>approach_zero) % nonzero (above the approach_zero threshold) in both columns
test_number = test_number + 1;
end
end
disp(['small number = ' num2str(test_number) ' for factors ' num2str(c1) ' and ' num2str(c2)]);
if (test_number > small_number)
disp(['Criterion 5 is NOT MET for factors ' num2str(c1) ' and ' num2str(c2)]);
end
end
end
% CRITERION 6
disp(' ');
disp('CRITERION 6');
for i = 1 : 1 : items
count = 0;
for j = 1 : 1 : factors
if (ct(i,j) > meaningful)
count = count + 1;
end
end
if (count == 0 || count > 1)
disp(['Criterion 6 is NOT MET for item ' num2str(i)])
end
end
% CRITERION 7
disp(' ');
disp('CRITERION 7');
for j = 1 : 1 : factors
m=0;
for i = 1 : 1 : items
if (ct(i,j) > meaningful)
m = m + 1;
end
end
disp(['some items = ' num2str(m) ' for factor ' num2str(j)]);
if (m < some_bottom || m > some_top)
disp(['Criterion 7 is NOT MET for factor ' num2str(j)]);
end
end
disp('')
disp('Checking completed.')
return
```
| FA: Choosing Rotation matrix, based on "Simple Structure Criteria" | CC BY-SA 2.5 | null | 2010-08-18T07:36:59.733 | 2011-01-22T11:35:40.123 | 2017-05-23T12:39:26.143 | -1 | 339 | [
"r",
"algorithms",
"factor-analysis",
"psychometrics",
"matlab"
] |
1813 | 1 | 1816 | null | 10 | 1043 | I have two implementations of a genetic algorithm which are supposed to behave equivalently. However due to technical restrictions which cannot be resolved their output is not exactly the same, given the same input.
Still I'd like to show that there is no significant performance difference.
I have 20 runs with the same configuration for each of the two algorithms, using different initial random number seeds. For each run and generation the minimum error fitness of the best individual in the population was recorded. The algorithm employs an elite-preserving mechanism, so the fitness of the best individual is monotonically decreasing. A run consists of 1000 generations, so I have 1000 values per run. I cannot get more data, as the calculations are very expensive.
Which test should I employ? An easy way would probably be to only compare the error in the final generations (again, which test would I use here)? But one might also think about comparing the convergence behaviour in general.
| Comparing two genetic algorithms | CC BY-SA 2.5 | null | 2010-08-18T08:29:52.133 | 2010-08-18T15:25:32.860 | 2010-08-18T15:25:32.860 | 986 | 986 | [
"hypothesis-testing",
"genetic-algorithms"
] |
1814 | 2 | null | 1812 | 2 | null | Great Question. This is not really an answer, but just a few thoughts.
In most of the applications where I have used factor analysis, permitting correlated factors makes more theoretical sense. I tend to rely on the promax rotation method. I used to do this in SPSS and now I use the `factanal` function in R.
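A minimal sketch of what that looks like with `factanal`, using simulated item responses and promax as the oblique rotation:
```
set.seed(6)
f1 <- rnorm(300); f2 <- rnorm(300)                 # two latent factors
items <- data.frame(a = f1 + rnorm(300, sd = 0.5), b = f1 + rnorm(300, sd = 0.5),
                    c = f1 + rnorm(300, sd = 0.5), d = f2 + rnorm(300, sd = 0.5),
                    e = f2 + rnorm(300, sd = 0.5), f = f2 + rnorm(300, sd = 0.5))
fit <- factanal(items, factors = 2, rotation = "promax", scores = "regression")
print(fit, cutoff = 0.3, sort = TRUE)   # hide small loadings to eyeball the simple structure
```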
| null | CC BY-SA 2.5 | null | 2010-08-18T08:37:21.273 | 2010-08-18T08:37:21.273 | null | null | 183 | null |
1815 | 1 | null | null | 63 | 34457 | What are the panel's recommendations for books on design of experiments?
Ideally, books should be still in print or available electronically, although that may not always be feasible. If you feel moved to add a few words on what's so good about the book that would be great too.
Also, aim for one book per answer so that voting can help sort the suggestions.
(Community Wiki, please edit the question if you can make it better!)
| Recommended books on experiment design? | CC BY-SA 2.5 | null | 2010-08-18T08:54:23.650 | 2023-01-27T15:52:18.737 | null | null | 174 | [
"references",
"experiment-design"
] |
1816 | 2 | null | 1813 | 9 | null | Testing stochastic algorithms can be rather tricky!
I work in systems biology and there are many stochastic simulators available to use to simulate a model. Testing these simulators is tricky since any two realizations from a single model will be typically different.
In the [dsmts](http://code.google.com/p/dsmts/) we have calculated (analytically) the expected value and variance of a particular model. We then perform a hypothesis test to determine if a simulator differs from the truth. Section 3 of the [userguide](http://code.google.com/p/dsmts/downloads/detail?name=dsmts-userguide31.pdf) gives the details. Essentially we do a t-test for the mean values and a chi-squared test for variances.
In your case, you are comparing two simulators, so you should just use a two-sample t-test instead.
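For example, if you collect the final-generation errors from the 20 runs of each implementation, the comparison in R is a one-liner (hypothetical numbers below):
```
set.seed(7)
err_A <- rnorm(20, mean = 0.050, sd = 0.010)   # hypothetical final errors, implementation A
err_B <- rnorm(20, mean = 0.052, sd = 0.012)   # hypothetical final errors, implementation B
t.test(err_A, err_B)    # Welch two-sample t-test comparing the means
var.test(err_A, err_B)  # F test comparing the two variances
```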
| null | CC BY-SA 2.5 | null | 2010-08-18T08:58:46.820 | 2010-08-18T12:23:43.077 | 2010-08-18T12:23:43.077 | 8 | 8 | null |
1817 | 2 | null | 1815 | 13 | null | [Montgomery's Design and Analysis of Experiments](http://rads.stackoverflow.com/amzn/click/0470128666) is a classic and highly regarded text:
If you are interested in experimental design in a particular field (eg. clinical trials) other more specialised texts may be appropriate.
| null | CC BY-SA 2.5 | null | 2010-08-18T10:40:25.813 | 2010-08-18T13:07:11.413 | 2010-08-18T13:07:11.413 | 8 | 521 | null |
1818 | 1 | 1823 | null | 14 | 35536 | I need some help about repeated measurement ANOVA.
We are investigating the effect of some intervention on reducing blood stream infection (BSI) rate in some wards. We plan to capture the BSI rate information at a monthly basis, 12 months without intervention first, then 12 months with intervention.
We are thinking of doing either a time-series analysis or a repeated measures ANOVA. I prefer the latter, because I don't have much idea how to approach the former (extra question: too few time points, right?), but then another problem arises: how many wards do we need to show that there really is a statistically significant effect of the intervention on the BSI rate?
I think I'll do two ANOVAs, one for "before intervention" and one for "during intervention", and I suppose that the "before intervention" ANOVA should not have a significant F-ratio test.
I consider the term "sample size" two-dimensionally, either the number of wards, or the number of repeated measurements.
| How to determine the sample size needed for repeated measurement ANOVA? | CC BY-SA 2.5 | null | 2010-08-18T10:56:21.970 | 2012-05-10T12:06:11.957 | 2010-08-18T11:03:13.940 | 8 | 588 | [
"anova",
"repeated-measures",
"sample-size",
"statistical-power"
] |
1819 | 2 | null | 1815 | 3 | null | If you're interested in pharmaceutical trials, two books I recommend:
- Statistical Issues in Drug Development by Stephen Senn (Amazon link)
- Cross-over Trials in Clinical Research by Stephen Senn (Amazon link)
| null | CC BY-SA 2.5 | null | 2010-08-18T10:56:44.153 | 2010-08-18T10:56:44.153 | null | null | 561 | null |
1821 | 2 | null | 887 | 1 | null | 1) The sample standard deviation stdev(S) is a consistent (though slightly biased) estimate of the population standard deviation; the sample variance, by contrast, is an unbiased estimate of the population variance.
2) Given we have estimated both the population mean and variance we need to take this into account when we evaluate whether a new observation x is a member of this population.
We don't use Z = (x - mean(S))/stdev(S), but rather:
t = (x - mean(S))/(stdev(S)*sqrt(1 + 1/n)), where n is the sample size of the first sample. We then compare t with a t-distribution with n-1 degrees of freedom to obtain a p-value. See here:
[http://en.wikipedia.org/wiki/Prediction_interval#Unknown_mean.2C_unknown_variance](http://en.wikipedia.org/wiki/Prediction_interval#Unknown_mean.2C_unknown_variance)
This accounts for the sample size both in the divisor (sqrt(1 + 1/n)) and in the degrees of freedom of the t-distribution.
| null | CC BY-SA 2.5 | null | 2010-08-18T11:09:28.313 | 2010-08-18T11:09:28.313 | null | null | 521 | null |
1822 | 1 | 1827 | null | 3 | 2928 | How do I test/verify whether I can analyze my panel-like dataset by simply pooling the individual series?
I have a dataset structured as a panel. Now I am wondering whether I can simply pool the individual series and estimate the model via OLS, or whether I have to use another estimation technique.
(Any R hints and references are highly welcomed.)
| Test for Poolability of Individual Data Series | CC BY-SA 4.0 | null | 2010-08-18T11:37:10.547 | 2021-07-20T13:59:20.793 | 2021-07-20T13:59:20.793 | 94889 | 283 | [
"panel-data",
"survey"
] |
1823 | 2 | null | 1818 | 12 | null | How to perform power analysis on repeated measures ANOVA?
[G*Power 3](http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/) is free software that provides a user-friendly GUI interface for performing power calculations.
It supports power calculations for repeated measures ANOVA.
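If you prefer to stay in R, the `pwr` package gives at least a rough approximation (a one-way between-groups ANOVA calculation; it does not model the repeated-measures correlation, which G*Power does), e.g.:
```
# install.packages("pwr")  # if not already installed
library(pwr)
# Wards per condition needed to detect a medium effect (f = 0.25)
# with 80% power at alpha = .05 in a 2-group comparison:
pwr.anova.test(k = 2, f = 0.25, sig.level = 0.05, power = 0.80)
```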
What is the appropriate analysis for your design?
Here are a range of points related to what you have mentioned:
- More time points will give a clearer indication
of how the effect, if any, of your intervention operates over time. Thus, if the improvements decay over time or get greater, more time points will give a clearer sense of these patterns, both on average, and at an individual-level.
- If you have 12 time points or more, I'd look at multilevel modelling, particularly if you are expecting any missing observations. You are unlikely to be interested in whether there is an effect of time. Rather you are likely to be interested in various specific effects (e.g., changes pre and post intervention; perhaps a linear or quadratic improvement effect post-intervention). You could also look at using planned contrasts on top of repeated measures ANOVA. Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence is a good starting point to learn about multilevel modelling of repeated measures data.
- The number of time points over and above pre- and post- won't do much to increase your power to detect the effect of your intervention. More time points will increase your reliability of measurement, and they might ensure that you capture the time period where the effect applies, but probably the bigger issue will be the sample size in the two conditions.
- Assuming you are truly randomly allocating cases to conditions, the populations are by definition equal on the dependent variable, and one could argue that a significance test of baseline differences is meaningless. That said, researchers often still do it, and I suppose it does provide some evidence that random allocation has actually occurred.
- There is a fair amount of debate about the best way to test the effect of an intervention in a pre-post-intervention-control design. A few options include: (a) the condition * time interaction; (b) the effect of condition but just at post intervention; (c) an ANCOVA looking at the effect of condition, controlling for pre, with post as the DV.
| null | CC BY-SA 3.0 | null | 2010-08-18T11:40:44.177 | 2012-05-10T12:06:11.957 | 2012-05-10T12:06:11.957 | 183 | 183 | null |
1824 | 2 | null | 1813 | 4 | null | Maybe you could compare the average difference between two runs of the same algorithm to the average difference between two runs from different algorithms. That doesn't solve the problem of how to measure the difference, but it might be a more tractable problem. And the individual values of the time series would feed into the difference calculation instead of having to be treated as individual data points to be evaluated against each other (I also don't think that the particular difference at the nth step is what you really want to make statements about).
Update
Concerning details - well which features of the time series are you interested in, beyond the final error? I guess you actually got three different questions to solve:
- What constitutes similarity for you, i.e., what do you mean when you say you don't believe the two methods are different?
- How do you quantify it - can be answered after 1, and
- How can you test for significant differences between your two methods?
All I was saying in the first post was that the answer to (1) probably doesn't consider the individual differences at each of the 1000 generations. And that I'd advise coming up with a scalar value for either each time series or at least similarity between time series. Only then you get to the actual statistics question (which I know least about of all three points, but I was advised to use a paired t-test in a similar question I just asked, when having a scalar value per element).
| null | CC BY-SA 2.5 | null | 2010-08-18T12:00:36.353 | 2010-08-18T14:09:05.340 | 2010-08-18T14:09:05.340 | 979 | 979 | null |
1825 | 2 | null | 1799 | 2 | null | If temperature levels X1...X5 are specific degree values, I'm not sure how temperature and stimulus (hot/cold) can be completely crossed. I presume then that "temperature" consists of 5 ordinal categories, ranging from "likely to cause minimal discomfort" to "likely to cause maximum discomfort permissible by my research ethics board", which then permits crossing with hot/cold direction.
If this is the case, you are correct that the 3 response variables might not be expected to be independent, so you might increase your power (relative to 3 separate analyses) if you do a multivariate test. Frankly, I must admit that my first inclination would be to simply do three separate analyses, one for each response variable, but that's because I'm not very knowledgeable with regards to the mechanics of multivariate tests. That said, given the likely dependence between the 3 response variables, significant results across 3 separate ANOVAs might be most reasonably considered as manifestations of this dependence rather than 3 truly independent sets of phenomena.
I am pretty sure that you shouldn't lump all 3 response variables into a single univariate analysis by adding "response variable type" as a predictor variable; this approach could run into trouble if the scales and variances of your response variables are very different. Presumably, a multivariate analysis has features that take such differences into account. (However, I wonder if a mixed effects analysis might also be able to handle such differences? I'm new to mixed effects, but suspect it might...)
| null | CC BY-SA 2.5 | null | 2010-08-18T12:51:41.617 | 2010-08-18T12:51:41.617 | null | null | 364 | null |
1826 | 1 | 1830 | null | 87 | 51875 | How would you describe [cross-validation](http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29) to someone without a data analysis background?
| Cross-Validation in plain english? | CC BY-SA 2.5 | null | 2010-08-18T13:11:19.190 | 2020-12-18T13:43:22.253 | 2020-12-18T13:43:22.253 | 11887 | 5 | [
"cross-validation",
"intuition"
] |
1827 | 2 | null | 1822 | 4 | null | You can fit a hierarchical bayesian (HB) model without pooling and do an ordinary OLS by pooling the data and compare the models in terms of model fit, hold-out predictions etc to evaluate whether pooling outperforms the HB model. The model very briefly will look like so:
Model
$y_i \sim N(X\ \beta_i,\sigma^2\ I)$
$\beta_i \sim N(\bar{\beta},\Sigma)$
Priors
$\bar{\beta} \sim N(\bar{\bar{\beta}},\Sigma_0)$
$\Sigma \sim IW(R,d)$
$\sigma^2 \sim IG(sp,sc)$
While I do not use R, I do know that there are packages that will do the above for you. Someone more knowledgeable about R can perhaps help you out.
| null | CC BY-SA 2.5 | null | 2010-08-18T13:15:34.937 | 2010-08-18T13:25:06.397 | 2010-08-18T13:25:06.397 | null | null | null |
1828 | 2 | null | 1826 | 44 | null | I think that this is best described with [the following picture](http://genome.tugraz.at/proclassify/help/pages/XV.html) (in this case showing k-fold cross-validation):

Cross-validation is a technique used to protect against overfitting in a predictive model, particularly in a case where the amount of data may be limited. In cross-validation, you make a fixed number of folds (or partitions) of the data, run the analysis on each fold, and then average the overall error estimate.
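A bare-bones sketch of the k-fold loop in R (hypothetical data, with a linear model standing in for "the analysis"):
```
set.seed(8)
n <- 100; k <- 5
d <- data.frame(x = rnorm(n)); d$y <- 2 * d$x + rnorm(n)
fold <- sample(rep(1:k, length.out = n))          # assign each case to one of k folds
cv_err <- sapply(1:k, function(i) {
  fit  <- lm(y ~ x, data = d[fold != i, ])        # train on k-1 folds
  pred <- predict(fit, newdata = d[fold == i, ])  # predict the held-out fold
  mean((d$y[fold == i] - pred)^2)                 # error on the held-out fold
})
mean(cv_err)   # error estimate averaged over the k folds
```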
| null | CC BY-SA 2.5 | null | 2010-08-18T13:20:28.593 | 2010-08-18T13:20:28.593 | null | null | 5 | null |
1829 | 1 | 1845 | null | 51 | 32492 | I usually hear about "ordinary least squares". Is that the most widely used algorithm used for linear regression? Are there reasons to use a different one?
| What algorithm is used in linear regression? | CC BY-SA 2.5 | null | 2010-08-18T13:30:31.750 | 2016-06-13T18:50:53.980 | 2016-06-13T18:50:53.980 | 28666 | 988 | [
"regression",
"least-squares",
"algorithms",
"computational-statistics",
"numerics"
] |
1830 | 2 | null | 1826 | 60 | null | Consider the following situation:
>
I want to catch the subway to go to my office. My plan is to take my car, park at the subway and then take the train to go to my office. My goal is to catch the train at 8.15 am every day so that I can reach my office on time. I need to decide the following: (a) the time at which I need to leave from my home and (b) the route I will take to drive to the station.
In the above example, I have two parameters (i.e., time of departure from home and route to take to the station) and I need to choose these parameters such that I reach the station by 8.15 am.
In order to solve the above problem I may try out different sets of 'parameters' (i.e., different combinations of departure times and routes) on Mondays, Wednesdays, and Fridays, to see which combination is the 'best' one. The idea is that once I have identified the best combination I can use it every day so that I achieve my objective.
Problem of Overfitting
The problem with the above approach is that I may overfit, which essentially means that the best combination I identify may in some sense be unique to Mon, Wed and Fri, and that combination may not work for Tue and Thu. Overfitting may happen if, in my search for the best combination of times and routes, I exploit some aspect of the traffic situation on Mon/Wed/Fri which does not occur on Tue and Thu.
One Solution to Overfitting: Cross-Validation
Cross-validation is one solution to overfitting. The idea is that once we have identified our best combination of parameters (in our case time and route) we test the performance of that set of parameters in a different context. Therefore, we may want to test on Tue and Thu as well to ensure that our choices work for those days as well.
Extending the analogy to statistics
In statistics, we have a similar issue. We often use a limited set of data to estimate unknown parameters. If we overfit, then our parameter estimates will work very well for the existing data but not as well when we use them in another context. Thus, cross-validation helps in avoiding the above issue of overfitting by providing some reassurance that the parameter estimates are not unique to the data we used to estimate them.
Of course, cross validation is not perfect. Going back to our example of the subway, it can happen that even after cross-validation, our best choice of parameters may not work one month down the line because of various issues (e.g., construction, traffic volume changes over time etc).
| null | CC BY-SA 3.0 | null | 2010-08-18T13:38:29.843 | 2016-04-21T01:33:54.363 | 2016-04-21T01:33:54.363 | 46761 | null | null |
1831 | 2 | null | 1822 | 3 | null | Srikant is right. The book you want is ["Data Analysis Using Regression and Multilevel/Hierarchical Models"](http://www.stat.columbia.edu/~gelman/arm/) by Gelman and Hill, all [the R code from the book](http://www.stat.columbia.edu/~gelman/arm/software/), and the associated [arm package in R](http://cran.r-project.org/web/packages/arm/).
| null | CC BY-SA 2.5 | null | 2010-08-18T13:42:13.063 | 2010-08-18T13:42:13.063 | null | null | 5 | null |
1832 | 2 | null | 1815 | 19 | null | for me, the best book around is by George Box:
[Statistics for Experimenters: Design, Innovation, and Discovery](http://rads.stackoverflow.com/amzn/click/0471718130)
of course the book by Maxwell and Delaney is also pretty good:
[Designing Experiments and Analyzing Data: A Model Comparison Perspective, Second Edition](http://rads.stackoverflow.com/amzn/click/0805837183)
I personally prefer the first, but they are both top quality. They are a little bit expensive, but you can definitely find a cheap earlier edition for sale.
| null | CC BY-SA 2.5 | null | 2010-08-18T13:50:28.787 | 2010-08-18T13:50:28.787 | null | null | 447 | null |
1833 | 2 | null | 1829 | 6 | null | The wiki link: [Estimation Methods for Linear Regression](http://en.wikipedia.org/wiki/Linear_regression#Estimation_methods) gives a fairly comprehensive list of estimation methods including OLS and the contexts in which alternative estimation methods are used.
| null | CC BY-SA 3.0 | null | 2010-08-18T14:01:06.170 | 2013-03-26T15:47:07.683 | 2013-03-26T15:47:07.683 | 603 | null | null |
1834 | 2 | null | 1829 | 4 | null | It is easy to get confused between definitions and terminology. Both terms are used, sometimes interchangeably. A quick lookup on Wikipedia should help:
- ordinary least squares
- linear regression
Ordinary Least Squares (OLS) is a method used to fit linear regression models. Because of the demonstrable consistency and efficiency (under supplementary assumptions) of the OLS method, it is the dominant approach. See the articles for further leads.
| null | CC BY-SA 2.5 | null | 2010-08-18T14:01:19.650 | 2010-08-18T14:01:19.650 | null | null | 334 | null |
1835 | 2 | null | 1822 | 3 | null | The only further comment I would make is that the approach need not be Bayesian and the model need not be a mixed or random effects model.
In the simplest case if you had two series in x the mean model may be:
y = b01 + I*b02 + b11*x + I*b12*x
Where I indicates a sample from the 2nd series. An omnibus F-test can be used to determine whether the additional parameters are required to maintain distinct series (Ho: b02 = b12 = 0).
[http://en.wikipedia.org/wiki/F_test](http://en.wikipedia.org/wiki/F_test)
This can be extended to more series, but it soon becomes more efficient to use a mixed or random effects model.
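A sketch of that omnibus test in R, for two hypothetical series:
```
set.seed(9)
d <- data.frame(x = rnorm(100), series = gl(2, 50))
d$y <- 1 + 2 * d$x + rnorm(100)              # hypothetical data with no series effect
pooled   <- lm(y ~ x, data = d)              # common intercept and slope
separate <- lm(y ~ x * series, data = d)     # series-specific intercept and slope
anova(pooled, separate)                      # F test of Ho: b02 = b12 = 0
```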
| null | CC BY-SA 2.5 | null | 2010-08-18T14:13:25.737 | 2010-08-18T14:13:25.737 | null | null | 521 | null |
1836 | 2 | null | 1807 | 32 | null | This may surprise many, but to solve this problem you don't necessarily need to estimate s. In fact, you don't need to know anything about the spread of the data (although that would be helpful, of course). For instance, Wall, Boen, and Tweedie in a 2001 article describe how to find a finite confidence interval for the mean of any unimodal distribution based on a single draw.
In the present case, we have some basis to view the sample mean of 112 as a draw from an approximately normal distribution (namely, the sampling distribution of the average of a simple random sample of 49 salaries). We are implicitly assuming there are a fairly large number of factory workers and that their salary distribution is not so skewed or multimodal as to render the central limit theorem inoperable. Then a conservative 90% CI for the mean extends upwards to
$$112 + 5.84\ |112|,$$
clearly covering the true mean of 200. (See Wall et al formula 3.) Given the limited information available and the assumptions made here, we therefore cannot conclude that 112 differs "significantly" from 200.
Reference: ["An Effective Confidence Interval for the Mean With Samples of Size One and Two." The American Statistician, May 2001, Vol. 55, No. 2: pp. 102-105.](http://www.jstor.org/stable/2685995) ([pdf](http://www.biostat.umn.edu/ftp/pub/2000/rr2000-007.pdf))
| null | CC BY-SA 3.0 | null | 2010-08-18T14:33:06.530 | 2015-05-16T19:36:06.987 | 2015-05-16T19:36:06.987 | 919 | 919 | null |
1837 | 2 | null | 1321 | 5 | null | A method that can be very effective--one I have found extremely useful--is to sort the data by time and draw a connected X,Y scatterplot. (That is, successive points are connected by line segments or a spline.) This much is straightforward in almost any statistical plotting package. If the result is too confusing, add graphical indications of directionality. Depending on the density of the points and their pattern of temporal evolution, options include using arrows on the line segments or otherwise applying a graduated color or thickness to the segments to indicate their times. You can even dispense with the connecting lines and just color or size the points to indicate time: that works better when there are many points on the plot. In addition to displaying the bivariate relationship among the data in a conventional form, this method supplies a clear visual indication of temporally local correlations, changes that run counter to the prevailing correlation, etc.
An example of this appears in my reply at [Forecasting time series based on a behavior of other one](https://stats.stackexchange.com/questions/14351/forecasting-time-series-based-on-a-behavior-of-other-one/14366#14366) :

Because the points follow clear paths through this plot, I dropped the connecting lines in favor of letting hue represent time. This plot reveals far more detail about corresponding behaviors than the two original graphs do. The time dependency is qualitatively clear from the changes in hue, but a legend (matching colors to times) would help the reader.
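One way to build such a plot in base R, using a hypothetical pair of series with hue running from early to late:
```
set.seed(10)
x <- cumsum(rnorm(100)); y <- cumsum(rnorm(100))       # hypothetical bivariate time series
cols <- colorRampPalette(c("blue", "red"))(100)        # color encodes time
plot(x, y, type = "n", xlab = "X", ylab = "Y")
segments(head(x, -1), head(y, -1), x[-1], y[-1], col = cols[-1])  # time-ordered connecting segments
points(x, y, pch = 16, col = cols)
legend("topright", pch = 16, col = cols[c(1, 100)],
       legend = c("early", "late"), title = "Time")
```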
| null | CC BY-SA 3.0 | null | 2010-08-18T14:47:31.707 | 2011-08-18T20:30:22.123 | 2017-04-13T12:44:21.613 | -1 | 919 | null |
1838 | 1 | 1843 | null | 11 | 20448 | Greetings,
Currently I'm doing the following in R:
```
require(zoo)
data <- read.csv(file="summary.csv",sep=",",head=TRUE)
cum = zoo(data$dcomp, as.Date(data$date))
data = zoo(data$compressed, as.Date(data$date))
data <- aggregate(data, identity, tail, 1)
cum <- aggregate(cum, identity, sum, 1)
days = seq(start(data), end(data), "day")
data2 = na.locf(merge(data, zoo(,days)))
plot(data2,xlab='',ylab='compressed bytes',col=rgb(0.18,0.34,0.55))
lines(cum,type="h",col=rgb(0,0.5,0))
```
Snip of summary.csv:
```
date,revision,file,lines,nclass,nattr,nrel,bytes,compressed,diff,dcomp
2007-07-25,16,model.xml,96,11,22,5,4035,991,0,0
2007-07-27,17,model.xml,115,16,26,6,4740,1056,53,777
2007-08-09,18,model.xml,106,16,26,7,4966,1136,47,761
2007-08-10,19,model.xml,106,16,26,7,4968,1150,4,202
2007-09-06,81,model.xml,111,16,26,7,5110,1167,13,258
...
```
The last two lines plot the information I need, and the result resembles the following:

The blue line is the entropy in bytes of the artifact I'm interested in. The green lines represent the entropy of the changes.
Now, in this graph, it works well because there isn't a huge difference in scales. But I have other graphs where the green lines become so small that one cannot see them.
The solution I was looking for involves two things:
- To move the green vertical lines to a second graph, just below the first one, with its own y axis, but shared x axis.
- To provide it a logarithmic scale, since I'm more interested in the "magnitude", than in the specific values.
Thanks in advance!
P.S. If someone can also tell me how I could put "minor ticks" on the x scale for the months, I'd appreciate it :-) If these are too many questions for a single post, I can divide them further.
| How do I vertically stack two graphs with the same x scale, but a different y scale in R? | CC BY-SA 2.5 | null | 2010-08-18T14:55:55.133 | 2013-11-16T01:50:25.777 | 2010-08-18T15:39:27.513 | 990 | 990 | [
"r",
"time-series",
"data-visualization",
"entropy"
] |