Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3175 | 1 | 3177 | null | 2 | 3402 | I have data in a 100x1 vector. How can I display its empirical pdf in Matlab? Also, if I want to compare the pdfs of three vectors on the same graph, how can I do that?
Right now I am using the [pdfplot.m](http://www.mathworks.com/matlabcentral/fileexchange/8578-pdfplot) file to plot my empirical pdf; however, when I try to compare the 3 distributions using 'hold on', firstly it does not work and secondly all the distributions come out in the same color. Thanks!
Also, I don't want to plot a cdf or a histogram.
| How can I display empirical pdf of my 100x1 vector data in Matlab? | CC BY-SA 2.5 | null | 2010-09-29T19:12:08.840 | 2010-09-30T07:32:21.337 | null | null | null | [
"distributions",
"multiple-comparisons",
"matlab",
"density-function",
"data-visualization"
] |
3176 | 1 | null | null | 1 | 2473 | Imagine 2 models, 1 for production and 1 for revenue, each with cross-section and time dummies.
Is it permissible to use cross-section dummies in one and time dummies in the other? How do I justify this? The F-test for fixed effects is indicating this, and the dummies I want to omit in each case are not really significant.
| Cross-section and fixed effects models | CC BY-SA 2.5 | null | 2010-09-29T20:47:57.117 | 2011-03-06T20:39:36.190 | 2010-09-29T23:31:12.340 | null | null | [
"regression",
"cross-section"
] |
3177 | 2 | null | 3175 | 1 | null | The problem is that `histc` plots bars instead of, say, lines; the bars are plotted over one another. You should edit `pdfplot` to plot lines instead: look for the `hist` command and alter it.
| null | CC BY-SA 2.5 | null | 2010-09-29T21:50:43.303 | 2010-09-29T21:50:43.303 | null | null | 795 | null |
3178 | 2 | null | 1455 | 1 | null | Since the odds ratio cannot be negative, it is restricted at the lower end, but not at the upper end, and so has a skew distribution.
| null | CC BY-SA 2.5 | null | 2010-09-30T03:05:13.367 | 2010-09-30T03:05:13.367 | null | null | null | null |
3179 | 1 | 3217 | null | 15 | 2673 | I'm working on a project that involves 14 variables and 345,000 observations for housing data (things like year built, square footage, price sold, county of residence, etc). I'm concerned with trying to find good graphical techniques and R libraries that contain nice plotting techniques.
I'm already seeing what will work nicely in ggplot and lattice, and I'm thinking of doing violin plots for some of my numerical variables.
What other packages would people recommend for displaying a large amount of either numerical or factor-typed variables in a clear, polished, and most importantly, succinct manner?
| A good way to show lots of data graphically | CC BY-SA 2.5 | null | 2010-09-30T04:37:50.027 | 2017-07-04T18:55:07.973 | 2010-10-08T16:06:40.877 | 8 | 1118 | [
"r",
"data-visualization",
"large-data",
"exploratory-data-analysis"
] |
3180 | 1 | null | null | 7 | 277 | I was recently consulting a researcher in the following situation.
Context:
- data were collected over four years at around 50 participants per year (participants had a specific diagnosed clinical psychology disorder and were difficult to obtain in large numbers); participants were only measured once (i.e., it's not a longitudinal study)
- all participants had the same disorder
- the study involved participants completing a set of 10 psychological scales
- the 10 scales measured various things like symptoms, theorised precursors, and related psychopathology: the measures tended to intercorrelate around $r = .3$ to $.7$.
- in the first year one of the scales was not included
- the researcher wanted to run structural equation modelling on all 10 scales on the entire sample. Thus, there was an issue that around a quarter of the sample had missing data on one scale.
The researcher wanted to know:
- What is a good strategy for dealing with missing data like this? What tips, references to applied examples, or references to advice regarding best practice would you suggest?
I had a few thoughts, but I was keen to hear your suggestions.
| Dealing with missing data due to variable not being measured over initial period of a study | CC BY-SA 2.5 | null | 2010-09-30T04:52:04.513 | 2010-10-02T19:05:07.083 | 2010-10-01T06:57:58.187 | 930 | 183 | [
"scales",
"missing-data",
"data-imputation"
] |
3181 | 1 | 3184 | null | 28 | 1864 | Think back to when you first started with time series analysis: what tools, R packages, and internet resources do you wish you had known about?
What I'm trying to ask is: where should one start? Specifically, are there any resources that really boil it down for someone who is "new" to time series analysis with R?
| Getting seRious about time series with R | CC BY-SA 2.5 | null | 2010-09-30T05:08:38.800 | 2014-02-25T05:57:19.613 | 2010-09-30T11:05:59.217 | 776 | 776 | [
"r",
"time-series"
] |
3182 | 2 | null | 3179 | 6 | null | I'd recommend taking a look at [GGobi](http://www.ggobi.org/), which also has an R interface, at least for exploratory purposes. It has a number of graphical displays especially useful for dealing with a large number of observations and variables and for linking these together. You might want to start by watching some of the videos under the "Watch a demo" section on the [Learn GGobi](http://www.ggobi.org/docs/) page.
Update
Links to Hadley Wickham's tools for GGobi, as suggested by chl in the comments:
- DescribeDisplay "R package that provides a way to recreate ggobi graphics in R"
- clusterfly "Explore clustering results in high dimensions"
- rggobi "R package that provides an easy interface with GGobi"
| null | CC BY-SA 2.5 | null | 2010-09-30T05:09:55.430 | 2010-09-30T20:05:28.320 | 2010-09-30T20:05:28.320 | 251 | 251 | null |
3183 | 2 | null | 3179 | 3 | null | [Mondrian](http://rosuda.org/Mondrian/Mondrian.html) provides interactive features and handles quite large data sets (it's in Java, though).
[Paraview](http://www.paraview.org/) includes 2D/3D viz. features.
| null | CC BY-SA 2.5 | null | 2010-09-30T06:44:28.390 | 2010-09-30T06:44:28.390 | null | null | 930 | null |
3184 | 2 | null | 3181 | 26 | null | There is a [Time Series Task View](http://cran.r-project.org/web/views/TimeSeries.html) that aims to summarize all the time series packages for R. It highlights some core packages that provide some essential functionality.
I would also recommend the book by [Shumway and Stoffer](http://www.stat.pitt.edu/stoffer/tsa2/) and the associated website, although it is not so good for forecasting.
My blog post on ["Econometrics and R"](http://robjhyndman.com/researchtips/econometrics-and-r/) provides a few other references that are useful.
Then there is my own book on forecasting using R: [Forecasting principles and practice](http://www.otexts.org/fpp).
| null | CC BY-SA 3.0 | null | 2010-09-30T06:55:51.197 | 2014-02-25T05:57:19.613 | 2014-02-25T05:57:19.613 | 159 | 159 | null |
3185 | 2 | null | 3181 | 13 | null | I've found the UseR! series book [Introductory Time Series with R](http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-88697-8) by Cowpertwait and Metcalfe very useful in translating my time series statistics textbooks into R-speak.
| null | CC BY-SA 3.0 | null | 2010-09-30T07:02:54.690 | 2013-06-27T13:17:33.023 | 2013-06-27T13:17:33.023 | 22047 | 1390 | null |
3186 | 2 | null | 3175 | 2 | null | I would suggest using the [ksdensity](http://www.mathworks.com/help/toolbox/stats/ksdensity.html) function. In the following example I compare the pdf of the data in column 1 of matrix 1 with the pdf of the data in column 2 of matrix 2:
```
[f,x] = ksdensity(mat1(:,1));
plot(x,f,'--b');hold
[f,x] = ksdensity(mat2(:,2));
plot(x,f,'--m')
legend('Data 1 pdf', 'Data 2 pdf');
```
| null | CC BY-SA 2.5 | null | 2010-09-30T07:24:51.460 | 2010-09-30T07:32:21.337 | 2010-09-30T07:32:21.337 | 339 | 339 | null |
3187 | 2 | null | 3181 | 6 | null | For ecologists, [Tree diversity analysis](http://www.worldagroforestry.org/units/library/books/PDFs/Kindt%20b2005.pdf) can be a healthy first step in the right direction. The book is free, it comes with an R package ([BiodiversityR](http://cran.r-project.org/web/packages/BiodiversityR/index.html)) and gives you a taste of other eco-packages (like vegan).
| null | CC BY-SA 2.5 | null | 2010-09-30T07:36:56.130 | 2011-04-01T04:38:17.623 | 2011-04-01T04:38:17.623 | 1381 | 144 | null |
3188 | 2 | null | 3179 | 6 | null | I feel you are actually asking two questions: 1) what types of visualizations to use and 2) what R package can produce them.
As for what type of graph to use, there are many options, and the choice depends on your needs (e.g., the types of variables - numeric, factor, geographic, etc. - and the type of connections you are interested in displaying):
- If you have many numeric variables, you might want to use a scatter plot matrix (have a look here)
- If you have many factor variables, you might want to use a scatter plot matrix for factors (have a look here)
- You could also go with doing some Parallel coordinates there are several ways to do it in R.
- For a wider range of graphical facilities in R, have a look at the graphics task view.
Now regarding how to do it: one problem with many data points is the time it takes to create the plot. ggplot2, iplots, and ggobi are not very good with too many data points (at least in my experience). In that case you might want to focus on R's base graphics facilities, or sample your data and use all the other tools on the sample. Or you can hope that the people developing iplots extreme (or [Acinonyx](http://www.rforge.net/Acinonyx/)) reach an advanced release stage.
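For instance, a minimal sketch of the "sample, then plot" idea (the data frame `dat` and its column names here are hypothetical placeholders for the housing data, not taken from the question):
```
set.seed(1)
samp <- dat[sample(nrow(dat), 5000), ]           # random subsample to keep plotting fast
pairs(samp[, c("price", "sqft", "year_built")],  # scatterplot matrix of a few numeric variables
      pch = ".", col = rgb(0, 0, 0, 0.3))        # tiny, semi-transparent points reduce overplotting
```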
| null | CC BY-SA 3.0 | null | 2010-09-30T08:08:11.903 | 2017-07-04T18:55:07.973 | 2017-07-04T18:55:07.973 | 11887 | 253 | null |
3189 | 5 | null | null | 0 | null | Usually, forecasting is applied to time series data where future values of a series are predicted based on past observations, possibly leveraging predictors. It is contrasted with [prediction](/questions/tagged/prediction) in Cressie & Wikle Statistics for Spatio-Temporal Data, p. 17:
>
Uncertainty in data, processes or parameters means that there will be uncertainty in conclusions. Statisticians call this drawing of conclusions in the presence of uncertainty, statistical inference (or just inference); in this book, inferences will be either estimation of fixed but unknown parameters, or prediction of unknown random quantities. (Notice that "forecasting," namely concluding something about the future, is a special case of "prediction".)
[Resources/books for project on forecasting models](https://stats.stackexchange.com/q/559908/1352) is our canonical thread for references and textbooks on forecasting.
| null | CC BY-SA 4.0 | null | 2010-09-30T08:09:08.867 | 2022-11-23T09:40:48.703 | 2022-11-23T09:40:48.703 | 1352 | 159 | null |
3190 | 4 | null | null | 0 | null | Prediction of future events. It is a special case of [prediction], in the context of [time-series]. | null | CC BY-SA 3.0 | null | 2010-09-30T08:09:08.867 | 2017-05-24T14:47:09.810 | 2017-05-24T14:47:09.810 | 28666 | 159 | null |
3191 | 2 | null | 2910 | 81 | null | I am compiling a quick series of guidelines I found on [SO](http://www.stackoverflow.com) (as suggested by @Shane), [Biostar](http://biostar.stackexchange.com/) (hereafter, BS), and this SE. I tried my best to acknowledge ownership for each item, and to select first or highly upvoted answer. I also added things of my own, and flagged items that are specific to the [R] environment.
Data management
- Create a project structure for keeping all things at the right place (data, code, figures, etc., giovanni /BS)
- Never modify raw data files (ideally, they should be read-only), copy/rename to new ones when making transformations, cleaning, etc.
- Check data consistency (whuber /SE)
- Manage script dependencies and data flow with a build automation tool, like GNU make (Karl Broman/Zachary Jones)
Coding
- organize source code in logical units or building blocks (Josh Reich/hadley/ars /SO; giovanni/Khader Shameer /BS)
- separate source code from editing stuff, especially for large project -- partly overlapping with previous item and reporting
- Document everything, with e.g. [R]oxygen (Shane /SO) or consistent self-annotation in the source file -- a good discussion on Medstats, Documenting analyses and data edits Options
- [R] Custom functions can be put in a dedicated file (that can be sourced when necessary), in a new environment (so as to avoid populating the top-level namespace, Brendan OConnor /SO), or a package (Dirk Eddelbuettel/Shane /SO)
Analysis
- Don't forget to set/record the seed you used when calling RNG or stochastic algorithms (e.g. k-means)
- For Monte Carlo studies, it may be interesting to store specs/parameters in a separate file (sumatra may be a good candidate, giovanni /BS)
- Don't limit yourself to one plot per variable, use multivariate (Trellis) displays and interactive visualization tools (e.g. GGobi)
Versioning
- Use some kind of revision control for easy tracking/export, e.g. Git (Sharpie/VonC/JD Long /SO) -- this follows from nice questions asked by @Jeromy and @Tal
- Backup everything, on a regular basis (Sharpie/JD Long /SO)
- Keep a log of your ideas, or rely on an issue tracker, like ditz (giovanni /BS) -- partly redundant with the previous item since it is available in Git
Editing/Reporting
- [R] Sweave (Matt Parker /SO) or the more up-to-date knitr
- [R] Brew (Shane /SO)
- [R] R2HTML or ascii
As a side note, Hadley Wickham offers a comprehensive overview of [R project management](http://github.com/hadley/devtools/wiki), including reproducible exemplification and a unified philosophy of data.
Finally, in his R-oriented [Workflow of statistical data analysis](http://www.kirchkamp.de/oekonometrie/pdf/wf-screen2.pdf) Oliver Kirchkamp offers a very detailed overview of why adopting and obeying a specific workflow will help statisticians collaborate with each other, while ensuring data integrity and reproducibility of results. It further includes some discussion of using a weaving and version control system. Stata users might find J. Scott Long's [The Workflow of Data Analysis Using Stata](http://www.stata.com/bookstore/workflow-data-analysis-stata/) useful too.
| null | CC BY-SA 3.0 | null | 2010-09-30T10:44:48.180 | 2014-04-09T09:40:19.067 | 2017-05-23T12:39:26.203 | -1 | 930 | null |
3192 | 2 | null | 3156 | 1 | null | Reporting a CI around a mean is not reporting the distribution of values, only an estimate of how well you captured that mean value. It will always get smaller as n goes up. It's NOT what you want because you want to see how well a point fits into a distribution.
With your fairly large N's you might be able to do a nice plot of the distributions, all together on one plot, overlaid in different colours, and then place your point estimate. You may be able to see which group it seems most likely to come from. Then estimate the z-score of the point value under each distribution. Whichever is lowest is the one I'd be most inclined to pick, but this would depend a great deal on how the distributions look. For example, I'm guessing dist. 3 is going to be the flattest, and so the point is going to have a very low z-score there.
Plot it out and see what it looks like. (after dividing everything by 1e8 :) )
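A rough R sketch of that idea (the vector names `d1`, `d2`, `d3` and the point estimate `pt` are placeholders, not from the question):
```
# Overlay the three empirical densities and mark the point estimate
plot(density(d1), main = "Overlaid distributions", xlab = "value")
lines(density(d2), col = "red")
lines(density(d3), col = "blue")
abline(v = pt, lty = 2)

# z-score of the point under each distribution
sapply(list(d1, d2, d3), function(d) (pt - mean(d)) / sd(d))
```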
| null | CC BY-SA 2.5 | null | 2010-09-30T10:58:23.187 | 2010-09-30T10:58:23.187 | null | null | 601 | null |
3193 | 1 | 3205 | null | 7 | 235 | Hello data analyst community. I have the following problem:
Given a set of n units and a timeline in days, a unit may be active on a given day to a certain degree (in the range 0.0 to 1.0). A desirable outcome is that if a unit is active, it is active for a series of consecutive days (or with at most a one-day break).
What I have, of course, is the opposite :). Now I want to measure, or even better visualize, the activity frequencies to "prove" to a visually minded person that not all units behave as desired. The brute-force approach is to draw a line for each unit (along the timeline), colored according to the degree of activity, but since n > 30, the graph is big and colorful and you see nothing at all.
I am afraid that I am searching in the wrong direction. Any ideas, suggestions ?
EDIT:
I think I was not able to explain my goal: I do not want to visualize the activity of a single unit, but to get an idea of the activity frequency of all units involved. In the end, I will have two groups of units and want to see graphically whether one group performed better than the other (better according to the property described above). My apologies for not stating this earlier (thanks to the contributions up to this point, I was able to see what I actually want to know).
| Visualizing activity frequency | CC BY-SA 2.5 | null | 2010-09-30T11:16:25.453 | 2010-10-01T11:59:49.043 | 2010-10-01T11:59:49.043 | 264 | 264 | [
"time-series",
"data-visualization"
] |
3194 | 1 | 3198 | null | 29 | 8963 | How can I test the fairness of a twenty sided die (d20)? Obviously I would be comparing the distribution of values against a uniform distribution. I vaguely remember using a Chi-square test in college. How can I apply this to see if a die is fair?
| How can I test the fairness of a d20? | CC BY-SA 2.5 | null | 2010-09-30T13:04:31.537 | 2018-05-28T23:55:32.270 | 2013-11-04T02:42:31.273 | 805 | 1456 | [
"hypothesis-testing",
"chi-squared-test",
"goodness-of-fit",
"uniform-distribution",
"dice"
] |
3195 | 2 | null | 3171 | 1 | null | You would normally make the assumption of independence of observations in your modelling.
Alternatively, if you expected correlation between observations, it would be good to model this and estimate that correlation. You can't do this here, as you don't know which observations are likely to be correlated.
If you assume independence when some observations are in fact positively correlated, you will underestimate the between-subject variance. This means you are more likely to find "significant differences" than statistical theory would suggest. You can think of it as appearing to have more samples than you actually do, since some are almost repeats.
| null | CC BY-SA 3.0 | null | 2010-09-30T13:14:28.583 | 2015-12-02T03:14:19.143 | 2015-12-02T03:14:19.143 | 22228 | 521 | null |
3196 | 2 | null | 3194 | 10 | null | Do you want to do it by hand, or in Excel?
If you want to do it in [R](http://www.r-project.org/), you can do it this way:
Step 1: roll your die (let's say) 100 times.
Step 2: count how many times you got each of your numbers
Step 3: put them into R like this (write the number of times you got each die roll, instead of the numbers I wrote):
```
x <- as.table(c(1,2,3,4,5,6,7,80,9,10,11,12,13,14,15,16,17,18,19,20))
```
Step 4: simply run this command:
```
chisq.test(x)
```
If the P value is low (e.g., below 0.05), your die is not balanced.
This command simulates a balanced die (P= ~.5):
```
chisq.test(table(sample(1:20, 100, T)))
```
And this simulates an unbalanced die:
```
chisq.test(table(c(rep(20,10),sample(1:20, 100, T))))
```
(It gets to be about P = ~.005.)
Now the real question is how many rolls are needed to reach a given level of power for detecting a given bias. If someone wants to go into solving that, they are welcome...
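Here is a rough simulation sketch of that power question; the alternative used (face 20 twice as likely as each other face) and the 0.05 level are purely illustrative assumptions:
```
# Estimate the power of chisq.test to detect one specific unfair d20
power_sim <- function(n_rolls, n_sims = 1000) {
  probs <- c(rep(1, 19), 2) / 21           # face 20 twice as likely as each other face
  rejections <- replicate(n_sims, {
    rolls <- sample(1:20, n_rolls, replace = TRUE, prob = probs)
    counts <- tabulate(rolls, nbins = 20)  # keeps zero counts for faces never seen
    chisq.test(counts, p = rep(0.05, 20))$p.value < 0.05
  })
  mean(rejections)                         # estimated power at alpha = 0.05
}
power_sim(100)    # low power with only 100 rolls
power_sim(1000)   # much higher power with 1000 rolls
```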
Update: There is also a nice article on this topic [here](http://www.textfiles.com/rpg/fairdice.txt).
| null | CC BY-SA 2.5 | null | 2010-09-30T13:34:56.983 | 2010-09-30T14:28:19.940 | 2010-09-30T14:28:19.940 | 253 | 253 | null |
3197 | 2 | null | 3194 | 7 | null | If you are interested in just checking the number of times each number appears, then a Chi-squared test would be suitable. Suppose you roll a die N times. You would expect each value to come up N/20 times. All a chi-square test does is compare what you observed with what you would expect. If the difference is too large, then this indicates a problem.
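For reference, the statistic being computed is
$$X^2 = \sum_{i=1}^{20} \frac{(O_i - E_i)^2}{E_i}, \qquad E_i = N/20,$$
where $O_i$ is the observed count for face $i$; it is compared against a $\chi^2$ distribution with $20 - 1 = 19$ degrees of freedom.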
Other tests
If you are interested in other aspects of randomness - for example, if your die gave the following output:
```
1, 2, 3, 4...., 20,1,2,..
```
Then although this output has the correct number of each individual value, it is clearly not random. In this case, take a look at this [question](https://stats.stackexchange.com/questions/30/testing-random-variate-generation-algorithms/). This probably only makes sense for electronic dice.
Chi-squared test in R
In R, this would be
```
##Roll 200 times
> rolls = sample(1:20, 200, replace=TRUE)
> chisq.test(table(rolls), p = rep(0.05, 20))
Chi-squared test for given probabilities
data: table(rolls)
X-squared = 16.2, df = 19, p-value = 0.6439
## Too many 1's in the sample
> badrolls = cbind(rolls, rep(1, 10))
> chisq.test(table(badrolls), p = rep(0.05, 20))
Chi-squared test for given probabilities
data: table(badrolls)
X-squared = 1848.1, df = 19, p-value < 2.2e-16
```
| null | CC BY-SA 2.5 | null | 2010-09-30T13:40:00.953 | 2010-09-30T14:05:29.147 | 2017-04-13T12:44:26.710 | -1 | 8 | null |
3198 | 2 | null | 3194 | 14 | null | Here's an example with R code. The output is preceded by #'s.
A fair die:
```
rolls <- sample(1:20, 200, replace = T)
table(rolls)
#rolls
# 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
# 7 8 11 9 12 14 9 14 11 7 11 10 13 8 8 5 13 9 10 11
chisq.test(table(rolls), p = rep(0.05, 20))
# Chi-squared test for given probabilities
#
# data: table(rolls)
# X-squared = 11.6, df = 19, p-value = 0.902
```
A biased die - numbers 1 to 10 each have a probability of 0.045; those 11-20 have a probability of 0.055 - 200 throws:
```
rolls <- sample(1:20, 200, replace = T, prob=cbind(rep(0.045,10), rep(0.055,10)))
table(rolls)
#rolls
# 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
# 8 9 7 12 9 7 14 5 10 12 11 13 14 16 6 10 10 7 9 11
chisq.test(table(rolls), p = rep(0.05, 20))
# Chi-squared test for given probabilities
#
# data: table(rolls)
# X-squared = 16.2, df = 19, p-value = 0.6439
```
We have insufficient evidence of bias (p = 0.64).
A biased die, 1000 throws:
```
rolls <- sample(1:20, 1000, replace = T, prob=cbind(rep(0.045,10), rep(0.055,10)))
table(rolls)
#rolls
# 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
# 42 47 34 42 47 45 48 43 42 45 52 50 57 57 60 68 49 67 42 63
chisq.test(table(rolls), p = rep(0.05, 20))
# Chi-squared test for given probabilities
#
# data: table(rolls)
# X-squared = 32.36, df = 19, p-value = 0.02846
```
Now p<0.05 and we are starting to see evidence of bias. You can use similar simulations to estimate the level of bias you can expect to detect and the number of throws needed to detect it with a given p-level.
Wow, 2 other answers even before I finished typing.
| null | CC BY-SA 2.5 | null | 2010-09-30T13:44:01.660 | 2010-09-30T13:51:53.710 | 2010-09-30T13:51:53.710 | 521 | 521 | null |
3199 | 1 | 3210 | null | 8 | 1861 | I have a 2-dimensional matrix, and I want to know whether, e.g., all the higher values are in the upper left corner. I can't just project it into $R^3$ and use a standard clustering algorithm, because I don't want to consider the value as a dimension by itself.
Is there an algorithm I can use for this?
EDIT:
To reformulate it, suppose it was like
| High values ... low values |
| ...
| Low values ... ... |
| ...
| High values .. low values |
I'd want to know that there's a "cluster" of high values in the upper left and lower left.
EDIT 2:
The matrix represents an image. The values of each cell represent the concentration of a substance at that coordinate. I want to know how homogeneous the image is (i.e. how well "mixed together" the substance is). Additionally, I would like to know where the non-homogeneity (if any) is coming from.
| Clustering of a matrix (homogeneity measurement) | CC BY-SA 2.5 | null | 2010-09-30T13:58:08.637 | 2019-08-02T14:17:24.650 | 2019-08-02T14:17:24.650 | 919 | 900 | [
"clustering",
"spatial"
] |
3200 | 1 | 3202 | null | 61 | 48975 | Let's assume you are a social science researcher/econometrician trying to find relevant predictors of demand for a service. You have 2 outcome/dependent variables describing the demand (using the service yes/no, and the number of occasions). You have 10 predictor/independent variables that could theoretically explain the demand (e.g., age, sex, income, price, race, etc.). Running two separate multiple regressions will yield 20 coefficient estimates and their p-values. With enough independent variables in your regressions, you would sooner or later find at least one variable with a statistically significant correlation between the dependent and independent variables.
My question: is it a good idea to correct the p-values for multiple tests if I want to include all independent variables in the regression? Any references to prior work are much appreciated.
| Is adjusting p-values in a multiple regression for multiple comparisons a good idea? | CC BY-SA 3.0 | null | 2010-09-30T14:07:56.490 | 2016-11-03T21:24:08.560 | 2012-06-05T19:10:37.397 | 7290 | 1458 | [
"regression",
"multivariate-analysis",
"predictive-models",
"multiple-regression",
"multiple-comparisons"
] |
3201 | 1 | 3218 | null | 11 | 852 | I have a database containing a large number of experts in a field. For each of those experts I have a variety of attributes/data points like:
- number of years of experience.
- licenses
- num of reviews
- textual content of those reviews
- The 5 star rating on each of those reviews, for a number of factors like speed, quality etc.
- awards, associations, conferences, etc.
I want to provide a rating to these experts, say out of 10, based on their importance. Some of the data points might be missing for some of the experts. Now my question is: how do I come up with such an algorithm? Can anyone point me to some relevant literature?
Also, I am concerned that, as with all ratings/reviews, the numbers might bunch up near some values. For example, most of them might end up getting an 8 or a 5. Is there a way to amplify little differences in some of the attributes into larger differences in the score?
Some other discussions that I figured might be relevant:
- Bayesian rating system with multiple categories for each rating
- How would YOU compute IMDB movie rating?
- Eliciting priors from experts
- What are some of the best ranking algorithms with inputs as up and down votes?
| How do I order or rank a set of experts? | CC BY-SA 2.5 | null | 2010-09-30T14:14:06.623 | 2013-06-28T13:25:14.263 | 2017-04-13T12:44:36.927 | -1 | 1459 | [
"rating",
"valuation"
] |
3202 | 2 | null | 3200 | 53 | null | It seems your question more generally addresses the problem of identifying good predictors. In this case, you should consider using some kind of penalized regression (methods dealing with variable or [feature selection](http://www.google.fr/url?sa=t&source=web&cd=1&ved=0CBgQFjAA&url=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FFeature_selection&ei=w5qkTLHOA8iK4gbr0MH9DA&usg=AFQjCNFGSoQruvgm6fTvAtx8NUU3MDBBcw&sig2=4wcurlN8irzgYgr3chlopA) are relevant too), with e.g. L1, L2 (or a combination thereof, the so-called [elasticnet](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.124.4696&rep=rep1&type=pdf)) penalties (look for related questions on this site, or the R [penalized](http://cran.r-project.org/web/packages/penalized/index.html) and [elasticnet](http://cran.r-project.org/package=elasticnet) package, among others).
Now, about correcting p-values for your regression coefficients (or equivalently your partial correlation coefficients) to protect against over-optimism (e.g. with Bonferroni or, better, step-down methods), it seems this would only be relevant if you are considering one model and seek those predictors that contribute a significant part of explained variance, that is if you don't perform model selection (with stepwise selection, or hierarchical testing). This article may be a good start: [Bonferroni Adjustments in Tests for Regression Coefficients](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.490.7640&rep=rep1&type=pdf). Be aware that such correction won't protect you against multicollinearity issue, which affects the reported p-values.
Given your data, I would recommend using some kind of iterative model selection technique. In R, for instance, the `stepAIC` function (in the MASS package) allows one to perform stepwise model selection by exact AIC. You can also estimate the relative importance of your predictors based on their contribution to $R^2$ using the bootstrap (see the [relaimpo](http://cran.r-project.org/web/packages/relaimpo/index.html) package). I think that reporting an effect size measure or the % of explained variance is more informative than a p-value, especially in a confirmatory model.
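As a rough, purely illustrative sketch of what that could look like (the data frame `dat` and the outcome names `use` and `occasions` are hypothetical placeholders, not taken from the question):
```
library(MASS)       # stepAIC
library(relaimpo)   # relative importance measures

## Stepwise selection by AIC for the binary outcome (logistic regression)
full <- glm(use ~ ., data = dat[, setdiff(names(dat), "occasions")],
            family = binomial)
step_model <- stepAIC(full, direction = "both", trace = FALSE)
summary(step_model)

## Share of R^2 attributable to each predictor for the count outcome
lin <- lm(occasions ~ ., data = dat[, setdiff(names(dat), "use")])
calc.relimp(lin, type = "lmg", rela = TRUE)
## boot.relimp()/booteval.relimp() add bootstrap confidence intervals
```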
It should be noted that stepwise approaches also have their drawbacks (e.g., Wald tests are not adapted to the conditional hypotheses induced by the stepwise procedure), or as indicated by Frank Harrell on [R mailing](http://www.mail-archive.com/[email protected]/msg105680.html), "stepwise variable selection based on AIC has all the problems of stepwise variable selection based on P-values. AIC is just a restatement of the P-Value" (but AIC remains useful if the set of predictors is already defined); a related question -- [Is a variable significant in a linear regression model?](https://stats.stackexchange.com/questions/1016/is-a-variable-significant-in-a-linear-regression-model) -- raised interesting comments ([@Rob](https://stats.stackexchange.com/users/159/rob-hyndman), among others) about the use of AIC for variable selection. I append a couple of references at the end (including papers kindly provided by [@Stephan](https://stats.stackexchange.com/users/1352/stephan-kolassa)); there are also a lot of other references on [P.Mean](http://www.pmean.com/category/MultipleComparisons.html).
Frank Harrell authored a book on [Regression Modeling Strategy](http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/RmS) which includes a lot of discussion and advices around this problem (§4.3, pp. 56-60). He also developed efficient R routines to deal with generalized linear models (See the [Design](http://cran.r-project.org/web/packages/Design/index.html) or [rms](http://cran.r-project.org/web/packages/rms/index.html) packages). So, I think you definitely have to take a look at it (his [handouts](http://biostat.mc.vanderbilt.edu/wiki/pub/Main/RmS/course2.pdf) are available on his homepage).
References
- Whittingham, MJ, Stephens, P, Bradbury, RB, and Freckleton, RP (2006). Why do we still use stepwise modelling in ecology and behaviour? Journal of Animal Ecology, 75, 1182-1189.
- Austin, PC (2008). Bootstrap model selection had similar performance for selecting authentic and noise variables compared to backward variable elimination: a simulation study. Journal of Clinical Epidemiology, 61(10), 1009-1017.
- Austin, PC and Tu, JV (2004). Automated variable selection methods for logistic regression produced unstable models for predicting acute myocardial infarction mortality. Journal of Clinical Epidemiology, 57, 1138–1146.
- Greenland, S (1994). Hierarchical regression for epidemiologic analyses of multiple exposures. Environmental Health Perspectives, 102(Suppl 8), 33–39.
- Greenland, S (2008). Multiple comparisons and association selection in general epidemiology. International Journal of Epidemiology, 37(3), 430-434.
- Beyene, J, Atenafu, EG, Hamid, JS, To, T, and Sung L (2009). Determining relative importance of variables in developing and validating predictive models. BMC Medical Research Methodology, 9, 64.
- Bursac, Z, Gauss, CH, Williams, DK, and Hosmer, DW (2008). Purposeful selection of variables in logistic regression. Source Code for Biology and Medicine, 3, 17.
- Brombin, C, Finos, L, and Salmaso, L (2007). Adjusting stepwise p-values in generalized linear models. International Conference on Multiple Comparison Procedures. -- see step.adj() in the R someMTP package.
- Wiegand, RE (2010). Performance of using multiple stepwise algorithms for variable selection. Statistics in Medicine, 29(15), 1647–1659.
- Moons KG, Donders AR, Steyerberg EW, and Harrell FE (2004). Penalized Maximum Likelihood Estimation to predict binary outcomes. Journal of Clinical Epidemiology, 57(12), 1262–1270.
- Tibshirani, R (1996). Regression shrinkage and selection via the lasso. Journal of The Royal Statistical Society B, 58(1), 267–288.
- Efron, B, Hastie, T, Johnstone, I, and Tibshirani, R (2004). Least Angle Regression. Annals of Statistics, 32(2), 407-499.
- Flom, PL and Cassell, DL (2007). Stopping Stepwise: Why stepwise and similar selection methods are bad, and what you should use. NESUG 2007 Proceedings.
- Shtatland, E.S., Cain, E., and Barton, M.B. (2001). The perils of stepwise logistic regression and how to escape them using information criteria and the Output Delivery System. SUGI 26 Proceedings (pp. 222–226).
| null | CC BY-SA 3.0 | null | 2010-09-30T14:33:11.650 | 2015-10-28T12:48:38.413 | 2017-04-13T12:44:41.967 | -1 | 930 | null |
3203 | 2 | null | 3199 | 2 | null | The goal is just to find a measure that will tell us how mixed up all the pixels are. Given 2 matrices of data with the exact same distribution of values, if the first one's values are ordered or clumped together in spatial groups and the 2nd one's values are well dispersed (high points not near other high points, low points not near other lows), what is the method for evaluating this dispersion/clumpiness?
The matrices will have the exact same variance or standard deviation, so that is not a good method.
One idea is to use the 2D Fourier transform, because a clumpier image intuitively has more of its energy at lower spatial frequencies, but I'm not sure whether that is actually a common or useful practice for this type of evaluation.
| null | CC BY-SA 2.5 | null | 2010-09-30T14:35:49.713 | 2010-09-30T14:35:49.713 | null | null | null | null |
3204 | 2 | null | 3193 | 1 | null | How about creating small timelines for each unit, one on top of the other, sorted in order of most to least active? Think [sparklines](http://en.wikipedia.org/wiki/Sparkline)
You could probably do something like highlight the inactive time as either a shaded portion of the chart, or a colored portion of the unit's timeline.
Since each unit would have a small plot, you'd be able to see an individual's activity at a given time. And sorting them by activity would show how poorly some units are performing, as the plots get flatter (and/or more full of your inactivity indicator) as you go down the graphic.
I don't have any great ideas on what software to create this with. You might be able to do it with Lattice in R.
| null | CC BY-SA 2.5 | null | 2010-09-30T15:03:59.807 | 2010-09-30T15:03:59.807 | null | null | 298 | null |
3205 | 2 | null | 3193 | 5 | null | You might be trying to incorporate too much information into the graphic. The essence of the visualization seems to be the frequency with which units are active for more than one day and, possibly, the times at which those units are active.
Just to generate ideas--because there are many possible solutions--consider a display that provides a clear graphical distinction between the longer-term units and the shorter-term ones and allows assessments of the frequencies with which these occur. One simple solution is a scatterplot where the contiguous activity of a unit between times $x$ and $x + y$ is indicated by a point at $(x,y)$. Modify one salient characteristic of the point, such as its color, to emphasize the distinction between $y \ge 1$ and $y \lt 1$.
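A rough R sketch of how such a plot could be built (assuming, purely for illustration, a units-by-days activity matrix `act` with values in [0, 1]):
```
# Extract runs of consecutive active days (> 0) per unit and plot start day vs. run length
runs <- do.call(rbind, lapply(seq_len(nrow(act)), function(i) {
  r <- rle(act[i, ] > 0)
  starts <- cumsum(c(1, head(r$lengths, -1)))        # first day of each run
  data.frame(start = starts[r$values], length = r$lengths[r$values])
}))
plot(runs$start, runs$length, pch = 16,
     col = ifelse(runs$length > 1, "black", "red"),  # highlight one-day "blips"
     xlab = "start day", ylab = "consecutive active days")
```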
Here is a crude illustration: the first plots units on the vertical axis (200 of them), time on the horizontal (75 days; it needs a grid to show the units of time), and unit activities on a gray scale where darker corresponds to longer continuous activity. The second shows similar data as a scatterplot. The latter could be accompanied by a histogram of frequencies. The former ought to have the units sorted vertically by their average length in service.


| null | CC BY-SA 2.5 | null | 2010-09-30T15:36:57.600 | 2010-09-30T15:36:57.600 | null | null | 919 | null |
3206 | 2 | null | 3200 | 26 | null | To a great degree you can do whatever you like provided you hold out enough data at random to test whatever model you come up with based on the retained data. A 50% split can be a good idea. Yes, you lose some ability to detect relationships, but what you gain is enormous; namely, the ability to replicate your work before it is published. No matter how sophisticated the statistical techniques you bring to bear, you will be shocked at how many "significant" predictors wind up being entirely useless when applied to the confirmation data.
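A minimal sketch of such a 50% split in R (assuming the data sit in a data frame `dat`; the seed is arbitrary):
```
set.seed(123)
idx <- sample(seq_len(nrow(dat)), size = floor(nrow(dat) / 2))
train <- dat[idx, ]      # used for model building / variable selection
holdout <- dat[-idx, ]   # touched only once, to confirm the chosen model
```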
Bear in mind, too, that "relevant" for prediction means more than a low p-value. That, after all, only means it's likely a relationship found in this particular dataset is not due to chance. For prediction it's actually more important to find the variables that exert substantial influence on the predictand (without over-fitting the model); that is, to find the variables that are likely to be "real" and, when varied throughout a reasonable range of values (not just the values that might occur in your sample!), cause the predictand to vary appreciably. When you have hold-out data to confirm a model, you can be more comfortable provisionally retaining marginally "significant" variables that might not have low p-values.
For these reasons (and building on chl's fine answer), although I have found stepwise models, AIC comparisons, and Bonferroni corrections quite useful (especially with hundreds or thousands of possible predictors in play), these should not be the sole determinants of which variables enter your model. Do not lose sight of the guidance afforded by theory, either: variables having strong theoretical justification to be in a model usually should be kept in, even when they are not significant, provided they do not create ill-conditioned equations (e.g., collinearity).
NB: After you have settled on a model and confirmed its usefulness with the hold-out data, it's fine to recombine the retained data with the hold-out data for final estimation. Thus, nothing is lost in terms of the precision with which you can estimate model coefficients.
| null | CC BY-SA 2.5 | null | 2010-09-30T15:53:06.523 | 2010-09-30T15:53:06.523 | null | null | 919 | null |
3207 | 1 | 3256 | null | 4 | 323 | I previously asked this [question](https://stats.stackexchange.com/q/2917/1381) about the validity of my solutions for $SE$ given $n$, $\bar{X_i}$ and summary statistics from post-hoc multiple comparisons such as Fisher's $LSD$ and Tukey's $HSD$, but I would like a general approach that can be applied to other post-hoc tests as well as other statistics. My hope is that this approach would be less prone to arithmetic error and easier to implement than trying to find an analytical solution each time I encounter a new statistic.
I would like to get some estimate of $SE$ from previous studies that I will use in a meta-analysis. In addition, I would like to be able to do this for any statistic.
Is there a general simulation / optimization approach to this problem?
- $\bar{X_1}$, $\bar{X_2}$, some statistic, and n are given
- I am looking for $SE\pm 10\%$
- As in the earlier post, overestimates of SE are OK.
My idea is that the solution would be something like the following examples in which LSD is given:
```
se.est <- function(x1bar, x2bar, lsd_obs, n, se) {
y <- c(rnorm(n, x1bar, se), rnorm(n, x2bar, se))
x <- c(rep(1,n), rep(2,n))
mse <- sum(lm(y~factor(x))$residuals^2)/(2*n-2)
lsd_est <- qt(0.975,n)*sqrt(2*mse/2)
ans <- (lsd_obs - lsd_est)^2
}
SE <- optimize(par = c(x1bar, x2bar, lsd_obs,n), fn = se.est)
```
or
```
for (i in 1:10000){
se[i] <- rgamma(1, 1, 0.01)
y[i] <- c(rnorm(n, x1bar, se), rnorm(n, x2bar, se))
x[i] <- c(rep(1,n), rep(2,n))
mse[i] <- sum(lm(y~factor(x))$residuals^2)/(2*n-2)
lsd_est[i] <- qt(0.975,n)*sqrt(2*mse/2)
}
se <- se[which.min((lsd_est - lsd_obs)^2)]
```
Your suggestions for a quick and effective approach would be appreciated; also, is one of the above preferable or substantially more efficient? If you think that an analytical solution is preferable, please see my previous [post](https://stats.stackexchange.com/q/2917/1381).
| Given sample size, group means, and misc. post-hoc range statistics, can you suggest a good way to estimate variance through simulation? | CC BY-SA 2.5 | null | 2010-09-30T16:02:39.110 | 2010-10-02T00:22:45.550 | 2017-04-13T12:44:24.667 | -1 | 1381 | [
"r",
"standard-deviation",
"variance",
"meta-analysis"
] |
3208 | 2 | null | 3200 | 0 | null | You can do a seemingly unrelated regression and use an F test. Put your data in a form like this:
```
Out1 1 P11 P12 0 0 0
Out2 0 0 0 1 P21 P22
```
so that the predictors for your first outcome take their values when that outcome is the y variable and are 0 otherwise, and vice versa. So your y is a stacked vector of both outcomes. P11 and P12 are the two predictors for the first outcome, and P21 and P22 are the two predictors for the second outcome. If sex, say, is a predictor for both outcomes, its use for predicting outcome 1 should be in a separate variable/column from its use for predicting outcome 2. This lets your regression have different slopes/impacts of sex for each outcome.
In this framework, you can use standard F testing procedures.
| null | CC BY-SA 2.5 | null | 2010-09-30T16:04:05.910 | 2010-09-30T16:04:05.910 | null | null | 401 | null |
3209 | 2 | null | 3201 | 0 | null | Do you think that you could quantify all those attributes?
If yes, I would suggest performing a principal component analysis. In the general case where all the correlations are positive (and if they aren't, you can easily get there using some transformation), the first principal component can be considered a measure of the total importance of the expert, since it is a weighted average of all the attributes (and the weights are the corresponding contributions of the variables - from this perspective, the method itself reveals the importance of each attribute). The score that each expert achieves on the first principal component is what you need to rank them.
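A minimal R sketch of this ranking idea (the data frame `experts` is a hypothetical numeric table of the quantified attributes, one row per expert; rows with missing values are simply dropped here):
```
x <- na.omit(experts)
pc <- prcomp(x, center = TRUE, scale. = TRUE)
scores <- pc$x[, 1]
# If all attributes correlate positively, the first loadings should share one sign;
# flip if needed so that a higher score corresponds to "better" raw attributes
if (mean(pc$rotation[, 1]) < 0) scores <- -scores
ranking <- rownames(x)[order(scores, decreasing = TRUE)]
```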
| null | CC BY-SA 2.5 | null | 2010-09-30T16:06:50.210 | 2010-09-30T16:06:50.210 | null | null | 339 | null |
3210 | 2 | null | 3199 | 6 | null | This question is about spatial correlation. Many methods exist to characterize and quantify this. What they all have in common is comparing values at one location to those at nearby locations. Usually, the reference distribution is some kind of spatial stochastic process where data are generated independently from point to point ("complete spatial randomness"). Some methods only characterize average behavior while others provide more detailed exploratory tools to identify clusters of extreme values.
For three different approaches, check out (1) the literature on geostatistics/kriging/variography; (2) other measures of spatial correlation such as Ripley's K and L functions or [the Getis-Ord $G_i$ statistics](http://edndoc.esri.com/arcobjects/9.2/net/shared/geoprocessing/spatial_statistics_tools/hot_spot_analysis_getis_ord_gi_star_spatial_statistics_.htm) ; and (3) geographically weighted regression. Accessible, non-technical, and sort of correct explanations of all these can be found on ESRI.com . The Wikipedia articles are scanty and of variable quality, unfortunately.
The first two approaches are well supported with R packages such as spatstat and geoRglm. There is also free software for (2), of which some of the best known is [Geoda](http://geodacenter.asu.edu/) and [CrimeStat](http://www.icpsr.umich.edu/CrimeStat/about.html) . I know of no free implementation of GWR (#3), but there are [good resources](http://ncg.nuim.ie/ncg/GWR/) maintained by its inventors.
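If you just want a single number computed directly on the matrix before reaching for those packages, one standard measure of spatial autocorrelation (not named above) is Moran's I; here is a rough sketch with binary rook-neighbour weights:
```
# Moran's I for a numeric matrix m (e.g. the concentration image):
# values near +1 suggest clumping of similar values, near 0 spatial randomness
morans_i <- function(m) {
  z <- m - mean(m)
  num <- 0; wsum <- 0
  nr <- nrow(m); nc <- ncol(m)
  for (i in seq_len(nr)) for (j in seq_len(nc)) {
    for (d in list(c(1, 0), c(-1, 0), c(0, 1), c(0, -1))) {   # rook neighbours
      ii <- i + d[1]; jj <- j + d[2]
      if (ii >= 1 && ii <= nr && jj >= 1 && jj <= nc) {
        num <- num + z[i, j] * z[ii, jj]
        wsum <- wsum + 1
      }
    }
  }
  (length(m) / wsum) * num / sum(z^2)
}
```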
| null | CC BY-SA 2.5 | null | 2010-09-30T16:10:52.277 | 2010-09-30T16:10:52.277 | null | null | 919 | null |
3211 | 2 | null | 3201 | 3 | null | Ultimately this may not be solely a statistical exercise. PCA is a very powerful quantitative method that will allow you to generate a score, or weights on its first few principal components, that you can use for ranking. However, explaining what the principal components are is very challenging. They are quantitative constructs, not dialectic ones. Thus, explaining what they truly mean is sometimes not possible. This is especially true if you have an audience that is not quantitative: they will have no idea what you are talking about and will think of your PCA as some cryptic black box.
Instead, I would simply line up all the relevant variables and use a weighting system based on what one thinks the weighting should be.
I think if you develop this for outsiders, customers, or users, it would be great if you could give the users the flexibility to decide on the weighting themselves.
Some users may value years of experience much more than certification, and vice versa. If you can, leave that decision to them. This way your algorithm is not a black box that they don't understand and aren't comfortable with. You keep it totally transparent and up to them, based on their own relative valuation of what matters.
| null | CC BY-SA 2.5 | null | 2010-09-30T16:15:40.633 | 2010-09-30T18:39:41.760 | 2010-09-30T18:39:41.760 | 1329 | 1329 | null |
3212 | 1 | 3213 | null | 38 | 60744 | I was wondering: what are the differences between the mode, class, and type of R objects?
The type of an R object can be obtained with the typeof() function, the mode with mode(), and the class with class().
Also, are there any other similar functions and concepts that I missed?
Thanks and regards!
| Mode, Class and Type of R objects | CC BY-SA 2.5 | null | 2010-09-30T16:19:47.023 | 2017-02-01T17:42:22.487 | 2010-09-30T16:25:25.557 | 1005 | 1005 | [
"r"
] |
3213 | 2 | null | 3212 | 20 | null | The `class()` is used to define/identify what "type" an object is from the point of view of object-oriented programming in R. So for
```
> x <- 1:3
> class(x)
[1] "integer"
```
any generic function that has an "integer" method will be used.
`typeof()` gives the "type" of object from R's point of view, whilst `mode()` gives the "type" of object from the point of view of Becker, Chambers & Wilks (1988). The latter may be more compatible with other S implementations according to the [R Language Definition](http://stat.ethz.ch/R-manual/R-devel/doc/manual/R-lang.html#Objects) manual.
I'd probably err on the side of using `typeof()` in most cases unless it was for passing R objects to compiled code, where `storage.mode()` will be useful.
This is usefully discussed in the R Language Definition as linked to above.
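A few concrete cases where the three functions disagree (output noted in comments; the `"array"` part of the matrix class assumes R >= 4.0):
```
df <- data.frame(a = 1:3)
typeof(df); mode(df); class(df)   # "list", "list", "data.frame"

m <- matrix(1.5, 2, 2)
typeof(m); mode(m); class(m)      # "double", "numeric", c("matrix", "array")

f <- mean
typeof(f); mode(f); class(f)      # "closure", "function", "function"
```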
| null | CC BY-SA 2.5 | null | 2010-09-30T16:32:06.770 | 2010-09-30T16:32:06.770 | null | null | 1390 | null |
3214 | 1 | null | null | 4 | 269 | In my animal experiments, I do survival studies, which generate Kaplan-Meier survival curves for each group, which I then compare with an appropriate log rank test.
My question is: if I have run a survival experiment with identical variables, say, five times, and the final outcome (in very layman's terms) happens to be slightly different each time, is there a test that I can do to estimate the variability (variance) across my experimental runs?
One major confounding factor is the fact that survival is a continuous variable over time. Therefore, it is difficult to reduce it to a single statistic (that can then be compared with a test). A lot of people use the median survival (expressed in units of time) as a single-statistic surrogate for survival, but it can often be misleading, depending upon the slope of the survival curve, and may not represent the true nature of the survival outcome.
Can someone here help? Please also let me know if further clarifications are needed.
| Inter-experimental variation in survival experiment - how to estimate variability? | CC BY-SA 2.5 | null | 2010-09-30T18:19:59.433 | 2010-10-01T01:56:17.783 | 2010-09-30T18:40:21.410 | 930 | 1464 | [
"estimation",
"survival",
"variability"
] |
3215 | 1 | 3216 | null | 15 | 3763 | Addition, subtraction, multiplication and division of normal random variables are well defined, but what about trigonometric operations?
For instance, let us suppose that I'm trying to find the angle of a triangular wedge (modelled as a right-angle triangle) with the two catheti having dimensions $d_1$ and $d_2$, both described as normal distributions.
Both intuition and simulation tell me that the resulting distribution is normal, with mean $\arctan\left(\frac{\text{mean}(d_1)}{\text{mean}(d_2)}\right)$. But is there a way to compute the distribution of the resulting angle? References on where I'd find the answer?
(For a bit of context, I'm working on statistical tolerance of mechanical parts. My first impulse would be to simply simulate the whole process, check if the end result is reasonably normal, and compute the standard deviation. But I'm wondering if there might be a neater analytical approach.)
| Trigonometric operations on standard deviations | CC BY-SA 3.0 | null | 2010-09-30T19:21:21.627 | 2022-08-06T19:45:51.410 | 2017-06-06T01:00:33.783 | 11887 | 77 | [
"distributions",
"normal-distribution",
"circular-statistics",
"saddlepoint-approximation"
] |
3216 | 2 | null | 3215 | 17 | null | In this interpretation, the triangle is a right triangle of side lengths $X$ and $Y$ distributed binormally with expectations $\mu_x$ and $\mu_y$, standard deviations $\sigma_x$ and $\sigma_y$, and correlation $\rho$. We seek the distribution of $\arctan(Y/X)$. To this end, standardize $X$ and $Y$ so that
$$X = \sigma_x \xi + \mu_x$$ and $$Y = \sigma_y \eta + \mu_y$$
with $\xi$ and $\eta$ standard normal variates with correlation $\rho$. Let $\theta$ be an angle and for convenience write $q = \tan(\theta)$. Then
$$\mathbb{P}[\arctan(Y/X) \le \theta] = \mathbb{P}[Y \le q X]$$
$$=\mathbb{P}[\sigma_y \eta + \mu_y \le q \left( \sigma_x \xi + \mu_x \right)]$$
$$=\mathbb{P}[\sigma_y \eta - q \sigma_x \xi \le q \mu_x - \mu_y]$$
The left-hand side, being a linear combination of Normals, is normal, with mean $0$ and variance $\sigma_y^2 + q^2 \sigma_x^2 - 2 q \rho \sigma_x \sigma_y$.
Differentiating the Normal cdf of these parameters with respect to $\theta$ yields the pdf of the angle. The expression is fairly grisly, but a key part of it is the exponential
$$\exp \left(-\frac{\left(\mu _y \left(\sigma _y+1\right)-\mu _x \left(\sigma _x+1\right) \tan (\theta )\right){}^2}{2 \left(-2 \rho \sigma _x \sigma _y \tan (\theta )+\sigma _x^2+\sigma _y^2+\tan ^2(\theta )\right)}\right),$$
showing right away that the angle is not normally distributed. However, as your simulations show and intuition suggests, it should be approximately normal provided the variations of the side lengths are small compared to the lengths themselves. In this case a [Saddlepoint approximation](https://projecteuclid.org/journals/annals-of-mathematical-statistics/volume-25/issue-4/Saddlepoint-Approximations-in-Statistics/10.1214/aoms/1177728652.full) ought to yield good results for specific values of $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$, and $\rho$, even though a closed-form general solution is not available. The approximate standard deviation will drop right out upon finding the second derivative (with respect to $\theta$) of the logarithm of the pdf (as shown in equations (2.6) and (3.1) of the reference). I recommend a computer algebra system (like MatLab or Mathematica) for carrying this out!
| null | CC BY-SA 4.0 | null | 2010-09-30T20:19:47.273 | 2022-08-06T19:45:51.410 | 2022-08-06T19:45:51.410 | 79696 | 919 | null |
3217 | 2 | null | 3179 | 13 | null | The best "graph" is so obvious nobody has mentioned it yet: make maps. Housing data depend fundamentally on spatial location (according to the old saw about real estate), so the very first thing to be done is to make a clear detailed map of each variable. To do this well with a third of a million points really requires an industrial-strength GIS, which can make short work of the process. After that it makes sense to go on and make probability plots and boxplots to explore univariate distributions, and to plot scatterplot matrices and wandering schematic boxplots, etc, to explore dependencies--but the maps will immediately suggest what to explore, how to model the data relationships, and how to break up the data geographically into meaningful subsets.
| null | CC BY-SA 2.5 | null | 2010-09-30T20:40:28.717 | 2010-09-30T20:40:28.717 | null | null | 919 | null |
3218 | 2 | null | 3201 | 12 | null | People have invented numerous systems for rating things (like experts) on multiple criteria: visit the Wikipedia page on [Multi-criteria decision analysis](http://en.wikipedia.org/wiki/Multi-criteria_decision_analysis) for a list. Not well represented there, though, is one of the most defensible methods out there: [Multi attribute valuation theory.](http://books.google.com/books?hl=en&lr=&id=GPE6ZAqGrnoC&oi=fnd&pg=PR11&dq=multi+attribute+evaluation+theory+raiffa&ots=EjKJPzatEx&sig=fd9rfTTHcOVIdv9FbmrRcdlyDcA#v=onepage&q&f=false) This includes a set of methods to evaluate trade-offs among sets of criteria in order to (a) determine an appropriate way to re-express values of the individual variables and (b) weight the re-expressed values to obtain a score for ranking. The principles are simple and defensible, the mathematics is unimpeachable, and there's nothing fancy about the theory. More people should know and practice these methods rather than inventing arbitrary scoring systems.
| null | CC BY-SA 2.5 | null | 2010-09-30T21:01:26.463 | 2010-09-30T21:01:26.463 | null | null | 919 | null |
3219 | 2 | null | 3158 | 8 | null | I found that the dependency graph in Flare is also similar to what I want:
[http://flare.prefuse.org/apps/dependency_graph](http://flare.prefuse.org/apps/dependency_graph)
| null | CC BY-SA 2.5 | null | 2010-09-30T22:37:31.757 | 2010-09-30T22:37:31.757 | null | null | 1106 | null |
3220 | 2 | null | 3215 | 14 | null | You are looking at circular statistics and in particular a circular distribution called the projected normal distribution.
For some reason this topic can be a little hard to google, but the two major texts on circular statistics are [The Statistical Analysis of Circular Data](http://books.google.com.au/books?id=wGPj3EoFdJwC&lpg=PA37&ots=Ph2sDAAER6&dq=Fisher%20statistical%20circular%20data%20book&pg=PP1#v=onepage&q=Fisher%20statistical%20circular%20data%20book&f=false) by Fisher and
[Directional Statistics](http://onlinelibrary.wiley.com/book/10.1002/9780470316979) by Mardia and Jupp.
For a thorough analysis of the projected normal distribution see page 46 of Mardia and Jupp. There are closed-form expressions (up to the error function integral) for the distribution, and as whuber has suggested, it looks similar to the normal when its "variance" (careful here, what does variance mean for a random variable on a circle?!) is small, i.e. when the distribution is quite concentrated at one point (or direction or angle).
| null | CC BY-SA 3.0 | null | 2010-09-30T22:55:48.303 | 2014-08-26T13:48:20.407 | 2014-08-26T13:48:20.407 | 919 | 352 | null |
3221 | 1 | 3224 | null | 4 | 166 | I was wondering how to create a vector with equal spacing between its consecutive elements in R. In Matlab, I can do [start:step:end].
Also, if I want to plot a function with an analytical form, do I have to evaluate the function at some sample points in its domain and plot these pairs of points? Is there an R function that can take the function and a set in its domain as arguments, and plot the graph of the function on that set?
| How to create a vector with equivalent spacing between its consecutive elements in R? | CC BY-SA 3.0 | null | 2010-10-01T00:16:07.093 | 2014-07-02T15:23:55.260 | 2014-07-02T15:23:55.260 | 196 | 1005 | [
"r",
"data-visualization"
] |
3222 | 2 | null | 3221 | 3 | null | See [seq](http://stuff.mit.edu/afs/sipb/project/r-project/arch/i386_rhel3/lib/R/library/base/html/seq.html) for sequence generation:
```
seq(from, to, by)
```
or `?seq` for help.
| null | CC BY-SA 2.5 | null | 2010-10-01T00:20:28.803 | 2010-10-01T00:20:28.803 | null | null | 251 | null |
3223 | 2 | null | 3158 | 7 | null | I would just add:
As you point out, Flare has the dependency graph, which Aleks Jakulin [argued was similar but better](http://www.stat.columbia.edu/~cook/movabletype/archives/2009/06/visualizing_tab.html). This was based originally on the ["Hierarchical Edge Bundles: Visualization of Adjacency Relations in Hierarchical Data"](http://www.win.tue.nl/~dholten/papers/bundles_infovis.pdf) (Holten 2006).
I personally prefer to use [Protovis](http://vis.stanford.edu/protovis/) to Flare directly, and you can look at [Mike Bostock's example of the same graphic](http://cs.stanford.edu/people/mbostock/iv/dependency-tree.html). Here is also an example of [an Arc Diagram in Protovis](http://vis.stanford.edu/protovis/ex/arc.html), which is very similar but laid out linearly.
| null | CC BY-SA 3.0 | null | 2010-10-01T00:22:16.480 | 2014-11-17T10:54:44.117 | 2014-11-17T10:54:44.117 | 22047 | 5 | null |
3224 | 2 | null | 3221 | 4 | null |
- Use seq as suggested by ars
- For example, plot(sin, -pi, 2*pi).
| null | CC BY-SA 2.5 | null | 2010-10-01T00:26:46.690 | 2010-10-01T00:26:46.690 | null | null | 159 | null |
3228 | 2 | null | 2910 | 1 | null | Just my 2 cents. I've found Notepad++ useful for this. I can maintain separate scripts (program control, data formatting, etc.) and a .pad file for each project. The .pad file calls all the scripts associated with that project.
| null | CC BY-SA 2.5 | null | 2010-10-01T00:58:04.533 | 2010-10-01T00:58:04.533 | null | null | null | null |
3229 | 2 | null | 3171 | 0 | null | You should adjust your standard errors (and p-values, confidence intervals, etc) to account for the observations not being independent. You can do this under some reasonable assumptions even though you don't know which observations are of the same person.
For example, suppose you're estimating the mean of some variable, $x$. Let $x_{it}$ be the observations in your sample, with the understanding that $x_{it}$ and $x_{i,t+1}$ are not necessarily the same person. Let $e_{it} = x_{it} - \bar{x}$. A central limit theorem for dependent data (e.g. [this one](http://en.wikipedia.org/wiki/Central_limit_theorem#Under_weak_dependence)) tells us that the sample mean is asymptotically normal with variance
$$V = \lim E\left[\frac{1}{NT}\Big(\sum e_{it}\Big)^{2}\right] = \lim \frac{1}{NT}\sum E[e_{it}e_{js}],$$
where the limit is as $NT \to \infty$, the first sum is over $i,t$ and the second is over $i,t,j,s$. Assume that the $e_{it}$ are uncorrelated for different individuals. Suppose that, for a fixed individual, $e_{it}$ is weakly stationary with $v_s = E[e_{t} e_{t+s}]$. Also, let's suppose that the probability of the same person being sampled at time period $s$, conditional on being in the sample at time $t$, is $p$. If the pool and sample sizes are fixed, then $p = (\text{sample size})/(\text{pool size})$. Given that, we know
$$E[e_{it}e_{js}] = \begin{cases} v_0 & \text{if } t=s, \\ p\, v_{t-s} & \text{otherwise.} \end{cases}$$
Now, assuming that $T \to \infty$, we get
$$V = v_0 + 2p \sum_{t=1}^{\infty} v_t.$$
This is the asymptotic variance of the sample mean. To use this result, you need to be able to estimate $V$ consistently; $V$ is called the long-run variance. If $N$ is fixed, it can be estimated by kernel methods. If $N \to \infty$ seems like a more appropriate asymptotic approximation, then you need not resort to kernels.
All this is just meant as an example to get you started. You can modify many of the assumptions if they're not appropriate for your setting. For example, $p$ could vary with $t$ and $s$ instead of being constant. If you're not just interested in sample averages, you can do similar calculations for any statistic you want.
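If you do end up needing a kernel estimate of the long-run variance, here is a minimal base-R sketch (the function name, the Bartlett bandwidth rule, and the simulated data are all just illustrative, and the factor $p$ from above is ignored):
```
# Bartlett / Newey-West style estimate of the long-run variance of a mean
long_run_variance <- function(x, L = floor(4 * (length(x) / 100)^(2 / 9))) {
  n  <- length(x)
  e  <- x - mean(x)
  v0 <- sum(e^2) / n                       # lag-0 autocovariance
  if (L < 1) return(v0)
  v  <- sapply(seq_len(L), function(s) sum(e[(s + 1):n] * e[1:(n - s)]) / n)
  w  <- 1 - seq_len(L) / (L + 1)           # Bartlett kernel weights
  v0 + 2 * sum(w * v)
}

x <- arima.sim(list(ar = 0.5), n = 500)    # some autocorrelated data
sqrt(long_run_variance(x) / length(x))     # dependence-robust SE of mean(x)
```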
| null | CC BY-SA 2.5 | null | 2010-10-01T01:04:39.917 | 2010-10-01T01:04:39.917 | null | null | 1229 | null |
3230 | 2 | null | 3214 | 2 | null | I suggest you read Sections 2.3 & 2.4 (pp. 40-73) of Hosmer & Lemeshow's 1999 edition of [Applied Survival Analysis](http://rads.stackoverflow.com/amzn/click/0471154105). These give variances of various statistics of the survivorship function, such as (1) the survival estimate at each time point (allowing confidence interval estimation), (2) the mean survival time, etc. I'm not clear from your question which aspect of experimental variability you want to summarise.
You will be aware there are several test statistics which can be used to compare KM survival functions, such as the log-rank test you mention. They differ on the weighting applied to variance estimates of deaths at different survival times. However, the pooled result, which forms the test statistic, is not usually regarded as a variance estimate, in the same way that a chi-squared statistic is not a variance estimator, but a test of whether the [sample distribution of the variance](http://en.wikipedia.org/wiki/Variance#Distribution_of_the_sample_variance) follows what is expected from the null hypothesis.
| null | CC BY-SA 2.5 | null | 2010-10-01T01:07:46.603 | 2010-10-01T01:56:17.783 | 2010-10-01T01:56:17.783 | 521 | 521 | null |
3231 | 2 | null | 1380 | 2 | null | Your question is a good one (assuming I understand it correctly). I believe you have $K$ 2x2 tables which correspond to $K$ different methods (call this $Z$), and your aim is to say that methods $K_1, K_2, \ldots, K_n$ (with $K_i \in \{1,\ldots,K\}$) have some association between prediction and truth while the remaining ones don't. If you think this is a correct interpretation, proceed ahead; otherwise ignore my answer.
The problem is similar to partial tables, where we control for $Z$ (i.e. condition on a particular method) and study the $XY$ (truth and prediction) relationship at fixed levels of $Z$. Basically, two-way cross-sectional slices of the three-way contingency table cross-classify $X$ and $Y$ at several values of $Z$ ($1, \ldots, K$ for you). These cross sections are called partial tables.
Associations for partial tables are called conditional associations, because they refer to the effect of $X$ on $Y$ conditional on fixing $Z$ at some level.
For reference see: Agresti's Categorical Data Analysis Book [link](http://books.google.com/books?id=hpEzw4T0sPUC&printsec=frontcover&dq=categorical+data+analysis&source=bl&ots=noIl9pcqSb&sig=HUEavYHiONx84lylUSCX8Utze2s&hl=en&ei=tzelTLi-CoOdlgeunoGvDA&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCQQ6AEwAA#v=onepage&q&f=false). The example 2.3.2 (Death penalty example) is a special case and talks about 2x2x2 tables. Section 2.3.3 answers your question for the general scenario. Your question can be answered by testing if there is conditional independence.
This is accomplished using the CMH (Cochran-Mantel-Haenszel) test of conditional independence. In R, this can be done using [this function](http://sekhon.berkeley.edu/stats/html/mantelhaen.test.html). CMH generalizations for IxJxK tables also exist. In your case, under the null the distribution of the test statistic is hypergeometric; for greater detail see Section 6.3 of the Agresti book.
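To illustrate the mechanics only (all counts and names below are made up), a 2x2xK array of truth-by-prediction tables, one slice per method, can be passed straight to `mantelhaen.test`:
```
tab <- array(c(20, 10,  8, 22,    # method 1
               15, 15, 14, 16,    # method 2
               25,  5,  6, 24),   # method 3
             dim = c(2, 2, 3),
             dimnames = list(truth      = c("yes", "no"),
                             prediction = c("yes", "no"),
                             method     = paste0("K", 1:3)))
mantelhaen.test(tab)   # CMH test of conditional independence given method
```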
Hopefully this answers your questions. Otherwise, I am sorry for making you read this far.
| null | CC BY-SA 2.5 | null | 2010-10-01T01:34:16.313 | 2010-10-01T01:34:16.313 | null | null | 1307 | null |
3232 | 1 | 3234 | null | 6 | 199 | My office is going to implement a bundle of infection control measures in a hospital and see if it can effectively reduce the infection rate of some pathogen. The unit of measurement will be "cases per thousand patient bed days". We have chosen 4 wards in which to implement the control measures for 12 months, with measurements taken monthly. However, looking at the current infection data, even though these wards are considered to have the highest counts of infections, they are still relatively clean, with several months showing a rate of zero.
I have created a regression model assuming they manage to reduce infections by 50%, and the betas in the model (which has three variables) are not statistically significant. My colleagues are worried that, after all this hard work, a non-significant result will be very frustrating for the front-line staff.
Is there any alternative outcome measure, or even an alternative statistical method, for time series of rare events? Thanks!
| Dependent variable selection for loglinear segmented regression in time-series analysis of rare events | CC BY-SA 2.5 | null | 2010-10-01T03:25:35.153 | 2010-10-01T13:32:19.047 | 2010-10-01T13:32:19.047 | null | 588 | [
"time-series",
"epidemiology",
"monitoring"
] |
3234 | 2 | null | 3232 | 3 | null | I think you're right to conclude that there's little hope of finding a 'statistically significant' result from 4 wards over 12 months. Of course, that doesn't mean the control measures don't work — just that your sample size is far too small (and the variability too large) to have much chance of finding evidence that it's worked. I'd guess that larger studies have been done and at least some evidence exists about what control measures work, and you should assume the same measures will work in your hospital too unless you have good reason for thinking it's different.
Within your one hospital, rather than looking at the p-value(s) as the 'bottom line', I think you'd be better off looking at this as performance monitoring. A quick bit of googling found a 2004 [Report from Key Indicators Joint Working Group](http://www.his.org.uk//resource_library.cfm?cit_id=243&FAArea1=customWidgets.content_view_1&usecache=false) of the [Hospital Infection Society](http://www.his.org.uk/), which may be one place to start.
| null | CC BY-SA 2.5 | null | 2010-10-01T08:16:11.410 | 2010-10-01T08:16:11.410 | null | null | 449 | null |
3235 | 1 | 3243 | null | 37 | 19779 |
- I would like to measure the time that it takes to repeat the running of a function. Are replicate() and using for-loops equivalent? For example:
system.time(replicate(1000, f()));
system.time(for(i in 1:1000){f()});
Which is the preferred method?
- In the output of system.time(), is sys+user the actual CPU time for running the program? Is elapsed a good measure of time performance of the program?
| Timing functions in R | CC BY-SA 2.5 | null | 2010-10-01T11:46:09.530 | 2010-10-02T16:06:05.113 | 2010-10-01T13:31:31.707 | 8 | 1005 | [
"r"
] |
3236 | 2 | null | 3235 | 26 | null | Regarding your two points:
- It's stylistic. I like replicate() as it is functional.
- I tend to focus on elapsed, i.e. the third number.
What I often do is
```
N <- someNumber
mean(replicate(N, system.time( f(...) )[3]), trim = 0.05)
```
to get a trimmed mean of 90% of N repetitions of calling `f()`.
(Edited, with thanks to Hadley for catching a thinko.)
| null | CC BY-SA 2.5 | null | 2010-10-01T12:03:11.517 | 2010-10-01T20:35:05.737 | 2010-10-01T20:35:05.737 | 334 | 334 | null |
3237 | 2 | null | 3235 | 10 | null | You can also time with timesteps returned by `Sys.time`; this of course measures walltime, so real time computation time. Example code:
```
start <- Sys.time()
replicate(N, doMeasuredComputation())
print(Sys.time() - start)
```
| null | CC BY-SA 2.5 | null | 2010-10-01T13:30:55.063 | 2010-10-01T13:30:55.063 | null | null | null | null |
3238 | 1 | 3257 | null | 44 | 23470 | I have a set of time series data. Each series covers the same period, although the actual dates in each time series may not all 'line up' exactly.
That is to say, if the time series were to be read into a 2D matrix, it would look something like this:
```
date T1 T2 T3 .... TN
1/1/01 100 59 42 N/A
2/1/01 120 29 N/A 42.5
3/1/01 110 N/A 12 36.82
4/1/01 N/A 59 40 61.82
5/1/01 05 99 42 23.68
...
31/12/01 100 59 42 N/A
etc
```
I want to write an R script that will segregate the time series {T1, T2, ... TN} into 'families' where a family is defined as a set of series which "tend to move in sympathy" with each other.
For the 'clustering' part, I will need to select/define a kind of distance measure. I am not quite sure how to go about this, since I am dealing with time series, and a pair of series that may move in sympathy over one interval, may not do so in a subsequent interval.
I am sure there are far more experienced/clever people than me on here, so I would be grateful for any suggestions, ideas on what algorithm/heuristic to use for the distance measure and how to use that in clustering the time series.
My guess is that there is NOT an established robust statistical method for doing this, so I would be very interested to see how people approach/solve this problem - thinking like a statistician.
| Time series 'clustering' in R | CC BY-SA 2.5 | null | 2010-10-01T14:58:01.400 | 2015-04-22T01:00:23.067 | 2010-10-01T15:11:05.647 | 1216 | 1216 | [
"r",
"time-series",
"clustering",
"cointegration"
] |
3239 | 2 | null | 3238 | 18 | null | Another way of saying "tend to move in sympathy" is "cointegrated".
There are two standard ways of calculating [cointegration](http://en.wikipedia.org/wiki/Cointegration): Engle-Granger method and the Johansen procedure. These are covered in ["Analysis of Integrated and Cointegrated Time Series with R"](http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-75966-1) (Pfaff 2008) and the related R [urca package](http://cran.r-project.org/web/packages/urca/index.html). I highly recommend the book if you want to pursue these methods in R.
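As a rough sketch of what the Johansen procedure looks like with urca (the simulated series, the lag order `K = 2`, and the `ecdet` choice are placeholders; see the book for how to choose them in practice):
```
library(urca)
set.seed(1)
z  <- cumsum(rnorm(200))                   # common stochastic trend
y1 <- z + rnorm(200)
y2 <- 0.5 * z + rnorm(200)
joh <- ca.jo(cbind(y1, y2), type = "trace", ecdet = "const", K = 2)
summary(joh)   # trace statistics indicate how many cointegrating relations exist
```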
I also recommend that you look at [this question on multivariate time series](https://stackoverflow.com/questions/1714280/multivariate-time-series-modelling-in-r/1715488#1715488) and, in particular, at [Ruey Tsay's course at U. Chicago](http://faculty.chicagobooth.edu/ruey.tsay/teaching/mts/sp2009/) which includes all the necessary R code.
| null | CC BY-SA 2.5 | null | 2010-10-01T15:05:48.723 | 2010-10-01T15:05:48.723 | 2017-05-23T12:39:27.620 | -1 | 5 | null |
3240 | 2 | null | 3235 | 2 | null | They do different things. Time what you wish done. replicate() returns a vector of results of each execution of the function. The for loop does not. Therefore, they're not equivalent statements.
In addition, time a number of ways you want something done. Then you can find the most efficient method.
| null | CC BY-SA 2.5 | null | 2010-10-01T15:48:30.923 | 2010-10-02T16:06:05.113 | 2010-10-02T16:06:05.113 | 601 | 601 | null |
3241 | 2 | null | 3238 | 4 | null | Clustering time series is done fairly commonly by population dynamicists, particularly those who study insects to understand trends in outbreak and collapse. Look for work on the gypsy moth, spruce budworm, mountain pine beetle and larch budmoth.
For the actual clustering you can choose whatever distance metric you like; each probably has its own strengths and weaknesses relative to the kind of data being clustered. Kaufman and Rousseeuw (1990), Finding Groups in Data: An Introduction to Cluster Analysis, is a good place to start. Remember, the clustering method doesn't 'care' that you're using a time series; it only looks at the values measured at the same points in time. If your two time series are not sufficiently in sync over their lifespan, they won't (and perhaps shouldn't) cluster together.
Where you will have problems is in determining the number of clusters (families) to use after you've clustered the time series. There are various ways of selecting a cut-off of informative clusters, but here the literature isn't that good.
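Just to sketch the mechanics (not a recommendation of this particular distance), a correlation-based distance plus hierarchical clustering in R could look like this, with simulated series standing in for the real data and `k = 4` chosen arbitrarily:
```
set.seed(1)
series_matrix <- replicate(10, cumsum(rnorm(100)))   # 10 series, one per column
d  <- as.dist(1 - cor(series_matrix, use = "pairwise.complete.obs"))
hc <- hclust(d, method = "average")
plot(hc)
families <- cutree(hc, k = 4)   # choosing k is the hard part, as noted above
```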
| null | CC BY-SA 2.5 | null | 2010-10-01T16:34:37.630 | 2010-10-01T16:34:37.630 | null | null | 1475 | null |
3242 | 1 | 3246 | null | 22 | 11194 | I have some data to which I am trying to fit a trendline. I believe the data to follow a power law, and so have plotted the data on log-log axes looking for a straight line. This has resulted in an (almost) straight line and so in Excel I have added a trendline for a power law. Being a stats newb, my question is, what is now the best way for me to go from "well the line looks like it fits pretty well" to "numeric property $x$ proves that this graph is fitted appropriately by a power law"?
In Excel I can get an r-squared value, though given my limited knowledge of statistics, I don't even know whether this is actually appropriate under my specific circumstances. I have included an image below showing the plot of the data I am working with in Excel. I have a little bit of experience with R, so if my analysis is being limited by my tools, I am open to suggestions on how to go about improving it using R.

| How to measure/argue the goodness of fit of a trendline to a power law? | CC BY-SA 3.0 | null | 2010-10-01T17:04:11.523 | 2017-10-25T04:56:48.910 | 2013-02-22T09:08:14.080 | 8 | 870 | [
"goodness-of-fit",
"power-law"
] |
3243 | 2 | null | 3235 | 19 | null | For effective timing of programs, especially when you are interested in comparing alternative solutions, you need a control! A good way is to put the procedure you're timing into a function. Call the function within a timing loop. Write a stub procedure, essentially by stripping out all the code from your function and just returning from it (but leave all the arguments in). Put the stub into your timing loop and re-time. This measures all the overhead associated with the timing. Subtract the stub time from the procedure time to get the net: this should be an accurate measure of the actual time needed.
Because most systems nowadays can be peremptorily interrupted, it is important to do several timing runs to check for variability. Instead of doing one long run of $N$ seconds, do $m$ runs of about $N/m$ seconds each. It helps to do this in a double loop all in one go. Not only is that easier to handle, it introduces a little bit of negative correlation in each time series, which actually improves the estimates.
By using these basic principles of experimental design, you essentially control for any differences due to how you deploy the code (e.g., the difference between a for loop and replicate()). That makes your problem go away.
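A rough R sketch of the idea (the toy function, the run counts, and the fact that the function and its stub are timed in separate runs rather than interleaved are all simplifications):
```
f    <- function(x) { Sys.sleep(0.001); sum(x^2) }   # procedure under study (toy)
stub <- function(x) NULL                             # same arguments, no work
reps <- 200; m <- 10
time_loop <- function(fun) system.time(for (i in 1:reps) fun(rnorm(100)))[["elapsed"]]
net <- replicate(m, time_loop(f)) - replicate(m, time_loop(stub))
mean(net) / reps   # net time per call, with timing-loop overhead subtracted
```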
| null | CC BY-SA 2.5 | null | 2010-10-01T17:08:15.340 | 2010-10-01T17:08:15.340 | null | null | 919 | null |
3244 | 1 | 3248 | null | 16 | 3814 | I know most of you probably feel that Google Docs is still a primitive tool. It is no Matlab or R and not even Excel. Yet, I am baffled at the power of this web based software that just uses the operating capability of a browser (and is compatible with many browsers that work very differently).
Mike Lawrence, active in this forum, has shared a spreadsheet with us using Google Docs doing some pretty fancy stuff with it. I personally have replicated a pretty thorough hypothesis testing framework (including numerous parametric and nonparametric tests) originally done in Excel in Google Docs.
I am interested if any of you have given Google Docs a try and have pushed it to its limits in interesting applications. I am also interested to hear about the bugs or flaws you have encountered with Google Docs
I am designating this question "for community wiki" denoting that there are no best answers for this. It is more of a survey than anything.
| Do some of you use Google Docs spreadsheet to conduct and share your statistical work with others? | CC BY-SA 2.5 | null | 2010-10-01T17:21:51.887 | 2018-07-14T17:48:44.927 | 2012-09-13T19:25:54.267 | 919 | 1329 | [
"software",
"computational-statistics"
] |
3246 | 2 | null | 3242 | 26 | null | See Aaron Clauset's page:
- Power-law Distributions in Empirical Data
which has links to code for fitting power laws (Matlab, R, Python, C++) as well as a paper by Clauset and Shalizi you should read first.
You might want to read Clauset's and Shalizi's blogs posts on the paper first:
- Power laws and all that jazz
- So You Think You Have a Power Law — Well Isn't That Special?
A summary of the last link could be:
>
Lots of distributions give you straight-ish lines on a log-log plot.
Abusing linear regression makes the baby Gauss cry.
Fitting a line to your log-log plot by least squares is a bad idea.
Use maximum likelihood to estimate the scaling exponent.
Use goodness of fit to estimate where the scaling region begins.
Use a goodness-of-fit test to check goodness of fit.
Use Vuong's test to check alternatives, and be prepared for disappointment.
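For the 'maximum likelihood' step, the continuous-case estimator from the Clauset et al. paper has the closed form $\hat\alpha = 1 + n\left[\sum_i \ln(x_i/x_{\min})\right]^{-1}$. A minimal R sketch, with $x_{\min}$ assumed already chosen and the simulated data purely illustrative:
```
alpha_hat <- function(x, xmin) {
  x <- x[x >= xmin]
  1 + length(x) / sum(log(x / xmin))
}

set.seed(1)
x <- 1 * runif(5000)^(-1 / (2.5 - 1))   # power-law sample with alpha = 2.5, xmin = 1
alpha_hat(x, xmin = 1)                  # should be close to 2.5
```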
| null | CC BY-SA 3.0 | null | 2010-10-01T18:22:17.717 | 2017-05-08T09:03:29.043 | 2017-05-08T09:03:29.043 | 105234 | 251 | null |
3247 | 2 | null | 3244 | 19 | null | As an enthusiastic user of R, bash, Python, asciidoc, (La)TeX, open source software, and un*x tools generally, I cannot provide an objective answer. Moreover, as I often argue against the use of MS Excel or spreadsheets of any kind (well, you see your data, or part of it, but what else?), I would not contribute positively to the debate. I'm not the only one, e.g.
- Spreadsheet Addiction, from P. Burns.
- MS Excel’s precision and accuracy, a post on the 2004 R mailing-list
- L. Knusel, On the accuracy of statistical distributions in Microsoft Excel 97, Computational Statistics & Data Analysis, 26: 375–377, 1998. (pdf)
- B.D. McCullough & B. Wilson, On the accuracy of statistical procedures in Microsoft Excel
2000 and Excel XP, Computational Statistics & Data Analysis, 40: 713–721, 2002.
- M. Altman, J. Gill & M.P. McDonald, Numerical Issues in Statistical Computing for the Social Scientist, Wiley, 2004. [e.g., pp. 12–14]
A colleague of mine lost all his macros because of the lack of backward compatibility, etc. Another colleague tried to import genetics data (around 700 subjects genotyped on 800,000 markers, 120 MB), just to "look at them". Excel failed, Notepad gave up too... I was able to "look at them" with vi, and quickly reformat the data with some sed/awk or perl script. So I think there are different levels to consider when discussing the usefulness of spreadsheets. Either you work on small data sets, and only want to apply elementary statistical stuff, and maybe it's fine. Then, it's up to you to trust the results, or you can always ask for the source code, but maybe it would be simpler to do a quick test of all inline procedures with the [NIST benchmark](http://www.itl.nist.gov/div898/strd/). I don't think this corresponds to a good way of doing statistics simply because Excel is not true statistical software (IMHO), although as an update to the aforementioned list, newer versions of MS Excel seem to have demonstrated improvements in their accuracy for statistical analyses; see Keeling and Pavur, [A comparative study of the reliability of nine statistical software packages](http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4JHMGWJ-1&_user=10&_coverDate=05%2F01%2F2007&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_searchStrId=1488136142&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=2babab0b51c03746d5d7a74d31f1498c&searchtype=a) (CSDA 2007 51: 3811).
Still, about one paper out of 10 or 20 (in biomedicine, psychology, psychiatry) includes graphics made with Excel, sometimes without removing the gray background, the horizontal black line or the automatic legend (Andrew Gelman and Hadley Wickham are certainly as happy as me when seeing it). But more generally, it tends to be the most used "software" according to a [recent poll](http://datafl.ws/120) on FlowingData, which reminds me of an old talk by Brian Ripley (who co-authored the MASS R package, and wrote an excellent book on pattern recognition, among others):
>
Let's not kid ourselves: the most
widely used piece of software for
statistics is Excel (B. Ripley via Jan
De Leeuw), http://www.stats.ox.ac.uk/~ripley/RSS2002.pdf
Now, if you feel it provides you with a quick and easier way to get your statistics done, why not? The problem is that there are still things that cannot be done (or at least, it's rather tricky) in such an environment. I think of bootstrap, permutation, multivariate exploratory data analysis, to name a few. Unless you are very proficient in VBA (which is neither a scripting nor a programming language), I am inclined to think that even minor operations on data are better handled under R (or Matlab, or Python, providing you get the right tool for dealing with e.g. so-called data.frame). Above all, I think Excel does not promote very good practices for the data analyst (but it also applies to any "cliquodrome", see the discussion on Medstats about the need to maintain a record of data processing, [Documenting analyses and data edits](http://groups.google.com/group/medstats/browse_thread/thread/601793e6ce36e789)), and I found this post on [Practical Stats](http://www.practicalstats.com/xlsstats/excelstats.html) relatively illustrative of some of Excel pitfalls. Still, it applies to Excel, I don't know how it translates to GDocs.
About sharing your work, I tend to think that [Github](http://github.com/) (or [Gist](http://gist.github.com/) for source code) or [Dropbox](http://www.dropbox.com) (although EULA might discourage some people) are very good options (revision history, grant management if needed, etc.). I cannot encourage the use of software which basically stores your data in a binary format. I know it can be imported into R, Matlab, Stata, SPSS, but in my opinion:
- data should definitively be in a text format, that can be read by another statistical software;
- analysis should be reproducible, meaning you should provide a complete script for your analysis and it should run (we approach the ideal case near here...) on another operating system at any time;
- your own statistical software should implement acknowledged algorithms and there should be an easy way to update it to reflect current best practices in statistical modeling;
- the sharing system you choose should include versioning and collaborative facilities.
That's it.
| null | CC BY-SA 4.0 | null | 2010-10-01T18:31:56.090 | 2018-07-14T17:48:44.927 | 2018-07-14T17:48:44.927 | 79696 | 930 | null |
3248 | 2 | null | 3244 | 12 | null | My main use for google spreadsheets have been with google forms, for collecting data, and then easily importing it into R. Here is a post I wrote about it half a year ago:
[Google spreadsheets + google forms + R = Easily collecting and importing data for analysis](http://www.r-statistics.com/2010/03/google-spreadsheets-google-forms-r-easily-collecting-and-importing-data-for-analysis/)
Also, If you are into collaboration, my tool of choice is DropBox. I wrote a post regarding it a few months ago:
[Syncing files across computers using DropBox](http://www.r-statistics.com/2010/05/syncing-files-across-computers-using-dropbox/)
I have now been using it for about half a year on a project with 5 co-authors, and it has been invaluable (syncing data files from 3 contributors, everyone can see the latest version of the output I am producing, and everyone is looking at the same .docx file for the article).
Both posts offer video tutorials and verbal instructions.
| null | CC BY-SA 2.5 | null | 2010-10-01T19:23:05.050 | 2010-10-01T19:23:05.050 | null | null | 253 | null |
3249 | 1 | null | null | 11 | 3497 | I want to forecast retail items (by week) using exponential smoothing. I'm stuck right now on how to calculate, store, and apply the seasonality indexes.
The problem is that all examples I've found deal with a sort of simple seasonality. In my case I have the following problems:
1. Seasons don't occur on the same week every year: they are movable. Mardi-gras, lent, easter, and a few others.
2. There are seasons that change depending on the year. For example, there is a national holiday season. Depending on whether the holiday is close to the weekend, customers will or will not leave town. So it's kind of like having two seasons: one where customers leave town, and one where they don't leave town.
3. Sometimes two (or 3) seasons occur at the same time. For example, we had "Mardi-Gras" season occurring at the same time as Valentine's season.
4. Sometimes seasons change in duration. For example, "Halloween season" started earlier this year. Christmas is also another example, where it seems like every year we start earlier to carry the products.
It seems to me that I need to find a way to set some sort of "seasonal profiles" which then, depending on the particular scenario are somehow added to obtain the correct seasonal index. Does that make sense?
Does anybody know where I can find practical information on how to do this?
Thanks,
Edgard
| Calculation of seasonality indexes for complex seasonality | CC BY-SA 2.5 | null | 2010-10-01T20:05:58.550 | 2010-10-03T03:51:53.697 | 2010-10-03T03:39:27.527 | 159 | 1479 | [
"time-series",
"seasonality"
] |
3250 | 2 | null | 3235 | 4 | null | Regarding which timing metric to use, I cannot add to the other responses.
Regarding the function to use, I like `benchmark()` from the [rbenchmark package](http://cran.r-project.org/web/packages/rbenchmark/index.html).
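For example (the two functions compared here are arbitrary placeholders):
```
library(rbenchmark)
f <- function() sort(runif(1e4))
g <- function() sort(rnorm(1e4))
benchmark(f(), g(), replications = 1000)   # returns a data frame of timings
```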
| null | CC BY-SA 2.5 | null | 2010-10-01T20:51:10.613 | 2010-10-01T20:51:10.613 | null | null | 253 | null |
3251 | 2 | null | 3249 | 1 | null | A simple fix would be to include events dummies in your specification:
$(1)\qquad \hat{y}_t=\lambda_1 y_{t-1}+\cdots+\lambda_k y_{t-k}+\phi_1 D_{t,1}+\cdots+\phi_m D_{t,m}$
where $D_{t,m}$ is an indicator taking value $1$ if week $t$ has event $m$ (say Mardi gras) and 0 otherwise, for all $m$ events you deem important.
The first part of the specification, $\lambda_1 y_{t-1}+\cdots+\lambda_k y_{t-k}$, is essentially an exponential smoother, but with weights varying as a function of the lag (and estimated by OLS).
This pre-supposes that you have at least 20 observations for each event (i.e. 20 'mardi gras'). If this is not the case, you can try to bundle some events together (say mardi gras and labour day).
The R code to fit (1) is rather straightforward, assuming `dlsales` is stationary and `D` is your matrix of dummy variables:
```
fit<-arima(dlsales,order=c(4,0,0),seasonal = list(order = c(1, 0, 0),period=52),xreg = D)
```
Starting from here, you can ask more specific questions about the parts of my answer that are not familiar to you (I don't know what your level in statistics is).
| null | CC BY-SA 2.5 | null | 2010-10-01T20:54:45.623 | 2010-10-02T07:42:40.397 | 2010-10-02T07:42:40.397 | 603 | 603 | null |
3252 | 1 | 3254 | null | 29 | 2865 | Exploratory data analysis (EDA) often leads to exploring other "tracks" that do not necessarily belong to the initial set of hypotheses. I face such a situation in the case of studies with a limited sample size and a lot of data gathered through different questionnaires (socio-demographic data, neuropsychological or medical scales -- e.g., mental or physical functioning, depression/anxiety level, symptom checklists). It happens that EDA helps to highlight some unexpected relationships ("unexpected" meaning that they were not included in the initial analysis plan) that translate into additional questions/hypotheses.
As is the case for overfitting, data [dredging](http://en.wikipedia.org/wiki/Data_dredging) or [snooping](http://en.wikipedia.org/wiki/Data_snooping) does lead to results that do not generalize. However, when a lot of data is available, it is quite difficult (for the researcher or physician) to postulate a limited set of hypotheses.
I would like to know if there are well-acknowledged methods, recommendations, or rules of thumb that may help to delineate EDA in the case of small-sample studies.
| How to cope with exploratory data analysis and data dredging in small-sample studies? | CC BY-SA 2.5 | null | 2010-10-01T21:52:02.977 | 2023-04-06T10:07:11.423 | 2010-10-25T06:50:55.937 | 449 | 930 | [
"multiple-comparisons",
"epidemiology",
"small-sample",
"exploratory-data-analysis"
] |
3253 | 2 | null | 3038 | 0 | null | Bootstrap differences (e.g. the difference between the means) between the 2 sample groups and check for statistical significance. A more detailed description of this approach, albeit in a different context, can be found here [http://www.automated-trading-system.com/a-different-application-of-the-bootstrap/](http://www.automated-trading-system.com/a-different-application-of-the-bootstrap/)
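A minimal base-R sketch of the idea (the simulated data, the 10,000 resamples, and the simple percentile interval are all just illustrative choices):
```
set.seed(1)
x <- rnorm(30, mean = 1.0)     # group 1 (placeholder data)
y <- rnorm(40, mean = 1.5)     # group 2 (placeholder data)
boot_diff <- replicate(10000,
  mean(sample(x, replace = TRUE)) - mean(sample(y, replace = TRUE)))
quantile(boot_diff, c(0.025, 0.975))   # 95% percentile interval for the difference in means
# an interval excluding 0 indicates a significant difference at roughly the 5% level
```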
| null | CC BY-SA 2.5 | null | 2010-10-01T23:13:10.210 | 2010-10-01T23:13:10.210 | null | null | 226 | null |
3254 | 2 | null | 3252 | 12 | null | I think the main thing is to be honest when reporting such results that they were unexpected findings from EDA and not part of the initial analysis plan based on an a priori hypothesis. Some people like to label such results 'hypothesis generating': e.g. the [first hit](http://jco.ascopubs.org/content/23/30/7512.short) from a [search for this phrase on Google Scholar](http://scholar.google.co.uk/scholar?hl=en&q=%22hypothesis+generating%22) includes the following in the conclusion section of its abstract:
As this was an "exploratory" analysis, this effect should be considered as hypothesis generating and assessed prospectively in other trials...
Though note that although this was a post-hoc subgroup analysis it was from a randomized control trial, not an observational study, in which the problem gets worse. Philip Cole poured scorn on the idea that observational ('epidemiologic') studies can generate hypotheses in a deliberately provocative but entertaining commentary:
P Cole. [The hypothesis generating machine.](http://www.jstor.org/stable/3702282) Epidemiology 1993; 4:271-273.
| null | CC BY-SA 2.5 | null | 2010-10-01T23:19:38.370 | 2010-10-02T07:49:14.927 | 2010-10-02T07:49:14.927 | 449 | 449 | null |
3255 | 2 | null | 3249 | 7 | null | For the kinds of seasonality you describe, the dummy variable approach is probably best. However, this is easier to handle in an ARIMA framework than an exponential smoothing framework.
\begin{aligned}
y_t &= a + b_1D_{t,1} + \cdots + b_mD_{t,m} + N_t\\
N_t &\sim \text{ARIMA}
\end{aligned}
where each $D_{t,k}$ variable corresponds to one of the holiday or festival events. This is how the `arima` function in R will fit regression variables (as a regression with ARIMA errors, not as an ARIMAX model).
If you really want to stick with the exponential smoothing framework, there is a discussion of how to include covariates in my [2008 book on exponential smoothing](http://www.exponentialsmoothing.net). You might also look at my recent paper on [exponential smoothing with complex seasonality](http://robjhyndman.com/working-papers/complex-seasonality/) although the types of seasonal complications we discuss there are more difficult than the moving festival kind that you describe.
| null | CC BY-SA 2.5 | null | 2010-10-01T23:54:36.450 | 2010-10-03T03:51:53.697 | 2010-10-03T03:51:53.697 | 159 | 159 | null |
3256 | 2 | null | 3207 | 2 | null | With a small modification, the first version will be far more efficient. As it's written, se.est is a random function. Even at the same arguments, its value will change each time because of rnorm. This will mess up optimize. You should use the same random numbers each time se.est is called. Here's one way:
```
e <- c(rnorm(2*n,0,1))
se.est <- function(se,x1bar, x2bar, lsd_obs, n, e) {
y <- se*e + c(rep(x1bar,n), rep(x2bar,n))
x <- c(rep(1,n), rep(2,n))
mse <- sum(lm(y~factor(x))$residuals^2)/(2*n-2)
lsd_est <- qt(0.975,n)*sqrt(2*mse/2)
ans <- (lsd_obs - lsd_est)^2
}
SE <- optimize(f=se.est,interval=c(0.1,100),x1bar,x2bar,lsd_obs,n,e)
```
I didn't try to run this, so I may have made typos or syntax errors, but hopefully the idea is clear. If n is small, you should probably optimize multiple times with different e's, and use something like the average or maybe the 90th percentile of the resulting SEs.
This may not be a concern, but if the original data was not normal, this method could give inaccurate results. Without assuming normality, you could write your problem as:
```
max_y se(y) subject to mean1(y)=x1bar, mean2(y)=x2bar, statistic(y)=s
```
where you allow y to be a sample of size n with any distribution. Without any more restrictions, the solution will often be infinite (it depends on the statistic).
| null | CC BY-SA 2.5 | null | 2010-10-02T00:22:45.550 | 2010-10-02T00:22:45.550 | null | null | 1229 | null |
3257 | 2 | null | 3238 | 27 | null | In data streaming and mining of time series databases, a common approach is to transform the series to a symbolic representation, then use a similarity metric, such as Euclidean distance, to cluster the series. The most popular representations are SAX (Keogh & Lin) or the newer iSAX (Shieh & Keogh):
- Symbolic Aggregate approXimation
- iSAX: Indexing and Mining Terabyte Sized Time Series
The pages above also contain references to distance metrics and clustering. Keogh and crew are into reproducible research and pretty receptive to releasing their code. So you could email them and ask. I believe they tend to work in MATLAB/C++ though.
There was a recent effort to produce a Java and R implementation:
- jmotif
I don't know how far along it is -- it's geared towards motif finding, but, depending on how far they've gotten, it should have the necessary bits you need to put something together for your needs (iSAX and distance metrics: since this part is common to clustering and motif finding).
| null | CC BY-SA 2.5 | null | 2010-10-02T05:13:39.340 | 2010-10-02T05:13:39.340 | null | null | 251 | null |
3259 | 1 | 19865 | null | 22 | 3704 | This question is about estimating cut-off scores on a multi-dimensional screening questionnaire to predict a binary endpoint, in the presence of correlated scales.
I was asked about the interest of controlling for associated subscores when devising cut-off scores on each dimension of a measurement scale (personality traits) which might be used for alcoholism screening. That is, in this particular case, the person was not interested in adjusting on external covariates (predictors) -- which leads to (partial) area under covariate-adjusted ROC curve, e.g. (1-2) -- but essentially on other scores from the same questionnaire because they correlate with each other (e.g. "impulsivity" with "sensation seeking"). It amounts to building a GLM which includes on the left-hand side the score of interest (for which we seek a cut-off) and another score computed from the same questionnaire, while on the right-hand side the outcome may be drinking status.
To clarify (per @robin request), suppose we have $j=4$ scores, say $x_j$ (e.g., anxiety, impulsivity, neuroticism, sensation seeking), and we want to find a cut-off value $t_j$ (i.e. "positive case" if $x_j>t_j$, "negative case" otherwise) for each of them. We usually adjust for other risk factors like gender or age when devising such cut-off (using ROC curve analysis). Now, what about adjusting impulsivity (IMP) on gender, age, and sensation seeking (SS) since SS is known to correlate with IMP? In other words, we would have a cut-off value for IMP where effect of age, gender and anxiety level are removed.
Apart from saying that a cut-off must remain as simple as possible, my response was
>
About covariates, I would recommend
estimating the AUCs with and without
adjustment, just to see if the
predictive performance increase. Here,
your covariates are merely other
subscores defined from the same
measurement instrument and I never
faced such a situation (usually, I
adjust on known risk factors, like Age
or Gender). [...] Also, since you are
interested in prognostic issues (i.e.
screening efficacy of the questionnaire), you
may also be interested in estimating
the positive predictive value (PPV,
probability of patients with positive
test results who are correctly
classified) provided you are able to
classify subjects as "positive" or
"negative" depending on their
subscores on your questionnaire. Note, however,
that it is necessary to know the
prevalence of this disorder to
correctly interpret the PPV in turn...
Do you have a more thorough understanding of this particular situation, with link to relevant papers when possible?
References
- Janes, H and Pepe, MS (2008). Adjusting for Covariates in Studies of Diagnostic, Screening, or Prognostic Markers: An Old Concept in a New Setting. American Journal of Epidemiology, 168(1): 89-97.
- Janes, H and Pepe, MS (2008). Accommodating Covariates in ROC Analysis. UW Biostatistics Working Paper Series, Paper 322.
| Adjusting for covariates in ROC curve analysis | CC BY-SA 2.5 | null | 2010-10-02T09:29:19.373 | 2011-12-15T14:11:44.723 | 2010-11-09T13:22:04.150 | 930 | 930 | [
"epidemiology",
"roc"
] |
3260 | 1 | null | null | 3 | 237 | I have a few (5-6) data sets, each is a function of time, with the time span the same between datasets. These datasets are all statistics of various perspectives of something (the partitions of a graph), and I am trying to find points in time which show interesting changes across the different datasets. Normally these interesting features are local minima, but this may not always be the case.
So for example, dataset 1 is the variance of some measure and has a local minimum at time 0.4; dataset 2 is the number of maxima and also has a local minimum at 0.4.
The problem is that in reality a minimum will sometimes show up in all datasets and sometimes in only one or a few of them. So my goal is basically to combine the information from all datasets to increase my confidence that a local minimum at a particular time is an interesting and significant point.
A simple method would just be to count the number of datasets which have a local minimum at that time and use some threshold value, above which I define the minimum to be significant. But I am wondering if there are more intelligent ways, i.e. ways which can take into account which datasets are more robust and reliable, etc.
| Combining many datasets to increase confidence | CC BY-SA 2.5 | null | 2010-10-02T12:32:01.853 | 2010-11-01T03:15:58.853 | null | null | 809 | [
"multiple-comparisons",
"multivariate-analysis"
] |
3261 | 1 | 3274 | null | 5 | 540 | Suppose you had a method for estimating the population covariance of a vector-valued random variable given observations of that random variable, say $f(Z) \rightarrow C$, where the rows of $Z$ are observations of the random variable. Can one abuse this process to perform a least squares regression $y = x^T\beta + \epsilon$, for $n$-dimensional vector $x$? The idea is that one would have a vector of observations of $y$, call it $Y$, and a matrix of paired observations of $x$, call it $X$, then one would compute the covariance of $Z = [X\; Y]$ (concatenate the matrix next to the vector) via $f$, call it $C$, then let $\hat{\beta} = C_{1:n,1:n}^{-1} C_{1:n,n+1}$.
A few questions:
- Will this work under optimistic conditions? (a simple simulation in Matlab shows that the precision is not great, but the results are within 4 sig figs, so I am guessing it will.)
- Is this a known trick? if so, does it have a name I can google search for, or is it so trivial that it doesn't require a name?
- Most importantly, if $f$ can deal with input where some values are missing (say, MCAR--missing completely at random), under what conditions will this technique behave reasonably for regression with missing values?
edit I am assuming that $x$ is drawn from a zero mean process and that the regression has no intercept term.
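For concreteness, here is a small sketch of the kind of simulation meant above, written in R rather than Matlab; `cov()` stands in for $f$, and the agreement is only approximate because of the mean-centring inside the covariance:
```
set.seed(42)
n <- 3; N <- 5000
X <- matrix(rnorm(N * n), N, n)
beta <- c(1, -2, 0.5)
y <- drop(X %*% beta) + rnorm(N)
C <- cov(cbind(X, y))                 # cov() plays the role of f(Z)
beta_cov <- solve(C[1:n, 1:n], C[1:n, n + 1])
beta_ols <- coef(lm(y ~ X - 1))       # no-intercept least squares
cbind(beta_cov, beta_ols)             # agree closely, though not exactly
```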
| Using Covariance Estimator to Perform Linear Regression? | CC BY-SA 2.5 | null | 2010-10-02T17:05:12.987 | 2010-10-03T04:21:50.877 | 2010-10-03T04:21:50.877 | 795 | 795 | [
"regression",
"algorithms",
"missing-data",
"covariance-matrix"
] |
3262 | 1 | null | null | 34 | 6169 | I am about to help teach statistics to medical students this semester.
I've heard many horror stories about these students' fear of learning statistics.
Can anyone suggest what to do with this fear? (Either links to people who are discussing this, or offer suggestions from your own experience)
| How to teach students who fear statistics? | CC BY-SA 2.5 | null | 2010-10-02T17:06:33.500 | 2017-01-25T18:33:36.700 | 2010-10-02T17:39:01.153 | null | 253 | [
"teaching"
] |
3263 | 2 | null | 328 | 0 | null | Kennedy's [Guide to Econometrics](http://rads.stackoverflow.com/amzn/click/1405182571) is a good survey of techniques in econometrics--not detailed enough to get your hands dirty, but very good for discovering what techniques are being used.
| null | CC BY-SA 2.5 | null | 2010-10-02T17:17:44.863 | 2010-10-02T17:17:44.863 | null | null | 795 | null |
3264 | 2 | null | 3262 | 13 | null | Not very much about how to deal with students' fear, but Andrew Gelman wrote an excellent book, [Teaching Statistics, a bag of tricks](http://www.stat.columbia.edu/~gelman/bag-of-tricks/) (there's also some [slides](http://www.stat.columbia.edu/~gelman/presentations/smithtalk.pdf)).
I like introducing a course by talking about randomness, elementary probability as found in games, causal association, permutation tests (because parametric tests provide good approximation to them :).
Here is an example that I like to show to students. This is from Phillip Good, in his book [Permutation, Parametric, and Bootstrap Tests of Hypotheses](http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-20279-2) (Springer, 2005, 3rd ed.), where he introduces the general strategy of testing or decision making about statistical hypotheses and how to carry out a very simple and exact permutation test to solve the following problem.
>
Shortly after I received my doctorate
in statistics, I decided that if I
really wanted to help bench scientists
apply statistics I ought to become a
scientist myself. So I went back to
school to learn physiology and aging
in cells raised in petri dishes.
I soon learned there was a great deal
more to an experiment than the random
assignment of subjects to treatments.
In general, 90% of experimental effort
was spent mastering various arcane
laboratory techniques, another 9% in
developing new techniques to span the
gap between what had been done and
what I really wanted to do, and a mere
1% on the experiment itself. But the
moment of truth came finally–—it had
to if I were to publish and not
perish–—and I succeeded in cloning
human diploid fibroblasts in eight
culture dishes: Four of these dishes
were filled with a conventional
nutrient solution and four held an
experimental "life-extending" solution
to which vitamin E had been added.
I waited three weeks with fingers
crossed that there was no
contamination of the cell cultures,
but at the end of this test period
three dishes of each type had
survived. My technician and I
transplanted the cells, let them grow
for 24 hours in contact with a
radioactive label, and then fixed and
stained them before covering them with
a photographic emulsion.
Ten days passed and we were ready to examine
the autoradiographs. Two years had
elapsed since I first envisioned this
experiment and now the results were
in: I had the six numbers I needed.
"I've lost the labels," my technician
said as she handed me the results.
This was a dire situation. Without the
labels, I had no way of knowing which
cell cultures had been treated with
vitamin E and which had not.
| null | CC BY-SA 2.5 | null | 2010-10-02T17:30:57.223 | 2010-10-02T18:33:09.533 | 2010-10-02T18:33:09.533 | 930 | 930 | null |
3265 | 2 | null | 3262 | 9 | null | This is a topic that would be of interest to members of the [Isolated Statisticians](http://www.lawrence.edu/fast/jordanj/isostat.html) group in the ASA. You are likely to get many useful responses from experienced teachers there, so I'll limit what I share here.
It's useful to understand where your students are coming from. A low-stress pre-test can help you identify their strengths, weaknesses, and fears. Sample tests for this purpose are provided in the instructor's manual to the Freedman, Pisani, Purves Statistics text. Get a copy of the manual through your institution. (I think the publisher will send it free.) (If you're really interested in this, I can post a version of these tests I have used for pre-assessment of undergraduates.) Another good source of test material related to intro statistics is the [Artist](https://app.gen.umn.edu/artist/index.html) Web site. As a working statistician, of course you will want to engage in some quantitative measurement of the learning that occurs in your class ;-). That site is a great resource for test questions.
There is a large and growing literature about teaching intro stats. A place to start is the online [Journal of Statistical Education](http://www.amstat.org/publications/jse/). At a minimum you will find articles there about using case studies and datasets relevant to medical students; you might uncover some that specifically address teaching this population.
When asked to teach such courses, I have always found it helpful to reach out to the other faculty and, when possible, the students themselves to find out what they really need to know and what might motivate them. Medical students are really busy and they didn't go to school to learn statistics, but they know they'll need to understand the papers they will be reading throughout their careers. If you're not familiar with medical literature, a few hours with the best journals, like the Lancet and JAMA, will help you appreciate what they're working towards.
| null | CC BY-SA 2.5 | null | 2010-10-02T18:10:23.917 | 2010-10-02T18:10:23.917 | null | null | 919 | null |
3266 | 2 | null | 3180 | 7 | null | I like the partial identification approach to missing data of Manski. The basic idea is to ask: given all possible values the missing data could have, what is the set of values that the estimated parameters could take? This set might be very large, in which case you could consider restricting the distribution of the missing data. Manski has a bunch of papers and a book on this topic. [This short paper](http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V07-4DW3JW4-2&_user=10&_coverDate=06%2F30%2F2005&_rdoc=5&_fmt=high&_orig=browse&_origin=browse&_zone=rslt_list_item&_srch=doc-info%28%23toc%235639%232005%23999609997%23593547%23FLP%23display%23Volume%29&_cdi=5639&_sort=d&_docanchor=&view=c&_ct=14&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=6a9ef3df98bd41c067b8cf99c81a5850&searchtype=a) is a good overview.
Inference in partially identified models can be complicated and is an active area of research. This [review](http://www.annualreviews.org/doi/abs/10.1146/annurev.economics.050708.143401?journalCode=economics) ([ungated pdf](http://www.google.com/url?sa=t&source=web&cd=5&sqi=2&ved=0CDEQFjAE&url=http%3A%2F%2Ffaculty.wcas.northwestern.edu%2F~ett094%2FPIE.pdf&rct=j&q=chernozhukov%20partial%20identification&ei=YH2nTJiWCMO88gbl2fSLDQ&usg=AFQjCNHw3d2pgEkRsO9ttBwEQfo7MHFdng&sig2=V9Adhrquz0p2ug47F8A-eQ&cad=rja)) is a good place to get started.
| null | CC BY-SA 2.5 | null | 2010-10-02T19:05:07.083 | 2010-10-02T19:05:07.083 | null | null | 1229 | null |
3267 | 2 | null | 3262 | 16 | null | Try to personalize statistics. To show why understanding its concepts (even though they will forget the math, acknowledge it) is useful to them. For instance, how to interpret breast cancer test results. To quote from [http://yudkowsky.net/rational/bayes](http://yudkowsky.net/rational/bayes):
>
Here's a story problem about a
situation that doctors often
encounter:
1% of women at age forty who
participate in routine screening have
breast cancer. 80% of women with
breast cancer will get positive
mammographies. 9.6% of women without
breast cancer will also get positive
mammographies. A woman in this age
group had a positive mammography in a
routine screening. What is the
probability that she actually has
breast cancer?
What do you think the answer is? If
you haven't encountered this kind of
problem before, please take a moment
to come up with your own answer before
continuing.
Next, suppose I told you that most
doctors get the same wrong answer on
this problem - usually, only around
15% of doctors get it right.
("Really? 15%? Is that a real
number, or an urban legend based on an
Internet poll?" It's a real number.
See Casscells, Schoenberger, and
Grayboys 1978; Eddy 1982; Gigerenzer
and Hoffrage 1995; and many other
studies. It's a surprising result
which is easy to replicate, so it's
been extensively replicated.)
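(For reference, Bayes' rule applied to the numbers in the quote gives
$$P(\text{cancer}\mid\text{positive}) = \frac{0.8\times 0.01}{0.8\times 0.01 + 0.096\times 0.99} = \frac{0.008}{0.10304}\approx 7.8\%,$$
which is the answer that only around 15% of the doctors in those studies arrived at.)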
Since your students will be medical doctors, make it clear: if they don't understand statistics, they will give the wrong interpretation of the results to their patients. This is not an academic matter.
Also acknowledge that unless they go into research, they will forget the details you teach them. Don't even hope otherwise. Aim for them to understand the fundamental concepts (type I and II errors, correlation and causation, and so on) so that, when faced with a situation, they will remember "hey, perhaps I shouldn't rush to draw a conclusion, but talk to someone who understands stats better." Preventing cognitive errors and teaching them to be inquisitive about results provided by others (especially in an industry where large sums of money are at stake) will be signs that you succeeded.
| null | CC BY-SA 3.0 | null | 2010-10-02T19:21:19.000 | 2017-01-25T16:31:43.393 | 2017-01-25T16:31:43.393 | 137032 | 840 | null |
3268 | 1 | null | null | 9 | 1421 | I have around ten groups (of companies). Each group is connected with each other group. The data I have is representing the strength of the connection. Imagine it's the number of times someone from group A sent an email to group B.
The strength of a connection can be 0. There are two connections between two groups, A-B and B-A.
a) What would be a good way to visualize this? I could imagine, for example, that each group is a circle. Lines connect the circles and the thickness of the line represents the strength of the connection. Being able to indicate the size of the groups would be a plus but not required.
b) Do you know of any software tools to visualize this? The tool shouldn't be too expensive or should be available as a trial version as for now it's just a proof of concept. It doesn't have to be web-based.
| Visualization of connections between groups | CC BY-SA 3.0 | null | 2010-10-02T19:59:17.413 | 2017-10-06T09:50:48.220 | 2017-10-06T09:50:48.220 | 11887 | 1489 | [
"data-visualization",
"networks"
] |
3269 | 2 | null | 3268 | 3 | null | [Gephi](http://gephi.org/) is pretty good for visualization directed or undirected graphs/networks. Another option might be [Walrus](http://www.caida.org/tools/visualization/walrus/).
| null | CC BY-SA 2.5 | null | 2010-10-02T20:17:39.443 | 2010-10-02T20:17:39.443 | null | null | 251 | null |
3270 | 1 | 3278 | null | 21 | 16224 | I've been searching the internet far and wide... I have yet to find a really good overview of how to interpret 2D correspondence analysis plots. Could someone offer some advice on interpreting the distances between points?
Perhaps an example would help, here is a plot that's found on many of the websites I've seen that discuss correspondence analysis. The red triangles represent eye colour and the black dots represent hair colour.

Looking at the graph above, could you make a few statements about what you see in these data? For example, points of interest about the different dimensions and the relationships between triangles and dots?
An explanation of row points versus column points, and of the use of the word "profile", with particular focus on the example, would be instrumental.
| Interpreting 2D correspondence analysis plots | CC BY-SA 3.0 | null | 2010-10-02T22:12:51.957 | 2016-06-28T22:25:22.003 | 2011-06-15T19:42:20.767 | 930 | 776 | [
"interpretation",
"correspondence-analysis",
"biplot"
] |
3271 | 1 | 3284 | null | 8 | 5669 | I have seen a few queries on clustering in time series and specifically on clustering, but I don't think they answer my question.
Background: I want to cluster genes in a time course experiment in yeast. There are four time points, say t1, t2, t3 and t4, and a total number of genes G. I have the data in the form of a matrix M in which the columns represent the treatments (or time points) t1, t2, t3 and t4 and the rows represent the genes. Therefore, M is a Gx4 matrix.
Problem: I want to cluster the genes which behave the same across all time points t1 t2 t3 and t4 as well as within a particular time point ti , where i is in {1, 2, 3, 4} (In case we cannot do both the clusterings together, the clustering within a time point is more important than clustering across time points). In addition to this, I also want to draw a heatmap.
My Solution:
I use the R code below to obtain a heatmap as well as the clusters using `hclust` function in R (performs hierarchical clustering with euclidean distance)
```
row.scaled.expr <- (expr.diff - rowMeans(expr.diff)) / rowSds(expr.diff)
breaks.expr <- c(quantile(row.scaled.expr[row.scaled.expr < 0],
seq(0,1,length=10)[-9]), 0,
quantile(row.scaled.expr[row.scaled.expr > 0],
seq(0,1,length=10))[-1] )
blue.red.expr <- maPalette(low = "blue", high = "red", mid = "white",
k=length(breaks.expr) - 1)
pdf("images/clust.pdf",
height=30,width=20,pointsize=20)
ht1 <- heatmap.2(row.scaled.expr, col = blue.red.expr, Colv = FALSE, key = FALSE,
dendrogram = "row", scale = "none", trace = "none",
cex=1.5, cexRow=1, cexCol=2,
density.info = "none", breaks = breaks.expr,
labCol = colnames(row.scaled.expr),
labRow="",
lmat=rbind( c(0, 3), c(2,1), c(0,4) ), lhei=c(0.25, 4, 0.25 ),
main=expression("Heat Map"),
ylab="Genes in the Microarray",
xlab="Treatments"
)
dev.off()
```
I recently discovered `hopach` package in Bioconductor which can be used to estimate the number of clusters. Previously, I was randomly assigning the number of bins for the heatmap and cutting the tree at an appropriate height to get a pre-specified number of clusters.
Possible Problems in my solution:
- I may be clustering the genes only across treatments and not within a particular treatment, or vice versa.
- There may be better ways of obtaining a heatmap for the pattern I want to see (similar genes within a treatment and across treatments).
- There may be better visualization methods which I am not aware of.
Note:
- csgillespie (moderator) has a more general document on his website in which he discusses all the aspects of time course analysis (including heatmaps and clustering). I would appreciate it if you could point me to articles that describe heatmaps and clustering in detail.
- I have tried the pvclust package, but it complains that M is singular and then it crashes.
| Clustering genes in a time course experiment | CC BY-SA 2.5 | null | 2010-10-02T22:16:46.023 | 2010-10-12T23:32:12.033 | 2010-10-04T09:04:46.327 | 8 | 1307 | [
"r",
"machine-learning",
"clustering",
"microarray"
] |
3272 | 2 | null | 3262 | 14 | null | I agree that making statistics personal/relevant is important, but that's not ultimately going to dispel the fear of the student. I think how the student feels about something often has more to do with the personality of the person teaching it, and how comfortable that person feels in the classroom, even when teaching uninterested or scared students. The first thing to do to do away with their fear is to do away with your own... you shouldn't be afraid of teaching students who may be afraid, because ultimately counseling them over their fears isn't your responsibility. You are not a therapist. And yet, by being natural, fun, casual, corny, and likable, the student will be able to let go of their fear as they begin to replace their feelings towards statistics with their feelings about you as a person and the environment of curiosity, fun, and learning you create.
That's my belief and experience.
So, here's what I recommend:
Reframe Statistics using mantras like "Information is Beautiful" and show them the blog of the same name. Mention things like how "measuring something allows you to manage it" and make wiser decisions. Yes, these are all ways of making it personal and relevant.
Introduce them to sections of Freakonomics. It's a great book, and uses regular language to describe why statistical analysis is important and sexy.
Tell corny jokes constantly. This endears you to them. Be a goof. Do whatever is necessary for them to feel that they are cooler than you. Do whatever is necessary for them to feel that they are smarter than you (even though they trust you secretly have everything under control). There was an article in the NYTimes some years ago about the power of a teacher who is uncool. It allows students to relax. Wear blue Converse All Stars, do something weird and idiosyncratic so they know they have a chance, and that they have nothing to fear.
Give them things to play with. Get some colored markers (I've done this in university) and have them draw their graphs and notes in color. This makes them feel like they're in elementary school even if they're calculating standard deviation. Major help in overcoming fear.
Get some measurement gear, measure heart rate and have them running around. Demonstrate concepts by collecting data from students live in the classroom. Make them forget it's a statistics class, make them feel like it's a study they are involved in, or administering.
Demystify the math. An intro statistics course has no actual mathematical operation more difficult than an arithmetic class; it's just a sequence of many operations in a row, and it's about learning to keep track of that. Tell them it's like a yoga practice in learning to be more organized.
Memorize everyone's name on the first and second day, absolutely. Calling them by their names, poking fun at them sometimes, letting them poke fun at you, are all ways to overcome fear.
They ultimately want to know that you aren't going to hit them over the head with something that they can't handle (that's what fear is). Give them ample warning, and exaggerate how difficult things are as they come up. Start off the class by saying "Boy, you guys are going to kill me, because today is going to be so hard your head might explode," and then when you teach them, say, variance and they find it easy to calculate, they'll gain greater confidence.
When something is actually difficult to calculate, give them a whole period to work it out, and maybe a second shot the next class, depending on your time constraints.
And again, it's ultimately about you. Do you know your statistics back and forth? Does it daunt you at all? Are you a fun teacher who makes students laugh and relax, or are you bumbling and not sure how you're steering the ship? Do you have the class time well managed, or are you constantly not sure how long something will take to teach? When you need to, can you be stern with them (they are medical students after all)?
| null | CC BY-SA 2.5 | null | 2010-10-02T22:51:17.683 | 2010-10-02T22:51:17.683 | null | null | 1490 | null |
3273 | 2 | null | 3262 | 3 | null | "Decision making in the face of uncertainty" sounds a lot more interesting than "statistics" even though that's essentially what statistics is about. Maybe you could lead with the decision-making aspect to build motivation for the course.
| null | CC BY-SA 2.5 | null | 2010-10-02T23:58:10.383 | 2010-10-02T23:58:10.383 | null | null | 319 | null |
3274 | 2 | null | 3261 | 3 | null | Your "trick" seems to be the solution to the [so-called] normal equations for multiple regression, which is the usual least-squares answer in multiple regression.
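(A quick, self-contained illustration of the normal equations, added here for concreteness with simulated data: solving $(X^tX)\beta=X^ty$ reproduces `lm()`'s coefficients.)

```
set.seed(2)
X <- cbind(1, matrix(rnorm(40), 20, 2))   # design matrix with an intercept column
y <- drop(X %*% c(1, 2, -1) + rnorm(20))
solve(crossprod(X), crossprod(X, y))      # (X'X)^{-1} X'y, the normal-equations solution
coef(lm(y ~ X - 1))                       # identical coefficients from lm()
```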
As for missing data: what $f$ do you have in mind that knows how to get $C$ in that case?
There are methods like imputation for filling in missing values. Perhaps [Little and Rubin](http://rads.stackoverflow.com/amzn/click/0471183865) can give further information on the issues involved.
| null | CC BY-SA 2.5 | null | 2010-10-03T04:14:41.910 | 2010-10-03T04:14:41.910 | null | null | 1112 | null |
3275 | 1 | 3357 | null | 3 | 12542 | Let's say I'm playing 10 "games". For each game, I know the probability of winning, the probability of tying, and the probability of losing.
From these values, I can calculate the probability of winning X games, the probability of losing X games, and the probability of tying X games (for X = 0 to 10).
I'm just trying to figure out the probability of each outcome after playing all 10 games. For example, what is the probability of winning 7, losing 2, and tying 1?
Any ideas? Thanks!
---
Here's some example data for 2 games:
Game 1:
- win: 23.3%
- tie: 1.1%
- lose: 75.6%
Game 2:
- win: 29.5%
- tie: 3.2%
- lose: 67.3%
Based on this, we can calculate the probability after playing 2 games of:
---
- 0 wins: 54.0%
- 1 win: 39.1%
- 2 wins: 6.9%
---
- 0 ties: 95.8%
- 1 tie: 4.2%
- 2 ties: 0.0%
---
- 0 losses: 8.0%
- 1 loss: 41.1%
- 2 losses: 50.9%
---
Based on these numbers, what is the probability of each outcome? The possible outcomes (W-L-T) would be:
- 2-0-0
- 1-1-0
- 1-0-1
- 0-1-1
- 0-2-0
- 0-0-2
Obviously this is pretty trivial with 2 games, but is there a generic formula for finding the probability of W wins, T ties, and L losses after X games?
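For what it's worth, here is a sketch (my own addition) of how the full joint distribution can be computed numerically: iterate over the games, tracking the probability of every (wins, ties) pair; the number of losses is then implied by the number of games played.

```
outcome_distribution <- function(p) {
  # p: matrix with one row per game and columns "win", "tie", "lose"
  n <- nrow(p)
  joint <- matrix(0, n + 1, n + 1)   # joint[w + 1, t + 1] = P(w wins, t ties so far)
  joint[1, 1] <- 1
  for (g in seq_len(n)) {
    nxt <- matrix(0, n + 1, n + 1)
    for (w in 0:(g - 1)) for (t in 0:(g - 1 - w)) {
      pr <- joint[w + 1, t + 1]
      if (pr == 0) next
      nxt[w + 2, t + 1] <- nxt[w + 2, t + 1] + pr * p[g, "win"]
      nxt[w + 1, t + 2] <- nxt[w + 1, t + 2] + pr * p[g, "tie"]
      nxt[w + 1, t + 1] <- nxt[w + 1, t + 1] + pr * p[g, "lose"]
    }
    joint <- nxt
  }
  joint
}

# the two example games above
p <- rbind(c(0.233, 0.011, 0.756),
           c(0.295, 0.032, 0.673))
colnames(p) <- c("win", "tie", "lose")
round(outcome_distribution(p), 4)   # entry [w + 1, t + 1] = P(w wins, t ties)
```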
| Probability of Game Outcomes (with ties!) | CC BY-SA 2.5 | null | 2010-10-03T04:42:40.847 | 2022-07-18T19:53:27.317 | 2019-06-29T07:58:41.570 | 3277 | null | [
"probability",
"multinomial-distribution",
"games",
"ties"
] |
3276 | 1 | 3279 | null | 7 | 549 | Suppose I have some i.i.d. data $x_1, \ldots, x_n \sim N(\mu, \sigma^2)$, where $\sigma^2$ is fixed and $\mu$ is unknown, and I want to estimate $\mu$.
Instead of simply giving the MLE of $\mu = \bar{x}$, one could estimate
(1) $\mu = \lambda \mu_0 + (1 - \lambda) \bar{x},$
for some "prior best guess" $\mu_0$. This also has a nice Bayesian interpretation: we place a prior $\mu \sim N(\mu_0, \sigma^2_0)$ on $\mu$, and $\lambda$ is the weighted precision.
I seem to recall that this also has an explicit L2 regularization-interpretation (i.e., we choose some penalty and minimize the squared loss to get the above estimate), similar to things like lasso and ridge regression, but I can't remember how it goes. Can anyone explain what the L2 regularization-interpretation of (1) is?
[More general answers, where the data isn't necessarily normally distributed, are welcome as well.]
| Regularization and Mean Estimation | CC BY-SA 2.5 | null | 2010-10-03T05:01:18.027 | 2010-10-03T16:53:49.277 | null | null | 1106 | [
"bayesian",
"estimation",
"mean"
] |
3277 | 2 | null | 3268 | 3 | null | A quick couple of thoughts:
- I've used multidimensional scaling to visualise connections between team members (i.e., a weighted network). Nodes with stronger connections then appear closer in the figure. Here are some resources for implementing this in R; see the sketch after this list.
- You could present a standard graph where line thickness is based on strength of connection.
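A minimal sketch of the first suggestion (toy data of my own, not from the original answer): classical MDS on a symmetric matrix of connection strengths, converted into distances so that strongly connected nodes plot close together.

```
set.seed(1)
w <- matrix(runif(25), 5, 5)
w <- (w + t(w)) / 2; diag(w) <- 1       # toy symmetric strengths in (0, 1]
d <- as.dist(1 - w)                     # stronger connection -> smaller distance
xy <- cmdscale(d, k = 2)                # classical multidimensional scaling
plot(xy, type = "n", xlab = "", ylab = "")
text(xy, labels = paste0("member", 1:5))
```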
| null | CC BY-SA 2.5 | null | 2010-10-03T06:35:56.690 | 2010-10-03T06:35:56.690 | null | null | 183 | null |
3278 | 2 | null | 3270 | 25 | null | First, there are different ways to construct so-called [biplots](http://en.wikipedia.org/wiki/Biplot) in the case of correspondence analysis. In all cases, the basic idea is to find a way to show the best 2D approximation of the "distances" between row cells and column cells. In other words, we seek a hierarchy (we also speak of "ordination") of the relationships between rows and columns of a contingency table.
Very briefly, CA decomposes the chi-square statistic associated with the two-way table into orthogonal factors that maximize the separation between row and column scores (i.e. the frequencies computed from the table of profiles). Here, you see that there is some connection with PCA but the measure of variance (or the metric) retained in CA is the $\chi^2$, which only depends on column profiles (As it tends to give more importance to modalities that have large marginal values, we can also re-weight the initial data, but this is another story).
Here is a more detailed answer.
The implementation that is proposed in the `corresp()` function (in `MASS`) follows from a view of CA as an SVD decomposition of dummy-coded matrices representing the rows and columns (such that $R^tC=N$, with $N$ the total sample). This is in line with canonical correlation analysis.
In contrast, the French school of data analysis considers CA as a variant of the PCA, where you seek the directions that maximize the "inertia" in the data cloud. This is done by diagonalizing the inertia matrix computed from the centered and scaled (by marginals frequencies) two-way table, and expressing row and column profiles in this new coordinate system.
If you consider a table with $i=1,\dots,I$ rows, and $j=1,\dots,J$ columns, each row is weighted by its corresponding marginal sum which yields a series of conditional frequencies associated to each row: $f_{j|i}=n_{ij}/n_{i\cdot}$. The marginal column is called the mean profile (for rows). This gives us a vector of coordinates, also called a profile (by row). For the column, we have $f_{i|j}=n_{ij}/n_{\cdot j}$. In both cases, we will consider the $I$ row profiles (associated to their weight $f_{i\cdot}$) as individuals in the column space, and the $J$ column profiles (associated to their weight $f_{\cdot j}$) as individuals in the row space. The metric used to compute the proximity between any two individuals is the $\chi^2$ distance. For instance, between two rows $i$ and $i'$, we have
$$
d^2_{\chi^2}(i,i')=\sum_{j=1}^J\frac{n}{n_{\cdot j}}\left(\frac{n_{ij}}{n_{i\cdot}}-\frac{n_{i'j}}{n_{i'\cdot}} \right)^2
$$
You may also see the link with the $\chi^2$ statistic by noting that it is simply the distance between observed and expected counts, where expected counts (under $H_0$, independence of the two variables) are computed as $n_{i\cdot}\times n_{\cdot j}/n$ for each cell $(i,j)$. If the two variables were to be independent, the row profiles would be all equal, and identical to the corresponding marginal profile. In other words, when there is independence, your contingency table is entirely determined by its margins.
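As a small illustration (my own addition), this distance can be computed directly from the row profiles and the column masses of a two-way table `tab`:

```
chisq.dist2 <- function(tab, i, ip) {
  prof <- prop.table(tab, 1)               # row profiles n_ij / n_i.
  colw <- colSums(tab) / sum(tab)          # column masses n_.j / n
  sum((prof[i, ] - prof[ip, ])^2 / colw)   # squared chi-square distance d^2(i, i')
}
```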
If you carry out a PCA on the row profiles (viewed as individuals), replacing the Euclidean distance by the $\chi^2$ distance, then you get your CA. The first principal axis is the line that is the closest to all points, and the corresponding eigenvalue is the inertia explained by this dimension. You can do the same with the column profiles. It can be shown that there is a symmetry between the two approaches, and more specifically that the principal components (PC) for the column profiles are associated with the same eigenvalues as the PCs for the row profiles. What is shown on a biplot is the coordinates of the individuals in this new coordinate system, although the individuals are represented in separate factorial spaces. Provided each individual/modality is well represented in its factorial space (you can look at the $\cos^2$ of the modality with the 1st principal axis, which is a measure of the correlation/association), you can even interpret the proximity between elements $i$ and $j$ of your contingency table (as can be done by looking at the residuals of your $\chi^2$ test of independence, e.g. `chisq.test(tab)$expected-chisq.test(tab)$observed`).
The total inertia of your CA (= the sum of eigenvalues) is the $\chi^2$ statistic divided by $n$ (which is Pearson's $\phi^2$).
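A quick numerical check of this relation on the hair/eye colour data used below (my own addition; it assumes the `sv` component of a `ca` object holds the singular values, whose squares are the eigenvalues):

```
data(HairEyeColor)
tab <- apply(HairEyeColor, c(1, 2), sum)
library(ca)
sum(ca(tab)$sv^2)                        # total inertia = sum of eigenvalues
chisq.test(tab)$statistic / sum(tab)     # Pearson chi-square / n -- same value
```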
Actually, there are several packages that may provide you with enhanced CAs compared to the function available in the `MASS` package: [ade4](http://cran.r-project.org/web/packages/ade4/index.html), [FactoMineR](http://cran.r-project.org/web/packages/FactoMineR/index.html), [anacor](http://cran.r-project.org/web/packages/anacor/index.html), and [ca](http://cran.r-project.org/web/packages/ca/index.html).
The last of these, the ca package, is the one that was used for your particular illustration, and a paper published in the Journal of Statistical Software explains most of its functionalities: [Correspondence Analysis in R, with Two- and Three-dimensional Graphics: The ca Package](http://www.jstatsoft.org/v20/i03).
So, your example on eye/hair colors can be reproduced in many ways:
```
data(HairEyeColor)
tab <- apply(HairEyeColor, c(1, 2), sum) # aggregate on gender
tab
library(MASS)
plot(corresp(tab, nf=2))
corresp(tab, nf=2)
library(ca)
plot(ca(tab))
summary(ca(tab, nd=2))
library(FactoMineR)
CA(tab)
CA(tab, graph=FALSE)$eig # == summary(ca(tab))$scree[,"values"]
CA(tab, graph=FALSE)$row$contrib
library(ade4)
scatter(dudi.coa(tab, scannf=FALSE, nf=2))
```
In all cases, what we read in the resulting biplot is basically (I limit my interpretation to the 1st axis, which explains most of the inertia):
- the first axis highlights the clear opposition between light and dark hair color, and between blue and brown eyes;
- people with blond hair tend to also have blue eyes, and people with black hair tend to have brown eyes.
There are a lot of additional resources on data analysis on the [bioinformatics lab](http://pbil.univ-lyon1.fr/R/enseignement.html) site from Lyon, in France. This is mostly in French, but I think it would not be too much of a problem for you. The following two handouts should be interesting as a first start:
- Initiation à l'analyse factorielle des correspondances
- Pratique de l'analyse des correspondances
Finally, when you consider a full disjunctive (dummy) coding of $k$ variables, you get multiple correspondence analysis.
| null | CC BY-SA 2.5 | null | 2010-10-03T10:14:06.133 | 2010-10-04T20:40:05.720 | 2010-10-04T20:40:05.720 | 930 | 930 | null |
3279 | 2 | null | 3276 | 3 | null | Sure, it would be equivalent to the following ridge-like optimization problem:
$$\underset{\mu\in\mathbb{R}}{\min}\ \sum_{i=1}^n\left(x_i-\mu_0-\mu\right)^2+\lambda\mu^2, \qquad \lambda\geq0,\ \mu_0 \text{ fixed}$$

For $\lambda=0$, $\mu_0+\mu$ goes to the OLS solution (i.e. $\bar{x}$); as $\lambda\to\infty$, it shrinks to $\mu_0$.
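A quick numerical sanity check of this equivalence (my own sketch, writing the penalty directly on the deviation from $\mu_0$; the penalty weight implied by a shrinkage weight $\lambda$ in the question's (1) is $n\lambda/(1-\lambda)$):

```
set.seed(1)
x <- rnorm(20, mean = 3)
mu0 <- 0
lambda <- 0.3                                    # shrinkage weight from (1)
k <- length(x) * lambda / (1 - lambda)           # implied penalty weight
loss <- function(mu) sum((x - mu)^2) + k * (mu - mu0)^2
optimize(loss, c(-10, 10))$minimum               # numerical minimiser
lambda * mu0 + (1 - lambda) * mean(x)            # closed-form shrinkage estimate
```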
| null | CC BY-SA 2.5 | null | 2010-10-03T11:09:10.000 | 2010-10-03T12:15:18.037 | 2010-10-03T12:15:18.037 | 603 | 603 | null |
3280 | 2 | null | 3276 | 3 | null | Ridge regression (Hoerl and Kennard, 1988) was initially developed to overcome singularities when inverting $X^tX$ (by adding $\lambda$ to its diagonal elements). Thus, the regularization in this case consists in working with the matrix $(X^tX+\lambda I)^{-1}$. This L2 penalization leads to "better" predictions than with usual OLS by optimizing the compromise between bias and variance (shrinkage), but it has the drawback of keeping all coefficients in the model (no variable selection). The regression coefficients are found to be
$$
\hat\beta=\underset{\beta}{\operatorname{argmin}}\|Y-X\beta\|^2 + \lambda\|\beta\|^2
$$
with $\vert\vert\beta\vert\vert^2 = \sum_{j=1}^p\beta_j^2$ (L2-norm).
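To make the closed-form solution concrete, here is a minimal sketch (mine; plain matrix algebra, no intercept and no standardisation of $X$):

```
set.seed(42)
n <- 50; p <- 3
X <- matrix(rnorm(n * p), n, p)
y <- drop(X %*% c(1, -2, 0.5) + rnorm(n))
lambda <- 2
solve(t(X) %*% X + lambda * diag(p), t(X) %*% y)   # Ridge estimate
solve(t(X) %*% X, t(X) %*% y)                      # OLS (lambda = 0), cf. coef(lm(y ~ X - 1))
```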
From a bayesian perspective, you can consider that the $\beta$'s should be small and encode this belief in a prior distribution. The likelihood $\ell (y,X,\beta,\sigma^2)$ is then combined with a prior on $\beta$ (components assumed i.i.d. with zero mean and variance $\tau^2$), and the resulting log-posterior satisfies
$$
-2\sigma^2\log f(\beta\mid y,X,\sigma^2,\tau^2)=(y-X\beta)^t(y-X\beta)+\frac{\sigma^2}{\tau^2}\beta^t\beta+\text{const.},
$$
where $\sigma^2$ is the error variance of your $y$'s. Up to the constant, this is exactly the penalized residual sum of squares that is minimized in the Ridge framework, after setting $\lambda=\sigma^2/\tau^2$.
The Bayesian (posterior mode) estimator for $\beta$ thus coincides with the Ridge estimator obtained with penalty $\lambda=\sigma^2/\tau^2$, i.e. with prior variance $\tau^2$. More details can be found in The Elements of Statistical Learning from Hastie, Tibshirani, and Friedman (§3.4.3, p.60 in the 1st ed.). The [second edition](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) is also available for free.
| null | CC BY-SA 2.5 | null | 2010-10-03T11:12:14.313 | 2010-10-03T16:53:49.277 | 2010-10-03T16:53:49.277 | 930 | 930 | null |