Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8347
|
1
| null | null |
21
|
3619
|
At conferences I have attended, there has been a bit of a push by advocates of Bayesian statistics for assessing the results of experiments. It is vaunted as more sensitive, more appropriate, and more selective towards genuine findings (fewer false positives) than frequentist statistics.
I have explored the topic somewhat, and I am left unconvinced so far of the benefits to using Bayesian statistics. Bayesian analyses were used to refute [Daryl Bem](http://dbem.ws/)'s research supporting precognition, however, so I remain cautiously curious about how Bayesian analyses might benefit even my own research.
So I am curious about the following:
- Power in a Bayesian analysis vs. a frequentist analysis
- Susceptibility to Type 1 error in each type of analysis
- The trade-off in complexity of the analysis (Bayesian seems more complicated) vs. the benefits gained. Traditional statistical analyses are straightforward, with well-established guidelines for drawing conclusions. The simplicity could be viewed as a benefit. Is that worth giving up?
Thanks for any insight!
|
Is Bayesian statistics genuinely an improvement over traditional (frequentist) statistics for behavioral research?
|
CC BY-SA 2.5
| null |
2011-03-11T19:47:33.830
|
2011-03-18T11:36:47.960
|
2011-03-16T07:50:42.073
|
449
| null |
[
"bayesian",
"statistical-power",
"frequentist"
] |
8348
|
2
| null |
8347
|
2
| null |
I am not familiar with Bayesian statistics myself, but I do know that the Skeptics' Guide to the Universe, Episode 294, has an interview with Eric-Jan Wagenmakers where they discuss Bayesian statistics. Here is a link to the podcast:
[http://www.theskepticsguide.org/archive/podcastinfo.aspx?mid=1&pid=294](http://www.theskepticsguide.org/archive/podcastinfo.aspx?mid=1&pid=294)
| null |
CC BY-SA 2.5
| null |
2011-03-13T04:16:06.883
|
2011-03-13T04:16:06.883
| null | null | null | null |
8349
|
1
|
8488
| null |
2
|
185
|
I'm a complete noob when it comes to statistics, so please excuse my lack of standard terminology. I think my question might be related to normalization, but please re-tag if I'm wrong.
I'm a programmer by trade, and I've been tasked with creating a chart which shows how the number of errors in a data set change from month to month. Unfortunately we don't have a statistician in-house to help me figure it out...
The data is as follows: Each month, a number of items are entered into a database. These items are checked for errors, and the errors are also entered into the database. Each item can have multiple errors of different types, and the number of items added each month can vary a lot.
How can I show how the error frequency changes from month to month?
It doesn't "feel" right to show a percentage of errors/number of items since there can be multiple errors for each item. The chart also needs to be easy to understand even by people with no background in statistics, so something as close as possible to the real values or percentages would be good.
An example of the data with total number of errors per month:
Month Errors Items
=====================
March 208 2027
April 276 1304
May 609 1721
June 167 1561
July 268 513
The errors for March could be broken down like this:
Type Errors
==================
Type 1 2
Type 2 43
Type 3 93
Type 4 0
Type 5 1
Type 6 0
Type 7 4
Type 8 0
Type 9 22
Type 10 0
Type 11 43
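For concreteness, a minimal base-R sketch (illustrative only, not part of the original question) of the naive errors-per-item summary discussed above, using the monthly totals:
```
months <- c("March", "April", "May", "June", "July")
errors <- c(208, 276, 609, 167, 268)
items  <- c(2027, 1304, 1721, 1561, 513)
rate   <- errors / items                     # average number of errors per item
barplot(rate, names.arg = months, ylab = "errors per item")
```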
|
Charting errors based on number of items per month
|
CC BY-SA 2.5
| null |
2011-03-16T09:06:15.457
|
2011-03-19T10:40:15.893
|
2011-03-16T11:55:43.973
|
3735
|
3735
|
[
"data-visualization",
"descriptive-statistics"
] |
8350
|
2
| null |
8347
|
5
| null |
Bayesian statistics can be derived from a few logical principles. Try searching for "probability as extended logic" and you will find more in-depth analyses of the fundamentals. Basically, Bayesian statistics rests on three basic "desiderata" or normative principles:
- The plausibility of a proposition is to be represented by a single real number
- The plausibility of a proposition is to have qualitative correspondence with "common sense". Given an initial plausibility $p(A|C^{(0)})$, if the state of information changes from $C^{(0)}\rightarrow C^{(1)}$ such that $p(A|C^{(1)})>p(A|C^{(0)})$ (A becomes more plausible) and also $p(B|AC^{(0)})=p(B|AC^{(1)})$ (given A, B remains just as plausible), then we must have $p(AB|C^{(0)})\leq p(AB|C^{(1)})$ (A and B together must become at least as plausible) and $p(\overline{A}|C^{(1)})<p(\overline{A}|C^{(0)})$ (not-A must become less plausible).
- The plausibility of a proposition is to be calculated consistently. This means a) if a plausibility can be reasoned in more than one way, all answers must be equal; b) in two problems where we are presented with the same information, we must assign the same plausibilities; and c) we must take account of all the information that is available. We must not add information that isn't there, and we must not ignore information which we do have.
These three desiderata (along with the rules of logic and set theory) uniquely determine the sum and product rules of probability theory. Thus, if you would like to reason according to the above three desiderata, then you must adopt a Bayesian approach. You do not have to adopt the "Bayesian Philosophy", but you must adopt the numerical results.
[The first three chapters of this book](http://bayes.wustl.edu/etj/prob/book.pdf) describe these in more detail, and provide the proof.
And last but not least, the "Bayesian machinery" is the most powerful data processing tool you have. This is mainly because of desideratum 3c), using all the information you have (this also explains why Bayes can be more complicated than non-Bayes). It can be quite difficult to decide "what is relevant" using your intuition. Bayes' theorem does this for you (and it does it without adding in arbitrary assumptions, also due to 3c).
EDIT: to address the question more directly (as suggested in the comment), suppose you have two hypotheses $H_0$ and $H_1$. You have a "false negative" loss $L_1$ (reject $H_0$ when it is true: type 1 error) and a "false positive" loss $L_2$ (accept $H_0$ when it is false: type 2 error). Probability theory says you should:
- Calculate $P(H_0|E_1,E_2,\dots)$, where $E_i$ is all the pieces of evidence related to the test: data, prior information, whatever you want the calculation to incorporate into the analysis
- Calculate $P(H_1|E_1,E_2,\dots)$
- Calculate the odds $O=\frac{P(H_0|E_1,E_2,\dots)}{P(H_1|E_1,E_2,\dots)}$
- Accept $H_0$ if $O > \frac{L_2}{L_1}$
You don't really need to introduce the losses, though. If you just look at the odds, you will get one of three results: i) definitely $H_0$, $O\gg1$; ii) definitely $H_1$, $O\ll1$; or iii) "inconclusive", $O\approx 1$.
Now if the calculation becomes "too hard", then you must either approximate the numbers or ignore some information.
For an actual example with worked-out numbers, [see my answer to this question](https://stats.stackexchange.com/questions/8419/bayesian-classifier-with-multivariate-normal-densities/8434#8434).
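As a toy numerical illustration of this decision rule (the numbers below are made up, not taken from the answer):
```
post_H0 <- 0.8                      # P(H0 | E1, E2, ...), assumed for illustration
post_H1 <- 1 - post_H0              # P(H1 | E1, E2, ...)
O  <- post_H0 / post_H1             # posterior odds, here 4
L1 <- 1                             # loss for rejecting H0 when it is true
L2 <- 3                             # loss for accepting H0 when it is false
if (O > L2 / L1) "accept H0" else "reject H0"
```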
| null |
CC BY-SA 2.5
| null |
2011-03-16T09:23:16.897
|
2011-03-18T11:36:47.960
|
2017-04-13T12:44:44.530
|
-1
|
2392
| null |
8351
|
1
|
8359
| null |
8
|
7389
|
I have a quarterly time series and test for stationarity with an augmented Dickey-Fuller test using R.
```
adf.test(myseries)
# returns
# Dickey-Fuller = -3.9828, Lag order = 4, p-value = 0.01272
# alternative hypothesis: stationary
```
so the H0 is rejected. I tried to validate this intuitively and regressed the same series on a linear trend.
```
x<- 104:1
fit.1<-lm(myseries~x)
summary(fit.1)
#returns
# x 0.024 1.31e-05 ***
```
Even though a simple linear model is not so appropriate here and the intercept is large (around 80), there seems to be a slight downwards trend over time, which is in line with my impression after looking at the initial data. So did I get the adf.test wrong, or is the trend just too small to be detected?
Besides I used
```
plot(stl(myseries,"per"))
```
and ended up with a graph whose sidebars suggested that trend and remainder were the main components driving the data, while the seasonal influence was negligible. I saw that `stl()` uses local polynomial regression filtering and got a rough idea of how that works (still, I wonder why the smoothed trends of Hadley's ggplot2 package looked so different even though it uses the same method by default).
So summing up I got:
- adf finding no evidence for a trend
- a slight downwards trend "detected" by eyeballing and the naive approach
- loess decomposition stating that the trend has strong influence (by the relation of its bars in the plot)
So what can I learn from this? Probably I have a terminology problem here, because the former two seem to address time trends while the latter addresses some other kind of trend I cannot fully grasp yet. Maybe my question is just: Can you help me understand the trend extracted by loess? And how is it related to smoothed/filtered output like the HP filter or Kalman smoothing (if there is a relationship, and the similarity does not only occur in my case)?
|
Trend or no trend?
|
CC BY-SA 2.5
| null |
2011-03-16T09:29:27.813
|
2011-03-16T22:55:24.480
|
2011-03-16T15:06:42.183
|
704
|
704
|
[
"r",
"time-series",
"loess"
] |
8352
|
1
| null | null |
2
|
634
|
I would like to analyse the following type of experiment using a two-level fully nested ANOVA. We have mice of two genotypes (Group factor), and for each of the genotypes we take a certain number of mice (sub-groups) and measure several identical samples for each mouse. Thus it looks like a classic two-level nested design with Genotype and Mice as fixed factors. I am interested in the main group effect of the genotype on the measurement outcome. I used Statistica, where the F-value is calculated as the ratio $\text{MS}_\text{group}/\text{MS}_\text{residual}$. However, I found several references on the web suggesting that it is better to calculate the F-value as the ratio $\text{MS}_\text{group}/\text{MS}_\text{sub-group}$.
For example, on [John McDonald's web page](http://udel.edu/~mcdonald/statnested.html):
>
In a two-level nested anova, there are two F statistics, one for subgroups (Fsubgroup) and one for groups (Fgroup). The subgroup F-statistic is found by dividing the among-subgroup mean square, MSsubgroup (the average variance of subgroup means within each group) by the within-subgroup mean square, MSwithin (the average variation among individual measurements within each subgroup). The group F-statistic is found by dividing the among-group mean square, MSgroup (the variation among group means) by MSsubgroup. The P-value is then calculated for the F-statistic at each level...
On the same web page, he comments that in Rweb:
>
Fgroup is calculated as MSgroup/MSwithin, which is not a good idea if Fsubgroup is significant.
In my analysis, if there was a significant Group (Genotype) effect calculated as $\text{MS}_\text{group}/\text{MS}_\text{residual}$, I didn't observe a significant effect of sub-groups (Mice). Does it mean that I can use $\text{MS}_\text{group}/\text{MS}_\text{residual}$ ratio for estimating the F-value?
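For what it's worth, here is a minimal R sketch (simulated data, not from the question) contrasting the two F-ratios: the `Error(mouse)` stratum gives the $\text{MS}_\text{group}/\text{MS}_\text{sub-group}$ test, while the naive fit uses $\text{MS}_\text{group}/\text{MS}_\text{residual}$.
```
set.seed(1)
d <- data.frame(
  genotype = rep(c("WT", "KO"), each = 12),
  mouse    = factor(rep(1:8, each = 3)),   # 4 mice per genotype, 3 replicate samples per mouse
  outcome  = rnorm(24)
)
# Genotype tested against between-mouse variation (MS_group / MS_sub-group):
summary(aov(outcome ~ genotype + Error(mouse), data = d))
# Naive fit: genotype tested against the residual (MS_group / MS_residual):
summary(aov(outcome ~ genotype, data = d))
```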
|
What is the proper ratio of mean squares for a two-level nested ANOVA?
|
CC BY-SA 3.0
| null |
2011-03-16T10:00:26.227
|
2011-06-22T08:48:31.287
|
2011-06-21T20:45:01.260
|
930
|
1727
|
[
"anova",
"experiment-design",
"genetics"
] |
8353
|
2
| null |
8335
|
3
| null |
Using the suggested corrected data we have:
The model to be tested is : Y(T)=B0 + B1*X(T) + A(T)
The null hypothesis is that the coefficients B0 and B1 are the same over the two states
step 1 : Estimate this for STATE1 obtaining an error sum of squares SOS1 =.789
step 2 : Estimate this for STATE2 obtaining an error sum of squares SOS2 = 548.272
step 3 : Estimate this for all of the data (12 pairs) obtaining an error sum of squares SOS3 = 23920.4
step 4 : Compute NUM = [SOS3-(SOS1+SOS2)]/2 = 11685
step 5 : Compute MSE for the composite analysis = 23920.4/10 = 2392
step 6 : F value = NUM / MSE = 11685/2392 = 4.9
step 7 : An F of 4.9 with 2 and 10 degrees of freedom has a p-value of about .03, thus the hypothesis of equality is rejected for alpha ≥ .03 and accepted otherwise
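A minimal R sketch of the computation laid out in these steps (the data frames and column names are hypothetical; the degrees of freedom follow the numbers above):
```
chow_F <- function(state1, state2) {
  sos1 <- sum(resid(lm(y ~ x, data = state1))^2)            # step 1
  sos2 <- sum(resid(lm(y ~ x, data = state2))^2)            # step 2
  pooled <- rbind(state1, state2)
  sos3 <- sum(resid(lm(y ~ x, data = pooled))^2)            # step 3
  num <- (sos3 - (sos1 + sos2)) / 2                         # step 4
  mse <- sos3 / (nrow(pooled) - 2)                          # step 5, as computed above
  num / mse                                                 # step 6
}
# Example with simulated data (6 pairs per state, 12 in total):
set.seed(1)
s1 <- data.frame(x = 1:6); s1$y <- 2 + 0.5 * s1$x + rnorm(6, sd = 0.3)
s2 <- data.frame(x = 1:6); s2$y <- 5 - 0.8 * s2$x + rnorm(6, sd = 0.3)
F_val <- chow_F(s1, s2)
pf(F_val, df1 = 2, df2 = 10, lower.tail = FALSE)            # p-value, as in step 7
```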
| null |
CC BY-SA 2.5
| null |
2011-03-16T10:25:22.210
|
2011-03-17T11:37:41.620
|
2011-03-17T11:37:41.620
|
2116
|
3382
| null |
8354
|
2
| null |
8344
|
22
| null |
Influence functions are basically an analytical tool that can be used to assess the effect (or "influence") of removing an observation on the value of a statistic without having to re-calculate that statistic. They can also be used to create asymptotic variance estimates. If influence equals $I$ then asymptotic variance is $\frac{I^2}{n}$.
The way I understand influence functions is as follows. You have some sort of theoretical CDF, denoted by $F_{i}(y)=Pr(Y_{i}<y_{i})$. For simple OLS, you have
$$Pr(Y_{i}<y_{i})=Pr(\alpha+\beta x_{i} + \epsilon_{i} < y_{i})=\Phi\left(\frac{y_{i}-(\alpha+\beta x_{i})}{\sigma}\right)$$
where $\Phi(z)$ is the standard normal CDF and $\sigma^2$ is the error variance. Now you can show that any statistic will be a function of this CDF, hence the notation $S(F)$ (i.e. some function of $F$). Now suppose we change the function $F$ by a "little bit", to $F_{(i)}(z)=(1+\zeta)F(z)-\zeta \delta_{(i)}(z)$, where $\delta_{i}(z)=I(y_{i}<z)$ and $\zeta=\frac{1}{n-1}$. Thus $F_{(i)}$ represents the CDF of the data with the "ith" data point removed. We can do a Taylor series of $F_{(i)}(z)$ about $\zeta=0$. This gives:
$$S[F_{(i)}(z,\zeta)] \approx S[F_{(i)}(z,0)]+\zeta\left[\frac{\partial S[F_{(i)}(z,\zeta)]}{\partial \zeta}|_{\zeta=0}\right]$$
Note that $F_{(i)}(z,0)=F(z)$ so we get:
$$S[F_{(i)}(z,\zeta)] \approx S[F(z)]+\zeta\left[\frac{\partial S[F_{(i)}(z,\zeta)]}{\partial \zeta}|_{\zeta=0}\right]$$
The partial derivative here is called the influence function. So this represents an approximate "first order" correction to be made to a statistic due to deleting the "ith" observation. Note that in regression the remainder does not go to zero asymptotically, so this is an approximation to the changes you may actually get. Now write $\beta$ as:
$$\beta=\frac{\frac{1}{n}\sum_{j=1}^{n}(y_{j}-\overline{y})(x_{j}-\overline{x})}{\frac{1}{n}\sum_{j=1}^{n}(x_{j}-\overline{x})^2}$$
Thus beta is a function of two statistics: the variance of X and covariance between X and Y. These two statistics have representations in terms of the CDF as:
$$cov(X,Y)=\int(X-\mu_x(F))(Y-\mu_y(F))dF$$
and
$$var(X)=\int(X-\mu_x(F))^{2}dF$$
where
$$\mu_x=\int xdF$$
To remove the ith observation we replace $F\rightarrow F_{(i)}=(1+\zeta)F-\zeta \delta_{(i)}$ in both integrals to give:
$$\mu_{x(i)}=\int xd[(1+\zeta)F-\zeta \delta_{(i)}]=\mu_x-\zeta(x_{i}-\mu_x)$$
$$Var(X)_{(i)}=\int(X-\mu_{x(i)})^{2}dF_{(i)}=\int(X-\mu_x+\zeta(x_{i}-\mu_x))^{2}d[(1+\zeta)F-\zeta \delta_{(i)}]$$
ignoring terms of $\zeta^{2}$ and simplifying we get:
$$Var(X)_{(i)}\approx Var(X)-\zeta\left[(x_{i}-\mu_x)^2-Var(X)\right]$$
Similarly for the covariance
$$Cov(X,Y)_{(i)}\approx Cov(X,Y)-\zeta\left[(x_{i}-\mu_x)(y_{i}-\mu_y)-Cov(X,Y)\right]$$
So we can now express $\beta_{(i)}$ as a function of $\zeta$. This is:
$$\beta_{(i)}(\zeta)\approx \frac{Cov(X,Y)-\zeta\left[(x_{i}-\mu_x)(y_{i}-\mu_y)-Cov(X,Y)\right]}{Var(X)-\zeta\left[(x_{i}-\mu_x)^2-Var(X)\right]}$$
We can now use the Taylor series:
$$\beta_{(i)}(\zeta)\approx \beta_{(i)}(0)+\zeta\left[\frac{\partial \beta_{(i)}(\zeta)}{\partial \zeta}\right]_{\zeta=0}$$
Simplifying this gives:
$$\beta_{(i)}(\zeta)\approx \beta-\zeta\left[\frac{(x_{i}-\mu_x)(y_{i}-\mu_y)}{Var(X)}-\beta\frac{(x_{i}-\mu_x)^2}{Var(X)}\right]$$
And plugging in the values of the statistics $\mu_y$, $\mu_x$, $var(X)$, and $\zeta=\frac{1}{n-1}$ we get:
$$\beta_{(i)}\approx \beta-\frac{x_{i}-\overline{x}}{n-1}\left[\frac{y_{i}-\overline{y}}{\frac{1}{n}\sum_{j=1}^{n}(x_{j}-\overline{x})^2}-\beta\frac{x_{i}-\overline{x}}{\frac{1}{n}\sum_{j=1}^{n}(x_{j}-\overline{x})^2}\right]
$$
And you can see how the effect of removing a single observation can be approximated without having to re-fit the model. You can also see how an x equal to the average has no influence on the slope of the line. Think about this and you will see how it makes sense. You can also write this more succinctly in terms of the standardised values $\tilde{x}=\frac{x-\overline{x}}{s_{x}}$ (similarly for y):
$$\beta_{(i)}\approx \beta-\frac{\tilde{x_{i}}}{n-1}\left[\tilde{y_{i}}\frac{s_y}{s_x}-\tilde{x_{i}}\beta\right]
$$
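A quick numerical check of the final approximation (my own sketch, not part of the original derivation):
```
set.seed(1)
n <- 50
x <- rnorm(n); y <- 1 + 2 * x + rnorm(n)
beta <- cov(x, y) / var(x)                         # OLS slope
vx   <- mean((x - mean(x))^2)                      # (1/n) * sum((x - xbar)^2), as in the formulas
i    <- 7                                          # observation to delete (arbitrary choice)
beta_approx <- beta - (x[i] - mean(x)) / (n - 1) *
  ((y[i] - mean(y)) / vx - beta * (x[i] - mean(x)) / vx)
beta_exact  <- cov(x[-i], y[-i]) / var(x[-i])      # slope after actually refitting without i
c(approx = beta_approx, exact = beta_exact)
```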
| null |
CC BY-SA 2.5
| null |
2011-03-16T11:27:17.790
|
2011-03-16T11:35:25.490
|
2011-03-16T11:35:25.490
|
2392
|
2392
| null |
8355
|
2
| null |
8351
|
5
| null |
[Larry Bretthorst's extended PhD thesis](http://bayes.wustl.edu/glb/book.pdf) will greatly help you, I think. You should take the discrete Fourier transform of the data; this will give you a look at your series in the frequency domain, where trend is represented by low frequency. It is the ultimate modeling book: 200 pages, but well worth it, and it includes computer code to implement the methods.
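A small illustration of the frequency-domain idea (my sketch, not from the book): the trend of a series shows up as large magnitude at the lowest frequencies of its discrete Fourier transform.
```
set.seed(1)
y <- 0.05 * (1:100) + rnorm(100)              # series with a weak linear trend
mag <- Mod(fft(y - mean(y)))[2:50]            # DFT magnitudes at the first non-zero frequencies
plot(mag, type = "h", xlab = "frequency index", ylab = "|DFT|")
# The spike at the lowest frequency indices is the signature of the trend.
```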
| null |
CC BY-SA 2.5
| null |
2011-03-16T11:46:52.670
|
2011-03-16T11:46:52.670
| null | null |
2392
| null |
8357
|
2
| null |
8334
|
6
| null |
Your question is most interesting to me, and its solution has been my primary research for a number of years.
There are a number of ways that "a structural break" may occur.
If there is a change in the intercept or a change in trend in "the latter portion of the time series", then one would be better suited to perform Intervention Detection (N.B. this is the empirical identification of the significant impact of an unspecified deterministic variable, such as a level shift, a change in trend, or the onset of a seasonal pulse). Intervention Detection is then a precursor to Intervention Modelling, where a suggested variable is included in the model. You can find information on the web by googling "AUTOMATIC INTERVENTION DETECTION". Some authors use the term "OUTLIER DETECTION", but like a lot of statistical language this can be confusing/imprecise. Detected interventions (found as a significant change in the mean of the residuals) can be any of the following:
- a 1 period change in Level ( i.e. a Pulse )
- a multi-period contiguous change in Level ( i.e. a change in Intercept )
- a systematic Pulse ( i.e. a Seasonal Pulse )
- a trend change (i.e. 1,2,3,4,5,7,9,11,13,15 ..... )
These procedures are easily programmed in R/SAS/Matlab and are routinely available in a number of commercially available time series packages; however, there are many pitfalls to be wary of, such as whether to identify the stochastic structure first or to do intervention detection on the original series. This is like the chicken-and-egg problem. Early work in this area was limited to type 1's (pulses) and as such will probably be insufficient for your needs.
If no such phenomenon is detected, then one might consider the CHOW TEST, which normally requires the user to pre-specify the point of hypothesized change. I have been researching and implementing procedures to DETECT the point of change by evaluating alternative hypothetical points in time to determine the most likely break point.
In closing, one might also be sensitive to the possibility of a structural change in the error variance, which might mask the CHOW TEST and lead to a false acceptance of the null hypothesis of no significant break points in the parameters.
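As a rough illustration of the two routes mentioned (a Chow test at a pre-specified point versus estimating the break point from the data), here is a sketch assuming the R package `strucchange`, which is not named in the answer:
```
library(strucchange)
set.seed(1)
y <- ts(c(rnorm(60), 3 + rnorm(40)))          # simulated series with a level shift after t = 60
# Chow test at a pre-specified break point:
sctest(y ~ 1, type = "Chow", point = 60)
# Letting the data suggest the break point instead:
breakpoints(y ~ 1)
```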
| null |
CC BY-SA 2.5
| null |
2011-03-16T12:27:38.923
|
2011-03-16T13:50:24.793
|
2011-03-16T13:50:24.793
|
3382
|
3382
| null |
8358
|
1
|
8543
| null |
4
|
679
|
I am a beginner in statistics with just basic knowledge. I have these data: cases, deaths, and CFR (case fatality rate: deaths per 100 cases) of a disease for 17 years (1994-2010) from 2 neighbouring states between which people can move freely. This is a population-based cohort study.
Data are available from 1994. The treatment protocol was started in 1996. State 1 and state 2 implemented the same treatment, but state 2 could not implement the treatment perfectly due to local administrative problems. The death rate fell in one state and continued to be high in the other. Because it would be unethical to subject patients to a case-control study, I want to analyse the following available data to see if there is a significant difference in the death rates and risk ratios of these 2 states because of poor implementation of the treatment guidelines in state 2 from 1996 to 2010.
St. refers to State
```
Year St.1 Cases St.1 Deaths St.1 CFR St.2 Cases St.2 Deaths State2 CFR Risk Ratio
1994 1836 383 20.86 583 121 20.75 0.99
1995 1246 257 20.63 1126 227 20.16 0.98
1996 1450 263 18.14 896 179 19.98 1.10
1997 2953 407 13.78 351 76 21.65 1.57
1998 1161 149 12.83 1061 195 18.55 1.43
1999 2924 434 14.84 1371 275 20.06 1.35
2000 1729 169 9.77 1170 253 21.62 2.21
2001 1888 275 14.57 1005 199 19.80 1.36
2002 919 178 19.37 604 133 22.02 1.14
2003 865 142 16.42 1124 237 21.09 1.28
2004 1543 131 8.49 1030 228 22.14 2.61
2005 2887 336 11.64 6061 1500 24.75 2.13
2006 1484 108 7.28 2320 528 22.76 3.13
2007 1592 75 4.71 3024 645 21.33 4.53
2008 1920 53 2.76 3012 537 17.83 6.46
2009 1477 40 2.71 3073 556 18.09 6.68
2010 1534 26 1.69 3540 494 13.95 8.23
```
Kindly advise me what is the best way to go.
I have basic knowledge of using SPSS V19 and Comprehensive Meta-analysis V2.
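Not an answer, but to make the data layout concrete, here is a sketch of a single year's comparison in base R (the question mentions SPSS; this is purely illustrative):
```
deaths <- c(26, 494); cases <- c(1534, 3540)       # 2010: State 1, State 2
prop.test(deaths, cases)                           # tests equality of the two CFRs
(deaths[2] / cases[2]) / (deaths[1] / cases[1])    # risk ratio of State 2 vs State 1, about 8.2
```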
|
How to assess effect of intervention in one state versus another using annual case fatality rate?
|
CC BY-SA 2.5
| null |
2011-03-16T12:31:39.013
|
2011-03-20T20:31:57.940
|
2011-03-16T13:00:03.317
| null |
2956
|
[
"time-series",
"statistical-significance",
"spss",
"relative-risk"
] |
8359
|
2
| null |
8351
|
7
| null |
The answer to your first question is no. If the null hypothesis of a unit root is rejected, the alternative in its most general form is a stationary series with a [time trend](http://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test). Here is an example:
```
> rr <- 1+0.01*(1:100)+rnorm(100)
> plot(rr)
> adf.test(rr)
Augmented Dickey-Fuller Test
data: rr
Dickey-Fuller = -4.1521, Lag order = 4, p-value = 0.01
alternative hypothesis: stationary
Message d'avis :
In adf.test(rr) : p-value smaller than printed p-value
```
So your findings are consistent with ADF test: there is no unit root, but there is a time trend.
| null |
CC BY-SA 2.5
| null |
2011-03-16T12:44:21.670
|
2011-03-16T12:44:21.670
| null | null |
2116
| null |
8361
|
5
| null | null |
0
| null |
[ggplot2](http://had.co.nz/ggplot2/) is an [actively maintained, open-source](https://github.com/hadley/ggplot2) chart-drawing library for [r](/questions/tagged/r), written by [Hadley Wickham](http://stackoverflow.com/users/16632/hadley), based upon the principles of ["The Grammar of Graphics"](https://rads.stackoverflow.com/amzn/click/com/0387987746) by Leland Wilkinson. It can stand as an alternative to R's [basic plot](http://cran.r-project.org/web/views/Graphics.html) and the [lattice package](http://cran.r-project.org/web/packages/lattice/index.html).
Resources:
- Download from the package archive or use install.packages("ggplot2")
- Hadley's online old documentation (pre-0.9) and new documentation
- View the wiki at GitHub for tips, examples and case studies
- Hadley's Book "ggplot2: Elegant Graphics for Data Analysis" at Amazon – preview
- Google Groups Discussion
- View the source at GitHub
- Detailed usage and comparison to lattice at "Learning R"
- Hadley's stackoverflow user account
- On Mar 1st 2012 a major new version was released, 0.9.0. A PDF detailing many of the new features and changes is available.
- On Sept 11th 2012, version 0.9.2 was released. See documentation and a list of changes.
- Winston Chang's Cookbook for R has pages that have been updated to reflect changes in the new version of ggplot2 0.9.2.
There are several related packages on CRAN that are based upon, or extend ggplot2's functionality:
- GGally: Package containing templates for different plots to be combined into a plot matrix, as well as a parallel coordinate plot function
- ggcolpairs: Combination of ggplot2 plots into a matrix based on data columns
- ggdendro: Tools for extracting dendrogram and tree diagram plot data for use with ggplot2
- granovaGG: Graphical analysis of variance using ggplot2
- nvis: Combination of visualization functions for nuclear data using ggplot2 and ggcolpairs
- ggmap allows visualization of spatial data and models on top of Google Maps, OpenStreetMaps, Stamen Maps, or CloudMade Maps using ggplot2
- Finally, the package gridExtra is often useful, as ggplot2 is based upon the grid graphics system.
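A minimal usage example (illustrative only, not part of the original tag wiki):
```
library(ggplot2)
ggplot(mtcars, aes(x = wt, y = mpg, colour = factor(cyl))) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE)
```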
| null |
CC BY-SA 4.0
| null |
2011-03-16T14:34:23.620
|
2020-07-02T00:46:57.570
|
2020-07-02T00:46:57.570
|
7290
|
3376
| null |
8362
|
4
| null | null |
0
| null |
ggplot2 is an enhanced plotting library for R based upon the principles of "The Grammar of Graphics". Use this tag for *on topic* questions that (a) involve `ggplot2` as a critical part of the question &/or expected answer, & (b) are not just about how to use `ggplot2`.
| null |
CC BY-SA 4.0
| null |
2011-03-16T14:34:23.620
|
2020-07-02T00:44:16.790
|
2020-07-02T00:44:16.790
|
7290
| null | null |
8363
|
2
| null |
8342
|
1
| null |
Initially I suggested trying one of the constrained ordination techniques from the `vegan` package, but on second thought I doubt that this would be useful, as you actually have 2 contingency tables. I hope that the second part of [this example](http://is.gd/CSS9QV) [PDF: R Demonstration – Categorical Analysis] will be helpful.
| null |
CC BY-SA 2.5
| null |
2011-03-16T14:47:55.757
|
2011-03-17T08:06:04.890
|
2011-03-17T08:06:04.890
|
3467
|
3467
| null |
8364
|
2
| null |
8347
|
15
| null |
A quick response to the bulleted content:
1) Power / Type 1 error in a Bayesian analysis vs. a frequentist analysis
Asking about Type 1 error and power (i.e. one minus the probability of Type 2 error) implies that you can put your inference problem into a repeated sampling framework. Can you? If you can't, then there isn't much choice but to move away from frequentist inference tools. If you can, and if the behavior of your estimator over many such samples is of relevance, and if you are not particularly interested in making probability statements about particular events, then there's no strong reason to move.
The argument here is not that such situations never arise - certainly they do - but that they typically don't arise in the fields where the methods are applied.
2) The trade-off in complexity of the analysis (Bayesian seems more complicated) vs. the benefits gained.
It is important to ask where the complexity goes. In frequentist procedures the implementation may be very simple, e.g. minimize the sum of squares, but the principles may be arbitrarily complex, typically revolving around which estimator(s) to choose, how to find the right test(s), and what to think when they disagree. For an example, see the still lively discussion, picked up in this forum, of different confidence intervals for a proportion!
In Bayesian procedures the implementation can be arbitrarily complex even in models that look like they 'ought' to be simple, usually because of difficult integrals but the principles are extremely simple. It rather depends where you'd like the messiness to be.
3) Traditional statistical analyses are straightforward, with well-established guidelines for drawing conclusions.
Personally I can no longer remember, but certainly my students never found these straightforward, mostly due to the principle proliferation described above. But the question is not really whether a procedure is straightforward, but whether it is closer to being right given the structure of the problem.
Finally, I strongly disagree that there are "well-established guidelines for drawing conclusions" in either paradigm. And I think that's a good thing. Sure, "find p<.05" is a clear guideline, but for what model, with what corrections, etc.? And what do I do when my tests disagree? Scientific or engineering judgement is needed here, as it is elsewhere.
| null |
CC BY-SA 2.5
| null |
2011-03-16T14:54:27.780
|
2011-03-16T14:54:27.780
| null | null |
1739
| null |
8367
|
2
| null |
8303
|
6
| null |
One idea would be to use a random forest and then use the variable importance measures it outputs to choose your best 8 variables. Another idea would be to use the "boruta" package to repeat this process a few hundred times to find the 8 variables that are consistently most important to the model.
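A sketch of the first suggestion, assuming the `randomForest` package (the built-in `mtcars` data stand in for the real problem):
```
library(randomForest)
set.seed(1)
fit <- randomForest(mpg ~ ., data = mtcars, importance = TRUE)
imp <- importance(fit, type = 1)                         # permutation importance (%IncMSE)
head(rownames(imp)[order(imp, decreasing = TRUE)], 8)    # the 8 most important variables
```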
| null |
CC BY-SA 2.5
| null |
2011-03-16T16:04:09.750
|
2011-03-16T16:04:09.750
| null | null |
2817
| null |
8369
|
2
| null |
8342
|
1
| null |
Logistic regression seems appropriate for your problem. The variable you are trying to predict is the probability that an observation (which is either species A or species B) is species A. The covariates are $host$, $rain$ and optionally $host*rain$.
The R command would be:
>
glm(formula = species ~ host + rain, family = binomial(logit), weights=counts)
and you will be interested in the $p$-values of the slopes. Keep in mind that you are testing multiple hypotheses, though.
| null |
CC BY-SA 2.5
| null |
2011-03-16T16:38:09.173
|
2011-03-16T18:38:27.713
|
2011-03-16T18:38:27.713
|
3567
|
3567
| null |
8370
|
1
|
8372
| null |
6
|
2304
|
I have a matrix of positive real numbers between 0 and 1; the rows represent genes and the columns represent samples. The number of rows is greater than the number of columns by a factor of about $10^4$. I am wondering how to visualize this in `R`. I know a heatmap is one way to do this, but are there other ideas? Here are a few points which I want to emphasize in the visualization:
Data:
- The rows and columns have no order to them (as you may have already realized); specifically rows and columns are exchangeable.
- The entries of the matrix are positive real numbers between 0 and 1.
- A small fraction of the data (10% of the rows or genes, around 1000) are actually "interesting".
- The matrix represents the estimated probability of genes being more active in a sample.
Aim:
- I want to show which genes are more active and in which samples. The matrix has a lot of rows in which the probabilities are very similar across columns.
- I am ok with ordering the rows (genes) to make the pattern clearer.
My thoughts:
At the moment I can determine the active genes in a sample by choosing a cutoff (say $\ge 95\%$) and arranging the genes so that the first set of rows are genes active in sample 1, the second set of rows are genes active in sample 2, and so on.
I was also thinking about visualizing a subset of the data, maybe by sampling rows, but I did not have any success.
I know these ideas may not be very elegant, but they rearrange my data in a way that makes the pattern more recognizable.
I know similar questions have been asked before, but I thought my query was a bit more specific, so hopefully I can get better inputs from the members of this forum.
|
How to visualize/summarize a matrix with number of rows $\gg$ number of columns?
|
CC BY-SA 2.5
| null |
2011-03-16T17:04:50.353
|
2011-03-16T20:14:48.487
|
2011-03-16T17:44:54.310
|
930
|
1307
|
[
"r",
"data-visualization",
"genetics"
] |
8371
|
1
|
9657
| null |
10
|
1792
|
French polling institutes are currently facing a major crisis after they recently published what can only be called [the most ridiculous poll so far](http://www.bbc.co.uk/news/world-europe-12660329) on the 2012 presidential election horse race. The French Senate is now considering legislating on the issue by forcing polling institutes to publish, among other things, the confidence intervals for their results.
However, some pollsters are [opposing](http://www.lemonde.fr/idees/article/2011/03/08/il-est-inutile-de-legiferer-sur-la-question-des-sondages_1490046_3232.html) the measure, claiming that confidence intervals do not apply to [quota sampling](http://en.wikipedia.org/wiki/Quota_sampling), which is the method used by polling institutes in France. Since quota sampling is formally non-probabilistic sampling, there is some truth to the claim. But since quota sampling is fundamentally [stratified sampling](http://en.wikipedia.org/wiki/Stratified_sampling), confidence intervals should apply, right?
May I ask for experiences about this issue outside of France, in countries where pollsters also use quota sampling?
|
Do confidence intervals apply to quota sampling?
|
CC BY-SA 2.5
| null |
2011-03-16T17:38:34.997
|
2019-07-17T12:03:09.373
| null | null |
3582
|
[
"sampling",
"polling"
] |
8372
|
2
| null |
8370
|
6
| null |
Find one-dimensional [multidimensional scaling solutions](http://en.wikipedia.org/wiki/Multidimensional_scaling) for the rows and for the columns (separately), using whatever similarity measures you like (such as correlation). Sort the rows and columns according to their MDS positions. This will bring similar genes together and similar samples together. The whole thing can then easily be visualized as an array plot (e.g., normalize the values to the range 0..255 and display it as a grayscale image).
A 50 by 6 array of standard normal variates was processed in this way (using Euclidean distances as the proximity measures):
[figure: grayscale array plot of the reordered 50 × 6 array]
There's not much to see--after all, these data are iid--but look at the correlation matrices of the reordered columns and rows:
[figures: correlation matrix plots of the reordered columns and of the reordered rows]
(red = positive, blue = negative). The concentrations of positive correlations along the diagonals and negative correlation off the diagonals demonstrate the method has worked as advertised. (With the original data, the correlation matrices are random, too, causing the red and blue cells to be more evenly interspersed throughout.)
In my experience, when there are even subtle nonzero correlations among rows and/or columns, this method does an excellent job of bringing them out in the original array plot (grayscale, above) and providing a visual display of clustering along both dimensions. Larger blocks along the diagonals of the corresponding correlation matrix plots help identify strongly clustered groups of rows or columns.
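A sketch of this procedure in base R (using `cmdscale` for the one-dimensional MDS and Euclidean distances; this is my reconstruction, not the author's original code):
```
set.seed(1)
X <- matrix(rnorm(50 * 6), nrow = 50)                  # 50 x 6 array of standard normals
row_ord <- order(cmdscale(dist(X),    k = 1))          # 1-D MDS positions of the rows
col_ord <- order(cmdscale(dist(t(X)), k = 1))          # ... and of the columns
image(t(X[row_ord, col_ord]), col = grey.colors(256),  # array plot of the reordered matrix
      axes = FALSE)
```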
| null |
CC BY-SA 2.5
| null |
2011-03-16T18:29:31.027
|
2011-03-16T18:29:31.027
| null | null |
919
| null |
8373
|
1
| null | null |
15
|
12360
|
I am seeing a regression model which is regressing Year-on-Year stock index returns on lagged (12 months) Year-on-Year returns of the same stock index, credit spread (difference between monthly mean of risk-free bonds and corporate bond yields), YoY Inflation rate and YoY index of industrial production.
It looks thus (though you'd substitute the data specific to India in this case):
```
SP500YOY(T) = a + b1*SP500YOY(T-12) + b2*CREDITSPREAD(T) +
              b3*INDUSTRIALPRODUCTION(T+2) + b4*INFLATION(T+2) + b5*INFLATIONASYMM(T+2)
```
SP500YOY is the year-on-year return for the SP500 index. To compute this, the monthly average of SP500 values is computed and then converted to year-on-year returns for each month (i.e. Jan'10-Jan'11, Feb'10-Feb'11, Mar'10-Mar'11, ...). On the explanatory-variable side, a 12-month lagged value of SP500YOY is used along with CREDITSPREAD at time T, and INFLATION and INDUSTRIALPRODUCTION two periods AHEAD. INFLATIONASYMM is a dummy for whether inflation is above a threshold value of 5.0%. The index in the parentheses shows the time index for each variable.
This is estimated by standard OLS linear regression. To use this model for forecasting the 1,2 and 3-months ahead YOY returns of SP500, one has to generate 3,4 and 5-month ahead forecasts for Inflation and the Index of Industrial Production. These forecasts are done after fitting an ARIMA model to each of the two individually. The CreditSpread forecasts for 1,2 and 3 month ahead are just thrown in as mental estimates.
I'd like to know whether this OLS linear regression is correct/incorrect, efficient/inefficient or generally valid statistical practice.
The first problem I see is that of using overlapping data: the daily values of the stock index are averaged each month, and then used to compute yearly returns which are rolled over monthly. This should make the error term autocorrelated. I would think that one would have to use some 'correction' along the lines of one of the following:
- White's heteroscedasticity consistent covariance estimator
- Newey & West heteroscedasticity and autocorrelation consistent (HAC) estimator
- heteroscedasticity-consistent version of Hansen & Hodrick
Does it really make sense to apply standard OLS linear regression (without any corrections) to such overlapping data, and, more so, to use 3-period-ahead ARIMA forecasts of the explanatory variables in the original OLS linear regression for forecasting SP500YOY? I have not seen such a form before and hence cannot really judge it, apart from noting the need to correct for the use of overlapping observations.
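For reference, a sketch of the Newey-West correction listed above, assuming the R packages `sandwich` and `lmtest` (the data are simulated; with overlapping 12-month returns the errors are MA(11) by construction, hence the choice of lag):
```
library(sandwich); library(lmtest)
set.seed(1)
n <- 120
x <- rnorm(n)
e <- as.numeric(stats::filter(rnorm(n + 11), rep(1, 12), sides = 1))[-(1:11)]  # MA(11)-style errors
y <- 1 + 0.5 * x + e
fit <- lm(y ~ x)
coeftest(fit)                                                     # naive OLS standard errors
coeftest(fit, vcov = NeweyWest(fit, lag = 11, prewhite = FALSE))  # HAC-corrected standard errors
```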
|
Time series regression with overlapping data
|
CC BY-SA 3.0
| null |
2011-03-16T18:40:35.840
|
2023-03-12T14:27:22.750
|
2023-03-12T14:27:22.750
|
53690
|
1349
|
[
"regression",
"time-series",
"autocorrelation",
"overlapping-data"
] |
8374
|
1
| null | null |
8
|
266
|
I am trying to correlate age (6-90 yrs) with loudness of voice (in dB). However, my data do not contain any data points in the range of 20-50 yrs.
What correlation measure is most appropriate with such a considerable gap, and why? I have been using Kendall Tau so far.
Note that we are not dealing with bimodally distributed data here, but with a substantial missing data gap in the age range.
|
Which correlation measure should be used with a large gap (missing data)?
|
CC BY-SA 2.5
| null |
2011-03-16T18:52:37.840
|
2011-03-16T22:59:14.990
|
2011-03-16T22:44:11.883
|
919
| null |
[
"distributions",
"correlation",
"missing-data"
] |
8375
|
1
|
8806
| null |
17
|
9270
|
I am looking to build a predictive model for predicting churn and looking to use a discrete time survival model fitted to a person-period training dataset (one row for each customer and discrete period they were at risk, with an indicator for event – equaling 1 if the churn happened in that period, else 0).
- I am fitting the model using ordinary logistic regression, using the technique from Singer and Willett.
- The churn of a customer can happen anywhere during a month, but it is only at the end of the month that we know about it (i.e. sometime during that month they left). 24 months is being used for training.
- The time variable being used is the origin time of the sample - all customers active as of 12/31/2008 - they all receive t=0 as of Jan 2009 (not the classical way to do it, but I believe the way when building a predictive model versus a traditional statistical one). A covariate used is the tenure of the customer at that point in time.
- There are a series of covariates that were constructed – some that do not change across the rows of the dataset (for a given customer) and some that do.
- These time-varying covariates are the issue and what is causing me to question a survival model for churn prediction (compared to a regular classifier that predicts churn in the next x months based on current snapshot data). The time-varying ones describe activity the month prior and are expected to be important triggers.
The implementation of this predictive model, at least based on my current thinking, is to score the customer base at the end of each month, calculating the probability / risk of churn sometime during the next month. Then again for the next 1,2 or 3 months. Then for the next 1,2,3,4,5,6 months. For the 3 and 6 month churn probability, I would be using the estimated survival curve.
The problem:
When it comes to thinking about scoring, how can I incorporate time-varying predictors? It seems like I can only score with time-invariant predictors, or, to include those that are time-varying, I have to make them time-invariant – set to the value "right now".
Does anyone have experience or thoughts on this use of a survival model?
Update based on @JVM comment:
The issue is not with estimating the model, interpreting coefficients, plotting the hazard/survival curves for interesting covariate values using the training data, etc. The issue is in using the model to forecast risk for a given customer. Say at the end of this month I want to score everyone who is still an active customer with this model. I want to forecast that risk estimate out x periods (risk of closing the account at the end of next month, risk of closing the account at the end of two months from now, etc.). If there are time-varying covariates, their values are unknown for any future period, so how do I utilize the model?
Final Update:
A person-period data set will have an entry for each person and each time period they are at risk. Say there are J time periods (maybe J = 1,...,24 for 24 months). Let's say I construct a discrete-time survival model where, for simplicity, we just treat time T as linear and have two covariates X and Z, where X is time-invariant, meaning it is constant in every period for the ith person, and Z is time-varying, meaning that each record for the ith person can take on a different value. For example, X may be the customer's gender and Z might be how much they were worth to the company in the prior month. The model for the logit of the hazard for the ith person in the jth time period is:
$logit(h(t_{ij}))=\alpha_{0}+\alpha_{1}T_{j}+\beta_{1}X_{i}+\beta_{2}Z_{ij}$
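For concreteness, a minimal sketch of fitting this model as a logistic regression on person-period data (simulated and simplified; a real person-period file would drop a customer's rows after the month they churn):
```
set.seed(1)
pp <- data.frame(
  period = rep(1:24, times = 100),                    # J = 24 monthly periods, 100 customers
  X      = rep(rbinom(100, 1, 0.5), each = 24),       # time-invariant covariate
  Z      = rnorm(24 * 100)                            # time-varying covariate
)
pp$event <- rbinom(nrow(pp), 1, plogis(-4 + 0.05 * pp$period + 0.5 * pp$X - 0.3 * pp$Z))
fit <- glm(event ~ period + X + Z, family = binomial("logit"), data = pp)
summary(fit)$coefficients
```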
So the issue is, when using time varying covariates, and forecasting (into the yet unseen future) with new data, the $Z_{j}$ are unknown.
The only solutions I can think are:
- Don't use time varying covariates like Z. This would greatly weaken the model to predict the event of churning though since, for example, seeing a decrease in Z would tell us the customer is disengaging and perhaps preparing to leave.
- Use time varying covariates but lag them (like Z was above) which allows us to forecast out however many periods we have lagged the variable (again, thinking of the model scoring new current data).
- Use time-varying covariates but keep them as constants in the forecast (so the model was fitted on varying data, but for prediction we leave them constant and simulate how changes in these values, if later actually observed, would impact the risk of churning).
- Use time varying covariates but impute their future values based on a forecast from known data. E.g. Forecast the $Z_{j}$ for each customer.
|
Survival Model for Predicting Churn - Time-varying predictors?
|
CC BY-SA 4.0
| null |
2011-03-16T19:58:45.120
|
2018-07-01T11:28:47.083
|
2018-06-04T07:11:26.640
|
128677
|
2040
|
[
"survival",
"predictive-models",
"churn"
] |
8376
|
2
| null |
8370
|
6
| null |
I was about to suggest something along @whuber's answer (I used this reordering technique but in a context of feature selection, so I was mainly concerned with the "variables view"). So let me suggest two other directions (but the first one is close to the already proposed one).
As far as heatmaps are concerned, you can display them after a slight rearrangement of rows (samples) and/or columns (genes) through hierarchical clustering (yet another aggregation method based on a (dis)similarity measure). There're a lot of R packages that can do this, for example the `cim()` function in [mixOmics](http://cran.r-project.org/web/packages/mixOmics/index.html). Another package that might be of interest is [MADE4](http://www.bioinf.ucd.ie/people/aedin/R/); it relies on the very good [ade4](http://cran.r-project.org/web/packages/ade4/index.html) package for multivariate data analysis and visualization.
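As a quick base-R illustration of the same idea (`heatmap()` reorders rows and columns by hierarchical clustering by default; `cim()` produces a more polished version):
```
set.seed(1)
M <- matrix(runif(200 * 6), nrow = 200)        # 200 "genes" by 6 "samples", values in [0, 1]
heatmap(M, col = grey.colors(256), scale = "none")
```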
If you are concerned with the rather large number of variables, you might also consider some reduction method for gene clustering. One that I've heard about is PCA gene shaving (Hastie et al., 2000), which is described at length in Izenman (2008). In essence, this is a two-stage iterative procedure where (a) for feature selection, we single out genes whose correlation with the first principal component is below a distribution-based threshold (say, the 10% of genes having the lowest correlation at each step), and (b) for clustering, we seek to maximize an $R^2$ measure (between-cluster/within-cluster variances) for $j$ successive clusters of size $k_j$, where the optimal $k_j$ is obtained by a permutation technique and the use of the gap statistic (after the effects of the former cluster have been removed by residualization). More detailed information can be found in the paper referenced below, but the general idea is to cluster genes into small and potentially overlapping subsets of correlated genes that vary as much as possible across individuals.
References
- Hastie, T., Tibshirani, R., Eisen, M.B., Alizadeh, A., Levy, R., Staudt, L., Chan, W.C., Botstein, D., and Brown, P.O. (2000). 'Gene shaving' as a method for identifying distinct sets of genes with similar expression patterns. Genome Biology, 1(2).
- Izenman, A.J. (2008). Modern Multivariate Statistical Techniques. Springer.
| null |
CC BY-SA 2.5
| null |
2011-03-16T20:14:48.487
|
2011-03-16T20:14:48.487
| null | null |
930
| null |
8377
|
1
|
8383
| null |
7
|
1571
|
I am a first-year psychology student. I am doing some research work with a prof; unfortunately, the material that I need to use right now is covered only in my second year, but I need to know it already now. So I am burning through any resources I can find to quickly come up to speed. I need help understanding this particular situation, which involves SAS and regression analysis.
When I ran a regression in SAS (proc reg) using two variables, say a and b, I got the output below. I understand this as saying that both these variables (a & b) do not significantly predict my target variable. Here is the SAS output.
```
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 2 3.32392 1.66196 1.00 0.3774
Error 46 76.80649 1.66971
Corrected Total 48 80.13041
Root MSE 1.29217 R-Square 0.0415
Dependent Mean -0.23698 Adj R-Sq -0.0002
Coeff Var -545.26074
Parameter Estimates
Parameter Standard Standardized
Variable DF Estimate Error t Value Pr > |t| Estimate
Intercept 1 -0.25713 0.18515 -1.39 0.1716 0
a 1 -0.35394 0.28797 -1.23 0.2253 -0.19510
b 1 -0.04706 0.39586 -0.12 0.9059 -0.01887
```
Now I tried to include the interaction of a and b in the picture. Let's call it aXb. Now the output indicates that a and aXb significantly predict my target variable.
```
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 3 16.64439 5.54813 3.93 0.0142
Error 45 63.48602 1.41080
Corrected Total 48 80.13041
Root MSE 1.18777 R-Square 0.2077
Dependent Mean -0.23698 Adj R-Sq 0.1549
Coeff Var -501.20683
Parameter Estimates
Parameter Standard Standardized
Variable DF Estimate Error t Value Pr > |t| Estimate
Intercept 1 -0.06807 0.18098 -0.38 0.7086 0
a 1 3.01517 1.12795 2.67 0.0104 1.66201
b 1 -0.00994 0.36407 -0.03 0.9783 -0.00399
aXb 1 -1.13782 0.37029 -3.07 0.0036 -1.90743
```
Here are my questions: I am not sure what to make of this situation. Taken together, what does this indicate? Also, while you are answering the question, could you supplement it with some resources, good keywords, etc., for me to learn more about these topics?
Thank you so much for your help.
|
Understanding multiple regression output
|
CC BY-SA 2.5
| null |
2011-03-16T20:55:08.437
|
2011-03-17T16:58:37.463
|
2011-03-17T16:58:37.463
|
919
|
3745
|
[
"regression",
"anova",
"sas"
] |
8378
|
2
| null |
8342
|
11
| null |
There are two ways to interpret your first question, which are reflected in the two ways you asked it: “Are species associated with host plants?” and, “Are species independent to host plants, given the effect of rain?”
The first interpretation corresponds to a model of joint independence, which states that species and hosts are dependent, but jointly independent of whether it rained:
$\quad p_{shr} = p_{sh} p_r$
where $p_{shr}$ is the probability that an observation falls into the $(s,h,r)$ cell where $s$ indexes species, $h$ host type, and $r$ rain value, $p_{sh}$ is the marginal probability of the $(s,h,\cdot)$ cell where we collapse over the rain variable, and $p_r$ is the marginal probability of rain.
The second interpretation corresponds to a model of conditional independence, which states that species and hosts are independent given whether it rained:
$\quad p_{sh|r} = p_{s|r}p_{h|r}$ or $p_{shr} = p_{sr}p_{hr} / p_r$
where $p_{sh|r}$ is the conditional probability of the $(s,h,r)$ cell, given a value of $r$.
You can test these models in R (`loglin` would work fine too but I’m more familiar with `glm`):
```
count <- c(12,15,10,13,11,12,12,7)
species <- rep(c("a", "b"), 4)
host <- rep(c("c","c", "d", "d"), 2)
rain <- c(rep(0,4), rep(1,4))
my.table <- xtabs(count ~ host + species + rain)
my.data <- as.data.frame.table(my.table)
mod0 <- glm(Freq ~ species + host + rain, data=my.data, family=poisson())
mod1 <- glm(Freq ~ species * host + rain, data=my.data, family=poisson())
mod2 <- glm(Freq ~ (species + host) * rain, data=my.data, family=poisson())
anova(mod0, mod1, test="Chi") #Test of joint independence
anova(mod0, mod2, test="Chi") #Test of conditional independence
```
Above, `mod1` corresponds to joint independence and `mod2` corresponds to conditional independence, whereas `mod0` corresponds to a mutual independence model $p_{shr} = p_s p_h p_r$. You can see the parameter estimates using `summary(mod2)`, etc. As usual, you should check to see if model assumptions are met. In the data you provided, the null model actually fits adequately.
A different way of approaching your first question would be to perform Fisher's exact test (`fisher.test(xtabs(count ~ host + species))`) on the collapsed 2x2 table (first interpretation), or the Mantel-Haenszel test (`mantelhaen.test(xtabs(count ~ host + species + rain))`) for 2-stratified 2x2 tables, or to write a permutation test that respects the stratification (second interpretation).
To paraphrase your second question, Does the relationship between species and host depend on whether it rained?
```
mod3 <- glm(Freq ~ species*host*rain - species:host:rain, data=my.data, family=poisson())
mod4 <- glm(Freq ~ species*host*rain, data=my.data, family=poisson())
anova(mod3, mod4, test="Chi")
pchisq(deviance(mod3), df.residual(mod3), lower=F)
```
The full model `mod4` is saturated, but you can test the effect in question by looking at the deviance of `mod3` as I've done above.
| null |
CC BY-SA 2.5
| null |
2011-03-16T21:16:26.233
|
2011-03-16T21:21:45.497
|
2011-03-16T21:21:45.497
|
3432
|
3432
| null |
8379
|
2
| null |
8374
|
8
| null |
Create a scatterplot to check whether it makes any sense to suppose that a single correlation coefficient is an adequate description of the association between the variables.
For example, in these (simulated) data the correlation for ages 6-20 is 90%, for ages 50+ it's -70%, and overall it's 15%. In such a situation reporting a single correlation coefficient would be as deceptive as reporting that the average number of legs among household pets is four when half of the pets are fish and the other half are spiders...
[figure: scatterplot of the simulated data showing the three correlation regimes]
The choice of how to express correlation is of secondary concern and rests on other aspects of the dataset.
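A small simulation in the same spirit (my own sketch; the correlations will not match the figures quoted above exactly):
```
set.seed(1)
age1 <- runif(60, 6, 20);  loud1 <- 50 + 2.0 * age1 + rnorm(60, sd = 3)   # strong positive association
age2 <- runif(60, 50, 90); loud2 <- 120 - 0.8 * age2 + rnorm(60, sd = 8)  # negative association
age <- c(age1, age2); loud <- c(loud1, loud2)
plot(age, loud, xlab = "age (years)", ylab = "loudness (dB)")
c(young = cor(age1, loud1), old = cor(age2, loud2), overall = cor(age, loud))
```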
| null |
CC BY-SA 2.5
| null |
2011-03-16T22:15:34.437
|
2011-03-16T22:15:34.437
| null | null |
919
| null |
8380
|
1
| null | null |
4
|
396
|
I want to build a linear model to predict a scalar output from a vector of noisy scalar variable measurements.
I have two separate training data sets. One has output data and corresponding exact variable measurements. The other has exact variable measurements and corresponding noisy variable measurements. The noisy measurements of some variables are noisier (higher error variance) than others, and the noisy measurements of some variables are biased. I do not have a single data set with output data and corresponding noisy variable measurements.
How should I build my linear model? Should I use the exact variable measurements and ignore the fact that when the model is applied/used, noisy measurements of the variables will be input to the model? Or is there some way I can make use of what I can figure out about the noisy measurements of each variable when building the model? Can R help me with this problem?
|
Building linear model from exact variable measurements for use with noisy variable measurements
|
CC BY-SA 2.5
| null |
2011-03-16T22:15:53.453
|
2011-03-25T16:48:20.410
|
2011-03-21T16:06:20.593
|
3744
|
3744
|
[
"r",
"regression",
"predictive-models",
"errors-in-variables"
] |
8381
|
2
| null |
652
|
1
| null |
Agresti & Finlay's Statistical Methods for the Social Sciences is quite good, though I'd like to believe there is a good open source alternative. Is it wrong to use an amazon affiliate link here?
| null |
CC BY-SA 2.5
| null |
2011-03-16T22:34:19.733
|
2011-03-16T22:34:19.733
| null | null |
3748
| null |
8382
|
1
|
8438
| null |
6
|
611
|
So I have data that have been quantized by an analogue-to-digital converter (continuous data have been turned into discrete data, and the values range from 0 to the saturation value, which is 127 in this case).
This particular instrument I used to gather the data is quite noisy; let's say there is added Gaussian noise on top of the real value. Luckily, for each single measurement I have enough time to take multiple readings and average them to reduce the noise. Note that sampling rate is not an issue here, since the thing I'm measuring is completely stable.
Obviously, taking the simple mean will produce a biased result because recorded values cannot go below 0 or above 127 (for example, if you attempt to use plain old averaging on something with a "real" value of 126, you will get an estimate that is less than 126, because the added Gaussian noise will never give you a recorded value higher than 127 due to the truncation). So how do I take the average so that the result gives me an unbiased estimator of the real value?
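A quick simulation of the bias being described (illustrative only):
```
set.seed(1)
true_value <- 126
noisy   <- true_value + rnorm(10000, sd = 3)       # Gaussian measurement noise
clipped <- pmin(pmax(round(noisy), 0), 127)        # quantize and saturate at 0 and 127
mean(clipped)                                      # noticeably below the true value of 126
```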
|
How to average quantized and truncated data?
|
CC BY-SA 3.0
| null |
2011-03-16T22:52:36.403
|
2017-04-23T16:16:25.247
|
2017-04-23T16:16:25.247
|
11887
|
3749
|
[
"mean",
"discrete-data",
"signal-processing",
"winsorizing"
] |
8383
|
2
| null |
8377
|
6
| null |
It seems like you need an introduction to regression. People made book recommendations [here](https://stats.stackexchange.com/questions/652/best-books-for-an-introduction-to-statistical-data-analysis). Free book recommendations [here](https://stats.stackexchange.com/questions/170/free-statistical-textbooks).
It's hard to make sure you're doing the analysis right when we don't know what the variables are or what the goal is. But based on the output, I can tell you that your second regression specification looks better than your first. I say that because you have two highly significant coefficients, and the adjusted R^2 value took a big jump. Note, though, that although I consider these important clues, it is not true that models with more significant coefficients or higher adjusted R^2 are consistently better. There are lots of other issues to consider.
Your regression models are predicting Y using a and b. In your second model, the estimated regression equation is -0.06807 + (3.01517 * a) - (0.00994 * b) - (1.13782 * a * b).
In other words, plug in a and b, and you get the model's prediction for Y. I could say a lot more, but I'll leave you there and suggest you pick up a textbook.
I strongly recommend you try plotting your data. Y with a on the x-axis, Y with b on the x-axis, and a by b as well.
| null |
CC BY-SA 2.5
| null |
2011-03-16T22:54:57.350
|
2011-03-17T00:22:58.507
|
2017-04-13T12:44:35.347
|
-1
|
3748
| null |
8384
|
2
| null |
8351
|
1
| null |
The ADF test has weak power and, as Dmitrij Celov mentioned, you should probably also check the results of PP and KPSS tests. If you find that your results are on the margin of detecting a unit root, it's possible your series is fractionally integrated. I would also check ACF and PACF plots of the series, looking for slow decay patterns. Generally, if you find that ADF test and Phillips-Perron reject the null of a unit root, but that the KPSS and ACF/PACF plots demonstrate some statistically-significant persistence through several lags, this may be strong evidence for fractional integration.
| null |
CC BY-SA 2.5
| null |
2011-03-16T22:55:24.480
|
2011-03-16T22:55:24.480
| null | null |
3265
| null |
8386
|
2
| null |
290
|
2
| null |
Alex Tabarrok posted a great list of resources here on [marginalrevolution.com](http://marginalrevolution.com/marginalrevolution/2011/01/stata-resources.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3a+marginalrevolution/hCQh+%28Marginal+Revolution%29)
Gabriel Rossman shares a good introductory guide [here](http://gabrielr.bol.ucla.edu/stataprogramming.pdf).
| null |
CC BY-SA 2.5
| null |
2011-03-16T23:45:24.433
|
2011-03-17T00:16:14.973
|
2011-03-17T00:16:14.973
|
3748
|
3748
| null |
8387
|
2
| null |
103
|
4
| null |
Andrew Gelman doesn't limit himself to visualization, but he comments on it frequently.
[Statistical Modeling, Causal Inference, and Social Science](http://www.stat.columbia.edu/~gelman/blog/)
| null |
CC BY-SA 3.0
| null |
2011-03-16T23:59:43.900
|
2012-10-24T14:54:19.153
|
2012-10-24T14:54:19.153
|
615
|
3748
| null |
8388
|
2
| null |
363
|
8
| null |
Andrew Gelman's interesting book recommendations are here:
[http://thebrowser.com/interviews/andrew-gelman-on-statistics](http://thebrowser.com/interviews/andrew-gelman-on-statistics)
| null |
CC BY-SA 2.5
| null |
2011-03-17T00:09:23.670
|
2011-03-17T00:09:23.670
| null | null |
3748
| null |
8389
|
1
| null | null |
5
|
654
|
I need a closed-form expression for the probability that 2 values drawn at random from a normal distribution are separated by at most T, as a function of T and the variance of the distribution. I can formulate the problem as an integration, but can't solve it. I'd be happy with a well-motivated approximation. I'd be amenable to making some simplifying assumptions about the distribution, if that would help.
|
Probability that two values drawn at random from a normal distribution are separated by at most T
|
CC BY-SA 2.5
| null |
2011-03-17T00:30:01.570
|
2011-03-17T02:07:17.637
| null | null | null |
[
"normal-distribution",
"sample"
] |
8390
|
2
| null |
7562
|
4
| null |
Looking at the example for `postStratify` in the [manual](http://cran.r-project.org/web/packages/survey/survey.pdf), you are correct: you seem to be required to give a `svydesign` object (though you can if needed use `svrepdesign` to specify it instead).
The `svydesign` object must have `ids`; all the others are optional, though you will almost certainly want `data` to have something to work with, and you will probably want some of the others. At this stage I would suggest you ignore all those appearing after `data`.
`postStratify` also needs `strata`, the variable to post-stratify on: the example uses `apiclus1$stype` which simply specifies the school type (E, M or H). It also needs `population` which you can either specify yourself or take from some other source: the example gives `data.frame(stype=c("E","H","M"), Freq=c(4421,755,1018))` though, as you say, `table` or `xtabs` can be used instead.
Again, you can then ignore all the other options unless you know you need them, so you can end up with something as simple as the example's `dclus1p<-postStratify(dclus1, ~stype, pop.types)`.
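Putting those pieces together, the manual's example looks roughly like this (I am quoting it from memory, so check it against the package documentation):
```
library(survey)
data(api)
# clustered design: ids, weights and fpc, plus the data frame
dclus1 <- svydesign(id = ~dnum, weights = ~pw, data = apiclus1, fpc = ~fpc)
# known population counts for the post-stratification variable
pop.types <- data.frame(stype = c("E", "H", "M"), Freq = c(4421, 755, 1018))
# post-stratify and estimate something
dclus1p <- postStratify(dclus1, ~stype, pop.types)
svymean(~api00, dclus1p)
```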
| null |
CC BY-SA 2.5
| null |
2011-03-17T00:40:10.997
|
2011-03-17T00:40:10.997
| null | null |
2958
| null |
8391
|
2
| null |
8389
|
10
| null |
The distribution of the difference between two single independent samples from normal distributions has a mean which is the difference between the means of the original distributions and a variance which is the sum of the variances of the original distributions.
So in your case, if the normal distribution of $X$ has mean $\mu$ and variance $\sigma^2$ then $X_1-X_2$ has a normal distribution with mean $0$ and variance $2\sigma^2$; if you prefer,$\frac{X_1-X_2}{\sqrt{2} \sigma}$ has a standard normal distribution. So long as you do a two-tailed calculation, that should be enough for you to find your answer.
To spell it out, your answer should be
$$2 \Phi \left(\frac{T}{\sqrt{2} \sigma} \right) - 1$$
where $\Phi$ is the cumulative distribution function of a standard normal distribution.
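A quick simulation check of this formula in R, with made-up values $\sigma = 2$ and $T = 1$:
```
sigma <- 2
Tmax  <- 1   # using Tmax rather than T, since T is shorthand for TRUE in R
x1 <- rnorm(1e6, sd = sigma)
x2 <- rnorm(1e6, sd = sigma)
mean(abs(x1 - x2) <= Tmax)               # simulated probability
2 * pnorm(Tmax / (sqrt(2) * sigma)) - 1  # closed-form expression above
```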
| null |
CC BY-SA 2.5
| null |
2011-03-17T00:47:18.960
|
2011-03-17T02:07:17.637
|
2011-03-17T02:07:17.637
|
919
|
2958
| null |
8392
|
1
|
8431
| null |
5
|
144
|
I have logs of which users run which programs on our systems at a university. The data shows username, process time, and the time stamp for the sample. I also know which classes each user is in. Given this data, is it possible to reliably tell which professors/courses use a particular piece of software? I figure this method might be more reliable than asking the professors, who never respond to our queries! What statistical method, if any, would appropriate here? Pointers to the appropriate R function would be appreciated. Would principal component analysis be appropriate here?
|
How to determine course-based usage of software?
|
CC BY-SA 2.5
|
0
|
2011-03-17T01:02:41.093
|
2011-03-19T09:46:11.220
|
2011-03-17T21:59:53.503
|
3369
|
2814
|
[
"r",
"modeling"
] |
8393
|
1
|
8403
| null |
2
|
434
|
I have three questions.
- I want to conduct a planned comparison in ANCOVA which compares three groups (n=2, n=7 and n=5) with two groups (n=11 and n=25). Is it ok to include the group with n=2 if I am only conducting a planned comparison as described above?
- Is it ok to conduct an ANCOVA when I am only comparing two groups? E.g. if I wanted to compare left and right hemisphere groups, controlling for another variable.
- If the DV in an ANCOVA has been transformed (because it was non-normal) does the covariate have to undergo the same transformation? E.g. If the DV is (log transformed) heart rate during an experimental condition, and the covariate is heart rate measured at baseline (which is normally distributed), does the baseline HR need to be transformed as well?
|
Questions about number of groups and group size in planned comparisons in ANCOVA
|
CC BY-SA 2.5
| null |
2011-03-17T01:17:24.617
|
2011-03-17T10:39:06.427
| null | null |
2025
|
[
"data-transformation",
"ancova",
"small-sample"
] |
8394
|
2
| null |
2917
|
0
| null |
You may consider trying the R package [compute.es](http://cran.r-project.org/web/packages/compute.es/index.html). There are several functions for deriving effect size estimates and the variance of the effect size.
| null |
CC BY-SA 2.5
| null |
2011-03-17T01:53:03.130
|
2011-03-17T02:47:49.370
|
2011-03-17T02:47:49.370
|
1381
| null | null |
8395
|
2
| null |
8157
|
4
| null |
If I understand you correctly, only points in some small volume of n-dimensional space meet your constraints.
Your first constraint constrains it to the interior of a hypersphere,
which reminds me of the comp.graphics.algorithms FAQ ["Uniform random points on sphere"](http://www.cgafaq.info/wiki/Uniform_random_points_on_sphere) and [How to generate uniformly distributed points in the 3-d unit ball?](https://stats.stackexchange.com/questions/8021/how-to-generate-uniformly-distributed-points-in-the-3-d-unit-ball)
The second constraint slices a bit out of the hypersphere,
and the other constraints further whittle away at the volume that meets your constraints.
I think the simplest thing to do is one of the approaches suggested by the FAQ:
- choose some arbitrary Axis-aligned bounding box that we are sure contains the entire volume.
In this case, -c < a_1 < c, -c < a_2 < c, ... -c < a_n < c
contains the entire constrained volume, since it contains the hypersphere described by the first constraint, and the other constraints keep whittling away at that volume.
- The algorithm uniformly picks points throughout that bounding box.
In this case, the algorithm independently sets each coordinate of a candidate vector to some independent uniformly distributed random number from -c to +c.
(I am assuming you want points distributed with equal density throughout this volume. I suppose you could make the algorithm select some or all coordinates with a Poisson distribution or some other non-uniform distribution, if you had some reason to do that).
- Once you have a candidate vector, check each constraint. If it fails any of them, go back and pick another point.
- Once you have a candidate vector, store it somewhere for later use.
- If you don't have enough stored vectors, go back and try to generate another one.
With sufficiently high-quality random number generator, this gives you a set of stored coordinates that meet your criteria with (expected) uniform density.
Alas, if you have a relatively high dimensionality n (i.e., if you construct each vector out of a relatively long list of coordinates), the inscribed sphere (much less your whittled-down volume) occupies a surprisingly small fraction of the bounding box's volume, so the algorithm might need to execute many iterations, most of them generating rejected points outside your constrained area, before finding a point inside it.
Since computers these days are pretty fast, will that be fast enough?
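A bare-bones version of that rejection loop in R (here `meets_constraints` is a placeholder for a function you would write yourself to check all of your constraints for a candidate vector):
```
sample_constrained <- function(m, n, c, meets_constraints) {
  out <- matrix(NA_real_, nrow = m, ncol = n)
  k <- 0
  while (k < m) {
    v <- runif(n, min = -c, max = c)   # uniform candidate in the bounding box
    if (meets_constraints(v)) {        # keep it only if every constraint holds
      k <- k + 1
      out[k, ] <- v
    }
  }
  out                                  # m accepted points, one per row
}
```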
| null |
CC BY-SA 2.5
| null |
2011-03-17T02:56:04.293
|
2011-03-17T02:56:04.293
|
2017-04-13T12:44:46.680
|
-1
|
3754
| null |
8396
|
1
| null | null |
5
|
5108
|
I do not know why the forecast package (version 2.16) in R does not seem to produce Theil's U. How can I obtain it?
I really appreciate your efforts.
|
How to produce Theil's U with package forecast 2.16 in R?
|
CC BY-SA 2.5
| null |
2011-03-17T03:27:07.263
|
2015-07-21T20:22:14.807
|
2011-03-17T07:22:09.587
|
2116
| null |
[
"r",
"forecasting",
"goodness-of-fit"
] |
8397
|
2
| null |
8377
|
2
| null |
You may be interested by [this](http://joyeuserrance.wordpress.com/2010/11/25/the-linear-model-2-simple-linear-regression/) introduction to the linear model (basis of almost any statistical analyses), and linear regression in particular:
- it thoroughly explains lots of the mathematical aspects of linear regression, by detailing all important equations (which is usually left for exercise anywhere else on the Internet);
- it uses a simple, yet informative enough, data set as an example;
- and it gives all the R commands required to do the computations step by step, as well as plot the results.
| null |
CC BY-SA 2.5
| null |
2011-03-17T04:08:33.860
|
2011-03-17T04:08:33.860
| null | null |
3459
| null |
8398
|
2
| null |
8396
|
12
| null |
It does. Use the `accuracy()` command.
Update: here is an example.
```
library(forecast)
x <- EuStockMarkets[1:200,1]
f <- EuStockMarkets[201:300,1]
fit1 <- ses(x,h=100)
accuracy(fit1,f)
ME RMSE MAE MPE MAPE MASE ACF1 Theil's U
0.8065983 78.1801986 63.2728352 -0.1725009 3.7876802 7.0619776 0.9586859 6.6120277
```
If you want the in-sample value of U (which is of limited value), the following will work:
```
fpe <- fit1$fitted[2:200]/x[1:199] - 1
ape <- x[2:200]/x[1:199] - 1
U <- sqrt(sum((fpe - ape)^2)/sum(ape^2))
```
| null |
CC BY-SA 2.5
| null |
2011-03-17T05:01:07.543
|
2011-03-17T09:28:55.800
|
2011-03-17T09:28:55.800
|
159
|
159
| null |
8399
|
1
| null | null |
3
|
2532
|
I posted [this](https://electronics.stackexchange.com/q/10632/2939) question on Electronics.Stackexchange and someone told me I'll be better off posting it here.
It's an implementation of the Particle Filter using MATLAB, but the results never follow the observations. I changed the weighting function to be Gaussian, but still to no avail.
NOTE: I haven't taken into account that the process/measurement noise is time-correlated. Will this significantly change the accuracy of my program? If so, how do I correct for it?
MY OLD QUESTION
I'm working on a particle filter experiment for multi-sensor fusion and I just programmed it in MATLAB. However, I get very low accuracies for my final values. Plus, I read a lot of literature where they talk about pdf of state and observations etc. but my practical knowledge is still extremely shaky, since I've had no formal training in filtering/Bayesian estimates etc.
I have devised my algorithm like this:
- Initialize particles = I'm doing it as a Gaussian distribution - 10 particles
- Move the 10 particles forward using the state transition equation:
\begin{equation}
X_{t+1} = A \times X_t + 0.1 \times rand()
\end{equation}
I'm only injecting Gaussian noise so far.
- Using the observation, calculate the weights for the particles. I do this like a root mean square of the difference between predicted state and observation. For example, if my azimuth(a) = 40, pitch(p) = 3, roll(r)=4 in my state and in my observation it is a = 39, p = 3, r = 3, then I do rms = sqrt((40-39)^2 + (3-3)^2 + (4-3)^2). Then my weight is assigned as 1/rms in order for it to be inversely proportional to the 'distance' between the prediction and observation
- Then I normalize these weights to get norm_weight = weight/norm(weight) so that their sum is equal to one.
- Then I continue forward for all the observations. I have not included resampling yet because when I run this experiment, I do not experience any degeneracy, which is also very puzzling.
Where am I going wrong? I realized that I haven't 'computed' a lot of the Bayesian equations given in the literature i.e.
\begin{equation}
p(x|z_t) = p(z_t|x)\times p(x)/p(z)
\end{equation}
and I don't know where it fits in here either. Can somebody please help me?
My Matlab code looks like this:
```
function resultx = particlefilter(resultx_1, observationx, A, noiseP)
for j = 1:length(observationx)
for i = 1:length(resultx_1)
apriori_state{i} = A*resultx_1{i} + noiseP;
        rms(i) = sqrt((observationx{j}(1) - apriori_state{i}(1))^2 + (observationx{j}(2) - apriori_state{i}(2))^2);
weight(i) = 1/rms(i);
end;
norm_weight = weight/norm(weight);
for i = 1:length(apriori_state)
plot(apriori_state{i});
end
disp(rms);
disp(norm_weight);
end
```
|
Particle filter in Matlab - what is going wrong?
|
CC BY-SA 2.5
| null |
2011-03-17T05:09:45.780
|
2011-06-15T13:00:42.117
|
2017-04-13T12:33:11.210
|
-1
| null |
[
"correlation",
"matlab",
"monte-carlo",
"particle-filter"
] |
8401
|
1
| null | null |
11
|
2566
|
I am working on a research project that is related to optimization and recently had an idea to use MCMC in this setting. Unfortunately, I am fairly new to MCMC methods so I had several questions. I'll start by describing the problem and then asking my questions.
Our problem boils down to estimating the expected value of a cost function $c(\omega)$, where $\omega = (\omega_1,\omega_2,...,\omega_h)$ is an $h$-dimensional random variable with density $f(\omega)$.
In our case, a closed-form version of $c(\omega)$ does not exist. This means that we have to use Monte Carlo methods to approximate the expected value. Unfortunately, it turns out that estimates of $E[c(\omega)]$ generated using MC or QMC methods have too much variance to be useful in a practical setting.
One idea we had was to use an importance sampling distribution to generate sample points that will produce a low-variance estimate of $E[c(\omega)]$. In our case, the ideal importance sampling distribution, $g(\omega)$, has to be roughly proportional to $c(\omega)f(\omega)$. Seeing how $g(\omega)$ is known up to a constant, I am wondering whether I can use MCMC along with the proposal distribution $c(\omega)f(\omega)$ to eventually generate samples from $g(\omega)$.
My questions here are:
- Can MCMC be used within this setting? If so, what MCMC method would be appropriate? I am working in MATLAB, so I have a preference for anything that already has a MATLAB implementation.
- Are there any techniques that I can use to speed up the burn-in period for MCMC? And how can I tell that the stationary distribution has been reached? In this case, it actually takes a fair bit of time to calculate $c(\omega)$ for a given $\omega$.
|
Using MCMC to evaluate the expected value of a high-dimensional function
|
CC BY-SA 2.5
|
0
|
2011-03-17T06:41:46.460
|
2014-09-22T20:13:17.627
|
2011-03-17T08:51:51.883
|
3572
|
3572
|
[
"sampling",
"markov-chain-montecarlo",
"matlab",
"expected-value"
] |
8402
|
2
| null |
8377
|
2
| null |
If you want a book specifically on this sort of regression - as opposed to data analysis in general - I recommend [Regression Analysis by Example by Chatterjee and Price](http://rads.stackoverflow.com/amzn/click/0471746967). Good, not technical, but it doesn't oversimplify.
| null |
CC BY-SA 2.5
| null |
2011-03-17T10:33:44.997
|
2011-03-17T10:33:44.997
| null | null |
686
| null |
8403
|
2
| null |
8393
|
4
| null |
First off, with such small N, I think anything that comes close to statistical significance is going to pass the inter-ocular trauma test - it will hit you between the eyes.
I am not sure what you mean by compare 3 groups with 2 groups. Do you mean you want to combine the three groups into one, and the 2 groups into another and then compare? Whether this is reasonable or not depends on whether it makes substantive sense to combine these groups. But then the group sizes are a bit larger, which is good.
ANCOVA is fine with only 2 groups.
No, what you do to the DV does NOT have to be done to the IVs. The topic of transformations is a complex one, but this is an easy question to answer. Of course, whether it is right to transform your DV in this way is a different matter.
| null |
CC BY-SA 2.5
| null |
2011-03-17T10:39:06.427
|
2011-03-17T10:39:06.427
| null | null |
686
| null |
8404
|
2
| null |
726
|
35
| null |
>
The plural of anecdote is not data.
-- Roger Brinner
(in the context of [Anecdotal_evidence](http://en.wikipedia.org/wiki/Anecdotal_evidence))
| null |
CC BY-SA 2.5
| null |
2011-03-17T10:53:40.767
|
2011-03-17T10:53:40.767
| null | null |
264
| null |
8405
|
1
|
8410
| null |
3
|
5488
|
I have continuous data $A$ and categorical data $O$. I need counts of $A$ in bins by group $O$. I'm working in R.
I know how to bin data (using `cut2`) and how to get the counts of $O$ (using `aggregate` or `by` or `describe` or `summaryBy`). I could also get what I need by running one of these functions with a subset (bin) of the data one by one, but is there a way to do this automatically in one command?
Update:
The output I want is just basic descriptive statistics (counts in this case, but also means and/or medians for example):
```
Age (A)      Outcome (O)
             Yes    No
18-27         10    12
28-37          2    11
38-47          9     7
```
For example:
```
by(subset(X, X$Age>47)$Age, subset(X, X$Age>47)$Outcome, length)
```
gives me the number of data points for ages > 47 where `Outcome` is true and where it is false. I want to do this for all bins automatically, without having to run separate commands with long > and < conditions for each bin.
|
Counts of binned data by group
|
CC BY-SA 2.5
| null |
2011-03-17T12:03:05.233
|
2011-03-28T06:07:52.767
|
2011-03-22T09:04:58.307
|
2824
|
2824
|
[
"r",
"descriptive-statistics",
"binning"
] |
8406
|
2
| null |
726
|
7
| null |
>
The researcher armed with a confidence interval, but deprived of the false respectability of statistical significance, must work harder to convince himself and others of the importance of his findings. This can only be good.
Michael Oakes, Statistical inference: A commentary for the social and behavioural sciences (NY: Wiley, 1986)
| null |
CC BY-SA 2.5
| null |
2011-03-17T12:05:58.503
|
2011-03-17T12:05:58.503
| null | null |
2669
| null |
8407
|
1
|
8417
| null |
26
|
34327
|
I would like to import the "Last Trade" stock price from Yahoo finance into R. The intention is to work with (almost) real time data. Are there any solutions?
Thanks in advance for any helpful comment.
|
Import stock price from Yahoo Finance into R?
|
CC BY-SA 2.5
| null |
2011-03-17T12:11:10.293
|
2016-03-01T17:19:29.597
|
2011-03-18T16:15:41.810
| null |
3757
|
[
"r"
] |
8408
|
2
| null |
8407
|
15
| null |
That is pretty easy given that [R](http://www.r-project.org) can read directly off a given URL. The key is simply to know how to form the URL. Here is a quick and dirty example based on code Dj Padzensky wrote in the late 1990s and which I have been maintaining in the [Perl](http://www.perl.org) module [Yahoo-FinanceQuote](http://dirk.eddelbuettel.com/code/yahooquote.html) (which is of course also on [CPAN here](http://search.cpan.org/~edd/Finance-YahooQuote/YahooQuote.pm)) for almost as long.
If you know a little R, the code should be self-explanatory. Getting documentation for the format string is a little trickier but e.g. the Perl module has some.
```
R> syms <- c("^GSPC", "^IXIC")
R> baseURL <- "http://download.finance.yahoo.com/d/quotes.csvr?e=.csv&f="
R> formatURL <- "snl1d1t1c1p2va2bapomwerr1dyj1x"
R> endURL <- "&s="
R> url <- paste(baseURL, formatURL, endURL, paste(syms, collapse="+"), sep="")
R> read.csv(url, header=FALSE)
V1 V2 V3 V4 V5 V6 V7
1 ^GSPC S&P 500 INDEX,RTH 1256.88 3/16/2011 4:04pm 0 0.00%
2 ^IXIC NASDAQ Composite 2616.82 3/16/2011 5:30pm 0 0.00%
V8 V9 V10 V11 V12 V13 V14
1 4282084608 0 N/A N/A 1256.88 1279.46 1249.05 - 1280.91
2 0 0 N/A N/A 2616.82 0.00 0.00 - 0.00
V15 V16 V17 V18 V19 V20 V21 V22
1 1010.91 - 1344.07 N/A N/A N/A N/A N/A N/A SNP
2 2061.14 - 2840.51 N/A N/A N/A N/A N/A N/A NasdaqSC
R>
```
Column three is your last trade. During open market hours you will get fewer NAs and more data variability. But note though that most prices are 15 or 20 minute delayed---but some indices are real-time. Real-time data is a big business and major revenue for exchanges so they tend not to give it away. Also, and if I remember correctly, the newer and more real-time displays on the Finance pages at Google and Yahoo use something more AJAXy that is harder to milk from the outside.
| null |
CC BY-SA 2.5
| null |
2011-03-17T12:29:03.407
|
2011-03-17T12:29:03.407
| null | null |
334
| null |
8409
|
2
| null |
8401
|
4
| null |
I would always remember that MCMC is just a numerical integration tool (and a rather inefficient one at that). It is not some magic/mystical thing. It is very useful because it is reasonably easy to apply. It does not require much thinking compared to some other numerical integration techniques. For instance, you do not have to take any derivatives. You only have to generate "random numbers".
However, like any numerical integration method, it is not a universal catch all tool. There are conditions when it is useful, and conditions when it isn't.
It may be wiser to set up another technique, depending on how big $h$ is, how fast your computer is, and how much time you are prepared to wait for results. A uniform grid may do the job (although this requires small $h$ or a long amount of waiting). The "job" is to evaluate the integral - the equation does not care what meaning you or I attach to the result (and hence it does not care whether we obtained the result randomly or not).
Additionally, if your estimates of $\omega$ are quite accurate, then $f(\omega)$ will be sharply peaked and closely resemble a delta function, so the integral effectively amounts to substituting $\omega\rightarrow\omega_{max}$.
Another numerical integration technique is using a taylor series under the integral. $f(\omega)\approx f(\omega_{max})+(\omega-\omega_{max})f'(\omega_{max})+\frac{1}{2}(\omega-\omega_{max})^{2}f''(\omega_{max})+\dots$
This is a useful strategy when the moments of $\omega$ are easily obtained.
Edwin Jaynes has a nice quote on this:
>
whenever there is a randomised way of doing something, there is a non-randomised way which yields better results, but requires more thinking
One "more thinking" way is to use "stratified MCMC" to do the integral. So rather than "randomly" pick a spot on the whole parameter space: divide it up into "strata". These "strata" should be picked so that you get a good range of the high part of the integral. Then randomly sample within each strata. But this will require you to write your own code I would imagine (i.e. more thinking).
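As a toy illustration of the stratified idea in one dimension (with a made-up cost function and a standard normal $f(\omega)$): instead of drawing all points independently, take one draw from each of $n$ equal-probability strata of $f$.
```
set.seed(1)
cost <- function(w) exp(-abs(w))        # made-up c(omega), purely for illustration
n <- 1000
est_plain <- mean(cost(rnorm(n)))       # ordinary Monte Carlo estimate of E[c(omega)]
u <- (seq_len(n) - runif(n)) / n        # one uniform draw in each stratum ((i-1)/n, i/n)
est_strat <- mean(cost(qnorm(u)))       # push the strata through the inverse CDF of f
c(plain = est_plain, stratified = est_strat)
```
In higher dimensions you would stratify on whichever directions matter most for $c(\omega)f(\omega)$, which is where the extra thinking comes in.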
| null |
CC BY-SA 2.5
| null |
2011-03-17T12:40:34.437
|
2011-03-17T12:40:34.437
| null | null |
2392
| null |
8410
|
2
| null |
8405
|
6
| null |
I really like using the `summary.formula` function in the `Hmisc` package for these tasks.
Some artificial data:
```
A <- rnorm(100, mean=50, sd=20)
G <- gl(2, 50)
B <- unclass(G)*10 + rnorm(100, sd=3)
```
Descriptives by `G`:
```
summary( G ~ cut2(A,g=4) + B, method="reverse")
Descriptive Statistics by G
+------------------------------+-----------------------------+-----------------------------+
| |1 |2 |
| |(N=50) |(N=50) |
+------------------------------+-----------------------------+-----------------------------+
|cut2(A, g = 4) : [ 5.19, 43.0)| 22% (11) | 28% (14) |
+------------------------------+-----------------------------+-----------------------------+
| [42.98, 49.7) | 18% ( 9) | 32% (16) |
+------------------------------+-----------------------------+-----------------------------+
| [49.68, 60.3) | 26% (13) | 24% (12) |
+------------------------------+-----------------------------+-----------------------------+
| [60.27,107.5] | 34% (17) | 16% ( 8) |
+------------------------------+-----------------------------+-----------------------------+
|B | 7.849173/10.153238/11.737293|18.766779/20.155744/22.139419|
+------------------------------+-----------------------------+-----------------------------+
```
Summarizing `B` by `G` and ranges of `A` with a custom function:
```
summary(B ~ cut2(A, g=4) + stratify(G), fun=function(x)c(mean(x),sd(x)))
B by g N=100
+--------------+-------------+--+---------+--------+--+--------+--------+
| | |N | |B |N | |B |
+--------------+-------------+--+---------+--------+--+--------+--------+
|cut2(A, g = 4)|[ 5.19, 43.0)|11|10.351295|3.069181|14|20.55056|3.659311|
| |[42.98, 49.7)| 9|10.269449|2.460152|16|19.18630|1.628746|
| |[49.68, 60.3)|13| 9.432093|2.928439|12|21.07909|3.064897|
| |[60.27,107.5]|17| 9.683748|2.800625| 8|20.65612|2.685183|
+--------------+-------------+--+---------+--------+--+--------+--------+
|Overall | |50| 9.870604|2.777788|50|20.25773|2.844987|
+--------------+-------------+--+---------+--------+--+--------+--------+
```
The documentation describes lots of options to control what is calculated and how the output looks like.
| null |
CC BY-SA 2.5
| null |
2011-03-17T13:04:36.523
|
2011-03-17T15:52:42.263
|
2011-03-17T15:52:42.263
|
279
|
279
| null |
8413
|
2
| null |
8377
|
4
| null |
The two together don't tell you anything more than the second one would alone! The main effects are uninteresting and misleading when there is interaction present. The second model tells you all you need to know. Here are a couple of plots, with R code, to help you understand what that second model looks like...
```
library(lattice)
a <- rep(seq(-1.37, 2.12, (2.12--1.37)/9),4)
b <- sort(rep(quantile(seq(-1.03, 1.30, .01),c(.2,.4,.6,.8)),10) )
y <- -0.06807 + (3.01517 * a) + (-0.00994 * b) + (-1.13782 *a*b)
xyplot(y~a|factor(b))
```

This one shows the estimated effect of a on y by levels of b. At each level of b, the relationship is positive. This is your significant positive slope for main effect of a in the presence of the interaction a:b.
```
a <- sort(rep(quantile(seq(-1.37, 2.12, .01),c(.2,.4,.6,.8)),10) )
b <- rep(seq(-1.03, 1.30, (1.30--1.03)/9),4)
y <- -0.06807 + (3.01517 * a) + (-0.00994 * b) + (-1.13782 *a*b)
xyplot(y~b|factor(a))
```

This image shows the estimated effects of b on y within levels of a. You can see why you have no significant main effect for b. The direction of the y~b relationship depends on the level of a. Thus, there is no independent relationship (imagine averaging those lines) but a significant interaction (a clear pattern when you take into account the level of a).
| null |
CC BY-SA 2.5
| null |
2011-03-17T14:42:00.950
|
2011-03-17T14:42:00.950
| null | null |
485
| null |
8415
|
1
|
8416
| null |
5
|
658
|
I have a time series, let's call it $y(t)$. Augmented Dickey-Fuller and Phillips-Perron tests indicate no unit root but a deterministic trend. The series is strongly autocorrelated. I estimated the following short-run dynamic model:
$$y(t) = \alpha + \beta\cdot y(t-1) + \gamma\cdot t + \varepsilon_t$$
where $\alpha$, $\beta$ and $\gamma$ are estimated coefficients. The estimated model has no statistical anomalies and the coefficients are significantly different from zero. The estimated value of $\gamma$ measures a trend. However, a long-run trend can be estimated by calculating $\gamma/(1-\beta)$.
We can estimate directly the following static, long-run model:
$$y(t) = \alpha + \gamma\cdot t + \varepsilon_t.$$
Question: What is the best, consistent model to apply in an analysis of a deterministic trend?
|
Short and long-run trend
|
CC BY-SA 2.5
| null |
2011-03-17T16:09:46.423
|
2011-03-17T16:46:02.370
|
2011-03-17T16:30:57.613
|
919
|
3762
|
[
"time-series",
"model-selection",
"trend"
] |
8416
|
2
| null |
8415
|
3
| null |
There is no quick solution to the problem, since the deterministic trend is just a function of $t$. We may denote this trend by $f(t)$ and it is not evident that the trend is linear.
So here are some quick tips on what to do in this case:
- Plot the original data, though the noise could be large, therefore...
- Make a technical decomposition of your time series into Trend + Cycle + Seasonal + Residual components and try to plot them (the implementation of this step depends on the software you use); a usual approach is to apply the Hodrick-Prescott filter, for example.
- After you decide on the form of the trend, you may conclude that some polynomial $\sum_{i=0}^n \alpha_i t^i$ is suitable; to check this you may apply the difference operator $n$ times, and the remainder term should then be close to white noise...
- If it is not, search in addition for the best fit from the $(S)ARMA(p, q)$ class (since there is no unit root in your case; in general you would work with SARIMA, and some advanced users would maybe go for SARFIMA-GARCH :)).
- Decide on the efficient estimate of the final model on the basis of any information criterion you think is relevant ($BIC$, for example). A rough R sketch of these steps is given below.
Anyway, time series models are needed for short-term forecasting (without clear structure and theoretical restrictions, long-term forecasts are useless); short- and long-run betas are therefore just a matter of curiosity. Since you do not have a structural model here, you do not need to care about consistency. For forecasting needs, any parsimonious (the simpler the better) model with the lowest information criterion (the best) will do nicely in the short run, even if the true data-generating process is more complicated.
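For what it is worth, a rough R sketch of steps 2-5, assuming `y` is a `ts` object with a seasonal frequency (the `mFilter` package supplies the Hodrick-Prescott filter and `forecast` automates the information-criterion search):
```
library(mFilter)    # Hodrick-Prescott filter
library(forecast)   # automatic ARIMA selection

plot(y)                                # step 1: look at the raw series
plot(stl(y, s.window = "periodic"))    # step 2: Trend + Seasonal + Remainder decomposition
plot(hpfilter(y))                      # step 2 (alternative): HP trend and cycle
tsdisplay(diff(y))                     # step 3: does differencing leave roughly white noise?
fit <- auto.arima(y, ic = "bic")       # steps 4-5: parsimonious model chosen by BIC
summary(fit)
```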
| null |
CC BY-SA 2.5
| null |
2011-03-17T16:46:02.370
|
2011-03-17T16:46:02.370
| null | null |
2645
| null |
8417
|
2
| null |
8407
|
14
| null |
This really isn't a statistics question (perhaps this could be moved to SO?), but there's a nice function in [quantmod](http://www.quantmod.com) that does what Dirk has done by hand. See `getQuote()` and `yahooQF()`. Typing `yahooQF()` will bring up a menu of all the possible quote formats you can use.
```
> require(quantmod)
> getQuote("QQQQ;SPY", what=yahooQF("Last Trade (Price Only)"))
Trade Time Last
QQQQ 2011-03-17 12:33:00 55.14
SPY 2011-03-17 12:33:00 128.17
```
| null |
CC BY-SA 3.0
| null |
2011-03-17T16:48:37.487
|
2011-07-01T07:25:32.483
|
2011-07-01T07:25:32.483
|
3651
|
1657
| null |
8418
|
1
| null | null |
1
|
2917
|
I have a dataset with repeated measures at different speeds. I've binned the speed ranges into 0-0.5, 0.5-1.5, and 1.5-infinity.
```
mydata$fact <- cut(mydata$section, breaks=c(0, 0.5, 1.5, Inf))
```
which gives the levels (0,0.5] (0.5,1.5] (1.5,Inf]
I first used the lme to test all speeds together to see if x and y are speed-dependent. Now I want to isolate speed bins, but I want to do it in the model instead of re-defining the factor. I can't seem to get it to work. "Fact" is what the speed bins correspond to.
```
lme(y ~ x + factor(repeatedmeasures) + fact, random = ~1 | z, data=mydata, na.action=na.omit)
```
Thanks for your help!
|
How can I specify a level of a factor while in an lme?
|
CC BY-SA 2.5
| null |
2011-03-17T16:52:45.347
|
2011-03-17T18:10:24.983
|
2011-03-17T16:56:34.597
|
919
| null |
[
"r",
"mixed-model"
] |
8419
|
1
|
8434
| null |
3
|
7056
|
Supposing a Bayesian classifier with multivariate normal densities, how do I find the error rate of the classifier when we have two classes?
I am using this:
When dimension $d = 1$:
$$P(x | \mu , \sigma^2) = N(x, \mu, \sigma^2) = \frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}} $$
In $d$ dimensions:
$$P(x | \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \cdot e^{-\frac{1}{2}{(x-μ)^T\Sigma^{-1}(x-μ)}}$$
where $\mu$ is in $\mathbb{R}^d$ and $\Sigma$ is a $d \times d$ variance-covariance matrix.
What would be the algebra to get the error rate, or could you give an example? I would like to do this in `Matlab`.
Let's say I have 2 classes and 2 attributes with 20 examples:
```
features:
1.8756 1.4236
2.0677 1.3759
0.8540 0.7782
0.5651 -0.3511
0.6103 -0.6901
-0.1945 1.6438
-0.2620 0.8022
1.5326 0.0188
0.3334 1.0578
0.8535 -1.8545
0.3066 1.9716
-1.4424 0.4216
0.3275 -0.2844
-0.0079 2.8506
0.0114 1.4001
-0.4049 -0.3981
-0.0913 2.2094
0.3376 -1.0467
0.3455 2.4960
0.3232 -0.5614
and targets
2
2
1
1
1
2
1
1
2
1
2
1
1
2
2
1
2
1
2
1
```
What is the process to classify them and get the error rate for these examples, if we divide the set into 15 examples for training and the other 5 for testing?
|
Bayesian classifier with multivariate normal densities
|
CC BY-SA 2.5
| null |
2011-03-17T17:18:25.883
|
2011-04-29T00:57:16.927
|
2011-04-29T00:57:16.927
|
3911
|
3681
|
[
"machine-learning",
"naive-bayes"
] |
8420
|
1
|
8450
| null |
4
|
196
|
Other than gapminder.org, could you please direct me to any good examples (interactive or static) of visualizations that make comparisons between countries?
|
Examples of visualization featuring comparisons between countries?
|
CC BY-SA 3.0
| null |
2011-03-17T17:33:20.797
|
2017-10-06T09:42:48.480
|
2017-10-06T09:42:48.480
|
11887
|
253
|
[
"data-visualization"
] |
8421
|
2
| null |
7720
|
2
| null |
To 'test' (i.e., evaluate) the proportional odds assumption in R, you can use residuals.lrm() in Frank Harrell Jr.'s Design package. If you type ?residuals.lrm, there is a quick-to-replicate example of how Frank Harrell recommends evaluating the proportional odds assumption (i.e., visually, rather than by a push-button test). Design estimates ordered logistic regressions using lrm(), which you can substitute for polr() from MASS.
For a more formal example of how to visually test the proportional odds assumption in R, see:
Paper: Ordinal Response Regression Models in Ecology
Author(s): Antoine Guisan and Frank E. Harrell
Source: Journal of Vegetation Science, Vol. 11, No. 5 (Oct., 2000), pp. 617-626
| null |
CC BY-SA 3.0
| null |
2011-03-17T17:52:24.270
|
2017-01-29T21:05:53.147
|
2017-01-29T21:05:53.147
|
145102
|
3763
| null |
8422
|
2
| null |
8206
|
2
| null |
Here is my implementation of your solutions. I've decided not to plot the mean value; there is not too much space left. Also, the line from 0 to 1 seemed odd.
Thanks a lot, everyone.
```
data <- read.table("roc_average.txt")
bxp <- boxplot(data, horizontal = TRUE, range = 0, axes = FALSE, col = "grey", add = TRUE, at = 0.2, varwidth=FALSE, boxwex=0.3)
valuelabels <- c(round(fivenum(data)[2], digits = 2), round(fivenum(data)[4], digits = 2))
text(x = valuelabels, y = c(0.35, 0.35), labels = valuelabels, font = 2)
mtext(c(min(round(data, digits = 2)),max(round(data, digits = 2))), side=1, at=bxp$stats[c(1,5)], line=-3, font = 2)
```

| null |
CC BY-SA 2.5
| null |
2011-03-17T17:58:12.653
|
2011-03-19T12:26:25.473
|
2011-03-19T12:26:25.473
|
3345
|
3345
| null |
8424
|
2
| null |
8418
|
1
| null |
If you just want to fit separate models for each level of your factor, then probably the easiest way is to use the `subset=` argument to `lme` (or any other GLM in R, btw). For example,
```
lme(y ~ x + factor(repeatedmeasures) + fact,
random = ~1 | z, data=mydata, subset=as.numeric(fact) == 1)
```
should subset on the first level of your factor. I used `as.numeric()` because you didn't provide labels to your newly created factor. To do so, you can fill the `labels=` argument when calling `cut()`, such that the above code could read
```
lme(..., subset=fact == "low")
```
for example (assuming the first level of `fact` is named `low`). In both cases, you can easily set up a loop (using something like `for (i in 1:nlevels(mydata$fact))` or `for (i in seq_along(levels(mydata$fact)))`; see the sketch below) to iterate over your different models.
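For instance, a minimal sketch of such a loop, collecting one fit per speed bin in a named list (note that `fact` is dropped from the fixed effects because it is constant within each subset):
```
library(nlme)
fits <- list()
for (lev in levels(mydata$fact)) {
  fits[[lev]] <- lme(y ~ x + factor(repeatedmeasures),
                     random = ~ 1 | z,
                     data = subset(mydata, fact == lev),
                     na.action = na.omit)
}
lapply(fits, summary)   # one summary per speed bin
```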
| null |
CC BY-SA 2.5
| null |
2011-03-17T18:10:24.983
|
2011-03-17T18:10:24.983
| null | null |
930
| null |
8425
|
1
| null | null |
5
|
291
|
I hope this is an appropriate forum for this question...if not, any pointers on a place to ask would be great. If my questions is not clear, please just let me know and I'll try to add information/explanation where I can.
Say I have a set of data points that I run through a [FLAME clustering algorithm](http://en.wikipedia.org/wiki/FLAME_clustering), and I get my set of cluster supporting objects, and the fuzzy memberships among them.
Next, let's say I get an additional set of data, for argument's sake, let's say it doubles the amount of data. If I want to add that to my clustering, without having to reprocess the old data (i.e. just using the cluster supporting objects, and adding these new observations to the cluster), will I be introducing some kind of bias? To get the most "accurate" picture, would it be better to reprocess ALL the data, re-establish CSO's, re-compute distances, etc.?
To add to the complexity, let's say I'm going to be periodically adding similar quantities of data points. Does the answer to the above change?
And I guess a final question - is there a clustering algorithm (or family of algorithms) that is not subject to extreme bias in the beginning, such that added data points can be added relatively pain-free? I'm sort of thinking that FLAME clustering, with its nice fuzziness properties is a reasonable way of going about this, but any more suggestions would be helpful.
Thanks!
|
Bias from increased information in FLAME clustering
|
CC BY-SA 2.5
| null |
2010-08-27T18:14:14.280
|
2012-06-10T14:34:20.997
|
2011-03-17T21:59:03.863
|
3369
|
338
|
[
"clustering",
"online-algorithms"
] |
8426
|
2
| null |
8425
|
1
| null |
I think it is possible to introduce the new set of data periodically by assigning each new point an initial value interpolated from its neighbours, where the candidate set of neighbours for each new data point could be the points of the triangle that the data point resides in.
I think what is more important is the order in which the initial values are assigned to this set of data points, since the data points closer to a CSO will be updated first. To determine the closeness of a data point to a CSO, we can use Euclidean distance, a weighted link, or both.
Note: some things to think about
- Data points that lie outside the contour of the current data set.
- The number of recurring updates. The data points tend to converge towards the CSOs as the number of updates increases, which can cause a saturation problem. To solve this problem, subsidizing the current data points or levitating the new data points is necessary.
| null |
CC BY-SA 2.5
| null |
2010-12-17T07:46:14.033
|
2011-03-18T08:01:26.373
|
2011-03-18T08:01:26.373
| null | null | null |
8428
|
1
| null | null |
11
|
6107
|
I am finishing up some analysis on a large set of data. I would like to take the linear model used in the first part of the work and re-fit it using an linear mixed model (LME). The LME would be very similar with the exception that one of the variables used in the model would be used as a random effect. This data comes from many observations (>1000) in a small group of subjects (~10) and I know that modeling the effect of subject is better done as a random effect (this is a variable that I want to shift). The R code would look like:
```
my_modelB <- lm(formula = A ~ B + C + D)
lme_model <- lme(fixed=A ~ B + C, random=~1|D, data=my_data, method='REML')
```
Everything runs fine and the results are very similar. It would be nice if I could use something like RLRsim or an AIC/BIC to compare these two models and decide which is the most appropriate. My colleagues don't want to report the LME because there isn't an easily accessible way of choosing which is "better", even though I think the LME is the more appropriate model. Any suggestions?
|
Comparing a mixed model (subject as random effect) to a simple linear model (subject as a fixed effect)
|
CC BY-SA 3.0
| null |
2011-03-17T19:28:38.270
|
2018-03-12T11:19:53.597
|
2018-03-12T11:19:53.597
|
28666
|
3727
|
[
"r",
"regression",
"hypothesis-testing",
"mixed-model",
"lme4-nlme"
] |
8429
|
2
| null |
8428
|
2
| null |
I am not totally sure what model is fitted when you use the lme function (I guess the random effect is supposed to follow a normal distribution with zero mean?). However, the linear model is a special case of the mixed model when the variance of the random effect is zero. Although some technical difficulties exist (because $0$ is on the boundary of the parameter space for the variance), it should be possible to test $H_0: \text{variance} = 0$ vs $H_1: \text{variance} > 0$...
EDIT
In order to avoid confusion: The test mentioned above is sometimes used to decide whether or not the random effect is significant... but not to decide whether or not it should be transformed into a fixed effect.
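Since RLRsim was mentioned in the question: that package implements exactly this kind of boundary test by simulating the null distribution of the (restricted) likelihood ratio statistic. A minimal sketch, assuming the mixed model from the question is refitted with lme4:
```
library(lme4)
library(RLRsim)
m <- lmer(A ~ B + C + (1 | D), data = my_data)   # the model with the random intercept
exactRLRT(m)   # tests H0: variance of the random intercept is zero
```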
| null |
CC BY-SA 2.5
| null |
2011-03-17T20:08:02.357
|
2011-03-17T20:34:47.713
|
2011-03-17T20:34:47.713
|
3019
|
3019
| null |
8430
|
2
| null |
8428
|
8
| null |
This is to add to @ocram's answer because it is too long to post as a comment. I would treat `A ~ B + C` as your null model so you can assess the statistical significance of a `D`-level random intercept in a nested model setup. As ocram pointed out, regularity conditions are violated when $H_0: \sigma^2 = 0$, and the likelihood ratio test statistic (LRT) will not necessarily be asymptotically distributed $\chi^2$. The solution I was taught was to bootstrap the LRT parametrically (its bootstrap distribution will likely not be $\chi^2$ either) and compute a bootstrap p-value like this:
```
library(lme4)
my_modelB <- lm(formula = A ~ B + C)
lme_model <- lmer(A ~ B + C + (1|D), data=my_data, REML=F)
lrt.observed <- as.numeric(2*(logLik(lme_model) - logLik(my_modelB)))
nsim <- 999
lrt.sim <- numeric(nsim)
for (i in 1:nsim) {
   y <- unlist(simulate(my_modelB))  # simulate a response under the null model
   nullmod <- lm(y ~ B + C)
   altmod <- lmer(y ~ B + C + (1|D), data=my_data, REML=F)
   lrt.sim[i] <- as.numeric(2*(logLik(altmod) - logLik(nullmod)))
}
mean(lrt.sim > lrt.observed) # bootstrap p-value
```
The proportion of bootstrapped LRTs more extreme than the observed LRT is the p-value.
| null |
CC BY-SA 2.5
| null |
2011-03-17T20:43:54.430
|
2011-03-17T20:50:30.897
|
2011-03-17T20:50:30.897
|
3432
|
3432
| null |
8431
|
2
| null |
8392
|
1
| null |
Purely from a machine learning perspective, I think it is possible to do the inference you want from this data. How reliable it will be will depend on how much data you have (the more the better) and on usage patterns (do students mostly do course-related or course-unrelated stuff on these computers). However, this is not a trivial task. In particular, I don't think there is a single R function which you can just run and it will give you the results. So it is worth exploring other options (particularly social engineering) to get the exact data.
If you do go for a statistical analysis, here are a few ideas to try.
One option is to use regression. Suppose you are interested in just one program and which courses it is used in. Define the following variables: $s_i$ is 1 if in a given term student $i$ used the program, 0 otherwise (you can potentially replace this with how often they used it); $x_{ij}$ is 1 if student $i$ is enrolled in class $j$ and 0 otherwise; $\beta_j$ is a coefficient that should be large if class $j$ uses the program and close to 0 otherwise. You can build a logistic regression $$\mathrm{logit}\, P(s_i = 1) = \sum_j \beta_j x_{ij}.$$ The variables $s_i$ and $x_{ij}$ are known, and the regression will give you the $\beta$'s. If $\beta_j$ is high, it is likely the course uses the program.
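A rough sketch of how that could be fit in R, with entirely made-up names (a data frame `usage` with one row per student, a 0/1 column `used_program`, and 0/1 enrolment indicators for each class):
```
# hypothetical data frame: used_program, class_101, class_102, class_103, ...
fit <- glm(used_program ~ class_101 + class_102 + class_103,
           family = binomial, data = usage)
summary(fit)   # classes with large, significant positive coefficients likely use the program
```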
Another option is to apply the author-topic model with some modifications. If your dataset is large enough, it can work well. The advantage, compared to logistic regression, is that it is more versatile, so it's easy to incorporate additional information that you have (such as that all students will use standard software like firefox, and there will be a small number of programs that only 1-2 students use). The disadvantage is that it is harder to implement.
I don't think PCA or LDA would work well here. Your clusters would be the sets of programs students in a particular class use. The problem is that two classes may overlap in what they use, in which case that component will not be picked up by PCA/LDA.
| null |
CC BY-SA 2.5
| null |
2011-03-17T21:11:16.093
|
2011-03-17T21:11:16.093
| null | null |
3369
| null |
8432
|
2
| null |
8392
|
0
| null |
Considering you observe co-occurrences of discrete variables, I would do the following:
```
num_programs = 3
num_students = 10
num_courses = 5
# How long a given student uses a given program
random_exponential = rexp(num_programs*num_students)
P = matrix(random_exponential, nrow = num_programs)
# Given a student, probability he/she uses the program
P = t(t(P)/apply(P, 2, sum))
# Which student attends which course
random_bernoulli = sample(c(0,1), num_courses*num_students, replace = TRUE)
C = matrix(random_bernoulli, nrow = num_courses)
# Given a course, probability a student attends this course
C = C/apply(C, 1, sum)
# Given a course, probability a program is used
solution = P%*%t(C)
```
This yields:
```
> random_exponential
[1] 1.19398141 0.67134008 0.09263323 0.38315782 1.54479115 0.32107116 1.33303520 1.33895914
[9] 0.66164138 0.48749235 0.44726465 1.17094483 1.08530260 3.17384173 0.81148181 0.95276800
[17] 1.09086805 0.04151037 0.03336502 1.14701888 0.37769900 0.56155261 0.49287836 1.10332629
[25] 2.54373887 2.32393698 0.44819034 3.46339510 1.24639024 1.36241789
> P
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,] 0.60981053 0.1703666 0.3998743 0.2315106 0.2140372 0.45693098 0.02141415 0.2602483
[2,] 0.34287825 0.6868730 0.4016513 0.2124064 0.6259270 0.52316137 0.73617320 0.2284216
[3,] 0.04731122 0.1427605 0.1984744 0.5560829 0.1600358 0.01990765 0.24241265 0.5113301
[,9] [,10]
[1,] 0.47851823 0.5703688
[2,] 0.43716996 0.2052616
[3,] 0.08431182 0.2243696
> random_bernoulli
[1] 1 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 1 1 1 1 1 0 0 0 0 0 1 1 0 0 0 1 0 0 1 1 0 0 0 1 1 1 0 0
[45] 1 0 0 0 0 0
> C
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 0.2000000 0 0.2000000 0.0000000 0.2 0.00 0.0000000 0.2000000 0.2000000 0
[2,] 0.0000000 0 0.0000000 0.2500000 0.0 0.25 0.2500000 0.0000000 0.2500000 0
[3,] 0.0000000 0 0.0000000 0.5000000 0.0 0.50 0.0000000 0.0000000 0.0000000 0
[4,] 0.5000000 0 0.0000000 0.5000000 0.0 0.00 0.0000000 0.0000000 0.0000000 0
[5,] 0.1666667 0 0.1666667 0.1666667 0.0 0.00 0.1666667 0.1666667 0.1666667 0
> solution
[,1] [,2] [,3] [,4] [,5]
[1,] 0.3924977 0.2970935 0.3442208 0.4206606 0.3335627
[2,] 0.4072096 0.4772277 0.3677839 0.2776423 0.3931168
[3,] 0.2002927 0.2256788 0.2879953 0.3016971 0.2733205
```
| null |
CC BY-SA 2.5
| null |
2011-03-17T22:52:13.613
|
2011-03-19T09:46:11.220
|
2011-03-19T09:46:11.220
|
1351
|
1351
| null |
8433
|
1
|
8437
| null |
6
|
1274
|
I have data on 70,000 students, nested in 120 schools. I'm starting with fixed effects for the schools, but at some point I might start letting intercepts and slopes vary.
Some key predictors (e.g. gpa, test scores) have non-linear relationships with some outcomes. I definitely want to interact these predictors with categorical variables, and maybe with continuous ones as well. Natural cubic splines might be the right approach, but I haven't seen any examples with interactions or nested data, so I could use some pointers. These non-linear relationships aren't the sole focus of my research, so I don't want to focus on them to the neglect of other aspects of the model.
I'd appreciate suggestions of R packages, examples of related work, and general advice on how to approach this. Thanks!
|
Interactions between non-linear predictors
|
CC BY-SA 2.5
| null |
2011-03-17T23:33:22.827
|
2011-10-08T15:32:08.783
|
2011-10-08T15:32:08.783
|
3748
|
3748
|
[
"r",
"mixed-model",
"interaction",
"splines",
"generalized-additive-model"
] |
8434
|
2
| null |
8419
|
5
| null |
I would have thought the "best" (most intuitive) estimate of error would be the probability that the classification is incorrect. Or alternatively/equivalently the odds against the class. Using the [wikipedia page](http://en.wikipedia.org/wiki/Naive_Bayes_classifier), you classify as Male or female. I would have thought the estimate of accuracy should be:
$$O(Male|evidence)=\frac{P(Male|evidence)}{P(Female|evidence)}$$
So you would report the data in terms of "the evidence gives odds O:1 in favour of this classification"
In a problem with more than 2 classes, you should report the "worst" odds ratio. That is given classes $C_{i}\;\;(i=1,\dots,R+1)$, one (and only one) of which is assumed to be true. Suppose you classify an observation as $C_{R+1}$, then you would report odds of:
$$O(C_{R+1}|evidence)=\frac{P(C_{R+1}|evidence)}{Max_{i\in[1,\dots,R]}P(C_{i}|evidence)}$$
So you would report the data in terms of "the evidence gives odds O:1 in favour of this classification against the next best alternative"
EDIT/UPDATE: in response to the new part of the question, here are some numbers for the calculation, using the first 15 observations to "train" the classifier. Because you are dealing with normal distributions, you only need the sufficient statistics, which in this case are:
$$ \begin{pmatrix}
\hat{\mu}_{1} & \hat{\mu}_{2} \\
\hat{\nu}_{1} & \hat{\nu}_{2} \\
\hat{\sigma}_{1} & \hat{\sigma}_{2} \\
\hat{\tau}_{1} & \hat{\tau}_{2}
\end{pmatrix}
=
\begin{pmatrix}
0.3798 & 0.6275 \\
-0.1449 & 1.6748 \\
0.8367 & 0.8685 \\
0.8200 & 0.5451
\end{pmatrix}
$$
Where the subscript denotes the class. $\mu$ and $\sigma$ denote the mean and standard deviation of the first attribute. $\nu$ and $\tau$ denote the mean and standard deviation of the second attribute. You could include correlation, but it is "safer" not to unless you know correlations actually exist. As these are just numbers to me, I have no reason to suppose them to be dependent. So I will not constrain them by forcing a dependence assumption.
Now you need a decision rule in order to classify an observation into 1 class or the other. The one I was suggesting was to use the odds ratio. So we need to calculate the probability of belonging to class 1, given the training data (denoted by D), the prior information (denoted by I), and the sample to test (denoted by y):
$$p(C_{1}|y,D,I)=\frac{p(C_{1}|D,I)p(y|C_{1},D,I)}{p(y|D,I)}
\rightarrow O(C_{1}|y,D,I)=\frac{p(C_{1}|D,I)p(y|C_{1},D,I)}{p(C_{2}|D,I)p(y|C_{2},D,I)}$$
Where $p(C_{1}|D,I) = \frac{8}{15}$ assuming complete initial ignorance (because I am ignorant prior to seeing the data). If you knew it was possible for both categories to occur prior to observing the training data, then the probability would be given by the rule of succession $\frac{9}{17}$.
$Pr(y|C_{1},D,I)$ is the posterior predictive distribution for class 1, given by:
$$p(y|C_{1},D,I)=\int p(y|\mu_{1},\nu_{1},\sigma_{1},\tau_{1},I) p(\mu_{1},\nu_{1},\sigma_{1},\tau_{1}|D,I)d\mu_{1}d\nu_{1}d\sigma_{1}d\tau_{1}$$
Now I will just give the posterior predictive assuming complete ignorance; it is not hard to derive: you use the prior $p(\mu_{1},\nu_{1},\sigma_{1},\tau_{1}|I)\propto\frac{1}{\sigma_{1}\tau_{1}}$ and do the necessary integrals. The result is a product of two Student-t densities (denoted by $St(x|\mu,\sigma,df)$):
$$p(y|C_{1},D,I)=St(y_{1}|\hat{\mu}_{1},\hat{\sigma}_{1}\sqrt{\frac{8+1}{8-1}},8-1)St(y_{2}|\hat{\nu}_{1},\hat{\tau}_{1}\sqrt{\frac{8+1}{8-1}},8-1)$$
Where $y_j$ is the attribute j value of the new data point. This should make it fairly obvious how it would generalise to more than two attributes. Similarly, we have $p(C_{2}|D,I) = \frac{7}{15}$ assuming ignorance or $\frac{8}{17}$ using the rule of succession. The posterior predictive is:
$$p(y|C_{2},D,I)=St(y_{1}|\hat{\mu}_{2},\hat{\sigma}_{2}\sqrt{\frac{7+1}{7-1}},7-1)St(y_{2}|\hat{\nu}_{2},\hat{\tau}_{2}\sqrt{\frac{7+1}{7-1}},7-1)$$
And so the final odds ratio is given by:
$$O(C_{1}|y,D,I)=\frac{8}{7}
\times
\frac{St(y_{1}|\hat{\mu}_{1},\hat{\sigma}_{1}\sqrt{\frac{9}{7}},7)}{St(y_{1}|\hat{\mu}_{2},\hat{\sigma}_{2}\sqrt{\frac{8}{6}},6)}
\times
\frac{St(y_{2}|\hat{\nu}_{1},\hat{\tau}_{1}\sqrt{\frac{9}{7}},7)}{St(y_{2}|\hat{\nu}_{2},\hat{\tau}_{2}\sqrt{\frac{8}{6}},6)}$$
I think you will agree that this number is sensible by any criteria. It "goes in all the right directions", and it appropriately accounts for the uncertainty in estimating the parameters of the model. Inserting these densities gives:
$$\frac{8}{7}
\times\frac{\hat{\sigma}_{2}\hat{\tau}_{2}}{\hat{\sigma}_{1}\hat{\tau}_{1}}
\times\left[\frac{\frac{\Gamma(4)}{\Gamma(\frac{7}{2})\sqrt{7}}}{\frac{\Gamma(\frac{7}{2})}{\Gamma(3)\sqrt{6}}}\right]^2
\frac{
\left[1+\frac{1}{8}\left(\frac{y_{1}-\hat{\mu}_{2}}{\hat{\sigma}_{2}}\right)^2
\right]^{\frac{7}{2}}
\left[1+\frac{1}{8}\left(\frac{y_{2}-\hat{\nu}_{2}}{\hat{\tau}_{2}}\right)^2
\right]^{\frac{7}{2}}
}{
\left[1+\frac{1}{9}\left(\frac{y_{1}-\hat{\mu}_{1}}{\hat{\sigma}_{1}}\right)^2
\right]^{\frac{8}{2}}
\left[1+\frac{1}{9}\left(\frac{y_{2}-\hat{\nu}_{1}}{\hat{\tau}_{1}}\right)^2
\right]^{\frac{8}{2}}
}
$$
Now in order to make a decision, you need to think about the consequences of making a wrong classification. Is it worse to classify a $1$ as a $2$ than to classify a $2$ as a $1$? If not, then the cut-off is simple: observations with $O(C_{1}|y,D,I)>1$ are classed as $1$, otherwise as $2$. The cut-off will slide up or down depending on "which is more important to get right". Although for this particular example, the classifier is so good that you hardly need to bother about this.
The table below shows the odds for each of the testing observations. Interestingly, attribute 2 is what is driving most of the changes in odds - attribute 1 does not appear to be as useful. In fact you can see that attribute 1 actually introduces more uncertainty in identifying group 2. This is obvious when you note that the mean and variance of attribute 1 are basically the same for each group (but the mean and variance of attribute 2 are quite different between the two groups):
$$
\begin{array}{c|c}
y_{1} & y_{2} & O(C_{1}|y_{1},y_{2},D,I) & O(C_{1}|y_{2},D,I) & \text{True Class} \\
\hline
-0.4049 & -0.3981 & 47.0 &37.0 & 1 \\
-0.0913 & 2.2094 & 0.109 & 0.089 & 2 \\
0.3376 & -1.0467 & 102.4 & 93.5 & 1 \\
0.3455 & 2.496 & 0.104 & 0.095 & 2 \\
0.3232 & -0.5614 & 54.6 & 49.7 & 1
\end{array}
$$
| null |
CC BY-SA 2.5
| null |
2011-03-17T23:34:51.363
|
2011-03-19T01:49:46.053
|
2011-03-19T01:49:46.053
|
2392
|
2392
| null |
8435
|
1
| null | null |
0
|
665
|
When to use multivariate logistic regression versus generalized linear mixed-effects models? What is the difference between the two?
Edit (in response to comments):
I was hoping to see multivariate as more than one covariate.
My problem set is the analysis of a longitudinal prospective cohort study where I'm measuring vitamin K intake in relation to the incidence of calcification. Vitamin K intake is primarily being assessed by food questionnaires, but I'm also looking at plasma vitamin K levels. The exposure variable and the outcome variable are ordinal/categorical. On the other hand, the outcome also has more than two categories.
I will be using a multivariate mixed model. Considering that I know my covariates, should it suffice to use a mixed regression model?
Thank you .
|
Generalized Linear Mixed Effects
|
CC BY-SA 2.5
| null |
2011-03-18T01:21:08.357
|
2011-03-29T06:32:50.853
|
2011-03-29T06:32:50.853
|
930
| null |
[
"regression",
"mixed-model",
"panel-data"
] |
8436
|
1
|
8440
| null |
8
|
2166
|
I have some data; it's a proportion $y$ of some stuff relative to everything, so it's bounded between 0 and 1 by definition. The proportion changes over time. Besides fairly high variance there is a step-like change about the middle of the time period; the step isn't very large, but it's there, and it happens pretty fast relative to the whole time period. So I have an S-curve, and I want to (and was told to) do logistic regression.
But: (1) this is not monotone, not even close, because of the high variance. (2) it does not go from 0 to 1. Rather, it goes from about 0.2 to about 0.8, if you look at the means. So it would seem that the right thing to do would be to fit something like $a\phi(x)+b$, where $\phi$ is the usual logistic S-curve, so we have 4 parameters altogether.
What bothers me is that I've never seen an example of logistic regression used like that (I'll admit to not seeing too many). It's not that I'm not sure how to implement this -- I'm pretty confident I can figure that out, although specific pointers will be appreciated -- but I am afraid that the assumption that the data is bounded by the limits of the S-curve is important for all the estimates afterward, like goodness of fit, significance of the step, confidence intervals, etc.
So:
- Are my fears justified?
- Can you point me to a relevant example in the literature? Not necessarily statistical literature, if some biologists (or whoever) use it, that's fine. Or even just someone mentioning this possibility.
- If there are indeed problems with this approach, what are the alternatives?
Update: Well, I was so ignorant that I could not even ask the questions properly. I was looking for multinomial (aka polytomous) regression.
|
Logistic regression for bounds different from 0 and 1
|
CC BY-SA 2.5
| null |
2011-03-18T02:30:48.143
|
2019-10-06T08:36:32.147
|
2011-03-19T17:35:35.033
|
2658
|
2658
|
[
"logistic"
] |
8437
|
2
| null |
8433
|
7
| null |
You could try generalized additive mixed models, handily implemented in the [gamm4](http://cran.r-project.org/web/packages/gamm4/index.html) package. The way I've used them, you can do something like:
```
library(gamm4)  # load the package first (it builds on mgcv and lme4)

fit1 = gamm4(
  formula = V1 ~ V2 + s(V3)
  , random = ~ (1|V4)
)
fit2 = gamm4(
  formula = V1 ~ V2 + s(V3, by = V2)
  , random = ~ (1|V4)
)
```
`fit1` seeks to predict V1 using V4 as a random effect and V2 and V3 as fixed effects, but where V3 is spline-smoothed. `fit2` seeks the same, except with the addition of permitting the smooth of V3 to vary within levels of V2, thus implementing an interaction. Comparison of `fit1` to `fit2` evaluates the necessity of permitting the interaction.
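One way to carry out that comparison, sketched under the assumption that `gamm4` returns its usual list with `$mer` (lme4) and `$gam` components, is a rough likelihood-ratio comparison of the mixed-model parts:

```
# compare the fits without and with the V3-by-V2 interaction smooth
anova(fit1$mer, fit2$mer)

# the smooth terms themselves can be inspected on the GAM side
summary(fit2$gam)
```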
| null |
CC BY-SA 2.5
| null |
2011-03-18T02:30:58.790
|
2011-03-18T02:30:58.790
| null | null |
364
| null |
8438
|
2
| null |
8382
|
1
| null |
[http://en.wikipedia.org/wiki/Truncation_%28statistics%29](http://en.wikipedia.org/wiki/Truncation_%28statistics%29)
This is not much help, but at least it gives the correct buzzword (truncated, not quantized; quantization is not your problem) and one pointer to a paper. This should do as a starting point for further search.
Oh, and the Winsorized mean is the exact opposite of what you want.
| null |
CC BY-SA 2.5
| null |
2011-03-18T02:54:43.513
|
2011-03-18T02:54:43.513
| null | null |
2658
| null |
8439
|
1
| null | null |
2
|
278
|
I would like to validate a commonly used questionnaire for internet-based administration. What statistical tests should I be using?
|
Validating a paper questionnaire in a web-based format
|
CC BY-SA 2.5
| null |
2011-03-18T03:00:31.760
|
2011-03-18T11:38:41.063
|
2011-03-18T08:16:00.613
|
183
| null |
[
"validation",
"survey"
] |
8440
|
2
| null |
8436
|
3
| null |
To begin with, I think we have to distinguish between [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression) and the (generalized) [logistic function](http://en.wikipedia.org/wiki/Generalised_logistic_curve). The latter may be viewed as a special case of the former, taking time as the only explanatory variable. It is then straightforward to see that the fitted process follows an $S$-shaped path that approaches its upper (or lower) limit as $t \rightarrow \infty$. In logistic regression, by contrast, movement along the $S$ curve is driven by other covariates (not time) that change up and down over time (in consumption structures these are income, tastes, prices, etc.). So there can be jumps or whatever, because nothing restricts the regression to move only up or only down toward the $S$ curve's bounds.
Since you are working with shares, $0$ and $1$ are the natural limits. When your conclusions rest only on historical data, you can never be sure that values beyond the observed bounds won't be hit in the future, and arguing that the process has never done so is not sound reasoning. Therefore your fears are not justified: logistic regression (but not fitting the logistic curve, which arises as the solution of a deterministic differential equation) will work just fine here. Note that there may be several categories that sum to 1, in which case you need a multinomial logit model to fit the structure.
As alternatives, you could use any model applicable to [discrete choice](http://en.wikipedia.org/wiki/Discrete_choice). Commonly used candidates are probit and logit models. Even if you think there is no decision in your model, virtually all structures in the world result from decision processes solved by humans, nature, or the aliens ^_^.
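To illustrate the multinomial-logit suggestion, here is a minimal hedged sketch in R using `nnet::multinom`; the three categories, the time trend, and all coefficient values are simulated stand-ins, not taken from the question:

```
library(nnet)

set.seed(7)
tm <- rep(1:100, each = 5)                 # time index
# linear predictors of categories B and C relative to the baseline category A
eta_B <- -2 + 0.03 * tm
eta_C <- -1 + 0.01 * tm
pA <- 1 / (1 + exp(eta_B) + exp(eta_C))
pB <- exp(eta_B) * pA
pC <- exp(eta_C) * pA
grp <- factor(sapply(seq_along(tm), function(i)
  sample(c("A", "B", "C"), 1, prob = c(pA[i], pB[i], pC[i]))))
dd <- data.frame(tm, grp)

m <- multinom(grp ~ tm, data = dd, trace = FALSE)
head(fitted(m))   # each row: fitted shares of A, B and C, summing to 1
```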
| null |
CC BY-SA 2.5
| null |
2011-03-18T05:48:45.150
|
2011-03-18T09:15:16.640
|
2011-03-18T09:15:16.640
|
2645
|
2645
| null |
8441
|
2
| null |
8439
|
5
| null |
The following are two complementary strategies that you could adopt.
### 1. Conduct your own empirical validation:
A simple strategy would be to run some form of experiment.
Randomly assign half of your participants to web based completion and the other half to paper based completion.
Then, examine whether statistical properties of the questionnaire are the same across administration formats; e.g., item mean, item standard deviation, intercorrelations between items, etc.
You may find structural equation modelling approaches useful.
You'll also need a sufficiently large sample size because you are effectively trying to prove the null hypothesis.
Other designs would also be possible. For example, you could get participants to complete both versions and counterbalance for order. This mixed design would have more power for a given sample size but would suffer the potential confound of carry-over effects.
### 2. Rely on existing research literature
There's been a lot of research looking at consistency of web based and paper based formats.
Over the years I've read research on online-paper consistency for personality tests, ability tests, questionnaires, and more.
I don't have specific references on hand, but you should do a literature review to gather relevant evidence from analogous existing studies.
In general, you'd want to think about the reasons why you would expect differences. For example, I've seen cases where data collected online is more noisy than face-to-face data.
You also need to consider what you mean by the paper-based "questionnaire": it can be administered by mail, face-to-face, self-administered with an investigator in the room, and so on.
| null |
CC BY-SA 2.5
| null |
2011-03-18T08:13:31.180
|
2011-03-18T11:38:41.063
|
2011-03-18T11:38:41.063
|
183
|
183
| null |
8442
|
1
| null | null |
4
|
212
|
My goal is to create CI for the CART prediction of new_x
Consider the following code:
```
require(rpart)
set.seed(147830)
n <- 100
x1 <- runif(n)
x2 <- rnorm(n)
y <- x1 + 2*x2 + rnorm(n, 0, .5)
DAT <- data.frame(y,x1,x2)
fit <- rpart(y ~ x1 + x2, data = DAT)
new_x <- data.frame(x1 = .5 , x2 = .25)
predict(fit, newdata = new_x) # 1.353142 # "should" be ~1
```
The only way I can think of at the moment is to bootstrap on x1/x2 from DAT, build many prediction models with them, for each predict new_x, and then use the lower/upper percentiles.
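For concreteness, the bootstrap idea just described might look like the following sketch (an illustration continuing the code above; the number of resamples is arbitrary):

```
# resample rows of DAT, refit the tree, and predict new_x each time
B <- 500
boot_pred <- replicate(B, {
  idx <- sample(nrow(DAT), replace = TRUE)
  fit_b <- rpart(y ~ x1 + x2, data = DAT[idx, ])
  predict(fit_b, newdata = new_x)
})
quantile(boot_pred, probs = c(0.025, 0.975))  # percentile interval for the prediction
```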
Is there some other way to do it? What are the advantages/disadvantages of it?
Thanks.
|
How to produce a CI for a value predicted in CART?
|
CC BY-SA 2.5
| null |
2011-03-18T08:46:16.270
|
2011-03-18T09:58:04.173
| null | null |
253
|
[
"r",
"predictive-models",
"cart",
"rpart"
] |
8443
|
2
| null |
7236
|
0
| null |
What I would do is supply the `prob` argument of `sample` with a weight for each data point, based on the group sizes of your variable. Example:
```
df <- data.frame(oks = sample(100),
grp = c(rep("trt1", times = 30), rep("trt2", times = 70)))
> head(df)
oks grp
1 40 trt1
2 29 trt1
3 12 trt1
4 25 trt1
5 19 trt1
6 45 trt1
```
Obviously:
```
> (df.prob <- table(df$grp))
trt1 trt2
30 70
```
You pass a vector of probabilities to `sample`. You can sort your `data.frame` by the desired variable (and use an adaptation of the solution provided here), or you could assign weights to individual rows based on the level of the treatment (not presented here, but it shouldn't be too hard to recode).
```
# note: sampling df$oks and using the result as row indices works here only because oks is a permutation of 1:nrow(df)
df[sample(x = df$oks, size = 30, prob = rep(df.prob/nrow(df), df.prob)), ] # divide by nrow(df) to get an appropriate weight per treatment
```
This is the approximate ratio you're looking for, right?
```
> table(df[sample(x = df$oks, size = 30, prob = rep(df.prob/nrow(df), df.prob)), ]$grp)
trt1 trt2
12 18
```
| null |
CC BY-SA 2.5
| null |
2011-03-18T09:21:26.243
|
2011-03-18T09:21:26.243
| null | null |
144
| null |
8444
|
2
| null |
8442
|
1
| null |
Couldn't you use the distribution of the observed `y` in the leaves/terminal nodes to get an idea about the precision of the estimated mean for new observations falling into that leaf? For CIs with large coverage you'd probably need a lot of data / a small tree so that each terminal node contains a lot of observations, though.
i.e., do something like:
```
quantile(DAT$y[predict(fit) == predict(fit, newdata = new_x)],
probs=c(.025, .05, .1, .9, .95, .975))
```
maybe with a more elegant method than what I used to find out which of the original `y` fall into the same leaf as the observation you are forecasting.
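As a hedged sketch of a more direct leaf lookup (assuming the `partykit` package, which can convert `rpart` trees; this is not part of the original answer):

```
library(partykit)
pfit <- as.party(fit)                                     # convert the rpart tree
leaf_new <- predict(pfit, newdata = new_x, type = "node") # leaf containing new_x
leaf_all <- predict(pfit, newdata = DAT,   type = "node") # leaf of each training row
quantile(DAT$y[leaf_all == leaf_new], probs = c(0.025, 0.975))
```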
| null |
CC BY-SA 2.5
| null |
2011-03-18T09:58:04.173
|
2011-03-18T09:58:04.173
| null | null |
1979
| null |
8446
|
1
| null | null |
4
|
672
|
My supervisor asked me to find out which distribution represents a particular situation.
I have a VoIP generator that generates calls "uniformly" distributed between callers. This means that the volume-per-caller distribution is "almost" uniform between a minimum and a maximum. So, by running a test with 10000 users, a minimum of 30 calls/week and a maximum of 90 calls/week, I find that not all users respect these limits: some users generate <30 calls and others generate >90 calls. It is clear that the resulting distribution is not uniform.
The situation is this:

He said that I have to perform some sort of numerical procedure in order to find a formula that could define this distribution. Initially, as written above, we wanted to obtain a uniform (min, max) distribution (the green area in the figure), but that is not what we got, as shown by a chi-square test.
Moreover, the curve in the figure is not symmetric: the probabilities of generating fewer than 30 calls/week and more than 90 calls/week are not identical (the latter is higher). The variability of the number of generated calls increases with the call generation rate.
"Actually implementation of this distribution is nothing but assigning different call rates in a range for users in domain which indicates implementation of several delta functions. As the call rates increases the variability of the generated calls also increases with the average call rate and this leads to the asymmetric behavior of the curve." [cit. from the Voipgenerator documentation]
Can someone help me? I think I cannot use a Q-Q plot at this point because I don't know which theoretical distribution I should compare my empirical data against.
Sorry if I bothered you with a similar problem a few weeks ago; initially we thought we could change the implementation, but now we cannot. Hence I have to identify the type of distribution I obtained, and I don't know how to do this.
|
What kind of distribution is this "almost" uniformly distributed data for calls/week?
|
CC BY-SA 4.0
|
0
|
2011-03-18T10:03:48.483
|
2018-08-24T06:34:50.247
|
2018-08-20T06:45:47.427
|
176202
|
3342
|
[
"hypothesis-testing",
"distributions",
"numerics",
"distribution-identification"
] |
8447
|
1
| null | null |
4
|
4030
|
I want to graphically show the change in relative risk along with confidence limits over 17 years.
Can I use a forest plot without a meta-analysis, since it has the added advantage of tabulating Z-values and p-values as well? Here I want to show the progressive change in the relative risk, NOT a meta-analysis.
Or is there a better alternative for the graphic presentation?
|
Visualizing change in risk ratio along with confidence limits
|
CC BY-SA 2.5
| null |
2011-03-18T10:55:33.273
|
2011-09-05T06:24:41.813
|
2011-03-18T16:01:58.567
| null |
2956
|
[
"confidence-interval",
"relative-risk",
"graphical-model"
] |
8448
|
2
| null |
8405
|
2
| null |
@SabreWolfy: I've edited my answer to add an additional factor by binning data:
```
library(Hmisc)
library(doBy)
set.seed(123)
A=rnorm(100, mean=50, sd=20)
bin.data<-data.frame(bins.A=cut2(A, g=4),G=gl(2,50), A)
summaryBy(A~bins.A+G,data=bin.data,FUN=each(length,mean,median))
```
will give the following output, where `A` plays the role of age:
```
bins.A G A.length A.mean A.median
1 [ 3.82,40.2) 1 14 29.44193 29.06172
2 [ 3.82,40.2) 2 11 29.41250 29.62849
3 [40.18,51.4) 1 12 44.04858 43.98966
4 [40.18,51.4) 2 13 45.93984 45.48458
5 [51.41,64.0) 1 11 57.00405 58.01543
6 [51.41,64.0) 2 14 57.24880 57.11421
7 [64.03,93.7] 1 13 74.35300 74.15924
8 [64.03,93.7] 2 12 77.01414 75.09460
```
for each combination of the bin factor and the grouping factor.
| null |
CC BY-SA 2.5
| null |
2011-03-18T11:12:23.097
|
2011-03-18T18:19:36.817
|
2011-03-18T18:19:36.817
|
3774
|
3774
| null |
8449
|
2
| null |
8446
|
1
| null |
Try Q-Q plots (or P-P plots, though these are somewhat less widely used) of your empirical distribution against each hypothetical type of distribution. The parameters of the latter are usually estimated by the software from your empirical data, although you can also input the parameter values you wish to check.
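For example, a minimal base-R sketch (the data here are simulated stand-ins for the calls-per-week sample, and the uniform bounds are just the sample range):

```
set.seed(1)
x <- c(runif(9000, 30, 90), rnorm(1000, 95, 10))  # roughly uniform with a heavier upper tail
# Q-Q plot of the sample against a uniform distribution fitted by its range
qqplot(qunif(ppoints(length(x)), min(x), max(x)), x,
       xlab = "Theoretical uniform quantiles", ylab = "Empirical quantiles")
abline(0, 1, col = "red")  # systematic departures from this line indicate non-uniformity
```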
| null |
CC BY-SA 2.5
| null |
2011-03-18T11:24:22.680
|
2011-03-18T11:24:22.680
| null | null |
3277
| null |
8450
|
2
| null |
8420
|
2
| null |
Although I haven't seen international examples, I think [InstantAtlas](http://www.instantatlas.com/) offers quite a lot of nice features that could be useful in your case:

[StatPlanet](http://www.sacmeq.org/statplanet/) might be another (free) option:

| null |
CC BY-SA 2.5
| null |
2011-03-18T11:48:32.543
|
2011-03-18T11:48:32.543
| null | null |
22
| null |
8451
|
2
| null |
8436
|
5
| null |
You are looking for the wrong keywords. Logistic regression is for 0-1 outcomes, where the probability of being 1 - not the data points themselves - is modeled with an S-shaped (logistic) function. Look instead for nonlinear regression, specifically the four-parameter logistic model: $y = A + (B-A)/(1+\exp(-(a+bx))) + \epsilon$ (it can show up under other parametrizations). You will find lots of ELISA-related material if you search for this model.
There is specialized software for fitting this, and of course the major statistical packages can handle it as well. Unfortunately, if there is a lot of variability, estimation can be difficult and the fitting process may "not converge".
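As a hedged sketch in R (the data are simulated, and the built-in self-starting `SSfpl` uses a slightly different but equivalent parametrization, $A + (B-A)/(1+\exp((x_{mid}-x)/scal))$):

```
set.seed(1)
x <- seq(-3, 3, length.out = 60)
y <- 0.2 + 0.6 / (1 + exp(-2 * x)) + rnorm(60, 0, 0.05)  # S-curve running from ~0.2 to ~0.8
dat <- data.frame(x, y)

# four-parameter logistic fit; A and B estimate the lower/upper asymptotes
fit <- nls(y ~ SSfpl(x, A, B, xmid, scal), data = dat)
coef(fit)
```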
| null |
CC BY-SA 2.5
| null |
2011-03-18T13:04:44.530
|
2011-03-18T13:04:44.530
| null | null |
279
| null |
8452
|
2
| null |
8435
|
2
| null |
This depends on what you mean by "multivariate". Doing a quick google search, the term seems to refer to "more than one covariate". So for "ordinary logistic regression" the model equation is:
$$\log\left(\frac{\pi}{1-\pi}\right)=\beta_0 + \beta_1 X_1$$
For "multivariate logistic regression" you have:
$$\log\left(\frac{\pi}{1-\pi}\right)=\beta_0 + \beta_1 X_1 + \beta_2 X_2 +\dots+ \beta_p X_p=X\beta$$
where $X\beta$ is the "matrix version" - just more compact notation. But as @onestop points out, the term "multivariate" could also refer to more than one response probability (a multinomial outcome). The model equations are then:
$$\log\left(\frac{\pi_1}{\pi_m}\right)=X^{(1)}\beta^{(1)}$$
$$\log\left(\frac{\pi_2}{\pi_m}\right)=X^{(2)}\beta^{(2)}$$
$$\dots$$
$$\log\left(\frac{\pi_{m-1}}{\pi_m}\right)=X^{(m-1)}\beta^{(m-1)}$$
where the brackets $(1),(2),\dots,(m-1)$ just indicate that the covariates and betas can differ across the equations. The "multinomial" model simplifies back to the "multivariate" one when $m=2$ (only 2 categories), and the "multivariate" model simplifies back to "ordinary logistic regression" when you have only one covariate.
Now these are all "independence"-based models, in that, given the covariates and betas, any one individual's probability tells you nothing about any other's. When you move from this framework, called "generalised linear modelling" or GLM in the literature, to the "extra M" of GLMM, the observations are assumed to depend on each other in some way. The easiest reason I can see for this happening is that you know a particular attribute will increase or decrease the probability for a certain group of observations, but you have not measured that covariate in your data (e.g. you are analyzing the incidence of cancer among people, but you do not know which people in your data set are smokers; you do, however, have variables which could predict smoking status). The usual GLM model goes:
$$DATA=MODEL+NOISE^{(1)}$$
(NOTE: although the likelihood isn't linear for GLMs, the usual Fisher-scoring/IRLS or Newton-Raphson algorithm means this is the effective model one is actually estimating, because it is based on a one-term Taylor series expansion, "the delta method".)
Whereas the GLMM goes:
$$DATA=MODEL+MODEL(UNOBSERVED)+NOISE^{(2)}$$
The unobserved model is sometimes written as $Z\gamma$. So your model equation becomes:
$$\log\left(\frac{\pi}{1-\pi}\right)=X\beta+Z\gamma$$
The vector $\gamma$ is often called a "random effect" (and is usually assumed to have a normal distribution) and the vector $\beta$ is called a "fixed effect" (although I don't particularly like this terminology).
The idea is to "decompose" the noise into a "correlated part" and an "independent part". You fit the "fixed effect" part of the model - this is like the "global model". The GLMM basically allows you to recognise that there can be "local effects" in your data - systematic deviations from the "global model" for small groups - in a very similar way to ANOVA and ANCOVA. In fact, in all of the GLMMs I have fit, the $Z$ matrix has been a classical ANOVA design matrix.
So basically what the GLMM does is strike a compromise between the global model and the local model. The compromise depends on the estimated variance of the "random effect" and the estimated variance of the noise.
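To make this concrete, here is a minimal hedged sketch of a random-intercept logistic GLMM in R using `lme4::glmer`; the cancer/clinic/age setup and all numbers are invented for illustration and are not from the original answer:

```
library(lme4)

set.seed(42)
d <- data.frame(
  clinic = factor(rep(1:20, each = 25)),   # grouping factor -> random intercept (Z gamma)
  age    = rnorm(500, 50, 10)              # measured covariate -> fixed effect (X beta)
)
u   <- rnorm(20, 0, 0.8)                   # unobserved clinic-level effects
eta <- -4 + 0.06 * d$age + u[d$clinic]     # linear predictor on the logit scale
d$cancer <- rbinom(500, 1, plogis(eta))

m <- glmer(cancer ~ age + (1 | clinic), data = d, family = binomial)
summary(m)  # fixed-effect estimate for age, variance of the clinic random intercept
```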
| null |
CC BY-SA 2.5
| null |
2011-03-18T13:08:03.853
|
2011-03-18T13:08:03.853
| null | null |
2392
| null |
8453
|
1
| null | null |
2
|
340
|
Does $R^2 \times \mbox{slope}$ provide an estimate of cause and effect? For example, if the $R^2$ of miles driven and coffee consumed is $x\%$ and the slope of the coffee-and-miles regression is $y$ (miles/coffee), can you say "$z$ of each cup of coffee drunk is related to the number of miles driven"?
|
Can you use $R^2$ and regression to estimate cause and effect?
|
CC BY-SA 3.0
| null |
2011-03-18T13:17:29.973
|
2014-05-25T15:29:49.563
|
2014-05-25T15:29:49.563
|
27403
| null |
[
"regression",
"correlation"
] |
8455
|
1
|
8531
| null |
10
|
1265
|
The recent events in Japan have made me think about the following.
Nuclear Plants are usually designed to limit risk of serious accidents
to a 'design basis probability' for example, say, 10E-6/year.
This is the criteria for a single plant.
However, when there is a population of hundreds of reactors, how do we combine the individual probabilities of a serious accident?
I know I could probably research this myself, but having found this site I am sure there is someone who will be able to answer this question quite easily.
Thanks
|
Combining probabilities of nuclear accidents
|
CC BY-SA 2.5
| null |
2011-03-18T13:59:43.307
|
2011-03-21T17:36:23.963
| null | null | null |
[
"probability"
] |
8456
|
1
| null | null |
5
|
46489
|
I am a senior doing a science fair project on the efficiency of fuels. For each fuel I tested, I have a time series of temperature.
I want to plot these in an Excel XY chart. How do I plot them if the x values (time points) are not the same for each data series?
|
Excel xy chart with unequal x values in series
|
CC BY-SA 2.5
| null |
2011-03-18T14:03:39.683
|
2023-01-25T10:51:48.917
|
2011-03-18T16:07:45.020
| null | null |
[
"data-visualization",
"excel"
] |
8457
|
2
| null |
8455
|
1
| null |
As commenters pointed out, this relies on the very strong assumption that the plants are independent.
Let the probability that a plant blows up be $p$. Then the probability that a plant does not blow up is $1-p$. Then the probability that $n$ plants do not blow up is $(1-p)^n$. The expected number of plants blown up per year is $np$.
In case you're interested: [binomial distribution](http://en.wikipedia.org/wiki/Binomial_distribution).
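As a purely illustrative calculation (the reactor count $n$ is an assumed round number, not taken from the question):

```
p <- 1e-6       # design-basis probability of a serious accident per plant per year
n <- 440        # assumed number of reactors in operation
1 - (1 - p)^n   # P(at least one serious accident somewhere in a year), about 4.4e-4
n * p           # expected number of serious accidents per year, 4.4e-4
```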
| null |
CC BY-SA 2.5
| null |
2011-03-18T14:09:25.707
|
2011-03-18T16:04:05.963
|
2011-03-18T16:04:05.963
|
2860
|
2860
| null |
8458
|
2
| null |
8453
|
5
| null |
Regression gives you, essentially, a correlation. Correlation is not causality -- or else global warming would be caused by a decline in global number of pirates.
| null |
CC BY-SA 2.5
| null |
2011-03-18T14:17:10.117
|
2011-03-18T14:17:10.117
| null | null |
2044
| null |
8459
|
1
| null | null |
2
|
131
|
I recently was asked to help characterize some surfaces (raised ink dots on paper) that were scanned with a profilometer, yielding height data across a grid of x & y coordinates. The critical question was to characterize the average dot height. The data were rather noisy as the paper was not perfectly flat, so I played around with various approaches to removing this noise and came upon an approach that appears to do very well (to the eye at least). See below for pseudocode and R code. The logic is that every row and column definitely has an area with no dots on it, so ideally the minimum for each row and column should be around zero. However, one obtains different surfaces depending on whether row minimums are shifted to zero or whether column minimums are shifted to zero. To solve this ambiguity, I simply do both and average the results. I then noticed that repeating this process many times appeared to converge on a very nicely de-noised final result. Is there a name for this approach or something similar?
```
#Pseudocode ("data" is a RxC matrix of height values)
repeat the following 1000 times:
create a copy of data, call it "copy1"
for every row of height data in copy1:
subtract the minimum of that row from each value in the row (i.e. shift the row's data so that the minimum is zero)
create a second copy of data, call it "copy2"
for every column of height data in copy2:
subtract the minimum of that column from each value in the column (i.e. shift the column's data so that the minimum is zero)
replace data with the cell-wise average of copy1 and copy2
```
_
```
#R-code (data is in "a")
for(iter in 1:1e3){                 # repeat the shift-and-average step many times
  c1 = a
  for(j in 1:ncol(c1)){             # shift each column so its minimum is zero
    c1[,j] = c1[,j] - min(c1[,j])
  }
  c2 = a
  for(j in 1:nrow(c2)){             # shift each row so its minimum is zero
    c2[j,] = c2[j,] - min(c2[j,])
  }
  a = (c1 + c2)/2                   # average the two shifted copies
}
```
|
Does this de-noising algorithm have a name?
|
CC BY-SA 2.5
| null |
2011-03-18T15:26:07.260
|
2011-03-19T15:56:45.143
| null | null |
364
|
[
"smoothing"
] |
8460
|
2
| null |
8420
|
0
| null |
I like Visualizing Data Patterns with Micromaps by Daniel Carr and Linda Pickle, especially the concept of linked micromaps, which is influenced by the idea of small multiples (Tufte) and the idea of dot charts (Cleveland). I strongly recommend reading it.
| null |
CC BY-SA 2.5
| null |
2011-03-18T15:49:52.650
|
2011-03-18T15:49:52.650
| null | null | null | null |