Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6570 | 1 | 6576 | null | 7 | 2015 | I want to conduct a poll on the quality of a product containing several questions with five possible answers:
- Very poor
- Poor
- OK / No opinion
- Good
- Very good
A colleague has advised me to ditch option 3 (OK / No opinion) to force people to choose.
Which will produce the most reliable / useful data? Is there a preferred option or is it dependent on other factors (if so, what)?
I understand that Gallup usually has five options, which is why I chose five.
| Should a multiple choice poll contain a neutral response? | CC BY-SA 2.5 | null | 2011-01-26T13:24:23.333 | 2014-12-22T18:14:19.050 | 2011-01-26T15:12:29.600 | 919 | 1259 | [
"polling"
]
|
6571 | 1 | null | null | 3 | 478 | Let us say we have some demographic time series data which tells us how many hours people spend in front of a computer screen each day, grouped by age and gender:
```
set.seed(42)
dates = seq(as.Date("2011/1/1"), by="day", length.out = 100)
male.age1 = round(runif(100, min = 1, max = 10))
female.age1 = round(runif(100, min = 1, max = 10))
male.age2 = round(runif(100, min = 1, max = 10))
female.age2 = round(runif(100, min = 1, max = 10))
df = data.frame(dates = dates, male.age1 = male.age1, female.age1 = female.age1, male.age2 = male.age2, female.age2 = female.age2)
```
which looks like this:
```
> df[1:5,]
dates male.age1 female.age1 male.age2 female.age2
1 2011-01-01 9 7 9 5
2 2011-01-02 9 3 6 5
3 2011-01-03 4 3 9 2
4 2011-01-04 8 5 5 4
5 2011-01-05 7 9 2 9
etc...
```
I am just wondering what kind of analysis would be appropriate for this type of data, and if you could give me some examples? My initial thinking is to try to see how certain types of people act on certain dates, though I don't know how to go about that. I think what I'm finding hard is that the dates are not really independent of each other (i.e. what happens today might affect what happens tomorrow).
Many thanks in advance for your time.
Note: This isn't homework or anything like that, just curiosity and trying to learn R.
Note 2: Is there any benefit in reshaping the data as follows (I came across the reshape package a few days ago, and it looks really cool):
```
library(reshape)
library(reshape2)
y = melt(df, id = "dates")
y$gender = NA
ind = grep("female", y$variable, fixed = TRUE)
y[ind, "gender"] = "female"
y[-ind, "gender"] = "male"
y$gender = factor(y$gender)
levels(y$variable) <- c("age1", "age2", "age3", "age4")
```
Which gives:
```
> y[with(y, order(dates)), ][1:10,]
dates variable value gender
1 2011-01-01 age1 9 male
101 2011-01-01 age2 7 female
201 2011-01-01 age3 9 male
301 2011-01-01 age4 5 female
2 2011-01-02 age1 9 male
102 2011-01-02 age2 3 female
202 2011-01-02 age3 6 male
302 2011-01-02 age4 5 female
3 2011-01-03 age1 4 male
103 2011-01-03 age2 3 female
etc...
```
| What types of analysis are appropriate for demographic time series data? | CC BY-SA 2.5 | null | 2011-01-26T14:02:10.713 | 2011-01-26T19:19:10.297 | 2011-01-26T14:07:52.420 | 539 | 539 | [
"r",
"time-series"
]
|
6572 | 2 | null | 6570 | 0 | null | Forcing the respondent to give a positive or a negative answer is, in this situation, not appropriate; the respondent may genuinely be undecided, and that is all the more likely when the product is new. If you are developing an instrument to measure quality, it is better to use the five options you listed.
| null | CC BY-SA 2.5 | null | 2011-01-26T15:32:22.127 | 2011-01-26T15:32:22.127 | null | null | 2926 | null |
6573 | 2 | null | 6554 | 3 | null | (Disclaimer: I am by no means very experienced in Bayesian stats or using BUGS!)
What parameters would you like to estimate? What does "mu = 10, tau=1/10" mean? Are these numbers considered to be priors?
Let us assume you're interested in estimating $\mu$ and $\sigma^2$. (Keep in mind that Win/OpenBUGS use the precision, i.e., prec = 1/$\sigma^2$.) So, your BUGS model could look like this:
```
model {
for (i in 1:3) {
x[i] ~ dnorm(mu, prec)
}
## priors (will have strong impact on the parameter estimation)
prec ~ dgamma(0.1, 0.001)
mu ~ dnorm(2.0, 0.0001)
var <- 1/prec
}
## data vector (BUGS follows the S/R notation, i.e. use the c() function)
list(x=c(1.4, 2.1, 1.1))
```
I am using OpenBUGS (10,000 iterations, burn-in: 5000; starting values were generated by OpenBUGS) and here are my results:
```
mean sd MC_error val2.5pc median val97.5pc start sample
mu 1.544 0.6666 0.01039 0.3894 1.535 2.728 5000 5002
prec 4.155 3.942 0.06772 0.1263 3.024 14.38 5000 5002
var 1.31 5.893 0.1191 0.06952 0.331 7.917 5000 5002
```
You might realize that the variance estimator has been heavily affected by the choice of the prior.
As @whuber already mentioned, I strongly recommend that you check out the many examples that come with any of the BUGS packages. You also might be interested in ["Bayesian Methods for Ecology"](http://arcue.botany.unimelb.edu.au/bayescode.html) or ["Bayesian Modeling Using WinBUGS: An introduction"](http://stat-athens.aueb.gr/~jbn/winbugs_book/).
| null | CC BY-SA 2.5 | null | 2011-01-26T15:40:51.817 | 2011-01-26T15:45:58.250 | 2011-01-26T15:45:58.250 | 307 | 307 | null |
6575 | 1 | 6589 | null | 7 | 1125 | Consider the following picture representing the experimental data sequence obtained by two 1D-sensors (each point of the sequence is plotted on XY plane according to the respective sensor reading):

It's visually obvious that two modes have been registered. Let's assume that generally those two modes interfere, so there's no easy possibility to separate them by isolating certain sequence segments. I try the classic principal component analysis by finding the covariance matrix, then finding the set of eigenvalues and corresponding eigenvectors:

White box dimensions represent the magnitudes of the eigenvalues; box orientation represents the directions of the eigenvectors.
It's clear that PCA first component deviates slightly from the high-magnitude mode direction, while the second component deviates greatly due to skewness of the lower-magnitude mode original direction.
It is known that PCA, being based on eigenvectors, results in orthogonal basis of primary components.
Are there other elegant methods (or PCA-derived methods) to obtain a non-orthogonal basis of primary skewed components?
| How to do primary component analysis on multi-mode data with non-orthogonal primary components? | CC BY-SA 2.5 | null | 2011-01-26T15:51:26.317 | 2021-03-22T02:09:36.077 | null | null | 2820 | [
"pca",
"multivariate-analysis",
"mode"
]
|
6576 | 2 | null | 6570 | 10 | null | Neutral points can mean many different things to many people. The way you labeled the middle choice yourself reflects this uncertainty. Some reasons for choosing the neutral point from the perspective of a participant:
- I don't care to really think about my answer to this question (I just want to get paid and leave)
- I have no strong opinion on this question
- I don't understand the question, but don't want to ask (I just want to get paid and leave)
- with regards to the given aspect, the product is truly medium in quality, i.e., it neither excels nor falls short of my expectations
- with regards to the given aspect, the product has some high-quality features, and some low-quality features
Without further qualification, the people who choose the middle category can thus represent a very heterogeneous collection of attitudes / cognitions. With good labeling, some of this confusion can be avoided.
You can also present a separate "no answer" category. However, participants often interpret such a category as a signal to only provide an answer if they feel very confident in their choice. In other words, participants then tend to choose "no answer" because they feel they're not well-informed enough to make a choice that meets the questionnaire designer's quality standards.
IMHO there's no right answer to your question. You have to be very careful in labeling some or all of the presented choices, and do lots of pre-testing with additional free-form interviews of participants about how they perceived the options. If you're really pragmatic, you just choose a standard label set for which you can cite an article that everybody else always cites, and be done with it.
| null | CC BY-SA 2.5 | null | 2011-01-26T16:03:55.117 | 2011-01-26T16:03:55.117 | null | null | 1909 | null |
6577 | 1 | null | null | 3 | 672 | I've got some data that has this basic shape (using R):
```
df <- data.frame(group=sample(LETTERS, 500, T, log(2:27)),
type=sample(c("x","y"), 500, T, c(.4,.6)),
value=sample(0:20, 500, T))
```
I want to investigate the ratios between `x` and `y` within each group.
One way would be to first compute the mean of `x` and `y` within each group, then use the `compressed.ratio` function I wrote (does it have a name? I just made it up) to map the ratio between the means from the interval [0,Inf] onto the interval [-1,1] so that it can be plotted symmetrically in `x` and `y`:
```
library(plyr)   # provides ddply()

compressed.ratio <- function(x, y) (x-y)/(x+y)
df.means <- ddply(df, .(group,type),
function(df) data.frame(mean=mean(df$value),n=nrow(df)))
with(df.means, plot(unique(group),
compressed.ratio(mean[type=="x"], mean[type=="y"]), ylim=c(-1,1)))
```
In addition to this, I'd like to show something that gets at the amount of variation within each group, and also shows where there might be problems with very small numbers of samples in a given group.
But I haven't thought of a good way to do these - the obvious way to show uncertainty due to sample size would be to use standard-error bars, but I'm not sure how to compute the standard-error of a ratio between two groups of quantities. Would it be appropriate to compute the ratios of each `x` to `mean(y)`, and then `mean(x)` to each `y`, and treat those as `x+y` separate measurements? Or maybe to do some kind of random simulation, doing draws from the `x` and `y` pools and taking their ratios?
Finally, does anyone know some kind of visual standard way to show both the standard-deviation and standard-error in the same graph? Maybe a thick error bar and thin whiskers?
| Investigate ratios between two groups | CC BY-SA 2.5 | null | 2011-01-26T16:07:42.770 | 2011-01-27T19:53:37.743 | 2011-01-27T19:53:37.743 | 1434 | 1434 | [
"r",
"statistical-significance"
]
|
6578 | 2 | null | 6080 | 1 | null | By zero-truncated, do you mean that any data that would have had a 0 as a response is just missing? In that case, can't you just put the 0s in?
Or do you mean that some proportion of the time, instead of getting a sensical answer, you get a 0 instead? That sounds like zero-inflation to me. In that case, there are zero-inflated poisson and similar in GAMLSS.
I don't know of a zero-inflated Sichel, and there's nothing in GAMLSS for it, but is there a particularly good reason for using the Sichel distribution for your data? Does it reflect the underlying process particularly well? (I believe that the Sichel represents a mixture model of Poissons, where the meta-distribution is distributed as Inverse Gaussian...)
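If zero-inflation rather than truncation is what you have, a minimal GAMLSS sketch might look like the following (the data frame `d`, response `y` and covariate `x` are hypothetical placeholders, and I am assuming a zero-inflated Poisson family rather than a Sichel):
```
library(gamlss)
## ZIP = zero-inflated Poisson; sigma models the extra-zero probability
fit <- gamlss(y ~ x, sigma.formula = ~ 1, family = ZIP, data = d)
summary(fit)
```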
| null | CC BY-SA 2.5 | null | 2011-01-26T16:20:22.180 | 2011-01-26T16:20:22.180 | null | null | 6 | null |
6579 | 1 | null | null | 3 | 1831 | I am doing an experiment whereby I have 4 different conditions. Within each condition, I do 4-7 technical replicates (cell counts in 4-7 high-powered fields). I have also repeated the experiment 3 times (3 biological replicates, i.e. rats). What test will compare the 4 different conditions, but will also take into account the variation between biological replicates? I am trying to do this in GraphPad Prism 5.0.
| Statistical test for multiple biological replicates | CC BY-SA 2.5 | null | 2011-01-26T16:24:01.237 | 2011-01-26T18:46:56.767 | 2011-01-26T18:46:56.767 | 449 | null | [
"anova"
]
|
6580 | 1 | 6602 | null | 5 | 2818 | Question
Is there such a concept in econometrics/statistics as the derivative of a parameter $\hat{b_{p}}$ in a linear model with respect to some observation $X_{ij}$?
By derivative I mean $\frac{\partial \hat{b_{p}}}{\partial X_{ij}}$: how would the parameter $\hat{b_{p}}$ change if we changed $X_{ij}$?
Motivation
I was thinking about a situation where we have some uncertainty in the data (e.g. results from a survey) and we have enough money to obtain precise results for only a single observation: which observation should we choose?
My intuition says that we should choose the observation that might change the parameters the most, which corresponds to the highest value of the derivative. If there are any other relevant concepts, feel free to write about them.
| Derivative of a linear model | CC BY-SA 2.5 | null | 2011-01-26T16:25:10.783 | 2011-01-27T04:21:12.800 | 2011-01-26T19:06:17.867 | 1643 | 1643 | [
"regression",
"uncertainty"
]
|
6581 | 1 | 6610 | null | 52 | 70366 | What is "Deviance," how is it calculated, and what are its uses in different fields in statistics?
In particular, I'm personally interested in its uses in CART (and its implementation in rpart in R).
I'm asking this since the [wiki-article](http://en.wikipedia.org/wiki/Deviance_%28statistics%29) seems somewhat lacking and your insights will be most welcomed.
| What is Deviance? (specifically in CART/rpart) | CC BY-SA 2.5 | null | 2011-01-26T16:27:34.040 | 2019-09-15T17:04:48.410 | null | null | 253 | [
"r",
"cart",
"rpart",
"deviance"
]
|
6582 | 1 | null | null | 7 | 1799 | When deconstructing my mixed effects model, I found a three-way significant interaction. I calculated my p-value by using maximum likelihood ratio tests allowing for a comparison of the fit of the two models (the model with all predictors minus the model with all predictors but the predictor of interest - in this case, the three-way interaction). When I conduct follow-up comparisons of the three-way interaction, do I need to correct the alpha level of significance with Bonferroni correction?
Thanks for all input!
EDIT: (merged from answer --mbq)
I want to look at the significance of the three-way interaction and then...I wanted to look at any other significant interactions within that first three-way interaction.
I use the same dataset...the model is a crossed random effects of participants and items (the data is comprised of repeated observations (response times) with Valence (positive and negative) and age as a between subjects factor as well as two continuous predictor variables (attachment dimensions). Thus my model is Valence x Age x Attachment anxiety x Attachment avoidance. I found that Valence x Age x Attachment avoidance is significant. However, I want to examine this interaction further. I did this by examining the same model but just for young adults vs old adults separately. Thus, I found with older adults a significant interaction of Valence and Attachment avoidance. However, when I calculated the p-value (as described above) of this two-way interaction, can I take the p-value as is or do I need to correct with Bonferroni? And if so, how? I hope this is clearer?
Thank you!
Basically I want to examine the 'direction' of my three-way interaction and test whether or not the differences within that interaction is significant.
| Conducting planned comparisons in mixed model using lmer | CC BY-SA 2.5 | null | 2011-01-26T17:23:46.763 | 2011-09-24T03:28:02.923 | 2011-01-26T20:57:29.597 | null | 2934 | [
"mixed-model",
"interaction",
"model-comparison"
]
|
6583 | 2 | null | 6579 | 1 | null | Sounds like you'll need to do a two-way analysis of variance to me. I'm assuming the 'technical replicates' are 3 repeats of the same measurement procedure in the same rat with the same condition, and all the rats are subjected to all the conditions. The rats are then a 'blocking' factor, and the condition is your 'treatment' factor.
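For what it's worth, here is a toy sketch of that layout in R rather than Prism (the cell counts are made up; one common way to handle the technical replicates is to average them within each rat-by-condition cell first):
```
## toy data: one row per technical replicate
d <- expand.grid(rep = 1:5, condition = factor(1:4), rat = factor(1:3))
d$count <- rpois(nrow(d), lambda = 20)

## average the technical replicates, then two-way ANOVA with rat as the block
cell.means <- aggregate(count ~ condition + rat, data = d, FUN = mean)
fit <- aov(count ~ condition + rat, data = cell.means)
summary(fit)
```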
My only niggling doubt is: how did you decide how many technical replicates to do in each case?
| null | CC BY-SA 2.5 | null | 2011-01-26T18:45:42.947 | 2011-01-26T18:45:42.947 | null | null | 449 | null |
6584 | 2 | null | 6570 | 1 | null | If you wish to detect overt opinions then put in a neutral option. If you wish to detect any potential positive or negative bias then leave it out. As caracal said, label things as unambiguously as possible with respect to what you wish the options to reflect.
I've seen studies where only the form of response was changed. When there were only two options, like / dislike, then two stimuli were rated as very strongly liked in roughly equal proportions. When subjects were subsequently given an infinite rating scale with neither like nor dislike in the middle the rating differences between the two stimuli were vast (75% of the scale vs. 4%). This suggests that with a limited scale and no neutral option you can detect very small biases as large effects so you should be careful in interpreting such scales and use them judiciously.
| null | CC BY-SA 2.5 | null | 2011-01-26T18:49:01.063 | 2011-01-26T18:56:44.410 | 2011-01-26T18:56:44.410 | 601 | 601 | null |
6585 | 2 | null | 6580 | 6 | null | I guess this would come under the heading of regression diagnostics. I haven't seen this precise statistic before, but something that comes fairly close is DFBETAij, which is the change in regression coefficient i when the jth observation is omitted, divided by the estimated standard error of coefficient i.
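For illustration, base R computes closely related quantities for `lm` fits directly (a minimal sketch with simulated data, not an example from the book):
```
set.seed(1)
x <- rnorm(30)
y <- 1 + 2 * x + rnorm(30)
fit <- lm(y ~ x)

head(dfbetas(fit))   # scaled change in each coefficient when each case is deleted
## a common size-adjusted rule of thumb flags |DFBETAS| > 2/sqrt(n)
which(abs(dfbetas(fit)) > 2 / sqrt(30), arr.ind = TRUE)
```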
The book that defined this and many other regression diagnostics (perhaps too many) is:
Belsley, D. A., E. Kuh, and R. E. Welsch. (1980). [Regression Diagnostics: Identifying Influential Data and Sources of Collinearity](http://books.google.co.uk/books?id=GECBEUJVNe0C). New York: Wiley. ISBN 0471691178
| null | CC BY-SA 2.5 | null | 2011-01-26T19:14:09.810 | 2011-01-26T19:21:21.437 | 2011-01-26T19:21:21.437 | 449 | 449 | null |
6586 | 2 | null | 6570 | 2 | null | I try to avoid questions with more than two answers, as it is impossible to compare them between users. (good vs. very good can be very subjective).
I rephrase most questions into a binary type (though still giving the possibility to be indifferent):
"Would you use the product everyday?"
Yes No Indifferent
"Would you recommend the product to your friends?"
etc.
I found results obtained with this method to be way more consistent with the feeling I got from later interviews and performance tests. However, my work so far focuses on Human Computer Interaction questionnaires.
In any case, the best approach is to conduct in-person interviews, as you learn more from them. Of course, they are also very time consuming :(
| null | CC BY-SA 2.5 | null | 2011-01-26T19:16:51.430 | 2011-01-26T19:16:51.430 | null | null | 2904 | null |
6587 | 2 | null | 6571 | 2 | null | If you're trying to learn time series in R, I would suggest you to use real data and not simulated data.
This is because in time series there are many effects due to time, such as seasonality and trend.
I would suggest you to take a look at
```
?ts
?ts.plot
?decompose
?arima
```
If you want to study these simulated data sets, you may find it useful to group your data as time series using male.age1, female.age1, male.age2, female.age2. For example:
```
m.age1.series <- ts( male.age1, start=c(01,01) , frequency=30 )
```
This will create a time series object that you may analyze.
Take a look at the following link:
[http://www.stat.pitt.edu/stoffer/tsa2/R_time_series_quick_fix.htm](http://www.stat.pitt.edu/stoffer/tsa2/R_time_series_quick_fix.htm)
I hope it helps! :)
| null | CC BY-SA 2.5 | null | 2011-01-26T19:19:10.297 | 2011-01-26T19:19:10.297 | null | null | 2902 | null |
6588 | 1 | 6596 | null | 13 | 2812 | I have a data set with lots of zeros that looks like this:
```
set.seed(1)
x <- c(rlnorm(100),rep(0,50))
hist(x,probability=TRUE,breaks = 25)
```
I would like to draw a line for its density, but the `density()` function uses a moving window that calculates negative values of x.
```
lines(density(x), col = 'grey')
```
There are `from` and `to` arguments to `density()`, but these seem to only truncate the calculation, not alter the window so that the density at 0 would be consistent with the data, as can be seen in the following plot:
```
lines(density(x, from = 0), col = 'black')
```
(if the interpolation was changed, I would expect that the black line would have higher density at 0 than the grey line)
Are there alternatives to this function that would provide a better calculation of the density at zero?

| How can I estimate the density of a zero-inflated parameter in R? | CC BY-SA 2.5 | null | 2011-01-26T20:01:23.850 | 2013-04-19T15:21:32.397 | 2013-04-18T19:19:02.937 | 1036 | 2750 | [
"r",
"probability",
"kde"
]
|
6589 | 2 | null | 6575 | 1 | null | There are factor analysis techniques that allow oblique rotation, not just the orthogonal rotation that PCA uses. Take a look at direct oblimin rotation or promax rotation.
Not sure what statistical application you are using. In R, the psych and HDMD packages have commands that allow oblique rotations.
| null | CC BY-SA 2.5 | null | 2011-01-26T20:30:43.377 | 2011-01-26T20:30:43.377 | null | null | 2933 | null |
6592 | 1 | 6597 | null | 4 | 260 | I'm not sure that I've titled this question correctly, but here is my query.
Suppose you are given a set of measurements and the uncertainty (variance) associated with each. The task is to statistically figure out how many different objects were likely measured and finally, to combine measurements into a single estimate for each.
The second part is easy enough - an uncertainty-weighted mean would do it - but I am having difficulty understanding how to sort out how many objects were measured. If there were just two objects, ANOVA would work. But what if there are an unknown number of objects?
As an aside, I'm aware of a Bayesian technique in which one considers each measurement in turn, building a new hypothesis for each measurement that doesn't fall within the confidence interval of an existing hypothesis and combining it into the hypothesis when it does. But I think this method is dependent on the order in which the measurements are considered and therefore imparts a kind of time dependence on measurements that have none.
I feel like this is something that's commonly done and I should know how to do, but I'm stumped so any thoughts you all might have would be much appreciated.
Thanks,
Val
| Multiple hypothesis ANOVA | CC BY-SA 2.5 | null | 2011-01-26T21:02:50.433 | 2011-06-26T00:03:58.643 | 2011-01-26T23:07:52.737 | 449 | 2932 | [
"anova",
"clustering",
"meta-analysis"
]
|
6593 | 2 | null | 6581 | 11 | null | Deviance is the likelihood-ratio statistic for testing the null hypothesis that the model holds against the general alternative (i.e., the saturated model). For some Poisson and binomial GLMs, the number of observations $N$ stays fixed as the individual counts increase in size. Then the deviance has a chi-squared asymptotic null distribution. The degrees of freedom = N - p, where p is the number of model parameters; i.e., it is equal to the difference between the numbers of free parameters in the saturated and unsaturated models. The deviance then provides a test for the model fit.
$Deviance = -2[L(\hat{\mathbf{\mu}} | \mathbf{y})-L(\mathbf{y}|\mathbf{y})]$
However, most of the time, you want to test whether you need to drop some variables. Say there are two models $M_1$ and $M_2$ with $p_1$ and $p_2$ parameters, respectively, and you need to test which of the two is better. Assume $M_1$ is a special case of $M_2$, i.e., the models are nested.
In that case, the difference of deviance is taken:
$\Delta Deviance = -2[L(\hat{\mathbf{\mu}_1} | \mathbf{y})-L(\hat{\mathbf{\mu}_2}|\mathbf{y})]$
Notice that the log likelihood of the saturated model cancels and the degrees of freedom of $\Delta Deviance$ become $p_2-p_1$. This is what we use most often when we need to test whether some of the parameters are 0 or not. But when you fit a `glm` in `R`, the deviance output is for the saturated model vs. the current model.
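As a small R illustration with toy data (not from Agresti), the difference-in-deviance test for nested models is what `anova()` reports:
```
set.seed(1)
x <- rnorm(50); z <- rnorm(50)
y <- rpois(50, exp(0.5 + 0.8 * x))         # z has no real effect here
m1 <- glm(y ~ x, family = poisson)         # restricted model
m2 <- glm(y ~ x + z, family = poisson)     # larger model
anova(m1, m2, test = "Chisq")              # likelihood-ratio (difference-in-deviance) test
deviance(m1) - deviance(m2)                # the same statistic, computed by hand
```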
If you want to read about this in greater detail, see: Categorical Data Analysis by Alan Agresti, p. 118.
| null | CC BY-SA 2.5 | null | 2011-01-26T22:13:54.307 | 2011-01-26T22:20:50.633 | 2011-01-26T22:20:50.633 | null | 1307 | null |
6595 | 2 | null | 6588 | 0 | null | You may try lowering bandwidth (blue line is for `adjust=0.5`),

but probably KDE is just not the best method to deal with such data.
| null | CC BY-SA 2.5 | null | 2011-01-26T22:19:10.947 | 2011-01-26T22:19:10.947 | null | null | null | null |
6596 | 2 | null | 6588 | 17 | null | The density is infinite at zero because it includes a discrete spike. You need to estimate the spike using the proportion of zeros, and then estimate the positive part of the density assuming it is smooth. KDE will cause problems at the left hand end because it will put some weight on negative values. One useful approach is to transform to logs, estimate the density using KDE, and then transform back. See [Wand, Marron & Ruppert (JASA 1991)](http://www.jstor.org/pss/2290569) for a reference.
The following R function will do the transformed density:
```
logdensity <- function (x, bw = "SJ")
{
y <- log(x)
g <- density(y, bw = bw, n = 1001)
xgrid <- exp(g$x)
g$y <- c(0, g$y/xgrid)
g$x <- c(0, xgrid)
return(g)
}
```
Then the following will give the plot you want:
```
set.seed(1)
x <- c(rlnorm(100),rep(0,50))
hist(x,probability=TRUE,breaks = 25)
fit <- logdensity(x[x>0]) # Only take density of positive part
lines(fit$x,fit$y*mean(x>0),col="red") # Scale density by proportion positive
abline(v=0,col="blue") # Add spike at zero.
```

| null | CC BY-SA 2.5 | null | 2011-01-26T22:39:44.857 | 2011-01-26T23:02:53.137 | 2011-01-26T23:02:53.137 | 159 | 159 | null |
6597 | 2 | null | 6592 | 2 | null | One approach would be a finite mixture model with an unknown number of components. A set of measurements and their variances sounds like meta-analysis. I suggest you have at look at [Peter Schlattmann's webpage for his book 'Medical Applications of Finite Mixture Models'](http://www.charite.de/biometrie/schlattmann/book/), which includes meta-analysis amongst its applications. The book is not cheap, but if you download the R or SAS code from that webpage and have a look at the documentation you may be able to manage without it.
| null | CC BY-SA 2.5 | null | 2011-01-26T22:41:50.113 | 2011-01-26T22:41:50.113 | null | null | 449 | null |
6598 | 2 | null | 6588 | 4 | null | I'd agree with Rob Hyndman that you need to deal with the zeroes separately. There are a few methods of dealing with a kernel density estimation of a variable with bounded support, including 'reflection', 'renormalisation' and 'linear combination'. These don't appear to have been implemented in R's `density` function, but are available in [Benn Jann's kdens package for Stata](http://fmwww.bc.edu/RePEc/bocode/k/kdens.pdf).
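For what it's worth, the 'reflection' method is easy to sketch by hand in R: reflect the positive observations about zero, run the usual KDE, and fold the mass back onto $[0, \infty)$. A rough sketch, using `x` from the question:
```
reflect.density <- function(x, ...) {
  d <- density(c(x, -x), ...)              # KDE of the data reflected about 0
  keep <- d$x >= 0
  list(x = d$x[keep], y = 2 * d$y[keep])   # fold the negative half back, so it integrates to 1
}
fitr <- reflect.density(x[x > 0])
lines(fitr$x, fitr$y * mean(x > 0), col = "darkgreen")   # rescaled by the positive proportion
```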
| null | CC BY-SA 2.5 | null | 2011-01-26T23:05:31.587 | 2011-01-26T23:05:31.587 | null | null | 449 | null |
6599 | 1 | 6665 | null | 23 | 6161 | I have read Alexandru Niculescu-Mizil and Rich Caruana's paper "[Obtaining Calibrated Probabilities from Boosting](http://aaaipress.org/Papers/Workshops/2007/WS-07-05/WS07-05-006.pdf)" and the discussion in [this](https://stats.stackexchange.com/questions/5196/why-use-platts-scaling) thread. However, I am still having trouble understanding and implementing logistic or Platt's scaling to calibrate the output of my multi-class boosting classifier (gentle-boost with decision stumps).
I am somewhat familiar with generalized linear models, and I think I understand how the logistic and Platt's calibration methods work in the binary case, but am not sure I know how to extend the method described in the paper to the multi-class case.
The classifier I am using outputs the following:
- $f_{ij}$ = Number of votes that the classifier casts for class $j$ for the sample $i$ that is being classified
- $y_i$ = Estimated class
At this point I have the following questions:
Q1: Do I need to use a multinomial logit to estimate probabilities? or can I still do this with logistic regression (e.g. in a 1-vs-all fashion)?
Q2: How should I define the intermediate target variables (e.g. as in Platt's scaling) for the multi-class case?
Q3: I understand this might be a lot to ask, but would anybody be willing to sketch out the pseudo-code for this problem? (on a more practical level, I am interested in a solution in Matlab).
| Calibrating a multi-class boosted classifier | CC BY-SA 2.5 | null | 2011-01-26T23:48:06.297 | 2018-07-06T15:42:46.263 | 2017-04-13T12:44:32.747 | -1 | 2798 | [
"machine-learning",
"boosting"
]
|
6600 | 2 | null | 6582 | 1 | null | It sounds like you basically have a problem of model choice. I think this is best treated as a decision problem. You want to act as if the final model you select is the true model, so that you can make conclusions about your data.
So in decision theory, you need to specify a loss function, which says how you are going to rank each model, and a set of alternative models which you are going to decide between. See [here](http://www.uv.es/~bernardo/2010Valencia9.pdf) and [here](http://www.uv.es/~bernardo/JMBSlidesV9.pdf) for a decision theoretical approach to hypothesis testing in inference. And [here](http://www.uv.es/~bernardo/Kernel.pdf) is one which uses a decision theory approach to choose a model.
It sounds like you want to use the p-value as your loss function (because that's how you want to compare the models). So if this is your criterion, then you pick the model with the smallest p-value.
But the criterion needs to apply to something which the models have in common; an "obvious" choice is one based on a statistic which measures how well the model fits the data.
One example is the sum of squared errors for predicting a new set of observations which were not included in the model fitting (based on the idea that a "good" model should reproduce the data it is supposed to be describing). So, what you can do is, for each model:
1) randomly split your data into two parts, a "model part" big enough for your model, and a "test" part to check predictions (which particular partition should not matter if the model is a good model). The "model" set is usually larger than the "test" set (at least 10 times larger, depending on how much data you have)
2) Fit the model to the "model data", and then use it to predict the "test" data.
3) Calculate the sum of squared error for prediction in the "test" data.
4) repeat 1-3 as many times as you feel necessary for your data (just in case you did a "bad" or "unlucky" partition), and take the average of the sum-of-squared-errors values from step 3) (a sketch of these steps in R is given just below this list).
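Here is a minimal sketch of steps 1-4, illustrated with `lm` for simplicity (the data frame `dat` and the formulas are placeholders; the same scheme applies to mixed models):
```
cv.sse <- function(formula, dat, nrep = 100, test.frac = 0.1) {
  resp <- all.vars(formula)[1]                                  # name of the response
  sse <- replicate(nrep, {
    test <- sample(nrow(dat), ceiling(test.frac * nrow(dat)))   # step 1: random split
    fit  <- lm(formula, data = dat[-test, ])                    # step 2: fit on the "model" part
    pred <- predict(fit, newdata = dat[test, ])                 #         and predict the "test" part
    sum((dat[[resp]][test] - pred)^2)                           # step 3: prediction SSE
  })
  mean(sse)                                                     # step 4: average over repetitions
}
## e.g. compare cv.sse(y ~ x1, dat) with cv.sse(y ~ x1 * x2, dat)
```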
It does seem as though you have already defined a class of alternative models that you are willing to consider.
Just a side note: any procedure that you use to select the model should go into step 1, including "automatic" model selection procedures. This way you properly account for the "multiple comparisons" that the automatic procedure performs. Unfortunately, you need to have alternatives (maybe one is "forward selection", one is "forward stepwise", one is "backward selection", etc.). To "keep things fair" you could keep the same set of partitions for all models.
| null | CC BY-SA 2.5 | null | 2011-01-27T01:07:12.153 | 2011-01-27T01:07:12.153 | null | null | 2392 | null |
6601 | 1 | 6605 | null | 32 | 14624 | This is a similar question to the one [here](https://stats.stackexchange.com/questions/155/what-is-your-favorite-laymans-explanation-for-a-difficult-statistical-concept), but different enough I think to be worthwhile asking.
I thought I'd put as a starter, what I think one of the hardest to grasp is.
Mine is the difference between probability and frequency. One is at the level of "knowledge of reality" (probability), while the other is at the level "reality itself" (frequency). This almost always makes me confused if I think about it too much.
Edwin Jaynes coined a term called the "mind projection fallacy" to describe getting these things mixed up.
Any thoughts on any other tough concepts to grasp?
| What is the hardest statistical concept to grasp? | CC BY-SA 2.5 | null | 2011-01-27T03:57:01.977 | 2015-05-23T23:01:52.830 | 2017-04-13T12:44:33.550 | -1 | 2392 | [
"teaching"
]
|
6602 | 2 | null | 6580 | 5 | null | @onestop points in the right direction. Belsley, Kuh, and Welsch describe this approach on pp. 24-26 of their book. To differentiate with respect to an observation (and not just one of its attributes), they introduce a weight, perform weighted least squares, and differentiate with respect to the weight.
Specifically, let $\mathbb{X} = X_{ij}$ be the design matrix, let $\mathbf{x}_i$ be the $i$th observation, let $e_i$ be its residual, let $w_i$ be the weight, and define $h_i$ (the $i$th diagonal entry in the hat matrix) to be $\mathbf{x}_i (\mathbb{X}^T \mathbb{X})^{-1} \mathbf{x}_i^T$. They compute
$$\frac{\partial b(w_i)}{\partial w_i} = \frac{(\mathbb{X}^T\mathbb{X})^{-1} \mathbf{x}_i^T e_i}{\left[1 - (1 - w_i)h_i\right]^2},$$
whence
$$\frac{\partial b(w_i)}{\partial w_i}\Bigg|_{w_i=1} = (\mathbb{X}^T\mathbb{X})^{-1} \mathbf{x}_i^T e_i.$$
This is interpreted as a way to "identify influential observations, ... provid[ing] a means for examining the sensitivity of the regression coefficients to a slight change in the weight given to the ith observation. Large values of this derivative indicate observations that have large influence on the calculated coefficients." They suggest it can be used as an alternative to the DFBETA diagnostic. (DFBETA measures the change in $b$ when observation $i$ is completely deleted.) The relationship between the influence and DFBETA is that DFBETA equals the influence divided by $1 - h_i$ [equation 2.1 p. 13].
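A quick numerical check of that derivative in R (my own sketch, not from the book): perturb the weight of a single observation in a weighted least-squares fit and compare the finite difference with the formula.
```
set.seed(1)
n <- 30
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
X <- cbind(1, x)
fit <- lm(y ~ x)
i <- 7                                                     # an arbitrary observation

analytic <- solve(t(X) %*% X) %*% X[i, ] * resid(fit)[i]   # (X'X)^{-1} x_i' e_i

delta <- 1e-4                                              # small change in w_i
w <- rep(1, n); w[i] <- 1 + delta
numeric <- (coef(lm(y ~ x, weights = w)) - coef(fit)) / delta

cbind(analytic, numeric)                                   # the two columns agree closely
```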
| null | CC BY-SA 2.5 | null | 2011-01-27T04:21:12.800 | 2011-01-27T04:21:12.800 | null | null | 919 | null |
6603 | 2 | null | 6176 | 3 | null | Jumping straight into non-parametric Bayesian analysis is quite a big first leap! Maybe get a bit of parametric Bayes under your belt first?
Three books which you may find useful from the Bayesian part of things are:
1) Probability Theory: The Logic of Science by E. T. Jaynes, Edited by G. L. Bretthorst (2003)
2) Bayesian Theory by Bernardo, J. M. and Smith, A. F. M. (1st ed 1994, 2nd ed 2007).
3) Bayesian Decision Theory J. O. Berger (1985)
A good place to see recent applications of Bayesian statistics is the FREE journal called [Bayesian Analysis](http://ba.stat.cmu.edu/), with articles from 2006 to present.
| null | CC BY-SA 2.5 | null | 2011-01-27T04:35:11.757 | 2011-01-27T04:35:11.757 | null | null | 2392 | null |
6604 | 1 | null | null | 4 | 11098 | I am running a bivariate correlation analysis in SPSS, and I am performing multiple comparisons (there are 8 variables in total). I want to correct for multiple comparisons because I am aware that any 'significant' results could simply be flukes.
However, the Bonferroni correction is not appropriate in this case (it is too strict).
Does anyone know how to correct for multiple comparisons in SPSS?
Consider some sample data as follows. There are 4 'independent variables' and 4 'dependent variables'.
Independent variables:
- Blood flow through middle cerebral artery
- Blood flow through anterior cerebral artery
- Blood flow through posterior cerebral artery
- Blood flow through anterior communicating artery.
Dependent variables
- Performance on cognitive test #1
- Performance on cognitive test #2
- Performance on cognitive test #3
- Performance on cognitive test #4
The 4 'independent' variables are not uncorrelated to each other. The 4 'dependent' variables are also not going to be uncorrelated to each other (i.e., if a person does well on one test, chances are they will also do well on another test).
(I realize that it is wrong to call these variables 'independent' and 'dependent', since correlation does not prove causality, but this is the way that I have framed them in my mind)
I suppose that there are 2 problems here:
- how to correct for multiple comparisons (from a statistics point of view) and
- how to actually implement this in SPSS (a practical problem).
Any help would be much appreciated (especially for problem #2).
| Correcting for multiple comparisons when running a bivariate correlation in SPSS | CC BY-SA 2.5 | null | 2011-01-27T05:51:07.740 | 2011-01-27T10:46:55.900 | null | null | 2938 | [
"correlation",
"multiple-comparisons",
"spss"
]
|
6605 | 2 | null | 6601 | 31 | null | for some reason, people have difficulty grasping what a p-value really is.
| null | CC BY-SA 2.5 | null | 2011-01-27T06:06:18.073 | 2011-01-27T06:06:18.073 | null | null | 795 | null |
6606 | 2 | null | 6601 | 23 | null | Similar to shabbychef's answer, it is difficult to understand the meaning of a confidence interval in frequentist statistics. I think the biggest obstacle is that a confidence interval doesn't answer the question that we would like to answer. We'd like to know, "what's the chance that the true value is inside this particular interval?" Instead, we can only answer, "what's the chance that a randomly chosen interval created in this way contains the true parameter?" The latter is obviously less satisfying.
| null | CC BY-SA 2.5 | null | 2011-01-27T06:46:32.313 | 2011-01-27T06:46:32.313 | null | null | 401 | null |
6607 | 2 | null | 6601 | 6 | null | What do the different distributions really represent, besides how they are used?
| null | CC BY-SA 2.5 | null | 2011-01-27T07:51:33.693 | 2011-01-27T07:51:33.693 | null | null | 1808 | null |
6608 | 2 | null | 6581 | 31 | null | It might be a bit clearer if we think about a perfect model with as many parameters as observations such that it explains all variance in the response. This is the saturated model. Deviance simply measures the difference in "fit" of a candidate model and that of the saturated model.
In a regression tree, the saturated model would be one that had as many terminal nodes (leaves) as observations so it would perfectly fit the response. The deviance of a simpler model can be computed as the node residual sums of squares, summed over all nodes. In other words, the sum of squared differences between predicted and observed values. This is the same sort of error (or deviance) used in least squares regression.
For a classification tree, residual sums of squares is not the most appropriate measure of lack of fit. Instead, there is an alternative measure of deviance, plus trees can be built minimising an entropy measure or the Gini index. The latter is the default in `rpart`. The Gini index is computed as:
$$D_i = 1 - \sum_{k = 1}^{K} p_{ik}^2$$
where $p_{ik}$ is the observed proportion of class $k$ in node $i$. This measure is summed over all terminal nodes $i$ in the tree to arrive at a deviance for the fitted tree model.
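To get a feel for the measure, here is a one-line R version of the Gini index for a single node, applied to a few class-proportion vectors:
```
gini <- function(p) 1 - sum(p^2)   # p: observed class proportions in the node
gini(c(1, 0))       # 0    -> a pure node
gini(c(0.9, 0.1))   # 0.18 -> nearly pure
gini(c(0.5, 0.5))   # 0.5  -> maximal impurity for two classes
```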
| null | CC BY-SA 2.5 | null | 2011-01-27T08:47:16.440 | 2011-01-27T08:47:16.440 | null | null | 1390 | null |
6609 | 1 | null | null | 6 | 2203 |
### Question:
- Can you do a repeated measures multinomial logistic regression using SPSS?
### Context:
I need to do a regression on data at two points in time and I think this may be the only way to go(?).
To elaborate: I work for a national health service supporting individuals with psychosis. I want to investigate whether any factors (age, gender etc) predict the vocational outcome of a client group (3 categories: partial vocation, full vocation or no vocation) at entry of service and at 18 months into the service. I want to consider whether there is a difference at 18 months compared to entry to the service in regards to vocational status.
| How to do a repeated measures multinomial logistic regression using SPSS? | CC BY-SA 2.5 | null | 2011-01-27T08:53:01.223 | 2011-02-25T03:26:34.467 | 2011-02-25T03:26:34.467 | 183 | null | [
"logistic",
"spss"
]
|
6610 | 2 | null | 6581 | 56 | null | Deviance and GLM
Formally, one can view deviance as a sort of distance between two probabilistic models; in GLM context, it amounts to two times the log ratio of likelihoods between two nested models $\ell_1/\ell_0$ where $\ell_0$ is the "smaller" model; that is, a linear restriction on model parameters (cf. the [Neyman–Pearson lemma](http://j.mp/awJEkH)), as @suncoolsu said. As such, it can be used to perform model comparison. It can also be seen as a generalization of the RSS used in OLS estimation (ANOVA, regression), for it provides a measure of goodness-of-fit of the model being evaluated when compared to the null model (intercept only). It works with LM too:
```
> x <- rnorm(100)
> y <- 0.8*x+rnorm(100)
> lm.res <- lm(y ~ x)
```
The residuals SS (RSS) is computed as $\hat\varepsilon^t\hat\varepsilon$, which is readily obtained as:
```
> t(residuals(lm.res))%*%residuals(lm.res)
[,1]
[1,] 98.66754
```
or from the (unadjusted) $R^2$
```
> summary(lm.res)
Call:
lm(formula = y ~ x)
(...)
Residual standard error: 1.003 on 98 degrees of freedom
Multiple R-squared: 0.4234, Adjusted R-squared: 0.4175
F-statistic: 71.97 on 1 and 98 DF, p-value: 2.334e-13
```
since $R^2=1-\text{RSS}/\text{TSS}$ where $\text{TSS}$ is the total variance. Note that it is directly available in an ANOVA table, like
```
> summary.aov(lm.res)
Df Sum Sq Mean Sq F value Pr(>F)
x 1 72.459 72.459 71.969 2.334e-13 ***
Residuals 98 98.668 1.007
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Now, look at the deviance:
```
> deviance(lm.res)
[1] 98.66754
```
In fact, for linear models the deviance equals the RSS (you may recall that OLS and ML estimates coincide in such a case).
Deviance and CART
We can see CART as a way to allocate already $n$ labeled individuals into arbitrary classes (in a classification context). Trees can be viewed as providing a probability model for individuals class membership. So, at each node $i$, we have a probability distribution $p_{ik}$ over the classes. What is important here is that the leaves of the tree give us a random sample $n_{ik}$ from a multinomial distribution specified by $p_{ik}$. We can thus define the deviance of a tree, $D$, as the sum over all leaves of
$$D_i=-2\sum_kn_{ik}\log(p_{ik}),$$
following Venables and Ripley's notations ([MASS](http://www.stats.ox.ac.uk/pub/MASS4/), Springer 2002, 4th ed.). If you have access to this essential reference for R users (IMHO), you can check by yourself how such an approach is used for splitting nodes and fitting a tree to observed data (p. 255 ff.); basically, the idea is to minimize, by pruning the tree, $D+\alpha \#(T)$ where $\#(T)$ is the number of nodes in the tree $T$. Here we recognize the cost-complexity trade-off. Here, $D$ is equivalent to the concept of node impurity (i.e., the heterogeneity of the distribution at a given node) which are based on a measure of entropy or information gain, or the well-known Gini index, defined as $1-\sum_kp_{ik}^2$ (the unknown proportions are estimated from node proportions).
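As a small illustration (my own sketch, not code from MASS), the multinomial node deviance above can be computed directly from the class counts observed at a node:
```
## D_i = -2 * sum_k n_ik * log(p_ik), with p_ik estimated by the node proportions
node.deviance <- function(n) {
  p <- n / sum(n)
  -2 * sum(ifelse(n > 0, n * log(p), 0))   # classes with n_ik = 0 contribute nothing
}
node.deviance(c(10, 0, 0))   # a pure node: deviance 0
node.deviance(c(5, 3, 2))    # a mixed node: positive deviance
```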
With a regression tree, the idea is quite similar, and we can conceptualize the deviance as sum of squares defined for individuals $j$ by
$$D_i=\sum_j(y_j-\mu_i)^2,$$
summed over all leaves. Here, the probability model that is considered within each leaf is a gaussian $\mathcal{N}(\mu_i,\sigma^2)$. Quoting Venables and Ripley (p. 256), "$D$ is the usual scaled deviance for a gaussian GLM. However, the distribution at internal nodes of the tree is then a mixture of normal distributions, and so $D_i$ is only appropriate at the leaves. The tree-construction process has to be seen as a hierarchical refinement of probability models, very similar to forward variable selection in regression." Section 9.2 provides further detailed information about `rpart` implementation, but you can already look at the `residuals()` function for `rpart` object, where "deviance residuals" are computed as the square root of minus twice the logarithm of the fitted model.
[An introduction to recursive partitioning using the rpart routines](https://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf), by Atkinson and Therneau, is also a good start. For more general review (including bagging), I would recommend
- Moissen, G.G. (2008). Classification and Regression Trees. Ecological Informatics, pp. 582-588.
- Sutton, C.D. (2005). Classification and Regression Trees, Bagging,
and Boosting, in Handbook of Statistics, Vol. 24, pp. 303-329, Elsevier.
| null | CC BY-SA 4.0 | null | 2011-01-27T09:05:42.833 | 2019-09-15T17:04:48.410 | 2019-09-15T17:04:48.410 | 129149 | 930 | null |
6611 | 2 | null | 6540 | 1 | null | The question does not state the precise intervals or yields, so the $H_{0}$ hypothesis must be conservative, with infinite intervals and pessimistic approximations for the yields; `.*logic`'s suggestion won't qualify. No confidence interval is calculated. So:
$\lim_{m \rightarrow \infty} \left[ 1+\frac{r}{m} \right]^{mt}=e^{rt}$,
where `r` is the rate p.a. and `t` is the time. The sum is $\sum_{k=1}^{n} x_{k} e^{rt}$. If you have data of different signs, you must collect the positive numbers into one sum and the negative numbers into another; this way you get proper upper/lower bounds. The exponent is $r_{k}\,\mathrm{MINUS}(\text{timestamp}_{1}, \text{timestamp}_{2})/365$, where the `MINUS` function returns the days between the timestamps and $r_{k}$ is the rate. The terms `FV` and `PV` are:
$FV = x_{0} \left( 1+r \right)^{n} + x_{1} \left(1+r \right)^{n-1}+...+x_{n}$
$PV = x_{0} + \frac{x_{1}}{1+r} +...+ \frac{x_{n}}{\left( 1+r\right)}^{n}$
so $FV$ is with the sum -formula, while you just take reciprocal with $PV$.
Example
```
1 01/1/1980
2 01/2/1999
3 03/12/2000
-1 03/6/2005
-5 07/07/2007
```
Work in progress.
Trying to find a one-liner to operate over the data: $\sum_{k:\, term_{k} > 0} term_{k}\, e^{\mathrm{MINUS}(t_{k},t_{0})/365}$
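A possible R version of that one-liner (a rough sketch only: the flat rate `r` and the day/month reading of the example dates are my assumptions, not given above):
```
cf    <- c(1, 2, 3, -1, -5)
dates <- as.Date(c("1980-01-01", "1999-02-01", "2000-12-03", "2005-06-03", "2007-07-07"))
r <- 0.03                                       # assumed flat annual rate
t <- as.numeric(max(dates) - dates) / 365       # years from each cash flow to the last date
sum(cf[cf > 0] * exp(r * t[cf > 0]))            # future value of the positive flows
sum(cf[cf < 0] * exp(r * t[cf < 0]))            # future value of the negative flows
```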
| null | CC BY-SA 2.5 | null | 2011-01-27T10:46:01.347 | 2011-01-28T21:04:15.727 | 2011-01-28T21:04:15.727 | 2914 | 2914 | null |
6612 | 2 | null | 6604 | 3 | null | This first part of my response won't address your two questions directly since what I am suggesting departs from your correlational approach. If I understand you correctly, you have two blocks of variables, and they play an asymmetrical role in the sense that one of them is composed of response variables (performance on four cognitive tests) whereas the other includes explanatory variables (measures of blood flow at several locations). So, a nice way to answer your question of interest would be to look at [PLS regression](http://en.wikipedia.org/wiki/Partial_least_squares_regression). As detailed in an earlier response of mine, [Regression with multiple dependent variables?](https://stats.stackexchange.com/questions/4517/regression-with-multiple-dependent-variables), the correlation between factor scores on the first dimension will reflect the overall link between these two blocks, and a closer look at the weighted combination of variables in each block (i.e., loadings) would help interpreting the contribution of each variable of the $X$ block in predicting the $Y$ block. The [SPSS implementation](http://faculty.chass.ncsu.edu/garson/PA765/pls.htm) is detailed on Dave Garson's website. This prevents from using any correction for multiple comparisons.
Back to your specific questions, yes the Bonferroni correction is known to be conservative and step-down methods are to be preferred (instead of correcting the p-values or the test statistic in one shot for all the tests, we adapt the threshold depending on the previous HT outcomes, in a sequential manner).
Look into SPSS documentation (or [Pairwise Comparisons in SAS and SPSS](http://www.uky.edu/ComputingCenter/SSTARS/www/documentation/MultipleComparisons_3.htm#b13)) to find a suitable one, e.g. Bonferroni-Holm.
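To see why a step-down method is less strict than a one-shot Bonferroni correction, here is a small illustration (in R for brevity; the raw p-values are made up, and in SPSS you would compare against the corresponding adjusted thresholds):
```
p <- c(0.001, 0.008, 0.02, 0.04, 0.12, 0.30)      # hypothetical raw p-values from 6 correlations
cbind(raw        = p,
      bonferroni = p.adjust(p, method = "bonferroni"),
      holm       = p.adjust(p, method = "holm"))  # Holm is never more conservative than Bonferroni
```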
| null | CC BY-SA 2.5 | null | 2011-01-27T10:46:55.900 | 2011-01-27T10:46:55.900 | 2017-04-13T12:44:39.283 | -1 | 930 | null |
6613 | 1 | null | null | 2 | 4178 | ```
x <- read.table('file_name',header=TRUE, row.names=1)
y <- t(x)
y <- data.frame(y)
row.names(y) <- names(x)
names(y) <- row.names(x)
library(corrplot)
corr <- cor(y)
par(ask = TRUE)
corrplot(corr, order = "hclust")
```
I'm trying use corrplot on my dataset. The original dataset has 25000 rows and 100 columns. I transposed to 25000 columns and 100 rows and finally applied corrplot. And failed!
Any help would be appreciated. [Note: the first column and the first row are names; the others are values (like -0.9 or 2.5 or 14449.45, etc.)]
| Error in cor(y): allocMatrix: too many elements specified | CC BY-SA 2.5 | null | 2011-01-27T11:03:38.850 | 2011-01-27T17:53:25.883 | 2011-01-27T17:53:25.883 | null | null | [
"r",
"correlation"
]
|
6614 | 2 | null | 6601 | 8 | null | From my personal experience the concept of [likelihood](http://en.wikipedia.org/wiki/Likelihood_function) can also cause quite a lot of stir, especially for non-statisticians. As wikipedia says, it is very often mixed up with the concept of probability, which is not exactly correct.
| null | CC BY-SA 2.5 | null | 2011-01-27T11:08:52.113 | 2011-01-27T11:08:52.113 | null | null | 22 | null |
6615 | 2 | null | 6557 | 9 | null | Here's my version with your simulated data set:
```
x1 <- rnorm(100,2,10)
x2 <- rnorm(100,2,10)
y <- x1+x2+x1*x2+rnorm(100,1,2)
dat <- data.frame(y=y,x1=x1,x2=x2)
res <- lm(y~x1*x2,data=dat)
z1 <- z2 <- seq(-1,1)
newdf <- expand.grid(x1=z1,x2=z2)
library(ggplot2)
p <- ggplot(data=transform(newdf, yp=predict(res, newdf)),
aes(y=yp, x=x1, color=factor(x2))) + stat_smooth(method=lm)
p + scale_colour_discrete(name="x2") +
labs(x="x1", y="mean of resp") +
scale_x_continuous(breaks=seq(-1,1)) + theme_bw()
```
I let you manage the details about x/y-axis labels and legend positioning.

| null | CC BY-SA 2.5 | null | 2011-01-27T11:45:40.877 | 2011-01-27T11:45:40.877 | null | null | 930 | null |
6617 | 2 | null | 6601 | 5 | null | I think the question is interpretable in two ways, which will give very different answers:
1) For people studying statistics, particularly at a relatively advanced level, what is the hardest concept to grasp?
2) Which statistical concept is misunderstood by the most people?
For 1) I don't know the answer at all. Something from measure theory, maybe? Some type of integration? I don't know.
For 2) p-value, hands down.
| null | CC BY-SA 2.5 | null | 2011-01-27T13:22:27.327 | 2011-01-27T13:22:27.327 | null | null | 686 | null |
6618 | 2 | null | 6601 | 9 | null | I think that very few scientists understand this basic point: It is only possible to interpret results of statistical analyses at face value, if every step was planned in advance. Specifically:
- Sample size has to be picked in advance. It is not ok to keep analyzing the data as more subjects are added, stopping when the results looks good.
- Any methods used to normalize the data or exclude outliers must also be decided in advance. It isn't ok to analyze various subsets of the data until you find results you like.
- And finally, of course, the statistical methods must be decided in advance. Is it not ok to analyze the data via parametric and nonparametric methods, and pick the results you like.
Exploratory methods can be useful to, well, explore. But then you can't turn around and run regular statistical tests and interpret the results in the usual way.
| null | CC BY-SA 2.5 | null | 2011-01-27T13:29:56.687 | 2011-01-28T14:51:32.457 | 2011-01-28T14:51:32.457 | 25 | 25 | null |
6619 | 1 | 6620 | null | 4 | 281 | ...2 to 5 questions answered correctly, out of 20 of them? Each question has 5 choices. Probability of getting one right is 1/5. Probability of getting exactly 1 right is ${20 \choose 1} p^1 q^{19}$, with $p=P(\mathrm{right})$ and $q=P(\mathrm{wrong})$ (which I managed to understand and calculate). However how do I calculate for the problem above?
| Probability of getting between | CC BY-SA 2.5 | null | 2011-01-27T13:40:46.323 | 2011-01-27T16:41:24.930 | 2011-01-27T16:41:24.930 | 919 | 1833 | [
"probability",
"self-study",
"binomial-distribution"
]
|
6620 | 2 | null | 6619 | 5 | null | Hint: sum up the probabilities. The probability that exactly $k$ answers are answered correctly is $${20 \choose k}\left(\frac{1}{5}\right)^k\left(\frac{4}{5}\right)^{20-k}.$$ In your case you have $k=2,3,4,5$.
| null | CC BY-SA 2.5 | null | 2011-01-27T13:56:41.140 | 2011-01-27T13:56:41.140 | null | null | 2116 | null |
6621 | 2 | null | 6577 | 1 | null | Why not display the raw data?
```
df <- data.frame(group=sample(LETTERS, 500, T, log(2:27)),
x=sample(0:20, 500, T),
y=sample(0:20, 500, T))
df$ratio <- with(df, (x-y)/(x+y))
library(ggplot2)
qplot(group, ratio, data = df) +
stat_summary(fun.y = mean, colour = "red", size = 2, geom = "point")
qplot(group, ratio, data = df) +
stat_summary(fun.data = "mean_cl_boot", colour = "red", geom = "crossbar")
```
| null | CC BY-SA 2.5 | null | 2011-01-27T14:51:14.593 | 2011-01-27T14:51:14.593 | null | null | 46 | null |
6622 | 2 | null | 6613 | 0 | null | I would guess that you don't have enough memory. The correlation matrix for 25,000 columns will be 25,000 x 25,000, which is about 4.7 GB: `(25000^2 * 8) / 1024^3`.
| null | CC BY-SA 2.5 | null | 2011-01-27T14:53:11.207 | 2011-01-27T14:53:11.207 | null | null | 46 | null |
6623 | 2 | null | 4466 | 1 | null | Good suggestions, I've got plenty of things to look into now.
Remember, one extremely important consideration is making sure that the work is "correct" in the first place. This is the role that tools like [Sweave](http://en.wikipedia.org/wiki/Sweave) play, by increasing the chances that what you did, and what you said you did, are the same thing.
| null | CC BY-SA 2.5 | null | 2011-01-27T14:57:16.553 | 2011-01-27T14:57:16.553 | null | null | 1434 | null |
6624 | 1 | 6631 | null | 7 | 3321 | I've asked the [same question](https://math.stackexchange.com/questions/19180/detect-abnormal-points-in-point-cloud) at Math SE, but the suggestion is that probably this question belongs here.
Given a list of [point cloud](http://en.wikipedia.org/wiki/Point_cloud) in terms of $(x,y,z)$ how to determine abnormal points?
The motivation is this. We need to reconstruct a terrain surface out from those point cloud, which the surveyors obtain when doing field survey. The surveyors would take an equipment and record a sufficient sample of the $x,y,z$ of a terrain. Those points will be recorded into a CAD program.
The problem is that the CAD file can be corrupted from time to time by the introduction of "abnormal" points. Those points generally do not fit into the terrain surface, and tend to have erroneous $z$ values (i.e., the $z$ value is outside the normal range).
I am aware that the definition of abnormal points is a bit loose; and I can't come up with a rigorous definition of it. However, I know what is an abnormal point when I see the drawing.
Given all these constraint, is there any algorithm to detect these kinds of abnormal points?
| Detecting abnormal points in point cloud | CC BY-SA 2.5 | null | 2011-01-27T15:04:03.230 | 2011-01-28T09:14:13.320 | 2017-04-13T12:19:38.800 | -1 | 175 | [
"outliers",
"spatial"
]
|
6626 | 2 | null | 6252 | 5 | null | As others have pointed out, there are many measures of clustering "quality"; most programs minimize SSE. No single number can tell much about noise in the data, or noise in the method, or flat minima — low points in Saskatchewan. So first try to visualize, and get a feel for, a given clustering before reducing it to "41". Then make 3 runs: do you get SSEs of 41, 39, 43 or 41, 28, 107? What are the cluster sizes and radii?
(Added:) Take a look at silhouette plots and silhouette scores, e.g. in the book by Izenman, [Modern Multivariate Statistical Techniques](http://rads.stackoverflow.com/amzn/click/0387781889) (2008, 731p, ISBN 0387781889).
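A minimal R sketch of a silhouette check on toy data (using the cluster package; the data and the choice of k are made up):
```
library(cluster)
set.seed(42)
x <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 4), ncol = 2))
km  <- kmeans(x, centers = 2, nstart = 10)
sil <- silhouette(km$cluster, dist(x))
mean(sil[, "sil_width"])   # average silhouette width: closer to 1 = better-separated clusters
plot(sil)                  # per-point silhouette plot
```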
| null | CC BY-SA 2.5 | null | 2011-01-27T15:37:09.260 | 2011-01-28T18:28:55.360 | 2011-01-28T18:28:55.360 | 557 | 557 | null |
6627 | 2 | null | 6613 | 3 | null | The "allocMatrix: too many elements specified" error is thrown in on line 170 of `R/src/main/array.c` when nrow x ncol is greater than `INT_MAX` (+2,147,483,647). `INT_MAX` is defined in the C standard library file "limits.h" and it is the same in the 32-bit and 64-bit toolchain used to build R, so no amount of RAM on a current 64-bit R build will solve your problem.
| null | CC BY-SA 2.5 | null | 2011-01-27T15:42:03.393 | 2011-01-27T15:42:03.393 | null | null | 1657 | null |
6628 | 2 | null | 6570 | 7 | null | I think this whole "force people to choose" thing is just a complete red herring. People say it to me all the time. To me it sounds like "force people to state the capital of Uzbekistan". They don't know, and forcing them won't make them know any better.
With that mini-rant over, my only sensible contribution is to say that you should always pilot surveys whenever you can. Pilot both versions, see who uses the "don't know" category in the one where it's included, and look at the distribution of responses. And talk to the people who filled it out. "Were you sure of your answer?" "What made you say 'don't know' here?" - that kind of thing.
| null | CC BY-SA 2.5 | null | 2011-01-27T16:04:35.303 | 2011-01-27T16:04:35.303 | null | null | 199 | null |
6629 | 2 | null | 6624 | 1 | null | Are the points relatively dense on your surface? Then I would suggest counting the number of points in a sphere around every point. Choose the radius of your sphere to be a bit less than the distance the "abnormal" points have to the regular surface - maybe half of what they typically have. Then throw out the points where the number of other points inside that sphere is very low. (I don't know if your outliers occur in small groups or if they are isolated points; this technique should work for either case.)
If a naive implementation picks out the correct points but is too slow, and you're struggling to come up with a faster algorithm to do the same, then let us know. I'm sure we could come up with something :)
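For what it's worth, here is a naive R sketch of this idea on simulated data (the radius and the neighbour-count threshold are guesses you would tune by eye):

```
set.seed(1)
xy <- matrix(runif(1000, 0, 100), ncol = 2)      # 500 surveyed locations
z  <- 0.1 * xy[, 1] + rnorm(500, sd = 0.5)       # gently sloping terrain
z[c(10, 50, 120)] <- z[c(10, 50, 120)] + 40      # three planted "abnormal" points
pts <- cbind(xy, z)
d <- as.matrix(dist(pts))                        # all pairwise 3-D distances
n_neighbours <- rowSums(d < 8) - 1               # others within radius 8, excluding self
which(n_neighbours <= 1)                         # candidates for removal
```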
| null | CC BY-SA 2.5 | null | 2011-01-27T16:10:51.673 | 2011-01-27T16:10:51.673 | null | null | 2898 | null |
6630 | 1 | null | null | 9 | 1067 | I need some guidance on the appropriate level of pooling to use for difference of means tests on time series data. I am concerned about temporal and sacrificial pseudo-replication, which seem to be in tension on this application. This is in reference to a mensural study rather than a manipulative experiment.
Consider a monitoring exercise: A system of sensors measures dissolved oxygen (DO) content at many locations across the width and depth of a pond. Measurements for each sensor are recorded twice daily, as DO is known to vary diurnally. The two values are averaged to record a daily value. Once a week, the daily results are aggregated spatially to arrive at a single weekly DO concentration for the whole pond.
Those weekly results are reported periodically, and further aggregated – weekly results are averaged to give a monthly DO concentration for the pond. The monthly results are averaged to give an annual value. The annual averages are themselves averaged to report decadal DO concentrations for the pond.
The goal is to answer questions such as: Was the pond's DO concentration in year X higher, lower, or the same as the concentration in year Y? Is the average DO concentration of the last ten years different than that of the prior decade? The DO concentrations in a pond respond to many inputs of large magnitude, and thus vary considerably. A significance test is needed. The method is to use a T-test comparison of means. Given that the decadal values are the mean of the annual values, and the annual values are the mean of the monthly values, this seems appropriate.
Here’s the question – you can calculate the decadal means and the T-values of those means from the monthly DO values, or from the annual DO values. The mean doesn’t change of course, but the width of the confidence interval and the T-value does. Due to the order of magnitude higher N attained by using monthly values, the CI often tightens up considerably if you go that route. This can give the opposite conclusion vs using the annual values with respect to the statistical significance of an observed difference in the means, using the same test on the same data. What is the proper interpretation of this discrepancy?
If you use the monthly results to compute the test stats for a difference in decadal means, are you running afoul of temporal pseudoreplication? If you use the annual results to calc the decadal tests, are you sacrificing information and thus pseudoreplicating?
| What temporal resolution for time series significance test? | CC BY-SA 2.5 | null | 2011-01-27T16:18:30.880 | 2011-05-06T12:32:01.043 | null | null | null | [
"time-series"
]
|
6631 | 2 | null | 6624 | 8 | null | An outlier detector for your irregular ("vector") point data is available in GRASS as [v.outlier](http://grass.osgeo.org/grass64/manuals/html64_user/v.outlier.html).
An overview of spatial outlier detection methods appears in a [2004 paper by Cheng and Li](http://www.geo.upm.es/postgrado/CarlosLopez/papers/AHybridApproachToDetectSpatialTemporalOutliers.pdf).
[An older method](http://www.casa.arizona.edu/data/rnr_420/hutchinson_article.pdf), specialized for topographic data, relies on "drainage enforcement" (making the water flow downhill continuously without accumulating in sinks). That can find some of the outliers, but probably not all of them.
A more generic method is to adapt a local indicator of spatial variability, such as a [local Moran's I](http://en.wikipedia.org/wiki/Indicators_of_spatial_association) statistic, to identify points that are "too far" away from the surface. [GeoDa](http://geodacenter.asu.edu/software/downloads) can compute such statistics.
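If you prefer to stay in R, a hedged sketch of the local Moran's I idea with the `spdep` package might look like this (the simulated data, the k = 8 neighbourhood and the planted outliers are all illustrative assumptions):

```
library(spdep)
set.seed(1)
xy <- matrix(runif(400, 0, 100), ncol = 2)
z  <- 0.1 * xy[, 1] + rnorm(200, sd = 0.5)
z[c(5, 60)] <- z[c(5, 60)] + 40                  # two planted outliers
nb <- knn2nb(knearneigh(xy, k = 8))              # k-nearest-neighbour graph
lw <- nb2listw(nb, style = "W")
lmi <- localmoran(z, lw)
head(order(lmi[, "Ii"]))                         # most negative Ii: points unlike their neighbours
```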
| null | CC BY-SA 2.5 | null | 2011-01-27T16:39:42.483 | 2011-01-27T16:39:42.483 | null | null | 919 | null |
6632 | 2 | null | 6624 | 2 | null | You could fit some sort of smooth function for $z(x,y)$, perhaps using [locally weighted scatterplot smoothing](http://en.wikipedia.org/wiki/Local_regression) (LOWESS or LOESS), then look for points where the residual for $z$ (i.e. the difference between the observed and fitted values) is greater than some fixed multiple of the standard error of prediction. That should be straightforward e.g. in `R` using the `loess` function in the standard `stats` package.
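A minimal sketch of that approach (simulated data; the span and the 3 x MAD cut-off are arbitrary choices you would want to tune):

```
set.seed(1)
x <- runif(300, 0, 100); y <- runif(300, 0, 100)
z <- 0.1 * x + 0.05 * y + rnorm(300, sd = 0.5)
z[c(17, 200)] <- z[c(17, 200)] + 30          # two corrupted elevations
fit <- loess(z ~ x + y, span = 0.3, degree = 2)
res <- residuals(fit)
which(abs(res) > 3 * mad(res))               # flagged as abnormal (robust cut-off)
```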
| null | CC BY-SA 2.5 | null | 2011-01-27T16:41:47.253 | 2011-01-28T09:14:13.320 | 2011-01-28T09:14:13.320 | 449 | 449 | null |
6634 | 2 | null | 6624 | 1 | null | I think this problem comes down to outliers in the variable $z$.
The surveyor scans a grid of $x$,$y$ points that are "well-behaved". On the other hand $z$ points may contain abnormal values (in statistics we call them outliers).
I would suggest to explore the values of $z$, and the plot of $(x,y,z)$.
From those plots it will be clear that abnormal values of $z$ occur in isolation.
Let's suppose that we have a rectangular grid of points; at the grid point $(x_k, y_k)$ we have a value of $z$ that we will denote $z_{k,k}$.
So, if we think $z_{k,k}$ is an abnormal point, we expect a low correlation between $(x_k,y_k,z_{k,k})$ and $(x_{k+1},y_{k},z_{k+1,k})$.
In general, we expect a low correlation between $(x_k,y_k,z_{k,k})$ and its neighbors $\mathcal{N}(k,k)$. A way to measure the spatial correlation between the point $(k,k)$ and its neighborhood is the empirical variogram defined by:
$\hat{\gamma}(k,k) = \frac{1}{\#\mathcal{N}(k,k)} \sum_{(i,j), (p,q) \in \mathcal{N}(k,k)} | z_{i,j} - z_{p,q} |^2$.
If you calculate $\hat{\gamma}(k,k)$ for the whole grid you can be sure that outliers in the empirical variogram are indeed abnormal points.
A boxplot can be useful to identify the outliers.
Using the variogram is a way to ensure that you are actually looking at an abnormal point. Suppose your surveyors are scanning a slope: then $z_{k,k}$ will have a high value, but so will its neighbors. If the point is abnormal, only $z_{k,k}$ will have a high value.
NOTE: If you're sure that your surveyors are analyzing a rather flat surface, get rid of the variogram and make a boxplot of $z$, any outlier identified by the boxplot is an abnormal point.
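In R, that boxplot shortcut could look like this (the elevation vector here is made up, with two corrupted readings appended):

```
z <- c(rnorm(200, mean = 10, sd = 0.3), 55, -20)
out <- boxplot.stats(z)$out    # values beyond the whiskers
which(z %in% out)              # indices of the suspect points
```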
| null | CC BY-SA 2.5 | null | 2011-01-27T17:48:28.370 | 2011-01-27T17:48:28.370 | null | null | 2902 | null |
6635 | 2 | null | 6601 | 9 | null | Tongue firmly in cheek: For frequentists, the Bayesian concept of probability; for Bayesians, the frequentist concept of probability. ;o)
Both have merit of course, but it can be very difficult to understand why one framework is interesting/useful/valid if your grasp of the other is too firm. Cross-validated is a good remedy as asking questions and listening to answers is a good way to learn.
| null | CC BY-SA 3.0 | null | 2011-01-27T17:57:12.117 | 2015-05-23T23:01:52.830 | 2015-05-23T23:01:52.830 | 22047 | 887 | null |
6636 | 1 | 6666 | null | 6 | 1521 | (redirected here from mathoverflow.net)
Hello,
At work I was asked for the probability of a user hitting an outage on the website. I have the following metrics. Total system downtime = 500,000 seconds a year. Total number of seconds in a year = 31,556,926. Thus, p of system down = 500,000 / 31,556,926 ≈ 0.0158, or about 1.6%.
We can also assume that downtime occurs evenly for a period of approximately 2 hours per week.
Now, here is the tricky part. We have a metric for the total number of users attempting to use the service = 16,000,000 during the same time-frame. However, these are subdivided by the total time spent using the service. So, let's say we have 7,000,000 users that spend between 0 and 30 seconds attempting to use the service. For these users, what is the probability of hitting the system when it is unavailable? (We can assume an average of 15 seconds spent in total if this simplifies things.)
I looked up odds ratios and risk factors, but I am not sure how to calculate the probability of the event occurring at all.
Thanks in advance!
P.S. I was given a possible answer, at [https://mathoverflow.net/questions/52816/probability-calculation-system-uptime-likelihood-of-occurence](https://mathoverflow.net/questions/52816/probability-calculation-system-uptime-likelihood-of-occurence) and was following the advice on posting the question in the most appropriate forum.
| Probability calculation, system uptime, likelihood of occurence | CC BY-SA 2.5 | null | 2011-01-27T18:42:03.703 | 2011-04-29T00:51:46.293 | 2017-04-13T12:58:32.177 | -1 | 2950 | [
"probability",
"odds-ratio"
]
|
6637 | 1 | 6639 | null | 6 | 3007 | Let us take two formulations of the $\ell_{2}$ SVM optimization problem, one constrained:
$\min_{w,b} ||w||_2^2 + C \sum_{i=1}^n {\xi_{i}^2}$
s.t. $ y_i(w^T x_i +b) \geq 1 - \xi_i$
and $\xi_i \geq 0 \ \forall i$
and one unconstrained:
$\min_{w,b} ||w||_2^2 + C \sum_{i=1}^n \max(0,1 - y_i (w^T x_i + b))^2$
What is the difference between those two formulations of the optimization problem? Is one better than the other?
Hope I didn't make any mistake in the equations. Thanks.
Update : I took the unconstrained formulation from [Olivier Chapelle's work](http://www.kyb.mpg.de/publications/attachments/primal_%5b0%5d.pdf). It seems that people use the unconstrained optimization problem when they want to work on the primal and the other way around when they want to work on the dual, I was wondering why?
| Constrained versus unconstrained formulation of SVM optimisation | CC BY-SA 3.0 | null | 2011-01-27T19:15:38.773 | 2021-12-31T02:54:02.783 | 2011-08-10T14:58:28.420 | 2513 | 1320 | [
"optimization",
"svm"
]
|
6638 | 1 | 6641 | null | 5 | 2686 | I have been running a linear regression where my dependent variable is a composite. By this I mean that it is built up of components that are added and multiplied together. Specifically, for the composite variable A:
```
A = (B*C + D*E + F*G + H*I + J*K + L*M)*(1 - N)*(1 + O*P)
```
None of the component variables are used as independent variables (the only independent variables are dummy variables). The component variables are mostly (though not completely) independent of one another.
Currently I just run a regression with A as the DV, to estimate each dummy variable's effect on A. But I would also like to estimate each dummy variable's effect on the separate components of A (and in the future I hope to try applying separate priors for each component). To do this I have been running several separate regressions, each with a different one of the components as the DV (and using the same IVs for all the regressions). If I do this, should I expect that for a given dummy IV, I could recombine the coefficient estimates from all the separate regressions (using the formula listed above) and get the same value as I get for that IV when I run the composite A regression? Am I magnifying the coefficient standard errors by running all these separate regressions and then trying to recombine the values (there is a lot of multicollinearity in the dummy variables)? Is there some other structure than linear regression that would be better for a case like this?
| Composite dependent variable | CC BY-SA 2.5 | null | 2011-01-27T19:27:56.060 | 2011-01-27T20:29:25.263 | 2011-01-27T19:39:57.683 | null | 1090 | [
"regression"
]
|
6639 | 2 | null | 6637 | 7 | null | It seems to me that at the solution of the first problem, the inequality constraint becomes an equality, i.e. $1 - \xi_i = y_i(w^Tx_i + b)$, because we are minimising the $\xi_i$s and the smallest value that satisfies the constraint occurs at equality. So as $\xi_i \geq 0$, $\xi_i = max(0, 1 - y_i(w^Tx_i+b))$, which substituting gives something rather similar to your second formulation.
Having checked the [paper by Chapelle](http://www.kyb.mpg.de/fileadmin/user_upload/files/publications/attachments/primal_%5b0%5d.pdf), it looks like the second formulation is missing a "1 -" in the second half of the max operation (see definition of L(.,.) following equation 2.8). In that case both formulations are identical, they are both equivalent representations of the primal optimisation problem (the dual formulation is in terms of the Lagrange multipliers $\alpha_i$). The advantages and disadvantages are therefore purely computational.
| null | CC BY-SA 4.0 | null | 2011-01-27T19:48:05.287 | 2018-05-25T18:38:34.540 | 2018-05-25T18:38:34.540 | 168251 | 887 | null |
6640 | 1 | null | null | 1 | 1640 | I have what seems like a fairly common business statistics scenario: I need to compare one group of stores to another group of stores and be able to say if their difference in sales is statistically different.
For example:
Group A ($n_A$ = 30 stores) participated in a promotion and saw an avg sales increase for this month compared to the same month last year of $\bar{x}_A$ and a standard deviation of $s_A$.
Group B ($n_B$ = 50 stores) did not participate in the promotion and had avg sales increase of $\bar{x}_B$ and corresponding $s_B$.
I realize there are a number of other variables but, in theory, I should be able to say with some certainty that there is or isn't any difference between stores that took or did not participate in a promotion, right?
Can I do a standard comparison of means test? Does it make a difference that these stores comprise the entire population? Or is it not the entire population, and should I be looking at the average increase for multiple months and multiple years?
| How should I compare average store sales change across time? | CC BY-SA 2.5 | null | 2011-01-27T19:52:14.727 | 2011-01-27T22:45:31.127 | null | null | null | [
"multiple-comparisons",
"mean",
"business-intelligence"
]
|
6641 | 2 | null | 6638 | 4 | null | (1) Should I expect to obtain the same fits using the two models? No. Let's look at what's going on.
(a) In the regression of $A$ directly--I'll call it the "monolithic model," the model is
$$A_j = \sum{\beta_i X_{ij}} + \epsilon_j,$$
with the cases indexed by $j$, the variables (including a constant, if any) indexed by $i$, and with the $\epsilon_j$ random variables of zero mean.
(b) In the "composite model" you have a series of regressions. To be systematic, let's rename the variables $B$ though $P$ as $B_1, B_2, \ldots, B_{15}$ and write
$$A = (B C + \cdots + L M)(1 - N)(1 + O P) = f(B_1, B_2, \ldots, B_{15}) = f(\mathbf{B}).$$
Each component model in the composite is
$$B_{kj} = \sum_{i}{\gamma_{ki} X_{ij}} + \delta_{kj}, \quad k=1,2,\ldots,15.$$
Therefore
$$A_j = f(\mathbf{B}_j) = f(\sum_{i}{\gamma_{ki} X_{ij}} + \delta_{kj}).$$
To see how this differs from the monolithic model, let's consider the simpler case where $A = B_1 B_2 = f(B_1,B_2)$, which gives
$$\eqalign{
A_j &= f(B_{1j}, B_{2j}) \cr
&=\left(\sum_{i}{\gamma_{1i} X_{ij}} + \delta_{1j}\right)\left(\sum_{l}{\gamma_{2l} X_{lj}} + \delta_{2j}\right) \cr
&=\sum_{il}{\gamma_{1i}\gamma_{2l}X_{ij}X_{lj}} + \sum_{i}{\left(\gamma_{1i}\delta_{2j} + \gamma_{2i}\delta_{1j}\right) X_{ij}} + \delta_{1j}\delta_{2j}.
}$$
Note the differences:
- If the error terms $\delta_{ij}$ are not independent, the expectation of $\delta_{1j}\delta_{2j}$ will be nonzero, introducing a bias.
- If the error terms are independent, the expectation of $\delta_{1j}\delta_{2j}$ is zero, which is good, but the expectation of $A$ equals $\sum_{il}{\gamma_{1i}\gamma_{2l}X_{ij}X_{lj}}$. In this model there are only interaction terms!
- If you include all interaction terms in the monolithic model and the $\delta$s are independent, then the interaction coefficients for $X_iX_l$ can be compared to the coefficients $\gamma_{1i}\gamma_{2l}+\gamma_{2i}\gamma_{1l}$ that appear in the composite model. However, we cannot expect equality, because there are fewer $\gamma$ parameters in the composite model than there would be interaction parameters in the monolithic model. (In other words, the structure of the composite model introduces algebraic relationships among the coefficients that the monolithic model cannot enforce.)
- The distribution of the random part of this model, $\sum_{i}{\left(\gamma_{1i}\delta_{2j} + \gamma_{2i}\delta_{1j}\right) X_{ij}} + \delta_{1j}\delta_{2j}$, depends on the data $X_{ij}$, on the parameters $\gamma_{ki}$, and on the products of $\delta_{kj}$. As such it will likely be heteroscedastic--that is, depending on the values of the variables--and have a different (and complex) distributional shape even when the $\delta_{kj}$ have a "nice" shape (such as normal).
- The analog of $\beta_i$ in the composite model, $\gamma_{1i}\delta_{2j} + \gamma_{2i}\delta_{1j}$, depends on the residuals in the component models of the composite. Therefore we would not expect these to have equal values. In fact, they would only tend (probabilistically) to be equal when the $\beta_i$ are zero, because the expectation of the analog is zero.
Whence the monolithic and composite models are almost entirely different. There seems to be no valid way even to compare coefficients and where they can be compared we cannot expect them to be equal.
(2) Am I magnifying the coefficient standard errors? By the same reasoning, the standard errors cannot even be compared.
There is nothing overtly wrong with using either model, but they say two different things about the behavior of $A$. The choice between them should not be made on the basis of coefficient standard errors but on the basis of which (if either) appears to be more appropriate and useful for the intended purpose.
(3) Are there alternatives to linear regression? Of course. You could extend the models in many nonlinear ways. But normally we begin with the simplest models that can reasonably be expected to serve our analytical purposes. Note that the monolithic model uses far fewer parameters than the composite model and therefore is a fortiori simpler.
| null | CC BY-SA 2.5 | null | 2011-01-27T20:29:25.263 | 2011-01-27T20:29:25.263 | null | null | 919 | null |
6642 | 1 | 6676 | null | 7 | 439 | Let $S_n = \frac{1}{n}\sum_{i=1}^n X_i$ and $T_n = \frac{1}{n}\sum_{i=1}^n Y_i$, where
- the $X_i$ are iid and the $Y_i$ are iid (with a different law),
- $X_i$ and $Y_i$ are dependent for each $i$,
- for $i\neq j$, $X_i$ and $Y_j$ are independent.
Is there a central limit type result for $S_n^2 - T_n^2$?
| Limiting distribution of a squared sum of random variables | CC BY-SA 2.5 | null | 2011-01-27T20:52:04.727 | 2011-01-28T18:48:44.800 | 2011-01-28T13:05:46.070 | 2116 | 2952 | [
"central-limit-theorem",
"delta-method"
]
|
6643 | 1 | 6645 | null | 7 | 1132 | Is there a closed form solution for this inverse CDF?
| What is the closed form solution for the inverse CDF for Epanechnikov | CC BY-SA 2.5 | null | 2011-01-27T21:09:18.177 | 2017-09-12T18:41:50.350 | 2015-04-23T05:54:35.603 | 9964 | 2808 | [
"distributions",
"cumulative-distribution-function",
"kernel-smoothing"
]
|
6644 | 2 | null | 6640 | 3 | null | If all of the stores were included in the study rather than a sample, then you could make conclusions without using probability statements or statistics.
But if you want to use a subsample to make inference about the larger population or make forecasts, then the use of statistics is appropriate.
A standard comparison of means test for which your data meet the assumptions would be appropriate. For example, if you want to make inference about the effect of promotion on the larger population of stores, e.g. to evaluate the null hypothesis that there is no effect, the [student's t test](http://en.wikipedia.org/wiki/Student%27s_t-test) with unequal sample sizes and unequal variance, a.k.a. [Welch's t-test](http://en.wikipedia.org/wiki/Welch%27s_t_test) is a widely used and robust method.
| null | CC BY-SA 2.5 | null | 2011-01-27T21:51:47.840 | 2011-01-27T22:45:31.127 | 2011-01-27T22:45:31.127 | 1381 | 1381 | null |
6645 | 2 | null | 6643 | 7 | null | You mean for a random variable with a single Epanechnikov kernel as PDF? Well, the PDF is $\frac{3}{4}(1-u^2)$, so the CDF is $\frac{1}{4}(2 + 3 u - u^3)$. Inverting this in Maple leads to three solutions, of which $$u = -1/2\,{\frac { \left( 1-2\,t+2\,i\sqrt {t}\sqrt {1-t} \right) ^{2/3}+1 +i\sqrt {3} \left( 1-2\,t+2\,i\sqrt {t}\sqrt {1-t} \right) ^{2/3}-i
\sqrt {3}}{\sqrt [3]{1-2\,t+2\,i\sqrt {t}\sqrt {1-t}}}}$$ seems to be the right one (where the third roots return the main branch). Of course this is a real value for real values of $t$ between 0 and 1; I currently don't have time to make this come out right but I'll try and revisit in a couple of days. If someone else sees it, it would be great if you could leave a comment.
---
Note whuber's comment below for a much nicer formula:
$$
u(z)=2\sin\left(\frac{1}{3}\arcsin(2z-1)\right)
$$
for $z\in[0,1].$
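A quick numerical check of this closed form in R (the function names here are mine, not standard):

```
qepan <- function(z) 2 * sin(asin(2 * z - 1) / 3)   # inverse CDF on [-1, 1]
pepan <- function(u) (2 + 3 * u - u^3) / 4          # the CDF derived above
z <- seq(0.001, 0.999, by = 0.001)
max(abs(pepan(qepan(z)) - z))                       # essentially zero: exact up to floating-point error
```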
| null | CC BY-SA 3.0 | null | 2011-01-27T22:41:50.163 | 2017-09-12T18:41:50.350 | 2017-09-12T18:41:50.350 | 22311 | 2898 | null |
6646 | 2 | null | 6601 | 21 | null | What is the meaning of "degrees of freedom"? How about df that are not whole numbers?
| null | CC BY-SA 2.5 | null | 2011-01-27T23:29:07.293 | 2011-01-29T19:13:03.407 | 2011-01-29T19:13:03.407 | -1 | null | null |
6648 | 1 | 6649 | null | 3 | 240 | An average of n birds fly through an area, in an hour, following a Poisson process. (I think this means the hours don't matter; there's no influence in the number of birds that fly through, at different parts of the day, hypothetically. Correct me if I'm wrong.)
P1 is the probability that exactly m birds fly through between 12:00-14:00 (2 continuous hours). P2 is the probability that exactly m birds fly through between 15:00-16:00 and 17:00-18:00 (a total of 2 hours still, but not together). Please notice m is the same for both situations.
Are the probabilities of P1 and P2 the same or I'm assuming something that's wrong?
Thanks!
(Please tell me if you need more information, I can make it up)
| Poisson process, time and probabilities | CC BY-SA 2.5 | null | 2011-01-27T23:34:01.300 | 2011-01-27T23:42:22.013 | null | null | 1833 | [
"probability",
"poisson-distribution"
]
|
6649 | 2 | null | 6648 | 5 | null | Yes, they are the same. Another crucial assumption in the Poisson process is that what happens now is independent of what happened a moment ago or what will happen in the next moment (or at any other moment, for that matter). Therefore the distribution of events during any (measurable) period of time depends only on the length of time, not on how it is broken up.
| null | CC BY-SA 2.5 | null | 2011-01-27T23:42:22.013 | 2011-01-27T23:42:22.013 | null | null | 919 | null |
6650 | 2 | null | 6601 | 7 | null | [Fiducial inference](http://en.wikipedia.org/wiki/Fiducial_inference). Even Fisher admitted he didn't understand what it does, and he invented it.
| null | CC BY-SA 2.5 | null | 2011-01-27T23:45:50.293 | 2011-01-27T23:45:50.293 | null | null | 449 | null |
6651 | 2 | null | 6347 | 3 | null | PCA depends on the scaling of your columns. If you perform a PCA on matrix $X$, then rescale each column to be norm 1 (i.e. divide by the two norm of each column), then perform a PCA on the transformed $X$, you will get different answers. I believe this is part of what 'small w.r.t. a particular row/column' is referring to in the original question.
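A small made-up illustration of that scaling point: the variance shares of the principal components change once the columns are rescaled.

```
set.seed(1)
X  <- matrix(rnorm(500), 100, 5) %*% diag(c(10, 5, 2, 1, 0.5))
X2 <- sweep(X, 2, sqrt(colSums(X^2)), "/")    # columns rescaled to unit 2-norm
v1 <- prcomp(X)$sdev^2
v2 <- prcomp(X2)$sdev^2
rbind(raw = v1 / sum(v1), rescaled = v2 / sum(v2))   # different variance shares
```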
However, the small elements in the matrix should contribute very little to a PCA/downsample/reconstruction operation. Perhaps you would be better served by determining, perhaps in a semiautomatic way, how many PCs to take in the representation. For this purpose, you might want to look at the scaling of the eigenvalues you get from the SVD, and look for the 'knee' in the [scree plot](http://janda.org/workshop/factor%20analysis/SPSS%20run/SPSS08.htm), or if you know something about the small noise, you can rely on the distribution of eigenvalues of the Gramian matrix. For normally distributed noise, with equal variances, they should follow a [Marchenko-Pastur Distribution](http://en.wikipedia.org/wiki/Marchenko%E2%80%93Pastur_distribution), up to scaling. This will give you the limits of the eigenvalues you expect to see in the pure noise situation. Basing your PC cutoff on that limit may be fruitful.
Sorry this is somewhat vague, I do not think I fully understand what technically is desired from the OP.
| null | CC BY-SA 2.5 | null | 2011-01-28T00:18:46.790 | 2011-01-28T00:18:46.790 | null | null | 795 | null |
6652 | 1 | 6801 | null | 104 | 17955 | I know roughly and informally what a confidence interval is. However, I can't seem to wrap my head around one rather important detail: According to Wikipedia:
>
A confidence interval does not predict that the true value of the parameter has a particular probability of being in the confidence interval given the data actually obtained.
I've also seen similar points made in several places on this site. A more correct definition, also from Wikipedia, is:
>
if confidence intervals are constructed across many separate data analyses of repeated (and possibly different) experiments, the proportion of such intervals that contain the true value of the parameter will approximately match the confidence level
Again, I've seen similar points made in several places on this site. I don't get it. If, under repeated experiments, the fraction of computed confidence intervals that contain the true parameter $\theta$ is $(1 - \alpha)$, then how can the probability that $\theta$ is in the confidence interval computed for the actual experiment be anything other than $(1 - \alpha)$? I'm looking for the following in an answer:
- Clarification of the distinction between the incorrect and correct definitions above.
- A formal, precise definition of a confidence interval that clearly shows why the first definition is wrong.
- A concrete example of a case where the first definition is spectacularly wrong, even if the underlying model is correct.
| What, precisely, is a confidence interval? | CC BY-SA 2.5 | null | 2011-01-28T00:23:50.893 | 2021-01-25T11:56:53.507 | null | null | 1347 | [
"confidence-interval",
"definition"
]
|
6653 | 1 | 6839 | null | 18 | 1190 | I am hoping that I can ask this question the correct way. I have access to play-by-play data, so it's more of an issue with best approach and constructing the data properly.
What I am looking to do is to calculate the probability of winning an NHL game given the score and time remaining in regulation. I figure I could use a logistic regression, but I am not sure what the dataset should look like. Would I have multiple observations per game and for every slice of time I am interested in? Would I have one observation per game and fit separate models per slice of time? Is logistic regression even the right way to go?
Any help you can provide will be very much appreciated!
Best regards.
| Logistic Regression and Dataset Structure | CC BY-SA 2.5 | null | 2011-01-28T00:24:32.027 | 2011-02-21T19:22:09.767 | 2011-02-02T11:01:33.630 | 264 | 569 | [
"time-series",
"probability",
"logistic"
]
|
6654 | 2 | null | 6652 | 47 | null | There are many issues concerning confidence intervals, but let's focus on the quotations. The problem lies in possible misinterpretations rather than being a matter of correctness. When people say a "parameter has a particular probability of" something, they are thinking of the parameter as being a random variable. This is not the point of view of a (classical) confidence interval procedure, for which the random variable is the interval itself and the parameter is determined, not random, yet unknown. This is why such statements are frequently attacked.
Mathematically, if we let $t$ be any procedure that maps data $\mathbf{x} = (x_i)$ to subsets of the parameter space and if (no matter what the value of the parameter $\theta$ may be) the assertion $\theta \in t(\mathbf{x})$ defines an event $A(\mathbf{x})$, then--by definition--it has a probability $\Pr_{\theta}\left( A(\mathbf{x}) \right)$ for any possible value of $\theta$. When $t$ is a confidence interval procedure with confidence $1-\alpha$ then this probability is supposed to have an infimum (over all parameter values) of $1-\alpha$. (Subject to this criterion, we usually select procedures that optimize some additional property, such as producing short confidence intervals or symmetric ones, but that's a separate matter.) The Weak Law of Large Numbers then justifies the second quotation. That, however, is not a definition of confidence intervals: it is merely a property they have.
I think this analysis has answered question 1, shows that the premise of question 2 is incorrect, and makes question 3 moot.
| null | CC BY-SA 2.5 | null | 2011-01-28T03:47:24.143 | 2011-01-28T03:47:24.143 | null | null | 919 | null |
6655 | 1 | 6661 | null | 12 | 6521 | There are quite a few methods for parameter estimation out there. MLE, UMVUE, MoM, decision-theoretic, and others all seem like they have a fairly logical case for why they are useful for parameter estimation. Is any one method better than the others, or is it just a matter of how we define what the "best fitting" estimator is (similar to how minimizing orthogonal errors produces different estimates from an ordinary least squares approach)?
| How do I know which method of parameter estimation to choose? | CC BY-SA 2.5 | null | 2011-01-28T06:12:21.280 | 2012-08-07T15:35:30.410 | 2012-08-07T15:35:30.410 | null | 1118 | [
"estimation",
"mathematical-statistics",
"maximum-likelihood",
"method-of-moments",
"umvue"
]
|
6656 | 1 | null | null | 2 | 292 | I am studying a dynamical system that takes as an initial condition a list. I want to analyze the evolution of Shannon's entropy in this system. I know the maximum entropy (50) and the minimum (0). Pure random conditions have almost maximum entropy, and so it is hard to analyze changes in it unless it decreases. I set up the list to have an initial value of 25 (average between maximum and minimum), so there is an equal amount to expand in either direction. Is this statistically sound?
Thanks in advance.
| Median entropy to observe evolution of system? | CC BY-SA 2.5 | null | 2011-01-28T06:51:57.627 | 2012-07-23T19:45:09.440 | null | null | null | [
"entropy"
]
|
6657 | 2 | null | 6371 | 2 | null | So, as said in the comments, the Markov chain you consider has some absorbing states (and is irreducible, presumably), hence its stationary distribution is concentrated on these absorbing states. Therefore the issue is to compute some confidence intervals for the only two non zero coordinates of the stationary vector, one for each of the absorbing states. I would call these entries absorption probabilities rather than stationary vector because there is not much really stationary here, but anyway...
Since there are two absorbing states, $a$ and $b$ say, you are interested in $u_c=P_c[$The chain is absorbed at $a]$, for a given initial state $c\ne a$, $b$. I gather you observe the number $N_t(x)$ of particles at state $x$ and time $t$, for every state $x$ (or every $x\ne a$, $b$?), at two different times $t_1$ and $t_2$. How to estimate $u_c$ from these counts? Surely I am missing the obvious but, even replacing (a multiple of) each $N_t(x)$ by the exact value of $P_c[$The chain is at $x$ at time $t]$, I do not see how to compute $u_c$ from these quantities.
Dimensional analysis shows that for $n$ states, $N_t(x)$ at every state $x$ and two different times yields $2(n−1)$ independent parameters and the transition matrix has $n(n−1)$ independent parameters, a fact which seems to indicate that it would be impossible to identify the latter from the former as soon as $n\ge3$. OK, this argument is too sloppy to be really conclusive but...
(Caveat: I did not read, and have no access to, the paper by Karson and Wrobleski.)
| null | CC BY-SA 2.5 | null | 2011-01-28T06:54:14.890 | 2011-01-29T21:39:40.393 | 2011-01-29T21:39:40.393 | 2592 | 2592 | null |
6658 | 1 | null | null | 7 | 13300 | I want to compare sixteen Case Fatality Rates (deaths per 100 cases) of a particular disease from sixteen different populations across 7 years. Each population received the same treatment but some regions did not implement it properly. As a result, I am trying to show the effectiveness the treatment had in each of the regions to prove a hypothesis that
>
More deaths occurred in some regions
because the treatment was not given
thoroughly.
Can someone suggest a way to do this?
Update:
Below is the data from 2003 to 2010 if that helps:
```
SI. 2003 2004 2005 2006 2007 2008 2009 2010
-----------------------------------------------------------------------------------------------------------------------------------------------
Cases Deaths Cases Deaths Cases Deaths Cases Deaths Cases Deaths Cases Deaths Cases Deaths Cases Deaths
1 31 4 7 3 34 0 11 0 22 0 6 0 14 0 132 5
2 109 49 235 64 145 52 392 119 424 133 319 99 462 92 562 125
3 6 2 85 28 192 64 21 3 336 164 203 45 325 95 50 7
4 12 5 17 0 6 0 1 0 0 0 0 0 0 0 x x
5 0 0 0 0 4 0 0 0 27 0 39 0 66 3 80 0
6 104 67 37 27 46 39 2 1 32 18 13 3 12 10 2 1
7 226 10 181 6 122 10 73 3 32 1 3 0 246 8 138 1
S 17 2 9 1 1 0 3 3 2 0 2 0 3 0 19 5
9 475 115 22 0 51 0 1 0 0 0 24 0 1 0 34 17
10 1 0 0 0 1 0 0 0 65 0 4 0 6 0 118 15
11 0 0 0 0 0 0 0 0 7 0 0 0 9 2 11 6
12 0 0 0 0 1 0 0 0 0 0 0 0 0 0 x x
13 0 0 0 0 0 0 0 0 0 0 12 0 0 0 7 0
14 163 36 88 9 51 11 18 1 37 0 144 0 265 8 458 7
15 1124 237 1030 228 6061 1500 2320 528 3024 645 3012 537 3073 556 3540 494
16 2 1 3 1 12 6 0 0 16 2 58 0 0 0 1 0
```
| How can we compare multiple proportions from multiple independent populations to evaluate implementation of a treatment? | CC BY-SA 2.5 | null | 2011-01-28T06:55:48.787 | 2015-02-19T14:19:26.490 | 2011-01-30T06:01:05.940 | 2956 | 2956 | [
"hypothesis-testing",
"spss",
"proportion"
]
|
6660 | 1 | null | null | 9 | 1664 | as question, since we can do the conversion from odds ratio `(p1/q1)/(p2/q2)` to relative risk `(p1/(p1+q1))/(p2/(p2+q2))` fairly easily, I wonder if there is anything that I need to pay attention before doing this?
It is obvious that if I am doing a case-control study, I shouldn't do a conversion, because I never know the relative risk from this kind of study, but are there any other things that I need to consider?
Thanks.
| Prerequisite for conversion from odds ratio to relative risk to be valid | CC BY-SA 2.5 | null | 2011-01-28T08:24:29.473 | 2011-10-01T03:35:32.033 | 2011-08-31T23:18:21.180 | 5836 | 588 | [
"epidemiology",
"relative-risk",
"odds"
]
|
6661 | 2 | null | 6655 | 12 | null | There's a slight confusion of two things here: methods for deriving estimators, and criteria for evaluating estimators. Maximum likelihood (ML) and method-of-moments (MoM) are ways of deriving estimators; Uniformly minimum variance unbiasedness (UMVU) and decision theory are criteria for evaluating different estimators once you have them, but they won't tell you how to derive them.
Of the methods for deriving estimators, ML usually produces estimators that are more efficient (i.e. lower variance) than MoM if you know the model under which your data were derived (the 'data-generating process' (DGP) in the jargon). But MoM makes fewer assumptions about the model; as its name implies, it only uses one or more moments, usually just the mean or just the mean and variance, so it's sometimes more robust if you're not sure about the DGP. There can be more than one MoM estimator for the same problem, while if you know the DGP, there is only one ML estimator.
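A toy illustration in R of the two routes giving different estimators: for $X \sim \mathrm{Uniform}(0,\theta)$, the method of moments gives $2\bar{x}$ while maximum likelihood gives $\max(x)$ (the data are simulated just for illustration):

```
set.seed(1)
x <- runif(50, 0, 10)                  # true theta = 10
c(MoM = 2 * mean(x), ML = max(x))      # two different estimates of theta
```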
Of the methods for evaluating estimators, decision-theoretic depends on having a loss function to use to judge your estimator, although the results can be fairly robust to a range of 'reasonable' loss functions. UMVU estimators often don't even exist; in many cases there is no unbiased estimator that always has minimum variance. And the criterion of unbiasedness is also of questionable usefulness, as it's not invariant to transformations. For example, would you prefer an unbiased estimator of the odds ratio, or of the log odds ratio? The two will be different.
| null | CC BY-SA 2.5 | null | 2011-01-28T09:10:11.987 | 2011-01-28T09:26:27.437 | 2011-01-28T09:26:27.437 | 449 | 449 | null |
6662 | 2 | null | 6601 | 5 | null | Confidence interval in non-Bayesian tradition is a difficult one.
| null | CC BY-SA 2.5 | null | 2011-01-28T11:07:15.030 | 2011-01-28T11:07:15.030 | null | null | 1966 | null |
6664 | 1 | null | null | 8 | 4374 | I have serial hematological measurement data and I have plotted their means and SEs in Stata. On the y-axis I have, for example, hemoglobin, and on the x-axis time (visit days), hence I can visualize hemoglobin levels over time (whether they are decreasing or increasing). The level decreases up to some point in time and then increases again. What test can I use to test whether this is significant or not? Stata- and R-related answers are welcome.
Thank you.
Julie
| Statistical test for trend (continuous variable) in Stata or R | CC BY-SA 2.5 | null | 2011-01-28T13:07:55.577 | 2011-01-28T13:49:56.673 | 2011-01-28T13:09:32.870 | 2116 | 2961 | [
"r",
"stata"
]
|
6665 | 2 | null | 6599 | 11 | null | This is a topic of practical interest to me as well so I did a little research. Here are two papers by an author that is often listed as a reference in these matters.
- Transforming classifier scores into
accurate multiclass probability
estimates
- Reducing multiclass
to binary by coupling probability
estimates
The gist of the technique advocated here is to reduce the multiclass problem to a binary one (e.g. one versus the rest, AKA one versus all), use a technique like Platt (preferably using a test set) to calibrate the binary scores/probabilities and then combine these using a technique as discussed in the papers (one is an extension of a Hastie et al. process of "coupling"). In the first link, the best results were found by simply normalizing the binary probabilities so that they sum to 1.
I would love to hear other advice and whether any of these techniques have been implemented in R.
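In R, the simplest coupling mentioned above is just row normalisation of a matrix of (already calibrated) one-versus-rest probabilities; the matrix below is made up for illustration:

```
p <- matrix(c(0.70, 0.20, 0.15,
              0.10, 0.55, 0.60), nrow = 2, byrow = TRUE)
p / rowSums(p)    # each row now sums to 1
```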
| null | CC BY-SA 2.5 | null | 2011-01-28T13:21:02.197 | 2011-01-28T13:21:02.197 | null | null | 2040 | null |
6666 | 2 | null | 6636 | 4 | null | Okay, so here is my answer that I promised. I initially thought it would be quickish, but my answer has become quite large, so at the beginning I state my general results first, and leave the gory details at the bottom for those who want to see them.
I must thank @terry felkrow for this fascinating question - if I could give you +10 I would! This basically is a prime example of the slickness and elegance of Bayesian and Maximum Entropy methods. I have had much fun working it out!
SUMMARY
Exact result
$$Pr(\theta \in (0,S)|F_{obs},T_U,T_D)=1-\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)^{F_{obs}+1}$$
Where $\theta$ is the time of the first down time (in seconds) observed by the user, $T_U$ is the number of "up time" seconds observed , $T_D$ is the number of "down time" seconds observed, and $F_{obs}$ is the number of "down periods" (F for "failures"; $\frac{T_D}{F_{obs}}$ is the average number of seconds spent in "down time") observed
For your case, $F_{obs}$ is not given, but I would guess that you could find out what it was (which is why I gave the answer for known $F_{obs}$). Now because you know $T_D$, this tells you a bit about $F_{obs}$, and you should be able to pose an "Expected Value" or educated guess of $F_{obs}$, call it $\hat{F}$. Now using the geometric distribution with probability parameter $p=\frac{1}{\hat{F}}$ (this is the Maximum Entropy distribution for fixed mean equal to $\hat{F}$), to integrate out $F_{obs}$ gives the probability of (see details for the maths):
$$Pr(\theta \in (0,S)|\hat{F},T_U,T_D)=1-\frac{\Bigg(\frac{T_U}{T_U+T_D}\Bigg)\Bigg(\frac{T_U}{T_U+S}\Bigg)^2}{\hat{F}-(\hat{F}-1)\Bigg(\frac{T_U}{T_U+S}\Bigg)}$$
So for your specific case, the table below shows various bounds for different $F$, assuming it is known (column 2) or "expected" (column 3). Can see that the knowing $F_{obs}$ comparing to knowing a "rough" guess $\hat{F}$ only matters when it is very large, (i.e. when the observed average down time is 1 second or less).
$$
\begin{array}{c|c}
F & Pr(\theta \in (0,S)|F_{obs},T_U,T_D) & Pr(\theta \in (0,S)|\hat{F},T_U,T_D) \\
\hline
1,000,000 & 0.625 & 0.499 \\
\hline
500,000 & 0.393 & 0.336 \\
\hline
250,000 & 0.227 & 0.207 \\
\hline
125,000 & 0.128 & 0.122 \\
\hline
62,500 & 0.074 & 0.072 \\
\hline
31,250 & 0.045 & 0.045 \\
\hline
15,685 & 0.031 & 0.030 \\
\hline
7,812 & 0.023 & 0.023 \\
\hline
1 & 0.016 & 0.016
\end{array}
$$
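As a quick numerical check, the exact result above can be evaluated in R with the numbers from the question ($T_U$, $T_D$, $S = 30$ seconds), reproducing column 2 of the table:

```
T_D <- 500000
T_U <- 31556926 - T_D
S   <- 30
p_down <- function(Fobs) 1 - (T_U / (T_U + T_D)) * (T_U / (T_U + S))^(Fobs + 1)
round(p_down(c(1e6, 5e5, 31250, 1)), 3)   # 0.625, 0.393, 0.045, 0.016
```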
DETAILS
It is based on example 3 in the paper below
Jaynes, E. T., 1976. `Confidence Intervals vs Bayesian Intervals,' in Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, W. L. Harper and C. A. Hooker (eds.), D. Reidel, Dordrecht, p. 175; [pdf](http://bayes.wustl.edu/etj/articles/confidence.pdf)
It supposes that the probability that a machine will operate without failure for a time $t$, is given by
$$Pr(\theta \geq t)=e^{-\lambda t};\ \ 0<t,\lambda < \infty$$
Where $\lambda$ is an unknown "rate of failure", to be estimated from some data.
I will use this to model the failure times in 2 separate cases, where "failure" indicates going from "working" to "down time", and the other way around. You can think of this as modeling two "memoryless" procedures. We first "wait" for the down time, from time $t=t_{0u}=0$ to time $t=t_{1d}$ (so that there were $t_1$ seconds of uninterrupted "operating" time). This has a failure rate of $\lambda_d$. At time $t=t_{1d}$ a new process takes over and now we "wait" for the down time to "fail" at time $t=t_{1u}$. It is also supposed that the rate of failure is constant over time, and that the process has independent increments (i.e. if you know where the process is at time $t=s$, then all other information about the process prior to time $t<s$ is irrelevant). This is what is known as a first order Markov process, also known as a "memoryless" process (for obvious reasons).
Okay, the problem goes as follows, Jaynes eq (8) gives the density that $r$ units out of $n$ will fail at the times $t_1 ,t_2 ,\dots,t_r$, and the remaining (n-r) do not fail at time t as
$$p(t_1 ,t_2 ,\dots,t_r | \lambda,n)=[\lambda^r exp(-\lambda \sum_{i}t_i)][exp(-(n-r)\lambda t)]$$
Then assigning a uniform prior (the particular prior you use won't matter in your case because you have so much data, the likelihood will dominate any reasonably "flat" prior) to $\lambda$, this give the a posterior predictive distribution of (see Jaynes paper for details, eq (9)-(13)):
$$Pr(\theta\geq\theta_0|n,t_1 ,\dots,t_r)=\int_0^{\infty}Pr(\theta\geq\theta_0|\lambda)p( \lambda | t_1 ,t_2 ,\dots,t_r,n)d\lambda=\Bigg(\frac{T}{T+\theta_0}\Bigg)^{r+1}$$
Where $T=\sum_{i}t_i + (n-r)t$ is the total time the devices operated without failure. This indicates that you only needed to know the total "failure free time", which you have both given as $T_D=500,000$ and $T_U=31,556,926-500,000=31,056,926$. Also for you problem we always observed either $n$ or $n-1$ "failures" by time $t$, depending on whether the system was "down" or "up" at time $t$.
Now if you knew what $F_{obs}$ was, then you just plug in $r=F_{obs}$ to the above equation. The probability that a user will not be in the "down time" in the first $S$ seconds given that the system was "up" when they started is then
$$Pr(\theta\geq S|[\text{Up at start} ],F_{obs},T_U)=\Bigg(\frac{T_U}{T_U+30}\Bigg)^{F_{obs}+1}$$
But the story is not yet finished, because we can marginalise (remove conditions) further. To make the equations shorter, let $A$ stand for the system was up when the user started, and let $B$ stand for no down time in $S$ seconds. Then, by the law of total probability, we have
$$Pr(B|F_{obs},T_U,T_D)=Pr(B|F_{obs},T_U,T_D,A)Pr(A|F_{obs},T_U,T_D)$$
$$+Pr(B|F_{obs},T_U,T_D,\overline{A})Pr(\overline{A}|T_U,T_D)$$
Now $\overline{A}$ means that the system was down when the user started, so that it is impossible for $B$ to be true (i.e. no down time) when $\overline{A}$ is true. Thus, $Pr(B|F_{obs},T_U,T_D,\overline{A})=0$, and we just have to multiply by $Pr(A|F_{obs},T_U,T_D)$. This is given by $\frac{T_U}{T_U+T_D}$, because none of the information contained in $F_{obs},T_U,T_D$ gives any reason to favor any particular time over any other.
$$Pr(\theta\geq S|F_{obs},T_U,T_D)=\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)^{F_{obs}+1}$$
Taking 1 minus this gives the desired result.
NOTE: We may have additional knowledge which would favor certain times, such as knowing what time of day is more likely to have a system outage, or we may believe that system outage is related to the number of users; this analysis ignores such information, and so could be improved upon by taking it into account.
NOTE: if you only knew a rough guess of $F_{obs}$, say $\hat{F}$, you could (in theory) use the geometric distribution (which has the largest entropy for a fixed mean) for $F_{obs}$ with probability parameter $p=\frac{1}{\hat{F}}$ and marginalise over $F_{obs}$ to give:
$$Pr(\theta \geq S|T_U,T_D)=\frac{T_U}{T_U+T_D}\sum_{i=1}^{i=\infty} p(1-p)^{i-1}\Bigg(\frac{T_U}{T_U+S}\Bigg)^{i+1}$$
$$=\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)\sum_{i=1}^{i=\infty} p(1-p)^{i-1}\Bigg(\frac{T_U}{T_U+S}\Bigg)^{i}$$
$$=\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)\sum_{i=1}^{i=\infty} p(1-p)^{i-1} exp\Bigg(i log\Bigg[\frac{T_U}{T_U+S}\Bigg]\Bigg)$$
Now the summation is just the moment generating function, $m_{X}(t)=E[exp(tX)]$, evaluated at $t=log\Bigg[\frac{T_U}{T_U+S}\Bigg]$. The [mgf for the geometric distribution](http://en.wikipedia.org/wiki/Geometric_distribution) is given by:
$$m_{X}(t)=E[exp(tX)]=\frac{pe^t}{1-(1-p)e^t}$$
$$\rightarrow m_{X}(log\Bigg[\frac{T_U}{T_U+S}\Bigg])=\frac{p\Bigg[\frac{T_U}{T_U+S}\Bigg]}{1-(1-p)\Bigg[\frac{T_U}{T_U+S}\Bigg]}$$
And this gives a marginal probability of (noting $p=\frac{1}{\hat{F}}$):
$$Pr(\theta \geq S|T_U,T_D)=\frac{T_U}{T_U+T_D}\Bigg(\frac{T_U}{T_U+S}\Bigg)\frac{\frac{1}{\hat{F}}\Bigg[\frac{T_U}{T_U+S}\Bigg]}{1-(1-\frac{1}{\hat{F}})\Bigg[\frac{T_U}{T_U+S}\Bigg]}$$
Rearranging terms gives the final result:
$$Pr(\theta \in (0,S)|T_U,T_D)=1-Pr(\theta \geq S|T_U,T_D)=1-\frac{\Bigg(\frac{T_U}{T_U+T_D}\Bigg)\Bigg(\frac{T_U}{T_U+S}\Bigg)^2}{\hat{F}-(\hat{F}-1)\Bigg(\frac{T_U}{T_U+S}\Bigg)}$$
| null | CC BY-SA 2.5 | null | 2011-01-28T13:24:37.943 | 2011-01-28T16:22:23.360 | 2011-01-28T16:22:23.360 | 2392 | 2392 | null |
6667 | 2 | null | 6664 | 3 | null | It seems that your problem can be stated as a change-point problem. R packages dealing with this type of problem are [segmented](http://cran.r-project.org/web/packages/segmented/index.html) and [strucchange](http://cran.r-project.org/web/packages/strucchange/index.html). Since you want to look into changes in the time trend (and time trends always need special treatment in linear regression), I suggest differencing your hemoglobin level data and then testing whether there is a change in mean.
Look also into answers for this [question](https://stats.stackexchange.com/questions/5700/finding-the-change-point-in-data-from-a-piecewise-linear-function).
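A rough sketch with `strucchange` on a fake series that first falls and then rises (your real hemoglobin values would replace `hb`):

```
library(strucchange)
set.seed(1)
hb <- c(14 - 0.10 * (1:10), 13 + 0.15 * (1:10)) + rnorm(20, sd = 0.1)
d  <- diff(hb)                      # differencing turns a trend change into a mean shift
sctest(d ~ 1, type = "OLS-CUSUM")   # test for a structural change in the mean
breakpoints(d ~ 1)                  # estimate where the change occurs
```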
| null | CC BY-SA 2.5 | null | 2011-01-28T13:49:56.673 | 2011-01-28T13:49:56.673 | 2017-04-13T12:44:46.680 | -1 | 2116 | null |
6668 | 1 | null | null | 5 | 423 | I calculated the quantiles for an Epanechnikov kernel which I'm using to estimate the density of a sample. What I need is to find the sample quantiles knowing that it is composed of many Epanechnikov kernels. Is there a way to calculate at wich data points the different quantiles are using the kernel inverse CDF formula?
| How can I convert kernel quantiles into sample quantiles? | CC BY-SA 2.5 | null | 2011-01-28T15:13:59.893 | 2015-04-23T05:57:46.607 | 2015-04-23T05:57:46.607 | 9964 | 2953 | [
"quantiles",
"cumulative-distribution-function",
"kernel-smoothing"
]
|
6669 | 2 | null | 6652 | 5 | null | From a theoretical perspective Questions 2 and 3 are based on the incorrect assumption that the definitions are wrong. So I am in agreement with @whuber's answer in that respect, and @whuber's answer to question 1 does not require any additional input from me.
However, from a more practical perspective a confidence interval can be given its intuitive definition (Probability of containing the true value) when it is numerically identical with a Bayesian credible interval based on the same information (i.e. a non-informative prior).
But this is somewhat disheartening for the die hard anti-bayesian, because in order to verify the conditions to give his CI the interpretation he/she want to give it, they must work out the Bayesian solution, for which the intuitive interpretation automatically holds!
The easiest example is a $1-\alpha$ confidence interval for the normal mean with known variance, $\overline{x}\pm \frac{\sigma}{\sqrt{n}} Z_{\alpha/2}$, and a $1-\alpha$ posterior credible interval, $\overline{x}\pm \frac{\sigma}{\sqrt{n}} Z_{\alpha/2}$.
I am not exactly sure of the conditions, but I know the following are important for the intuitive interpretation of CIs to hold:
1) a Pivot statistic exists, whose distribution is independent of the parameters (do exact pivots exist outside normal and chi-square distributions?)
2) there are no nuisance parameters, (except in the case of a Pivotal statistic, which is one of the few exact ways one has to handle nuisance parameters when making CIs)
3) a sufficient statistic exists for the parameter of interest, and the confidence interval uses the sufficient statistic
4) the sampling distribution of the sufficient statistic and the posterior distribution have some kind of symmetry between the sufficient statistic and the parameter. In the normal case the sampling distribution the symmetry is in $(\overline{x}|\mu,\sigma)\sim N(\mu,\frac{\sigma}{\sqrt{n}})$ while $(\mu|\overline{x},\sigma)\sim N(\overline{x},\frac{\sigma}{\sqrt{n}})$.
These conditions are usually difficult to find, and usually it is quicker to work out the Bayesian interval, and compare it. An interesting exercise may also be to try and answer the question "for what prior is my CI also a Credible Interval?" You may discover some hidden assumptions about your CI procedure by looking at this prior.
| null | CC BY-SA 2.5 | null | 2011-01-28T15:32:33.077 | 2011-01-28T15:32:33.077 | null | null | 2392 | null |
6670 | 1 | null | null | 9 | 8971 | I want to calculate a better bandwidh for my kernel density estimator, which is an Epanechnikov. I use Silverman's formula which involves the standard deviation of the sample, the sample size and a constant, but I'm getting a very smooth curve in most cases and I would prefer if it were more balanced. Thank you for any help you can give me.
| Which is the formula from Silverman to calculate the bandwidth in a kernel density estimation? | CC BY-SA 2.5 | null | 2011-01-28T15:37:17.283 | 2015-04-27T05:39:17.940 | 2015-04-27T05:39:17.940 | 9964 | 2953 | [
"estimation",
"smoothing",
"kernel-smoothing"
]
|
6671 | 2 | null | 6670 | 11 | null | To shamelessly quote the Stata manual entry for [kdensity](http://www.stata.com/help.cgi?kdensity):
>
The optimal width is the width that would minimize the mean integrated squared error if the data were Gaussian and a Gaussian kernel were used, so it is not optimal in any global sense. In fact, for multimodal and highly skewed densities, this width is usually too wide and oversmooths the density (Silverman 1992).
[Silverman, B. W.](http://en.wikipedia.org/wiki/Bernard_Silverman) 1992. [Density Estimation for Statistics and Data Analysis](http://books.google.com/books?id=e-xsrjsL7WkC). London: Chapman & Hall. ISBN 9780412246203
The formula Stata give for the optimal bandwidth $h$ is:
$$h = \frac{0.9m}{n^{1/5}} \quad \mbox{with } m = \min\left(\sqrt{\operatorname{Var}(X)},\frac{\operatorname{IQR}(X)}{1.349}\right),$$
where $n$ is the number of observations on $X$, $\operatorname{Var}(X)$ is its variance and $\operatorname{IQR}(X)$ its interquartile range.
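In R, `bw.nrd0()` implements essentially this rule of thumb, so a hand computation can be checked against it (simulated data for illustration):

```
set.seed(1)
x <- rnorm(200)
h <- 0.9 * min(sd(x), IQR(x) / 1.349) * length(x)^(-1/5)
c(by_hand = h, bw.nrd0 = bw.nrd0(x))   # agree up to the 1.349 vs 1.34 detail
```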
| null | CC BY-SA 2.5 | null | 2011-01-28T16:00:09.583 | 2011-01-28T16:22:45.787 | 2011-01-28T16:22:45.787 | 449 | 449 | null |
6672 | 2 | null | 6652 | 2 | null | Okay, I realize that when you calculate a 95% confidence interval for a parameter using classical frequentist methods, it doesn't mean that there is a 95% probability that the parameter lies within that interval. And yet ... when you approach the problem from a Bayesian perspective, and calculate a 95% credible interval for the parameter, you get (assuming a non-informative prior) exactly the same interval that you get using the classical approach. So, if I use classical statistics to calculate the 95% confidence interval for (say) the mean of a data set, then it is true that there's a 95% probability that the parameter lies in that interval.
| null | CC BY-SA 2.5 | null | 2011-01-28T17:14:04.500 | 2011-01-28T17:14:04.500 | null | null | 2617 | null |
6674 | 2 | null | 726 | 16 | null | I just can't help myself, this is a provocative quote from E. T. Jaynes:
>
Many of us have already explored the road you are following, and we
know what you will find at the end of it. It doesn't matter how many
new words you drag into the discussion to avoid having to utter the
word 'probability' in a sense different from frequency: likelihood,
confidence, significance, propensity, support, credibility,
acceptability, indifference, consonance, tenability; and so on, until
the resources of the good Dr Roget are exhausted. All of these are
attempts to represent degrees of plausibility by real numbers, and
they are covered automatically by Cox's theorems. It doesn't matter
which approach you happen to like philosophically; by the time you
have made your methods fully consistent, you will be forced, kicking
and screaming, back to the ones given by Laplace. Until you have
achieved mathematical equivalence with Laplace's methods, it will be
possible, by looking in specific problems with Galileo's
magnification, to exhibit the defects in your methods.
| null | CC BY-SA 3.0 | null | 2011-01-28T18:01:15.537 | 2011-08-15T04:14:01.257 | 2011-08-15T04:14:01.257 | 1381 | 2392 | null |
6675 | 2 | null | 5077 | 4 | null | I came across your question when I was looking for the original reference for Hit-and-Run. Thanks for that! I just put together a proof-of-concept implementation of hit-and-run for PyMC at the end of [this recent blog](http://healthyalgorithms.wordpress.com/2011/01/28/mcmc-in-python-pymc-step-methods-and-their-pitfalls).
| null | CC BY-SA 2.5 | null | 2011-01-28T18:42:16.323 | 2011-01-28T18:42:16.323 | null | null | 2498 | null |
6676 | 2 | null | 6642 | 5 | null | If $X_i$ and $Y_j$ are dependent for $i=j$, but independent for $i\neq j$ we have an iid sample from bivariate distribution $Z_i=(X_i,Y_i)$. Then central limit theorem gives us
\begin{align}
\sqrt{n}\left(\frac{1}{n}\sum_{i=1}^nZ_i -EZ_1\right)\xrightarrow{D}N(0,\Sigma)
\end{align}
with $\Sigma=cov(Z_1)$ and $\xrightarrow{D}$ indicating convergence in distribution. Note that
$$(S_n,T_n)=\frac{1}{n}\sum_{i=1}^nZ_i.$$
Now we can use delta method, which states that if $r_n(U_n-\theta)\xrightarrow{D}U$ for numbers $r_n\to\infty$, then $r_n(\phi(U_n)-\phi(\theta))\xrightarrow{D}\phi'_\theta(U)$. This statement can be found [here](http://books.google.com/books?id=UEuQEM5RjWgC&lpg=PR1&dq=asymptotic%20statistics%20vaart&hl=fr&pg=PA26#v=onepage&q&f=false). Delta method is also described [here](http://en.wikipedia.org/wiki/Delta_method).
In our case now we have $r_n=\sqrt{n}$, $\phi(x,y)=x^2-y^2$, $U_n=(S_n,T_n)$ and $\theta=(EX_1,EY_1)$. We have
$$\phi_{\theta}'=(2EX_1,-2EY_1)$$
and
$$U=(U_1,U_2)\sim N(0,\Sigma)$$
Finally we get
\begin{align}
\sqrt{n}\left(S_n^2-T_n^2-(EX_1)^2+(EY_1)^2\right)\xrightarrow{D}
2U_1EX_1-2U_2EY_1:=V
\end{align}
Since we know that $(U_1,U_2)\sim N(0,\Sigma)$ we get that
$$V\sim N(0,4(EX_1,-EY_1)\Sigma(EX_1,-EY_1)').$$
Note that I basically redid the example after the theorem in the link.
The final answer then depends on $\Sigma=cov((X_1,Y_1))$, but this should be known to the original poster.
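A quick Monte Carlo sanity check of this limit, under the extra assumption that the pairs $(X_i,Y_i)$ are bivariate normal (MASS is used only to simulate them):

```
library(MASS)
set.seed(1)
mu <- c(1, 2); Sigma <- matrix(c(1, 0.6, 0.6, 1), 2)
n <- 5000; reps <- 2000
stat <- replicate(reps, {
  Z <- mvrnorm(n, mu, Sigma)
  sqrt(n) * (mean(Z[, 1])^2 - mean(Z[, 2])^2 - mu[1]^2 + mu[2]^2)
})
g <- c(2 * mu[1], -2 * mu[2])         # gradient of x^2 - y^2 at the means
c(empirical = var(stat), delta = drop(t(g) %*% Sigma %*% g))   # both about 10.4
```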
| null | CC BY-SA 2.5 | null | 2011-01-28T18:42:37.477 | 2011-01-28T18:48:44.800 | 2011-01-28T18:48:44.800 | 2116 | 2116 | null |
6677 | 2 | null | 6670 | 2 | null | I second @onestop, but quote Wilcox, 'Introduction to Robust Estimation and Hypothesis Testing', 2nd edition, page 50:
$$
h = 1.06\frac{A}{n^{1/5}}, \qquad A = \min{\left(s,\frac{IQR(x)}{1.34}\right)},
$$
where $s$ is the sample standard deviation.
| null | CC BY-SA 2.5 | null | 2011-01-28T19:00:00.760 | 2011-01-28T19:00:00.760 | null | null | 795 | null |
6678 | 2 | null | 726 | 7 | null | >
Everybody knows that probability and statistics are the same thing, and statistics is nothing but correlation. Now the correlation is just the cosine of an angle, thus all is trivial.
-- Emil Artin, according to Kai Lai Chung in
[Elementary probability theory](http://books.google.com/books?id=safNnEOICL8C) (right, Artin might not been known primarily as a statistician)
| null | CC BY-SA 2.5 | null | 2011-01-28T19:20:55.060 | 2011-01-28T19:29:25.247 | 2011-01-28T19:29:25.247 | 2592 | 2592 | null |
6679 | 2 | null | 6492 | 7 | null | On page 372 of [ARM](http://www.stat.columbia.edu/~gelman/arm/), Gelman and Hill mention using a uniform distribution on the inverse of DF between 1/DF = .5 and 1/DF = 0.
Specifically, in BUGS, they use:
```
nu.y <- 1/nu.inv.y
nu.inv.y ~ dunif(0,.5)
```
| null | CC BY-SA 2.5 | null | 2011-01-28T19:39:49.070 | 2011-01-28T19:39:49.070 | null | null | 1146 | null |
6680 | 1 | 6685 | null | 3 | 1824 | I need to convert the table A to table B. How can I do that using R?
TABLE A
```
Y 10
Y 12
Y 18
X 22
X 12
Z 11
Z 15
```
TABLE B
```
X 22 12
Y 10 12 18
Z 11 15
```
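For reference, one possible approach in R, assuming the two columns are named `key` and `value` (the accepted answer may of course do it differently):

```
tabA <- data.frame(key   = c("Y", "Y", "Y", "X", "X", "Z", "Z"),
                   value = c(10, 12, 18, 22, 12, 11, 15))
split(tabA$value, tabA$key)   # a list with one vector of values per key, as in table B
```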
| Need to convert duplicate column elements to a unique element in R | CC BY-SA 2.5 | null | 2011-01-28T19:46:47.740 | 2011-01-28T23:08:18.223 | 2011-01-28T22:23:20.910 | 919 | 2725 | [
"r"
]
|