Columns (one record per post): Id, PostTypeId (7 classes), AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense (3 classes), FavoriteCount (3 classes), CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags (list)
4154
2
null
4150
4
null
You should google the [EM algorithm](http://en.wikipedia.org/wiki/Expectation-maximization_algorithm). The wiki page has a description of the algorithm with an example of its application to [Gaussian mixtures](http://en.wikipedia.org/wiki/Expectation-maximization_algorithm#Example%3a_Gaussian_mixture). Perhaps someone else can point out an R package for you.
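A minimal R sketch of the kind of thing this answer points to; the `mixtools` package and the toy data are my assumptions, not something named in the answer (packages such as `mclust` would work just as well):
```
# Toy example: fit a two-component Gaussian mixture by EM
# (mixtools is only one of several packages that could be used).
library(mixtools)

set.seed(1)
x <- c(rnorm(200, mean = 0), rnorm(100, mean = 4, sd = 0.5))   # made-up data

fit <- normalmixEM(x, k = 2)   # EM for a 2-component normal mixture
fit$lambda                     # estimated mixing proportions
fit$mu                         # estimated component means
fit$sigma                      # estimated component standard deviations
```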
null
CC BY-SA 2.5
null
2010-11-03T00:39:58.123
2010-11-03T00:45:52.243
2010-11-03T00:45:52.243
919
null
null
4155
1
4156
null
6
813
I've been learning X12-ARIMA by looking at data from a friend's service company, and wondering how to model the capacity of the company. That is, if the company is limited by a particular resource to only be able to handle 1,000 customers a week, how do I keep my ARIMA model from happily predicting 1,200 customers next summer? (This isn't an issue with time series like GDP or stock prices, which don't have a hard cap.) It doesn't seem that you can do anything in the optimization phase (which is simply choosing parameters), nor with exogenous variables (which drive the process, not react to it). Maybe changing the ARIMA model to a State Space representation would help? (Any recommendations on an R package to do this? I've looked at several and DLM's many matrices confuse me at this point.)
Saturation in ARIMA (et al) models?
CC BY-SA 2.5
null
2010-11-03T02:22:36.290
2011-03-04T20:38:15.360
null
null
1764
[ "time-series" ]
4156
2
null
4155
3
null
If Y is customer demand, then you are observing X = min(Y, 1000) due to resource constraints. The actual Y could be larger, but you never observe it. So if you fit a time series model to X, you can set the forecasts to min(F, 1000), where F is the forecast from the time series model. I don't think there is a need to do anything fancier than that.
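A minimal sketch of the capping idea in R; the `forecast` package and the toy series are my assumptions (the answer does not name a package), and the 1,000-customer cap comes from the question:
```
# Sketch: fit an ARIMA model to the observed (capacity-limited) series,
# then truncate the point forecasts at the known capacity of 1000.
library(forecast)

set.seed(1)
y   <- ts(pmin(900 + arima.sim(list(ar = 0.7), n = 200, sd = 50), 1000),
          frequency = 52)              # toy weekly series, capped at 1000
fit <- auto.arima(y)
fc  <- forecast(fit, h = 52)

capped <- pmin(fc$mean, 1000)          # min(F, 1000), as suggested above
```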
null
CC BY-SA 2.5
null
2010-11-03T03:01:37.603
2010-11-03T03:01:37.603
null
null
159
null
4157
1
4163
null
3
624
I've really only seen EM used for mixtures where one can point out multiple modes visually - e.g., the classic mixture of Gaussians example. I would like to use EM for a mixture of an empirically defined, sharply peaked distribution and something that is more uniform - does anyone have an idea as to how much confidence I should put in the resulting estimates, or prior experience with similar applications?
Using the EM Algorithm for unimodal distributions?
CC BY-SA 2.5
null
2010-11-03T03:18:12.013
2011-03-27T16:05:08.313
2011-03-27T16:05:08.313
919
1777
[ "modeling", "mixture-distribution", "expectation-maximization" ]
4158
2
null
3814
24
null
Confusing p-values and effect size (i.e. stating my effect is large because I have a really tiny p-value). Slightly different from Stephan's [answer](https://stats.stackexchange.com/questions/3814/how-to-annoy-a-statistical-referee/3817#3817) of excluding effect sizes but giving p-values. I agree you should give both (and hopefully understand the difference!)
null
CC BY-SA 2.5
null
2010-11-03T04:01:10.777
2010-11-03T04:01:10.777
2017-04-13T12:44:54.643
-1
1036
null
4163
2
null
4157
3
null
There are two questions here: 1) how much confidence should you put in your model with peaked and flat components, and 2) how much confidence should you put in the EM algorithm as a way to fit this model. Question 1 has the same answers as any other model, e.g. a regression model with particular covariates or a factor analysis model with a certain number of factors. The only specific consideration I can think of is that you may be introducing the alternative flat data-generating source to make a more robust estimate of the peaked data source. This is a standard noisy measurement model. For comparison, you might also work with a fatter-tailed peaked source, e.g. a t-distribution vs. Normal + Uniform. As for Question 2, EM is just a maximum likelihood method. This means, first, that there may be better parameter values available, because EM may only have found a local maximum of the likelihood, and second, that you can get degenerate solutions because there is no prior / regularization in the setup. Both are standard ML problems, not really anything to do with the EM algorithm specifically, although the latter is probably made slightly worse by having missing data in the mix. For a more elaborate discussion, see McLachlan and Krishnan's 'The EM Algorithm and Extensions'.
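A small illustration of the local-optimum point above: rerun EM from several random starting values and keep the best log-likelihood. The `mixtools` package and the toy data are assumptions for illustration; a plain Gaussian mixture stands in for the peaked-plus-flat model discussed in the question:
```
# Sketch: guard against local optima by restarting EM several times
# and keeping the fit with the highest log-likelihood.
library(mixtools)

set.seed(2)
x <- c(rnorm(300), runif(100, -4, 4))   # peaked source plus a flat source (toy data)

fits <- lapply(1:10, function(i) normalmixEM(x, k = 2))
best <- fits[[which.max(sapply(fits, function(f) f$loglik))]]
best$loglik                             # best of the ten runs
```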
null
CC BY-SA 2.5
null
2010-11-03T07:46:46.563
2010-11-03T07:46:46.563
null
null
1739
null
4164
2
null
4138
3
null
I interpreted the question as asking for the distribution of the maximal element of a multivariate normal. In this case, the CDF can be computed from the CDF of the multivariate normal. This usually doesn't have a nice closed form (even in terms of the univariate normal CDF), but it can be evaluated numerically. In R: ``` library(mvtnorm) # given xl, mu and sigma pmvnorm(upper=rep(xl,length(mu)), mean=mu, sigma=sigma) ``` However, on re-reading the question, it seems to be asking for the probability that a particular element of the vector is maximal. In this case, I'd agree with G. Jay Kerns.
null
CC BY-SA 2.5
null
2010-11-03T08:29:08.687
2010-11-04T12:13:54.883
2010-11-04T12:13:54.883
495
495
null
4165
1
4167
null
20
18281
I wish to decide if I should take a course called "INTRODUCTION TO STOCHASTIC PROCESSES" which will be held next semester at my university. I asked the lecturer how studying such a course would help me as a statistician; he said that since he comes from probability, he knows very little of statistics and doesn't know how to answer my question. I can make an uneducated guess that stochastic processes are important in statistics, but I am also curious to know how. That is, in what fields/methods will a basic understanding of "stochastic processes" help me do better statistics?
How will studying "stochastic processes" help me as a statistician?
CC BY-SA 2.5
null
2010-11-03T08:57:40.697
2013-01-12T16:13:52.467
2010-11-03T10:10:14.340
183
253
[ "probability", "stochastic-processes" ]
4166
2
null
4165
3
null
A deep understanding of survival analysis requires knowledge of counting processes, martingales, Cox processes... See e.g. Odd O. Aalen, Ørnulf Borgan, Håkon K. Gjessing. Survival and event history analysis: a process point of view. Springer, 2008. [ISBN 9780387202877](http://en.wikipedia.org/wiki/Special%3aBookSources/9780387202877) Having said that, many applied statisticians (including me) use survival analysis without any understanding of stochastic processes. I'm not likely to make any advances to the theory though.
null
CC BY-SA 2.5
null
2010-11-03T10:35:34.100
2010-11-03T10:35:34.100
null
null
449
null
4167
2
null
4165
22
null
Stochastic processes underlie many ideas in statistics such as time series, Markov chains, Markov processes, Bayesian estimation algorithms (e.g., Metropolis-Hastings), etc. Thus, a study of stochastic processes will be useful in two ways: - Enable you to develop models for situations of interest to you. Exposure to such a course may enable you to identify a standard stochastic process that works given your problem context. You can then modify the model as needed to accommodate the idiosyncrasies of your specific context. - Enable you to better understand the nuances of statistical methodology that uses stochastic processes. Several key ideas in stochastic processes, such as convergence and stationarity, play an important role when we want to analyze a stochastic process. It is my belief that a course in stochastic processes will help you better appreciate the need to care about these issues and why they are important. Can you be a statistician without taking a course in stochastic processes? Sure. You can always use the software that is available to perform whatever statistical analysis you want. However, a basic understanding of stochastic processes is very helpful in making a correct choice of methodology and in understanding what is really happening inside the black box. Obviously, you will not be able to contribute to the theory of stochastic processes with a basic course, but in my opinion it will make you a better statistician. My general rule of thumb for coursework: the more advanced the courses you take, the better off you will be in the long run. By way of analogy: you can perform a t-test without knowing any probability theory or statistical testing methodology, but knowledge of probability theory and statistical testing methodology is extremely useful in understanding the output correctly and in choosing the correct statistical test.
null
CC BY-SA 2.5
null
2010-11-03T10:36:25.520
2010-11-03T10:42:13.277
2020-06-11T14:32:37.003
-1
null
null
4168
2
null
4165
3
null
The short answer probably is that all observable processes which we may want to analyze with statistical tools are stochastic processes, that is, they contain some element of randomness. The course will probably teach you the mathematics behind these stochastic processes, e.g. distribution functions, which will allow you to grasp how your statistical tools work. I think you can compare it with an automobile: just as you can drive your car without understanding the engineering behind it, and without theoretical knowledge about the dynamics of your car on the road, you can apply statistical tools to your data without understanding how these tools work, as long as you understand the output. This will probably be good enough if you want to do basic statistics with well-behaved data. But if you really want to get the most out of your car, to see where its limits are, you need knowledge of the engineering, the dynamics of your car on roads and in curves, and so on. And if you want to get the most out of your data with the help of your statistical tools, you need to understand how data generation can be modeled, how tests are devised, and what the assumptions behind your tests are, to be able to see where those assumptions are not valid.
null
CC BY-SA 2.5
null
2010-11-03T10:42:41.717
2010-11-03T10:42:41.717
null
null
1766
null
4169
2
null
4157
0
null
I have a [paper](http://www.thinkingaboutthinking.org/wp-content/uploads/2010/05/Lawrence_BRM_in_press.pdf) in press that explores the application of EM to estimation of a von Mises & uniform mixture in the circular domain. (The von Mises is the circular analogue of a Gaussian.)
null
CC BY-SA 2.5
null
2010-11-03T11:38:25.917
2010-11-03T11:38:25.917
null
null
364
null
4170
2
null
192
10
null
Arguably, the question is not very precise. Rather than enumerating all measures of association for $2\times 2$ tables, I shall concentrate on the way such measures may be constructed and how to select the one that is most appropriate with respect to the hypotheses or constraints relevant to a cross-classification. The very first questions to ask are: what does the table reflect (concordance, agreement, association between two attributes, etc.), do you seek an overall measure of association or do you think one of the two variables plays a specific role (which would justify the search for an "oriented" association), do you consider either or both of the margins fixed (row and/or column totals)? All of this impacts the method to choose and the way to interpret the results. The $2\times 2$ case Two-by-two tables are often treated separately from $I\times J$ tables because we often consider that variables play a symmetric role in this particular case. Obviously, this is not always the case: cross-classification of exposure and disease, as commonly found in epidemiological studies, is an example where both variables play a distinct role, which may lend itself to more than a simple interpretation in terms of association. Another one is $2\times 2$ tables constructed for studying the screening properties of a given diagnostic instrument. Although the odds-ratio (compared to, e.g. the relative risk) keeps its nice properties, we may be interested in positive/negative predictive values or sensitivity/specificity, which means working with other quantities of interest. Hence the need to specify whether the problem at hand implies two variables that are purely acting in a symmetrical way, or not, because it influences the way we interpret the results or derive a useful measure of association, agreement, or discrimination. For the sake of clarity, I will consider that data (counts) are arranged in the following way: ![alt text](https://i.stack.imgur.com/41Wjh.png) Basically, measures of association for $2\times 2$ tables can be grouped in two classes: those relying on (a) (a function of) the cross-product ratio and those based on (b) the product-moment (Pearson) correlation, or a function thereof. The cross-product ratio, mostly known as the odds-ratio, is simply $\alpha=p_{11}p_{22}/p_{12}p_{21}$. It is invariant under rows and columns interchange, and transformations of margins that preserve $\sum_{i,j}p_{ij}=1$. In epidemiology, we usually think of it as a measure of association where rows (or columns) are fixed: $p_{11}/p_{12}$ is then the odds of being in the first column (e.g., diseased) conditional on being in the first row (e.g., exposed), and likewise $p_{21}/p_{22}$ is the odds for the second row, or in other words $$ \alpha=\frac{p_{11}/p_{12}}{p_{21}/p_{22}}. $$ Yule's $Q=(\alpha-1)/(\alpha+1)$ falls into the former case, (a). Yule also proposed a measure of "colligation", $Y$, as $(\sqrt{\alpha}-1)/(\sqrt{\alpha}+1)$. Yule's $Q$ can be interpreted as the difference between conditional probabilities of like and unlike "orders" for two individuals chosen at random; it is identical to Goodman and Kruskal's $\gamma$ measure of association for $I\times J$ tables. For (b), we can derive a correlation coefficient for a $2\times 2$ table by thinking of the table as a combination of the two variables' scores (taking values 0 and 1 for the first and second row/column, resp.). 
Then, the coefficient $\rho$ is defined as the covariance divided by the square root of the product of the variances: $$ \rho=\frac{p_{22}-p_{2\cdot}p_{\cdot 2}}{\sqrt{p_{1\cdot}p_{2\cdot}p_{\cdot 1}p_{\cdot 2}}}, $$ which is equivalent to putting $p_{11}p_{22}-p_{21}p_{12}$ in the numerator. Plugging in the observed counts, Pearson's $r$ is the MLE of $\rho$ under a multinomial sampling model. It is invariant under rows and columns interchange, and positive linear transformation. It can be shown (Yule, 1912) that $\rho$ is identical to Yule's $Y$ if we standardize our table such that row and column margins sum to 1/2, i.e. $p_{11}^*=p_{22}^*=0.5\left(\sqrt{\alpha}/(\sqrt{\alpha}+1)\right)$ and $p_{12}^*=p_{21}^*=0.5\left(1/(\sqrt{\alpha}+1)\right)$. By doing this, we remove the information coming from the margins, such that $Y=2(p_{11}^*-p_{12}^*)$. Correlation-based measures are connected to the usual Pearson's chi-square statistic, since $$ \Phi^2=\sum_{i=1}^2\sum_{j=1}^2\frac{(p_{ij}-p_{i\cdot}p_{\cdot j})^2}{p_{i\cdot}p_{\cdot j}}, $$ that is, $$ \Phi^2=\frac{(p_{11}p_{22}-p_{21}p_{12})^2}{p_{1\cdot}p_{2\cdot}p_{\cdot 1}p_{\cdot 2}}=\rho^2. $$ In a $2\times 2$ table, we thus have $r^2=\chi^2/N$. Pearson also proposed to use $\sqrt{\rho^2/(1+\rho^2)}$ as a measure of association, and he coined it the coefficient of mean square contingency. As to how to choose the correct measure (a vs. b), it clearly depends on whether we want to be sensitive to marginal totals (in this case, $\rho$ cannot take its full range of possible values in $[-1;1]$), and whether we consider that we observe a full association even if one of the four cells is zero (in this case, $\rho$ cannot take the value $+1$ or $-1$ if only one of the cells is zero, which is not the case of Yule's $Q$). Of note, correlation-based measures are better if they are used in a correlation matrix (e.g., for factor analysis), because we cannot guarantee that a matrix composed of Yule's $Q$ coefficient will be positive definite. The $I\times J$ case Like for the $2\times 2$ case, we can derive measures of association based on different quantities. Measures based on chi-square include - Pearson's $P$ coefficient based on $\Phi^2$ (see above), $\sqrt{\Phi^2/(\Phi^2+1)}$ (to overcome the fact that $\Phi^2$ no longer lies in $[0;1]$ when $I$ or $J>2$); - Tschuprow's $T=\left(\Phi^2/\sqrt{(I-1)(J-1)}\right)^{1/2}$, which behaves better than $P$ in square tables (in that it can reach a maximum value of 1, for full or complete association); - Cramer's $V$ is another derivation, and $V=\left(\Phi^2/\text{min}(I-1,J-1)\right)^{1/2}$ (we have $V\geq T$ for all $I,J>2$). These measures are all measures of association where none of the variables plays a specific role. In case a $\chi^2$ test is significant, it is more interesting to look at how the expected counts depart from the observed counts (i.e. look at the Pearson residuals) in all $(i,j)$ cells, or to use something like a [mosaic plot](https://cran.r-project.org/web/packages/vcd/index.html). Goodman and Kruskall (1954) also proposed a predictive measure of association between rows and columns, or more specifically a measure of proportional reduction in error in predicting one column category when the row category is known as opposed to the case when the latter one is unknown. 
This is called $\lambda_{C|R}$ and its MLE is $$ \hat\lambda_{C|R}=\frac{\sum_{i=1}^Ix_{im}-x_{\cdot m}}{N-x_{\cdot m}} $$ where $x_{im}$ and $x_{\cdot m}$ stand for the maximum for the $i$th row and the column totals. This measure is interesting because it has a nicer interpretation than $\chi^2$-based measures, but it also has some drawbacks: $\lambda_{C|R}$ can be zero even in the absence of statistical independence, for instance. A measure of the proportion of explained variance (derived from Gini's total variation) may be derived from the total sum of squares (SS) in an $I\times J$ table $$ \text{TSS}=\frac{N}{2}-\frac{1}{2N}\sum_{i=1}^Ix_{i\cdot}^2, $$ which can be partitioned as a within- and between-group SS. Of interest here is the variance explained by considering the different categories (BSS) divided by the total variance, TSS. Like in the ANOVA framework, we have BSS=TSS-WSS, where $$ \text{WSS}=\frac{N}{2}-\frac{1}{2}\sum_{j=1}^J\frac{1}{x_{\cdot j}}\sum_{i=1}^Ix_{ij}^2, $$ so that we can derive BSS/TSS as $$ \hat\tau_{R|C}=\frac{\sum_j\frac{1}{x_{\cdot j}}\sum_i x_{ij}^2-\frac{1}{N}\sum_ix_{i\cdot}^2}{N-\frac{1}{N}\sum_ix_{i\cdot}^2}. $$ This measure can be interpreted as "the relative decrease in the proportion of incorrect predictions when we go from predicting the row category based only on the row marginal probabilities to predicting the row category based on the conditional proportions $p_{ij}/p_{\cdot j}$" (Bishop et al., 2007, p. 391). Finally, measures based on the cross-product ratios are also available, as well as measures of agreement for ordinal variables, but I realize now that I need to stop (and thank the reader who reached the end of this overview). A thorough overview of measures of association may be found in Bishop et al. (2007), from which I grabbed most of the above discussion, and of course Agresti (2002), for which Laura Thompson wrote a complete R adaptation, her [R (and S-PLUS) Manual to Accompany Agresti's Categorical Data Analysis](https://web.archive.org/web/20151002190955/https://home.comcast.net/%7Elthompson221/Splusdiscrete2.pdf). References - Agresti, A. (2002). Categorical Data Analysis. Wiley. - Bishop, Y.M., Fienberg, S.E., and Holland, P.W. (2007). Discrete Multivariate Analysis. Springer. - Goodman, L.A. and Kruskall, W.H. (1954). Measures of association for cross-classification. JASA, 49, 732-764. - Yule, G.U. (1912). On the methods of measuring association between two attributes. Journal of the Royal Statistical Society, 75, 579-642.
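A short R sketch computing a few of the $2\times 2$ quantities defined above directly from counts; the table itself is hypothetical:
```
# Hypothetical 2x2 table of counts (rows = exposure, columns = disease)
tab <- matrix(c(30, 10,
                20, 40), nrow = 2, byrow = TRUE)
p   <- tab / sum(tab)                             # cell proportions p_ij

alpha <- (p[1,1] * p[2,2]) / (p[1,2] * p[2,1])    # odds ratio (cross-product ratio)
Q     <- (alpha - 1) / (alpha + 1)                # Yule's Q
Y     <- (sqrt(alpha) - 1) / (sqrt(alpha) + 1)    # Yule's coefficient of colligation

rho <- (p[1,1] * p[2,2] - p[1,2] * p[2,1]) /
       sqrt(prod(rowSums(p)) * prod(colSums(p)))  # phi / point correlation

chisq <- unname(chisq.test(tab, correct = FALSE)$statistic)
all.equal(chisq, sum(tab) * rho^2)                # checks X^2 = N * rho^2
```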
null
CC BY-SA 4.0
null
2010-11-03T12:52:22.440
2020-11-05T10:10:06.100
2020-11-05T10:10:06.100
930
930
null
4171
2
null
4165
3
null
Just for the sake of completeness, an IID sequence of random variables is also a stochastic process (a very simple one).
null
CC BY-SA 2.5
null
2010-11-03T13:00:45.750
2010-11-03T13:00:45.750
null
null
247
null
4172
1
null
null
4
387
I am new to the area of statistics and I am hoping you can suggest methods I may use. Sorry if this is long but I might as well be as clear as possible on my first post :) What I am most worried about is that I may miss out on assumptions and draw conclusions based on statistical tests that, in fact, cannot be applied to my situation. In a nutshell: We are replacing a measurement tool + methodology with another tool and a similar methodology and I would like to prove that the new tool & methodology provide the same "results". The data reported: Each tool reports 1) the GPS position, 2) a category of measurement (type 1, type 2, type 3) (the categories are the same for both measurement tools and relate to what is being measured; they should report the same thing), and 3) a quantized version of a continuous value. The measurement tools probably quantize the value with different algorithms but, according to spec, they should provide the same value. Given what we're measuring, the measurements are definitely not stationary and, since we're measuring a physical quantity, I assume the time series are autocorrelated. How the setups differ: Setup 1 (historical setup): uses tool "A", takes a measurement 3 times a minute and reports the GPS position, the category of the measurement and the discrete value. Setup 2 (new setup): uses tool "B", takes a measurement up to every second (but not necessarily based on distance criteria between measurements) and reports the GPS position, the category and the discrete value too. Our experiment: We put both tools in a car and traveled enough to gather over 100,000 data points for setup 1. What I would like to prove: - the categories reported by setups 1 and 2 do not significantly differ - the discrete value measurements do not significantly differ either - whether the new setup has a bias or skew compared to the other one. What I have done so far: I have matched each data point of setup 1 to a single data point in setup 2 (the one that is "closest geographically" in a 4-minute time window). Is this even statistically sound? 1) Regarding the discrete value reported, - I drew a scatter plot of the discrete values for matched data points with bubble sizes corresponding to the count for each (x,y): the data cluster along a 45° line as expected but I can see there is some bias. There is also some spread around that line - I drew a Bland-Altman/Tukey diagram of the same data and I now see that the average difference depends on the mean of the two measurements. That's interesting to know - I computed the Pearson correlation for matches that are in the same category: I get 0.87, which seems to be high enough to look good. Can Pearson be applied given I have no idea if the distribution is normal and since the measurements are definitely not independent inside the time series? Would the U test be better? - I tried to compute a t-test but I'm getting t values in the "80" range because SQRT(N) is huge. I would like to use all the data collected in setup 2 rather than only the data that was matched 1 to 1. There is about 4 times more data reported by setup 2 than setup 1. I've been looking into non-parametric tests and I believe that is what applies to my case, as well as the whole notion of inter-rater agreement. So it seems like my next steps will be to use R to compute Cohen's kappa and Krippendorff's alpha. Would computing these and finding high agreement be enough to make my point? 
2) Regarding the categories reported, again the data in the time series are correlated: if category 1 is reported, then the chance that the next reported category is also 1 is higher than if category 2 had been reported. Given that there are three categories, what kind of tests could I apply? Thanks for your suggestions.
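Since the question asks about computing Cohen's kappa and Krippendorff's alpha in R, here is a minimal sketch on made-up category data; the `irr` package and the simulated labels are my assumptions, not part of the question:
```
# Sketch: chance-corrected agreement between the categories reported by the two setups
library(irr)

set.seed(3)
setup1 <- sample(1:3, 500, replace = TRUE, prob = c(0.5, 0.3, 0.2))
setup2 <- ifelse(runif(500) < 0.8, setup1,          # toy data: ~80% raw agreement
                 sample(1:3, 500, replace = TRUE))

kappa2(data.frame(setup1, setup2))                      # unweighted Cohen's kappa
kripp.alpha(rbind(setup1, setup2), method = "nominal")  # raters in rows, units in columns
```
Note that neither statistic addresses the serial dependence mentioned in the question; they only quantify agreement.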
How to "prove" that new measurement tool & process gives same result as old?
CC BY-SA 2.5
null
2010-11-03T13:17:39.483
2016-07-22T06:11:08.683
2016-07-22T06:11:08.683
1352
1784
[ "time-series", "correlation", "reliability", "agreement-statistics", "bland-altman-plot" ]
4173
2
null
4157
3
null
I have used various algorithms, including Bayesian approaches (and, I am sorry to confess, even Excel many years ago), to fit mixtures. When there is not a clear visual indication of the two (or more components) in the histogram, you can expect the likelihood function to be extremely flat--almost parabolic--near its peak. This is because the visual impression translates mathematically into an ability to trade off some proportion of one mixture with an equivalent proportion of the other (adjusting the parameters of the components to keep a good fit) while making only a minor change to the likelihood. In many cases it's difficult to pin down the maximum likelihood. (This is evidenced by regime-switching in the Markov chains, for instance: a chain will pursue an area where one component predominates and after longish periods switch to an area where another component predominates, never really settling down to a single optimum.) In any event you also want to assess uncertainty. This is reflected by how much change is needed in the mixture parameters to reduce the likelihood by some threshold amount. The near-parabolic flatness near the optimum delineates a long "ridge" of near-optimum values, resulting in a long elliptical confidence region for the mixture. Usually the major axis of that ellipse corresponds to the mixture proportions. Thus, you might conclude that your data are $p$ percent of component A and $1-p$ percent of component B, but $p$ might be anywhere from 0 to 70%. (Yes, there are boundary value problems with mixtures, too.) It can take an extraordinary amount of data to reduce these wide confidence intervals if you can even reliably find them. These problems are exacerbated when only the tails of the data provide most of the information needed to disentangle the distributions. This would often be the case for unimodal data.
null
CC BY-SA 2.5
null
2010-11-03T14:28:21.180
2010-11-03T14:28:21.180
null
null
919
null
4174
1
4180
null
6
752
At work we have a hardware device that is failing for some yet to be determined reason. I have been tasked to see if I can make this device not fail by making changes to its software driver. I have constructed a software test bench which iterates over the driver functions which I feel are most likely to cause the device to fail. So far I have forced 7 such failures and the iterations that the device failed on are as follows: 100 22 36 44 89 24 74 Mean = 55.57 Stdev = 31.81 Next, I made some software changes to the device driver and was able to run the device for 223 iterations without failure before I manually stopped the test. I want to be able to go back to my boss and say "The fact that we were able to run the device for 223 iterations without failure means that my software change has a X% probability of fixing the problem." I would also be satisfied with the converse probability that the device will still fail with this fix. If we were to assume that the iteration the device fails on is normally distributed, we can say that going 223 iterations without failure is 5.26 standard deviations from the mean which roughly has a 1-in-14 million chance of happening. However, because we only have a sample size of 7 (not including the 223), I'm fairly certain it would be unwise to assume normality. This is where I think the Student's t-test comes into play. Using the t-test with 6 degrees of freedom, I've calculated that the actual population mean has a 99% probability of being less than 94. So now my question to you guys is whether or not I am allowed to say with 99% certainty that hitting 223 iterations without failure is a 4.05 sigma event, i.e. $\frac{(223 - 94)}{31.81} = 4.05$ ? Am I allowed to use the 31.81 sample standard deviation in that calculation or is there some other test I should do to get a 99% confidence on what the maximum standard deviation is and then use that in my calculation for how many sigmas 223 really is away from the mean at the 99% confidence level? Thanks! UPDATE The answers I received here are beyond any expectation I had. I truly appreciate the time and thought many of you have put into your answers. There is much for me to think about. In response to whuber's concern that the data does not seem to follow an exponential distribution, I believe I have the answer for as to why. Some of these trials were run with what I thought would be a software fix but ultimately ended in failure. I would not be surprised if those trials were the 74 89 100 grouping that we see. Although I wasn't able to fix the problem it certainly seems like I was able to skew the data. I will check my notes to see if this is the case and my apologies for not remembering to include that piece of information earlier. Lets assume the above is true and we were to remove 74 89 100 from the data set. If I were to re-run the device with the original driver and get additional failure data points with values 15 20 23, how would you then compute the exponentially distributed parametric prediction limit at the 95% confidence level? Would you feel that this prediction limit is still a better statistic than assuming independent Bernoulli trials to find the probability of no failure at 223 iterations? 
Looking more closely at the wikipedia page on Prediction Limits I calculated the parametric prediction limits at the 99% confidence level assuming unknown population mean and unknown stdev on Excel as follows: $\bar{X_n} = 55.57$ $S_n = 31.81$ $T_a = T.INV\Bigl(\frac{1+.99}{2},6\Bigr)$ $Lower Limit = 55.57 - 3.707*31.81*\sqrt{1+\frac{1}{7}} = -70.51$ $Upper Limit = 55.57 + 3.707*31.81*\sqrt{1+\frac{1}{7}} = 181.65$ Since my trial of 223 is outside the 99% confidence interval of [-70.51 , 181.65] can I assume with 99% probability that this is fixed assuming that the underlying distribution is the T-Distribution? I wanted to make sure my understanding was correct even though the underlying distribution is most likely exponential, not normal. I have no clue in the slightest how to adjust the equation for an underlying exponential distribution. UPDATE 2 So I'm really intrigued with this 'R' software, I've never seen it before. Back when I took my stats class (several years ago) we used SAS. Anyway, with the cursory knowledge I gathered from Owe Jessen's example and a bit of help from Google, I think I came up with the following R code to produce the Prediction Limits with the hypothetical dataset assuming an Exponential Distribution Let me know if I got this right: ``` fails <- c(22, 24, 36, 44, 15, 20, 23) fails_xfm <- fails^(1/3) Y_bar <- mean(fails_xfm) Sy <- sd(fails_xfm) df <- length(fails_xfm) - 1 no_fail <- 223 percentile <- c(.9000, .9500, .9750, .9900, .9950, .9990, .9995, .9999) quantile <- qt(percentile, df) UPL <- (Y_bar + quantile*Sy*sqrt(1+(1/length(fails_xfm))))^3 plot(percentile,UPL) abline(h=no_fail,col="red") text(percentile, UPL, percentile, cex=0.9, pos=2, col="red") ``` [Prediction Limits http://img411.imageshack.us/img411/5246/grafr.png](http://img411.imageshack.us/img411/5246/grafr.png)
Estimating the probability that a software change fixed a problem
CC BY-SA 2.5
null
2010-11-03T14:52:52.013
2023-01-23T16:12:16.213
2017-11-03T13:50:16.927
101426
1786
[ "hypothesis-testing", "distributions", "t-test", "prediction-interval" ]
4175
1
4324
null
2
2720
I am trying to understand how I can use resampling techniques to complement my pre-planned analyses. This is not homework. I have a 5-sided die. 30 subjects call a number (1-5) and then roll the die. If it matches, it's a hit; if not, it's a miss. Each subject does this 25 times. If n is the number of trials (=25) and p is the probability of getting it correct (=.2), then the population value (mu) of the mean number correct is n*p=5. The population standard deviation, sigma, is square-root(n*p*[1-p]), which is 2. The experimental hypothesis (H1) is that subjects in this study will score above chance (above mu). The null hypothesis (H0) assumes a binomial distribution for each subject (they will score at mu). [Please don't get too worried about why I am doing this. If it helps you to understand the problem then you can think of it as an ESP test (and therefore I am testing the ability of subjects to score above mu). Also if it helps, imagine that the task is a virtual reality die throwing task, where the virtual 5-sided die performs according to chance. There can be no bias from an imperfect die because the die is virtual.] Okay. So before I conducted the "experiment" I had planned to compare the 30 subjects' scores with a one-sample t-test (comparing it to the null that mu=5). Then I discovered that the one-sample z-test was a more powerful test given what we know about the null hypothesis. Okay. Here is a simulation of my data in R: ``` binom.samp1 <- as.data.frame(matrix(rbinom(30*1, size=25, prob=0.2), ncol=1)) ``` Now R has a binom.test function, which gives the p-value regarding the number of successes over the number of trials. For my collected data (not the simulated data given): ``` >binom.test(174, 750, 1/5, alternative="g") number of successes = 174, number of trials = 750, p-value = 0.01722 ``` Now the one-sample t-test that I had originally planned to use (mainly because I'd never heard of the alternatives - should've paid more attention in higher statistics): ``` >t.test(binom.samp1-5, alternative="g") t = 1.7647, df = 29, p-value = 0.04407 ``` and for completeness' sake: the one-sample z-test ([BSDA package](http://rgm2.lab.nig.ac.jp/RGM2/R_man-2.9.0/library/BSDA/man/z.test.html)): ``` >z.test(binom.samp1, mu=5, sigma.x=2, alternative="g") z = 2.1909, p-value = 0.01423 ``` So. My first question is, am I right in concluding that the [binom.test](http://sekhon.berkeley.edu/stats/html/binom.test.html) is the correct test given the data and hypothesis? In other words, does the t-test approximate the z-test, which approximates the exact binom.test ([Bernoulli trial](http://en.wikipedia.org/wiki/Bernoulli_trial))? Now my second question relates to the resampling methods. I have several books by Philip Good and I've read plenty on permutation and bootstrapping. I was just going to use the one-sample permutation test given in the [DAAG](http://pbil.univ-lyon1.fr/library/DAAG/html/onet.permutation.html) package: ``` >onet.permutation(binom.samp1-5) 0.114 ``` And the perm.test function in the [exactRankTests](http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/exactRankTests/html/perm.test.html) package gives this: ``` >perm.test(binom.samp1, mu=5, alternative="g", exact=TRUE) T = 42, p-value = 0.05113 ``` I have the feeling that what I want to do is conduct a one-sample permutation binom.test. The only way I can see it working is if I take a subset of the 30 subjects and calculate the binom.test and then repeat it for a large number of N. Does this sound like a reasonable idea? 
Finally, I did repeat this experiment with the same equipment (the 5 sided die) but a larger sample size (50 people) and I got exactly what I expected. My understanding is that the two studies are like a [Galton box](http://en.wikipedia.org/wiki/Bean_machine) that hasn't filled up yet. The 30 n experiment has a bit of a skew, but had it been run for longer it would have filled up to the binomial. Is this all gibberish? ``` >binom.test(231, 1250, 1/5, alternative="g") number of successes = 231, number of trials = 1250, p-value = 0.917 >t.test(binom.samp2-5) t = -1.2249, df = 49, p-value = 0.2265 >z.test(binom.samp2, mu=5, sigma.x=2) z = -1.3435, p-value = 0.1791 >onet.permutation(binom.samp2-5) 0.237 >perm.test(binom.samp2, mu=5, alternative="g", exact=TRUE) T = 35, p-value = 0.8991 ```
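Regarding the idea of repeating `binom.test` on resampled subsets: a simpler Monte Carlo analogue is to simulate the total hit count under the binomial null and compare it with the observed total. This is a parametric simulation rather than a permutation test, and the code below is only a sketch of that idea:
```
# Monte Carlo version of the exact test for the first study (174 hits in 750 trials)
set.seed(4)
obs  <- 174
sims <- rbinom(100000, size = 750, prob = 1/5)   # totals under H0: p = 1/5
mean(sims >= obs)                                # one-sided p, close to the exact 0.017 above
```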
Resampling, binomial, z- and t-test: help with real data
CC BY-SA 2.5
null
2010-11-03T15:07:30.070
2010-11-10T07:39:23.930
2010-11-10T07:39:23.930
930
1614
[ "r", "hypothesis-testing" ]
4176
2
null
4174
4
null
There are a few ways of approaching this problem. The way I would tackle it is as follows. The data you have come from a [geometric](http://en.wikipedia.org/wiki/Geometric_distribution) distribution; that is, the number of [Bernoulli trials](http://en.wikipedia.org/wiki/Bernoulli_distribution) before a failure. The geometric distribution has one parameter p, which is the probability of failure at each point. For your data set, we estimate p as follows: \begin{equation} \hat p^{-1} = \frac{100 + 22 + 36 + 44 + 89 + 24 + 74}{7} = 55.57 \end{equation} So $\hat p = 1/55.57 = 0.018$. From the CDF, the probability of having a run of 223 iterations and observing a failure is: \begin{equation} 1-(1-\hat p)^{223} = 0.983 \end{equation} So the probability of running 223 iterations and not having a failure is \begin{equation} 1- 0.983 = 0.017 \end{equation} So it seems likely (but not overwhelmingly so) that you have fixed the problem. If you have a run of about 300 iterations then the probability goes down to 0.004. Some notes - A Bernoulli trial is just tossing a coin, i.e. there are only two outcomes. - The geometric distribution is usually phrased in terms of successes (rather than failures). For you, a "success" is when the machine breaks!
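The same arithmetic in base R, as a quick check of the numbers above (`pgeom(222, p)` equals $1-(1-p)^{223}$ under R's convention of counting iterations before the first failure):
```
fails <- c(100, 22, 36, 44, 89, 24, 74)
p.hat <- 1 / mean(fails)        # estimated per-iteration failure probability, ~0.018

1 - (1 - p.hat)^223             # P(at least one failure within 223 iterations), ~0.983
(1 - p.hat)^223                 # P(no failure in 223 iterations), ~0.017
pgeom(222, prob = p.hat)        # same ~0.983 via the geometric CDF
```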
null
CC BY-SA 2.5
null
2010-11-03T15:16:19.827
2010-11-03T15:16:19.827
null
null
8
null
4177
2
null
4165
9
null
You need to be careful how you ask this question, since you could substitute almost anything in place of stochastic processes and it would still be potentially useful. For example, a course in biology could help with biological statistical consultancy, since you would know more biology! I presume that you have a choice of modules that you can take, and you need to pick $n$ of them. The real question is "which modules should I pick?" (that question probably isn't appropriate for this site!) To answer your question: you are still very early in your career, and at this moment in time you should try to get a wide selection of courses under your belt. Furthermore, if you are planning a career in academia, then some more mathematical courses, like stochastic processes, would be useful.
null
CC BY-SA 2.5
null
2010-11-03T15:23:45.933
2010-11-03T15:23:45.933
null
null
8
null
4178
2
null
4065
4
null
Some time has passed and I think I might have a solution at hand. I will describe my approach briefly to give you the general idea. The code should be enough to figure out the details. I would like to attach all the code here, but it is a lot and Stack Exchange does not make that easy. I am of course happy to answer any comments, and I appreciate any criticism. The code can be found below. The strategy: - Approximate a smooth ROC curve by using the logistic function on the interval [0,6]. - By adding a parameter k, one can influence the shape of the curve to fit the desired model quality, measured by AUC (Area Under Curve). The resulting function is $f_k(x)=\frac{1}{1+\exp(-kx)}$. If k -> 0, the AUC approaches 0.5 (no discrimination); if k -> Inf, the AUC approaches 1 (an optimal model). As a practical range, k should be in the interval [0.0001,100]. By some basic calculus, one can create a function to map k to AUC and vice versa. - Now, given a ROC curve that matches the desired AUC, determine a score by sampling from [0,1] uniformly. This represents the fpr (false-positive rate) on the ROC curve. For simplicity, the score is then calculated as 1-fpr. - The label is now determined by sampling from a Bernoulli distribution with p calculated using the slope of the ROC curve at this fpr and the desired overall precision of the scores. In detail: weight(label="1") := slope(fpr) multiplied by overallPrecision, weight(label="0") := 1 multiplied by (1-overallPrecision). Normalize the weights so that they sum up to 1 to determine p and 1-p. Here is an example ROC curve for AUC = 0.6 and overall precision = 0.1 (also in the code below) ![alt text](https://i.stack.imgur.com/ObKTU.png) ![alt text](https://i.stack.imgur.com/xRUf2.png) Notes: - The resulting AUC is not exactly the same as the input AUC; in fact, there is a small error (around 0.02). This error originates from the way the label of a score is determined. An improvement could be to add a parameter to control the size of the error. - The score is set as 1-fpr. This is arbitrary, since the ROC curve does not care what the scores look like as long as they can be sorted. code: ``` # This function creates a set of random scores together with a binary label # n = sampleSize # basePrecision = ratio of positives in the sample (also called overall Precision on stats.stackexchange) # auc = Area Under Curve i.e. the quality of the simulated model. Must be in [0.5,1]. 
# binaryModelScores <- function(n,basePrecision=0.1,auc=0.6){ # determine parameter of logistic function k <- calculateK(auc) res <- data.frame("score"=rep(-1,n),"label"=rep(-1,n)) randUniform = runif(n,0,1) runIndex <- 1 for(fpRate in randUniform){ tpRate <- roc(fpRate,k) # slope slope <- derivRoc(fpRate,k) labSampleWeights <- c((1-basePrecision)*1,basePrecision*slope) labSampleWeights <- labSampleWeights/sum(labSampleWeights) res[runIndex,1] <- 1-fpRate # score res[runIndex,2] <- sample(c(0,1),1,prob=labSampleWeights) # label runIndex<-runIndex+1 } res } # min-max-normalization of x (fpr): [0,6] -> [0,1] transformX <- function(x){ (x-0)/(6-0) * (1-0)+0 } # inverse min-max-normalization of x (fpr): [0,1] -> [0,6] invTransformX <- function(invx){ (invx-0)/(1-0) *(6-0) + 0 } # min-max-normalization of y (tpr): [0.5,logistic(6,k)] -> [0,1] transformY <- function(y,k){ (y-0.5)/(logistic(6,k)-0.5)*(1-0)+0 } # logistic function logistic <- function(x,k){ 1/(1+exp(-k*x)) } # integral of logistic function intLogistic <- function(x,k){ 1/k*log(1+exp(k*x)) } # derivative of logistic function derivLogistic <- function(x,k){ numerator <- k*exp(-k*x) denominator <- (1+exp(-k*x))^2 numerator/denominator } # roc-function, mapping fpr to tpr roc <- function(x,k){ transformY(logistic(invTransformX(x),k),k) } # derivative of the roc-function derivRoc <- function(x,k){ scalFactor <- 6 / (logistic(6,k)-0.5) derivLogistic(invTransformX(x),k) * scalFactor } # calculate the AUC for a given k calculateAUC <- function(k){ ((intLogistic(6,k)-intLogistic(0,k))-(0.5*6))/((logistic(6,k)-0.5)*6) } # calculate k for a given auc calculateK <- function(auc){ f <- function(k){ return(calculateAUC(k)-auc) } if(f(0.0001) > 0){ return(0.0001) }else{ return(uniroot(f,c(0.0001,100))$root) } } # Example require(ROCR) x <- seq(0,1,by=0.01) k <- calculateK(0.6) plot(x,roc(x,k),type="l",xlab="fpr",ylab="tpr",main=paste("ROC-Curve for AUC=",0.6," <=> k=",k)) dat <- binaryModelScores(1000,basePrecision=0.1,auc=0.6) pred <- prediction(dat$score,as.factor(dat$label)) performance(pred,measure="auc")@y.values[[1]] perf <- performance(pred, measure = "tpr", x.measure = "fpr") plot(perf,main="approximated ROC-Curve (random generated scores)") ```
null
CC BY-SA 3.0
null
2010-11-03T15:47:20.410
2012-04-06T15:09:14.477
2012-04-06T15:09:14.477
264
264
null
4179
2
null
4174
2
null
I think you could torture your data a bit with bootstrapping. Following csgillespie's calculations with the geometric distribution, I played around a bit and came up with the following R code - any corrections greatly appreciated: ``` fails <- c(100, 22, 36, 44, 89, 24, 74) # Observed data N <- 100000 # Number of replications Ncol <- length(fails) # Number of columns in the data-matrix boot.m <- matrix(sample(fails,N*Ncol,replace=TRUE),ncol=Ncol) # The bootstrap data matrix # it draws a vector of Ncol results from the original data, and replicates this N times p.hat <- function(x){p.hat = 1/(sum(x)/length(x))} # Function to calculate the # probability of failure p.vec <- apply(boot.m,1,p.hat) # calculates the probabilities for each of the # replications quant.p <- quantile(p.vec,probs=0.01) # calculates the 1%-quantile of the probs. hist(p.vec) # draws a histogram of the probabilities abline(v=quant.p,col="red") # adds a line where quant.p is no.fail <- 223 # Repetitions without a fail after the repair (prob.fail <- 1 - pgeom(no.fail,prob=quant.p)) # Prob of no fail after 223 reps with # failure prob quant.p ``` The idea was to get a worst-case value for the probability, and then use it to calculate the probability of observing no fail after 223 iterations, given the prior failure probability. The worst case of course being a low failure probability to begin with, which would raise the likelihood of observing no failure after 223 iterations without fixing the problem. The result was 6.37% - as I understand it, you would have had a 6% probability of not observing a failure after 223 trials if the problem still existed. Of course, you could generate samples of trials and calculate the probability from that: ``` boot.fails <- rbinom(N,size=no.fail, prob=quant.p) # repeats draws with success-rate # quant.p N times. mean(boot.fails==0) # Ratio of no successes ``` with the result of 6.51%.
null
CC BY-SA 2.5
null
2010-11-03T15:49:57.703
2010-11-03T19:18:57.650
2010-11-03T19:18:57.650
1766
1766
null
4180
2
null
4174
7
null
This question asks for a [prediction limit](http://en.wikipedia.org/wiki/Prediction_interval). This tests whether a future statistic is "consistent" with previous data. (In this case, the future statistic is the post-fix value of 223.) It accounts for a chance mechanism or uncertainty in three ways: (1) the data themselves can vary by chance; (2) because of this, any estimates made from the data are uncertain; (3) the future statistic can also vary by chance. Estimating a probability distribution from the data handles (1). But if you simply compare the future value to predictions from that distribution you are ignoring (2) and (3). This will exaggerate the significance of any difference that you note. This is why it can be important to use a prediction limit method rather than some ad hoc method. Failure times are often taken to be [exponentially distributed](http://en.wikipedia.org/wiki/Exponential_distribution) (which is essentially a continuous version of a geometric distribution). The exponential is a special case of the [Gamma distribution](http://en.wikipedia.org/wiki/Gamma_distribution) with "shape parameter" 1. Approximate prediction limit methods for gamma distributions have been worked out, as published by Krishnamoorthy, Mathew, and Mukherjee in a [2008 Technometrics article](http://www.ucs.louisiana.edu/~kxk4695/GammaR2.pdf). The calculations are relatively simple. I won't discuss them here because there are more important issues to attend to first. Before applying any parametric procedure you should check that the data at least approximately conform to the procedure's assumptions. In this case we can check whether the data look exponential (or geometric) by making an [exponential probability plot](http://www.itl.nist.gov/div898/handbook/eda/section3/probplot.htm). This procedure matches the sorted data values $k_1, k_2, \ldots, k_7$ = $22, 24, 36, 44, 74, 89, 100$ to percentage points of (any) exponential distribution, which can be computed as the negative logarithms of $1 - (1 - 1/2)/7, 1 - (2 - 1/2)/7, \ldots, 1 - (7 - 1/2)/7$. When I do that the plot looks decidedly curved, suggesting that these data are not drawn from an exponential (or geometric) distribution. With either of those distributions you should see a cluster of shorter failure times and a straggling tail of longer failure times. Here, the initial clustering is apparent at $22, 24, 36, 44$, but after a relatively long gap from $44$ to $74$ there is another cluster at $74, 89, 100$. This should cause us to mistrust the results of our parametric models. One approach in this situation is to use a [nonparametric prediction limit](http://en.wikipedia.org/wiki/Censoring_%28statistics%29). That's a dead simple procedure in this case: if the post-fix value is the largest of all the values, that should be evidence that the fix actually lengthened the failure times. If all eight values (the seven pre-fix data and the one post-fix value) come from the same distribution and are independent, there is only a $1/8$ chance that the eighth value will be the largest. Therefore, we can say with $1 - 1/8 = 87.5$% confidence that the fix has improved the failure times. This procedure also correctly handles the [censoring](http://en.wikipedia.org/wiki/Censoring_%28statistics%29) in the last value, which really records a failure time of some unknown value greater than 223. 
(If a parametric prediction limit happens to exceed 223--and I suspect [based on experience and on the result of @Owe Jessen's bootstrap] it would be close if we were to calculate it with 95% confidence--we would determine that the number 223 is not inconsistent with the other data, but that would leave unanswered the question concerning the true time to failure, for which 223 is only an underestimate.) Based on @csgillespie's calculations, which--as I argued above--likely overestimate the confidence as $98.3$%, we nevertheless have found a window in which the actual confidence is likely to lie: it's at least $87.5$% and somewhat less than $98.3$% (assuming we have any faith in the geometric distribution model). I will conclude by sharing my greatest concern: the question as stated could easily be misinterpreted as an appeal to use statistics to make an impression or sanctify a conclusion, rather than provide genuinely useful information about uncertainty. If there are additional reasons to suppose that the fix has worked, then the best course is to invoke them and don't bother with statistics. Make the case on its technical merits. If, on the other hand, there is little assurance that the fix was effective--we just don't know for sure--and the objective here is to decide whether the data warrant proceeding as if it did work, then a prudent decision maker will likely prefer the conservative confidence level afforded by the non-parametric procedure. Edit For (hypothetical) data {22, 24, 36, 44, 15, 20, 23} the exponential probability plot is not terrifically non-linear: ![alt text](https://i.stack.imgur.com/qJA2L.png) (If this looks non-linear to you, generate probability plots for a few hundred realizations of seven draws from an Exponential[25] distribution to see how much they will wiggle by chance alone.) Therefore with this modified dataset you can feel more comfortable using the equations in [Krishnamoorthy et al.](http://www.ucs.louisiana.edu/~kxk4695/GammaR2.pdf) (op. cit.) to compute a prediction limit. However, the harmonic mean of 25.08 and relatively small SD (around 10) indicate the prediction limit for any typical confidence level (e.g., 95% or 99%) will be much less than 223. The principle in play here is that one uses statistics for insight and to make difficult decisions. Statistical procedures are of little (additional) help when the results are obvious.
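A sketch of the exponential probability plot described above, in base R; the plotting positions are the negative logarithms of $1-(i-1/2)/7$, as stated:
```
# Exponential probability plot for the seven observed failure counts
fails <- sort(c(100, 22, 36, 44, 89, 24, 74))
n     <- length(fails)
theor <- -log(1 - (seq_len(n) - 0.5) / n)   # unit-rate exponential quantiles

plot(theor, fails,
     xlab = "Exponential quantile", ylab = "Iterations to failure",
     main = "Exponential probability plot")
abline(lm(fails ~ theor))   # rough reference line; marked curvature suggests a poor fit
```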
null
CC BY-SA 2.5
null
2010-11-03T16:37:07.917
2010-11-04T04:13:33.710
2010-11-04T04:13:33.710
919
919
null
4181
2
null
4138
11
null
The question reads to me like the OP was asking when $U = (X,Y,Z)^{\mathrm{T}}$ are jointly normal then what is the probability $P(X \geq Y \mbox{ and } X \geq Z)$? For that question we could look at the joint distribution of $AU$ where $A$ looks like $$ A=\left[ \begin{array}{ccc} 1 & -1 & 0 \newline 1 & 0 & -1 \end{array}\right] $$ Of course, $AU$ is also jointly normal with mean $A\mu$ and variance-covariance $A\Sigma A^{\mathrm{T}}$, and the desired probability is $P(AU > \mathbf{0}_{n-1})$. We could get this in R with something like ``` set.seed(1) Mu <- c(1,2,3) library(MCMCpack) S <- rwish(3, diag(3)) # get var-cov matrix A <- matrix(c(1,-1,0, 1,0,-1), nrow = 2, byrow = TRUE) newMu <- as.vector(A %*% Mu) newS <- A %*% S %*% t(A) library(mvtnorm) pmvnorm(lower=c(0,0), mean = newMu, sigma = newS) ``` which is about 0.1446487 on my system. If a person knew something about the matrix $\Sigma$ then (s)he might even be able to write something down that looks like a formula (I haven't tried, though).
null
CC BY-SA 2.5
null
2010-11-03T16:51:13.893
2010-11-03T19:22:08.373
2010-11-03T19:22:08.373
null
null
null
4182
2
null
4174
1
null
I faced this problem myself and decided to try Fisher's exact test. This has the advantage that the arithmetic boils down to something you can do with JavaScript. I put this on a [web page](https://web.archive.org/web/20120307183001/http://www.mcdowella.demon.co.uk/FlakyPrograms.html) - this should work either from there or if you download it to your computer (which you are welcome to do). I think you have a total of 382 successes and 7 failures in the old version, and 223 successes and 0 failures in the new one, and that you could get this at random with probability about 4% even if the new version was no better. I suggest that you run it a bit more. You can play about with the web page to see how the probability changes if you survive longer - I would go for something over 1000 - in fact I'd try hard to turn it into something I could run automatically and then let it run overnight to really blitz the problem.
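The same comparison can be reproduced in R with `fisher.test` on the 2x2 table of failing vs. non-failing iterations (treating iterations as independent trials, as this answer does); the layout below is my reading of the counts in the answer:
```
# Failures vs. non-failing iterations, old driver vs. new driver
tab <- matrix(c(7, 382,     # old: 7 failures in 389 iterations
                0, 223),    # new: 0 failures in 223 iterations
              nrow = 2, byrow = TRUE,
              dimnames = list(c("old", "new"), c("fail", "no fail")))

fisher.test(tab, alternative = "greater")   # one-sided p of roughly 0.04
```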
null
CC BY-SA 4.0
null
2010-11-03T18:48:02.257
2023-01-23T16:12:16.213
2023-01-23T16:12:16.213
362671
1789
null
4183
2
null
539
39
null
I will assume that a "categorical" variable actually stands for an ordinal variable; otherwise it doesn't make much sense to treat it as a continuous one, unless it's a binary variable (coded 0/1) as pointed by @Rob. Then, I would say that the problem is not that much the way we treat the variable, although many models for categorical data analysis have been developed so far--see e.g., [The analysis of ordered categorical data: An overview and a survey of recent developments](https://web.archive.org/web/20120907071653/http://petra.euitio.uniovi.es/%7Ei1770184/Archivos/t141/Test_agresti.pdf) from Liu and Agresti--, than the underlying measurement scale we assume. My response will focus on this second point, although I will first briefly discuss the assignment of numerical scores to variable categories or levels. By using a simple numerical recoding of an ordinal variable, you are assuming that the variable has interval properties (in the sense of the classification given by Stevens, 1946). From a measurement theory perspective (in psychology), this may often be a too strong assumption, but for basic study (i.e. where a single item is used to express one's opinion about a daily activity with clear wording) any monotone scores should give comparable results. Cochran (1954) already pointed that > any set of scores gives a valid test, provided that they are constructed without consulting the results of the experiment. If the set of scores is poor, in that it badly distorts a numerical scale that really does underlie the ordered classification, the test will not be sensitive. The scores should therefore embody the best insight available about the way in which the classification was constructed and used. (p. 436) (Many thanks to @whuber for reminding me about this throughout one of his comments, which led me to re-read Agresti's book, from which this citation comes.) Actually, several tests treat implicitly such variables as interval scales: for example, the $M^2$ statistic for testing a linear trend (as an alternative to simple independence) is based on a correlational approach ($M^2=(n-1)r^2$, Agresti, 2002, p. 87). Well, you can also decide to recode your variable on an irregular range, or aggregate some of its levels, but in this case strong imbalance between recoded categories may distort statistical tests, e.g. the aforementioned trend test. A nice alternative for assigning distance between categories was already proposed by @Jeromy, namely optimal scaling. Now, let's discuss the second point I made, that of the underlying measurement model. I'm always hesitating about adding the "psychometrics" tag when I see this kind of question, because the construction and analysis of measurement scales come under Psychometric Theory (Nunnally and Bernstein, 1994, for a neat overview). I will not dwell on all the models that are actually headed under the [Item Response Theory](https://en.wikipedia.org/wiki/Item_response_theory), and I kindly refer the interested reader to I. Partchev's tutorial, [A visual guide to item response theory](http://www.metheval.uni-jena.de/irt/VisualIRT.pdf), for a gentle introduction to IRT, and to references (5-8) listed at the end for possible IRT taxonomies. Very briefly, the idea is that rather than assigning arbitrary distances between variable categories, you assume a latent scale and estimate their location on that continuum, together with individuals' ability or liability. 
A simple example is worth much mathematical notation, so let's consider the following item (coming from the [EORTC QLQ-C30](https://web.archive.org/web/20110811064357/http://groups.eortc.be/qol/questionnaires_qlqc30.htm) health-related quality of life questionnaire): > Did you worry? which is coded on a four-point scale, ranging from "Not at all" to "Very much". Raw scores are computed by assigning a score of 1 to 4. Scores on items belonging to the same scale can then be added together to yield a so-called scale score, which denotes one's rank on the underlying construct (here, a mental health component). Such summated scale scores are very practical because of their ease of scoring (for the practitioner or nurse), but they are nothing more than a discrete (ordered) scale. We can also consider that the probability of endorsing a given response category obeys some kind of a logistic model, as described in I. Partchev's tutorial, referred to above. Basically, the idea is that of a kind of threshold model (which leads to an equivalent formulation in terms of the proportional or cumulative odds models) and we model the odds of being in one response category rather than in the preceding one, or the odds of scoring above a certain category, conditional on subjects' location on the latent trait. In addition, we may impose that response categories are equally spaced on the latent scale (this is the Rating Scale model)--which is effectively what we do when assigning regularly spaced numerical scores--or not (this is the Partial Credit model). Clearly, we are not adding very much to Classical Test Theory, where ordinal variables are treated as numerical ones. However, we introduce a probabilistic model, where we assume a continuous scale (with interval properties) and where specific errors of measurement can be accounted for, and we can plug these factorial scores into any regression model. References - S S Stevens. On the theory of scales of measurement. Science, 103: 677-680, 1946. - W G Cochran. Some methods of strengthening the common $\chi^2$ tests. Biometrics, 10: 417-451, 1954. - J Nunnally and I Bernstein. Psychometric Theory. McGraw-Hill, 1994. - Alan Agresti. Categorical Data Analysis. Wiley, 1990. - C R Rao and S Sinharay, editors. Handbook of Statistics, Vol. 26: Psychometrics. Elsevier Science B.V., The Netherlands, 2007. - A Boomsma, M A J van Duijn, and T A B Snijders. Essays on Item Response Theory. Springer, 2001. - D Thissen and L Steinberg. A taxonomy of item response models. Psychometrika, 51(4): 567–577, 1986. - P Mair and R Hatzinger. Extended Rasch Modeling: The eRm Package for the Application of IRT Models in R. Journal of Statistical Software, 20(9), 2007.
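For readers who want to see what the Rating Scale and Partial Credit models above look like in code, here is a rough R sketch using the eRm package cited in the references; the four-category items are simulated in an ad hoc way, purely to make the calls concrete.

```r
library(eRm)
set.seed(1)
n <- 400
theta <- rnorm(n)                    # subjects' locations on the latent trait
# four 4-category items (coded 0..3) driven by the same trait; ad hoc simulation
resp <- sapply(c(-0.5, 0, 0.5, 1), function(shift)
  findInterval(theta + shift + rnorm(n, sd = 0.8), c(-1, 0, 1)))

fit_pcm <- PCM(resp)   # Partial Credit Model: thresholds free for each item
fit_rsm <- RSM(resp)   # Rating Scale Model: common threshold spacing across items
thresholds(fit_pcm)    # item-category thresholds on the latent scale
pp <- person.parameter(fit_pcm)
head(coef(pp))         # person locations, usable as the "factorial scores" mentioned above
```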
null
CC BY-SA 4.0
null
2010-11-03T20:14:59.663
2022-12-14T06:44:23.937
2022-12-14T06:44:23.937
362671
930
null
4184
1
4869
null
2
155
[Cross post from [here](https://math.stackexchange.com/q/8830/2949), figured this community may be more relevant] I am working in the field of machine learning, and I have come across a few papers that show relationships between Gröbner bases and discrete probability. So I come here for help. Can you please explain how Gröbner bases can be used to describe discrete probability? I have looked at Gröbner bases and I understand the general concepts (and used Maple to calculate a few examples). So it is the link to probability that is missing for me.
How can Gröbner bases be used to describe discrete probability?
CC BY-SA 2.5
null
2010-11-04T00:01:32.990
2019-01-19T22:58:30.840
2019-01-19T22:58:30.840
99274
1793
[ "probability", "maple" ]
4185
1
4186
null
2
4800
[Here](http://uk.answers.yahoo.com/question/index?qid=20080702080708AAsRZpt) is a link that describes the formula to find the mode of grouped data. [Here](http://www.tutors4you.com/modegraphically.htm) is a link that gives a graphical method to finding the mode of grouped data. Question: Can someone please explain how the formula corresponds with the graphical method? The formula is an interpolation but I am not able to see how it expresses the idea of the graphical method. If it helps, I understand the formula for calculating the mean of grouped data. Thank you.
The formula for finding the mode of grouped data
CC BY-SA 2.5
null
2010-11-04T00:34:49.023
2010-11-04T15:30:42.843
2010-11-04T15:30:42.843
919
1636
[ "descriptive-statistics" ]
4186
2
null
4185
2
null
I have not bothered to check the math at the link you gave, but from the figure the mode is the x-coordinate of the intersection of the two diagonal lines. The end points of both lines are known, so all you need to do is find the intersection of those two lines to get the mode.
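To connect the two links computationally, here is a small R version of the interpolation formula, Mode = L + (f1 - f0) / (2 f1 - f0 - f2) * h, assuming that is the formula meant at the first link; the class boundaries and frequencies below are made up. It corresponds to the graphical method because the intersection of the two diagonals splits the width of the modal class in the ratio (f1 - f0) : (f1 - f2), which is exactly the point the formula computes.

```r
# Mode of grouped data by interpolation within the modal class.
# L: lower boundary of the modal class, h: class width,
# f0, f1, f2: frequencies of the classes before, at, and after the modal class.
grouped_mode <- function(breaks, freq) {
  i  <- which.max(freq)                       # modal class
  f1 <- freq[i]
  f0 <- if (i > 1) freq[i - 1] else 0
  f2 <- if (i < length(freq)) freq[i + 1] else 0
  L  <- breaks[i]
  h  <- breaks[i + 1] - breaks[i]
  L + (f1 - f0) / (2 * f1 - f0 - f2) * h
}

# Made-up example: classes 0-10, 10-20, ..., 40-50
breaks <- seq(0, 50, by = 10)
freq   <- c(5, 12, 20, 9, 4)
grouped_mode(breaks, freq)   # 20 + (20-12)/(40-12-9)*10 = 24.21...
```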
null
CC BY-SA 2.5
null
2010-11-04T00:44:18.200
2010-11-04T00:44:18.200
null
null
null
null
4187
1
null
null
20
19235
Hello, I have two problems that sound like natural candidates for multilevel/mixed models, which I have never used. The simpler one, which I hope to try as an introduction, is as follows: The data looks like many rows of the form `x y innergroup outergroup` where x is a numeric covariate upon which I want to regress y (another numeric variable), each y belongs to an innergroup, and each innergroup is nested in an outergroup (i.e., all the y in a given innergroup belong to the same outergroup). Unfortunately, innergroup has a lot of levels (many thousands), and each level has relatively few observations of y, so I thought this sort of model might be appropriate. My questions are - How do I write this sort of multilevel formula? - Once lmer fits the model, how does one go about predicting from it? I have fit some simpler toy examples, but have not found a predict() function. Most people seem more interested in inference than prediction with this sort of technique. I have several million rows, so the computations might be an issue, but I can always cut it down as appropriate. I won't need to do the second for some time, but I might as well begin thinking about it and playing around with it. I have similar data to before, but without x, and y is now a binomial variable of the form $(n,n-k)$. y also exhibits a lot of overdispersion, even within innergroups. Most of the $n$ are no more than 2 or 3 (or fewer), so to derive estimates of the success rates of each $y_i$ I have been using the beta-binomial shrinkage estimator $(\alpha+k_i)/(\alpha+\beta+n_i)$, where $\alpha$ and $\beta$ are estimated by MLE for each innergroup separately. This has been somewhat adequate, but data sparsity still plagues me, so I would like to use all the data available. From one perspective, this problem is easier since there is no covariate, but from the other the binomial nature makes it more difficult. Does anyone have any high (or low!) level guidance?
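For what it's worth, the per-group beta-binomial MLE and shrinkage estimator described in the second problem can be written in a few lines of base R; this is only a sketch of the estimator in the question on invented data, not of the multilevel alternative being asked about.

```r
set.seed(3)
# Toy data for one innergroup: n_i trials, k_i successes, true rates ~ Beta(2, 5)
n <- sample(1:3, 50, replace = TRUE)
k <- rbinom(50, n, rbeta(50, 2, 5))

# Negative beta-binomial log-likelihood for this group,
# parametrized on the log scale to keep alpha and beta positive
negll <- function(par) {
  a <- exp(par[1]); b <- exp(par[2])
  -sum(lchoose(n, k) + lbeta(k + a, n - k + b) - lbeta(a, b))
}
opt   <- optim(c(0, 0), negll)       # with such small n the likelihood is rather flat,
a_hat <- exp(opt$par[1])             # which is exactly the sparsity problem described above
b_hat <- exp(opt$par[2])

# Shrinkage estimates of the individual success rates
shrunk <- (a_hat + k) / (a_hat + b_hat + n)
head(cbind(k, n, raw = k / n, shrunk))
```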
Using lmer for prediction
CC BY-SA 2.5
null
2010-11-04T03:08:14.567
2022-05-15T12:13:42.687
2010-11-04T11:47:23.530
930
1777
[ "r", "mixed-model", "maximum-likelihood", "generalized-linear-model" ]
4191
1
4194
null
17
8605
MCMC algorithms like Metropolis-Hastings and Gibbs sampling are ways of sampling from the joint posterior distributions. I think I understand and can implement Metropolis-Hastings pretty easily--you simply choose starting points somehow, and 'walk the parameter space' randomly, guided by the posterior density and proposal density. Gibbs sampling seems very similar but more efficient since it updates only one parameter at a time, while holding the others constant, effectively walking the space in an orthogonal fashion. In order to do this, you need the full conditional of each parameter in analytical form*. But where do these full conditionals come from? $$ P(x_1 | x_2,\ \ldots,\ x_n) = \frac{P(x_1,\ \ldots,\ x_n)}{P(x_2,\ \ldots,\ x_n)} $$ To get the denominator you need to marginalize the joint over $x_1$. That seems like a whole lot of work to do analytically if there are many parameters, and might not be tractable if the joint distribution isn't very 'nice'. I realize that if you use conjugacy throughout the model, the full conditionals may be easy, but there's got to be a better way for more general situations. All the examples of Gibbs sampling I've seen online use toy examples (like sampling from a multivariate normal, where the conditionals are just normals themselves), and seem to dodge this issue. * Or do you need the full conditionals in analytical form at all? How do programs like WinBUGS do it?
Where do the full conditionals come from in Gibbs sampling?
CC BY-SA 3.0
null
2010-11-04T04:35:38.390
2013-11-24T16:47:46.673
2013-11-24T16:47:46.673
7290
1795
[ "bayesian", "markov-chain-montecarlo", "gibbs" ]
4193
1
null
null
14
911
Most asymptotic results in statistics prove that as $n \rightarrow \infty$ an estimator (such as the MLE) converges to a normal distribution based on a second-order taylor expansion of the likelihood function. I believe there's a similar result in Bayesian literature, the "Bayesian Central Limit Theorem", which shows that the posterior converges asymptotically to a normal as $n \rightarrow \infty$ My question is - does the distribution converge to something "before" it becomes normal, based on the third term in the Taylor series? Or is this not possible to do in general?
Do third order asymptotics exist?
CC BY-SA 2.5
null
2010-11-04T05:49:02.013
2023-04-13T12:40:24.827
2017-06-06T01:01:41.487
11887
1760
[ "mathematical-statistics", "asymptotics", "saddlepoint-approximation" ]
4194
2
null
4191
7
null
Yes, you are right, the conditional distribution needs to be found analytically, but I think there are lots of examples where the full conditional distribution is easy to find, and has a far simpler form than the joint distribution. The intuition for this is as follows: in most "realistic" joint distributions $P(X_1,\dots,X_n)$, most of the $X_i$'s are generally conditionally independent of most of the other random variables. That is to say, some of the variables have local interactions, say $X_i$ depends on $X_{i-1}$ and $X_{i+1}$, but doesn't interact with everything, hence the conditional distributions should simplify significantly as $Pr(X_i \mid X_1, \dots, X_{i-1}, X_{i+1}, \dots, X_n) = Pr(X_i \mid X_{i-1}, X_{i+1})$.
null
CC BY-SA 2.5
null
2010-11-04T05:57:44.330
2010-11-04T05:57:44.330
null
null
1760
null
4196
1
null
null
1
341
I have a time series $X(t)$. Each $X(t)$ has three possible outcomes: A, B or C. I am interested in the ratio of A, B and C to the total. Assuming $N$ is the number of data points I have gathered for $X(t)$, how can I compute the confidence levels for A/N, B/N and C/N when the $X(t)$ are "intuitively" not independent? For example, $X(t)$ is an indication of whether a car is: moving (speed>0), stopped (speed=0) or its motor is off. The data I gathered is a time slice pertaining to a car. To me, those categories are not independent because when the car is moving at time $t$, it is very likely to still be moving at time $t+1$. [Am I correct?]
Confidence interval for ratio in timeseries
CC BY-SA 4.0
null
2010-11-04T06:34:24.897
2022-12-22T14:45:34.203
2022-12-22T14:45:34.203
56940
1784
[ "time-series", "confidence-interval", "non-independent" ]
4197
2
null
4193
3
null
Here is an attempt to answer your insightful question. I have seen the inclusion of the 3rd term of the Taylor series to increase the speed of convergence of the series to the true distribution. However, I haven't seen (in my limited experience) the usage of third and higher moments. As pointed out by John D. Cook in his blogs ([here](http://www.johndcook.com/blog/2010/09/20/skewness-andkurtosis/) and [here](http://www.johndcook.com/blog/2008/09/30/quantifying-the-error-in-the-central-limit-theorem/)), there hasn't been much work done in this direction, apart from the [Berry-Esseen theorem](http://en.wikipedia.org/wiki/Berry%E2%80%93Ess%C3%A9en_theorem). My guess would be (from the observation in the blog that the approximation error is of order $n^{-1/2}$), as the asymptotic normality of the MLE is guaranteed at a rate of convergence of $n^{1/2}$ ($n$ being the sample size), considering higher moments won't improve on the normality result. Therefore, I guess, the answer to your question should be no. The asymptotic distribution converges to a normal distribution (by the CLT, under the regularity conditions of Lindeberg's CLT). However, using higher-order terms may increase the rate of convergence to the asymptotic distribution.
null
CC BY-SA 2.5
null
2010-11-04T06:37:44.763
2010-11-04T06:37:44.763
null
null
1307
null
4199
2
null
4193
3
null
Definitely not my area, but I'm pretty sure third- and higher-order asymptotics exist. Is this any help? Robert L. Strawderman. [Higher-Order Asymptotic Approximation: Laplace, Saddlepoint, and Related Methods](http://www.jstor.org/stable/2669788) Journal of the American Statistical Association Vol. 95, No. 452 (Dec., 2000), pp. 1358-1364
null
CC BY-SA 2.5
null
2010-11-04T08:09:42.603
2010-11-04T08:09:42.603
null
null
449
null
4200
1
null
null
3
6834
I have a large number (hundreds to thousands) of noisy time series that represent contemporaneous observations from different subjects. I hypothesise that there exist lead-lag relationships between observations for different subjects (or groups of subjects.) I would like to explore the potential use of such lead-lag relationships for the purposes of predicting the individual series. What methods might I consider for this? edit: To be clear, I am not looking at pairwise relationships. What I am looking for is a method that would look at the mountain of data at hand and attempt to discover (potentially non-linear) lead-lag relationships between arbitrary groups of series and the individual series to be predicted.
Using lead-lag relationships for time series prediction
CC BY-SA 2.5
null
2010-11-04T09:02:43.647
2017-04-22T23:58:07.200
2010-11-08T07:26:18.083
439
439
[ "time-series" ]
4201
2
null
2635
15
null
I believe M. Tibbit's answer refers to the general case of a gamma with unknown shape and scale. If the shape α is known and the sampling distribution for x is gamma(α, β) and the prior distribution on β is gamma(α0, β0), the posterior distribution for β is gamma(α0 + nα, β0 + Σxi). See this [diagram](http://www.johndcook.com/conjugate_prior_diagram.html) and the references at the bottom.
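A small numerical sketch of that update in R (the shape, true rate, and prior values below are made up):

```r
set.seed(1)
alpha <- 2                                   # known shape
beta  <- 3                                   # true rate, to be inferred
x     <- rgamma(50, shape = alpha, rate = beta)

# Prior on the rate beta: gamma(a0, b0)
a0 <- 1; b0 <- 1

# Conjugate update: posterior is gamma(a0 + n*alpha, b0 + sum(x))
a_post <- a0 + length(x) * alpha
b_post <- b0 + sum(x)

c(posterior_mean = a_post / b_post,
  lower95 = qgamma(0.025, a_post, rate = b_post),
  upper95 = qgamma(0.975, a_post, rate = b_post))
```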
null
CC BY-SA 3.0
null
2010-11-04T10:12:31.840
2016-05-28T13:27:54.763
2016-05-28T13:27:54.763
319
319
null
4202
2
null
4187
17
null
Expressing factors relationships using R formulas follows from Wilkinson's notation, where '*' denotes crossing and '/' nesting, but there are some particularities in the way formula for mixed-effects models, or more generally random effects, are handled. For example, two crossed random effects might be represented as `(1|x1)+(1|x2)`. I have interpreted your description as a case of nesting, much like classes are nested in schools (nested in states, etc.), so a basic formula with `lmer` would look like (unless otherwise stated, a `gaussian` family is used by default): ``` y ~ x + (1|A:B) + (1|A) ``` where A and B correspond to your inner and outer factors, respectively. B is nested within A, and both are treated as random factors. In the older [nlme](http://cran.r-project.org/web/packages/nlme/index.html) package, this would correspond to something like `lme(y ~ x, random=~ 1 | A/B)`. If A was to be considered as a fixed factor, the formula should read `y ~ x + A + (1|A:B)`. But it is worth checking more precisely D. Bates' specifications for the [lme4](http://cran.r-project.org/web/packages/lme4/index.html) package, e.g. in his forthcoming textbook, [lme4: Mixed-effects Modeling with R](http://lme4.r-forge.r-project.org/), or the many handouts available on the same webpage. In particular, there is an example for such nesting relations in [Fitting Linear Mixed-Effects Models, the lme4 Package in R](http://www.stat.wisc.edu/~bates/PotsdamGLMM/LMMD.pdf?bcsi_scan_CBA24F92DB3F63E2=0&bcsi_scan_filename=LMMD.pdf). John Maindonald's tutorial also provides a nice overview: [The Anatomy of a Mixed Model Analysis, with R’s lme4 Package](http://www.maths.anu.edu.au/~johnm/r-book/xtras/mlm-ohp.pdf). Finally, section 3 of the R vignette on [lme4 imlementation](http://cran.r-project.org/web/packages/lme4/vignettes/Implementation.pdf) includes an example of the analysis of a nested structure. There is no `predict()` function in [lme4](http://cran.r-project.org/web/packages/lme4/index.html) (this function now exists, see comment below), and you have to compute yourself predicted individual values using the estimated fixed (see `?fixef`) and random (see `?ranef`) effects, but see also this thread on the [lack of predict function in lme4](http://r.789695.n4.nabble.com/no-predict-function-in-lme4-td1679131.html). You can also generate a sample from the posterior distribution using the `mcmcsamp()` function. Sometimes, it might clash, though. See the [sig-me](https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models) mailing list for more updated information.
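As a concrete illustration of the nesting formula and of computing predictions "by hand" from the fixed effects, here is a toy simulation (data and effect sizes invented); in recent lme4 versions, predict(fit, newdata = newdat, re.form = NA) should give the same fixed-effects-only predictions.

```r
library(lme4)
set.seed(42)
# Toy data: factor B nested within factor A
d <- expand.grid(B = 1:4, A = 1:6, rep = 1:10)
d$A <- factor(d$A)
d$B <- factor(paste(d$A, d$B, sep = "."))   # unique labels, so B is nested in A
d$x <- rnorm(nrow(d))
d$y <- 1 + 0.5 * d$x +
       rnorm(nlevels(d$A), sd = 1.0)[d$A] +   # outer-group random intercepts
       rnorm(nlevels(d$B), sd = 0.5)[d$B] +   # inner-group random intercepts
       rnorm(nrow(d), sd = 0.3)

fit <- lmer(y ~ x + (1 | A) + (1 | A:B), data = d)

# "Population-level" predictions from the fixed effects only
newdat <- data.frame(x = c(-1, 0, 1))
X <- model.matrix(~ x, newdat)
drop(X %*% fixef(fit))

# Estimated random effects, if group-specific predictions are needed
str(ranef(fit), max.level = 1)
```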
null
CC BY-SA 3.0
null
2010-11-04T11:28:13.980
2017-09-15T23:46:59.690
2017-09-15T23:46:59.690
28564
930
null
4203
1
4206
null
1
1819
Suppose I have a set of $N$ experimental points of the form \begin{equation} \{x_i, y_i, d_i\}, \end{equation} where $i=1,...,N,$ and $d_i$ are errorbars for $y_i$. To fit the data, I minimize the reduced chi-square \begin{equation} \chi^2(p) = \sum_{i=1}^N \frac{[y_i - f(x_i,p)]^2}{d_i^2}, \end{equation} where $f(x,p)$ is a (generally non-linear) function parametrized by some parameter $p$ (there might be more than one parameter, but it doesn't really matter). My question is: given the optimal parameter $p_0$, i.e. $\chi^2(p)$ is minimal at $p=p_0$, and assuming the $y_i$'s are independent and are Normally distributed, what can be said about the distribution of $f(x, p0)$?
What is the distribution of a chi-square minimizing function?
CC BY-SA 2.5
0
2010-11-04T11:33:05.190
2010-11-04T13:28:31.173
2010-11-04T13:28:31.173
930
1197
[ "distributions", "chi-squared-test", "fitting" ]
4204
2
null
4187
10
null
The [ez](http://cran.r-project.org/package=ez) package contains the ezPredict() function, which obtains predictions from lmer models where prediction is based on the fixed effects only. It's really just a wrapper around the approach detailed in the [glmm wiki](http://glmm.wikidot.com/faq).
null
CC BY-SA 2.5
null
2010-11-04T12:36:04.560
2010-11-04T12:36:04.560
null
null
364
null
4205
2
null
4200
9
null
You can choose from about 40 years of research and countless books, dissertations, monographs etc. Given that your question is not all that focussed yet, maybe an introductory time-series book could help. In a nutshell, the autocorrelation function gives clues to lead/lag relationships that may be present in a single time-series, or between two series. Rob has done a lot of research into sensibly automating the process of identifying how many / which leads/lags to use, so please look at his [forecast](http://cran.r-project.org/package=forecast) package for R and [other research](http://robjhyndman.com/).
null
CC BY-SA 2.5
null
2010-11-04T13:00:44.517
2010-11-04T13:00:44.517
null
null
334
null
4206
2
null
4203
1
null
Another term for your fitting procedure would be weighted [non-linear least squares](http://en.wikipedia.org/wiki/Non-linear_least_squares). The weights are a very minor complication. Fitting non-linear least squares is more tricky than [ordinary least squares](http://en.wikipedia.org/wiki/Ordinary_least_squares), but once the fitting is done the asymptotic ($N \to \infty$) distribution of the estimates is given by the same [large-sample theory](http://en.wikipedia.org/wiki/Linear_least_squares#Large_sample_properties).
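To make the connection explicit: minimizing the $\chi^2$ in the question is the same as weighted least squares with weights $1/d_i^2$, which nls() supports directly. A minimal simulated sketch (the model and all numbers are invented):

```r
set.seed(7)
x <- seq(0, 5, length.out = 40)
d <- runif(40, 0.1, 0.3)                       # known error bars d_i
y <- 2.5 * exp(-0.8 * x) + rnorm(40, sd = d)   # data around f(x, p) = a * exp(-b * x)

# Minimizing sum((y - f(x, p))^2 / d^2) is weighted least squares with w = 1/d^2
fit <- nls(y ~ a * exp(-b * x),
           start = list(a = 2, b = 1),
           weights = 1 / d^2)

summary(fit)$coefficients   # estimates with their large-sample standard errors
fitted(fit)[1:5]            # f(x, p0) at the first few design points
```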
null
CC BY-SA 2.5
null
2010-11-04T13:13:23.710
2010-11-04T13:13:23.710
null
null
449
null
4209
2
null
4193
5
null
It is not possible for a sequence to "converge" to one thing and then to another. The higher-order terms in an asymptotic expansion will go to zero. What they tell you is how close to zero they are for any given value of $n$. For the Central Limit Theorem (as an example) the appropriate expansion is that of the logarithm of the characteristic function: the cumulant generating function (cgf). Standardization of the distributions fixes the zeroth, first, and second terms of the cgf. The remaining terms, whose coefficients are the cumulants, depend on $n$ in an orderly way. The standardization that occurs in the CLT (dividing the sum of $n$ random variables by something proportional to $n^{1/2}$--without which convergence will not occur) causes the $m^\text{th}$ cumulant--which after all depends on $m^\text{th}$ moments--to be divided by $(n^{1/2})^m = n^{m/2}$, but at the same time because we are summing $n$ terms, the net result is that the $m^\text{th}$ order term is proportional to $n/n^{m/2} = n^{-(m-2)/2}$. Thus the third cumulant of the standardized sum is proportional to $1/n^{1/2}$, the fourth cumulant is proportional to $1/n$, and so on. These are the higher-order terms. (For details, see [this paper of Yuval Filmus](http://www.cs.toronto.edu/%7Eyuvalf/CLT.pdf) for example.) In general, a high negative power of $n$ is much smaller than a low negative power. We can always be assured of this by taking a sufficiently large value of $n$. Thus, for really large $n$ we can neglect all negative powers of $n$: they converge to zero. Along the way to convergence, departures from the ultimate limit are measured with increasing accuracy by the additional terms: the $1/n^{1/2}$ term is an initial "correction," or departure from the limiting value; the next $1/n$ term is a smaller, more quickly-vanishing correction added to that, and so on. In brief, the additional terms give you a picture of how quickly the sequence converges to its limit. These additional terms can help us make corrections for finite (usually small) values of $n$. They show up all the time in this regard, such as [Chen's modification of the t-test](https://projecteuclid.org/journals/annals-of-statistics/volume-23/issue-5/Tests-Following-Transformations/10.1214/aos/1176324314.full), which exploits the third-order ($1/n^{1/2}$) term.
null
CC BY-SA 4.0
null
2010-11-04T14:46:20.700
2022-09-21T17:53:16.627
2022-09-21T17:53:16.627
79696
919
null
4210
2
null
4193
7
null
You are searching for the Edgeworth series aren't you? See the Wikipedia article on [same](https://en.wikipedia.org/wiki/Edgeworth_series). (note that Edgeworth died in 1926, should be in most famous statisticians? )
null
CC BY-SA 4.0
null
2010-11-04T15:24:41.853
2023-04-13T12:40:24.827
2023-04-13T12:40:24.827
362671
223
null
4211
1
4213
null
18
3264
I have a data like this: ``` > table(A,B,C) , , C = FALSE B A FALSE TRUE FALSE 177 42 TRUE 6 8 , , C = TRUE B A FALSE TRUE FALSE 5 31 TRUE 4 10 ``` How can I plot this on a single graph, possibly without imposing any hierarchy?
How to visualize 3D contingency matrix?
CC BY-SA 2.5
null
2010-11-04T16:11:48.220
2012-01-10T19:17:41.530
null
null
null
[ "data-visualization", "contingency-tables" ]
4212
2
null
2576
2
null
You can rank ordinal distributions by means of an intuitive dominance criterion: the answers to one question are better than the answers to another when it is more likely than not that a randomly chosen answer to the first will be better than a randomly chosen answer to the second. In more detail: put all the answers to question $X$ into one hat and all the answers to question $Y$ into another hat. Draw one answer from each hat at random. We will compare these answers, which we can do because they are on an ordinal scale. Let's also agree to resolve any ties by flipping a fair coin. Let $p(X,Y)$ be the probability that the answer to $X$ is better than the answer to $Y$. Rank $X$ ahead of $Y$ when $p$ exceeds $1/2$ and rank $X$ behind $Y$ when $p$ is less than $1/2$. If $p$ equals $1/2$, declare a tie between $X$ and $Y$. (By virtue of our tie-resolution procedure, $p(X,Y) + p(Y,X) = 1$, implying the ranking does not depend on the sequence in which we draw the two answers.) --- The calculation is a simple exercise for "just" a programmer (and a fun one if you are interested in efficient calculation, although that's unlikely to matter here). To make this proposal clear, though, I will illustrate it. Suppose all answers are on an integral scale from one to four, with four best. Write the answer distributions in the form $(k_1, k_2, k_3, k_4)$ where $k_3$ counts the number of "3"'s among the answers to a question, for example. For this example suppose $X$ has distribution $(4, 2, 0, 4)$ and $Y$ has distribution $(1, 6, 1, 2)$ (ten answers each). (Stop for a moment to consider which of these distributions ought to be considered "best" and note that they have identical means of 2.4 and identical medians of 2, suggesting this is a difficult comparison to make.) Then: - There is a 4/10 chance of drawing a "4" for $X$. In this case, There is a 2/10 chance of drawing a "4" for $Y$ for a tie; There is an 8/10 chance of drawing less than "4" for $Y$, a win for $X$. This contributes $(4/10)[(2/10)0.5 + 8/10] = 0.36$ to $p(X,Y)$. Continuing similarly, - Drawing a "3" for $X$ is impossible; it contributes $0$ to $p(X,Y)$. - Drawing a "2" for $X$ contributes $(2/10)[(6/10)0.5 + 1/10] = .08$. - Drawing a "1" for $X$ contributes $(4/10)[(1/10)0.5] = 0.02$. Whence $p(X,Y) = 0.36 + 0.00 + 0.08 + 0.02 = 0.46$. Because this value is less than $1/2$, we conclude $X$ should be ranked lower than $Y$. --- This idea is related to that of [Pitman Closeness](http://www.jstor.org/pss/2290088) and to certain non-parametric [slippage tests](http://books.google.com/books?id=u97pzxRjaCQC&pg=PA384&lpg=PA384&dq=slippage+test+distribution+statistics&source=bl&ots=ixuCqitLQ5&sig=xs-rPHYaZhmplWDewk-cgdsnm7E&hl=en&ei=u9zSTMWwD4OBlAevs9DnDg&sa=X&oi=book_result&ct=result&resnum=10&ved=0CDwQ6AEwCQ#v=onepage&q=slippage%20test%20distribution%20statistics&f=false) (which decide whether one distribution has "slipped"--changed values--with respect to other distributions based on random samples of them), such as the [Mann-Whitney (aka Wilcoxon)](http://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U) test.
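The worked example above is easy to script; here is a small R version using the same counts:

```r
# Probability that a random answer to X beats a random answer to Y,
# ties broken by a fair coin; categories are ordered from worst to best.
p_beats <- function(nx, ny) {
  px <- nx / sum(nx); py <- ny / sum(ny)
  i <- seq_along(px); j <- seq_along(py)
  sum(outer(px, py) * (outer(i, j, ">") + 0.5 * outer(i, j, "==")))
}

p_beats(c(4, 2, 0, 4), c(1, 6, 1, 2))   # 0.46, so X ranks behind Y
p_beats(c(1, 6, 1, 2), c(4, 2, 0, 4))   # 0.54 = 1 - 0.46, as it should be
```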
null
CC BY-SA 2.5
null
2010-11-04T16:30:27.683
2010-11-04T20:10:51.363
2010-11-04T20:10:51.363
919
919
null
4213
2
null
4211
15
null
I would try some kind of 3D heatmap, [mosaic plot](http://www.datavis.ca/papers/drew/) or a [sieve plot](http://www.improving-visualisation.org/visuals/tag=:sieve+plot/mode=1/sort=alpha) (available in the [vcd](http://cran.r-project.org/web/packages/vcd/index.html) package). Isn't the base `mosaicplot()` function working with three-way table? (at least `mosaic3d()` in the [vcdExtra](http://cran.r-project.org/web/packages/vcdExtra/index.html) package should work, see e.g. [http://datavis.ca/R/](http://datavis.ca/R/)) Here's an example (including a conditional plot): ``` A <- sample(c(T,F), 100, replace=T) B <- sample(c(T,F), 100, replace=T) C <- sample(c(T,F), 100, replace=T) tab <- table(A,B,C) library(vcd) sieve(tab, shade=TRUE) cotabplot(tab) library(vcdExtra) mosaic3d(tab, type="expected", box=TRUE) ``` ![alt text](https://i.stack.imgur.com/MCbni.png) ![alt text](https://i.stack.imgur.com/IwMp8.png) ![alt text](https://i.stack.imgur.com/TtUpt.png) Actually, the rendering of `mosaic3d()` rely on the [rgl](http://cran.r-project.org/web/packages/rgl/index.html) package, so it is hard to give a pretty picture of the result.
null
CC BY-SA 2.5
null
2010-11-04T16:33:27.503
2010-11-04T17:34:35.440
2010-11-04T17:34:35.440
930
930
null
4214
1
null
null
7
388
On both a practical and philosophical level, how should you choose the scope when performing multiple comparisons? When a study performs 10 tests to check the hypothesis that 10 explanatory variables are predictive for "something" (on the same dataset), the tests should obviously be corrected. What if there were ten studies, each testing for one different explanatory variable - when doing a meta-analysis, should their P values be corrected? (Will knowing whether these studies were done on the same dataset, or on different datasets, make a difference?) But then, what if we add to the mix another 100 researchers, all of them just not very good at their jobs (all testing "junk" variables) - automatically that will ruin our chances of finding anything after correction. But is that a reflection of something already happening in real-life science? Now, let's assume the same researcher is doing a hundred studies, in different fields, asking one question in each of them. Should he have corrected his P values from these 100 studies? What if the questions are different but on the same study/dataset? What criteria would you offer for choosing the scope of a multiple comparisons correction? p.s: I understand my question relates to [this one](https://stats.stackexchange.com/questions/1458/why-is-multiple-comparison-a-problem), but since there are new people on the site, and since there is somewhat of a difference, I allowed myself to ask the above question. p.p.s: I don't think this question has a "right answer", thus I chose to make it a community wiki, but for some reason I can't find how to do it on the screen today...
Choosing the scope when performing multiple comparisons?
CC BY-SA 2.5
null
2010-11-04T16:49:31.543
2010-11-04T19:10:54.393
2017-04-13T12:44:20.943
-1
253
[ "multiple-comparisons", "meta-analysis" ]
4215
2
null
2169
4
null
As you know, from $$Var[q] = Var[\sum_i w_i x_{(i)}] = \sum_i\sum_j w_i w_j Cov[x_{(i)}, x_{(j)}]$$ it follows you need only compute the variances and covariances of the order statistics. To do this, diagonalize the covariance matrix! Although this cannot be done in general, M. A. Stephens has [obtained (heuristically) an asymptotic diagonalization](http://biomet.oxfordjournals.org/content/62/1/23.full.pdf). (The eigenvectors are Hermite polynomials.) In the spirit of PCA, limiting your calculations to the largest few eigenvalues can greatly reduce the computational effort and might produce a reasonable approximation, depending on the structure of the $w_i$. In fact, if you adjust that weight vector to be a linear combination of a small number of the eigenvectors, that will assure a simple calculation of $Var[q]$ and perhaps not cost you much in terms of the accuracy of $q$ itself. At worst, a preliminary eigendecomposition of $\vec{w}$ will then require only $O(N)$ calculations of the variance rather than $O(N^2)$.
null
CC BY-SA 2.5
null
2010-11-04T16:51:47.097
2010-11-04T16:51:47.097
null
null
919
null
4217
2
null
4214
2
null
Think of the following two experiments: Experiment A: Throw a fair coin 10 times to assess Prob(Heads). Experiment B: Throw a fair die 5 times to assess Prob(Face showing 1). To take the coin toss [example](http://en.wikipedia.org/wiki/Multiple_comparisons#Example_.E2.80.94_Flipping_coins) from the wiki: We may wish to declare a coin as biased if we observe more than 9 heads out of 10 tosses. Thus, if I were to repeat experiment A 100 times then there is a 34% chance (see the wiki for the calculations) that we would identify a coin as biased when it is not, thus increasing our type I error probability from 0.05 to 0.34. Therefore, we need to control for multiple comparisons in this context. However, note that our trials in experiment A have no influence on our results as far as experiment B is concerned as that is a completely different data generating process. The above suggests that we have to control for multiple comparisons for the two experiments separately instead of collectively. In other words, controlling for multiple comparisons should be done whenever the comparisons involve the same data generating process. Edit Strictly speaking the above example of coin vs die is not a good example as that would be analogous to experiments that investigate two very different questions (e.g., estimate whether smoking causes cancer and estimate if jumping red lights leads to an accident). In these contexts, controlling for multiple comparisons collectively for the two experiments is meaningless. On further thinking, it is not clear to me if the data generating process really has a special role to play as far as multiple comparisons are concerned. Even if the data generating process were to be different (perhaps because of different covariates) you will still run the risk of increasing type I error because of multiple comparisons. Therefore, it seems to me that what matters is whether the multiple comparisons involve making judgements about the same null hypothesis. As long as the multiple comparisons involve the same null hypothesis we have to correct for multiple comparisons to keep the Type I error at desired levels (e.g., 0.05).
null
CC BY-SA 2.5
null
2010-11-04T18:14:15.557
2010-11-04T19:10:54.393
2010-11-04T19:10:54.393
null
null
null
4219
1
4227
null
7
3998
I'm using McNemar's test. Basically this question is about best practices when reporting results using McNemar's test. I want to report the effect size. What is a sensible effect size for McNemar's test? I've seen the odds ratio b/c and the proportions b/(b+c) and c/(b+c) both used in papers. If I say what b and c are then all possible effect sizes can be computed. However I haven't seen this, is it bad form? Thanks!
Effect size of McNemar's Test
CC BY-SA 2.5
null
2010-11-05T01:24:23.780
2010-11-05T10:40:01.677
null
null
1540
[ "hypothesis-testing", "nonparametric" ]
4220
1
4223
null
181
117697
On the [Wikipedia page about naive Bayes classifiers](http://en.wikipedia.org/wiki/Naive_Bayes_classifier#Testing), there is this line: > $p(\mathrm{height}|\mathrm{male}) = 1.5789$ (A probability distribution over 1 is OK. It is the area under the bell curve that is equal to 1.) How can a value $>1$ be OK? I thought all probability values were expressed in the range $0 \leq p \leq 1$. Furthermore, given that it is possible to have such a value, how is that value obtained in the example shown on the page?
Can a probability distribution value exceeding 1 be OK?
CC BY-SA 3.0
null
2010-11-05T01:25:39.520
2021-07-07T12:27:34.507
2021-07-07T12:27:34.507
35989
226
[ "probability", "distributions", "normal-distribution", "density-function", "faq" ]
4221
2
null
4220
51
null
This is a common mistake from not understanding the difference between probability mass functions, where the variable is discrete, and probability density functions, where the variable is continuous. See [What is a probability distribution](http://www.itl.nist.gov/div898/handbook/eda/section3/eda361.htm): > continuous probability functions are defined for an infinite number of points over a continuous interval, the probability at a single point is always zero. Probabilities are measured over intervals, not single points. That is, the area under the curve between two distinct points defines the probability for that interval. This means that the height of the probability function can in fact be greater than one. The property that the integral must equal one is equivalent to the property for discrete distributions that the sum of all the probabilities must equal one.
null
CC BY-SA 2.5
null
2010-11-05T01:38:47.637
2010-11-05T15:20:19.447
2010-11-05T15:20:19.447
919
493
null
4222
2
null
4191
11
null
I think you've missed the main advantage of algorithms like Metropolis-Hastings. For Gibbs sampling, you will need to sample from the full conditionals. You are right, that is rarely easy to do. The main advantage of Metropolis-Hastings algorithms is that you can still sample one parameter at a time, but you only need to know the full conditionals up to proportionality. This is because the denominators cancel in the acceptance ratio. The unnormalized full conditionals are often available. For instance, in your example $P(x_1 | x_2,...,x_n) \propto P(x_1,...,x_n)$, which you have. You don't need to do any integrals analytically. In most applications, a lot more will likely cancel too. Programs like WinBUGS/JAGS typically take Metropolis-Hastings or slice sampling steps that only require the conditionals up to proportionality. These are easily available from the DAG. Given conjugacy, they also sometimes take straight Gibbs steps or fancy block steps.
null
CC BY-SA 2.5
null
2010-11-05T01:56:14.007
2010-11-05T02:03:58.703
2010-11-05T02:03:58.703
493
493
null
4223
2
null
4220
200
null
That Wiki page is abusing language by referring to this number as a probability. You are correct that it is not. It is actually a probability per foot. Specifically, the value of 1.5789 (for a height of 6 feet) implies that the probability of a height between, say, 5.99 and 6.01 feet is close to the following unitless value: $$1.5789\, [1/\text{foot}] \times (6.01 - 5.99)\, [\text{feet}] = 0.0316$$ This value must not exceed 1, as you know. (The small range of heights (0.02 in this example) is a crucial part of the probability apparatus. It is the "differential" of height, which I will abbreviate $d(\text{height})$.) Probabilities per unit of something are called densities by analogy to other densities, like mass per unit volume. Bona fide probability densities can have arbitrarily large values, even infinite ones. ![Gamma distribution](https://i.stack.imgur.com/2CM4T.png) This example shows the probability density function for a Gamma distribution (with shape parameter of $3/2$ and scale of $1/5$). Because most of the density is less than $1$, the curve has to rise higher than $1$ in order to have a total area of $1$ as required for all probability distributions. ![Beta distribution](https://i.stack.imgur.com/yGcYJ.png) This density (for a beta distribution with parameters $1/2, 1/10$) becomes infinite at $0$ and at $1$. The total area still is finite (and equals $1$)! --- The value of 1.5789 /foot is obtained in that example by estimating that the heights of males have a normal distribution with mean 5.855 feet and variance 3.50e-2 square feet. (This can be found in a previous table.) The square root of that variance is the standard deviation, 0.18717 feet. We re-express 6 feet as the number of SDs from the mean: $$z = (6 - 5.855) / 0.18717 = 0.7747$$ The division by the standard deviation produces a relation $$dz = d(\text{height})/0.18717$$ The Normal probability density, by definition, equals $$\frac{1}{\sqrt{2 \pi}}\exp(-z^2/2)dz = 0.29544\ d(\text{height}) / 0.18717 = 1.5789\ d(\text{height}).$$ (Actually, I cheated: I simply asked Excel to compute NORMDIST(6, 5.855, 0.18717, FALSE). But then I really did check it against the formula, just to be sure.) When we strip the essential differential $d(\text{height})$ from the formula only the number $1.5789$ remains, like the Cheshire Cat's smile. We, the readers, need to understand that the number has to be multiplied by a small difference in heights in order to produce a probability.
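The same arithmetic can be checked in R, which reproduces the numbers above:

```r
mu   <- 5.855     # mean height in feet
sdev <- 0.18717   # standard deviation in feet

dnorm(6, mu, sdev)                              # about 1.5789, a density per foot
dnorm(6, mu, sdev) * (6.01 - 5.99)              # about 0.0316, a bona fide probability
pnorm(6.01, mu, sdev) - pnorm(5.99, mu, sdev)   # nearly the same value, done exactly

integrate(dnorm, -Inf, Inf, mean = mu, sd = sdev)   # total area is still 1
```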
null
CC BY-SA 3.0
null
2010-11-05T02:32:49.170
2017-02-09T13:59:38.130
2017-02-09T13:59:38.130
919
919
null
4224
2
null
4062
5
null
Did you see this post? [http://groups.google.com/group/ggplot2/browse_thread/thread/8e1efd0e7793c1bb](http://groups.google.com/group/ggplot2/browse_thread/thread/8e1efd0e7793c1bb) Take the example, add coord_polar() and reverse the axes and you get pretty close: ```
# as.phylo() is from the ape package; the fortify()/label.phylo() helpers used
# below are defined in the ggplot2 thread linked above, and xlim must be set to
# a numeric axis limit before the geom_text() line runs.
library(cluster)
library(ape)
library(ggplot2)
data(mtcars)
x <- as.phylo(hclust(dist(mtcars)))
p <- ggplot(data=x)
p <- p + geom_segment(aes(y=x,x=y,yend=xend,xend=yend), colour="blue",alpha=1)
p <- p + geom_text(data=label.phylo(x), aes(x=y, y=x, label=label),family=3, size=3) + xlim(0, xlim) + coord_polar()
theme <- theme_update(
  axis.text.x = theme_blank(),
  axis.ticks = theme_blank(),
  axis.title.x = theme_blank(),
  axis.title.y = theme_blank(),
  legend.position = "none"
)
p <- p + theme_set(theme)
print(p)
```
null
CC BY-SA 2.5
null
2010-11-05T03:17:06.993
2010-11-05T03:17:06.993
null
null
1809
null
4225
1
4249
null
3
470
I was thinking about CIs and subjective Bayesianism, and I have the following two questions: - Would a subjective (not objective) Bayesian care if her predictions don't do well in the real world? - Would a classical statistician care if her confidence statement is (obviously) wrong for a given data set (as in Welch's paradox, where conditioning on ancillary statistics leads to the resolution of the pathological behavior)? I think my answer for 1. is YES and 2. is NO. But I don't know if I am thinking along the right lines. Can I have some more insights? --- UPDATE Welch's example: This example works for any $n$, but we will take $n=2$ for simplicity. $X_1, X_2 \sim U(\theta - 1/2, \theta +1/2)$ (iid), $\theta \in \mathbb{R}$. This implies $X_1 - \theta \sim U(-1/2, 1/2)$ (iid). $(X_1 + X_2) /2 - \theta$ (note that this is NOT a statistic) has a distribution independent of $\theta$. We can choose $c > 0$ s.t. $Prob_{\theta} [-c \le (X_1 + X_2) /2 - \theta \le c] = 1- \alpha$ ($\approx 0.99$), implying $((X_1 + X_2) /2 - c, (X_1 + X_2) /2 + c)$ is the 99% CI of $\theta$. The interpretation of this CI is: if we sample repeatedly, we will get different $(X_1 + X_2) /2$ and (at least) 99% of the time it will contain the true $\theta$. But for a particular set of $X_1, X_2$, we can't say if the CI contains $\theta$. Now, consider the following data: $X_1 =0$ and $X_2=1$; as $|X_1 - X_2|=1$, we know FOR SURE (Prob = 1) that the interval $(X_1, X_2)$ contains $\theta$ (one possible criticism, $P(|X_1 - X_2|=1) = 0$, but we can handle it mathematically and I won't discuss it). (Better details are in Pratt, 1961; Lehmann, Chap 10, 2nd Edition, Prob 27, 28; Kiefer, 1977; Berger and Wolpert, 1988) Thanks, S.
Subjective Bayesian's care for real world validation and classical statistician's worry about CI related paradoxes for a given data set?
CC BY-SA 2.5
null
2010-11-05T05:22:01.777
2010-11-05T18:03:22.370
2010-11-05T18:03:22.370
1307
1307
[ "bayesian", "confidence-interval" ]
4226
1
4235
null
27
2204
Quantum Mechanics has generalized probability theory to negative/imaginary numbers, mostly to explain interference patterns, wave/particle duality and generally weird things like that. It can be seen more abstractly, however, as a noncommutative generalisation of Bayesian probability (quote from Terence Tao). I'm curious about these things, though by no means an expert. Does this have any applications outside Quantum Mechanics? Just curious.
Do negative probabilities/probability amplitudes have applications outside quantum mechanics?
CC BY-SA 2.5
null
2010-11-05T06:35:47.567
2023-02-20T12:07:57.113
2010-11-05T06:45:53.887
1760
1760
[ "probability" ]
4227
2
null
4219
6
null
In general, I think best practice for presenting measures of effect size depends on the question of interest and the usual practice in your field. There's little point reporting an effect measure that readers will be unfamiliar with. Having said that, in this particular case if you want a single effect measure I think the (conditional) odds ratio b / c is the obvious choice. Giving both proportions b / (b + c) and c / (b + c) gives more information and clearly the odds ratio can be derived from these two. I wouldn't call two proportions an 'effect size' though. I'd strongly encourage you to report all four cells of the 2×2 concordance table, however. That way nothing is hidden and anyone can calculate whatever statistics they wish. It's only four numbers, after all.
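For concreteness, a small R sketch with made-up counts, showing the test together with the odds ratio and the two discordant proportions (b and c are the off-diagonal cells):

```r
# Hypothetical paired 2x2 table: rows = first measurement, columns = second
tab <- matrix(c(59, 16,
                 6, 80), nrow = 2, byrow = TRUE,
              dimnames = list(c("pre+", "pre-"), c("post+", "post-")))

b <- tab[1, 2]   # discordant: positive then negative
c <- tab[2, 1]   # discordant: negative then positive

mcnemar.test(tab)   # the test itself
b / c               # conditional odds ratio
b / (b + c)         # proportion of discordant pairs switching + -> -
c / (b + c)         # proportion of discordant pairs switching - -> +
```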
null
CC BY-SA 2.5
null
2010-11-05T07:17:15.547
2010-11-05T10:40:01.677
2010-11-05T10:40:01.677
449
449
null
4228
2
null
4093
0
null
PCA results (the different dimensions or components) generally can't be translated into a real concept. I think it is wrong to assume that one of the components is "fear of bears"; what led you to think that was what the component meant? The principal components procedure transforms your data matrix into a new data matrix with the same number of dimensions or fewer, and the resulting dimensions range from the one that explains the variance best to the one that explains it least. These components are calculated as combinations of the original variables with the calculated eigenvectors. Overall, the PCA procedure converts the original variables to orthogonal (linearly independent) ones. Hope this helps clarify the PCA procedure a little.
null
CC BY-SA 2.5
null
2010-11-05T07:42:26.210
2010-11-05T07:42:26.210
null
null
1808
null
4230
2
null
4089
0
null
Maybe you are looking for the library ggplot2 that lets you plot things in a pretty way. Or you can check this website that seems to have lots of R graphic utilities [http://addictedtor.free.fr/graphiques/](http://addictedtor.free.fr/graphiques/)
null
CC BY-SA 2.5
null
2010-11-05T07:58:21.050
2010-11-05T07:58:21.050
null
null
1808
null
4231
1
null
null
4
270
I have heard the terms training and validating a model. I know that we select variables which are most statistically significant and we look for other things like multicollinearity. My question is: what does training a model involve beyond this?
Training a model
CC BY-SA 2.5
null
2010-11-05T08:52:03.723
2010-11-05T12:50:14.080
null
null
1763
[ "logistic" ]
4233
1
null
null
3
642
I know that $r^2$ tells you the amount of variation that can be explained by the predictor variables. I have run a model in which $r^2$ is 0.3010 but the false positive rate is around 15.60%. So this model, which is a logit model, predicts 84% of cases right. I want to know two things: 1) Is this false positive rate significant, i.e. is this prediction good enough? 2) Does it matter if my $r^2$ is as low as 0.3 if my model is good at prediction, given that $r^2$ is a value used for explaining and not predicting?
Significance of $r^2$ value
CC BY-SA 2.5
null
2010-11-05T10:46:22.953
2010-11-06T16:17:06.830
2010-11-05T13:08:36.773
449
1763
[ "hypothesis-testing", "regression", "logistic" ]
4234
2
null
4093
5
null
For me, PCA scores are just re-arrangements of the data in a form that allows me to explain the data set with fewer variables. The scores represent how much each item relates to the component. You can name them as per factor analysis, but it's important to remember that they are not latent variables, as PCA analyses all variance in the data set, not just the elements held in common (as factor analysis does).
null
CC BY-SA 2.5
null
2010-11-05T11:19:36.060
2010-11-05T11:19:36.060
null
null
656
null
4235
2
null
4226
17
null
Yes. I like the article Søren shared very much, and together with the references in that article I would recommend Muckenheim, W. et al. (1986). [A Review of Extended Probabilities](https://doi.org/10.1016/0370-1573(86)90110-9). Phys. Rep. 133 (6) 337-401. It's a physics paper for sure, but the applications there are not all related to quantum physics. My personal favorite application relates to [de Finetti's Theorem](http://en.wikipedia.org/wiki/De_Finetti%27s_theorem) (also Bayesian in flavor): if we don't mind negative probabilities then it turns out that all exchangeable sequences (even finite, perhaps negatively correlated ones) are a (signed) mixture of IID sequences. Of course, this itself has applications in quantum mechanics, in particular, that Fermi-Dirac statistics yield the same type of (signed) mixture representation that Bose-Einstein statistics do. My second personal favorite application (outside of physics proper) relates to [infinite divisible](http://en.wikipedia.org/wiki/Infinite_divisibility_(probability)) (ID) distributions, which classically includes normal, gamma, poisson, ... the list continues. It isn't too hard to show that ID distributions must have unbounded support, which immediately kills distributions like the binomial or uniform (discrete+continuous) distributions. But if we permit negative probabilities then these problems disappear and the binomial, uniform (discrete+continuous), and a whole bunch of other distributions then become infinitely divisible - in this extended sense, please bear in mind. ID distributions relate to statistics in that they are limiting distributions in generalized central limit theorems. By the way, the first application is whispered folklore among probabilists and the infinite divisibility stuff is proved [here](http://maurice.bgsu.edu/record=b2719985%7ES0), an informal electronic copy being [here](http://people.ysu.edu/%7Egkerns/pdf/Jdiss.pdf). Presumably there is a bunch of material on [arXiv](http://www.arxiv.org/), too, though I haven't checked there in quite some time. As a final remark, whuber is absolutely right that it isn't really legal to call anything a probability that doesn't lie in $[0,1]$, at the very least, not for the time being. Given that "negative probabilities" have been around for so long I don't see this changing in the near future, not without some kind of colossal breakthrough.
null
CC BY-SA 4.0
null
2010-11-05T12:23:25.587
2023-02-20T11:41:03.587
2023-02-20T11:41:03.587
77222
null
null
4236
2
null
1980
3
null
Looking for examples and practices is a good way to learn, but I just wanted to mention that reproducibility has not only a technical/script-rerun side, but also a code style and structuring aspect: minimization of side effects in core functions, etc. I personally found that Chambers' book Software for Data Analysis allows one to understand more deeply the techniques that help avoid reliability and reproducibility issues at the R code level.
null
CC BY-SA 2.5
null
2010-11-05T12:26:45.920
2010-11-05T12:26:45.920
null
null
1820
null
4237
2
null
4231
5
null
Although the [curse of dimensionality](http://en.wikipedia.org/wiki/Curse_of_dimensionality) and multicollinearity are distinct issues, cross-validation is used for building a predictive model: we usually estimate parameters of our model on training samples, and assess its generalizability on test samples. This yields a measure of model performance, which can be a % of prediction accuracy if we work with a classification model, or an RMSEA if it is a regression model. The idea is that the better the model performs, the better it will allow us to predict outcome(s) on unseen data. To overcome the problem of overfitting when building a predictive model, we may also introduce some kind of variable or [feature selection](http://en.wikipedia.org/wiki/Feature_selection). Cross-validation may be done in various way (split or holdout method, k-fold, leave-one-out, etc.) but the general idea to keep in mind is that the final model is assessed on individuals who do not participate to its construction. You may find additional information by looking at the "[cross-validation](https://stats.stackexchange.com/questions/tagged/cross%2Dvalidation)" or "[feature selection](https://stats.stackexchange.com/questions/tagged/feature%2Dselection)" tags.
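As a bare-bones illustration of the holdout idea, here is a k-fold split for a logistic model using only base R (the data are simulated; in a real application any feature selection should happen inside the loop to avoid optimistic bias):

```r
set.seed(123)
n <- 500
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$y <- rbinom(n, 1, plogis(-0.5 + 1.2 * d$x1 - 0.8 * d$x2))

k    <- 5
fold <- sample(rep(1:k, length.out = n))   # random fold assignment
acc  <- numeric(k)
for (i in 1:k) {
  train <- d[fold != i, ]
  test  <- d[fold == i, ]
  fit   <- glm(y ~ x1 + x2, family = binomial, data = train)
  pred  <- predict(fit, newdata = test, type = "response") > 0.5
  acc[i] <- mean(pred == (test$y == 1))
}
mean(acc)   # estimated out-of-sample classification accuracy
```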
null
CC BY-SA 2.5
null
2010-11-05T12:39:03.030
2010-11-05T12:50:14.080
2017-04-13T12:44:44.530
-1
930
null
4238
1
null
null
5
187
We are often interested in estimating the limiting distribution of a parameter in situations where the data exhibit dependence within clusters. For example, a study of the effects of a household-level treatment on household-level outcomes must contend with the possibility that households within villages will have correlated outcomes. Thus, when computing standard errors for the treatment effects, we typically take this clustering into account by using a "cluster robust" covariance estimator or perhaps a "random effects" model. (See [http://bit.ly/bAah5L](http://bit.ly/bAah5L) for an example.) The properties of these covariance estimators are typically studied by assuming the source of the clustering are common "shocks" that occur within a group. That is for groups indexed by $g$ and units within groups indexed by $i$, we typically write down a model of the form, $y_{ig} = \alpha_g + \epsilon_{ig}$, where the $\alpha_g$'s denote the group level shocks. We then typically assume that the $\alpha_g$ are independent across groups with group level variance $\sigma^2_{\alpha}$ and the $\epsilon_{ig}$ are independent draws with variance $\sigma^2_{\epsilon}$, in which case the intra-cluster correlation is the familiar $\frac{\sigma^2_{\alpha}}{\sigma^2_{\alpha}+\sigma^2_{\epsilon}}$. Here's what I am wondering: does this linear "group shocks" model implicitly rule out important forms of within-cluster correlation? (I am limiting the question to correlation for now, and not other types of non-correlation dependence.) That is, are there ways that within-cluster correlation can arise that cannot, through algebraic manipulation, be expressed in terms of a group-level shock? Or is the group level shock characterization general with respect to within group correlation? I am inclined to try working out some of the examples, but I thought I'd put this up to see if someone else has already done the heavy (or maybe it's not so heavy) lifting. If this is not a well-posed question (i.e. unanswerable), I'd also welcome comments about why. (Note I am not putting "clustering" on the tag list here because it's not about cluster methods as the term is usually applied.)
Sources of within-cluster correlation other than "random shocks"
CC BY-SA 2.5
null
2010-11-05T13:32:21.913
2019-03-29T04:05:59.027
2017-11-12T17:23:18.740
11887
96
[ "correlation", "random-effects-model" ]
4239
1
null
null
13
8644
I am designing a questionnaire for my dissertation. I am in the process of validating the questionnaire, and I have applied a Cronbach's alpha test to the initial sample group. The responses to the questionnaire are on a Likert scale; can anyone suggest any further tests to apply to help test its validity? I am not an expert on statistics, so any help would be appreciated. I have been doing some research and it appears I can do a Rasch analysis; has anyone got any free software sites to apply this test, and any advice?
Validating questionnaires
CC BY-SA 2.5
null
2010-11-05T13:34:50.963
2016-09-26T10:27:54.403
2016-09-26T10:27:54.403
3277
null
[ "survey", "scales", "psychometrics", "scale-construction" ]
4241
2
null
4165
0
null
Other areas of application for stochastic processes: (1) Asymptotic theory: This builds on PeterR's comment about an IID sequence. Law of large numbers and central limit theorem results require an understanding of stochastic processes. This is so fundamental in so many areas of application that I am inclined to say that anyone with a graduate degree in stats or a field that uses sampling or frequentist inference ought to have key stochastic processes results under their belt. (2) Structural equation modeling for causal inference a la Judea Pearl: Analyzing directed acyclic graphs (DAGs) of causal processes requires some handle of stochastic process theory.
null
CC BY-SA 2.5
null
2010-11-05T14:04:18.887
2010-11-05T14:04:18.887
null
null
96
null
4242
2
null
4239
22
null
I will assume that your questionnaire is to be considered as one unidimensional scale (otherwise, Cronbach's alpha doesn't make very much sense). It is worth running an exploratory factor analysis to check for that. It will also allow you to see how items relate to the scale (i.e., through their loadings). Basic steps for validating your items and your scale should include: - a complete report on the items' basic statistics (range, quartiles, central tendency, ceiling and floor effects if any); - checking the internal consistency as you've done with your alpha (best, give 95% confidence intervals, because it is sample-dependent); - describe you summary measure (e.g., total or mean score, aka scale score) with usual statistics (histogram + density, quantiles etc.); - check your summary responses against specific covariates which are supposed to be related to the construct your are assessing -- this is referred to as known-group validity; - if possible, check your summary responses against known instruments that purport to measure the same construct (concurrent or convergent validity). If your scale is not unidimensional, these steps have to be done for each subscale, and you could also factor out the correlation matrix of your factors to assess the second-order factor structure (or use structural equation modeling, or confirmatory factor analysis, or whatever you want). You can also assess convergent and discriminant validity by using Multi-trait scaling or Multi-trait multi-method modeling (based on interitem correlations within and between scales), or, again, SEMs. Then, I would say that Item Response Theory would not help that much unless you are interested in shortening your questionnaire, filtering out some items that show [differential item functioning](http://en.wikipedia.org/wiki/Differential_item_functioning), or use your test in some kind of a [computer adaptive test](http://en.wikipedia.org/wiki/Computerized_adaptive_testing). In any case, the [Rasch model](http://en.wikipedia.org/wiki/Rasch_model) is for binary items. For polytomous ordered items, the most commonly used models are : - the graded response model - the partial credit model - the rating scale model. Only the latter two are from the Rasch family, and they basically use an adjacent odds formulation, with the idea that subject has to "pass" several thresholds to endorse a given response category. The difference between these two models is that the PCM does not impose that thresholds are equally spaced on the theta (ability, or subject location on the latent trait) scale. The graded response model relies on a cumulative odds formulation. Be aware that these models all suppose that the scale is unidimensional; i.e., there's only one latent trait. There are additional assumptions like, e.g., local independence (i.e., the correlations between responses are explained by variation on the ability scale). Anyway, you will find a very complete documentation and useful clues to apply psychometric methods in R in volume 20 of the Journal of Statistical Software: [Special Volume: Psychometrics in R](http://www.jstatsoft.org/v20). Basically, the most interesting R packages that I use in my daily work are: [ltm](http://cran.r-project.org/web/packages/ltm/index.html), [eRm](http://cran.r-project.org/web/packages/eRm/index.html), [psych](http://cran.r-project.org/web/packages/psych/index.html), [psy](http://cran.r-project.org/web/packages/psy/index.html). 
Others are referenced on the CRAN task view [Psychometrics](http://cran.r-project.org/web/views/Psychometrics.html). Other resources of interest are: - Notes on the use of R for psychology experiments and questionnaires - Using R for psychological research (W. Revelle is actually writing a book on psychometrics in R) - the PsychoR project (it does not focus on IRT and scale development, though). A good review on the use of FA vs. IRT in scale development can be found in Scale construction and evaluation in practice: [A review of factor analysis versus item response theory applications](http://www.ppsw.rug.nl/~boomsma/tenholt_vanduijn_boomsma_2010.pdf), by ten Holt et al (Psychological Test and Assessment Modeling (2010) 52(3): 272-297).
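To make the validation steps above concrete, here is a minimal R sketch using the packages mentioned; the data frame `items` (respondents in rows, polytomous items in columns) is a hypothetical placeholder, and the single-factor EFA and the graded response model are illustrative choices, not recommendations.
```
## items: hypothetical data frame of polytomous (e.g., Likert) responses,
## one row per respondent, one column per item
library(psych)   # descriptives and EFA
library(ltm)     # Cronbach's alpha with bootstrap CI, IRT models

describe(items)                        # range, quartiles, central tendency per item
fa.parallel(items, fa = "fa")          # suggested number of factors
efa <- fa(items, nfactors = 1)         # loadings on a single presumed dimension
print(efa$loadings)

cronbach.alpha(items, CI = TRUE, B = 500)   # alpha with a bootstrap 95% CI

scale.score <- rowMeans(items, na.rm = TRUE)   # summary (scale) score
hist(scale.score)

fit.grm <- grm(items)                  # graded response model (one latent trait)
summary(fit.grm)
```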
null
CC BY-SA 2.5
null
2010-11-05T14:14:26.580
2010-11-30T12:37:24.433
2010-11-30T12:37:24.433
930
930
null
4243
2
null
4239
11
null
While supporting everything said above, I would suggest that you do the following (in roughly this order). Firstly, you should be using R; if not, you should start. The following advice is predicated on the use of R. I'll assume that you have, at this point, calculated the descriptive statistics et al. If not, the psych package has a describe() function which should give you the stats you need. Install the psych package from CRAN. Load the psych package. Use the fa.parallel routine on your data. This should give you a number of factors to retain. Then, use the VSS() routine. This calculates the MAP criterion, which normally gives you a different number of factors to retain. Use a form of factor analysis (not principal components) and an oblique rotation for each number of factors. If your factors do not appear to be correlated after an oblique rotation, switch to an orthogonal rotation. This is because an orthogonal structure can be determined from an oblique rotation, but not vice versa. Extract all the factor solutions between the MAP criterion and the parallel analysis criterion. Determine which of these has the best fit indices and makes the most sense. This is the one you should retain. (A short R sketch of this workflow is given at the end of this answer.) On IRT, having used both ltm and eRm, I would suggest starting with eRm. It has better graphics functions for your models, and support for polytomous models is greater. That being said, it only fits Rasch models, and often data from psychological questionnaires do not meet the requirements for them. Good luck! Psychometrics is a lot of fun, as you will no doubt discover.
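To make the factor-retention steps above concrete, here is a minimal R sketch; the data frame `dat` and the candidate factor numbers (3 and 4) are placeholders for your own data and for the range suggested by MAP and parallel analysis.
```
library(psych)
library(GPArotation)   # needed for oblique (oblimin) rotation

describe(dat)          # descriptive statistics for each item

fa.parallel(dat)       # parallel analysis: one suggested number of factors
VSS(dat)               # output includes the MAP criterion

# Fit factor models over the range suggested by MAP and parallel analysis,
# starting with an oblique rotation; compare fit and interpretability.
fit3 <- fa(dat, nfactors = 3, rotate = "oblimin")
fit4 <- fa(dat, nfactors = 4, rotate = "oblimin")
fit3$Phi               # factor correlations: if near zero, switch to an
                       # orthogonal rotation such as rotate = "varimax"
```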
null
CC BY-SA 2.5
null
2010-11-05T14:51:50.307
2010-11-05T14:51:50.307
null
null
656
null
4244
2
null
3616
2
null
If you want to assume a simple linear trend, you can take the difference of the data sets at the various time points and test whether the slope of the fitted line is zero. -Ralph Winters
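A minimal R sketch of that idea, using made-up series and time points as placeholders: regress the pointwise differences on time and test whether the slope differs from zero.
```
# hypothetical data: two measurements taken at the same time points
time    <- 1:20
series1 <- 5 + 0.3 * time + rnorm(20)
series2 <- 5 + 0.3 * time + rnorm(20)

d   <- series1 - series2   # differences at each time point
fit <- lm(d ~ time)        # simple linear trend in the differences
summary(fit)               # t-test of H0: slope = 0
```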
null
CC BY-SA 2.5
null
2010-11-05T14:54:01.720
2010-11-05T14:54:01.720
null
null
null
null
4245
1
4246
null
10
24189
My problem with understanding this expression might come from the fact that English is not my first language, but I don't understand why it's used in this way. The marginal mean is typically the mean of a group or subgroup's measures of a variable in an experiment, but why not just use the word mean? What's the marginal here for? See the [definition of marginal from wiktionary](http://en.wiktionary.org/wiki/marginal).
What is the meaning of 'Marginal mean'?
CC BY-SA 2.5
null
2010-11-05T14:54:16.593
2016-08-31T05:24:35.263
2010-11-05T15:04:38.387
null
1320
[ "terminology", "marginal-distribution" ]
4246
2
null
4245
7
null
Perhaps the term originates from how the data are represented in a contingency table. See this [example](http://en.wikipedia.org/wiki/Contingency_table#Example) from the wiki. In that example, we would speak of marginal totals for gender and handedness when referring to the last column and the bottom row respectively. If you look at Wiktionary, the first definition of marginal is: > of, relating to, or located at a margin or an edge Since the totals (and means, if means are reported) are at the edge of the table, they are referred to as marginal totals (and marginal means if the edges show means).
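As a small illustration in R (the counts are hypothetical), `addmargins()` appends exactly those edge totals to a contingency table:
```
tab <- matrix(c(43, 44, 9, 4), nrow = 2,
              dimnames = list(Gender = c("Male", "Female"),
                              Handedness = c("Right", "Left")))
addmargins(tab)              # adds the row and column ("marginal") sums at the edges
prop.table(tab, margin = 1)  # row-wise proportions, for comparison
```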
null
CC BY-SA 2.5
null
2010-11-05T15:01:44.253
2010-11-05T15:01:44.253
null
null
null
null
4248
2
null
4226
18
null
QM does not use negative or imaginary probabilities: if it did, they would no longer be probabilities! What can be (and usually is) a complex value is the quantum mechanical wave function $\psi$. From it a bona fide probability density can be constructed; it is variously written $\langle\psi|\psi\rangle$ or $\|\psi\|^2$. When $\psi$ has (complex) scalar values, $\|\psi\|^2 = \psi^* \psi$. In every case these values are nonnegative real numbers. For details, see the section on "Postulates of Quantum Mechanics" in the [Wikipedia article](http://en.wikipedia.org/wiki/Mathematical_formulations_of_quantum_mechanics).
null
CC BY-SA 4.0
null
2010-11-05T15:30:54.033
2023-02-20T12:07:57.113
2023-02-20T12:07:57.113
362671
919
null
4249
2
null
4225
3
null
For many reasons you're right about 1. I certainly wouldn't heed the advice of someone who did not care about whether it is any good! Number 2, as you have expressed it, does not characterize good practice. If there are possible datasets where a CI (or any decision procedure, for that matter) is clearly wrong, then that procedure is inadmissible by definition (because you can replace it by one that is at least as good and sometimes better, no matter what). Yes, inadmissible procedures are sometimes used in practice, but it can be argued that such procedures actually are admissible when we include the cost of performing the procedure itself within the loss function. (In other words, a quick and dirty method that works ok can be considered superior to one that requires extensive time and effort to calculate and works only a little better.) But in this hypothetical case, the "obviously" clause indicates it takes no effort to recognize some wrong intervals and replace them with better ones. Therefore, although I do think your reasoning is good, we should conclude that a thoughtful concerned "classical" statistician would indeed care: the answer should be YES to both questions.
null
CC BY-SA 2.5
null
2010-11-05T16:02:06.010
2010-11-05T16:02:06.010
null
null
919
null
4250
2
null
4245
4
null
I'd assume it means the sample analogue of the marginal expectation $\operatorname{E}(X)$, as opposed to the sample analogue of a conditional expectation $\operatorname{E}(X \mid Y)$, where $Y$ could be anything.
null
CC BY-SA 2.5
null
2010-11-05T16:42:02.713
2010-11-05T17:00:14.750
2010-11-05T17:00:14.750
449
449
null
4251
2
null
4225
4
null
For the second question, I believe the answer is "Yes". I will quote Andrew Gelman here, "..in general there is no coverage guarantee because frequency properties depend on nuisance parameters which can only be ignored in some special cases of pivotal test statistics". You can take a look at the following paper for some really nice discussion of relative merits and demerits of these procedures. [http://www.stat.columbia.edu/~gelman/research/published/badbayesresponsemain.pdf](http://www.stat.columbia.edu/~gelman/research/published/badbayesresponsemain.pdf) V.S.
null
CC BY-SA 2.5
null
2010-11-05T17:33:07.933
2010-11-05T17:33:07.933
null
null
1831
null
4252
1
4254
null
10
88147
How can I calculate the truncated or trimmed mean? Let's say truncated by 10%? I can imagine how to do it if you have 10 entries or so, but how can I do it for a lot of entries?
How to calculate the truncated or trimmed mean?
CC BY-SA 3.0
null
2010-11-05T17:35:33.833
2016-02-17T11:02:00.770
2013-03-03T16:05:18.663
603
1833
[ "mean", "robust", "truncation", "trimmed-mean" ]
4253
2
null
4089
0
null
It's probably not exactly what you are looking for, but the pairs.panels() function in the psych package for R may prove useful. It gives you correlation values in the upper triangle, loess lines and points in the lower triangle, and a histogram of each variable's scores on the diagonal of the matrix. I personally think it's one of the best graphical summaries of data around.
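For example, using the built-in `iris` data purely as a stand-in for your own data frame:
```
library(psych)
pairs.panels(iris[, 1:4])   # correlations above, scatterplots with loess below,
                            # histograms on the diagonal
```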
null
CC BY-SA 2.5
null
2010-11-05T17:44:49.703
2010-11-05T17:44:49.703
null
null
656
null
4254
2
null
4252
21
null
A trimmed mean involves trimming $P$ percent of the observations from both ends. E.g.: if you are asked to compute a 10% trimmed mean, $P = 10$. Given a bunch of observations, $X_i$: - First find $n$ = number of observations. - Reorder them as "order statistics" $X_i$ from the smallest to the largest. - Find lower case $p = P/100$ = proportion trimmed. - Compute $n p$. If $n p$ is an integer, use $k = n p$ and trim $k$ observations at both ends. $R$ = remaining observations = $n - 2k$. Trimmed mean = $(1/R) \left( X_{k+1} + X_{k+2} + \ldots + X_{n-k} \right).$ Example: Find the 10% trimmed mean of 2, 4, 6, 7, 11, 21, 81, 90, 105, 121. Here, $n = 10, p = 0.10, k = n p = 1$, which is an integer, so trim exactly one observation at each end. Thus trim off 2 and 121. We are left with $R = n - 2k = 10 - 2 = 8$ observations. 10% trimmed mean = (1/8) * (4 + 6 + 7 + 11 + 21 + 81 + 90 + 105) = 40.625. If $n p$ has a fractional part, the trimmed mean is a bit more complicated. In the above example, if we wanted a 15% trimmed mean, $P = 15, p = 0.15, n = 10, k = n p = 1.5$. This has integer part 1 and fractional part 0.5. $R = n - 2k = 10 - 2 * 1.5 = 10 - 3 = 7$. Thus $R = 7$ observations are retained. Addendum upon @whuber's comment: To remain unbiased (after removing 2 and 121), it seems we must remove half of the 4 and half of the 105, for a trimmed mean of $(4/2 + 6 + 7 + 11 + 21 + 81 + 90 + 105/2)/7 = 38.64$. Source: [Class notes on P percent trimmed mean](http://www.fordham.edu/economics/vinod/st1trim.doc)
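In R, `mean()` has a `trim` argument that does this directly. Note that it drops `floor(n * trim)` observations from each end, so it reproduces the 10% example above; the fractional-trimming variant in the addendum would have to be coded by hand.
```
x <- c(2, 4, 6, 7, 11, 21, 81, 90, 105, 121)
mean(x, trim = 0.10)   # 40.625, matching the worked example
mean(x, trim = 0.15)   # still trims floor(10 * 0.15) = 1 value per end, so again 40.625
```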
null
CC BY-SA 3.0
null
2010-11-05T17:52:59.327
2013-06-26T06:06:11.103
2013-06-26T06:06:11.103
805
69
null
4255
2
null
4089
1
null
To explore a dataset I really like `rattle`. Install the package and just call `rattle()`. The interface is quite self-explanatory.
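That is, in R:
```
install.packages("rattle")   # one-time install from CRAN
library(rattle)
rattle()                     # launches the point-and-click interface
```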
null
CC BY-SA 2.5
null
2010-11-05T18:42:31.133
2010-11-05T18:42:31.133
null
null
582
null
4256
2
null
2397
1
null
Is it not the case that $|S_{-i}|=|S_{-j}|$ for all $i,j$ where $S_{-i}$ is the Multinomial covariance matrix with the $i$-th row and column removed? Since this is the case, I don't understand what you mean by "freedom of choice" as any "choice" is equivalent.
null
CC BY-SA 2.5
null
2010-11-05T19:28:01.477
2010-11-05T19:33:51.173
2010-11-05T19:33:51.173
1835
1835
null
4257
2
null
4233
4
null
First, no model is perfect unless it is over-fit, so your false positive rate is not unusual. Is a false positive rate of 16% good or bad? If it is lower than the natural proportion in the data, it is OK. If it is not, it is really bad. The key is by how much your model reduces the error rate; that can be measured in several ways, as discussed below. Second, R Square is not a very well accepted measure of a model's robustness and explanatory power for logit regression. I would not rely solely on it. There are other R Square measures earmarked for logit regression. They include the Pseudo R Square, the Aldrich and Nelson R Square, and the McFadden R Square. They all measure the reduction in error, as captured by the -2LL, relative to a baseline model that simply assumes the probability for all observations is the average probability, i.e. the average proportion of 1s when the outcome is coded 0/1. Another answer also mentions the Cox & Snell R Square and the Nagelkerke R Square. I suspect those are other variants on the same theme, focusing on the model's error reduction. I would reframe the investigation of your model's robustness around the different R Square measures mentioned above. I would also calculate the Chi Square p value for the whole model. The difference in -2LL between the baseline model and your model is a Chi Square value; the number of predictors gives you the DF. The resulting Chi Square p value gives you a measure of statistical significance for the whole model (how likely a fit at least this good would be if the predictors had no effect). Maybe even more importantly, I would focus on the statistical significance of the independent variables you are using. The Wald statistic (really another Chi Square test) fits that purpose well. In this case, a variable's regression coefficient divided by its standard error gives a z value, and its square is the Wald Chi Square; the DF is always 1. The resulting p value conveys the statistical significance of this variable. With the above, you can then remove or add variables and attempt to improve your model.
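As a sketch of those calculations in R (the model formula, data frame and variable names are hypothetical placeholders):
```
fit  <- glm(y ~ x1 + x2, family = binomial, data = mydata)  # hypothetical model
null <- glm(y ~ 1,       family = binomial, data = mydata)  # baseline, intercept only

# McFadden pseudo R Square: 1 - (-2LL of model) / (-2LL of baseline)
1 - as.numeric(logLik(fit)) / as.numeric(logLik(null))

# Whole-model Chi Square: difference in -2LL, DF = number of predictors
chisq <- null$deviance - fit$deviance
df    <- null$df.residual - fit$df.residual
pchisq(chisq, df = df, lower.tail = FALSE)

# Wald tests per coefficient: (coefficient / SE)^2 is Chi Square with 1 DF
summary(fit)$coefficients   # the "z value" column is coefficient / SE
```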
null
CC BY-SA 2.5
null
2010-11-05T20:41:18.610
2010-11-06T16:17:06.830
2010-11-06T16:17:06.830
1329
1329
null
4258
1
4317
null
14
2393
I would like to automate the choice of burn-in for an MCMC chain, e.g. by removing the first n rows based on a convergence diagnostic. To what extent can this step be safely automated? Even if I still double check the autocorrelation, mcmc trace, and pdfs, it would be nice to have the choice of burn-in length automated. My question is general, but it would be great if you could provide specifics for dealing with an R mcmc.object; I am using the rjags and coda packages in R.
Can I semi-automate MCMC convergence diagnostics to set the burn-in length?
CC BY-SA 2.5
null
2010-11-05T21:17:42.250
2010-11-10T00:17:12.523
2010-11-09T16:23:28.803
1381
1381
[ "r", "bayesian", "markov-chain-montecarlo" ]
4259
1
null
null
10
1585
I work in the field of data mining and have had very little formal schooling in statistics. Lately I have been reading a lot of work that focuses on Bayesian paradigms for learning and mining, which I find very interesting. My question (in several parts) is: given a problem, is there a general framework by which it is possible to construct a statistical model? What are the first things you do when given a dataset whose underlying process you'd like to model? Are there good books/tutorials out there that explain this process, or is it a matter of experience? Is inference at the forefront of your mind when constructing your model, or do you first aim to describe the data before you worry about how to use it to compute? Any insight would be greatly appreciated! Thanks.
Tips and tricks to get started with statistical modeling?
CC BY-SA 2.5
null
2010-11-05T21:17:57.507
2022-12-03T04:30:54.830
2010-11-06T16:43:06.420
null
null
[ "bayesian", "modeling", "references", "exploratory-data-analysis" ]
4260
2
null
2343
1
null
Let $\tilde{\omega}$ be the equivalence class of a given tree $\omega$ (i.e. $\omega_1 \sim \omega_2$ iff $\omega_1 \in \tilde{\omega}_2$). In your question, you define a probability (say $\tilde{P}$) on the equivalence classes and you want a formula to compute $\tilde{P}(\tilde{\omega}=\tilde{w})$ rapidly. Obviously the trivial formula you thought of when you said "compute the number of trees that are isomorphic to t in some sense" is $$\tilde{P}(\tilde{\omega})=\sum_{\omega\in\tilde{\omega}}P(\omega)$$ where $P$ is the probability measure without considering the equivalence class (two isomorphic trees can be different). $P$ is easy to compute from the paper, but if the trees are big the preceding formula is not really friendly. There is a type of equivalence for which you can have a nice formula (such as the one in the paper): it is when your equivalence relation can be restated recursively (according to the tree). Let $\mathcal{S}_k$ be the group of bijections of $(1,\dots,k)$ onto itself. > Definition 1. $w_1$ and $w_2$ are $\mathcal{S}_k$-equivalent if no node has more than $k$ sons and if they have the same root (say $s\in V$), the sons of $s$ in $w_1$ and $w_2$ are equal, say to $V_1 V_2\dots V_k$, and there exists $\sigma\in \mathcal{S}_k$ such that a. $V_{\sigma(1)}V_{\sigma(2)}\dots V_{\sigma(k)}=V_1V_2\dots V_k$ b. $\forall i=1,\dots,k$ the sub-tree of $w_1$ with root $V_{\sigma(i)}$ is equivalent to the sub-tree of $w_2$ with root $V_{i}$. To this definition, you can associate another one. > Definition 2. Let us define the weight $q(s->V_1V_2\dots V_k )$ of an occurrence of a rule $(s->V_1V_2\dots V_k)$ in a tree $w$ as the number of elements $\sigma$ in $\mathcal{S}_k$ such that a. $V_{\sigma(1)}V_{\sigma(2)}\dots V_{\sigma(k)}=V_1V_2\dots V_k$ b. for any $i$ such that $\sigma(i)\neq i$, the subtree generated by $V_{\sigma(i)}$ (in $w$) is not equal to the subtree generated by $V_i$. With this second definition, and in the case when the equivalence class is defined through $\mathcal{S}_k$-equivalence, equation 2, p. 1994 in Chi (2004) is still true if you replace $p(s->n)$ by $q(s->n)p(s->n)$, except for the transitions giving leaves or when … In your example ``` * A: 0 -> 1 1 (1.) has weight 2 (identity+permutation of (1,2)) except if the sons are equal * B: 1 -> 0 0 (0.5) has weight 2 (identity+permutation of (1,2)) except if the sons are equal * C: 1 -> 0 1 (0.5) has weight 1 (only identity) ``` Hence your tree has probability 0.5, since each transition is applied once and only the first one does not give leaves. In the two other trees we are in the case of equal sons (which gives weight 1) and the probabilities are 0.25. These definitions are a bit complicated for a tree with 3 cases... I guess it can lead to fast calculation and can be interesting in the case of a very large tree?
null
CC BY-SA 2.5
null
2010-11-05T21:33:34.133
2010-11-22T16:36:15.373
2010-11-22T16:36:15.373
223
223
null
4261
1
4263
null
3
1144
This relates to a previous question of mine which didn't gain many responses, perhaps because it wasn't very clear and well written. I hope this time I will be more accurate and get your much appreciated assistance. I am analyzing results of a biological experiment. The results are given as a single value (a non-negative integer) per genomic position. I am interested in valleys, or local minima, over this series of values. I wish to control the false positive rate and get the significance of each local minimum. I can shuffle the raw data which was used to produce the data. So what I do is to shuffle the raw data, create the new series of values, search for all local minima and keep their values. Now, I have something like this:
```
data_set    local_minima_values
=============================
true_data   4 9 1 27 12 0 0 2 5 32 0 1 5 70 2
sim_1       14 25 94 59 32
sim_2       52 0 14 74 82 12 54
...
```
Note the number of local minima naturally varies between simulations. So, my idea was to calculate an ECDF for each simulation and then combine those ECDFs into a single "average ECDF" which represents the null hypothesis. Then, I can assign a p-value to each local minimum from the true data, and see how significant ('surprising') it is. My questions are: - Does this make sense? - How do I create an average ECDF? I can't just merge the values from all simulations together and get an ECDF for this merged set, since the number of minima found in each simulation differs, and I think all simulations should have the same contribution to the average ECDF, or am I wrong? - How should I take the number of simulations (shuffles) into account? Thanks, Dave p.s. I'm working with R.
Should I use an average ECDF?
CC BY-SA 2.5
null
2010-11-05T22:12:23.317
2023-03-03T13:35:06.313
null
null
634
[ "r", "statistical-significance", "sampling", "permutation-test" ]
4262
1
null
null
3
1873
I have a process which writes statistics from a server system to a file each second in this format: ``` label1 label2 label3 344 666 787 344 849 344 939 994 344 ``` There are a number of different values which I need graphs for, and each value is added to the bottom of the file each second. I am looking for a nice way to show these numbers graphically, preferably a program which auto-updates each second and shows the different graphs. Can anyone recommend such a program for mac? Preferably a free one :)
Graphing real-time data from a text file
CC BY-SA 2.5
null
2010-11-05T22:50:38.007
2010-11-06T00:36:39.673
null
null
1845
[ "data-visualization", "real-time" ]
4263
2
null
4261
1
null
To average the ECDFs, I'd do something like:
```
impute_resolution = 1e3
# grid of values at which every simulation's ECDF is evaluated
values_to_impute = seq( min(my_data$true_data) , max(my_data$true_data) ,
                        length.out = impute_resolution )
# one row per simulation, one column per grid value
ecdfs = matrix( NA , nrow = ncol(my_data) - 1 , ncol = length(values_to_impute) )
for(i in 1:(ncol(my_data)-1)){ # assumes column 1 is true_data
  this_ecdf = ecdf(my_data[,i+1])
  ecdfs[i,] = this_ecdf(values_to_impute)
}
mean_ecdf = colMeans(ecdfs)
plot( x = values_to_impute , y = mean_ecdf , type = 'l' )
```
null
CC BY-SA 2.5
null
2010-11-05T23:20:38.963
2010-11-05T23:20:38.963
null
null
364
null
4264
2
null
4262
7
null
I had really good luck with the KDE program [kst](http://kst.kde.org): > Kst is the fastest real-time large-dataset viewing and plotting tool available and has basic data analysis functionality. Kst contains many powerful built-in features and is expandable with plugins and extensions. Extensive help is available, both from within Kst and on the web. Kst is licenced under the GPL and is freely available for anyone. It can handle a much higher update frequency than once per second. It also happens to be file-based, but I don't know how much work it is to get this going on OS X. On Ubuntu and Debian, `sudo apt-get install kst` is all it takes.
null
CC BY-SA 2.5
null
2010-11-05T23:43:15.160
2010-11-05T23:43:15.160
null
null
334
null
4265
2
null
4262
2
null
RRDTool looks like it might be exactly what you are looking for. I've never tried to run it on a Mac but it looks like someone has some info on that here: [http://rrdtool.darwinports.com/](http://rrdtool.darwinports.com/) Good luck!
null
CC BY-SA 2.5
null
2010-11-06T00:36:39.673
2010-11-06T00:36:39.673
null
null
118
null
4266
2
null
2397
2
null
There is no inherent problem with the singular covariance here. Your asymptotic distribution is the singular normal. See [http://fedc.wiwi.hu-berlin.de/xplore/tutorials/mvahtmlnode34.html](http://fedc.wiwi.hu-berlin.de/xplore/tutorials/mvahtmlnode34.html), which gives the density of the singular normal.
null
CC BY-SA 2.5
null
2010-11-06T00:52:44.303
2010-11-06T00:52:44.303
null
null
1860
null
4267
1
4268
null
18
1771
I recently read Skillicorn's book on matrix decompositions, and was a bit disappointed, as it was targeted to an undergraduate audience. I would like to compile (for myself and others) a short bibliography of essential papers (surveys, but also breakthrough papers) on matrix decompositions. What I have in mind primarily is something on SVD/PCA (and robust/sparse variants), and NNMF, since those are by far the most used. Do you all have any recommendation/suggestion? I am holding off mine not to bias the answers. I would ask to limit each answer to 2-3 papers. P.S.: I refer to these two decompositions as the most used in data analysis. Of course QR, Cholesky, LU and polar are very important in numerical analysis. That is not the focus of my question though.
Essential papers on matrix decompositions
CC BY-SA 2.5
null
2010-11-06T03:32:17.627
2010-12-16T08:38:52.443
2010-11-06T18:59:40.517
null
30
[ "matrix-decomposition", "svd", "numerics" ]
4268
2
null
4267
17
null
How do you know that SVD and NMF are by far the most used [matrix decompositions](http://en.wikipedia.org/wiki/Matrix_decomposition) rather than LU, Cholesky and QR? My personal favourite 'breakthrough' would have to be the guaranteed rank-revealing QR algorithm: - Chan, Tony F. "Rank revealing QR factorizations". Linear Algebra and its Applications, Volumes 88-89, April 1987, Pages 67-82. DOI:10.1016/0024-3795(87)90103-0 ... a development of the earlier idea of QR with column pivoting: - Businger, Peter; Golub, Gene H. (1965). Linear least squares solutions by Householder transformations. Numerische Mathematik, Volume 7, Number 3, 269-276, DOI:10.1007/BF01436084 A (the?) classic textbook is: - Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.), Johns Hopkins, ISBN 978-0-8018-5414-9. (I know you didn't ask for textbooks but I can't resist) Edit: A bit more googling finds a paper whose abstract suggests we could be slightly at cross porpoises. My above text was coming from a 'numerical linear algebra' (NLA) perspective; possibly you're concerned more with an 'applied statistics / psychometrics' (AS/P) perspective? Could you perhaps clarify? - Lawrence Hubert, Jacqueline Meulman and Willem Heiser. Two Purposes for Matrix Factorization: A Historical Appraisal. SIAM Review, Vol. 42, No. 1 (Mar., 2000), pp. 68-82
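For what it's worth, base R exposes column-pivoted QR through `qr(..., LAPACK = TRUE)`, and inspecting the diagonal of R gives a quick, informal view of the rank-revealing behaviour; the rank-deficient matrix below is made up for illustration.
```
set.seed(1)
A <- matrix(rnorm(50 * 3), 50, 3)
X <- cbind(A, A[, 1] + A[, 2])   # fourth column is a linear combination: rank 3

qrX <- qr(X, LAPACK = TRUE)      # LAPACK QR with column pivoting
qrX$pivot                        # permutation chosen by the pivoting
abs(diag(qr.R(qrX)))             # trailing diagonal entry is (near) zero,
                                 # exposing the numerical rank
```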
null
CC BY-SA 2.5
null
2010-11-06T07:17:02.640
2010-11-06T08:39:48.787
2010-11-06T08:39:48.787
449
449
null
4269
2
null
4175
1
null
From Sidney Siegel: > With a large enough sample the binomial distribution tends toward the normal distribution. A rule of thumb is that NPQ must be equal to at least 9. I believe in this case it is 750 independent observations × (1/5) × (4/5) = 120. Thus the parametric one-sample t-test is appropriate and the most powerful test. And so I'll just stick with the one-sample permutation t-test (DAAG package) as the resampling method to compare with the table-lookup test.
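The arithmetic, plus an exact binomial test for comparison, in R (the observed count `successes` is a hypothetical placeholder):
```
n <- 750; p <- 1/5
n * p * (1 - p)            # 120, comfortably above the rule-of-thumb value of 9

successes <- 160           # hypothetical observed count
binom.test(successes, n, p = 1/5)   # exact test of the same hypothesis
```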
null
CC BY-SA 2.5
null
2010-11-06T10:28:50.867
2010-11-06T10:28:50.867
null
null
1614
null
4270
2
null
4044
10
null
The most complete survey is provided in [Statistical Inference Based on Divergence Measures](http://books.google.com/books?id=ziDGGIkhqlMC&pg=PA493&dq=Leandro+Pardo+Complutense+University,+Chapman+Hall+2006&hl=en&ei=Zm_VTOqGI8H6lwepsfn9CA&sa=X&oi=book_result&ct=result&resnum=1&ved=0CC4Q6AEwAA#v=onepage&q&f=false) by Leandro Pardo, Complutense University, Chapman Hall 2006.
null
CC BY-SA 2.5
null
2010-11-06T11:07:48.890
2010-11-06T15:10:28.317
2010-11-06T15:10:28.317
919
1873
null
4271
2
null
1645
2
null
In fact the Kiefer-Salmon test and the Jarque-Bera test are critically different, as shown in several places but most recently [here](http://www.econ.sinica.edu.tw/upload/file/1118.pdf): Moment Tests for Standardized Error Distributions: A Simple Robust Approach, by Yi-Ting Chen. The Kiefer-Salmon test is by construction robust in the face of ARCH-type error structures, unlike the standard Jarque-Bera test. The paper by Yi-Ting Chen develops and discusses what I think are likely to be the best tests around at the moment.
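For reference, the standard (non-robust) Jarque-Bera test is available in R's tseries package; as far as I know the robust Kiefer-Salmon / Chen-type moment tests are not in a standard package and would have to be coded from the paper. A minimal sketch on a placeholder series:
```
library(tseries)
x <- rnorm(500)          # placeholder for your residual series
jarque.bera.test(x)      # standard Jarque-Bera test of normality
```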
null
CC BY-SA 2.5
null
2010-11-06T14:06:09.300
2010-11-06T14:06:09.300
null
null
1873
null