Dataset columns (each record below lists these 16 fields in this order):

| Column | Type | Range |
| --- | --- | --- |
| Id | string | 1-6 chars |
| PostTypeId | string | 7 classes |
| AcceptedAnswerId | string | 1-6 chars |
| ParentId | string | 1-6 chars |
| Score | string | 1-4 chars |
| ViewCount | string | 1-7 chars |
| Body | string | 0-38.7k chars |
| Title | string | 15-150 chars |
| ContentLicense | string | 3 classes |
| FavoriteCount | string | 3 classes |
| CreationDate | string | 23 chars |
| LastActivityDate | string | 23 chars |
| LastEditDate | string | 23 chars |
| LastEditorUserId | string | 1-6 chars |
| OwnerUserId | string | 1-6 chars |
| Tags | list | - |
5140
2
null
5115
27
null
[Leo Breiman](http://en.wikipedia.org/wiki/Leo_Breiman) for CART, bagging, and random forests.
null
CC BY-SA 3.0
null
2010-12-04T15:52:26.967
2011-09-22T23:12:26.383
2011-09-22T23:12:26.383
74
null
null
5144
1
5146
null
2
206
I have an experiment with any possible (reasonable) number of parameters (independent variables). I run the experiment several times for each possible combination of my variables. The data I get will generally be numeric. However, I know nothing (and any assumptions are difficult) about the distribution of my data. What I am interested in is a measure of how well my parameters predict the data I get. Which statistic should I use? How do I calculate it (by hand, a link to a tutorial would be very sweet)?

### Edit

I am trying to solve this as generally as possible (hence the slightly non-specific description) for a piece of software I'm working on. To make it a bit clearer, here is an example. I have these parameters:

```
decay:     0.1 | 0.2 | 0.3
particles: 10  | 100
velocity:  30  | 70
```

This gives 12 combinations (3 * 2 * 2), and I'll measure my dependent variable (say temperature) five times for each combination. Thus my final dataset will have 60 measurements of temperature. Now suppose that temperature was in fact given by: $t = K(0.3v + 0.6d + \varepsilon)$ where $t$ is temperature, $K$ is some constant, $v$ is velocity, $d$ is decay and $\varepsilon$ is some sort of random effect. Particles is completely unrelated to the measured temperature. Now I'd like to perform a test that would tell me that velocity has a ~0.3 effect, decay ~0.6 and particles a ~0 effect. However, I may have more or fewer variables and more or fewer measurements.
Appropriate test for multivariate experiment result with unknown distributions
CC BY-SA 2.5
null
2010-12-04T18:47:22.257
2010-12-06T03:00:55.350
2010-12-06T03:00:55.350
2261
2261
[ "experiment-design" ]
5145
2
null
5115
48
null
[William Sealy Gosset](http://en.wikipedia.org/wiki/William_Sealy_Gosset) for Student's t-distribution and the statistically-driven improvement of beer.
null
CC BY-SA 2.5
null
2010-12-04T20:42:26.150
2010-12-04T20:42:26.150
null
null
2077
null
5146
2
null
5144
6
null
Well, following your update, it seems you are dealing with a factorial experiment (factorial means that all factors are crossed, or, in other words, each unit is subjected to every possible combination of your factors), with five replicates. Let's assume that these are not the same statistical units whose temperature is repeatedly measured across each of the 12 combinations (for the sake of clarity). An [ANalysis Of VAriance](http://en.wikipedia.org/wiki/Analysis_of_variance) (ANOVA) seems to be the most appropriate method to deal with this design. Basically, it will allow you to estimate the contribution of each source of variance (decay, particles, and velocity) wrt. the total variance in the observed temperature. What is not explained by these factors is called the residual variance (what you call the 'random effect'). A full additive model (i.e., without modeling interaction between your factors) will read something like $$ y_{ijkl}=\mu + \alpha_i + \beta_j + \gamma_k + \varepsilon_{ijkl}, $$ where $y_{ijkl}$ is the temperature for unit $l$ when considering levels $i=1\dots a$, $j=1\dots b$, and $k=1\dots c$, of factors $\alpha$ (decay), $\beta$ (particles), and $\gamma$ (velocity); the $\varepsilon_{ijkl}$ are the residuals, assumed to follow a Gaussian distribution of unknown variance, $\sigma^2$. They can be viewed as random fluctuations around $\mu$, the overall mean, and reflect the between-unit variations that are not accounted for by the other factors. The $\alpha_i$, $\beta_j$, and $\gamma_k$ can be viewed as factor-specific deviations from the overall mean $\mu$. The so-called main effect of decay, particles, and velocity will be estimated by forming a ratio between the variance that they account for (known as mean squares) and the residual variance (what is left after considering all variance explained by those factors), which is known to follow a Fisher-Snedecor (F) distribution, with $d-1$ and $N-abc$ degrees of freedom, where $d=a$, $b$, or $c$ stands for the number of levels of $\alpha$ (decay), $\beta$ (particles), and $\gamma$ (velocity). A significant effect (following a [hypothesis test](http://en.wikipedia.org/wiki/Statistical_hypothesis_testing) of a null effect, i.e. $H_0:\, \mu_i=\mu_j\,\, \forall i\neq j$ vs. $H_1:$ at least two of the $\mu_i$'s differ) would indicate that the factor under consideration has a significant effect on the outcome. This is readily obtained by any statistical software. For instance, in R you would use something like

```
summary(aov(temperature ~ decay + particles + velocity, data=df))
```

provided temperature and factor levels are organized in four columns, in a data.frame named `df`, as suggested below:

```
t1   0.1  10   30
t2   0.1  10   30
t3   0.1  10   30
t4   0.1  10   30
t5   0.1  10   30
t6   0.2  10   30
t7   0.2  10   30
...
t60  0.3  100  70
```

The effect of any of the three factors can also be summarized under an equation like the one you referred to by simply calling (again under R):

```
summary.lm(aov(temperature ~ decay + particles + velocity))
```

This follows from the fact that an ANOVA is nothing more than a [Linear Model](http://en.wikipedia.org/wiki/Linear_model) that you may have heard about (think of a regression model where the explanatory variables are all categorical). Should you want to account for possible interactions between all three factors, you need to add three second-order interaction terms and one third-order interaction term.
If any of these effects prove to be significant, this would mean that the effect of the corresponding factors cannot be considered in isolation from one another (e.g., the effect of decay on temperature is not the same depending on the number of particles). As for references, I would suggest starting with an on-line tutorial or textbook like [Three-Way ANOVA](http://www.psych.nyu.edu/cohen/three_way_ANOVA.pdf), by Barry Cohen, or [Practical Regression and Anova using R](http://cran.r-project.org/doc/contrib/Faraway-PRA.pdf), by Julian Faraway (but see also other textbooks available in the [CRAN documentation](http://cran.r-project.org/other-docs.html)). The definitive reference is Montgomery, [Design and Analysis of Experiments](http://bcs.wiley.com/he-bcs/Books?action=index&itemId=047148735X&bcsId=2172) (Wiley, 2005).
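A minimal sketch of the fully crossed model mentioned above, assuming the same hypothetical `df` layout with the three design variables coded as factors:

```
# make sure the design variables are treated as factors, not numbers
df <- within(df, {
  decay     <- factor(decay)
  particles <- factor(particles)
  velocity  <- factor(velocity)
})
# decay * particles * velocity expands to the three main effects,
# the three two-way interactions, and the three-way interaction
summary(aov(temperature ~ decay * particles * velocity, data = df))
```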
null
CC BY-SA 2.5
null
2010-12-04T21:13:59.517
2010-12-04T21:23:14.870
2010-12-04T21:23:14.870
930
930
null
5147
1
5148
null
2
1032
When trying to find the mode of a nonnegative function $f$ (i.e. maximize the function), one way to do it is to sample from the function, viewed as an unnormalized density of some distribution, via MCMC. Suppose we have a sufficiently long sequence of samples obtained via this method; I was wondering how to determine the mode from the samples. Specifically, we know that a part taken from the end of the sequence can be assumed to be approximately distributed according to the distribution corresponding to the target function $f$. As far as I have thought already, there might be two choices for the estimation of the mode of the function:

- take the last sample in the original sequence,
- take a small subsequence from the end of the original sequence, evaluate the function $f$ on every sample in the subsequence and pick the one with the maximal function value.

I saw the first choice in my class notes, but the second was my first thought before looking at the notes. So I was wondering which choice for the mode of the target function might be reasonable or better, and why? Is it possible to have any references on this? You don't have to limit your scope to the two I just mentioned. Thanks and regards!
Finding the mode of a function by MCMC sampling
CC BY-SA 2.5
null
2010-12-04T23:02:34.320
2010-12-05T00:01:21.373
2010-12-05T00:01:21.373
null
1005
[ "markov-chain-montecarlo", "optimization", "monte-carlo" ]
5148
2
null
5147
4
null
The mode is indeed the maximum of f(x), so the value of x encountered during the simulation that gives the highest value of f(x) ought to be the best approximation of the mode. AFAICS there is no good reason that the last sample should be the mode, unless you are performing simulated annealing and the temperature has fallen below the "freezing point" for the problem. Likewise, I don't see why only a small subset at the end of the run is important; the x maximising f(x) is the estimate of the mode regardless of where in the run it occurs.
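A minimal R sketch of that idea, assuming the draws are stored in a vector `x` and the target is available as a function `f` (both names are placeholders):

```
# evaluate the (unnormalized) target at every draw and keep the best one
fx <- sapply(x, f)
mode_estimate <- x[which.max(fx)]
```

For a multivariate chain stored as a matrix with one draw per row, `apply(x, 1, f)` and `x[which.max(fx), ]` play the same role.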
null
CC BY-SA 2.5
null
2010-12-04T23:24:09.753
2010-12-04T23:24:09.753
null
null
887
null
5149
1
5153
null
7
4141
I have some time series data and want to test for the existence of, and estimate the parameters of, a linear trend in a dependent variable w.r.t. time, i.e. time is my independent variable. The time points cannot be considered IID under the null of no trend. Specifically, the error terms for points sampled near each other in time are positively correlated. Error terms for samples obtained at sufficiently different times can be considered IID for all practical purposes. I do not have a well-specified model of how the error terms are correlated for points close to each other in time. All I know from domain knowledge is that they are positively correlated to some degree or another. Other than this issue, I believe the assumptions of ordinary least squares linear regression (homoskedasticity, linearity, normally distributed error terms) are met. Modulo the correlated error term issue, OLS would solve my problem. I am a complete novice at dealing with time series data. Is there any "standard" way to proceed in these circumstances?
Determining trend significance in a time series
CC BY-SA 2.5
null
2010-12-05T02:33:01.707
2010-12-06T03:21:10.097
2010-12-05T12:49:08.100
null
1347
[ "time-series", "regression", "correlation" ]
5150
1
null
null
6
2255
I have a biometric authentication system that is using a person's gait to authenticate them. I extract features from gait, run them through a comparison versus a template and produce a similarity score (where, if this similarity score is below a certain threshold, the user is authenticated). So, I have 72 trials total (36 trials containing a positive case and 36 that contain a negative case). What I want to do is graph the ability of this system to authenticate people by illustrating it with a ROC graph. Unfortunately, I don't quite understand how to choose a threshold. Is there some mathematical procedure involved in choosing a threshold for the similarity scores? Do I just choose a bunch of different thresholds, and graph the corresponding ROC curves for all these different threshold values? The resulting similarity scores lie in the range [0.6, 1.2], where the positive cases tend to lie around 0.6. All my coding is being done in Matlab.
Choosing the right threshold for a biometric trait authentication system
CC BY-SA 2.5
null
2010-12-05T03:24:56.760
2010-12-05T13:32:24.757
null
null
1224
[ "matlab", "mathematical-statistics", "roc" ]
5151
2
null
125
3
null
This book suggests it is aimed at the entry-level undergraduate level: Biostatistics: A Bayesian Introduction, by George G. Woodworth. Published by John Wiley & Sons.
null
CC BY-SA 2.5
null
2010-12-05T03:47:19.163
2010-12-05T03:47:19.163
null
null
2030
null
5152
1
null
null
1
535
I am conducting a study on a cohort of people with a follow-up period of 7 years. I wish to use a Cox proportional hazards model to estimate the HR between an exposure and the time to an event. One piece of missing information is the exact date of birth for all subjects, although month and year are available. This prevents the calculation of exact age at the time of the study. Any suggestions will be much appreciated. Should any sensitivity analysis be conducted? Thanks
Missing values in survival analysis
CC BY-SA 2.5
null
2010-12-05T04:46:48.430
2010-12-07T02:59:36.707
null
null
null
[ "survival", "missing-data" ]
5153
2
null
5149
3
null
What you are describing is commonly referred to as [autocorrelated errors](http://en.wikipedia.org/wiki/Autocorrelation). I would suggest you look up resources on ARIMA modelling. ARIMA modelling will allow you to model the correlation in your error term, and hence allow you to assess your trend variable (or other independent variables you are interested in) independently of this autocorrelation. My suggested reading for an intro to ARIMA modelling would be [Applied Time Series Analysis for the Social Sciences](http://books.google.com/books?id=D6-CAAAAIAAJ&q=Applied+Time+Series+Analysis+for+the+Social+Sciences&dq=Applied+Time+Series+Analysis+for+the+Social+Sciences&hl=en&ei=QaLNTKD-HcH88Aabt-Al&sa=X&oi=book_result&ct=result&resnum=1&ved=0CC8Q6AEwAA), 1980, by R. McCleary, R. A. Hay, E. E. Meidinger, and D. McDowall. But there are plenty of resources (time series analysis is a massive field of study). You would probably be able to turn up some good online resources with just a Google search if you don't have access to an academic library. I just turned up this page, [Statistica ARIMA](http://www.statsoft.com/textbook/time-series-analysis/#arima); it has a brief but clear description of ARIMA modelling as well as other methods for time series analysis.
null
CC BY-SA 2.5
null
2010-12-05T05:13:40.530
2010-12-05T05:13:40.530
null
null
1036
null
5155
2
null
5115
57
null
[John Tukey](http://en.wikipedia.org/wiki/John_Tukey) for Fast Fourier Transforms, exploratory data analysis (EDA), box plots, projection pursuit, jackknife (along with Quenouille). Coined the words "software" and "bit".
null
CC BY-SA 3.0
null
2010-12-05T05:18:37.237
2012-08-02T03:11:29.257
2012-08-02T03:11:29.257
74
74
null
5156
2
null
5092
2
null
I think that power analysis is too elaborate for what you're trying to do, and might let you down. With a sample size north of 9 million, I think your estimate for `p = Pr(X > 3) = 0.000015` is pretty accurate. So you can use that in a simple binomial(n, p) model to estimate a sample size. Let's say your goal is to observe at least one "Large" event with a probability of 99.9%. Then `Pr(L > 0) = 1 - Pr(L = 0) = 1 - 0.999985^n = 0.999`, and your desired sample size is `n = ln(0.001)/ln(0.999985) = 460514`. Of course, if you're feeling lucky and are willing to take a 10% chance of missing a Large event, you only need a sample size of n = 153505. Tripling the sample size cuts your chance of missing the Large event by a factor of 100, so I'd go for the 460,000. BUT... if you're looking for FIVEs, their probability is just south of 1/9180902, and to observe at least one of THOSE with 99.9% probability, you'd need a sample size of about 63.4 million! Do heed DrKNexus' advice about updating your estimate of the probabilities for the Large events, since it might not be constant across all your datasets.
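A quick R sketch re-deriving the numbers above:

```
p <- 0.000015                           # estimated Pr(X > 3)
ceiling(log(1 - 0.999) / log(1 - p))    # 99.9% chance of >= 1 Large event: ~460514
ceiling(log(1 - 0.90)  / log(1 - p))    # 90% chance: ~153505
p5 <- 1 / 9180902                       # probability of a FIVE
ceiling(log(1 - 0.999) / log(1 - p5))   # ~63.4 million
```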
null
CC BY-SA 2.5
null
2010-12-05T05:29:13.617
2010-12-05T05:29:13.617
null
null
5792
null
5157
2
null
5149
1
null
Along the lines of a previous answer, if all assumptions for OLS are met except for the fact that errors are correlated, maybe something as simple as a [Cochrane-Orcutt](http://en.wikipedia.org/wiki/Cochrane%E2%80%93Orcutt_estimation) correction would be enough to solve your problem.
null
CC BY-SA 2.5
null
2010-12-05T08:02:27.610
2010-12-05T08:02:27.610
null
null
892
null
5158
1
5164
null
40
33748
I understand that when sampling from a finite population and our sample size is more than 5% of the population, we need to make a correction to the sample's mean and standard error using this formula: $\hspace{10mm} FPC=\sqrt{\frac{N-n}{N-1}}$ where $N$ is the population size and $n$ is the sample size. I have 3 questions about this formula:

- Why is the threshold set at 5%?
- How was the formula derived?
- Are there other online resources that comprehensively explain this formula besides this paper?
Explanation of finite population correction factor?
CC BY-SA 4.0
null
2010-12-05T09:40:51.387
2022-10-06T12:46:35.533
2021-05-13T14:42:54.947
11887
1636
[ "sampling", "finite-population" ]
5159
1
5169
null
3
2006
The height for 1000 students is approximately normal with a mean of 174.5 cm and a standard deviation of 6.9 cm. If 200 random samples of size 25 are chosen from this population and the values of the mean are recorded to the nearest integer, determine the probability that the mean height for the students is more than 176 cm. Since the sample means were rounded to the nearest integer, I should find $P(X>176.5)$ instead of $P(X>176)$. Is this how we account for the effect of rounding the observations?

EDIT: In light of whuber's answer:

The answer given by my module (no workings were provided):

$\hspace{1cm} n=25;\ \text{Normal}$

$\hspace{1cm} \mu_{\overline{x}}=174.5\ \text{cm}$

$\hspace{1cm} \sigma_{\overline{x}}=6.9/5=1.38$

The answer is 0.1379, which I'm pretty sure was found using $1-\Phi\left(\dfrac{176-174.5}{1.38}\right)$. So,

- Is this an acceptable answer?
- Since $n$ was less than 30, would it be OK to find the probability using a t-distribution?

Thank you.
Correction due to rounding error
CC BY-SA 2.5
null
2010-12-05T11:13:52.800
2010-12-07T19:18:04.460
2010-12-07T19:18:04.460
1636
1636
[ "self-study" ]
5160
1
null
null
17
5586
I am looking for a good tutorial on clustering data in `R` using the hierarchical Dirichlet process (HDP), one of the recent and popular nonparametric Bayesian methods. There is the `DPpackage` (IMHO, the most comprehensive of all the available ones) in `R` for nonparametric Bayesian analysis, but I am unable to understand the examples provided in `R News` or in the package reference manual well enough to code an HDP. Any help or pointer is appreciated. A C++ implementation of HDP for topic modeling is available [here](http://www.cs.princeton.edu/~blei/topicmodeling.html) (please look at the bottom for the C++ code).
Nonparametric Bayesian analysis in R
CC BY-SA 2.5
null
2010-12-05T11:14:12.273
2012-01-18T15:58:29.560
2010-12-06T08:52:21.543
null
1307
[ "r", "bayesian", "clustering", "nonparametric" ]
5161
2
null
5159
3
null
I understand the question as one where we know the theoretical distribution of students' height with some precision (i.e., to one decimal place). In the present case, this is a Gaussian distribution with parameters $\mathcal{N}(174.5;6.9^2)$. Now, we have empirical measurements of students' height on small samples ($n=25$), but the results are rounded to the nearest integer due to possible measurement error or an imperfect measurement device. So, my understanding is that the question is really to assess $\Pr(X>176)$, or $\Pr(Z>\frac{176-174.5}{6.9})$ if you refer to the standardized $\mathcal{N}(0;1)$ distribution, and not $\Pr(X>176.5)$ as you suggested.
null
CC BY-SA 2.5
null
2010-12-05T11:26:49.977
2010-12-05T11:51:26.913
2010-12-05T11:51:26.913
930
930
null
5162
2
null
5150
4
null
Generally, the cut-off value is chosen so as to maximize the compromise between sensitivity (Se) and specificity (Sp). You can generate a regular sequence of thresholds and plot the resulting ROC curve, as shown below, based on the [DiagnosisMed](http://cran.r-project.org/web/packages/DiagnosisMed/index.html) R package.

![alt text](https://i.stack.imgur.com/yM82E.png)

Actually, the raw data look like

```
  test.values TP FN FP TN Sensitivity Specificity
1       0.037 51  0 97  0           1      0.0000
2       0.038 51  0 96  1           1      0.0103
3       0.039 51  0 91  6           1      0.0619
4       0.040 51  0 84 13           1      0.1340
5       0.041 51  0 74 23           1      0.2371
6       0.042 51  0 67 30           1      0.3093
```

and the optimal threshold is found as

```
   test.values TP FN FP TN Sensitivity Specificity
47       0.194 43  8  8 89      0.8431      0.9175
```

To sum up, I would suggest generating a regular sequence of possible thresholds and computing Se and Sp in each case; then, choose the one that offers the best trade-off between Se and 1-Sp (or use other criteria if you want to minimize FP or FN rates).
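A rough R sketch of the same threshold sweep without the package, assuming a numeric score vector `score` and a 0/1 indicator `truth` (placeholder names), with higher scores taken to indicate a positive case:

```
# if lower scores indicate the positive class (as in the question), reverse the inequalities
thresholds <- sort(unique(score))
se <- sapply(thresholds, function(th) mean(score[truth == 1] >= th))
sp <- sapply(thresholds, function(th) mean(score[truth == 0] <  th))
plot(1 - sp, se, type = "l", xlab = "1 - Specificity", ylab = "Sensitivity")
thresholds[which.max(se + sp)]   # one common choice: maximize Se + Sp (Youden's index)
```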
null
CC BY-SA 2.5
null
2010-12-05T11:46:24.613
2010-12-05T13:32:24.757
2010-12-05T13:32:24.757
930
930
null
5163
2
null
5136
4
null
The relevant section of the classical typology distinguishes between (observed) variables, latent variables, and parameters. Regular variables are observed and have a distribution. Latent variables are not observed and have a distribution. Parameters are not observed and do not have a distribution. Parameters vs latent variables is indeed a modelling decision. Consider a set of survey questions that tap an underlying scale. If you expect that learning about one subject's position on the scale is potentially informative about another subject's position and you wish to be able to generalise to new subjects then you should treat position as a latent variable. If not, you may as well treat it like a parameter. Bringing up FA and IRT is a bit confusing because some measurement models aim to estimate subject parameters e.g. Rasch models, and some aim to estimate subject latent variables e.g. FA and IRT models. All types of model have parameters in addition, associated with the items. For a survey context there are also indexes, constructed by combining several indicators (which are observed variables). You should probably think of these as non-parametric estimators of latent variables, for when you don't feel happy with measurement model parametric assumptions. (Although personally I've never been particularly sure about their status)
null
CC BY-SA 2.5
null
2010-12-05T12:21:37.620
2010-12-05T12:21:37.620
null
null
1739
null
5164
2
null
5158
33
null
The threshold is chosen such that it ensures convergence of the [hypergeometric distribution](http://en.wikipedia.org/wiki/Hypergeometric_distribution) (whose SD is the binomial SD multiplied by $\sqrt{\frac{N-n}{N-1}}$), instead of a binomial distribution (for sampling with replacement), to a normal distribution (this is the Central Limit Theorem; see e.g., [The Normal Curve, the Central Limit Theorem, and Markov's and Chebychev's Inequalities for Random Variables](http://stat-www.berkeley.edu/~stark/SticiGui/Text/clt.htm)). In other words, when $n/N\leq 0.05$ (i.e., $n$ is not 'too large' compared to $N$), the FPC can safely be ignored; it is easy to see how the correction factor evolves with varying $n$ for a fixed $N$: with $N=10,000$, we have $\text{FPC}=.9995$ when $n=10$, while $\text{FPC}=.3162$ when $n=9,000$. When $N\to\infty$, the FPC approaches 1 and we are close to the situation of sampling with replacement (i.e., like with an infinite population).

To understand this result, a good starting point is to read some online tutorials on sampling theory where sampling is done without replacement ([simple random sampling](http://en.wikipedia.org/wiki/Simple_random_sample)). This online tutorial on [Nonparametric statistics](http://www.stat.washington.edu/fritz/DATAFILES425_2009/Stat425Ranksum.pdf) has an illustration on computing the expectation and variance for a total.

You will notice that some authors use $N$ instead of $N-1$ in the denominator of the FPC; in fact, it depends on whether you work with the sample or population statistic: for the variance, it will be $N$ instead of $N-1$ if you are interested in $S^2$ rather than $\sigma^2$.

As for online references, I can suggest

- Estimation and statistical inference
- A new look at inference for the Hypergeometric Distribution
- Finite Population Sampling with Application to the Hypergeometric Distribution
- Simple random sampling
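A small R sketch reproducing the FPC values quoted above for N = 10,000:

```
fpc <- function(N, n) sqrt((N - n) / (N - 1))
round(fpc(10000, 10),   4)   # 0.9995
round(fpc(10000, 9000), 4)   # 0.3162
round(fpc(10000, 500),  4)   # at n/N = 5% the correction is still ~0.9747, close to 1
```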
null
CC BY-SA 2.5
null
2010-12-05T12:32:46.047
2010-12-05T13:19:33.057
2010-12-05T13:19:33.057
930
930
null
5165
2
null
5149
4
null
Generalised least squares (GLS) is one potential option here. The parameter estimates are given by: $$\hat{\beta} = (X^{T}\Sigma^{-1}X)^{-1}X^{T}\Sigma^{-1}y$$ Normally we leave out $\Sigma$, as in OLS it is defined as $\sigma^2 \mathbf{I}$, i.e. an identity matrix multiplied by the estimated residual variance. $\mathbf{I}$ encodes the assumption of uncorrelated errors: an observation is perfectly correlated with itself and is uncorrelated with any other observation. GLS relaxes this independence assumption by allowing $\Sigma$ to take different forms. Usually we choose a simple process to parametrise $\Sigma$, such as an AR(1). In an AR(1), the correlation between two errors at times $t$ and $s$ is $$\mathrm{cor}(\varepsilon_s \varepsilon_t) = \left\lbrace \begin{array}{ll} 1 & \mathrm{if} \; s = t \\ \rho^{|t-s|} & \mathrm{else} \\ \end{array} \right. $$ which would give us the following error covariance matrix: $$\mathbf{\Sigma} = \sigma^2 \left( \begin{array}{ccccc} 1 & \rho & \rho^2 & \cdots & \rho^{n-1} \\ \rho & 1 & \rho & \cdots & \rho^{n-2} \\ \rho^2 & \rho & 1 & \cdots & \rho^{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho^{n-1} & \rho^{n-2} & \rho^{n-3} & \cdots & 1 \\ \end{array} \right)$$ An additional parameter, $\rho$, must be estimated. More complex processes for $\Sigma$ can be employed, including ARMA models. In R, these sorts of models can be fitted using the `gls()` function in package nlme. If you are an R user, you might also take a look at the sandwich package, which allows for something similar to the above, but where you estimate the OLS model and then afterwards estimate $\Sigma$ and use that as a plug-in value to correct the standard errors of the OLS parameters.
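A minimal sketch of the `gls()` route, assuming a data frame `dat` with the response `y` and a numeric time index `time` (hypothetical names):

```
library(nlme)
fit <- gls(y ~ time, data = dat, correlation = corAR1(form = ~ time))
summary(fit)   # the coefficient on `time` is the trend; Phi is the fitted AR(1) parameter
```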
null
CC BY-SA 2.5
null
2010-12-05T13:22:35.273
2010-12-05T13:22:35.273
null
null
1390
null
5167
1
null
null
1
1368
I am sampling a covariance matrix from an inverse Wishart distribution. In the one-dimensional case, after doing sufficient iterations, I am taking the mode value for the variance (after removing the burn-in values). How do I do the same in a multivariate case?
Sampling covariance matrix using Gibbs sampling
CC BY-SA 2.5
null
2010-12-05T20:54:03.357
2010-12-07T10:38:37.483
2010-12-07T10:38:37.483
null
2157
[ "markov-chain-montecarlo", "gibbs" ]
5168
2
null
5160
13
null
Here are some online resources I found interesting, without going into detail (and I'm not a specialist of this topic):

- Hierarchical Dirichlet Processes, by Teh et al. (2005)
- Dirichlet Processes: A gentle tutorial, by El-Arini (2008)
- Bayesian Nonparametrics, by Rosasco (2010)
- Non-parametric Bayesian Methods, by Ghahramani (2005)

The definitive reference seems to be

> N. Hjort, C. Holmes, P. Müller, and S. Walker, editors. Bayesian Nonparametrics. Number 28 in Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2010.

About R, there seem to be some other packages worth exploring if the [DPpackage](http://cran.r-project.org/web/packages/DPpackage/index.html) does not suit your needs, e.g. [dpmixsim](http://cran.r-project.org/web/packages/dpmixsim/), [BHC](http://www.bioconductor.org/help/bioc-views/release/bioc/html/BHC.html), or [mbsc](http://www.stat.washington.edu/hoff/Code/MBSC/), found on [Rseek.org](http://www.rseek.org).
null
CC BY-SA 3.0
null
2010-12-05T20:54:42.357
2012-01-18T15:58:29.560
2012-01-18T15:58:29.560
2728
930
null
5169
2
null
5159
5
null
I interpret this question as supposing that an experiment is conducted 200 times. In this experiment, 25 people are independently drawn from the population (with replacement) and their average height is rounded to the nearest centimeter. This process yields 200 whole numbers. You seem to be asking, what is the chance that the average of these 200 numbers exceeds 176 cm. This interpretation requires us to deal with several phenomena: the sampling distribution of the mean, the effects of rounding, and the effect of repeated sampling. Other interpretations are possible but it appears these issues will arise no matter what, so I hope the following analysis will illustrate useful, relevant techniques even if a different interpretation is intended. --- The sampling distribution of the mean of 25 independent values (with replacement) has the same mean as the parent distribution and 1/25th of its variance. It's also Normal. Thus in this case it's a Normal(174.5, 6.9/5) distribution. Rounding turns a continuous distribution (Normal(174.5, 6.9/5) in this case) into a discrete distribution, because now the only possible values are 0, 1, ..., 174, 175, 176, ..., cm. The probability of observing one of these values $y$ equals the probability that the true value lies between $y - 1/2$ and $y + 1/2$ and therefore is given by $$\Pr(Y=y) = \Phi(\frac{y + 1/2 - 174.5}{6.9/5}) - \Phi(\frac{y - 1/2 - 174.5}{6.9/5}).$$ where, as usual, $\Phi$ is the cumulative distribution function for the standard Normal distribution. Because normal distributions are symmetric, bias in rounding of values less than the mean ought to balance any bias in rounding of values greater than the mean. The balancing will be perfect when the distribution's mode is a half integer, which is the case here. Thus the mean of this "discrete Normal" distribution is exactly 174.5. The rounding will increase the variance. As an approximation, people usually think of rounding as acting at random to vary a number by some amount uniformly distributed between $-1/2$ and $+1/2$. The variance of this uniform distribution is $1/12$, whence we can estimate the standard deviation of the discrete normal distribution as $$\sqrt{\text{sd}^2 + 1/12} = \sqrt{(6.9/5)^2 + 1/12} = 1.40986\ 99703\ 63697\ 52354,$$ approximately. This approximation works well when the rounding is small compared to the standard deviation of the true distribution, which is the case here. In fact, exact calculations give a value of $1.40986\ 99703\ 63697\ 65285$, differing from the approximation by about $10^{-16}$. That's more than accurate enough! -- But it was worthwhile checking. Now that we know the parameters of the distribution of rounded averages of 25-person samples--namely, a mean of 174.5 and SD of 1.40986--we determine as before that the expectation of the mean of 200 such rounded averages is 174.5 and its SD is $1.40986/\sqrt{200}$ = $0.099693$. Now this distribution is going to be extremely close to Normal, but not perfectly so: after all, its values must all be multiples of 1/200 = 0.005 cm. If you like, you can introduce a continuity correction in the usual way by observing that there can be no average between 176 and 176.005 cm, so you would compute the probability of a Normal variate exceeding their midpoint of 176.0025 cm. However, this makes no practical difference, because 176 - 174.5 cm is more than 15 standard deviations above the mean: it's virtually impossible that the average of 200 rounded values could exceed either 176.0025 or 176.
The exact value is approximately $$1 - \Phi(\frac{176.0025 - 174.5}{0.099693}),$$ which is on the order of $10^{-51}$. Because the original population is only "approximately" Normal, we shouldn't rely on any probability calculation this small. We might just as well say the answer is "essentially zero."
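A short R check of the arithmetic above (variable names are mine):

```
sd_mean  <- 6.9 / 5                   # SD of the mean of a 25-person sample
sd_round <- sqrt(sd_mean^2 + 1/12)    # rounding adds ~1/12 to the variance: 1.409870...
sd_200   <- sd_round / sqrt(200)      # SD of the average of 200 rounded means: 0.099693
pnorm(176.0025, 174.5, sd_200, lower.tail = FALSE)   # vanishingly small tail probability
```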
null
CC BY-SA 2.5
null
2010-12-05T21:40:11.867
2010-12-05T21:40:11.867
null
null
919
null
5170
1
5176
null
3
4385
I am new to forecasting in R and am trying to automatically fit an ARIMA model to what I believe is a univariate dataset.

```
> str(p1.z)
'zoo' series from 2009-04-05 to 2010-10-31
  Data: int [1:83] 360 570 540 585 570 690 495 660 510 690 ...
  Index:  Class 'Date'  num [1:83] 14339 14346 14353 14360 14367 ...
> head(p1.z)
2009-04-05 2009-04-12 2009-04-19 2009-04-26 2009-05-03 2009-05-10
       360        570        540        585        570        690
```

But when I try to fit the model, I get the error seen below.

```
> p1.arima <- auto.arima(p1.z)
Error in nsdiffs(xx) : Non seasonal data
```

It is my understanding that the forecast package and the auto.arima function should be able to fit my data, seasonal or not. I am trying to learn time series forecasting and am using a dataset that appears to be ideal for this sort of task. Also, the function ets() was able to find a model. Any help you can provide will be greatly appreciated.
Starting out with forecast package in R
CC BY-SA 2.5
null
2010-12-05T23:12:33.357
2010-12-06T10:26:02.110
2010-12-06T10:26:02.110
159
569
[ "r", "time-series", "forecasting" ]
5171
1
5258
null
8
2402
I hope this isn't either far too basic or redundant. I have been looking around for guidance, but so far I am still uncertain of how to proceed. My data consist of counts of a particular structure used in conversations between pairs of interlocutors. The hypothesis I want to test is the following: more frequent use of this structure by one speaker will tend to increase the frequency of the structure in the other speaker's speech (i.e., this might be evidence of a priming effect). So I just have two vectors: the counts for speaker A and the counts for speaker B are the columns, and if they are lined up, each row represents a particular conversation, like this:

```
A B
0 1
0 2
1 0
3 1
0 2
2 0
2 1
```

There are about 420 conversations (rows). There are lots of zeros in this data. What would be the best way to analyze this data? I am using R, if that makes a difference. Here is a plot of the frequencies (counts). The x-axis is the number of uses by speaker A, the y-axis the number of uses by speaker B. The distinction between speakers means only that speaker A spoke first, and there's no special reason why they did. Otherwise the distinction between speaker A and speaker B is basically meaningless: [Valid XHTML http://phonematic.com/convplot.jpg](http://phonematic.com/convplot.jpg) And this is frequency relative to the number of sentences spoken by each speaker in each conversation: [Valid XHTML http://phonematic.com/rs_plot.jpg](http://phonematic.com/rs_plot.jpg) (I should mention that I have thrown out conversations with no hits at all, i.e. {0,0}.)
Testing paired frequencies for independence
CC BY-SA 2.5
null
2010-12-05T23:43:10.043
2022-12-11T17:43:57.517
2011-05-10T20:38:31.137
930
52
[ "categorical-data", "independence" ]
5172
1
null
null
4
8945
I have several sets of data; unfortunately, the data come to me in a "summary" form. My job is to consolidate the several data sources into one general summary. I'm currently using the median to summarise the data, but I don't know if this is statistically sound. Here's a description of my problem: There are $N_P$ samples, each with varying sample sizes, but all from a single population. Neither the sample sizes nor the standard deviations are known. Each sample can be divided into $N_Q$ disjoint groups (or categories). From each sample, the only data that is known is what percent of the sample falls within a group (or category). For example, sample $A$ contains $x\%$ of $a$, $y\%$ of $b$ and $z\%$ of $c$. The different samples are not disjoint, so a single item might be in several of the samples; but I don't know how much overlapping there is. There are 5-8 different samples with 5-7 categories. An example (smaller) table is the following:

```
          cat. a   cat. b   cat. c
sample A  47.34%   30.05%   11.92%
sample B  41.60%   29.90%   11.90%
sample C  47.74%   29.67%   12.69%
--------  ------   ------   ------
median    47.34%   29.90%   11.92%
```

Now is it statistically sound to create this "median" summary, which takes each group from the different samples and finds the median? Maybe I should be using the mean? The problem I'm seeing is that the "median sample" usually sums to less than 100%, even though the percentages from each sample sum to 100%. Should this matter?

```
Sample sizes:    100k - 100m
Population size: ~1 billion
```
Is taking the median of a set of percentages statistically sound?
CC BY-SA 2.5
null
2010-12-06T00:17:18.907
2017-07-24T11:43:52.403
2017-07-24T11:43:52.403
11887
2271
[ "sampling", "median", "population", "percentage" ]
5173
1
5175
null
10
1913
"Spurious regression" (in the context of time series) and associated terms like unit root tests are something I've heard a lot about, but never understood. Why/when, intuitively, does it occur? (I believe it's when your two time series are cointegrated, i.e., some linear combination of the two is stationary, but I don't see why cointegration should lead to spuriousness.) What do you do to avoid it? I'm looking for a high-level understanding of what cointegration/unit root tests/Granger causality have to do with Spurious regression (those three are terms I remember being associated with spurious regression somehow, but I don't remember what exactly), so either a custom response or a link to references where I can learn more would be great.
Resources for learning about spurious time series regression
CC BY-SA 2.5
null
2010-12-06T00:33:15.987
2015-07-07T14:41:53.283
2010-12-06T09:08:35.157
159
1106
[ "time-series", "regression", "cointegration" ]
5174
2
null
5173
12
null
Let's start with the spurious regression. Take or imagine two series which are both driven by a dominant time trend: for example, US population and US consumption of whatever (it doesn't matter what item you think about, be it soda or licorice or gas). Both series will be growing because of the common time trend. Now regress aggregate consumption on aggregate population size and presto, you have a great fit. (We could simulate that quickly in R too.) But it means nothing. There is no relationship (as we, the modelers, know) -- yet the linear model sees a fit (in the minimizing-sum-of-squares sense) as both series happen to be uptrending without a causal link. We fell victim to a spurious regression. What could or should be modeled is change in one series on change in the other, or maybe per capita consumption, or ... All those changes make the variables stationary, which helps to alleviate the issue. Now, from 30,000 feet, unit roots and cointegration help you with formal inference in these cases by providing rigorous statistical underpinning (Econometrica publications and a Nobel don't come easily) where none was available. As for the question about good resources: it's tricky. I have read dozens of time series books, and most excel at the math and leave the intuition behind. There is nothing like Kennedy's Econometrics text for time series. Maybe Walter Enders' text comes closest. I will try to think of some more and update here. Other than books, software for actually doing this is important, and R has what you need. The price is right too.
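One way to run the quick simulation alluded to above, using two independent random walks as a stand-in for the common dominant trend (a sketch, not the only possible setup):

```
set.seed(42)
n <- 200
x <- cumsum(rnorm(n))            # two independent random walks
y <- cumsum(rnorm(n))
summary(lm(y ~ x))               # often shows a 'significant' slope despite no relationship
summary(lm(diff(y) ~ diff(x)))   # regressing changes on changes removes the spurious fit
```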
null
CC BY-SA 2.5
null
2010-12-06T00:53:50.953
2010-12-06T13:26:36.717
2010-12-06T13:26:36.717
334
334
null
5175
2
null
5173
11
null
These concepts were created to deal with regressions (for instance, correlation) between non-stationary series. Clive Granger is the key author you should read. Cointegration was introduced in 2 steps:

1/ Granger, C., and P. Newbold (1974): "Spurious Regressions in Econometrics". In this article, the authors point out that regression among non-stationary variables should be conducted as regression among changes (or log changes) of the variables. Otherwise you might find high correlation without any real significance (= spurious regression).

2/ Engle, Robert F., and Granger, Clive W. J. (1987): "Co-integration and error correction: Representation, estimation and testing", Econometrica, 55(2), 251-276. In this article (for which Granger was rewarded by the Nobel jury in 2003), the authors go further and introduce cointegration as a way to study the error-correction model that can exist between two non-stationary variables. Basically, the 1974 advice to regress the changes in the time series may lead to misspecified regression models. You can indeed have variables whose changes are uncorrelated, but which are connected through an "error correction model". Hence, you can have correlation without cointegration, and cointegration without correlation. The two are complementary.

If there was only one paper to read, I suggest you start with this one, which is a very good and nice introduction: [(Murray 1993) Drunk and her dog](http://www-stat.wharton.upenn.edu/~steele/Courses/434/434Context/Co-integration/Murray93DrunkAndDog.pdf)
null
CC BY-SA 2.5
null
2010-12-06T01:56:44.033
2010-12-06T15:36:20.017
2010-12-06T15:36:20.017
919
1709
null
5176
2
null
5170
8
null
`ets()` and `auto.arima()` are not really set up to handle `zoo` objects. Although `ets()` is not returning an error, it will be ignoring any seasonality. `auto.arima()` is failing because it is confused by the `zoo` object with apparent seasonality. I will try to include better checking in a future version. When using the forecast package, use `ts` objects instead. In this example,

```
x <- ts(x)
auto.arima(x)
ets(x)
```

That will ignore the frequency component of `x`. It looks like weekly data, so

```
x <- ts(x, start=2009+(31+28+31+4)/365, f=52)
```

will capture the frequency (and start period). However, note that `ets()` will not handle weekly data and will return an error with this latter formulation.
null
CC BY-SA 2.5
null
2010-12-06T03:12:22.033
2010-12-06T03:12:22.033
null
null
159
null
5177
2
null
5149
6
null
To add to the existing answers, if you are using R a simple way to proceed is to allow the ARMA errors to be modelled automatically using `auto.arima()`. If `x` is your time series, then you can proceed as follows.

```
t <- 1:length(x)
auto.arima(x, xreg=t, d=0)
```

This will fit the model $x_t = a + bt + e_t$ where $e_t\sim\text{ARMA}(p,q)$ and $p$ and $q$ are selected automatically using the AIC. The resulting output will give the value of $b$ and its standard error. Here is an example:

```
Series: x
ARIMA(3,0,0) with non-zero mean

Call: auto.arima(x = x, xreg = t)

Coefficients:
          ar1     ar2      ar3  intercept       t
      -0.3770  0.1454  -0.2351   563.9654  0.0376
s.e.   0.1107  0.1190   0.1145    11.4725  0.2378

sigma^2 estimated as 5541:  log likelihood = -475.85
AIC = 963.7   AICc = 964.81   BIC = 978.21
```

In this case, $p=3$ and $q=0$. The first three coefficients give the autoregressive terms, $a$ is the intercept and $b$ is in the `t` column. In this (artificial) example, the slope is not significantly different from zero. The `auto.arima` function is using MLE rather than GLS, but the two are asymptotically equivalent. The use of a Cochrane-Orcutt procedure only works if the error is AR(1). So the above is much more general and flexible.
null
CC BY-SA 2.5
null
2010-12-06T03:21:10.097
2010-12-06T03:21:10.097
null
null
159
null
5178
2
null
5172
2
null
What you are doing does not make sense if your goal is to categorize what proportion of the entire population (sample A + sample B + sample C) is in category a, b, and c. Consider the following contingency table:

```
     a   b   c             a    b    c
A    8;  1;  1       A   .8;  .1;  .1
B    7;  2;  1       B   .7;  .2;  .1
C    1; 13; 16       C  .03; .43; .53
```

Then, for example, the median of the category a probabilities is 0.7 and the mean is 0.51, but only 16/50 = 0.32 of all the observations are in column a. Likewise, the median of the category c probabilities would be 0.1, but only 0.36 of the observations are in column c. Does the "median summary" you propose tell you anything meaningful in a situation such as this one? Unless you have the marginal counts of either the samples or the categories, or you are willing to make some assumptions about them, I don't think there is a whole lot you can do in this case. Do you have any specific goals in mind? Also, how many categories and samples do you have?

Edit: Your sample/population phrasing is slightly confusing. It's better to say you "have 3 samples, each of which can be sub-divided into 3 categories a, b, and c." The phrase "sample population" is troublesome, as is your reference to two different "populations."
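A small R sketch reproducing the contrast above between per-category medians and the true overall shares:

```
tab <- matrix(c(8,  1,  1,
                7,  2,  1,
                1, 13, 16),
              nrow = 3, byrow = TRUE,
              dimnames = list(c("A", "B", "C"), c("a", "b", "c")))
round(prop.table(tab, 1), 2)           # row-wise proportions (the right-hand table)
apply(prop.table(tab, 1), 2, median)   # per-category medians: 0.70, 0.20, 0.10
colSums(tab) / sum(tab)                # true overall shares: 0.32, 0.32, 0.36
```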
null
CC BY-SA 2.5
null
2010-12-06T04:15:24.173
2010-12-06T04:15:24.173
null
null
2144
null
5179
2
null
5171
0
null
I would maybe start with a [rank correlation](http://en.wikipedia.org/wiki/Rank_correlation) analysis. The issue is that you may have very low correlations as the effects you are trying to capture are small. Both Kendall and Spearman correlation coefficients are implemented in R in

```
cor(x=A, y=B, method = "spearman")
cor(x=A, y=B, method = "kendall")
```
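If a significance test is wanted as well, `cor.test()` provides one for the same (assumed) vectors `A` and `B`:

```
cor.test(A, B, method = "spearman")   # rank correlation estimate plus a p-value
cor.test(A, B, method = "kendall")
```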
null
CC BY-SA 2.5
null
2010-12-06T05:01:20.237
2010-12-06T05:01:20.237
null
null
1709
null
5180
2
null
5167
0
null
If I understand your question correctly: the covariance matrix in the one-dimensional case reduces to the variance. The Wishart distribution (or inverse Wishart distribution, depending on your formulation) is a prior over covariance matrices, which for dimension $\geq$ 2 corresponds to the multivariate case. However, I may have misunderstood you. Please correct me if that is the case.
null
CC BY-SA 2.5
null
2010-12-06T06:10:49.867
2010-12-06T06:10:49.867
null
null
1307
null
5181
1
5263
null
3
1637
Say I'm doing stats on the height of adults from various countries. I assume the heights of adults from one country are normally distributed, and ignore sex differences (I also ignore the fact that neighbouring countries tend to have similar populations). I have a bunch of data by country, but for some countries I have very few data points, which can lead to quite big errors in my estimates of the standard deviations. Is there a way I can use the data from countries for which I have a lot of data to get better estimates - say, if I notice the standard deviation in those countries is always between 7 and 8.5 cm, but my dataset for Nepal (for which I have 9 samples) has a standard deviation of 9.5 cm, I should probably correct that downwards. But how? Is there a formula for this? When calculating the parameters for Nepal, shouldn't the data from the other countries allow me to have a "prior" distribution of expected means and deviations, which I would then update by taking the actual data from Nepal into account? How would I formalize this methodology? (I got this while looking for a simple reduction of the problem that prompted my [previous question](https://stats.stackexchange.com/questions/5107/mahalanobis-distance-to-a-multivariate-distribution-of-which-i-have-few-sample), which hasn't received an answer yet - I'm still mostly looking for good methodologies for thinking about this kind of problem.)
Estimating distribution parameters from few data points
CC BY-SA 2.5
null
2010-12-06T10:30:50.827
2010-12-16T03:26:47.083
2017-04-13T12:44:37.583
-1
1737
[ "bayesian", "estimation", "normal-distribution", "uncertainty" ]
5182
2
null
5171
2
null
You seem to have ordered categorical data; therefore I suggest a linear-by-linear test as described by Agresti (2007, p. 229 ff). Function `lbl_test()` of package `coin` implements it in R.

Agresti, A. (2007). An Introduction to Categorical Data Analysis. 2nd Ed. Hoboken, NJ: Wiley.
null
CC BY-SA 2.5
null
2010-12-06T11:00:31.630
2010-12-06T11:00:31.630
null
null
1909
null
5183
2
null
5107
2
null
If you have very little data, it is not that the distance estimate is wrong, but that your estimate is uncertain. A Bayesian approach would seek to determine the posterior distribution of the distance between the arbitrary point and the multivariate distribution, rather than a single point estimate, and then marginalise over that posterior in reaching your conclusion. This posterior distribution reflects the uncertainty in estimating the mean and covariance matrix of the multivariate Gaussian distribution. I would be wary of using an informative prior. In a Bayesian analysis, the conclusions are only as strong as the prior assumptions on which they are based; if the prior is questionable, so is the posterior. Without more information about the problem, it is not possible to determine whether such a prior is reasonable.
null
CC BY-SA 2.5
null
2010-12-06T11:43:57.957
2010-12-06T11:43:57.957
null
null
887
null
5184
1
null
null
15
5073
In a recent paper, I fitted a three-way fixed effects model. Since one of the factors wasn't significant (p > 0.1), I removed it and refitted the model with two fixed effects and an interaction. I've just had the referees' comments back, to quote:

> That time was not a significant factor in the 3-way ANOVA is not of itself a sufficient criterion for pooling the time factor: the standard text on this issue, Underwood 1997, argues that the p-value for a non-significant effect must be greater than 0.25 before treatment levels of a factor can be pooled. The authors should give the relevant p-value here, and justify their pooling with reference to Underwood 1997.

My questions are:

- I've never heard of the 0.25 rule. Has anyone else? I can understand not removing the factor if the p-value was close to the cut-off, but to have a "rule" seems a bit extreme.
- This referee states that Underwood 1997 is the standard text. Is it really? I've never heard of it. What would be the standard text (does such a thing exist)? Unfortunately, I don't have access to this Underwood, 1997.
- Any advice when responding to the referees?

---

Background: this paper was submitted to a non-statistical journal. When fitting the three-way model I checked for interaction effects.
Removing factors from a 3-way ANOVA table
CC BY-SA 4.0
null
2010-12-06T14:04:10.083
2022-12-24T07:23:20.347
2021-09-15T22:00:52.557
919
8
[ "anova", "fixed-effects-model" ]
5185
2
null
5184
17
null
I'm guessing the Underwood in question is [Experiments in Ecology (Cambridge University Press, 1997)](https://www.cambridge.org/core/books/experiments-in-ecology/DCF3663D5E7C9923D19B5ECE88167780). It's a more-or-less standard reference in the ecological sciences, perhaps third behind [Zar](https://www.pearson.com/en-gb/subject-catalog/p/biostatistical-analysis-pearson-new-international-edition/P200000005805/9781292037110) and [Sokal and Rohlf](https://rads.stackoverflow.com/amzn/click/com/0716724111) (and in my opinion the most 'readable' of the three). If you can find a copy, the relevant section your referee is citing is 9.7 on p. 273. There Underwood suggests a recommended pooling procedure (so not a 'rule' per se) for non-significant factors. It's a 2-step procedure that frankly I don't quite understand, but the upshot is that p = 0.25 is suggested to reduce the probability of Type I error when pooling the non-significant factor (so nothing to do with 'time' in your example; it could be any non-significant factor). The procedure doesn't actually appear to be Underwood's; he himself cites Winer et al. 1991 ([Statistical Principles in Experimental Design](https://rads.stackoverflow.com/amzn/click/com/0070709815), McGraw-Hill). You might try there if you can't find a copy of Underwood.
null
CC BY-SA 4.0
null
2010-12-06T14:53:21.063
2022-12-24T07:23:20.347
2022-12-24T07:23:20.347
362671
1475
null
5186
2
null
5184
11
null
I loathe these sorts of cut-off-based rules. I think it depends on the design and what your a priori hypotheses and expectations were. If you were expecting the outcome to vary with time, then I'd say you should keep time in, as you would for any other 'blocking' factor. On the other hand, if you were replicating the same experiments at different times and had no reason to think the outcome would vary with time but wished to check this was the case, then having done so and found little or no evidence for it varying with time, I'd say it's entirely reasonable to then drop time. I've never heard of Underwood before. It may be a standard text for 'Experiments in Ecology' (the book's title), but there's no obvious reason that experiments in ecology should be treated any differently from any other experiments in this respect, so to view it as "the standard text on this issue" seems unjustified.
null
CC BY-SA 2.5
null
2010-12-06T14:57:16.090
2010-12-06T14:57:16.090
null
null
449
null
5187
1
5188
null
5
3746
I am new to Gibbs sampling and have been using WinBUGS, but I find that it is not well suited to storing/presenting results, so I have been calling it from R using the R2WinBUGS package. The data is apparently stored as a "bugs" class. I converted it to coda to run diagnostics, and it displays each of the chains, but I am confused as to the $ extension for each individual chain. I cannot find any good documentation for the "coda" class (the CRAN instructions are not helpful). My code is below:

```
> bugs.sim <- bugs(data, inits, parameter, "gl.bug", n.chains = 5, codaPkg = TRUE, DIC = FALSE, n.iter = 5000)
> codaobject <- read.bugs(bugs.sim)
```

As you can see, I have 5 chains, and I would like to take the mean and standard deviation of each. How do I go about doing this? I can use the codaobject to take the [Geweke diagnostic](http://www.people.fas.harvard.edu/~plam/teaching/methods/convergence/convergence_print.pdf) of each chain; it displays each chain as `[[i]]` ($i=1,\dots,5$). Thanks in advance. Any references to detailed documentation for R2WinBUGS would also be greatly appreciated.
Using R2WinBUGS, how to extract information from each chain?
CC BY-SA 2.5
null
2010-12-06T17:23:20.837
2012-01-15T18:56:56.413
2010-12-07T10:56:02.763
null
null
[ "r", "markov-chain-montecarlo", "bugs" ]
5188
2
null
5187
5
null
The object returned by `read.bugs` is an object of S3 class `mcmc.list`. You can use the double brackets `[[` to access the separate chains, i.e. the different `mcmc` objects that make up the larger `mcmc.list` object, which really is simply a list of `mcmc` objects that inherits some information about thinning and chain length from its components. More to the point, something like `lapply(codaobject, function(x){ colMeans(x) })` should return the posterior means for each parameter in each chain, and `lapply(codaobject, function(x){ apply(x, 2, sd) })` should give chain- and parameter-specific posterior SDs, since each chain is essentially just a numeric matrix with rows corresponding to the (saved) iterations and columns corresponding to the different parameters. EDIT: I think Gelman and Hill's "Data Analysis Using Regression and Multilevel/Hierarchical Models" contains some worked examples using R2WinBUGS.
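A few more coda conveniences along the same lines, assuming `codaobject` as defined in the question:

```
library(coda)
summary(codaobject)           # pooled posterior means, SDs and quantiles across all chains
lapply(codaobject, summary)   # the same summaries, chain by chain
geweke.diag(codaobject)       # Geweke diagnostic, reported separately for each chain
```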
null
CC BY-SA 2.5
null
2010-12-06T18:03:04.027
2010-12-06T18:10:12.523
2010-12-06T18:10:12.523
1979
1979
null
5189
1
5194
null
7
3518
With some great help from this forum, I have been able to get up and running with some basic time series analysis in R. Right now, my needs are mostly univariate time series. Here is my question: I can read daily data from a database into a data frame. I have two columns: date, which is understood by R as POSIXct, and a second, which is the value of interest and numeric. What is the best, most straightforward way to make this a ts object where R understands the start/end dates and represents daily observations dynamically? It seems to me that I shouldn't be required to manually tell it the start and end dates (when coercing my object to a ts object) when the data frame already has them. For some context, I have been able to aggregate other data from daily to weekly, but find myself doing things in ways that just seem long and unnecessary considering R already understands my raw data as time. As you can tell, I am new to R and time series in R, but I figure that since R is so powerful, there is probably a pretty easy way around my issues.
Getting started with time series in R
CC BY-SA 4.0
null
2010-12-06T18:40:58.743
2018-05-29T12:42:11.920
2018-05-29T12:42:11.920
128677
569
[ "r", "time-series" ]
5190
2
null
5181
4
null
What you seem to be referring to is called ["shrinkage"](http://en.wikipedia.org/wiki/Shrinkage_%28statistics%29). This allows you to share strength across groups and is frequently used in hierarchical Bayesian models. The (very) basic idea is to impose a prior distribution over the entire population and place more weight on the prior for those groups that have few samples, much like you would for any Bayesian analysis. The best reference I can think of for this is ch. 8 of Peter Hoff's book: "A First Course in Bayesian Statistical Methods", but if you don't have access to this book, I'm sure any book on Bayesian statistics will talk about this (search for "group comparison", "hierarchical modeling", or "shrinkage"). I know that [Robert Gramacy](http://faculty.chicagobooth.edu/robert.gramacy/papers.html) has done some work on this as well, so you may wish to check out his publications.
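As a quick (non-Bayesian) illustration of the same partial-pooling behaviour, a random-intercept model shrinks group means toward the overall mean. A sketch, assuming a data frame `d` with columns `height` and `country` (lme4 is my choice here, not something the answer prescribes):

```
library(lme4)
fit <- lmer(height ~ 1 + (1 | country), data = d)
fixef(fit)          # overall mean height
coef(fit)$country   # per-country intercepts, shrunk toward the overall mean
                    # (countries with few observations are shrunk the most)
```

Note this only shrinks the country means; sharing strength for the standard deviations, as the question asks, needs a fuller hierarchical model of the kind covered in Hoff's ch. 8.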
null
CC BY-SA 2.5
null
2010-12-06T20:27:33.010
2010-12-06T20:27:33.010
null
null
1913
null
5191
1
5193
null
2
104
This question is related to my previous question [Bias for kernel density estimator (periodic case)](https://stats.stackexchange.com/questions/5011/bias-for-kernel-density-estimator-periodic-case). A kernel $K(x)$ is of order $p$ if $$\int_{-\infty}^{\infty}K(x)x^{j}\,dx=\delta_{0,j},\quad j=0,\dots,p-1,$$ $$\int_{-\infty}^{\infty}K(x)x^{p}\,dx\neq0.$$ Does it mean that for a kernel with period 1 the definition of the order of the kernel is $$\int_{0}^{1}K(x)\min(x,1-x)^{j}\,dx=\delta_{0,j},\quad j=0,\dots,p-1,$$ $$\int_{0}^{1}K(x)\min(x,1-x)^{p}\,dx\neq0?$$
Order of the kernel for periodic case
CC BY-SA 3.0
null
2010-12-06T21:08:37.437
2015-04-23T05:54:16.560
2017-04-13T12:44:41.607
-1
2189
[ "kernel-smoothing" ]
5192
2
null
5115
32
null
[George Box](http://en.wikipedia.org/wiki/George_E._P._Box) for his work on time series, designed experiments and elucidating the iterative nature of scientific discovery (proposing and testing models).
null
CC BY-SA 3.0
null
2010-12-06T21:24:56.347
2011-12-14T06:46:31.820
2011-12-14T06:46:31.820
183
null
null
5193
2
null
5191
2
null
I think the correct analog of this definition in the periodic case is that coefficients $1$ through $p-1$ of the Fourier Series for $K$ all vanish. The purpose of the definition of order is to obtain estimates of the bias of the kernel estimator. When $K$ "kills" powers $1$ through $p-1$ of $x$, then the bias will be approximately of order $h^p$ for a bandwidth $h$. This is proven in Tsybakov's Proposition 1.2 by expanding the pdf in a power series: multiplication by $K$ kills off the terms through order $p-1$, leaving the Taylor error term of order $p$; elementary estimates of that integral finish the job. The analog of a power series for periodic functions is the Fourier Series. The analog is a perfect one: we can think of a periodic function as being defined on the unit circle in the complex plane. It has a complex coordinate $q = e^{i x}$ (where now the period is $2\pi$ rather than $1$, but that's inconsequential). Expanding $K(q)$ in a power series expresses it as a sum of powers of $q$. However, from $$q^j = (e^{i x})^j = e^{i x j} = \cos(j x) + i \sin(j x)$$ we see that this expansion is just the Fourier Series (both the sine and cosine terms). Consequently you should be able to emulate the proof of Proposition 1.2 with very little change at all.
null
CC BY-SA 2.5
null
2010-12-07T00:03:46.107
2010-12-07T00:03:46.107
null
null
919
null
5194
2
null
5189
3
null
It seems like you need the package xts. Create your time series using

```
install.packages('xts')
library(xts)

# DF[,1] is assumed to be the POSIXct date column, DF[,2] the numeric values
X <- xts(coredata(DF[,2]), order.by = DF[,1])
```

Then you will be able to manipulate your data easily.

```
to.weekly(X)   # aggregate to weekly observations
to.monthly(X)  # aggregate to monthly observations
```

Please note that you will then be manipulating xts objects and not ts. But no worries, you can go back to ts whenever needed.
null
CC BY-SA 2.5
null
2010-12-07T01:14:28.133
2010-12-07T02:22:22.400
2010-12-07T02:22:22.400
1709
1709
null
5195
1
5209
null
13
20036
As the title says, I need to draw something like this: ![alt text](https://i.stack.imgur.com/KYQ5V.jpg) Can ggplot (or another package, if ggplot cannot do it) be used to draw something like this?
How to draw funnel plot using ggplot2 in R?
CC BY-SA 2.5
null
2010-12-07T01:29:37.223
2016-02-12T23:37:36.123
2010-12-07T10:53:58.223
null
588
[ "r", "data-visualization", "ggplot2", "funnel-plot" ]
5196
1
5201
null
19
17299
In order to calibrate a confidence level to a probability in supervised learning (say to map the confidence from an SVM or a decision tree using oversampled data) one method is to use Platt's Scaling (e.g., [Obtaining Calibrated Probabilities from Boosting](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.5153&rep=rep1&type=pdf)). Basically one uses logistic regression to map $[-\infty;\infty]$ to $[0;1]$. The dependent variable is the true label and the predictor is the confidence from the uncalibrated model. What I don't understand is the use of a target variable other than 1 or 0. The method calls for creation of a new "label": > To avoid overfitting to the sigmoid train set, an out-of-sample model is used. If there are $N_+$ positive examples and $N_-$ negative examples in the train set, for each training example Platt Calibration uses target values $y_+$ and $y_-$ (instead of 1 and 0, respectively), where $$ y_+=\frac{N_++1}{N_++2};\quad\quad y_-=\frac{1}{N_-+2} $$ What I don't understand is how this new target is useful. Isn't logistic regression simply going to treat the dependent variable as a binary label (regardless of what label is given)? UPDATE: I found that in SAS changing the dependent from $1/0$ to something else reverted back to the same model (using `PROC GENMOD`). Perhaps my error or perhaps SAS's lack of versatility. I was able to change the model in R. As an example: ``` data(ToothGrowth) attach(ToothGrowth) # 1/0 coding dep <- ifelse(supp == "VC", 1, 0) OneZeroModel <- glm(dep~len, family=binomial) OneZeroModel predict(OneZeroModel) # Platt coding dep2 <- ifelse(supp == "VC", 31/32, 1/32) plattCodeModel <- glm(dep2~len, family=binomial) plattCodeModel predict(plattCodeModel) compare <- cbind(predict(OneZeroModel), predict(plattCodeModel)) plot(predict(OneZeroModel), predict(plattCodeModel)) ```
Why use Platt's scaling?
CC BY-SA 3.0
null
2010-12-07T01:31:14.380
2013-09-27T21:59:39.513
2013-09-27T21:59:39.513
7290
2040
[ "logistic", "cross-validation", "calibration" ]
5197
1
null
null
2
219
I have a database of 78706 resident incidents in aged care facilities (5 years of data). I want to learn and implement a tool for analyzing these data using the following attributes:

- Resident
- Date/Time
- Location
- Result
- Injury

I want to be able to get findings like the following from my system, which will be passed to specialists for further research, decision making and action. Examples of outputs:

- Most incidents in facility A involve residents X, Y and Z
- Falls occur in the North Wing between 2am and 5am
- Skin tears happen during showering in facility B
- Most incidents in facility C are related to repositioning

My question is not what software package can help me, but what type of statistical analysis solves this problem - regression, clustering, etc. Can you also recommend some practical books for a beginner?
What type of statistical analysis solves this problem?
CC BY-SA 2.5
null
2010-12-07T02:08:09.843
2010-12-07T03:07:27.183
null
null
null
[ "regression", "clustering" ]
5198
2
null
5152
3
null
It is hard to imagine a situation when the effect of age with a precision of a month is not sufficient - even for babies after the first few months of life nobody uses weeks. For adults, even rounding to years should be just fine.
null
CC BY-SA 2.5
null
2010-12-07T02:59:36.707
2010-12-07T02:59:36.707
null
null
279
null
5199
2
null
5197
1
null
You could consider [association analysis](http://en.wikipedia.org/wiki/Association_rule_learning). If your time is discretized appropriately and the data support your 2nd example (Falls occur in North Wing between 2am and 5am), one possible learned rule that comes out of the analysis might be {North Wing, 2AM-5AM} => {Fall}.
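If the records were discretized into transaction-style attributes, a minimal sketch with the arules package might look like this (the column names and values below are made up for illustration):

```r
library(arules)
incidents <- data.frame(
  location = factor(c("North Wing", "North Wing", "South Wing")),
  timeband = factor(c("2am-5am",    "2am-5am",    "10am-1pm")),
  result   = factor(c("Fall",       "Fall",       "Skin tear"))
)
trans <- as(incidents, "transactions")        # one transaction per incident
rules <- apriori(trans, parameter = list(support = 0.1, confidence = 0.8))
inspect(rules)                                # e.g. {North Wing, 2am-5am} => {Fall}
```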
null
CC BY-SA 2.5
null
2010-12-07T03:07:27.183
2010-12-07T03:07:27.183
null
null
1815
null
5200
1
5203
null
3
540
In Frank Schorfheide's class notes on [likelihood functions of DSGE models](https://web.archive.org/web/20100702005639/http://www.webpages.ttu.edu/pesummer/ECO%205328/notes/4%20likelihood%20dsge.pdf), he expresses the value of the likelihood function for a given vector of parameters $\theta$, and time series $Y^T$ as: $$p(Y^{T}|\theta)=(2\pi)^{-nT/2}\left(\prod_{t=1}^{T}\left|F_{t|t-1}\right|\right)^{-1/2}\exp\left\{-\frac{1}{2}\sum_{t=1}^{T}v_{t}F_{t|t-1}v_t\prime\right\}$$ where $v_t$ is the innovation in $y$ $$v_t=y_t-\hat{y}_{t|t-1}$$ and the marginal distribution of $y_t$ is $$y_t|Y^{t-1}\sim\mathcal{N}\left(\hat{y}_{t|t-1}, F_{t|t-1}\right)$$ I've just got a few questions about what these terms look like. First, does anyone have an idea what $n$ is in the exponent of the first term in the first equation? I think it might be a misprint, but I'm not sure. Second, what does $F_{t|t-1}$ look like? For an $n\times 1$ vector $y$ I'm picturing an $n\times n$ matrix, but what would the values of $F_{j,k}$ be equal to? I'm picturing the covariance between $y_{t,j}$ and $\hat{y}_{t|t-1,k}$ - is that correct? Lastly, I'm assuming from the results of my code that the value the likelihood function returns is a scalar, but it doesn't look like the formula produces one -- for an $n\times 1$ vector $y$, wouldn't the second term in the first equation be $n\times n$? Or do you think it's meant to be the determinant of $F$?
Likelihood function of DSGE model using Kalman filter
CC BY-SA 4.0
null
2010-12-07T03:47:50.017
2023-02-05T17:46:42.410
2023-02-05T17:46:42.410
362671
2251
[ "kalman-filter", "likelihood" ]
5201
2
null
5196
15
null
I suggest checking out the [Wikipedia page on logistic regression](http://en.wikipedia.org/wiki/Logistic_regression). It states that in the case of a binary dependent variable, logistic regression maps the predictors to the probability of occurrence of the dependent variable. Without any transformation, the probability used for training the model is either 1 (if y is positive in the training set) or 0 (if y is negative). So: instead of using the absolute values 1 for the positive class and 0 for the negative class when fitting $p_i=\frac{1}{1+\exp(Af_i+B)}$ (where $f_i$ is the uncalibrated output of the SVM), Platt suggests using the mentioned transformation to allow the opposite label to appear with some probability. In this way some regularization is introduced. As the size of the dataset goes to infinity, $y_+$ tends to 1 and $y_{-}$ tends to 0. For details, see the [original paper of Platt](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1639).
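As a tiny numeric illustration of how the targets behave (the counts here are made up):

```r
# Platt targets for a hypothetical training set with 30 positives and 70 negatives
Npos <- 30; Nneg <- 70
y_pos <- (Npos + 1) / (Npos + 2)   # about 0.969 instead of 1
y_neg <- 1 / (Nneg + 2)            # about 0.014 instead of 0
c(y_pos, y_neg)
```

As Npos and Nneg grow, these targets approach 1 and 0, so the regularization vanishes with sample size.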
null
CC BY-SA 2.5
null
2010-12-07T07:43:17.147
2010-12-07T07:43:17.147
null
null
264
null
5202
2
null
5115
3
null
[Teuvo Kohonen](http://en.wikipedia.org/wiki/Teuvo_Kohonen) for invention of the [Self-Organizing-Map](http://en.wikipedia.org/wiki/Self-Organizing_Map) (SOM).
null
CC BY-SA 2.5
null
2010-12-07T07:46:56.347
2010-12-07T07:46:56.347
null
null
264
null
5203
2
null
5200
3
null
$n$ is the dimension of the observation vector, as you mention in your question. $F$ is the covariance matrix of innovations; I think you are missing an exponent of -1 in the last term of the likelihood. It should read $v_t' F_{t|t-1}^{-1} v_t$ (using the convention that $v_t$ is a column vector; your notation seems to assume the opposite). What you have in the second term of the likelihood is indeed the determinant of $F$. You might want to peruse any among many excellent books on Kalman filter and state-space models, like [Durbin-Koopman](http://rads.stackoverflow.com/amzn/click/0198523548), [Anderson-Moore](http://rads.stackoverflow.com/amzn/click/0486439380) or [Harvey](http://rads.stackoverflow.com/amzn/click/0521405734). Or may be just look at the Wikipedia the topic on [Kalman filtering](http://en.wikipedia.org/wiki/Kalman_filter).
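Putting the corrections together, the likelihood in the question would read (under the column-vector convention for $v_t$):

$$p(Y^{T}|\theta)=(2\pi)^{-nT/2}\left(\prod_{t=1}^{T}\left|F_{t|t-1}\right|\right)^{-1/2}\exp\left\{-\frac{1}{2}\sum_{t=1}^{T}v_{t}'F_{t|t-1}^{-1}v_{t}\right\},$$

which is a scalar, as expected.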
null
CC BY-SA 2.5
null
2010-12-07T08:25:52.680
2010-12-07T08:25:52.680
null
null
892
null
5204
2
null
5196
2
null
Another method of avoiding over-fitting that I have found useful is to fit the univariate logistic regression model to the leave-one-out cross-validation output of the SVM, which can be approximated efficiently using the [Span bound](http://dx.doi.org/10.1023/A:1012450327387). However, if you want a classifier that produces estimates of the probability of class membership, then you would be better off using kernel logistic regression, which aims to do that directly. The output of the SVM is designed for discrete classification and doesn't necessarily contain the information required for accurate estimation of probabilities away from the p=0.5 contour. [Gaussian process classifiers](http://www.gaussianprocess.org/gpml/) are another good option if you want a kernel-based probabilistic classifier.
null
CC BY-SA 2.5
null
2010-12-07T08:57:03.053
2010-12-07T08:57:03.053
null
null
887
null
5205
2
null
5167
2
null
You shouldn't need a Gibbs sampler: the mode of an [inverse-Wishart](http://en.wikipedia.org/wiki/Inverse-Wishart_distribution) has a closed form. Also, independent random samples from the Cholesky factor of a Wishart can be obtained from the [Bartlett decomposition](http://en.wikipedia.org/wiki/Wishart_distribution#Bartlett_decomposition): as it is triangular, it can be inverted easily by forward substitution to get the Cholesky factor of an inverse-Wishart.
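For reference, here is a rough R sketch of the Bartlett construction mentioned above (the function and argument names are my own; it assumes $\nu > p - 1$ and that `Sigma` is the Wishart scale matrix):

```r
rwishart_chol <- function(nu, Sigma) {
  p <- nrow(Sigma)
  L <- t(chol(Sigma))                           # lower Cholesky factor of Sigma
  A <- matrix(0, p, p)                          # Bartlett factor: lower triangular
  diag(A) <- sqrt(rchisq(p, df = nu - seq_len(p) + 1))
  A[lower.tri(A)] <- rnorm(p * (p - 1) / 2)
  L %*% A                                       # lower Cholesky factor of a Wishart draw
}

# The triangular factor can be inverted cheaply, e.g. with forwardsolve(), which is
# the route to a triangular factor of the corresponding inverse-Wishart draw.
W_chol <- rwishart_chol(10, diag(2))
forwardsolve(W_chol, diag(2))
```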
null
CC BY-SA 2.5
null
2010-12-07T09:59:50.810
2010-12-07T09:59:50.810
null
null
495
null
5206
1
5208
null
8
6576
This is the confidence interval estimated by prop.test ``` n <- 600; x <- 276; p <- 0.40 prop.test(x, n, p, alternative="two.sided", conf.level=0.95, correct=T) 95 percent confidence interval: 0.4196787 0.5008409 ``` I tried to reproduce it, reading the code under prop.test. Here is a simplified way to get those two limits ``` ESTIMATE <- x/n YATES <- 0.5 conf.level <- 0.95 z <- qnorm((1 + conf.level)/2) YATES <- min(YATES, abs(x - n * p)) z22n <- z^2/(2 * n) p.c <- ESTIMATE + YATES/n (p.c + z22n + z * sqrt(p.c * (1 - p.c)/n + z22n/(2 * n)))/(1 + 2 * z22n) [1] 0.5008409 p.c <- ESTIMATE - YATES/n (p.c + z22n - z * sqrt(p.c * (1 - p.c)/n + z22n/(2 * n)))/(1 + 2 * z22n) [1] 0.4196787 ``` Can you explain to me why the underlying probability of success (p) is used in line 5? or maybe you could suggest where can I find more info about this YATES correction that affects the ESTIMATE. Thank you
Yates' continuity correction in confidence interval returned by prop.test
CC BY-SA 2.5
null
2010-12-07T10:10:47.810
2010-12-07T14:08:24.490
2010-12-07T10:37:21.260
null
339
[ "r", "confidence-interval", "yates-correction" ]
5207
1
10734
null
9
1571
I tried to simulate from a bivariate density $p(x,y)$ using Metropolis algorithms in R and had no luck. The density can be expressed as $p(y|x)p(x)$, where $p(x)$ is the Singh-Maddala distribution $p(x)=\dfrac{aq x^{a-1}}{b^a (1 + (\frac{x}{b})^a)^{1+q}}$ with parameters $a$, $q$, $b$, and $p(y|x)$ is log-normal with log-mean a fraction of $x$, and log-sd a constant. To test whether my sample is the one I want, I looked at the marginal density of $x$, which should be $p(x)$. I tried different Metropolis algorithms from the R packages MCMCpack, mcmc and dream. I discarded burn-in, used thinning, used samples with size up to a million, but the resulting marginal density was never the one I supplied. Here is the final version of the code I used:

```
logvrls <- function(x,el,sdlog,a,scl,q.arg) {
  if(x[2]>0) {
    dlnorm(x[1],meanlog=el*log(x[2]),sdlog=sdlog,log=TRUE)+
    dsinmad(x[2],a=a,scale=scl,q.arg=q.arg,log=TRUE)
  }
  else -Inf
}

a <- 1.35
q <- 3.3
scale <- 10/gamma(1 + 1/a)/gamma(q - 1/a)*
gamma(q)

Initvrls <- function(pars,nseq,meanlog,sdlog,a,scale,q) {
  cbind(rlnorm(nseq,meanlog,sdlog),rsinmad(nseq,a,scale,q))
}

library(dream)
aa <- dream(logvrls,
        func.type="logposterior.density",
        pars=list(c(0,Inf),c(0,Inf)),
        FUN.pars=list(el=0.2,sdlog=0.2,a=a,scl=scale,q.arg=q),
        INIT=Initvrls,
        INIT.pars=list(meanlog=1,sdlog=0.1,a=a,scale=scale,q=q),
        control=list(nseq=3,thin.t=10)
        )
```

I've settled on the dream package, since it samples until convergence. I've tested whether I have the correct results in three ways: using the KS statistic, comparing quantiles, and estimating the parameters of the Singh-Maddala distribution with maximum likelihood from the resulting sample:

```
ks.test(as.numeric(aa$Seq[[2]][,2]),psinmad,a=a,scale=scale,q.arg=q)

lsinmad <- function(x,sample)
  sum(dsinmad(sample,a=x[1],scale=x[2],q.arg=x[3],log=TRUE))
optim(c(2,20,2),lsinmad,method="BFGS",sample=aa$Seq[[1]][,2])

qq <- seq(0.025,.975,by=0.025)
tst <- cbind(qq,
    sapply(aa$Seq,function(l)round(quantile(l[,2],qq),3)),
    round(qsinmad(qq,a,scale,q),3))
colnames(tst) <- c("Quantile","S1","S2","S3","True")

library(ggplot2)
qplot(x=Quantile,y=value,
   data=melt(data.frame(tst),id=1),
   colour=variable,group=variable,geom="line")
```

When I look at the results of these comparisons, the KS statistic almost always rejects the null hypothesis that the sample is from the Singh-Maddala distribution with the supplied parameters. The maximum likelihood parameter estimates sometimes come close to the true values, but are usually too far out of the comfort zone to accept that the sampling procedure was successful. Ditto for the quantiles: the empirical quantiles are not wildly off, but still further from the true ones than I would like. My question is: what am I doing wrong? My own hypotheses:

- MCMC is not appropriate for this type of sampling
- MCMC cannot converge, due to theoretical reasons (the distribution function does not satisfy required properties, whatever they are)
- I do not use the Metropolis algorithm correctly
- My distribution tests are not correct, since I do not have an independent sample.
Sampling from bivariate distribution with known density using MCMC
CC BY-SA 2.5
null
2010-12-07T10:35:08.577
2018-02-24T13:37:18.813
2010-12-07T13:03:37.737
2116
2116
[ "sampling", "monte-carlo", "metropolis-hastings" ]
5208
2
null
5206
3
null
The help page indicates that "Continuity correction is used only if it does not exceed the difference between sample and null proportions in absolute value." This is what line 5 is checking: `x/n` is the empirical proportion, `p` is the null proportion. (Actually, I find the "if" slightly misleading since it's more of a "insofar as it does not exceed" when looking at line 5.)
null
CC BY-SA 2.5
null
2010-12-07T10:55:54.983
2010-12-07T10:55:54.983
null
null
1909
null
5209
2
null
5195
12
null
Although there's room for improvement, here is a small attempt with simulated (heteroscedastic) data: ``` library(ggplot2) set.seed(101) x <- runif(100, min=1, max=10) y <- rnorm(length(x), mean=5, sd=0.1*x) df <- data.frame(x=x*70, y=y) m <- lm(y ~ x, data=df) fit95 <- predict(m, interval="conf", level=.95) fit99 <- predict(m, interval="conf", level=.999) df <- cbind.data.frame(df, lwr95=fit95[,"lwr"], upr95=fit95[,"upr"], lwr99=fit99[,"lwr"], upr99=fit99[,"upr"]) p <- ggplot(df, aes(x, y)) p + geom_point() + geom_smooth(method="lm", colour="black", lwd=1.1, se=FALSE) + geom_line(aes(y = upr95), color="black", linetype=2) + geom_line(aes(y = lwr95), color="black", linetype=2) + geom_line(aes(y = upr99), color="red", linetype=3) + geom_line(aes(y = lwr99), color="red", linetype=3) + annotate("text", 100, 6.5, label="95% limit", colour="black", size=3, hjust=0) + annotate("text", 100, 6.4, label="99.9% limit", colour="red", size=3, hjust=0) + labs(x="No. admissions...", y="Percentage of patients...") + theme_bw() ``` ![alt text](https://i.stack.imgur.com/stUAo.png)
null
CC BY-SA 2.5
null
2010-12-07T11:38:07.737
2010-12-07T11:38:07.737
null
null
930
null
5210
2
null
5195
21
null
If you are looking for this (meta-analysis) type of [funnel plot](http://en.wikipedia.org/wiki/Funnel_plot), then the following might be a starting point: ``` library(ggplot2) set.seed(1) p <- runif(100) number <- sample(1:1000, 100, replace = TRUE) p.se <- sqrt((p*(1-p)) / (number)) df <- data.frame(p, number, p.se) ## common effect (fixed effect model) p.fem <- weighted.mean(p, 1/p.se^2) ## lower and upper limits for 95% and 99.9% CI, based on FEM estimator number.seq <- seq(0.001, max(number), 0.1) number.ll95 <- p.fem - 1.96 * sqrt((p.fem*(1-p.fem)) / (number.seq)) number.ul95 <- p.fem + 1.96 * sqrt((p.fem*(1-p.fem)) / (number.seq)) number.ll999 <- p.fem - 3.29 * sqrt((p.fem*(1-p.fem)) / (number.seq)) number.ul999 <- p.fem + 3.29 * sqrt((p.fem*(1-p.fem)) / (number.seq)) dfCI <- data.frame(number.ll95, number.ul95, number.ll999, number.ul999, number.seq, p.fem) ## draw plot fp <- ggplot(aes(x = number, y = p), data = df) + geom_point(shape = 1) + geom_line(aes(x = number.seq, y = number.ll95), data = dfCI) + geom_line(aes(x = number.seq, y = number.ul95), data = dfCI) + geom_line(aes(x = number.seq, y = number.ll999), linetype = "dashed", data = dfCI) + geom_line(aes(x = number.seq, y = number.ul999), linetype = "dashed", data = dfCI) + geom_hline(aes(yintercept = p.fem), data = dfCI) + scale_y_continuous(limits = c(0,1.1)) + xlab("number") + ylab("p") + theme_bw() fp ``` ![alt text](https://i.stack.imgur.com/2mrP6.png)
null
CC BY-SA 3.0
null
2010-12-07T13:19:27.420
2013-03-19T05:36:01.310
2013-03-19T05:36:01.310
307
307
null
5212
2
null
5206
7
null
On the second question of where you can find more info on this continuity correction (attributed to Yates in the help for `prop.test` but not in the refs below, I think [as Yates originally proposed a continuity correction only to the chi-squared test for contingency tables](http://en.wikipedia.org/wiki/Yates%27_correction_for_continuity)):

- Newcombe RG. Two-sided confidence intervals for the single proportion: comparison of seven methods. Stat Med 1998; 17(8):857-872. PMID:9595616
- Brown LD, Cai TT, DasGupta A. Interval estimation for a binomial proportion (with Comments & Rejoinder). Statistical Science 2001; 16(2):101-133. doi:10.1214/ss/1009213286

The continuity-corrected Wilson score interval is 'method 4' in Newcombe. Brown et al. consider only the uncorrected Wilson score interval in the main text, but George Casella suggests using the continuity-corrected version in his Comment (p121), which Brown et al. discuss in their Rejoinder (p130):

> Casella suggests the possibility of performing a continuity correction on the score statistic prior to constructing a confidence interval. We do not agree with this proposal from any perspective. These "continuity-corrected Wilson" intervals have extremely conservative coverage properties, though they may not in principle be guaranteed to be everywhere conservative. But even if one's goal, unlike ours, is to produce conservative intervals, these intervals will be very inefficient at their normal level relative to Blyth–Still or even Clopper–Pearson.

The Clopper-Pearson 'exact' interval is provided by `binom.test` in R. I'd suggest using that rather than `prop.test` if you want a conservative interval, i.e. one that guarantees at least 95% coverage. If you'd prefer an interval that has close to 95% coverage on average (over p) and will therefore often be narrower, you could use `prop.test(…, correct=FALSE)` to give the uncorrected Wilson score interval. The standard textbook for such matters is Fleiss's Statistical Methods for Rates and Proportions. Newcombe references the original 1981 edition but [the latest edition is the 3rd (2003)](http://books.google.co.uk/books?id=a5LwdxF2d10C). I haven't checked it myself, however.
null
CC BY-SA 2.5
null
2010-12-07T14:08:24.490
2010-12-07T14:08:24.490
null
null
449
null
5213
1
5221
null
6
295
Suppose I observe a sample $(y_i,x_i)$, $i=1,...,n$. Suppose that I know the following: $y_i=\alpha_0+\alpha_1x_i+\varepsilon_i$, $i \in J\subset\{1,...,n\}$ $y_i=\beta_0+\beta_1x_i+\varepsilon_i$, $i \in J^c$ where the $\varepsilon_i$ are i.i.d. and $J$ is not known in advance. Is it possible to estimate $\alpha_0,\alpha_1,\beta_0,\beta_1$? Or at least test the hypothesis that $J=\varnothing$? If $J$ is known the problem is very easy to solve. Going through all the subsets is not feasible, since we have $2^n$ possible combinations. If we assume $J=\{1,...,k\}$ with unknown $k=1,...,n$, it is the classical change-point problem, for which many tests are available. I suspect that this may be an ill-posed problem, so I wanted to check before trying to solve it. Here is a simple illustration of the problem:

```
N <- 200
s1 <- sample(1:N,N %/% 2)
s2 <- (1:N)[!(1:N) %in% s1]
x <- rnorm(N)
eps <- rnorm(N)
ind <- 1:N
y <- rep(NA, N)
y[ind %in% s1] <- 2+0.5*x[ind %in% s1]+eps[ind %in% s1]/5
y[ind %in% s2] <- 1+1*x[ind %in% s2]+eps[ind %in% s2]/5
y
sal1 <- ind %in% s1
plot(x, y)
points(x[sal1], y[sal1], col=2)
abline(2, 0.5, col=2)
abline(1, 1)
```

Graphically it is more or less obvious that we have two different models. Maybe it is possible to use some classification or data-mining techniques for solving this problem?
Discerning between two different linear regression models in one sample
CC BY-SA 2.5
null
2010-12-07T14:30:19.860
2019-09-09T08:17:06.213
2019-09-09T08:17:06.213
11887
2116
[ "regression", "classification", "data-mining", "mixture-distribution" ]
5214
1
5216
null
2
930
I have left censored data where the distribution is known (it's near enough lognormal, at least in theory). I'd like to calculate some simple summary stats: geometric mean and standard deviation in this case. I've previously used R's `NADA` package for this but it is no longer on CRAN. Is there an alternative available? EDIT: I contacted Lopaka Lee, the package maintainer, and he says that he's > working on updating the package for the upcoming release of R so hopefully the package's absence is only temporary.
How do you calculate simple statistics for left censored data in R?
CC BY-SA 2.5
null
2010-12-07T15:32:41.797
2010-12-07T18:28:04.907
2010-12-07T18:28:04.907
478
478
[ "r", "censoring" ]
5216
2
null
5214
4
null
I will let other suggest better alternatives to [NADA](http://www.practicalstats.com/nada/nada/downloads_files/NADAforR_Examples.pdf), but it seems the package is still available on CRAN, in the [Archive](http://cran.r-project.org/src/contrib/Archive) section. The last version is from May, 2009. Installation went fine for me, using ``` R CMD install NADA_1.5-2.tar.gz ``` Under Windows, I guess you can just download the tgz and use built-in install facilities.
null
CC BY-SA 2.5
null
2010-12-07T15:54:24.510
2010-12-07T15:54:24.510
null
null
930
null
5217
2
null
5187
2
null
The contents of your chains are stored in three different formats. Take a look at ``` bugs.sim$sims.array bugs.sim$sims.list bugs.sim$sims.matrix ``` and read the Value section of `?bugs`.
null
CC BY-SA 2.5
null
2010-12-07T15:57:53.917
2010-12-07T15:57:53.917
null
null
478
null
5218
2
null
4830
5
null
In logistic regression, with highly skewed distributions of the outcome variable (where there are far more non-events than events, or vice versa), the cut point or probability trigger does need to be adjusted, but it will not have much of an effect on overall classification efficiency. This will always remain roughly the same, but you are currently under-classifying events since the "chance" probability in such a data set will always make you more likely to classify into non-events. This needs to be adjusted for. In fact, in such a situation it's not uncommon to see the overall efficiency of classification go down, since it was previously inflated by miscalculation due to chance. Think of it this way: if you have an event where 90% don't do it and 10% do it, then if you put everyone into the "don't do it" group, you automatically get 90% right, and that was without even trying, just pure chance, inflated by the skewness of its distribution. The issue of interactions is unrelated to this skewing, and should be driven by theory. You will most likely always improve classification by adding additional terms, including simply adding interactions, but you often do so by overfitting the model. You then have to go back and be able to interpret this. Matt P Data Analyst, University of Illinois Urbana Champaign
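To illustrate the cut-point adjustment with a quick simulation (all numbers here are made up):

```r
set.seed(1)
x <- rnorm(1000)
y <- rbinom(1000, 1, plogis(-3 + 1.5 * x))   # rare events: roughly 10% or fewer
fit <- glm(y ~ x, family = binomial)
p <- fitted(fit)
table(pred = p > 0.5, y)       # default 0.5 cut point: almost everything predicted 0
table(pred = p > mean(y), y)   # cutting near the base rate catches far more events
```

Overall accuracy barely changes, but the share of events correctly identified improves substantially, which is the adjustment described above.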
null
CC BY-SA 2.5
null
2010-12-07T16:08:40.270
2010-12-07T16:08:40.270
null
null
null
null
5220
1
null
null
2
6993
I have done a meta-analysis and heterogeneity is too high. I am working with (events/Total) for the experimental and control groups to calculate the Odds Ratio. I have done fixed-effect and random-effects modeling. I now need to use meta-regression via SPSS. Can anyone direct me to a good set of materials to learn how to do this? I have read some links but none has really answered this question. Thanks
How to do meta-regression analysis with SPSS?
CC BY-SA 2.5
null
2010-12-07T16:37:20.697
2012-05-22T20:15:20.493
2010-12-07T21:11:28.573
307
null
[ "regression", "spss", "meta-analysis" ]
5221
2
null
5213
5
null
You need to model the observations as a mixture model. Define $p$ as the probability that a sample belongs to the first data generating process. Thus, the density function of $y_i$ is given by: $f(y_i|-) = p f_1(y_i|-) + (1-p) f_2(y_i|-)$ where $f_1(.)$ is the density that arises from the first data generating process and $f_2(.)$ is the density that arises from the second data generating process. You can then either use maximum likelihood (see for example the [EM algorithm](http://en.wikipedia.org/wiki/Expectation-maximization_algorithm)) or Bayesian approaches to estimate the model.
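For the two-line setup in the question, a crude direct maximum-likelihood sketch (using the question's simulated `x` and `y`, a common residual sd for both components, and starting values of my own choosing; EM or a dedicated package would be more robust) might look like this:

```r
negll <- function(par, x, y) {
  p  <- plogis(par[1])                     # mixing probability, kept in (0, 1)
  a0 <- par[2]; a1 <- par[3]               # first line
  b0 <- par[4]; b1 <- par[5]               # second line
  s  <- exp(par[6])                        # common residual sd, kept positive
  -sum(log(p * dnorm(y, a0 + a1 * x, s) + (1 - p) * dnorm(y, b0 + b1 * x, s)))
}
fit <- optim(c(0, 2, 0.5, 1, 1, log(0.2)), negll, x = x, y = y, method = "BFGS")
fit$par   # logit(p), alpha0, alpha1, beta0, beta1, log(sigma)
```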
null
CC BY-SA 2.5
null
2010-12-07T17:01:09.163
2010-12-07T17:01:09.163
null
null
null
null
5222
2
null
5220
7
null
You could start with David B Wilson's website on "[meta-analysis stuff](http://mason.gmu.edu/~dwilsonb/ma.html)". He offers spss, stata, and sas macros for performing meta-analytic analyses (including meta-regression; metareg.sps) + PPT slides (analysis.ppt, interpretation.ppt). Another presentation I really like(d) was given by Marsh et al. „[Meta-Analysis: Session 3.3 & 3.4: Teacher Expectancy Example](http://www.self.ox.ac.uk/rdimaterials.htm)” (see "Practical example - fixed, random, & multilevel Meta20 data"). Unfortunately, the presentation seems to be no longer available... But you might want to check the [other resources](http://www.self.ox.ac.uk/addit.materials.htm) (see esp. "Advanced Meta-Anaylsis Seminar Presentations"). Finally, you can find a presentation on "[Random and Mixed-effects Modeling](http://www.campbellcollaboration.org/artman2/uploads/1/Pigott_random_mixed_effects.pdf)" on the website of the Campbell Collaboration.
null
CC BY-SA 3.0
null
2010-12-07T17:57:35.953
2011-10-17T20:28:11.763
2011-10-17T20:28:11.763
919
307
null
5223
2
null
5077
10
null
I didn't look at the paper you supplied, but let me have a go anyway: If you have a $p$-dimensional parameter space you can generate a random direction $d$ uniformly distributed on the surface of the unit sphere with ``` x <- rnorm(p) d <- x/sqrt(sum(x^2)) ``` (c.f. [Wiki](http://en.wikipedia.org/wiki/Hypersphere#Generating_random_points)). Then, use this to generate proposals for $d$ for rejection sampling (assuming you can actually evaluate the distribution for $d$). Assuming you start in position $x$ and have accepted a $d$, generate a proposal $y$ with ``` lambda <- r<SOMEDISTRIBUTION>(foo, bar) y <- x + lambda * d ``` and do a Metropolis-Hastings-Step to decide whether to move to $y$ or not. Of course, how well this can work will depend on the distribution of $d$ and how expensive it is to (repeatedly) evaluate its density in the rejection sampling step, but since generating proposals for $d$ is cheap you may get away with it. --- Added for @csgillespie's benefit: From what I was able to gather by some googling, hit-and-run MCMC is useful primarily for fast mixing if you have a (multivariate) target that has arbitrary bounded but not necessarily connected support, because it enables you to move from any point in the support to any other in one step. More [here](http://books.google.com/books?id=1-ffZVmazvwC&pg=PA173&dq=hit-and-run++MCMC&hl=de&ei=jL0ATdG3JozLswbOwfzyDg&sa=X&oi=book_result&ct=result&resnum=2&ved=0CCwQ6AEwAQ#v=onepage&q=hit-and-run%20%20MCMC&f=false) and [here](http://faculty.washington.edu/harin/L1.pdf).
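Pulling the two snippets above together into a runnable toy example (the target here is just a standard bivariate normal and $\lambda$ is drawn from a symmetric N(0,1); these choices are mine and not from the linked paper, and with a symmetric $\lambda$ the Metropolis–Hastings ratio reduces to a ratio of target densities):

```r
logtarget <- function(z) -0.5 * sum(z^2)   # standard bivariate normal, up to a constant

p <- 2; n_iter <- 5000
x <- rep(0, p)
out <- matrix(NA_real_, n_iter, p)
for (i in seq_len(n_iter)) {
  v <- rnorm(p); d <- v / sqrt(sum(v^2))   # uniform direction on the unit sphere
  lambda <- rnorm(1)                       # symmetric step length along the line
  y <- x + lambda * d
  if (log(runif(1)) < logtarget(y) - logtarget(x)) x <- y
  out[i, ] <- x
}
colMeans(out); cov(out)   # should be close to zero and the identity matrix
```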
null
CC BY-SA 3.0
null
2010-12-07T18:19:10.860
2017-04-24T07:45:30.210
2017-04-24T07:45:30.210
123119
1979
null
5225
2
null
5213
4
null
The first hit on [Rseek](http://Rseek.org) with keywords "mixture regression" brings up the `flexmix` package, which does what you want. I seem to recall that there were other packages for this as well.
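A minimal usage sketch, applied to the simulated `x`, `y` and `sal1` from the question (the exact output format may differ across flexmix versions):

```r
library(flexmix)
fit <- flexmix(y ~ x, data = data.frame(x = x, y = y), k = 2)  # 2-component mixture of regressions
parameters(fit)                    # component-wise intercepts, slopes and sigmas
table(clusters(fit), sal1)         # compare recovered components with the truth
```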
null
CC BY-SA 2.5
null
2010-12-07T21:03:29.853
2010-12-07T21:03:29.853
null
null
279
null
5226
1
5231
null
5
654
If $X=[x_1,x_2,...,x_n]^T$ is an $n$-dimensional random variable and we have $E\left\{X\right\} = M = \left[m_1,m_2,...,m_n\right]^T$ and $Cov\left\{X\right\} = \Sigma = diag\left(\lambda_1,\lambda_2,...,\lambda_n\right)$, how can I express the following expectation in terms of $M$, $\Sigma$, and $n$ (and maybe the raw $m_i$'s and $\lambda_i$'s)? $E\left\{ \left(X-M\right)^T\left(X-M\right)\left(X-M\right)^T\left(X-M\right)\right\}$ Supposing the $x_i$'s are i.i.d. and normally distributed would be acceptable, but are these assumptions necessary? Update:
- I know that $E\left\{ \left(X-M\right)^T\left(X-M\right)\right\} = \sum_{i=1}^n \lambda_i$, but I don't think this helps in this case.
- In sections 6.2.3 (Cubic Forms) and 8.2.4 (Quartic Forms) of the Matrix Cookbook there is a formula for quadratic expectations like this, but I don't just want a formula that solves it. I think there should be a simple answer to this problem because the covariance matrix is diagonal.
Expectation of $\left(X-M\right)^T\left(X-M\right)\left(X-M\right)^T\left(X-M\right)$
CC BY-SA 2.5
null
2010-12-08T00:13:18.207
2010-12-08T14:35:35.940
2010-12-08T14:35:35.940
2148
2148
[ "expected-value" ]
5227
2
null
5226
2
null
I believe this depends on the kurtosis of $X$. If I am reading this correctly, and assuming the $X_i$ are independent, you are trying to find the expectation of $\left(\sum_i (X_i - m_i)^2\right)^2$, whose expansion contains the terms $(X_i - m_i)^4$. Because $X_i^4$ appears, you cannot find this expectation in terms of $M$ and $\Sigma$ without making further assumptions. (Even without the independence of the $X_i$, you will have $E[X_i^4]$ terms in your expectation.) If you assume that the $X_i$ are normally distributed, you should find the expectation is equal to $\left(\sum_i \lambda_i\right)^2 + 2 \sum_i \lambda_i^2$.
null
CC BY-SA 2.5
null
2010-12-08T01:02:29.333
2010-12-08T03:03:57.627
2010-12-08T03:03:57.627
795
795
null
5228
1
5232
null
9
1792
I am looking at the sample kurtosis of a fairly skewed random variable, and the results seem inconsistent. To simply illustrate the problem, I looked at the sample kurtosis of a log-normal RV. In R (which I am slowly learning): ``` library(moments); samp_size = 2048; n_trial = 4096; kvals <- rep(NA,1,n_trial); #preallocate for (iii in 1:n_trial) { kvals[iii] <- kurtosis(exp(rnorm(samp_size))); } print(summary(kvals)); ``` The summary I get is ``` Min. 1st Qu. Median Mean 3rd Qu. Max. 11.87 28.66 39.32 59.17 61.70 1302.00 ``` According to [Wikipedia](http://en.wikipedia.org/wiki/Log_normal), the kurtosis for this log-normal RV should be around 114. Clearly the sample kurtosis is biased. Doing some research I found that sample kurtosis is biased for small sample sizes. I used the 'G2' estimator as provided by the `e1071` package in CRAN, and got very similar results for this sample size. The question: which of the following characterize what is going on: - The standard error of the sample kurtosis is simply very large for this RV (even though the hand-wavey common estimate of the standard error is of order $1/\sqrt{n}$). Alternatively, I used too few samples (2048) in this study. - These implementations of sample kurtosis suffer from numerical problems which might be corrected by e.g. Terriberry's method (in much the same way that Welford's method gives better results than the naive method for sample variance). - I computed the population kurtosis incorrectly. (ouch) - Sample kurtosis is inherently biased and you can never fix it for such small sample sizes.
Is sample kurtosis hopelessly biased?
CC BY-SA 2.5
null
2010-12-08T01:21:07.327
2010-12-08T08:42:18.133
2010-12-08T08:42:18.133
1390
795
[ "r", "unbiased-estimator", "kurtosis" ]
5229
2
null
3466
20
null
Daniel B. Wright discusses this in section 5 of his article [Making Friends with your Data](http://www2.fiu.edu/~dwright/pdf/makefriends.pdf). He suggests (p.130): > The only procedure that is always correct in this situation is a scatterplot comparing the scores at time 2 with those at time 1 for the different groups. In most cases you should analyse the data in several ways. If the approaches give different results ... think more carefully about the model implied by each. He recommends the following articles as further reading: - Hand, D. J. (1994). Deconstructing statistical questions. Journal of the Royal Statistical Society: A, 157, 317–356. - Lord, F. M. (1967). A paradox in the interpretation of group comparisons. Psychological Bulletin, 72, 304–305. Free PDF - Wainer, H. (1991). Adjusting for differential base rates: Lord’s paradox again. Psychological Bulletin, 109, 147–151. Free PDF
null
CC BY-SA 2.5
null
2010-12-08T01:31:49.233
2010-12-25T06:55:23.157
2010-12-25T06:55:23.157
183
183
null
5231
2
null
5226
5
null
Because $\left(X-M\right)^T\left(X-M\right) = \sum_i{(X_i - m_i)^2}$, $$\left(X-M\right)^T\left(X-M\right)\left(X-M\right)^T\left(X-M\right) = \sum_{i,j}{(X_i - m_i)^2(X_j - m_j)^2} \text{.}$$ There are two kinds of expectations to obtain here. Assuming the $X_i$ are independent and $i \ne j$, $$\eqalign{ E \left[ (X_i - m_i)^2(X_j - m_j)^2 \right] &= E\left[(X_i - m_i)^2\right] E\left[(X_j - m_j)^2\right] \cr &= \lambda_i \lambda_j . }$$ When $i = j$, $$\eqalign{ E \left[ (X_i - m_i)^2(X_j - m_j)^2 \right] &= E\left[(X_i - m_i)^4\right] \cr &= 3 \lambda_i^2 \text{ for Normal variates} \cr &= \lambda_i \lambda_j + 2 \lambda_i^2 \text{.} }$$ Whence the expectation equals $$\eqalign{ &\sum_{i, j} {\lambda_i \lambda_j} + 2 \sum_{i} {\lambda_i^2} \cr = &(\sum_{i}{\lambda_i})^2 + 2 \sum_{i} {\lambda_i^2}. }$$ Note where the assumptions of independence and Normality come in. Minimally, we need to assume the squares of the residuals are mutually independent and we only need a formula for the central fourth moment; Normality is not necessary.
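As a quick sanity check of the final formula, here is a small Monte Carlo simulation (with arbitrarily chosen $\lambda$'s and $M = 0$):

```r
set.seed(42)
lambda <- c(1, 2, 3)
X <- sapply(lambda, function(l) rnorm(1e5, mean = 0, sd = sqrt(l)))
q <- rowSums(X^2)                    # (X - M)'(X - M), with M = 0 here
mean(q^2)                            # simulated expectation
sum(lambda)^2 + 2 * sum(lambda^2)    # formula: 6^2 + 2*14 = 64
```

The two numbers should agree up to Monte Carlo error.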
null
CC BY-SA 2.5
null
2010-12-08T04:20:55.597
2010-12-08T04:20:55.597
null
null
919
null
5232
2
null
5228
8
null
There's a [bias correction](http://www.mathworks.com/help/toolbox/stats/kurtosis.html). It's not huge. I believe the sampling variance of the kurtosis is proportional to the eighth (!) central moment, which can be enormous for a lognormal distribution. You would need millions of trials (or far more) in a simulation to detect bias unless the CV is tiny. (Plot a histogram of kvals to see how extraordinarily skewed they are.) The correct kurtosis is indeed about 113.9364. As far as R style goes, it can be convenient to encapsulate the simulation in a function so you can easily modify the sample size or number of trials.
null
CC BY-SA 2.5
null
2010-12-08T04:51:09.113
2010-12-08T04:51:09.113
null
null
919
null
5233
1
5236
null
5
312
Say I have a database of around a million words, and I want to get an intuitive idea about exactly how a particular, quite infrequent, word is distributed throughout this data. My goal is to be able to see clearly whether this word tends to cluster together, or whether it is relatively evenly spaced. What would be some good methods for visualizing this? For instance, I have seen something that looks useful. It's basically a strip (long rectangle) in which each instance of something is represented by a very thin red vertical line. The problem is that I don't know what these are called, and therefore I can't figure out how to make something like this in R. Any help finding the right R function for that, or any other suggestions for good ways to visualize this sort of data, would be most appreciated.
Visualizing the distribution of something within a very large body of data
CC BY-SA 2.5
null
2010-12-08T05:22:59.760
2010-12-08T19:43:07.323
2010-12-08T05:33:36.847
919
52
[ "r", "distributions", "data-visualization" ]
5234
2
null
5233
3
null
With a 1200 dpi printer using the thinnest possible line (one pixel) for each word, your plot of a million words would still be almost 20 meters long! Maybe a [density plot](http://www.statmethods.net/graphs/density.html) would be more helpful.
null
CC BY-SA 2.5
null
2010-12-08T05:32:42.603
2010-12-08T05:32:42.603
null
null
919
null
5235
1
5252
null
31
99643
In multiple linear regression, I can understand the correlations between residual and predictors are zero, but what is the expected correlation between residual and the criterion variable? Should it expected to be zero or highly correlated? What's the meaning of that?
What is the expected correlation between residual and the dependent variable?
CC BY-SA 3.0
null
2010-12-08T05:50:51.757
2017-09-07T17:55:53.393
2013-04-02T21:45:29.987
7290
400
[ "regression", "residuals" ]
5236
2
null
5233
2
null
While Whuber is correct in principle you still might be able to see something because your word is very infrequent and you only want plots of the one word. Something quite uncommon might only appear 30 times, probably not more than 500. Let's say you convert your words into a single vector of words that's a million long. You could easily construct a plot with basic R commands. Let's call the vector 'words' and the rare item 'wretch'. ``` n <- length(words) plot(1:n, integer(n), type = 'n', xlab = 'index of word', ylab = '', main = 'instances of wretch', yaxt = 'n', frame.plot = TRUE) wretch <- which(words %in% 'wretch') abline(v = wretch, col = 'red', lwd = 0.2) ``` You could change the line assigning wretch using a grep command if you need to account for variations of the word. Also, the lwd in the abline command could be set thicker or thinner depending on the frequency of the word. If you end up plotting 400 instances 0.2 will work fine. I tried some density plots of this kind of data. I imported about 50,000 words of Shakespeare and finding patterns was easier for me in the code above than it was in the density plots. I used a very common word that appeared in frequency 200x more than the mean frequency ('to') and the plots looked just fine. I think you'll make a fine graph like this with rare instances in 1e6 words.
null
CC BY-SA 2.5
null
2010-12-08T05:51:16.343
2010-12-08T16:37:08.893
2010-12-08T16:37:08.893
601
601
null
5237
2
null
5233
2
null
I don't know if this may be useful in your case, but in bioinformatics I often feel the need to visualize the distribution of gene counts in a given data set. This is definitely not as large as your data set, but I think the strategy can be followed for most large data sets. A typical strategy would be to find a predetermined number of clusters using, say, hierarchical clustering (or any other clustering procedure). Once you have the clusters, you can sample a gene from each of these clusters. Assuming that the gene is representative of the cluster, visualizing the count for the gene (in the form of a density plot, histogram, qq-plot, etc.) is equivalent to visualizing the behavior of the cluster. You can do the same for all the clusters. Basically, you reduce the huge data set to clusters and then visualize the representatives from these clusters, assuming "on average" the clusters' behavior will remain "more or less" the same. Warning: This method is highly sensitive to a lot of things, a few being the clustering method, how many clusters you choose, etc. I believe visualizing all the words if the number of words is reasonably large (say $\geq$ 50) would be pretty difficult. As whuber aptly points out, it may be almost impossible.
null
CC BY-SA 2.5
null
2010-12-08T06:10:45.690
2010-12-08T06:10:45.690
null
null
1307
null
5238
1
5241
null
18
12334
When I work on data analysis projects I often store data in comma- or tab-delimited (CSV, TSV) data files. While data often belongs in a dedicated database management system, for many of my applications this would be overdoing things. I can edit CSV and TSV files in Excel (or presumably another spreadsheet program). This has benefits:

- spreadsheets make it easy to enter data

There are also several problems:

- Working with CSV and TSV files leads to a wide range of warning messages about various features being lost and how only the active sheet will be saved and so forth. Thus, it's annoying if you just want to open the file and make a little change.
- It does many "supposedly intelligent" conversions. For example, if you enter 12/3, it will think that you want to enter a date.

UPDATE: I should have mentioned that the date example is just one of many examples; most problems seem to be related to inappropriate conversion. In particular, text fields that look like numbers or dates cause problems.

Alternatively, I could work directly with the text file in a standard text editor. This ensures that what I enter is what is recorded. However, it is a very awkward way to enter data (columns don't line up; it's difficult to enter data simply into multiple cells; etc.).

Question - What is a good strategy for working with CSV or TSV data files? i.e., what strategy makes it easy to enter and manipulate the data while also ensuring that what you enter is actually interpreted correctly?
Strategy for editing comma separated value (CSV) files
CC BY-SA 3.0
null
2010-12-08T07:14:29.000
2017-05-22T18:46:48.997
2017-05-20T16:29:55.990
101426
183
[ "project-management" ]
5239
2
null
5238
5
null
Update: [Having been going through a large backlog of email from R-Help] I am reminded of the thread on "[The behaviour of read.csv()](http://thread.gmane.org/gmane.comp.lang.r.general/213174/focus=213179)". In this, Duncan Murdoch mentions that he prefers to use [Data Interchange Format (DIF)](http://en.wikipedia.org/wiki/Data_Interchange_Format) files instead of csv for some of the reason Jeromy mentions. I just tried this and Gnumeric gets it wrong (loading 12/3 as a date), but OpenOffice.org reads this correctly and preserves the 12/3 information intact. (Anyone care to check this in MS Excel?) DIF files are plain text and can be read by spreadsheets and R (as long as you use a recent R revision (SVN revision >= r53778)) will read the data in in the correct format. --- Original: I would try to avoid using a spreadsheet full stop for data editing / manipulation whenever possible. It is incredibly difficult, if not impossible, to document any changes you make to an existing data set so that pretty much rules it out from a reproducible research point of view. At most, I use a spreadsheet to quickly view existing data. For data processing, I tend to write an R script that will take the raw csv file and apply all the necessary processing steps required. I heavily comment that script to explain exactly what I am doing at each stage and why. My data analysis script would then call the data processing script which loads and processes the data. For data entry, is it more hassle to enter the data in a text editor or in a spreadsheet? I suspect the problems you mention for the latter do not outweigh those of trying to enter CSV data into a text editor. You could try a better spreadsheet; OpenOffice.org refuses to stop formatting 12/3 as a date (or it converts it to the numeric representation) even if one formats the column as "numeric" first. Gnumeric on the other hand will leave 12/3 as it is if you format the column as "numeric" first. You can force OpenOffice.org to not reformat 12/3 as a date by prepending a `'` to the entries, i.e. `'12/3` will get displayed as 12/3 in the spreadsheet and saved out as text. This is probably quite safe to use. Not sure why you would want 12/3 stored numerically as 12/3 in the text file - how should something like R read this? Your comment on warnings about losing features or only saving the active sheet aren't really problems are they? (If they are, then I want your problems in my life ;-)
null
CC BY-SA 2.5
null
2010-12-08T07:31:04.013
2010-12-08T10:35:34.760
2010-12-08T10:35:34.760
1390
1390
null
5240
2
null
4991
4
null
Have a look at the [dlm](http://cran.r-project.org/web/packages/dlm/index.html) package and its [vignette](http://cran.r-project.org/web/packages/dlm/vignettes/dlm.pdf). I think you might find what you are looking for from the vignette. The package authors have also written a book [Dynamic Linear Models with R](http://amzn.com/0387772375).
null
CC BY-SA 2.5
null
2010-12-08T07:40:03.023
2010-12-14T21:58:57.510
2010-12-14T21:58:57.510
919
214
null
5241
2
null
5238
14
null
- If you are comfortable with R, you can create your basic data.frame and then use the fix() function on it to input data. Along the same line as #5, once you set up the data.frame you can use a series of readLines(n=1) (or whatever) to get your data in, validate it, and the provide the opportunity to add the next row. Then leave the fixing to fix(). See an implemented example below using scan(). - Another option in excel would be messy, but you could type in 12/9, then have another column evaluate =IFERROR(MONTH(DateEntryCell)/DAY(DataEntryCell),DataEntryCell). But then you'll have to maintain the excel sheet AND the csv sheet and all of the complaining as you write the csv will persist. - Alternatively, so long as your fields are relatively short and have a consistent length a regular text editor should serve you well with TSV. You can always load it up in excel when you are done and make sure the number of columns for each row is what you expect it to be. - Emacs is available on a number of platforms and probably has something just for this, e.g. http://www.emacswiki.org/emacs/CsvMode. - If you are a hearty soul, programming something quick up in a programming language to do the data entry is trivial, the data editing will be a lot harder. - A quick google search shows software with just this purpose, but no free software seemed to be any good. - It sounds insane, but someone on superuser suggested editing tables in access and then exporting them as CSV... that is just crazy enough to work. - It doesn't stop excel from complaining as you save as .csv, but you can type a single apostrophe before your data entry field and that makes it leave it alone in terms of auto-formatting. Nicely, this (in Office 2007 at least) doesn't leave apostrophes in the csv file. Update: I've been poking around a lot on this problem because it is an issue I also have. So far the best/easiest solution for data-entry I've seen so far is [KillinkCSV](http://www.whitepeaksoftware.com/main/killink-csv-editor). It isn't "free" software, it is shareware with a 30 day trial duration and a reasonable price (~$27). I'm not sure how much I trust it for editing existing CSVs though - I handed it an insanely large (and presumably well formatted) CSV and it failed to read all of the rows. However it seemed to work well for one that was reasonably large (20 MB) and the problem with the larger file may be user error on my part. R Example: ``` #This function takes a what argument like in scan, #a list with the types to be used, see usage example #at the end of this code block #dataEntry will keep reading in values until #the values it reads in matches what is in #"terminateon". #limitations: Many dataEntry <- function(what,terminateon) { CONTINUE <- TRUE #Make sure we start the loop data <- NULL #Create empty data so that the data.frame can define itself ti <- NULL while(CONTINUE) { ti <- NULL ti <- tryCatch( {as.data.frame(scan(what=what, nlines=1, multi.line=FALSE, comment.char="",quiet=TRUE))}, error=function (e) {print("Error in data entry! Line not stored.") return(NULL)}, warning=function(w) {print("Error in data entry! Line not stored.") return(NULL)}, finally={ti <- NULL} ) #Try getting the data according to the parameters in 'what' one row at a time. 
if (!is.null(ti)) { if ((ncol(ti)==length(what)) & (nrow(ti)==1)) { data <- rbind(data,ti) #If there wasn't an error, add ti to the previous value } else { print("Too many or not enough values on previous entry.") print("Tail of current data:") print(tail(data)) } } if (!is.null(ti) & all(ti == terminateon)) { CONTINUE <- FALSE data <- data[-c(nrow(data)),] } #if we've recieved the final value we won't continue and the last row is invalid so we remove it } return(data) } dataEntry(list(x=integer(), y=numeric(), z=character()),terminateon=c(999,999,"Z")) ```
null
CC BY-SA 3.0
null
2010-12-08T07:51:55.477
2013-01-21T15:24:42.620
2013-01-21T15:24:42.620
196
196
null
5242
2
null
5228
5
null
[Just on the R Style - @whuber has answered the Kurtosis Q] This was a bit too complicated to stick into a comment. For simple loops like the one you use, we can combine @whuber's suggestion of encapsulating the simulation in a function with the `replicate()` function. `replicate()` takes care of allocation and running the loop for you. An example is given below:

```
require(moments)
foo <- function(size, trials, meanlog = 0, sdlog = 1) {
    replicate(trials, kurtosis(rlnorm(size, meanlog = meanlog,
                                      sdlog = sdlog)))
}
```

We use it like this:

```
> set.seed(1)
> out <- foo(2048, 10000)
> summary(out)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  10.93   28.77   39.99   62.53   62.58 1557.00
```

Note that I use the `rlnorm()` function to generate the log-normal random variable. It is equivalent to `exp(rnorm())` in your loop but uses the correct tool, and we allow our function to pass on user-specified parameters of the log-normal distribution.

```
> set.seed(123)
> exp(rnorm(1))
[1] 0.5709374
> set.seed(123)
> rlnorm(1)
[1] 0.5709374
```
null
CC BY-SA 2.5
null
2010-12-08T08:41:48.057
2010-12-08T08:41:48.057
null
null
1390
null
5243
2
null
5238
1
null
I like Gnumeric because it does not try to be as idiot-proof as the others (it doesn't shout about lost functionality) and it works with large data... yet I think it is Linux-only.
null
CC BY-SA 2.5
null
2010-12-08T09:09:34.283
2010-12-08T09:09:34.283
null
null
null
null
5244
2
null
5238
2
null
After I asked this question, I started having a look at [CSVed](http://csved.sjfrancke.nl/index.html). From the website: > CSVed is an easy and powerful CSV file editor, you can manipulate any CSV file, separated with any separator. I'm not sure if anyone has experience with it.
null
CC BY-SA 2.5
null
2010-12-08T09:09:51.760
2010-12-08T09:09:51.760
null
null
183
null
5245
2
null
5235
4
null
So, the residuals are your unexplained variance, the difference between your model's predictions and the actual outcome you're modeling. In practice, few models produced through linear regression will have all residuals close to zero unless linear regression is being used to analyze a mechanical or fixed process. Ideally, the residuals from your model should be random, meaning they should not be correlated with either your independent or dependent variables (what you term the criterion variable). In linear regression, your error term is normally distributed, so your residuals should also be normally distributed. If you have significant outliers, or if your residuals are correlated with either your dependent variable or your independent variables, then you have a problem with your model. If you have significant outliers and a non-normal distribution of your residuals, then the outliers may be skewing your weights (Betas), and I would suggest calculating DFBETAS to check the influence of your observations on your weights. If your residuals are correlated with your dependent variable, then there is a significantly large amount of unexplained variance that you are not accounting for. You may also see this if you're analyzing repeated observations of the same thing, due to autocorrelation. This can be checked for by seeing if your residuals are correlated with your time or index variable. If your residuals are correlated with your independent variables, then your model is heteroscedastic (see: [http://en.wikipedia.org/wiki/Heteroscedasticity](http://en.wikipedia.org/wiki/Heteroscedasticity)). You should check (if you haven't already) whether your input variables are normally distributed, and if not, then you should consider scaling or transforming your data (the most common kinds of transformation are log and square-root) in order to make it more normal. For both your residuals and your independent variables, you should examine a Q-Q plot, as well as perform a Kolmogorov-Smirnov test (this particular implementation is sometimes referred to as the Lilliefors test) to make sure that your values fit a normal distribution. Three quick things that may be helpful in dealing with this problem are: examining the median of your residuals, which should be as close to zero as possible (the mean will almost always be zero as a result of how the error term is fitted in linear regression); a Durbin-Watson test for autocorrelation in your residuals (especially, as I mentioned before, if you are looking at multiple observations of the same things); and a partial residual plot, which will help you look for heteroscedasticity and outliers.
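A short simulated example of the correlations discussed above (variable names are arbitrary):

```r
set.seed(1)
x <- rnorm(100)
y <- 1 + 2 * x + rnorm(100)
fit <- lm(y ~ x)
r <- resid(fit)

cor(r, x)             # essentially zero, by construction of least squares
cor(r, fitted(fit))   # also essentially zero
cor(r, y)             # positive: residuals always correlate with the response
qqnorm(r); qqline(r)  # visual normality check of the residuals
```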
null
CC BY-SA 2.5
null
2010-12-08T09:27:14.297
2010-12-08T09:35:02.833
2010-12-08T09:35:02.833
2166
2166
null
5246
2
null
5238
2
null
Excel is not very CSV friendly. For example, if you were to enter "1,300" into Excel, and save that as a comma separated value, it would let you! This can be a big problem (I encounter it on a regular basis when receiving files from others). I personally use OpenOffice.org Calc, I also use many of the solutions listed above, however many of these don't have the functionality and the ease of use that are required for regular editing. OOO Calc is much more intelligent than Excel, although being a spreadsheet program, you will still have to enter "=12/3" instead of "12/3" otherwise you will be entering a value, rather than a calculation. Give it a whirl, you won't be disappointed.
null
CC BY-SA 2.5
null
2010-12-08T09:42:47.957
2010-12-08T09:42:47.957
null
null
2166
null
5247
1
5261
null
4
2597
I am going to host a training session to teach healthcare staff how to use control charts (the c-chart, to be specific), and I need to explain why these rules ([Western Electric Rules](http://en.wikipedia.org/wiki/Western_Electric_rules#Zone_rules)) work — that is, why, when a pattern is matched, an outbreak can be flagged as probable. I have read the article "[Control Charts 101: A Guide to Health Care Applications](http://www.ncbi.nlm.nih.gov/pubmed/11372500)" and it teaches me a bit: it says the chance of one data point exceeding the UCL is less than 0.5%, which I can calculate using the following R code without problem:

```
1-pnorm(3, mean=0, sd=1)
[1] 0.001349898
```

I have written the following R code to verify the same thing for a c-chart (i.e. Poisson rather than normal distribution), for lambda ranging from 1.0 to 100.0:

```
UCL <- integer(0)
for (loop.UCL in (1:1000))
    {lambda <- loop.UCL/10
     result <- 1-ppois(lambda+ceiling(sqrt(lambda)*3),lambda)
     UCL <- c(UCL, result)
     }
summary(UCL)
     Min.   1st Qu.    Median      Mean   3rd Qu.      Max.
0.0009679 0.0016160 0.0018260 0.0019960 0.0021220 0.0134600
```

I think I can handle the first of the Western Electric Rules, but I am puzzled by the second and third. Can anyone give me some insight or a hint? Thanks!
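For what it's worth, one way the same normal-approximation reasoning might extend to the second rule (two out of three consecutive points beyond two sigma on the same side) is via a binomial calculation — this is only a sketch of the idea, not the exact c-chart computation:

```r
p2 <- 1 - pnorm(2)                    # P(a single point beyond +2 sigma), about 0.0228
1 - pbinom(1, size = 3, prob = p2)    # P(at least 2 of 3 such points), about 0.0015
```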
How to calculate the probability for pattern that violates "control chart rules" to occur?
CC BY-SA 2.5
null
2010-12-08T09:59:17.943
2015-03-10T20:43:36.770
2010-12-08T15:43:49.840
919
588
[ "r", "poisson-distribution", "control-chart" ]
5248
2
null
4989
2
null
Looking at the [Heckman article](http://www.jstor.org/stable/full/1912352?seq=1) I see no reason why two selection rules cannot be applied. In the article Heckman postulates the model as:

$Y_{1i}=X_{1i}\beta_1+U_{1i}$

$Y_{2i}=X_{2i}\beta_2+U_{2i}$

The goal is to estimate the first equation, but some data is missing. Which data is missing is controlled by the second equation. Then we have

$E(Y_{1i}| X_{1i}, \text{sample selection rule})=X_{1i}\beta_1+E(U_{1i}|\text{sample selection rule})$

If the sample selection rule is $Y_{2i}\ge 0$ we have

$E(Y_{1i}|X_{1i},Y_{2i}\ge 0)=X_{1i}\beta_1+E(U_{1i}|U_{2i}\ge -X_{2i}\beta_2)$

The article goes into the details of how this is estimated. From this I see no reason why we cannot add another sample selection rule, $Y_{3i}\ge 0$, where

$Y_{3i}=X_{3i}\beta_3+U_{3i}$

Then we get

$E(Y_{1i}|X_{1i},Y_{2i}\ge 0,Y_{3i}\ge 0)=X_{1i}\beta_1+E(U_{1i}|U_{2i}\ge -X_{2i}\beta_2,U_{3i}\ge -X_{3i}\beta_3)$

Everything depends on what assumptions are placed on $U_{1i},U_{2i},U_{3i}$. For estimation we can form the maximum likelihood, or look for the analogue of the Mills-ratio formulas, with the bivariate normal density replaced by the trivariate normal density. In the latter case, proceed as in the article. I would be quite optimistic about this idea succeeding.

For the panel data framework, look at the [Wooldridge book "Econometric analysis of cross-section and panel data"](http://rads.stackoverflow.com/amzn/click/0262232197). It has a whole chapter dedicated to sample selection problems for panel data. The way I see it, this is a straightforward application of the ideas in Heckman's article.

As for code in R, tough luck. If it is not implemented in the plm package, it probably does not have a readily available implementation.
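As a quick numerical illustration of why the extra selection rule matters (this is only a simulation sketch; the correlation matrix and the thresholds standing in for $-X_{2i}\beta_2$ and $-X_{3i}\beta_3$ are arbitrary choices, not part of any estimator):

```
library(mvtnorm)

set.seed(1)
Sigma <- matrix(c(1.0, 0.5, 0.3,
                  0.5, 1.0, 0.2,
                  0.3, 0.2, 1.0), nrow = 3)   # correlated (U1, U2, U3)
U <- rmvnorm(1e6, sigma = Sigma)

keep <- U[, 2] >= -0.5 & U[, 3] >= -1.0       # both selection rules satisfied
mean(U[keep, 1])                              # clearly above zero: the bias term
```

The nonzero conditional mean is exactly the $E(U_{1i}\mid U_{2i}\ge -X_{2i}\beta_2, U_{3i}\ge -X_{3i}\beta_3)$ term that the estimator has to model.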
null
CC BY-SA 2.5
null
2010-12-08T10:22:15.833
2010-12-08T10:22:15.833
null
null
2116
null
5249
1
null
null
17
742
In a previous question I asked about tools for editing [CSV files](https://stats.stackexchange.com/questions/5238/strategy-for-editing-comma-separated-value-csv-files). [Gavin](https://stats.stackexchange.com/questions/5238/strategy-for-editing-comma-separated-value-csv-files/5239#5239) linked to a comment on R Help by [Duncan Murdoch](http://thread.gmane.org/gmane.comp.lang.r.general/213174/focus=213179) suggesting that Data Interchange Format is a more reliable way to store data than CSV. For some applications a dedicated database management system is what is needed. However, for small-scale data analysis projects something more lightweight seems more suitable. Consider the following criteria for evaluating a file format:

- reliable: the data entered should stay true to what has been entered; data should open consistently in different software;
- simple: it would be nice if the file format were easy to understand and ideally readable with a simple text editor; it should be easy to write a simple program to read and write the format;
- open: the format should be open;
- interoperable: the file format should be supported by many systems.

I find tab- and comma-separated value formats fail on the reliability criterion. Although I suppose I could blame the importing and exporting programs rather than the file format, I often find myself having to make little adjustments to the options in `read.table` in order to prevent some strange character from breaking the loading of the data frame (see the example call after the questions below).

Questions

- Which file format best meets these needs?
- Is Data Interchange Format a better alternative, or does it have its own problems?
- Is there some other format that is preferable?
- Am I unfairly evaluating TSV and CSV? Is there a simple set of tips for working with such files that would make them more reliable?
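The kind of fiddling I mean typically ends up looking something like this (the file name and the particular option values are only placeholders for whatever that day's file happens to need):

```
dat <- read.table("data.csv", header = TRUE, sep = ",",
                  quote = "\"", comment.char = "",
                  fileEncoding = "latin1",   # or "UTF-8", depending on the source
                  stringsAsFactors = FALSE, strip.white = TRUE)
```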
Simple, reliable, open, and interoperable plain text format for storing data
CC BY-SA 2.5
null
2010-12-08T10:43:56.587
2017-05-20T16:31:28.313
2017-05-20T16:31:28.313
101426
183
[ "project-management" ]