Dataset columns: source_id (int64, 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict).
10,001
I have noticed that there are a few implementations of random forest such as ALGLIB, Waffles and some R packages like randomForest . Can anybody tell me whether these libraries are highly optimized? Are they basically equivalent to the random forests as detailed in The Elements of Statistical Learning or have a lot of extra tricks been added? I hope this question is specific enough. As an illustration of the type of answer I am looking for, if somebody asked me whether the linear algebra package BLAS was highly optimized, I would say it was extremely highly optimized and mostly not worth trying to improve upon except in very specialized applications.
(Updated 6 IX 2015 with suggestions from comments, also made CW) There are two new, nice packages available for R which are pretty well optimised for certain conditions:
ranger -- C++, R package, optimised for $p \gg n$ problems, parallel, special treatment of GWAS data.
Arborist -- C++, R and Python bindings, optimised for large-$n$ problems, apparently with plans for GPGPU.
Other RF implementations:
The Original One -- standalone Fortran code, not parallel, pretty hard to use.
randomForest -- C, R package, probably the most popular, not parallel, actually quite fast when compared on a single-core basis, especially for small data.
randomForestSRC -- C, R package, clone of randomForest supporting parallel processing and survival problems.
party -- C, R package, quite slow, but designed as a platform for experimenting with RF.
bigrf -- C++/R, R package, built to work on big data within the bigmemory framework; quite far from being complete.
scikit-learn ensemble forest -- Python, part of the scikit-learn framework, parallel, implements many variants of RF.
milk's RF -- Python, part of the milk framework.
so-called WEKA rf -- Java/WEKA, parallel.
ALGLIB
rt-rank -- abandoned?
The ranger paper has some speed/memory comparisons, but there is no thorough benchmark.
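For a hands-on feel (a minimal sketch, not a proper benchmark; the simulated data, tree counts and thread count are arbitrary choices), two of the R packages listed above can be run side by side on the same data:
# Minimal sketch comparing randomForest and ranger on the same simulated data.
library(randomForest)  # classic single-threaded C implementation
library(ranger)        # C++ implementation, multithreaded

set.seed(1)
n <- 2000; p <- 50
X <- as.data.frame(matrix(rnorm(n * p), n, p))
y <- factor(rbinom(n, 1, plogis(X[, 1] - X[, 2])))
dat <- cbind(y = y, X)

t1 <- system.time(rf1 <- randomForest(y ~ ., data = dat, ntree = 500))
t2 <- system.time(rf2 <- ranger(y ~ ., data = dat, num.trees = 500, num.threads = 2))

t1["elapsed"]; t2["elapsed"]   # wall-clock times (machine-dependent)
rf1$err.rate[500, "OOB"]       # OOB error from randomForest
rf2$prediction.error           # OOB prediction error from ranger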
{ "source": [ "https://stats.stackexchange.com/questions/10001", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/847/" ] }
10,017
I am trying to understand standard error "clustering" and how to execute in R (it is trivial in Stata). In R I have been unsuccessful using either plm or writing my own function. I'll use the diamonds data from the ggplot2 package. I can do fixed effects with either dummy variables > library(plyr) > library(ggplot2) > library(lmtest) > library(sandwich) > # with dummies to create fixed effects > fe.lsdv <- lm(price ~ carat + factor(cut) + 0, data = diamonds) > ct.lsdv <- coeftest(fe.lsdv, vcov. = vcovHC) > ct.lsdv t test of coefficients: Estimate Std. Error t value Pr(>|t|) carat 7871.082 24.892 316.207 < 2.2e-16 *** factor(cut)Fair -3875.470 51.190 -75.707 < 2.2e-16 *** factor(cut)Good -2755.138 26.570 -103.692 < 2.2e-16 *** factor(cut)Very Good -2365.334 20.548 -115.111 < 2.2e-16 *** factor(cut)Premium -2436.393 21.172 -115.075 < 2.2e-16 *** factor(cut)Ideal -2074.546 16.092 -128.920 < 2.2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 or by de-meaning both left- and right-hand sides (no time invariant regressors here) and correcting degrees of freedom. > # by demeaning with degrees of freedom correction > diamonds <- ddply(diamonds, .(cut), transform, price.dm = price - mean(price), carat.dm = carat .... [TRUNCATED] > fe.dm <- lm(price.dm ~ carat.dm + 0, data = diamonds) > ct.dm <- coeftest(fe.dm, vcov. = vcovHC, df = nrow(diamonds) - 1 - 5) > ct.dm t test of coefficients: Estimate Std. Error t value Pr(>|t|) carat.dm 7871.082 24.888 316.26 < 2.2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 I can't replicate these results with plm , because I don't have a "time" index (i.e., this isn't really a panel, just clusters that could have a common bias in their error terms). > plm.temp <- plm(price ~ carat, data = diamonds, index = "cut") duplicate couples (time-id) Error in pdim.default(index[[1]], index[[2]]) : I also tried to code my own covariance matrix with clustered standard error using Stata's explanation of their cluster option ( explained here ), which is to solve $$\hat V_{cluster} = (X'X)^{-1} \left( \sum_{j=1}^{n_c} u_j'u_j \right) (X'X)^{-1}$$ where $u_j = \sum_{cluster~j} e_i * x_i$, $n_c$ si the number of clusters, $e_i$ is the residual for the $i^{th}$ observation and $x_i$ is the row vector of predictors, including the constant (this also appears as equation (7.22) in Wooldridge's Cross Section and Panel Data ). But the following code gives very large covariance matrices. Are these very large values given the small number of clusters I have? Given that I can't get plm to do clusters on one factor, I'm not sure how to benchmark my code. 
> # with cluster robust se > lm.temp <- lm(price ~ carat + factor(cut) + 0, data = diamonds) > > # using the model that Stata uses > stata.clustering <- function(x, clu, res) { + x <- as.matrix(x) + clu <- as.vector(clu) + res <- as.vector(res) + fac <- unique(clu) + num.fac <- length(fac) + num.reg <- ncol(x) + u <- matrix(NA, nrow = num.fac, ncol = num.reg) + meat <- matrix(NA, nrow = num.reg, ncol = num.reg) + + # outer terms (X'X)^-1 + outer <- solve(t(x) %*% x) + + # inner term sum_j u_j'u_j where u_j = sum_i e_i * x_i + for (i in seq(num.fac)) { + index.loop <- clu == fac[i] + res.loop <- res[index.loop] + x.loop <- x[clu == fac[i], ] + u[i, ] <- as.vector(colSums(res.loop * x.loop)) + } + inner <- t(u) %*% u + + # + V <- outer %*% inner %*% outer + return(V) + } > x.temp <- data.frame(const = 1, diamonds[, "carat"]) > summary(lm.temp) Call: lm(formula = price ~ carat + factor(cut) + 0, data = diamonds) Residuals: Min 1Q Median 3Q Max -17540.7 -791.6 -37.6 522.1 12721.4 Coefficients: Estimate Std. Error t value Pr(>|t|) carat 7871.08 13.98 563.0 <2e-16 *** factor(cut)Fair -3875.47 40.41 -95.9 <2e-16 *** factor(cut)Good -2755.14 24.63 -111.9 <2e-16 *** factor(cut)Very Good -2365.33 17.78 -133.0 <2e-16 *** factor(cut)Premium -2436.39 17.92 -136.0 <2e-16 *** factor(cut)Ideal -2074.55 14.23 -145.8 <2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Residual standard error: 1511 on 53934 degrees of freedom Multiple R-squared: 0.9272, Adjusted R-squared: 0.9272 F-statistic: 1.145e+05 on 6 and 53934 DF, p-value: < 2.2e-16 > stata.clustering(x = x.temp, clu = diamonds$cut, res = lm.temp$residuals) const diamonds....carat.. const 11352.64 -14227.44 diamonds....carat.. -14227.44 17830.22 Can this be done in R? It is a fairly common technique in econometrics (there's a brief tutorial in this lecture ), but I can't figure it out in R. Thanks!
Edit as of December 2021: Probably the easiest way to get clustered standard errors in R now is via the felm function in the lfe package or the feols function in the fixest package: feols in fixest: Clustering syntax and standard error computational procedure. felm in lfe: CRAN documentation. Original answers and some subsequent edits: For White standard errors clustered by group with the plm framework try coeftest(model.plm, vcov = vcovHC(model.plm, type = "HC0", cluster = "group")) where model.plm is a plm model. See also this link http://www.inside-r.org/packages/cran/plm/docs/vcovHC or the plm package documentation. EDIT: For two-way clustering (e.g. group and time) see the following link: http://people.su.se/~ma/clustering.pdf Here is another helpful guide for the plm package specifically that explains different options for clustered standard errors: http://www.princeton.edu/~otorres/Panel101R.pdf Clustering and other information, especially for Stata, can be found here: http://www.kellogg.northwestern.edu/faculty/petersen/htm/papers/se/se_programming.htm EDIT 2: Here are examples that compare R and Stata: http://www.richard-bluhm.com/clustered-ses-in-r-and-stata-2/ Also, the multiwayvcov package may be helpful. This post provides a helpful overview: http://rforpublichealth.blogspot.dk/2014/10/easy-clustered-standard-errors-in-r.html From the documentation:
library(multiwayvcov)
library(lmtest)
data(petersen)
m1 <- lm(y ~ x, data = petersen)
# Cluster by firm
vcov_firm <- cluster.vcov(m1, petersen$firmid)
coeftest(m1, vcov_firm)
# Cluster by year
vcov_year <- cluster.vcov(m1, petersen$year)
coeftest(m1, vcov_year)
# Cluster by year using a formula
vcov_year_formula <- cluster.vcov(m1, ~ year)
coeftest(m1, vcov_year_formula)
# Double cluster by firm and year
vcov_both <- cluster.vcov(m1, cbind(petersen$firmid, petersen$year))
coeftest(m1, vcov_both)
# Double cluster by firm and year using a formula
vcov_both_formula <- cluster.vcov(m1, ~ firmid + year)
coeftest(m1, vcov_both_formula)
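For a quick check against the hand-rolled function in the question, a minimal sketch using sandwich::vcovCL() on the same diamonds model (assuming a reasonably recent sandwich version; the HC1 correction is intended to mimic Stata's small-sample adjustment, and with only 5 clusters the inference should be treated with caution):
# One-way cluster-robust standard errors with sandwich::vcovCL().
library(ggplot2)   # for the diamonds data
library(sandwich)
library(lmtest)

fit <- lm(price ~ carat + factor(cut) + 0, data = diamonds)

# Cluster on cut (only 5 clusters here)
V_clu <- vcovCL(fit, cluster = diamonds$cut, type = "HC1")
coeftest(fit, vcov. = V_clu)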
{ "source": [ "https://stats.stackexchange.com/questions/10017", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1445/" ] }
10,159
I'm going to start out by saying this is a homework problem straight out of the book. I have spent a couple of hours looking up how to find expected values, and have determined I understand nothing. Let $X$ have the CDF $F(x) = 1 - x^{-\alpha},\ x\ge1$. Find $E(X)$ for those values of $\alpha$ for which $E(X)$ exists. I have no idea how to even start this. How can I determine for which values of $\alpha$ the expected value exists? I also don't know what to do with the CDF (I'm assuming this means Cumulative Distribution Function). There are formulas for finding the expected value when you have a frequency function or density function. Wikipedia says the CDF of $X$ can be defined in terms of the probability density function $f$ as follows: $F(x) = \int_{-\infty}^x f(t)\,dt$ This is as far as I got. Where do I go from here? EDIT: I meant to put $x\ge1$.
Usage of the density function is not necessary: integrate 1 minus the CDF.
When you have a random variable $X$ whose support is non-negative (that is, the variable has nonzero density/probability only for non-negative values), you can use the following property: $$ E(X) = \int_0^\infty \left( 1 - F_X(x) \right) \,\mathrm{d}x $$ A similar property applies in the case of a discrete random variable.
Proof. Since $1 - F_X(x) = P(X\geq x) = \int_x^\infty f_X(t) \,\mathrm{d}t$, $$ \int_0^\infty \left( 1 - F_X(x) \right) \,\mathrm{d}x = \int_0^\infty P(X\geq x) \,\mathrm{d}x = \int_0^\infty \int_x^\infty f_X(t) \,\mathrm{d}t \,\mathrm{d}x $$ Then change the order of integration: $$ = \int_0^\infty \int_0^t f_X(t) \,\mathrm{d}x \,\mathrm{d}t = \int_0^\infty \left[xf_X(t)\right]_0^t \,\mathrm{d}t = \int_0^\infty t f_X(t) \,\mathrm{d}t $$ Recognizing that $t$ is a dummy variable, or taking the simple substitution $t=x$ and $\mathrm{d}t = \mathrm{d}x$, $$ = \int_0^\infty x f_X(x) \,\mathrm{d}x = \mathrm{E}(X) $$
Attribution. I used the Formulas for special cases section of the Expected value article on Wikipedia to refresh my memory on the proof. That section also contains proofs for the discrete random variable case and also for the case that no density function exists.
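A minimal numerical check of the identity in R (the Exponential example and the value alpha = 3 are arbitrary choices for illustration):
# Verify E(X) = integral of (1 - F(x)) over [0, Inf) for a non-negative variable.
surv <- function(x) 1 - pexp(x, rate = 2)        # 1 - CDF of Exponential(rate = 2)
integrate(surv, lower = 0, upper = Inf)$value    # should be ~0.5, the true mean

# Applied to the question's CDF F(x) = 1 - x^(-alpha) with support x >= 1:
# E(X) = 1 + integral_1^Inf x^(-alpha) dx, finite only when alpha > 1.
alpha <- 3
1 + integrate(function(x) x^(-alpha), lower = 1, upper = Inf)$value  # alpha/(alpha - 1) = 1.5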
{ "source": [ "https://stats.stackexchange.com/questions/10159", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4401/" ] }
10,213
I'm doing some reading on topic modeling (with Latent Dirichlet Allocation), which makes use of Gibbs sampling. As a newbie in statistics―well, I know things like binomials, multinomials, priors, etc.―I find it difficult to grasp how Gibbs sampling works. Can someone please explain it in simple English and/or using simple examples? (If you are not familiar with topic modeling, any examples will do.)
You are a dungeonmaster hosting Dungeons & Dragons and a player casts 'Spell of Eldritch Chaotic Weather' (SECW). You've never heard of this spell before, but it turns out it is quite involved. The player hands you a dense book and says, 'the effect of this spell is that one of the events in this book occurs.' The book contains a whopping 1000 different effects, and what's more, the events have different 'relative probabilities.' The book tells you that the most likely event is 'fireball'; all the probabilities of the other events are described relative to the probability of 'fireball'; for example: on page 155, it says that 'duck storm' is half as likely as 'fireball.' How are you, the Dungeon Master, to sample a random event from this book? Here's how you can do it.
The accept-reject algorithm:
1) Roll a d1000 to decide a 'candidate' event.
2) Suppose the candidate event is 44% as likely as the most likely event, 'fireball'. Then accept the candidate with probability 44%. (Roll a d100, and accept if the roll is 44 or lower. Otherwise, go back to step 1 until you accept an event.)
3) The accepted event is your random sample.
The accept-reject algorithm is guaranteed to sample from the distribution with the specified relative probabilities. After much dice rolling you finally end up accepting a candidate: 'summon frog'. You breathe a sigh of relief as now you can get back to the (routine in comparison) business of handling the battle between the troll-orcs and dragon-elves. However, not to be outdone, another player decides to cast 'Level 2 arcane cyber-effect storm.' For this spell, two different random effects occur: a randomly generated attack, and a randomly generated character buff. The manual for this spell is so huge that it can only fit on a CD. The player boots it up and shows you a page. Your jaw drops: the entry for each attack is about as large as the manual for the previous spell, because it lists a relative probability for each possible accompanying buff:
'Cupric Blade'
The most likely buff accompanying this attack is 'Hotelling aura'
'Jackal Vision' is 33% as likely to accompany this attack as 'Hotelling aura'
'Toaster Ears' is 20% as likely to accompany this attack as 'Hotelling aura'
...
Similarly, the probability of a particular attack spell occurring depends on the probability of the buff occurring. It would be justified to wonder if a proper probability distribution can even be defined given this information. Well, it turns out that if there is one, it is uniquely specified by the conditional probabilities given in the manual. But how to sample from it? Luckily for you, the CD comes with an automated Gibbs sampler, because you would have to spend an eternity doing the following by hand.
Gibbs sampler algorithm:
1) Choose an attack spell randomly.
2) Use the accept-reject algorithm to choose the buff conditional on the attack.
3) Forget the attack spell you chose in step 1. Choose a new attack spell using the accept-reject algorithm conditional on the buff in step 2.
4) Go to step 2, repeat forever (though usually 10000 iterations will be enough).
5) Whatever your algorithm has at the last iteration is your sample.
You see, in general, MCMC samplers are only asymptotically guaranteed to generate samples from a distribution with the specified conditional probabilities. But in many cases, MCMC samplers are the only practical solution available.
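If the fantasy setting obscures the mechanics, here is a minimal Gibbs sampler sketch in R for a toy bivariate normal target, where both full conditionals are known exactly (rho, the starting values and the iteration count are arbitrary):
# Gibbs sampler for a standard bivariate normal with correlation rho.
# Full conditionals: X | Y = y ~ N(rho * y, 1 - rho^2), and symmetrically for Y | X.
set.seed(42)
rho <- 0.8
n_iter <- 10000
x <- numeric(n_iter)
y <- numeric(n_iter)
x[1] <- 0; y[1] <- 0                     # arbitrary starting values

for (i in 2:n_iter) {
  x[i] <- rnorm(1, mean = rho * y[i - 1], sd = sqrt(1 - rho^2))  # "attack" given "buff"
  y[i] <- rnorm(1, mean = rho * x[i],     sd = sqrt(1 - rho^2))  # "buff" given "attack"
}

keep <- 1001:n_iter                      # discard burn-in
cor(x[keep], y[keep])                    # should be close to rho = 0.8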
{ "source": [ "https://stats.stackexchange.com/questions/10213", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4429/" ] }
10,251
Principal component analysis can use matrix decomposition, but that is just a tool to get there. How would you find the principal components without the use of matrix algebra? What is the objective function (goal), and what are the constraints?
Without trying to give a full primer on PCA, from an optimization standpoint, the primary objective function is the Rayleigh quotient . The matrix that figures in the quotient is (some multiple of) the sample covariance matrix $$\newcommand{\m}[1]{\mathbf{#1}}\newcommand{\x}{\m{x}}\newcommand{\S}{\m{S}}\newcommand{\u}{\m{u}}\newcommand{\reals}{\mathbb{R}}\newcommand{\Q}{\m{Q}}\newcommand{\L}{\boldsymbol{\Lambda}} \S = \frac{1}{n} \sum_{i=1}^n \x_i \x_i^T = \m{X}^T \m{X} / n $$ where each $\x_i$ is a vector of $p$ features and $\m{X}$ is the matrix such that the $i$th row is $\x_i^T$. PCA seeks to solve a sequence of optimization problems. The first in the sequence is the unconstrained problem $$ \begin{array}{ll} \text{maximize} & \frac{\u^T \S \u}{\u^T\u} \;, \u \in \reals^p \> . \end{array} $$ Since $\u^T \u = \|\u\|_2^2 = \|\u\| \|\u\|$, the above unconstrained problem is equivalent to the constrained problem $$ \begin{array}{ll} \text{maximize} & \u^T \S \u \\ \text{subject to} & \u^T \u = 1 \>. \end{array} $$ Here is where the matrix algebra comes in. Since $\S$ is a symmetric positive semidefinite matrix (by construction!) it has an eigenvalue decomposition of the form $$ \S = \Q \L \Q^T \>, $$ where $\Q$ is an orthogonal matrix (so $\Q \Q^T = \m{I}$) and $\L$ is a diagonal matrix with nonnegative entries $\lambda_i$ such that $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p \geq 0$. Hence, $\u^T \S \u = \u^T \Q \L \Q^T \u = \m{w}^T \L \m{w} = \sum_{i=1}^p \lambda_i w_i^2$. Since $\u$ is constrained in the problem to have a norm of one, then so is $\m{w}$ since $\|\m{w}\|_2 = \|\Q^T \u\|_2 = \|\u\|_2 = 1$, by virtue of $\Q$ being orthogonal. But, if we want to maximize the quantity $\sum_{i=1}^p \lambda_i w_i^2$ under the constraints that $\sum_{i=1}^p w_i^2 = 1$, then the best we can do is to set $\m{w} = \m{e}_1$, that is, $w_1 = 1$ and $w_i = 0$ for $i > 1$. Now, backing out the corresponding $\u$, which is what we sought in the first place, we get that $$ \u^\star = \Q \m{e}_1 = \m{q}_1 $$ where $\m{q}_1$ denotes the first column of $\Q$, i.e., the eigenvector corresponding to the largest eigenvalue of $\S$. The value of the objective function is then also easily seen to be $\lambda_1$. The remaining principal component vectors are then found by solving the sequence (indexed by $i$) of optimization problems $$ \begin{array}{ll} \text{maximize} & \u_i^T \S \u_i \\ \text{subject to} & \u_i^T \u_i = 1 \\ & \u_i^T \u_j = 0 \quad \forall 1 \leq j < i\>. \end{array} $$ So, the problem is the same, except that we add the additional constraint that the solution must be orthogonal to all of the previous solutions in the sequence. It is not difficult to extend the argument above inductively to show that the solution of the $i$th problem is, indeed, $\m{q}_i$, the $i$th eigenvector of $\S$. The PCA solution is also often expressed in terms of the singular value decomposition of $\m{X}$. To see why, let $\m{X} = \m{U} \m{D} \m{V}^T$. Then $n \S = \m{X}^T \m{X} = \m{V} \m{D}^2 \m{V}^T$ and so $\m{V} = \m{Q}$ (strictly speaking, up to sign flips) and $\L = \m{D}^2 / n$. The principal components are found by projecting $\m{X}$ onto the principal component vectors. From the SVD formulation just given, it's easy to see that $$ \m{X} \m{Q} = \m{X} \m{V} = \m{U} \m{D} \m{V}^T \m{V} = \m{U} \m{D} \> . 
$$ The simplicity of representation of both the principal component vectors and the principal components themselves in terms of the SVD of the matrix of features is one reason the SVD features so prominently in some treatments of PCA.
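As a minimal numerical check of the correspondence between the eigendecomposition and SVD formulations above (simulated data; note that R's cov() uses the $n-1$ scaling rather than the $n$ used in the text, which only rescales the eigenvalues):
# PCA three ways -- eigen(), svd(), prcomp() -- agree up to column sign flips.
set.seed(1)
X  <- matrix(rnorm(200 * 5), 200, 5)
Xc <- scale(X, center = TRUE, scale = FALSE)   # centre the columns

ev <- eigen(cov(Xc))                           # eigendecomposition of the covariance
sv <- svd(Xc)                                  # SVD of the centred data matrix
pc <- prcomp(X, center = TRUE, scale. = FALSE)

round(abs(ev$vectors) - abs(sv$v), 10)         # ~0: same principal directions
round(abs(ev$vectors) - abs(pc$rotation), 10)  # ~0

ev$values                                      # eigenvalues of S
sv$d^2 / (nrow(X) - 1)                         # equal: lambda_i = d_i^2 / (n - 1) here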
{ "source": [ "https://stats.stackexchange.com/questions/10251", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/74/" ] }
10,289
At work we were discussing this as my boss has never heard of normalization. In Linear Algebra, Normalization seems to refer to the dividing of a vector by its length. And in statistics, Standardization seems to refer to the subtraction of a mean then dividing by its SD. But they seem interchangeable with other possibilities as well. When creating some kind of universal score, that makes up $2$ different metrics, which have different means and different SD's, would you Normalize, Standardize, or something else? One person told me it's just a matter of taking each metric and dividing them by their SD, individually. Then summing the two. And that will result in a universal score that can be used to judge both metrics. For instance, say you had the number of people who take the subway to work (in NYC) and the number of people who drove to work (in NYC). $$\text{Train} \longrightarrow x$$ $$\text{Car} \longrightarrow y$$ If you wanted to create a universal score to quickly report traffic fluctuations, you can't just add $\text{mean}(x)$ and $\text{mean}(y)$ because there will be a LOT more people who ride the train. There's 8 million people living in NYC, plus tourists. That's millions of people taking the train everyday verse hundreds of thousands of people in cars. So they need to be transformed to a similar scale in order to be compared. If $\text{mean}(x) = 8,000,000$ and $\text{mean}(y) = 800,000$ Would you normalize $x$ & $y$ then sum? Would you standardize $x$ & $y$ then sum? Or would you divide each by their respective SD then sum? In order to get to a number that when fluctuates, represents total traffic fluctuations. Any article or chapters of books for reference would be much appreciated. THANKS! Also here's another example of what I'm trying to do. Imagine you're a college dean, and you're discussing admission requirements. You may want students with at least a certain GPA and a certain test score. It'd be nice if they were both on the same scale because then you could just add the two together and say, "anyone with at least a 7.0 can get admitted." That way, if a prospective student has a 4.0 GPA, they could get as low as a 3.0 test score and still get admitted. Inversely, if someone had a 3.0 GPA, they could still get admitted with a 4.0 test score. But it's not like that. The ACT is on a 36 point scale and most GPA's are on 4.0 (some are 4.3, yes annoying). Since I can't just add an ACT and GPA to get some kind of universal score, how can I transform them so they can be added, thus creating a universal admission score. And then as a Dean, I could just automatically accept anyone with a score above a certain threshold. Or even automatically accept everyone whose score is within the top 95%.... those sorts of things. Would that be normalization? standardization? or just dividing each by their SD then summing?
Normalization rescales the values into a range of [0,1]. This might be useful in some cases where all parameters need to have the same positive scale. However, it is sensitive to outliers: the extreme values set the range, so the rest of the data gets compressed into a narrow band. $$ X_{changed} = \frac{X - X_{min}}{X_{max}-X_{min}} $$ Standardization rescales data to have a mean ($\mu$) of 0 and standard deviation ($\sigma$) of 1 (unit variance). $$ X_{changed} = \frac{X - \mu}{\sigma} $$ For most applications standardization is recommended.
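A minimal R sketch of both rescalings applied to two made-up series on very different scales, so they can be summed into one index (the means and SDs are invented for illustration):
# Min-max normalization vs z-score standardization.
normalize   <- function(x) (x - min(x)) / (max(x) - min(x))
standardize <- function(x) (x - mean(x)) / sd(x)   # cf. scale(x)

set.seed(1)
train <- rnorm(30, mean = 8e6, sd = 5e5)   # hypothetical daily subway riders
car   <- rnorm(30, mean = 8e5, sd = 8e4)   # hypothetical daily drivers

# Either transform puts both series on a comparable scale before summing:
traffic_index_z  <- standardize(train) + standardize(car)
traffic_index_01 <- normalize(train)   + normalize(car)
head(cbind(traffic_index_z, traffic_index_01))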
{ "source": [ "https://stats.stackexchange.com/questions/10289", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4455/" ] }
10,302
I came across the term perplexity, which refers to the log-averaged inverse probability on unseen data. The Wikipedia article on perplexity does not give an intuitive meaning for it. This perplexity measure was used in the pLSA paper. Can anyone explain the need for and the intuitive meaning of the perplexity measure?
You have looked at the Wikipedia article on perplexity. It gives the perplexity of a discrete distribution as $$2^{-\sum_x p(x)\log_2 p(x)}$$ which could also be written as $$\exp\left({\sum_x p(x)\log_e \frac{1}{p(x)}}\right)$$ i.e. as a weighted geometric average of the inverses of the probabilities. For a continuous distribution, the sum would turn into an integral. The article also gives a way of estimating perplexity for a model using $N$ pieces of test data $$2^{-\sum_{i=1}^N \frac{1}{N} \log_2 q(x_i)}$$ which could also be written $$\exp\left(\frac{{\sum_{i=1}^N \log_e \left(\dfrac{1}{q(x_i)}\right)}}{N}\right) \text{ or } \sqrt[N]{\prod_{i=1}^N \frac{1}{q(x_i)}}$$ or in a variety of other ways, and this should make it even clearer where "log-average inverse probability" comes from.
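A minimal R sketch of both formulas (the example distributions are arbitrary):
# Perplexity of a discrete distribution and of model probabilities on test data.
perplexity_dist <- function(p) {
  p <- p[p > 0]
  2^(-sum(p * log2(p)))                  # 2^{entropy}; equals length(p) for uniform p
}
perplexity_dist(rep(1/6, 6))             # fair die -> 6
perplexity_dist(c(0.9, rep(0.1/5, 5)))   # peaked distribution -> much less than 6

# Perplexity of model probabilities q(x_i) assigned to N held-out observations:
perplexity_model <- function(q) exp(-mean(log(q)))
perplexity_model(rep(1/6, 100))          # a model that always says 1/6 has perplexity 6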
{ "source": [ "https://stats.stackexchange.com/questions/10302", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4290/" ] }
10,419
I've got a question concerning a negative binomial regression: Suppose that you have the following commands:
require(MASS)
attach(cars)
mod.NB <- glm.nb(dist ~ speed)
summary(mod.NB)
detach(cars)
(Note that cars is a dataset which is available in R, and I don't really care if this model makes sense.) What I'd like to know is: how can I interpret the variable theta (as returned at the bottom of a call to summary)? Is this the shape parameter of the negbin distribution, and is it possible to interpret it as a measure of skewness?
Yes, theta is the shape parameter of the negative binomial distribution, and no, you cannot really interpret it as a measure of skewness. More precisely: skewness will depend on the value of theta , but also on the mean there is no value of theta that will guarantee you lack of skew If I did not mess it up, in the mu / theta parametrization used in negative binomial regression, the skewness is $$ {\rm Skew}(NB) = \frac{\theta+2\mu}{\sqrt{\theta\mu(\theta+\mu)}} = \frac{1 + 2\frac{\mu}{\theta}}{\sqrt{\mu(1+\frac{\mu}{\theta})}} $$ In this context, $\theta$ is usually interpreted as a measure of overdispersion with respect to the Poisson distribution. The variance of the negative binomial is $\mu + \mu^2/\theta$, so $\theta$ really controls the excess variability compared to Poisson (which would be $\mu$), and not the skew.
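A minimal simulation sketch in R to check the formula (rnbinom()'s size argument plays the role of theta in this mu/theta parametrization; the values of mu and theta are arbitrary):
# Check the negative binomial skewness formula by simulation.
mu <- 5; theta <- 2
skew_formula <- (theta + 2 * mu) / sqrt(theta * mu * (theta + mu))

set.seed(1)
x <- rnbinom(1e6, mu = mu, size = theta)            # size corresponds to theta
skew_empirical <- mean((x - mean(x))^3) / sd(x)^3    # sample skewness

c(formula = skew_formula, simulated = skew_empirical)  # should agree closely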
{ "source": [ "https://stats.stackexchange.com/questions/10419", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4496/" ] }
10,441
I'm running an experiment where I'm gathering (independent) samples in parallel. I compute the variance of each group of samples, and now I want to combine them all to find the total variance of all the samples. I'm having a hard time finding a derivation for this as I'm not sure of the terminology. I think of it as a partition of one RV. So I want to find $Var(X)$ from $Var(X_1)$, $Var(X_2)$, ..., and $Var(X_n)$, where $X$ = $[X_1, X_2, \dots, X_n]$. EDIT: The partitions are not the same size/cardinality, but the sum of the partition sizes equals the number of samples in the overall sample set. EDIT 2: There is a formula for a parallel computation here , but it only covers the case of a partition into two sets, not $n$ sets.
The formula is fairly straightforward if all the sub-sample have the same sample size. If you had $g$ sub-samples of size $k$ (for a total of $gk$ samples), then the variance of the combined sample depends on the mean $E_j$ and variance $V_j$ of each sub-sample: $$ Var(X_1,\ldots,X_{gk}) = \frac{k-1}{gk-1}(\sum_{j=1}^g V_j + \frac{k(g-1)}{k-1} Var(E_j)),$$ where by $Var(E_j)$ means the variance of the sample means. A demonstration in R: > x <- rnorm(100) > g <- gl(10,10) > mns <- tapply(x, g, mean) > vs <- tapply(x, g, var) > 9/99*(sum(vs) + 10*var(mns)) [1] 1.033749 > var(x) [1] 1.033749 If the sample sizes are not equal, the formula is not so nice. EDIT: formula for unequal sample sizes If there are $g$ sub-samples, each with $k_j, j=1,\ldots,g$ elements for a total of $n=\sum{k_j}$ values, then $$ Var(X_1,\ldots,X_{n}) = \frac{1}{n-1}\left(\sum_{j=1}^g (k_j-1) V_j + \sum_{j=1}^g k_j (\bar{X}_j - \bar{X})^2\right), $$ where $\bar{X} = (\sum_{j=1}^gk_j\bar{X}_j)/n$ is the weighted average of all the means (and equals to the mean of all values). Again, a demonstration: > k <- rpois(10, lambda=10) > n <- sum(k) > g <- factor(rep(1:10, k)) > x <- rnorm(n) > mns <- tapply(x, g, mean) > vs <- tapply(x, g, var) > 1/(n-1)*(sum((k-1)*vs) + sum(k*(mns-weighted.mean(mns,k))^2)) [1] 1.108966 > var(x) [1] 1.108966 By the way, these formulas are easy to derive by writing the desired variance as the scaled sum of $(X_{ji}-\bar{X})^2$, then introducing $\bar{X}_j$: $[(X_{ji}-\bar{X}_j)-(\bar{X}_j-\bar{X})]^2$, using the square of difference formula, and simplifying.
{ "source": [ "https://stats.stackexchange.com/questions/10441", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4499/" ] }
10,540
I'm trying to use a silhouette plot to determine the number of clusters in my dataset. Given the dataset Train, I used the following MATLAB code:
Train_data = full(Train);
Result = [];
for num_of_cluster = 1:20
    centroid = kmeans(Train_data, num_of_cluster, 'distance', 'sqeuclid');
    s = silhouette(Train_data, centroid, 'sqeuclid');
    Result = [Result; num_of_cluster mean(s)];
end
plot(Result(:,1), Result(:,2), 'r*-.');
The resultant plot is given below, with the number of clusters on the x-axis and the mean silhouette value on the y-axis. How do I interpret this graph? How do I determine the number of clusters from it?
Sergey's answer contains the critical point, which is that the silhouette coefficient quantifies the quality of clustering achieved -- so you should select the number of clusters that maximizes the silhouette coefficient. The long answer is that the best way to evaluate the results of your clustering efforts is to start by actually examining -- human inspection -- the clusters formed and making a determination based on an understanding of what the data represents, what a cluster represents, and what the clustering is intended to achieve. There are numerous quantitative methods of evaluating clustering results which should be used as tools, with full understanding of the limitations. They tend to be fairly intuitive in nature, and thus have a natural appeal (like clustering problems in general). Examples: cluster mass / radius / density, cohesion or separation between clusters, etc. These concepts are often combined, for example, the ratio of separation to cohesion should be large if clustering was successful. The way clustering is measured is informed by the type of clustering algorithms used. For example, measuring quality of a complete clustering algorithm (in which all points are put into clusters) can be very different from measuring quality of a threshold-based fuzzy clustering algorithm (in which some point might be left un-clustered as 'noise'). The silhouette coefficient is one such measure. It works as follows: For each point p, first find the average distance between p and all other points in the same cluster (this is a measure of cohesion, call it A). Then find the average distance between p and all points in the nearest cluster (this is a measure of separation from the closest other cluster, call it B). The silhouette coefficient for p is defined as the difference between B and A divided by the greater of the two (max(A,B)). We evaluate the cluster coefficient of each point and from this we can obtain the 'overall' average cluster coefficient. Intuitively, we are trying to measure the space between clusters. If cluster cohesion is good (A is small) and cluster separation is good (B is large), the numerator will be large, etc. I've constructed an example here to demonstrate this graphically. In these plots the same data is plotted five times; the colors indicate the clusters created by k-means clustering, with k = 1,2,3,4,5. That is, I've forced a clustering algorithm to divide the data into 2 clusters, then 3, and so on, and colored the graph accordingly. The silhouette plot shows the that the silhouette coefficient was highest when k = 3, suggesting that's the optimal number of clusters. In this example we are lucky to be able to visualize the data and we might agree that indeed, three clusters best captures the segmentation of this data set. If we were unable to visualize the data, perhaps because of higher dimensionality, a silhouette plot would still give us a suggestion. However, I hope my somewhat long-winded answer here also makes the point that this "suggestion" could be very insufficient or just plain wrong in certain scenarios.
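For an R analogue of the MATLAB loop in the question, a minimal sketch using cluster::silhouette() with average silhouette width as the criterion (the three-blob data are simulated for illustration):
# Choose k by the average silhouette width.
library(cluster)

set.seed(1)
X <- rbind(matrix(rnorm(100, 0), ncol = 2),
           matrix(rnorm(100, 4), ncol = 2),
           matrix(rnorm(100, 8), ncol = 2))   # three well-separated blobs

d <- dist(X)
avg_sil <- sapply(2:10, function(k) {
  cl <- kmeans(X, centers = k, nstart = 20)
  mean(silhouette(cl$cluster, d)[, "sil_width"])
})

plot(2:10, avg_sil, type = "b",
     xlab = "number of clusters", ylab = "average silhouette width")
which.max(avg_sil) + 1                        # expected to pick k = 3 here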
{ "source": [ "https://stats.stackexchange.com/questions/10540", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4290/" ] }
10,613
Recently, I have found in a paper by Klammer, et al. a statement that p-values should be uniformly distributed. I believe the authors, but cannot understand why it is so. Klammer, A. A., Park, C. Y., and Stafford Noble, W. (2009) Statistical Calibration of the SEQUEST XCorr Function . Journal of Proteome Research . 8(4): 2106–2113.
To clarify a bit. The p-value is uniformly distributed when the null hypothesis is true and all other assumptions are met. The reason for this is really the definition of alpha as the probability of a type I error. We want the probability of rejecting a true null hypothesis to be alpha, we reject when the observed $\text{p-value} < \alpha$, the only way this happens for any value of alpha is when the p-value comes from a uniform distribution. The whole point of using the correct distribution (normal, t, f, chisq, etc.) is to transform from the test statistic to a uniform p-value. If the null hypothesis is false then the distribution of the p-value will (hopefully) be more weighted towards 0. The Pvalue.norm.sim and Pvalue.binom.sim functions in the TeachingDemos package for R will simulate several data sets, compute the p-values and plot them to demonstrate this idea. Also see: Murdoch, D, Tsai, Y, and Adcock, J (2008). P-Values are Random Variables. The American Statistician , 62 , 242-245. for some more details. Edit: Since people are still reading this answer and commenting, I thought that I would address @whuber's comment. It is true that when using a composite null hypothesis like $\mu_1 \leq \mu_2$ that the p-values will only be uniformly distributed when the 2 means are exactly equal and will not be a uniform if $\mu_1$ is any value that is less than $\mu_2$. This can easily be seen using the Pvalue.norm.sim function and setting it to do a one sided test and simulating with the simulation and hypothesized means different (but in the direction to make the null true). As far as statistical theory goes, this does not matter. Consider if I claimed that I am taller than every member of your family, one way to test this claim would be to compare my height to the height of each member of your family one at a time. Another option would be to find the member of your family that is the tallest and compare their height with mine. If I am taller than that one person then I am taller than the rest as well and my claim is true, if I am not taller than that one person then my claim is false. Testing a composite null can be seen as a similar process, rather than testing all the possible combinations where $\mu_1 \leq \mu_2$ we can test just the equality part because if we can reject that $\mu_1 = \mu_2$ in favour of $\mu_1 > \mu_2$ then we know that we can also reject all the possibilities of $\mu_1 < \mu_2$. If we look at the distribution of p-values for cases where $\mu_1 < \mu_2$ then the distribution will not be perfectly uniform but will have more values closer to 1 than to 0 meaning that the probability of a type I error will be less than the selected $\alpha$ value making it a conservative test. The uniform becomes the limiting distribution as $\mu_1$ gets closer to $\mu_2$ (the people who are more current on the stat-theory terms could probably state this better in terms of distributional supremum or something like that). So by constructing our test assuming the equal part of the null even when the null is composite, then we are designing our test to have a probability of a type I error that is at most $\alpha$ for any conditions where the null is true.
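A minimal simulation sketch in R illustrating both situations, using a one-sided two-sample t-test (sample sizes and the effect size are arbitrary):
# Distribution of p-values under a point null vs a strict composite null.
set.seed(1)
sim_p <- function(mu1, mu2, n = 30, reps = 10000) {
  replicate(reps, t.test(rnorm(n, mu1), rnorm(n, mu2),
                         alternative = "greater")$p.value)   # H0: mu1 <= mu2
}

p_equal <- sim_p(0, 0)      # null true with equality: p-values ~ Uniform(0, 1)
p_less  <- sim_p(-0.5, 0)   # null true with mu1 < mu2: p-values pile up near 1

hist(p_equal, breaks = 20)  # roughly flat
hist(p_less,  breaks = 20)  # mass shifted toward 1
mean(p_equal < 0.05)        # close to 0.05
mean(p_less  < 0.05)        # well below 0.05 -> conservative test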
{ "source": [ "https://stats.stackexchange.com/questions/10613", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4552/" ] }
10,712
I'm just reading the book "R in a Nutshell". And it seems as if I skipped the part where the "." as in "sample.formula" was explained. > sample.formula <- as.formula(y~x1+x2) Is sample an object with a field formula as in other languages? And if so, how can I find out, what other fields/functions this object has? (Type declaration) EDIT: I just found another confusing use of the ".": > svm(formula = is_spam~., data = spambase.training) (the dot between ~., )
The dot can be used like any other character in a name. However, it also has an additional special interpretation. Suppose we have an object with a specific class:
a <- list(b = 1)
class(a) <- "myclass"
Now declare myfunction as a standard generic in the following way:
myfunction <- function(x, ...) UseMethod("myfunction")
Now declare the function
myfunction.myclass <- function(x, ...) x$b + 1
Then the dot has a special meaning: for all objects with class myclass, calling myfunction(a) will actually call the function myfunction.myclass:
> myfunction(a)
[1] 2
This is used widely in R; the most appropriate example is the function summary. Each class has its own summary function, so when you fit some model, for example (which usually returns an object with a specific class), you invoke summary and it will call the appropriate summary function for that specific model.
{ "source": [ "https://stats.stackexchange.com/questions/10712", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3541/" ] }
10,789
I'm having problems understanding the concept of a random variable as a function. I understand the mechanics (I think) but I do not understand the motivation... Say $(\Omega, B, P)$ is a probability triple, where $\Omega = [0,1]$, $B$ is the Borel-$\sigma$-algebra on that interval and $P$ is the regular Lebesgue measure. Let $X$ be a random variable from $\Omega$ to $\{1,2,3,4,5,6\}$ such that $X([0,1/6)) = 1$, $X([1/6,2/6)) = 2$, ..., $X([5/6,1]) = 6$, so $X$ has a discrete uniform distribution on the values 1 through 6. That's all good, but I do not understand the necessity of the original probability triple... we could have directly constructed something equivalent as $(\{1,2,3,4,5,6\}, S, P_x)$ where $S$ is the appropriate $\sigma$-algebra on that space, and $P_x$ is a measure that assigns to each subset the measure (# of elements)/6. Also, the choice of $\Omega=[0,1]$ was arbitrary -- it could've been $[0,2]$, or any other set. So my question is, why bother constructing an arbitrary $\Omega$ with a $\sigma$-algebra and a measure, and define a random variable as a map from the sample space to the real line?
If you are wondering why all this machinery is used when something much simpler could suffice -- you are right, for most common situations. However, the measure-theoretic version of probability was developed by Kolmogorov for the purpose of establishing a theory of such generality that it could handle, in some cases, very abstract and complicated probability spaces. In fact, Kolmogorov's measure-theoretic foundations for probability ultimately allowed probabilistic tools to be applied far beyond their original intended domain of application, into areas such as harmonic analysis. At first it does seem more straightforward to skip any "underlying" sample space $\Omega$ (and its $\sigma$-algebra), and to simply assign probability masses to the events comprising the sample space directly, as you have proposed. Indeed, probabilists effectively do the same thing whenever they choose to work with the "induced measure" on the sample space defined by $P \circ X^{-1}$. However, things start getting tricky when you get into infinite-dimensional spaces. Suppose you want to prove the Strong Law of Large Numbers for the specific case of flipping fair coins (that is, that the proportion of heads tends arbitrarily closely to 1/2 as the number of coin flips goes to infinity). You could attempt to construct a $\sigma$-algebra on the set of infinite sequences of the form $(H,T,H,...)$. But here you will find that it is much more convenient to take the underlying space to be $\Omega = [0,1)$, and then use the binary representations of real numbers (e.g. $0.10100...$) to represent sequences of coin flips (1 being heads, 0 being tails). An illustration of this very example can be found in the first few chapters of Billingsley's Probability and Measure.
{ "source": [ "https://stats.stackexchange.com/questions/10789", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4608/" ] }
10,838
I wonder if there is a simple way to produce a list of variables using a for loop, and give its value. for(i in 1:3) { noquote(paste("a",i,sep=""))=i } In the above code, I try to create a1 , a2 , a3 , which assign to the values of 1, 2, 3. However, R gives an error message. Thanks for your help.
You are looking for assign().
for (i in 1:3) {
  assign(paste("a", i, sep = ""), i)
}
gives
> ls()
[1] "a1" "a2" "a3"
and
> a1
[1] 1
> a2
[1] 2
> a3
[1] 3
Update: I agree that using loops is (very often) bad R coding style (see the discussion above). Using list2env() (thanks to @mbq for mentioning it), this is another solution to @Han Lin Shang's question:
x <- as.list(rnorm(10000))
names(x) <- paste("a", 1:length(x), sep = "")
list2env(x, envir = .GlobalEnv)
{ "source": [ "https://stats.stackexchange.com/questions/10838", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4625/" ] }
10,975
Is there a (stronger?) alternative to the arcsin square root transformation for percentage/proportion data? In the data set I'm working on at the moment, marked heteroscedasticity remains after I apply this transformation, i.e. the plot of residuals vs. fitted values is still very much rhomboid. Edited to respond to comments: the data are investment decisions by experimental participants who may invest 0-100% of an endowment in multiples of 10%. I have also looked at these data using ordinal logistic regression, but would like to see what a valid glm would produce. Plus I could see the answer being useful for future work, as arcsin square root seems to be used as a one-size-fits all solution in my field and I hadn't come across any alternatives being employed.
Sure. John Tukey describes a family of (increasing, one-to-one) transformations in EDA . It is based on these ideas: To be able to extend the tails (towards 0 and 1) as controlled by a parameter. Nevertheless, to match the original (untransformed) values near the middle ( $1/2$ ), which makes the transformation easier to interpret. To make the re-expression symmetric about $1/2.$ That is, if $p$ is re-expressed as $f(p)$ , then $1-p$ will be re-expressed as $-f(p)$ . If you begin with any increasing monotonic function $g: (0,1) \to \mathbb{R}$ differentiable at $1/2$ you can adjust it to meet the second and third criteria: just define $$f(p) = \frac{g(p) - g(1-p)}{2g'(1/2)}.$$ The numerator is explicitly symmetric (criterion $(3)$ ), because swapping $p$ with $1-p$ reverses the subtraction, thereby negating it. To see that $(2)$ is satisfied, note that the denominator is precisely the factor needed to make $f^\prime(1/2)=1.$ Recall that the derivative approximates the local behavior of a function with a linear function; a slope of $1=1:1$ thereby means that $f(p)\approx p$ (plus a constant $-1/2$ ) when $p$ is sufficiently close to $1/2.$ This is the sense in which the original values are "matched near the middle." Tukey calls this the "folded" version of $g$ . His family consists of the power and log transformations $g(p) = p^\lambda$ where, when $\lambda=0$ , we consider $g(p) = \log(p)$ . Let's look at some examples. When $\lambda = 1/2$ we get the folded root, or "froot," $f(p) = \sqrt{1/2}\left(\sqrt{p} - \sqrt{1-p}\right)$ . When $\lambda = 0$ we have the folded logarithm, or "flog," $f(p) = (\log(p) - \log(1-p))/4.$ Evidently this is just a constant multiple of the logit transformation, $\log(\frac{p}{1-p})$ . In this graph the blue line corresponds to $\lambda=1$ , the intermediate red line to $\lambda=1/2$ , and the extreme green line to $\lambda=0$ . The dashed gold line is the arcsine transformation, $\arcsin(2p-1)/2 = \arcsin(\sqrt{p}) - \arcsin(\sqrt{1/2})$ . The "matching" of slopes (criterion $(2)$ ) causes all the graphs to coincide near $p=1/2.$ The most useful values of the parameter $\lambda$ lie between $1$ and $0$ . (You can make the tails even heavier with negative values of $\lambda$ , but this use is rare.) $\lambda=1$ doesn't do anything at all except recenter the values ( $f(p) = p-1/2$ ). As $\lambda$ shrinks towards zero, the tails get pulled further towards $\pm \infty$ . This satisfies criterion #1. Thus, by choosing an appropriate value of $\lambda$ , you can control the "strength" of this re-expression in the tails.
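A minimal R sketch of the folded family defined above, for experimenting with different values of $\lambda$ on proportion data (the plotting range is arbitrary):
# Tukey's folded power/log transformations f(p) for p in (0, 1).
folded <- function(p, lambda) {
  g  <- function(x) if (lambda == 0) log(x) else x^lambda
  gp <- if (lambda == 0) 2 else lambda * 0.5^(lambda - 1)   # g'(1/2)
  (g(p) - g(1 - p)) / (2 * gp)
}

p <- seq(0.01, 0.99, by = 0.01)
plot(p, folded(p, 1), type = "l", ylim = c(-2, 2), ylab = "f(p)")  # just p - 1/2
lines(p, folded(p, 0.5), col = "red")    # "froot"
lines(p, folded(p, 0),   col = "green")  # "flog" (scaled logit)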
{ "source": [ "https://stats.stackexchange.com/questions/10975", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/266/" ] }
10,987
I am looking for input on how others organize their R code and output. My current practice is to write code in blocks in a text file as such: #================================================= # 19 May 2011 date() # Correlation analysis of variables in sed summary load("/media/working/working_files/R_working/sed_OM_survey.RData") # correlation between estimated surface and mean perc.OM in epi samples cor.test(survey$mean.perc.OM[survey$Depth == "epi"], survey$est.surf.OM[survey$Depth == "epi"])) #================================================== I then paste the output into another text file, usually with some annotation. The problems with this method are: The code and the output are not explicitly linked other than by date. The code and output are organized chronologically and thus can be hard to search. I have considered making one Sweave document with everything since I could then make a table of contents but this seems like it may be more hassle than the benefits it would provide. Let me know of any effective routines you have for organizing your R code and output that would allow for efficient searching and editing the analysis.
You are not the first person to ask this question. Managing a statistical analysis project – guidelines and best practices A workflow for R R Workflow: Slides from a Talk at Melbourne R Users by Jeromy Anglim (including another much longer list of webpages dedicated to R Workflow) My own stuff: Dynamic documents with R and LATEX as an important part of reproducible research More links to project organization: How to efficiently manage a statistical analysis project?
{ "source": [ "https://stats.stackexchange.com/questions/10987", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4048/" ] }
11,000
I'd like to regress a vector B against each of the columns in a matrix A. This is trivial if there are no missing data, but if matrix A contains missing values, then my regression against A is constrained to include only rows where all values are present (the default na.omit behavior). This produces incorrect results for columns with no missing data. I can regress the column matrix B against individual columns of the matrix A, but I have thousands of regressions to do, and this is prohibitively slow and inelegant. The na.exclude function seems to be designed for this case, but I can't make it work. What am I doing wrong here? Using R 2.13 on OSX, if it matters. A = matrix(1:20, nrow=10, ncol=2) B = matrix(1:10, nrow=10, ncol=1) dim(lm(A~B)$residuals) # [1] 10 2 (the expected 10 residual values) # Missing value in first column; now we have 9 residuals A[1,1] = NA dim(lm(A~B)$residuals) #[1] 9 2 (the expected 9 residuals, given na.omit() is the default) # Call lm with na.exclude; still have 9 residuals dim(lm(A~B, na.action=na.exclude)$residuals) #[1] 9 2 (was hoping to get a 10x2 matrix with a missing value here) A.ex = na.exclude(A) dim(lm(A.ex~B)$residuals) # Throws an error because dim(A.ex)==9,2 #Error in model.frame.default(formula = A.ex ~ B, drop.unused.levels = TRUE) : # variable lengths differ (found for 'B')
Edit: I misunderstood your question. There are two aspects: a) na.omit and na.exclude both do casewise deletion with respect to both predictors and criterions. They only differ in that extractor functions like residuals() or fitted() will pad their output with NA s for the omitted cases with na.exclude , thus having an output of the same length as the input variables. > N <- 20 # generate some data > y1 <- rnorm(N, 175, 7) # criterion 1 > y2 <- rnorm(N, 30, 8) # criterion 2 > x <- 0.5*y1 - 0.3*y2 + rnorm(N, 0, 3) # predictor > y1[c(1, 3, 5)] <- NA # some NA values > y2[c(7, 9, 11)] <- NA # some other NA values > Y <- cbind(y1, y2) # matrix for multivariate regression > fitO <- lm(Y ~ x, na.action=na.omit) # fit with na.omit > dim(residuals(fitO)) # use extractor function [1] 14 2 > fitE <- lm(Y ~ x, na.action=na.exclude) # fit with na.exclude > dim(residuals(fitE)) # use extractor function -> = N [1] 20 2 > dim(fitE$residuals) # access residuals directly [1] 14 2 b) The real issue is not with this difference between na.omit and na.exclude , you don't seem to want casewise deletion that takes criterion variables into account, which both do. > X <- model.matrix(fitE) # design matrix > dim(X) # casewise deletion -> only 14 complete cases [1] 14 2 The regression results depend on the matrices $X^{+} = (X' X)^{-1} X'$ (pseudoinverse of design matrix $X$, coefficients $\hat{\beta} = X^{+} Y$) and the hat matrix $H = X X^{+}$, fitted values $\hat{Y} = H Y$). If you don't want casewise deletion, you need a different design matrix $X$ for each column of $Y$, so there's no way around fitting separate regressions for each criterion. You can try to avoid the overhead of lm() by doing something along the lines of the following: > Xf <- model.matrix(~ x) # full design matrix (all cases) # function: manually calculate coefficients and fitted values for single criterion y > getFit <- function(y) { + idx <- !is.na(y) # throw away NAs + Xsvd <- svd(Xf[idx , ]) # SVD decomposition of X + # get X+ but note: there might be better ways + Xplus <- tcrossprod(Xsvd$v %*% diag(Xsvd$d^(-2)) %*% t(Xsvd$v), Xf[idx, ]) + list(coefs=(Xplus %*% y[idx]), yhat=(Xf[idx, ] %*% Xplus %*% y[idx])) + } > res <- apply(Y, 2, getFit) # get fits for each column of Y > res$y1$coefs [,1] (Intercept) 113.9398761 x 0.7601234 > res$y2$coefs [,1] (Intercept) 91.580505 x -0.805897 > coefficients(lm(y1 ~ x)) # compare with separate results from lm() (Intercept) x 113.9398761 0.7601234 > coefficients(lm(y2 ~ x)) (Intercept) x 91.580505 -0.805897 Note that there might be numerically better ways to caculate $X^{+}$ and $H$, you could check a $QR$-decomposition instead. The SVD-approach is explained here on SE . I have not timed the above approach with big matrices $Y$ against actually using lm() .
{ "source": [ "https://stats.stackexchange.com/questions/11000", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1699/" ] }
11,009
Is it ever valid to include a two-way interaction in a model without including the main effects? What if your hypothesis is only about the interaction, do you still need to include the main effects?
In my experience, not only is it necessary to have all lower order effects in the model when they are connected to higher order effects, but it is also important to properly model (e.g., allowing to be nonlinear) main effects that are seemingly unrelated to the factors in the interactions of interest. That's because interactions between $x_1$ and $x_2$ can be stand-ins for main effects of $x_3$ and $x_4$. Interactions sometimes seem to be needed because they are collinear with omitted variables or omitted nonlinear (e.g., spline) terms.
{ "source": [ "https://stats.stackexchange.com/questions/11009", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2310/" ] }
11,087
I was just wondering why regression problems are called "regression" problems. What is the story behind the name? One definition for regression: "Relapse to a less perfect or developed state."
The term "regression" was used by Francis Galton in his 1886 paper "Regression towards mediocrity in hereditary stature". To my knowledge he only used the term in the context of regression toward the mean . The term was then adopted by others to get more or less the meaning it has today as a general statistical method.
{ "source": [ "https://stats.stackexchange.com/questions/11087", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3541/" ] }
11,109
If you have a variable which perfectly separates zeroes and ones in target variable, R will yield the following "perfect or quasi perfect separation" warning message: Warning message: glm.fit: fitted probabilities numerically 0 or 1 occurred We still get the model but the coefficient estimates are inflated. How do you deal with this in practice?
A solution to this is to utilize a form of penalized regression. In fact, this is the original reason some of the penalized regression forms were developed (although they turned out to have other interesting properties). Install and load the glmnet package in R and you're mostly ready to go. One of the less user-friendly aspects of glmnet is that you can only feed it matrices, not formulas as we're used to. However, you can look at model.matrix and the like to construct this matrix from a data.frame and a formula... Now, when you expect that this perfect separation is not just a byproduct of your sample, but could be true in the population, you specifically don't want to handle this: use this separating variable simply as the sole predictor for your outcome, not employing a model of any kind.
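A minimal sketch of this approach on a toy dataset with perfect separation (the ridge penalty, alpha = 0, is one arbitrary choice; the lasso or elastic net would work similarly):
# Penalized logistic regression when one predictor perfectly separates the outcome.
library(glmnet)

set.seed(1)
n  <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- as.integer(x1 > 0)            # x1 separates y perfectly

# Ordinary logistic regression: "fitted probabilities numerically 0 or 1" warning,
# and a huge, unstable coefficient for x1.
fit_glm <- glm(y ~ x1 + x2, family = binomial)
coef(fit_glm)

# Ridge-penalized logistic regression keeps the coefficients finite.
X <- cbind(x1, x2)                   # glmnet wants a matrix, not a formula
fit_pen <- cv.glmnet(X, y, family = "binomial", alpha = 0)
coef(fit_pen, s = "lambda.min")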
{ "source": [ "https://stats.stackexchange.com/questions/11109", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/333/" ] }
11,127
I have 2 dependent variables (DVs) each of whose score may be influenced by the set of 7 independent variables (IVs). DVs are continuous, while the set of IVs consists of a mix of continuous and binary coded variables. (In code below continuous variables are written in upper case letters and binary variables in lower case letters.) The aim of the study is to uncover how these DVs are influenced by IVs variables. I proposed the following multivariate multiple regression (MMR) model: my.model <- lm(cbind(A, B) ~ c + d + e + f + g + H + I) To interpret the results I call two statements: summary(manova(my.model)) Manova(my.model) Outputs from both calls are pasted below and are significantly different. Can somebody please explain which statement among the two should be picked to properly summarize the results of MMR, and why? Any suggestion would be greatly appreciated. Output using summary(manova(my.model)) statement: > summary(manova(my.model)) Df Pillai approx F num Df den Df Pr(>F) c 1 0.105295 5.8255 2 99 0.004057 ** d 1 0.085131 4.6061 2 99 0.012225 * e 1 0.007886 0.3935 2 99 0.675773 f 1 0.036121 1.8550 2 99 0.161854 g 1 0.002103 0.1043 2 99 0.901049 H 1 0.228766 14.6828 2 99 2.605e-06 *** I 1 0.011752 0.5887 2 99 0.556999 Residuals 100 --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Output using Manova(my.model) statement: > library(car) > Manova(my.model) Type II MANOVA Tests: Pillai test statistic Df test stat approx F num Df den Df Pr(>F) c 1 0.030928 1.5798 2 99 0.21117 d 1 0.079422 4.2706 2 99 0.01663 * e 1 0.003067 0.1523 2 99 0.85893 f 1 0.029812 1.5210 2 99 0.22355 g 1 0.004331 0.2153 2 99 0.80668 H 1 0.229303 14.7276 2 99 2.516e-06 *** I 1 0.011752 0.5887 2 99 0.55700 --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Briefly stated, this is because base-R's manova(lm()) uses sequential model comparisons for so-called Type I sum of squares, whereas car 's Manova() by default uses model comparisons for Type II sum of squares. I assume you're familiar with the model-comparison approach to ANOVA or regression analysis. This approach defines these tests by comparing a restricted model (corresponding to a null hypothesis) to an unrestricted model (corresponding to the alternative hypothesis). If you're not familiar with this idea, I recommend Maxwell & Delaney's excellent "Designing experiments and analyzing data" (2004). For type I SS, the restricted model in a regression analysis for your first predictor c is the null-model which only uses the absolute term: lm(Y ~ 1) , where Y in your case would be the multivariate DV defined by cbind(A, B) . The unrestricted model then adds predictor c , i.e. lm(Y ~ c + 1) . For type II SS, the unrestricted model in a regression analysis for your first predictor c is the full model which includes all predictors except for their interactions, i.e., lm(Y ~ c + d + e + f + g + H + I) . The restricted model removes predictor c from the unrestricted model, i.e., lm(Y ~ d + e + f + g + H + I) . Since both functions rely on different model comparisons, they lead to different results. The question which one is preferable is hard to answer - it really depends on your hypotheses. What follows assumes you're familiar with how multivariate test statistics like the Pillai-Bartlett Trace are calculated based on the null-model, the full model, and the pair of restricted-unrestricted models. For brevity, I only consider predictors c and H , and only test for c . N <- 100 # generate some data: number of subjects c <- rbinom(N, 1, 0.2) # dichotomous predictor c H <- rnorm(N, -10, 2) # metric predictor H A <- -1.4*c + 0.6*H + rnorm(N, 0, 3) # DV A B <- 1.4*c - 0.6*H + rnorm(N, 0, 3) # DV B Y <- cbind(A, B) # DV matrix my.model <- lm(Y ~ c + H) # the multivariate model summary(manova(my.model)) # from base-R: SS type I # Df Pillai approx F num Df den Df Pr(>F) # c 1 0.06835 3.5213 2 96 0.03344 * # H 1 0.32664 23.2842 2 96 5.7e-09 *** # Residuals 97 For comparison, the result from car 's Manova() function using SS type II. library(car) # for Manova() Manova(my.model, type="II") # Type II MANOVA Tests: Pillai test statistic # Df test stat approx F num Df den Df Pr(>F) # c 1 0.05904 3.0119 2 96 0.05387 . # H 1 0.32664 23.2842 2 96 5.7e-09 *** Now manually verify both results. Build the design matrix $X$ first and compare to R's design matrix. X <- cbind(1, c, H) XR <- model.matrix(~ c + H) all.equal(X, XR, check.attributes=FALSE) # [1] TRUE Now define the orthogonal projection for the full model ($P_{f} = X (X'X)^{-1} X'$, using all predictors). This gives us the matrix $W = Y' (I-P_{f}) Y$. Pf <- X %*% solve(t(X) %*% X) %*% t(X) Id <- diag(N) WW <- t(Y) %*% (Id - Pf) %*% Y Restricted and unrestricted models for SS type I plus their projections $P_{rI}$ and $P_{uI}$, leading to matrix $B_{I} = Y' (P_{uI} - P_{PrI}) Y$. XrI <- X[ , 1] PrI <- XrI %*% solve(t(XrI) %*% XrI) %*% t(XrI) XuI <- X[ , c(1, 2)] PuI <- XuI %*% solve(t(XuI) %*% XuI) %*% t(XuI) Bi <- t(Y) %*% (PuI - PrI) %*% Y Restricted and unrestricted models for SS type II plus their projections $P_{rI}$ and $P_{uII}$, leading to matrix $B_{II} = Y' (P_{uII} - P_{PrII}) Y$. 
XrII <- X[ , -2] PrII <- XrII %*% solve(t(XrII) %*% XrII) %*% t(XrII) PuII <- Pf Bii <- t(Y) %*% (PuII - PrII) %*% Y Pillai-Bartlett trace for both types of SS: trace of $(B + W)^{-1} B$. (PBTi <- sum(diag(solve(Bi + WW) %*% Bi))) # SS type I # [1] 0.0683467 (PBTii <- sum(diag(solve(Bii + WW) %*% Bii))) # SS type II # [1] 0.05904288 Note that the calculations for the orthogonal projections mimic the mathematical formula, but are a bad idea numerically. One should really use QR-decompositions or SVD in combination with crossprod() instead.
{ "source": [ "https://stats.stackexchange.com/questions/11127", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/609/" ] }
11,193
I have a data frame that contains some duplicate ids. I want to remove records with duplicate ids, keeping only the row with the maximum value of var_1. So for data structured like this (other variables not shown): id var_1 1 2 1 4 2 1 2 3 3 5 4 2 I want to generate this: id var_1 1 4 2 3 3 5 4 2 I know about unique() and duplicated(), but I can't figure out how to incorporate the maximization rule...
One way is to reverse-sort the data and use duplicated to drop all the duplicates. For me, this method is conceptually simpler than those that use apply. I think it should be very fast as well. # Some data to start with: z <- data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,5,2)) # id var # 1 2 # 1 4 # 2 1 # 2 3 # 3 5 # 4 2 # Reverse sort z <- z[order(z$id, z$var, decreasing=TRUE),] # id var # 4 2 # 3 5 # 2 3 # 2 1 # 1 4 # 1 2 # Keep only the first row for each duplicate of z$id; this row will have the # largest value for z$var z <- z[!duplicated(z$id),] # Sort so it looks nice z <- z[order(z$id, z$var),] # id var # 1 4 # 2 3 # 3 5 # 4 2 Edit: I just realized that the reverse sort above doesn't even need to sort on id at all. You could just use z[order(z$var, decreasing=TRUE),] instead and it will work just as well. One more thought... If the var column is numeric, then there's a simple way to sort so that id is ascending, but var is descending. This eliminates the need for the sort at the end (assuming you even wanted it to be sorted). z <- data.frame(id=c(1,1,2,2,3,4),var=c(2,4,1,3,5,2)) # Sort: id ascending, var descending z <- z[order(z$id, -z$var),] # Remove duplicates z <- z[!duplicated(z$id),] # id var # 1 4 # 2 3 # 3 5 # 4 2
{ "source": [ "https://stats.stackexchange.com/questions/11193", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4110/" ] }
11,210
I appreciate the usefulness of the bootstrap in obtaining uncertainty estimates, but one thing that's always bothered me about it is that the distribution corresponding to those estimates is the distribution defined by the sample. In general, it seems like a bad idea to believe that our sample frequencies look exactly like the underlying distribution, so why is it sound/acceptable to derive uncertainty estimates based on a distribution where the sample frequencies define the underlying distribution? On the other hand, this may be no worse (possibly better) than other distributional assumptions we typically make, but I'd still like to understand the justification a bit better.
There are several ways that one can conceivably apply the bootstrap. The two most basic approaches are what are deemed the "nonparametric" and "parametric" bootstrap. The second one assumes that the model you're using is (essentially) correct. Let's focus on the first one. We'll assume that you have a random sample $X_1, X_2, \ldots, X_n$ distributed according the the distribution function $F$. (Assuming otherwise requires modified approaches.) Let $\hat{F}_n(x) = n^{-1} \sum_{i=1}^n \mathbf{1}(X_i \leq x)$ be the empirical cumulative distribution function. Much of the motivation for the bootstrap comes from a couple of facts. Dvoretzky–Kiefer–Wolfowitz inequality $$ \renewcommand{\Pr}{\mathbb{P}} \Pr\big( \textstyle\sup_{x \in \mathbb{R}} \,|\hat{F}_n(x) - F(x)| > \varepsilon \big) \leq 2 e^{-2n \varepsilon^2} \> . $$ What this shows is that the empirical distribution function converges uniformly to the true distribution function exponentially fast in probability. Indeed, this inequality coupled with the Borel–Cantelli lemma shows immediately that $\sup_{x \in \mathbb{R}} \,|\hat{F}_n(x) - F(x)| \to 0$ almost surely. There are no additional conditions on the form of $F$ in order to guarantee this convergence. Heuristically, then, if we are interested in some functional $T(F)$ of the distribution function that is smooth , then we expect $T(\hat{F}_n)$ to be close to $T(F)$. (Pointwise) Unbiasedness of $\hat{F}_n(x)$ By simple linearity of expectation and the definition of $\hat{F}_n(x)$, for each $x \in \mathbb{R}$, $$ \newcommand{\e}{\mathbb{E}} \e_F \hat{F}_n(x) = F(x) \>. $$ Suppose we are interested in the mean $\mu = T(F)$. Then the unbiasedness of the empirical measure extends to the unbiasedness of linear functionals of the empirical measure. So, $$ \e_F T(\hat{F}_n) = \e_F \bar{X}_n = \mu = T(F) \> . $$ So $T(\hat{F}_n)$ is correct on average and since $\hat{F_n}$ is rapidly approaching $F$, then (heuristically), $T(\hat{F}_n)$ rapidly approaches $T(F)$. To construct a confidence interval ( which is, essentially, what the bootstrap is all about ), we can use the central limit theorem, the consistency of empirical quantiles and the delta method as tools to move from simple linear functionals to more complicated statistics of interest. Good references are B. Efron, Bootstrap methods: Another look at the jackknife , Ann. Stat. , vol. 7, no. 1, 1–26. B. Efron and R. Tibshirani, An Introduction to the Bootstrap , Chapman–Hall, 1994. G. A. Young and R. L. Smith, Essentials of Statistical Inference , Cambridge University Press, 2005, Chapter 11 . A. W. van der Vaart, Asymptotic Statistics , Cambridge University Press, 1998, Chapter 23 . P. Bickel and D. Freedman, Some asymptotic theory for the bootstrap . Ann. Stat. , vol. 9, no. 6 (1981), 1196–1217.
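To make the nonparametric bootstrap concrete, here is a minimal R sketch (the data are simulated purely for illustration): we resample with replacement from the observed sample, i.e. from $\hat{F}_n$, to approximate the sampling distribution of the plug-in estimate $T(\hat{F}_n)$.
set.seed(1)
x <- rexp(50)                                  # one observed sample from a skewed F (illustrative)
theta.hat <- mean(x)                           # T(F_n), the plug-in estimate of the mean
boot <- replicate(5000, mean(sample(x, replace = TRUE)))   # resampling from F_n
sd(boot)                                       # bootstrap estimate of the standard error
quantile(boot, c(0.025, 0.975))                # a simple percentile interval for the mean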
{ "source": [ "https://stats.stackexchange.com/questions/11210", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4733/" ] }
11,351
This is pretty hard for me to describe, but I'll try to make my problem understandable. So first you have to know that I've done a very simple linear regression so far. Before I estimated the coefficients, I looked at the distribution of my $y$. It is heavily left-skewed. After I estimated the model, I was quite sure I would observe left-skewed residuals in a QQ-plot as well, but I absolutely did not. What might be the reason for this result? Where is the mistake? Or does the distribution of $y$ have nothing to do with the distribution of the error term?
To answer your question, let's take a very simple example. The simple regression model is given by $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, where $\epsilon_i \sim N(0,\sigma^2)$. Now suppose that $x_i$ is dichotomous. If $\beta_1$ is not equal to zero, then the distribution of $y_i$ will not be normal, but actually a mixture of two normal distributions, one with mean $\beta_0$ and one with mean $\beta_0 + \beta_1$. If $\beta_1$ is large enough and $\sigma^2$ is small enough, then a histogram of $y_i$ will look bimodal. However, one can also get a histogram of $y_i$ that looks like a "single" skewed distribution. Here is one example (using R): xi <- rbinom(10000, 1, .2) yi <- 0 + 3 * xi + rnorm(10000, .7) hist(yi, breaks=20) qqnorm(yi); qqline(yi) It's not the distribution of $y_i$ that matters -- but the distribution of the error terms. res <- lm(yi ~ xi) hist(resid(res), breaks=20) qqnorm(resid(res)); qqline(resid(res)) And that looks perfectly normal -- not only figuratively speaking =)
{ "source": [ "https://stats.stackexchange.com/questions/11351", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4496/" ] }
11,353
I have hundreds of explanatory variables and under 100 observations (saturated data set). I'd like to create a linear model in which I have two or so composite variables made up of a dozen of the explanatory variables each. How do I find the best variables to use for the composites without going through every combination? Currently my $R^2 = .64$, I'd like to improve that. The model looks something like this: $$Y = B_1(v_1 + v_2 + \dots + v_{12}) + B_2(v_{13} + v_{14} + \dots +v_{24})$$ where $B_1$ and $B_2$ are the coefficients and the entirety of ($v_1 + v_2 + \dots + v_{12}$) is acting as one variable.
{ "source": [ "https://stats.stackexchange.com/questions/11353", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4798/" ] }
11,406
I am very new to R and to any packages in R. I looked at the ggplot2 documentation but could not find this. I want a box plot of variable boxthis with respect to two factors f1 and f2 . That is suppose both f1 and f2 are factor variables and each of them takes two values and boxthis is a continuous variable. I want to get 4 boxplots on a graph, each corresponding to one combination from the possible combinations that f1 and f2 can take. I think using the basic functionality in R, this can be done by > boxplot(boxthis ~ f1 * f2 , data = datasetname) Thanks in advance for any help.
I can think of two ways to accomplish this: 1. Create all combinations of f1 and f2 outside of the ggplot -function library(ggplot2) df <- data.frame(f1=factor(rbinom(100, 1, 0.45), label=c("m","w")), f2=factor(rbinom(100, 1, 0.45), label=c("young","old")), boxthis=rnorm(100)) df$f1f2 <- interaction(df$f1, df$f2) ggplot(aes(y = boxthis, x = f1f2), data = df) + geom_boxplot() 2. use colour/fill/etc. ggplot(aes(y = boxthis, x = f2, fill = f1), data = df) + geom_boxplot()
{ "source": [ "https://stats.stackexchange.com/questions/11406", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4820/" ] }
11,551
I want to browse a .rda file (R dataset). I know about the View(datasetname) command. The default R.app that comes for Mac does not have a very good browser for data (it opens a window in X11). I like the RStudio data browser that opens with the View command. However, it shows only 1000 rows and omits the remaining. ( UPDATE: RStudio viewer now shows all rows ) Is there a good browser that will show all rows in the data set and that you like/use.
Here are a few basic options, but like you, I can't say that I'm entirely happy with my current system. Avoid using the viewer; instead, use command-line tools to browse the data: head and tail for showing initial and final rows; str for an overview of variable types; dplyr::glimpse() for an overview of the variable types of all columns; basic extraction tools like [,1:5] to show the first five columns. Use a pager to display and navigate the data (e.g., page(foo, "print") ), possibly in conjunction with some variable extraction tools. This works fairly well on Linux, which uses less ; I'm not sure how it goes on Windows or Mac. Export to spreadsheet software : I quite like browsing data in Excel when it's set up as a table . It's easy to sort, filter, and highlight. See here for the function that I use to open a data.frame in a spreadsheet .
{ "source": [ "https://stats.stackexchange.com/questions/11551", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4820/" ] }
11,602
TL:DR : Is it ever a good idea to train an ML model on all the data available before shipping it to production? Put another way, is it ever ok to train on all data available and not check if the model overfits, or get a final read of the expected performance of the model? Say I have a family of models parametrized by $\alpha$ . I can do a search (e.g. a grid search) on $\alpha$ by, for example, running k-fold cross-validation for each candidate. The point of using cross-validation for choosing $\alpha$ is that I can check if a learned model $\beta_i$ for that particular $\alpha_i$ had e.g. overfit, by testing it on the "unseen data" in each CV iteration (a validation set). After iterating through all $\alpha_i$ 's, I could then choose a model $\beta_{\alpha^*}$ learned for the parameters $\alpha^*$ that seemed to do best on the grid search, e.g. on average across all folds. Now, say that after model selection I would like to use all the the data that I have available in an attempt to ship the best possible model in production. For this, I could use the parameters $\alpha^*$ that I chose via grid search with cross-validation, and then, after training the model on the full ( $F$ ) dataset, I would a get a single new learned model $\beta^{F}_{\alpha^*}$ The problem is that, if I use my entire dataset for training, I can't reliably check if this new learned model $\beta^{F}_{\alpha^*}$ overfits or how it may perform on unseen data. So is this at all good practice? What is a good way to think about this problem?
The way to think of cross-validation is as estimating the performance obtained using a method for building a model, rather than for estimating the performance of a model. If you use cross-validation to estimate the hyperparameters of a model (the $\alpha$s) and then use those hyper-parameters to fit a model to the whole dataset, then that is fine, provided that you recognise that the cross-validation estimate of performance is likely to be (possibly substantially) optimistically biased. This is because part of the model (the hyper-parameters) have been selected to minimise the cross-validation performance, so if the cross-validation statistic has a non-zero variance (and it will) there is the possibility of over-fitting the model selection criterion. If you want to choose the hyper-parameters and estimate the performance of the resulting model then you need to perform a nested cross-validation, where the outer cross-validation is used to assess the performance of the model, and in each fold cross-validation is used to determine the hyper-parameters separately in each fold. You build the final model by using cross-validation on the whole set to choose the hyper-parameters and then build the classifier on the whole dataset using the optimized hyper-parameters. This is of course computationally expensive, but worth it as the bias introduced by improper performance estimation can be large. See my paper G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, 2010. Research, vol. 11, pp. 2079-2107, July 2010. ( www , pdf ) However, it is still possible to have over-fitting in model selection (nested cross-validation just allows you to test for it). A method I have found useful is to add a regularisation term to the cross-validation error that penalises hyper-parameter values likely to result in overly-complex models, see G. C. Cawley and N. L. C. Talbot, Preventing over-fitting in model selection via Bayesian regularisation of the hyper-parameters, Journal of Machine Learning Research, volume 8, pages 841-861, April 2007. ( www , pdf ) So the answers to your question are (i) yes, you should use the full dataset to produce your final model as the more data you use the more likely it is to generalise well but (ii) make sure you obtain an unbiased performance estimate via nested cross-validation and potentially consider penalising the cross-validation statistic to further avoid over-fitting in model selection.
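For concreteness, here is a minimal base-R sketch of nested cross-validation; the data and the single "hyper-parameter" (the polynomial degree of a regression) are made up for illustration, so treat it as a template rather than a definitive implementation.
set.seed(1)
n <- 200
dat <- data.frame(x = runif(n, -2, 2))
dat$y <- sin(2 * dat$x) + rnorm(n, sd = 0.3)
degrees <- 1:8                                      # candidate hyper-parameter values

cv.mse <- function(d, degree, K = 5) {              # ordinary K-fold CV error for one degree
  fold <- sample(rep(1:K, length.out = nrow(d)))
  mse <- numeric(K)
  for (k in 1:K) {
    fit <- lm(y ~ poly(x, degree), data = d[fold != k, ])
    mse[k] <- mean((d$y[fold == k] - predict(fit, newdata = d[fold == k, ]))^2)
  }
  mean(mse)
}

outer.fold <- sample(rep(1:5, length.out = n))      # outer loop: assess the whole procedure
outer.mse <- numeric(5)
for (k in 1:5) {
  train <- dat[outer.fold != k, ]
  test  <- dat[outer.fold == k, ]
  inner <- sapply(degrees, function(dg) cv.mse(train, dg))  # tuning uses the training part only
  best  <- degrees[which.min(inner)]
  fit   <- lm(y ~ poly(x, best), data = train)
  outer.mse[k] <- mean((test$y - predict(fit, newdata = test))^2)
}
mean(outer.mse)                                     # honest estimate of generalisation error

best.overall <- degrees[which.min(sapply(degrees, function(dg) cv.mse(dat, dg)))]
final.model  <- lm(y ~ poly(x, best.overall), data = dat)   # final model: tune on, then fit to, all data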
{ "source": [ "https://stats.stackexchange.com/questions/11602", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2798/" ] }
11,609
My current understanding of the notion "confidence interval with confidence level $1 - \alpha$" is that if we tried to calculate the confidence interval many times (each time with a fresh sample), it would contain the correct parameter $1 - \alpha$ of the time. Though I realize that this is not the same as "probability that the true parameter lies in this interval", there's something I want to clarify. [Major Update] Before we calculate a 95% confidence interval, there is a 95% probability that the interval we calculate will cover the true parameter. After we've calculated the confidence interval and obtained a particular interval $[a,b]$, we can no longer say this. We can't even make some sort of non-frequentist argument that we're 95% sure the true parameter will lie in $[a,b]$; for if we could, it would contradict counterexamples such as this one: What, precisely, is a confidence interval? I don't want to make this a debate about the philosophy of probability; instead, I'm looking for a precise, mathematical explanation of the how and why seeing the particular interval $[a,b]$ changes (or doesn't change) the 95% probability we had before seeing that interval. If you argue that "after seeing the interval, the notion of probability no longer makes sense", then fine, let's work in an interpretation of probability in which it does make sense. More precisely: Suppose we program a computer to calculate a 95% confidence interval. The computer does some number crunching, calculates an interval, and refuses to show me the interval until I enter a password. Before I've entered the password and seen the interval (but after the computer has already calculated it), what's the probability that the interval will contain the true parameter? It's 95%, and this part is not up for debate : this is the interpretation of probability that I'm interested in for this particular question (I realize there are major philosophical issues that I'm suppressing, and this is intentional). But as soon as I type in the password and make the computer show me the interval it calculated, the probability (that the interval contains the true parameter) could change. Any claim that this probability never changes would contradict the counterexample above. In this counterexample, the probability could change from 50% to 100%, but... Are there any examples where the probability changes to something other than 100% or 0% (EDIT: and if so, what are they)? Are there any examples where the probability doesn't change after seeing the particular interval $[a,b]$ (i.e. the probability that the true parameter lies in $[a,b]$ is still 95%)? How (and why) does the probability change in general after seeing the computer spit out $[a,b]$? [Edit] Thanks for all the great answers and helpful discussions!
I think the fundamental problem is that frequentist statistics can only assign a probability to something that can have a long run frequency. Whether the true value of a parameter lies in a particular interval or not doesn't have a long run frequency, because we can only perform the experiment once, so you can't assign a frequentist probability to it. The problem arises from the definition of a probability. If you change the definition of a probability to a Bayesian one, then the problem instantly disappears as you are no longer tied to discussion of long run frequencies. See my (rather tongue in cheek) answer to a related question here : " A Frequentist is someone that believes probabilities represent long run frequencies with which events occur; if needs be, he will invent a fictitious population from which your particular situation could be considered a random sample so that he can meaningfully talk about long run frequencies. If you ask him a question about a particular situation, he will not give a direct answer, but instead make a statement about this (possibly imaginary) population. " In the case of a confidence interval, the question we normally would like to ask (unless we have a problem in quality control for example) is "given this sample of data, return the smallest interval that contains the true value of the parameter with probability X". However a frequentist can't do this as the experiment is only performed once and so there are no long run frequencies that can be used to assign a probability. So instead the frequentist has to invent a population of experiments (that you didn't perform) from which the experiment you did perform can be considered a random sample. The frequentist then gives you an indirect answer about that fictitious population of experiments, rather than a direct answer to the question you really wanted to ask about a particular experiment. Essentially it is a problem of language: the frequentist definition of a population simply doesn't allow discussion of the probability of the true value of a parameter lying in a particular interval. That doesn't mean frequentist statistics are bad, or not useful, but it is important to know the limitations. Regarding the major update, I am not sure we can say that "Before we calculate a 95% confidence interval, there is a 95% probability that the interval we calculate will cover the true parameter" within a frequentist framework. There is an implicit inference here that the long run frequency with which the true value of the parameter lies in confidence intervals constructed by some particular method is also the probability that the true value of the parameter will lie in the confidence interval for the particular sample of data we are going to use. This is a perfectly reasonable inference, but it is a Bayesian inference, not a frequentist one, as the probability that the true value of the parameter lies in the confidence interval that we construct for a particular sample of data has no long run frequency, as we only have one sample of data. This is exactly the danger of frequentist statistics: common sense reasoning about probability is generally Bayesian, in that it is about the degree of plausibility of a proposition. We can however "make some sort of non-frequentist argument that we're 95% sure the true parameter will lie in [a,b]"; that is exactly what a Bayesian credible interval is, and for many problems the Bayesian credible interval exactly coincides with the frequentist confidence interval.
"I don't want to make this a debate about the philosophy of probability", sadly this is unavoidable, the reason you can't assign a frequentist probability to whether the true value of the statistic lies in the confidence interval is a direct consequence of the frequentist philosophy of probability. Frequentists can only assign probabilities to things that can have long run frequencies, as that is how frequentists define probability in their philosophy. That doesn't make frequentist philosophy wrong, but it is important to understand the bounds imposed by the definition of a probability. "Before I've entered the password and seen the interval (but after the computer has already calculated it), what's the probability that the interval will contain the true parameter? It's 95%, and this part is not up for debate:" This is incorrect, or at least in making such a statement, you have departed from the framework of frequentist statistics and have made a Bayesian inference involving a degree of plausibility in the truth of a statement, rather than a long run frequency. However, as I have said earlier, it is a perfectly reasonable and natural inference. Nothing has changed before or after entering the password, because niether event can be assigned a frequentist probability. Frequentist statistics can be rather counter-intuitive as we often want to ask questions about degrees of plausibility of statements regarding particular events, but this lies outside the remit of frequentist statistics, and this is the origin of most misinterpretations of frequentist procedures.
{ "source": [ "https://stats.stackexchange.com/questions/11609", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4895/" ] }
11,659
In my job role I often work with other people's datasets, non-experts bring me clinical data and I help them to summarise it and perform statistical tests. The problem I am having is that the datasets I am brought are almost always riddled with typos, inconsistencies, and all sorts of other problems. I am interested to know if other people have standard tests which they do to try to check any datasets that come in. I used to draw histograms of each variable just to have a look but I now realise there are lots of horrible errors that can survive this test. For example, I had a repeated measures dataset the other day where, for some individuals, the repeated measure was identical at Time 2 as it was at Time 1. This was subsequently proved to be incorrect, as you would expect. Another dataset had an individual who went from being very severely disordered (represented by a high score) to being problem-free, represented by 0's across the board. This is just impossible, although I couldn't prove it definitively. So what basic tests can I run on each dataset to make sure that they don't have typos and they don't contain impossible values? Thanks in advance!
It helps to understand how the data were recorded. Let me share a story . Once, long ago, many datasets were stored only in fading hardcopy. In those dark days I contracted with an organization (of great pedigree and size; many of you probably own its stock) to computerize about 10^5 records of environmental monitoring data at one of its manufacturing plants. To do this, I personally marked up a shelf of laboratory reports (to show where the data were), created data entry forms, and contracted with a temp agency for literate workers to type the data into the forms. (Yes, you had to pay extra for people who could read.) Due to the value and sensitivity of the data, I conducted this process in parallel with two workers at a time (who usually changed from day to day). It took a couple of weeks. I wrote software to compare the two sets of entries, systematically identifying and correcting all the errors that showed up. Boy were there errors! What can go wrong? A good way to describe and measure errors is at the level of the basic record, which in this situation was a description of a single analytical result (the concentration of some chemical, often) for a particular sample obtained at a given monitoring point on a given date. In comparing the two datasets, I found: Errors of omission : one dataset would include a record, another would not. This usually happened because either (a) a line or two would be overlooked at the bottom of a page or (b) an entire page would be skipped. Apparent errors of omission that were really data-entry mistakes. A record is identified by a monitoring point name, a date, and the "analyte" (usually a chemical name). If any of these has a typographical error, it will not be matched to the other records with which it is related. In effect, the correct record disappears and an incorrect record appears. Fake duplication . The same results can appear in multiple sources, be transcribed multiple times, and seem to be true repeated measures when they are not. Duplicates are straightforward to detect, but deciding whether they are erroneous depends on knowing whether duplicates should even appear in the dataset. Sometimes you just can't know. Frank data-entry errors . The "good" ones are easy to catch because they change the type of the datum: using the letter "O" for the digit "0", for instance, turns a number into a non-number. Other good errors change the value so much it can readily be detected with statistical tests. (In one case, the leading digit in "1,000,010 mg/Kg" was cut off, leaving a value of 10. That's an enormous change when you're talking about a pesticide concentration!) The bad errors are hard to catch because they change a value into one that fits (sort of) with the rest of the data, such as typing "80" for "50". (This kind of mistake happens with OCR software all the time.) Transpositions . The right values can be entered but associated with the wrong record keys. This is insidious, because the global statistical characteristics of the dataset might remain unaltered, but spurious differences can be created between groups. Probably only a mechanism like double-entry is even capable of detecting these errors. Once you are aware of these errors and know, or have a theory, of how they occur, you can write scripts to troll your datasets for the possible presence of such errors and flag them for further attention. 
You cannot always resolve them, but at least you can include a "comment" or "quality flag" field to accompany the data throughout their later analysis. Since that time I have paid attention to data quality issues and have had many more opportunities to make comprehensive checks of large statistical datasets. None is perfect; they all benefit from quality checks. Some of the principles I have developed over the years for doing this include Whenever possible, create redundancy in data entry and data transcription procedures: checksums, totals, repeated entries: anything to support automatic internal checks of consistency. If possible, create and exploit another database which describes what the data should look like: that is, computer-readable metadata. For instance, in a drug experiment you might know in advance that every patient will be seen three times. This enables you to create a database with all the correct records and their identifiers with the values just waiting to be filled in. Fill them in with the data given you and then check for duplicates, omissions, and unexpected data. Always normalize your data (specifically, get them into at least fourth normal form ), regardless of how you plan to format the dataset for analysis. This forces you to create tables of every conceptually distinct entity you are modeling. (In the environmental case, this would include tables of monitoring locations, samples, chemicals (properties, typical ranges, etc.), tests of those samples (a test usually covers a suite of chemicals), and the individual results of those tests. In so doing you create many effective checks of data quality and consistency and identify many potentially missing or duplicate or inconsistent values. This effort (which requires good data processing skills but is straightforward) is astonishingly effective. If you aspire to analyze large or complex datasets and do not have good working knowledge of relational databases and their theory, add that to your list of things to be learned as soon as possible. It will pay dividends throughout your career. Always perform as many "stupid" checks as you possibly can . These are automated verification of obvious things such that dates fall into their expected periods, the counts of patients (or chemicals or whatever) always add up correctly, that values are always reasonable (e.g., a pH must be between 0 and 14 and maybe in a much narrower range for, say, blood pH readings), etc. This is where domain expertise can be the most help: the statistician can fearlessly ask stupid questions of the experts and exploit the answers to check the data. Much more can be said of course--the subject is worth a book--but this should be enough to stimulate ideas.
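As a small illustration of the "stupid checks" idea, here is the kind of base-R snippet one might run; the data frame dat and its columns (id, visit, date, ph) are hypothetical names chosen for the example, as are the study dates, so adapt them to your own files.
colSums(is.na(dat))                                   # missing values per column
sum(duplicated(dat[c("id", "visit")]))                # duplicated records that shouldn't exist
table(table(dat$id))                                  # visits per patient: should all equal, say, 3
subset(dat, !is.na(ph) & (ph < 0 | ph > 14))          # values outside a plausible range, flagged for review
range(dat$date)                                       # quick look: do the dates make sense?
stopifnot(dat$date >= as.Date("2010-01-01"),          # hard check: fail loudly if any date
          dat$date <= as.Date("2011-12-31"))          # falls outside the assumed study period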
{ "source": [ "https://stats.stackexchange.com/questions/11659", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/199/" ] }
11,691
How would you know if your (high dimensional) data exhibits enough clustering so that results from kmeans or other clustering algorithm is actually meaningful? For k-means algorithm in particular, how much of a reduction in within-cluster variance should there be for the actual clustering results to be meaningful (and not spurious)? Should clustering be apparent when a dimensionally-reduced form of the data is plotted, and are the results from kmeans (or other methods) meaningless if the clustering cannot be visualized?
About k-means specifically, you can use the Gap statistics. Basically, the idea is to compute a goodness of clustering measure based on average dispersion compared to a reference distribution for an increasing number of clusters. More information can be found in the original paper: Tibshirani, R., Walther, G., and Hastie, T. (2001). Estimating the numbers of clusters in a data set via the gap statistic . J. R. Statist. Soc. B, 63(2): 411-423. The answer that I provided to a related question highlights other general validity indices that might be used to check whether a given dataset exhibits some kind of a structure. When you don't have any idea of what you would expect to find if there was noise only, a good approach is to use resampling and study clusters stability. In other words, resample your data (via bootstrap or by adding small noise to it) and compute the "closeness" of the resulting partitions, as measured by Jaccard similarities. In short, it allows to estimate the frequency with which similar clusters were recovered in the data. This method is readily available in the fpc R package as clusterboot() . It takes as input either raw data or a distance matrix, and allows to apply a wide range of clustering methods (hierarchical, k-means, fuzzy methods). The method is discussed in the linked references: Hennig, C. (2007) Cluster-wise assessment of cluster stability . Computational Statistics and Data Analysis , 52, 258-271. Hennig, C. (2008) Dissolution point and isolation robustness: robustness criteria for general cluster analysis methods . Journal of Multivariate Analysis , 99, 1154-1176. Below is a small demonstration with the k-means algorithm. sim.xy <- function(n, mean, sd) cbind(rnorm(n, mean[1], sd[1]), rnorm(n, mean[2],sd[2])) xy <- rbind(sim.xy(100, c(0,0), c(.2,.2)), sim.xy(100, c(2.5,0), c(.4,.2)), sim.xy(100, c(1.25,.5), c(.3,.2))) library(fpc) km.boot <- clusterboot(xy, B=20, bootmethod="boot", clustermethod=kmeansCBI, krange=3, seed=15555) The results are quite positive in this artificial (and well structured) dataset since none of the three clusters ( krange ) were dissolved across the samples, and the average clusterwise Jaccard similarity is > 0.95 for all clusters. Below are the results on the 20 bootstrap samples. As can be seen, statistical units tend to stay grouped into the same cluster, with few exceptions for those observations lying in between. You can extend this idea to any validity index, of course: choose a new series of observations by bootstrap (with replacement), compute your statistic (e.g., silhouette width, cophenetic correlation, Hubert's gamma, within sum of squares) for a range of cluster numbers (e.g., 2 to 10), repeat 100 or 500 times, and look at the boxplot of your statistic as a function of the number of cluster. Here is what I get with the same simulated dataset, but using Ward's hierarchical clustering and considering the cophenetic correlation (which assess how well distance information are reproduced in the resulting partitions) and silhouette width (a combination measure assessing intra-cluster homogeneity and inter-cluster separation). The cophenetic correlation ranges from 0.6267 to 0.7511 with a median value of 0.7031 (500 bootstrap samples). Silhouette width appears to be maximal when we consider 3 clusters (median 0.8408, range 0.7371-0.8769).
{ "source": [ "https://stats.stackexchange.com/questions/11691", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2973/" ] }
11,707
According to the Wikipedia article on unbiased estimation of standard deviation the sample SD $$s = \sqrt{\frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2}$$ is a biased estimator of the SD of the population. It states that $E(\sqrt{s^2}) \neq \sqrt{E(s^2)}$. NB. Random variables are independent and each $x_{i} \sim N(\mu,\sigma^{2})$ My question is two-fold: What is the proof of the biasedness? How does one compute the expectation of the sample standard deviation My knowledge of maths/stats is only intermediate.
You don't need normality. All you need is that $$s^2 = \frac{1}{n-1} \sum_{i=1}^n(x_i - \bar{x})^2$$ is an unbiased estimator of the variance $\sigma^2$. Then use that the square root function is strictly concave such that (by a strong form of Jensen's inequality ) $$E(\sqrt{s^2}) < \sqrt{E(s^2)} = \sigma$$ unless the distribution of $s^2$ is degenerate at $\sigma^2$.
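A quick simulation makes the bias visible. Note that the closed-form value in the last line holds only when the data really are normal, whereas the direction of the bias follows from Jensen's inequality alone, as argued above.
set.seed(1)
sigma <- 2; n <- 10
s <- replicate(50000, sd(rnorm(n, mean = 0, sd = sigma)))
mean(s)                                                          # noticeably below sigma = 2
sigma * sqrt(2 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)    # exact E(s) for normal data, about 1.945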
{ "source": [ "https://stats.stackexchange.com/questions/11707", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4937/" ] }
11,812
I am using a ranksum test to compare the median of two samples ($n=120000$) and have found that they are significantly different with: p = 1.12E-207 . Should I be suspicious of such a small $p$-value or should I attribute it to the high statistical power associated with having a very large sample? Is there any such thing as a suspiciously low $p$-value?
P-values on standard computers (using IEEE double precision floats) can get as low as approximately $10^{-303}$. These can be legitimately correct calculations when effect sizes are large and/or standard errors are low. Your value, if computed with a T or normal distribution, corresponds to an effect size of about 31 standard errors. Remembering that standard errors usually scale with the reciprocal square root of $n$, that reflects a difference of less than 0.09 standard deviations (assuming all samples are independent). In most applications, there would be nothing suspicious or unusual about such a difference. Interpreting such p-values is another matter. Viewing a number as small as $10^{-207}$ or even $10^{-10}$ as a probability is exceeding the bounds of reason, given all the ways in which reality is likely to deviate from the probability model that underpins this p-value calculation. A good choice is to report the p-value as being less than the smallest threshold you feel the model can reasonably support: often between $0.01$ and $0.0001$.
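The arithmetic behind those numbers is easy to reproduce in R (the sample size 120000 is the one quoted in the question; the normal approximation is only a rough stand-in for the exact rank-sum reference distribution).
qnorm(1.12e-207 / 2)      # implied z-score: about -31, i.e. roughly 31 standard errors
31 / sqrt(120000)         # with n = 120000, that is on the order of 0.09 standard deviations
2 * pnorm(-31)            # around 5e-211: still comfortably representable in double precision
.Machine$double.xmin      # smallest positive normalised double, about 2.2e-308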
{ "source": [ "https://stats.stackexchange.com/questions/11812", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4054/" ] }
11,859
What is the difference between a multiclass problem and a multilabel problem?
I suspect the difference is that in multi-class problems the classes are mutually exclusive, whereas for multi-label problems each label represents a different classification task, but the tasks are somehow related (so there is a benefit in tackling them together rather than separately). For example, in the famous Leptograpsus crabs dataset there are examples of males and females of two colour forms of crab. You could approach this as a multi-class problem with four classes (male-blue, female-blue, male-orange, female-orange) or as a multi-label problem, where one label would be male/female and the other blue/orange. Essentially in multi-label problems a pattern can belong to more than one class.
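For illustration (assuming the crabs data shipped with the MASS package is the dataset meant here), the same data can be framed either way:
library(MASS)                                    # crabs: 200 crabs, sp = colour form (B/O), sex = M/F
data(crabs)
y.multiclass <- interaction(crabs$sex, crabs$sp)             # one response with four mutually exclusive classes
y.multilabel <- data.frame(male   = crabs$sex == "M",        # two related binary labels,
                           orange = crabs$sp  == "O")        # predicted jointly in a multi-label setup
table(y.multiclass)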
{ "source": [ "https://stats.stackexchange.com/questions/11859", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4290/" ] }
11,924
How can I add a new variable to a data frame which will be the percentile rank of one of the existing variables? I can do this in Excel easily, but I really want to do it in R. Thanks
Given a vector of raw data values, a simple function might look like perc.rank <- function(x, xo) length(x[x <= xo])/length(x)*100 where xo is the value for which we want the percentile rank, given the vector x , as suggested on R-bloggers . However, it might easily be vectorized as perc.rank <- function(x) trunc(rank(x))/length(x) which has the advantage of not having to pass each value (note that this version returns a proportion between 0 and 1 rather than a percentage; multiply by 100 if you prefer). So, here is an example of use: my.df <- data.frame(x=rnorm(200)) my.df <- within(my.df, xr <- perc.rank(x))
{ "source": [ "https://stats.stackexchange.com/questions/11924", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/333/" ] }
11,985
I have 5 variables and I'm trying to predict my target variable which must be within the range 0 to 70. How do I use this piece of information to model my target better?
You don't necessarily have to do anything. It's possible the predictor will work fine. Even if the predictor extrapolates to values outside the range, possibly clamping the predictions to the range (that is, use $\max(0, \min(70, \hat{y}))$ instead of $\hat{y}$) will do well. Cross-validate the model to see whether this works. However, the restricted range raises the possibility of a nonlinear relationship between the dependent variable ($y$) and the independent variables ($x_i$). Some additional indicators of this include: Greater variation in residual values when $\hat{y}$ is in the middle of its range, compared to variation in residuals at either end of the range. Theoretical reasons for specific non-linear relationships. Evidence of model mis-specification (obtained in the usual ways). Significance of quadratic or high-order terms in the $x_i$. Consider a nonlinear re-expression of $y$ in case any of these conditions hold. There are many ways to re-express $y$ to create more linear relationships with the $x_i$. For instance, any increasing function $f$ defined on the interval $[0,70]$ can be "folded" to create a symmetric increasing function via $y \to f(y) - f(70-y)$. If $f$ becomes arbitrarily large and negative as its argument approaches $0$, the folded version of $f$ will map $[0,70]$ into all the real numbers. Examples of such functions include the logarithm and any negative power. Using the logarithm is equivalent to the "logit link" recommended by @user603. Another way is to let $G$ be the inverse CDF of any probability distribution and define $f(y) = G(y/70)$. Using a Normal distribution gives the "probit" transformation. One way to exploit families of transformations is to experiment: try a likely transformation, perform a quick regression of the transformed $y$ against the $x_i$, and test the residuals: they should appear to be independent of the predicted values of $y$ (homoscedastic and uncorrelated). These are signs of a linear relationship with the independent variables. It helps, too, if the residuals of the back-transformed predicted values tend to be small. This indicates the transformation has improved the fit. To resist the effects of outliers, use robust regression methods such as iteratively reweighted least squares .
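Here is a hedged sketch of both routes in R. The data frame df with response y in [0, 70] and predictors x1..x5 is hypothetical, and the offset eps is only there to keep the transform finite at the boundaries; treat the whole thing as a starting point to cross-validate, not a recipe.
fit.raw <- lm(y ~ x1 + x2 + x3 + x4 + x5, data = df)        # route 1: fit on the raw scale...
yhat1   <- pmax(0, pmin(70, predict(fit.raw)))              # ...and clamp predictions to [0, 70]

eps   <- 0.5
df$z  <- log((df$y + eps) / (70 - df$y + eps))              # route 2: folded logit-type re-expression
fit.z <- lm(z ~ x1 + x2 + x3 + x4 + x5, data = df)
zhat  <- predict(fit.z)
yhat2 <- (70 + 2 * eps) * plogis(zhat) - eps                # back-transform to the 0-70 scale
plot(fitted(fit.z), resid(fit.z))                           # residual check: should look patternless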
{ "source": [ "https://stats.stackexchange.com/questions/11985", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/333/" ] }
12,029
Thanks to Tormod question (posted here ) I came across the Parallel Sets plot. Here is an example for how it looks: (It is a visualization of the Titanic dataset. Showing, for example, how most of the women that didn't survive belonged to the third class...) I would love to be able to reproduce such a plot with R. Is that possible to do? Thanks, Tal
Here's a version using only base graphics, thanks to Hadley's comment. (For previous version, see edit history). parallelset <- function(..., freq, col="gray", border=0, layer, alpha=0.5, gap.width=0.05) { p <- data.frame(..., freq, col, border, alpha, stringsAsFactors=FALSE) n <- nrow(p) if(missing(layer)) { layer <- 1:n } p$layer <- layer np <- ncol(p) - 5 d <- p[ , 1:np, drop=FALSE] p <- p[ , -c(1:np), drop=FALSE] p$freq <- with(p, freq/sum(freq)) col <- col2rgb(p$col, alpha=TRUE) if(!identical(alpha, FALSE)) { col["alpha", ] <- p$alpha*256 } p$col <- apply(col, 2, function(x) do.call(rgb, c(as.list(x), maxColorValue = 256))) getp <- function(i, d, f, w=gap.width) { a <- c(i, (1:ncol(d))[-i]) o <- do.call(order, d[a]) x <- c(0, cumsum(f[o])) * (1-w) x <- cbind(x[-length(x)], x[-1]) gap <- cumsum( c(0L, diff(as.numeric(d[o,i])) != 0) ) gap <- gap / max(gap) * w (x + gap)[order(o),] } dd <- lapply(seq_along(d), getp, d=d, f=p$freq) par(mar = c(0, 0, 2, 0) + 0.1, xpd=TRUE ) plot(NULL, type="n",xlim=c(0, 1), ylim=c(np, 1), xaxt="n", yaxt="n", xaxs="i", yaxs="i", xlab='', ylab='', frame=FALSE) for(i in rev(order(p$layer)) ) { for(j in 1:(np-1) ) polygon(c(dd[[j]][i,], rev(dd[[j+1]][i,])), c(j, j, j+1, j+1), col=p$col[i], border=p$border[i]) } text(0, seq_along(dd), labels=names(d), adj=c(0,-2), font=2) for(j in seq_along(dd)) { ax <- lapply(split(dd[[j]], d[,j]), range) for(k in seq_along(ax)) { lines(ax[[k]], c(j, j)) text(ax[[k]][1], j, labels=names(ax)[k], adj=c(0, -0.25)) } } } data(Titanic) myt <- subset(as.data.frame(Titanic), Age=="Adult", select=c("Survived","Sex","Class","Freq")) myt <- within(myt, { Survived <- factor(Survived, levels=c("Yes","No")) levels(Class) <- c(paste(c("First", "Second", "Third"), "Class"), "Crew") color <- ifelse(Survived=="Yes","#008888","#330066") }) with(myt, parallelset(Survived, Sex, Class, freq=Freq, col=color, alpha=0.2))
{ "source": [ "https://stats.stackexchange.com/questions/12029", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/253/" ] }
12,053
I've learnt that I must test for normality not on the raw data but on their residuals. Should I calculate the residuals and then do the Shapiro–Wilk W test? Are residuals calculated as $X_i - \text{mean}$? Please see this previous question for my data and the design.
Why must you test for normality? The standard assumption in linear regression is that the theoretical residuals are independent and normally distributed. The observed residuals are an estimate of the theoretical residuals, but are not independent (there are transforms on the residuals that remove some of the dependence, but still give only an approximation of the true residuals). So a test on the observed residuals does not guarantee that the theoretical residuals match. If the theoretical residuals are not exactly normally distributed, but the sample size is large enough then the Central Limit Theorem says that the usual inference (tests and confidence intervals, but not necessarily prediction intervals) based on the assumption of normality will still be approximately correct. Also note that the tests of normality are rule out tests, they can tell you that the data is unlikely to have come from a normal distribution. But if the test is not significant that does not mean that the data came from a normal distribution, it could also mean that you just don't have enough power to see the difference. Larger sample sizes give more power to detect the non-normality, but larger samples and the CLT mean that the non-normality is least important. So for small sample sizes the assumption of normality is important but the tests are meaningless, for large sample sizes the tests may be more accurate, but the question of exact normality becomes meaningless. So combining all the above, what is more important than a test of exact normality is an understanding of the science behind the data to see if the population is close enough to normal. Graphs like qqplots can be good diagnostics, but understanding of the science is needed as well. If there is concern that there is too much skewness or potential for outliers, then non-parametric methods are available that do not require the normality assumption.
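To answer the practical part of the question: in R the residuals are obtained from the fitted model (each observation minus its fitted value, which in a one-way design is its own group mean, not the grand mean), and those are what you would inspect. The model and data names below are hypothetical placeholders for your design.
fit <- lm(response ~ group, data = mydata)    # or aov(...) for the ANOVA formulation
r <- residuals(fit)                           # y_i minus its fitted value (its group mean here)
qqnorm(r); qqline(r)                          # the graphical check, usually more informative
shapiro.test(r)                               # Shapiro-Wilk on the residuals, with the caveats above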
{ "source": [ "https://stats.stackexchange.com/questions/12053", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5003/" ] }
12,060
I have a dataset: the X variable is date (from April to October) and the Y variable is vegetation biomass. In my study area, the growing season starts around April when vegetation biomass is low, peaks around the end of August when biomass is highest, and finishes around October. The purpose is to determine the exact date when the vegetation biomass increases fastest at the start of the growing season; it should be in April. First, I fitted a four-parameter logistic (sigmoid) curve, which was the best fit for this dataset, and obtained its formula. Now, how can I calculate the date of maximum biomass increase from the fitted curve to accomplish the purpose mentioned above? Thanks a lot.
{ "source": [ "https://stats.stackexchange.com/questions/12060", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5077/" ] }
12,174
Inspired by Peter Donnelly's talk at TED , in which he discusses how long it would take for a certain pattern to appear in a series of coin tosses, I created the following script in R. Given two patterns 'hth' and 'htt', it calculates how long it takes (i.e. how many coin tosses) on average before you hit one of these patterns. coin <- c('h','t') hit <- function(seq) { miss <- TRUE fail <- 3 trp <- sample(coin,3,replace=T) while (miss) { if (all(seq == trp)) { miss <- FALSE } else { trp <- c(trp[2],trp[3],sample(coin,1,T)) fail <- fail + 1 } } return(fail) } n <- 5000 trials <- data.frame("hth"=rep(NA,n),"htt"=rep(NA,n)) hth <- c('h','t','h') htt <- c('h','t','t') set.seed(4321) for (i in 1:n) { trials[i,] <- c(hit(hth),hit(htt)) } summary(trials) The summary statistics are as follows, hth htt Min. : 3.00 Min. : 3.000 1st Qu.: 4.00 1st Qu.: 5.000 Median : 8.00 Median : 7.000 Mean :10.08 Mean : 8.014 3rd Qu.:13.00 3rd Qu.:10.000 Max. :70.00 Max. :42.000 In the talk it is explained that the average number of coin tosses would be different for the two patterns; as can be seen from my simulation. Despite watching the talk a few times I'm still not quite getting why this would be the case. I understand that 'hth' overlaps itself and intuitively I would think that you would hit 'hth' sooner than 'htt', but this is not the case. I would really appreciate it if someone could explain this to me.
Think about what happens the first time you get an H followed by a T. Case 1: you're looking for H-T-H , and you've seen H-T for the first time. If the next toss is H, you're done. If it's T, you're back to square one: since the last two tosses were T-T you now need the full H-T-H. Case 2: you're looking for H-T-T , and you've seen H-T for the first time. If the next toss is T, you're done. If it's H, this is clearly a setback; however, it's a minor one since you now have the H and only need -T-T. If the next toss is H, this makes your situation no worse, whereas T makes it better, and so on. Put another way, in case 2 the first H that you see takes you 1/3 of the way, and from that point on you never have to start from scratch. This is not true in case 1, where a T-T erases all progress you've made.
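This intuition can be checked exactly with a first-step (Markov chain) argument. With states "no useful progress", "H" and "HT", the expected numbers of remaining tosses satisfy a small linear system that base R can solve; the first element of each solution is the expected wait from scratch and reproduces the 10 vs. 8 seen approximately in the simulation above.
# unknowns: (E0, EH, EHT) = expected further tosses from "nothing", "H", "HT"
A.hth <- rbind(c( 0.5, -0.5,  0.0),   # E0  = 1 + 0.5*EH  + 0.5*E0
               c( 0.0,  0.5, -0.5),   # EH  = 1 + 0.5*EH  + 0.5*EHT
               c(-0.5,  0.0,  1.0))   # EHT = 1 + 0.5*0   + 0.5*E0   (a tail sends you back to scratch)
solve(A.hth, rep(1, 3))               # 10 8 6  ->  expected wait for HTH is 10

A.htt <- rbind(c( 0.5, -0.5,  0.0),   # E0  = 1 + 0.5*EH  + 0.5*E0
               c( 0.0,  0.5, -0.5),   # EH  = 1 + 0.5*EH  + 0.5*EHT
               c( 0.0, -0.5,  1.0))   # EHT = 1 + 0.5*0   + 0.5*EH   (a head only sets you back to "H")
solve(A.htt, rep(1, 3))               # 8 6 4   ->  expected wait for HTT is 8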
{ "source": [ "https://stats.stackexchange.com/questions/12174", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4524/" ] }
12,187
Nearly every decision tree example I've come across happens to be a binary tree. Is this pretty much universal? Do most of the standard algorithms (C4.5, CART, etc.) only support binary trees? From what I gather, CHAID is not limited to binary trees, but that seems to be an exception. A two-way split followed by another two-way split on one of the children is not the same thing as a single three-way split. This might be an academic point, but I'm trying to make sure I understand the most common use-cases.
This is mainly a technical issue: if you don't restrict to binary choices, there are simply too many possibilities for the next split in the tree. So you are definitely right in all the points made in your question. Be aware that most tree-type algorithms work stepwise and even as such are not guaranteed to give the best possible result; this is just one extra caveat. For most practical purposes (though not during the building/pruning of the tree), the two kinds of splits are equivalent, given that a sequence of binary splits appearing immediately after each other can represent the same partition as a single multiway split.
{ "source": [ "https://stats.stackexchange.com/questions/12187", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2485/" ] }
12,209
I was wondering, given two normal distributions with $\sigma_1,\ \mu_1$ and $\sigma_2, \ \mu_2$ how can I calculate the percentage of overlapping regions of two distributions? I suppose this problem has a specific name, are you aware of any particular name describing this problem? Are you aware of any implementation of this (e.g., Java code)?
This is also often called the "overlapping coefficient" (OVL). Googling for this will give you lots of hits. You can find a nomogram for the bi-normal case here . A useful paper may be: Henry F. Inman; Edwin L. Bradley Jr (1989). The overlapping coefficient as a measure of agreement between probability distributions and point estimation of the overlap of two normal densities. Communications in Statistics - Theory and Methods, 18(10), 3851-3874. ( Link ) Edit Now you got me interested in this more, so I went ahead and created R code to compute this (it's a simple integration). I threw in a plot of the two distributions, including the shading of the overlapping region: min.f1f2 <- function(x, mu1, mu2, sd1, sd2) { f1 <- dnorm(x, mean=mu1, sd=sd1) f2 <- dnorm(x, mean=mu2, sd=sd2) pmin(f1, f2) } mu1 <- 2; sd1 <- 2 mu2 <- 1; sd2 <- 1 xs <- seq(min(mu1 - 3*sd1, mu2 - 3*sd2), max(mu1 + 3*sd1, mu2 + 3*sd2), .01) f1 <- dnorm(xs, mean=mu1, sd=sd1) f2 <- dnorm(xs, mean=mu2, sd=sd2) plot(xs, f1, type="l", ylim=c(0, max(f1,f2)), ylab="density") lines(xs, f2, lty="dotted") ys <- min.f1f2(xs, mu1=mu1, mu2=mu2, sd1=sd1, sd2=sd2) xs <- c(xs, xs[1]) ys <- c(ys, ys[1]) polygon(xs, ys, col="gray") ### only works for sd1 = sd2 SMD <- (mu1-mu2)/sd1 2 * pnorm(-abs(SMD)/2) ### this works in general integrate(min.f1f2, -Inf, Inf, mu1=mu1, mu2=mu2, sd1=sd1, sd2=sd2) For this example, the result is: 0.6099324 with absolute error < 1e-04 . Figure below.
{ "source": [ "https://stats.stackexchange.com/questions/12209", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5121/" ] }
12,232
How can I calculate the $\alpha$ and $\beta$ parameters for a Beta distribution if I know the mean and variance that I want the distribution to have? Examples of an R command to do this would be most helpful.
I set$$\mu=\frac{\alpha}{\alpha+\beta}$$and$$\sigma^2=\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$$and solved for $\alpha$ and $\beta$. My results show that$$\alpha=\left(\frac{1-\mu}{\sigma^2}-\frac{1}{\mu}\right)\mu^2$$and$$\beta=\alpha\left(\frac{1}{\mu}-1\right)$$ I've written up some R code to estimate the parameters of the Beta distribution from a given mean, mu, and variance, var: estBetaParams <- function(mu, var) { alpha <- ((1 - mu) / var - 1 / mu) * mu ^ 2 beta <- alpha * (1 / mu - 1) return(params = list(alpha = alpha, beta = beta)) } There's been some confusion around the bounds of $\mu$ and $\sigma^2$ for any given Beta distribution, so let's make that clear here. $\mu=\frac{\alpha}{\alpha+\beta}\in\left(0, 1\right)$ $\sigma^2=\frac{\alpha\beta}{\left(\alpha+\beta\right)^2\left(\alpha+\beta+1\right)}=\frac{\mu\left(1-\mu\right)}{\alpha+\beta+1}<\frac{\mu\left(1-\mu\right)}{1}=\mu\left(1-\mu\right)\in\left(0,0.5^2\right)$
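A quick sanity check of the function, using arbitrary illustrative target moments (remember that the variance must be smaller than $\mu(1-\mu)$ for the parameters to be valid):
estBetaParams(mu = 0.3, var = 0.02)     # gives alpha = 2.85, beta = 6.65
# and indeed 2.85 / (2.85 + 6.65) = 0.3, while
# 2.85 * 6.65 / ((2.85 + 6.65)^2 * (2.85 + 6.65 + 1)) = 0.02, as required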
{ "source": [ "https://stats.stackexchange.com/questions/12232", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/118/" ] }
12,262
I've got a weird question. Assume that you have a small sample where the dependent variable that you're going to analyze with a simple linear model is highly left skewed. Thus you assume that $u$ is not normally distributed, because this would result in normally distributed $y$. But when you compute the QQ-Normal plot there is evidence, that the residuals are normally distributed. Thus anyone can assume that the error term is normally distributed, although $y$ is not. So what does it mean, when the error term seems to be normally distributed, but $y$ does not?
It is reasonable for the residuals in a regression problem to be normally distributed, even though the response variable is not. Consider a univariate regression problem where $y \sim \mathcal{N}(\beta x, \sigma^2)$ . so that the regression model is appropriate, and further assume that the true value of $\beta=1$ . In this case, while the residuals of the true regression model are normal, the distribution of $y$ depends on the distribution of $x$ , as the conditional mean of $y$ is a function of $x$ . If the dataset has a lot of values of $x$ that are close to zero and progressively fewer the higher the value of $x$ , then the distribution of $y$ will be skewed to the right. If values of $x$ are distributed symmetrically, then $y$ will be distributed symmetrically, and so forth. For a regression problem, we only assume that the response is normal conditioned on the value of $x$ .
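A quick simulation (my own illustration, not from the answer above) makes the point concrete: a skewed predictor with truly normal errors produces a skewed response whose residuals still look normal.

    set.seed(1)
    x <- rexp(1000)                          # right-skewed predictor
    y <- x + rnorm(1000)                     # true slope 1, normal errors
    fit <- lm(y ~ x)
    hist(y)                                  # response is right-skewed
    qqnorm(resid(fit)); qqline(resid(fit))   # residuals are approximately normal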
{ "source": [ "https://stats.stackexchange.com/questions/12262", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4496/" ] }
12,386
I find resources like the Probability and Statistics Cookbook and The R Reference Card for Data Mining incredibly useful. They obviously serve well as references but also help me to organize my thoughts on a subject and get the lay of the land. Q: Does anything like these resources exist for machine learning methods? I'm imagining a reference card which for each ML method would include: General properties When the method works well When the method does poorly From which or to which other methods the method generalizes. Has it been mostly superseded? Seminal papers on the method Open problems associated with the method Computational intensity All these things can be found with some minimal digging through textbooks I'm sure. It would just be really convenient to have them on a few pages.
Some of the best and freely available resources are: Hastie, Friedman et al. The Elements of Statistical Learning: Data Mining, Inference, and Prediction David Barber. Bayesian Reasoning and Machine Learning David MacKay. Information Theory, Inference and Learning Algorithms (http://www.inference.phy.cam.ac.uk/mackay/itila/) As to the author's question, I haven't come across an "all in one page" solution.
{ "source": [ "https://stats.stackexchange.com/questions/12386", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3577/" ] }
12,421
I know that generative means "based on $P(x,y)$" and discriminative means "based on $P(y|x)$," but I'm confused on several points: Wikipedia (+ many other hits on the web) classify things like SVMs and decision trees as being discriminative. But these don't even have probabilistic interpretations. What does discriminative mean here? Has discriminative just come to mean anything that isn't generative? Naive Bayes (NB) is generative because it captures $P(x|y)$ and $P(y)$, and thus you have $P(x,y)$ (as well as $P(y|x)$). Isn't it trivial to make, say, logistic regression (the poster boy of discriminative models) "generative" by simply computing $P(x)$ in a similar fashion (same independence assumption as NB, such that $P(x) = P(x_0) P(x_1) ... P(x_d)$, where the MLE for $P(x_i)$ are just frequencies)? I know that discriminative models tend to outperform generative ones. What's the practical use of working with generative models? Being able to generate/simulate data is cited, but when does this come up? I personally only have experience with regression, classification, collab. filtering over structured data, so are the uses irrelevant to me here? The "missing data" argument ($P(x_i|y)$ for missing $x_i$) seems to only give you an edge with training data (when you actually know $y$ and don't need to marginalize over $P(y)$ to get the relatively dumb $P(x_i)$ which you could've estimated directly anyway), and even then imputation is much more flexible (can predict based not just on $y$ but other $x_i$'s as well). What's with the completely contradictory quotes from Wikipedia? "generative models are typically more flexible than discriminative models in expressing dependencies in complex learning tasks" vs. "discriminative models can generally express more complex relationships between the observed and target variables" Related question that got me thinking about this.
The fundamental difference between discriminative models and generative models is: Discriminative models learn the (hard or soft) boundary between classes Generative models model the distribution of individual classes To answer your direct questions: SVMs (Support Vector Machines) and DTs (Decision Trees) are discriminative because they learn explicit boundaries between classes. SVM is a maximal margin classifier, meaning that it learns a decision boundary that maximizes the distance between samples of the two classes, given a kernel. The distance between a sample and the learned decision boundary can be used to make the SVM a "soft" classifier. DTs learn the decision boundary by recursively partitioning the space in a manner that maximizes the information gain (or another criterion). It is possible to make a generative form of logistic regression in this manner. Note that you are not using the full generative model to make classification decisions, though. There are a number of advantages generative models may offer, depending on the application. Say you are dealing with non-stationary distributions, where the online test data may be generated by different underlying distributions than the training data. It is typically more straightforward to detect distribution changes and update a generative model accordingly than do this for a decision boundary in an SVM, especially if the online updates need to be unsupervised. Discriminative models also do not generally function for outlier detection, though generative models generally do. What's best for a specific application should, of course, be evaluated based on the application. (This quote is convoluted, but this is what I think it's trying to say) Generative models are typically specified as probabilistic graphical models, which offer rich representations of the independence relations in the dataset. Discriminative models do not offer such clear representations of relations between features and classes in the dataset. Instead of using resources to fully model each class, they focus on richly modeling the boundary between classes. Given the same amount of capacity (say, bits in a computer program executing the model), a discriminative model thus may yield more complex representations of this boundary than a generative model.
{ "source": [ "https://stats.stackexchange.com/questions/12421", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1720/" ] }
12,525
I'm trying to minimize a custom function. It should accept five parameters and the data set and do all sorts of calculations, producing a single number as an output. I want to find a combination of five input parameters which yields smallest output of my function.
I wrote a post listing a few tutorials using optim . Here is a quote of the relevant section: "The combination of the R function optim and a custom created objective function, such as a minus log-likelihood function provides a powerful tool for parameter estimation of custom models. Scott Brown's tutorial includes an example of this. Ajay Shah has an example of writing a likelihood function and then getting a maximum likelihood estimate using optim . Benjamin Bolker has great material available on the web from his book Ecological Models and Data in R . PDFs, Rnw, and R code for early versions of the chapters are provided on the website. Chapter 6 ( likelihood and all that ) , 7 ( the gory details of model fitting ), and 8 ( worked likelihood estimation examples ). Brian Ripley has a set of slides on simulation and optimisation in R . In particular it provides a useful discussion of the various optimisation algorithms available using optim ".
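To make that concrete for the original question (five parameters, one data set, a single number out), here is a minimal sketch; the objective function and data below are made up purely for illustration.

    set.seed(42)
    mydata <- data.frame(x = runif(50), y = rnorm(50))
    # objective: takes a length-5 parameter vector plus the data, returns one number
    obj <- function(par, data) {
      with(data, sum((y - (par[1] + par[2]*x + par[3]*x^2 + par[4]*sin(par[5]*x)))^2))
    }
    fit <- optim(par = rep(0.1, 5), fn = obj, data = mydata, method = "BFGS")
    fit$par     # parameter values at the minimum
    fit$value   # minimised value of the objective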
{ "source": [ "https://stats.stackexchange.com/questions/12525", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/333/" ] }
12,546
Is there any software package to solve the linear regression with the objective of minimizing the L-infinity norm.
Short answer : Your problem can be formulated as a linear program (LP), leaving you to choose your favorite LP solver for the task. To see how to write the problem as an LP, read on. This minimization problem is often referred to as Chebyshev approximation . Let $\newcommand{\y}{\mathbf{y}}\newcommand{\X}{\mathbf{X}}\newcommand{\x}{\mathbf{x}}\newcommand{\b}{\mathbf{\beta}}\newcommand{\reals}{\mathbb{R}}\newcommand{\ones}{\mathbf{1}_n} \y = (y_i) \in \reals^n$, $\X \in \reals^{n \times p}$ with row $i$ denoted by $\x_i$ and $\b \in \reals^p$. Then we seek to minimize the function $f(\b) = \|\y - \X \b\|_\infty$ with respect to $\b$. Denote the optimal value by $$ f^\star = f(\b^\star) = \inf \{f(\b): \b \in \reals^p \} \>. $$ The key to recasting this as an LP is to rewrite the problem in epigraph form . It is not difficult to convince oneself that, in fact, $$ f^\star = \inf\{t: f(\b) \leq t, \;t \in \reals, \;\b \in \reals^p \} \> . $$ Now, using the definition of the function $f$, we can rewrite the right-hand side above as $$ f^\star = \inf\{t: -t \leq y_i - \x_i \b \leq t, \;t \in \reals, \;\b \in \reals^p,\; 1 \leq i \leq n \} \>, $$ and so we see that minimizing the $\ell_\infty$ norm in a regression setting is equivalent to the LP $$ \begin{array}{ll} \text{minimize} & t \\ \text{subject to} & \y-\X \b \leq t\ones \\ & \y - \X \b \geq - t \ones \>, \\ \end{array} $$ where the optimization is done over $(\b, t)$, and $\ones$ denotes a vector of ones of length $n$. I leave it as an (easy) exercise for the reader to recast the above LP in standard form. Relationship to the $\ell_1$ (total variation) version of linear regression It is interesting to note that something very similar can be done with the $\ell_1$ norm. Let $g(\b) = \|\y - \X \b \|_1$. Then, similar arguments lead one to conclude that $$\newcommand{\t}{\mathbf{t}} g^\star = \inf\{\t^T \ones : -t_i \leq y_i - \x_i \b \leq t_i, \;\t = (t_i) \in \reals^n, \;\b \in \reals^p,\; 1 \leq i \leq n \} \>, $$ so that the corresponding LP is $$ \begin{array}{ll} \text{minimize} & \t^T \ones \\ \text{subject to} & \y-\X \b \leq \t \\ & \y - \X \b \geq - \t \>. \\ \end{array} $$ Note here that $\t$ is now a vector of length $n$ instead of a scalar, as it was in the $\ell_\infty$ case. The similarity in these two problems and the fact that they can both be cast as LPs is, of course, no accident. The two norms are related in that that they are the dual norms of each other.
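A minimal numerical sketch of the LP above, assuming the lpSolve package is available (any LP solver would do); since lpSolve constrains all decision variables to be non-negative, the free coefficients are written as $\beta = \beta^+ - \beta^-$.

    library(lpSolve)
    set.seed(1)
    n <- 40; p <- 2
    X <- cbind(1, rnorm(n))                  # design matrix with an intercept column
    y <- X %*% c(2, -1) + runif(n, -0.5, 0.5)
    obj  <- c(rep(0, 2 * p), 1)              # minimise t only
    A    <- rbind(cbind(X, -X,  1),          # X beta + t >= y
                  cbind(X, -X, -1))          # X beta - t <= y
    dirs <- c(rep(">=", n), rep("<=", n))
    sol  <- lp("min", obj, A, dirs, c(y, y))
    beta <- sol$solution[1:p] - sol$solution[(p + 1):(2 * p)]
    beta          # Chebyshev (L-infinity) regression coefficients
    sol$objval    # the minimised maximum absolute residual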
{ "source": [ "https://stats.stackexchange.com/questions/12546", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4670/" ] }
12,605
I've been playing around with random forests for regression and am having difficulty working out exactly what the two measures of importance mean, and how they should be interpreted. The importance() function gives two values for each variable: %IncMSE and IncNodePurity . Are there simple interpretations for these 2 values? For IncNodePurity in particular, is this simply the amount the RSS increases following the removal of that variable?
The first one can be 'interpreted' as follows: if a predictor is important in your current model, then assigning other values for that predictor randomly but 'realistically' (i.e.: permuting this predictor's values over your dataset), should have a negative influence on prediction, i.e.: using the same model to predict from data that is the same except for the one variable, should give worse predictions. So, you take a predictive measure (MSE) with the original dataset and then with the 'permuted' dataset, and you compare them somehow. One way, particularly since we expect the original MSE to always be smaller, the difference can be taken. Finally, for making the values comparable over variables, these are scaled. For the second one: at each split, you can calculate how much this split reduces node impurity (for regression trees, indeed, the difference between RSS before and after the split). This is summed over all splits for that variable, over all trees. Note: a good read is Elements of Statistical Learning by Hastie, Tibshirani and Friedman...
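For reference, both columns come out of a standard randomForest fit once importance = TRUE is set; a tiny sketch on a built-in data set:

    library(randomForest)
    set.seed(1)
    rf <- randomForest(mpg ~ ., data = mtcars, importance = TRUE, ntree = 500)
    importance(rf)    # columns %IncMSE (permutation-based) and IncNodePurity
    varImpPlot(rf)    # plots both measures side by side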
{ "source": [ "https://stats.stackexchange.com/questions/12605", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/845/" ] }
12,842
I read from my textbook that $\text{cov}(X,Y)=0$ does not guarantee X and Y are independent. But if they are independent, their covariance must be 0. I could not think of any proper example yet; could someone provide one?
Easy example: Let $X$ be a random variable that is $-1$ or $+1$ with probability 0.5. Then let $Y$ be a random variable such that $Y=0$ if $X=-1$, and $Y$ is randomly $-1$ or $+1$ with probability 0.5 if $X=1$. Clearly $X$ and $Y$ are highly dependent (since knowing $Y$ allows me to perfectly know $X$), but their covariance is zero: They both have zero mean, and $$\eqalign{ \mathbb{E}[XY] &=&(-1) &\cdot &0 &\cdot &P(X=-1) \\ &+& 1 &\cdot &1 &\cdot &P(X=1,Y=1) \\ &+& 1 &\cdot &(-1)&\cdot &P(X=1,Y=-1) \\ &=&0. }$$ Or more generally, take any distribution $P(X)$ and any $P(Y|X)$ such that $P(Y=a|X) = P(Y=-a|X)$ for all $X$ (i.e., a joint distribution that is symmetric around the $x$ axis), and you will always have zero covariance. But you will have non-independence whenever $P(Y|X) \neq P(Y)$; i.e., the conditionals are not all equal to the marginal. Or ditto for symmetry around the $y$ axis.
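The construction above is easy to check numerically (a quick simulation of exactly this example):

    set.seed(1)
    x <- sample(c(-1, 1), 1e5, replace = TRUE)
    y <- ifelse(x == -1, 0, sample(c(-1, 1), 1e5, replace = TRUE))
    cov(x, y)      # essentially zero
    table(x, y)    # yet clearly dependent: y is 0 exactly when x is -1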
{ "source": [ "https://stats.stackexchange.com/questions/12842", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3525/" ] }
12,900
My understanding is that $R^2$ cannot be negative as it is the square of R. However I ran a simple linear regression in SPSS with a single independent variable and a dependent variable. My SPSS output gives me a negative value for $R^2$. If I were to calculate this by hand from R then $R^2$ would be positive. What has SPSS done to calculate this as negative? R=-.395 R squared =-.156 B (un-standardized)=-1261.611 Code I've used: DATASET ACTIVATE DataSet1. REGRESSION /MISSING LISTWISE /STATISTICS COEFF OUTS R ANOVA /CRITERIA=PIN(.05) POUT(.10) /NOORIGIN /DEPENDENT valueP /METHOD=ENTER ageP I get a negative value. Can anyone explain what this means?
$R^2$ compares the fit of the chosen model with that of a horizontal straight line (the null hypothesis). If the chosen model fits worse than a horizontal line, then $R^2$ is negative. Note that $R^2$ is not always the square of anything, so it can have a negative value without violating any rules of math. $R^2$ is negative only when the chosen model does not follow the trend of the data, so fits worse than a horizontal line. Example: fit data to a linear regression model constrained so that the $Y$ intercept must equal $1500$. The model makes no sense at all given these data. It is clearly the wrong model, perhaps chosen by accident. The fit of the model (a straight line constrained to go through the point (0,1500)) is worse than the fit of a horizontal line. Thus the sum-of-squares from the model $(SS_\text{res})$ is larger than the sum-of-squares from the horizontal line $(SS_\text{tot})$. $R^2$ is computed as $1 - \frac{SS_\text{res}}{SS_\text{tot}}$, where $SS_\text{res}$ is the residual sum of squares. When $SS_\text{res}$ is greater than $SS_\text{tot}$, that ratio exceeds 1 and the equation yields a negative value for $R^2$. With linear regression with no constraints, $R^2$ must be positive (or zero) and equals the square of the correlation coefficient, $r$. A negative $R^2$ is only possible with linear regression when either the intercept or the slope is constrained so that the "best-fit" line (given the constraint) fits worse than a horizontal line. With nonlinear regression, the $R^2$ can be negative whenever the best-fit model (given the chosen equation, and its constraints, if any) fits the data worse than a horizontal line. Bottom line: a negative $R^2$ is not a mathematical impossibility or the sign of a computer bug. It simply means that the chosen model (with its constraints) fits the data really poorly.
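A small R illustration of the constrained-intercept example (simulated data, intercept forced to 1500 via an offset):

    set.seed(1)
    x <- 1:20
    y <- 10 + 2 * x + rnorm(20, sd = 3)
    fit <- lm(y ~ 0 + x, offset = rep(1500, length(y)))   # intercept fixed at 1500
    ss_res <- sum(resid(fit)^2)
    ss_tot <- sum((y - mean(y))^2)
    1 - ss_res / ss_tot    # hugely negative: the model fits far worse than a flat line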
{ "source": [ "https://stats.stackexchange.com/questions/12900", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4498/" ] }
13,065
I'd like to convert a factor variable to a numeric one but as.numeric doesn't have the effect I expect. Below I get summary statistics for the numeric version of the variable based on the original variable. The means keep counting up by 1... perhaps (he speculates) the levels of the factor have both names and numbers, and I'm expecting the value of the new variable to come from the name when as.numeric is designed to use the number? > describe.by(as.numeric(df$sch), df$sch) group: var n mean sd median trimmed mad min max range skew kurtosis se 1 1 5389 1 0 1 1 0 1 1 0 NaN NaN 0 --------------------------------------------------------- group: 001 var n mean sd median trimmed mad min max range skew kurtosis se 1 1 19 2 0 2 2 0 2 2 0 NaN NaN 0 --------------------------------------------------------- group: 002 var n mean sd median trimmed mad min max range skew kurtosis se 1 1 54 3 0 3 3 0 3 3 0 NaN NaN 0 ---------------------------------------------------------
That is correct: as.numeric(factor) returns the number that R assigns to the level of that factor. You could try as.numeric(as.character(factor))
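A two-line illustration using group labels like those in the question:

    f <- factor(c("001", "002", "010"))
    as.numeric(f)                  # 1 2 3  -- the internal level codes
    as.numeric(as.character(f))    # 1 2 10 -- the values the labels actually represent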
{ "source": [ "https://stats.stackexchange.com/questions/13065", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3748/" ] }
13,086
I'd like to know if there is a boxplot variant adapted to Poisson distributed data (or possibly other distributions)? With a Gaussian distribution, whiskers placed at L = Q1 - 1.5 IQR and U = Q3 + 1.5 IQR, the boxplot has the property that there will be roughly as many low outliers (points below L) as there are high outliers (points above U). If the data is Poisson distributed however, this does not hold anymore because of the positive skewness we get Pr(X<L) < Pr(X>U) . Is there an alternate way to place the whiskers such that it would 'fit' a Poisson distribution?
Boxplots weren't designed to assure low probability of exceeding the ends of the whiskers in all cases: they are intended, and usually used, as simple graphical characterizations of the bulk of a dataset. As such, they are fine even when the data have very skewed distributions (although they might not reveal quite as much information as they do about approximately unskewed distributions). When boxplots become skewed, as they will with a Poisson distribution, the next step is to re-express the underlying variable (with a monotonic, increasing transformation) and redraw the boxplots. Because the variance of a Poisson distribution is proportional to its mean, a good transformation to use is the square root. Each boxplot depicts 50 iid draws from a Poisson distribution with given intensity (from 1 through 10, with two trials for each intensity). Notice that the skewness tends to be low. The same data on a square root scale tend to have boxplots that are slightly more symmetric and (except for the lowest intensity) have approximately equal IQRs regardless of intensity). In sum, don't change the boxplot algorithm: re-express the data instead. Incidentally, the relevant chances to be computing are these: what is the chance that an independent normal variate $X$ will exceed the upper(lower) fence $U$($L$) as estimated from $n$ independent draws from the same distribution? This accounts for the fact that the fences in a boxplot are not computed from the underlying distribution but are estimated from the data. In most cases, the chances are much greater than 1%! For instance, here (based on 10,000 Monte-Carlo trials) is a histogram of the log (base 10) chances for the case $n=9$: (Because the normal distribution is symmetric, this histogram applies to both fences.) The logarithm of 1%/2 is about -2.3. Clearly, most of the time the probability is greater than this. About 16% of the time it exceeds 10%! It turns out (I won't clutter this reply with the details) that the distributions of these chances are comparable to the normal case (for small $n$) even for Poisson distributions of intensity as low as 1, which is pretty skewed. The main difference is that it's usually less likely to find a low outlier and a little more likely to find a high outlier.
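The re-expression is one line in R (a minimal sketch):

    set.seed(1)
    x <- rpois(200, lambda = 4)
    par(mfrow = c(1, 2))
    boxplot(x, main = "raw counts")          # right-skewed, high "outliers" flagged
    boxplot(sqrt(x), main = "square roots")  # roughly symmetric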
{ "source": [ "https://stats.stackexchange.com/questions/13086", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2451/" ] }
13,152
I always use lm() in R to perform linear regression of $y$ on $x$. That function returns a coefficient $\beta$ such that $$y = \beta x.$$ Today I learned about total least squares and that the princomp() function (principal component analysis, PCA) can be used to perform it. It should be better for my purposes (more accurate). I have done some tests using princomp() , like: r <- princomp(~ x + y) My problem is: how to interpret its results? How can I get the regression coefficient? By "coefficient" I mean the number $\beta$ that I have to use to multiply the $x$ value to give a number close to $y$.
Ordinary least squares vs. total least squares Let's first consider the simplest case of only one predictor (independent) variable $x$. For simplicity, let both $x$ and $y$ be centered, i.e. intercept is always zero. The difference between standard OLS regression and "orthogonal" TLS regression is clearly shown on this (adapted by me) figure from the most popular answer in the most popular thread on PCA: OLS fits the equation $y=\beta x$ by minimizing squared distances between observed values $y$ and predicted values $\hat y$. TLS fits the same equation by minimizing squared distances between $(x,y)$ points and their projection on the line. In this simplest case TLS line is simply the first principal component of the 2D data. To find $\beta$, do PCA on $(x,y)$ points, i.e. construct the $2\times 2$ covariance matrix $\boldsymbol \Sigma$ and find its first eigenvector $\mathbf v = (v_x, v_y)$; then $\beta = v_y/v_x$. In Matlab: v = pca([x y]); //# x and y are centered column vectors beta = v(2,1)/v(1,1); In R: v <- prcomp(cbind(x,y))$rotation beta <- v[2,1]/v[1,1] By the way, this will yield correct slope even if $x$ and $y$ were not centered (because built-in PCA functions automatically perform centering). To recover the intercept, compute $\beta_0 = \bar y - \beta \bar x$. OLS vs. TLS, multiple regression Given a dependent variable $y$ and many independent variables $x_i$ (again, all centered for simplicity), regression fits an equation $$y= \beta_1 x_1 + \ldots + \beta_p x_p.$$ OLS does the fit by minimizing the squared errors between observed values of $y$ and predicted values $\hat y$. TLS does the fit by minimizing the squared distances between observed $(\mathbf x, y)\in\mathbb R^{p+1}$ points and the closest points on the regression plane/hyperplane. Note that there is no "regression line" anymore! The equation above specifies a hyperplane : it's a 2D plane if there are two predictors, 3D hyperplane if there are three predictors, etc. So the solution above does not work: we cannot get the TLS solution by taking the first PC only (which is a line). Still, the solution can be easily obtained via PCA. As before, PCA is performed on $(\mathbf x, y)$ points. This yields $p+1$ eigenvectors in columns of $\mathbf V$. The first $p$ eigenvectors define a $p$-dimensional hyperplane $\mathcal H$ that we need; the last (number $p+1$) eigenvector $\mathbf v_{p+1}$ is orthogonal to it. The question is how to transform the basis of $\mathcal H$ given by the first $p$ eigenvectors into the $\boldsymbol \beta$ coefficients. Observe that if we set $x_i=0$ for all $i \ne k$ and only $x_k=1$, then $\hat y=\beta_k$, i.e. the vector $$(0,\ldots, 1, \ldots, \beta_k) \in \mathcal H$$ lies in the hyperplane $\mathcal H$. On the other hand, we know that $$\mathbf v_{p+1}=(v_1, \ldots, v_{p+1}) \:\bot\: \mathcal H$$ is orthogonal to it. I.e. their dot product must be zero: $$v_k + \beta_k v_{p+1}=0 \Rightarrow \beta_k = -v_k/v_{p+1}.$$ In Matlab: v = pca([X y]); //# X is a centered n-times-p matrix, y is n-times-1 column vector beta = -v(1:end-1,end)/v(end,end); In R: v <- prcomp(cbind(X,y))$rotation beta <- -v[-ncol(v),ncol(v)] / v[ncol(v),ncol(v)] Again, this will yield correct slopes even if $x$ and $y$ were not centered (because built-in PCA functions automatically perform centering). To recover the intercept, compute $\beta_0 = \bar y - \bar {\mathbf x} \boldsymbol \beta$. As a sanity check, notice that this solution coincides with the previous one in case of only a single predictor $x$. 
Indeed, then the $(x,y)$ space is 2D, and so, given that the first PCA eigenvector is orthogonal to the second (last) one, $v^{(1)}_y/v^{(1)}_x=-v^{(2)}_x/v^{(2)}_y$. Closed form solution for TLS Surprisingly, it turns out that there is a closed form equation for $\boldsymbol \beta$. The argument below is taken from Sabine van Huffel's book "The total least squares" (section 2.3.2). Let $\mathbf X$ and $\mathbf y$ be the centered data matrices. The last PCA eigenvector $\mathbf v_{p+1}$ is an eigenvector of the covariance matrix of $[\mathbf X\: \mathbf y]$ with an eigenvalue $\sigma^2_{p+1}$. If it is an eigenvector, then so is $-\mathbf v_{p+1}/v_{p+1} = (\boldsymbol \beta\:\: -1)^\top$. Writing down the eigenvector equation: $$\left(\begin{array}{c}\mathbf X^\top \mathbf X & \mathbf X^\top \mathbf y\\ \mathbf y^\top \mathbf X & \mathbf y^\top \mathbf y\end{array}\right) \left(\begin{array}{c}\boldsymbol \beta \\ -1\end{array}\right) = \sigma^2_{p+1}\left(\begin{array}{c}\boldsymbol \beta \\ -1\end{array}\right),$$ and computing the product on the left, we immediately get that $$\boldsymbol \beta_\mathrm{TLS} = (\mathbf X^\top \mathbf X - \sigma^2_{p+1}\mathbf I)^{-1} \mathbf X^\top \mathbf y,$$ which strongly reminds the familiar OLS expression $$\boldsymbol \beta_\mathrm{OLS} = (\mathbf X^\top \mathbf X)^{-1} \mathbf X^\top \mathbf y.$$ Multivariate multiple regression The same formula can be generalized to the multivariate case, but even to define what multivariate TLS does, would require some algebra. See Wikipedia on TLS . Multivariate OLS regression is equivalent to a bunch of univariate OLS regressions for each dependent variable, but in the TLS case it is not so.
{ "source": [ "https://stats.stackexchange.com/questions/13152", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5405/" ] }
13,166
There's a lot of discussion going on on this forum about the proper way to specify various hierarchical models using lmer . I thought it would be great to have all the information in one place. A couple of questions to start: How to specify multiple levels, where one group is nested within the other: is it (1|group1:group2) or (1+group1|group2) ? What's the difference between (~1 + ....) and (1 | ...) and (0 | ...) etc.? How to specify group-level interactions?
What's the difference between (~1 +....) and (1 | ...) and (0 | ...) etc.? Say you have variable V1 predicted by categorical variable V2, which is treated as a random effect, and continuous variable V3, which is treated as a linear fixed effect. Using lmer syntax, simplest model (M1) is: V1 ~ (1|V2) + V3 This model will estimate: P1: A global intercept P2: Random effect intercepts for V2 (i.e. for each level of V2, that level's intercept's deviation from the global intercept) P3: A single global estimate for the effect (slope) of V3 The next most complex model (M2) is: V1 ~ (1|V2) + V3 + (0+V3|V2) This model estimates all the parameters from M1, but will additionally estimate: P4: The effect of V3 within each level of V2 (more specifically, the degree to which the V3 effect within a given level deviates from the global effect of V3), while enforcing a zero correlation between the intercept deviations and V3 effect deviations across levels of V2 . This latter restriction is relaxed in a final most complex model (M3): V1 ~ (1+V3|V2) + V3 In which all parameters from M2 are estimated while allowing correlation between the intercept deviations and V3 effect deviations within levels of V2. Thus, in M3, an additional parameter is estimated: P5: The correlation between intercept deviations and V3 deviations across levels of V2 Usually model pairs like M2 and M3 are computed then compared to evaluate the evidence for correlations between fixed effects (including the global intercept). Now consider adding another fixed effect predictor, V4. The model: V1 ~ (1+V3*V4|V2) + V3*V4 would estimate: P1: A global intercept P2: A single global estimate for the effect of V3 P3: A single global estimate for the effect of V4 P4: A single global estimate for the interaction between V3 and V4 P5: Deviations of the intercept from P1 in each level of V2 P6: Deviations of the V3 effect from P2 in each level of V2 P7: Deviations of the V4 effect from P3 in each level of V2 P8: Deviations of the V3-by-V4 interaction from P4 in each level of V2 P9 Correlation between P5 and P6 across levels of V2 P10 Correlation between P5 and P7 across levels of V2 P11 Correlation between P5 and P8 across levels of V2 P12 Correlation between P6 and P7 across levels of V2 P13 Correlation between P6 and P8 across levels of V2 P14 Correlation between P7 and P8 across levels of V2 Phew , That's a lot of parameters! And I didn't even bother to list the variance parameters estimated by the model. What's more, if you have a categorical variable with more than 2 levels that you want to model as a fixed effect, instead of a single effect for that variable you will always be estimating k-1 effects (where k is the number of levels), thereby exploding the number of parameters to be estimated by the model even further.
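As a concrete stand-in (using lme4's built-in sleepstudy data, Reaction ~ Days grouped by Subject, purely for illustration), the three models look like this:

    library(lme4)
    m1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy)                         # M1
    m2 <- lmer(Reaction ~ Days + (1 | Subject) + (0 + Days | Subject), sleepstudy)  # M2
    m3 <- lmer(Reaction ~ Days + (1 + Days | Subject), sleepstudy)                  # M3
    anova(m2, m3)   # one extra parameter: the intercept-slope correlation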
{ "source": [ "https://stats.stackexchange.com/questions/13166", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
13,171
What would be the distribution of $y$, when: $y = x^2$ and $x\sim\mathcal{N}(\mu, \sigma^2)$. $y = x^2$ and $x\sim$ Log-$\mathcal{N}$.
{ "source": [ "https://stats.stackexchange.com/questions/13171", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3903/" ] }
13,241
Am I correct to understand that the order in which variables are specified in a multifactorial ANOVA makes a difference but that the order does not matter when doing a multiple linear regression? So assuming an outcome such as measured blood loss y and two categorical variables adenoidectomy method a , tonsillectomy method b . The model y~a+b is different to the model y~b+a (or so my implementation in R seems to indicate). Am I correct to understand that the term here is that ANOVA is a hierarchical model since it first attributes as much variance as it can to the first factor before trying to attribute residual variance to the second factor? In the example above the hierarchy makes sense because I always do the adenoidectomy first before doing the tonsillectomy but what would happen if one had two variables with no inherent order?
This question evidently came from a study with an unbalanced two-way design, analyzed in R with the aov() function; this page provides a more recent and detailed example of this issue. The general answer to this question, as to so many, is: "It depends." Here it depends on whether the design is balanced and, if not, which flavor of ANOVA is chosen. First, it depends on whether the design is balanced. In the best of all possible worlds, with equal numbers of cases in all cells of a factorial design, there would be no difference due to the order of entering the factors into the model, regardless of how ANOVA is performed.* The cases at hand, evidently from a retrospective clinical cohort, seem to be from a real world where such balance was not found. So the order might matter. Second, it depends on how the ANOVA is performed, which is a somewhat contentious issue. The types of ANOVA for unbalanced designs differ in the order of evaluating main effects and interactions. Evaluating interactions is fundamental to two-way and higher-order ANOVA, so there are disputes over the best way to proceed. See this Cross Validated page for one explanation and discussion. See the Details and the Warning for the Anova() (with a capital "A") function in the manual for the car package for a different view. The order of factors does matter in unbalanced designs under the default aov() in R, which uses what are called type-I tests. These are sequential attributions of variance to factors in the order of entry into the model, as the present question envisioned. The order does not matter with the type-II or type-III tests provided by the Anova() function in the car package in R. These alternatives, however, have their own potential disadvantages noted in the above links. Finally, consider the relation to multiple linear regression as with lm() in R, which is essentially the same type of model if you include interaction terms. The order of entry of variables in lm() does not matter in terms of regression coefficients and p -values reported by summary(lm()) , in which a k-level categorical factor is coded as (k-1) binary dummy variables and a regression coefficient is reported for each dummy. It is, however, possible to wrap the lm() output with anova() (lower-case "a," from the R stats package) or Anova() to summarize the influence of each factor over all of its levels, as one expects in classical ANOVA. Then the ordering of factors will matter with anova() as for aov() , and will not matter with Anova() . Similarly, the disputes over which type of ANOVA to use would return. So it's not safe to assume order-independence of factor entry with all downstream uses of lm() models. *Having equal numbers of observations in all cells is sufficient but, as I understand it, not necessary for the order of factors to be irrelevant. Less demanding types of balance may allow for order-independence.
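A small simulated illustration of the order (in)dependence, assuming the car package is installed; the two-level factors are deliberately unbalanced.

    set.seed(1)
    d <- data.frame(a = factor(sample(c("A1", "A2"), 60, TRUE, prob = c(0.7, 0.3))),
                    b = factor(sample(c("B1", "B2"), 60, TRUE)))
    d$y <- rnorm(60, mean = 2 * (d$a == "A2") + 3 * (d$b == "B2"))
    summary(aov(y ~ a + b, data = d))   # type-I (sequential): sums of squares depend on order
    summary(aov(y ~ b + a, data = d))   # compare the lines for a and b with the fit above
    car::Anova(lm(y ~ a + b, data = d), type = 2)   # type-II: order-independent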
{ "source": [ "https://stats.stackexchange.com/questions/13241", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/104/" ] }
13,266
I have run a simple linear regression of the natural log of 2 variables to determine if they correlate. My output is this: R^2 = 0.0893 slope = 0.851 p < 0.001 I am confused. Looking at the $R^2$ value, I would say that the two variables are not correlated, since it is so close to $0$. However, the slope of the regression line is almost $1$ (despite looking as though it's almost horizontal in the plot), and the p-value indicates that the regression is highly significant. Does this mean that the two variables are highly correlated? If so, what does the $R^2$ value indicate? I should add that the Durbin-Watson statistic was tested in my software, and did not reject the null hypothesis (it equalled $1.357$). I thought that this tested for independence between the $2$ variables. In this case, I would expect the variables to be dependent, since they are $2$ measurements of an individual bird. I'm doing this regression as part of a published method to determine body condition of an individual, so I assumed that using a regression in this way made sense. However, given these outputs, I'm thinking that maybe for these birds, this method isn't suitable. Does this seem a reasonable conclusion?
The estimated value of the slope does not, by itself, tell you the strength of the relationship. The strength of the relationship depends on the size of the error variance, and the range of the predictor. Also, a significant $p$-value doesn't necessarily tell you that there is a strong relationship; the $p$-value is simply testing whether the slope is exactly 0. For a sufficiently large sample size, even small departures from that hypothesis (e.g. ones not of practical importance) will yield a significant $p$-value. Of the three quantities you presented, $R^2$, the coefficient of determination, gives the greatest indication of the strength of the relationship. In your case, $R^{2} = .089$ means that $8.9\%$ of the variation in your response variable can be explained by a linear relationship with the predictor. What constitutes a "large" $R^2$ is discipline dependent. For example, in social sciences $R^2 = .2$ might be "large" but in controlled environments like a factory setting, $R^2 > .9$ may be required to say there is a "strong" relationship. In most situations $.089$ is a very small $R^2$, so your conclusion that there is a weak linear relationship is probably reasonable.
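A simulation of roughly your situation (made-up numbers): a slope near 1 and a tiny p-value can coexist with a small $R^2$ when the error variance is large.

    set.seed(1)
    x <- rnorm(2000)
    y <- 0.85 * x + rnorm(2000, sd = 3)   # slope near 1, but a lot of error variance
    summary(lm(y ~ x))   # slope about 0.85, p-value far below 0.001, R-squared under 0.1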
{ "source": [ "https://stats.stackexchange.com/questions/13266", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4238/" ] }
13,314
I was skimming through some lecture notes by Cosma Shalizi (in particular, section 2.1.1 of the second lecture ), and was reminded that you can get very low $R^2$ even when you have a completely linear model. To paraphrase Shalizi's example: suppose you have a model $Y = aX + \epsilon$, where $a$ is known. Then $\newcommand{\Var}{\mathrm{Var}}\Var[Y] = a^2 \Var[x] + \Var[\epsilon]$ and the amount of explained variance is $a^2 \Var[X]$, so $R^2 = \frac{a^2 \Var[x]}{a^2 \Var[X] + \Var[\epsilon]}$. This goes to 0 as $\Var[X] \rightarrow 0$ and to 1 as $\Var[X] \rightarrow \infty$. Conversely, you can get high $R^2$ even when your model is noticeably non-linear. (Anyone have a good example offhand?) So when is $R^2$ a useful statistic, and when should it be ignored?
To address the first question , consider the model $$Y = X + \sin(X) + \varepsilon$$ with iid $\varepsilon$ of mean zero and finite variance. As the range of $X$ (thought of as fixed or random) increases, $R^2$ goes to 1. Nevertheless, if the variance of $\varepsilon$ is small (around 1 or less), the data are "noticeably non-linear." In the plots, $var(\varepsilon)=1$. Incidentally, an easy way to get a small $R^2$ is to slice the independent variables into narrow ranges. The regression (using exactly the same model ) within each range will have a low $R^2$ even when the full regression based on all the data has a high $R^2$. Contemplating this situation is an informative exercise and good preparation for the second question. Both the following plots use the same data. The $R^2$ for the full regression is 0.86. The $R^2$ for the slices (of width 1/2 from -5/2 to 5/2) are .16, .18, .07, .14, .08, .17, .20, .12, .01, .00, reading left to right. If anything, the fits get better in the sliced situation because the 10 separate lines can more closely conform to the data within their narrow ranges. Although the $R^2$ for all the slices are far below the full $R^2$, neither the strength of the relationship, the linearity , nor indeed any aspect of the data (except the range of $X$ used for the regression) has changed. (One might object that this slicing procedure changes the distribution of $X$. That is true, but it nevertheless corresponds with the most common use of $R^2$ in fixed-effects modeling and reveals the degree to which $R^2$ is telling us about the variance of $X$ in the random-effects situation. In particular, when $X$ is constrained to vary within a smaller interval of its natural range, $R^2$ will usually drop.) The basic problem with $R^2$ is that it depends on too many things (even when adjusted in multiple regression), but most especially on the variance of the independent variables and the variance of the residuals. Normally it tells us nothing about "linearity" or "strength of relationship" or even "goodness of fit" for comparing a sequence of models. Most of the time you can find a better statistic than $R^2$. For model selection you can look to AIC and BIC; for expressing the adequacy of a model, look at the variance of the residuals. This brings us finally to the second question . One situation in which $R^2$ might have some use is when the independent variables are set to standard values, essentially controlling for the effect of their variance. Then $1 - R^2$ is really a proxy for the variance of the residuals, suitably standardized.
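The range-of-$X$ effect is easy to reproduce (a quick sketch of the model above):

    set.seed(1)
    x <- runif(4000, -2.5, 2.5)
    y <- x + sin(x) + rnorm(4000)
    summary(lm(y ~ x))$r.squared           # high over the full range of x
    s <- x >= 0 & x < 0.5                  # restrict to one narrow slice
    summary(lm(y[s] ~ x[s]))$r.squared     # far smaller, same model and same data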
{ "source": [ "https://stats.stackexchange.com/questions/13314", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1106/" ] }
13,346
A Skellam Distribution describes the difference between two variables that have Poisson distributions. Is there a similar distribution that describes the difference between variables that follow negative binomial distributions? My data is produced by a Poisson process, but includes a fair amount of noise, leading to overdispersion in the distribution. Thus, modeling the data with a negative binomial (NB) distribution works well. If I want to model the difference between two of these NB data sets, what are my options? If it helps, assume similar means and variance for the two sets.
I don't know the name of this distribution but you can just derive it from the law of total probability. Suppose $X, Y$ each have negative binomial distributions with parameters $(r_{1}, p_{1})$ and $(r_{2}, p_{2})$, respectively. I'm using the parameterization where $X,Y$ represent the number of successes before the $r_{1}$'th, and $r_{2}$'th failures, respectively. Then, $$ P(X - Y = k) = E_{Y} \Big( P(X-Y = k) \Big) = E_{Y} \Big( P(X = k+Y) \Big) = \sum_{y=0}^{\infty} P(Y=y)P(X = k+y) $$ We know $$ P(X = k + y) = {k+y+r_{1}-1 \choose k+y} (1-p_{1})^{r_{1}} p_{1}^{k+y} $$ and $$ P(Y = y) = {y+r_{2}-1 \choose y} (1-p_{2})^{r_{2}} p_{2}^{y} $$ so $$ P(X-Y=k) = \sum_{y=0}^{\infty} {y+r_{2}-1 \choose y} (1-p_{2})^{r_{2}} p_{2}^{y} \cdot {k+y+r_{1}-1 \choose k+y} (1-p_{1})^{r_{1}} p_{1}^{k+y} $$ That's not pretty (yikes!). The only simplification I see right off is $$ p_{1}^{k} (1-p_{1})^{r_{1}} (1-p_{2})^{r_{2}} \sum_{y=0}^{\infty} (p_{1}p_{2})^{y} {y+r_{2}-1 \choose y} {k+y+r_{1}-1 \choose k+y} $$ which is still pretty ugly. I'm not sure if this is helpful but this can also be re-written as $$ \frac{ p_{1}^{k} (1-p_{1})^{r_{1}} (1-p_{2})^{r_{2}} }{ (r_{1}-1)! (r_{2}-1)! } \sum_{y=0}^{\infty} (p_{1}p_{2})^{y} \frac{ (y+r_{2}-1)! (k+y+r_{1}-1)! }{y! (k+y)! } $$ I'm not sure if there is a simplified expression for this sum but it could be approximated numerically if you only need it to calculate $p$-values I verified with simulation that the above calculation is correct. Here is a crude R function to calculate this mass function and carry out a few simulations f = function(k,r1,r2,p1,p2,UB) { S=0 const = (p1^k) * ((1-p1)^r1) * ((1-p2)^r2) const = const/( factorial(r1-1) * factorial(r2-1) ) for(y in 0:UB) { iy = ((p1*p2)^y) * factorial(y+r2-1)*factorial(k+y+r1-1) iy = iy/( factorial(y)*factorial(y+k) ) S = S + iy } return(S*const) } ### Sims r1 = 6; r2 = 4; p1 = .7; p2 = .53; X = rnbinom(1e5,r1,p1) Y = rnbinom(1e5,r2,p2) mean( (X-Y) == 2 ) [1] 0.08508 f(2,r1,r2,1-p1,1-p2,20) [1] 0.08509068 mean( (X-Y) == 1 ) [1] 0.11581 f(1,r1,r2,1-p1,1-p2,20) [1] 0.1162279 mean( (X-Y) == 0 ) [1] 0.13888 f(0,r1,r2,1-p1,1-p2,20) [1] 0.1363209 I've found the sum converges very quickly for all of the values I tried, so setting UB higher than 10 or so is not necessary. Note that R's built in rnbinom function parameterizes the negative binomial in terms of the number of failures before the $r$'th success, in which case you'd need to replace all of the $p_{1}, p_{2}$'s in the above formulas with $1-p_{1}, 1-p_{2}$ for compatibility.
{ "source": [ "https://stats.stackexchange.com/questions/13346", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54/" ] }
13,389
Andrew More defines information gain as: $IG(Y|X) = H(Y) - H(Y|X)$ where $H(Y|X)$ is the conditional entropy . However, Wikipedia calls the above quantity mutual information . Wikipedia on the other hand defines information gain as the Kullback–Leibler divergence (aka information divergence or relative entropy) between two random variables: $D_{KL}(P||Q) = H(P,Q) - H(P)$ where $H(P,Q)$ is defined as the cross-entropy . These two definitions seem to be inconsistent with each other. I have also seen other authors talking about two additional related concepts, namely differential entropy and relative information gain. What is the precise definition or relationship between these quantities? Is there a good text book that covers them all? Information gain Mutual information Cross entropy Conditional entropy Differential entropy Relative information gain
I think that calling the Kullback-Leibler divergence "information gain" is non-standard. The first definition is standard. EDIT: However, $H(Y)-H(Y|X)$ can also be called mutual information. Note that I don't think you will find any scientific discipline that really has a standardized, precise, and consistent naming scheme. So you will always have to look at the formulae, because they will generally give you a better idea. Textbooks: see "Good introduction into different kinds of entropy" . Also: Cosma Shalizi: Methods and Techniques of Complex Systems Science: An Overview, chapter 1 (pp. 33--114) in Thomas S. Deisboeck and J. Yasha Kresh (eds.), Complex Systems Science in Biomedicine http://arxiv.org/abs/nlin.AO/0307015 Robert M. Gray: Entropy and Information Theory http://ee.stanford.edu/~gray/it.html David MacKay: Information Theory, Inference, and Learning Algorithms http://www.inference.phy.cam.ac.uk/mackay/itila/book.html also, "What is 'entropy and information gain'?"
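For reference, the standard identities connecting the quantities in the question (textbook facts, stated here without proof) are $$I(X;Y) = H(Y)-H(Y\mid X) = H(X)-H(X\mid Y) = H(X)+H(Y)-H(X,Y) = D_{KL}\big(p(x,y)\,\|\,p(x)\,p(y)\big),$$ $$D_{KL}(P\|Q) = H(P,Q)-H(P).$$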
{ "source": [ "https://stats.stackexchange.com/questions/13389", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2798/" ] }
13,465
When doing a GLM and you get the "not defined because of singularities" error in the anova output, how does one counteract this error from happening? Some have suggested that it is due to collinearity between covariates or that one of the levels is not present in the dataset (see: interpreting "not defined because of singularities" in lm ) If I wanted to see which "particular treatment" is driving the model and I have 4 levels of treatment: Treat 1 , Treat 2 , Treat 3 & Treat 4 , which are recorded in my spreadsheet as: when Treat 1 is 1 the rest are zero, when Treat 2 is 1 the rest are zero, etc., what would I have to do?
You're probably getting that error because two or more of your independent variables are perfectly collinear (e.g. mis-coding dummy variables to make identical copies). Use cor() on your data or alias() on your model for closer inspection.
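A minimal way to reproduce the message and see both diagnostics in action:

    set.seed(1)
    d <- data.frame(x1 = rnorm(20))
    d$x2 <- 2 * d$x1                    # an exact linear copy of x1
    d$y  <- d$x1 + rnorm(20)
    fit <- lm(y ~ x1 + x2, data = d)
    summary(fit)    # x2 coefficient is NA: "not defined because of singularities"
    alias(fit)      # shows x2 = 2 * x1, i.e. the offending dependency
    cor(d)          # the perfect correlation is also visible here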
{ "source": [ "https://stats.stackexchange.com/questions/13465", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5397/" ] }
13,494
Stein's Example shows that the maximum likelihood estimate of $n$ normally distributed variables with means $\mu_1,\ldots,\mu_n$ and variances $1$ is inadmissible (under a square loss function) iff $n\ge 3$ . For a neat proof, see the first chapter of Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction by Bradley Effron. This was highly surprising to me at first, but there is some intuition behind why one might expect the standard estimate to be inadmissible (most notably, if $x \sim \mathcal N(\mu,1)$ , then $\mathbb{E}\|x\|^2\approx \|\mu\|^2+n$ , as outlined in Stein's original paper, linked to below). My question is rather: What property of $n$ -dimensional space (for $n\ge 3$ ) does $\mathbb{R}^2$ lack which facilitates Stein's example? Possible answers could be about the curvature of the $n$ -sphere, or something completely different. In other words, why is the MLE admissible in $\mathbb{R}^2$ ? Edit 1: In response to @mpiktas concern about 1.31 following from 1.30: $$E_\mu\left(\|z-\hat{\mu}\|^2\right)=E_\mu\left(S\left(\frac{N-2}{S}\right)^2\right)=E_\mu\left(\frac{(N-2)^2}{S}\right).$$ $$\hat{\mu_i} = \left(1-\frac{N-2}{S}\right)z_i$$ so $$E_\mu\left(\frac{\partial\hat{\mu_i}}{\partial z_i} \right)=E_\mu\left( 1-\frac{N-2}{S}+2\frac{z_i^2}{S^2}\right).$$ Therefore we have: $$2\sum_{i=1}^N E_\mu\left(\frac{\partial\hat{\mu_i}}{\partial z_i} \right)=2N-2E_\mu\left(\frac{N(N-2)}{S}\right)+4E_\mu\left(\frac{(N-2)}{S}\right)\\=2N-E_\mu\frac{2(N-2)^2}{S}.$$ Edit 2 : In this paper , Stein proves that the MLE is admissible for $N=2$ .
The dichotomy between the cases $d < 3$ and $d \geq 3$ for the admissibility of the MLE of the mean of a $d$-dimensional multivariate normal random variable is certainly shocking. There is another very famous example in probability and statistics in which there is a dichotomy between the $d < 3$ and $d \geq 3$ cases. This is the recurrence of a simple random walk on the lattice $\mathbb{Z}^d$. That is, the $d$-dimensional simple random walk is recurrent in 1 or 2 dimensions, but is transient in $d \geq 3$ dimensions. The continuous-time analogue (in the form of Brownian motion) also holds. It turns out that the two are closely related. Larry Brown proved that the two questions are essentially equivalent. That is, the best invariant estimator $\hat{\mu} \equiv \hat{\mu}(X) = X$ of a $d$-dimensional multivariate normal mean vector is admissible if and only if the $d$-dimensional Brownian motion is recurrent. In fact, his results go much further. For any sensible (i.e., generalized Bayes) estimator $\tilde{\mu} \equiv \tilde{\mu}(X)$ with bounded (generalized) $L_2$ risk, there is an explicit(!) corresponding $d$-dimensional diffusion such that the estimator $\tilde{\mu}$ is admissible if and only if its corresponding diffusion is recurrent. The local mean of this diffusion is essentially the discrepancy between the two estimators, i.e., $\tilde{\mu} - \hat{\mu}$ and the covariance of the diffusion is $2 I$. From this, it is easy to see that for the case of the MLE $\tilde{\mu} = \hat{\mu} = X$, we recover (rescaled) Brownian motion. So, in some sense, we can view the question of admissibility through the lens of stochastic processes and use well-studied properties of diffusions to arrive at the desired conclusions. References L. Brown (1971). Admissible estimators, recurrent diffusions, and insoluble boundary value problems . Ann. Math. Stat. , vol. 42, no. 3, pp. 855–903. R. N. Bhattacharya (1978). Criteria for recurrence and existence of invariant measures for multidimensional diffusions . Ann. Prob. , vol. 6, no. 4, 541–553.
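As a purely numerical aside (my own illustration, not part of Brown's argument), the inadmissibility in the original question is easy to see by simulating the $(1-(N-2)/S)\,z$ shrinkage estimator from Edit 1 and comparing squared-error losses with the MLE:

    set.seed(1)
    d <- 10; mu <- rnorm(d); reps <- 5000
    loss_mle <- loss_js <- numeric(reps)
    for (i in seq_len(reps)) {
      z  <- rnorm(d, mu)                      # z ~ N(mu, I)
      js <- (1 - (d - 2) / sum(z^2)) * z      # James-Stein shrinkage
      loss_mle[i] <- sum((z - mu)^2)
      loss_js[i]  <- sum((js - mu)^2)
    }
    mean(loss_mle)   # close to d = 10
    mean(loss_js)    # noticeably smaller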
{ "source": [ "https://stats.stackexchange.com/questions/13494", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5234/" ] }
13,533
Can anybody explain in detail: What does reject inferencing mean? How can it be used to increase accuracy of my model? I do have idea of reject inferencing in credit card application but struggling with the thought of using it to increase the accuracy of my model.
In credit model building, reject inferencing is the process of inferring the performance of credit accounts that were rejected in the application process. When building an application credit risk model, we want to build a model that has " through-the-door " applicability, i.e., we input all of the application data into the credit risk model, and the model outputs a risk rating or a probability of default. The problem when using regression to build a model from past data is that we know the performance of the account only for past accepted applications. However, we don't know the performance of the rejects, because after applying we sent them back out the door. This can result in selection bias in our model, because if we only use past "accepts" in our model, the model might not perform well on the "through-the-door" population. There are many ways to deal with reject inferencing, all of them controversial. I'll mention two simple ones here. "Define past rejects as bad" Parceling "Define past rejects as bad" is simply taking all of the rejected application data, and instead of discarding it when building the model, assign all of them as bad. This method heavily biases the model towards the past accept/reject policy. "Parceling" is a little bit more sophisticated. It consists of Build the regression model with the past "accepts" Apply the model to the past rejects to assign risk ratings to them Using the expected probability of default for each risk rating, assign the rejected applications to being either good or bad. E.g., if the risk rating has a probability of default of 10%, and there are 100 rejected applications that fall into this risk rating, assign 10 of the rejects to "bad" and 90 of the rejects to "good". Rebuild the regression model using the accepted applications and now the inferred performance of the rejected applications There are different ways to do the assignments to good or bad in step 3, and this process can also be applied iteratively. As stated earlier, the use of reject inferencing is controversial, and it's difficult to give a straightforward answer on how it can be used to increase accuracy of models. I'll simply quote some others on this matter. Jonathan Crook and John Banasik, Does Reject Inference Really Improve the Performance of Application Scoring Models? First, even where a very large proportion of applicants are rejected, the scope for improving on a model parameterised only on those accepted appears modest. Where the rejection rate is not so large, that scope appears to be very small indeed. David Hand, "Direct Inference in Credit Operations", appearing in Handbook of Credit Scoring, 2001 Several methods have been proposed and are used and, while some of them are clearly poor and should never be recommended, there is no unique best method of universal applicability unless extra information is obtained. That is, the best solution is to obtain more information (perhaps by granting loans to some potential rejects) about those applicants who fall in the reject region.
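A rough, made-up sketch of the parceling steps with a logistic model (here the good/bad assignment in step 3 is randomised according to the predicted probability, one of the variants mentioned above):

    set.seed(1)
    n <- 2000
    x <- rnorm(n)                                      # one application characteristic
    accepted <- x > -0.5                               # the past accept/reject policy
    bad <- rbinom(n, 1, plogis(-1 - 1.5 * x))          # true outcome (unseen for rejects)
    acc <- data.frame(x = x[accepted], bad = bad[accepted])
    rej <- data.frame(x = x[!accepted])
    m1  <- glm(bad ~ x, binomial, data = acc)          # step 1: model on accepts only
    p   <- predict(m1, rej, type = "response")         # step 2: score the rejects
    rej$bad <- rbinom(nrow(rej), 1, p)                 # step 3: inferred good/bad
    m2  <- glm(bad ~ x, binomial, data = rbind(acc, rej))  # step 4: refit on both
    rbind(accepts_only = coef(m1), with_inferred = coef(m2))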
{ "source": [ "https://stats.stackexchange.com/questions/13533", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1763/" ] }
13,536
When integrating a function or running complex simulations, I have seen that the Monte Carlo method is widely used. I'm asking myself why one doesn't generate a grid of points to integrate a function instead of drawing random points. Wouldn't that give more exact results?
I found chapter 1 and 2 of these lecture notes helpful when I asked the same question myself a few years ago. A short summary: A grid with $N$ points in 20 dimensional space will demand $N^{20}$ function evaluations. That is a lot. By using Monte Carlo simulation, we dodge the curse of dimensionality to some extent. The convergence of a Monte Carlo simulation is $O(N^{-1/2})$ which is, albeit pretty slow, dimensionally independent .
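A small R illustration of both points (the integrand and the dimension here are my own arbitrary choices): in d = 20 a grid with only 10 points per axis would already require 10^20 function evaluations, while the Monte Carlo standard error shrinks like N^(-1/2) regardless of d.
set.seed(1)
d <- 20; N <- 1e5
x  <- matrix(runif(N * d), ncol = d)                   # N random points in the unit hypercube
fx <- exp(-rowSums(x^2))                               # example integrand f(x) = exp(-||x||^2)
c(estimate = mean(fx), std.error = sd(fx) / sqrt(N))   # error ~ N^(-1/2), whatever d is
10^d                                                   # evaluations a 10-per-axis grid would need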
{ "source": [ "https://stats.stackexchange.com/questions/13536", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2091/" ] }
13,607
Many clinical studies are based on non-random samples. However, most standard tests (e.g. t-tests, ANOVA, linear regression, logistic regression) are based on the assumption that samples are drawn at random. Are the results valid if such non-random samples are analyzed with standard tests? Thank you.
There are two general models to testing. The first one, based on the assumption of random sampling from a population, is usually called the "population model". For example, for the two-independent samples t-test, we assume that the two groups we want to compare are random samples from the respective populations. Assuming that the distributions of the scores within the two groups are normally distributed in the population, we can then derive analytically the sampling distribution of the test statistic (i.e., for the t-statistic). The idea is that if we were to repeat this process (randomly drawing two samples from the respective populations) an infinite number of times (of course, we do not actually do that), we would obtain this sampling distribution for the test statistic. An alternative model for testing is the "randomization model". Here, we do not have to appeal to random sampling. Instead, we obtain a randomization distribution through permutations of our samples. For example, for the t-test, you have your two samples (not necessarily obtained via random sampling). Now if indeed there is no difference between these two groups, then whether a particular person actually "belongs" to group 1 or group 2 is arbitrary. So, what we can do is to permute the group assignment over and over, each time noting how far the means of the two groups are apart. This way, we obtain a sampling distribution empirically. We can then compare how far the two means are apart in the original samples (before we started to reshuffle the group memberships) and if that difference is "extreme" (i.e., falls into the tails of empirically derived sampling distribution), then we conclude that group membership is not arbitrary and there is indeed a difference between the two groups. In many situations, the two approaches actually lead to the same conclusion. In a way, the approach based on the population model can be seen as an approximation to the randomization test. Interestingly, Fisher was the one who proposed the randomization model and suggested that it should be the basis for our inferences (since most samples are not obtained via random sampling). A nice article describing the difference between the two approaches is: Ernst, M. D. (2004). Permutation methods: A basis for exact inference. Statistical Science, 19(4), 676-685 (link) . Another article that provides a nice summary and suggest that the randomization approach should be the basis for our inferences: Ludbrook, J., & Dudley, H. (1998). Why permutation tests are superior to t and F tests in biomedical research. American Statistician, 52(2), 127-132 (link) . EDIT: I should also add that it is common to calculate the same test statistic when using the randomization approach as under the population model. So, for example, for testing the difference in means between two groups, one would calculate the usual t-statistic for all possible permutations of the group memberships (yielding the empirically derived sampling distribution under the null hypothesis) and then one would check how extreme the t-statistic for the original group membership is under that distribution.
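For concreteness, here is a bare-bones randomization test for a difference in means along the lines described above, using made-up data and a Monte Carlo sample of permutations rather than the full permutation distribution:
set.seed(1)
g1 <- rnorm(15, mean = 0); g2 <- rnorm(15, mean = 0.8)   # two made-up samples
obs    <- t.test(g1, g2)$statistic                       # t-statistic for the observed labels
pooled <- c(g1, g2); n1 <- length(g1)
perm <- replicate(9999, {
  idx <- sample(length(pooled), n1)                      # reshuffle group membership
  t.test(pooled[idx], pooled[-idx])$statistic
})
mean(abs(c(obs, perm)) >= abs(obs))                      # two-sided randomization p-value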
{ "source": [ "https://stats.stackexchange.com/questions/13607", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5546/" ] }
13,614
I'm trying to model a weekly process of adoption (adoption events could only occur on Fridays) using the coxph function, but a large number of observations are missing for the first 6 years, leaving me with pseudo-annual data at irregular intervals. The problem is, R's method for handling interval censoring appears to assume regular time intervals. As I understand this vignette, interval-censored data are represented by three numbers (appropriate medical analogy in parentheses): First non-adopted observation (or first "well" observation) Last non-adopted observation (or last "well" observation) First adopted observation (first infected or death observation) What I would like to do is elide the last-adopted observation, and instead include a list of missing observation times. In my case, since my basic time unit is a week, specifying that between weeks 5 and 20, 27 and 34, etc., observations were missing would be much more appropriate. Otherwise, it just appears as though massive collections of events happened very irregularly, and the Cox model does not take into account the fact that events could have happened during those missing weeks. Another potential problem is that it is conceivable for an adoption event to occur during the censored time interval and then an "un-adoption" event to happen before the next observation. I think the medical analogy normally gets around this problem because events like infection and death are unlikely to have gotten better by the time of the next observation (though it's presumably a problem for them as well in the former case). My hope is that the trick John Fox uses to handle time-dependent covariates will allow me to deal with this problem. Any suggestions are welcome (Stata would also be an option). Thanks very much.
{ "source": [ "https://stats.stackexchange.com/questions/13614", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5571/" ] }
13,630
I want to sample according to a density $$ f(a) \propto \frac{c^a d^{a-1}}{\Gamma(a)} 1_{(1,\infty)}(a) $$ where $c$ and $d$ are strictly positive. (Motivation: This could be useful for Gibbs sampling when the shape parameter of a Gamma density has a uniform prior.) Does anyone know how to sample from this density easily? Maybe it is standard and just something I don't know about? I can think of a stupid rejection sampling algorithm that will more or less work (find the mode $a^*$ of $f$, sample $(a,u)$ uniformly from a big box $[0,10a^*]\times [0,f(a^*)]$ and reject if $u>f(a)$), but (i) it is not at all efficient and (ii) $f(a^*)$ will be too big for a computer to handle easily for even moderately large $c$ and $d$. (Note that the mode for large $c$ and $d$ is approximately at $a=cd$.) Thanks in advance for any help!
Rejection sampling will work exceptionally well when $c d \ge \exp(5)$ and is reasonable for $c d \ge \exp(2)$ . To simplify the math a little, let $k = c d$ , write $x = a$ , and note that $$f(x) \propto \frac{k^x}{\Gamma(x)} dx$$ for $x \ge 1$ . Setting $x = u^{3/2}$ gives $$f(u) \propto \frac{k^{u^{3/2}}}{\Gamma(u^{3/2})} u^{1/2} du$$ for $u \ge 1$ . When $k \ge \exp(5)$ , this distribution is extremely close to Normal (and gets closer as $k$ gets larger). Specifically, you can Find the mode of $f(u)$ numerically (using, e.g., Newton-Raphson). Expand $\log{f(u)}$ to second order about its mode. This yields the parameters of a closely approximate Normal distribution. To high accuracy, this approximating Normal dominates $f(u)$ except in the extreme tails. (When $k \lt \exp(5)$ , you may need to scale the Normal pdf up a little bit to assure domination.) Having done this preliminary work for any given value of $k$ , and having estimated a constant $M \gt 1$ (as described below), obtaining a random variate is a matter of: Draw a value $u$ from the dominating Normal distribution $g(u)$ . If $u \lt 1$ or if a new uniform variate $X$ exceeds $f(u)/(M g(u))$ , return to step 1. Set $x = u^{3/2}$ . The expected number of evaluations of $f$ due to the discrepancies between $g$ and $f$ is only slightly greater than 1. (Some additional evaluations will occur due to rejections of variates less than $1$ , but even when $k$ is as low as $2$ the frequency of such occurrences is small.) This plot shows the logarithms of g and f as a function of u for $k=\exp(5)$ . Because the graphs are so close, we need to inspect their ratio to see what's going on: This displays the log ratio $\log(\exp(0.004)g(u)/f(u))$ ; the factor of $M = \exp(0.004)$ was included to assure the logarithm is positive throughout the main part of the distribution; that is, to assure $Mg(u) \ge f(u)$ except possibly in regions of negligible probability. By making $M$ sufficiently large you can guarantee that $M \cdot g$ dominates $f$ in all but the most extreme tails (which have practically no chance of being chosen in a simulation anyway). However, the larger $M$ is, the more frequently rejections will occur. As $k$ grows large, $M$ can be chosen very close to $1$ , which incurs practically no penalty. A similar approach works even for $k \gt \exp(2)$ , but fairly large values of $M$ may be needed when $\exp(2) \lt k \lt \exp(5)$ , because $f(u)$ is noticeably asymmetric. For instance, with $k = \exp(2)$ , to get a reasonably accurate $g$ we need to set $M=1$ : The upper red curve is the graph of $\log(\exp(1)g(u))$ while the lower blue curve is the graph of $\log(f(u))$ . Rejection sampling of $f$ relative to $\exp(1)g$ will cause about 2/3 of all trial draws to be rejected, tripling the effort: still not bad. The right tail ( $u \gt 10$ or $x \gt 10^{3/2} \sim 30$ ) will be under-represented in the rejection sampling (because $\exp(1)g$ no longer dominates $f$ there), but that tail comprises less than $\exp(-20) \sim 10^{-9}$ of the total probability. To summarize, after an initial effort to compute the mode and evaluate the quadratic term of the power series of $f(u)$ around the mode--an effort that requires a few tens of function evaluations at most--you can use rejection sampling at an expected cost of between 1 and 3 (or so) evaluations per variate. The cost multiplier rapidly drops to 1 as $k = c d$ increases beyond 5. Even when just one draw from $f$ is needed, this method is reasonable. 
It comes into its own when many independent draws are needed for the same value of $k$ , for then the overhead of the initial calculations is amortized over many draws. Addendum @Cardinal has asked, quite reasonably, for support of some of the hand-waving analysis in the forgoing. In particular, why should the transformation $x = u^{3/2}$ make the distribution approximately Normal? In light of the theory of Box-Cox transformations , it is natural to seek some power transformation of the form $x = u^\alpha$ (for a constant $\alpha$ , hopefully not too different from unity) that will make a distribution "more" Normal. Recall that all Normal distributions are simply characterized: the logarithms of their pdfs are purely quadratic, with zero linear term and no higher order terms. Therefore we can take any pdf and compare it to a Normal distribution by expanding its logarithm as a power series around its (highest) peak. We seek a value of $\alpha$ that makes (at least) the third power vanish, at least approximately: that is the most we can reasonably hope that a single free coefficient will accomplish. Often this works well. But how to get a handle on this particular distribution? Upon effecting the power transformation, its pdf is $$f(u) = \frac{k^{u^{\alpha}}}{\Gamma(u^{\alpha})} u^{\alpha-1}.$$ Take its logarithm and use Stirling's asymptotic expansion of $\log(\Gamma)$ : $$\log(f(u)) \approx \log(k) u^\alpha + (\alpha - 1)\log(u) - \alpha u^\alpha \log(u) + u^\alpha - \log(2 \pi u^\alpha)/2 + c u^{-\alpha}$$ (for small values of $c$ , which is not constant). This works provided $\alpha$ is positive, which we will assume to be the case (for otherwise we cannot neglect the remainder of the expansion). Compute its third derivative (which, when divided by $3!$ , will be the coefficient of the third power of $u$ in the power series) and exploit the fact that at the peak, the first derivative must be zero. This simplifies the third derivative greatly, giving (approximately, because we are ignoring the derivative of $c$ ) $$-\frac{1}{2} u^{-(3+\alpha)} \alpha \left(2 \alpha(2 \alpha-3) u^{2 \alpha} + (\alpha^2 - 5\alpha +6)u^\alpha + 12 c \alpha \right).$$ When $k$ is not too small, $u$ will indeed be large at the peak. Because $\alpha$ is positive, the dominant term in this expression is the $2\alpha$ power, which we can set to zero by making its coefficient vanish: $$2 \alpha-3 = 0.$$ That's why $\alpha = 3/2$ works so well: with this choice, the coefficient of the cubic term around the peak behaves like $u^{-3}$ , which is close to $\exp(-2 k)$ . Once $k$ exceeds 10 or so, you can practically forget about it, and it's reasonably small even for $k$ down to 2. The higher powers, from the fourth on, play less and less of a role as $k$ gets large, because their coefficients grow proportionately smaller, too. Incidentally, the same calculations (based on the second derivative of $log(f(u))$ at its peak) show the standard deviation of this Normal approximation is slightly less than $\frac{2}{3}\exp(k/6)$ , with the error proportional to $\exp(-k/2)$ .
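A rough numerical translation of this scheme into R is sketched below (my own paraphrase, not the answerer's code): the mode is found with optimize, the curvature by a finite difference, everything stays on the log scale, and the small inflation constant logM is chosen ad hoc to help the normal envelope dominate.
logf <- function(u, k) { x <- u^1.5; x * log(k) - lgamma(x) + 0.5 * log(u) }  # log f(u), up to a constant
k   <- exp(5)
opt <- optimize(logf, interval = c(1, 1e4), k = k, maximum = TRUE)
m   <- opt$maximum                                                  # mode of f(u)
h   <- 1e-3
s   <- sqrt(-h^2 / (logf(m + h, k) - 2 * logf(m, k) + logf(m - h, k)))  # sd from the 2nd derivative
logM <- 0.02                                                        # ad hoc inflation so M*g dominates f
logC <- opt$objective - dnorm(m, m, s, log = TRUE)                  # matches f and g at the mode
draw_a <- function() {
  repeat {
    u <- rnorm(1, m, s)
    if (u < 1) next
    if (log(runif(1)) < logf(u, k) - logM - logC - dnorm(u, m, s, log = TRUE))
      return(u^1.5)                                                 # back-transform to a = u^(3/2)
  }
}
a <- replicate(1000, draw_a())                                      # draws from f(a) with c*d = exp(5)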
{ "source": [ "https://stats.stackexchange.com/questions/13630", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5179/" ] }
13,643
I'm struggling to grasp the concept of bias in the context of linear regression analysis. What is the mathematical definition of bias? What exactly is biased and why/how? Illustrative example?
Bias is the difference between the expected value of an estimator and the true value being estimated. For example, the sample mean for a simple random sample (SRS) is an unbiased estimator of the population mean, because if you take all the possible SRS's, find their means, and take the mean of those means, then you will get the population mean (for finite populations this is just algebra to show). But if we use a sampling mechanism that is somehow related to the value, then the mean can become biased; think of a random digit dialing sample asking a question about income. If there is a positive correlation between the number of phone numbers someone has and their income (poor people only have a few phone numbers that they can be reached at while richer people have more), then the sample will be more likely to include people with higher incomes, and therefore the mean income in the sample will tend to be higher than the population income. There are also some estimators that are naturally biased. The trimmed mean will be biased for a skewed population/distribution. The usual variance estimator is unbiased for SRS's if either the population mean is used with denominator $n$ or the sample mean is used with denominator $n-1$. Here is a simple example using R: we generate a bunch of samples from a normal with mean 0 and standard deviation 1, then compute the average mean, variance, and standard deviation from the samples. Notice how close the mean and variance averages are to the true values (sampling error means they won't be exact); now compare the mean sd: it is a biased estimator (though not hugely biased). > tmp.data <- matrix( rnorm(10*1000000), ncol=10 ) > mean( apply(tmp.data, 1, mean) ) [1] 0.0001561002 > mean( apply(tmp.data, 1, var) ) [1] 1.000109 > mean( apply(tmp.data, 1, sd) ) [1] 0.9727121 In regression we can get biased estimators of slopes by doing stepwise regression. A variable is more likely to be kept in a stepwise regression if the estimated slope is further from 0 and more likely to be dropped if it is closer to 0, so this is biased sampling and the slopes in the final model will tend to be further from 0 than the true slope. Techniques like the lasso and ridge regression bias slopes towards 0 to counter the selection bias away from 0.
{ "source": [ "https://stats.stackexchange.com/questions/13643", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3541/" ] }
13,686
I have a dataset with around 30 independent variables and would like to construct a generalized linear model (GLM) to explore the relationship between them and the dependent variable. I am aware that the method I was taught for this situation, stepwise regression, is now considered a statistical sin . What modern methods of model selection should be used in this situation?
There are several alternatives to Stepwise Regression . The most used I have seen are: Expert opinion to decide which variables to include in the model. Partial Least Squares Regression . You essentially get latent variables and do a regression with them. You could also do PCA yourself and then use the principal variables. Least Absolute Shrinkage and Selection Operator (LASSO). Both PLS Regression and LASSO are implemented in R packages like PLS : http://cran.r-project.org/web/packages/pls/ and LARS : http://cran.r-project.org/web/packages/lars/index.html If you only want to explore the relationship between your dependent variable and the independent variables (e.g. you do not need statistical significance tests), I would also recommend Machine Learning methods like Random Forests or Classification/Regression Trees . Random Forests can also approximate complex non-linear relationships between your dependent and independent variables, which might not have been revealed by linear techniques (like Linear Regression ). A good starting point to Machine Learning might be the Machine Learning task view on CRAN: Machine Learning Task View : http://cran.r-project.org/web/views/MachineLearning.html
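For illustration, a minimal sketch of three of these options in R on a built-in dataset (assuming the lars, pls, and randomForest packages are installed; the dataset and settings are arbitrary):
x <- as.matrix(mtcars[, -1]); y <- mtcars$mpg
library(lars)
fit.lasso <- lars(x, y, type = "lasso")                       # full LASSO path
cv.lars(x, y, K = 10)                                         # pick the penalty by cross-validation
library(pls)
fit.pls <- plsr(mpg ~ ., data = mtcars, validation = "CV")    # partial least squares regression
library(randomForest)
fit.rf <- randomForest(mpg ~ ., data = mtcars, importance = TRUE)
varImpPlot(fit.rf)                                            # variable importance for exploration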
{ "source": [ "https://stats.stackexchange.com/questions/13686", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/179/" ] }
13,983
I have a sample size of 6. In such a case, does it make sense to test for normality using the Kolmogorov-Smirnov test? I used SPSS. I have a very small sample size because it takes time to obtain each observation. If it doesn't make sense, what is the smallest sample size for which the test does make sense? Note: I did an experiment related to source code. The sample is the time spent coding in one version of the software (version A). Actually, I have another sample of size 6, which is the time spent coding in another version of the software (version B). I would like to do hypothesis testing using a one-sample t-test to test whether the time spent on code version A differs from the time spent on code version B or not (this is my H1). The precondition of the one-sample t-test is that the data to be tested have to be normally distributed. That is why I need to test for normality.
Yes. All hypothesis tests have two salient properties : their size (or "significance level"), a number which is directly related to confidence and expected false positive rates, and their power, which expresses the chance of false negatives. When sample sizes are small and you continue to insist on a small size (high confidence), the power gets worse. This means that small-sample tests usually cannot detect small or moderate differences. But they are still meaningful . The K-S test assesses whether the sample appears to have come from a Normal distribution. A sample of six values will have to look highly non-normal indeed to fail this test. But if it does, you can interpret this rejection of the null exactly as you would interpret it with higher sample sizes. On the other hand, if the test fails to reject the null hypothesis, that tells you little, due to the high false negative rate. In particular, it would be relatively risky to act as if the underlying distribution were Normal. One more thing to watch out for here: some software uses approximations to compute p-values from the test statistics. Often these approximations work well for large sample sizes but act poorly for very small sample sizes. When this is the case, you cannot trust that the p-value has been correctly computed, which means you cannot be sure that the desired test size has been attained. For details, consult your software documentation. Some advice: The KS test is substantially less powerful for testing normality than other tests specifically constructed for this purpose. The best of them is probably the Shapiro-Wilk test, but others commonly used and almost as powerful are the Shapiro-Francia and Anderson-Darling . This plot displays the distribution of the Kolmogorov-Smirnov test statistic in 10,000 samples of six normally-distributed variates: Based on 100,000 additional samples, the upper 95th percentile (which estimates the critical value for this statistic for a test of size $\alpha=5\%$) is 0.520. An example of a sample that passes this test is the dataset 0.000, 0.001, 0.002, 1.000, 1.001, 1000000 The test statistic is 0.5 (which is less than the critical value). Such a sample would be rejected using the other tests of normality.
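The simulation described above is easy to reproduce in R. Note that standardizing the pathological sample before calling ks.test is only for illustration here; estimating parameters from the data technically calls for a Lilliefors-type correction.
set.seed(1)
d <- replicate(1e4, ks.test(rnorm(6), "pnorm")$statistic)
quantile(d, 0.95)                                   # roughly the 0.52 critical value quoted above
x <- c(0.000, 0.001, 0.002, 1.000, 1.001, 1000000)
ks.test(as.vector(scale(x)), "pnorm")$statistic     # about 0.5, so not rejected with n = 6
shapiro.test(x)                                     # Shapiro-Wilk rejects this sample easily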
{ "source": [ "https://stats.stackexchange.com/questions/13983", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3584/" ] }
14,002
How are PCA and classical MDS different? How about MDS versus nonmetric MDS? Is there a time when you would prefer one over the other? How do the interpretations differ?
Classic Torgerson's metric MDS is actually done by transforming distances into similarities and performing PCA (eigen-decomposition or singular-value decomposition) on those. [The other name of this procedure (distances between objects -> similarities between them -> PCA, whereby loadings are the sought-for coordinates) is Principal Coordinate Analysis, or PCoA.] So, PCA might be called the algorithm of the simplest MDS. Non-metric MDS is based on the iterative ALSCAL or PROXSCAL algorithm (or algorithms similar to them), which is a more versatile mapping technique than PCA and can be applied to metric MDS as well. While PCA retains m important dimensions for you, ALSCAL/PROXSCAL fits a configuration to m dimensions (you pre-define m) and it reproduces dissimilarities on the map more directly and accurately than PCA usually can (see the Illustration section below). Thus, MDS and PCA are probably not at the same level to be in line with or opposed to each other. PCA is just a method while MDS is a class of analysis. As a mapping, PCA is a particular case of MDS. On the other hand, PCA is a particular case of Factor analysis which, being a data reduction, is more than only a mapping, while MDS is only a mapping. As for your question about metric MDS vs non-metric MDS, there's little to comment on because the answer is straightforward. If I believe my input dissimilarities are so close to being Euclidean distances that a linear transform will suffice to map them into m-dimensional space, I will prefer metric MDS. If I don't believe that, then a monotonic transformation is necessary, implying the use of non-metric MDS. A note on terminology for the reader. The term Classic(al) MDS (CMDS) can have two different meanings in the vast literature on MDS, so it is ambiguous and should be avoided. One definition is that CMDS is a synonym of Torgerson's metric MDS. Another definition is that CMDS is any MDS (by any algorithm; metric or nonmetric analysis) with a single-matrix input (for there exist models analyzing many matrices at once - the Individual "INDSCAL" model and the Replicated model). Illustration to the answer. Some cloud of points (an ellipse) is being mapped onto a one-dimensional MDS map. A pair of points is shown as red dots. Iterative or "true" MDS aims directly to reconstruct pairwise distances between objects, for that is the task of any MDS. Various stress or misfit criteria could be minimized between the original (o) distances and the distances on the map (m): $\|D_o-D_m\|_2^2$ , $\|D_o^2-D_m^2\|_1$ , $\|D_o-D_m\|_1$ . An algorithm may (non-metric MDS) or may not (metric MDS) include a monotonic transformation along the way. PCA-based MDS (Torgerson's, or PCoA) is not so direct. It minimizes the squared distances between objects in the original space and their images on the map. This is not quite the genuine MDS task; it is successful, as MDS, only to the extent to which the discarded junior principal axes are weak. If $P_1$ explains much more variance than $P_2$, the former can alone substantially reflect pairwise distances in the cloud, especially for points lying far apart along the ellipse. Iterative MDS will always win, especially when the map is wanted to be very low-dimensional. Iterative MDS will also succeed more when the cloud ellipse is thin, but will still fulfill the MDS task better than PCoA. By the property of the double-centering matrix (described here) it appears that PCoA minimizes $\|D_o\|_2^2-\|D_m\|_2^2$ , which is different from any of the above minimizations.
Once again, PCA projects the cloud's points onto the subspace that is most advantageous for preserving the cloud as a whole. It does not project the pairwise distances (the relative locations of points) onto the subspace that is most preserving in that respect, the way iterative MDS does. Nevertheless, historically PCoA/PCA is considered among the methods of metric MDS.
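A quick numerical check of the PCoA-equals-PCA correspondence, plus a non-metric fit for comparison, using only built-in R data and functions (my own toy example):
X <- scale(USArrests, scale = FALSE)     # any numeric data, centered
D <- dist(X)                             # Euclidean distances between objects
pcoa <- cmdscale(D, k = 2)               # Torgerson's metric MDS (PCoA)
pca  <- prcomp(X)$x[, 1:2]               # PCA component scores
diag(cor(pcoa, pca))                     # +1 or -1: identical configurations up to sign
library(MASS)
nmds <- isoMDS(D, k = 2)                 # iterative non-metric MDS (minimizes stress)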
{ "source": [ "https://stats.stackexchange.com/questions/14002", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/36/" ] }
14,059
It is common to use weights in applications like mixture modeling and to linearly combine basis functions. Weights $w_i$ must often obey $w_i β‰₯$ 0 and $\sum_{i} w_i=1$. I'd like to randomly choose a weight vector $\mathbf{w} = (w_1, w_2, …)$ from a uniform distribution of such vectors. It may be tempting to use $w_i = \frac{\omega_i}{\sum_{j} \omega_j}$ where $\omega_i \sim$ U(0, 1), however as discussed in the comments below, the distribution of $\mathbf{w}$ is not uniform. However, given the constraint $\sum_{i} w_i=1$, it seems that the underlying dimensionality of the problem is $n-1$, and that it should be possible to choose a $\mathbf{w}$ by choosing $n-1$ parameters according to some distribution and then computing the corresponding $\mathbf{w}$ from those parameters (because once $n-1$ of the weights are specified, the remaining weight is fully determined). The problem appears to be similar to the sphere point picking problem (but, rather than picking 3-vectors whose $β„“_2$ norm is unity, I want to pick $n$-vectors whose $β„“_1$ norm is unity). Thanks!
Choose $\mathbf{x} \in [0,1]^{n-1}$ uniformly (by means of $n-1$ uniform reals in the interval $[0,1]$). Sort the coefficients so that $0 \le x_1 \le \cdots \le x_{n-1}$. Set $$\mathbf{w} = (x_1, x_2-x_1, x_3 - x_2, \ldots, x_{n-1} - x_{n-2}, 1 - x_{n-1}).$$ Because we can recover the sorted $x_i$ by means of the partial sums of the $w_i$, the mapping $\mathbf{x} \to \mathbf{w}$ is $(n-1)!$ to 1; in particular, its image is the $n-1$ simplex in $\mathbb{R}^n$. Because (a) each swap in a sort is a linear transformation, (b) the preceding formula is linear, and (c) linear transformations preserve uniformity of distributions, the uniformity of $\mathbf{x}$ implies the uniformity of $\mathbf{w}$ on the $n-1$ simplex. In particular, note that the marginals of $\mathbf{w}$ are not necessarily independent. This 3D point plot shows the results of 2000 iterations of this algorithm for $n=3$. The points are confined to the simplex and are approximately uniformly distributed over it. Because the execution time of this algorithm is $O(n \log(n)) \gg O(n)$, it is inefficient for large $n$. But this does answer the question! A better way (in general) to generate uniformly distributed values on the $n-1$-simplex is to draw $n$ uniform reals $(x_1, \ldots, x_n)$ on the interval $[0,1]$, compute $$y_i = -\log(x_i)$$ (which makes each $y_i$ positive with probability $1$, whence their sum is almost surely nonzero) and set $$\mathbf w = (y_1, y_2, \ldots, y_n) / (y_1 + y_2 + \cdots + y_n).$$ This works because each $y_i$ has a $\Gamma(1)$ distribution, which implies $\mathbf w$ has a Dirichlet$(1,1,1)$ distribution--and that is uniform.
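Both recipes are only a few lines of R; here is the second, O(n) one, with a small sanity check (my own code):
runif_simplex <- function(n) {
  y <- -log(runif(n))        # i.i.d. Exponential(1) = Gamma(1) variates
  y / sum(y)                 # normalized: Dirichlet(1,...,1), i.e. uniform on the simplex
}
w <- t(replicate(5000, runif_simplex(3)))
colMeans(w)                  # each coordinate averages about 1/3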
{ "source": [ "https://stats.stackexchange.com/questions/14059", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5754/" ] }
14,088
I am trying to move from using the ez package to lme for repeated measures ANOVA (as I hope I will be able to use custom contrasts on with lme ). Following the advice from this blog post I was able to set up the same model using both aov (as does ez , when requested) and lme . However, whereas in the example given in that post the F -values do perfectly agree between aov and lme (I checked it, and they do), this is not the case for my data. Although the F -values are similar, they are not the same. aov returns a f-value of 1.3399, lme returns 1.36264. I am willing to accept the aov result as the "correct" one as this is also what SPSS returns (and this is what counts for my field/supervisor). Questions: It would be great if someone could explain why this difference exists and how I can use lme to provide credible results. (I would also be willing to use lmer instead of lme for this type of stuff, if it gives the "correct" result. However, I haven't used it so far.) After solving this problem I would like to run a contrast analysis. Especially I would be interested in the contrast of pooling the first two levels of factor (i.e., c("MP", "MT") ) and compare this with the third level of factor (i.e., "AC" ). Furthermore, testing the third versus the fourth level of factor (i.e., "AC" versus "DA" ). Data: tau.base <- structure(list(id = structure(c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L, 20L, 21L, 22L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L, 20L, 21L, 22L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L, 20L, 21L, 22L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L, 20L, 21L, 22L), .Label = c("A18K", "D21C", "F25E", "G25D", "H05M", "H07A", "H08H", "H25C", "H28E", "H30D", "J10G", "J22J", "K20U", "M09M", "P20E", "P26G", "P28G", "R03C", "U21S", "W08A", "W15V", "W18R"), class = "factor"), factor = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L), .Label = c("MP", "MT", "AC", "DA" ), class = "factor"), value = c(0.9648092876, 0.2128662077, 1, 0.0607615485, 0.9912814024, 3.22e-08, 0.8073856412, 0.1465590332, 0.9981672618, 1, 1, 1, 0.9794401938, 0.6102546108, 0.428651501, 1, 0.1710644881, 1, 0.7639763913, 1, 0.5298989196, 1, 1, 0.7162733447, 0.7871177434, 1, 1, 1, 0.8560509327, 0.3096989662, 1, 8.51e-08, 0.3278862311, 0.0953598576, 1, 1.38e-08, 1.07e-08, 0.545290432, 0.1305621416, 2.61e-08, 1, 0.9834051136, 0.8044114935, 0.7938839461, 0.9910112678, 2.58e-08, 0.5762677121, 0.4750002288, 1e-08, 0.8584252623, 1, 1, 0.6020385797, 8.51e-08, 0.7964935271, 0.2238374288, 0.263377904, 1, 1.07e-08, 0.3160751898, 5.8e-08, 0.3460325565, 0.6842217296, 1.01e-08, 0.9438301877, 0.5578367224, 2.18e-08, 1, 0.9161424562, 0.2924856039, 1e-08, 0.8672987992, 0.9266688748, 0.8356425464, 0.9988463913, 0.2960361777, 0.0285680426, 0.0969063841, 0.6947998266, 0.0138254805, 1, 0.3494775301, 1, 2.61e-08, 1.52e-08, 0.5393467752, 1, 0.9069223275)), .Names = c("id", "factor", "value"), class = "data.frame", row.names = c(1L, 6L, 10L, 13L, 16L, 17L, 18L, 22L, 23L, 24L, 27L, 29L, 31L, 33L, 42L, 43L, 44L, 45L, 54L, 56L, 58L, 61L, 64L, 69L, 73L, 76L, 79L, 80L, 
81L, 85L, 86L, 87L, 90L, 92L, 94L, 96L, 105L, 106L, 107L, 108L, 117L, 119L, 121L, 124L, 127L, 132L, 136L, 139L, 142L, 143L, 144L, 148L, 149L, 150L, 153L, 155L, 157L, 159L, 168L, 169L, 170L, 171L, 180L, 182L, 184L, 187L, 190L, 195L, 199L, 202L, 205L, 206L, 207L, 211L, 212L, 213L, 216L, 218L, 220L, 222L, 231L, 232L, 233L, 234L, 243L, 245L, 247L, 250L)) And the code: require(nlme) summary(aov(value ~ factor+Error(id/factor), data = tau.base)) anova(lme(value ~ factor, data = tau.base, random = ~1|id))
They are different because the lme model is forcing the variance component of id to be greater than zero. Looking at the raw anova table for all terms, we see that the mean squared error for id is less than that for the residuals. > anova(lm1 <- lm(value~ factor+id, data=tau.base)) Df Sum Sq Mean Sq F value Pr(>F) factor 3 0.6484 0.21614 1.3399 0.2694 id 21 3.1609 0.15052 0.9331 0.5526 Residuals 63 10.1628 0.16131 When we compute the variance components, this means that the variance due to id will be negative. My memory of expected mean squares memory is shaky, but the calculation is something like (0.15052-0.16131)/3 = -0.003597. This sounds odd but can happen. What it means is that the averages for each id are closer to each other than you would expect to each other given the amount of residual variation in the model. In contrast, using lme forces this variance to be greater than zero. > summary(lme1 <- lme(value ~ factor, data = tau.base, random = ~1|id)) ... Random effects: Formula: ~1 | id (Intercept) Residual StdDev: 3.09076e-05 0.3982667 This reports standard deviations, squaring to get the variance yields 9.553e-10 for the id variance and 0.1586164 for the residual variance. Now, you should know that using aov for repeated measures is only appropriate if you believe that the correlation between all pairs of repeated measures is identical; this is called compound symmetry. (Technically, sphericity is required but this is sufficient for now.) One reason to use lme over aov is that it can handle different kinds of correlation structures. In this particular data set, the estimate for this correlation is negative; this helps explain how the mean squared error for id was less than the residual squared error. A negative correlation means that if an individual's first measurement was below average, on average, their second would be above average, making the total averages for the individuals less variable than we would expect if there was a zero correlation or a positive correlation. Using lme with a random effect is equivalent to fitting a compound symmetry model where that correlation is forced to be non-negative; we can fit a model where the correlation is allowed to be negative using gls : > anova(gls1 <- gls(value ~ factor, correlation=corCompSymm(form=~1|id), data=tau.base)) Denom. DF: 84 numDF F-value p-value (Intercept) 1 199.55223 <.0001 factor 3 1.33985 0.267 This ANOVA table agrees with the table from the aov fit and from the lm fit. OK, so what? Well, if you believe that the variance from id and the correlation between observations should be non-negative, the lme fit is actually more appropriate than the fit using aov or lm as its estimate of the residual variance is slightly better. However, if you believe the correlation between observations could be negative, aov or lm or gls is better. You may also be interested in exploring the correlation structure further; to look at a general correlation structure, you'd do something like gls2 <- gls(value ~ factor, correlation=corSymm(form=~unclass(factor)|id), data=tau.base) Here I only limit the output to the correlation structure. The values 1 to 4 represent the four levels of factor ; we see that factor 1 and factor 4 have a fairly strong negative correlation: > summary(gls2) ... 
Correlation Structure: General Formula: ~unclass(factor) | id Parameter estimate(s): Correlation: 1 2 3 2 0.049 3 -0.127 0.208 4 -0.400 0.146 -0.024 One way to choose between these models is with a likelihood ratio test; this shows that the random effects model and the general correlation structure model aren't statistically significantly different; when that happens the simpler model is usually preferred. > anova(lme1, gls2) Model df AIC BIC logLik Test L.Ratio p-value lme1 1 6 108.0794 122.6643 -48.03972 gls2 2 11 111.9787 138.7177 -44.98936 1 vs 2 6.100725 0.2965
{ "source": [ "https://stats.stackexchange.com/questions/14088", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/442/" ] }
14,089
Wikipedia explains: For a data set, the mean is the sum of the values divided by the number of values. This definition however corresponds to what I call "average" (at least that's what I remember learning). Yet Wikipedia once more quotes: There are other statistical measures that use samples that some people confuse with averages - including 'median' and 'mode'. Now that's confusing. Are "mean value" and "average" different from one another? If so how?
Mean versus average The mean most commonly refers to the arithmetic mean , but may refer to some other form of mean, such as harmonic or geometric (see the Wikipedia article ). Thus, when used without qualification, I think most people would assume that "mean" refers to the arithmetic mean. Average has many meanings, some of which are much less mathematical than the term "mean". Even within the context of numerical summaries, "average" can refer to a broad range of measures of central tendency. Thus, the arithmetic mean is one type of average . Arguably, when used without qualification the average of a numeric variable often is meant to refer to the arithmetic mean. Side point It is interesting to observe that Excel uses the sloppier but more accessible name of AVERAGE() for its arithmetic mean function, where R uses mean() .
{ "source": [ "https://stats.stackexchange.com/questions/14089", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5705/" ] }
14,099
Question: I want to be sure of something: is the use of k-fold cross-validation with time series straightforward, or does one need to pay special attention before using it? Background: I'm modeling a time series of 6 years (with a semi-Markov chain), with a data sample every 5 minutes. To compare several models, I'm using 6-fold cross-validation, separating the data into the 6 years, so my training sets (to calculate the parameters) have a length of 5 years, and the test sets have a length of 1 year. I'm not taking the time order into account, so my different sets are: fold 1: training [1 2 3 4 5], test [6] fold 2: training [1 2 3 4 6], test [5] fold 3: training [1 2 3 5 6], test [4] fold 4: training [1 2 4 5 6], test [3] fold 5: training [1 3 4 5 6], test [2] fold 6: training [2 3 4 5 6], test [1]. I'm making the hypothesis that the years are independent of each other. How can I verify that? Is there any reference showing the applicability of k-fold cross-validation to time series?
Time-series (or other intrinsically ordered data) can be problematic for cross-validation. If some pattern emerges in year 3 and stays for years 4-6, then your model can pick up on it, even though it wasn't part of years 1 & 2. An approach that's sometimes more principled for time series is forward chaining, where your procedure would be something like this: fold 1 : training [1], test [2] fold 2 : training [1 2], test [3] fold 3 : training [1 2 3], test [4] fold 4 : training [1 2 3 4], test [5] fold 5 : training [1 2 3 4 5], test [6] That more accurately models the situation you'll see at prediction time, where you'll model on past data and predict on forward-looking data. It also will give you a sense of the dependence of your modeling on data size.
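In code, the forward-chaining folds are just a list of index sets; a minimal R sketch (with years standing in for whatever time blocks you use):
years <- 1:6
folds <- lapply(2:6, function(k) list(train = years[seq_len(k - 1)], test = years[k]))
folds[[1]]   # train on year 1, test on year 2
folds[[5]]   # train on years 1-5, test on year 6
# then fit the model on fold$train and evaluate on fold$test for each fold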
{ "source": [ "https://stats.stackexchange.com/questions/14099", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5767/" ] }
14,158
Let's say that I have a categorical variable which can take the values A, B, C and D. How can I generate 10000 random data points and control for the frequency of each? For example: A = 10% B = 20% C = 65% D = 5% Any ideas how I can do this?
Do you want the proportions in the sample to be exactly the proportions stated? or to represent the idea of sampling from a very large population with those proportions (so the sample proportions will be close but not exact)? If you want the exact proportions then you can follow Brandon's suggestion and use the R sample function to randomize the order of a vector that has the exact proportions. If you want to sample from the population, but not restrict the proportions to be exact then you can still use the sample function in R with the prob argument like so: > x <- sample( LETTERS[1:4], 10000, replace=TRUE, prob=c(0.1, 0.2, 0.65, 0.05) ) > prop.table(table(x)) x A B C D 0.0965 0.1972 0.6544 0.0519
{ "source": [ "https://stats.stackexchange.com/questions/14158", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/333/" ] }
14,210
It is a known fact that the median is resistant to outliers. If that is the case, when and why would we use the mean in the first place? One use I can think of is to detect the presence of outliers, i.e. if the median is far from the mean, then the distribution is skewed and perhaps the data need to be examined to decide what is to be done with the outliers. Are there any other uses?
In a sense, the mean is used because it is sensitive to the data. If the distribution happens to be symmetric and the tails are about like the normal distribution, the mean is a very efficient summary of central tendency. The median, while being robust and well-defined for any continuous distribution, is only $\frac{2}{\pi}$ as efficient as the mean if the data happened to come from a normal distribution. It is this relative inefficiency of the median that keeps us from using it even more than we do. The relative inefficiency translates into a minor absolute inefficiency as the sample size gets large, so for large $n$ we can be more guilt-free about using the median. It is interesting to note that for a measure of variation (spread, dispersion), there is a very robust estimator that is 0.98 as efficient as the standard deviation, namely Gini's mean difference. This is the mean absolute difference between any two observations. [You have to multiply the sample standard deviation by a constant to estimate the same quantity estimated by Gini's mean difference.] An efficient measure of central tendency is the Hodges-Lehmann estimator, i.e., the median of all pairwise means. We would use it more if its interpretation were simpler.
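A small simulation makes the efficiency comparison concrete, and the Hodges-Lehmann estimate is available as the "(pseudo)median" reported by wilcox.test (the sample size and settings below are arbitrary choices of mine):
set.seed(1)
est <- replicate(2e4, { x <- rnorm(20); c(mean = mean(x), median = median(x)) })
var(est["mean", ]) / var(est["median", ])    # efficiency of the median relative to the mean; tends to 2/pi ~ 0.64
x <- rnorm(20)
wilcox.test(x, conf.int = TRUE)$estimate     # Hodges-Lehmann estimate (median of pairwise means)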
{ "source": [ "https://stats.stackexchange.com/questions/14210", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2164/" ] }
14,215
I am trying to develop a method to identify associations between different time series for association mining. I have a lot of series and need to find whether or not an association exists. I have figured out two ways to approach this: Constructive: try to find a relation between two given series. Destructive: try to prove that a relation does not exist. Are there any existing mathematical measures to identify such relations? If not, any suggestions?
{ "source": [ "https://stats.stackexchange.com/questions/14215", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5808/" ] }
14,226
Given that software can do the Fisher's exact test calculation so easily nowadays , is there any circumstance where, theoretically or practically, the chi-squared test is actually preferable to Fisher's exact test? Advantages of the Fisher's exact test include: scaling to contingency tables larger than 2x2 (i.e any r x c table) gives an exact p-value not needing to have a minimum expected cell count to be valid
This is a great question. Fisher's exact test is one of the great examples of Fisher's clever use of experimental design, together with conditioning on the data (basically on tables with the observed row and column totals) and his ingenuity at finding probability distributions (though this isn't the best example; for a better example see here). The use of computers to calculate "exact" p-values has definitely helped to obtain accurate answers. However, it is hard to justify the assumptions of Fisher's exact test in practice. The so-called "exactness" comes from the fact that in the "tea tasting experiment", or in the 2x2 contingency table case, the row totals and column totals, that is, the marginal totals, are fixed by design. This assumption is rarely justified in practice. For nice references see here. The name "exact" leads one into believing that the p-values given by this test are exact, which again in most cases is unfortunately not correct, for these reasons: If the marginals are not fixed by design (which happens almost every time in practice), the p-values will be conservative. Since the test uses a discrete probability distribution (specifically, the hypergeometric distribution), for certain cutoffs it is impossible to calculate the "exact null probabilities", that is, the p-value. In most practical cases, using a likelihood ratio test or Chi-square test should not give very different answers (p-values) from Fisher's exact test. Yes, when the marginals are fixed, Fisher's exact test is a better choice, but this will happen rarely. Therefore, using a Chi-square or likelihood ratio test is always recommended for consistency checks. Similar ideas apply when Fisher's exact test is generalized to any table, which is basically equivalent to calculating multivariate hypergeometric probabilities. Therefore one must always try to calculate Chi-square and likelihood-ratio-based p-values, in addition to "exact" p-values.
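A side-by-side consistency check of the kind recommended here takes one line each in R (the counts below are made up):
tab <- matrix(c(12, 5, 7, 14), nrow = 2)
fisher.test(tab)$p.value                                    # "exact", conditions on the margins
chisq.test(tab, correct = FALSE)$p.value                    # asymptotic chi-square
chisq.test(tab, simulate.p.value = TRUE, B = 1e5)$p.value   # Monte Carlo chi-square alternative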
{ "source": [ "https://stats.stackexchange.com/questions/14226", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/561/" ] }
14,437
Anybody have any experience with software (preferably free, preferably open source) that will take an image of data plotted on cartesian coordinates (a standard, everyday plot) and extract the coordinates of the points plotted on the graph? Essentially, this is a data-mining problem and a reverse data-visualization problem.
Check out the digitize package for R. It's designed to solve exactly this sort of problem.
{ "source": [ "https://stats.stackexchange.com/questions/14437", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1640/" ] }
14,483
Suppose $X$ is a random variable with pdf $f_X(x)$ . Then the random variable $Y=X^2$ has the pdf $$f_Y(y)=\begin{cases}\frac{1}{2\sqrt{y}}\left(f_X(\sqrt{y})+f_X(-\sqrt{y})\right) & y \ge 0 \\ 0 & y \lt 0\end{cases}$$ I understand the calculus behind this. But I'm trying to think of a way to explain it to someone who doesn't know calculus. In particular, I'm trying to explain why the factor $\frac{1}{\sqrt{y}}$ appears out front. I'll take a stab at it: Suppose $X$ has a Gaussian distribution. Almost all the weight of its pdf is between the values, say, $-3$ and $3.$ But that maps to 0 to 9 for $Y$ . So, the heavy weight in the pdf for $X$ has been extended across a wider range of values in the transformation to $Y$ . Thus, for $f_Y(y)$ to be a true pdf the extra heavy weight must be downweighted by the multiplicative factor $\frac{1}{\sqrt{y}}$ How does that sound? If anyone can provide a better explanation of their own or link to one in a document or textbook I'd greatly appreciate it. I find this variable transformation example in several intro mathematical probability/stats books. But I never find an intuitive explanation with it :(
PDFs are heights but they are used to represent probability by means of area. It therefore helps to express a PDF in a way that reminds us that area equals height times base. Initially the height at any value $x$ is given by the PDF $f_X(x)$ . The base is the infinitesimal segment $dx$ , whence the distribution (that is, the probability measure as opposed to the distribution function ) is really the differential form, or "probability element," $$\operatorname{PE}_X(x) = f_X(x) \, dx.$$ This, rather than the PDF, is the object you want to work with both conceptually and practically, because it explicitly includes all the elements needed to express a probability. When we re-express $x$ in terms of $y = x^2$ , the base segments $dx$ get stretched (or squeezed): by squaring both ends of the interval from $x$ to $x + dx$ we see that the base of the $y$ area must be an interval of length $$dy = (x + dx)^2 - x^2 = 2 x \, dx + (dx)^2.$$ Because the product of two infinitesimals is negligible compared to the infinitesimals themselves, we conclude $$dy = 2 x \, dx, \text{ whence }dx = \frac{dy}{2x} = \frac{dy}{2\sqrt{y}}.$$ Having established this, the calculation is trivial because we just plug in the new height and the new width: $$\operatorname{PE}_X(x) = f_X(x) \, dx = f_X(\sqrt{y}) \frac{dy}{2\sqrt{y}} = \operatorname{PE}_Y(y).$$ Because the base, in terms of $y$ , is $dy$ , whatever multiplies it must be the height, which we can read directly off the middle term as $$\frac{1}{2\sqrt{y}}f_X(\sqrt{y}) = f_Y(y).$$ This equation $\operatorname{PE}_X(x) = \operatorname{PE}_Y(y)$ is effectively a conservation of area (=probability) law. This graphic accurately shows narrow (almost infinitesimal) pieces of two PDFs related by $y=x^2$ . Probabilities are represented by the shaded areas. Due to the squeezing of the interval $[0.32, 0.45]$ via squaring, the height of the red region ( $y$ , at the left) has to be proportionally expanded to match the area of the blue region ( $x$ , at the right).
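A quick simulation check of the conservation-of-area formula, for standard normal $X$ (so that $f_Y$ is the $\chi^2_1$ density): the histogram of simulated $Y=X^2$ should track the curve $f_X(\sqrt{y})/\sqrt{y}$.
set.seed(1)
y <- rnorm(1e5)^2
hist(y, breaks = 200, freq = FALSE, xlim = c(0, 4), main = "Y = X^2, X ~ N(0,1)")
curve(dnorm(sqrt(x)) / sqrt(x), from = 0.01, to = 4, add = TRUE, col = "red", lwd = 2)
# i.e. (1/(2*sqrt(y))) * (f(sqrt(y)) + f(-sqrt(y))), the two terms being equal by symmetry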
{ "source": [ "https://stats.stackexchange.com/questions/14483", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3577/" ] }
14,673
Are there any measures of similarity or distance between two symmetric covariance matrices (both having the same dimensions)? I am thinking here of analogues to KL divergence of two probability distributions or the Euclidean distance between vectors except applied to matrices. I imagine there would be quite a few similarity measurements. Ideally I would also like to test the null hypothesis that two covariance matrices are identical.
You can use any of the norms $\| A-B \|_p$ (see Wikipedia on a variety of norms; note that the square root of the sum of squared distances, $\sqrt{\sum_{i,j} (a_{ij}-b_{ij})^2}$, is called the Frobenius norm, and is different from the $L_2$ norm, which is the square root of the largest eigenvalue of $(A-B)^2$, although of course they would generate the same topology). The K-L divergence between two normal distributions with the same means (say zero) and the two specific covariance matrices is also available in Wikipedia as $\frac12 [ \operatorname{tr} (A^{-1}B) - d - \ln( |B|/|A| ) ]$, where $d$ is the dimension (this is the divergence of $N(0,B)$ from $N(0,A)$). Edit: if one of the matrices is a model-implied matrix, and the other is the sample covariance matrix, then of course you can form a likelihood ratio test between the two. My personal favorite collection of such tests for simple structures is given in Rencher (2002) Methods of Multivariate Analysis . More advanced cases are covered in covariance structure modeling, on which a reasonable starting point is Bollen (1989) Structural Equations with Latent Variables .
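A hedged sketch of the Frobenius distance and the Gaussian KL divergence in R (my own helper functions; A and B must be symmetric positive definite, means taken as zero):
frob_dist <- function(A, B) sqrt(sum((A - B)^2))        # Frobenius norm of A - B
kl_normal <- function(A, B) {                           # KL( N(0,B) || N(0,A) )
  0.5 * (sum(diag(solve(A, B))) - nrow(A) + log(det(A)) - log(det(B)))
}
A <- diag(3)
B <- 0.5 * diag(3) + 0.5                                # ones on the diagonal, 0.5 off it
frob_dist(A, B)
kl_normal(A, B)                                         # 0 when A == B; not symmetric in A and B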
{ "source": [ "https://stats.stackexchange.com/questions/14673", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/8101/" ] }
14,951
I need to calculate matrix inverse and have been using solve function. While it works well on small matrices, solve tends to be very slow on large matrices. I was wondering if there is any other function or combination of functions (through SVD, QR, LU, or other decomposition functions) that can give me faster results.
Have you tried what cardinal suggested and explored some of the alternative methods for computing the inverse? Let's consider a specific example: library(MASS) k <- 2000 rho <- .3 S <- matrix(rep(rho, k*k), nrow=k) diag(S) <- 1 dat <- mvrnorm(10000, mu=rep(0,k), Sigma=S) ### be patient! R <- cor(dat) system.time(RI1 <- solve(R)) system.time(RI2 <- chol2inv(chol(R))) system.time(RI3 <- qr.solve(R)) all.equal(RI1, RI2) all.equal(RI1, RI3) So, this is an example of a $2000 \times 2000$ correlation matrix for which we want the inverse. On my laptop (Core-i5 2.50Ghz), solve takes 8-9 seconds, chol2inv(chol()) takes a bit over 4 seconds, and qr.solve() takes 17-18 seconds (multiple runs of the code are suggested to get stable results). So the inverse via the Choleski decomposition is about twice as fast as solve . There may of course be even faster ways of doing that. I just explored some of the most obvious ones here. And as already mentioned in the comments, if the matrix has a special structure, then this probably can be exploited for more speed.
{ "source": [ "https://stats.stackexchange.com/questions/14951", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1261/" ] }
14,999
This has come up in a few questions now, and I've been wondering about something. Has the field as a whole moved toward "reproducibility" focusing on the availability of the original data, and the code in question? I was always taught that the core of reproducibility was not necessarily, as I've referred to it, the ability to click Run and get the same results. The data-and-code approach seems to assume that the data are correct - that there isn't a flaw in the collection of the data itself (often demonstrably false in the case of scientific fraud). It also focuses on a single sample of the target population, rather than the replicability of the finding over multiple independent samples. Why is the emphasis then on being able to re-run analysis, rather than duplicate the study from the ground up? The article mentioned in the comments below is available here .
"Reproducible research" as reproducible analysis Reproducible research is a term used in some research domains to refer specifically to conducting analyses such that code transforms raw data and meta-data into processed data, code runs analyses on the data, and code incorporates analyses into a report. When such data and code are shared, this allows other researchers to: perform analyses not reported by the original researchers check the correctness of the analyses performed by the original researchers This usage can be seen in discussions of technologies like Sweave . E.g., Friedrich Leisch writes in the context of Sweave that "the report can be automatically updated if data or analysis change, which allows for truly reproducible research." It can also be seen in the CRAN Task View on Reproducible Research which states that "the goal of reproducible research is to tie specific instructions to data analysis and experimental data so that scholarship can be recreated, better understood and verified." Broader usage of the term "reproducibility" Reproducibility is a fundamental aim of science. It's not new. Research reports include method and results sections that should outline how the data was generated, processed, and analysed. A general rule is that the details provided should be sufficient to enable an appropriately competent researcher to take the information provided and replicate the study. Reproducibility is also closely related to the concepts of replicability and generalisation. Thus, the term "reproducible research", taken literally, as applied to technologies like Sweave, is a misnomer, given that it suggests a relevance broader than it covers. Also, when presenting technologies like Sweave to researchers who have not used such technologies, such researchers are often surprised when I call the process "reproducible research". A better term than "reproducible research" Given that "reproducible research" as used within Sweave-like contexts only pertains to one aspect of reproducible research, perhaps an alternative term should be adopted. Possible alternatives include: Reproducible analysis: John D Cook has used this term Jennifer Blackford uses the term "reliable and reproducible analyses" Reproducible data analysis Christophe Pouzat uses this term Reproducible statistical analysis A Biostats site at Vanderbilt uses the term "reproducible statistical analysis and reporting activities" Reproducible reporting All of the above terms are a more accurate reflection of what Sweave-like analyses entail. Reproducible analysis is short and sweet. Adding "data" or "statistical" further clarifies things, but also makes the term both longer and narrower. Furthermore, "statistical" has a narrow and a broad meaning, and certainly within the narrow meaning, much of data processing is not statistical. Thus, the breadth implied by the term "reproducible analysis" has its advantages . It's not just about reproducibility The other additional issue with the term "reproducible research" is the aim of Sweave-like technologies is not just "reproducibility". There are several interrelated aims: Reproducibility Can the analyses easily be re-run to transform raw data into final report with the same results? Correctness Is the data analysis consistent with the intentions of the researcher? Are the intentions of the researcher correct? Openness Transparency, accountability Can others check and verify the accuracy of analyses performed? 
Extensibility, modifiability Can others modify, extend, reuse, and mash up the data, analyses, or both to create new research works? There is an argument that reproducible analysis should promote correct analyses, because there is a written record of analyses that can be checked. Furthermore, if data and code are shared, it creates accountability, which motivates researchers to check their analyses and enables other researchers to note corrections. Reproducible analysis also fits in closely with concepts around open research. Of course, a researcher can use Sweave-like technologies just for themselves. Open research principles encourage sharing the data and analysis code to enable greater reuse and accountability. This is not really a critique of the use of the word "reproducible". Rather, it just highlights that using Sweave-like technologies is necessary but not sufficient for achieving open scientific research aims.
{ "source": [ "https://stats.stackexchange.com/questions/14999", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5836/" ] }
15,011
For a simulation study I have to generate random variables that show a predefined (population) correlation to an existing variable $Y$ . I looked into the R packages copula and CDVine which can produce random multivariate distributions with a given dependency structure. It is, however, not possible to fix one of the resulting variables to an existing variable. Any ideas and links to existing functions are appreciated! Conclusion: Two valid answers came up, with different solutions: An R script by caracal, which calculates a random variable with an exact (sample) correlation to a predefined variable An R function I found myself, which calculates a random variable with a defined population correlation to a predefined variable [@ttnphns' addition: I took the liberty to expand the question title from the single fixed variable case to an arbitrary number of fixed variables; i.e., how to generate a variable having predefined correlation(s) with some fixed, existing variable(s)]
Here's another one: for vectors with mean 0, their correlation equals the cosine of their angle. So one way to find a vector $x$ with exactly the desired correlation $r$, corresponding to an angle $\theta$: get fixed vector $x_1$ and a random vector $x_2$ center both vectors (mean 0), giving vectors $\dot{x}_{1}$, $\dot{x}_{2}$ make $\dot{x}_{2}$ orthogonal to $\dot{x}_{1}$ (projection onto orthogonal subspace), giving $\dot{x}_{2}^{\perp}$ scale $\dot{x}_{1}$ and $\dot{x}_{2}^{\perp}$ to length 1, giving $\bar{x}_{1}$ and $\bar{x}_{2}^{\perp}$ $\bar{x}_{2}^{\perp} + (1/\tan(\theta)) \cdot \bar{x}_{1}$ is the vector whose angle to $\bar{x}_{1}$ is $\theta$, and whose correlation with $\bar{x}_{1}$ thus is $r$. This is also the correlation to $x_1$ since linear transformations leave the correlation unchanged. Here is the code: n <- 20 # length of vector rho <- 0.6 # desired correlation = cos(angle) theta <- acos(rho) # corresponding angle x1 <- rnorm(n, 1, 1) # fixed given data x2 <- rnorm(n, 2, 0.5) # new random data X <- cbind(x1, x2) # matrix Xctr <- scale(X, center=TRUE, scale=FALSE) # centered columns (mean 0) Id <- diag(n) # identity matrix Q <- qr.Q(qr(Xctr[ , 1, drop=FALSE])) # QR-decomposition, just matrix Q P <- tcrossprod(Q) # = Q Q' # projection onto space defined by x1 x2o <- (Id-P) %*% Xctr[ , 2] # x2ctr made orthogonal to x1ctr Xc2 <- cbind(Xctr[ , 1], x2o) # bind to matrix Y <- Xc2 %*% diag(1/sqrt(colSums(Xc2^2))) # scale columns to length 1 x <- Y[ , 2] + (1 / tan(theta)) * Y[ , 1] # final new vector cor(x1, x) # check correlation = rho For the orthogonal projection $P$, I used the $QR$-decomposition to improve numerical stability, since then simply $P = Q Q'$.
{ "source": [ "https://stats.stackexchange.com/questions/15011", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6082/" ] }
15,102
I recently realized that a mixed-model with only subject as a random factor and the other factors as fixed factors is equivalent to an ANOVA when setting the correlational structure of the mixed model to compound symmetry. Therefore I would like to know what compound symmetry means in the context of a mixed (i.e., split-plot) ANOVA, at best explained in plain English. Besides compound symmetry, lme offers other types of correlational structures, such as corSymm (a general correlation matrix with no additional structure) or different types of spatial correlation. Therefore, I have the related question of which other types of correlational structures may be advisable to use in the context of designed experiments (with between- and within-subjects factors). It would be great if answers could point to some references for different correlational structures.
Compound symmetry is essentially the "exchangeable" correlation structure, except with a specific decomposition for the total variance. For example, if you have mixed model for the subject $i$ in cluster $j$ response, $Y_{ij}$, with only a random intercept by cluster $$ Y_{ij} = \alpha + \gamma_{j} + \varepsilon_{ij} $$ where $\gamma_{j}$ is the cluster $j$ random effect with variance $\sigma^{2}_{\gamma}$ and $\varepsilon_{ij}$ is the subject $i$ in cluster $j$ "measurement error" with variance $\sigma^{2}_{\varepsilon}$ and $\gamma_{j}, \varepsilon_{ij}$ are independent. This model implicitly specifies the compound symmetry covariance matrix between observations in the same cluster: $$ {\rm cov}(Y_{ij}, Y_{kj}) = \sigma^{2}_{\gamma} + \sigma^{2}_{\varepsilon} \cdot \mathcal{I}(k = i) $$ Note that the compound symmetry assumption implies that the correlation between distinct members of a cluster is $\sigma^{2}_{\gamma}/(\sigma^{2}_{\gamma} + \sigma^{2}_{\varepsilon})$. In "plain english" you might say this covariance structure implies that all distinct members of a cluster are equally correlated with each other and the total variation, $\sigma^{2} = \sigma^{2}_{\gamma} + \sigma^{2}_{\varepsilon}$, can be partitioned into the "shared" (within a cluster) component, $\sigma^{2}_{\gamma}$ and the "unshared" component, $\sigma^{2}_{\varepsilon}$. Edit: To aid understanding in the "plain english" sense, consider an example where individuals are clustered within families so that $Y_{ij}$ denotes the subject $i$ in family $j$ response. In this case the compound symmetry assumption means that the total variation in $Y_{ij}$ can be partitioned into the variation within a family, $\sigma^{2}_{\varepsilon}$, and the variation between families, $\sigma^{2}_{\gamma}$.
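If it helps to see the equivalence concretely, here is a sketch using the nlme package (my addition; the data are simulated and the variable names are made up). The random-intercept fit and the explicit compound-symmetry fit estimate the same covariance structure, with the within-cluster correlation equal to $\sigma^{2}_{\gamma}/(\sigma^{2}_{\gamma} + \sigma^{2}_{\varepsilon})$:

library(nlme)
set.seed(42)
d <- data.frame(cluster = factor(rep(1:30, each = 5)), x = rnorm(150))
d$y <- rnorm(30, sd = 1)[as.integer(d$cluster)] + 0.5 * d$x + rnorm(150, sd = 2)
m1 <- lme(y ~ x, random = ~ 1 | cluster, data = d)                           # random intercept
m2 <- gls(y ~ x, correlation = corCompSymm(form = ~ 1 | cluster), data = d)  # compound symmetry
VarCorr(m1)   # sigma^2_gamma (intercept) and sigma^2_epsilon (residual)
summary(m2)   # estimated Rho should be close to sigma^2_gamma / (sigma^2_gamma + sigma^2_epsilon)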
{ "source": [ "https://stats.stackexchange.com/questions/15102", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/442/" ] }
15,287
In my dataset we have both continuous and naturally discrete variables. I want to know whether we can do hierarchical clustering using both type of variables. And if yes, what distance measure is appropriate?
One way is to use Gower similarity coefficient which is a composite measure $^1$ ; it takes quantitative (such as rating scale), binary (such as present/absent) and nominal (such as worker/teacher/clerk) variables. Later Podani $^2$ added an option to take ordinal variables as well. The coefficient is easily understood even without a formula; you compute the similarity value between the individuals by each variable, taking the type of the variable into account, and then average across all the variables. Usually, a program calculating Gower will allow you to weight variables, that is, their contribution, to the composite formula. However, proper weighting of variables of different type is a problem , no clear-cut guidelines exist, which makes Gower or other "composite" indices of proximity pull ones face. The facets of Gower similarity ( $GS$ ): When all variables are quantitative (interval) then the coefficient is the range-normalized Manhattan distance converted into similarity. Because of the normalization variables of different units may be safely used. You should not, however, forget about outliers. (You might also decide to normalize by another measure of spread than range.) Because of the said normalization by a statistic, such as range, which is sensitive to the composition of individuals in the dataset Gower similarity between some two individuals may change its value if you remove or add some other individuals in the data. When all variables are ordinal, then they are first ranked, and then Manhattan is computed, as above with quantitative variables, but with the special adjustment for ties. When all variables are binary (with an asymmetric significance of categories: "present" vs "absent" attribute) then the coefficient is the Jaccard matching coefficient (this coefficient treats when both individuals lack the attribute as neither match nor mismatch). When all variables are nominal (also including here dichotomous with symmetric significance: "this" vs "that") then the coefficient is the Dice matching coefficient that you obtain from your nominal variables if recode them into dummy variables (see this answer for more). (It is easy to extend the list of types. For example, one could add a summand for count variables, using normalized chi-squared distance converted to similarity.) The coefficient ranges between 0 and 1. " Gower distance ". Without ordinal variables present (i.e. w/o using the Podani's option) $\sqrt{1-GS}$ behaves as Euclidean distance, it fully supports euclidean space. But $1-GS$ is only metric (supports triangular inequality), not Euclidean. With ordinal variables present (using the Podani's option) $\sqrt{1-GS}$ is only metric, not Euclidean; and $1-GS$ isn't metric at all. See also . With euclidean distances (distances supporting Euclidean space), virtually any classic clustering technique will do. Including K-means (if your K-means program can process distance matrices, of course) and including Ward's, centroid, median methods of Hierarchical clustering . Using K-means or other those methods based on Euclidean distance with non-euclidean still metric distance is heuristically admissible, perhaps. With non-metric distances, no such methods may be used. The previous paragraph talks about if K-means or Ward's or such clustering is legal or not with Gower distance mathematically (geometrically). 
From the measurement-scale ("psychometric") point of view one should not compute mean or euclidean-distance deviation from it in any categorical (nominal, binary, as well as ordinal) data; therefore from this stance you just may not process Gower coefficient by K-means, Ward etc. This viewpoint warns that even if a Euclidean space is present it may be granulated, not smooth ( see related ). If you want all the formulae and additional info on Gower similarity / distance, please read the description of my SPSS macro !KO_gower ; it's in the Word document found in collection "Various proximities" on my web-page. $^1$ Gower J. C. A general coefficient of similarity and some of its properties // Biometrics, 1971, 27, 857-872 $^2$ Podani, J. Extending Gower’s general coefficient of similarity to ordinal characters // Taxon, 1999, 48, 331-340
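For completeness (my addition, not part of the original answer): in R, the daisy() function in the cluster package implements Gower's coefficient for mixed variable types and returns a dissimilarity object that hclust() or pam() will accept. The toy data below are invented.

library(cluster)
set.seed(7)
dat <- data.frame(age    = rnorm(20, 40, 10),                                        # quantitative
                  rating = factor(sample(1:5, 20, replace = TRUE), ordered = TRUE),  # ordinal
                  job    = factor(sample(c("worker", "teacher", "clerk"), 20, replace = TRUE)))  # nominal
d  <- daisy(dat, metric = "gower")      # Gower dissimilarity, i.e. 1 - GS
hc <- hclust(d, method = "average")     # average linkage does not require Euclidean distances
plot(hc)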
{ "source": [ "https://stats.stackexchange.com/questions/15287", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4278/" ] }
15,371
I would like to know how confident I can be in my $\lambda$. Does anyone know of a way to set upper and lower confidence limits for a Poisson distribution? Observations ($n$) = 88 Sample mean ($\lambda$) = 47.18182 What would the 95% confidence interval look like for this?
For Poisson, the mean and the variance are both $\lambda$. If you want the confidence interval around lambda, you can calculate the standard error as $\sqrt{\lambda / n}$. The 95-percent confidence interval is $\hat{\lambda} \pm 1.96\sqrt{\hat{\lambda} / n}$.
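The same calculation in R, using the numbers from the question (a sketch of the arithmetic only):

n      <- 88
lambda <- 47.18182                          # sample mean
se     <- sqrt(lambda / n)                  # standard error of the mean
lambda + c(-1, 1) * qnorm(0.975) * se       # approximate 95% CI, roughly 45.7 to 48.6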
{ "source": [ "https://stats.stackexchange.com/questions/15371", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9363/" ] }
15,565
Context I have been reading about item response theory, and I find it fascinating. I believe I understand the basics, but I am left wondering how to apply statistical techniques related to the area. Below are two articles that are similar to the area I would like to apply IRT in: http://www.jstor.org/stable/4640738?seq=7 http://www.ncbi.nlm.nih.gov/pubmed/21744971 The second is the one I would actually like to extend at this point in time. I have downloaded a free program called jMetrik, and it seems to be working great. I think it may be too basic as far as IRT goes, but I am unsure. I know the "best" way would likely involve learning R; however, I don't know if I can spare the time to tackle that learning curve. Note that we have some funding to purchase software, but from what I see, there don't seem to be any great IRT programs out there. Questions What are your thoughts on the effectiveness of jMetrik? How would you suggest I go forward in applying IRT? What are the best programs for applying IRT? Do any of you use IRT regularly? If so, how?
As a good starter to IRT, I always recommend reading A visual guide to item response theory . A survey of available software can be found on www.rasch.org . From my experience, I found the Raschtest (and associated) Stata command(s) very handy in most cases where one is interested in fitting a one-parameter model. For more complex designs, one can resort to GLLAMM ; there's a nice working example based on De Boeck and Wilson's book, Explanatory Item Response Models (Springer, 2004). About R specifically, there are plenty of packages that have become available in the past five years; see for instance the related CRAN Task View . Most of them are discussed in a special issue of the Journal of Statistical Software (vol. 20, 2007). As discussed in another response, the ltm and eRm packages allow you to fit a wide range of IRT models. As they rely on different methods of estimation--- ltm uses the marginal approach while eRm uses the conditional approach---choosing one or the other is mainly a matter of the model you want to fit ( eRm won't fit 2- or 3-parameter models) and the measurement objective you follow: conditional estimation of person parameters has some nice psychometric properties, while a marginal approach lets you easily switch to mixed-effects models, as discussed in the following two papers: Doran, H., Bates, D., Bliese, P. and Dowling, M. (2007). Estimating the Multilevel Rasch Model: With the lme4 Package . Journal of Statistical Software , 20(2). See also Doug Bates's slides on R-forge. De Boeck, P., Bakker, M., Zwitser, R., Nivard, M., Hofman, A., Tuerlinckx, F., and Partchev, I. (2011). The Estimation of Item Response Models with the lmer Function from the lme4 Package in R . Journal of Statistical Software , 39(12). See also the aforementioned De Boeck handbook and this handout. There are also some possibilities to fit Rasch models using MCMC methods; see e.g. the MCMCpack package (or WinBUGS / JAGS , but see BUGS Code for Item Response Theory , JSS (2010) 36). I have no experience with SAS for IRT modeling, so I'll leave that to someone who is more versed in SAS programming. Other dedicated software (mostly used in educational assessment) includes: RUMM, Conquest, Winsteps, BILOG/MULTILOG, Mplus (not citing the list already available on wikipedia ). None are free to use, but time-limited demonstration versions are available for some of them. I found jMetrik very limited when I tried it (one year ago), and all its functionality is already available in R. Likewise, ConstructMap can be safely replaced by lme4 , as illustrated in the handout linked above. I should also mention mdltm (Multidimensional Discrete Latent Trait Models) for mixture Rasch models, by von Davier and colleagues, which is supposed to accompany the book Multivariate and Mixture Distribution Rasch Models (Springer, 2007).
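To give a flavour of the R route (my sketch, not part of the original answer), the ltm package ships with the classic LSAT example data, so a Rasch model and a two-parameter model can be fitted and compared in a few lines; the exact output will of course depend on the package version.

library(ltm)
data(LSAT)               # 5 dichotomous items, 1000 respondents (bundled example data)
fit1 <- rasch(LSAT)      # one-parameter (Rasch) model, marginal maximum likelihood
fit2 <- ltm(LSAT ~ z1)   # two-parameter logistic model
summary(fit1)
anova(fit1, fit2)        # likelihood ratio comparison of the nested models
plot(fit1)               # item characteristic curves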
{ "source": [ "https://stats.stackexchange.com/questions/15565", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3262/" ] }
15,574
Using R or Excel, what is the easiest way to convert a frequency table into a vector of values? E.g., How would you convert the following frequency table Value Frequency 1. 2 2. 1 3. 4 4. 2 5. 1 into the following vector? 1, 1, 2, 3, 3, 3, 3, 4, 4, 5
In R, you can do it using the rep command: tab <- data.frame(value=c(1, 2, 3, 4, 5), freq=c(2, 1, 4, 2, 1)) vec <- rep(tab$value, tab$freq) This gives following result: > tab value freq 1 1 2 2 2 1 3 3 4 4 4 2 5 5 1 > vec [1] 1 1 2 3 3 3 3 4 4 5 For details, see the help file for the rep command by typing ?rep .
{ "source": [ "https://stats.stackexchange.com/questions/15574", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2595/" ] }
15,664
I'll eliminate all the biological details and experiments and quote just the problem at hand and what I have done statistically. I would like to know if it's right, and if not, how to proceed. If the data (or my explanation) isn't clear enough, I'll try to explain better by editing. Suppose I have two groups/observations, X and Y, with size $N_x=215$ and $N_y=40$. I would like to know if the means of these two observations are equal. My first question is: If the assumptions are satisfied, is it relevant to use a parametric two-sample t-test here? I ask this because, from my understanding, it's usually applied when the size is small. I plotted histograms of both X and Y and they were not normally distributed, one of the assumptions of a two-sample t-test. My confusion is that I consider them to be two populations and that's why I checked for normal distribution. But then I am about to perform a two-SAMPLE t-test... Is this right? From the central limit theorem, I understand that if you perform sampling (with/without repetition depending on your population size) multiple times and compute the average of the samples each time, then it will be approximately normally distributed. And the mean of these random variables will be a good estimate of the population mean. So, I decided to do this on both X and Y, 1000 times, and obtained samples, and I assigned a random variable to the mean of each sample. The plot was very much normally distributed. The means of X and Y were 4.2 and 15.8 (which were the same as the population means +- 0.15) and the variances were 0.95 and 12.11. I performed a t-test on these two observations (1000 data points each) with unequal variances, because they are very different (0.95 and 12.11). And the null hypothesis was rejected. Does this make sense at all? Is this a correct/meaningful approach, or is a two-sample z-test sufficient, or is it totally wrong? I also performed a non-parametric Wilcoxon test just to be sure (on the original X and Y) and the null hypothesis was convincingly rejected there as well. In the event that my previous method was utterly wrong, I suppose doing a non-parametric test is good, except for statistical power maybe? In both cases, the means were significantly different. However, I would like to know if either or both of the approaches are faulty/totally wrong and if so, what is the alternative?
The idea that the t-test is only for small samples is a historical holdover. Yes, it was originally developed for small samples, but there is nothing in the theory that distinguishes small from large. In the days before computers were common for doing statistics, the t-tables often only went up to around 30 degrees of freedom, and the normal was used beyond that as a close approximation of the t distribution. This was for convenience, to keep the t-table's size reasonable. Now with computers we can do t-tests for any sample size (though for very large samples the difference between the results of a z-test and a t-test is very small). The main idea is to use a t-test when using the sample to estimate the standard deviations and the z-test if the population standard deviations are known (very rare). The Central Limit Theorem lets us use normal theory inference (t-tests in this case) even if the population is not normally distributed, as long as the sample sizes are large enough. This does mean that your test is approximate (but with your sample sizes, the approximation should be very good). The Wilcoxon test is not a test of means (unless you know that the populations are perfectly symmetric and other unlikely assumptions hold). If the means are the main point of interest then the t-test is probably the better one to quote. Given that your standard deviations are so different, and the shapes are non-normal and possibly different from each other, the difference in the means may not be the most interesting thing going on here. Think about the science and what you want to do with your results. Are decisions being made at the population level or the individual level? Think of this example: you are comparing 2 drugs for a given disease; on drug A half the sample died immediately and the other half recovered in about a week; on drug B all survived and recovered, but the time to recovery was longer than a week. In this case would you really care about which mean recovery time was shorter? Or replace the half dying in A with just taking a really long time to recover (longer than anyone in the B group). When deciding which drug I would want to take, I would want the full information, not just which was quicker on average.
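For reference (my addition), the Welch t-test and the Wilcoxon test described above are one-liners in R, applied to the original observations rather than to resampled means; the data below are invented stand-ins with roughly the sample sizes and means from the question.

set.seed(1)
x <- rgamma(215, shape = 2, scale = 2.1)   # stand-in for the 215 X observations (mean about 4.2)
y <- rgamma(40,  shape = 4, scale = 4.0)   # stand-in for the 40 Y observations (mean about 16)
t.test(x, y)                               # Welch two-sample t-test; var.equal = FALSE is the default
wilcox.test(x, y)                          # Wilcoxon rank-sum (Mann-Whitney) test on the raw data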
{ "source": [ "https://stats.stackexchange.com/questions/15664", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4751/" ] }
15,696
I'm pretty new to statistics and I need your help. I have a small sample, as follows: H4U 0.269 0.357 0.2 0.221 0.275 0.277 0.253 0.127 0.246 I ran the Shapiro-Wilk test using R: shapiro.test(precisionH4U$H4U) and I got the following result: W = 0.9502, p-value = 0.6921 Now, if I assume a significance level of 0.05, then the p-value is larger than alpha (0.6921 > 0.05) and I cannot reject the null hypothesis about the normal distribution, but does it allow me to say that the sample has a normal distribution?
No - you cannot say "the sample has a normal distribution" or "the sample comes from a population which has a normal distribution", but only "you cannot reject the hypothesis that the sample comes from a population which has a normal distribution". In fact the sample does not have a normal distribution (see the qqplot below), but you would not expect it to as it is only a sample. The question as to the distribution of the underlying population remains open. qqnorm( c(0.269, 0.357, 0.2, 0.221, 0.275, 0.277, 0.253, 0.127, 0.246) )
{ "source": [ "https://stats.stackexchange.com/questions/15696", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5344/" ] }
15,978
What is the formula for variance of product of dependent variables? In the case of independent variables the formula is simple: $$ {\rm var}(XY) = E(X^{2}Y^{2}) - E(XY)^{2} = {\rm var}(X){\rm var}(Y) + {\rm var}(X)E(Y)^2 + {\rm var}(Y)E(X)^2 $$ But what is the formula for correlated variables? By the way, how can I find the correlation based on the statistical data?
Well, using the familiar identity you pointed out, $$ {\rm var}(XY) = E(X^{2}Y^{2}) - E(XY)^{2} $$ Using the analogous formula for covariance, $$ E(X^{2}Y^{2}) = {\rm cov}(X^{2}, Y^{2}) + E(X^2)E(Y^2) $$ and $$ E(XY)^{2} = [ {\rm cov}(X,Y) + E(X)E(Y) ]^{2} $$ which implies that, in general, ${\rm var}(XY)$ can be written as $$ {\rm cov}(X^{2}, Y^{2}) + [{\rm var}(X) + E(X)^2] \cdot[{\rm var}(Y) + E(Y)^2] - [ {\rm cov}(X,Y) + E(X)E(Y) ]^{2} $$ Note that in the independence case, ${\rm cov}(X^2,Y^2) = {\rm cov}(X,Y) = 0$ and this reduces to $$ [{\rm var}(X) + E(X)^2] \cdot[{\rm var}(Y) + E(Y)^2] - [ E(X)E(Y) ]^{2} $$ and the two $[ E(X)E(Y) ]^{2}$ terms cancel out and you get $$ {\rm var}(X){\rm var}(Y) + {\rm var}(X)E(Y)^{2} + {\rm var}(Y)E(X)^{2} $$ as you pointed out above. Edit: If all you observe is $XY$ and not $X$ and $Y$ separately, then I don't think there is a way for you to estimate ${\rm cov}(X,Y)$ or ${\rm cov}(X^2,Y^2)$ except in special cases (for example, if $X,Y$ have means that are known a priori )
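A simulation sketch (my addition) that checks the general formula against the empirical variance of the product for correlated normal $X$ and $Y$; the mean vector and covariance matrix are arbitrary choices:

library(MASS)
set.seed(123)
S  <- matrix(c(1, 0.5, 0.5, 2), 2, 2)        # cov(X, Y) = 0.5, var(X) = 1, var(Y) = 2
mu <- c(1, 3)
Z  <- mvrnorm(1e6, mu, S)
x  <- Z[, 1]; y <- Z[, 2]
var(x * y)                                    # empirical variance of the product
cov(x^2, y^2) + (var(x) + mean(x)^2) * (var(y) + mean(y)^2) -
  (cov(x, y) + mean(x) * mean(y))^2           # plug-in version of the formula above

The two numbers should agree closely; they differ only through sampling noise and the n - 1 convention used by var and cov.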
{ "source": [ "https://stats.stackexchange.com/questions/15978", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6453/" ] }
15,979
Evan Miller's " How Not to Sort by Average Rating " proposes using the lower bound of a confidence interval to get a sensible aggregate "score" for rated items. However, it's working with a Bernoulli model: ratings are either thumbs up or thumbs down. What's a reasonable confidence interval to use for a rating model which assigns a discrete score of $1$ to $k$ stars, assuming that the number of ratings for an item might be small? I think I can see how to adapt the centre of the Wilson and Agresti-Coull intervals as $$\tilde{p} = \frac{\sum_{i=1}^n{x_i} + z_{\alpha/2}^2\; p_0}{n + z_{\alpha/2}^2}$$ where either $p_0 = \frac{k+1}{2}$ or (probably better) it's the average rating over all items. However, I'm not sure how to adapt the width of the interval. My (revised) best guess would be $$\tilde{p} \pm \frac{z_{\alpha/2}}{\tilde{n}} \sqrt{\frac{\sum_{i=1}^n{(x_i - \tilde{p})^2} + z_{\alpha/2}(p_0-\tilde{p})^2}{\tilde{n}}}$$ with $\tilde{n} = n + z_{\alpha/2}^2$, but I can't justify with more than hand-waving it as an analogy of Agresti-Coull, taking that as $$\text{Estimate}(\bar{X}) \pm \frac{z_{\alpha/2}}{\tilde{n}} \sqrt{\text{Estimate}(\text{Var}(X))}$$ Are there standard confidence intervals which apply? (Note that I don't have subscriptions to any journals or easy access to a university library; by all means give proper references, but please supplement with the actual result!)
Like Karl Broman said in his answer, a Bayesian approach would likely be a lot better than using confidence intervals. The Problem With Confidence Intervals Why might using confidence intervals not work too well? One reason is that if you don't have many ratings for an item, then your confidence interval is going to be very wide, so the lower bound of the confidence interval will be small. Thus, items without many ratings will end up at the bottom of your list. Intuitively, however, you probably want items without many ratings to be near the average item, so you want to wiggle your estimated rating of the item toward the mean rating over all items (i.e., you want to push your estimated rating toward a prior ). This is exactly what a Bayesian approach does. Bayesian Approach I: Normal Distribution over Ratings One way of moving the estimated rating toward a prior is, as in Karl's answer, to use an estimate of the form $w*R + (1-w)*C$: $R$ is the mean over the ratings for the items. $C$ is the mean over all items (or whatever prior you want to shrink your rating to). Note that the formula is just a weighted combination of $R$ and $C$. $w = \frac{v}{v+m}$ is the weight assigned to $R$, where $v$ is the number of reviews for the beer and $m$ is some kind of constant "threshold" parameter. Note that when $v$ is very large, i.e., when we have a lot of ratings for the current item, then $w$ is very close to 1, so our estimated rating is very close to $R$ and we pay little attention to the prior $C$. When $v$ is small, however, $w$ is very close to 0, so the estimated rating places a lot of weight on the prior $C$. This estimate can, in fact, be given a Bayesian interpretation as the posterior estimate of the item's mean rating when individual ratings comes from a normal distribution centered around that mean. However, assuming that ratings come from a normal distribution has two problems: A normal distribution is continuous , but ratings are discrete . Ratings for an item don't necessarily follow a unimodal Gaussian shape. For example, maybe your item is very polarizing, so people tend to either give it a very high rating or give it a very low rating. Bayesian Approach II: Multinomial Distribution over Ratings So instead of assuming a normal distribution for ratings, let's assume a multinomial distribution. That is, given some specific item, there's a probability $p_1$ that a random user will give it 1 star, a probability $p_2$ that a random user will give it 2 stars, and so on. Of course, we have no idea what these probabilities are. As we get more and more ratings for this item, we can guess that $p_1$ is close to $\frac{n_1}{n}$, where $n_1$ is the number of users who gave it 1 star and $n$ is the total number of users who rated the item, but when we first start out, we have nothing. So we place a Dirichlet prior $Dir(\alpha_1, \ldots, \alpha_k)$ on these probabilities. What is this Dirichlet prior? We can think of each $\alpha_i$ parameter as being a "virtual count" of the number of times some virtual person gave the item $i$ stars. For example, if $\alpha_1 = 2$, $\alpha_2 = 1$, and all the other $\alpha_i$ are equal to 0, then we can think of this as saying that two virtual people gave the item 1 star and one virtual person gave the item 2 stars. So before we even get any actual users, we can use this virtual distribution to provide an estimate of the item's rating. 
[One way of choosing the $\alpha_i$ parameters would be to set $\alpha_i$ equal to the overall proportion of votes of $i$ stars. (Note that the $\alpha_i$ parameters aren't necessarily integers.)] Then, once actual ratings come in, simply add their counts to the virtual counts of your Dirichlet prior. Whenever you want to estimate the rating of your item, simply take the mean over all of the item's ratings (both its virtual ratings and its actual ratings).
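A small numerical sketch of the Dirichlet-multinomial approach (mine, not the answerer's): the posterior mean rating of a 5-star item with only a handful of votes gets shrunk toward a prior built from hypothetical site-wide vote proportions.

stars      <- 1:5
prior_prop <- c(0.05, 0.10, 0.20, 0.35, 0.30)   # hypothetical overall proportions of 1..5 star votes
m          <- 10                                # prior strength: 10 "virtual" votes
alpha      <- m * prior_prop                    # Dirichlet prior counts
votes      <- c(0, 1, 0, 1, 3)                  # observed counts of 1..5 star votes for one item
post       <- alpha + votes                     # Dirichlet posterior counts
sum(stars * post / sum(post))                   # posterior mean rating (about 3.9, vs. a raw mean of 4.2)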
{ "source": [ "https://stats.stackexchange.com/questions/15979", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6458/" ] }
15,981
Is "margin of error" the same as "standard error"? A (simple) example to illustrate the difference would be great!
Short answer : they differ by a quantile of the reference (usually, the standard normal) distribution. Long answer : you are estimating a certain population parameter (say, proportion of people with red hair; it may be something far more complicated, from say a logistic regression parameter to the 75th percentile of the gain in achievement scores to whatever). You collect your data, you run your estimation procedure, and the very first thing you look at is the point estimate, the quantity that approximates what you want to learn about your population (the sample proportion of redheads is 7%). Since this is a sample statistic, it is a random variable. As a random variable, it has a (sampling) distribution that can be characterized by mean, variance, distribution function, etc. While the point estimate is your best guess regarding the population parameter, the standard error is your best guess regarding the standard deviation of your estimator (or, in some cases, the square root of the mean squared error, MSE = bias$^2$ + variance). For a sample of size $n=1000$, the standard error of your proportion estimate is $\sqrt{0.07\cdot0.93/1000}$ $=0.0081$. The margin of error is the half-width of the associated confidence interval , so for the 95% confidence level, you would have $z_{0.975}=1.96$ resulting in a margin of error $0.0081\cdot1.96=0.0158$.
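The numbers in the example, reproduced in R (a sketch of the arithmetic only):

p_hat <- 0.07
n     <- 1000
se    <- sqrt(p_hat * (1 - p_hat) / n)   # standard error, about 0.0081
moe   <- qnorm(0.975) * se               # margin of error, about 0.0158
c(estimate = p_hat, std.error = se, margin.of.error = moe)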
{ "source": [ "https://stats.stackexchange.com/questions/15981", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6096/" ] }
15,983
I have run a linear discriminant analysis for the simple 2 categorical group case using the MASS package lda() function in R. With priors fixed at 0.5 and unequal n for the response variable of each group, the output basically provides the group means and the LD1 (first linear discriminant coefficient) value. There is no automatic output of the cutoff (decision boundary) value estimated that is later used to classify new values of the response variable into the different groups. I have tried various unsuccessful approaches to extract this value. It is obvious that in the simple 2 group case the value will be close to the mean of the 2 group means and that the LD1 value is involved (perhaps grand mean * LD1?). I am probably missing (misunderstanding?) the obvious and would appreciate being educated in this matter. Thanks. Regards,BJ
{ "source": [ "https://stats.stackexchange.com/questions/15983", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6461/" ] }
16,008
What does it mean to say that "the variance is a biased estimator"? What does it mean to convert a biased estimate to an unbiased estimate through a simple formula? What does this conversion do exactly? Also, what is the practical use of this conversion? Do you convert these scores when using certain kinds of statistics?
You can find everything here . However, here is a brief answer. Let $\mu$ and $\sigma^2$ be the mean and the variance of interest; you wish to estimate $\sigma^2$ based on a sample of size $n$. Now, let us say you use the following estimator: $S^2 = \frac{1}{n} \sum_{i=1}^n (X_{i} - \bar{X})^2$, where $\bar{X} = \frac{1}{n} \sum_{i=1}^n X_i$ is the estimator of $\mu$. It is not too difficult (see footnote) to see that $E[S^2] = \frac{n-1}{n}\sigma^2$. Since $E[S^2] \neq \sigma^2$, the estimator $S^2$ is said to be biased. But, observe that $E[\frac{n}{n-1} S^2] = \sigma^2$. Therefore $\tilde{S}^2 = \frac{n}{n-1} S^2$ is an unbiased estimator of $\sigma^2$. Footnote Start by writing $(X_i - \bar{X})^2 = ((X_i - \mu) + (\mu - \bar{X}))^2$ and then expand the product... Edit to account for your comments The expected value of $S^2$ does not give $\sigma^2$ (and hence $S^2$ is biased) but it turns out you can transform $S^2$ into $\tilde{S}^2$ so that the expectation does give $\sigma^2$. In practice, one often prefers to work with $\tilde{S}^2$ instead of $S^2$. But, if $n$ is large enough, this is not a big issue since $\frac{n}{n-1} \approx 1$. Remark Note that unbiasedness is a property of an estimator, not of an expectation as you wrote.
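A quick simulation (my addition) showing the bias in practice: with $n = 5$ draws from a population with $\sigma^2 = 1$, the $1/n$ estimator averages about $(n-1)/n = 0.8$, while the $1/(n-1)$ version averages about 1.

set.seed(1)
n    <- 5
sims <- replicate(1e5, {
  x <- rnorm(n)                               # population variance is 1
  c(biased = mean((x - mean(x))^2),           # S^2, divides by n
    unbiased = var(x))                        # R's var divides by n - 1
})
rowMeans(sims)                                # roughly 0.8 and 1.0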
{ "source": [ "https://stats.stackexchange.com/questions/16008", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5837/" ] }
16,057
I'm trying to label a pretty simple scatterplot in R. This is what I use: plot(SI, TI) text(SI, TI, Name, pos=4, cex=0.7) The result is mediocre, as you can see: I tried to compensate for this using the textxy function, but it's not better. Making the image itself larger doesn't work for the dense clusters. Is there any function or easy way to compensate for this and let R plot labels that don't overlap? Here is a small subset of the data I have: Name;SI;TI 01_BAD_talking_head;6.944714;4.421208 01_GOOD_talking_head;5.680141;4.864035 01_GOOD_talking_head_subtitles;7.170114;4.664205
Check out the new package ggrepel . ggrepel provides geoms for ggplot2 to repel overlapping text labels. It works both for geom_text and geom_label. Figure is taken from this blog post .
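A minimal sketch of the ggrepel usage with the question's data (my addition; the exact appearance will depend on package versions):

library(ggplot2)
library(ggrepel)
dat <- data.frame(SI   = c(6.944714, 5.680141, 7.170114),
                  TI   = c(4.421208, 4.864035, 4.664205),
                  Name = c("01_BAD_talking_head", "01_GOOD_talking_head",
                           "01_GOOD_talking_head_subtitles"))
ggplot(dat, aes(SI, TI, label = Name)) +
  geom_point() +
  geom_text_repel(size = 2.5)                 # labels repel each other and the points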
{ "source": [ "https://stats.stackexchange.com/questions/16057", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1205/" ] }
16,117
I want to detect seasonality in data that I receive. I have found some methods, like the seasonal subseries plot and the autocorrelation plot, but the thing is I don't understand how to read the graphs; could anyone help? The other thing is: are there other methods to detect seasonality, with or without a graphical result?
A really good way to find periodicity in any regular series of data is to inspect its power spectrum after removing any overall trend . (This lends itself well to automated screening when the total power is normalized to a standard value, such as unity.) The preliminary trend removal (and optional differencing to remove serial correlation) is essential to avoid confounding periods with other behaviors. The power spectrum is the discrete Fourier transform of the autocovariance function of an appropriately smoothed version of the original series. If you think of the time series as sampling a physical waveform, you can estimate how much of the wave's total power is carried within each frequency. The power spectrum (or periodogram ) plots the power versus frequency. Cyclic (that is, repetitive or seasonal patterns) will show up as large spikes located at their frequencies. As an example, consider this (simulated) time series of residuals from a daily measurement taken for one year (365 values). The values fluctuate around $0$ without any evident trends, showing that all important trends have been removed. The fluctuation appears random: no periodicity is apparent. Here's another plot of the same data, drawn to help us see possible periodic patterns. If you look really hard, you might be able to discern a noisy but repetitive pattern that occurs 11 to 12 times. The longish sequences of above-zero and below-zero values at least suggest some positive autocorrelation, showing this series is not completely random. Here's the periodogram, shown for periods up to 91 (one-quarter of the total series length). It was constructed with a Welch window and normalized to unit area (for the entire periodogram, not just the part shown here). The power looks like "white noise" (small random fluctuations) plus two prominent spikes. They're hard to miss, aren't they? The larger occurs at a period of 12 and the smaller at a period of 52. This method has thereby detected a monthly cycle and a weekly cycle in these data. That's really all there is to it. To automate detection of cycles ("seasonality"), just scan the periodogram (which is a list of values) for relatively large local maxima. It's time to reveal how these data were created. The values are generated from a sum of two sine waves, one with frequency 12 (of squared amplitude 3/4) and another with frequency 52 (of squared amplitude 1/4). These are what the spikes in the periodogram detected. Their sum is shown as the thick black curve. Iid Normal noise of variance 2 was then added, as shown by the light gray bars extending from the black curve to the red dots. This noise introduced the low-level wiggles at the bottom of the periodogram, which otherwise would just be a flat 0. Fully two-thirds of the total variation in the values is non-periodic and random, which is very noisy: that's why it's so difficult to make out the periodicity just by looking at the dots. Nevertheless (in part because there's so much data) finding the frequencies with the periodogram is easy and the result is clear. Instructions and good advice for computing periodograms appear on the Numerical Recipes site: look for the section on "power spectrum estimation using the FFT." R has code for periodogram estimation . These illustrations were created in Mathematica 8; the periodogram was computed with its "Fourier" function.
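A compact version of the same idea in base R (my sketch, not part of the original answer): simulate a detrended daily series with the two cycles described above, then look for spikes in the raw periodogram returned by spec.pgram.

set.seed(2)
t <- 1:365
y <- sqrt(3/4) * sin(2 * pi * 12 * t / 365) +      # "monthly" cycle, 12 per year, squared amplitude 3/4
     sqrt(1/4) * sin(2 * pi * 52 * t / 365) +      # "weekly" cycle, 52 per year, squared amplitude 1/4
     rnorm(365, sd = sqrt(2))                      # iid noise of variance 2
sp <- spec.pgram(y, taper = 0, detrend = TRUE, fast = FALSE)
365 * sp$freq[order(sp$spec, decreasing = TRUE)][1:2]   # cycles per year of the two largest spikes

The last line should return 12 and 52 (in some order), matching the two seasonal components.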
{ "source": [ "https://stats.stackexchange.com/questions/16117", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6519/" ] }