29,354
This question may well be too open-ended to get a definitive answer, but hopefully not. Machine learning algorithms, such as SVM, GBM, Random Forest etc., generally have some free parameters that, beyond some rule-of-thumb guidance, need to be tuned to each data set. This is generally done with some kind of re-sampling technique (bootstrap, CV etc.) in order to fit the set of parameters that give the best generalisation error.

My question is, can you go too far here? People talk about doing grid searches and so forth, but why not simply treat this as an optimisation problem and drill down to the best possible set of parameters? I asked about some mechanics of this in this question, but it hasn't received much attention. Maybe the question was badly asked, but perhaps the question itself represents a bad approach that people generally do not take?

What bothers me is the lack of regularisation. I might find by re-sampling that the best number of trees to grow in a GBM for this data set is 647 with an interaction depth of 4, but how sure can I be that this will be true of new data (assuming the new population is identical to the training set)? With no reasonable value to 'shrink' to (or, if you will, no informative prior information), re-sampling seems like the best we can do. I just don't hear any talk about this, so it makes me wonder if there is something I'm missing.

Obviously there is a large computational cost associated with doing many, many iterations to squeeze every last bit of predictive power out of a model, so clearly this is something you would do if you've got the time/grunt to do the optimisation and every bit of performance improvement is valuable.
There is a definitive answer to this question, which is "yes, it is certainly possible to overfit a cross-validation based model selection criterion and end up with a model that generalises poorly!". In my view, this appears not to be widely appreciated, but is a substantial pitfall in the application of machine learning methods, and is the main focus of my current research; I have written two papers on the subject so far:

G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010. ( www )

which demonstrates that over-fitting in model selection is a substantial problem in machine learning (and you can get severely biased performance estimates if you cut corners in model selection during performance evaluation), and

G. C. Cawley and N. L. C. Talbot, Preventing over-fitting in model selection via Bayesian regularisation of the hyper-parameters, Journal of Machine Learning Research, vol. 8, pp. 841-861, April 2007. ( www )

where the cross-validation based model selection criterion is regularised to try and ameliorate over-fitting in model selection (which is a key problem if you use a kernel with many hyper-parameters).

I am writing up a paper on grid-search based model selection at the moment, which shows that it is certainly possible to use a grid that is too fine, where you end up with a model that is statistically inferior to a model selected by a much coarser grid (it was a question on StackExchange that inspired me to look into grid-search). Hope this helps.

P.S. Unbiased performance evaluation and reliable model selection can indeed be computationally expensive, but in my experience it is well worthwhile. Nested cross-validation, where the outer cross-validation is used for performance estimation and the inner cross-validation for model selection, is a good basic approach.
{ "source": [ "https://stats.stackexchange.com/questions/29354", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10354/" ] }
29,356
I would like to gain a conceptual understanding of Root Mean Squared Error (RMSE) and Mean Bias Deviation (MBD). Having calculated these measures for my own comparisons of data, I've often been perplexed to find that the RMSE is high (for example, 100 kg), whereas the MBD is low (for example, less than 1%). More specifically, I am looking for a reference (not online) that lists and discusses the mathematics of these measures. What is the normally accepted way to calculate these two measures, and how should I report them in a journal article?

It would be really helpful in the context of this post to have a "toy" dataset that can be used to describe the calculation of these two measures. For example, suppose that I am to find the mass (in kg) of 200 widgets produced by an assembly line. I also have a mathematical model that will attempt to predict the mass of these widgets. The model doesn't have to be empirical; it can be physically based. I compute the RMSE and the MBD between the actual measurements and the model, finding that the RMSE is 100 kg and the MBD is 1%. What does this mean conceptually, and how would I interpret this result? Now suppose that I find from the outcome of this experiment that the RMSE is 10 kg, and the MBD is 80%. What does this mean, and what can I say about this experiment?

What is the meaning of these measures, and what do the two of them (taken together) imply? What additional information does the MBD give when considered with the RMSE?
I think these concepts are easy to explain, so I would rather just describe them here. I am sure many elementary statistics books cover this, including my book "The Essentials of Biostatistics for Physicians, Nurses and Clinicians." Think of a target with a bulls-eye in the middle. The mean square error represents the average squared distance between an arrow shot at the target and the center. Now if your arrows scatter evenly around the center, then the shooter has no aiming bias and the mean square error is the same as the variance. But in general the arrows can scatter around a point away from the target. The average squared distance of the arrows from the center of the arrows is the variance. This center could be looked at as the shooter's aim point. The distance from this shooter's center, or aim point, to the center of the target is the absolute value of the bias. Think of a right triangle, where the square of the hypotenuse is the sum of the squares of the two sides. So the squared distance from the arrow to the target is the sum of the square of the distance from the arrow to the aim point and the square of the distance between the center of the target and the aim point. Averaging all these squared distances gives the mean square error as the sum of the bias squared and the variance.
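As a quick numerical check of this decomposition, here is a small R sketch with arbitrary simulated "arrow" data (the aim point and spread below are made-up values):

# Illustration: mean squared error decomposes into squared bias plus variance.
set.seed(1)
target <- 0                                 # centre of the target
aim    <- 2                                 # shooter's aim point, so the bias is 2
shots  <- rnorm(1e5, mean = aim, sd = 3)    # arrows scatter around the aim point

mse      <- mean((shots - target)^2)
bias     <- mean(shots) - target
variance <- mean((shots - mean(shots))^2)

mse                      # approximately...
bias^2 + variance        # ...equal to squared bias plus variance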
{ "source": [ "https://stats.stackexchange.com/questions/29356", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11615/" ] }
29,385
I'm training a multi-class LDA classifier with 8 classes of data. While performing training, I get a warning of: "Variables are collinear". I'm getting a training accuracy of over 90%. I'm using the scikits-learn library in Python to train and test the multi-class data. I get decent testing accuracy too (about 85%-95%). I don't understand what the error/warning means. Please help me out.
Multicollinearity means that your predictors are correlated. Why is this bad? Because LDA, like regression techniques, involves computing a matrix inversion, which is inaccurate if the determinant is close to 0 (i.e. two or more variables are almost a linear combination of each other). More importantly, it makes the estimated coefficients impossible to interpret. If an increase in $X_1$, say, is associated with a decrease in $X_2$ and they both increase variable $Y$, every change in $X_1$ will be compensated by a change in $X_2$ and you will underestimate the effect of $X_1$ on $Y$. In LDA, you would underestimate the effect of $X_1$ on the classification. If all you care about is the classification per se, and after training your model on half of the data and testing it on the other half you get 85-95% accuracy, I'd say it is fine.
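To see the effect numerically, here is a small regression sketch (arbitrary simulated data; the same idea carries over to LDA) showing how a nearly collinear predictor inflates the coefficient standard errors while the fit itself stays usable:

# Two nearly collinear predictors give unstable, hard-to-interpret coefficients.
set.seed(42)
n  <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.01)     # x2 is almost a copy of x1
y  <- 1 + 2 * x1 + rnorm(n)

summary(lm(y ~ x1 + x2))           # huge standard errors on x1 and x2
summary(lm(y ~ x1))                # dropping the redundant predictor stabilises them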
{ "source": [ "https://stats.stackexchange.com/questions/29385", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7918/" ] }
29,477
Is there an easy way in R to create a linear regression over a model with 100 parameters? Let's say we have a vector Y with 10 values and a dataframe X with 100 columns and 10 rows. In mathematical notation I would write Y = X[[1]] + X[[2]] + ... + X[[100]]. How do I write something similar in R syntax?
Try this:

df <- data.frame(y=rnorm(10), x1=rnorm(10), x2=rnorm(10))
lm(y~., df)
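The y ~ . shortcut scales directly to the 100-predictor case asked about; for example, with made-up data of suitable dimensions:

set.seed(1)
X   <- as.data.frame(matrix(rnorm(200 * 100), nrow = 200))   # 100 predictor columns
df  <- data.frame(y = rnorm(200), X)
fit <- lm(y ~ ., data = df)        # regresses y on every other column of df
length(coef(fit))                  # intercept plus 100 slopes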
{ "source": [ "https://stats.stackexchange.com/questions/29477", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3807/" ] }
29,489
What's a meaningful "correlation" measure to study the relation between two such types of variables (here, one continuous and one discrete)? And how would I compute it in R?
For a moment, let's ignore the continuous/discrete issue. Basically correlation measures the strength of the linear relationship between variables, and you seem to be asking for an alternative way to measure the strength of the relationship. You might be interested in looking at some ideas from information theory. Specifically I think you might want to look at mutual information. Mutual information essentially gives you a way to quantify how much knowing the state of one variable tells you about the other variable. I actually think this definition is closer to what most people mean when they think about correlation. For two discrete variables X and Y, the calculation is as follows: $$I(X;Y) = \sum_{y \in Y} \sum_{x \in X} p(x,y) \log{ \left(\frac{p(x,y)}{p(x)\,p(y)} \right) }$$ For two continuous variables we integrate rather than taking the sum: $$I(X;Y) = \int_Y \int_X p(x,y) \log{ \left(\frac{p(x,y)}{p(x)\,p(y)} \right) } \; dx \,dy$$ Your particular use-case is for one discrete and one continuous variable. Rather than mixing a sum and an integral, I imagine it would be easier to convert one of the variables into the other type. A typical way to do that would be to discretize your continuous variable into discrete bins. There are a number of ways to discretize data (e.g. equal intervals), and I believe the entropy package should be helpful for the MI calculations if you want to use R.
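To make the discretize-then-estimate idea concrete, here is a package-free sketch (arbitrary simulated data, plug-in probability estimates) for one discrete and one binned continuous variable:

set.seed(7)
g  <- sample(0:1, 500, replace = TRUE)      # discrete variable
x  <- rnorm(500, mean = 2 * g)              # continuous variable related to g
xb <- cut(x, breaks = quantile(x, 0:5 / 5), include.lowest = TRUE)  # 5 equal-count bins

p_xy <- table(g, xb) / length(x)            # joint probabilities
p_x  <- rowSums(p_xy)
p_y  <- colSums(p_xy)
mi   <- sum(p_xy * log(p_xy / outer(p_x, p_y)), na.rm = TRUE)
mi                                          # clearly > 0, indicating dependence (in nats)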
{ "source": [ "https://stats.stackexchange.com/questions/29489", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/8783/" ] }
29,612
I am doing multiple linear regression. I have 21 observations and 5 variables. My aim is just finding the relation between variables. Is my data set enough to do multiple regression? The t-test result revealed 3 of my variables are not significant. Do I need to do my regression again with only the significant variables (or is my first regression enough to draw a conclusion)? My correlation matrix is as follows:

        var 1   var 2   var 3   var 4   var 5      Y
var 1     1.0     0.0     0.0    -0.1    -0.3   -0.2
var 2     0.0     1.0     0.4     0.3    -0.4   -0.4
var 3     0.0     0.4     1.0     0.7    -0.7   -0.6
var 4    -0.1     0.3     0.7     1.0    -0.7   -0.9
var 5    -0.3    -0.4    -0.7    -0.7     1.0    0.8
Y        -0.2    -0.4    -0.6    -0.9     0.8    1.0

var 1 and var 2 are continuous variables, var 3 to var 5 are categorical variables, and Y is my dependent variable. It should be mentioned that the important variable which has been considered in the literature as the most influential factor on my dependent variable is not among my regression variables, due to my data limitation. Does it still make sense to do regression without this important variable? Here are my confidence intervals:

Variables    Regression Coefficient    Lower 95% C.L.    Upper 95% C.L.
Intercept    53.61                      38.46             68.76
var 1        -0.39                      -0.97              0.19
var 2        -0.01                      -0.03              0.01
var 3         5.28                      -2.28             12.84
var 4       -27.65                     -37.04            -18.26
var 5        11.52                       0.90             22.15
The general rule of thumb (based on stuff in Frank Harrell's book, Regression Modeling Strategies) is that if you expect to be able to detect reasonable-size effects with reasonable power, you need 10-20 observations per parameter (covariate) estimated. Harrell discusses a lot of options for "dimension reduction" (getting your number of covariates down to a more reasonable size), such as PCA, but the most important thing is that in order to have any confidence in the results, dimension reduction must be done without looking at the response variable. Doing the regression again with just the significant variables, as you suggest above, is in almost every case a bad idea.

However, since you're stuck with a data set and a set of covariates you're interested in, I don't think that running the multiple regression this way is inherently wrong. I think the best thing would be to accept the results as they are, from the full model (don't forget to look at the point estimates and confidence intervals to see whether the significant effects are estimated to be "large" in some real-world sense, and whether the non-significant effects are actually estimated to be smaller than the significant effects or not).

As to whether it makes any sense to do an analysis without the predictor that your field considers important: I don't know. It depends on what kind of inferences you want to make based on the model. In the narrow sense, the regression model is still well-defined ("what are the marginal effects of these predictors on this response?"), but someone in your field might quite rightly say that the analysis just doesn't make sense. It would help a little bit if you knew that the predictors you have are uncorrelated with the well-known predictor (whatever it is), or that the well-known predictor is constant or nearly constant for your data: then at least you could say that something other than the well-known predictor does have an effect on the response.
{ "source": [ "https://stats.stackexchange.com/questions/29612", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11691/" ] }
29,627
I have seen somewhere that classical distances (like Euclidean distance) become weakly discriminant when we have multidimensional and sparse data. Why? Do you have an example of two sparse data vectors where the Euclidean distance does not perform well? In this case which similarity should we use?
I believe it is not so much the sparsity, but the high dimensionality usually associated with sparse data. But maybe it is even worse when the data is very sparse. Because then the distance of any two objects will likely be the quadratic mean of their lengths, or $$\lim_{dim\rightarrow\infty}d(x,y) = ||x-y|| \rightarrow_p \sqrt{||x||^2 + ||y||^2}$$ This equation holds trivially if $\forall_i x_i=0 \vee y_i=0$. If you increase the dimensionality and sparseness enough so that it holds for almost all attributes, the difference will be minimal.

Even worse: if you normalized your vectors to have length $||x||=1$, then the Euclidean distance of any two objects will be $\sqrt{2}$ with high probability.

So as a rule of thumb, for Euclidean distance to be usable (I'm not claiming useful or meaningful) the objects should be non-zero in $3/4$ of attributes. Then there should be a reasonable number of attributes where $|y_i| \neq |x_i-y_i| \neq |x_i|$, so the vector difference becomes useful. This also applies to any other norm-induced difference. Because in the situation above $||x-y|| \rightarrow_p ||x + y||$, I don't think it is desirable behavior for distance functions to become largely independent of the actual difference, or for the absolute difference to converge to the absolute sum!

A common solution is to use distances such as the cosine distance. On some data they work very well. Roughly speaking, they only look at attributes where both vectors are non-zero. An interesting approach, discussed in the reference below (they didn't invent it, but I like their experimental evaluation of the properties), is to use shared nearest neighbors. So even when vectors x and y have no attributes in common, they might have some common neighbors. Counting the number of objects connecting two objects is closely related to graph distances.

There is a lot of discussion on distance functions in: M. E. Houle, H.-P. Kriegel, P. Kröger, E. Schubert and A. Zimek, Can Shared-Neighbor Distances Defeat the Curse of Dimensionality?, SSDBM 2010; and if you would rather not read scientific articles, also on Wikipedia: Curse of Dimensionality.
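A quick simulation (arbitrary sparsity level and sample sizes) illustrates the concentration effect described above: the relative spread of pairwise Euclidean distances shrinks as the dimensionality of sparse vectors grows.

set.seed(1)
sparse_vecs <- function(n, d, density = 0.05)
  matrix(rbinom(n * d, 1, density) * runif(n * d), nrow = n)

for (d in c(10, 100, 1000, 10000)) {
  X  <- sparse_vecs(50, d)
  dd <- dist(X)                                      # pairwise Euclidean distances
  cat(sprintf("dim = %5d   relative spread = %.3f\n", d, sd(dd) / mean(dd)))
}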
{ "source": [ "https://stats.stackexchange.com/questions/29627", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/8114/" ] }
29,641
Suppose I'm running an experiment that can have 2 outcomes, and I'm assuming that the underlying "true" distribution of the 2 outcomes is a binomial distribution with parameters $n$ and $p$: ${\rm Binomial}(n, p)$. I can compute the standard error, $SE_X = \frac{\sigma_X}{\sqrt{n}}$, from the form of the variance of ${\rm Binomial}(n, p)$: $$ \sigma^{2}_{X} = npq$$ where $q = 1-p$. So, $\sigma_X=\sqrt{npq}$. For the standard error I get: $SE_X=\sqrt{pq}$, but I've seen somewhere that $SE_X = \sqrt{\frac{pq}{n}}$. What did I do wrong?
It seems like you're using $n$ twice in two different ways - both as the sample size and as the number of Bernoulli trials that comprise the Binomial random variable; to eliminate any ambiguity, I'm going to use $k$ to refer to the latter. If you have $n$ independent samples from a ${\rm Binomial}(k,p)$ distribution, the variance of their sample mean is $$ {\rm var} \left( \frac{1}{n} \sum_{i=1}^{n} X_{i} \right) = \frac{1}{n^2} \sum_{i=1}^{n} {\rm var}( X_{i} ) = \frac{ n {\rm var}(X_{i}) }{ n^2 } = \frac{ {\rm var}(X_{i})}{n} = \frac{ k pq }{n} $$ where $q=1-p$ and $\overline{X}$ is the sample mean. This follows since (1) ${\rm var}(cX) = c^2 {\rm var}(X)$, for any random variable, $X$, and any constant $c$, and (2) the variance of a sum of independent random variables equals the sum of the variances. The standard error of $\overline{X}$ is the square root of the variance: $\sqrt{\frac{ k pq }{n}}$. Therefore:

When $k = n$, you get the formula you pointed out: $\sqrt{pq}$.

When $k = 1$, and the Binomial variables are just Bernoulli trials, you get the formula you've seen elsewhere: $\sqrt{\frac{pq }{n}}$.
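A quick simulation check (arbitrary choices of $k$, $p$ and $n$) confirms the formula for the standard error of the sample mean:

set.seed(123)
k <- 10; p <- 0.3; n <- 50
sample_means <- replicate(20000, mean(rbinom(n, size = k, prob = p)))
sd(sample_means)                 # empirical standard error of the sample mean
sqrt(k * p * (1 - p) / n)        # theoretical value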
{ "source": [ "https://stats.stackexchange.com/questions/29641", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11411/" ] }
29,713
What is covariance in plain language and how is it linked to the terms dependence , correlation and variance-covariance structure with respect to repeated-measures designs?
Covariance is a measure of how changes in one variable are associated with changes in a second variable. Specifically, covariance measures the degree to which two variables are linearly associated. However, it is also often used informally as a general measure of how monotonically related two variables are. There are many useful intuitive explanations of covariance here. Regarding how covariance is related to each of the terms you mentioned:

(1) Correlation is a scaled version of covariance that takes on values in $[-1,1]$, with a correlation of $\pm 1$ indicating perfect linear association and $0$ indicating no linear relationship. This scaling makes correlation invariant to changes in scale of the original variables (which Akavall points out and gives an example of, +1). The scaling constant is the product of the standard deviations of the two variables.

(2) If two variables are independent, their covariance is $0$. But, having a covariance of $0$ does not imply the variables are independent. This figure (from Wikipedia) shows several example plots of data that are not independent, but their covariances are $0$. One important special case is that if two variables are jointly normally distributed, then they are independent if and only if they are uncorrelated. Another special case is that pairs of Bernoulli variables are uncorrelated if and only if they are independent (thanks @cardinal).

(3) The variance/covariance structure (often called simply the covariance structure) in repeated measures designs refers to the structure used to model the fact that repeated measurements on individuals are potentially correlated (and therefore are dependent) - this is done by modeling the entries in the covariance matrix of the repeated measurements. One example is the exchangeable correlation structure with constant variance, which specifies that each repeated measurement has the same variance, and all pairs of measurements are equally correlated. A better choice may be to specify a covariance structure that requires two measurements taken farther apart in time to be less correlated (e.g. an autoregressive model). Note that the term covariance structure arises more generally in many kinds of multivariate analyses where observations are allowed to be correlated.
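A small R illustration (arbitrary simulated data) of points (1) and (2): correlation is covariance rescaled by the standard deviations, and zero covariance does not imply independence.

set.seed(2)
x <- rnorm(1000)
y <- 2 * x + rnorm(1000)
cov(x, y) / (sd(x) * sd(y))      # equals cor(x, y)
cor(x, y)

z <- x^2                         # completely determined by x, yet...
cov(x, z)                        # ...the covariance is near zero (no linear association)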
{ "source": [ "https://stats.stackexchange.com/questions/29713", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5003/" ] }
29,719
I have the data of a test that could be used to distinguish normal and tumor cells. According to the ROC curve it looks good for this purpose (the area under the curve is 0.9). My questions are:

How do I determine the cutoff point for this test, and its confidence interval, where readings should be judged as ambiguous?

What is the best way to visualize this (using ggplot2)?

The graph is rendered using the ROCR and ggplot2 packages:

#install.packages("ggplot2","ROCR","verification") #if not installed yet
library("ggplot2")
library("ROCR")
library("verification")
d <- read.csv2("data.csv", sep=";")
pred <- with(d, prediction(x, test))
perf <- performance(pred, "tpr", "fpr")
auc <- performance(pred, measure = "auc")@y.values[[1]]
rd <- data.frame(x=perf@x.values[[1]], y=perf@y.values[[1]])
p <- ggplot(rd, aes(x=x, y=y)) + geom_path(size=1)
p <- p + geom_segment(aes(x=0, y=0, xend=1, yend=1), colour="black", linetype=2)
p <- p + geom_text(aes(x=1, y=0, hjust=1, vjust=0, label=paste(sep="", "AUC = ", round(auc,3))), colour="black", size=4)
p <- p + scale_x_continuous(name="False positive rate")
p <- p + scale_y_continuous(name="True positive rate")
p <- p + opts(
  axis.text.x = theme_text(size = 10),
  axis.text.y = theme_text(size = 10),
  axis.title.x = theme_text(size = 12, face = "italic"),
  axis.title.y = theme_text(size = 12, face = "italic", angle=90),
  legend.position = "none",
  legend.title = theme_blank(),
  panel.background = theme_blank(),
  panel.grid.minor = theme_blank(),
  panel.grid.major = theme_line(colour='grey'),
  plot.background = theme_blank()
)
p

data.csv contains the following data:

x;group;order;test
56;Tumor;1;1
55;Tumor;1;1
52;Tumor;1;1
60;Tumor;1;1
54;Tumor;1;1
43;Tumor;1;1
52;Tumor;1;1
57;Tumor;1;1
50;Tumor;1;1
34;Tumor;1;1
24;Normal;2;0
34;Normal;2;0
22;Normal;2;0
32;Normal;2;0
25;Normal;2;0
23;Normal;2;0
23;Normal;2;0
19;Normal;2;0
56;Normal;2;0
44;Normal;2;0
Thanks to all who answered this question. I agree that there can be no single correct answer, and the criteria greatly depend on the aims that stand behind the particular diagnostic test. Finally I found an R package, OptimalCutpoints, dedicated exactly to finding the cutoff point in this type of analysis. Actually there are several methods of determining the cutoff point:

"CB" (cost-benefit method);
"MCT" (minimizes Misclassification Cost Term);
"MinValueSp" (a minimum value set for Specificity);
"MinValueSe" (a minimum value set for Sensitivity);
"RangeSp" (a range of values set for Specificity);
"RangeSe" (a range of values set for Sensitivity);
"ValueSp" (a value set for Specificity);
"ValueSe" (a value set for Sensitivity);
"MinValueSpSe" (a minimum value set for Specificity and Sensitivity);
"MaxSp" (maximizes Specificity);
"MaxSe" (maximizes Sensitivity);
"MaxSpSe" (maximizes Sensitivity and Specificity simultaneously);
"Max-SumSpSe" (maximizes the sum of Sensitivity and Specificity);
"MaxProdSpSe" (maximizes the product of Sensitivity and Specificity);
"ROC01" (minimizes distance between ROC plot and point (0,1));
"SpEqualSe" (Sensitivity = Specificity);
"Youden" (Youden Index);
"MaxEfficiency" (maximizes Efficiency or Accuracy);
"Minimax" (minimizes the most frequent error);
"AUC" (maximizes concordance which is a function of AUC);
"MaxDOR" (maximizes Diagnostic Odds Ratio);
"MaxKappa" (maximizes Kappa Index);
"MaxAccuracyArea" (maximizes Accuracy Area);
"MinErrorRate" (minimizes Error Rate);
"MinValueNPV" (a minimum value set for Negative Predictive Value);
"MinValuePPV" (a minimum value set for Positive Predictive Value);
"MinValueNPVPPV" (a minimum value set for Predictive Values);
"PROC01" (minimizes distance between PROC plot and point (0,1));
"NPVEqualPPV" (Negative Predictive Value = Positive Predictive Value);
"ValueDLR.Negative" (a value set for Negative Diagnostic Likelihood Ratio);
"ValueDLR.Positive" (a value set for Positive Diagnostic Likelihood Ratio);
"MinPvalue" (minimizes p-value associated with the statistical Chi-squared test which measures the association between the marker and the binary result obtained on using the cutpoint);
"ObservedPrev" (the closest value to observed prevalence);
"MeanPrev" (the closest value to the mean of the diagnostic test values);
"PrevalenceMatching" (the value for which predicted prevalence is practically equal to observed prevalence).

So now the task is narrowed to selecting the method that is the best match for each situation. There are many other configuration options described in the package documentation, including several methods of determining confidence intervals and a detailed description of each of the methods.
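As a package-free illustration of one of the listed criteria, the Youden index (J = Se + Sp - 1), here is a hypothetical helper function (not part of the OptimalCutpoints package) written for the data layout in the question, where 1 = tumor and 0 = normal:

youden_cutoff <- function(x, label) {
  cuts <- sort(unique(x))
  J <- sapply(cuts, function(cc) {
    se <- mean(x[label == 1] >= cc)      # sensitivity at cutoff cc
    sp <- mean(x[label == 0] <  cc)      # specificity at cutoff cc
    se + sp - 1
  })
  cuts[which.max(J)]
}
# youden_cutoff(d$x, d$test)   # with the data frame d read in the question's code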
{ "source": [ "https://stats.stackexchange.com/questions/29719", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3376/" ] }
29,731
There are several threads on this site discussing how to determine if the OLS residuals are asymptotically normally distributed. Another way to evaluate the normality of the residuals with R code is provided in this excellent answer. This is another discussion on the practical difference between standardized and observed residuals. But let's say the residuals are definitely not normally distributed, like in this example. Here we have several thousand observations and clearly we must reject the normally-distributed-residuals assumption. One way to address the problem is to employ some form of robust estimator as explained in the answer. However I am not limited to OLS, and in fact I would like to understand the benefits of other GLM or non-linear methodologies. What is the most efficient way to model data violating the OLS normality-of-residuals assumption? Or at least what should be the first step to develop a sound regression analysis methodology?
The ordinary least squares estimate is still a reasonable estimator in the face of non-normal errors. In particular, the Gauss-Markov Theorem states that the ordinary least squares estimate is the best linear unbiased estimator (BLUE) of the regression coefficients ('Best' meaning optimal in terms of minimizing mean squared error) as long as the errors

(1) have mean zero

(2) are uncorrelated

(3) have constant variance

Notice there is no condition of normality here (or even any condition that the errors are IID). The normality condition comes into play when you're trying to get confidence intervals and/or $p$-values. As @MichaelChernick mentions (+1, btw) you can use robust inference when the errors are non-normal, as long as the departure from normality can be handled by the method - for example, (as we discussed in this thread) the Huber $M$-estimator can provide robust inference when the true error distribution is the mixture between normal and a long tailed distribution (which your example looks like) but may not be helpful for other departures from normality. One interesting possibility that Michael alludes to is bootstrapping to obtain confidence intervals for the OLS estimates and seeing how this compares with the Huber-based inference.

Edit: I often hear it said that you can rely on the Central Limit Theorem to take care of non-normal errors - this is not always true (I'm not just talking about counterexamples where the theorem fails). In the real data example the OP refers to, we have a large sample size but can see evidence of a long-tailed error distribution - in situations where you have long tailed errors, you can't necessarily rely on the Central Limit Theorem to give you approximately unbiased inference for realistic finite sample sizes. For example, if the errors follow a $t$-distribution with $2.01$ degrees of freedom (which is not clearly more long-tailed than the errors seen in the OP's data), the coefficient estimates are asymptotically normally distributed, but it takes much longer to "kick in" than it does for other shorter-tailed distributions. Below, I demonstrate with a crude simulation in R that when $y_{i} = 1 + 2x_{i} + \varepsilon_i$, where $\varepsilon_{i} \sim t_{2.01}$, the sampling distribution of $\hat{\beta}_{1}$ is still quite long tailed even when the sample size is $n=4000$:

set.seed(5678)
B = matrix(0,1000,2)
for(i in 1:1000)
{
    x = rnorm(4000)
    y = 1 + 2*x + rt(4000,2.01)
    g = lm(y~x)
    B[i,] = coef(g)
}
qqnorm(B[,2])
qqline(B[,2])
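The bootstrap comparison mentioned above can be sketched in a few lines (illustrative simulated data, case resampling, percentile intervals):

set.seed(1)
x <- rnorm(500); y <- 1 + 2*x + rt(500, 2.01)
boot_slopes <- replicate(2000, {
  idx <- sample(length(y), replace = TRUE)   # resample rows with replacement
  coef(lm(y[idx] ~ x[idx]))[2]
})
quantile(boot_slopes, c(0.025, 0.975))       # percentile bootstrap interval for the slope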
{ "source": [ "https://stats.stackexchange.com/questions/29731", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7795/" ] }
29,781
In some literature, I have read that a regression with multiple explanatory variables, if in different units, needed to be standardized. (Standardizing consists in subtracting the mean and dividing by the standard deviation.) In which other cases do I need to standardize my data? Are there cases in which I should only center my data (i.e., without dividing by standard deviation)?
You have come across a common belief. However, in general, you do not need to center or standardize your data for multiple regression. Different explanatory variables are almost always on different scales (i.e., measured in different units). This is not a problem; the betas are estimated such that they convert the units of each explanatory variable into the units of the response variable appropriately. One thing that people sometimes say is that if you have standardized your variables first, you can then interpret the betas as measures of importance. For instance, if $\beta_1=.6$, and $\beta_2=.3$, then the first explanatory variable is twice as important as the second. While this idea is appealing, unfortunately, it is not valid. There are several issues, but perhaps the easiest to follow is that you have no way to control for possible range restrictions in the variables. Inferring the 'importance' of different explanatory variables relative to each other is a very tricky philosophical issue. None of that is to suggest that standardizing is bad or wrong, just that it typically isn't necessary.

The only case I can think of off the top of my head where centering is helpful is before creating power terms. Let's say you have a variable, $X$, that ranges from 1 to 2, but you suspect a curvilinear relationship with the response variable, and so you want to create an $X^2$ term. If you don't center $X$ first, your squared term will be highly correlated with $X$, which could muddy the estimation of the beta. Centering first addresses this issue.

(Update added much later:) An analogous case that I forgot to mention is creating interaction terms. If an interaction / product term is created from two variables that are not centered on 0, some amount of collinearity will be induced (with the exact amount depending on various factors). Centering first addresses this potential problem. For a fuller explanation, see this excellent answer from @Affine: Collinearity diagnostics problematic only when the interaction term is included.
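A quick illustration (arbitrary numbers) of the centering point: a variable ranging from 1 to 2 is nearly collinear with its square, but not after centering.

set.seed(3)
x  <- runif(100, 1, 2)
xc <- x - mean(x)
cor(x,  x^2)     # close to 1: X and X^2 are nearly collinear
cor(xc, xc^2)    # near 0 after centering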
{ "source": [ "https://stats.stackexchange.com/questions/29781", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11621/" ] }
30,042
Many authors of papers I read affirm that SVMs are a superior technique for facing their regression/classification problem, aware that they couldn't get similar results through NNs. Often the comparison states that SVMs, unlike NNs:

Have a strong founding theory
Reach the global optimum due to quadratic programming
Have no issue with choosing a proper number of parameters
Are less prone to overfitting
Need less memory to store the predictive model
Yield more readable results and a geometrical interpretation

Is this seriously a broadly accepted thought? Don't quote the No-Free-Lunch Theorem or similar statements; my question is about the practical usage of those techniques. On the other side, which kind of abstract problem would you definitely rather face with an NN?
It is a matter of trade-offs. SVMs are in right now, NNs used to be in. You'll find a rising number of papers that claim Random Forests, Probabilistic Graphical Models or Nonparametric Bayesian methods are in. Someone should publish a forecasting model in the Annals of Improbable Research on what models will be considered hip. Having said that, for many famously difficult supervised problems the best performing single models are some type of NN, some type of SVM, or a problem-specific stochastic gradient descent method implemented using signal processing methods.

Pros of NN:
They are extremely flexible in the types of data they can support. NNs do a decent job at learning the important features from basically any data structure, without having to manually derive features. NNs still benefit from feature engineering, e.g. you should have an area feature if you have a length and a width. The model will perform better for the same computational effort.
Most supervised machine learning requires you to have your data structured in an observations-by-features matrix, with the labels as a vector of length observations. This restriction is not necessary with NNs. There is fantastic work with structured SVM, but it is unlikely it will ever be as flexible as NNs.

Pros of SVM:
Fewer hyperparameters. Generally SVMs require less grid-searching to get a reasonably accurate model. SVM with an RBF kernel usually performs quite well.
Global optimum guaranteed.

Cons of NN and SVM:
For most purposes they are both black boxes. There is some research on interpreting SVMs, but I doubt it will ever be as intuitive as GLMs. This is a serious problem in some problem domains.
If you're going to accept a black box then you can usually squeeze out quite a bit more accuracy by bagging/stacking/boosting many, many models with different trade-offs.

Random forests are attractive because they can produce out-of-bag predictions (leave-one-out predictions) with no extra effort, they are very interpretable, they have a good bias-variance trade-off (great for bagging models) and they are relatively robust to selection bias. Stupidly simple to write a parallel implementation of.

Probabilistic graphical models are attractive because they can incorporate domain-specific knowledge directly into the model and are interpretable in this regard.

Nonparametric (or really extremely parametric) Bayesian methods are attractive because they produce confidence intervals directly. They perform very well on small sample sizes and very well on large sample sizes. Stupidly simple to write a linear algebra implementation of.
{ "source": [ "https://stats.stackexchange.com/questions/30042", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11583/" ] }
30,159
Somebody asked me this question in a job interview and I replied that their joint distribution is always Gaussian. I thought that I could always write a bivariate Gaussian with their means, variances and covariance. I am wondering if there can be a case for which the joint probability of two Gaussians is not Gaussian?
The bivariate normal distribution is the exception, not the rule! It is important to recognize that "almost all" joint distributions with normal marginals are not the bivariate normal distribution. That is, the common viewpoint that joint distributions with normal marginals that are not the bivariate normal are somehow "pathological" is a bit misguided. Certainly, the multivariate normal is extremely important due to its stability under linear transformations, and so receives the bulk of attention in applications.

Examples

It is useful to start with some examples. The figure below contains heatmaps of six bivariate distributions, all of which have standard normal marginals. The left and middle ones in the top row are bivariate normals, the remaining ones are not (as should be apparent). They're described further below.

The bare bones of copulas

Properties of dependence are often efficiently analyzed using copulas. A bivariate copula is just a fancy name for a probability distribution on the unit square $[0,1]^2$ with uniform marginals. Suppose $C(u,v)$ is a bivariate copula. Then, immediately from the above, we know that $C(u,v) \geq 0$, $C(u,1) = u$ and $C(1,v) = v$, for example. We can construct bivariate random variables on the Euclidean plane with prespecified marginals by a simple transformation of a bivariate copula. Let $F_1$ and $F_2$ be prescribed marginal distributions for a pair of random variables $(X,Y)$. Then, if $C(u,v)$ is a bivariate copula, $$ F(x,y) = C(F_1(x), F_2(y)) $$ is a bivariate distribution function with marginals $F_1$ and $F_2$. To see this last fact, just note that $$ \renewcommand{\Pr}{\mathbb P} \Pr(X \leq x) = \Pr(X \leq x, Y < \infty) = C(F_1(x), F_2(\infty)) = C(F_1(x),1) = F_1(x) \>. $$ The same argument works for $F_2$. For continuous $F_1$ and $F_2$, Sklar's theorem asserts a converse implying uniqueness. That is, given a bivariate distribution $F(x,y)$ with continuous marginals $F_1$, $F_2$, the corresponding copula is unique (on the appropriate range space).

The bivariate normal is exceptional

Sklar's theorem tells us (essentially) that there is only one copula that produces the bivariate normal distribution. This is, aptly named, the Gaussian copula, which has density on $[0,1]^2$ $$ c_\rho(u,v) := \frac{\partial^2}{\partial u \, \partial v} C_\rho(u,v) = \frac{\varphi_{2,\rho}(\Phi^{-1}(u),\Phi^{-1}(v))}{\varphi(\Phi^{-1}(u)) \varphi(\Phi^{-1}(v))} \>, $$ where the numerator is the bivariate normal distribution with correlation $\rho$ evaluated at $\Phi^{-1}(u)$ and $\Phi^{-1}(v)$. But, there are lots of other copulas, and all of them will give a bivariate distribution with normal marginals which is not the bivariate normal, by using the transformation described in the previous section.

Some details on the examples

Note that if $C(u,v)$ is an arbitrary copula with density $c(u,v)$, the corresponding bivariate density with standard normal marginals under the transformation $F(x,y) = C(\Phi(x),\Phi(y))$ is $$ f(x,y) = \varphi(x) \varphi(y) c(\Phi(x), \Phi(y)) \> . $$ Note that by applying the Gaussian copula in the above equation, we recover the bivariate normal density. But, for any other choice of $c(u,v)$, we will not. The examples in the figure were constructed as follows (going across each row, one column at a time):

1. Bivariate normal with independent components.
2. Bivariate normal with $\rho = -0.4$.
3. The example given in this answer of Dilip Sarwate. It can easily be seen to be induced by the copula $C(u,v)$ with density $c(u,v) = 2 (\mathbf 1_{(0 \leq u \leq 1/2, 0 \leq v \leq 1/2)} + \mathbf 1_{(1/2 < u \leq 1, 1/2 < v \leq 1)})$.
4. Generated from the Frank copula with parameter $\theta = 2$.
5. Generated from the Clayton copula with parameter $\theta = 1$.
6. Generated from an asymmetric modification of the Clayton copula with parameter $\theta = 3$.
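A related classic counterexample (not one of the six panels above) is easy to simulate: take $X$ standard normal and $Y = SX$ with $S = \pm 1$ chosen independently with probability $1/2$ each. Both marginals are standard normal, but the pair is not bivariate normal, since $X+Y$ equals $0$ half the time and therefore cannot be normal.

set.seed(4)
x <- rnorm(1e5)
s <- sample(c(-1, 1), 1e5, replace = TRUE)
y <- s * x                        # Y is marginally standard normal
mean(x + y == 0)                  # about 0.5 -- impossible for a bivariate normal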
{ "source": [ "https://stats.stackexchange.com/questions/30159", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/172/" ] }
30,365
Today I came across a new topic called the Mathematical Expectation. The book I am following says the expectation is the arithmetic mean of a random variable coming from any probability distribution. But it defines the expectation as the sum of the products of the data values and their probabilities. How can these two (average and expectation) be the same? How can the sum of probability times the data be the average of the whole distribution?
Informally, a probability distribution defines the relative frequency of outcomes of a random variable - the expected value can be thought of as a weighted average of those outcomes (weighted by the relative frequency). Similarly, the expected value can be thought of as the arithmetic mean of a set of numbers generated in exact proportion to their probability of occurring (in the case of a continuous random variable this isn't exactly true since specific values have probability $0$). The connection between the expected value and the arithmetic mean is most clear with a discrete random variable, where the expected value is $$ E(X) = \sum_{S} x P(X=x) $$ where $S$ is the sample space. As an example, suppose you have a discrete random variable $X$ such that: $$ X = \begin{cases} 1 & \mbox{with probability } 1/8 \\ 2 & \mbox{with probability } 3/8 \\ 3 & \mbox{with probability } 1/2 \end{cases} $$ That is, the probability mass function is $P(X=1)=1/8$, $P(X=2)=3/8$, and $P(X=3)=1/2$. Using the formula above, the expected value is $$ E(X) = 1\cdot (1/8) + 2 \cdot (3/8) + 3 \cdot (1/2) = 2.375 $$ Now consider numbers generated with frequencies exactly proportional to the probability mass function - for example, the set of numbers $\{1,1,2,2,2,2,2,2,3,3,3,3,3,3,3,3\}$ - two $1$s, six $2$s and eight $3$s. Now take the arithmetic mean of these numbers: $$ \frac{1+1+2+2+2+2+2+2+3+3+3+3+3+3+3+3}{16} = 2.375 $$ and you can see it's exactly equal to the expected value.
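Verifying the worked example above in R:

vals  <- c(1, 2, 3)
probs <- c(1/8, 3/8, 1/2)
sum(vals * probs)                                 # expected value: 2.375
mean(c(1,1, 2,2,2,2,2,2, 3,3,3,3,3,3,3,3))        # arithmetic mean: 2.375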
{ "source": [ "https://stats.stackexchange.com/questions/30365", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11941/" ] }
30,394
Let's say we have the statistics given below:

gender      mean        sd  n
f       1.666667 0.5773503  3
m       4.500000 0.5773503  4

How do you perform a two-sample t-test (to see if there is a significant difference between the means of men and women in some variable) using statistics like this rather than actual data? I couldn't find anywhere on the internet how to do this. Most of the tutorials and even the manual deal with the test with the actual data set only.
You can write your own function based on what we know about the mechanics of the two-sample $t$-test. For example, this will do the job:

# m1, m2: the sample means
# s1, s2: the sample standard deviations
# n1, n2: the sample sizes
# m0: the null value for the difference in means to be tested for. Default is 0.
# equal.variance: whether or not to assume equal variance. Default is FALSE.
t.test2 <- function(m1, m2, s1, s2, n1, n2, m0=0, equal.variance=FALSE)
{
    if( equal.variance==FALSE )
    {
        se <- sqrt( (s1^2/n1) + (s2^2/n2) )
        # welch-satterthwaite df
        df <- ( (s1^2/n1 + s2^2/n2)^2 )/( (s1^2/n1)^2/(n1-1) + (s2^2/n2)^2/(n2-1) )
    } else
    {
        # pooled standard deviation, scaled by the sample sizes
        se <- sqrt( (1/n1 + 1/n2) * ((n1-1)*s1^2 + (n2-1)*s2^2)/(n1+n2-2) )
        df <- n1+n2-2
    }
    t <- (m1-m2-m0)/se
    dat <- c(m1-m2, se, t, 2*pt(-abs(t),df))
    names(dat) <- c("Difference of means", "Std Error", "t", "p-value")
    return(dat)
}

Example usage:

set.seed(0)
x1 <- rnorm(100)
x2 <- rnorm(200)
# you'll find this output agrees with that of t.test when you input x1, x2
(tt2 <- t.test2(mean(x1), mean(x2), sd(x1), sd(x2), length(x1), length(x2)))
Difference of means           Std Error                   t             p-value
         0.01183358          0.11348530          0.10427416          0.91704542

This matches the result of t.test:

(tt <- t.test(x1, x2))
# Welch Two Sample t-test
#
# data:  x1 and x2
# t = 0.10427, df = 223.18, p-value = 0.917
# alternative hypothesis: true difference in means is not equal to 0
# 95 percent confidence interval:
#  -0.2118062  0.2354734
# sample estimates:
#  mean of x   mean of y
# 0.02266845  0.01083487

tt$statistic == tt2[["t"]]
#    t
# TRUE
tt$p.value == tt2[["p-value"]]
# [1] TRUE
{ "source": [ "https://stats.stackexchange.com/questions/30394", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11013/" ] }
30,456
I know that linear regression can be thought of as "the line that is vertically closest to all the points". But there is another way to see it, by visualizing the column space, as "the projection onto the space spanned by the columns of the coefficient matrix". My question is: in these two interpretations, what happens when we use penalized linear regression, like ridge regression and LASSO? What happens with the line in the first interpretation? And what happens with the projection in the second interpretation? UPDATE: @JohnSmith in the comments brought up the fact that the penalty occurs in the space of the coefficients. Is there an interpretation in this space also?
Sorry for my painting skills, but I will try to give you the following intuition. Let $f(\beta)$ be the objective function (for example, MSE in the case of regression). Let's imagine the contour plot of this function in red (of course we paint it in the space of $\beta$, here for simplicity $\beta_1$ and $\beta_2$). There is a minimum of this function, in the middle of the red circles, and this minimum gives us the non-penalized solution. Now we add a different objective $g(\beta)$ whose contour plot is given in blue: either the LASSO regularizer or the ridge regression regularizer. For LASSO $g(\beta) = \lambda (|\beta_1| + |\beta_2|)$, for ridge regression $g(\beta) = \lambda (\beta_1^2 + \beta_2^2)$ ($\lambda$ is a penalization parameter). The contour plots show the areas on which the function has fixed values. So the larger $\lambda$ is, the faster $g(\beta)$ grows, and the more "narrow" the contour plot is. Now we have to find the minimum of the sum of these two objectives: $f(\beta) + g(\beta)$. And this is achieved where the two contour plots meet each other. The larger the penalty, the "more narrow" blue contours we get, and then the plots meet each other at a point closer to zero. And vice versa: the smaller the penalty, the more the contours expand, and the intersection of the blue and red plots comes closer to the center of the red circles (the non-penalized solution). And now follows an interesting thing that greatly explains to me the difference between ridge regression and LASSO: in the case of LASSO the two contour plots will probably meet where a corner of the regularizer is ($\beta_1 = 0$ or $\beta_2 = 0$). In the case of ridge regression that is almost never the case. That's why LASSO gives us a sparse solution, making some of the parameters exactly equal to $0$. Hope that explains some intuition about how penalized regression works in the space of parameters.
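A numerical companion to the picture (a sketch assuming the glmnet package; the data and penalty value are arbitrary): with the same data, the LASSO zeroes out coefficients while ridge only shrinks them.

library(glmnet)
set.seed(5)
X <- matrix(rnorm(100 * 10), 100, 10)
y <- X[, 1] - 2 * X[, 2] + rnorm(100)
coef(glmnet(X, y, alpha = 1), s = 0.3)   # lasso: many coefficients exactly zero
coef(glmnet(X, y, alpha = 0), s = 0.3)   # ridge: small but nonzero coefficients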
{ "source": [ "https://stats.stackexchange.com/questions/30456", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11090/" ] }
30,465
I have a number of multivariate observations and would like to evaluate the probability density across all variables. It is assumed that the data is normally distributed. At low numbers of variables everything works as I would expect, but moving to greater numbers results in the covariance matrix becoming non positive definite. I have reduced the problem in Matlab to:

load raw_data.mat;   % matrix number-of-values x number-of-variables
Sigma = cov(data);
[R,err] = cholcov(Sigma, 0);   % Test for pos-def done in mvnpdf. If err>0 then Sigma is not positive definite.

Is there anything that I can do in order to evaluate my experimental data at higher dimensions? Does it tell me anything useful about my data? I'm somewhat of a beginner in this area so apologies if I've missed out something obvious.
The covariance matrix is not positive definite because it is singular. That means that at least one of your variables can be expressed as a linear combination of the others. You do not need all the variables, as the value of at least one can be determined from a subset of the others. I would suggest adding variables sequentially and checking the covariance matrix at each step. If a new variable creates a singularity, drop it and go on to the next one. Eventually you should have a subset of variables with a positive definite covariance matrix.
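The question uses Matlab, but the sequential check is easy to sketch in R (illustrative data where one column is an exact linear combination; cholcov plays the same role as the eigenvalue test used here):

is_pd <- function(S, tol = 1e-8)
  min(eigen(S, symmetric = TRUE, only.values = TRUE)$values) > tol

set.seed(6)
dat   <- data.frame(a = rnorm(50), b = rnorm(50), c = rnorm(50))
dat$d <- dat$a + 2 * dat$b          # d is a linear combination -> singular covariance

keep <- c()
for (v in names(dat)) {
  trial <- c(keep, v)
  if (length(trial) == 1 || is_pd(cov(dat[, trial]))) keep <- trial
}
keep                                # "d" gets dropped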
{ "source": [ "https://stats.stackexchange.com/questions/30465", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11988/" ] }
30,588
We have a multivariate normal vector ${\boldsymbol Y} \sim \mathcal{N}(\boldsymbol\mu, \Sigma)$. Consider partitioning $\boldsymbol\mu$ and ${\boldsymbol Y}$ into $$\boldsymbol\mu = \begin{bmatrix} \boldsymbol\mu_1 \\ \boldsymbol\mu_2 \end{bmatrix} $$ $${\boldsymbol Y}=\begin{bmatrix}{\boldsymbol y}_1 \\ {\boldsymbol y}_2 \end{bmatrix}$$ with a similar partition of $\Sigma$ into $$ \begin{bmatrix} \Sigma_{11} & \Sigma_{12}\\ \Sigma_{21} & \Sigma_{22} \end{bmatrix} $$ Then, $({\boldsymbol y}_1|{\boldsymbol y}_2={\boldsymbol a})$, the conditional distribution of the first partition given the second, is $\mathcal{N}(\overline{\boldsymbol\mu},\overline{\Sigma})$, with mean $$ \overline{\boldsymbol\mu}=\boldsymbol\mu_1+\Sigma_{12}{\Sigma_{22}}^{-1}({\boldsymbol a}-\boldsymbol\mu_2) $$ and covariance matrix $$ \overline{\Sigma}=\Sigma_{11}-\Sigma_{12}{\Sigma_{22}}^{-1}\Sigma_{21}$$ Actually these results are provided in Wikipedia too, but I have no idea how $\overline{\boldsymbol\mu}$ and $\overline{\Sigma}$ are derived. These results are crucial, since they are important statistical formulas for deriving Kalman filters. Would anyone provide me with the steps for deriving $\overline{\boldsymbol\mu}$ and $\overline{\Sigma}$? Thank you very much!
You can prove it by explicitly calculating the conditional density by brute force, as in Procrastinator's link (+1) in the comments. But, there's also a theorem that says all conditional distributions of a multivariate normal distribution are normal. Therefore, all that's left is to calculate the mean vector and covariance matrix. I remember we derived this in a time series class in college by cleverly defining a third variable and using its properties to derive the result more simply than the brute force solution in the link (as long as you're comfortable with matrix algebra). I'm going from memory but it was something like this: Let ${\bf x}_{1}$ be the first partition and ${\bf x}_2$ the second. Now define ${\bf z} = {\bf x}_1 + {\bf A} {\bf x}_2 $ where ${\bf A} = -\Sigma_{12} \Sigma^{-1}_{22}$. Now we can write \begin{align*} {\rm cov}({\bf z}, {\bf x}_2) &= {\rm cov}( {\bf x}_{1}, {\bf x}_2 ) + {\rm cov}({\bf A}{\bf x}_2, {\bf x}_2) \\ &= \Sigma_{12} + {\bf A} {\rm var}({\bf x}_2) \\ &= \Sigma_{12} - \Sigma_{12} \Sigma^{-1}_{22} \Sigma_{22} \\ &= 0 \end{align*} Therefore ${\bf z}$ and ${\bf x}_2$ are uncorrelated and, since they are jointly normal, they are independent . Now, clearly $E({\bf z}) = {\boldsymbol \mu}_1 + {\bf A} {\boldsymbol \mu}_2$, therefore it follows that \begin{align*} E({\bf x}_1 | {\bf x}_2) &= E( {\bf z} - {\bf A} {\bf x}_2 | {\bf x}_2) \\ & = E({\bf z}|{\bf x}_2) - E({\bf A}{\bf x}_2|{\bf x}_2) \\ & = E({\bf z}) - {\bf A}{\bf x}_2 \\ & = {\boldsymbol \mu}_1 + {\bf A} ({\boldsymbol \mu}_2 - {\bf x}_2) \\ & = {\boldsymbol \mu}_1 + \Sigma_{12} \Sigma^{-1}_{22} ({\bf x}_2- {\boldsymbol \mu}_2) \end{align*} which proves the first part. For the covariance matrix, note that \begin{align*} {\rm var}({\bf x}_1|{\bf x}_2) &= {\rm var}({\bf z} - {\bf A} {\bf x}_2 | {\bf x}_2) \\ &= {\rm var}({\bf z}|{\bf x}_2) + {\rm var}({\bf A} {\bf x}_2 | {\bf x}_2) - {\bf A}{\rm cov}({\bf z}, -{\bf x}_2) - {\rm cov}({\bf z}, -{\bf x}_2) {\bf A}' \\ &= {\rm var}({\bf z}|{\bf x}_2) \\ &= {\rm var}({\bf z}) \end{align*} Now we're almost done: \begin{align*} {\rm var}({\bf x}_1|{\bf x}_2) = {\rm var}( {\bf z} ) &= {\rm var}( {\bf x}_1 + {\bf A} {\bf x}_2 ) \\ &= {\rm var}( {\bf x}_1 ) + {\bf A} {\rm var}( {\bf x}_2 ) {\bf A}' + {\bf A} {\rm cov}({\bf x}_1,{\bf x}_2) + {\rm cov}({\bf x}_2,{\bf x}_1) {\bf A}' \\ &= \Sigma_{11} +\Sigma_{12} \Sigma^{-1}_{22} \Sigma_{22}\Sigma^{-1}_{22}\Sigma_{21} - 2 \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} \\ &= \Sigma_{11} +\Sigma_{12} \Sigma^{-1}_{22}\Sigma_{21} - 2 \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} \\ &= \Sigma_{11} -\Sigma_{12} \Sigma^{-1}_{22}\Sigma_{21} \end{align*} which proves the second part. Note: For those not very familiar with the matrix algebra used here, this is an excellent resource . Edit: One property used here this is not in the matrix cookbook (good catch @FlyingPig) is property 6 on the wikipedia page about covariance matrices: which is that for two random vectors $\bf x, y$, $${\rm var}({\bf x}+{\bf y}) = {\rm var}({\bf x})+{\rm var}({\bf y}) + {\rm cov}({\bf x},{\bf y}) + {\rm cov}({\bf y},{\bf x})$$ For scalars, of course, ${\rm cov}(X,Y)={\rm cov}(Y,X)$ but for vectors they are different insofar as the matrices are arranged differently.
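A quick numerical sanity check of the two formulas (a sketch assuming the MASS package for simulation; the mean vector, covariance matrix and conditioning window are arbitrary choices):

library(MASS)
set.seed(8)
mu    <- c(1, 2, 3)
Sigma <- matrix(c(2.0, 0.5, 0.3,
                  0.5, 1.0, 0.4,
                  0.3, 0.4, 1.5), 3, 3)
Y <- mvrnorm(2e5, mu, Sigma)

# block 1 = component 1, block 2 = components 2 and 3; condition on y2 being near a
a   <- c(2.2, 2.8)
idx <- abs(Y[, 2] - a[1]) < 0.05 & abs(Y[, 3] - a[2]) < 0.05

mean(Y[idx, 1])                                                       # empirical conditional mean
mu[1] + Sigma[1, 2:3] %*% solve(Sigma[2:3, 2:3]) %*% (a - mu[2:3])    # formula
var(Y[idx, 1])                                                        # empirical conditional variance
Sigma[1, 1] - Sigma[1, 2:3] %*% solve(Sigma[2:3, 2:3]) %*% Sigma[2:3, 1]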
{ "source": [ "https://stats.stackexchange.com/questions/30588", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3525/" ] }
30,691
I got an R script from someone to run a random forest model. I modified it and ran it with some employee data. We are trying to predict voluntary separations. Here is some additional info: this is a classification model where 0 = employee stayed, 1 = employee terminated; we are currently only looking at a dozen predictor variables; the data is "unbalanced" in that the term'd records make up about 7% of the total record set. I ran the model with various mtry and ntree selections but settled on the below. The OOB is 6.8%, which I think is good, but the confusion matrix seems to tell a different story for predicting terms, since the error rate is quite high at 92.79%. Am I right in assuming that I can't rely on and use this model because of the high error rate for predicting terms? Or is there something else I can do to use RF and get a smaller error rate for predicting terms?

FOREST_model <- randomForest(theFormula, data=trainset, mtry=3, ntree=500, importance=TRUE, do.trace=100)

ntree      OOB      1      2
  100:   6.97%  0.47% 92.79%
  200:   6.87%  0.36% 92.79%
  300:   6.82%  0.33% 92.55%
  400:   6.80%  0.29% 92.79%
  500:   6.80%  0.29% 92.79%

> print(FOREST_model)

Call:
 randomForest(formula = theFormula, data = trainset, mtry = 3, ntree = 500, importance = TRUE, do.trace = 100)
               Type of random forest: classification
                     Number of trees: 500
No. of variables tried at each split: 3

        OOB estimate of error rate: 6.8%
Confusion matrix:
     0  1 class.error
0 5476 16 0.002913328
1  386 30 0.927884615

> nrow(trainset)
[1] 5908
The confusion matrix is calculated at a specific point determined by the cutoff on the votes. Depending on your needs, i.e., better precision (reduce false positives) or better sensitivity (reduce false negatives), you may prefer a different cutoff. For this purpose I recommend plotting (i) a ROC curve, (ii) a recall-precision curve and (iii) a calibration curve in order to select the cutoff that best fits your purposes. All these can be easily plotted using the 2 following functions from the ROCR R library (available also on CRAN):

pred.obj <- prediction(predictions, labels,...)
performance(pred.obj, measure, ...)

For example:

rf <- randomForest (x,y,...);
OOB.votes <- predict (rf,x,type="prob");
OOB.pred <- OOB.votes[,2];

pred.obj <- prediction (OOB.pred,y);

RP.perf <- performance(pred.obj, "rec","prec");
plot (RP.perf);

ROC.perf <- performance(pred.obj, "fpr","tpr");
plot (ROC.perf);

plot  (RP.perf@alpha.values[[1]],RP.perf@x.values[[1]]);
lines (RP.perf@alpha.values[[1]],RP.perf@y.values[[1]]);
lines (ROC.perf@alpha.values[[1]],ROC.perf@x.values[[1]]);
{ "source": [ "https://stats.stackexchange.com/questions/30691", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11944/" ] }
30,728
I have analysed my data as they are. Now I want to look at my analyses after taking the log of all variables. Many variables contain many zeros. Therefore I add a small quantity to avoid taking the log of zero. So far I've added 10^-10, without any rationale really, just because I felt like adding a very small quantity would be advisable to minimize the effect of my arbitrarily chosen quantity. But some variables contain mostly zeros, and therefore when logged they become mostly -23.02. The range of the ranges of my variables is 1.33-8819.21, and the frequency of zeroes also varies dramatically. Therefore my personal choice of "small quantity" affects the variables very differently. It is clear now that 10^-10 is a completely unacceptable choice, as most of the variance in all the variables then comes from this arbitrary "small quantity". I wonder what would be a more correct way of doing this. Maybe it's better to derive the quantity from each variable's individual distribution? Are there any guidelines about how big this "small quantity" should be? My analyses are mostly simple Cox models with each variable and age/sex as IVs. The variables are the concentrations of various blood lipids, with often considerable coefficients of variation.

Edit: Adding the smallest non-zero value of the variable seems practical for my data. But maybe there is a general solution?

Edit 2: As the zeros merely indicate concentrations below the detection limit, maybe setting them to (detection limit)/2 would be appropriate?
"As the zeros merely indicate concentrations below the detection limit, maybe setting them to (detection limit)/2 would be appropriate" — I was just typing that concentrations are the case that comes to my mind where the log does (frequently) make sense and zeros may occur, when you posted the 2nd edit. As you say, for measured concentrations the 0 just means "I couldn't measure concentrations that low". Side note: do you mean LOQ instead of LOD? Whether setting the 0 to $\frac{1}{2}$ LOQ is a good idea or not depends: from the point of view that $\frac{1}{2}\mathrm{LOQ}$ is your "guess" expressing that c is anywhere between 0 and LOQ, it does make sense. But consider the corresponding calibration function: on the left, the calibration function yields c = 0 below the LOQ; on the right, $\frac{1}{2}\mathrm{LOQ}$ is used instead of 0. However, if the original measured value is available, that may provide a better guess. After all, LOQ usually just means that the relative error is 10%. Below that the measurement still carries information, but the relative error becomes huge (blue: LOD, red: LOQ). An alternative would be to exclude these measurements. That can be reasonable, too, e.g. think of a calibration curve. In practice you often observe a sigmoid shape: for low c, the signal is approximately constant, then intermediate linear behaviour, then detector saturation. In that situation you may want to restrict yourself to statements about concentrations that are clearly in the linear range, as other processes heavily influence the result both below and above it. Make sure you explain that the data was selected that way and why. Edit: What is sensible or acceptable depends of course on the problem. Hopefully, we're talking here about a small part of the data that does not influence the analysis. Maybe a quick and dirty check is: run your data analysis with and without excluding the data (or whatever treatment you propose) and see whether anything changes substantially. If you see changes, then of course you're in trouble. However, from the analytical chemistry point of view, I'd say your trouble does not primarily lie in which method you use to deal with the data; the underlying problem is that the analytical method (or its working range) was not appropriate for the problem at hand. There is of course a zone where the better statistical approach can save your day, but in the end the approximation "garbage in, garbage out" usually holds also for the fancier methods. Quotations for the topic: A statistician once told me: "The problem with you (chemists/spectroscopists) is that your problems are either so hard that they cannot be solved or so easy that there is no fun in solving them." And Fisher, about the statistical post-mortem of experiments.
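For concreteness, here is a small sketch of the two substitution rules discussed in this thread (replace zeros by LOQ/2 versus by the smallest observed non-zero value); the data and the LOQ value are made up purely for illustration:

x   <- c(0, 0, 0.8, 1.5, 0, 2.3, 4.1)                  # measured concentrations, 0 = below LOQ
LOQ <- 0.5                                             # assumed limit of quantitation
log.half.loq <- log(ifelse(x == 0, LOQ / 2, x))        # zeros replaced by LOQ/2
log.min.pos  <- log(ifelse(x == 0, min(x[x > 0]), x))  # zeros replaced by smallest non-zero value
cbind(x, log.half.loq, log.min.pos)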
{ "source": [ "https://stats.stackexchange.com/questions/30728", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10064/" ] }
30,788
I'm very new with R and stats in general, but I need to make a scatterplot that I think might be beyond its native capacities. I have a couple of vectors of observations and I want to make a scatterplot with them, and each pair falls into one out of three categories. I would like to make a scatterplot that separates each category, either by colour or by symbol. I think this would be better than generating three different scatterplots. I have another problem with the fact that in each of the categories, there are large clusters at one point, but the clusters are larger in one group than in the other two. Does anyone know a good way to do this? Packages I should install and learn how to use? Anyone done something similar? Thanks
large clusters: if overprinting is a problem, you could either use a lower alpha, so single points are dim, but overprinting makes more intense colour. Or you switch to 2d histograms or density estimates.

require ("ggplot2")
ggplot (iris, aes (x = Sepal.Length, y = Sepal.Width, colour = Species)) + stat_density2d ()

You'd probably want to facet this...

ggplot (iris, aes (x = Sepal.Length, y = Sepal.Width, fill = Species)) + stat_binhex (bins=5, aes (alpha = ..count..)) + facet_grid (. ~ Species)

While you can produce this plot also without facets, the printing order of the Species influences the final picture. You can avoid this if you're willing to get your hands a bit dirty (= link to explanation & code) and calculate mixed colours for the hexagons: Another useful thing is to use (hex)bins for high density areas, and plot single points for other parts:

ggplot (df, aes (x = date, y = t5)) + stat_binhex (data = df [df$t5 <= 0.5,], bins = nrow (df) / 250) + geom_point (data = df [df$t5 > 0.5,], aes (col = type), shape = 3) + scale_fill_gradient (low = "#AAAAFF", high = "#000080") + scale_colour_manual ("response type", values = c (normal = "black", timeout = "red")) + ylab ("t / s")

For the sake of completeness of the plotting packages, let me also mention lattice:

require ("lattice")
xyplot(Sepal.Width ~ Sepal.Length | Species, iris, pch= 20)
xyplot(Sepal.Width ~ Sepal.Length, iris, groups = iris$Species, pch= 20)
xyplot(Sepal.Width ~ Sepal.Length | Species, iris, groups = iris$Species, pch= 20)
{ "source": [ "https://stats.stackexchange.com/questions/30788", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12096/" ] }
30,858
I need to calculate the cumulative distribution function of a data sample. Is there something similar to hist() in R that measures the cumulative distribution function? I have tried ecdf() but I can't understand the logic.
The ecdf function applied to a data sample returns a function representing the empirical cumulative distribution function. For example: > X = rnorm(100) # X is a sample of 100 normally distributed random variables > P = ecdf(X) # P is a function giving the empirical CDF of X > P(0.0) # This returns the empirical CDF at zero (should be close to 0.5) [1] 0.52 > plot(P) # Draws a plot of the empirical CDF (see below) If you want to have an object representing the empirical CDF evaluated at specific values (rather than as a function object) then you can do > z = seq(-3, 3, by=0.01) # The values at which we want to evaluate the empirical CDF > p = P(z) # p now stores the empirical CDF evaluated at the values in z Note that p contains at most the same amount of information as P (and possibly it contains less) which in turn contains the same amount of information as X .
{ "source": [ "https://stats.stackexchange.com/questions/30858", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10017/" ] }
31,036
What is the difference between a consistent estimator and an unbiased estimator? The precise technical definitions of these terms are fairly complicated, and it's difficult to get an intuitive feel for what they mean. I can imagine a good estimator, and a bad estimator, but I'm having trouble seeing how any estimator could satisfy one condition and not the other.
To define the two terms without using too much technical language: An estimator is consistent if, as the sample size increases, the estimates (produced by the estimator) "converge" to the true value of the parameter being estimated. To be slightly more precise - consistency means that, as the sample size increases, the sampling distribution of the estimator becomes increasingly concentrated at the true parameter value. An estimator is unbiased if, on average, it hits the true parameter value. That is, the mean of the sampling distribution of the estimator is equal to the true parameter value. The two are not equivalent: Unbiasedness is a statement about the expected value of the sampling distribution of the estimator. Consistency is a statement about "where the sampling distribution of the estimator is going" as the sample size increases. It certainly is possible for one condition to be satisfied but not the other - I will give two examples. For both examples consider a sample $X_1, ..., X_n$ from a $N(\mu, \sigma^2)$ population. Unbiased but not consistent: Suppose you're estimating $\mu$ . Then $X_1$ is an unbiased estimator of $\mu$ since $E(X_1) = \mu$ . But, $X_1$ is not consistent since its distribution does not become more concentrated around $\mu$ as the sample size increases - it's always $N(\mu, \sigma^2)$ ! Consistent but not unbiased: Suppose you're estimating $\sigma^2$ . The maximum likelihood estimator is $$ \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \overline{X})^2 $$ where $\overline{X}$ is the sample mean. It is a fact that $$ E(\hat{\sigma}^2) = \frac{n-1}{n} \sigma^2 $$ which can be derived using the information here . Therefore $\hat{\sigma}^2$ is biased for any finite sample size. We can also easily derive that $${\rm var}(\hat{\sigma}^2) = \frac{ 2\sigma^4(n-1)}{n^2}$$ From these facts we can informally see that the distribution of $\hat{\sigma}^2$ is becoming more and more concentrated at $\sigma^2$ as the sample size increases since the mean is converging to $\sigma^2$ and the variance is converging to $0$ . ( Note: This does constitute a proof of consistency, using the same argument as the one used in the answer here )
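A small simulation sketch of the two examples above may help build intuition (the values mu = 5 and sigma = 2 are arbitrary): the distribution of X_1 does not tighten as n grows (unbiased but not consistent), while the MLE of sigma^2 is too small on average for small n but concentrates around the true value 4 as n grows (biased but consistent).

set.seed(1)
mu <- 5; sigma <- 2
for (n in c(10, 100, 10000)) {
  x1      <- replicate(2000, rnorm(n, mu, sigma)[1])                                # the estimator X_1
  mle.var <- replicate(2000, { x <- rnorm(n, mu, sigma); mean((x - mean(x))^2) })   # the MLE of sigma^2
  cat("n =", n,
      "| mean(X_1) =", round(mean(x1), 2), ", sd(X_1) =", round(sd(x1), 2),
      "| mean(MLE of sigma^2) =", round(mean(mle.var), 2),
      ", sd =", round(sd(mle.var), 2), "\n")
}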
{ "source": [ "https://stats.stackexchange.com/questions/31036", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10914/" ] }
31,066
I am currently using an SVM with a linear kernel to classify my data. There is no error on the training set. I tried several values for the parameter $C$ ($10^{-5}, \dots, 10^2$). This did not change the error on the test set. Now I wonder: is this an error caused by the ruby bindings for libsvm I am using ( rb-libsvm ) or is this theoretically explainable ? Should the parameter $C$ always change the performance of the classifier?
The C parameter tells the SVM optimization how much you want to avoid misclassifying each training example. For large values of C, the optimization will choose a smaller-margin hyperplane if that hyperplane does a better job of getting all the training points classified correctly. Conversely, a very small value of C will cause the optimizer to look for a larger-margin separating hyperplane, even if that hyperplane misclassifies more points. For very tiny values of C, you should get misclassified examples, often even if your training data is linearly separable.
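A quick sketch of this behaviour with the e1071 package (assuming it is installed); the data are two well-separated Gaussian clouds, so the exact counts depend on the random draw, but the pattern is that small C uses many support vectors (a wide margin, possibly with training errors) while large C uses few:

library(e1071)
set.seed(1)
x <- rbind(matrix(rnorm(40, mean = 0), ncol = 2),   # class -1
           matrix(rnorm(40, mean = 4), ncol = 2))   # class +1
y <- factor(rep(c(-1, 1), each = 20))
for (C in c(1e-2, 1, 1e2)) {
  fit <- svm(x, y, kernel = "linear", cost = C, scale = FALSE)
  cat("C =", C,
      "| support vectors:", fit$tot.nSV,
      "| training errors:", sum(predict(fit, x) != y), "\n")
}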
{ "source": [ "https://stats.stackexchange.com/questions/31066", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10268/" ] }
31,083
I'm using R to do K-means clustering. I'm using 14 variables to run K-means What is a pretty way to plot the results of K-means? Are there any existing implementations? Does having 14 variables complicate plotting the results? I found something called GGcluster which looks cool but it is still in development. I also read something about sammon mapping, but didn't understand it very well. Would this be a good option?
Here is an example that can help you:

library(cluster)
library(fpc)
data(iris)
dat <- iris[, -5]   # without known classification

# K-means cluster analysis
clus <- kmeans(dat, centers=3)

# Fig 01
plotcluster(dat, clus$cluster)

# More complex
clusplot(dat, clus$cluster, color=TRUE, shade=TRUE, labels=2, lines=0)

# Fig 03
with(iris, pairs(dat, col=c(1:3)[clus$cluster]))

Based on the latter plot you could decide which of your initial variables to plot. Maybe 14 variables is a lot, so you can try a principal component analysis (PCA) first and then use the first two or three components from the PCA to perform the cluster analysis.
{ "source": [ "https://stats.stackexchange.com/questions/31083", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11748/" ] }
31,088
Consistency is obviously a natural and important property of estimators, but are there situations where it may be better to use an inconsistent estimator rather than a consistent one? More specifically, are there examples of an inconsistent estimator which outperforms a reasonable consistent estimator for all finite $n$ (with respect to some suitable loss function)?
This answer describes a realistic problem where a natural consistent estimator is dominated (outperformed for all possible parameter values for all sample sizes) by an inconsistent estimator. It is motivated by the idea that consistency is best suited for quadratic losses, so using a loss departing strongly from that (such as an asymmetric loss) should render consistency almost useless in evaluating the performance of estimators. Suppose your client wishes to estimate the mean of a variable (assumed to have a symmetric distribution) from an iid sample $(x_1, \ldots, x_n)$, but they are averse to either (a) underestimating it or (b) grossly overestimating it. To see how this might work out, let us adopt a simple loss function, understanding that in practice the loss might differ from this one quantitatively (but not qualitatively). Choose units of measurement so that $1$ is the largest tolerable overestimate and set the loss of an estimate $t$ when the true mean is $\mu$ to equal $0$ whenever $\mu \le t\le \mu+1$ and equal to $1$ otherwise. The calculations are particularly simple for a Normal family of distributions with mean $\mu$ and variance $\sigma^2 \gt 0$, for then the sample mean $\bar{x}=\frac{1}{n}\sum_i x_i$ has a Normal$(\mu, \sigma^2/n)$ distribution. The sample mean is a consistent estimator of $\mu$, as is well known (and obvious). Writing $\Phi$ for the standard normal CDF, the expected loss of the sample mean equals $1/2 + \Phi(-\sqrt{n}/\sigma)$: $1/2$ comes from the 50% chance that the sample mean will underestimate the true mean and $\Phi(-\sqrt{n}/\sigma)$ comes from the chance of overestimating the true mean by more than $1$. The expected loss of $\bar{x}$ equals the blue area under this standard normal PDF. The red area gives the expected loss of the alternative estimator, below. They differ by replacing the solid blue area between $-\sqrt{n}/(2\sigma)$ and $0$ by the smaller solid red area between $\sqrt{n}/(2\sigma)$ and $\sqrt{n}/\sigma$. That difference grows as $n$ increases. An alternative estimator given by $\bar{x}+1/2$ has an expected loss of $2\Phi(-\sqrt{n}/(2\sigma))$. The symmetry and unimodality of normal distributions imply its expected loss is always better than that of the sample mean. (This makes the sample mean inadmissible for this loss.) Indeed, the expected loss of the sample mean has a lower limit of $1/2$ whereas that of the alternative converges to $0$ as $n$ grows. However, the alternative clearly is inconsistent: as $n$ grows, it converges in probability to $\mu+1/2 \ne \mu$. Blue dots show loss for $\bar{x}$ and red dots show loss for $\bar{x}+1/2$ as a function of sample size $n$.
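A quick simulation sketch that checks the two expected-loss formulas above (taking mu = 0 and sigma = 1, with the loss equal to 0 exactly when mu <= t <= mu + 1):

set.seed(2)
loss <- function(t, mu) as.numeric(t < mu | t > mu + 1)   # 1 = underestimate, or overestimate by more than 1
for (n in c(4, 16, 100)) {
  xbar <- replicate(20000, mean(rnorm(n)))
  cat("n =", n,
      "| loss(xbar):", round(mean(loss(xbar, 0)), 3),
      "(theory", round(0.5 + pnorm(-sqrt(n)), 3), ")",
      "| loss(xbar + 1/2):", round(mean(loss(xbar + 0.5, 0)), 3),
      "(theory", round(2 * pnorm(-sqrt(n) / 2), 3), ")\n")
}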
{ "source": [ "https://stats.stackexchange.com/questions/31088", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/8507/" ] }
31,177
Is it (always) true that $$\mathrm{Var}\left(\sum\limits_{i=1}^m{X_i}\right) = \sum\limits_{i=1}^m{\mathrm{Var}(X_i)} \>?$$
The answer to your question is "Sometimes, but not in general". To see this let $X_1, ..., X_n$ be random variables (with finite variances). Then, $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) - \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2$$ Now note that $(\sum_{i=1}^{n} a_i)^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j $, which is clear if you think about what you're doing when you calculate $(a_1+...+a_n) \cdot (a_1+...+a_n)$ by hand. Therefore, $$ E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) = E \left( \sum_{i=1}^{n} \sum_{j=1}^{n} X_i X_j \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} E(X_i X_j) $$ similarly, $$ \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2 = \left[ \sum_{i=1}^{n} E(X_i) \right]^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} E(X_i) E(X_j)$$ so $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} \big( E(X_i X_j)-E(X_i) E(X_j) \big) = \sum_{i=1}^{n} \sum_{j=1}^{n} {\rm cov}(X_i, X_j)$$ by the definition of covariance. Now regarding Does the variance of a sum equal the sum of the variances? : If the variables are uncorrelated, yes : that is, ${\rm cov}(X_i,X_j)=0$ for $i\neq j$, then $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} {\rm cov}(X_i, X_j) = \sum_{i=1}^{n} {\rm cov}(X_i, X_i) = \sum_{i=1}^{n} {\rm var}(X_i) $$ If the variables are correlated, no, not in general : For example, suppose $X_1, X_2$ are two random variables each with variance $\sigma^2$ and ${\rm cov}(X_1,X_2)=\rho$ where $0 < \rho <\sigma^2$. Then ${\rm var}(X_1 + X_2) = 2(\sigma^2 + \rho) \neq 2\sigma^2$, so the identity fails. but it is possible for certain examples : Suppose $X_1, X_2, X_3$ have covariance matrix $$ \left( \begin{array}{ccc} 1 & 0.4 &-0.6 \\ 0.4 & 1 & 0.2 \\ -0.6 & 0.2 & 1 \\ \end{array} \right) $$ then ${\rm var}(X_1+X_2+X_3) = 3 = {\rm var}(X_1) + {\rm var}(X_2) + {\rm var}(X_3)$ Therefore if the variables are uncorrelated then the variance of the sum is the sum of the variances, but converse is not true in general.
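A quick numerical check of the three-variable example above, using the fact that the variance of a'X equals a' S a with a = (1, 1, 1):

S <- matrix(c(   1, 0.4, -0.6,
               0.4,   1,  0.2,
              -0.6, 0.2,    1), nrow = 3, byrow = TRUE)
a <- rep(1, 3)                                     # corresponds to X1 + X2 + X3
c(var.of.sum  = as.numeric(t(a) %*% S %*% a),      # sum of all entries of S
  sum.of.vars = sum(diag(S)))                      # both equal 3 here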
{ "source": [ "https://stats.stackexchange.com/questions/31177", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2750/" ] }
31,238
What is the reason that a likelihood function is not a pdf (probability density function)?
We'll start with two definitions: A probability density function (pdf) is a non-negative function that integrates to $1$. The likelihood is defined as the joint density of the observed data as a function of the parameter. But, as pointed out by the reference to Lehmann made by @whuber in a comment below, the likelihood function is a function of the parameter only, with the data held as a fixed constant. So the fact that it is a density as a function of the data is irrelevant. Therefore, the likelihood function is not a pdf because its integral with respect to the parameter does not necessarily equal 1 (and may not be integrable at all, actually, as pointed out by another comment from @whuber). To see this, we'll use a simple example. Suppose you have a single observation, $x$, from a ${\rm Bernoulli}(\theta)$ distribution. Then the likelihood function is $$ L(\theta) = \theta^{x} (1 - \theta)^{1-x} $$ It is a fact that $\int_{0}^{1} L(\theta) d \theta = 1/2$. Specifically, if $x = 1$, then $L(\theta) = \theta$, so $$\int_{0}^{1} L(\theta) d \theta = \int_{0}^{1} \theta \ d \theta = 1/2$$ and a similar calculation applies when $x = 0$. Therefore, $L(\theta)$ cannot be a density function. Perhaps even more important than this technical example showing why the likelihood isn't a probability density is to point out that the likelihood is not the probability of the parameter value being correct or anything like that - it is the probability (density) of the data given the parameter value , which is a completely different thing. Therefore one should not expect the likelihood function to behave like a probability density.
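As a quick numerical check of the integral above (for both possible values of the single Bernoulli observation):

integrate(function(theta) theta,     lower = 0, upper = 1)   # x = 1: the likelihood integrates to 0.5, not 1
integrate(function(theta) 1 - theta, lower = 0, upper = 1)   # x = 0: also 0.5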
{ "source": [ "https://stats.stackexchange.com/questions/31238", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12223/" ] }
31,249
Can someone summarize for me, with possible examples, in what situations increasing the training data improves the overall system? When do we detect that adding more training data could possibly over-fit the data and not give good accuracy on the test data? This is a very non-specific question, but if you want to answer it for a particular situation, please do so.
In most situations, more data is usually better . Overfitting is essentially learning spurious correlations that occur in your training data, but not the real world. For example, if you considered only my colleagues, you might learn to associate "named Matt" with "has a beard." It's 100% valid ( $n=4$ , even!) when considering only the small group of people working on my floor, but it's obviously not true in general. Increasing the size of your data set (e.g., to the entire building or city) should reduce these spurious correlations and improve the performance of your learner. That said, one situation where more data does not help---and may even hurt---is if your additional training data is noisy or doesn't match whatever you are trying to predict. I once did an experiment where I plugged different language models[*] into a voice-activated restaurant reservation system. I varied the amount of training data as well as its relevance: at one extreme, I had a small, carefully curated collection of people booking tables, a perfect match for my application. At the other, I had a model estimated from a huge collection of classic literature, a more accurate language model, but a much worse match to the application. To my surprise, the small-but-relevant model vastly outperformed the big-but-less-relevant model. A surprising situation, called **double-descent**, also occurs when the size of the training set is close to the number of model parameters. In these cases, the test risk first decreases as the size of the training set increases, transiently *increases* when a bit more training data is added, and finally begins decreasing again as the training set continues to grow. This phenomenon was reported 25 years ago in the neural network literature (see Opper, 1995), but occurs in modern networks too (Advani and Saxe, 2017). Interestingly, this happens even for a linear regression, albeit one fit by SGD (Nakkiran, 2019). This phenomenon is not yet totally understood and is largely of theoretical interest: I certainly wouldn't use it as a reason not to collect more data (though I might fiddle with the training set size if n==p and the performance were unexpectedly bad). [*]A language model is just the probability of seeing a given sequence of words e.g. $P(w_n = \textrm{'quick', } w_{n+1} = \textrm{'brown', } w_{n+2} = \textrm{'fox'})$ . They're vital to building halfway decent speech/character recognizers.
{ "source": [ "https://stats.stackexchange.com/questions/31249", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12060/" ] }
31,284
I recently came across multidimensional scaling. I am trying to understand this tool better and its role in modern statistics. So here are a few guiding questions: Which questions does it answer? Which researchers are often interested in using it? Are there other statistical techniques which perform similar functions? What theory is developed around it? How does "MDS" relate to "SSA"? I apologize in advance for asking such a mixed/unorganized question, but so is the nature of my current stage in this field.
In case you will accept a concise answer... What questions does it answer? Visual mapping of pairwise dissimilarities in euclidean (mostly) space of low dimensionality. Which researchers are often interested in using it? Everyone who aims either to display clusters of points or to get some insight into possible latent dimensions along which points differentiate. Or who just wants to turn a proximity matrix into points X variables data. Are there other statistical techniques which perform similar functions? PCA (linear, nonlinear), Correspondence analysis, Multidimensional unfolding (a version of MDS for rectangular matrices). They are related in different ways to MDS but are rarely seen as substitutes of it. (Linear PCA and CA are closely related linear algebra space-reducing operations on square and rectangular matrices, respectively. MDS and MDU are similar iterative, generally nonlinear, space-fitting algorithms on square and rectangular matrices, respectively.) What theory is developed around it? A matrix of observed dissimilarities $S$ is transformed into disparities $T$ in such a way as to minimize the error $E$ of mapping the disparities by means of euclidean distances $D$ in $m$-dimensional space: $S \rightarrow T =^m D+E$. The transformation could be requested to be linear (metric MDS) or monotonic (non-metric MDS). $E$ could be absolute error or squared error or another stress function. You can obtain a map for a single matrix $S$ (classic or simple MDS) or a map for many matrices at once with an additional map of weights (individual differences or weighted MDS). There are other forms as well, like repeated MDS and generalized MDS. So, MDS is a diverse technique. How does "MDS" relate to "SSA"? Some notes on this can be found on the Wikipedia page for MDS. Update for the last point. This technote from SPSS leaves the impression that SSA is a case of Multidimensional unfolding (PREFSCAL procedure in SPSS). The latter, as I've noted above, is an MDS algorithm applied to rectangular (rather than square symmetric) matrices.
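Since the question asks about MDS's place among statistical tools, a minimal R sketch may help make it concrete (the swiss data set and the choice of two dimensions are arbitrary):

library(MASS)                        # for isoMDS()
d <- dist(scale(swiss))              # pairwise dissimilarities between Swiss provinces
mds.classic <- cmdscale(d, k = 2)    # classical (metric) MDS
mds.ordinal <- isoMDS(d, k = 2)      # non-metric (ordinal) MDS
plot(mds.classic, type = "n", xlab = "Dimension 1", ylab = "Dimension 2")
text(mds.classic, labels = rownames(swiss), cex = 0.7)
mds.ordinal$stress                   # Kruskal's stress (badness of fit), in percent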
{ "source": [ "https://stats.stackexchange.com/questions/31284", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/253/" ] }
31,326
Take the case of book ratings on a website. Book A is rated by 10,000 people with an average rating of 4.25 and the variance $\sigma = 0.5$. Similarly Book B is rated by 100 people and has a rating of 4.5 with $\sigma = 0.25$. Now because of the large sample size of Book A the 'mean stabilized' to 4.25. Now for 100 people, it may be that if more people read Book B the mean rating may fall to 4 or 4.25. How should one interpret the comparison of means from different samples, and what are the best conclusions one can/should draw? For example - can we really say Book B is better than Book A?
You can use a t-test to assess if there are differences in the means. The different sample sizes don't cause a problem for the t-test, and don't require the results to be interpreted with any extra care. Ultimately, you can even compare a single observation to an infinite population with a known distribution and mean and SD; for example someone with an IQ of 130 is smarter than 97.7% of people. One thing to note though, is that for a given $N$ (i.e., total sample size), power is maximized if the group $n$'s are equal; with highly unequal group sizes, you don't get as much additional resolution with each additional observation. To clarify my point about power, here is a very simple simulation written for R: set.seed(9) # this makes the simulation exactly reproducible power5050 = vector(length=10000) # these will store the p-values from each power7525 = vector(length=10000) # simulated test to keep track of how many power9010 = vector(length=10000) # are 'significant' for(i in 1:10000){ # I run the following procedure 10k times n1a = rnorm(50, mean=0, sd=1) # I'm drawing 2 samples of size 50 from 2 normal n2a = rnorm(50, mean=.5, sd=1) # distributions w/ dif means, but equal SDs n1b = rnorm(75, mean=0, sd=1) # this version has group sizes of 75 & 25 n2b = rnorm(25, mean=.5, sd=1) n1c = rnorm(90, mean=0, sd=1) # this one has 90 & 10 n2c = rnorm(10, mean=.5, sd=1) power5050[i] = t.test(n1a, n2a, var.equal=T)$p.value # here t-tests are run & power7525[i] = t.test(n1b, n2b, var.equal=T)$p.value # the p-values are stored power9010[i] = t.test(n1c, n2c, var.equal=T)$p.value # for each version } mean(power5050<.05) # this code counts how many of the p-values for [1] 0.7019 # each of the versions are less than .05 & mean(power7525<.05) # divides the number by 10k to compute the % [1] 0.5648 # of times the results were 'significant'. That mean(power9010<.05) # gives an estimate of the power [1] 0.3261 Notice that in all cases $N=100$, but that in the first case $n_1=50$ & $n_2=50$, in the second case $n_1=75$ & $n_2=25$, and in the last case $n_1=90$ and $n_2=10$. Note further that the standardized mean difference / data generating process was the same in all cases. However, whereas the test was 'significant' 70% of the time for the 50-50 sample, power was 56% with 75-25 and only 33% when the group sizes were 90-10. I think of this by analogy. If you want to know the area of a rectangle, and the perimeter is fixed, then the area will be maximized if the length and width are equal (i.e., if the rectangle is a square ). On the other hand, as the length and width diverge (as the rectangle becomes elongated), the area shrinks.
{ "source": [ "https://stats.stackexchange.com/questions/31326", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4426/" ] }
31,362
I'm implementing sentiment analysis on a set of user comments. All comments are on the same object. At the moment I have decided to have three classes - negative, neutral and positive. I have a test array of 1500 comments with marked classes. I tried to use an SVM for classification on binary feature vectors in which each element refers to the presence of some word in the comment. I got a maximum accuracy of 60% correct classes. Published research reports 80% accuracy and better, but it was done on English texts. One of the problems is the numerous errors in the comments, in spelling and grammar. Also the Russian language is more complex than English. I would appreciate advice of any kind. Are there any good tools for the analysis of the Russian language? Maybe SVM isn't the right choice - are there any better algorithms for my case? Or maybe I should choose a more efficient feature space?
{ "source": [ "https://stats.stackexchange.com/questions/31362", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
31,364
I am trying to calculate a covariance matrix from 2D data, assumed to come from a Gaussian distribution. I am trying to calculate it using the equality that $\mathrm{Var}[x] = \mathrm{E}[x^2] - \mathrm{E}[x]^2$, so supposing that D is the data matrix where rows are observations, the MATLAB code is:

[mean(D(:,1).^2) - mean(D(:,1))^2 , mean(D(:,1).*D(:,2)) - mean(D(:,1))*mean(D(:,2))
 mean(D(:,1).*D(:,2)) - mean(D(:,1))*mean(D(:,2)) , mean(D(:,2).^2) - mean(D(:,2))^2]

However cov(D) gives me an entirely different covariance matrix. Of course I can use cov() and get on with my life, but I am using the calculation method above in a different piece of C++ code, so it would be nice to learn where I am going wrong. I think I am missing something crucial and fundamental here but could not figure it out. Any help?
{ "source": [ "https://stats.stackexchange.com/questions/31364", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11997/" ] }
31,459
Is there a relationship between regression and linear discriminant analysis (LDA)? What are their similarities and differences? Does it make any difference if there are two classes or more than two classes?
I take it that the question is about LDA and linear (not logistic) regression. There is a considerable and meaningful relation between linear regression and linear discriminant analysis . In case the dependent variable (DV) consists just of 2 groups the two analyses are actually identical. Despite that computations are different and the results - regression and discriminant coefficients - are not the same, they are exactly proportional to each other. Now for the more-than-two-groups situation. First, let us state that LDA (its extraction, not classification stage) is equivalent (linearly related results) to canonical correlation analysis if you turn the grouping DV into a set of dummy variables (with one redundant of them dropped out) and do canonical analysis with sets "IVs" and "dummies". Canonical variates on the side of "IVs" set that you obtain are what LDA calls "discriminant functions" or "discriminants". So, then how canonical analysis is related to linear regression? Canonical analysis is in essence a MANOVA (in the sense "Multivariate Multiple linear regression" or "Multivariate general linear model") deepened into latent structure of relationships between the DVs and the IVs. These two variations are decomposed in their inter-relations into latent "canonical variates". Let us take the simplest example, Y vs X1 X2 X3. Maximization of correlation between the two sides is linear regression (if you predict Y by Xs) or - which is the same thing - is MANOVA (if you predict Xs by Y). The correlation is unidimensional (with magnitude R^2 = Pillai's trace) because the lesser set, Y, consists just of one variable. Now let's take these two sets: Y1 Y2 vs X1 x2 x3. The correlation being maximized here is 2-dimensional because the lesser set contains 2 variables. The first and stronger latent dimension of the correlation is called the 1st canonical correlation, and the remaining part, orthogonal to it, the 2nd canonical correlation. So, MANOVA (or linear regression) just asks what are partial roles (the coefficients) of variables in the whole 2-dimensional correlation of sets; while canonical analysis just goes below to ask what are partial roles of variables in the 1st correlational dimension, and in the 2nd. Thus, canonical correlation analysis is multivariate linear regression deepened into latent structure of relationship between the DVs and IVs. Discriminant analysis is a particular case of canonical correlation analysis ( see exactly how ). So, here was the answer about the relation of LDA to linear regression in a general case of more-than-two-groups. Note that my answer does not at all see LDA as classification technique. I was discussing LDA only as extraction-of-latents technique. Classification is the second and stand-alone stage of LDA (I described it here ). @Michael Chernick was focusing on it in his answers.
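The exact proportionality in the two-group case is easy to verify numerically. A small sketch with two of the iris species (the choice of data is arbitrary); the LDA coefficients and the OLS coefficients from regressing a 0/1 dummy on the predictors should differ only by a constant factor:

library(MASS)
d <- droplevels(subset(iris, Species != "virginica"))   # keep two groups only
fit.lda <- lda(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, data = d)
y <- as.numeric(d$Species) - 1                          # dummy-code the two groups as 0/1
fit.ols <- lm(y ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, data = d)
cbind(lda   = fit.lda$scaling[, 1],
      ols   = coef(fit.ols)[-1],
      ratio = fit.lda$scaling[, 1] / coef(fit.ols)[-1]) # the ratio column is constant (up to rounding)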
{ "source": [ "https://stats.stackexchange.com/questions/31459", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7329/" ] }
31,530
I know what a conditional probability distribution is. But what exactly is a full conditional probability?
I think the context is a MCMC algorithm, because this terminology is rather standard in such a context. The goal is to simulate a multivariate distribution, that is, the distribution of a random vector $(\theta_1, \ldots,\theta_p)$. The full conditional distribution of $\theta_1$ is then nothing but the conditional distribution of $\theta_1$ given all other variables.
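To make the MCMC connection concrete, here is a minimal Gibbs-sampler sketch for a bivariate normal with zero means, unit variances and correlation rho, whose full conditionals are theta1 | theta2 ~ N(rho*theta2, 1 - rho^2) and symmetrically for theta2 (rho = 0.8 is an arbitrary choice):

rho <- 0.8
n.iter <- 5000
draws <- matrix(NA, n.iter, 2)
theta <- c(0, 0)
for (i in 1:n.iter) {
  theta[1] <- rnorm(1, mean = rho * theta[2], sd = sqrt(1 - rho^2))  # draw from the full conditional of theta1
  theta[2] <- rnorm(1, mean = rho * theta[1], sd = sqrt(1 - rho^2))  # draw from the full conditional of theta2
  draws[i, ] <- theta
}
cor(draws)[1, 2]   # close to rho, as expected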
{ "source": [ "https://stats.stackexchange.com/questions/31530", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12329/" ] }
31,547
I would love to perform a TukeyHSD post-hoc test after my two-way Anova with R, obtaining a table containing the sorted pairs grouped by significant difference. (Sorry about the wording, I'm still new with statistics.) I would like to have something like this: So, grouped with stars or letters. Any idea? I tested the function HSD.test() from the agricolae package, but it seems it doesn't handle two-way tables.
The agricolae::HSD.test function does exactly that, but you will need to let it know that you are interested in an interaction term . Here is an example with a Stata dataset: library(foreign) yield <- read.dta("http://www.stata-press.com/data/r12/yield.dta") tx <- with(yield, interaction(fertilizer, irrigation)) amod <- aov(yield ~ tx, data=yield) library(agricolae) HSD.test(amod, "tx", group=TRUE) This gives the results shown below: Groups, Treatments and means a 2.1 51.17547 ab 4.1 50.7529 abc 3.1 47.36229 bcd 1.1 45.81229 cd 5.1 44.55313 de 4.0 41.81757 ef 2.0 38.79482 ef 1.0 36.91257 f 3.0 36.34383 f 5.0 35.69507 They match what we would obtain with the following commands: . webuse yield . regress yield fertilizer##irrigation . pwcompare fertilizer#irrigation, group mcompare(tukey) ------------------------------------------------------- | Tukey | Margin Std. Err. Groups ----------------------+-------------------------------- fertilizer#irrigation | 1 0 | 36.91257 1.116571 AB 1 1 | 45.81229 1.116571 CDE 2 0 | 38.79482 1.116571 AB 2 1 | 51.17547 1.116571 F 3 0 | 36.34383 1.116571 A 3 1 | 47.36229 1.116571 DEF 4 0 | 41.81757 1.116571 BC 4 1 | 50.7529 1.116571 EF 5 0 | 35.69507 1.116571 A 5 1 | 44.55313 1.116571 CD ------------------------------------------------------- Note: Margins sharing a letter in the group label are not significantly different at the 5% level. The multcomp package also offers symbolic visualization ('compact letter displays', see Algorithms for Compact Letter Displays: Comparison and Evaluation for more details) of significant pairwise comparisons, although it does not present them in a tabular format. However, it has a plotting method which allows to conveniently display results using boxplots. Presentation order can be altered as well (option decreasing= ), and it has lot more options for multiple comparisons. There is also the multcompView package which extends those functionalities. Here is the same example analyzed with glht : library(multcomp) tuk <- glht(amod, linfct = mcp(tx = "Tukey")) summary(tuk) # standard display tuk.cld <- cld(tuk) # letter-based display opar <- par(mai=c(1,1,1.5,1)) plot(tuk.cld) par(opar) Treatment sharing the same letter are not significantly different, at the chosen level (default, 5%). Incidentally, there is a new project, currently hosted on R-Forge, which looks promising: factorplot . It includes line and letter-based displays, as well as a matrix overview (via a level plot) of all pairwise comparisons. A working paper can be found here: factorplot: Improving Presentation of Simple Contrasts in GLMs
{ "source": [ "https://stats.stackexchange.com/questions/31547", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12339/" ] }
31,565
I want to create heatmaps based upon cosine dissimilarity. I'm using R and have explored several packages, but cannot find a function to generate a standard cosine dissimilarity matrix. The built-in dist() function doesn't support cosine distances, and within the package arules there is a dissimilarity() function, but it only works on binary data. Can anybody recommend a library? Or demonstrate how to calculate cosine dissimilarity within R?
As @Max indicated in the comments (+1) it would be simpler to "write your own" than to spend time looking for it somewhere else. As we know, the cosine similarity between two vectors $A,B$ of length $n$ is $$ C = \frac{ \sum \limits_{i=1}^{n}A_{i} B_{i} }{ \sqrt{\sum \limits_{i=1}^{n} A_{i}^2} \cdot \sqrt{\sum \limits_{i=1}^{n} B_{i}^2} } $$ which is straightforward to generate in R . Let X be the matrix where the rows are the values we want to compute the similarity between. Then we can compute the similarity matrix with the following R code: cos.sim <- function(ix) { A = X[ix[1],] B = X[ix[2],] return( sum(A*B)/sqrt(sum(A^2)*sum(B^2)) ) } n <- nrow(X) cmb <- expand.grid(i=1:n, j=1:n) C <- matrix(apply(cmb,1,cos.sim),n,n) Then the matrix C is the cosine similarity matrix and you can pass it to whatever heatmap function you like (the only one I'm familiar with is image() ).
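If the double loop over expand.grid() is slow for larger data, the same matrix C can be obtained in one shot with matrix algebra; a sketch (taking 1 minus the similarity as the dissimilarity is one common choice for the heatmap):

norms <- sqrt(rowSums(X^2))
C     <- (X %*% t(X)) / (norms %o% norms)   # all pairwise cosine similarities
D     <- 1 - C                              # cosine dissimilarity
heatmap(D, symm = TRUE)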
{ "source": [ "https://stats.stackexchange.com/questions/31565", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9811/" ] }
31,569
I recently measured how the meaning of a new word is acquired over repeated exposures (practice: day 1 to day 10) by measuring ERPs (EEGs) when the word was viewed in different contexts. I also controlled properties of the context, for instance, its usefulness for the discovery of new word meaning (high vs. low). I am particularly interested in the effect of practice (days). Because individual ERP recordings are noisy, ERP component values are obtained by averaging over the trials of a particular condition. With the lmer function, I applied the following formula: lmer(ERPindex ~ practice*context + (1|participants), data=base) and lmer(ERPindex ~ practice*context + (1+practice|participants), data=base) I've also seen the equivalent of the following random effects in the literature: lmer(ERPindex ~ practice*context + (practice|participants) + (practice|participants:context), data=base) What is accomplished by using a random factor of the form participants:context ? Is there a good source that would allow someone with just cursory knowledge of matrix algebra understand precisely what random factors do in linear mixed models and how they should be selected?
I'm going to describe what model each of your calls to lmer() fits and how they are different and then answer your final question about selecting random effects. Each of your three models contain fixed effects for practice , context and the interaction between the two. The random effects differ between the models. lmer(ERPindex ~ practice*context + (1|participants), data=base) contains a random intercept shared by individuals that have the same value for participants . That is, each participant 's regression line is shifted up/down by a random amount with mean $0$. lmer(ERPindex ~ practice*context + (1+practice|participants), data=base) This model, in addition to a random intercept, also contains a random slope in practice . This means that the rate at which individuals learn from practice is different from person to person. If an individual has a positive random effect, then they increase more quickly with practice than the average, while a negative random effect indicates they learn less quickly with practice than the average, or possibly get worse with practice, depending on the variance of the random effect (this is assuming the fixed effect of practice is positive). lmer(ERPindex ~ practice*context + (practice|participants) + (practice|participants:context), data=base) This model fits a random slope and intercept in practice (you have to do (practice-1|...) to suppress the intercept), just as the previous model did, but now you've also added a random slope and intercept in the factor participants:context , which is a new factor whose levels are every combination of the levels present in participants and context and the corresponding random effects are shared by observations that have the same value of both participants and context . To fit this model you will need to have multiple observations that have the same values for both participants and context or else the model is not estimable. In many situations, the groups created by this interaction variable are very sparse and result in very noisy/difficult to fit random effects models, so you want to be careful when using an interaction factor as a grouping variable. Basically (read: without getting too complicated) random effects should be used when you think that the grouping variables define "pockets" of inhomogeneity in the data set or that individuals which share the level of the grouping factor should be correlated with each other (while individuals that do not should not be correlated) - the random effects accomplish this. If you think observations which share levels of both participants and context are more similar than the sum of the two parts then including the "interaction" random effect may be appropriate. Edit: As @Henrik mentions in the comments, the models you fit, e.g.: lmer(ERPindex ~ practice*context + (1+practice|participants), data=base) make it so that the random slope and random intercept are correlated with each other, and that correlation is estimated by the model. To constrain the model so that the random slope and random intercept are uncorrelated (and therefore independent, since they are normally distributed), you'd instead fit the model: lmer(ERPindex ~ practice*context + (1|participants) + (practice-1|participants), data=base) The choice between these two should be based on whether you think, for example, participant s with a higher baseline than average (i.e. a positive random intercept) are also likely to have a higher rate of change than average (i.e. positive random slope). 
If so, you'd allow the two to be correlated whereas if not, you'd constrain them to be independent. (Again, this example assumes the fixed effect slope is positive).
{ "source": [ "https://stats.stackexchange.com/questions/31569", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12373/" ] }
31,598
If I understand correctly, book ratings on a 1-5 scale are Likert scores. That is, a 3 for me may not necessarily be a 3 for someone else. It's an ordinal scale IMO. One shouldn't really average ordinal scales but can definitely take the mode, median and percentiles. So is it 'okay' to bend the rules since a large part of the population understands means better than the above statistics? Although the research community strongly rebukes taking averages of Likert scale based data, is it fine to do this with the masses (practically speaking)? Is taking the average in this case even misleading to start with? It seems unlikely that a company like Amazon would fumble on basic statistics, but if not then what am I missing here? Can we claim that the ordinal scale is a convenient approximation to an interval scale, to justify taking the mean? On what grounds?
Benefits of using the mean to summarise central tendency of a 5 point rating As @gung mentioned I think there are often very good reasons for taking the mean of a five-point item as an index of central tendency. I have already outlined these reasons here . To paraphrase: the mean is easy to calculate The mean is intuitive and well understood The mean is a single number Other indices often yield similar rank ordering of objects Why the mean is good for Amazon Think about the goals of Amazon in reporting the mean. They might be aiming to provide an intuitive and understandable rating for an item ensure user acceptance of the rating system ensure that people understand what the rating means so they can use it appropriately to inform purchasing decisions Amazon provides some sort of rounded mean, frequency counts for each rating option, and the sample size (i.e., number of ratings). This information presumably is enough for most people to appreciate both the general sentiment regarding the item and the confidence in such a rating (i.e., a 4.5 with 20 ratings is more likely to be accurate than a 4.5 with 2 ratings; an item with 10 5-star ratings, and one 1-star rating with no comments might still be a good item). You could even see the mean as a democratic option. Many elections are decided based on which candidate gets the highest mean on a two-point scale. Similarly, if you take the argument that each person who submits a review gets a vote, then you can see the mean as a form that weights each person's vote equally. Are differences in scale use really a problem? There are a wide range of rating biases known in the psychological literature (for a review, see Saal et al 1980), such as central tendency bias, leniency bias, strictness bias. Also, some raters will be more arbitrary and some will be more reliable. Some may even systematically lie giving fake positive or fake negative reviews. This will create various forms of error when trying to calculate the true mean rating for an item. However, if you were to take a random sample of the population, such biases would cancel out, and with a sufficient sample size of raters, you would still get the true mean. Of course, you don't get a random sample on Amazon, and there is the risk that the particular set of raters you get for an item is systematically biased to be more lenient or strict and so on. That said, I think users of Amazon would appreciate that user submitted ratings come from an imperfect sample. I also think that it's quite likely that with a reasonable sample size that in many cases, the majority of response bias differences would start to disappear. Possible advances beyond the mean In terms of improving the accuracy of the rating, I wouldn't challenge the general concept of the mean, but rather I think there are other ways of estimating the true population mean rating for an item (i.e., the mean rating that would be obtained were a large representative sample asked to rate the item). Weight raters based on their trustworthiness Use a Bayesian rating system that estimates the mean rating as a weighted sum of the average rating for all items and the mean from the specific item, and increase the weighting for the specific item as the number of ratings increases Adjust the information of a rater based on any general rating tendency across items (e.g., a 5 from someone who typically gives 3s would be worth more than someone who typically gives 4s). 
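As an illustration of the Bayesian rating idea in the second point above, here is a sketch of a simple shrinkage formula (the weighting scheme and the numbers are assumptions, in the spirit of the well-known IMDb-style weighted rating): the item mean is pulled towards the overall mean, with less pull as the number of ratings grows.

bayes_rating <- function(v, item_mean, m = 10, overall_mean = 3.9) {
  # v: number of ratings for this item; m: weight of the prior, in "pseudo-ratings"
  (v / (v + m)) * item_mean + (m / (v + m)) * overall_mean
}
bayes_rating(v = 2,   item_mean = 4.5)   # few ratings: pulled towards 3.9
bayes_rating(v = 500, item_mean = 4.5)   # many ratings: stays close to 4.5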
Thus, if accuracy in rating was the primary goal of Amazon, I think it should endeavour to increase the number of ratings per item and adopt some of the above strategies. Such approaches might be particularly relevant when creating "best-of" rankings. However, for the humble rating on the page, it may well be that the sample mean better meets the goals of simplicity and transparency. References Saal, F.E., Downey, R.G. & Lahey, M.A. (1980). Rating the ratings: Assessing the psychometric quality of rating data. Psychological Bulletin, 88, 413.
{ "source": [ "https://stats.stackexchange.com/questions/31598", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4426/" ] }
31,726
I saw this plot in the supplement of a recent paper and I'd love to be able to reproduce it using R. It's a scatterplot, but to fix the overplotting there are contour lines that are "heat" colored blue to red corresponding to the overplotting density. How would I do this?
Here is my take, using base functions only for drawing stuff: library(MASS) # in case it is not already loaded set.seed(101) n <- 1000 X <- mvrnorm(n, mu=c(.5,2.5), Sigma=matrix(c(1,.6,.6,1), ncol=2)) ## some pretty colors library(RColorBrewer) k <- 11 my.cols <- rev(brewer.pal(k, "RdYlBu")) ## compute 2D kernel density, see MASS book, pp. 130-131 z <- kde2d(X[,1], X[,2], n=50) plot(X, xlab="X label", ylab="Y label", pch=19, cex=.4) contour(z, drawlabels=FALSE, nlevels=k, col=my.cols, add=TRUE) abline(h=mean(X[,2]), v=mean(X[,1]), lwd=2) legend("topleft", paste("R=", round(cor(X)[1,2],2)), bty="n") For more fancy rendering, you might want to have a look at ggplot2 and stat_density2d() . Another function I like is smoothScatter() : smoothScatter(X, nrpoints=.3*n, colramp=colorRampPalette(my.cols), pch=19, cex=.8)
{ "source": [ "https://stats.stackexchange.com/questions/31726", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/36/" ] }
31,746
I want to know what the differences between the forward-backward algorithm and the Viterbi algorithm for inference in hidden Markov models (HMM) are.
A bit of background first maybe it clears things up a bit. When talking about HMMs (Hidden Markov Models) there are generally 3 problems to be considered: Evaluation problem Evaluation problem answers the question: what is the probability that a particular sequence of symbols is produced by a particular model? For evaluation we use two algorithms: the forward algorithm or the backwards algorithm (DO NOT confuse them with the forward-backward algorithm). Decoding problem Decoding problem answers the question: Given a sequence of symbols (your observations) and a model, what is the most likely sequence of states that produced the sequence. For decoding we use the Viterbi algorithm . Training problem Training problem answers the question: Given a model structure and a set of sequences, find the model that best fits the data. For this problem we can use the following 3 algorithms: MLE (maximum likelihood estimation) Viterbi training(DO NOT confuse with Viterbi decoding) Baum Welch = forward-backward algorithm To sum it up, you use the Viterbi algorithm for the decoding problem and Baum Welch/Forward-backward when you train your model on a set of sequences. Baum Welch works in the following way. For each sequence in the training set of sequences. Calculate forward probabilities with the forward algorithm Calculate backward probabilities with the backward algorithm Calculate the contributions of the current sequence to the transitions of the model, calculate the contributions of the current sequence to the emission probabilities of the model. Calculate the new model parameters (start probabilities, transition probabilities, emission probabilities) Calculate the new log likelihood of the model Stop when the change in log likelihood is smaller than a given threshold or when a maximum number of iterations is passed. If you need a full description of the equations for Viterbi decoding and the training algorithm let me know and I can point you in the right direction.
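To make the decoding step concrete, here is a minimal Viterbi sketch in R for a toy two-state HMM (all the probabilities below are made up for illustration); it returns the most likely state sequence for an observed symbol sequence:

viterbi <- function(obs, start, trans, emis) {
  n <- length(obs); k <- nrow(trans)
  logv <- matrix(-Inf, k, n)                    # best log-probability of a path ending in each state
  back <- matrix(0L, k, n)                      # back-pointers
  logv[, 1] <- log(start) + log(emis[, obs[1]])
  for (t in 2:n) {
    for (j in 1:k) {
      cand <- logv[, t - 1] + log(trans[, j])   # coming from each possible previous state
      back[j, t] <- which.max(cand)
      logv[j, t] <- max(cand) + log(emis[j, obs[t]])
    }
  }
  path <- integer(n)
  path[n] <- which.max(logv[, n])
  for (t in (n - 1):1) path[t] <- back[path[t + 1], t + 1]   # backtrack
  path
}

trans <- matrix(c(0.7, 0.3,
                  0.4, 0.6), 2, byrow = TRUE)   # state transition probabilities
emis  <- matrix(c(0.9, 0.1,
                  0.2, 0.8), 2, byrow = TRUE)   # emission probabilities for 2 symbols
viterbi(obs = c(1, 1, 2, 2, 2), start = c(0.5, 0.5), trans = trans, emis = emis)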
{ "source": [ "https://stats.stackexchange.com/questions/31746", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12329/" ] }
31,849
I have been learning about Bayesian statistics, and I have often read in articles "we adopt a Bayesian approach" or something similar. I also noticed, less often: "we adopt a fully Bayesian approach" (my emphasis). Is there any difference between these approaches in any practical or theoretical sense? FWIW, I am using the package MCMCglmm in R, in case that is relevant.
The terminology "fully Bayesian approach" is nothing but a way to indicate that one moves from a "partially" Bayesian approach to a "true" Bayesian approach, depending on the context. Or to distinguish a "pseudo-Bayesian" approach from a "strictly" Bayesian approach. For example one author writes: "Unlike the majority of other authors interested who typically used an Empirical Bayes approach for RVM, we adopt a fully Bayesian approach" beacuse the empirical Bayes approach is a "pseudo-Bayesian" approach. There are others pseudo-Bayesian approaches, such as the Bayesian-frequentist predictive distribution (a distribution whose quantiles match the bounds of the frequentist prediction intervals). In this page several R packages for Bayesian inference are presented. The MCMCglmm is presented as a "fully Bayesian approach" because the user has to choose the prior distribution, contrary to the other packages. Another possible meaning of "fully Bayesian" is when one performs a Bayesian inference derived from the Bayesian decision theory framework, that is, derived from a loss function, because Bayesian decision theory is a solid foundational framework for Bayesian inference.
{ "source": [ "https://stats.stackexchange.com/questions/31849", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11405/" ] }
31,867
Can someone give a good rundown of the differences between the Bayesian and the frequentist approach to probability? From what I understand: The frequentists view is that the data is a repeatable random sample (random variable) with a specific frequency/probability (which is defined as the relative frequency of an event as the number of trials approaches infinity). The underlying parameters and probabilities remain constant during this repeatable process and that the variation is due to variability in $X_n$ and not the probability distribution (which is fixed for a certain event/process). The bayesian view is that the data is fixed while the frequency/probability for a certain event can change meaning that the parameters of the distribution changes. In effect, the data that you get changes the prior distribution of a parameter which gets updated for each set of data. To me it seems that the frequentist approach is more practical/logical since it seems reasonable that events have a specific probability and that the variation is in our sampling. Furthermore, most data analysis from studies is usually done using the frequentist approach (i.e. confidence intervals, hypothesis testing with p-values etc) since it is easily understandable. I was just wondering whether anyone could give me a quick summary of their interpretation of bayesian vs frequentist approach including bayesian statistical equivalents of the frequentist p-value and confidence interval. In addition, specific examples of where 1 method would be preferable to the other is appreciated.
In the frequentist approach, it is asserted that the only sense in which probabilities have meaning is as the limiting value of the number of successes in a sequence of trials, i.e. as $$p = \lim_{n\to\infty} \frac{k}{n}$$ where $k$ is the number of successes and $n$ is the number of trials. In particular, it doesn't make any sense to associate a probability distribution with a parameter . For example, consider samples $X_1, \dots, X_n$ from the Bernoulli distribution with parameter $p$ (i.e. they have value 1 with probability $p$ and 0 with probability $1-p$). We can define the sample success rate to be $$\hat{p} = \frac{X_1+\cdots +X_n}{n}$$ and talk about the distribution of $\hat{p}$ conditional on the value of $p$, but it doesn't make sense to invert the question and start talking about the probability distribution of $p$ conditional on the observed value of $\hat{p}$. In particular, this means that when we compute a confidence interval, we interpret the ends of the confidence interval as random variables, and we talk about "the probability that the interval includes the true parameter", rather than "the probability that the parameter is inside the confidence interval". In the Bayesian approach, we interpret probability distributions as quantifying our uncertainty about the world. In particular, this means that we can now meaningfully talk about probability distributions of parameters, since even though the parameter is fixed, our knowledge of its true value may be limited. In the example above, we can invert the probability distribution $f(\hat{p}\mid p)$ using Bayes' law, to give $$\overbrace{f(p\mid \hat{p})}^\text{posterior} = \underbrace{\frac{f(\hat{p}\mid p)}{f(\hat{p})}}_\text{likelihood ratio} \overbrace{f(p)}^\text{prior}$$ The snag is that we have to introduce the prior distribution into our analysis - this reflects our belief about the value of $p$ before seeing the actual values of the $X_i$. The role of the prior is often criticised in the frequentist approach, as it is argued that it introduces subjectivity into the otherwise austere and object world of probability. In the Bayesian approach one no longer talks of confidence intervals, but instead of credible intervals, which have a more natural interpretation - given a 95% credible interval, we can assign a 95% probability that the parameter is inside the interval.
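To make the contrast tangible, here is a small R illustration (added for concreteness, not part of the original argument) of the two kinds of interval for a Bernoulli success probability, assuming a flat Beta(1, 1) prior on the Bayesian side:

set.seed(1)
n <- 50
x <- rbinom(n, size = 1, prob = 0.3)
k <- sum(x)

## Frequentist: point estimate and an approximate 95% confidence interval
p.hat <- k / n
se    <- sqrt(p.hat * (1 - p.hat) / n)
c(p.hat - 1.96 * se, p.hat + 1.96 * se)

## Bayesian: with a Beta(1, 1) prior the posterior is Beta(k + 1, n - k + 1),
## so a 95% credible interval is just its 2.5% and 97.5% quantiles
qbeta(c(0.025, 0.975), k + 1, n - k + 1)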
{ "source": [ "https://stats.stackexchange.com/questions/31867", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12347/" ] }
32,103
Which distributions have closed-form solutions for the maximum likelihood estimates of the parameters from a sample of independent observations?
Without any appreciable loss of generality we may assume that the probability density (or mass) $f(x_i)$ for any observation $x_i$ (out of $n$ observations) is strictly positive, enabling us to write it as an exponential $$ f(x_i) = \exp{(g(x_i,\theta))}$$ for a parameter vector $\theta = (\theta_j)$. Equating the gradient of the log likelihood function to zero (which finds stationary points of the likelihood, among which will be all interior global maxima if one exists) gives a set of equations of the form $$\sum_i\frac{d g(x_i, \theta)}{d\theta_j} = 0,$$ one for each $j$. For any one of these to have a ready solution, we would like to be able to separate the $x_i$ terms from the $\theta$ terms . (Everything flows from this key idea, motivated by the Principle of Mathematical Laziness : do as little work as possible; think ahead before computing; tackle easy versions of hard problems first.) The most general way to do this is for the equations to take the form $$\sum_i \left(\eta_j(\theta) \tau_j(x_i) - \alpha_j(\theta)\right) = \eta_j(\theta)\sum_i \tau_j(x_i) - n \alpha_j(\theta) $$ for known functions $\eta_j$, $\tau_j$, and $\alpha_j$, for then the solution is obtained by solving the simultaneous equations $$\frac{n\alpha_j(\theta)}{\eta_j(\theta)}= \sum_i \tau_j(x_i)$$ for $\theta$. In general these will be difficult to solve, but provided the set of values of $\left(\frac{n\alpha_j(\theta)}{\eta_j(\theta)}\right)$ give full information about $\theta$, we could simply use this vector in place of $\theta$ itself (thereby somewhat generalizing the idea of a "closed form" solution, but in a highly productive way). In such a case, integrating with respect to $\theta_j$ yields $$g(x, \theta) = \tau_j(x)\int^\theta \eta_j(\theta) d\theta_j - \int^\theta \alpha_j(\theta) d\theta_j + B(x, \theta_j')$$ (where $\theta_j'$ stands for all the components of $\theta$ except $\theta_j$). Because the left hand side is functionally independent of $\theta_j$, we must have that $\tau_j(x)=T(x)$ for some fixed function $T$; that $B$ must not depend on $\theta$ at all; and the $\eta_j$ are derivatives of some function $H(\theta)$ and the $\alpha_j$ are derivatives of some other function $A(\theta)$, both of them functionally independent of the data. Whence $$g(x, \theta) = H(\theta)T(x) - A(\theta) + B(x).$$ Densities that can be written in this form make up the well-known Koopman-Pitman-Darmois , or exponential , family. It comprises important parametric families, both continuous and discrete, including Gamma, Normal, Chi-squared, Poisson, Multinomial, and many others .
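As a concrete illustration of this template (a worked example added here, not part of the original derivation), take the Poisson family: $f(x) = e^{-\lambda}\lambda^x/x!$, so $g(x,\lambda) = x\log\lambda - \lambda - \log x!$, i.e. $H(\lambda)=\log\lambda$, $T(x)=x$, $A(\lambda)=\lambda$ and $B(x)=-\log x!$. Setting the gradient of the log likelihood to zero gives $$\sum_i\left(\frac{x_i}{\lambda}-1\right)=0 \quad\Longrightarrow\quad \hat\lambda = \frac{1}{n}\sum_i x_i = \bar{x},$$ a closed-form solution, exactly as the exponential-family argument promises.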
{ "source": [ "https://stats.stackexchange.com/questions/32103", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11328/" ] }
32,310
Popular topic models like LDA usually cluster words that tend to co-occur together into the same topic (cluster). What is the main difference between such topic models, and other simple co-occurrence based clustering approaches like PMI ? (PMI stands for Pointwise Mutual Information, and it is used to identify the words that co-occur with a given word.)
Recently, a huge body of literature discussing how to extract information from written text has grown. Hence I will just describe four milestones/popular models and their advantages/disadvantages and thus highlight (some of) the main differences (or at least what I think are the main/most important differences). You mention the "easiest" approach, which would be to cluster the documents by matching them against a predefined query of terms (as in PMI). These lexical matching methods however might be inaccurate due to polysemy (multiple meanings) and synonymy (multiple words that have similar meanings) of single terms. As a remedy, latent semantic indexing ( LSI ) tries to overcome this by mapping terms and documents into a latent semantic space via a singular value decomposition. The LSI results are more robust indicators of meaning than individual terms would be. However, one drawback of LSI is that it lacks a solid probabilistic foundation. This was partly solved by the invention of probabilistic LSI ( pLSI ). In pLSI models each word in a document is drawn from a mixture model specified via multinomial random variables (which also allows higher-order co-occurrences as @sviatoslav hong mentioned). This was an important step forward in probabilistic text modeling, but was incomplete in the sense that it offers no probabilistic structure at the level of documents. Latent Dirichlet Allocation ( LDA ) alleviates this and was the first fully probabilistic model for text clustering. Blei et al. (2003) show that pLSI is a maximum a-posteriori estimated LDA model under a uniform Dirichlet prior. Note that the models mentioned above (LSI, pLSI, LDA) have in common that they are based on the “bag-of-words” assumption - i.e. that within a document, words are exchangeable, i.e. the order of words in a document can be neglected. This assumption of exchangeability offers a further justification for LDA over the other approaches: Assuming that not only words within documents are exchangeable, but also documents, i.e., the order of documents within a corpus can be neglected, De Finetti's theorem states that any set of exchangeable random variables has a representation as a mixture distribution. Thus if exchangeability for documents and words within documents is assumed, a mixture model for both is needed. Exactly this is what LDA generally achieves, but PMI or LSI do not (and even pLSI does not do so as elegantly as LDA).
{ "source": [ "https://stats.stackexchange.com/questions/32310", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7556/" ] }
32,318
I'm struggling to understand the difference between the standard error and the standard deviation. How are they different and why do you need to measure the standard error?
Let $\theta$ be your parameter of interest for which you want to make inference. To do this, you have available to you a sample of observations $\mathbf{x} = \{x_1, \ldots, x_n \}$ along with some technique to obtain an estimate of $\theta$, $\hat{\theta}(\mathbf{x})$. In this notation, I have made explicit that $\hat{\theta}(\mathbf{x})$ depends on $\mathbf{x}$. Indeed, if you had had another sample, $\tilde{\mathbf{x}}$, you would have ended up with another estimate, $\hat{\theta}(\tilde{\mathbf{x}})$. This makes $\hat{\theta}(\mathbf{x})$ a realisation of a random variable which I denote $\hat{\theta}$. This random variable is called an estimator. The standard error of $\hat{\theta}(\mathbf{x})$ (=estimate) is the standard deviation of $\hat{\theta}$ (=random variable). It contains the information on how confident you are about your estimate. If it is large, it means that you could have obtained a totally different estimate if you had drawn another sample. The standard error is used to construct confidence intervals.
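A quick simulation makes the distinction concrete (an illustration added here, not part of the original answer): the "standard error of the mean" is just the standard deviation of the estimator $\bar{x}$ across repeated samples, and from a single sample it is approximated by $s/\sqrt{n}$.

set.seed(42)
n <- 25
means <- replicate(10000, mean(rnorm(n, mean = 5, sd = 2)))
sd(means)            # SD of the estimator across repeated samples = the standard error
2 / sqrt(n)          # theoretical standard error of the mean
x <- rnorm(n, mean = 5, sd = 2)
sd(x) / sqrt(n)      # what you would actually estimate from one observed sample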
{ "source": [ "https://stats.stackexchange.com/questions/32318", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12627/" ] }
32,405
I have generated a vector which has a Poisson distribution, as follows: x = rpois(1000,10) If I make a histogram using hist(x) , the distribution looks like the familiar bell-shaped normal distribution. However, a Kolmogorov-Smirnov test using ks.test(x, 'pnorm',10,3) says the distribution is significantly different from a normal distribution, due to a very small p-value. So my question is: how does the Poisson distribution differ from a normal distribution, when the histogram looks so similar to a normal distribution?
A Poisson distribution is discrete while a normal distribution is continuous, and a Poisson random variable is always >= 0. Thus, a Kolmogorov-Smirnov test will often be able to tell the difference. When the mean of a Poisson distribution is large, it becomes similar to a normal distribution. However, rpois(1000, 10) doesn't even look that similar to a normal distribution (it stops short at 0 and the right tail is too long). Why are you comparing it to ks.test(..., 'pnorm', 10, 3) rather than ks.test(..., 'pnorm', 10, sqrt(10)) ? The difference between 3 and $\sqrt{10}$ is small but will itself make a difference when comparing distributions. Even if the distribution truly were normal you would end up with an anti-conservative p-value distribution:

set.seed(1)
hist(replicate(10000, ks.test(rnorm(1000, 10, sqrt(10)), 'pnorm', 10, 3)$p.value))
{ "source": [ "https://stats.stackexchange.com/questions/32405", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12492/" ] }
32,419
I am wondering what the differences are between mixed and unmixed GLMs. For instance, in SPSS the drop down menu allows users to fit either: analyze-> generalized linear models-> generalized linear models & analyze-> mixed models-> generalized linear Do they deal with missing values differently? My dependent variable is binary and I have several categorical and continuous independent variables.
The advent of generalized linear models has allowed us to build regression-type models of data when the distribution of the response variable is non-normal--for example, when your DV is binary. (If you would like to know a little more about GLiMs, I wrote a fairly extensive answer here , which may be useful although the context differs.) However, a GLiM, e.g. a logistic regression model, assumes that your data are independent . For instance, imagine a study that looks at whether a child has developed asthma. Each child contributes one data point to the study--they either have asthma or they don't. Sometimes data are not independent, though. Consider another study that looks at whether a child has a cold at various points during the school year. In this case, each child contributes many data points. At one time a child might have a cold, later they might not, and still later they might have another cold. These data are not independent because they came from the same child. In order to appropriately analyze these data, we need to somehow take this non-independence into account. There are two ways: One way is to use the generalized estimating equations (which you don't mention, so we'll skip). The other way is to use a generalized linear mixed model . GLiMMs can account for the non-independence by adding random effects (as @MichaelChernick notes). Thus, the answer is that your second option is for non-normal repeated measures (or otherwise non-independent) data. (I should mention, in keeping with @Macro's comment, that generalized linear mixed models include linear models as a special case and thus can be used with normally distributed data. However, in typical usage the term connotes non-normal data.) Update: (The OP has asked about GEE as well, so I will write a little about how all three relate to each other.) Here's a basic overview:

- a typical GLiM (I'll use logistic regression as the prototypical case) lets you model an independent binary response as a function of covariates
- a GLMM lets you model a non-independent (or clustered) binary response conditional on the attributes of each individual cluster as a function of covariates
- the GEE lets you model the population mean response of non-independent binary data as a function of covariates

Since you have multiple trials per participant, your data are not independent; as you correctly note, "[t]rials within one participant are likely to be more similar than as compared to the whole group". Therefore, you should use either a GLMM or the GEE. The issue, then, is how to choose whether GLMM or GEE would be more appropriate for your situation. The answer to this question depends on the subject of your research--specifically, the target of the inferences you hope to make. As I stated above, with a GLMM, the betas are telling you about the effect of a one unit change in your covariates on a particular participant, given their individual characteristics. On the other hand with the GEE, the betas are telling you about the effect of a one unit change in your covariates on the average of the responses of the entire population in question. This is a difficult distinction to grasp, especially because there is no such distinction with linear models (in which case the two are the same thing). One way to try to wrap your head around this is to imagine averaging over your population on both sides of the equals sign in your model.
For example, this might be a model: $$ \text{logit}(p_i)=\beta_{0}+\beta_{1}X_1+b_i $$ where: $$ \text{logit}(p)=\ln\left(\frac{p}{1-p}\right),~~~~~\&~~~~~~b\sim\mathcal N(0,\sigma^2_b) $$ There is a parameter that governs the response distribution ($p$, the probability, with binary data) on the left side for each participant. On the right hand side, there are coefficients for the effect of the covariate[s] and the baseline level when the covariate[s] equals 0. The first thing to notice is that the actual intercept for any specific individual is not $\beta_0$, but rather $(\beta_0+b_i)$. But so what? If we are assuming that the $b_i$'s (the random effect) are normally distributed with a mean of 0 (as we've done), certainly we can average over these without difficulty (it would just be $\beta_0$). Moreover, in this case we don't have a corresponding random effect for the slopes and thus their average is just $\beta_1$. So the average of the intercepts plus the average of the slopes must be equal to the logit transformation of the average of the $p_i$'s on the left, mustn't it? Unfortunately, no . The problem is that in between those two is the $\text{logit}$, which is a non-linear transformation. (If the transformation were linear, they would be equivalent, which is why this problem doesn't occur for linear models.) The following plot makes this clear: Imagine that this plot represents the underlying data generating process for the probability that a small class of students will be able to pass a test on some subject with a given number of hours of instruction on that topic. Each of the grey curves represents the probability of passing the test with varying amounts of instruction for one of the students. The bold curve is the average over the whole class. In this case, the effect of an additional hour of teaching conditional on the student's attributes is $\beta_1$--the same for each student (that is, there is not a random slope). Note, though, that the students baseline ability differs amongst them--probably due to differences in things like IQ (that is, there is a random intercept). The average probability for the class as a whole, however, follows a different profile than the students. The strikingly counter-intuitive result is this: an additional hour of instruction can have a sizable effect on the probability of each student passing the test, but have relatively little effect on the probable total proportion of students who pass . This is because some students might already have had a large chance of passing while others might still have little chance. The question of whether you should use a GLMM or the GEE is the question of which of these functions you want to estimate. If you wanted to know about the probability of a given student passing (if, say, you were the student, or the student's parent), you want to use a GLMM. On the other hand, if you want to know about the effect on the population (if, for example, you were the teacher , or the principal), you would want to use the GEE. For another, more mathematically detailed, discussion of this material, see this answer by @Macro.
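If it helps to see the two model types fitted side by side, here is a hedged R sketch added for illustration (it is not part of the original answer; it assumes the lme4 and geepack packages are installed, and the data set and variable names, cold, hours, child, are invented):

library(lme4)
library(geepack)

set.seed(1)
n.child <- 50; n.obs <- 10
d <- data.frame(child = factor(rep(1:n.child, each = n.obs)),
                hours = runif(n.child * n.obs, 0, 5))
b <- rnorm(n.child, 0, 1)                 # child-specific random intercepts
d$cold <- rbinom(nrow(d), 1,
                 plogis(-1 + 0.5 * d$hours + b[d$child]))

## GLMM: subject-specific ("conditional") effects
m.glmm <- glmer(cold ~ hours + (1 | child), data = d, family = binomial)

## GEE: population-averaged ("marginal") effects
m.gee <- geeglm(cold ~ hours, id = child, data = d,
                family = binomial, corstr = "exchangeable")

fixef(m.glmm)   # typically larger in absolute value than...
coef(m.gee)     # ...the marginal coefficients, for the reason described above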
{ "source": [ "https://stats.stackexchange.com/questions/32419", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9203/" ] }
32,464
I would have expected the correlation coefficient to be the same as a regression slope (beta), however having just compared the two, they are different. How do they differ - what different information do they give?
Assuming you're talking about a simple regression model $$Y_i = \alpha + \beta X_i + \varepsilon_i$$ estimated by least squares, we know from wikipedia that $$ \hat {\beta} = {\rm cor}(Y_i, X_i) \cdot \frac{ {\rm SD}(Y_i) }{ {\rm SD}(X_i) } $$ Therefore the two only coincide when ${\rm SD}(Y_i) = {\rm SD}(X_i)$. That is, they only coincide when the two variables are on the same scale, in some sense. The most common way of achieving this is through standardization, as indicated by @gung. The two, in some sense give you the same information - they each tell you the strength of the linear relationship between $X_i$ and $Y_i$. But, they do each give you distinct information (except, of course, when they are exactly the same): The correlation gives you a bounded measurement that can be interpreted independently of the scale of the two variables. The closer the estimated correlation is to $\pm 1$, the closer the two are to a perfect linear relationship . The regression slope, in isolation, does not tell you that piece of information. The regression slope gives a useful quantity interpreted as the estimated change in the expected value of $Y_i$ for a given value of $X_i$. Specifically, $\hat \beta$ tells you the change in the expected value of $Y_i$ corresponding to a 1-unit increase in $X_i$. This information can not be deduced from the correlation coefficient alone.
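A quick numerical check in R (added here for illustration, not part of the original answer) makes the relationship explicit:

set.seed(123)
x <- rnorm(100)
y <- 2 + 0.5 * x + rnorm(100)

coef(lm(y ~ x))[2]                 # the regression slope
cor(x, y) * sd(y) / sd(x)          # identical, by the formula above
cor(x, y)                          # the correlation itself
coef(lm(scale(y) ~ scale(x)))[2]   # slope after standardizing both variables = correlation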
{ "source": [ "https://stats.stackexchange.com/questions/32464", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12492/" ] }
32,600
In linear regression analysis, we analyze outliers, investigate multicollinearity, test heteroscedasticty. The question is: Is there any order to apply these? I mean, do we have to analyze outliers very firstly, and then examine multicollinearity? Or reverse? Is there any rule of thumb about this?
The process is iterative, but there is a natural order: You have to worry first about conditions that cause outright numerical errors . Multicollinearity is one of those, because it can produce unstable systems of equations potentially resulting in outright incorrect answers (to 16 decimal places...) Any problem here usually means you cannot proceed until it is fixed. Multicollinearity is usually diagnosed using Variance Inflation Factors and similar examination of the "hat matrix." Additional checks at this stage can include assessing the influence of any missing values in the dataset and verifying the identifiability of important parameters. (Missing combinations of discrete independent variables can sometimes cause trouble here.) Next you need to be concerned whether the output reflects most of the data or is sensitive to a small subset. In the latter case, everything else you subsequently do may be misleading, so it is to be avoided. Procedures include examination of outliers and of leverage . (A high-leverage datum might not be an outlier but even so it may unduly influence all the results.) If a robust alternative to the regression procedure exists, this is a good time to apply it: check that it is producing similar results and use it to detect outlying values. Finally, having achieved a situation that is numerically stable (so you can trust the computations) and which reflects the full dataset, you turn to an examination of the statistical assumptions needed for correct interpretation of the output . Primarily these concerns focus--in rough order of importance--on distributions of the residuals (including heteroscedasticity, but also extending to symmetry, distributional shape, possible correlation with predicted values or other variables, and autocorrelation), goodness of fit (including the possible need for interaction terms), whether to re-express the dependent variable, and whether to re-express the independent variables. At any stage, if something needs to be corrected then it's wise to return to the beginning. Repeat as many times as necessary.
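As a rough translation of this sequence into R (added for illustration only; it assumes the car package is available for vif(), and mtcars merely stands in for your data):

library(car)
fit <- lm(mpg ~ wt + hp + disp, data = mtcars)

## 1. numerical stability / multicollinearity
vif(fit)                      # variance inflation factors
kappa(model.matrix(fit))      # condition number of the design matrix

## 2. influence: outliers and leverage
hatvalues(fit)                # leverage
cooks.distance(fit)           # influence
which(abs(rstudent(fit)) > 2) # candidate outliers

## 3. statistical assumptions
plot(fit)                     # residual, QQ, scale-location and leverage plots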
{ "source": [ "https://stats.stackexchange.com/questions/32600", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12697/" ] }
32,730
The following code evaluates the similarity between two time series:

set.seed(10)
RandData <- rnorm(8760*2)
America <- rep(c('NewYork','Miami'), each=8760)
Date = seq(from=as.POSIXct("1991-01-01 00:00"), to=as.POSIXct("1991-12-31 23:00"), length=8760)
DatNew <- data.frame(Loc = America,
                     Doy = as.numeric(format(Date, format = "%j")),
                     Tod = as.numeric(format(Date, format = "%H")),
                     Temp = RandData,
                     DecTime = rep(seq(1, length(RandData)/2) / (length(RandData)/2), 2))
require(mgcv)
mod1 <- gam(Temp ~ Loc + s(Doy) + s(Doy, by = Loc) + s(Tod) + s(Tod, by = Loc),
            data = DatNew, method = "ML")

Here, gam is used to evaluate how the temperatures at New York and Miami vary from the mean temperature (of both locations) at different times of the day. The problem that I now have is that I need to include an interaction term which shows how the temperature of each location varies throughout the day for different days of the year. I eventually hope to display all of this information on one graph (for each location). So, for Miami I hope to have one graph that shows how the temperature varies from the mean during different times of the day and different times of the year (a 3d plot?).
The "a" in "gam" stands for "additive" which means no interactions, so if you fit interactions you are really not fitting a gam model any more. That said, there are ways to get some interaction like terms within the additive terms in a gam, you are already using one of those by using the by argument to s . You could try extending this to having the argument by be a matrix with a function (sin, cos) of doy or tod. You could also just fit smoothing splines in a regular linear model that allows interactions (this does not give the backfitting that gam does, but could still be useful). You might also look at projection pursuit regression as another fitting tool. Loess or more parametric models (with sin and/or cos) might also be useful. Part of decision on what tool(s) to use is what question you are trying to answer. Are you just trying to find a model to predict future dates and times? are you trying to test to see if particular predictors are significant in the model? are you trying to understand the shape of the relationship between a predictor and the outcome? Something else?
{ "source": [ "https://stats.stackexchange.com/questions/32730", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12141/" ] }
32,925
I want to perform K-means clustering on objects I have, but the objects aren't described as points in space, i.e. by objects x features dataset. However, I am able to compute the distance between any two objects (it is based on a similarity function). So, I dispose of the distance matrix objects x objects . I've implemented K-means before, but that was with points dataset input; and with distance matrix input it's not clear to me how to update the clusters to be the cluster "centers" without a point-representation. How would this normally be done? Are there versions of K-means or methods close to it, for that?
Obviously, k-means needs to be able to compute means . However, there is a well-known variation of it known as k-medoids or PAM (Partitioning Around Medoids), where the medoid is the existing object most central to the cluster. K-medoids only needs the pairwise distances.
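A minimal sketch of this in R (added for illustration; it assumes the recommended cluster package is installed, and dist(iris[, 1:4]) merely stands in for your precomputed objects-by-objects dissimilarities):

library(cluster)
d <- dist(iris[, 1:4])            # stand-in for your own distance matrix
fit <- pam(d, k = 3, diss = TRUE) # diss = TRUE: treat d as dissimilarities
fit$medoids                       # the medoid observations
table(fit$clustering)             # cluster sizes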
{ "source": [ "https://stats.stackexchange.com/questions/32925", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12828/" ] }
33,013
I would like to test the difference in response of two variables to one predictor. Here is a minimal reproducible example.

library(nlme)
## gls is used in the application; lm would suffice for this example
m.set <- gls(Sepal.Length ~ Petal.Width, data = iris, subset = Species == "setosa")
m.vir <- gls(Sepal.Length ~ Petal.Width, data = iris, subset = Species == "virginica")
m.ver <- gls(Sepal.Length ~ Petal.Width, data = iris, subset = Species == "versicolor")

I can see that the slope coefficients are different:

m.set$coefficients
(Intercept) Petal.Width
  4.7771775   0.9301727
m.vir$coefficients
(Intercept) Petal.Width
  5.2694172   0.6508306
m.ver$coefficients
(Intercept) Petal.Width
   4.044640    1.426365

I have three questions:

1. How can I test the difference between slopes?
2. How can I test the difference between residual variances?
3. What is a simple, effective way to present these comparisons?

A related question, Method to compare variable coefficient in two regression models , suggests re-running the model with a dummy variable to differentiate the slopes; are there options that would allow the use of independent data sets?
To answer these questions with R code, use the following:

1. How can I test the difference between slopes?

Answer: Examine the ANOVA p-value from the interaction of Petal.Width by Species, then compare the slopes using lsmeans::lstrends, as follows.

library(lsmeans)
m.interaction <- lm(Sepal.Length ~ Petal.Width*Species, data = iris)
anova(m.interaction)

 Analysis of Variance Table

 Response: Sepal.Length
                      Df Sum Sq Mean Sq  F value Pr(>F)
 Petal.Width           1 68.353  68.353 298.0784 <2e-16 ***
 Species               2  0.035   0.017   0.0754 0.9274
 Petal.Width:Species   2  0.759   0.380   1.6552 0.1947
 Residuals           144 33.021   0.229
 ---
 Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

# Obtain slopes
m.interaction$coefficients
m.lst <- lstrends(m.interaction, "Species", var="Petal.Width")

 Species    Petal.Width.trend        SE  df   lower.CL upper.CL
 setosa             0.9301727 0.6491360 144 -0.3528933 2.213239
 versicolor         1.4263647 0.3459350 144  0.7425981 2.110131
 virginica          0.6508306 0.2490791 144  0.1585071 1.143154

# Compare slopes
pairs(m.lst)

 contrast                 estimate        SE  df t.ratio p.value
 setosa - versicolor    -0.4961919 0.7355601 144  -0.675  0.7786
 setosa - virginica      0.2793421 0.6952826 144   0.402  0.9149
 versicolor - virginica  0.7755341 0.4262762 144   1.819  0.1669

2. How can I test the difference between residual variances?

If I understand the question, you can compare Pearson correlations with a Fisher transform, also called a "Fisher's r-to-z", as follows.

library(psych)
library(data.table)

iris <- as.data.table(iris)

# Calculate Pearson's R
m.correlations <- iris[, cor(Sepal.Length, Petal.Width), by = Species]
m.correlations

# Compare R values with Fisher's R to Z
paired.r(m.correlations[Species=="setosa", V1], m.correlations[Species=="versicolor", V1],
         n = iris[Species %in% c("setosa", "versicolor"), .N])
paired.r(m.correlations[Species=="setosa", V1], m.correlations[Species=="virginica", V1],
         n = iris[Species %in% c("setosa", "virginica"), .N])
paired.r(m.correlations[Species=="virginica", V1], m.correlations[Species=="versicolor", V1],
         n = iris[Species %in% c("virginica", "versicolor"), .N])

3. What is a simple, effective way to present these comparisons?

"We used linear regression to compare the relationship of Sepal Length to Petal Width for each Species. We did not find a significant interaction in the relationships of Sepal Length to Petal Width for I. Setosa (B = 0.9), I. Versicolor (B = 1.4), nor I. Virginica (B = 0.6); F (2, 144) = 1.6, p = 0.19. A Fisher's r-to-z comparison indicated that the Pearson correlation for I. Setosa (r = 0.28) was significantly lower (p = 0.02) than I. Versicolor (r = 0.55). Similarly, the correlation for I. Virginica (r = 0.28) was significantly weaker (p = 0.02) than the one observed for I. Versicolor."

Finally, always visualize your results!

plotly_interaction <- function(data, x, y, category,
                               colors = col2rgb(viridis(nlevels(as.factor(data[[category]])))), ...) {
  # Create Plotly scatter plot of x vs y, with separate lines for each level of the categorical variable.
  # In other words, create an interaction scatter plot.
  # The "colors" must be supplied in a RGB triplet, as produced by col2rgb().
  require(plotly)
  require(viridis)
  require(broom)

  groups <- unique(data[[category]])

  p <- plot_ly(...)

  for (i in 1:length(groups)) {
    groupData = data[which(data[[category]]==groups[[i]]), ]
    p <- add_lines(p, data = groupData,
                   y = fitted(lm(data = groupData, groupData[[y]] ~ groupData[[x]])),
                   x = groupData[[x]],
                   line = list(color = paste('rgb', '(', paste(colors[, i], collapse = ", "), ')')),
                   name = groups[[i]],
                   showlegend = FALSE)
    p <- add_ribbons(p, data = augment(lm(data = groupData, groupData[[y]] ~ groupData[[x]])),
                     y = groupData[[y]],
                     x = groupData[[x]],
                     ymin = ~.fitted - 1.96 * .se.fit,
                     ymax = ~.fitted + 1.96 * .se.fit,
                     line = list(color = paste('rgba','(', paste(colors[, i], collapse = ", "), ', 0.05)')),
                     fillcolor = paste('rgba', '(', paste(colors[, i], collapse = ", "), ', 0.1)'),
                     showlegend = FALSE)
    p <- add_markers(p, data = groupData,
                     x = groupData[[x]],
                     y = groupData[[y]],
                     symbol = groupData[[category]],
                     marker = list(color=paste('rgb','(', paste(colors[, i], collapse = ", "))))
  }
  p <- layout(p, xaxis = list(title = x), yaxis = list(title = y))

  return(p)
}

plotly_interaction(iris, "Sepal.Length", "Petal.Width", "Species")
{ "source": [ "https://stats.stackexchange.com/questions/33013", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2750/" ] }
33,078
I have a set of data that is not ordered in any particular way but when plotted clearly has two distinct trends. A simple linear regression would not really be adequate here because of the clear distinction between the two series. Is there a simple way to get the two independent linear trendlines? For the record I'm using Python and I am reasonably comfortable with programming and data analysis, including machine learning, but am willing to jump over to R if absolutely necessary.
To solve your problem, a good approach is to define a probabilistic model that matches the assumptions about your dataset. In your case, you probably want a mixture of linear regression models. You can create a "mixture of regressors" model similar to a gaussian mixture model by associating different data points with different mixture components. I have included some code to get you started. The code implements an EM algorithm for a mixture of two regressors (it should be relatively easy to extend to larger mixtures). The code seems to be fairly robust for random datasets. However, unlike linear regression, mixture models have non-convex objectives, so for a real dataset, you may need to run a few trials with different random starting points.

import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as lin

#generate some random data
N=100
x=np.random.rand(N,2)
x[:,1]=1
w=np.random.rand(2,2)
y=np.zeros(N)
n=int(np.random.rand()*N)
y[:n]=np.dot(x[:n,:],w[0,:])+np.random.normal(size=n)*.01
y[n:]=np.dot(x[n:,:],w[1,:])+np.random.normal(size=N-n)*.01
rx=np.ones( (100,2) )
r=np.arange(0,1,.01)
rx[:,0]=r

#plot the random dataset
plt.plot(x[:,0],y,'.b')
plt.plot(r,np.dot(rx,w[0,:]),':k',linewidth=2)
plt.plot(r,np.dot(rx,w[1,:]),':k',linewidth=2)

# regularization parameter for the regression weights
lam=.01

def em():
    # mixture weights
    rpi=np.zeros( (2) )+.5

    # expected mixture weights for each data point
    pi=np.zeros( (len(x),2) )+.5

    #the regression weights
    w1=np.random.rand(2)
    w2=np.random.rand(2)

    #precision term for the probability of the data under the regression function
    eta=100

    for _ in xrange(100):
        if 0:
            plt.plot(r,np.dot(rx,w1),'-r',alpha=.5)
            plt.plot(r,np.dot(rx,w2),'-g',alpha=.5)

        #compute lhood for each data point
        err1=y-np.dot(x,w1)
        err2=y-np.dot(x,w2)
        prbs=np.zeros( (len(y),2) )
        prbs[:,0]=-.5*eta*err1**2
        prbs[:,1]=-.5*eta*err2**2

        #compute expected mixture weights
        pi=np.tile(rpi,(len(x),1))*np.exp(prbs)
        pi/=np.tile(np.sum(pi,1),(2,1)).T

        #max with respect to the mixture probabilities
        rpi=np.sum(pi,0)
        rpi/=np.sum(rpi)

        #max with respect to the regression weights
        pi1x=np.tile(pi[:,0],(2,1)).T*x
        xp1=np.dot(pi1x.T,x)+np.eye(2)*lam/eta
        yp1=np.dot(pi1x.T,y)
        w1=lin.solve(xp1,yp1)

        pi2x=np.tile(pi[:,1],(2,1)).T*x
        xp2=np.dot(pi2x.T,x)+np.eye(2)*lam/eta
        yp2=np.dot(pi[:,1]*y,x)
        w2=lin.solve(xp2,yp2)

        #max wrt the precision term
        eta=np.sum(pi)/np.sum(-prbs/eta*pi)

        #objective function - unstable as the pi's become concentrated on a single component
        obj=np.sum(prbs*pi)-np.sum(pi[pi>1e-50]*np.log(pi[pi>1e-50]))+np.sum(pi*np.log(np.tile(rpi,(len(x),1))))+np.log(eta)*np.sum(pi)
        print obj,eta,rpi,w1,w2
        try:
            if np.isnan(obj): break
            if np.abs(obj-oldobj)<1e-2: break
        except:
            pass

        oldobj=obj

    return w1,w2

#run the em algorithm and plot the solution
rw1,rw2=em()
plt.plot(r,np.dot(rx,rw1),'-r')
plt.plot(r,np.dot(rx,rw2),'-g')
plt.show()
{ "source": [ "https://stats.stackexchange.com/questions/33078", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7292/" ] }
33,142
In Collaborative filtering, we have values that are not filled in. Suppose a user did not watch a movie then we have to put an 'na' in there. If I am going to take an SVD of this matrix, then I have to put some number in there - say 0. Now if I factorize the matrix, I have a method to find similar users (by finding out which users are closer together in the reduced dimensional space). But the predicted preference itself - for a user to an item will be zero. (because thats what we entered on the unknown columns). So I am stuck with the problem of collaborative filtering vs SVD. They seem to be almost the same, but not quite. What is the difference between them and what happens when I apply an SVD to a collaborative filtering problem? I did, and the results seem acceptable in terms of finding nearby users, which is great, but how?
$\DeclareMathOperator*{\argmin}{arg\,min}$ Ok, when you say SVD, presumably you're talking about truncated SVD (where you only keep the $k$ biggest singular values). There are two different ways to look at the truncated SVD of a matrix. One is the standard definition: First you do the SVD: $\underset{n\times m}{X} = \underset{n\times n}{U} \overset{n\times m}{\Sigma} \underset{m\times m}{V^T}$, where $U$ and $V$ are rotation matrices, and $\Sigma$ has the singular values along the diagonal. Then you pick the top $k$ singular values, zero out the rest, and hack off irrelevant rows and columns to make a $k$-rank approximation to the original: $X \approx \tilde{X} = \underset{n\times k}{\tilde{U}} \overset{k\times k}{\tilde{\Sigma}} \underset{k\times m}{\tilde{V}^T}$ This is all fine and dandy (and easy to implement in R or matlab), but it doesn't make sense when talking about matrices with missing values. However, there's an interesting property of the $k$-truncated SVD--It's the best $k$-rank approximation to the original! That is: $ \tilde{X} = \argmin_{B : rank(B)=k} \displaystyle\sum\limits_{i,j} (X_{ij} - B_{ij})^2$ This property seems easy to generalize to the missing value case. Basically you're looking for a $k$-rank matrix that minimizes the element-wise mean squared error across the known entries of the original matrix. That is, when you're training the system, you ignore all of the missing values. (For tips on how you might actually go about finding a $k$-rank approximation, here are some places to look). Then, once you've come up with a suitably "close" $k$-rank approximation to the original, you use it to fill in the missing values. That is, if $X_{ij}$ was missing, then you fill in $\tilde{X}_{ij}$. Tada! You are now done.
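To make the last step concrete, here is a rough base-R sketch of the "fill in, truncate, repeat" idea (added purely for illustration; the svd_impute function and the toy ratings matrix are invented, and this is only one simple way to approximate the minimisation described above, not a production recommender):

## iterative rank-k SVD imputation of a matrix with missing entries
svd_impute <- function(X, k = 2, n.iter = 50) {
  miss <- is.na(X)
  Xhat <- X
  Xhat[miss] <- mean(X, na.rm = TRUE)      # crude initial fill
  for (i in 1:n.iter) {
    s <- svd(Xhat)
    approx_k <- s$u[, 1:k, drop = FALSE] %*%
                diag(s$d[1:k], k, k) %*%
                t(s$v[, 1:k, drop = FALSE])
    Xhat[miss] <- approx_k[miss]           # only overwrite the unknown cells
  }
  Xhat
}

## toy usage
set.seed(1)
R <- matrix(sample(1:5, 100, replace = TRUE), 10, 10)
R[sample(100, 30)] <- NA                   # 30 "unrated" cells
filled <- svd_impute(R, k = 2)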
{ "source": [ "https://stats.stackexchange.com/questions/33142", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12640/" ] }
33,165
I am observing strange patterns in residuals for my data: [EDIT] Here are the partial regression plots for the two variables: [EDIT2] Added the PP Plot The distribution seems to be doing fine (see below) but I have no clue where this straight line might be coming from. Any ideas? [UPDATE 31.07] It turns out you were absolutely right, I had cases where the retweet count was indeed 0 and these ~ 15 cases resulted in those strange residual patterns. The residuals look much better now: I've also included the partial regressions with a loess line.
It seems that on some its subrange your dependent variable is constant or is exactly linearly dependent on the predictor(s). Let's have two correlated variables, X and Y (Y is dependent). The scatterplot is on the left. Let's return, as example, on the first ("constant") possibility. Recode all Y values from lowest to -0.5 to a single value -1 (see picture in the centre). Regress Y on X and plot residuals scatter, that is, rotate the central picture so that the prediction line is horizontal now. Does it resemble your picture?
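If you want to reproduce the demonstration yourself, here is a small R sketch (added for illustration, with made-up numbers): for the recoded cases the residual is $-1 - (\hat\beta_0 + \hat\beta_1 x)$, a linear function of the fitted value, which is exactly the straight line showing up in the residual plot.

set.seed(1)
x <- rnorm(300)
y <- 0.8 * x + rnorm(300, sd = 0.6)
y[y <= -0.5] <- -1              # recode the lower range of Y to a single constant
fit <- lm(y ~ x)
plot(fitted(fit), resid(fit))   # the recoded cases line up on a straight diagonal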
{ "source": [ "https://stats.stackexchange.com/questions/33165", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12902/" ] }
33,185
I've dealt with the Naive Bayes classifier before, and I've been reading about Multinomial Naive Bayes lately. Also, Posterior Probability = (Prior * Likelihood)/(Evidence). The only prime difference (while programming these classifiers) I found between Naive Bayes & Multinomial Naive Bayes is that Multinomial Naive Bayes calculates the likelihood as the count of a word/token (random variable), while Naive Bayes calculates the likelihood to be the following: Correct me if I'm wrong!
The general term Naive Bayes refers to the strong independence assumptions in the model, rather than the particular distribution of each feature. A Naive Bayes model assumes that each of the features it uses is conditionally independent of the others given some class. More formally, if I want to calculate the probability of observing features $f_1$ through $f_n$, given some class c, under the Naive Bayes assumption the following holds: $$ p(f_1,..., f_n|c) = \prod_{i=1}^n p(f_i|c)$$ This means that when I want to use a Naive Bayes model to classify a new example, the posterior probability is much simpler to work with: $$ p(c|f_1,...,f_n) \propto p(c)p(f_1|c)...p(f_n|c) $$ Of course these assumptions of independence are rarely true, which may explain why some have referred to the model as the "Idiot Bayes" model, but in practice Naive Bayes models have performed surprisingly well, even on complex tasks where it is clear that the strong independence assumptions are false. Up to this point we have said nothing about the distribution of each feature. In other words, we have left $p(f_i|c)$ undefined. The term Multinomial Naive Bayes simply lets us know that each $p(f_i|c)$ is a multinomial distribution, rather than some other distribution. This works well for data which can easily be turned into counts, such as word counts in text. The distribution you had been using with your Naive Bayes classifier is a Gaussian p.d.f., so I guess you could call it a Gaussian Naive Bayes classifier. In summary, Naive Bayes classifier is a general term which refers to conditional independence of each of the features in the model, while Multinomial Naive Bayes classifier is a specific instance of a Naive Bayes classifier which uses a multinomial distribution for each of the features. References: Stuart J. Russell and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach (2 ed.). Pearson Education. See p. 499 for reference to "idiot Bayes" as well as the general definition of the Naive Bayes model and its independence assumptions.
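For the multinomial case specifically, here is a tiny from-scratch scorer in R (added purely as an illustration; the word-count matrix, class labels and new document are all made up):

X <- rbind(c(3, 0, 1, 0),    # counts of 4 vocabulary words in 6 documents
           c(2, 1, 0, 0),
           c(4, 0, 2, 1),
           c(0, 3, 0, 2),
           c(1, 4, 0, 3),
           c(0, 2, 1, 4))
y <- factor(c("spam", "spam", "spam", "ham", "ham", "ham"))

classes <- levels(y)
prior   <- table(y) / length(y)

## per-class word probabilities with Laplace (add-one) smoothing
theta <- t(sapply(classes, function(cl) {
  cnt <- colSums(X[y == cl, , drop = FALSE]) + 1
  cnt / sum(cnt)
}))

## multinomial log posterior (up to a constant) for a new word-count vector
new.doc <- c(1, 0, 2, 0)
scores <- log(as.numeric(prior)) + as.vector(log(theta) %*% new.doc)
names(scores) <- classes
scores   # the class with the largest score is the prediction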
{ "source": [ "https://stats.stackexchange.com/questions/33185", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7918/" ] }
33,197
There are several threads on this site for book recommendations on introductory statistics and machine learning but I am looking for a text on advanced statistics including, in order of priority: maximum likelihood, generalized linear models, principal component analysis, non-linear models. I've tried Statistical Models by A.C. Davison but frankly I had to put it down after 2 chapters. The text is encyclopedic in its coverage and mathematical treatment but, as a practitioner, I like to approach subjects by understanding the intuition first, and then delve into the mathematical background. These are some texts that I consider outstanding for their pedagogical value. I would like to find an equivalent for the more advanced subjects I mentioned.

Statistics, D. Freedman, R. Pisani, R. Purves.
Forecasting: Methods and Applications, R. Hyndman et al.
Multiple Regression and Beyond, T. Z. Keith.
Applying Contemporary Statistical Techniques, Rand R. Wilcox.
An Introduction to Statistical Learning with Applications in R (PDF released version), Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani.
The Elements of Statistical Learning: Data Mining, Inference, and Prediction (PDF released version), Hastie, Tibshirani and Friedman (2009).
Maximum likelihood: In all Likelihood (Pawitan). Moderately clear book and the most clear (IMO) with respect to books dealing with likelihood only. Also has R code. GLMs: Categorical Data Analysis (Agresti, 2002) is one of the best written stat books I have read (also has R code available). This text will also help with maximum likelihood. The third edition is coming out in a few months. Second on my list for the above two is Collett's Modelling Binary Data . PCA: I find Rencher's writing clear in Methods of multivariate analysis . This is a graduate level text, but it is introductory.
{ "source": [ "https://stats.stackexchange.com/questions/33197", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7795/" ] }
33,300
I know this is a rather hot topic where no one really can give a simple answer for. Nevertheless I am wondering if the following approach couldn’t be useful. The bootstrap method is only useful if your sample follows more or less (read exactly) the same distribution as the original population. In order to be certain this is the case you need to make your sample size large enough. But what is large enough? If my premise is correct you have the same problem when using the central limit theorem to determine the population mean. Only when your sample size is large enough you can be certain that the population of your sample means is normally distributed (around the population mean). In other words, your samples need to represent your population (distribution) well enough. But again, what is large enough? In my case (administrative processes: time needed to finish a demand vs amount of demands) I have a population with a multi-modal distribution (all the demands that are finished in 2011) of which I am 99% certain that it is even less normally distributed than the population (all the demands that are finished between present day and a day in the past, ideally this timespan is as small as possible) I want to research. My 2011 population exists out of enough units to make $x$ samples of a sample size $n$ . I choose a value of $x$ , suppose $10$ ( $x=10$ ). Now I use trial and error to determine a good sample size. I take an $n=50$ , and see if my sample mean population is normally distributed by using Kolmogorov-Smirnov. If so I repeat the same steps but with a sample size of $40$ , if not repeat with a sample size of $60$ (etc.). After a while I conclude that $n=45$ is the absolute minimum sample size to get a more or less good representation of my 2011 population. Since I know my population of interest (all the demands that are finished between present day and a day in the past) has less variance I can safely use a sample size of $n=45$ to bootstrap. (Indirectly, the $n=45$ determines the size of my timespan: time needed to finish $45$ demands.) This is, in short, my idea. But since I am not a statistician but an engineer whose statistics lessons took place in the days of yonder I cannot exclude the possibility I just generated a lot of rubbish :-). What do you guys think? If my premise makes sense, do I need to chose an $x$ larger than $10$ , or smaller? Depending on your answers (do I need to feel embarrassed or not? :-) I'll be posting some more discussion ideas. response on first answer Thanks for replying, Your answer was very useful to me especially the book links. But I am afraid that in my attempt to give information I completely clouded my question. I know that the bootstrap samples take over the distribution of the population sample. I follow you completely but... Your original population sample needs to be large enough to be moderately certain that the distribution of your population sample corresponds (equals) with the 'real' distribution of the population. This is merely an idea on how to determine how large your original sample size needs to be in order to be reasonably certain that the sample distribution corresponds with the population distribution. Suppose you have a bimodal population distribution and one top is a lot larger than the other one. If your sample size is 5 the chance is large that all 5 units have a value very close to the large top (chance to ad randomly draw a unit there is the largest). In this case your sample distribution will look unimodal. 
With a sample size of a hundred the chance that your sample distribution is also bimodal is a lot larger!! The trouble with bootstrapping is that you only have one sample (and you build further on that sample). If the sample distribution really does not correspond with the population distribution you are in trouble. This is just an idea to make the chance of having 'a bad sample distribution' as low as possible without having to make your sample size infinitely large.
I took interest in this question because I saw the word bootstrap and I have written books on the bootstrap. Also people often ask "How many bootstrap samples do I need to get a good Monte Carlo approximation to the bootstrap result?" My suggested answer to that question is to keep increasing the size until you get convergence. No one number fits all problems. But that is apparently not that question you are asking. You seem to be asking what the original sample size needs to be for the bootstrap to work. First of all I do not agree with your premise. The basic nonparametric bootstrap assumes that the sample is taken at random from a population. So for any sample size $n$ the distribution for samples chosen at random is the sampling distribution assumed in bootstrapping. The bootstrap principle says that choosing a random sample of size $n$ from the population can be mimicked by choosing a bootstrap sample of size $n$ from the original sample. Whether or not the bootstrap principle holds does not depend on any individual sample "looking representative of the population". What it does depend on is what you are estimating and some properties of the population distribution (e.g., this works for sampling means with population distributions that have finite variances, but not when they have infinite variances). It will not work for estimating extremes regardless of the population distribution. The theory of the bootstrap involves showing consistency of the estimate. So it can be shown in theory that it works for large samples. But it can also work in small samples. I have seen it work for classification error rate estimation particularly well in small sample sizes such as 20 for bivariate data. Now if the sample size is very small---say 4---the bootstrap may not work just because the set of possible bootstrap samples is not rich enough. In my book or Peter Hall's book this issue of too small a sample size is discussed. But this number of distinct bootstrap samples gets large very quickly. So this is not an issue even for sample sizes as small as 8. You can take a look at these references: My book: Bootstrap Methods: A Guide for Practitioners and Researchers Hall's book: The Bootstrap and Edgeworth Expansion
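Regarding the richness of the set of possible bootstrap samples mentioned above: the number of distinct bootstrap samples of size n drawn from n observations is $\binom{2n-1}{n}$ (a standard combinatorial fact), and a one-line R check (added here for illustration) shows how quickly it grows:

n <- c(4, 8, 12, 20)
data.frame(n = n, distinct.bootstrap.samples = choose(2 * n - 1, n))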
{ "source": [ "https://stats.stackexchange.com/questions/33300", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12943/" ] }
33,327
Reading mgcv::gam 's help page: confidence/credible intervals are readily available for any quantity predicted using a fitted model However I can't figure a way to actually get one. I thought predict.gam would have a type=confidence and a level parameter but it doesn't. Can you help me on how to create it ?
In the usual way: p <- predict(mod, newdata, type = "link", se.fit = TRUE) Then note that p contains a component $se.fit with standard errors of the predictions for observations in newdata . You can then form CI by multipliying the SE by a value appropriate to your desired level. E.g. an approximate 95% confidence interval is formed as: upr <- p$fit + (2 * p$se.fit) lwr <- p$fit - (2 * p$se.fit) You substitute in an appropriate value from a $t$ or Gaussian distribution for the interval you need. Note that I use type = "link" as you don't say if you have a GAM or just an AM. In the GAM, you need to form the confidence interval on the scale of the linear predictor and then transform that to the scale of the response by applying the inverse of the link function: upr <- mod$family$linkinv(upr) lwr <- mod$family$linkinv(lwr) Now note that these are very approximate intervals. In addition these intervals are point-wise on the predicted values and they don't take into account the fact that the smoothness selection was performed. A simultaneous confidence interval can be computed via simulation from the posterior distribution of the parameters. I have an example of that on my blog . If you want a confidence interval that is not conditional upon the smoothing parameters (i.e. one that takes into account that we do not know, but instead estimate, the values of the smoothness parameters), then add unconditional = TRUE to the predict() call. Also, if you don't want to do this yourself, note that newer versions of mgcv have a plot.gam() function that returns an object with all data used to create the plots of the smooths and their confidence intervals. You can just save the output from plot.gam() in an obj obj <- plot(model, ....) and then inspect obj , which is a list with one component per smooth. Add seWithMean = TRUE to the plot() call to get confidence intervals that are not conditional upon smoothness parameter.
{ "source": [ "https://stats.stackexchange.com/questions/33327", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/339/" ] }
33,413
Given a continuous dependent variable y and independent variables including an ordinal variable X 1 , how do I fit a linear model in R ? Are there papers about this type of model?
@Scortchi's got you covered with this answer on Coding for an ordered covariate. I've repeated the recommendation in my answer to Effect of two demographic IVs on survey answers (Likert scale). Specifically, the recommendation is to use Gertheiss' (2013) ordPens package, and to refer to Gertheiss and Tutz (2009a) for theoretical background and a simulation study. The specific function you probably want is ordSmooth*. This essentially smooths dummy coefficients across levels of ordinal variables to be less different from those for adjacent ranks, which reduces overfitting and improves predictions. It generally performs as well as or (sometimes much) better than maximum likelihood (i.e., ordinary least squares in this case) estimation of a regression model for continuous (or in their terms, metric) data when the data are actually ordinal. It appears compatible with all sorts of generalized linear models, and allows you to enter nominal and continuous predictors as separate matrices.

Several additional references from Gertheiss, Tutz, and colleagues are available and listed below. Some of these may contain alternatives – even Gertheiss and Tutz (2009a) discuss ridge reroughing as another alternative. I haven't dug through it all yet myself, but suffice it to say this solves @Erik's problem of too little literature on ordinal predictors!

References

- Gertheiss, J. (2013, June 14). ordPens: Selection and/or smoothing of ordinal predictors, version 0.2-1. Retrieved from http://cran.r-project.org/web/packages/ordPens/ordPens.pdf
- Gertheiss, J., Hogger, S., Oberhauser, C., & Tutz, G. (2011). Selection of ordinally scaled independent variables with applications to international classification of functioning core sets. Journal of the Royal Statistical Society: Series C (Applied Statistics), 60(3), 377–395.
- Gertheiss, J., & Tutz, G. (2009a). Penalized regression with ordinal predictors. International Statistical Review, 77(3), 345–365. Retrieved from http://epub.ub.uni-muenchen.de/2100/1/tr015.pdf
- Gertheiss, J., & Tutz, G. (2009b). Supervised feature selection in mass spectrometry-based proteomic profiling by blockwise boosting. Bioinformatics, 25(8), 1076–1077.
- Gertheiss, J., & Tutz, G. (2009c). Variable scaling and nearest neighbor methods. Journal of Chemometrics, 23(3), 149–151.
- Gertheiss, J., & Tutz, G. (2010). Sparse modeling of categorial explanatory variables. The Annals of Applied Statistics, 4, 2150–2180.
- Hofner, B., Hothorn, T., Kneib, T., & Schmid, M. (2011). A framework for unbiased model selection based on boosting. Journal of Computational and Graphical Statistics, 20(4), 956–971. Retrieved from http://epub.ub.uni-muenchen.de/11243/1/TR072.pdf
- Oelker, M.-R., Gertheiss, J., & Tutz, G. (2012). Regularization and model selection with categorial predictors and effect modifiers in generalized linear models. Department of Statistics: Technical Reports, No. 122. Retrieved from http://epub.ub.uni-muenchen.de/13082/1/tr.gvcm.cat.pdf
- Oelker, M.-R., & Tutz, G. (2013). A general family of penalties for combining differing types of penalties in generalized structured models. Department of Statistics: Technical Reports, No. 139. Retrieved from http://epub.ub.uni-muenchen.de/17664/1/tr.pirls.pdf
- Petry, S., Flexeder, C., & Tutz, G. (2011). Pairwise fused lasso. Department of Statistics: Technical Reports, No. 102. Retrieved from http://epub.ub.uni-muenchen.de/12164/1/petry_etal_TR102_2011.pdf
- Rufibach, K. (2010). An active set algorithm to estimate parameters in generalized linear models with ordered predictors. Computational Statistics & Data Analysis, 54(6), 1442–1456. Retrieved from http://arxiv.org/pdf/0902.0240.pdf?origin=publication_detail
- Tutz, G. (2011, October). Regularization methods for categorical data. Munich: Ludwig-Maximilians-Universität. Retrieved from http://m.wu.ac.at/it/departments/statmath/resseminar/talktutz.pdf
- Tutz, G., & Gertheiss, J. (2013). Rating scales as predictors—The old question of scale level and some answers. Psychometrika, 1–20.
{ "source": [ "https://stats.stackexchange.com/questions/33413", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1443/" ] }
33,659
Explain briefly what is meant by interpolation. How is it related to the concept of regression? Interpolation is the art of reading between the lines of a table, and in elementary mathematics the term usually denotes the process of computing intermediate values of a function from a set of given or tabular values of that function. I can't give an answer to the second question.
The main difference between interpolation and regression is the definition of the problem they solve. Given $n$ data points, when you interpolate, you look for a function of some predefined form that takes exactly the specified values at those points. That means that, given pairs $(x_i, y_i)$, you look for $F$ of some predefined form that satisfies $F(x_i) = y_i$. Most commonly $F$ is chosen to be a polynomial or a spline (low-degree polynomials on the intervals between given points). When you do regression, you look for a function that minimizes some cost, usually the sum of squared errors. You don't require the function to take the exact values at the given points; you just want a good approximation. In general, the fitted function $F$ might not satisfy $F(x_i) = y_i$ for any data point, but the cost function, i.e. $\sum_{i=1}^n (F(x_i) - y_i)^2$, will be the smallest possible among all functions of the given form. A good example of why you might want to only approximate instead of interpolate is prices on the stock market. You can take the prices in the $k$ most recent units of time and try to interpolate them to get a prediction of the price in the next unit of time. This is rather a bad idea, because there is no reason to think that the relations between the prices can be exactly expressed by a polynomial. But linear regression might do the trick, since the prices might have some "slope" and a linear function might be a good approximation, at least locally (hint: it's not that easy, but regression is definitely a better idea than interpolation in this case).
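For instance, here is a minimal R sketch (made-up data, arbitrary names) contrasting the two: the interpolant reproduces the observed values exactly, while the regression line only approximates them.

set.seed(1)
x <- 1:10
y <- 2 * x + rnorm(10)              # noisy linear trend

f <- splinefun(x, y)                # interpolation: passes through every point
max(abs(f(x) - y))                  # essentially zero at the data points

fit <- lm(y ~ x)                    # regression: minimizes the sum of squared errors
max(abs(fitted(fit) - y))           # generally nonzero at the data points

# predictions at a new point can differ noticeably
f(5.5)
predict(fit, newdata = data.frame(x = 5.5))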
{ "source": [ "https://stats.stackexchange.com/questions/33659", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12710/" ] }
33,780
I learned R but it seems that companies are much more interested in SAS experience. What are the advantages of SAS over R?
I think there are several issues (in ascending order of possible validity):

1. Tradition / habit: people are used to SAS, and don't want to have to learn something new. (Making it more difficult, the way you think in SAS and R is different.) This can apply to anyone who might have to send you code, or read / use your code, including managers and colleagues.
2. Distrust of freeware: I've had several people say they aren't willing to accept results from R because you don't have a for-profit company vetting the code to ensure it gives correct results before it goes out to customers, lest they end up losing business.
3. Big data: R performs operations with everything in memory, whereas SAS doesn't necessarily. Thus, if your data approaches the limits of your memory, there will be problems.

Personally, I only think #3 has any legitimate merit, although there are approaches to big data that have been developed with R. The issues with #1 speak for themselves. I think #2 ignores several facts: there is some vetting that goes on with R, many of the main packages are written by some of the biggest names in statistics, and there have been studies that compare the accuracy of different statistical software & R has certainly been competitive.
{ "source": [ "https://stats.stackexchange.com/questions/33780", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11634/" ] }
33,888
X and Y are not correlated (-.01); however, when I place X in a multiple regression predicting Y, alongside three (A, B, C) other (related) variables, X and two other variables (A, B) are significant predictors of Y. Note that the two other (A, B) variables are significantly correlated with Y outside of the regression. How should I interpret these findings? X predicts unique variance in Y, but since these are not correlated (Pearson), it is somewhat difficult to interpret. I know of opposite cases (i.e., two variables are correlated but the regression is not significant) and those are relatively simpler to understand from a theoretical and statistical perspective. Note that some of the predictors are quite correlated (e.g., .70) but not to the extent that I would expect substantial multicollinearity. Maybe I am mistaken, though. NOTE: I asked this question previously and it was closed. The rationale was that this question is redundant with the question "How can a regression be significant yet all predictors be non-significant?". Perhaps I do not understand the other question, but I believe these are entirely separate questions, both mathematically and theoretically. My question is entirely independent of whether "a regression is significant". Furthermore, several predictors are significant, while the other question entails variables not being significant, so I don't see the overlap. If these questions are redundant for reasons I do not understand, please insert a comment prior to closing this question. Also, I was hoping to message the moderator who closed the other question to avoid identical questions, but I couldn't find an option to do so.
Causal theory offers another explanation for how two variables could be unconditionally independent yet conditionally dependent. I am not an expert on causal theory and am grateful for any criticism that will correct any misguidance below. To illustrate, I will use directed acyclic graphs (DAG). In these graphs, edges ( $-$ ) between variables represent direct causal relationships. Arrowheads ( $\leftarrow$ or $\rightarrow$ ) indicate the direction of causal relationships. Thus $A \rightarrow B$ infers that $A$ directly causes $B$ , and $A \leftarrow B$ infers that $A$ is directly caused by $B$ . $A \rightarrow B \rightarrow C$ is a causal path that infers that $A$ indirectly causes $C$ through $B$ . For simplicity, assume all causal relationships are linear. First, consider a simple example of confounder bias : Here, a simple bivariable regression will suggest a dependence between $X$ and $Y$ . However, there is no direct causal relationship between $X$ and $Y$ . Instead, both are directly caused by $Z$ , and in the simple bivariable regression, observing $Z$ induces a dependency between $X$ and $Y$ , resulting in bias by confounding. However, a multivariable regression conditioning on $Z$ will remove the bias and suggest no dependence between $X$ and $Y$ . Second, consider an example of collider bias (also known as Berkson's bias or berksonian bias, of which selection bias is a special type): Here, a simple bivariable regression will suggest no dependence between $X$ and $Y$ . This agrees with the DAG, which infers no direct causal relationship between $X$ and $Y$ . However, a multivariable regression conditioning on $Z$ will induce a dependence between $X$ and $Y$ , suggesting that a direct causal relationship between the two variables may exist when in fact, none exist. The inclusion of $Z$ in the multivariable regression results in collider bias. Third, consider an example of incidental cancellation: Let us assume that $\alpha$ , $\beta$ , and $\gamma$ are path coefficients and that $\beta = -\alpha\gamma$ . A simple bivariable regression will suggest no dependence between $X$ and $Y$ . Although $X$ is, in fact, a direct cause of $Y$ , the confounding effect of $Z$ on $X$ and $Y$ incidentally cancels out the effect of $X$ on $Y$ . A multivariable regression conditioning on $Z$ will remove the confounding effect of $Z$ on $X$ and $Y$ , allowing for the estimation of the direct effect of $X$ on $Y$ , assuming the DAG of the causal model is correct. To summarize: Confounder example: $X$ and $Y$ are dependent in bivariable regression and independent in multivariable regression conditioning on confounder $Z$ . Collider example: $X$ and $Y$ are independent in bivariable regression and dependent in multivariable regression conditioning on collider $Z$ . Incidental cancellation example: $X$ and $Y$ are independent in bivariable regression and dependent in multivariable regression conditioning on confounder $Z$ . Discussion: The results of your analysis are not compatible with the confounder example but are compatible with both the collider example and the incidental cancellation example. Thus, a potential explanation is that you have incorrectly conditioned on a collider variable in your multivariable regression and have induced an association between $X$ and $Y$ even though $X$ is not a cause of $Y$ and $Y$ is not a cause of $X$ . 
Alternatively, you might have correctly conditioned on a confounder in your multivariable regression that was incidentally cancelling out the true effect of $X$ on $Y$ in your bivariable regression. I find using background knowledge to construct causal models helpful when considering which variables to include in statistical models. For example, if previous high-quality randomized studies concluded that $X$ causes $Z$ and $Y$ causes $Z$ , I could make a strong assumption that $Z$ is a collider of $X$ and $Y$ and not condition upon it in a statistical model. However, if I merely had an intuition that $X$ causes $Z$ , and $Y$ causes $Z$ , but no strong scientific evidence to support my intuition, I could only make a weak assumption that $Z$ is a collider of $X$ and $Y$ , as human intuition has a history of being misguided. Subsequently, I would be skeptical of inferring causal relationships between $X$ and $Y$ without further investigating their causal relationships with $Z$ . In lieu of or in addition to background knowledge, there are also algorithms designed to infer causal models from the data using a series of tests of association (e.g. PC algorithm and FCI algorithm, see TETRAD for Java implementation, PCalg for R implementation). These algorithms are very interesting, but I would not recommend relying on them without a strong understanding of the power and limitations of causal calculus and causal models in causal theory. Conclusion: Contemplation of causal models does not excuse the investigator from addressing the statistical considerations discussed in other answers here. However, I feel that causal models can nevertheless provide a helpful framework when thinking of potential explanations for observed statistical dependence and independence in statistical models, especially when visualizing potential confounders and colliders. Further reading: Gelman, Andrew. 2011. " Causality and Statistical Learning ." Am. J. Sociology 117 (3) (November): 955–966. Greenland, S, J Pearl, and J M Robins. 1999. “ Causal Diagrams for Epidemiologic Research .” Epidemiology (Cambridge, Mass.) 10 (1) (January): 37–48. Greenland, Sander. 2003. “ Quantifying Biases in Causal Models: Classical Confounding Vs Collider-Stratification Bias .” Epidemiology 14 (3) (May 1): 300–306. Pearl, Judea. 1998. Why There Is No Statistical Test For Confounding, Why Many Think There Is, And Why They Are Almost Right . Pearl, Judea. 2009. Causality: Models, Reasoning and Inference . 2nd ed. Cambridge University Press. Spirtes, Peter, Clark Glymour, and Richard Scheines. 2001. Causation, Prediction, and Search , Second Edition. A Bradford Book. Update: Judea Pearl discusses the theory of causal inference and the need to incorporate causal inference into introductory statistics courses in the November 2012 edition of Amstat News . His Turing Award Lecture , entitled "The mechanization of causal inference: A 'mini' Turing Test and beyond" is also of interest.
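To give the collider case a concrete feel, here is a small illustrative sketch (simulated data, arbitrary coefficients; not drawn from any real study): conditioning on a variable caused by both $X$ and $Y$ manufactures an association that is absent marginally.

set.seed(42)
n <- 10000
x <- rnorm(n)                        # X and Y generated independently
y <- rnorm(n)
z <- x + y + rnorm(n, sd = 0.5)      # Z is a collider: caused by both X and Y

coef(summary(lm(y ~ x)))["x", ]      # bivariable: X coefficient near zero
coef(summary(lm(y ~ x + z)))["x", ]  # conditioning on Z: strong spurious X "effect"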
{ "source": [ "https://stats.stackexchange.com/questions/33888", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3262/" ] }
34,396
I have 10 years of daily returns data for 28 different currencies. I wish to extract the first principal component, but rather than operate PCA on the whole 10 years, I want to rollapply a 2 year window, because the currencies' behaviours evolve and so I wish to reflect this. However I have a major problem, that is that both the princomp() and prcomp() functions will often jump from positive to negative loadings in adjacent PCA analyses (ie 1 day apart). Have a look at the loading chart for the EUR currency: Clearly I can't use this because adjacent loadings will jump from positive to negative, so my series which uses them will be erroneous. Now take a look at the absolute value of the EUR currency loading: The problem is of course that I still cannot use this because you can see from the top chart that the loading does go from negative to positive and back at times, a characteristic which I need to preserve. Is there any way I can get around this problem? Can I force the eigenvector orientation to always be the same in adjacent PCAs? By the way this problem also occurs with the FactoMineR PCA() function. The code for the rollapply is here: rollapply(retmat, windowl, function(x) summary(princomp(x))$loadings[, 1], by.column = FALSE, align = "right") -> princomproll
Whenever the plot jumps too much, reverse the orientation. One effective criterion is this: compute the total amount of jumps on all the components. Compute the total amount of jumps if the next eigenvector is negated. If the latter is less, negate the next eigenvector. Here's an implementation. (I am not familiar with zoo, which might allow a more elegant solution.)

require(zoo)
amend <- function(result) {
  result.m <- as.matrix(result)
  n <- dim(result.m)[1]
  # size of the jump between consecutive loading vectors, as is and with the sign flipped
  delta <- apply(abs(result.m[-1,] - result.m[-n,]), 1, sum)
  delta.1 <- apply(abs(result.m[-1,] + result.m[-n,]), 1, sum)
  # flip the sign whenever the flipped version jumps less
  signs <- c(1, cumprod(rep(-1, n-1) ^ (delta.1 <= delta)))
  zoo(result * signs)
}

As an example, let's run a random walk in an orthogonal group and jitter it a little for interest:

random.rotation <- function(eps) {
  theta <- rnorm(3, sd=eps)
  matrix(c(1, theta[1:2], -theta[1], 1, theta[3], -theta[2:3], 1), 3)
}
set.seed(17)
n.times <- 1000
x <- matrix(1., nrow=n.times, ncol=3)
for (i in 2:n.times) {
  x[i,] <- random.rotation(.05) %*% x[i-1,]
}

Here's the rolling PCA:

window <- 31
data <- zoo(x)
result <- rollapply(data, window,
                    function(x) summary(princomp(x))$loadings[, 1],
                    by.column = FALSE, align = "right")
plot(result)

Now the fixed version:

plot(amend(result))
{ "source": [ "https://stats.stackexchange.com/questions/34396", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4705/" ] }
34,465
From Theory of Statistics by Mark J. Schervish (page 12): Although DeFinetti's representation theorem 1.49 is central to motivating parametric models, it is not actually used in their implementation. How is the theorem central to parametric models?
De Finetti's Representation Theorem gives in a single take, within the subjectivistic interpretation of probabilities, the raison d'être of statistical models and the meaning of parameters and their prior distributions. Suppose that the random variables $X_1,\dots,X_n$ represent the results of successive tosses of a coin, with values $1$ and $0$ corresponding to the results "Heads" and "Tails", respectively. Analyzing, within the context of a subjectivistic interpretation of the probability calculus, the meaning of the usual frequentist model under which the $X_i$'s are independent and identically distributed, De Finetti observed that the condition of independence would imply, for example, that $$ P\{X_n=x_n\mid X_1=x_1,\dots,X_{n-1}=x_{n-1}\} = P\{X_n=x_n\} \, , $$ and, therefore, the results of the first $n-1$ tosses would not change my uncertainty about the result of $n$-th toss. For example, if I believe $\textit{a priori}$ that this is a balanced coin, then, after getting the information that the first $999$ tosses turned out to be "Heads", I would still believe, conditionally on that information, that the probability of getting "Heads" on toss 1000 is equal to $1/2$. Effectively, the hypothesis of independence of the $X_i$'s would imply that it is impossible to learn anything about the coin by observing the results of its tosses. This observation led De Finetti to the introduction of a condition weaker than independence that resolves this apparent contradiction. The key to De Finetti's solution is a kind of distributional symmetry known as exchangeability. $\textbf{Definition.}$ For a given finite set $\{X_i\}_{i=1}^n$ of random objects, let $\mu_{X_1,\dots,X_n}$ denote their joint distribution. This finite set is exchangeable if $\mu_{X_1,\dots,X_n} = \mu_{X_{\pi(1)},\dots,X_{\pi(n)}}$, for every permutation $\pi:\{1,\dots,n\}\to\{1,\dots,n\}$. A sequence $\{X_i\}_{i=1}^\infty$ of random objects is exchangeable if each of its finite subsets are exchangeable. Supposing only that the sequence of random variables $\{X_i\}_{i=1}^\infty$ is exchangeable, De Finetti proved a notable theorem that sheds light on the meaning of commonly used statistical models. In the particular case when the $X_i$'s take the values $0$ and $1$, De Finetti's Representation Theorem says that $\{X_i\}_{i=1}^\infty$ is exchangeable if and only if there is a random variable $\Theta:\Omega\to[0,1]$, with distribution $\mu_\Theta$, such that $$ P\{X_1=x_1,\dots,X_n=x_n\} = \int_{[0,1]} \theta^s(1-\theta)^{n-s}\,d\mu_\Theta(\theta) \, , $$ in which $s=\sum_{i=1}^n x_i$. Moreover, we have that $$ \bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i \xrightarrow[n\to\infty]{} \Theta \qquad \textrm{almost surely}, $$ which is known as De Finetti's Strong Law of Large Numbers. This Representation Theorem shows how statistical models emerge in a Bayesian context: under the hypothesis of exchangeability of the observables $\{X_i\}_{i=1}^\infty$, $\textbf{there is}$ a $\textit{parameter}$ $\Theta$ such that, given the value of $\Theta$, the observables are $\textit{conditionally}$ independent and identically distributed. Moreover, De Finetti's Strong law shows that our prior opinion about the unobservable $\Theta$, represented by the distribution $\mu_\Theta$, is the opinion about the limit of $\bar{X}_n$, before we have information about the values of the realizations of any of the $X_i$'s. 
The parameter $\Theta$ plays the role of a useful subsidiary construction, which allows us to obtain conditional probabilities involving only observables through relations like $$ P\{X_n=1\mid X_1=x_1,\dots,X_{n-1}=x_{n-1}\} = \mathrm{E}\left[\Theta\mid X_1=x_1,\dots,X_{n-1}=x_{n-1}\right] \, . $$
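A small simulation may help make the theorem concrete. The sketch below assumes, purely for illustration, a Beta(2, 2) prior for $\Theta$; it draws $\Theta$ once, generates conditionally i.i.d. tosses given that value, and checks that the running mean $\bar{X}_n$ approaches the drawn value, as the strong law states.

set.seed(1)
theta <- rbeta(1, 2, 2)                  # draw Theta from the (assumed) prior
x <- rbinom(100000, 1, theta)            # conditionally i.i.d. tosses given Theta
n <- c(10, 1000, 100000)
cumsum(x)[n] / n                         # running means of the tosses
theta                                    # the value they converge to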
{ "source": [ "https://stats.stackexchange.com/questions/34465", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10849/" ] }
34,547
If there are multiple possible approximations, I'm looking for the most basic one.
You can approximate it with the multivariate normal distribution in the same way that the binomial distribution is approximated by the univariate normal distribution. Check Elements of Distribution Theory and Multinomial Distribution, pages 15–17. Let $P=(p_1,\dots,p_k)$ be the vector of your probabilities. Then the mean vector of the multivariate normal distribution is $nP=(np_1,np_2,\dots,np_k)$. The covariance matrix is a $k \times k$ symmetric matrix. The diagonal elements are the variances of the $X_i$'s, i.e. $np_i(1-p_i)$, $i=1,2,\dots,k$. The off-diagonal element in the $i$th row and $j$th column is $\text{Cov}(X_i,X_j)=-np_ip_j$, where $i \neq j$.
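As a quick numerical check (a sketch with arbitrary $n$ and $p$), the simulated moments of a multinomial sample match the mean vector and covariance matrix given above:

set.seed(1)
n <- 1000
p <- c(0.2, 0.3, 0.5)
X <- t(rmultinom(50000, size = n, prob = p))   # one multinomial draw per row

colMeans(X)           # close to n * p = (200, 300, 500)
var(X[, 1])           # close to n * p1 * (1 - p1) = 160
cov(X[, 1], X[, 2])   # close to -n * p1 * p2 = -60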
{ "source": [ "https://stats.stackexchange.com/questions/34547", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12244/" ] }
34,613
So I was asked a question about which central measures L1 (i.e., lasso) and L2 (i.e., ridge regression) estimate. The answer is L1 = median and L2 = mean. Is there any kind of intuitive reasoning for this? Or does it have to be determined algebraically? If so, how do I go about doing that?
There is a simple geometric explanation for why the L1 loss function yields the median. Recall that we are working in one dimension, so imagine a number line spreading horizontally. Plot each of the data points on the number line. Put your finger somewhere on the line; your finger will be your current candidate estimate. Suppose you move your finger a little bit to the right, say $\delta$ units to the right. What happens to the total loss? Well, if your finger was between two data points, and you move it across a data point, you've increased the total loss by $\delta$ for each data point to the left of your finger, and decreased it by $\delta$ for each data point to the right of your finger. So, if there are more data points to the right of your finger than there are to the left, moving your finger to the right decreases the total loss. In other words, if more than half of the data points are to the right of your finger, you should move your finger to the right. This leads to you moving your finger towards a spot where half of the data points are to the left of that spot, and half are on the right. That spot is the median. That's L1 and the median. Unfortunately, I don't have a similar, "all intuition, no algebra" explanation for L2 and the mean.
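For L2, a short algebraic argument fills the gap: setting the derivative of $\sum_i (x_i - c)^2$ with respect to $c$ to zero gives $-2\sum_i (x_i - c) = 0$, i.e. $c = \bar{x}$. A small numerical check (a sketch with simulated data) confirms both claims:

set.seed(3)
x <- rexp(101)                                       # skewed, so mean and median differ

l1 <- optimize(function(c) sum(abs(x - c)), range(x))$minimum
l2 <- optimize(function(c) sum((x - c)^2),  range(x))$minimum

c(L1.minimizer = l1, median = median(x))             # agree
c(L2.minimizer = l2, mean   = mean(x))               # agree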
{ "source": [ "https://stats.stackexchange.com/questions/34613", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12851/" ] }
34,616
I am currently learning about regression analysis and the analysis of variance. In regression analysis you have one variable fixed and you want to know how that variable goes with the other variable. In analysis of variance you want to know, for example, whether a specific animal food influences the weight of animals... So, one fixed variable and its influence on the others... Is that right or wrong? Please help me.
Suppose your data set consists of a set $(x_i,y_i)$ for $i=1,\ldots,n$ and you want to look at the dependence of $y$ on $x$. Suppose you find the values $\hat\alpha$ and $\hat\beta$ of $\alpha$ and $\beta$ that minimize the residual sum of squares $$ \sum_{i=1}^n (y_i - (\alpha+\beta x_i))^2. $$ Then you take $\hat y = \hat\alpha+ \hat\beta x$ to be the predicted $y$-value for any (not necessarily already observed) $x$-value. That's linear regression. Now consider decomposing the total sum of squares $$ \sum_{i=1}^n (y_i - \bar y)^2 \qquad\text{where }\bar y = \frac{y_1+\cdots+y_n}{n} $$ with $n-1$ degrees of freedom, into "explained" and "unexplained" parts: $$ \underbrace{\sum_{i=1}^n ((\hat\alpha+\hat\beta x_i) - \bar y)^2}_{\text{explained}}\ +\ \underbrace{\sum_{i=1}^n (y_i - (\hat\alpha+\hat\beta x_i))^2}_{\text{unexplained}}. $$ with $1$ and $n-2$ degrees of freedom, respectively. That's analysis of variance, and one then considers things like F-statistics $$ F = \frac{\sum_{i=1}^n ((\hat\alpha+\hat\beta x_i) - \bar y)^2/1}{\sum_{i=1}^n (y_i - (\hat\alpha+\hat\beta x_i))^2/(n-2)}. $$ This F-statistic tests the null hypothesis $\beta=0$. One often first encounters the term "analysis of variance" when the predictor is categorical, so that you're fitting the model $$ y = \alpha + \beta_i $$ where $i$ identifies which category is the value of the predictor. If there are $k$ categories, you'd get $k-1$ degrees of freedom in the numerator in the F-statistic, and usually $n-k$ degrees of freedom in the denominator. But the distinction between regression and analysis of variance is still the same for this kind of model. A couple of additional points: To some mathematicians, the account above may make it appear that the whole field is only what is seen above, so it may seem mysterious that both regression and analysis of variance are active research areas. There is much that won't fit into an answer appropriate for posting here. There is a popular and tempting mistake, which is that it's called "linear" because the graph of $y=\alpha+\beta x$ is a line. That is false. One of my earlier answers explains why it's still called "linear regression" when you're fitting a polynomial via least squares.
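A short R illustration of the decomposition (a sketch with simulated data): anova() reports the explained and residual sums of squares, and the same F statistic can be assembled by hand.

set.seed(7)
x <- runif(30, 0, 10)
y <- 3 + 2 * x + rnorm(30, sd = 2)

fit <- lm(y ~ x)
anova(fit)                                     # explained vs residual SS, F test

explained   <- sum((fitted(fit) - mean(y))^2)  # 1 degree of freedom
unexplained <- sum(residuals(fit)^2)           # n - 2 degrees of freedom
(explained / 1) / (unexplained / (length(y) - 2))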
{ "source": [ "https://stats.stackexchange.com/questions/34616", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13561/" ] }
34,636
I'm somewhat new to using logistic regression, and a bit confused by a discrepancy between my interpretations of the following values, which I thought would be the same:

- exponentiated beta values
- the predicted probability of the outcome using beta values.

Here is a simplified version of the model I am using, where undernutrition and insurance are both binary, and wealth is continuous:

Under.Nutrition ~ insurance + wealth

My (actual) model returns an exponentiated beta value of .8 for insurance, which I would interpret as: "The probability of being undernourished for an insured individual is .8 times the probability of being undernourished for an uninsured individual." However, when I calculate the difference in probabilities for individuals by putting values of 0 and 1 into the insurance variable and the mean value for wealth, the difference in undernutrition is only .04. That is calculated as follows:

Probability Undernourished = exp(β0 + β1*Insurance + β2*Wealth) / (1 + exp(β0 + β1*Insurance + β2*Wealth))

I would really appreciate it if someone could explain why these values are different, and what a better interpretation (particularly for the second value) might be.

Further Clarification Edits

As I understand it, the probability of being under-nourished for an uninsured person (where B1 corresponds to insurance) is:

Prob(Unins) = exp(β0 + β1*0 + β2*Wealth) / (1 + exp(β0 + β1*0 + β2*Wealth))

While the probability of being under-nourished for an insured person is:

Prob(Ins) = exp(β0 + β1*1 + β2*Wealth) / (1 + exp(β0 + β1*1 + β2*Wealth))

The odds of being undernourished for an insured person compared to an uninsured person is: exp(B1)

Is there a way to translate between these values (mathematically)? I'm still a bit confused by this equation (where the right-hand side should probably be a different value):

Prob(Ins) - Prob(Unins) != exp(B1)

In layman's terms, the question is why doesn't insuring an individual change their probability of being under-nourished as much as the odds ratio indicates it does? In my data, Prob(Ins) - Prob(Unins) = .04, where the exponentiated beta value is .8 (so why is the difference not .2?)
It seems self-evident to me that $$ \exp(\beta_0 + \beta_1x) \neq\frac{\exp(\beta_0 + \beta_1x)}{1+\exp(\beta_0 + \beta_1x)} $$ unless $\exp(\beta_0 + \beta_1x)=0$. So, I'm less clear about what the confusion might be. What I can say is that the left hand side (LHS) of the (not) equals sign is the odds of being undernourished, whereas the RHS is the probability of being undernourished. When examined on its own, $\exp(\beta_1)$, is the odds ratio , that is the multiplicative factor that allows you to move from the odds($x$) to the odds($x+1$). Let me know if you need additional / different information. Update: I think this is mostly an issue of being unfamiliar with probabilities and odds, and how they relate to one another. None of that is very intuitive, you need to sit down and work with it for a while and learn to think in those terms; it doesn't come naturally to anyone. The issue is that absolute numbers are very difficult to interpret on their own. Lets say I was telling you about a time when I had a coin and I wondered whether it was fair. So I flipped it some and got 6 heads. What does that mean? Is 6 a lot, a little, about right? It's awfully hard to say. To deal with this issue we want to give numbers some context. In a case like this there are two obvious choices for how to provide the needed context: I could give the total number of flips, or I could give the number of tails. In either case, you have adequate information to make sense of 6 heads, and you could compute the other value if the one I told you wasn't the one you preferred. Probability is the number of heads divided by the total number of events. The odds is the ratio of the number of heads to the number of non-heads (intuitively we want to say the number of tails, which works in this case, but not if there are more than 2 possibilities). With the odds, it is possible to give both numbers, e.g. 4 to 5. This means that in the long run something will happen 4 times for every 5 times it doesn't happen. When the odds are presented this way, they're called " Las Vegas odds ". However in statistics, we typically divide through and say the odds are .8 instead (i.e., 4/5 = .8) for purposes of standardization. We can also convert between the odds and probabilities: $$ \text{probability}=\frac{\text{odds}}{1+\text{odds}} ~~~~~~~~~~~~~~~~ \text{odds}=\frac{\text{probability}}{1-\text{probability}} $$ (With these formulas it can be difficult to recognize that the odds is the LHS at top, and the probability is the RHS, but remember that it's the not equals sign in the middle.) An odds ratio is just the odds of something divided by the odds of something else; in the context of logistic regression, each $\exp(\beta)$ is the ratio of the odds for successive values of the associated covariate when all else is held equal. What's important to recognize from all of these equations is that probabilities, odds, and odds ratios do not equate in any straightforward way; just because the probability goes up by .04 very much does not imply that the odds or odds ratio should be anything like .04! Moreover, probabilities range from $[0, 1]$, whereas ln odds (the output from the raw logistic regression equation) can range from $(-\infty, +\infty)$, and odds and odds ratios can range from $(0, +\infty)$. This last part is vital: Due to the bounded range of probabilities, probabilities are non-linear , but ln odds can be linear. 
That is, as (for example) wealth goes up by constant increments, the probability of undernourishment will increase by varying amounts, but the ln odds will increase by a constant amount and the odds will increase by a constant multiplicative factor. For any given set of values in your logistic regression model, there may be some point where $$ \exp(\beta_0 + \beta_1x)-\exp(\beta_0 + \beta_1x') =\frac{\exp(\beta_0 + \beta_1x)}{1+\exp(\beta_0 + \beta_1x)}-\frac{\exp(\beta_0 + \beta_1x')}{1+\exp(\beta_0 + \beta_1x')} $$ for some given $x$ and $x'$, but it will be unequal everywhere else. (Although it was written in the context of a different question, my answer here contains a lot of information about logistic regression that may be helpful for you in understanding LR and related issues more fully.)
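To see this numerically, here is a small R sketch (hypothetical coefficient values, not the poster's fitted model): with an exponentiated insurance coefficient of 0.8, the odds ratio is exactly 0.8, yet the change in probability depends on where the other terms place you on the logistic curve.

b0 <- -1                       # hypothetical value of beta0 + beta2 * mean(wealth)
b1 <- log(0.8)                 # hypothetical insurance coefficient, so exp(b1) = 0.8
p  <- function(ins) plogis(b0 + b1 * ins)   # probability of undernutrition

p(1) - p(0)                                   # difference in probabilities (small)
(p(1) / (1 - p(1))) / (p(0) / (1 - p(0)))     # odds ratio: exactly 0.8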
{ "source": [ "https://stats.stackexchange.com/questions/34636", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13378/" ] }
34,642
I would like to know the difference between panel data analysis & mixed model analysis. To my knowledge, both panel data & mixed models use fixed & random effects. If so, why do they have different names? Or are they synonymous? I've read the following post, which describes the definition of fixed, random & mixed effects, but doesn't exactly answer my question: What is the difference between fixed effect, random effect and mixed effect models? I would also be grateful if somebody could refer me to a brief (about 200-page) reference on mixed model analysis. Just to add, I would prefer a mixed-modelling reference irrespective of software treatment, mainly a theoretical explanation of mixed modelling.
Both panel data and mixed effect model data deal with double indexed random variables $y_{ij}$. The first index is for the group, the second is for individuals within the group. For panel data the second index is usually time, and it is assumed that we observe individuals over time. When time is the second index for a mixed effect model, the models are called longitudinal models. The mixed effect model is best understood in terms of 2-level regressions. (For ease of exposition assume only one explanatory variable.) The first level regression is the following: $$y_{ij}=\alpha_i+x_{ij}\beta_i+\varepsilon_{ij}.$$ This is simply explained as an individual regression for each group. The second level regression tries to explain variation in the regression coefficients: $$\alpha_i=\gamma_0+z_{i1}\gamma_1+u_i$$ $$\beta_i=\delta_0+z_{i2}\delta_1+v_i$$ When you substitute the second equation into the first one you get $$y_{ij}=\gamma_0+z_{i1}\gamma_1+x_{ij}\delta_0+x_{ij}z_{i2}\delta_1+u_i+x_{ij}v_i+\varepsilon_{ij}$$ The fixed effects are what is fixed, meaning $\gamma_0,\gamma_1,\delta_0,\delta_1$. The random effects are $u_i$ and $v_i$.

Now for panel data the terminology changes, but you can still find common points. The panel data random effects model is the same as the mixed effects model with $$\alpha_i=\gamma_0+u_i$$ $$\beta_i=\delta_0$$ with the model becoming $$y_{it}=\gamma_0+x_{it}\delta_0+u_i+\varepsilon_{it},$$ where $u_i$ are random effects.

The most important difference between mixed effects models and panel data models is the treatment of the regressors $x_{ij}$. For mixed effects models they are non-random variables, whereas for panel data models it is always assumed that they are random. This becomes important when stating what the fixed effects model for panel data is. For the mixed effect model it is assumed that the random effects $u_i$ and $v_i$ are independent of $\varepsilon_{ij}$ and also of $x_{ij}$ and $z_i$, which is always true when $x_{ij}$ and $z_i$ are fixed. If we allow for stochastic $x_{ij}$ this becomes important. So the random effects model for panel data assumes that $x_{it}$ is not correlated with $u_i$. But the fixed effects model, which has the same form $$y_{it}=\gamma_0+x_{it}\delta_0+u_i+\varepsilon_{it},$$ allows correlation of $x_{it}$ and $u_i$. The emphasis then is solely on consistently estimating $\delta_0$. This is done by subtracting the individual means: $$y_{it}-\bar{y}_{i.}=(x_{it}-\bar{x}_{i.})\delta_0+\varepsilon_{it}-\bar{\varepsilon}_{i.},$$ and using simple OLS on the resulting regression problem. Algebraically this coincides with the least squares dummy variable regression problem, where we assume that the $u_i$ are fixed parameters. Hence the name fixed effects model. There is a lot of history behind the fixed effects and random effects terminology in panel data econometrics, which I have omitted. In my personal opinion these models are best explained in Wooldridge's "Econometric analysis of cross section and panel data". As far as I know there is no such history for mixed effects models, but on the other hand I come from an econometrics background, so I might be mistaken.
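To make the connection concrete, here is a minimal R sketch (simulated data, arbitrary parameter values; lme4 is assumed to be installed) in which the individual effect $u_i$ is correlated with $x_{it}$, so the random effects (mixed model) slope and the fixed effects (least squares dummy variable) slope differ.

library(lme4)
set.seed(11)
n_i <- 50; n_t <- 6
id <- rep(1:n_i, each = n_t)
u  <- rnorm(n_i)[id]                    # individual effect
x  <- rnorm(n_i * n_t) + 0.5 * u        # regressor correlated with u
y  <- 1 + 2 * x + u + rnorm(n_i * n_t)  # true slope is 2

fixef(lmer(y ~ x + (1 | id)))["x"]      # random effects (mixed model) slope, biased here
coef(lm(y ~ x + factor(id)))["x"]       # fixed effects (LSDV / within) slope, near 2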
{ "source": [ "https://stats.stackexchange.com/questions/34642", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4278/" ] }
34,724
I have a set of data that has $n$ samples described by $m$ variables. I do a PCA to reduce it to just 2 dimensions so I can make a nice 2D plot of the data. I understand that the $x,y$ coordinates (i.e., the PCA scores) for the plot are calculated by basically summing the products of the original data (after centering) by the loadings for each variable, so: $$\mathrm{PC}_1 = X_1L_1 + X_2L_2 + ... + X_mL_m.$$ My question is, if I pick an arbitrary point in the PCA space (i.e. a value for $\mathrm{PC}_1$ and $\mathrm{PC}_2$, or $x$ and $y$ in my plot), is there a convenient way to translate that back to a set of the original values (i.e., $X_1,X_2,\dots,X_m$)? Note 100% reversal is obviously not expected (since I'm only using 2 PCs), so a decent approximation is fine.
Yes. Basically, what you did was: $$\mathrm{PC}=\mathbf{V}X,$$ where $\mathrm{PC}$ are the principal components, $X$ is your matrix with the data (centered, and with data points in columns) and $\mathbf{V}$ is the matrix with the loadings (the matrix with the eigenvectors of the sample covariance matrix of $X$). Therefore, you can do: $$\mathbf{V}^{-1}\cdot\mathrm{PC}=X,$$ but, because the matrix of loadings is orthonormal (they are eigenvectors!), then $\mathbf{V}^{-1}=\mathbf{V}^{T}$, so: $$\mathbf{V}^T\cdot\mathrm{PC}=X.$$ Note that this gives you exactly the same equation you cite for the recovery of the PCs, but now for the data, and you can retain as many PCs as you like.
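Here is a small R sketch of that reconstruction (illustrative only, using a built-in dataset). Note that prcomp follows the opposite convention to the notation above, with data points in rows, so the multiplication is scores %*% t(loadings); the centering also has to be undone.

pca <- prcomp(USArrests, center = TRUE, scale. = FALSE)

k <- 2                                              # number of PCs retained
recon <- pca$x[, 1:k] %*% t(pca$rotation[, 1:k])    # scores %*% t(loadings)
recon <- sweep(recon, 2, pca$center, "+")           # add the centering back

head(recon)        # approximate reconstruction of the original variables
head(USArrests)

# an arbitrary point (pc1, pc2) in the PCA plane maps back the same way
drop(c(1.5, -0.5) %*% t(pca$rotation[, 1:k])) + pca$center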
{ "source": [ "https://stats.stackexchange.com/questions/34724", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13455/" ] }
34,769
In a dataset of two non-overlapping populations (patients & healthy, total $n=60$) I would like to find (out of $300$ independent variables) significant predictors for a continuous dependent variable. Correlation between predictors is present. I am interested in finding out whether any of the predictors are related to the dependent variable "in reality" (rather than predicting the dependent variable as exactly as possible). As I got overwhelmed by the numerous possible approaches, I would like to ask which approach is most recommended. From my understanding, stepwise inclusion or exclusion of predictors is not recommended. E.g.:

- Run a linear regression separately for every predictor and correct the p-values for multiple comparisons using FDR (probably very conservative?)
- Principal-component regression: difficult to interpret, as I won't be able to tell about the predictive power of individual predictors but only about the components.
- Any other suggestions?
I would recommend trying a glm with lasso regularization. This adds a penalty to the model for the number of variables, and as you increase the penalty, the number of variables in the model will decrease. You should use cross-validation to select the value of the penalty parameter. If you have R, I suggest using the glmnet package. Use alpha=1 for lasso regression, and alpha=0 for ridge regression. Setting a value between 0 and 1 will use a combination of lasso and ridge penalties, also known as the elastic net.
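A minimal sketch of what that looks like in R (simulated data standing in for your 60 × 300 matrix; glmnet is assumed to be installed):

library(glmnet)
set.seed(1)
x <- matrix(rnorm(60 * 300), 60, 300)       # stand-in for your predictor matrix
y <- x[, 1] - 2 * x[, 2] + rnorm(60)        # continuous outcome

cvfit <- cv.glmnet(x, y, alpha = 1)         # alpha = 1: lasso; penalty chosen by CV
coef(cvfit, s = "lambda.min")               # nonzero rows = selected predictors
# s = "lambda.1se" gives a sparser, more conservative selection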
{ "source": [ "https://stats.stackexchange.com/questions/34769", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10389/" ] }
34,903
I am a newbie to statistics and found this . In statistics, θ, the lowercase Greek letter 'theta', is the usual name for a (vector of) parameter(s) of some general probability distribution. A common problem is to find the value(s) of theta. Notice that there isn't any meaning in naming a parameter this way. We might as well call it anything else. In fact, a lot of distributions have parameters which are usually given other names. For example, it is common use to name the mean and deviation of the normal distribution μ (read: 'mu') and deviation σ ('sigma'), respectively. But I still don't know what that means in plain English?
It is not a convention, but quite often $\theta$ stands for the set of parameters of a distribution. That was it for plain English; let's show examples instead.

Example 1. You want to study the throw of an old fashioned thumbtack (the ones with a big circular bottom). You assume that the probability that it falls point down is an unknown value that you call $\theta$. You could define a random variable $X$ and say that $X=1$ when the thumbtack falls point down and $X=0$ when it falls point up. You would write the model $$P(X = 1) = \theta \\ P(X = 0) = 1-\theta \, ,$$ and you would be interested in estimating $\theta$ (here, the probability that the thumbtack falls point down).

Example 2. You want to study the disintegration of a radioactive atom. Based on the literature, you know that the amount of radioactivity decreases exponentially, so you decide to model the time to disintegration with an exponential distribution. If $t$ is the time to disintegration, the model is $$f(t) = \theta e^{-\theta t}.$$ Here $f(t)$ is a probability density, which means that the probability that the atom disintegrates in the time interval $(t, t+dt)$ is $f(t)dt$. Again, you will be interested in estimating $\theta$ (here, the disintegration rate).

Example 3. You want to study the precision of a weighing instrument. Based on the literature, you know that the measurements are Gaussian, so you decide to model the weighing of a standard 1 kg object as $$f(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp \left\{ -\frac{(x-\mu)^2}{2\sigma^2} \right\}.$$ Here $x$ is the measurement given by the scale, $f(x)$ is the density of probability, and the parameters are $\mu$ and $\sigma$, so $\theta = (\mu, \sigma)$. The parameter $\mu$ is the target weight (the scale is biased if $\mu \neq 1$), and $\sigma$ is the standard deviation of the measurement every time you weigh the object. Again, you will be interested in estimating $\theta$ (here, the bias and the imprecision of the scale).
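For Example 1, a tiny R sketch (purely illustrative, with an arbitrary true value) shows what "finding the value of theta" looks like in practice: simulate throws with a known $\theta$ and estimate it from the data.

set.seed(8)
theta_true <- 0.3                      # unknown in a real experiment
x <- rbinom(200, 1, theta_true)        # 200 throws, 1 = point down
mean(x)                                # maximum likelihood estimate of theta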
{ "source": [ "https://stats.stackexchange.com/questions/34903", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11799/" ] }
34,926
The tl;dr version

What successful strategies do you employ to teach the sampling distribution (of a sample mean, for example) at an introductory undergraduate level?

The background

In September I'll be teaching an introductory statistics course for second year social science (mainly political science and sociology) students using The Basic Practice of Statistics by David Moore. It'll be the fifth time I've taught this course and one issue I've consistently had is that the students have really struggled with the notion of the sampling distribution. It's covered as the background for inference and follows a basic introduction to probability with which they don't seem to have trouble after some initial hiccups (and by basic, I mean basic -- after all, many of these students have been self-selected into a specific course stream because they were trying to avoid anything with even a vague hint of "math"). I would guess that probably 60% leave the course with no to minimal understanding, about 25% understand the principle but not the connections to other concepts, and the remaining 15% fully understand.

The main issue

The trouble students seem to have is with the application. It's difficult to explain what the precise issue is other than to say they just don't get it. From a poll I conducted last semester and from exam responses, I think that part of the difficulty is confusion between two related and similar sounding phrases (sampling distribution and sample distribution), so I don't use the phrase "sample distribution" anymore, but surely this is something that, while confusing at first, is easily grasped with a little effort and anyway it can't explain the general confusion of the concept of a sampling distribution. (I realize that it might be me and my teaching that's at issue here! However I think ignoring that uncomfortable possibility is reasonable to do since some students do seem to get it and overall everybody seems to do quite well...)

What I've tried

I had to argue with the undergraduate administrator in our department to introduce mandatory sessions in the computer lab thinking that repeated demonstrations might be helpful (before I started teaching this course there was no computing involved). While I think this helps overall understanding of the course material in general, I don't think it's helped with this specific topic. One idea I've had is to simply not teach it at all or to not give it much weight, a position advocated by some (e.g. Andrew Gelman). I don't find this particularly satisfying since it has the whiff of teaching to the lowest common denominator and more importantly denies strong and motivated students who want to learn more about statistical application from really understanding how important concepts work (not only the sampling distribution!). On the other hand, the median student does seem to grasp p-values for example, so maybe they don't need to understand the sampling distribution anyway.

The question

What strategies do you employ to teach the sampling distribution? I know there are materials and discussions available (e.g. here and here and this paper which opens a PDF file) but I'm just wondering if I can get some concrete examples of what works for people (or I guess even what doesn't work so I'll know not to try it!). My plan now, as I plan my course for September, is to follow Gelman's advice and "deemphasize" the sampling distribution.
I'll teach it, but I'll assure the students that this is a sort of FYI-only topic and will not appear on an exam (except perhaps as a bonus question?!). However, I'm really interested in hearing other approaches people have used.
In my opinion, sampling distributions are the key idea of statistics 101. You might as well skip the course as skip that issue. However, I am very familiar with the fact that students just don't get it, seemingly no matter what you do. I have a series of strategies. These can take up a lot of time, but I recommend skipping / abbreviating other topics, so as to ensure that they get the idea of the sampling distribution. Here are some tips:

- Say it distinctly: I first explicitly mention that there are 3 different distributions that we are concerned with: the population distribution, the sample distribution, and the sampling distribution. I say this over and over throughout the lesson, and then over and over throughout the course. Every time I say these terms I emphasize the distinctive ending: sam-ple, samp-ling. (Yes, students do get sick of this; they also get the concept.)
- Use pictures (figures): I have a set of standard figures that I use every time I talk about this. It has the three distributions pictured distinctly, and typically labeled. (The labels that go with this figure are on the PowerPoint slide and include short descriptions, so they don't show up here, but obviously it's: population at the top, then samples, then sampling distribution.)
- Give the students activities: The first time you introduce this concept, either bring in a roll of nickels (some quarters may disappear) or a bunch of 6-sided dice. Have the students form into small groups and generate a set of 10 values and average them. Then you can make a histogram on the board or with Excel.
- Use animations (simulations): I write some (comically inefficient) code in R to generate data & display it in action. This part is especially helpful when you transition to explaining the Central Limit Theorem. (Notice the Sys.sleep() statements; these pauses give me a moment to explain what is going on at each stage.)
N = 10
number_of_samples = 1000
iterations = c(3, 7, number_of_samples)
breakpoints = seq(10, 91, 3)
meanVect = vector()
x = seq(10, 90)
height = 30/dnorm(50, mean=50, sd=10)
y = height*dnorm(x, mean=50, sd=10)
windows(height=7, width=5)
par(mfrow=c(3,1), omi=c(0.5,0,0,0), mai=c(0.1, 0.1, 0.2, 0.1))
for(i in 1:iterations[3]) {
  plot(x,y, type="l", col="blue", axes=F, xlab="", ylab="")
  segments(x0=20, y0=0, x1=20, y1=y[11], col="lightgray")
  segments(x0=30, y0=0, x1=30, y1=y[21], col="gray")
  segments(x0=40, y0=0, x1=40, y1=y[31], col="darkgray")
  segments(x0=50, y0=0, x1=50, y1=y[41])
  segments(x0=60, y0=0, x1=60, y1=y[51], col="darkgray")
  segments(x0=70, y0=0, x1=70, y1=y[61], col="gray")
  segments(x0=80, y0=0, x1=80, y1=y[71], col="lightgray")
  abline(h=0)
  if(i==1) { Sys.sleep(2) }
  sample = rnorm(N, mean=50, sd=10)
  points(x=sample, y=rep(1,N), col="green", pch="*")
  if(i<=iterations[1]) { Sys.sleep(2) }
  xhist1 = hist(sample, breaks=breakpoints, plot=F)
  hist(sample, breaks=breakpoints, axes=F, col="green", xlim=c(10,90), ylim=c(0,N),
       main="", xlab="", ylab="")
  if(i==iterations[3]) { abline(v=50) }
  if(i<=iterations[2]) { Sys.sleep(2) }
  sampleMean = mean(sample)
  segments(x0=sampleMean, y0=0, x1=sampleMean, y1=max(xhist1$counts)+1, col="red", lwd=3)
  if(i<=iterations[1]) { Sys.sleep(2) }
  meanVect = c(meanVect, sampleMean)
  hist(meanVect, breaks=x, axes=F, col="red", main="", xlab="", ylab="",
       ylim=c(0,((N/3)+(0.2*i))))
  if(i<=iterations[2]) { Sys.sleep(2) }
}
Sys.sleep(2)
xhist2 = hist(meanVect, breaks=x, plot=F)
xMean = round(mean(meanVect), digits=3)
xSD = round(sd(meanVect), digits=3)
histHeight = (max(xhist2$counts)/dnorm(xMean, mean=xMean, sd=xSD))
lines(x=x, y=(histHeight*dnorm(x, mean=xMean, sd=xSD)), col="yellow", lwd=2)
abline(v=50)
txt1 = paste("population mean = 50    sampling distribution mean = ", xMean, sep="")
txt2 = paste("SD = 10    10/sqrt(", N,") = 3.162    SE = ", xSD, sep="")
mtext(txt1, side=1, outer=T)
mtext(txt2, side=1, line=1.5, outer=T)

- Reinstantiate these concepts throughout the semester: I bring the idea of the sampling distribution up again each time we talk about the next subject (albeit typically only very briefly). The most important place for this is when you teach ANOVA, as the null hypothesis case there really is the situation in which you sampled from the same population distribution several times, and your set of group means really is an empirical sampling distribution. (For an example of this, see my answer here: How does the standard error work?)
{ "source": [ "https://stats.stackexchange.com/questions/34926", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9249/" ] }
34,956
Wackerly et al's text states this theorem "Let $m_x(t)$ and $m_y(t)$ denote the moment-generating functions of random variables X and Y, respectively. If both moment-generating functions exist and $m_x(t) = m_y(t)$ for all values of t, then X and Y have the same probability distribution." without a proof, saying it is beyond the scope of the text. Scheaffer Young also has the same theorem without a proof. I don't have a copy of Casella, but Google book search didn't seem to find the theorem in it. Gut's text seems to have an outline of a proof, but doesn't make reference to the "well-known results" and also requires knowing another result whose proof is also not provided. Does anyone know who originally proved this, and is the proof available online anywhere? Otherwise how would one fill in the details of this proof? In case I get asked: no, this is not a homework question, but I could imagine this possibly being someone's homework. I took a course sequence based on the Wackerly text and I have been left wondering about this proof for some time. So I figured it was just time to ask.
The general proof of this can be found in Feller (An Introduction to Probability Theory and Its Applications, Vol. 2). It is an inversion problem involving Laplace transform theory. Did you notice that the mgf bears a striking resemblance to the Laplace transform? For the use of the Laplace transform you can see Widder (Calculus, Vol. I).

Proof of a special case: Suppose that X and Y are random variables both taking only possible values in {$0, 1, 2,\dots, n$}. Further, suppose that X and Y have the same mgf for all t: $$\sum_{x=0}^ne^{tx}f_X(x)=\sum_{y=0}^ne^{ty}f_Y(y)$$ For simplicity, we will let $s = e^t$ and we will define $c_i = f_X(i) - f_Y(i)$ for $i = 0, 1,\dots,n$. Now $$\sum_{x=0}^ne^{tx}f_X(x)-\sum_{y=0}^ne^{ty}f_Y(y)=0$$ $$\Rightarrow \sum_{x=0}^ns^xf_X(x)-\sum_{y=0}^ns^yf_Y(y)=0$$ $$\Rightarrow \sum_{x=0}^ns^xf_X(x)-\sum_{x=0}^ns^xf_Y(x)=0$$ $$\Rightarrow\sum_{x=0}^ns^x[f_X(x)-f_Y(x)]=0$$ $$\Rightarrow \sum_{x=0}^ns^xc_x=0~\forall s>0$$ The above is simply a polynomial in s with coefficients $c_0, c_1,\dots,c_n$. The only way it can be zero for all values of s is if $c_0=c_1=\cdots=c_n=0$. So, we have that $0=c_i=f_X(i)-f_Y(i)$ for $i=0, 1,\dots,n$. Therefore, $f_X(i)=f_Y(i)$ for $i=0,1,\dots,n$. In other words, the density functions for $X$ and $Y$ are exactly the same; that is, $X$ and $Y$ have the same distribution.
{ "source": [ "https://stats.stackexchange.com/questions/34956", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4325/" ] }
35,013
As I understand it, the exponentiated beta value from a logistic regression is the odds ratio of that variable for the dependent variable of interest. However, the value does not match the manually calculated odds ratio. My model is predicting stunting (a measure of malnutrition) using, amongst other indicators, insurance. // Odds ratio from LR, being done in stata logit stunting insurance age ... etc. or_insurance = exp(beta_value_insurance) // Odds ratio, manually calculated odds_stunted_insured = num_stunted_ins/num_not_stunted_ins odds_stunted_unins = num_stunted_unins/num_not_stunted_unins odds_ratio = odds_stunted_ins/odds_stunted_unins What is the conceptual reason for these values being different? Controlling for other factors in the regression? Just want to be able to explain the discrepancy.
If you're only putting that lone predictor into the model, then the odds ratio between the predictor and the response will be exactly equal to the exponentiated regression coefficient . I don't think a derivation of this result is present on the site, so I will take this opportunity to provide it. Consider a binary outcome $Y$ and single binary predictor $X$: $$ \begin{array}{c|cc} \phantom{} & Y = 1 & Y = 0 \\ \hline X=1 & p_{11} & p_{10} \\ X=0 & p_{01} & p_{00} \\ \end{array} $$ Then, one way to calculate the odds ratio between $X_i$ and $Y_i$ is $$ {\rm OR} = \frac{ p_{11} p_{00} }{p_{01} p_{10}} $$ By definition of conditional probability, $p_{ij} = P(Y = i | X = j) \cdot P(X = j)$. In the ratio, the marginal probabilities involving $X$ cancel out and you can rewrite the odds ratio in terms of the conditional probabilities of $Y|X$: $${\rm OR} = \frac{ P(Y = 1| X = 1) }{P(Y = 0 | X = 1)} \cdot \frac{ P(Y = 0 | X = 0) }{ P(Y = 1 | X = 0)} $$ In logistic regression, you model these probabilities directly: $$ \log \left( \frac{ P(Y_i = 1|X_i) }{ P(Y_i = 0|X_i) } \right) = \beta_0 + \beta_1 X_i $$ So we can calculate these conditional probabilities directly from the model. The first ratio in the expression for ${\rm OR}$ above is: $$ \frac{ P(Y_i = 1| X_i = 1) }{P(Y_i = 0 | X_i = 1)} = \frac{ \left( \frac{1}{1 + e^{-(\beta_0+\beta_1)}} \right) } {\left( \frac{e^{-(\beta_0+\beta_1)}}{1 + e^{-(\beta_0+\beta_1)}}\right)} = \frac{1}{e^{-(\beta_0+\beta_1)}} = e^{(\beta_0+\beta_1)} $$ and the second is: $$ \frac{ P(Y_i = 0| X_i = 0) }{P(Y_i = 1 | X_i = 0)} = \frac{ \left( \frac{e^{-\beta_0}}{1 + e^{-\beta_0}} \right) } { \left( \frac{1}{1 + e^{-\beta_0}} \right) } = e^{-\beta_0}$$ plugging this back into the formula, we have ${\rm OR} = e^{(\beta_0+\beta_1)} \cdot e^{-\beta_0} = e^{\beta_1}$, which is the result. Note: When you have other predictors, call them $Z_1, ..., Z_p$, in the model, the exponentiated regression coefficient (using a similar derivation) is actually $$ \frac{ P(Y = 1| X = 1, Z_1, ..., Z_p) }{P(Y = 0 | X = 1, Z_1, ..., Z_p)} \cdot \frac{ P(Y = 0 | X = 0, Z_1, ..., Z_p) }{ P(Y = 1 | X = 0, Z_1, ..., Z_p)} $$ so it is the odds ratio conditional on the values of the other predictors in the model and, in general, is not equal to $$ \frac{ P(Y = 1| X = 1) }{P(Y = 0 | X = 1)} \cdot \frac{ P(Y = 0 | X = 0) }{ P(Y = 1 | X = 0)}$$ So, it is no surprise that you're observing a discrepancy between the exponentiated coefficient and the observed odds ratio. Note 2: I derived a relationship between the true $\beta$ and the true odds ratio but note that the same relationship holds for the sample quantities since the fitted logistic regression with a single binary predictor will exactly reproduce the entries of a two-by-two table. That is, the fitted means exactly match the sample means, as with any GLM. So, all of the logic used above applies with the true values replaced by sample quantities.
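Here is a quick R check of both notes (simulated data with arbitrary coefficients, not the poster's): with the lone binary predictor the exponentiated coefficient reproduces the two-by-two table odds ratio exactly, while adding a covariate gives a conditional odds ratio that generally differs.

set.seed(5)
n <- 5000
x <- rbinom(n, 1, 0.5)
z <- rnorm(n)
y <- rbinom(n, 1, plogis(-1 + 0.7 * x + 0.8 * z))

tab <- table(x, y)
(tab["1", "1"] * tab["0", "0"]) / (tab["1", "0"] * tab["0", "1"])   # marginal sample OR
exp(coef(glm(y ~ x,     family = binomial))["x"])   # identical to the table OR
exp(coef(glm(y ~ x + z, family = binomial))["x"])   # conditional OR, differs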
{ "source": [ "https://stats.stackexchange.com/questions/35013", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13378/" ] }
35,042
If I have random variables $X_1,X_2,\ldots,X_n$ that are Poisson distributed with parameters $\lambda_1, \lambda_2,\ldots, \lambda_n$, what is the distribution of $Y=\left\lfloor\frac{\sum_{i=1}^n X_i}{n}\right\rfloor$ (i.e. the integer floor of the average)? A sum of Poissons is also Poisson, but I am not confident enough in statistics to determine if it is the same for the case above.
A generalization of the question asks for the distribution of $Y = \lfloor X/m \rfloor$ when the distribution of $X$ is known and supported on the natural numbers. (In the question, $X$ has a Poisson distribution of parameter $\lambda = \lambda_1 + \lambda_2 + \cdots + \lambda_n$ and $m=n$.) The distribution of $Y$ is easily determined by the distribution of $mY$, whose probability generating function (pgf) can be determined in terms of the pgf of $X$. Here's an outline of the derivation. Write $p(x) = p_0 + p_1 x + \cdots + p_n x^n + \cdots$ for the pgf of $X$, where (by definition) $p_n = \Pr(X=n)$. $mY$ is constructed from $X$ in such a way that its pgf, $q$, is $$\eqalign{q(x) &=& \left(p_0 + p_1 + \cdots + p_{m-1}\right) + \left(p_m + p_{m+1} + \cdots + p_{2m-1}\right)x^m + \cdots + \\&&\left(p_{nm} + p_{nm+1} + \cdots + p_{(n+1)m-1}\right)x^{nm} + \cdots.}$$ Because this converges absolutely for $|x| \le 1$, we can rearrange the terms into a sum of pieces of the form $$D_{m,t}p(x) = p_t + p_{t+m}x^m + \cdots + p_{t + nm}x^{nm} + \cdots$$ for $t=0, 1, \ldots, m-1$. The power series of the functions $x^t D_{m,t}p$ consist of every $m^\text{th}$ term of the series of $p$ starting with the $t^\text{th}$: this is sometimes called a decimation of $p$. Google searches presently don't turn up much useful information on decimations, so for completeness, here's a derivation of a formula. Let $\omega$ be any primitive $m^\text{th}$ root of unity; for instance, take $\omega = \exp(2 i \pi / m)$. Then it follows from $\omega^m=1$ and $\sum_{j=0}^{m-1}\omega^j = 0$ that $$x^t D_{m,t}p(x) = \frac{1}{m}\sum_{j=0}^{m-1} \omega^{t j} p(x/\omega^j).$$ To see this, note that the operator $x^t D_{m,t}$ is linear, so it suffices to check the formula on the basis $\{1, x, x^2, \ldots, x^n, \ldots \}$. Applying the right hand side to $x^n$ gives $$x^t D_{m,t}[x^n] = \frac{1}{m}\sum_{j=0}^{m-1} \omega^{t j} x^n \omega^{-nj}= \frac{x^n}{m}\sum_{j=0}^{m-1} \omega^{(t-n) j.}$$ When $t$ and $n$ differ by a multiple of $m$, each term in the sum equals $1$ and we obtain $x^n$. Otherwise, the terms cycle through powers of $\omega^{t-n}$ and these sum to zero. Whence this operator preserves all powers of $x$ congruent to $t$ modulo $m$ and kills all the others: it is precisely the desired projection. A formula for $q$ follows readily by changing the order of summation and recognizing one of the sums as geometric, thereby writing it in closed form: $$\eqalign{ q(x) &= \sum_{t=0}^{m-1} (D_{m,t}[p])(x) \\ &= \sum_{t=0}^{m-1} x^{-t} \frac{1}{m} \sum_{j=0}^{m-1} \omega^{t j} p(\omega^{-j}x ) \\ &= \frac{1}{m} \sum_{j=0}^{m-1} p(\omega^{-j}x) \sum_{t=0}^{m-1} \left(\omega^j/x\right)^t \\ &= \frac{x(1-x^{-m})}{m} \sum_{j=0}^{m-1} \frac{p(\omega^{-j}x)}{x-\omega^j}. }$$ For example, the pgf of a Poisson distribution of parameter $\lambda$ is $p(x) = \exp(\lambda(x-1))$. With $m=2$, $\omega=-1$ and the pgf of $2Y$ will be $$\eqalign{ q(x) &= \frac{x(1-x^{-2})}{2} \sum_{j=0}^{2-1} \frac{p((-1)^{-j}x)}{x-(-1)^j} \\ &= \frac{x-1/x}{2} \left(\frac{\exp(\lambda(x-1))}{x-1} + \frac{\exp(\lambda(-x-1))}{x+1}\right) \\ &= \exp(-\lambda) \left(\frac{\sinh (\lambda x)}{x}+\cosh (\lambda x)\right). }$$ One use of this approach is to compute moments of $X$ and $mY$. The value of the $k^\text{th}$ derivative of the pgf evaluated at $x=1$ is the $k^\text{th}$ factorial moment. The $k^\text{th}$ moment is a linear combination of the first $k$ factorial moments. 
Using these observations we find, for instance, that for a Poisson distributed $X$, its mean (which is the first factorial moment) equals $\lambda$, the mean of $2\lfloor(X/2)\rfloor$ equals $\lambda- \frac{1}{2} + \frac{1}{2} e^{-2\lambda}$, and the mean of $3\lfloor(X/3)\rfloor$ equals $\lambda -1+e^{-3 \lambda /2} \left(\frac{\sin \left(\frac{\sqrt{3} \lambda }{2}\right)}{\sqrt{3}}+\cos \left(\frac{\sqrt{3} \lambda}{2}\right)\right)$. The means for $m=1,2,3$ are shown in blue, red, and yellow, respectively, as functions of $\lambda$: asymptotically, the mean drops by $(m-1)/2$ compared to the original Poisson mean. Similar formulas for the variances can be obtained. (They get messy as $m$ rises and so are omitted. One thing they definitively establish is that when $m \gt 1$ no multiple of $Y$ is Poisson: it does not have the characteristic equality of mean and variance.) Here is a plot of the variances as a function of $\lambda$ for $m=1,2,3$: it is interesting that for larger values of $\lambda$ the variances increase. Intuitively, this is due to two competing phenomena: the floor function is effectively binning groups of values that originally were distinct; this must cause the variance to decrease. At the same time, as we have seen, the means are changing, too (because each bin is represented by its smallest value); this must cause a term equal to the square of the difference of means to be added back. The increase in variance for large $\lambda$ becomes larger with larger values of $m$. The behavior of the variance of $mY$ with $m$ is surprisingly complex. Let's end with a quick simulation (in R) showing what it can do. The plots show the difference between the variance of $m\lfloor X/m \rfloor$ and the variance of $X$ for Poisson distributed $X$ with various values of $\lambda$ ranging from $1$ through $5000$. In all cases the plots appear to have reached their asymptotic values at the right.

set.seed(17)
par(mfrow=c(3,4))
temp <- sapply(c(1,2,5,10,20,50,100,200,500,1000,2000,5000), function(lambda) {
  x <- rpois(20000, lambda)
  v <- sapply(1:floor(lambda + 4*sqrt(lambda)),
              function(m) var(floor(x/m)*m) - var(x))
  plot(v, type="l", xlab="", ylab="Increased variance",
       main=toString(lambda), cex.main=.85, col="Blue", lwd=2)
})
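As a quick sanity check on the $m=2$ mean formula (my addition, not part of the original answer), you can sum the Poisson pmf directly in R and compare:

lambda  <- 2.5
j       <- 0:200                                  # truncation is harmless; dpois is negligible beyond this
exact   <- sum(2 * floor(j/2) * dpois(j, lambda)) # E[ 2*floor(X/2) ] by direct summation
claimed <- lambda - 1/2 + exp(-2*lambda)/2        # value quoted in the text
all.equal(exact, claimed)                         # TRUE (up to floating point)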
{ "source": [ "https://stats.stackexchange.com/questions/35042", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13568/" ] }
35,071
Fitting a logistic regression using lme4 ends with Error in mer_finalize(ans) : Downdated X'X is not positive definite. A likely cause of this error is apparently rank deficiency. What is rank deficiency, and how should I address it?
Rank deficiency in this context says there is insufficient information contained in your data to estimate the model you desire. It stems from many origins. I'll talk here about modeling in a fairly general context, rather than explicitly logistic regression, but everything still applies to the specific context. The deficiency may stem from simply too little data. In general, you cannot uniquely estimate n parameters with less than n data points. That does not mean that all you need are n points, because if there is any noise in the process, you would get rather poor results. You need more data to help the algorithm to choose a solution that will represent all of the data, in a minimum error sense. This is why we use least squares tools. How much data do you need? I was always asked that question in a past life, and the answer was more than you have, or as much as you can get. :) Sometimes you may have more data than you need, but some (too many) points are replicates. Replication is GOOD in the sense that it helps to reduce the noise, but it does not help to increase numerical rank. Thus, suppose you have only two data points. You cannot estimate a unique quadratic model through the points. A million replicates of each point will still not allow you to fit more than a straight line, through what are still only effectively a pair of points. Essentially, replication does not add information content. All it does is decrease noise at locations where you already have information. Sometimes you have information in the wrong places. For example, you cannot fit a two dimensional quadratic model if all you have are points that all lie in a straight line in two dimensions. That is, suppose you have points scattered only along the line x = y in the plane, and you wish to fit a model for the surface z(x,y). Even with zillions of points (not even replicates), you will not have sufficient information to intelligently estimate more than a constant model. Amazingly, this is a common problem that I've seen in sampled data. The user wonders why they cannot build a good model. The problem is built into the very data they have sampled. Sometimes it is simply choice of model. This can be viewed as "not enough data", but from the other side. You wish to estimate a complicated model, but have provided insufficient data to do so. In all of the above instances the answer is to get more data, sampled intelligently from places that will provide information about the process that you currently lack. Design of experiments is a good place to start. However, even good data is sometimes inadequate, at least numerically so. (Why do bad things happen to good data?) The problem here may be model related. It may lie in nothing more than a poor choice of units. It may stem from the computer programming done to solve the problem. (Ugh! Where to start?) First, let's talk about units and scaling. Suppose I try to solve a problem where one variable is MANY orders of magnitude larger than another. For example, suppose I have a problem that involves my height and my shoe size. I'll measure my height in nanometers. So my height would be roughly 1.78 billion (1.78e9) nanometers. Of course, I'll choose to measure my shoe size in kilo-parsecs, so 9.14e-21 kilo-parsecs. When you do regression modeling, linear regression is all about linear algebra, which involves linear combinations of variables. The problem here is these numbers are different by hugely many orders of magnitude (and not even the same units.)
The mathematics will fail when a computer program tries to add and subtract numbers that vary by so many orders of magnitude (for a double precision number, that absolute limit is roughly 16 powers of 10.) The trick is usually to use common units, but on some problems even that is an issue when variables vary by too many orders of magnitude. More important is to scale your numbers to be similar in magnitude. Next, you may see problems with big numbers and small variation in those numbers. Thus, suppose you try to build a moderately high order polynomial model with data where your inputs all lie in the interval [1,2]. Squaring, cubing, etc., numbers that are on the order of 1 or 2 will cause no problems when working in double precision arithmetic. Alternatively, add 1e12 to every number. In theory, the mathematics will allow this. All it does is shift any polynomial model we build on the x-axis. It would have exactly the same shape, but be translated by 1e12 to the right. In practice, the linear algebra will fail miserably due to rank deficiency problems. You have done nothing but translate the data, but suddenly you start to see singular matrices popping up. Usually the comment made will be a suggestion to "center and scale your data". Effectively this says to shift and scale the data so that it has a mean near zero and a standard deviation that is roughly 1. That will greatly improve the conditioning of most polynomial models, reducing the rank deficiency issues. Other reasons for rank deficiency exist. In some cases it is built directly into the model. For example, suppose I provide the derivative of a function, can I uniquely infer the function itself? Of course not, as integration involves a constant of integration, an unknown parameter that is generally inferred by knowledge of the value of the function at some point. In fact, this sometimes arises in estimation problems too, where the singularity of a system is derived from the fundamental nature of the system under study. I surely left out a few of the many reasons for rank deficiency in a linear system, and I've prattled along for too long now. Hopefully I managed to explain those I covered in simple terms, and a way to alleviate the problem.
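To make the "information in the wrong places" and scaling points concrete, here is a small illustrative sketch of my own (the data and variable names are made up, and the exact symptoms depend on lm's tolerance settings):

set.seed(1)

## Information in the wrong places: points only along the line x = y
x <- runif(100); y <- x                           # perfectly collinear predictors
z <- 1 + 2*x + 3*y + rnorm(100, sd = 0.1)
coef(lm(z ~ x + y))                               # the coefficient for y comes back NA (aliased)

## Poor scaling/offset: the same curve translated far from the origin
t   <- seq(1, 2, length.out = 50)
w   <- 5 + 3*t - 2*t^2 + rnorm(50, sd = 0.1)
big <- t + 1e12                                   # huge offset swamps the variation in t
fit_bad  <- lm(w ~ big + I(big^2))                # typically rank-deficient: NA coefficients
fit_good <- lm(w ~ scale(big) + I(scale(big)^2))  # centring and scaling restores a usable fit
coef(fit_bad); coef(fit_good)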
{ "source": [ "https://stats.stackexchange.com/questions/35071", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/8207/" ] }
35,123
I was wondering what the difference between the variance and the standard deviation is. If you calculate the two values, it is clear that you get the standard deviation out of the variance, but what does that mean in terms of the distribution you are observing? Furthermore, why do you really need a standard deviation?
The standard deviation is the square root of the variance. The standard deviation is expressed in the same units as the mean is, whereas the variance is expressed in squared units, but for looking at a distribution, you can use either just so long as you are clear about what you are using. For example, a Normal distribution with mean = 10 and sd = 3 is exactly the same thing as a Normal distribution with mean = 10 and variance = 9.
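If it helps, a trivial numerical check in R (my addition) shows the relationship directly:

set.seed(42)
x <- rnorm(1000, mean = 10, sd = 3)
var(x)                           # sample variance, in squared units
sd(x)                            # sample standard deviation, in the same units as x
all.equal(sd(x), sqrt(var(x)))   # TRUE: the sd is just the square root of the variance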
{ "source": [ "https://stats.stackexchange.com/questions/35123", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13561/" ] }
35,276
My dataset is small (120 samples), but the number of features is large, varying from 1,000 to 200,000. Although I'm doing feature selection to pick a subset of features, it might still overfit. My first question is, how does an SVM handle overfitting, if at all? Secondly, as I study more about overfitting in the case of classification, I came to the conclusion that even datasets with a small number of features can overfit. If we do not have features correlated with the class label, overfitting takes place anyway. So I'm now wondering what the point of automatic classification is if we cannot find the right features for a class label. In the case of document classification, this would mean manually crafting a thesaurus of words that relate to the labels, which is very time consuming. I guess what I'm trying to say is, without hand-picking the right features it is very difficult to build a generalized model? Also, if the experimental results don't show that the results have low/no overfitting, they become meaningless. Is there a way to measure it?
In practice, the reason that SVMs tend to be resistant to over-fitting, even in cases where the number of attributes is greater than the number of observations, is that they use regularization. The key to avoiding over-fitting lies in careful tuning of the regularization parameter, $C$, and in the case of non-linear SVMs, careful choice of kernel and tuning of the kernel parameters. The SVM is an approximate implementation of a bound on the generalization error, that depends on the margin (essentially the distance from the decision boundary to the nearest pattern from each class), but is independent of the dimensionality of the feature space (which is why using the kernel trick to map the data into a very high dimensional space isn't such a bad idea as it might seem). So in principle SVMs should be highly resistant to over-fitting, but in practice this depends on the careful choice of $C$ and the kernel parameters. Sadly, over-fitting can also occur quite easily when tuning the hyper-parameters as well, which is my main research area, see G. C. Cawley and N. L. C. Talbot, Preventing over-fitting in model selection via Bayesian regularisation of the hyper-parameters, Journal of Machine Learning Research, volume 8, pages 841-861, April 2007. ( www ) and G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010. ( www ) Both of those papers use kernel ridge regression, rather than the SVM, but the same problem arises just as easily with SVMs (also similar bounds apply to KRR, so there isn't that much to choose between them in practice). So in a way, SVMs don't really solve the problem of over-fitting, they just shift the problem from model fitting to model selection. It is often a temptation to make life a bit easier for the SVM by performing some sort of feature selection first. This generally makes matters worse, as unlike the SVM, feature selection algorithms tend to exhibit more over-fitting as the number of attributes increases. Unless you want to know which are the informative attributes, it is usually better to skip the feature selection step and just use regularization to avoid over-fitting the data. In short, there is no inherent problem with using an SVM (or other regularised model such as ridge regression, LARS, Lasso, elastic net etc.) on a problem with 120 observations and thousands of attributes, provided the regularisation parameters are tuned properly.
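As a purely illustrative sketch of what "careful tuning of $C$ and the kernel parameters" can look like in practice (my addition; it assumes the e1071 package and a data frame dat with a factor class label y, which are not from the question), one might cross-validate over a grid:

library(e1071)

set.seed(1)
tuned <- tune.svm(y ~ ., data = dat,
                  gamma = 10^(-5:-1),   # RBF kernel parameter
                  cost  = 10^(-1:3))    # regularisation parameter C
summary(tuned)                          # cross-validated error over the grid
best <- tuned$best.model                # SVM refitted with the chosen C and gamma

## Caution: the CV error above was used to choose C and gamma, so it is an
## optimistic estimate of generalisation error; evaluate the final model on data
## (or with an outer CV loop) not used anywhere in the tuning.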
{ "source": [ "https://stats.stackexchange.com/questions/35276", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13420/" ] }
35,319
I am new to Independent Component Analysis (ICA) and have just a rudimentary understanding of the method. It seems to me that ICA is similar to Factor Analysis (FA) with one exception: ICA assumes that the observed random variables are a linear combination of independent components/factors that are non-Gaussian, whereas the classical FA model assumes that the observed random variables are a linear combination of correlated, Gaussian components/factors. Is the above accurate?
FA, PCA, and ICA, are all 'related', in as much as all three of them seek basis vectors that the data is projected against, such that you maximize insert-criteria-here. Think of the basis vectors as just encapsulating linear combinations. For example, lets say your data matrix $\mathbf Z$ was a $2$ x $N$ matrix, that is, you have two random variables, and $N$ observations of them each. Then lets say you found a basis vector of $\mathbf w = \begin{bmatrix}0.1 \\-4 \end{bmatrix}$. When you extract (the first) signal, (call it the vector $\mathbf y$), it is done as so: $$ \mathbf {y = w^{\mathrm T}Z} $$ This just means "Multiply 0.1 by the first row of your data, and subtract 4 times the second row of your data". Then this gives $\mathbf y$, which is of course a $1$ x $N$ vector that has the property that you maximized its insert-criteria-here. So what are those criteria? Second-Order Criteria: In PCA, you are finding basis vectors that 'best explain' the variance of your data. The first (ie highest ranked) basis vector is going to be one that best fits all the variance from your data. The second one also has this criterion, but must be orthogonal to the first, and so on and so forth. (Turns out those basis vectors for PCA are nothing but the eigenvectors of your data's covariance matrix). In FA, there is difference between it and PCA, because FA is generative, whereas PCA is not. I have seen FA as being described as 'PCA with noise', where the 'noise' are called 'specific factors'. All the same, the overall conclusion is that PCA and FA are based on second-order statistics, (covariance), and nothing above. Higher Order Criteria: In ICA, you are again finding basis vectors, but this time, you want basis vectors that give a result, such that this resulting vector is one of the independent components of the original data. You can do this by maximization of the absolute value of normalized kurtosis - a 4th order statistic. That is, you project your data on some basis vector, and measure the kurtosis of the result. You change your basis vector a little, (usually through gradient ascent), and then measure the kurtosis again, etc etc. Eventually you will happen unto a basis vector that gives you a result that has the highest possible kurtosis, and this is your independent component. The top diagram above can help you visualize it. You can clearly see how the ICA vectors correspond to the axes of the data, (independent of each other), whereas the PCA vectors try to find directions where variance is maximized. (Somewhat like resultant). If in the top diagram the PCA vectors look like they almost correspond to the ICA vectors, that is just coincidental. Here is another instance on different data and mixing matrix where they are very different. ;-)
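To see the difference concretely, here is a small sketch of my own (it assumes the fastICA package is installed; it is an illustration, not part of the answer):

library(fastICA)
set.seed(1)

S <- cbind(runif(1000, -1, 1), runif(1000, -1, 1))  # two independent, non-Gaussian sources
A <- matrix(c(1, 1, 0.5, 2), 2, 2)                  # mixing matrix
X <- S %*% A                                        # observed (mixed) data

pca <- prcomp(X)                # basis vectors of maximal variance
ica <- fastICA(X, n.comp = 2)   # basis sought by maximising non-Gaussianity

pca$rotation                    # PCA loadings (eigenvectors of the covariance matrix)
ica$W                           # ICA unmixing matrix
cor(ica$S)                      # recovered sources are essentially uncorrelated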
{ "source": [ "https://stats.stackexchange.com/questions/35319", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13690/" ] }
35,510
I have produced generalized additive models for deforestation. To account for spatial-autocorrelation, I have included latitude and longitude as a smoothed, interaction term (i.e. s(x,y)). I've based this on reading many papers where the authors say 'to account for spatial autocorrelation, coordinates of points were included as smoothed terms' but these have never explained why this actually accounts for it. It's quite frustrating. I've read all the books I can find on GAMs in the hope of finding an answer, but most (e.g. Generalized Additive Models, an Introduction with R, S.N. Wood) just touch on the subject without explaining. I'd really appreciate it if someone could explain WHY the inclusion of latitude and longitude accounts for spatial autocorrelation, and what 'accounting' for it really means - is it simply enough to include it in the model, or should you compare a model with s(x,y) in and a model without? And does the deviance explained by the term indicate the extent of spatial autocorrelation?
The main issue in any statistical model is the assumptions that underlie any inference procedure. In the sort of model you describe, the residuals are assumed independent. If they have some spatial dependence and this is not modelled in the systematic part of the model, the residuals from that model will also exhibit spatial dependence, or in other words they will be spatially autocorrelated. Such dependence would invalidate the theory that produces p-values from test statistics in the GAM, for example; you can't trust the p-values because they were computed assuming independence. You have two main options for handling such data: i) model the spatial dependence in the systematic part of the model, or ii) relax the assumption of independence and estimate the correlation between residuals. Option i) is what is being attempted by including a smooth of the spatial locations in the model. Option ii) requires estimation of the correlation matrix of the residuals, often during model fitting, using a procedure like generalised least squares. How well either of these approaches deals with the spatial dependence will depend upon the nature and complexity of the spatial dependence and how easily it can be modelled. In summary, if you can model the spatial dependence between observations then the residuals are more likely to be independent random variables and therefore not violate the assumptions of any inferential procedure.
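As a sketch of option i) in mgcv (my own illustration; the data frame and variable names below are assumptions, not taken from the question):

library(mgcv)

## Hypothetical data frame dat: binary deforestation indicator 'defor', a covariate
## 'cover', and spatial coordinates 'x' and 'y'.
m0 <- gam(defor ~ s(cover),           data = dat, family = binomial, method = "REML")
m1 <- gam(defor ~ s(cover) + s(x, y), data = dat, family = binomial, method = "REML")

AIC(m0, m1)     # does adding the spatial smooth improve the model?
summary(m1)     # covariate effect after accounting for the smooth spatial trend

## Rough diagnostic: residuals from m1 should show much weaker spatial structure
## than those from m0 (e.g. inspect a variogram of the residuals, or map them).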
{ "source": [ "https://stats.stackexchange.com/questions/35510", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13130/" ] }
35,591
What is the difference between data 'Normalization' and data 'Scaling'? Until now I thought both terms referred to the same process, but now I realize there is something more that I don't know/understand. Also, if there is a difference between Normalization and Scaling, when should we use Normalization but not Scaling and vice versa? Please elaborate with some examples.
I am not aware of an "official" definition, and even if there is one, you shouldn't trust it, as you will see the terms being used inconsistently in practice. This being said, scaling in statistics usually means a linear transformation of the form $f(x) = ax+b$. Normalizing can either mean applying a transformation so that your transformed data are roughly normally distributed, or it can simply mean putting different variables on a common scale. Standardizing, which means subtracting the mean and dividing by the standard deviation, is an example of the latter usage. As you may see, it's also an example of scaling. An example of the former would be taking the log of lognormally distributed data. But what you should take away is that when you read these terms you should look for a more precise description of what the author did. Sometimes you can get it from the context.
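A small R illustration of these usages (my own example):

set.seed(7)
x <- rlnorm(500)                 # skewed (lognormal) data

x_scaled <- 2 * x + 5            # scaling: a linear transformation a*x + b
x_std    <- as.numeric(scale(x)) # standardizing: (x - mean(x)) / sd(x), a special case of scaling
x_norm   <- log(x)               # "normalizing" toward a Gaussian shape: log of lognormal data

c(mean = mean(x_std), sd = sd(x_std))  # roughly 0 and 1
hist(x_norm)                           # roughly bell-shaped after the log transform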
{ "source": [ "https://stats.stackexchange.com/questions/35591", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/8924/" ] }
35,634
How can one prove that the radial basis function $k(x, y) = \exp\left(-\frac{\|x-y\|^2}{2\sigma^2}\right)$ is a kernel? As far as I understand, in order to prove this we have to prove either of the following: For any set of vectors $x_1, x_2, \ldots, x_n$ the matrix $K(x_1, x_2, \ldots, x_n) = \big(k(x_i, x_j)\big)_{n \times n}$ is positive semidefinite. A mapping $\Phi$ can be exhibited such that $k(x, y) = \langle\Phi(x), \Phi(y)\rangle$. Any help?
Zen used method 1. Here is method 2: Map $x$ to a spherically symmetric Gaussian distribution centered at $x$ in the Hilbert space $L^2$. The standard deviation and a constant factor have to be tweaked for this to work exactly. For example, in one dimension, $$ \int_{-\infty}^\infty \frac{\exp[-(x-z)^2/(2\sigma^2)]}{\sqrt{2 \pi} \sigma} \frac{\exp[-(y-z)^2/(2 \sigma^2)]}{\sqrt{2 \pi} \sigma} dz = \frac{\exp [-(x-y)^2/(4 \sigma^2)]}{2 \sqrt \pi \sigma}. $$ So, use a standard deviation of $\sigma/\sqrt 2$ and scale the Gaussian distribution to get $k(x,y) = \langle \Phi(x), \Phi(y)\rangle$. This last rescaling occurs because the $L^2$ norm of a normal distribution is not $1$ in general.
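This is not a proof, but as a numerical illustration (my addition) you can check in R that RBF Gram matrices have no negative eigenvalues:

set.seed(3)
n <- 50; sigma <- 1.5
X <- matrix(rnorm(n * 5), n, 5)        # 50 random points in R^5

D2 <- as.matrix(dist(X))^2             # squared Euclidean distances ||x_i - x_j||^2
K  <- exp(-D2 / (2 * sigma^2))         # RBF Gram matrix

min(eigen(K, symmetric = TRUE)$values) # non-negative (up to rounding): K is positive semidefinite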
{ "source": [ "https://stats.stackexchange.com/questions/35634", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4337/" ] }
35,692
I'm reading "The Drunkard's Walk" now and cannot understand one story from it. Here it goes: Imagine that George Lucas makes a new Star Wars film and in one test market decides to perform a crazy experiment. He releases the identical film under two titles: "Star Wars: Episode A" and "Star Wars: Episode B". Each film has its own marketing campaign and distribution schedule, with the corresponding details identical except that the trailers and ads for one film say "Episode A" and those for the other, "Episode B". Now we make a contest out of it. Which film will be more popular? Say we look at the first 20,000 moviegoers and record the film they choose to see (ignoring those die-hard fans who will go to both and then insist there were subtle but meaningful differences between the two). Since the films and their marketing campaigns are identical, we can mathematically model the game this way: Imagine lining up all the viewers in a row and flipping a coin for each viewer in turn. If the coin lands heads up, he or she sees Episode A; if the coin lands tails up, it’s Episode B. Because the coin has an equal chance of coming up either way, you might think that in this experimental box office war each film should be in the lead about half the time. But the mathematics of randomness says otherwise: the most probable number of changes in the lead is 0, and it is 88 times more probable that one of the two films will lead through all 20,000 customers than it is that, say, the lead continuously seesaws" I, probably incorrectly, attribute this to a plain Bernoulli trials problem, and must say I fail to see why the leader won't seesaw on average! Can anyone explain?
Here is some R code to simulate the George Lucas experiment:

B <- 20000
steps <- 2*rbinom(B, 1, 0.5) - 1
rw <- cumsum(steps)
ts.plot(rw, xlab="Number of customers", ylab="Difference")

Running it, we get pictures like these, where the difference in sold tickets between A and B is on the y-axis. Next, we run $10,000$ such simulated George Lucas experiments. For each experiment, we compute the proportion of time spent $\geq 0$, i.e. the proportion of the lined-up viewers for which the number of tickets sold to A is greater than or equal to the number of tickets sold to B. Intuitively, you'd say that this proportion should be roughly $1/2$. Here is a histogram of the results: the proportion is $1/2$ on average in the sense that the expected value is $1/2$, but $1/2$ is an unlikely value compared to values close to $0$ or $1$. For most experiments, the differences are either positive or negative most of the time! The red curve is the density function of the arcsine distribution, also known as the $\mbox{Beta}(1/2,1/2)$ distribution. What is illustrated in the above picture is a theorem known as the first arcsine law for random walks, which says that as the number of steps of the simple symmetric random walk approaches infinity, the distribution of the proportion of time spent above $0$ tends to the arcsine distribution. A standard reference for this result is Section III.4 of An Introduction to Probability Theory and Its Applications, Vol. 1, by William Feller. The R code for the simulation study is

prop <- vector(length=10000)
for(i in 1:10000) {
  steps <- 2*rbinom(B, 1, 0.5) - 1
  rw <- cumsum(steps)
  prop[i] <- sum(rw >= 0)/B
}
hist(prop, freq=FALSE, xlab="Proportion of time spent above 0",
     main="George Lucas experiment")
curve(dbeta(x, 1/2, 1/2), 0, 1, col=2, add=TRUE)
{ "source": [ "https://stats.stackexchange.com/questions/35692", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9013/" ] }
35,711
Is there a Box-Cox like transformation for independent variables? That is, a transformation that optimizes the $x$ variable so that the y~f(x) will make a more reasonable fit for a linear model? If so, is there a function to perform this with R ?
John Tukey advocated his " three point method " for finding re-expressions of variables to linearize relationships. I will illustrate with an exercise from his book, Exploratory Data Analysis . These are mercury vapor pressure data from an experiment in which temperature was varied and vapor pressure was measured. pressure <- c(0.0004, 0.0013, 0.006, 0.03, 0.09, 0.28, 0.8, 1.85, 4.4, 9.2, 18.3, 33.7, 59, 98, 156, 246, 371, 548, 790) # mm Hg temperature <- seq(0, 360, 20) # Degrees C The relation is strongly nonlinear: see the left panel in the illustration. Because this is an exploratory exercise, we expect it to be interactive. The analyst is asked to begin by identifying three "typical" points in the plot : one near each end and one in the middle. I have done so here and marked them in red. (When I first did this exercise long ago, I used a different set of points but arrived at the same results.) In the three point method, one searches--by brute force or otherwise--for a Box-Cox transformation that when applied to one of the coordinates--either y or x--will (a) place the typical points approximately on a line and (b) uses a "nice" power, usually chosen from a "ladder" of powers that might be interpretable by the analyst. For reasons that will become apparent later, I have extended the Box-Cox family by allowing an "offset" so that the transformations are in the form $$x \to \frac{(x + \alpha)^\lambda - 1}{\lambda}.$$ Here's a quick and dirty R implementation. It first finds an optimal $(\lambda,\alpha)$ solution, then rounds $\lambda$ to the nearest value on the ladder and, subject to that restriction, optimizes $\alpha$ (within reasonable limits). It's incredibly quick because all the calculations are based on just those three typical points out of the original dataset. (You could do them with pencil and paper, even, which is exactly what Tukey did.) box.cox <- function(x, parms=c(1,0)) { lambda <- parms[1] offset <- parms[2] if (lambda==0) log(x+offset) else ((x+offset)^lambda - 1)/lambda } threepoint <- function(x, y, ladder=c(1, 1/2, 1/3, 0, -1/2, -1)) { # x and y are length-three samples from a dataset. dx <- diff(x) f <- function(parms) (diff(diff(box.cox(y, parms)) / dx))^2 fit <- nlm(f, c(1,0)) parms <- fit$estimate #$ lambda <- ladder[which.min(abs(parms[1] - ladder))] if (lambda==0) offset = 0 else { do <- diff(range(y)) offset <- optimize(function(x) f(c(lambda, x)), c(max(-min(x), parms[2]-do), parms[2]+do))$minimum } c(lambda, offset) } When the three-point method is applied to the pressure (y) values in the mercury vapor dataset, we obtain the middle panel of the plots. data <- cbind(temperature, pressure) n <- dim(data)[1] i3 <- c(2, floor((n+1)/2), n-1) parms <- threepoint(temperature[i3], pressure[i3]) y <- box.cox(pressure, parms) In this case, parms turns out to equal $(0,0)$: the method elects to log-transform the pressure. We have reached a point analogous to the context of the question: for whatever reason (usually to stabilize residual variance), we have re-expressed the dependent variable, but we find that the relation with an independent variable is nonlinear. So now we turn to re-expressing the independent variable in an effort to linearize the relation. 
This is done in the same way, merely reversing the roles of x and y:

parms <- threepoint(y[i3], temperature[i3])
x <- box.cox(temperature, parms)

The values of parms for the independent variable (temperature) are found to be $(-1, 253.75)$: in other words, we should express the temperature as degrees Celsius above $-254$C and use its reciprocal (the $-1$ power). (For technical reasons, the Box-Cox transformation further adds $1$ to the result.) The resulting relation is shown in the right panel. By now, anybody with the least science background has recognized that the data are "telling" us to use absolute temperatures--where the offset is $273$ instead of $254$--because those will be physically meaningful. (When the last plot is re-drawn using an offset of $273$ instead of $254$, there is little visible change. A physicist would then label the x-axis with $1/(1-x)$: that is, reciprocal absolute temperature.) This is a nice example of how statistical exploration needs to interact with understanding of the subject of investigation. In fact, reciprocal absolute temperatures show up all the time in physical laws. Consequently, using simple EDA methods alone to explore this century-old, simple dataset, we have rediscovered the Clausius-Clapeyron relation: the logarithm of the vapor pressure is a linear function of the reciprocal absolute temperature. Not only that: we have a not very bad estimate of absolute zero ($-254$ degrees C); from the slope of the righthand plot we can calculate the specific enthalpy of vaporization; and--as it turns out--a careful analysis of the residuals identifies an outlier (the value at a temperature of $0$ degrees C), shows us how the enthalpy of vaporization varies (very slightly) with temperature (thereby violating the Ideal Gas Law), and ultimately can give us accurate information about the effective radius of the mercury gas molecules! All that from 19 data points and some basic skills in EDA.
{ "source": [ "https://stats.stackexchange.com/questions/35711", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/253/" ] }
35,883
I don't know if it's just me, but I am very skeptical of statistics in general. I can understand it in dice games, poker games, etc. Very small, simple, mostly self-contained repeated games are fine. For example, a coin landing on its edge is small enough to accept the probability that landing heads or tails is ~50%. Playing a $10 game of poker aiming for a 95% win is fine. But what if your entire life savings + more is dependent on you hitting a win or not? How would knowing that you'd win in 95% of the time in that situation will help me at all? Expected value doesn't help much there. Other examples include a life-threatening surgery. How does that help knowing that it is 51% survival rate versus 99% survival rate given existing data? In both cases, I don't think it will matter to me what the doctor tells me, and I would go for it. If actual data is 75%, he might as well tell me (barring ethics and law), that there is a 99.99999% chance of survival so I'd feel better. In other words, existing data doesn't matter except binomially. Even then, it doesn't matter if there is a 99.99999% survival rate, if I end up dying from it. Also, earthquake probability. It doesn't matter if a strong earthquake happened every x (where x > 100) years on average. I have no idea if an earthquake will happen ever in my lifetime. So why is it even useful information? A less serious example, say, 100% of the places I've been to that I love are in the Americas, indifferent to 100% of the places I've been to in Europe, and hate 100% of the places that I have been to in Asia. Now, that by no means mean that I wouldn't find a place that I love in Asia on my next trip or hate in Europe or indifferent in America, just by the very nature that the statistics doesn't capture all of the information I need, and I probably can never capture all of the information I need, even if I have traveled to over x% of all of those continents. Just because there are unknowns in the 1-x% of those continents that I haven't been to. (Feel free to replace the 100% with any other percentage). I understand that there is no way to brute force everything and that you have to rely on statistics in many situations, but how can we believe that statistics are helpful in our one shot situation, especially when statistics basically do not extrapolate to outlier events? Any insights to get over my skepticism of statistics?
First I think that you may be confusing "statistics" meaning a collection of numbers or other facts describing a group or situation, and "statistics" meaning the science of using data and information to understand the world in the face of variation (others may be able to improve on my definitions). Statisticians use both senses of the word, so it is not surprising when people mix them up. Statistics (the science) is a lot about choosing strategies and choosing the best strategy even if we only get to apply it once. Sometimes when I (and others) teach probability we use the classic Monty Hall problem (3 doors, 2 goats, 1 car) to motivate it, and we show how we can estimate probabilities by playing the game a bunch of times (not for prizes), and we can see that the "switch" strategy wins 2/3 of the time and the "stay" strategy only wins 1/3 of the time. Now if we had the opportunity to play the game a single time, we would know some things about which strategy gives a better chance of winning. The surgery example is similar: you will only have the surgery (or not have the surgery) once, but don't you want to know which strategy benefits more people? If your choices are surgery with some chance greater than 0% of survival or no surgery and 0% of survival, then yes, there is little difference between the surgery having 51% survival and 99.9% survival. But what if there are other options as well? Say you can choose between surgery, doing nothing (which has 25% survival), or a change of diet and exercise which has 75% survival (but requires effort on your part); now wouldn't you care whether the surgery option has 51% vs. 99% survival? Also consider the doctor: he will be doing more than just your surgery. If surgery has 99.9% survival then he has no reason to consider alternatives, but if it only has 51% survival then, while it may be the best choice today, he should be looking for other alternatives that increase that survival. Yes, even with 90% survival he will lose some patients, but which strategy gives him the best chance of saving the most patients? This morning I wore my seat belt while driving (my usual strategy), but did not get in any accidents, so was my strategy a waste of time? If I knew when I would get in an accident then I could save time by only putting on the seat belt on those occasions and not on others. But I don't know when I will be in an accident, so I will stick with my wear-the-seat-belt strategy because I believe it will give me the best chance if I ever am in an accident, even if that means wasting a bit of time and effort in the high percentage (hopefully 100%) of times that there is no accident.
{ "source": [ "https://stats.stackexchange.com/questions/35883", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13919/" ] }
35,893
For plot 1, I can test the association between x and y by doing a simple correlation. For plot 2, where the relationship is nonlinear yet there is a clear relation between x and y, how can I test the association and label its nature?
...the relationship is nonlinear yet there is a clear relation between x and y, how can I test the association and label its nature? One way of doing this would be to fit $y$ as a semi-parametrically estimated function of $x$ using, for example, a generalized additive model and testing whether or not that functional estimate is constant, which would indicate no relationship between $y$ and $x$. This approach frees you from having to do polynomial regression and making sometimes arbitrary decisions about the order of the polynomial, etc. Specifically, if you have observations, $(Y_i, X_i)$, you could fit the model: $$ E(Y_i | X_i) = \alpha + f(X_i) + \varepsilon_i $$ and test the hypothesis $H_{0} : f(x) = 0, \ \forall x$. In R , you can do this using the gam() function. If y is your outcome and x is your predictor, you could type: library(mgcv) g <- gam(y ~ s(x)) Typing summary(g) will give you the result of the hypothesis test above. As far as characterizing the nature of the relationship, this would be best done with a plot. One way to do this in R (assuming the code above has already been entered) plot(g,scheme=2) If your response variable is discrete (e.g. binary), you can accommodate that within this framework by fitting a logistic GAM (in R , you'd add family=binomial to your call to gam ). Also, if you have multiple predictors, you can include multiple additive terms (or ordinary linear terms), or fit multivariable functions, e.g. $f(x,z)$ if you had predictors x, z . The complexity of the relationship is automatically selected by cross validation if you use the default methods, although there is a lot of flexibility here - see the gam help file if interested.
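For example (my own toy data), on a clearly nonlinear relationship the smooth term comes out highly significant:

library(mgcv)
set.seed(11)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)  # nonlinear signal plus noise

g <- gam(y ~ s(x))
summary(g)           # tiny approximate p-value for s(x); edf > 1 indicates nonlinearity
plot(g, scheme = 2)  # visualise the estimated (nonlinear) relationship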
{ "source": [ "https://stats.stackexchange.com/questions/35893", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11899/" ] }
35,940
This question is in response to an answer given by @Greg Snow in regards to a question I asked concerning power analysis with logistic regression and SAS Proc GLMPOWER . If I am designing an experiment and will analze the results in a factorial logistic regression, how can I use simulation ( and here ) to conduct a power analysis? Here is a simple example where there are two variables, the first takes on three possible values {0.03, 0.06, 0.09} and the second is a dummy indicator {0,1}. For each we estimate the response rate for each combination (# of responders / number of people marketed to). Further, we wish to have 3 times as many of the first combination of factors as the others (which can be considered equal) because this first combination is our tried and true version. This is a setup like given in the SAS course mentioned in the linked question. The model that will be used to analyze the results will be a logistic regression, with main effects and interaction (response is 0 or 1). mod <- glm(response ~ Var1 + Var2 + I(Var1*Var2)) How can I simulate a data set to use with this model to conduct a power analysis? When I run this through SAS Proc GLMPOWER (using STDDEV =0.05486016 which corresponds to sqrt(p(1-p)) where p is the weighted average of the shown response rates): data exemplar; input Var1 $ Var2 $ response weight; datalines; 3 0 0.0025 3 3 1 0.00395 1 6 0 0.003 1 6 1 0.0042 1 9 0 0.0035 1 9 1 0.002 1; run; proc glmpower data=exemplar; weight weight; class Var1 Var2; model response = Var1 | Var2; power power=0.8 ntotal=. stddev=0.05486016; run; Note: GLMPOWER only will use class (nominal) variables so 3, 6, 9 above are treated as characters and could have been low, mid and high or any other three strings. When the real analysis is conducted, Var1 will be used a numeric (and we will include a polynomial term Var1*Var1) to account for any curvature. The output from SAS is So we see that we need 762,112 as our sample size (Var2 main effect is the hardest to estimate) with power equal to 0.80 and alpha equal to 0.05. We would allocate these so that 3 times as many were the baseline combination (i.e. 0.375 * 762112) and the remainder just fall equally into the other 5 combinations.
Preliminaries: As discussed in the G*Power manual , there are several different types of power analyses, depending on what you want to solve for. (That is, $N$, the effect size $ES$, $\alpha$, and power exist in relation to each other; specifying any three of them will let you solve for the fourth.) in your description, you want to know the appropriate $N$ to capture the response rates you specified with $\alpha=.05$, and power = 80%. This is a-priori power . we can start with post-hoc power (determine power given $N$, response rates, & alpha) as this is conceptually simpler, and then move up In addition to @GregSnow's excellent post , another really great guide to simulation-based power analyses on CV can be found here: Calculating statistical power . To summarize the basic ideas: figure out the effect you want to be able to detect generate N data from that possible world run the analysis you intend to conduct over those faux data store whether the results are 'significant' according to your chosen alpha repeat many ($B$) times & use the % 'significant' as an estimate of (post-hoc) power at that $N$ to determine a-priori power, search over possible $N$'s to find the value that yields your desired power Whether you will find significance on a particular iteration can be understood as the outcome of a Bernoulli trial with probability $p$ (where $p$ is the power). The proportion found over $B$ iterations allows us to approximate the true $p$. To get a better approximation, we can increase $B$, although this will also make the simulation take longer. In R, the primary way to generate binary data with a given probability of 'success' is ?rbinom E.g. to get the number of successes out of 10 Bernoulli trials with probability p, the code would be rbinom(n=10, size=1, prob=p) , (you will probably want to assign the result to a variable for storage) you can also generate such data less elegantly by using ?runif , e.g., ifelse(runif(1)<=p, 1, 0) if you believe the results are mediated by a latent Gaussian variable, you could generate the latent variable as a function of your covariates with ?rnorm , and then convert them into probabilities with pnorm() and use those in your rbinom() code. You state that you will "include a polynomial term Var1*Var1) to account for any curvature". There is a confusion here; polynomial terms can help us account for curvature, but this is an interaction term--it will not help us in this way. Nonetheless, your response rates require us to include both squared terms and interaction terms in our model. Specifically, your model will need to include: $var1^2$, $var1*var2$, and $var1^2*var2$, beyond the basic terms. Although written in the context of a different question, my answer here: Difference between logit and probit models has a lot of basic information about these types of models. Just as there are different kinds of Type I error rates when there are multiple hypotheses (e.g., per-contrast error rate , familywise error rate , & per-family error rate ), so are there different kinds of power* (e.g., for a single pre-specified effect , for any effect , & for all effects ). You could also seek for the power to detect a specific combination of effects, or for the power of a simultaneous test of the model as a whole. My guess from your description of your SAS code is that it is looking for the latter. However, from your description of your situation, I am assuming you want to detect the interaction effects at a minimum. *reference: Maxwell, S.E. (2004). 
The persistence of underpowered studies in psychological research: causes, consequences, and remedies. Psychological Methods , 9 , 2 , pp. 147-163. your effects are quite small (not to be confused with the low response rates), so we will find it difficult to achieve good power. Note that, although these all sound fairly similar, they are very much not the same (e.g., it is very possible to get a significant model with no significant effects--discussed here: How can a regression be significant yet all predictors be non-significant? , or significant effects but where the model is not significant--discussed here: Significance of coefficients in linear regression: significant t-test vs non-significant F-statistic ), which will be illustrated below. For a different way to think about issues related to power, see my answer here: How to report general precision in estimating correlations within a context of justifying sample size. Simple post-hoc power for logistic regression in R: Let's say your posited response rates represent the true situation in the world, and that you had sent out 10,000 letters. What is the power to detect those effects? (Note that I am famous for writing "comically inefficient" code, the following is intended to be easy to follow rather than optimized for efficiency; in fact, it's quite slow.) set.seed(1) repetitions = 1000 N = 10000 n = N/8 var1 = c( .03, .03, .03, .03, .06, .06, .09, .09) var2 = c( 0, 0, 0, 1, 0, 1, 0, 1) rates = c(0.0025, 0.0025, 0.0025, 0.00395, 0.003, 0.0042, 0.0035, 0.002) var1 = rep(var1, times=n) var2 = rep(var2, times=n) var12 = var1**2 var1x2 = var1 *var2 var12x2 = var12*var2 significant = matrix(nrow=repetitions, ncol=7) startT = proc.time()[3] for(i in 1:repetitions){ responses = rbinom(n=N, size=1, prob=rates) model = glm(responses~var1+var2+var12+var1x2+var12x2, family=binomial(link="logit")) significant[i,1:5] = (summary(model)$coefficients[2:6,4]<.05) significant[i,6] = sum(significant[i,1:5]) modelDev = model$null.deviance-model$deviance significant[i,7] = (1-pchisq(modelDev, 5))<.05 } endT = proc.time()[3] endT-startT sum(significant[,1])/repetitions # pre-specified effect power for var1 [1] 0.042 sum(significant[,2])/repetitions # pre-specified effect power for var2 [1] 0.017 sum(significant[,3])/repetitions # pre-specified effect power for var12 [1] 0.035 sum(significant[,4])/repetitions # pre-specified effect power for var1X2 [1] 0.019 sum(significant[,5])/repetitions # pre-specified effect power for var12X2 [1] 0.022 sum(significant[,7])/repetitions # power for likelihood ratio test of model [1] 0.168 sum(significant[,6]==5)/repetitions # all effects power [1] 0.001 sum(significant[,6]>0)/repetitions # any effect power [1] 0.065 sum(significant[,4]&significant[,5])/repetitions # power for interaction terms [1] 0.017 So we see that 10,000 letters doesn't really achieve 80% power (of any sort) to detect these response rates. (I am not sufficiently sure about what the SAS code is doing to be able to explain the stark discrepancy between these approaches, but this code is conceptually straightforward--if slow--and I have spent some time checking it, and I think these results are reasonable.) Simulation-based a-priori power for logistic regression: From here the idea is simply to search over possible $N$'s until we find a value that yields the desired level of the type of power you are interested in. Any search strategy that you can code up to work with this would be fine (in theory). 
Given the $N$'s that are going to be required to capture such small effects, it is worth thinking about how to do this more efficiently. My typical approach is simply brute force, i.e. to assess each $N$ that I might reasonably consider. (Note however, that I would typically only consider a small range, and I'm typically working with very small $N$'s--at least compared to this.) Instead, my strategy here was to bracket possible $N$'s to get a sense of what the range of powers would be. Thus, I picked an $N$ of 500,000 and re-ran the code (initiating the same seed, n.b. this took an hour and a half to run). Here are the results: sum(significant[,1])/repetitions # pre-specified effect power for var1 [1] 0.115 sum(significant[,2])/repetitions # pre-specified effect power for var2 [1] 0.091 sum(significant[,3])/repetitions # pre-specified effect power for var12 [1] 0.059 sum(significant[,4])/repetitions # pre-specified effect power for var1X2 [1] 0.606 sum(significant[,5])/repetitions # pre-specified effect power for var12X2 [1] 0.913 sum(significant[,7])/repetitions # power for likelihood ratio test of model [1] 1 sum(significant[,6]==5)/repetitions # all effects power [1] 0.005 sum(significant[,6]>0)/repetitions # any effect power [1] 0.96 sum(significant[,4]&significant[,5])/repetitions # power for interaction terms [1] 0.606 We can see from this that the magnitude of your effects varies considerably, and thus your ability to detect them varies. For example, the effect of $var1^2$ is particularly difficult to detect, only being significant 6% of the time even with half a million letters. On the other hand, the model as a whole was always significantly better than the null model. The other possibilities are arrayed in between. Although most of the 'data' are thrown away on each iteration, a good bit of exploration is still possible. For example, we could use the significant matrix to assess the correlations between the probabilities of different variables being significant. I should note in conclusion, that due to the complexity and large $N$ entailed in your situation, this was not as simple as I had suspected / claimed in my initial comment. However, you can certainly get the idea for how this can be done in general, and the issues involved in power analysis, from what I've put here. HTH.
{ "source": [ "https://stats.stackexchange.com/questions/35940", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2040/" ] }
35,944
I had heard that there is a body of literature devoted to the following problem: You have a dataset and you produced a good predictive model for it. Now you have a different dataset, derived from different instruments or different data sources, but similar enough that you can hope to scale one to the other so that you can use the predictive model on it. Alternately, you do not have enough data to produce a model and hope that combining different datasets by scaling one to the other will help you reach the quantity of data you need. First off, what is this problem called, so that I can search for it more effectively? And does anyone know of a good recent survey of techniques for this, either in book or paper form? I currently have access to most academic journals, so those links work for me as well. Say I have dataset A that I have a model for, and dataset B occupies the same database schema but is from a different source with different factors that are not included in my feature set. Initially my intuition was to construct a Q-Q plot and fit lines (or curves) to features that I thought should be similar. If the way feature 1 from A increases is similar to the way feature 1 from B increases but with a constant factor, then fitting a line can reveal this factor. If the difference were exponential or logarithmic then I could scale using a fitted function. In this way I could constrain the way one variable increases to fit how another variable increases. However, this is just my intuition. I can certainly test it for overfitting, but since I had heard that there was a lot of literature devoted to this subject, it seemed as though I should learn a few ways in which I could question my assumptions. It would probably be good for me to review the literature. Does anyone know what the tag for this literature might be?
Preliminaries: As discussed in the G*Power manual , there are several different types of power analyses, depending on what you want to solve for. (That is, $N$, the effect size $ES$, $\alpha$, and power exist in relation to each other; specifying any three of them will let you solve for the fourth.) in your description, you want to know the appropriate $N$ to capture the response rates you specified with $\alpha=.05$, and power = 80%. This is a-priori power . we can start with post-hoc power (determine power given $N$, response rates, & alpha) as this is conceptually simpler, and then move up In addition to @GregSnow's excellent post , another really great guide to simulation-based power analyses on CV can be found here: Calculating statistical power . To summarize the basic ideas: figure out the effect you want to be able to detect generate N data from that possible world run the analysis you intend to conduct over those faux data store whether the results are 'significant' according to your chosen alpha repeat many ($B$) times & use the % 'significant' as an estimate of (post-hoc) power at that $N$ to determine a-priori power, search over possible $N$'s to find the value that yields your desired power Whether you will find significance on a particular iteration can be understood as the outcome of a Bernoulli trial with probability $p$ (where $p$ is the power). The proportion found over $B$ iterations allows us to approximate the true $p$. To get a better approximation, we can increase $B$, although this will also make the simulation take longer. In R, the primary way to generate binary data with a given probability of 'success' is ?rbinom E.g. to get the number of successes out of 10 Bernoulli trials with probability p, the code would be rbinom(n=10, size=1, prob=p) , (you will probably want to assign the result to a variable for storage) you can also generate such data less elegantly by using ?runif , e.g., ifelse(runif(1)<=p, 1, 0) if you believe the results are mediated by a latent Gaussian variable, you could generate the latent variable as a function of your covariates with ?rnorm , and then convert them into probabilities with pnorm() and use those in your rbinom() code. You state that you will "include a polynomial term Var1*Var1) to account for any curvature". There is a confusion here; polynomial terms can help us account for curvature, but this is an interaction term--it will not help us in this way. Nonetheless, your response rates require us to include both squared terms and interaction terms in our model. Specifically, your model will need to include: $var1^2$, $var1*var2$, and $var1^2*var2$, beyond the basic terms. Although written in the context of a different question, my answer here: Difference between logit and probit models has a lot of basic information about these types of models. Just as there are different kinds of Type I error rates when there are multiple hypotheses (e.g., per-contrast error rate , familywise error rate , & per-family error rate ), so are there different kinds of power* (e.g., for a single pre-specified effect , for any effect , & for all effects ). You could also seek for the power to detect a specific combination of effects, or for the power of a simultaneous test of the model as a whole. My guess from your description of your SAS code is that it is looking for the latter. However, from your description of your situation, I am assuming you want to detect the interaction effects at a minimum. *reference: Maxwell, S.E. (2004). 
The persistence of underpowered studies in psychological research: causes, consequences, and remedies. Psychological Methods , 9 , 2 , pp. 147-163. your effects are quite small (not to be confused with the low response rates), so we will find it difficult to achieve good power. Note that, although these all sound fairly similar, they are very much not the same (e.g., it is very possible to get a significant model with no significant effects--discussed here: How can a regression be significant yet all predictors be non-significant? , or significant effects but where the model is not significant--discussed here: Significance of coefficients in linear regression: significant t-test vs non-significant F-statistic ), which will be illustrated below. For a different way to think about issues related to power, see my answer here: How to report general precision in estimating correlations within a context of justifying sample size. Simple post-hoc power for logistic regression in R: Let's say your posited response rates represent the true situation in the world, and that you had sent out 10,000 letters. What is the power to detect those effects? (Note that I am famous for writing "comically inefficient" code, the following is intended to be easy to follow rather than optimized for efficiency; in fact, it's quite slow.) set.seed(1) repetitions = 1000 N = 10000 n = N/8 var1 = c( .03, .03, .03, .03, .06, .06, .09, .09) var2 = c( 0, 0, 0, 1, 0, 1, 0, 1) rates = c(0.0025, 0.0025, 0.0025, 0.00395, 0.003, 0.0042, 0.0035, 0.002) var1 = rep(var1, times=n) var2 = rep(var2, times=n) var12 = var1**2 var1x2 = var1 *var2 var12x2 = var12*var2 significant = matrix(nrow=repetitions, ncol=7) startT = proc.time()[3] for(i in 1:repetitions){ responses = rbinom(n=N, size=1, prob=rates) model = glm(responses~var1+var2+var12+var1x2+var12x2, family=binomial(link="logit")) significant[i,1:5] = (summary(model)$coefficients[2:6,4]<.05) significant[i,6] = sum(significant[i,1:5]) modelDev = model$null.deviance-model$deviance significant[i,7] = (1-pchisq(modelDev, 5))<.05 } endT = proc.time()[3] endT-startT sum(significant[,1])/repetitions # pre-specified effect power for var1 [1] 0.042 sum(significant[,2])/repetitions # pre-specified effect power for var2 [1] 0.017 sum(significant[,3])/repetitions # pre-specified effect power for var12 [1] 0.035 sum(significant[,4])/repetitions # pre-specified effect power for var1X2 [1] 0.019 sum(significant[,5])/repetitions # pre-specified effect power for var12X2 [1] 0.022 sum(significant[,7])/repetitions # power for likelihood ratio test of model [1] 0.168 sum(significant[,6]==5)/repetitions # all effects power [1] 0.001 sum(significant[,6]>0)/repetitions # any effect power [1] 0.065 sum(significant[,4]&significant[,5])/repetitions # power for interaction terms [1] 0.017 So we see that 10,000 letters doesn't really achieve 80% power (of any sort) to detect these response rates. (I am not sufficiently sure about what the SAS code is doing to be able to explain the stark discrepancy between these approaches, but this code is conceptually straightforward--if slow--and I have spent some time checking it, and I think these results are reasonable.) Simulation-based a-priori power for logistic regression: From here the idea is simply to search over possible $N$'s until we find a value that yields the desired level of the type of power you are interested in. Any search strategy that you can code up to work with this would be fine (in theory). 
Given the $N$'s that are going to be required to capture such small effects, it is worth thinking about how to do this more efficiently. My typical approach is simply brute force, i.e. to assess each $N$ that I might reasonably consider. (Note however, that I would typically only consider a small range, and I'm typically working with very small $N$'s--at least compared to this.) Instead, my strategy here was to bracket possible $N$'s to get a sense of what the range of powers would be. Thus, I picked an $N$ of 500,000 and re-ran the code (initiating the same seed, n.b. this took an hour and a half to run). Here are the results:

    sum(significant[,1])/repetitions      # pre-specified effect power for var1
    [1] 0.115
    sum(significant[,2])/repetitions      # pre-specified effect power for var2
    [1] 0.091
    sum(significant[,3])/repetitions      # pre-specified effect power for var12
    [1] 0.059
    sum(significant[,4])/repetitions      # pre-specified effect power for var1X2
    [1] 0.606
    sum(significant[,5])/repetitions      # pre-specified effect power for var12X2
    [1] 0.913
    sum(significant[,7])/repetitions      # power for likelihood ratio test of model
    [1] 1
    sum(significant[,6]==5)/repetitions   # all effects power
    [1] 0.005
    sum(significant[,6]>0)/repetitions    # any effect power
    [1] 0.96
    sum(significant[,4]&significant[,5])/repetitions  # power for interaction terms
    [1] 0.606

We can see from this that the magnitude of your effects varies considerably, and thus your ability to detect them varies. For example, the effect of $var1^2$ is particularly difficult to detect, only being significant 6% of the time even with half a million letters. On the other hand, the model as a whole was always significantly better than the null model. The other possibilities are arrayed in between. Although most of the 'data' are thrown away on each iteration, a good bit of exploration is still possible. For example, we could use the significant matrix to assess the correlations between the probabilities of different variables being significant. I should note in conclusion, that due to the complexity and large $N$ entailed in your situation, this was not as simple as I had suspected / claimed in my initial comment. However, you can certainly get the idea for how this can be done in general, and the issues involved in power analysis, from what I've put here. HTH.
{ "source": [ "https://stats.stackexchange.com/questions/35944", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13950/" ] }
35,955
I refer to this paper: Hayes JR, Groner JI. "Using multiple imputation and propensity scores to test the effect of car seats and seat belt usage on injury severity from trauma registry data." J Pediatr Surg. 2008 May;43(5):924-7. In this study, multiple imputation was performed to obtain 15 complete datasets. Propensity scores were then computed for each dataset. Then, for each observational unit, a record was chosen randomly from one of the 15 completed datasets (including the related propensity score), thereby creating a single final dataset which was then analysed by propensity score matching. My questions are: Is this a valid way to perform propensity score matching following multiple imputation? Are there alternative ways to do it? For context: In my new project, I aim to compare the effects of 2 treatment methods using propensity score matching. There is missing data and I intend to use the MICE package in R to impute missing values, then twang to do the propensity score matching, and then lme4 to analyse the matched data. Update 1: I have found this paper which takes a different approach: Mitra, Robin and Reiter, Jerome P. (2011) Propensity score matching with missing covariates via iterated, sequential multiple imputation [Working Paper] In this paper the authors compute propensity scores on all the imputed datasets and then pool them by averaging, which is in the spirit of multiple imputation using Rubin's rules for a point estimate - but is it really applicable for a propensity score? It would be really nice if anyone on CV could provide an answer with commentary on these 2 different approaches, and/or any others....
The first thing to say is that, for me, method 1 (sampling) seems to be without much merit - it is discarding the benefits of multiple imputation, and reduces to single imputation for each observation, as mentioned by Stas. I can't see any advantage in using it. There is an excellent discussion of the issues surrounding propensity score analysis with missing data in Hill (2004): Hill, J. "Reducing Bias in Treatment Effect Estimation in Observational Studies Suffering from Missing Data" ISERP Working Papers, 2004. It is downloadable from here. The paper considers two approaches to combining multiple imputation (and also other methods of dealing with missing data) with propensity scores:

- averaging of propensity scores after multiple imputation, followed by causal inference (method 2 in your post above)
- causal inference using each set of propensity scores from the multiple imputations, followed by averaging of the causal estimates

Additionally, the paper considers whether the outcome should be included as a predictor in the imputation model. Hill asserts that while multiple imputation is preferred to other methods of dealing with missing data, in general, there is no a priori reason to prefer one of these techniques over the other. However, there may be reasons to prefer averaging the propensity scores, particularly when using certain matching algorithms. Hill did a simulation study in the same paper and found that averaging the propensity scores prior to causal inference, when including the outcome in the imputation model, produced the best results in terms of mean squared error, and averaging the scores first, but without the outcome in the imputation model, produced the best results in terms of average bias (absolute difference between estimated and true treatment effect). Generally, it is advisable to include the outcome in the imputation model (for example see here). So it would seem that your method 2 is the way to go.
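If it helps to see what the "average the propensity scores" idea looks like in code, here is a rough R sketch (my own, not from either paper). It uses mice for the imputations and a plain logistic model plus the Matching package for the matching step, rather than twang, purely to keep the example short; the data frame dat, the treatment indicator treat, and the covariates x1-x3 are entirely hypothetical placeholders.

    library(mice)       # multiple imputation
    library(Matching)   # propensity score matching

    # dat, treat, x1-x3 are hypothetical; treat is assumed fully observed
    imp <- mice(dat, m = 15, seed = 1)

    # 1. estimate a propensity score within each completed dataset
    ps.mat <- sapply(1:15, function(i){
      d <- complete(imp, i)
      fitted(glm(treat ~ x1 + x2 + x3, data = d, family = binomial))
    })

    # 2. average the scores across the imputations (some authors prefer to
    #    average on the linear predictor / logit scale instead)
    ps.avg <- rowMeans(ps.mat)

    # 3. match on the averaged score; the matched sets then feed the outcome model
    m.out <- Match(Tr = dat$treat, X = ps.avg, M = 1, estimand = "ATT")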
{ "source": [ "https://stats.stackexchange.com/questions/35955", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11405/" ] }
35,956
Taleb's book "The Black Swan" was a New York Times best seller when it came out several years ago. The book is now in its second edition. After meeting with statisticians at a JSM (an annual statistical conference), Taleb toned down his criticism of statistics somewhat. But the thrust of the book is that statistics is not very useful because it relies on the normal distribution and very rare events: "Black Swans" don't have normal distributions. Do you think this is valid criticism? Is Taleb missing some important aspects of statistical modeling? Can rare events be predicted at least in the sense that probabilities of occurrences can be estimated?
I read the Black Swan a couple of years ago. The Black Swan idea is good and the attack on the ludic fallacy (seeing things as though they are dice games, with knowable probabilities) is good but statistics is outrageously misrepresented, with the central problem being the wrong claim that all statistics falls apart if variables are not normally distributed. I was sufficiently annoyed by this aspect to write Taleb the letter below: Dear Dr Taleb I recently read "The Black Swan". Like you, I am a fan of Karl Popper, and I found myself agreeing with much that is in it. I think your exposition of the ludic fallacy is basically sound, and draws attention to a real and common problem. However, I think that much of Part III lets your overall argument down badly, even to the point of possibly discrediting the rest of the book. This is a shame, as I think the arguments with regard to Black Swans and "unknown unknowns" stand on their merits without relying on some of the errors in Part III. The main issue I wish to point out - and seek your response on, particularly if I have misunderstood issues - is your misrepresentation of the field of applied statistics. In my judgement, chapters 14, 15 and 16 depend largely upon a straw man argument, misrepresenting statistics and econometrics. The field of econometrics that you describe is not the one that I was taught when I studied applied statistics, econometrics, and actuarial risk theory (at the Australian National University, but using texts that seemed pretty standard). The issues that you raise (such as the limitations of Gaussian distributions) are well and truly understood and taught, even at the undergraduate level. For example, you go to some lengths to show how income distribution does not follow a normal distribution, and present this as an argument against statistical practice in general. No competent statistician would ever claim that it does, and ways of dealing with this issue are well established. Just using techniques from the very most basic "first year econometrics" level, for example, transforming the variable by taking its logarithm would make your numerical examples look much less convincing. Such a transformation would in fact invalidate much of what you say, because then the variance of the original variable does increase as its mean increases. I am sure there are some incompetent econometricians who do OLS regressions etc with an untransformed response variable the way you say, but that just makes them incompetent and using techniques which are well established to be inappropriate. They would certainly have been failed even in undergraduate courses, which spend much time looking for more appropriate ways of modelling variables such as income, reflecting the actual observed (non-Gaussian) distribution. The family of Generalized Linear Models is one set of techniques developed in part to get around the problems you raise. Many of the exponential family of distributions (eg Gamma, Exponential, and Poisson distributions) are asymmetrical and have variance that increases as the centre of the distribution increases, getting around the problem you point out with using the Gaussian distribution. If this is still too limiting, it is possible to drop a pre-existing "shape" altogether and simply specify a relationship between the mean of a distribution and its variance (eg allowing the variance to increase proportionately to the square of the mean), using the "quasi-likelihood" method of estimation. 
Of course, you could argue that this form of modelling is still too simplistic and an intellectual trap that lulls us into thinking the future will be like the past. You may be correct, and I think the strength of your book is to make people like me consider this. But you need different arguments to those that you use in chapters 14-16. The great weight you place on the fact that the variance of the Gaussian distribution is constant regardless of its mean (which causes problems with scalability), for instance, is invalid. So is your emphasis on the fact that real-life distributions tend to be asymmetric rather than bell-curves. Basically, you have taken one over-simplification of the most basic approach to statistics (naïve modelling of raw variables as having Gaussian distributions) and shown, at great length (and correctly), the shortcomings of such an oversimplified approach. You then use this to make the leap to discrediting the whole field. This is either a serious lapse in logic, or a propaganda technique. It is unfortunate because it detracts from your overall argument, much of which (as I said) I found valid and persuasive. I would be interested to hear what you say in response. I doubt I am the first to have raised this issue. Yours sincerely, PE
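As a small illustration of the transformation/GLM point made in the letter (my own toy example, not part of it; the lognormal "income" data and group labels are invented), note how a log transformation or a Gamma GLM deals with the skew and the mean-variance relationship that a naive Gaussian model ignores:

    set.seed(42)
    # toy 'income' data: right-skewed, with spread that grows with the mean
    group  <- rep(c("A", "B"), each = 500)
    income <- rlnorm(1000, meanlog = ifelse(group == "A", 10, 10.5), sdlog = 0.8)

    fit.naive <- lm(income ~ group)                                 # the straw-man Gaussian fit
    fit.log   <- lm(log(income) ~ group)                            # model the log instead
    fit.gamma <- glm(income ~ group, family = Gamma(link = "log"))  # or let the variance grow with the mean

    # the naive fit's residual spread increases with the fitted values;
    # the other two models do not have that problem
    plot(fitted(fit.naive), resid(fit.naive))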
{ "source": [ "https://stats.stackexchange.com/questions/35956", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11032/" ] }
35,971
Or more so "will it be"? Big Data makes statistics and relevant knowledge all the more important but seems to underplay Sampling Theory. I've seen this hype around 'Big Data' and can't help wonder that "why" would I want to analyze everything ? Wasn't there a reason for "Sampling Theory" to be designed/implemented/invented/discovered? I don't get the point of analyzing the entire 'population' of the dataset. Just because you can do it doesn't mean you should (Stupidity is a privilege but you shouldn't abuse it :) So my question is this: Is it statistically relevant to analyze the entire data set? The best you could do would be to minimize error if you did sampling. But is the cost of minimizing that error really worth it? Is the "value of information" really worth the effort, time cost etc. that goes in analyzing big data over massively parallel computers? Even if one analyzes the entire population, the outcome would still be at best a guess with a higher probability of being right. Probably a bit higher than sampling (or would it be a lot more?) Would the insight gained from analyzing the population vs analyzing the sample differ widely? Or should we accept it as "times have changed"? Sampling as an activity could become less important given enough computational power :) Note: I'm not trying to start a debate but looking for an answer to understand the why big data does what it does (i.e. analyze everything) and disregard the theory of sampling (or it doesn't?)
In a word, yes. I believe there are still clear situations where sampling is appropriate, within and without the "big data" world, but the nature of big data will certainly change our approach to sampling, and we will use more datasets that are nearly complete representations of the underlying population. On sampling: Depending on the circumstances it will almost always be clear if sampling is an appropriate thing to do. Sampling is not an inherently beneficial activity; it is just what we do because we need to make tradeoffs on the cost of implementing data collection. We are trying to characterize populations and need to select the appropriate method for gathering and analyzing data about the population. Sampling makes sense when the marginal cost of a method of data collection or data processing is high. Trying to reach 100% of the population is not a good use of resources in that case, because you are often better off addressing things like non-response bias than making tiny improvements in the random sampling error. How is big data different? "Big data" addresses many of the same questions we've had for ages, but what's "new" is that the data collection happens off an existing, computer-mediated process, so the marginal cost of collecting data is essentially zero. This dramatically reduces our need for sampling. When will we still use sampling? If your "big data" population is the right population for the problem, then you will only employ sampling in a few cases: the need to run separate experimental groups, or if the sheer volume of data is too large to capture and process (many of us can handle millions of rows of data with ease nowadays, so the boundary here is getting further and further out). If it seems like I'm dismissing your question, it's probably because I've rarely encountered situations where the volume of the data was a concern in either the collection or processing stages, although I know many have. The situation that seems hard to me is when your "big data" population doesn't perfectly represent your target population, so the tradeoffs are more apples to oranges. Say you are a regional transportation planner, and Google has offered to give you access to its Android GPS navigation logs to help you. While the dataset would no doubt be interesting to use, the population would probably be systematically biased against the low-income, the public-transportation users, and the elderly. In such a situation, traditional travel diaries sent to a random household sample, although costlier and smaller in number, could still be the superior method of data collection. But this is not simply a question of "sampling vs. big data"; it's a question of which population, combined with the relevant data collection and analysis methods you can apply to it, will best meet your needs.
{ "source": [ "https://stats.stackexchange.com/questions/35971", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4426/" ] }
36,015
I am doing a multivariate Cox regression; I have my significant independent variables and beta values. The model fits my data very well. Now, I would like to use my model and predict the survival of a new observation. I am unclear how to do this with a Cox model. In a linear or logistic regression it would be easy: just plug the values of the new observation into the regression, multiply them by the betas, and I have the prediction of my outcome. How can I determine my baseline hazard? I need it in addition to computing the prediction. How is this done in a Cox model?
Following Cox model, the estimated hazard for individual $i$ with covariate vector $x_i$ has the form $$\hat{h}_i(t) = \hat{h}_0(t) \exp(x_i' \hat{\beta}),$$ where $\hat{\beta}$ is found by maximising the partial likelihood, while $\hat{h}_0$ follows from the Nelson-Aalen estimator, $$ \hat{h}_0(t_i) = \frac{d_i}{\sum_{j:t_j \geq t_i} \exp(x_j' \hat{\beta})} $$ with $t_1$, $t_2, \dotsc$ the distinct event times and $d_i$ the number of deaths at $t_i$ (see, e.g., Section 3.6 ). Similarly, $$\hat{S}_i(t) = \hat{S}_0(t)^{\exp(x_i' \hat{\beta})}$$ with $\hat{S}_0(t) = \exp(- \hat{\Lambda}_0(t))$ and $$\hat{\Lambda}_0(t) = \sum_{j:t_j \leq t} \hat{h}_0(t_j).$$ EDIT: This might also be of interest :-)
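For completeness, here is how this is usually done in practice in R with the survival package (a sketch only; the built-in lung data and the covariates stand in for your own data). survfit() applies exactly the estimators above, so you rarely need to compute $\hat{h}_0$ by hand:

    library(survival)

    fit <- coxph(Surv(time, status) ~ age + sex, data = lung)

    # estimated cumulative baseline hazard (evaluated at the covariate means by default)
    head(basehaz(fit))

    # predicted survival curve for a new observation
    newpat <- data.frame(age = 60, sex = 1)
    sf <- survfit(fit, newdata = newpat)
    summary(sf, times = c(180, 365))   # estimated S(t) for this patient at ~6 and 12 months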
{ "source": [ "https://stats.stackexchange.com/questions/36015", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13976/" ] }
36,027
From the probability density function we could seemingly identify a mean (= 0) for the Cauchy distribution, just as its symmetric density plot suggests. But why do we say the Cauchy distribution has no mean?
You can mechanically check that the expected value does not exist, but this should be physically intuitive, at least if you accept Huygens' principle and the Law of Large Numbers . The conclusion of the Law of Large Numbers fails for a Cauchy distribution, so it can't have a mean. If you average $n$ independent Cauchy random variables, the result does not converge to $0$ as $n\to \infty$ with probability $1$. It stays a Cauchy distribution of the same size. This is important in optics. The Cauchy distribution is the normalized intensity of light on a line from a point source. Huygens' principle says that you can determine the intensity by assuming that the light is re-emitted from any line between the source and the target. So, the intensity of light on a line $2$ meters away can be determined by assuming that the light first hits a line $1$ meter away, and is re-emitted at any forward angle. The intensity of light on a line $n$ meters away can be expressed as the $n$-fold convolution of the distribution of light on a line $1$ meter away. That is, the sum of $n$ independent Cauchy distributions is a Cauchy distribution scaled by a factor of $n$. If the Cauchy distribution had a mean, then the $25$th percentile of the $n$-fold convolution divided by $n$ would have to converge to $0$ by the Law of Large Numbers. Instead it stays constant. If you mark the $25$th percentile on a (transparent) line $1$ meter away, $2$ meters away, etc. then these points form a straight line, at $45$ degrees. They don't bend toward $0$. This tells you about the Cauchy distribution in particular, but you should know the integral test because there are other distributions with no mean which don't have a clear physical interpretation.
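If you want to see the failure of the Law of Large Numbers concretely, a quick simulation (my own illustration, using base R only) shows that the running mean of Cauchy draws never settles down, while the running mean of Gaussian draws does:

    set.seed(1)
    n <- 1e5
    x.cauchy <- rcauchy(n)   # standard Cauchy: location 0, scale 1
    x.normal <- rnorm(n)

    running.mean <- function(x) cumsum(x) / seq_along(x)

    par(mfrow = c(1, 2))
    plot(running.mean(x.normal), type = "l", ylab = "running mean",
         main = "Normal: converges to 0")
    plot(running.mean(x.cauchy), type = "l", ylab = "running mean",
         main = "Cauchy: keeps jumping")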
{ "source": [ "https://stats.stackexchange.com/questions/36027", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3525/" ] }
36,113
A very simple version of the central limit theorem is the one below, $$ \sqrt{n}\bigg(\bigg(\frac{1}{n}\sum_{i=1}^n X_i\bigg) - \mu\bigg)\ \xrightarrow{d}\ \mathcal{N}(0,\;\sigma^2) $$ which is the Lindeberg–Lévy CLT. I do not understand why there is a $\sqrt{n}$ on the left-hand side. The Lyapunov CLT says $$ \frac{1}{s_n} \sum_{i=1}^{n} (X_i - \mu_i) \ \xrightarrow{d}\ \mathcal{N}(0,\;1) $$ but why not $\sqrt{s_n}$? Would anyone tell me what these factors, such as $\sqrt{n}$ and $\frac{1}{s_n}$, are, and how we get them in the theorem?
Nice question (+1)!! You will remember that for independent random variables $X$ and $Y$, $Var(X+Y) = Var(X) + Var(Y)$ and $Var(a\cdot X) = a^2 \cdot Var(X)$. So the variance of $\sum_{i=1}^n X_i$ is $\sum_{i=1}^n \sigma^2 = n\sigma^2$, and the variance of $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ is $n\sigma^2 / n^2 = \sigma^2/n$. This is for the variance. To standardize a random variable, you divide it by its standard deviation. As you know, the expected value of $\bar{X}$ is $\mu$, so the variable $$ \frac{\bar{X} - E\left( \bar{X} \right)}{\sqrt{ Var(\bar{X}) }} = \sqrt{n} \frac{\bar{X} - \mu}{\sigma}$$ has expected value 0 and variance 1. So if it tends to a Gaussian, it has to be the standard Gaussian $\mathcal{N}(0,\;1)$. Your formulation in the first equation is equivalent. By multiplying the left hand side by $\sigma$ you set the variance to $\sigma^2$. Regarding your second point, I believe that the equation shown above illustrates that you have to divide by $\sigma$ and not $\sqrt{\sigma}$ to standardize the equation, explaining why you use $s_n$ (the estimator of $\sigma$) and not $\sqrt{s_n}$. Addition: @whuber suggests to discuss the why of the scaling by $\sqrt{n}$. He does it there, but because the answer is very long I will try to capture the essence of his argument (which is a reconstruction of de Moivre's thoughts). If you add a large number $n$ of +1's and -1's, you can approximate the probability that the sum will be $2j$ (i.e., that there are $n/2+j$ +1's) by elementary counting. The log of this probability is proportional to $-j^2/n$. So if we want the probability above to converge to a constant as $n$ goes large, we have to use a normalizing factor in $O(\sqrt{n})$. Using modern (post de Moivre) mathematical tools, you can see the approximation mentioned above by noticing that the sought probability is $$P(j) = \frac{{n \choose n/2+j}}{2^n} = \frac{n!}{2^n(n/2+j)!(n/2-j)!}$$ which we approximate by Stirling's formula $$ P(j) \approx \frac{n^n e^{n/2+j} e^{n/2-j}}{2^n e^n (n/2+j)^{n/2+j} (n/2-j)^{n/2-j} } = \left(\frac{1}{1+2j/n}\right)^{n/2+j} \left(\frac{1}{1-2j/n}\right)^{n/2-j}. $$ $$ \log(P(j)) = -(n/2+j) \log(1+2j/n) - (n/2-j) \log(1-2j/n) \\ \sim -2j(n/2+j)/n + 2j(n/2-j)/n \propto -j^2/n.$$
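A quick simulation (my own addition, not part of the argument above; exponential data are used only because they are conveniently non-normal with known mean and variance) makes the role of the $\sqrt{n}$ factor visible: without it the centred mean just shrinks towards $0$, with it you recover something close to the standard Gaussian.

    set.seed(1)
    n <- 100;  B <- 10000
    mu <- 1;  sigma <- 1                  # Exp(1) has mean 1 and variance 1

    xbar <- replicate(B, mean(rexp(n, rate = 1)))

    sd(xbar - mu)                         # about sigma/sqrt(n) = 0.1: collapsing without the scaling

    z <- sqrt(n) * (xbar - mu) / sigma    # the CLT scaling
    hist(z, breaks = 50, freq = FALSE)
    curve(dnorm(x), add = TRUE)           # close to N(0, 1)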
{ "source": [ "https://stats.stackexchange.com/questions/36113", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3525/" ] }
36,165
Can someone explain why we need a large number of trees in random forest when the number of predictors is large? How can we determine the optimal number of trees?
Random forest uses bagging (picking a sample of observations rather than all of them) and the random subspace method (picking a sample of features rather than all of them, in other words - attribute bagging) to grow a tree. If the number of observations is large, but the number of trees is too small, then some observations will be predicted only once or even not at all. If the number of predictors is large but the number of trees is too small, then some features can (theoretically) be missed in all subspaces used. Both cases result in a decrease of random forest predictive power. But the latter is a rather extreme case, since the selection of the subspace is performed at each node. During classification the subspace dimensionality is $\sqrt{p}$ (rather small; $p$ is the total number of predictors) by default, but a tree contains many nodes. During regression the subspace dimensionality is $p/3$ (large enough) by default, though a tree contains fewer nodes. So the optimal number of trees in a random forest depends on the number of predictors only in extreme cases. The official page of the algorithm states that random forest does not overfit, and you can use as many trees as you want. But Mark R. Segal (April 14 2004. "Machine Learning Benchmarks and Random Forest Regression." Center for Bioinformatics & Molecular Biostatistics) has found that it overfits for some noisy datasets. So to obtain the optimal number you can try training random forests over a grid of ntree values (simple, but more CPU-consuming) or build one random forest with many trees with keep.inbag, calculate out-of-bag (OOB) error rates for the first $n$ trees (where $n$ changes from $1$ to ntree) and plot the OOB error rate vs. number of trees (more complex, but less CPU-consuming).
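A sketch of the second (OOB-based) approach in R, assuming the randomForest package (the iris data are only a convenient built-in example, not a recommendation):

    library(randomForest)

    set.seed(1)
    rf <- randomForest(Species ~ ., data = iris, ntree = 1000, keep.inbag = TRUE)

    # err.rate holds the OOB error computed after 1, 2, ..., ntree trees,
    # so the plot shows where adding more trees stops helping
    plot(rf$err.rate[, "OOB"], type = "l",
         xlab = "number of trees", ylab = "OOB error rate")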
{ "source": [ "https://stats.stackexchange.com/questions/36165", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/14031/" ] }
36,207
I have a data frame that contains two time series: the dates and version numbers of Emacs and Firefox releases. Using one ggplot2 command it's easy to make a chart that uses loess (in a way that looks a bit amusing, which I don't mind) to turn the points into lines. How can I extend the lines into the future? I want to determine where and when Emacs and Firefox version numbers will cross, and if there's a way to show an error range, all the better. Given that ggplot2 is plotting the lines, it must have a model, but I don't see how to tell it to extend the lines, or to get the model out and do something with it.

    > library(ggplot2)
    > programs <- read.csv("http://www.miskatonic.org/files/se-program-versions.csv")
    > programs$Date <- as.Date(programs$Date, format="%B %d, %Y")
    > head(programs)
      Program Version       Date
    1   Emacs    24.1 2012-06-10
    2   Emacs    23.4 2012-01-29
    3   Emacs    23.3 2011-03-10
    4   Emacs    23.2 2010-05-08
    5   Emacs    23.1 2009-07-29
    6   Emacs    22.3 2008-09-05
    > head(subset(programs, Program == "Firefox"))
       Program Version       Date
    18 Firefox      16 2012-10-09
    19 Firefox      15 2012-08-28
    20 Firefox      14 2012-06-26
    21 Firefox      13 2012-06-15
    22 Firefox      12 2012-04-24
    23 Firefox      11 2012-03-13
    > ggplot(programs, aes(y = Version, x = Date, colour = Program)) + geom_point() + geom_smooth(span = 0.5, fill = NA)

(Note: I had to fudge the early Firefox versions and turn 0.1 into 0.01, etc., because "dot one" and "dot ten" are equal arithmetically. I know Firefox is releasing every six weeks now, but those versions don't exist yet, and I'm interested in a general answer to this prediction question.)
As @Glen mentions, you have to use a stat_smooth method that supports extrapolation, which loess does not; lm does, however. What you need to do is use the fullrange parameter of stat_smooth and expand the x-axis to include the range you want to predict over. I don't have your data, but here's an example using the mtcars dataset:

    ggplot(mtcars, aes(x=disp, y=hp)) + geom_point() + xlim(0,700) +
      stat_smooth(method="lm", fullrange=TRUE)
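If you also want the crossing date and an error range as numbers rather than a picture, one rough way (a sketch only, assuming the programs data frame from the question and plain straight-line lm fits; the future date grid is arbitrary) is to predict from separate linear models for each program and compare the predictions:

    progs  <- transform(programs, DateNum = as.numeric(Date))
    future <- data.frame(Date = seq(as.Date("2012-11-01"), as.Date("2020-01-01"), by = "month"))
    future$DateNum <- as.numeric(future$Date)

    fit.emacs   <- lm(Version ~ DateNum, data = subset(progs, Program == "Emacs"))
    fit.firefox <- lm(Version ~ DateNum, data = subset(progs, Program == "Firefox"))

    pred.e <- predict(fit.emacs,   newdata = future, interval = "prediction")
    pred.f <- predict(fit.firefox, newdata = future, interval = "prediction")

    # first future month at which the predicted Firefox version overtakes Emacs;
    # the "lwr"/"upr" columns give a rough error range around each prediction
    future$Date[which(pred.f[, "fit"] > pred.e[, "fit"])[1]]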
{ "source": [ "https://stats.stackexchange.com/questions/36207", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10827/" ] }