Columns: source_id (int64, 1 – 4.64M), question (string, 0 – 28.4k chars), response (string, 0 – 28.8k chars), metadata (dict)
50,080
I have a set of real numbers and I need to estimate the quantile of a new number relative to that set. Is there any clean way to do this in R, or in general? I hope this is not ultra-trivial ;-) Much appreciated. PK
As whuber pointed out, you can use ecdf , which takes a vector and returns a function for getting the percentile of a value.

    > percentile <- ecdf(1:10)
    > percentile(8)
    [1] 0.8
{ "source": [ "https://stats.stackexchange.com/questions/50080", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/20188/" ] }
50,537
I'm reading a paper where the author discards several variables due to high correlation with other variables before doing PCA. The total number of variables is around 20. Does this give any benefit? It looks like unnecessary overhead to me, as PCA should handle this automatically.
This expounds upon the insightful hint provided in a comment by @ttnphns. Adjoining nearly correlated variables increases the contribution of their common underlying factor to the PCA. We can see this geometrically. Consider these data in the XY plane, shown as a point cloud: There is little correlation, approximately equal covariance, and the data are centered: PCA (no matter how conducted) would report two approximately equal components. Let us now throw in a third variable $Z$ equal to $Y$ plus a tiny amount of random error. The correlation matrix of $(X,Y,Z)$ shows this with the small off-diagonal coefficients except between the second and third rows and columns ($Y$ and $Z$): $$\left( \begin{array}{ccc} 1. & -0.0344018 & -0.046076 \\ -0.0344018 & 1. & 0.941829 \\ -0.046076 & 0.941829 & 1. \end{array} \right)$$ Geometrically, we have displaced all the original points nearly vertically, lifting the previous picture right out of the plane of the page. This pseudo 3D point cloud attempts to illustrate the lifting with a side perspective view (based on a different dataset, albeit generated in the same way as before): The points originally lie in the blue plane and are lifted to the red dots. The original $Y$ axis points to the right. The resulting tilting also stretches the points out along the YZ directions, thereby doubling their contribution to the variance. Consequently, a PCA of these new data would still identify two major principal components, but now one of them will have twice the variance of the other. This geometric expectation is borne out with some simulations in R . For this I repeated the "lifting" procedure by creating near-collinear copies of the second variable a second, third, fourth, and fifth time, naming them $X_2$ through $X_5$. Here is a scatterplot matrix showing how those last four variables are well correlated: The PCA is done using correlations (although it doesn't really matter for these data), using the first two variables, then three, ..., and finally five. I show the results using plots of the contributions of the principal components to the total variance. Initially, with two almost uncorrelated variables, the contributions are almost equal (upper left corner). After adding one variable correlated with the second--exactly as in the geometric illustration--there are still just two major components, one now twice the size of the other. (A third component reflects the lack of perfect correlation; it measures the "thickness" of the pancake-like cloud in the 3D scatterplot.) After adding another correlated variable ($X_4$), the first component is now about three-fourths of the total; after a fifth is added, the first component is nearly four-fifths of the total. In all four cases components after the second would likely be considered inconsequential by most PCA diagnostic procedures; in the last case it's possible some procedures would conclude there is only one principal component worth considering. We can see now that there may be merit in discarding variables thought to be measuring the same underlying (but "latent") aspect of a collection of variables , because including the nearly-redundant variables can cause the PCA to overemphasize their contribution. There is nothing mathematically right (or wrong) about such a procedure; it's a judgment call based on the analytical objectives and knowledge of the data. 
But it should be abundantly clear that setting aside variables known to be strongly correlated with others can have a substantial effect on the PCA results. Here is the R code.

    n.cases <- 240               # Number of points.
    n.vars <- 4                  # Number of mutually correlated variables.
    set.seed(26)                 # Make these results reproducible.
    eps <- rnorm(n.vars, 0, 1/4) # Make "1/4" smaller to *increase* the correlations.
    x <- matrix(rnorm(n.cases * (n.vars+2)), nrow=n.cases)
    beta <- rbind(c(1,rep(0, n.vars)), c(0,rep(1, n.vars)), cbind(rep(0,n.vars), diag(eps)))
    y <- x%*%beta                # The variables.
    cor(y)                       # Verify their correlations are as intended.
    plot(data.frame(y))          # Show the scatterplot matrix.
    # Perform PCA on the first 2, 3, 4, ..., n.vars+1 variables.
    p <- lapply(2:dim(beta)[2], function(k) prcomp(y[, 1:k], scale=TRUE))
    # Print summaries and display plots.
    tmp <- lapply(p, summary)
    par(mfrow=c(2,2))
    tmp <- lapply(p, plot)
{ "source": [ "https://stats.stackexchange.com/questions/50537", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21114/" ] }
50,682
I'm reading a book on time series and I started scratching my head in the following part: Could someone explain the intuition for me? I couldn't get it from this text. Why do we need the process to be invertible? What is the big picture here? Thank you for any help. I'm new on this stuff so if you could be kind to use student-level terms when explaining this :)
In the AR($\infty$) representation, the most recent error can be written as a linear function of current and past observations: $$w_t = \sum_{j=0}^\infty (-\theta)^j x_{t-j}$$ For an invertible process, $|\theta|<1$ and so the most recent observations have higher weight than observations from the more distant past. But when $|\theta| > 1$, the weights increase as lags increase, so the more distant the observations, the greater their influence on the current error. When $|\theta|=1$, the weights are constant in size, and the distant observations have the same influence as the recent observations. As neither of these situations makes much sense, we prefer invertible processes.
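To make the weight behaviour concrete, here is a small R sketch (my addition, not part of the original answer) that computes the AR($\infty$) weights $(-\theta)^j$ for an invertible and a non-invertible MA(1); the particular values of $\theta$ are arbitrary, chosen only for illustration.

    # AR(infinity) weights (-theta)^j for an MA(1) process
    j <- 0:15
    w_invertible    <- (-0.5)^j   # |theta| < 1: weights decay with lag
    w_noninvertible <- (-1.5)^j   # |theta| > 1: weights blow up with lag
    round(cbind(lag = j, invertible = w_invertible, noninvertible = w_noninvertible), 3)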
{ "source": [ "https://stats.stackexchange.com/questions/50682", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/18528/" ] }
50,807
I consider the problem of (multiclass) classification based on time series of variable length $T$, that is, to find a function $$f(X_T) = y \in [1..K]\\ \text{for } X_T = (x_1, \dots, x_T)\\ \text{with } x_t \in \mathbb{R}^d ~,$$ via a global representation of the time series by a set of selected features $v_i$ of fixed size $D$ independent of $T$, $$\phi(X_T) = v_1, \dots, v_D \in \mathbb{R}~,$$ and then use standard classification methods on this feature set. I'm not interested in forecasting, i.e. predicting $x_{T+1}$. For example, we may analyse the way a person walks to predict the gender of the person. What are the standard features that I may take into account? For instance, we can obviously use the mean and variance of the series (or higher order moments) and also look into the frequency domain, like the energy contained in some interval of the Discrete Fourier Transform of the series (or Discrete Wavelet Transform).
Simple statistical features
- Means in each of the $d$ dimensions
- Standard deviations of the $d$ dimensions
- Skewness, Kurtosis and higher-order moments of the $d$ dimensions
- Maximum and Minimum values
Time series analysis related features
- The $d \times (d-1)$ Cross-Correlations between each dimension and the $d$ Auto-Correlations
- Orders of the autoregressive (AR), integrated (I) and moving average (MA) parts of an estimated ARIMA model
- Parameters of the AR part
- Parameters of the MA part
Frequency domain related features
- See Morchen03 for a study of energy-preserving features on DFT and DWT
- Frequencies of the $k$ peaks in amplitude in the DFTs for the detrended $d$ dimensions
- $k$-quantiles of these DFTs
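As a rough illustration (my addition, not part of the original answer), here is a base-R sketch computing a few of these features for a single univariate series; the skewness and kurtosis are computed by hand to avoid extra packages, and the DFT peak extraction is deliberately simplistic.

    set.seed(1)
    x <- arima.sim(list(ar = 0.6), n = 200)      # toy series for illustration
    m <- mean(x); s <- sd(x)
    feats <- c(
      mean     = m,
      sd       = s,
      skewness = mean((x - m)^3) / s^3,
      kurtosis = mean((x - m)^4) / s^4,
      min      = min(x),
      max      = max(x),
      acf1     = acf(x, plot = FALSE)$acf[2]     # lag-1 autocorrelation
    )
    # crude frequency-domain feature: amplitude of the largest DFT peak (excluding DC)
    amp <- Mod(fft(x - m))[2:(length(x) %/% 2)]
    feats["dft_peak_amp"] <- max(amp)
    round(feats, 3)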
{ "source": [ "https://stats.stackexchange.com/questions/50807", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/16137/" ] }
50,817
Suppose $y$ is a continuous random variable and $d$ is a binary random variable that takes the value $1$ with probability $p$ and $0$ with probability $1-p$. How do I show that $\text{Cov}(y,d)=(E[y|d=1]-E[y|d=0])p(1-p)$?
Write $\text{Cov}(y,d) = E[yd] - E[y]E[d]$. Because $d$ is binary, $E[d]=p$ and $E[yd] = E[y \mid d=1]\,p$, while the law of total expectation gives $E[y] = E[y\mid d=1]\,p + E[y\mid d=0]\,(1-p)$. Substituting, $$\text{Cov}(y,d) = E[y\mid d=1]\,p - p\bigl(E[y\mid d=1]\,p + E[y\mid d=0]\,(1-p)\bigr) = p(1-p)\bigl(E[y\mid d=1]-E[y\mid d=0]\bigr),$$ which is the required identity.
{ "source": [ "https://stats.stackexchange.com/questions/50817", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21248/" ] }
51,006
Context: Hierarchical regression with some missing data. Question: How do I use full information maximum likelihood (FIML) estimation to address missing data in R? Is there a package you would recommend, and what are the typical steps? Online resources and examples would be very helpful too. P.S.: I'm a social scientist who recently started using R. Multiple imputation is an option, but I really like how elegantly programs like Mplus handle missing data using FIML. Unfortunately Mplus doesn't seem to compare models in the context of hierarchical regression at the moment (please let me know if you know a way to do that!). I wondered whether there is anything similar in R. Many thanks!
Credit of this answer goes to @Joshua who gave an awesome answer when I posted this question to the R and Statistics community on Google+. I am simply pasting his answer below. For running regression (without latent variable modeling), please read my notes typed after the quoted text. Handling missing data with Maximum Likelihood on all available data (so-called FIML) is a very useful technique. However, there are a number of complications that make it challenging to implement in a general way. Consider a simple linear regression model, predicting some continuous outcome from say age, sex, and occupation type. In OLS, you do not worry about the distribution of age, sex, and occupation, only the outcome. Typically for categorical predictors, they are dummy coded (0/1). To use ML, distributional assumptions are required for all variables with missingness. By far the easiest approach is multivariate normal (MVN). This is what for example Mplus will do by default if you do not go out for your way to declare the type of variable (e.g., categorical). In the simple example I gave, you would probably want to assume, normal for age, Bernoulli for sex, and multinomal for job type. The latter is tricky because what you actually have are several binary variables, but you do not want to treat them as Bernoulli. This means you do not want to work with the dummy coded variables, you need to work with the actual categorical variable so the ML estimators can properly use a multinomial, but this in turn means that the dummy coding process needs to be built into the model, not the data. Again complicating life. Further, the joint distribution of continuous and categorical variables is nontrivial to compute (when I run into problems like this in Mplus, it pretty quickly starts to break down and struggle). Finally, you really ideally specify the missing data mechanism. In SEM style, FIML, all variables are essentially conditioned on all others, but this is not necessarily correct. For example, perhaps age is missing as a function not of gender and occupation type, but their interaction. The interaction may not be important for the focal outcome, but if it is important for missingness on age, then it must also be in the model, not necessarily the substantive model of interest but the missing data model. lavaan will use ML for MVN, but presently I believe the categorical data options are limited (again coming from the SEM field, this is standard). Multiple imputation seems less elegant at first because it makes explicit many hidden assumptions behind FIML (like distributional assumptions for every variable and the predictive model assumed for missingness on every variable). However, it gives you a lot of control and explicitly thinking about the distribution of each variable, and the optimal missing data mechanism for each is valuable. I am becoming more and more convinced that Bayesian models are the way to handle missing data. The reason is that they are very flexible at including distributions for each variable, allowing many different types of distributions, and can easily incorporate the variability introduced by missing data on predictors, into the overall model estimates (which is the trick with multiple imputation where you then have to somehow combine results). Of course, these methods are not the easiest and can take a lot of training and time to use. So that doesn't really answer your question, but explains a bit of why completely general frameworks for dealing with missingness are tricky. 
In my semutils package for the covariance matrices, I use lavaan underneath to use ML. I do that because I assume for a variance covariance matrix that you are using continuous variables anyway so that I assume my users are already assuming MVN for their data. This means that if all variables with missingness are continuous, lavaan , a structural equation modelling (SEM) package is a nice one to use for FIML in R. Now going back to my initial question. My intention was to have a magic fix for missingness when running linear regression. All my variables with missing were nice and continuous. So I proceeded to run my analyses in two styles: The usual way with multiple imputation In SEM style with lavaan using FIML. I was missing a lot of things by doing regression in SEM style. Both styles gave similar coefficients and R squares, but in SEM style I didn't get the significance testing of the regression (the typical F values with df), instead I got fit indices that were not helpful as I had used up all my degrees of freedom. Also when one model had a larger R2 than another, I couldn’t find a way to compare whether the difference was significant. Additionally, doing regression the usual way gives access to a bunch of testing for regression assumptions that are invaluable. For a more detailed answer on this issue see my other question that was nicely answered by @StasK . So the conclusion seems to be that lavaan is a decent package for FIML in R, yet the use of FIML depends on statistical assumptions and the type of analysis one is conducting. As far as regression (without latent variable modeling) goes, keeping it out of SEM programs and using multiple imputation is probably a wise move.
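For concreteness, here is a minimal sketch (my addition, not the answerer's code) of a regression fitted with FIML in lavaan; the simulated data frame and variable names are hypothetical, and all variables are assumed continuous (approximately multivariate normal), as discussed above.

    library(lavaan)
    # hypothetical example data with values missing completely at random
    set.seed(42)
    n <- 300
    dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
    dat$y <- 0.5 * dat$x1 - 0.3 * dat$x2 + rnorm(n)
    dat[sample(n, 60), "x1"] <- NA          # introduce some missingness
    dat[sample(n, 60), "y"]  <- NA

    fit <- sem('y ~ x1 + x2', data = dat,
               missing = "fiml",   # full information maximum likelihood
               fixed.x = FALSE)    # model the predictors so their missingness is handled too
    summary(fit, rsquare = TRUE)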
{ "source": [ "https://stats.stackexchange.com/questions/51006", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/20344/" ] }
51,062
I wonder if there is any statistical test to "test" the significance of a bimodal distribution. I mean, how well do my data fit a bimodal distribution? If so, is there such a test in R?
Another possible approach to this issue is to think about what might be going on behind the scenes that is generating the data you see. That is, you can think in terms of a mixture model , for example, a Gaussian mixture model. For instance, you might believe that your data are drawn from either a single normal population, or from a mixture of two normal distributions (in some proportion), with differing means and variances. Of course, you don't have to believe that there are only one or two, nor do you have to believe that the populations from which the data are drawn need to be normal. There are (at least) two R packages that allow you to estimate mixture models. One package is flexmix , and another is mclust . Having estimated two candidate models, I believe it may be possible to conduct a likelihood ratio test. Alternatively, you could use the parametric bootstrap cross-fitting method ( pdf ).
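A minimal sketch of the mclust route (my addition; the simulated data and variable names are just for illustration): fit one- and two-component Gaussian mixtures and compare them, e.g. via BIC. A formal likelihood-ratio test would need the parametric bootstrap mentioned above, since the usual chi-squared reference distribution does not apply here.

    library(mclust)
    set.seed(123)
    # toy data: a genuine two-component mixture
    x <- c(rnorm(150, mean = 0, sd = 1), rnorm(100, mean = 4, sd = 1))
    fit <- Mclust(x, G = 1:2, modelNames = "V")  # "V" = unequal-variance univariate mixtures
    summary(fit)      # reports the preferred number of components
    fit$BIC           # BIC for G = 1 and G = 2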
{ "source": [ "https://stats.stackexchange.com/questions/51062", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/19939/" ] }
51,185
Can someone prove the following connection between the Fisher information metric and the relative entropy (or KL divergence) in a purely mathematically rigorous way? $$D( p(\cdot , a+da) \parallel p(\cdot,a) ) =\frac{1}{2} g_{i,j} \, da^i \, da^j + O( \|da\|^3 )$$ where $a=(a^1,\dots, a^n)$, $da=(da^1,\dots,da^n)$, $$g_{i,j}=\int \partial_i (\log p(x;a)) \, \partial_j(\log p(x;a))~ p(x;a)~dx$$ and $g_{i,j} \, da^i \, da^j := \sum_{i,j}g_{i,j} \, da^i \, da^j$ follows the Einstein summation convention. I found the above on the nice blog of John Baez, where Vasileios Anagnostopoulos mentions it in the comments.
In 1946, geophysicist and Bayesian statistician Harold Jeffreys introduced what we today call the Kullback-Leibler divergence, and discovered that for two distributions that are "infinitely close" (let's hope that Math SE guys don't see this ;-) we can write their Kullback-Leibler divergence as a quadratic form whose coefficients are given by the elements of the Fisher information matrix. He interpreted this quadratic form as the element of length of a Riemannian manifold, with the Fisher information playing the role of the Riemannian metric. From this geometrization of the statistical model, he derived his Jeffreys's prior as the measure naturally induced by the Riemannian metric, and this measure can be interpreted as an intrinsically uniform distribution on the manifold, although, in general, it is not a finite measure. To write a rigorous proof, you'll need to spot out all the regularity conditions and take care of the order of the error terms in the Taylor expansions. Here is a brief sketch of the argument. The symmetrized Kullback-Leibler divergence between two densities $f$ and $g$ is defined as $$ D[f,g] = \int (f(x) - g(x)) \log\left(\frac{f(x)}{g(x)} \right) dx \, . $$ If we have a family of densities parameterized by $\theta=(\theta_1,\dots,\theta_k)$ , then $${\scriptsize D[p(\,\cdot\,\mid\theta), p(\,\cdot\,\mid\theta + \Delta\theta)] = \int ( p(x,\mid\theta) - p(x\mid\theta + \Delta\theta)) \log\left( \frac{p(x\mid\theta)}{p(x\mid\theta + \Delta\theta)}\right) \,dx \, }, $$ in which $\Delta\theta=(\Delta\theta_1,\dots,\Delta\theta_k)$ . Introducing the notation $$ \Delta p(x\mid\theta) = p(x\mid\theta) - p(x\mid\theta + \Delta\theta) \, , $$ some simple algebra gives $$ D[p(\;\cdot\,\mid\theta), p(\;\cdot\,\mid\theta + \Delta\theta)] = \int\frac{\Delta p(x\mid\theta)}{p(x\mid\theta)} \log\left(1+\frac{\Delta p(x\mid\theta)}{p(x\mid\theta)}\right)p(x\mid\theta)\,dx \, . $$ Using the Taylor expansion for the natural logarithm, we have $$ \log\left(1+\frac{\Delta p(x\mid\theta)}{p(x\mid\theta)}\right) \approx \frac{\Delta p(x\mid\theta)}{p(x\mid\theta)} \, , $$ and therefore $$ D[p(\;\cdot\,\mid\theta), p(\;\cdot\,\mid\theta + \Delta\theta)] \approx \int\left(\frac{\Delta p(x\mid\theta)}{p(x\mid\theta)}\right)^2p(x\mid\theta)\,dx \, . $$ But $$ \frac{\Delta p(x\mid\theta)}{p(x\mid\theta)} \approx \frac{1}{p(x\mid\theta)} \sum_{i=1}^k \frac{\partial p(x\mid\theta)}{\partial\theta_i} \, \Delta\theta_i = \sum_{i=1}^k \frac{\partial \log p(x\mid\theta)}{\partial\theta_i} \, \Delta\theta_i \, . $$ Hence $$ D[p(\,\cdot\,\mid\theta), p(\,\cdot\,\mid\theta + \Delta\theta)] \approx \sum_{i,j=1}^k g_{ij} \,\Delta\theta_i \, \Delta\theta_j \, , $$ in which $$ g_{ij} = \int \frac{\partial \log p(x\mid\theta)}{\partial\theta_i} \frac{\partial \log p(x\mid\theta)}{\partial\theta_j} p(x\mid\theta) \,dx \, . $$ This is the original paper: Jeffreys, H. (1946). An invariant form for the prior probability in estimation problems. Proc. Royal Soc. of London, Series A, 186, 453–461.
{ "source": [ "https://stats.stackexchange.com/questions/51185", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21423/" ] }
51,273
I am getting the impression that when people refer to a 'deep belief' network, this is basically a neural network but very large. Is this correct, or does a deep belief network also imply that the algorithm itself is different (i.e., not a feed-forward neural net, but perhaps something with feedback loops)?
"Neural networks" is a term usually used to refer to feedforward neural networks. Deep Neural Networks are feedforward Neural Networks with many layers. A Deep belief network is not the same as a Deep Neural Network. As you have pointed out a deep belief network has undirected connections between some layers. This means that the topology of the DNN and DBN is different by definition. The undirected layers in the DBN are called Restricted Boltzmann Machines. This layers can be trained using an unsupervised learning algorithm (Contrastive Divergence) that is very fast (Here's a link ! with details). Some more comments: The solutions obtained with deeper neural networks correspond to solutions that perform worse than the solutions obtained for networks with 1 or 2 hidden layers. As the architecture gets deeper, it becomes more difficult to obtain good generalization using a Deep NN. In 2006 Hinton discovered that much better results could be achieved in deeper architectures when each layer (RBM) is pre-trained with an unsupervised learning algorithm (Contrastive Divergence). Then the Network can be trained in a supervised way using backpropagation in order to "fine-tune" the weights.
{ "source": [ "https://stats.stackexchange.com/questions/51273", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17935/" ] }
51,275
Akaike Information Criterion (AIC) and the c-statistic (area under ROC curve) are two measures of model fit for logistic regression. I am having trouble explaining what is going on when the results of the two measures are not consistent. I guess they are measuring slightly different aspects of model fit, but what are those specific aspects? I have 3 logistic regressions models. Model M0 has some standard covariates. Model M1 adds X1 to M0; model M2 adds X2 to M0 (so M1 and M2 are not nested). The difference in AIC from M0 to both M1 and M2 is about 15, indicating X1 and X2 both improve model fit, and by about the same amount. c-statistics are: M0, 0.70; M1, 0.73; M2 0.72. The difference in c-statistic from M0 to M1 is significant (method of DeLong et al 1988), but the difference from M0 to M2 is not significant, indicating that X1 improves model fit, but X2 does not. X1 is not routinely collected. X2 is supposed to be routinely collected but is missing in about 40% of cases. We want to decide whether to start collecting X1, or improve collection of X2, or drop both variables. From AIC we conclude that the variables make similar improvement to the model. It's probably easier to improve collection of X2 than start collecting a completely new variable (X1), so we would aim to improve X2 collection. But from c-statistic, X1 improves the model and X2 does not, so we should forget about X2 and start collecting X1. As our recommendation depends on which statistic we focus on, we need to clearly understand the difference in what they are measuring. Any advice welcome.
AIC and c-statistic are trying to answer different questions. (Also some issues with c-statistic have been raised in recent years, but I'll come onto that as an aside) Roughly speaking: AIC is telling you how good your model fits for a specific mis-classification cost. AUC is telling you how good your model would work, on average, across all mis-classification costs. When you calculate the AIC you treat your logistic giving a prediction of say 0.9 to be a prediction of 1 (i.e. more likely 1 than 0), however it need not be. You could take your logistic score and say "anything above 0.95 is 1, everything below is 0". Why would you do this? Well this would ensure that you only predict one when you are really really confident. Your false positive rate will be really really low, but your false negative will skyrocket. In some situations this isn't a bad thing - if you are going to accuse someone of fraud, you probably want to be really really sure first. Also, if it is very expensive to follow up the positive results, then you don't want too many of them. This is why it relates to costs. There is a cost when you classify a 1 as a 0 and a cost when you classify a 0 as a 1. Typically (assuming you used a default setup) the AIC for logistic regression refers to the special case when both mis-classifications are equally costly. That is, logistic regression gives you the best overall number of correct predictions, without any preference for positive or negative. The ROC curve is used because this plots the true positive against the false positive in order to show how the classifier would perform if you used it under different cost requirements. The c-statistic comes about because any ROC curve that lies strictly above another is clearly a dominating classifier. It is therefore intuitive to measure the area under the curve as a measure of how good the classifier overall. So basically, if you know your costs when fitting the model, use AIC (or similar). If you are just constructing a score, but not specifying the diagnostic threshold, then AUC approaches are needed (with the following caveat about AUC itself). So what is wrong with c-statistic/AUC/Gini? For many years AUC was the standard approach, and is still widely used, however there are a number of problems with it. One thing that made it particularly appealing was that it corresponds to a Wilcox test on the ranks of the classifications. That is it measured the probability that the score of a randomly picked member of one class will be higher than a randomly picked member of the other class. The problem is, that is almost never a useful metric. The most critical problems with AUC were publicized by David Hand a few years back. (See references below) The crux of the problem is that while AUC does average over all costs, because the x-axis of the ROC curve is False Positive Rate, the weight that it assigns to the different cost regimes varies between classifiers. So if you calculate the AUC on two different logitic regressions it won't be measuring "the same thing" in both cases. This means it makes little sense to compare models based on AUC. Hand proposed an alternative calculation using a fixed cost weighting, and called this the H-measure - there is a package in R called hmeasure that will perform this calculation, and I believe AUC for comparison. Some references on the problems with AUC: When is the area under the receiver operating characteristic curve an appropriate measure of classifier performance? D.J. Hand, C. 
Anagnostopoulos, Pattern Recognition Letters 34 (2013) 492–495. (I found this to be a particularly accessible and useful explanation.)
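As a small illustration of the two kinds of comparison (my sketch, not the answerer's; the simulated data and the model labels M1/M2 are hypothetical), one can compare two logistic models by AIC and then score them with the hmeasure package, which reports the H-measure alongside the AUC.

    library(hmeasure)
    set.seed(1)
    n  <- 500
    x1 <- rnorm(n); x2 <- rnorm(n)
    y  <- rbinom(n, 1, plogis(-0.5 + 1.0 * x1 + 0.4 * x2))
    m1 <- glm(y ~ x1, family = binomial)   # "M1"
    m2 <- glm(y ~ x2, family = binomial)   # "M2"
    AIC(m1, m2)                            # fit under the implicit equal-cost classification
    scores <- data.frame(M1 = fitted(m1), M2 = fitted(m2))
    results <- HMeasure(y, scores)         # H-measure, AUC, etc. for each score column
    results$metrics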
{ "source": [ "https://stats.stackexchange.com/questions/51275", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7644/" ] }
51,296
I wonder how to compute precision and recall using a confusion matrix for a multi-class classification problem. Specifically, an observation can only be assigned to its most probable class / label. I would like to compute: Precision = TP / (TP+FP) Recall = TP / (TP+FN) for each class, and then compute the micro-averaged F-measure.
In a 2-hypothesis case, the confusion matrix is usually:

             Declare H1   Declare H0
    Is H1        TP           FN
    Is H0        FP           TN

where I've used something similar to your notation: TP = true positive (declare H1 when, in truth, H1), FN = false negative (declare H0 when, in truth, H1), FP = false positive (declare H1 when, in truth, H0), TN = true negative (declare H0 when, in truth, H0). From the raw data, the values in the table would typically be the counts for each occurrence over the test data. From this, you should be able to compute the quantities you need. Edit: The generalization to multi-class problems is to sum over rows / columns of the confusion matrix. Given that the matrix is oriented as above, i.e., that a given row of the matrix corresponds to a specific value for the "truth", we have: $\text{Precision}_{~i} = \cfrac{M_{ii}}{\sum_j M_{ji}}$ and $\text{Recall}_{~i} = \cfrac{M_{ii}}{\sum_j M_{ij}}$. That is, precision is the fraction of events where we correctly declared $i$ out of all instances where the algorithm declared $i$. Conversely, recall is the fraction of events where we correctly declared $i$ out of all of the cases where the true state of the world is $i$.
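As a small worked example (my addition), with the matrix oriented as above (rows are the true classes, columns the declared classes) the per-class quantities and the micro-averaged F-measure can be computed directly; note that with single-label multiclass data the micro-averaged precision, recall and F all reduce to overall accuracy.

    # hypothetical 3-class confusion matrix: rows = truth, columns = declared
    M <- matrix(c(50,  3,  2,
                   5, 40, 10,
                   2,  8, 30),
                nrow = 3, byrow = TRUE,
                dimnames = list(truth = c("A","B","C"), declared = c("A","B","C")))
    precision <- diag(M) / colSums(M)   # M_ii / sum_j M_ji
    recall    <- diag(M) / rowSums(M)   # M_ii / sum_j M_ij
    f1        <- 2 * precision * recall / (precision + recall)
    rbind(precision, recall, f1)
    # micro-averaged: pool the counts over classes
    micro <- sum(diag(M)) / sum(M)      # = micro precision = micro recall = micro F1 here
    micro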
{ "source": [ "https://stats.stackexchange.com/questions/51296", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21532/" ] }
51,416
I am trying to learn various cross-validation methods, primarily with the intention of applying them to supervised multivariate analysis techniques. Two I have come across are K-fold and Monte Carlo cross-validation. I have read that K-fold is a variation on Monte Carlo, but I'm not sure I fully understand what makes up the definition of Monte Carlo. Could someone please explain the distinction between these two methods?
$k$-Fold Cross Validation Suppose you have 100 data points. For $k$-fold cross validation, these 100 points are divided into $k$ equal sized and mutually-exclusive 'folds'. For $k$=10, you might assign points 1-10 to fold #1, 11-20 to fold #2, and so on, finishing by assigning points 91-100 to fold #10. Next, we select one fold to act as the test set, and use the remaining $k-1$ folds to form the training data. For the first run, you might use points 1-10 as the test set and 11-100 as the training set. The next run would then use points 11-20 as the test set and train on points 1-10 plus 21-100, and so forth, until each fold is used once as the test set. Monte-Carlo Cross Validation Monte Carlo works somewhat differently. You randomly select (without replacement) some fraction of your data to form the training set, and then assign the rest of the points to the test set. This process is then repeated multiple times, generating (at random) new training and test partitions each time. For example, suppose you chose to use 10% of your data as test data. Then your test set on rep #1 might be points 64, 90 , 63, 42 , 65, 49, 10, 64, 96, and 48. On the next run, your test set might be 90 , 60, 23, 67, 16, 78, 42 , 17, 73, and 26. Since the partitions are done independently for each run, the same point can appear in the test set multiple times, which is the major difference between Monte Carlo and cross validation . Comparison Each method has its own advantages and disadvantages. Under cross validation, each point gets tested exactly once, which seems fair. However, cross-validation only explores a few of the possible ways that your data could have been partitioned. Monte Carlo lets you explore somewhat more possible partitions, though you're unlikely to get all of them--there are $\binom{100}{50} \approx 10^{28}$ possible ways to 50/50 split a 100 data point set(!). If you're attempting to do inference (i.e., statistically compare two algorithms), averaging the results of a $k$-fold cross validation run gets you a (nearly) unbiased estimate of the algorithm's performance, but with high variance (as you'd expect from having only 5 or 10 data points). Since you can, in principle, run it for as long as you want/can afford, Monte Carlo cross validation can give you a less variable, but more biased estimate. Some approaches fuse the two, as in the 5x2 cross validation (see Dietterich (1998) for the idea, though I think there have been some further improvements since then), or by correcting for the bias (e.g., Nadeau and Bengio, 2003 ).
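To make the difference in partitioning concrete, here is a small base-R sketch (my addition) generating the index sets for both schemes on 100 hypothetical data points.

    set.seed(1)
    n <- 100
    # k-fold CV: every point lands in exactly one test fold
    k <- 10
    fold <- sample(rep(1:k, length.out = n))      # random fold assignment
    kfold_test_sets <- split(seq_len(n), fold)    # 10 disjoint test sets
    # Monte Carlo CV: repeatedly draw a fresh random 90/10 split
    B <- 20
    mc_test_sets <- replicate(B, sample(n, size = 10), simplify = FALSE)
    # a point can appear in several Monte Carlo test sets, but in only one k-fold test set
    table(unlist(mc_test_sets))[1:5]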
{ "source": [ "https://stats.stackexchange.com/questions/51416", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21591/" ] }
51,456
Taken from Practical Statistics for Medical Research, where Douglas Altman writes on page 285: ...for any two quantities X and Y, X will be correlated with X-Y. Indeed, even if X and Y are samples of random numbers we would expect the correlation of X and X-Y to be 0.7 I tried this in R and it seems to be the case:

    x <- rnorm(1000000, 10, 2)
    y <- rnorm(1000000, 10, 2)
    cor(x, x-y)
    xu <- sample(1:100, size = 1000000, replace = T)
    yu <- sample(1:100, size = 1000000, replace = T)
    cor(xu, xu-yu)

Why is that? What is the theory behind this?
If $X$ and $Y$ are uncorrelated random variables with equal variance $\sigma^2$, then we have that $$\begin{align} \operatorname{var}(X-Y) &= \operatorname{var}(X) + \operatorname{var}(-Y)\\ &= \operatorname{var}(X) + \operatorname{var}(Y)\\ &=2\sigma^2,\\ \operatorname{cov}(X, X-Y) &= \operatorname{cov}(X,X) - \operatorname{cov}(X,Y) & \text{bilinearity of covariance operator}\\ &= \operatorname{var}(X) - 0 & 0 ~\text{because}~X ~\text{and}~ Y ~\text{are uncorrelated}\\ &= \sigma^2. \end{align}$$ Consequently, $$\rho_{X,X-Y} = \frac{\operatorname{cov}(X, X-Y)}{\sqrt{\operatorname{var}(X)\operatorname{var}(X-Y)}}= \frac{\sigma^2}{\sqrt{\sigma^2\cdot2\sigma^2}} = \frac{1}{\sqrt{2}}.$$ So, when you find $$\frac{\sum_{i=1}^n\left(x_i - \bar{x}\right) \left((x_i-y_i) - (\bar{x}-\bar{y})\right)}{ \sqrt{\sum_{i=1}^n\left(x_i - \bar{x}\right)^2 \sum_{i=1}^n\left((x_i-y_i) - (\bar{x}-\bar{y})\right)^2}} $$ the sample correlation of $x$ and $x-y$ for a large data set $\{(x_i,y_i)\colon 1 \leq i \leq n\}$ drawn from a population with these properties, which includes "random numbers" as a special case, the result tends to be close to the population correlation value $\frac{1}{\sqrt{2}} \approx 0.7071\ldots$
{ "source": [ "https://stats.stackexchange.com/questions/51456", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9671/" ] }
51,490
Is there a common method used to determine how many training samples are required to train a classifier (an LDA in this case) to obtain a minimum threshold generalization accuracy? I am asking because I would like to minimize the calibration time usually required in a brain-computer interface.
The search term you are looking for is "learning curve", which gives the (average) model performance as function of the training sample size. Learning curves depend on a lot of things, e.g. classification method complexity of the classifier how well the classes are separated. (I think for two-class LDA you may be able to derive some theoretical power calculations, but the crucial fact is always whether your data actually meets the "equal COV multivariate normal" assumption. I'd go for some simulation on for both LDA assumptions and resampling of your already existing data). There are two aspects of the performance of a classifier trained on a finite sample size $n$ (as usual), bias, i.e. on average a classifier trained on $n$ training samples is worse than the classifier trained on $n = \infty$ training cases (this is usually meant by learning curve), and variance: a given training set of $n$ cases may lead to quite different model performance. Even with few cases, you may be lucky and get good results. Or you have bad luck and get a really bad classifier. As usual, this variance decreases with incresing training sample size $n$. Another aspect that you may need to take into account is that it is usually not enough to train a good classifier, but you also need to prove that the classifier is good (or good enough). So you need to plan also the sample size needed for validation with a given precision. If you need to give these results as fraction of successes among so many test cases (e.g. producer's or consumer's accuracy / precision / sensitivity / positive predictive value), and the underlying classification task is rather easy, this can need more independent cases than training of a good model. As a rule of thumb, for training, the sample size is usually discussed in relation to model complexity (number of cases : number of variates), whereas absolute bounds on the test sample size can be given for a required precision of the performance measurement. Here's a paper, where we explained these things in more detail, and also discuss how to constuct learning curves: Beleites, C. and Neugebauer, U. and Bocklitz, T. and Krafft, C. and Popp, J.: Sample size planning for classification models. Anal Chim Acta, 2013, 760, 25-33. DOI: 10.1016/j.aca.2012.11.007 accepted manuscript on arXiv: 1211.1323 This is the "teaser", showing an easy classification problem (we actually have one easy distinction like this in our classification problem, but other classes are far more difficult to distinguish): We did not try to extrapolate to larger training sample sizes to determine how much more training cases are needed, because the test sample sizes are our bottleneck, and larger training sample sizes would let us construct more complex models, so extrapolation is questionable. For the kind of data sets I have, I'd approach this iteratively, measuring a bunch of new cases, showing how much things improved, measure more cases, and so on. This may be different for you, but the paper contains literature references to papers using extrapolation to higher sample sizes in order to estimate the required number of samples.
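A minimal simulation sketch (my own addition, under the "equal-covariance multivariate normal" assumption mentioned above, with arbitrary simulation settings) of an empirical learning curve for two-class LDA: train on increasing sample sizes, evaluate on a large independent test set, and average over repetitions so that both the bias and the variance of the performance estimate become visible.

    library(MASS)
    set.seed(7)
    d <- 4                                   # number of features
    make_data <- function(n) {               # two classes, shifted means, identity covariance
      cls <- factor(rep(c("A", "B"), each = n / 2))
      X   <- matrix(rnorm(n * d), ncol = d) + ifelse(cls == "B", 1, 0)
      data.frame(X, cls = cls)
    }
    test <- make_data(2000)                  # large held-out test set
    sizes <- c(20, 40, 80, 160, 320)
    curve <- sapply(sizes, function(n) {
      mean(replicate(30, {                   # 30 repetitions per training size
        train <- make_data(n)
        fit   <- lda(cls ~ ., data = train)
        mean(predict(fit, newdata = test)$class == test$cls)
      }))
    })
    round(rbind(n = sizes, accuracy = curve), 3)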
{ "source": [ "https://stats.stackexchange.com/questions/51490", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21624/" ] }
51,718
Suppose I want to see whether my data is exponential based on a histogram (i.e. skewed to the right). Depending on how I group or bin the data, I can get wildly different histograms. One set of histograms will make it seem that the data are exponential. Another set will make it seem that the data are not exponential. How do I make determining distributions from histograms well defined?
The difficulty with using histograms to infer shape While histograms are often handy and mostly useful, they can be misleading. Their appearance can alter quite a lot with changes in the locations of the bin boundaries. This problem has long been known*, though perhaps not as widely as it should be -- you rarely see it mentioned in elementary-level discussions (though there are exceptions). * for example, Paul Rubin[1] put it this way: " it's well known that changing the endpoints in a histogram can significantly alter its appearance ". . I think it's an issue that should be more widely discussed when introducing histograms. I'll give some examples and discussion. Why you should be wary of relying on a single histogram of a data set Take a look at these four histograms: That's four very different looking histograms. If you paste the following data in (I'm using R here): Annie <- c(3.15,5.46,3.28,4.2,1.98,2.28,3.12,4.1,3.42,3.91,2.06,5.53, 5.19,2.39,1.88,3.43,5.51,2.54,3.64,4.33,4.85,5.56,1.89,4.84,5.74,3.22, 5.52,1.84,4.31,2.01,4.01,5.31,2.56,5.11,2.58,4.43,4.96,1.9,5.6,1.92) Brian <- c(2.9, 5.21, 3.03, 3.95, 1.73, 2.03, 2.87, 3.85, 3.17, 3.66, 1.81, 5.28, 4.94, 2.14, 1.63, 3.18, 5.26, 2.29, 3.39, 4.08, 4.6, 5.31, 1.64, 4.59, 5.49, 2.97, 5.27, 1.59, 4.06, 1.76, 3.76, 5.06, 2.31, 4.86, 2.33, 4.18, 4.71, 1.65, 5.35, 1.67) Chris <- c(2.65, 4.96, 2.78, 3.7, 1.48, 1.78, 2.62, 3.6, 2.92, 3.41, 1.56, 5.03, 4.69, 1.89, 1.38, 2.93, 5.01, 2.04, 3.14, 3.83, 4.35, 5.06, 1.39, 4.34, 5.24, 2.72, 5.02, 1.34, 3.81, 1.51, 3.51, 4.81, 2.06, 4.61, 2.08, 3.93, 4.46, 1.4, 5.1, 1.42) Zoe <- c(2.4, 4.71, 2.53, 3.45, 1.23, 1.53, 2.37, 3.35, 2.67, 3.16, 1.31, 4.78, 4.44, 1.64, 1.13, 2.68, 4.76, 1.79, 2.89, 3.58, 4.1, 4.81, 1.14, 4.09, 4.99, 2.47, 4.77, 1.09, 3.56, 1.26, 3.26, 4.56, 1.81, 4.36, 1.83, 3.68, 4.21, 1.15, 4.85, 1.17) Then you can generate them yourself: opar<-par() par(mfrow=c(2,2)) hist(Annie,breaks=1:6,main="Annie",xlab="V1",col="lightblue") hist(Brian,breaks=1:6,main="Brian",xlab="V2",col="lightblue") hist(Chris,breaks=1:6,main="Chris",xlab="V3",col="lightblue") hist(Zoe,breaks=1:6,main="Zoe",xlab="V4",col="lightblue") par(opar) Now look at this strip chart: x<-c(Annie,Brian,Chris,Zoe) g<-rep(c('A','B','C','Z'),each=40) stripchart(x~g,pch='|') abline(v=(5:23)/4,col=8,lty=3) abline(v=(2:5),col=6,lty=3) (If it's still not obvious, see what happens when you subtract Annie's data from each set: head(matrix(x-Annie,nrow=40)) ) The data has simply been shifted left each time by 0.25. Yet the impressions we get from the histograms - right skew, uniform, left skew and bimodal - were utterly different. Our impression was entirely governed by the location of the first bin-origin relative to the minimum. So not just 'exponential' vs 'not-really-exponential' but 'right skew' vs 'left skew' or 'bimodal' vs 'uniform' just by moving where your bins start. Edit: If you vary the binwidth, you can get stuff like this happen: That's the same 34 observations in both cases, just different breakpoints, one with binwidth $1$ and the other with binwidth $0.8$ . x <- c(1.03, 1.24, 1.47, 1.52, 1.92, 1.93, 1.94, 1.95, 1.96, 1.97, 1.98, 1.99, 2.72, 2.75, 2.78, 2.81, 2.84, 2.87, 2.9, 2.93, 2.96, 2.99, 3.6, 3.64, 3.66, 3.72, 3.77, 3.88, 3.91, 4.14, 4.54, 4.77, 4.81, 5.62) hist(x,breaks=seq(0.3,6.7,by=0.8),xlim=c(0,6.7),col="green3",freq=FALSE) hist(x,breaks=0:8,col="aquamarine",freq=FALSE) Nifty, eh? Yes, those data were deliberately generated to do that... 
but the lesson is clear - what you think you see in a histogram may not be a particularly accurate impression of the data. What can we do? Histograms are widely used, frequently convenient to obtain and sometimes expected. What can we do to avoid or mitigate such problems? As Nick Cox points out in a comment to a related question : The rule of thumb always should be that details robust to variations in bin width and bin origin are likely to be genuine; details fragile to such are likely to be spurious or trivial . At the least, you should always do histograms at several different binwidths or bin-origins, or preferably both. Alternatively, check a kernel density estimate at not-too-wide a bandwidth. One other approach that reduces the arbitrariness of histograms is averaged shifted histograms , (that's one on that most recent set of data) but if you go to that effort, I think you might as well use a kernel density estimate. If I am doing a histogram (I use them in spite of being acutely aware of the issue), I almost always prefer to use considerably more bins than typical program defaults tend to give and very often I like to do several histograms with varying bin width (and, occasionally, origin). If they're reasonably consistent in impression, you're not likely to have this problem, and if they're not consistent, you know to look more carefully, perhaps try a kernel density estimate, an empirical CDF, a Q-Q plot or something similar. While histograms may sometimes be misleading, boxplots are even more prone to such problems; with a boxplot you don't even have the ability to say "use more bins". See the four very different data sets in this post , all with identical, symmetric boxplots, even though one of the data sets is quite skew. [1]: Rubin, Paul (2014) "Histogram Abuse!", Blog post, OR in an OB world , Jan 23 2014 link ... (alternate link)
{ "source": [ "https://stats.stackexchange.com/questions/51718", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21738/" ] }
51,743
When performing hierarchical clustering, one can use many metrics to measure the distance between clusters. Two such metrics involve calculating the centroids and means of the data points in the clusters. What is the difference between the mean and the centroid? Aren't these the same point in a cluster?
As far as I know, the "mean" of a cluster and the centroid of a single cluster are the same thing, though the term "centroid" might be a little more precise than "mean" when dealing with multivariate data. To find the centroid, one computes the (arithmetic) mean of the points' positions separately for each dimension. For example, if you had points at (-1, 10, 3), (0, 5, 2), and (1, 20, 10), then the centroid would be located at ((-1+0+1)/3, (10+5+20)/3, (3+2+10)/3), which simplifies to (0, 11 2/3, 5). (NB: The centroid does not have to be--and rarely is--one of the original data points.) The centroid is also sometimes called the center of mass or barycenter, based on its physical interpretation (it's the center of mass of an object defined by the points). Like the mean, the centroid's location minimizes the sum-squared distance from the other points. A related idea is the medoid, which is the data point that is "least dissimilar" from all of the other data points. Unlike the centroid, the medoid has to be one of the original points. You may also be interested in the geometric median, which is analogous to the median, but for multivariate data. These are both different from the centroid. However, as Gabe points out in his answer, there is a difference between the "centroid distance" and the "average distance" when you're comparing clusters. The centroid distance between clusters $A$ and $B$ is simply the distance between $\text{centroid}(A)$ and $\text{centroid}(B)$. The average distance is calculated by finding the average pairwise distance between the points in the two clusters. In other words, for every point $a_i$ in cluster $A$, you calculate $\text{dist}(a_i, b_1)$, $\text{dist}(a_i, b_2)$, ... $\text{dist}(a_i, b_n)$ and average them all together.
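In R, for example, the centroid is just a column-wise mean (my illustration, using the three points above):

    pts <- rbind(c(-1, 10, 3),
                 c( 0,  5, 2),
                 c( 1, 20, 10))
    colMeans(pts)    # centroid: 0, 11.667, 5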
{ "source": [ "https://stats.stackexchange.com/questions/51743", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21754/" ] }
52,089
What does having "constant variance" in the error term mean? As I see it, we have data with one dependent variable and one independent variable. Constant variance is one of the assumptions of linear regression, and I am wondering what homoscedasticity means here: even if I have 500 rows, I would compute a single variance value, which is obviously constant. Against what variable should I compare the variance?
It means that when you plot the individual errors against the predicted values, the variance of the errors should be constant across the range of predicted values. See the red arrows in the picture below: the lengths of the red lines (a proxy for the error variance) are the same.
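A quick way to look at this in practice (my sketch, not part of the original answer, using simulated data) is to plot residuals against fitted values after fitting the regression; under homoscedasticity the vertical spread should look roughly constant across the range of fitted values.

    set.seed(1)
    x <- runif(500, 0, 10)
    y <- 2 + 3 * x + rnorm(500, sd = 2)     # constant error variance by construction
    fit <- lm(y ~ x)
    plot(fitted(fit), resid(fit),
         xlab = "Fitted values", ylab = "Residuals")
    abline(h = 0, lty = 2)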
{ "source": [ "https://stats.stackexchange.com/questions/52089", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17262/" ] }
52,104
Let's say we have a dependent variable $Y$ with a few categories and a set of independent variables. What are the advantages of multinomial logistic regression over a set of binary logistic regressions (i.e. a one-vs-rest scheme)? By a set of binary logistic regressions I mean that for each category $y_{i} \in Y$ we build a separate binary logistic regression model with target=1 when $Y=y_{i}$ and 0 otherwise.
If $Y$ has more than two categories your question about "advantage" of one regression over the other is probably meaningless if you aim to compare the models' parameters , because the models will be fundamentally different: $\bf log \frac{P(i)}{P(not~i)}=logit_i=linear~combination$ for each $i$ binary logistic regression, and $\bf log \frac{P(i)}{P(r)}=logit_i=linear~combination$ for each $i$ category in multiple logistic regression, $r$ being the chosen reference category ( $i \ne r$ ). However, if your aim is only to predict probability of each category $i$ either approach is justified, albeit they may give different probability estimates. The formula to estimate a probability is generic: $\bf P'(i)= \frac{exp(logit_i)}{exp(logit_i)+exp(logit_j)+\dots+exp(logit_r)}$ , where $i,j,\dots,r$ are all the categories, and if $r$ was chosen to be the reference one its $\bf exp(logit)=1$ . So, for binary logistic that same formula becomes $\bf P'(i)= \frac{exp(logit_i)}{exp(logit_i)+1}$ . Multinomial logistic relies on the (not always realistic) assumption of independence of irrelevant alternatives whereas a series of binary logistic predictions does not. A separate theme is what are technical differences between multinomial and binary logistic regressions in case when $Y$ is dichotomous . Will there be any difference in results? Most of the time in the absence of covariates the results will be the same, still, there are differences in the algorithms and in output options. Let me just quote SPSS Help about that issue in SPSS: Binary logistic regression models can be fitted using either the Logistic Regression procedure or the Multinomial Logistic Regression procedure. Each procedure has options not available in the other. An important theoretical distinction is that the Logistic Regression procedure produces all predictions, residuals, influence statistics, and goodness-of-fit tests using data at the individual case level, regardless of how the data are entered and whether or not the number of covariate patterns is smaller than the total number of cases, while the Multinomial Logistic Regression procedure internally aggregates cases to form subpopulations with identical covariate patterns for the predictors, producing predictions, residuals, and goodness-of-fit tests based on these subpopulations. If all predictors are categorical or any continuous predictors take on only a limited number of values—so that there are several cases at each distinct covariate pattern—the subpopulation approach can produce valid goodness-of-fit tests and informative residuals, while the individual case level approach cannot. Logistic Regression provides the following unique features: Hosmer-Lemeshow test of goodness of fit for the model Stepwise analyses Contrasts to define model parameterization Alternative cut points for classification Classification plots Model fitted on one set of cases to a held-out set of cases Saves predictions, residuals, and influence statistics Multinomial Logistic Regression provides the following unique features: Pearson and deviance chi-square tests for goodness of fit of the model Specification of subpopulations for grouping of data for goodness-of-fit tests Listing of counts, predicted counts, and residuals by subpopulations Correction of variance estimates for over-dispersion Covariance matrix of the parameter estimates Tests of linear combinations of parameters Explicit specification of nested models Fit 1-1 matched conditional logistic regression models using differenced variables
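To see the two approaches side by side in R (my sketch, not part of the original answer; nnet::multinom is one common implementation of multinomial logistic regression, and the iris data are used purely as a convenient 3-category example), note that the one-vs-rest probabilities do not sum to one across categories and have to be renormalized before they are comparable:

    library(nnet)
    data(iris)
    # multinomial logistic regression
    fit_multi <- multinom(Species ~ Sepal.Length + Sepal.Width, data = iris, trace = FALSE)
    p_multi <- predict(fit_multi, type = "probs")
    # one-vs-rest: one binary logistic regression per category
    p_ovr <- sapply(levels(iris$Species), function(lev)
      fitted(glm(as.numeric(Species == lev) ~ Sepal.Length + Sepal.Width,
                 family = binomial, data = iris)))
    p_ovr <- p_ovr / rowSums(p_ovr)   # renormalize so each row sums to 1
    head(round(p_multi, 3)); head(round(p_ovr, 3))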
{ "source": [ "https://stats.stackexchange.com/questions/52104", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1643/" ] }
52,274
I am wondering how to choose a predictive model after doing K-fold cross-validation. This may be awkwardly phrased, so let me explain in more detail: whenever I run K-fold cross-validation, I use K subsets of the training data, and end up with K different models. I would like to know how to pick one of the K models, so that I can present it to someone and say "this is the best model that we can produce." Is it OK to pick any one of the K models? Or is there some kind of best practice that is involved, such as picking the model that achieves the median test error?
I think that you are missing something still in your understanding of the purpose of cross-validation. Let's get some terminology straight, generally when we say 'a model' we refer to a particular method for describing how some input data relates to what we are trying to predict. We don't generally refer to particular instances of that method as different models. So you might say 'I have a linear regression model' but you wouldn't call two different sets of the trained coefficients different models. At least not in the context of model selection. So, when you do K-fold cross validation, you are testing how well your model is able to get trained by some data and then predict data it hasn't seen. We use cross validation for this because if you train using all the data you have, you have none left for testing. You could do this once, say by using 80% of the data to train and 20% to test, but what if the 20% you happened to pick to test happens to contain a bunch of points that are particularly easy (or particularly hard) to predict? We will not have come up with the best estimate possible of the models ability to learn and predict. We want to use all of the data. So to continue the above example of an 80/20 split, we would do 5-fold cross validation by training the model 5 times on 80% of the data and testing on 20%. We ensure that each data point ends up in the 20% test set exactly once. We've therefore used every data point we have to contribute to an understanding of how well our model performs the task of learning from some data and predicting some new data. But the purpose of cross-validation is not to come up with our final model. We don't use these 5 instances of our trained model to do any real prediction. For that we want to use all the data we have to come up with the best model possible. The purpose of cross-validation is model checking, not model building. Now, say we have two models, say a linear regression model and a neural network. How can we say which model is better? We can do K-fold cross-validation and see which one proves better at predicting the test set points. But once we have used cross-validation to select the better performing model, we train that model (whether it be the linear regression or the neural network) on all the data. We don't use the actual model instances we trained during cross-validation for our final predictive model. Note that there is a technique called bootstrap aggregation (usually shortened to 'bagging') that does in a way use model instances produced in a way similar to cross-validation to build up an ensemble model, but that is an advanced technique beyond the scope of your question here.
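A compact sketch of that workflow (my addition; the two candidate model formulas and the simulated data are arbitrary): use the same folds to score both candidates, pick the winner, and then refit it on all of the data for the final predictive model.

    set.seed(1)
    n <- 200
    dat <- data.frame(x = runif(n, -2, 2))
    dat$y <- 1 + dat$x + 0.5 * dat$x^2 + rnorm(n)
    fold <- sample(rep(1:5, length.out = n))          # 5-fold assignment
    cv_mse <- function(formula) {
      mean(sapply(1:5, function(k) {
        fit <- lm(formula, data = dat[fold != k, ])
        mean((dat$y[fold == k] - predict(fit, dat[fold == k, ]))^2)
      }))
    }
    cv_mse(y ~ x)           # candidate 1: linear
    cv_mse(y ~ x + I(x^2))  # candidate 2: quadratic (should win here)
    final <- lm(y ~ x + I(x^2), data = dat)           # refit the chosen model on ALL data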
{ "source": [ "https://stats.stackexchange.com/questions/52274", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3572/" ] }
52,293
I have plotted this after I did a Shapiro-Wilk normality test. The test showed that it is likely that the population is normally distributed. However, how to see this "behaviour" on this plot? UPDATE A simple histogram of the data: UPDATE The Shapiro-Wilk test says:
" The test showed that it is likely that the population is normally distributed. " No; it didn't show that. Hypothesis tests don't tell you how likely the null is. In fact you can bet this null is false. The Q-Q plot doesn't give a strong indication of non-normality (the plot is fairly straight); there's perhaps a slightly shorter left tail than you'd expect but that really won't matter much. The histogram as-is probably doesn't say a lot either; it does also hint at a slightly shorter left tail. But see here The population distribution your data are from isn't going to be exactly normal. However, the Q-Q plot shows that normality is probably a reasonably good approximation. If the sample size was not too small, a lack of rejection of the Shapiro-Wilk would probably be saying much the same. Update: your edit to include the actual Shapiro-Wilk p-value is important because in fact that would indicate you would reject the null at typical significant levels. That test indicates the population your data were sampled from (assuming a simple random sample of that population) is not normally distributed and the mild skewness indicated by the plots is probably what is being picked up by the test. For typical procedures that might assume normality of the variable itself (the one-sample t-test is one that comes to mind), at what appears to be a fairly large sample size, this mild non-normality will be of almost no consequence at all -- one of the problems with goodness of fit tests is they're more likely to reject just when it doesn't matter (when the sample size is large enough to detect some modest non-normality); similarly they're more likely to fail to reject when it matters most (when the sample size is small).
{ "source": [ "https://stats.stackexchange.com/questions/52293", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13561/" ] }
52,517
As far as I understand, $R^2$ explains how well the model predicts the observation. Adjusted $R^2$ is the one that takes into account more observations (or degrees of freedom). So, Adjusted $R^2$ predicts the model better? Then why is this less than $R^2$? It appears it should often be more.
$R^2$ measures the strength of the linear relationship between the independent variables and the dependent variable. It is defined as $1-\frac{SSE}{SSTO}$, i.e. one minus the sum of squared errors divided by the total sum of squares. Here $SSTO = SSE + SSR$, where $SSE$ is the error sum of squares and $SSR$ is the regression sum of squares. As independent variables are added, $SSR$ will continue to rise and (since $SSTO$ is fixed) $SSE$ will go down, so $R^2$ will continually rise irrespective of how valuable the added variables are. The adjusted $R^2$ attempts to account for this overfitting (statistical shrinkage): models with tons of predictors tend to perform better in sample than when tested out of sample. The adjusted $R^2$ "penalizes" you for adding extra predictor variables that don't improve the existing model, which makes it helpful in model selection. Adjusted $R^2$ is always less than or equal to $R^2$ (they coincide only when $R^2 = 1$), and as you add predictors that contribute little, the gap between them widens.
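A quick R demonstration with simulated data (everything here is invented for illustration): adding pure-noise predictors pushes $R^2$ up, while adjusted $R^2$ stays flat or drops.

set.seed(1)
n <- 50
x <- rnorm(n)
y <- x + rnorm(n)
noise <- matrix(rnorm(n * 10), n, 10)        # 10 useless predictors
d <- data.frame(y, x, noise)

f1 <- lm(y ~ x, data = d)
f2 <- lm(y ~ ., data = d)                    # x plus the 10 noise columns

summary(f1)$r.squared;  summary(f1)$adj.r.squared
summary(f2)$r.squared;  summary(f2)$adj.r.squared   # R^2 rises, adjusted R^2 does not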
{ "source": [ "https://stats.stackexchange.com/questions/52517", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22133/" ] }
52,585
I start with my OLS regression: $$ y = \beta _0 + \beta_1x_1+\beta_2 D + \varepsilon $$ where D is a dummy variable, and the estimates become different from zero with a low p-value. I then perform a Ramsey RESET test and find that I have some misspecification of the equation; I thus include squared x: $$ y = \beta _0 + \beta_1x_1+\beta_2x_1^2+\beta_3 D + \varepsilon $$ What does the squared term explain? (A non-linear increase in Y?) By doing this my D estimate does not vary from zero any more, with a high p-value. How do I interpret the squared term in my equation (in general)? Edit: Improving question.
Well, first off, the dummy variable is interpreted as a change in intercept. That is, your coefficient $\beta_3$ gives you the difference in the intercept when $D=1$, i.e. when $D=1$, the intercept is $\beta_0 + \beta_3$. That interpretation doesn't change when adding the squared $x_1$. Now, the point of adding a squared term is that you assume that the relationship wears off at a certain point. Looking at your second equation $$y = \beta _0 + \beta_1x_1+\beta_2x_1^2+\beta_3 D + \varepsilon$$ taking the derivative w.r.t. $x_1$ yields $$\frac{\partial y}{\partial x_1} = \beta_1 + 2\beta_2 x_1$$ Setting this equal to zero and solving gives you the turning point of the relationship. As user1493368 explained, this is indeed reflecting an inverse U-shape if $\beta_2<0$ and vice versa. Take the following example: $$\hat{y} = 1.3 + 0.42 x_1 - 0.32 x_1^2 + 0.14D$$ The derivative w.r.t. $x_1$ is $$\frac{\partial y}{\partial x_1} = 0.42 - 2\cdot 0.32 x_1 $$ Solving for $x_1$ gives you $$\frac{\partial y}{\partial x_1} = 0 \iff x_1 \approx 0.66 $$ That is the point at which the relationship has its turning point. You can take a look at Wolfram-Alpha's output for the above function, for some visualization of your problem. Remember, when interpreting the ceteris paribus effect of a change in $x_1$ on $y$, you have to look at the equation: $$\Delta y = (\beta_1 + 2\beta_2x_1)\Delta x_1$$ That is, you cannot interpret $\beta_1$ in isolation once you have added the squared regressor $x_1^2$! Regarding your insignificant $D$ after including the squared $x_1$, it points towards misspecification bias.
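In practice you would compute the turning point $-\hat\beta_1/(2\hat\beta_2)$ directly from the fitted coefficients. A minimal R sketch (the data are simulated to roughly match the numeric example above; variable names are made up):

set.seed(1)
x1 <- runif(200, 0, 2)
D  <- rbinom(200, 1, 0.5)
y  <- 1.3 + 0.42 * x1 - 0.32 * x1^2 + 0.14 * D + rnorm(200, sd = 0.2)

fit <- lm(y ~ x1 + I(x1^2) + D)
b   <- coef(fit)
turning_point <- -b["x1"] / (2 * b["I(x1^2)"])   # where dy/dx1 = 0 (about 0.66 here)
turning_point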
{ "source": [ "https://stats.stackexchange.com/questions/52585", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22159/" ] }
52,625
I have a data set with 16 variables, and after clustering by kmeans, I wish to plot the two groups. What plots do you suggest to visually represent the two clusters?
There's no single right visualization. It depends on what aspect of the clusters you want to see or emphasize. Do you want to see how each variable contributes? Consider a parallel coordinates plot. Do you want to see how clusters are distributed along the principal components? Consider a biplot (in 2D or 3D). Do you want to look for cluster outliers over all dimensions? Consider a scatterplot of distance from cluster 1's center against distance from cluster 2's center. (By definition of K-means, each cluster will fall on one side of the diagonal line.) Do you want to see pairwise relations compared to the clustering? Consider a scatterplot matrix colored by cluster. Do you want to see a summary view of the cluster distances? Consider a comparison of any distribution visualization, such as histograms, violin plots, or box plots.
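A few of these plots in base R, as a rough sketch (iris is used as a stand-in for your 16-variable data, and two clusters are assumed as in your question):

set.seed(1)
X  <- iris[, 1:4]                      # stand-in for your numeric data frame
km <- kmeans(scale(X), centers = 2)

# scatterplot matrix colored by cluster
pairs(X, col = km$cluster, pch = 19)

# clusters on the first two principal components
pc <- prcomp(X, scale. = TRUE)
plot(pc$x[, 1:2], col = km$cluster, pch = 19,
     xlab = "PC1", ylab = "PC2", main = "k-means clusters on PC1/PC2")

# parallel coordinates plot (MASS ships with R)
MASS::parcoord(X, col = km$cluster)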
{ "source": [ "https://stats.stackexchange.com/questions/52625", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/19848/" ] }
52,754
Is it "okay" to add a vertical line to a histogram to visualize the mean value? It seems okay to me, but I've never seen this in textbooks and the likes, so I'm wondering if there's some sort of convention not to do that? The graph is for a term paper, I just want to make sure I don't accidentally break some super important unspoken stats rule. :)
Of course, why not? Here's an example (one of dozens I found with a simple google search): (Image source is is the measuring usability blog, here .) I've seen means, means plus or minus a standard deviation, various quantiles (like median, quartiles, 10th and 90th percentiles) all displayed in various ways. Instead of drawing a line right across the plot, you might mark information along the bottom of it - like so: There's an example (one of many to be found) with a boxplot across the top instead of at the bottom, here . Sometimes people mark in the data: (I have jittered the data locations slightly because the values were rounded to integers and you couldn't see the relative density well.) There's an example of this kind, done in Stata, on this page (see the third one here ) Histograms are better with a little extra information - they can be misleading on their own You just need to take care to explain what your plot consists of! (You'd want a better title and x-axis label than I used here, for starters. Plus an explanation in a figure caption explaining what you had marked on it.) -- One last plot: -- My plots are generated in R. Edit: As @gung surmised, abline(v=mean... was used to draw the mean-line across the plot and rug was used to draw the data values (though I actually used rug(jitter(... because the data was rounded to integers). Here's a way to do the boxplot in between the histogram and the axis: hist(Davis2[,2],n=30) boxplot(Davis2[,2], add=TRUE,horizontal=TRUE,at=-0.75,border="darkred",boxwex=1.5,outline=FALSE) I'm not going to list what everything there is for, but you can check the arguments in the help ( ?boxplot ) to find out what they're for, and play with them yourself. However, it's not a general solution - I don't guarantee it will always work as well as it does here (note I already changed the at and boxwex options*). If you don't write an intelligent function to take care of everything, it's necessary to pay attention to what everything does to make sure it's doing what you want. Here's how to create the data I used (I was trying to show how Theil regression was really able to handle several influential outliers). It just happened to be data I was playing with when I first answered this question. library("car") add <- data.frame(sex=c("F","F"), weight=c(150,130),height=c(NA,NA),repwt=c(55,50),repht=c(NA,NA)) Davis2 <- rbind(Davis,add) * -- an appropriate value for at is around -0.5 times the value of boxwex ; that would be a good default if you write a function to do it; boxwex would need to be scaled in a way that relates to the y-scale (height) of the boxplot; I'd suggest 0.04 to 0.05 times the upper y-limit might often be okay. Code for the marginal stripchart: hist(Davis2[,2],n=30) stripchart(jitter(Davis2[,2],amount=.5), method="jitter",jitter=.5,pch=16,cex=.05,add=TRUE,at=-.75,col='purple3')
{ "source": [ "https://stats.stackexchange.com/questions/52754", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22251/" ] }
52,773
I have a classifier that I'm doing cross-validation on, along with a hundred or so features that I'm doing forward selection on to find optimal combinations of features. I also compare this against running the same experiments with PCA, where I take the potential features, apply SVD, transform the original signals onto the new coordinate space, and use the top $k$ features in my forward selection process. My intuition was that PCA would improve the results, as the signals would be more "informative" than the original features. Is my naive understanding of PCA leading me into trouble? Can anyone suggest some of the common reasons why PCA may improve results in some situations, but worsen them in others?
Consider a simple case, lifted from a terrific and undervalued article "A Note on the Use of Principal Components in Regression " . Suppose you only have two (scaled and de-meaned) features, denote them $x_1$ and $x_2$ with positive correlation equal to 0.5, aligned in $X$, and a third response variable $Y$ you wish to classify. Suppose that the classification of $Y$ is fully determined by the sign of $x_1 - x_2$. Performing PCA on $X$ results in the new (ordered by variance) features $[x_1 + x_2, x_1 - x_2]$, since $\operatorname{Var}( x_1 + x_2 ) = 1 + 1 + 2\rho > \operatorname{Var}(x_1 - x_2 ) = 2 - 2\rho$. Therefore, if you reduce your dimension to 1 i.e. the first principal component, you are throwing away the exact solution to your classification! The problem occurs because PCA is agnostic to $Y$. Unfortunately, one cannot include $Y$ in the PCA either as this will result in data leakage. Data leakage is when your matrix $X$ is constructed using the target predictors in question, hence any predictions out-of-sample will be impossible. For example: in financial time series, trying to predict the European end-of-day close, which occurs at 11:00am EST, using American end-of-day closes, at 4:00pm EST, is data leakage since the American closes, which occur hours later, have incorporated the prices of European closes.
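This failure mode is easy to reproduce. A small R simulation of the two-feature example above (correlation 0.5, class fully determined by the sign of $x_1-x_2$); the sample size is arbitrary:

set.seed(1)
n  <- 1000
x1 <- rnorm(n)
x2 <- 0.5 * x1 + sqrt(1 - 0.5^2) * rnorm(n)   # cor(x1, x2) is about 0.5
y  <- factor(sign(x1 - x2))

pc <- prcomp(cbind(x1, x2))   # PC1 ~ x1 + x2, PC2 ~ x1 - x2

# PC1 carries almost no class information; PC2 separates the classes perfectly
boxplot(pc$x[, 1] ~ y, main = "PC1 by class")
boxplot(pc$x[, 2] ~ y, main = "PC2 by class")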
{ "source": [ "https://stats.stackexchange.com/questions/52773", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5646/" ] }
52,825
I have a logit model that comes up with a number between 0 and 1 for many cases, but how can we interpret this? Let's take a case with a logit of 0.20. Can we assert that there is a 20% probability that a case belongs to group B vs group A? Is that the correct way of interpreting the logit value?
The logit $L$ of a probability $p$ is defined as $$L = \ln\frac{p}{1-p}$$ The term $\frac{p}{1-p}$ is called odds. The natural logarithm of the odds is known as log-odds or logit . The inverse function is $$p = \frac{1}{1+e^{-L}}$$ Probabilities range from zero to one, i.e., $p\in[0,1]$, whereas logits can be any real number ($\mathbb{R}$, from minus infinity to infinity; $L\in (-\infty,\infty)$). A probability of $0.5$ corresponds to a logit of $0$. Negative logit values indicate probabilities smaller than $0.5$, positive logits indicate probabilities greater than $0.5$. The relationship is symmetrical: Logits of $-0.2$ and $0.2$ correspond to probabilities of $0.45$ and $0.55$, respectively. Note: The absolute distance to $0.5$ is identical for both probabilities. This graph shows the non-linear relationship between logits and probabilities: The answer to your question is: There is a probability of about $0.55$ that a case belongs to group B.
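In R the conversion in both directions is built in: plogis() maps a logit to a probability and qlogis() maps a probability back to a logit.

plogis(0.20)          # logit 0.20  -> probability of about 0.55
qlogis(0.55)          # probability 0.55 -> logit of about 0.20
plogis(0)             # logit 0 -> probability 0.5
plogis(c(-0.2, 0.2))  # symmetric around 0.5: about 0.45 and 0.55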
{ "source": [ "https://stats.stackexchange.com/questions/52825", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22289/" ] }
52,829
Why is the test statistic of a likelihood ratio test distributed chi-squared? $2(\ln \text{ L}_{\rm alt\ model} - \ln \text{ L}_{\rm null\ model} ) \sim \chi^{2}_{df_{\rm alt}-df_{\rm null}}$
As mentioned by @Nick this is a consequence of Wilks' theorem . But note that the test statistic is only asymptotically $\chi^2$-distributed, not exactly $\chi^2$-distributed in finite samples. I am very impressed by this theorem because it holds in a very wide context. Consider a statistical model with likelihood $l(\theta \mid y)$ where $y$ is the vector of $n$ independent replicated observations from a distribution with parameter $\theta$ belonging to a submanifold $B_1$ of $\mathbb{R}^d$ with dimension $\dim(B_1)=s$. Let $B_0 \subset B_1$ be a submanifold with dimension $\dim(B_0)=m$. Imagine you are interested in testing $H_0\colon\{\theta \in B_0\}$. The likelihood ratio is $$lr(y) = \frac{\sup_{\theta \in B_1}l(\theta \mid y)}{\sup_{\theta \in B_0}l(\theta \mid y)}. $$ Define the deviance $d(y)=2 \log \big(lr(y)\big)$. Then Wilks' theorem says that, under usual regularity assumptions, $d(y)$ is asymptotically $\chi^2$-distributed with $s-m$ degrees of freedom when $H_0$ holds true. It is proven in Wilks' original paper mentioned by @Nick. I think this paper is not easy to read. Wilks published a book later, perhaps with an easier presentation of his theorem. A short heuristic proof is given in Williams' excellent book .
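A quick way to see the theorem at work is simulation. Here is an R sketch for the simplest case I could think of: testing $H_0\colon \mu = 0$ for i.i.d. $N(\mu,1)$ data with known variance, where the deviance should be $\chi^2_1$ (the sample size and number of replications are arbitrary).

set.seed(1)
n    <- 100
nsim <- 5000
dev  <- replicate(nsim, {
  x      <- rnorm(n, mean = 0, sd = 1)                         # data generated under H0
  ll_alt <- sum(dnorm(x, mean = mean(x), sd = 1, log = TRUE))  # mu at its MLE
  ll_nul <- sum(dnorm(x, mean = 0,       sd = 1, log = TRUE))  # mu fixed at 0
  2 * (ll_alt - ll_nul)
})

# compare the simulated deviances with the chi-square(1) distribution
qqplot(qchisq(ppoints(nsim), df = 1), dev,
       xlab = "chi-square(1) quantiles", ylab = "simulated deviance")
abline(0, 1)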
{ "source": [ "https://stats.stackexchange.com/questions/52829", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22088/" ] }
52,906
Using the student t-distribution with $k > 0$ degrees of freedom, location parameter $l$ and scale parameter $s$ having density $$\frac{\Gamma \left(\frac{k+1}{2}\right)}{\Gamma\left(\frac{k}{2}\sqrt{k \pi s^2}\right)} \left\{ 1 + k^{-1}\left( \frac{x-l}{s}\right)\right\}^{-(k+1)/2},$$ how to show that the Student $t$-distribution can be written as a mixture of Gaussian distributions by letting $X\sim N(\mu,\sigma^2)$, $\tau = 1/\sigma^2\sim\Gamma(\alpha,\beta)$, and integrating the joint density $f(x,\tau|\mu)$ to get the marginal density $f(x|\mu)$? What are the parameters of the resulting $t$-distribution, as functions of $\mu,\alpha,\beta$? I got lost in calculus by integrating the joint conditional density with the Gamma distribution.
The PDF of a Normal distribution is $$f_{\mu, \sigma}(x) = \frac{1}{\sqrt{2 \pi} \sigma} e^{-\frac{(x-\mu )^2}{2 \sigma ^2}}dx$$ but in terms of $\tau = 1/\sigma^2$ it is $$g_{\mu, \tau}(x) = \frac{\sqrt{\tau}}{\sqrt{2 \pi}} e^{-\frac{\tau(x-\mu )^2}{2 }}dx.$$ The PDF of a Gamma distribution is $$h_{\alpha, \beta}(\tau) = \frac{1}{\Gamma(\alpha)}e^{-\frac{\tau}{\beta }} \tau^{-1+\alpha } \beta ^{-\alpha }d\tau.$$ Their product, slightly simplified with easy algebra, is therefore $$f_{\mu, \alpha, \beta}(x,\tau) =\frac{1}{\beta^\alpha\Gamma(\alpha)\sqrt{2 \pi}} e^{-\tau\left(\frac{(x-\mu )^2}{2 } + \frac{1}{\beta}\right)} \tau^{-1/2+\alpha}d\tau dx.$$ Its inner part evidently has the form $\exp(-\text{constant}_1 \times \tau) \times \tau^{\text{constant}_2}d\tau$, making it a multiple of a Gamma function when integrated over the full range $\tau=0$ to $\tau=\infty$. That integral therefore is immediate (obtained by knowing the integral of a Gamma distribution is unity), giving the marginal distribution $$f_{\mu, \alpha, \beta}(x) = \frac{\sqrt{\beta } \Gamma \left(\alpha +\frac{1}{2}\right) }{\sqrt{2\pi } \Gamma (\alpha )}\frac{1}{\left(\frac{\beta}{2} (x-\mu )^2+1\right)^{\alpha +\frac{1}{2}}}.$$ Trying to match the pattern provided for the $t$ distribution shows there is an error in the question: the PDF for the Student t distribution actually is proportional to $$\frac{1}{\sqrt{k} s }\left(\frac{1}{1+k^{-1}\left(\frac{x-l}{s}\right)^2}\right)^{\frac{k+1}{2}}$$ (the power of $(x-l)/s$ is $2$, not $1$). Matching the terms indicates $k = 2 \alpha$, $l=\mu$, and $s = 1/\sqrt{\alpha\beta}$. Notice that no Calculus was needed for this derivation: everything was a matter of looking up the formulas of the Normal and Gamma PDFs, carrying out some trivial algebraic manipulations involving products and powers, and matching patterns in algebraic expressions (in that order).
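One can also check the result numerically. A short R simulation (using the shape-scale parameterization of the Gamma to match the density above; the particular values of $\alpha$, $\beta$, $\mu$ are arbitrary): draw $\tau\sim\Gamma(\alpha,\beta)$, then $X\mid\tau\sim N(\mu,1/\tau)$, and compare with a $t$ distribution with $k=2\alpha$ degrees of freedom, location $\mu$ and scale $s=1/\sqrt{\alpha\beta}$.

set.seed(1)
n     <- 1e5
alpha <- 3;  beta <- 2;  mu <- 1

tau <- rgamma(n, shape = alpha, scale = beta)   # precision tau = 1/sigma^2
x   <- rnorm(n, mean = mu, sd = 1 / sqrt(tau))  # normal given tau

k <- 2 * alpha                 # degrees of freedom
s <- 1 / sqrt(alpha * beta)    # scale
t_sample <- mu + s * rt(n, df = k)

qqplot(t_sample, x); abline(0, 1)   # the two samples should lie on the identity line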
{ "source": [ "https://stats.stackexchange.com/questions/52906", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21417/" ] }
52,976
When computing the covariance matrix of a sample, is one then guaranteed to get a symmetric and positive-definite matrix? Currently my problem has a sample of 4600 observation vectors and 24 dimensions.
For a sample of vectors $x_i=(x_{i1},\dots,x_{ik})^\top$ , with $i=1,\dots,n$ , the sample mean vector is $$ \bar{x}=\frac{1}{n} \sum_{i=1}^n x_i \, , $$ and the sample covariance matrix is $$ Q = \frac{1}{n} \sum_{i=1}^n (x_i-\bar{x})(x_i-\bar{x})^\top \, . $$ For a nonzero vector $y\in\mathbb{R}^k$ , we have $$ y^\top Qy = y^\top\left(\frac{1}{n} \sum_{i=1}^n (x_i-\bar{x})(x_i-\bar{x})^\top\right) y $$ $$ = \frac{1}{n} \sum_{i=1}^n y^\top (x_i-\bar{x})(x_i-\bar{x})^\top y $$ $$ = \frac{1}{n} \sum_{i=1}^n \left( (x_i-\bar{x})^\top y \right)^2 \geq 0 \, . \quad (*) $$ Therefore, $Q$ is always positive semi-definite . The additional condition for $Q$ to be positive definite was given in whuber's comment below. It goes as follows. Define $z_i=(x_i-\bar{x})$ , for $i=1,\dots,n$ . For any nonzero $y\in\mathbb{R}^k$ , $(*)$ is zero if and only if $z_i^\top y=0$ , for each $i=1,\dots,n$ . Suppose the set $\{z_1,\dots,z_n\}$ spans $\mathbb{R}^k$ . Then, there are real numbers $\alpha_1,\dots,\alpha_n$ such that $y=\alpha_1 z_1 +\dots+\alpha_n z_n$ . But then we have $y^\top y=\alpha_1 z_1^\top y + \dots +\alpha_n z_n^\top y=0$ , yielding that $y=0$ , a contradiction. Hence, if the $z_i$ 's span $\mathbb{R}^k$ , then $Q$ is positive definite . This condition is equivalent to $\mathrm{rank} [z_1 \dots z_n] = k$ .
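You can also verify this numerically for your own data. A short R check (random data of the same shape as yours, 4600 x 24, used only as a stand-in):

set.seed(1)
X <- matrix(rnorm(4600 * 24), nrow = 4600, ncol = 24)  # stand-in for your observations

S <- cov(X)                                            # sample covariance matrix
isSymmetric(S)                                         # TRUE
min(eigen(S, symmetric = TRUE, only.values = TRUE)$values)  # >= 0; strictly > 0 if full rank

# with n = 4600 >> k = 24 and no exact linear dependencies among the columns,
# the centered rows span R^24, so S is positive definite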
{ "source": [ "https://stats.stackexchange.com/questions/52976", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/20654/" ] }
53,240
My questions are about Random Forests. The concept of this beautiful classifier is clear to me, but still there are a lot of practical usage questions. Unfortunately, I failed to find any practical guide to RF (I've been searching for something like "A Practical Guide to Training Restricted Boltzmann Machines" by Geoffrey Hinton, but for Random Forests!). How can one tune RF in practice? Is it true that a bigger number of trees is always better? Is there a reasonable limit (except computational capacity, of course) on increasing the number of trees, and how can it be estimated for a given dataset? What about the depth of the trees? How do I choose a reasonable one? Is there a sense in experimenting with trees of different depths in one forest, and what is the guidance for that? Are there any other parameters worth looking at when training RF? The algorithms for building the individual trees, maybe? When they say RF are resistant to overfitting, how true is that? I'll appreciate any answers and/or links to guides or articles that I might have missed during my search.
I'm not an authoritative figure, so consider these brief practitioner notes: More trees is always better, with diminishing returns. Deeper trees are almost always better, subject to requiring more trees for similar performance. The above two points are directly a result of the bias-variance tradeoff. Deeper trees reduce the bias; more trees reduce the variance. The most important hyper-parameter is how many features to test for each split. The more useless features there are, the more features you should try. This needs to be tuned. You can sort of tune it via OOB estimates if you just want to know your performance on your training data and there is no twinning (~repeated measures). Even though this is the most important parameter, its optimum is still usually fairly close to the originally suggested defaults (sqrt(p) or (p/3) for classification/regression). Fairly recent research shows you don't even need to do exhaustive split searches inside a feature to get good performance. Just try a few cut points for each selected feature and move on. This makes training even faster. (~Extremely Randomized Trees).
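As a rough illustration of tuning the number of features per split via the OOB error, here is a sketch assuming the randomForest package and a data frame with a factor response named y (the dataset is a stand-in, not from the question; the number of trees is arbitrary):

library(randomForest)
set.seed(1)
dat <- iris; names(dat)[5] <- "y"            # stand-in data with a factor response y

p <- ncol(dat) - 1
oob_err <- sapply(1:p, function(m) {
  rf <- randomForest(y ~ ., data = dat, mtry = m, ntree = 1000)
  rf$err.rate[rf$ntree, "OOB"]               # OOB error after the last tree
})
oob_err                                      # pick the mtry with the lowest OOB error

plot(randomForest(y ~ ., data = dat, ntree = 2000))  # OOB error vs number of trees flattens out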
{ "source": [ "https://stats.stackexchange.com/questions/53240", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12411/" ] }
53,312
From the documentation for anova() : When given a sequence of objects, ‘anova’ tests the models against one another in the order specified... What does it mean to test the models against one another? And why does the order matter? Here is an example from the GenABEL tutorial : > modelGen = lm(qt~snp1) > modelAdd = lm(qt~as.numeric(snp1)) > modelDom = lm(qt~I(as.numeric(snp1)>=2)) > modelRec = lm(qt~I(as.numeric(snp1)>=3)) anova(modelAdd, modelGen, test="Chisq") Analysis of Variance Table Model 1: qt ~ as.numeric(snp1) Model 2: qt ~ snp1 Res.Df RSS Df Sum of Sq Pr(>Chi) 1 2372 2320 2 2371 2320 1 0.0489 0.82 anova(modelDom, modelGen, test="Chisq") Analysis of Variance Table Model 1: qt ~ I(as.numeric(snp1) >= 2) Model 2: qt ~ snp1 Res.Df RSS Df Sum of Sq Pr(>Chi) 1 2372 2322 2 2371 2320 1 1.77 0.18 anova(modelRec, modelGen, test="Chisq") Analysis of Variance Table Model 1: qt ~ I(as.numeric(snp1) >= 3) Model 2: qt ~ snp1 Res.Df RSS Df Sum of Sq Pr(>Chi) 1 2372 2324 2 2371 2320 1 3.53 0.057 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 How do I interpret this output?
When you use anova(lm.1,lm.2,test="Chisq") , it performs the Chi-square test to compare lm.1 and lm.2 (i.e. it tests whether the reduction in the residual sum of squares is statistically significant or not). Note that this makes sense only if lm.1 and lm.2 are nested models. For example, in the 1st anova that you used, the p-value of the test is 0.82. It means that the fitted model "modelAdd" is not significantly different from modelGen at the level of $\alpha=0.05$. However, using the p-value in the 3rd anova, the model "modelRec" is significantly different from model "modelGen" at $\alpha=0.1$. Check out ANOVA for Linear Model Fits as well.
{ "source": [ "https://stats.stackexchange.com/questions/53312", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7787/" ] }
54,450
I came across a study in which patients, who were all over 50 years of age, were pseudo-randomized by birth year. If the birth year were an even number, usual care, if an odd number, intervention. It's easier to implement, it's harder to subvert (it's easy to check what treatment a patient should have received), it's easy to remember (the assignment went on for several years). But still, I don't like it, I feel like proper randomization would have been better. But I can't explain why. Am I wrong for feeling that, or is there a good reason to prefer 'real' randomization?
You are right to be skeptical. In general, one should use 'real' randomization, because typically one doesn't have all knowledge about relevant factors (unobservables). If one of those unobservables is correlated with the birth year being odd or even, then it is also correlated with whether or not they received treatment. If this is the case, we cannot identify the treatment effect: effects we observe could be due to treatment, or due to the unobserved factor(s). This is not a problem with real randomization, where we don't expect any dependence between treatment and unobservables (though, of course, for small samples it may be there). To construct a story about why this randomization procedure might be a problem, suppose the study only included subjects that were at age 17/18 when, say, the Vietnam war started. At 17 there was no chance of being drafted (correct me if I am wrong on that), while there was that chance at 18. Assuming the chance was nonnegligible and that war experience changes people, it implies that, years later, these two groups are different, even though they are just 1 year apart. So perhaps the treatment (drug) looks like it doesn't work, but because only the group with Vietnam veterans received it, this may actually be due to the fact that it doesn't work on people with PTSD (or other factors related to being a veteran). In other words, you need both groups (treatment and control) to be identical, except for the treatment, to identify the treatment effect. With assignment by age, this is not the case. So unless you can rule out unobserved differences between the groups (but how do you do that if they aren't observed?), real randomization is preferable.
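The danger is easy to demonstrate with a toy simulation in R (all numbers are invented): when an unobserved factor is correlated with the parity-based assignment, the estimated "treatment effect" is biased; under real randomization it is not.

set.seed(1)
n       <- 10000
parity  <- rbinom(n, 1, 0.5)              # 1 = odd birth year -> intervention
u       <- 0.5 * parity + rnorm(n)        # unobservable correlated with the assignment
outcome <- 2 + u + rnorm(n)               # the true treatment effect is 0

# assignment by parity: the estimate picks up the unobservable's effect
coef(lm(outcome ~ parity))["parity"]      # roughly 0.5, not 0

# real randomization: unbiased
treat <- rbinom(n, 1, 0.5)
coef(lm(outcome ~ treat))["treat"]        # roughly 0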
{ "source": [ "https://stats.stackexchange.com/questions/54450", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17072/" ] }
54,452
I have a set of data points with a total number of Nt. I know a priori that the data comes from two distinct processes (distributions). I am trying to find the optimal model parameters together with the optimal partitioning for the dataset; such that N1+N2=Nt. I want to use the Bayesian Information Criterion, my question is which of the following BIC expressions is the correct one to optimize: As an example I will assume the problem of bi-linear fit of a dataset BIC= LogLikelihood - 0.5*k*log(N) [k: free parameters, N: number of datapoints] 1) Two separate linear fit models: LL(M1) - log(N1) + LL(M2) - log(N2) (k=2 for linear fit) 2) One single bi-linear fit model: LL(M)-2.5*log(Nt) (k=5; 4 (2 linear fits) + 1 (split point))
{ "source": [ "https://stats.stackexchange.com/questions/54452", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21307/" ] }
54,849
I recently learned about a principle of probabilistic reasoning called " explaining away ," and I am trying to grasp an intuition for it. Let me set up a scenario. Let $A$ be the event that an earthquake is occurring. Let event $B$ be the event that the jolly green giant is strolling around town. Let $C$ be the event that the ground is shaking. Let $A \perp\!\!\!\perp B$ . As you see, either $A$ or $B$ can cause $C$ . I use "explain away" reasoning, if $C$ occurs, one of $P(A)$ or $P(B)$ increases, but the other decreases since I don't need alternative reasons to explain why $C$ occurred. However, my current intuition tells me that both $P(A)$ and $P(B)$ should increase if $C$ occurs since $C$ occurring makes it more likely that any of the causes for $C$ occurred. How do I reconcile my current intuition with the idea of explaining away? How do I use explaining away to justify that $A$ and $B$ are conditionally dependent on $C$ ?
Clarification and notation: "if C occurs, one of P(A) or P(B) increases, but the other decreases". This isn't correct. You have (implicitly and reasonably) assumed that A is (marginally) independent of B and also that A and B are the only causes of C. This implies that A and B are indeed dependent conditional on C , their joint effect. These facts are consistent because explaining away is about P(A | C), which is not the same distribution as P(A). The conditioning bar notation is important here. "However, my current intuition tells me that both P(A) and P(B) should increase if C occurs since C occurring makes it more likely that any of the causes for C occurred." You are having the 'inference from semi-controlled demolition' (see below for details). To begin with, you already believe that C indicates that either A or B happened so you can't get any more certain that either A or B happened when you see C. But how about A and B given C? Well, this is possible but less likely than either A and not B or B and not A. That is the 'explaining away' and what you want the intuition for. Intuition: Let's move to a continuous model so we can visualise things more easily and think about correlation as a particular form of non-independence. Assume that reading scores (A) and math scores (B) are independently distributed in the general population. Now assume that a school will admit (C) a student with a combined reading and math score over some threshold. (It doesn't matter what that threshold is as long as it's at least a bit selective). Here's a concrete example: Assume independent unit normally distributed reading and math scores and a sample of students, summarised below. When a student's reading and math score are together over the admission threshold (here 1.5) the student is shown as a red dot. Because good math scores offset bad reading scores and vice versa, the population of admitted students will be such that reading and math are now dependent and negatively correlated (-0.65 here). This is also true in the non-admitted population (-0.19 here). So, when you meet a randomly chosen student and you hear about her high math score then you should expect her to have gotten a lower reading score - the math score 'explains away' her admission. Of course she could also have a high reading score -- this certainly happens in the plot -- but it's less likely. And none of this affects our earlier assumption of no correlation, negative or positive, between math and reading scores in the general population. Intuition check: Moving back to a discrete example closer to your original one, consider the best (and perhaps only) cartoon about 'explaining away'. The government plot is A, the terrorist plot is B, and treat the general destruction as C, ignoring the fact there are two towers. If it is clear why the audience are being quite rational when they doubt the speaker's theory, then you understand 'explaining away'.
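The admissions example can be reproduced in a few lines of R (scores simulated as described above, admission threshold 1.5 on the sum; the sample size is arbitrary):

set.seed(1)
n       <- 10000
reading <- rnorm(n)
math    <- rnorm(n)                       # independent in the general population
admit   <- (reading + math) > 1.5         # selection on the combined score

cor(reading, math)                        # about 0 overall
cor(reading[admit], math[admit])          # clearly negative among the admitted
cor(reading[!admit], math[!admit])        # mildly negative among the non-admitted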
{ "source": [ "https://stats.stackexchange.com/questions/54849", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21361/" ] }
55,691
Regarding p-value s, I am wondering why $1$ % and $5$ % seem to be the gold standard for "statistical significance" . Why not other values, like $6$ % or $10$ %? Is there a fundamental mathematical reason for this, or is this just a widely held convention?
If you check the references below you'll find quite a bit of variation in the background, though there are some common elements. Those numbers are at least partly based on some comments from Fisher, where he said (while discussing a level of 1/20) It is convenient to take this point as a limit in judging whether a deviation is to be considered significant or not. Deviations exceeding twice the standard deviation are thus formally regarded as significant $\quad$ Fisher, R.A. (1925) Statistical Methods for Research Workers , p. 47 On the other hand, he was sometimes more broad: If one in twenty does not seem high enough odds, we may, if we prefer it, draw the line at one in fifty (the 2 per cent point), or one in a hundred (the 1 per cent point). Personally, the writer prefers to set a low standard of significance at the 5 per cent point, and ignore entirely all results which fail to reach this level. A scientific fact should be regarded as experimentally established only if a properly designed experiment rarely fails to give this level of significance. $\quad$ Fisher, R.A. (1926) The arrangement of field experiments . $\quad$ Journal of the Ministry of Agriculture, p. 504 Fisher also used 5% for one of his book's tables - but most of his other tables had a larger variety of significance levels Some of his comments have suggested more or less strict (i.e. lower or higher alpha levels) approaches in different situations. That sort of discussion above led to a tendency to produce tables focusing 5% and 1% significance levels (and sometimes with others, like 10%, 2% and 0.5%) for want of any other 'standard' values to use. However, in this paper , Cowles and Davis suggest that the use of 5% - or something close to it at least - goes back further than Fisher's comment. In short, our use of 5% (and to a lesser extent 1%) is pretty much arbitrary convention, though clearly a lot of people seem to feel that for many problems they're in the right kind of ballpark. There's no reason either particular value should be used in general. Further references: Dallal, Gerard E. (2012). The Little Handbook of Statistical Practice. - Why 0.05? Stigler, Stephen (December 2008). "Fisher and the 5% level". Chance 21 (4): 12. available here (Between them, you get a fair bit of background - it does look like between them there's a good case for thinking significance levels at least in the general ballpark of 5% - say between 2% and 10% - had been more or less in the air for a while.)
{ "source": [ "https://stats.stackexchange.com/questions/55691", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/20666/" ] }
55,704
I am looking to fit some data with central multivariate t distributions. I understand an ECME algorithm is the most promising way to do this. I wondered if anyone knew of any existing code for fitting t distributions with ECME that would get me started (in any language)?
{ "source": [ "https://stats.stackexchange.com/questions/55704", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5407/" ] }
55,718
I have a dataset for which I have multiple sets of binary labels. For each set of labels, I train a classifier, evaluating it by cross-validation. I want to reduce dimensionality using principal component analysis (PCA). My question is: Is it possible to do the PCA once for the whole dataset and then use the new dataset of lower dimensionality for cross-validation as described above? Or do I need to do a separate PCA for every training set (which would mean doing a separate PCA for every classifier and for every cross-validation fold)? On one hand, the PCA does not make any use of the labels. On the other hand, it does use the test data to do the transformation, so I am afraid it could bias the results. I should mention that in addition to saving me some work, doing the PCA once on the whole dataset would allow me to visualize the dataset for all label sets at once. If I have a different PCA for each label set, I would need to visualize each label set separately.
For measuring the generalization error, you need to do the latter: a separate PCA for every training set (which would mean doing a separate PCA for every classifier and for every CV fold). You then apply the same transformation to the test set: i.e. you do not do a separate PCA on the test set! You subtract the mean (and if needed divide by the standard deviation) of the training set, as explained here: Zero-centering the testing set after PCA on the training set . Then you project the data onto the PCs of the training set. You'll need to define an automatic criterion for the number of PCs to use. As it is just a first data reduction step before the "actual" classification, using a few too many PCs will likely not hurt the performance. If you have an expectation from experience of how many PCs would be good, you can maybe just use that. You can also test afterwards whether redoing the PCA for every surrogate model was necessary (repeating the analysis with only one PCA model). I think the result of this test is worth reporting. I once measured the bias of not repeating the PCA, and found that with my spectroscopic classification data, I detected only half of the generalization error rate when not redoing the PCA for every surrogate model. Also relevant: https://stats.stackexchange.com/a/240063/4598 That being said, you can build an additional PCA model of the whole data set for descriptive (e.g. visualization) purposes. Just make sure you keep the two approaches separate from each other. "I am still finding it difficult to get a feeling of how an initial PCA on the whole dataset would bias the results without seeing the class labels." But it does see the data. And if the between-class variance is large compared to the within-class variance, between-class variance will influence the PCA projection. Usually the PCA step is done because you need to stabilize the classification. That is, in a situation where additional cases do influence the model. If the between-class variance is small, this bias won't be much, but in that case neither would PCA help for the classification: the PCA projection then cannot help emphasizing the separation between the classes.
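A skeleton of how this looks in R for a single fold (everything here is illustrative: X is a stand-in predictor matrix, y stand-in labels, logistic regression a placeholder classifier, and the number of PCs is fixed in advance):

set.seed(1)
X <- matrix(rnorm(200 * 10), 200, 10)      # stand-in predictors
y <- factor(rbinom(200, 1, 0.5))           # stand-in labels
test_idx <- 1:40                           # one CV fold

pca  <- prcomp(X[-test_idx, ], center = TRUE, scale. = TRUE)  # PCA on the TRAINING part only
n_pc <- 3

train_df <- data.frame(y = y[-test_idx], pca$x[, 1:n_pc])
test_df  <- data.frame(predict(pca, newdata = X[test_idx, ])[, 1:n_pc])  # same centering & rotation

fit  <- glm(y ~ ., data = train_df, family = binomial)        # placeholder classifier
pred <- predict(fit, newdata = test_df, type = "response")    # evaluate against y[test_idx]
# repeat the whole block for every fold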
{ "source": [ "https://stats.stackexchange.com/questions/55718", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/14434/" ] }
55,887
I've read a few papers discussing pros and cons of each method, some arguing that GA doesn't give any improvement in finding the optimal solution while others show that it is more effective. It seems GA is generally preferred in literature (although mostly people modify it in some way to achieve results they need), then why majority of software solutions seem to use backpropagation only? Is there some general rule of thumb when to use one or another? Maybe it depends on type of NN or there exists some state of the art solution which generally outperforms others? If possible I'm looking for general answers: i.e., "if the NN is huge, GA is better", or "GA is always better but has computational performance issues" etc...
If you look carefully at the scientific literature you'll find contrasting results. Obviously, in some cases GA (and more generally, Evolutionary Algorithms) may help you to find an optimal NN design, but normally they have so many drawbacks (tuning of the algorithm's parameters, computational complexity, etc.) that their use is not feasible for real-world applications. Of course you can find a set of problems where GA/EAs are always better than backpropagation. Given that finding an optimal NN design is a complex multimodal optimization problem, GA/EAs may help (as metaheuristics) to improve the results obtained with "traditional" algorithms, e.g. using GA/EAs to find only the initial weight configuration or helping traditional algorithms to escape from local minima (if you are interested I wrote a paper about this topic). I have worked a lot in this field and I can tell you that there are many scientific works on GA/EAs applied to NNs because they are (or rather, they used to be) an emerging research field.
{ "source": [ "https://stats.stackexchange.com/questions/55887", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/14741/" ] }
55,962
Is there a deep difference between a Normal and a Gaussian distribution, I've seen many papers using them without distinction, and I usually also refer to them as the same thing. However, my PI recently told me that a normal is the specific case of the Gaussian with mean=0 and std=1, which I also heard some time ago in another outlet, what is the consensus on this? According to Wikipedia, what they call the normal, is the standard normal distribution, while the Normal is a synonym for the Gaussian, but then again, I'm not sure about Wikipedia either. Thanks
Wikipedia is right. The Gaussian is the same as the normal. Wikipedia can usually be trusted on this sort of question.
{ "source": [ "https://stats.stackexchange.com/questions/55962", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6327/" ] }
55,999
Suppose I have 2 sets: Set A : number of items $n= 10$, $\mu = 2.4$ , $\sigma = 0.8$ Set B : number of items $n= 5$, $\mu = 2$, $\sigma = 1.2$ I can find the combined mean ($\mu$) easily, but how am I supposed to find the combined standard deviation?
So, if you just want to have two of these samples brought together into one you have: $s_1 = \sqrt{\frac{1}{n_1}\Sigma_{i = 1}^{n_1} (x_i - \bar{y}_1)^2}$ $s_2 = \sqrt{\frac{1}{n_2}\Sigma_{i = 1}^{n_2} (y_i - \bar{y}_2)^2}$ where $\bar{y}_1$ and $\bar{y}_2$ are sample means and $s_1$ and $s_2$ are sample standard deviations. To add them up you have: $s = \sqrt{\frac{1}{n_1 + n_2}\Sigma_{i = 1}^{n_1 + n_2} (z_i - \bar{y})^2}$ which is not that straightforward since the new mean $\bar{y}$ is different from $\bar{y}_1$ and $\bar{y}_2$: $\bar{y} = \frac{1}{n_1 + n_2}\Sigma_{i = 1}^{n_1 + n_2} z_i = \frac{n_1 \bar{y}_1 + n_2 \bar{y}_2}{n_1 + n_2}$ The final formula is: $s = \sqrt{\frac{n_1 s_1^2 + n_2 s_2^2+ n_1(\bar{y}_1-\bar{y})^2 +n_2(\bar{y}_2-\bar{y})^2}{n_1 + n_2 }}$ For the commonly-used Bessel-corrected ("$n-1$-denominator") version of standard deviation, the results for the means are as before, but $s = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2 + n_1(\bar{y}_1-\bar{y})^2 +n_2(\bar{y}_2-\bar{y})^2}{n_1+n_2 - 1} }$ You can read more info here: http://en.wikipedia.org/wiki/Standard_deviation
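A small R function implementing the Bessel-corrected ($n-1$ denominator) version of the last formula, checked against simply pooling the raw observations (the two toy samples below roughly mimic the sets in the question):

combined_sd <- function(n1, m1, s1, n2, m2, s2) {
  m <- (n1 * m1 + n2 * m2) / (n1 + n2)                       # combined mean
  sqrt(((n1 - 1) * s1^2 + (n2 - 1) * s2^2 +
        n1 * (m1 - m)^2 + n2 * (m2 - m)^2) / (n1 + n2 - 1))
}

set.seed(1)
x <- rnorm(10, 2.4, 0.8)
y <- rnorm(5,  2.0, 1.2)

combined_sd(length(x), mean(x), sd(x), length(y), mean(y), sd(y))
sd(c(x, y))   # identical, since sd() also uses the n - 1 denominator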
{ "source": [ "https://stats.stackexchange.com/questions/55999", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24311/" ] }
56,066
I understand that the Wald test for regression coefficients is based on the following property that holds asymptotically (e.g. Wasserman (2006): All of Statistics , pages 153, 214-215): $$ \frac{(\hat{\beta}-\beta_{0})}{\widehat{\operatorname{se}}(\hat{\beta})}\sim \mathcal{N}(0,1) $$ Where $\hat{\beta}$ denotes the estimated regression coefficient, $\widehat{\operatorname{se}}(\hat{\beta})$ denotes the standard error of the regression coefficient and $\beta_{0}$ is the value of interest ($\beta_{0}$ is usually 0 to test whether the coefficient is significantly different from 0). So the size $\alpha$ Wald test is: reject $H_{0}$ when $|W|> z_{\alpha/2}$ where $$ W=\frac{\hat{\beta}}{\widehat{\operatorname{se}}(\hat{\beta})}. $$ But when you perform a linear regression with lm in R, a $t$-value instead of a $z$-value is used to test if the regression coefficients differ significantly from 0 (with summary.lm ). Moreover, the output of glm in R sometimes gives $z$- and sometimes $t$-values as test statistics. Apparently, $z$-values are used when the dispersion parameter is assumed to be known and $t$-values are used when the dispersion parameter is esimated (see this link ). Could someone explain, why a $t$-distribution is sometimes used for a Wald test even though the ratio of the coefficient and its standard error is assumed to be distributed as standard normal? Edit after the question was answered This post also provides useful information to the question.
The output from glm using a Poisson distribution gives a $z$-value because with a Poisson distribution, the mean and variance parameter are the same. In the Poisson model, you only have to estimate a single parameter ($\lambda$). In a glm where you have to estimate both a mean and dispersion parameter, you should see the $t$-distribution used. For a standard linear regression, you assume the error term is normally distributed. Here, the variance parameter has to be estimated - hence the use of the $t$-distribution for the test statistic. If you somehow knew the population variance for the error term, you could use a $z$-test statistic instead. As you mention in your post, the distribution of the test is asymptotically normal. The $t$-distribution is asymptotically normal, so in a large sample, the difference would be negligible.
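You can see this directly in R with simulated data (toy variables, invented coefficients): models where the dispersion is estimated report "t value" columns, while a Poisson GLM with its dispersion fixed at 1 reports "z value" columns.

set.seed(1)
x <- rnorm(100)
y_gauss <- 1 + 2 * x + rnorm(100)
y_pois  <- rpois(100, exp(0.5 + 0.3 * x))

summary(lm(y_gauss ~ x))                          # "t value": sigma^2 is estimated
summary(glm(y_gauss ~ x, family = gaussian))      # also "t value": dispersion estimated
summary(glm(y_pois ~ x, family = poisson))        # "z value": dispersion fixed at 1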
{ "source": [ "https://stats.stackexchange.com/questions/56066", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21054/" ] }
56,150
I am trying to understand when to use a random effect and when it is unnecessary. Ive been told a rule of thumb is if you have 4 or more groups/individuals which I do (15 individual moose). Some of those moose were experimented on 2 or 3 times for a total of 29 trials. I want to know if they behave differently when they are in higher risk landscapes than not. So, I thought I would set the individual as a random effect. However, I am now being told that there is no need to include the individual as a random effect because there is not a lot of variation in their response. What I can't figure out is how to test if there really is something being accounted for when setting individual as a random effect. Maybe an initial question is: What test/diagnostic can I do to figure out if Individual is a good explanatory variable and should it be a fixed effect - qq plots? histograms? scatter plots? And what would I look for in those patterns. I ran the model with the individual as a random effect and without, but then I read Ben Bolker's GLMM FAQ where they state: do not compare lmer models with the corresponding lm fits, or glmer/glm; the log-likelihoods are not commensurate (i.e., they include different additive terms) And here I assume this means you can't compare between a model with random effect or without. But I wouldn't really know what I should compare between them anyway. In my model with the random effect I also was trying to look at the output to see what kind of evidence or significance the RE has: lmer(Velocity ~ D.CPC.min + FD.CPC + (1|ID), REML=FALSE, family=gaussian, data=tv) Linear mixed model fit by maximum likelihood Formula: Velocity ~ D.CPC.min + FD.CPC + (1 | ID) Data: tv AIC BIC logLik deviance REMLdev -13.92 -7.087 11.96 -23.92 15.39 Random effects: Groups Name Variance Std.Dev. ID (Intercept) 0.00000 0.00000 Residual 0.02566 0.16019 Number of obs: 29, groups: ID, 15 Fixed effects: Estimate Std. Error t value (Intercept) 3.287e-01 5.070e-02 6.483 D.CPC.min -1.539e-03 3.546e-04 -4.341 FD.CPC 1.153e-04 1.789e-05 6.446 Correlation of Fixed Effects: (Intr) D.CPC. D.CPC.min -0.010 FD.CPC -0.724 -0.437 You see that my variance and SD from the individual ID as the random effect equals 0. How is that possible? What does 0 mean? Is that right? Then my friend who said "since there is no variation, using ID as random effect is unnecessary" is correct? So, then would I use it as a fixed effect? But wouldn't the fact that there is so little variation suggest it isn't going to tell us much anyway?
The estimated variance for the ID random effect being 0 indicates that the level of between-group variability is not sufficient to warrant incorporating random effects in the model; i.e., your model is degenerate. As you correctly identify yourself: most probably, yes; ID as a random effect is unnecessary. A few things spring to mind to test this assumption: You could compare (using REML = F always) the AIC (or your favourite IC in general) between a model with and without random effects and see how this goes. You would look at the anova() output of the two models. You could do a parametric bootstrap using the posterior distribution defined by your original model. Mind you, choices 1 & 2 have an issue: you are checking for something that is on the boundary of the parameter space, so actually they are not technically sound. Having said that, I don't think you'll get wrong insights from them and a lot of people use them (e.g., Douglas Bates, one of lme4 's developers, uses them in his book but clearly states this caveat about parameter values being tested on the boundary of the set of possible values). Choice 3 is the most tedious but actually gives you the best idea about what is really going on; a sketch of it is given below. Some people are tempted to use a non-parametric bootstrap also, but I think that given the fact you are making parametric assumptions to start with you might as well use them.
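A rough sketch of choice 3 in R, a parametric bootstrap of the likelihood-ratio statistic for the random intercept (the model formulas follow the question; the use of current lme4 syntax and the number of simulations are my own choices, and this is a sketch rather than a polished procedure):

library(lme4)
m0 <- lm(Velocity ~ D.CPC.min + FD.CPC, data = tv)                   # null: no random effect
m1 <- lmer(Velocity ~ D.CPC.min + FD.CPC + (1 | ID), data = tv, REML = FALSE)
lrt_obs <- as.numeric(2 * (logLik(m1) - logLik(m0)))                 # observed LR statistic

nsim <- 999
lrt_sim <- replicate(nsim, {
  d <- tv
  d$Velocity <- simulate(m0)[[1]]                                    # simulate under the null
  m0_s <- lm(Velocity ~ D.CPC.min + FD.CPC, data = d)
  m1_s <- lmer(Velocity ~ D.CPC.min + FD.CPC + (1 | ID), data = d, REML = FALSE)
  as.numeric(2 * (logLik(m1_s) - logLik(m0_s)))
})

# any constant offset between lm and lmer log-likelihoods affects the observed and
# simulated statistics equally, so the comparison below is still meaningful
mean(lrt_sim >= lrt_obs)   # bootstrap p-value; a large value says the random effect adds nothing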
{ "source": [ "https://stats.stackexchange.com/questions/56150", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/14125/" ] }
56,302
Suppose I have some dataset. I perform some regression on it. I have a separate test dataset. I test the regression on this set. Find the RMSE on the test data. How should I conclude that my learning algorithm has done well, I mean what properties of the data I should look at to conclude that the RMSE I have got is good for the data?
I think you have two different types of questions there. One thing is what you ask in the title: "What are good RMSE values?" and another thing is how to compare models with different datasets using RMSE. For the first, i.e., the question in the title, it is important to recall that RMSE has the same unit as the dependent variable (DV). It means that there is no absolute good or bad threshold; however, you can define one based on your DV. For a variable that ranges from 0 to 1000, an RMSE of 0.7 is small, but if the range goes from 0 to 1, it is not that small anymore. However, although the smaller the RMSE, the better, you can make theoretical claims on levels of the RMSE by knowing what is expected from your DV in your field of research. Keep in mind that you can always normalize the RMSE. For the second question, i.e., about comparing two models with different datasets by using RMSE, you may do that provided that the DV is the same in both models. Here, the smaller the better, but remember that small differences between those RMSEs may not be relevant or even significant.
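For reference, computing RMSE and a range-normalized RMSE in R takes one line each (obs and pred below are toy placeholders for your test-set values and predictions):

set.seed(1)
obs  <- runif(50, 0, 1000)              # toy test-set values
pred <- obs + rnorm(50, sd = 20)        # toy predictions

rmse  <- sqrt(mean((obs - pred)^2))
nrmse <- rmse / (max(obs) - min(obs))   # unitless; dividing by sd(obs) or mean(obs) is also common
c(rmse = rmse, nrmse = nrmse)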
{ "source": [ "https://stats.stackexchange.com/questions/56302", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11469/" ] }
56,500
I know that k-means is unsupervised and is used for clustering etc and that k-NN is supervised. But I wanted to know concrete differences between the two?
These are completely different methods. The fact that they both have the letter K in their name is a coincidence. K-means is a clustering algorithm that tries to partition a set of points into K sets (clusters) such that the points in each cluster tend to be near each other. It is unsupervised because the points have no external classification. K-nearest neighbors is a classification (or regression) algorithm that in order to determine the classification of a point, combines the classification of the K nearest points. It is supervised because you are trying to classify a point based on the known classification of other points.
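The difference is clear from how the two are called in R (iris is used purely as a stand-in dataset):

set.seed(1)
X  <- iris[, 1:4]; cl <- iris$Species

# k-means: unsupervised, no labels used -- it just partitions X into K clusters
km <- kmeans(scale(X), centers = 3)
table(km$cluster, cl)                   # compare clusters with the (unused) labels

# k-NN: supervised, needs labelled training data to classify new points
library(class)
train_idx <- sample(nrow(X), 100)
pred <- knn(train = X[train_idx, ], test = X[-train_idx, ],
            cl = cl[train_idx], k = 5)
table(pred, cl[-train_idx])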
{ "source": [ "https://stats.stackexchange.com/questions/56500", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17503/" ] }
56,695
I understand that we use random effects (or mixed effects) models when we believe that some model parameter(s) vary randomly across some grouping factor. I have a desire to fit a model where the response has been normalized and centered (not perfectly, but pretty close) across a grouping factor, but an independent variable x has not been adjusted in any way. This led me to the following test (using fabricated data) to ensure that I'd find the effect I was looking for if it was indeed there. I ran one mixed effects model with a random intercept (across groups defined by f ) and a second fixed effect model with the factor f as a fixed effect predictor. I used the R package lmer for the mixed effect model, and the base function lm() for the fixed effect model. Following is the data and the results. Notice that y , regardless of group, varies around 0. And that x varies consistently with y within group, but varies much more across groups than y > data y x f 1 -0.5 2 1 2 0.0 3 1 3 0.5 4 1 4 -0.6 -4 2 5 0.0 -3 2 6 0.6 -2 2 7 -0.2 13 3 8 0.1 14 3 9 0.4 15 3 10 -0.5 -15 4 11 -0.1 -14 4 12 0.4 -13 4 If you're interested in working with the data, here is dput() output: data<-structure(list(y = c(-0.5, 0, 0.5, -0.6, 0, 0.6, -0.2, 0.1, 0.4, -0.5, -0.1, 0.4), x = c(2, 3, 4, -4, -3, -2, 13, 14, 15, -15, -14, -13), f = structure(c(1L, 1L, 1L, 2L, 2L, 2L, 3L, 3L, 3L, 4L, 4L, 4L), .Label = c("1", "2", "3", "4"), class = "factor")), .Names = c("y","x","f"), row.names = c(NA, -12L), class = "data.frame") Fitting the mixed effects model: > summary(lmer(y~ x + (1|f),data=data)) Linear mixed model fit by REML Formula: y ~ x + (1 | f) Data: data AIC BIC logLik deviance REMLdev 28.59 30.53 -10.3 11 20.59 Random effects: Groups Name Variance Std.Dev. f (Intercept) 0.00000 0.00000 Residual 0.17567 0.41913 Number of obs: 12, groups: f, 4 Fixed effects: Estimate Std. Error t value (Intercept) 0.008333 0.120992 0.069 x 0.008643 0.011912 0.726 Correlation of Fixed Effects: (Intr) x 0.000 I note that the intercept variance component is estimated 0, and importantly to me, x is not a significant predictor of y . Next I fit the fixed effect model with f as a predictor instead of a grouping factor for a random intercept: > summary(lm(y~ x + f,data=data)) Call: lm(formula = y ~ x + f, data = data) Residuals: Min 1Q Median 3Q Max -0.16250 -0.03438 0.00000 0.03125 0.16250 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -1.38750 0.14099 -9.841 2.38e-05 *** x 0.46250 0.04128 11.205 1.01e-05 *** f2 2.77500 0.26538 10.457 1.59e-05 *** f3 -4.98750 0.46396 -10.750 1.33e-05 *** f4 7.79583 0.70817 11.008 1.13e-05 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.1168 on 7 degrees of freedom Multiple R-squared: 0.9484, Adjusted R-squared: 0.9189 F-statistic: 32.16 on 4 and 7 DF, p-value: 0.0001348 Now I notice that, as expected, x is a significant predictor of y . What I am looking for is intuition regarding this difference. In what way is my thinking wrong here? Why do I incorrectly expect to find a significant parameter for x in both of these models but only actually see it in the fixed effect model?
There are several things going on here. These are interesting issues, but it will take a fair amount of time/space to explain it all. First of all, this all becomes a lot easier to understand if we plot the data . Here is a scatter plot where the data points are colored by group. Additionally, we have a separate group-specific regression line for each group, as well as a simple regression line (ignoring groups) in dashed bold: plot(y ~ x, data=dat, col=f, pch=19) abline(coef(lm(y ~ x, data=dat)), lwd=3, lty=2) by(dat, dat$f, function(i) abline(coef(lm(y ~ x, data=i)), col=i$f)) The fixed-effect model What the fixed-effect model is going to do with these data is fairly straightforward. The effect of $x$ is estimated "controlling for" groups. In other words, $x$ is first orthogonalized with respect to the group dummies, and then the slope of this new, orthogonalized $x$ is what is estimated. In this case, this orthogonalization is going to remove a lot of the variance in $x$ (specifically, the between-cluster variability in $x$), because the group dummies are highly correlated with $x$. (To recognize this intuitively, think about what would happen if we regressed $x$ on just the set of group dummies, leaving $y$ out of the equation. Judging from the plot above, it certainly seems that we would expect to have some high $t$-statistics on each of the dummy coefficients in this regression!) So basically what this ends up meaning for us is that only the within-cluster variability in $x$ is used to estimate the effect of $x$. The between-cluster variability in $x$ (which, as we can see above, is substantial), is "controlled out" of the analysis. So the slope that we get from lm() is the average of the 4 within-cluster regression lines, all of which are relatively steep in this case. The mixed model What the mixed model does is slightly more complicated. The mixed model attempts to use both within-cluster and between-cluster variability on $x$ to estimate the effect of $x$. Incidentally this is really one of the selling points of the model, as its ability/willingness to incorporate this additional information means it can often yield more efficient estimates. But unfortunately, things can get tricky when the between-cluster effect of $x$ and the average within-cluster effect of $x$ do not really agree, as is the case here. Note: this situation is what the "Hausman test" for panel data attempts to diagnose! Specifically, what the mixed model will attempt to do here is to estimate some sort of compromise between the average within-cluster slope of $x$ and the simple regression line that ignores the clusters (the dashed bold line). The exact point within this compromising range that mixed model settles on depends on the ratio of the random intercept variance to the total variance (also known as the intra-class correlation). As this ratio approaches 0, the mixed model estimate approaches the estimate of the simple regression line. As the ratio approaches 1, the mixed model estimate approaches the average within-cluster slope estimate. Here are the coefficients for the simple regression model (the dashed bold line in the plot): > lm(y ~ x, data=dat) Call: lm(formula = y ~ x, data = dat) Coefficients: (Intercept) x 0.008333 0.008643 As you can see, the coefficients here are identical to what we obtained in the mixed model. 
This is exactly what we expected to find, since as you already noted, we have an estimate of 0 variance for the random intercepts, making the previously mentioned ratio/intra-class correlation 0. So the mixed model estimates in this case are just the simple linear regression estimates, and as we can see in the plot, the slope here is far less pronounced than the within-cluster slopes. This brings us to one final conceptual issue... Why is the variance of the random intercepts estimated to be 0? The answer to this question has the potential to become a little technical and difficult, but I'll try to keep it as simple and nontechnical as I can (for both our sakes!). But it will maybe still be a little long-winded. I mentioned earlier the notion of intra-class correlation. This is another way of thinking about the dependence in $y$ (or, more correctly, the errors of the model) induced by the clustering structure. The intra-class correlation tells us how similar on average are two errors drawn from the same cluster, relative to the average similarity of two errors drawn from anywhere in the dataset (i.e., may or may not be in the same cluster). A positive intra-class correlation tells us that errors from the same cluster tend to be relatively more similar to each other; if I draw one error from a cluster and it has a high value, then I can expect above chance that the next error I draw from the same cluster will also have a high value. Although somewhat less common, intra-class correlations can also be negative; two errors drawn from the same cluster are less similar (i.e., further apart in value) than would typically be expected across the dataset as a whole. All of this intra-class correlation business is just a useful alternative way of describing the dependence in the data. The mixed model we are considering is not using the intra-class correlation method of representing the dependence in the data. Instead it describes the dependence in terms of variance components . This is all fine as long as the intra-class correlation is positive. In those cases, the intra-class correlation can be easily written in terms of variance components, specifically as the previously mentioned ratio of the random intercept variance to the total variance. (See the wiki page on intra-class correlation for more info on this.) But unfortunately variance-components models have a difficult time dealing with situations where we have a negative intra-class correlation. After all, writing the intra-class correlation in terms of the variance components involves writing it as a proportion of variance, and proportions cannot be negative. Judging from the plot, it looks like the intra-class correlation in these data would be slightly negative. (What I am looking at in drawing this conclusion is the fact that there is a lot of variance in $y$ within each cluster, but relatively little variance in the cluster means on $y$, so two errors drawn from the same cluster will tend to have a difference that nearly spans the range of $y$, whereas errors drawn from different clusters will tend to have a more moderate difference.) So your mixed model is doing what, in practice, mixed models often do in this case: it gives estimates that are as consistent with a negative intra-class correlation as it can muster, but it stops at the lower bound of 0 (this constraint is usually programmed into the model fitting algorithm). 
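As a quick check on the sign of the intra-class correlation, rather than just eyeballing the plot, one could compute the classical one-way ANOVA estimator, which, unlike the variance-components parameterization, is allowed to go negative (a sketch; the groups here all have 3 observations):

tab <- anova(lm(y ~ f, data = dat))
MSB <- tab["f", "Mean Sq"]           # between-cluster mean square
MSW <- tab["Residuals", "Mean Sq"]   # within-cluster mean square
(MSB - MSW) / (MSB + (3 - 1) * MSW)  # ICC estimate; clearly negative for these data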
So we end up with an estimated random intercept variance of 0, which is still not a very good estimate, but it's as close as we can get with this variance-components type of model. So what can we do? One option is to just go with the fixed-effects model. This would be reasonable here because these data have two separate features that are tricky for mixed models (random group effects correlated with $x$, and negative intra-class correlation). Another option is to use a mixed model, but set it up in such a way that we separately estimate the between- and within-cluster slopes of $x$ rather than awkwardly attempting to pool them together. At the bottom of this answer I reference two papers that talk about this strategy; I follow the approach advocated in the first paper by Bell & Jones. To do this, we take our $x$ predictor and split it into two predictors, $x_b$ which will contain only between-cluster variation in $x$, and $x_w$ which will contain only within-cluster variation in $x$. Here's what this looks like: > dat <- within(dat, x_b <- tapply(x, f, mean)[paste(f)]) > dat <- within(dat, x_w <- x - x_b) > dat y x f x_b x_w 1 -0.5 2 1 3 -1 2 0.0 3 1 3 0 3 0.5 4 1 3 1 4 -0.6 -4 2 -3 -1 5 0.0 -3 2 -3 0 6 0.6 -2 2 -3 1 7 -0.2 13 3 14 -1 8 0.1 14 3 14 0 9 0.4 15 3 14 1 10 -0.5 -15 4 -14 -1 11 -0.1 -14 4 -14 0 12 0.4 -13 4 -14 1 > > mod <- lmer(y ~ x_b + x_w + (1|f), data=dat) > mod Linear mixed model fit by REML Formula: y ~ x_b + x_w + (1 | f) Data: dat AIC BIC logLik deviance REMLdev 6.547 8.972 1.726 -23.63 -3.453 Random effects: Groups Name Variance Std.Dev. f (Intercept) 0.000000 0.00000 Residual 0.010898 0.10439 Number of obs: 12, groups: f, 4 Fixed effects: Estimate Std. Error t value (Intercept) 0.008333 0.030135 0.277 x_b 0.005691 0.002977 1.912 x_w 0.462500 0.036908 12.531 Correlation of Fixed Effects: (Intr) x_b x_b 0.000 x_w 0.000 0.000 A few things to notice here. First, the coefficient for $x_w$ is exactly the same as what we got in the fixed-effect model. So far so good. Second, the coefficient for $x_b$ is the slope of the regression we would get from regression $y$ on just a vector of the cluster means of $x$. As such it is not quite equivalent to the bold dashed line in our first plot, which used the total variance in $x$, but it is close. Third, although the coefficient for $x_b$ is smaller than the coefficient from the simple regression model, the standard error is also substantially smaller and hence the $t$-statistic is larger. This also is unsurprising because the residual variance is far smaller in this mixed model due to the random group effects eating up a lot of the variance that the simple regression model had to deal with. Finally, we still have an estimate of 0 for the variance of the random intercepts, for the reasons I elaborated in the previous section. I'm not really sure what all we can do about that one at least without switching to some software other than lmer() , and I'm also not sure to what extent this is still going to be adversely affecting our estimates in this final mixed model. Maybe another user can chime in with some thoughts about this issue. References Bell, A., & Jones, K. (2014). Explaining fixed effects: Random effects modelling of time-series cross-sectional and panel data. Political Science Research and Methods. PDF Bafumi, J., & Gelman, A. E. (2006). Fitting multilevel models when predictors and group effects correlate. PDF
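One possible follow-up to the "other software" point above (a suggestion of mine, not something tested in this answer): a marginal model with a compound-symmetry correlation structure, for example nlme::gls(), parameterizes the within-group dependence as a correlation rather than a variance, so it can estimate a negative value instead of being stuck at 0.

library(nlme)
fit_cs <- gls(y ~ x_b + x_w, data = dat,
              correlation = corCompSymm(form = ~ 1 | f))
summary(fit_cs)   # check the estimated "Rho" of the correlation structure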
{ "source": [ "https://stats.stackexchange.com/questions/56695", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24000/" ] }
56,832
I guess the answer should be yes, that is, a covariance matrix should always be positive (semi-)definite, but I still feel something is not right. There should be some general results on this in the literature; could anyone help me?
No. Consider three variables, $X$, $Y$ and $Z = X+Y$. Their covariance matrix, $M$, is not positive definite, since there's a vector $z$ ($= (1, 1, -1)'$) for which $z'Mz$ is not positive. Population covariance matrices are positive semi-definite. (See property 2 here .) The same should generally apply to covariance matrices of complete samples (no missing values), since they can also be seen as a form of discrete population covariance. However due to inexactness of floating point numerical computations, even algebraically positive definite cases might occasionally be computed to not be even positive semi-definite; good choice of algorithms can help with this. More generally, sample covariance matrices - depending on how they deal with missing values in some variables - may or may not be positive semi-definite, even in theory. If pairwise deletion is used, for example, then there's no guarantee of positive semi-definiteness. Further, accumulated numerical error can cause sample covariance matrices that should be notionally positive semi-definite to fail to be. Like so: x <- rnorm(30) y <- rnorm(30) - x/10 # it doesn't matter for this if x and y are correlated or not z <- x+y M <- cov(data.frame(x=x,y=y,z=z)) z <- rbind(1,1,-1) t(z)%*%M%*%z [,1] [1,] -1.110223e-16 This happened on the first example I tried (I probably should supply a seed but it's not so rare that you should have to try a lot of examples before you get one). The result came out negative , even though it should be algebraically zero. A different set of numbers might yield a positive number or an "exact" zero. -- Example of moderate missingness leading to loss of positive semidefiniteness via pairwise deletion: z <- x + y + rnorm(30)/50 # same x and y as before. xyz1 <- data.frame(x=x,y=y,z=z) # high correlation but definitely of full rank xyz1$x[sample(1:30,5)] <- NA # make 5 x's missing xyz1$y[sample(1:30,5)] <- NA # make 5 y's missing xyz1$z[sample(1:30,5)] <- NA # make 5 z's missing cov(xyz1,use="pairwise") # the individual pairwise covars are fine ... x y z x 1.2107760 -0.2552947 1.255868 y -0.2552947 1.2728156 1.037446 z 1.2558683 1.0374456 2.367978 chol(cov(xyz1,use="pairwise")) # ... but leave the matrix not positive semi-definite Error in chol.default(cov(xyz1, use = "pairwise")) : the leading minor of order 3 is not positive definite chol(cov(xyz1,use="complete")) # but deleting even more rows leaves it PSD x y z x 0.8760209 -0.2253484 0.64303448 y 0.0000000 1.1088741 1.11270078 z 0.0000000 0.0000000 0.01345364
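If you are stuck with a pairwise-deletion matrix like the one above and need something positive (semi-)definite, a common workaround (my addition, not part of the original answer) is to project it onto the nearest positive definite matrix, for example with Matrix::nearPD():

library(Matrix)
C <- cov(xyz1, use = "pairwise")     # the matrix that failed chol() above
C_fixed <- as.matrix(nearPD(C)$mat)  # nearest positive definite matrix
chol(C_fixed)                        # the Cholesky factorization now succeeds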
{ "source": [ "https://stats.stackexchange.com/questions/56832", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22100/" ] }
56,881
I was wondering if there is a relationship between $R^2$ and an F-test. Usually $$R^2=\frac {\sum_t (\hat Y_t - \bar Y)^2 / (T-1)} {\sum_t ( Y_t - \bar Y)^2 / (T-1)}$$ and it measures the strength of the linear relationship in the regression. An F-test, on the other hand, tests a hypothesis. Is there a relationship between $R^2$ and an F-test?
If all the assumptions hold and you have the correct form for $R^2$, then the usual overall F statistic can be computed as $F = \frac{ R^2 }{ 1- R^2} \times \frac{ \text{df}_2 }{ \text{df}_1 }$, where $\text{df}_1$ is the number of predictors (the numerator degrees of freedom) and $\text{df}_2$ is the residual degrees of freedom. This value can then be compared to the appropriate $F_{\text{df}_1,\,\text{df}_2}$ distribution to do an F test. This can be derived/confirmed with basic algebra.
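It is also easy to confirm numerically; here is a quick sketch on a built-in dataset (mtcars is my choice of example, not part of the original answer):

fit <- lm(mpg ~ wt + hp, data = mtcars)
s   <- summary(fit)
r2  <- s$r.squared
df1 <- s$fstatistic["numdf"]        # number of predictors
df2 <- s$fstatistic["dendf"]        # residual degrees of freedom
unname(r2 / (1 - r2) * df2 / df1)   # matches ...
unname(s$fstatistic["value"])       # ... the F statistic reported by summary()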
{ "source": [ "https://stats.stackexchange.com/questions/56881", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13561/" ] }
56,895
If I run a randomForest model, I can then make predictions based on the model. Is there a way to get a prediction interval of each of the predictions such that I know how "sure" the model is of its answer. If this is possible is it simply based on the variability of the dependent variable for the whole model or will it have wider and narrower intervals depending on the particular decision tree that was followed for a particular prediction?
This is partly a response to @Sashikanth Dareddy (since it will not fit in a comment) and partly a response to the original post. Remember what a prediction interval is, it is an interval or set of values where we predict that future observations will lie. Generally the prediction interval has 2 main pieces that determine its width, a piece representing the uncertainty about the predicted mean (or other parameter) this is the confidence interval part, and a piece representing the variability of the individual observations around that mean. The confidence interval is fairy robust due to the Central Limit Theorem and in the case of a random forest, the bootstrapping helps as well. But the prediction interval is completely dependent on the assumptions about how the data is distributed given the predictor variables, CLT and bootstrapping have no effect on that part. The prediction interval should be wider where the corresponding confidence interval would also be wider. Other things that would affect the width of the prediction interval are assumptions about equal variance or not, this has to come from the knowledge of the researcher, not the random forest model. A prediction interval does not make sense for a categorical outcome (you could do a prediction set rather than an interval, but most of the time it would probably not be very informative). We can see some of the issues around prediction intervals by simulating data where we know the exact truth. Consider the following data: set.seed(1) x1 <- rep(0:1, each=500) x2 <- rep(0:1, each=250, length=1000) y <- 10 + 5*x1 + 10*x2 - 3*x1*x2 + rnorm(1000) This particular data follows the assumptions for a linear regression and is fairly straight forward for a random forest fit. We know from the "true" model that when both predictors are 0 that the mean is 10, we also know that the individual points follow a normal distribution with standard deviation of 1. This means that the 95% prediction interval based on perfect knowledge for these points would be from 8 to 12 (well actually 8.04 to 11.96, but rounding keeps it simpler). Any estimated prediction interval should be wider than this (not having perfect information adds width to compensate) and include this range. Let's look at the intervals from regression: fit1 <- lm(y ~ x1 * x2) newdat <- expand.grid(x1=0:1, x2=0:1) (pred.lm.ci <- predict(fit1, newdat, interval='confidence')) # fit lwr upr # 1 10.02217 9.893664 10.15067 # 2 14.90927 14.780765 15.03778 # 3 20.02312 19.894613 20.15162 # 4 21.99885 21.870343 22.12735 (pred.lm.pi <- predict(fit1, newdat, interval='prediction')) # fit lwr upr # 1 10.02217 7.98626 12.05808 # 2 14.90927 12.87336 16.94518 # 3 20.02312 17.98721 22.05903 # 4 21.99885 19.96294 24.03476 We can see there is some uncertainty in the estimated means (confidence interval) and that gives us a prediction interval that is wider (but includes) the 8 to 12 range. 
Now let's look at the interval based on the individual predictions of individual trees (we should expect these to be wider since the random forest does not benefit from the assumptions (which we know to be true for this data) that the linear regression does): library(randomForest) fit2 <- randomForest(y ~ x1 + x2, ntree=1001) pred.rf <- predict(fit2, newdat, predict.all=TRUE) pred.rf.int <- apply(pred.rf$individual, 1, function(x) { c(mean(x) + c(-1, 1) * sd(x), quantile(x, c(0.025, 0.975))) }) t(pred.rf.int) # 2.5% 97.5% # 1 9.785533 13.88629 9.920507 15.28662 # 2 13.017484 17.22297 12.330821 18.65796 # 3 16.764298 21.40525 14.749296 21.09071 # 4 19.494116 22.33632 18.245580 22.09904 The intervals are wider than the regression prediction intervals, but they don't cover the entire range. They do include the true values and therefore may be legitimate as confidence intervals, but they are only predicting where the mean (predicted value) is, no the added piece for the distribution around that mean. For the first case where x1 and x2 are both 0 the intervals don't go below 9.7, this is very different from the true prediction interval that goes down to 8. If we generate new data points then there will be several points (much more than 5%) that are in the true and regression intervals, but don't fall in the random forest intervals. To generate a prediction interval you will need to make some strong assumptions about the distribution of the individual points around the predicted means, then you could take the predictions from the individual trees (the bootstrapped confidence interval piece) then generate a random value from the assumed distribution with that center. The quantiles for those generated pieces may form the prediction interval (but I would still test it, you may need to repeat the process several more times and combine). Here is an example of doing this by adding normal (since we know the original data used a normal) deviations to the predictions with the standard deviation based on the estimated MSE from that tree: pred.rf.int2 <- sapply(1:4, function(i) { tmp <- pred.rf $individual[i, ] + rnorm(1001, 0, sqrt(fit2$ mse)) quantile(tmp, c(0.025, 0.975)) }) t(pred.rf.int2) # 2.5% 97.5% # [1,] 7.351609 17.31065 # [2,] 10.386273 20.23700 # [3,] 13.004428 23.55154 # [4,] 16.344504 24.35970 These intervals contain those based on perfect knowledge, so look reasonable. But, they will depend greatly on the assumptions made (the assumptions are valid here because we used the knowledge of how the data was simulated, they may not be as valid in real data cases). I would still repeat the simulations several times for data that looks more like your real data (but simulated so you know the truth) several times before fully trusting this method.
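Another route worth mentioning (my addition, not something tested in this answer) is a quantile regression forest, which estimates conditional quantiles from the observations falling in each leaf and so yields prediction intervals without a parametric distributional assumption. A sketch with the simulated x1, x2, y and newdat from above; note that the argument name for the requested quantiles is from memory and has changed across versions of the package (older releases used quantiles= rather than what=):

library(quantregForest)
qrf <- quantregForest(x = data.frame(x1, x2), y = y)
predict(qrf, newdata = newdat, what = c(0.025, 0.975))  # approximate 95% prediction limits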
{ "source": [ "https://stats.stackexchange.com/questions/56895", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24521/" ] }
56,950
I am interested in regression with neural networks. Neural networks with zero hidden nodes + skip-layer connections are linear models. What about the same neural nets but with hidden nodes ? I am wondering what would be the role of the skip-layer connections ? Intuitively, i would say that if you include the skip-layer connections, then the final model will a sum of a linear model + some non-linear parts. Is there any advantage or disadvantage in adding skip-layer connections to neural nets ?
I am very late to the game, but I wanted to post to reflect some current developments in convolutional neural networks with respect to skip connections . A Microsoft Research team recently won the ImageNet 2015 competition and released a technical report Deep Residual Learning for Image Recognition describing some of their main ideas. One of their main contributions is this concept of deep residual layers . These deep residual layers use skip connections . Using these deep residual layers, they were able to train a 152 layer conv net for ImageNet 2015. They even trained a 1000+ layer conv net for the CIFAR-10. The problem that motivated them is the following: When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting , and adding more layers to a suitably deep model leads to higher training error ... The idea is if that if you take a "shallow" network and just stack on more layers to create a deeper network, the performance of the deeper network should be at least as good as the shallow network as the deeper network could learn the exact shallow network by setting the new stacked layers to identity layers (in reality we know this is probably highly unlikely to happen using no architectural priors or current optimization methods). They observed that this was not the case and that training error sometimes got worse when they stacked more layers on top of a shallower model. So this motivated them to use skip connections and use so-called deep residual layers to allow their network to learn deviations from the identity layer, hence the term residual , residual here referring to difference from the identity. They implement skip connections in the following manner: So they view the map $\mathcal{F}(x) := \mathcal{H}(x) - x$ as some residual map. They use a skip layer connection to cast this mapping into $\mathcal{F}(x) + x = \mathcal{H}(x)$. So if the residual $\mathcal{F}(x)$ is "small", the map $\mathcal{H}(x)$ is roughly the identity. In this manner the use of deep residual layers via skip connections allows their deep nets to learn approximate identity layers, if that is indeed what is optimal, or locally optimal. Indeed they claim that their residual layers: We show by experiments (Fig. 7) that the learned residual functions in general have small responses As to why exactly this works they don't have an exact answer. It is highly unlikely that identity layers are optimal, but they believe that using these residual layers helps precondition the problem and that it's easier to learn a new function given a reference/baseline of comparison to the identity mapping than to learn one "from scratch" without using the identity baseline. Who knows. But I thought this would be a nice answer to your question. By the way, in hindsight: sashkello's answer is even better isn't it?
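A toy numerical illustration of the F(x) + x idea (mine, not from the paper): if the weights of the residual branch are driven to zero, the block collapses to the identity mapping, which is exactly the fallback a plain stacked layer struggles to learn.

relu <- function(z) pmax(z, 0)
residual_block <- function(x, W1, W2) relu(x %*% W1) %*% W2 + x  # H(x) = F(x) + x

x  <- matrix(rnorm(5 * 4), nrow = 5)           # 5 inputs of dimension 4
W1 <- matrix(0, 4, 4); W2 <- matrix(0, 4, 4)   # residual branch switched off
all.equal(residual_block(x, W1, W2), x)        # TRUE: the block reduces to the identity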
{ "source": [ "https://stats.stackexchange.com/questions/56950", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9447/" ] }
56,971
I would like to compare the means across three groups of equal sizes (equal sample size is small, 21). The means of each group are normally distributed, but their variances are unequal (tested via Levene's). Is a transformation the best route in this situation? Should I consider anything else first?
@JeremyMiles is right. First, there's a rule of thumb that the ANOVA is robust to heterogeneity of variance so long as the largest variance is not more than 4 times the smallest variance. Furthermore, the general effect of heterogeneity of variance is to make the ANOVA less efficient. That is, you would have lower power. Since you have a significant effect anyway, there is less reason to be concerned here. Update: I demonstrate my point about lower efficiency / power here: Efficiency of beta estimates with heteroscedasticity I have a comprehensive overview of strategies for dealing with problematic heteroscedasticity in one-way ANOVAs here: Alternatives to one-way ANOVA for heteroscedastic data
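If you would rather not lean on the rule of thumb at all, one of those alternatives, Welch's one-way ANOVA, is already available in base R (a sketch with placeholder names response, group, and mydata; substitute your own):

oneway.test(response ~ group, data = mydata, var.equal = FALSE)  # Welch correction
oneway.test(response ~ group, data = mydata, var.equal = TRUE)   # classical ANOVA, for comparison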
{ "source": [ "https://stats.stackexchange.com/questions/56971", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24725/" ] }
57,031
I have data from a survey experiment in which respondents were randomly assigned to one of four groups: > summary(df$Group) Control Treatment1 Treatment2 Treatment3 59 63 62 66 While the three treatment groups do vary slightly in the stimulus applied, the main distinction that I care about is between the control and treatment groups. So I defined a dummy variable Control : > summary(df$Control) TRUE FALSE 59 191 In the survey, respondents were asked (among other things) to choose which of two things they preferred: > summary(df$Prefer) A B NA's 152 93 5 Then, after receiving some stimulus as determined by their treatment group (and none if they were in the control group), respondents were asked to choose between the same two things: > summary(df$Choice) A B 149 101 I want to know if the being in one of the three treatment groups had an effect on the choice that respondents made in this last question. My hypothesis is that respondents who received a treatment are more likely to choose A than B . Given that I am working with categorical data, I have decided to use a logit regression (feel free to chime in if you think that's incorrect). Since respondents were randomly assigned, I am under the impression that I shouldn't necessarily need to control for other variables (e.g. demographics), so I have left those out for this question. My first model was simply the following: > x0 <- glm(Product ~ Control + Prefer, data=df, family=binomial(link="logit")) > summary(x0) Call: glm(formula = Choice ~ Control + Prefer, family = binomial(link = "logit"), data = df) Deviance Residuals: Min 1Q Median 3Q Max -1.8366 -0.5850 -0.5850 0.7663 1.9235 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 1.4819 0.3829 3.871 0.000109 *** ControlFALSE -0.4068 0.3760 -1.082 0.279224 PreferA -2.7538 0.3269 -8.424 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 328.95 on 244 degrees of freedom Residual deviance: 239.69 on 242 degrees of freedom (5 observations deleted due to missingness) AIC: 245.69 Number of Fisher Scoring iterations: 4 I am under the impression that the intercept being statistically significant is not something that holds interpretable meaning. I thought perhaps that I should include an interaction term as follows: > x1 <- glm(Choice ~ Control + Prefer + Control:Prefer, data=df, family=binomial(link="logit")) > summary(x1) Call: glm(formula = Product ~ Control + Prefer + Control:Prefer, family = binomial(link = "logit"), data = df) Deviance Residuals: Min 1Q Median 3Q Max -2.5211 -0.6424 -0.5003 0.8519 2.0688 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 3.135 1.021 3.070 0.00214 ** ControlFALSE -2.309 1.054 -2.190 0.02853 * PreferA -5.150 1.152 -4.472 7.75e-06 *** ControlFALSE:PreferA 2.850 1.204 2.367 0.01795 * --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 328.95 on 244 degrees of freedom Residual deviance: 231.27 on 241 degrees of freedom (5 observations deleted due to missingness) AIC: 239.27 Number of Fisher Scoring iterations: 5 Now the respondents status as in a treatment group has the expected effect. Was this a valid set of steps? How can I interpret the interaction term ControlFALSE:PreferA ? Are the other coefficients still the log odds?
I assume that PreferA = 1 when one preferred A and 0 otherwise, and that ControlFALSE = 1 when treated and 0 when control. The odds of preferring A when a person did not do so previously and did not receive a treatment (ControlFALSE=0 and PreferA=0) are $\exp(3.135)= 23$, i.e. there are 23 such persons who prefer A for every such person that prefers B. So A is very popular. The effect of treatment refers to a person who did not prefer A previously (PreferA=0). In that case the baseline odds decrease by a factor of $\exp(-2.309) = .099$, i.e. by $(1-.099) \times 100\%=90.1\%$, when she or he is subjected to the treatment. So the odds of choosing A for those who were treated and did not prefer A previously are $.099*23=2.3$, so there are 2.3 such persons who prefer A for every such person who prefers B. So among this group A is still more popular than B, but less so than in the untreated/baseline group. The effect of preferring A previously refers to a person who is a control (ControlFALSE = 0). In that case the baseline odds decrease by a factor of $.006$, i.e. by $99.4\%$, when someone preferred A previously. (So those that preferred A previously are a lot less likely to do so now. Does that make sense?) The interaction effect compares the effect of treatment for those persons that preferred A previously and those that did not. If a person preferred A previously (PreferA =1) then the odds ratio of treatment increases by a factor of $\exp(2.850) = 17.3$. So the odds ratio of treatment for those that preferred A previously is $17.3 \times .099 = 1.71$. Alternatively, this odds ratio of treatment for those that preferred A previously could be computed as $\exp(2.850 - 2.309)$. So the exponentiated constant gives you the baseline odds, the exponentiated coefficients of the main effects give you the odds ratios when the other variable equals 0, and the exponentiated coefficient of the interaction term tells you the ratio by which the odds ratio changes.
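To pull these quantities straight out of the fitted object x1 rather than retyping the estimates (a small sketch, not part of the original answer):

exp(coef(x1))   # baseline odds (intercept) and odds ratios for the other terms
## odds ratio of treatment for those who preferred A beforehand,
## i.e. exp(-2.309 + 2.850), about 1.7:
exp(sum(coef(x1)[c("ControlFALSE", "ControlFALSE:PreferA")]))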
{ "source": [ "https://stats.stackexchange.com/questions/57031", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24327/" ] }
57,407
"Essentially, all models are wrong, but some are useful." --- Box, George E. P.; Norman R. Draper (1987). Empirical Model-Building and Response Surfaces, p. 424, Wiley. ISBN 0471810339. What exactly is the meaning of the above phrase?
I think its meaning is best analyzed by looking at it in two parts: "All models are wrong" that is, every model is wrong because it is a simplification of reality. Some models, especially in the "hard" sciences, are only a little wrong. They ignore things like friction or the gravitational effect of tiny bodies. Other models are a lot wrong - they ignore bigger things. In the social sciences, we ignore a lot. "But some are useful" - simplifications of reality can be quite useful. They can help us explain, predict and understand the universe and all its various components. This isn't just true in statistics! Maps are a type of model; they are wrong. But good maps are very useful. Examples of other useful but wrong models abound.
{ "source": [ "https://stats.stackexchange.com/questions/57407", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13473/" ] }
57,467
I have a big dataset and I want to perform a dimensionality reduction. Now everywhere I read that I can use PCA for this. However, I still don't seem to get what to do after calculating/performing the PCA. In R this is easily done with the command princomp . But what to do after calculating the PCA? If I decided I want to use the first $100$ principal components, how do I reduce my dataset exactly?
I believe what you are getting at in your question concerns data truncation using a smaller number of principal components (PC). For such operations, I think the function prcomp is more illustrative in that it is easier to visualize the matrix multiplication used in reconstruction. First, give a synthetic dataset, Xt , you perform the PCA (typically you would center samples in order to describe PC's relating to a covariance matrix: #Generate data m=50 n=100 frac.gaps <- 0.5 # the fraction of data with NaNs N.S.ratio <- 0.25 # the Noise to Signal ratio for adding noise to data x <- (seq(m)*2*pi)/m t <- (seq(n)*2*pi)/n #True field Xt <- outer(sin(x), sin(t)) + outer(sin(2.1*x), sin(2.1*t)) + outer(sin(3.1*x), sin(3.1*t)) + outer(tanh(x), cos(t)) + outer(tanh(2*x), cos(2.1*t)) + outer(tanh(4*x), cos(0.1*t)) + outer(tanh(2.4*x), cos(1.1*t)) + tanh(outer(x, t, FUN="+")) + tanh(outer(x, 2*t, FUN="+")) Xt <- t(Xt) #PCA res <- prcomp(Xt, center = TRUE, scale = FALSE) names(res) In the results or prcomp , you can see the PC's ( res$x ), the eigenvalues ( res$sdev ) giving information on the magnitude of each PC, and the loadings ( res$rotation ). res $sdev length(res$ sdev) res $rotation dim(res$ rotation) res $x dim(res$ x) By squaring the eigenvalues, you get the variance explained by each PC: plot(cumsum(res $sdev^2/sum(res$ sdev^2))) #cumulative explained variance Finally, you can create a truncated version of your data by using only the leading (important) PCs: pc.use <- 3 # explains 93% of variance trunc <- res $x[,1:pc.use] %*% t(res$ rotation[,1:pc.use]) #and add the center (and re-scale) back to data if(all(res $scale != FALSE)){ trunc <- scale(trunc, center = FALSE , scale=1/res$scale) } if(all(res$ center != FALSE)){ trunc <- scale(trunc, center = -1 * res$center, scale=FALSE) } dim(trunc); dim(Xt) You can see that the result is a slightly smoother data matrix, with small scale features filtered out: RAN <- range(cbind(Xt, trunc)) BREAKS <- seq(RAN[1], RAN[2],,100) COLS <- rainbow(length(BREAKS)-1) par(mfcol=c(1,2), mar=c(1,1,2,1)) image(Xt, main="Original matrix", xlab="", ylab="", xaxt="n", yaxt="n", breaks=BREAKS, col=COLS) box() image(trunc, main="Truncated matrix (3 PCs)", xlab="", ylab="", xaxt="n", yaxt="n", breaks=BREAKS, col=COLS) box() And here is a very basic approach that you can do outside of the prcomp function: #alternate approach Xt.cen <- scale(Xt, center=TRUE, scale=FALSE) C <- cov(Xt.cen, use="pair") E <- svd(C) A <- Xt.cen %*% E$u #To remove units from principal components (A) #function for the exponent of a matrix "%^%" <- function(S, power) with(eigen(S), vectors %*% (values^power * t(vectors))) Asc <- A %*% (diag(E$d) %^% -0.5) # scaled principal components #Relationship between eigenvalues from both approaches plot(res $sdev^2, E$ d) #PCA via a covariance matrix - the eigenvalues now hold variance, not stdev abline(0,1) # same results Now, deciding which PCs to retain is a separate question - one that I was interested in a while back . Hope that helps.
{ "source": [ "https://stats.stackexchange.com/questions/57467", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24730/" ] }
57,746
When running a multiple regression model in R, one of the outputs is a residual standard error of 0.0589 on 95,161 degrees of freedom. I know that the 95,161 degrees of freedom is given by the difference between the number of observations in my sample and the number of variables in my model. What is the residual standard error?
A fitted regression model uses the estimated parameters to generate point predictions, which are estimates of the mean response you would observe if you replicated the study with the same $X$ values an infinite number of times (and if the linear model is true). The differences between these predicted values and the responses used to fit the model are called "residuals", which, if you were to replicate the data-collection process, would behave like random variables with mean 0. The residual standard error is the estimated standard deviation of those residuals: the square root of the residual sum of squares divided by the residual degrees of freedom (your 95,161). It estimates the standard deviation of the error term, i.e. the typical size of the deviation of an observation from its fitted value, in the units of the response, and it is used to estimate the sampling distribution of the parameters. When the residual standard error is exactly 0 the model fits the data perfectly (likely due to overfitting). If the residual standard error cannot be shown to be appreciably smaller than the variability in the unconditional response, then there is little evidence to suggest the linear model has any predictive ability.
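Concretely, you can reproduce the number R reports (a sketch on a built-in dataset, cars being my choice of example):

fit <- lm(dist ~ speed, data = cars)
sqrt(sum(residuals(fit)^2) / df.residual(fit))  # RSS / residual df, square-rooted
summary(fit)$sigma                              # the "residual standard error" in the output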
{ "source": [ "https://stats.stackexchange.com/questions/57746", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22008/" ] }
58,220
Let us say that I have 1000 components and I have been collecting data on how many times these log a failure and each time they logged a failure, I am also keeping track of how long it took my team to fix the problem. In short, I have been recording the time to repair (in seconds) for each of these 1000 components. Data is given at the end of this question. I took all these values and drew a Cullen and Frey graph in R using descdist from the fitdistrplus package. My hope was to understand if the time to repair follows a particular distribution. Here's the plot with boot=500 to get bootstrapped values: I see that this plot is telling me that the observation falls into the beta distribution (or maybe not, in which case, what is it revealing?) Now, considering that I am a system architect and not a statistician, what is this plot revealing? (I am looking for a practical real-world intuition behind these results). EDIT: QQplot using the qqPlot function in package car . I first estimated the shape and scale parameters using the fitdistr function. > fitdistr(Data$Duration, "weibull") shape scale 3.783365e-01 5.273310e+03 (6.657644e-03) (3.396456e+02) Then, I did this: qqPlot(LB$Duration, distribution="weibull", shape=3.783365e-01, scale=5.273310e+03) EDIT 2: Updating with a lognormal QQplot. Here's my data: c(1528L, 285L, 87138L, 302L, 115L, 416L, 8940L, 19438L, 165820L, 540L, 1653L, 1527L, 974L, 12999L, 226L, 190L, 306L, 189L, 138542L, 3049L, 129067L, 21806L, 456L, 22745L, 198L, 44568L, 29355L, 17163L, 294L, 4218L, 3672L, 10100L, 290L, 8341L, 128L, 11263L, 1495243L, 1699L, 247L, 249L, 300L, 351L, 608L, 186684L, 524026L, 1392L, 396L, 298L, 1063L, 11102L, 6684L, 6546L, 289L, 465L, 261L, 175L, 356L, 61652L, 236L, 74795L, 64982L, 294L, 95221L, 322L, 38892L, 2146L, 59347L, 2118L, 310801L, 277964L, 205679L, 5980L, 66102L, 36495L, 580277L, 27600L, 509L, 21795L, 21795L, 301L, 617L, 331L, 250L, 123501L, 144L, 347L, 121443L, 211L, 232L, 445783L, 9715L, 10308L, 1921L, 178L, 168L, 291L, 6915L, 6735L, 1008478L, 274L, 20L, 3287L, 591208L, 797L, 586L, 170613L, 938L, 3121L, 249L, 1497L, 24L, 1407L, 1217L, 1323L, 272L, 443L, 49466L, 323L, 323L, 784L, 900L, 26814L, 2452L, 214713L, 3668L, 325L, 20439L, 12304L, 261L, 137L, 379L, 2273L, 274L, 17760L, 920699L, 13L, 485644L, 1243L, 226L, 20388L, 584L, 17695L, 1477L, 242L, 280L, 253L, 17964L, 7073L, 308L, 260692L, 155L, 58136L, 16644L, 29353L, 543L, 276L, 2328L, 254L, 1392L, 272L, 480L, 219L, 60L, 2285L, 2676L, 256L, 234L, 1240L, 219714L, 102174L, 258L, 266L, 33043L, 530L, 6334L, 94047L, 293L, 536L, 48557L, 4141L, 39079L, 23259L, 2235L, 17673L, 28268L, 112L, 64824L, 127992L, 5291L, 51693L, 762L, 1070735L, 179L, 189L, 157L, 157L, 122L, 1045L, 1317L, 186L, 57901L, 456126L, 674L, 2375L, 1782L, 257L, 23L, 248L, 216L, 114L, 11662L, 107890L, 203022L, 513L, 2549L, 146L, 53331L, 1690L, 10752L, 1648611L, 148L, 611L, 198L, 443L, 10061L, 720L, 10L, 24L, 220L, 38L, 453L, 10066L, 115774L, 97713L, 7234L, 773L, 90154L, 151L, 1560L, 222L, 51558L, 214L, 948L, 208L, 1127L, 221L, 169L, 1528L, 78959L, 61566L, 88049L, 780L, 6196L, 633L, 214L, 2547L, 19088L, 119L, 561L, 112L, 17557L, 101086L, 244L, 257L, 94483L, 6189L, 236L, 248L, 966L, 117L, 333L, 278L, 553L, 568L, 356L, 731L, 25258L, 127931L, 7735L, 112717L, 395L, 12960L, 11383L, 16L, 229067L, 259076L, 311L, 366L, 2696L, 7265L, 259076L, 3551L, 7782L, 4256L, 87121L, 4971L, 4706L, 245L, 34457L, 4971L, 4706L, 245L, 34457L, 258L, 36071L, 301L, 2214L, 2231L, 247L, 537L, 301L, 2214L, 230L, 1076L, 1881L, 266L, 4371L, 88304L, 50056L, 50056L, 
232L, 186336L, 48200L, 112L, 48200L, 48200L, 6236L, 82158L, 6236L, 82158L, 1331L, 713L, 89106L, 46315L, 220L, 5634L, 170601L, 588L, 1063L, 2282L, 247L, 804L, 125L, 5507L, 1271L, 2567L, 441L, 6623L, 64781L, 1545L, 240L, 2921L, 777L, 697L, 2018L, 24064L, 199L, 183L, 297L, 9010L, 16304L, 930L, 6522L, 5717L, 17L, 20L, 364418L, 58246L, 7976L, 304L, 4814L, 307L, 487L, 292016L, 6972L, 15L, 40922L, 471L, 2342L, 2248L, 23L, 2434L, 23342L, 807L, 21L, 345568L, 324L, 188L, 184L, 191L, 188L, 198L, 195L, 187L, 185L, 33968L, 1375L, 121L, 56872L, 35970L, 929L, 151L, 5526L, 156L, 2687L, 4870L, 26939L, 180L, 14623L, 265L, 261L, 30501L, 5435L, 9849L, 5496L, 1753L, 847L, 265L, 280L, 1840L, 1107L, 2174L, 18907L, 14762L, 3450L, 9648L, 1080L, 45L, 6453L, 136351L, 521L, 715L, 668L, 14550L, 1381L, 13294L, 13100L, 6354L, 6319L, 84837L, 84726L, 84702L, 2126L, 36L, 572L, 1448L, 215L, 12L, 7105L, 758L, 4694L, 29369L, 7579L, 709L, 121L, 781L, 1391L, 2166L, 160403L, 674L, 1933L, 320L, 1628L, 2346L, 2955L, 204852L, 206277L, 2408L, 2162L, 312L, 280L, 243L, 84050L, 830L, 290L, 10490L, 119392L, 182960L, 261791L, 92L, 415L, 144L, 2006L, 1172L, 1886L, 233L, 36123L, 7855L, 554L, 234L, 2292L, 21L, 132L, 142L, 3848L, 3847L, 3965L, 3431L, 2465L, 1717L, 3952L, 854L, 854L, 834L, 14608L, 172L, 7885L, 75303L, 535L, 443347L, 5478L, 782L, 9066L, 6733L, 568L, 611L, 533L, 1022L, 334L, 21628L, 295362L, 34L, 486L, 279L, 2530L, 504L, 525L, 367L, 293L, 258L, 1854L, 209L, 152L, 1139L, 398L, 3275L, 284178L, 284127L, 826L, 751L, 1814L, 398L, 1517L, 255L, 13745L, 43L, 1463L, 385L, 64L, 5279L, 885L, 1193L, 190L, 451L, 1093L, 322L, 453L, 680L, 452L, 677L, 295L, 120L, 12184L, 250L, 1165L, 476L, 211L, 4437L, 7310L, 778L, 260L, 855L, 353L, 97L, 34L, 87L, 137L, 101L, 416L, 130L, 148L, 832L, 187L, 291L, 4050L, 14569L, 271L, 1968L, 6553L, 2535L, 227L, 202L, 647L, 266L, 2681L, 106L, 158L, 257L, 234L, 1726L, 34L, 465L, 436L, 245L, 245L, 2790L, 104L, 1283L, 44416L, 142L, 13617L, 232L, 171L, 221L, 719L, 176L, 5838L, 37488L, 12214L, 3780L, 5556L, 5368L, 106L, 246L, 101L, 158L, 10743L, 5L, 46478L, 5286L, 9866L, 32593L, 174L, 298L, 19617L, 19350L, 230L, 78449L, 78414L, 78413L, 78413L, 6260L, 6260L, 209L, 2552L, 522L, 178L, 140L, 173046L, 299L, 265L, 132360L, 132252L, 4821L, 4755L, 197L, 567L, 113L, 30314L, 7006L, 10L, 30L, 55281L, 8263L, 8244L, 8142L, 568L, 1592L, 1750L, 628L, 60304L, 212553L, 51393L, 222L, 13471L, 3423L, 306L, 325L, 2650L, 74796L, 37807L, 103751L, 6924L, 6727L, 667L, 657L, 752L, 546L, 1860L, 230L, 217L, 1422L, 347L, 341055L, 4510L, 4398L, 179670L, 796L, 1210L, 2579L, 250L, 273L, 407L, 192049L, 236L, 96084L, 5808L, 7546L, 10646L, 197L, 188L, 19L, 167877L, 200509L, 429L, 632L, 495L, 471L, 2578L, 251L, 198L, 175L, 19161L, 289L, 20718L, 201L, 937L, 283L, 4829L, 4776L, 5949L, 856907L, 2747L, 2761L, 3150L, 3142L, 68031L, 187666L, 255211L, 255231L, 6581L, 392991L, 858L, 115L, 141L, 85629L, 125433L, 6850L, 6684L, 23L, 529L, 562L, 216L, 1450L, 838L, 3335L, 1446L, 178L, 130101L, 239L, 1838L, 286L, 289L, 68974L, 757L, 764L, 218L, 207L, 3485L, 16597L, 236L, 1387L, 2121L, 2122L, 957L, 199899L, 409803L, 367877L, 1650L, 116710L, 5662L, 12497L, 613889L, 10182L, 260L, 9654L, 422947L, 294L, 284L, 996L, 1444L, 2373L, 308L, 1522L, 288L, 937L, 291L, 93L, 17629L, 5151L, 184L, 161L, 3273L, 1090L, 179840L, 1294L, 922L, 826L, 725L, 252L, 715L, 6116L, 259L, 6171L, 198L, 5610L, 5679L, 862L, 332L, 1324L, 536L, 98737L, 316L, 5608L, 5526L, 404L, 255L, 251L, 14067L, 3360L, 3623L, 8920L, 288L, 447L, 453L, 1604687L, 115L, 127L, 127L, 2398L, 2396L, 2396L, 2398L, 2396L, 2397L, 
154L, 154L, 154L, 154L, 887L, 636L, 227L, 227L, 354L, 7150L, 30227L, 546013L, 545979L, 251L, 171647L, 252L, 583L, 593L, 10222L, 2660L, 1864L, 2884L, 1577L, 1304L, 337L, 2642L, 2462L, 280L, 284L, 3463L, 288L, 288L, 540L, 287L, 526L, 721L, 1015L, 74071L, 6338L, 1590L, 582L, 765L, 291L, 983L, 158L, 625L, 581L, 350L, 6896L, 13567L, 20261L, 4781L, 1025L, 722L, 721L, 1618L, 1799L, 987L, 6373L, 733L, 5648L, 987L, 1010L, 985L, 920L, 920L, 4696L, 1154L, 1132L, 927L, 4546L, 692L, 702L, 301L, 305L, 316L, 313L, 801L, 788L, 14624L, 14624L, 9778L, 9778L, 9778L, 9778L, 757L, 275L, 1480L, 610L, 68495L, 1152L, 1155L, 323L, 312L, 303L, 298L, 1641L, 1607L, 1645L, 616L, 1002L, 1034L, 1022L, 1030L, 1030L, 1027L, 1027L, 934L, 960L, 47L, 44L, 1935L, 1925L, 43L, 47L, 1933L, 1898L, 938L, 830L, 286L, 287L, 807L, 807L, 741L, 628L, 482L, 500L, 480L, 431L, 287L, 298L, 227L, 968L, 961L, 943L, 932L, 704L, 420L, 548L, 3612L, 1723L, 780L, 337L, 780L, 527L, 528L, 499L, 679L, 308L, 1104L, 314L, 1607L, 990L, 1156L, 562L, 299L, 16L, 20L, 287L, 581L, 1710L, 1859L, 988L, 962L, 834L, 1138L, 363L, 294L, 2678L, 362L, 539L, 295L, 996L, 977L, 988L, 39L, 762L, 579L, 595L, 405L, 1001L, 1002L, 555L, 1102L, 54L, 1283L, 347L, 1384L, 603L, 307L, 306L, 302L, 302L, 288L, 288L, 286L, 292L, 529L, 56844L, 1986L, 503L, 751L, 3977L, 367L, 4817L, 4631L, 4609L, 4579L, 937L, 402L, 257L, 570L, 1156L, 3297L, 3948L, 4527L, 3119L, 15227L, 3893L, 538L, 802L, 5128L, 595L, 522L, 1346L, 449L, 443L, 323L, 372L, 369L, 307L, 246L, 260L, 342L, 283L, 963L, 751L, 108L, 280L, 320L, 287L, 285L, 283L, 529L, 536L, 298L, 29427L, 29413L, 761L, 249L, 255L, 304L, 297L, 256L, 119L, 288L, 564L, 234L, 226L, 530L, 766L, 223L, 5858L, 5568L, 481L, 462L, 8692L, 498L, 330L, 7604L, 15L, 121738L, 121833L, 826L, 760L, 208937L, 1598L, 1166L, 446L, 85598L, 513L, 84897L, 50239L, 308L, 1351L, 283L, 7100L, 7101L, 321L, 1019L, 287L, 253L, 634L, 629L, 628L, 678L, 1391L, 1147L, 853L, 287L, 1174L, 287L, 197145L, 197116L, 147L, 147L, 712L, 274L, 283L, 907L, 434L, 1164L, 30L, 599L, 577L, 315L, 1423L, 1250L, 30L, 1502L, 296L, 348L, 617L, 339L, 328L, 123L, 338L, 332L, 47133L, 288L, 340L, 1524L, 1049L, 1072L, 1031L, 1059L, 1038L, 989L, 52L, 54L, 986L, 46L, 1202L, 1272L, 43L, 785L, 761L, 16924L, 289L, 264L, 453L, 365L, 356L, 280L, 16520L, 281L, 255L, 244L, 642L, 1003L, 951L, 921L, 1011L, 45L, 932L, 973L, 39L, 40L, 159L, 566L, 49L, 1161L, 50L, 200L, 215L, 361L, 377L, 980L, 935L, 882L, 281L, 280L, 1025L, 319L, 690L, 284L, 271L, 276L, 286L, 371L, 324L, 304L, 311L, 341L, 603L, 11566L, 270L, 286L, 342L, 326L, 11018L, 282L, 271L, 286L, 586L, 604L, 750L, 608L, 523L, 506L, 3303L, 1079797L, 1079811L, 530L, 2631L, 882L, 628L, 30L, 11905L, 12966L, 390995L, 322353L, 1763L, 1755L, 709L, 713L, 365L, 351L, 205L, 393L, 284L, 39417L, 320L, 322L, 8039L, 995L, 625L, 785L, 298L, 518L, 467L, 1050L, 329L, 141345L, 55566L, 40318L, 287L, 220L, 309346L, 220L, 215314L, 304L, 296L, 4301L, 4311L, 1543L, 1549L, 2876L, 2894L, 287L, 290L, 215L, 605L, 577L, 254L, 1330L, 1863L, 140L, 328L, 284L, 291L, 283L, 1701L, 1696L, 519L, 499L, 2440007L, 289L, 294L, 311L, 324L, 4793L, 4808L, 249L, 205L, 219L, 638L, 2653L, 2648L, 351L, 323L, 1056L, 327L, 794L, 1491L, 284L, 289L, 220L, 765L, 565L, 808L, 832L, 772L, 41668L, 42307L, 6843L, 6612L, 6598L, 241164L, 531L, 554L, 1246L, 459L, 971504L, 805L, 2615L, 2290L, 2086L, 2063L, 2685L, 2704L, 275L, 461L, 458L, 317L, 889L, 335L, 974L, 959L, 253142L, 257L, 250L, 282L, 293L, 666L, 4991L, 287L, 588L, 555L, 3585L, 3195L, 481L, 2405L, 135266L, 571L, 1805L, 365L, 340L, 232L, 224L, 298L, 3682L, 3677L, 
577L, 571L, 288L, 297L, 293L, 291L, 256L, 214L, 1257L, 1271L, 65471L, 65471L, 65476L, 65476L, 4680L, 4675L, 339L, 329L, 284L, 288L, 4859L, 4851L, 2534L, 24222L, 330684L, 330684L, 2116L, 282L, 412L, 429L, 2324L, 1978L, 502L, 286L, 943149L, 256L, 288L, 286L, 1098L, 1125L, 442L, 240L, 182L, 2617L, 1068L, 25204L, 170L, 418L, 1867L, 8989L, 1804L, 1240L, 6610L, 1237L, 1750L, 1565L, 1565L, 3662L, 1803L, 218L, 172L, 780L, 1418L, 2390L, 7514L, 23214L, 1464L, 1060L, 1503L, 308802L, 308357L, 21691L, 298817L, 289875L, 4442L, 289284L, 235L, 456L, 676L, 897L, 289109L, 1865L, 288030L, 287899L, 287767L, 287635L, 286639L, 286509L, 286157L, 1427L, 2958L, 4340L, 5646L, 282469L, 7016L, 279353L, 278568L, 316L, 558L, 3501L, 1630L, 278443L, 1360L, 828L, 1089L, 278430L, 278299L, 278169L, 278035L, 277671L, 277541L, 277400L, 277277L, 276567L, 285L, 555L, 834L, 1084L, 1355L, 5249L, 14776L, 1441L, 755L, 755L, 70418L, 3135L, 1026L, 1497L, 949663L, 68L, 526058L, 1692L, 150L, 48370L, 4207L, 4088L, 197551L, 197109L, 196891L, 196634L, 2960L, 194319L, 194037L, 3008L, 3927L, 178762L, 178567L, 403L, 178124L, 2590L, 177405L, 177179L, 301L, 328L, 390685L, 390683L, 575L, 1049L, 819L, 367L, 289L, 277L, 390L, 301L, 318L, 3806L, 3778L, 3699L, 3691L)
The thing is that real data doesn't necessarily follow any particular distribution you can name ... and indeed it would be surprising if it did. So while I could name a dozen possibilities, the actual process generating these observations probably won't be anything that I could suggest either. As sample size increases, you will likely be able to reject any well-known distribution. Parametric distributions are often a useful fiction, not a perfect description. Let's at least look at the log-data, first in a normal qqplot and then as a kernel density estimate to see how it appears: Note that in a Q-Q plot done this way around, the flattest sections of slope are where you tend to see peaks. This has a clear suggestion of a peak near 6 and another about 12.3. The kernel density estimate of the log shows the same thing: In both cases, the indication is that the distribution of the log time is right skew, but it's not clearly unimodal. Clearly the main peak is somewhere around the 5 minute mark. It may be that there's a second small peak in the log-time density, that appears to be somewhere in the region of perhaps 60 hours. Perhaps there are two very qualitatively different "types" of repair, and your distribution is reflecting a mix of two types. Or just maybe once a repair hits a full day of work, it tends to just take a longer time (that is, rather than reflecting a peak at just over a week, it may reflect an anti-peak at just over a day - once you get longer than just under a day to repair, jobs tend to 'slow down'). Even the log of the log of the time is somewhat right skew. Let's look at a stronger transformation, where the second peak is quite clear - minus the inverse of the fourth root of time: The marked lines are at 5 minutes (blue) and 60 hours (dashed green); as you see, there's a peak just below 5 minutes and another somewhere above 60 hours. Note that the upper "peak" is out at about the 95th percentile and won't necessarily be close to a peak in the untransformed distribution. There's also a suggestion of another dip around 7.5 minutes with a broad peak between 10 and 20 minutes, which might suggest a very slight tendency to 'round up' in that region (not that there's necessarily anything untoward going on; even if there's no dip/peak in inherent job time there, it could even be something as simple as a function of human ability to focus in one unbroken period for more than a few minutes.) It looks to me like a two-component (two peak) or maybe three component mixture of right-skew distributions would describe the process reasonably well but would not be a perfect description. The package logspline seems to pick four peaks in log(time): with peaks near 30, 270, 900 and 270K seconds (30s,4.5m,15m and 75h). Using logspline with other transforms generally find 4 peaks but with slightly different centers (when translated to the original units); this is to be expected with transformations.
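For completeness, here is a sketch of the diagnostics described above, assuming the repair times sit in Data$Duration as in the question's fitdistr() call (the plots in this answer were produced along these general lines, but not necessarily with this exact code):

logt <- log(Data$Duration)
qqnorm(logt); qqline(logt)                            # normal Q-Q plot of log(time)
plot(density(logt), main = "KDE of log repair time")  # kernel density estimate

library(logspline)
fit_ls <- logspline(logt)
plot(fit_ls)                                          # logspline density estimate of log(time)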
{ "source": [ "https://stats.stackexchange.com/questions/58220", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2164/" ] }
58,242
When I read "moving average" in relation to a time series, I think something like $\frac{(x_{t-1} + x_{t-2} + x_{t-3})}3$ , or perhaps a weighted average like $0.5x_{t-1} + 0.3x_{t-2} + 0.2x_{t-3}$ . (I realize these are actually AR(3) models, but these are what my brain jumps to.) Why are MA(q) models formulas of error terms, or "innovations"? What does $\{\epsilon\}$ have to do with a moving average? I feel like I'm missing some obvious intuition.
A footnote in Pankratz (1983) , on page 48, says: The label "moving average" is technically incorrect since the MA coefficients may be negative and may not sum to unity. This label is used by convention. Box and Jenkins (1976) also says something similar. On page 10: The name "moving average" is somewhat misleading because the weights $1, -\theta_{1}, -\theta_{2}, \ldots, -\theta_{q}$, which multiply the $a$'s, need not total unity nor need that be positive. However, this nomenclature is in common use, and therefore we employ it. I hope this helps.
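To make the "weighted average of innovations" reading concrete (a small simulation of my own, not from the cited books): an MA(2) series is literally a weighted combination of the current and past innovations, not of past observations.

set.seed(1)
n <- 200
e <- rnorm(n + 2)                                       # the innovations
x <- e[3:(n + 2)] + 0.5 * e[2:(n + 1)] + 0.3 * e[1:n]   # x_t = e_t + 0.5 e_{t-1} + 0.3 e_{t-2}
y <- arima.sim(model = list(ma = c(0.5, 0.3)), n = n)   # the same process via arima.sim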
{ "source": [ "https://stats.stackexchange.com/questions/58242", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25276/" ] }
58,391
How can we calculate the Mean absolute percentage error (MAPE) of our predictions using Python and scikit-learn? From the docs , we have only these 4 metric functions for Regressions: metrics.explained_variance_score(y_true, y_pred) metrics.mean_absolute_error(y_true, y_pred) metrics.mean_squared_error(y_true, y_pred) metrics.r2_score(y_true, y_pred)
As noted (for example, in Wikipedia ), MAPE can be problematic. Most pointedly, it can cause division-by-zero errors. My guess is that this is why it is not included in the sklearn metrics. However, it is simple to implement.

import numpy as np
from sklearn.utils import check_arrays  # older scikit-learn; newer versions provide check_array / check_consistent_length instead

def mean_absolute_percentage_error(y_true, y_pred):
    y_true, y_pred = check_arrays(y_true, y_pred)
    # Note: does not handle mixed 1d representation
    # if _is_1d(y_true):
    #     y_true, y_pred = _check_1d_array(y_true, y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

Use it like any other metric...:

> y_true = [3, -0.5, 2, 7]; y_pred = [2.5, -0.3, 2, 8]
> mean_absolute_percentage_error(y_true, y_pred)
Out[19]: 17.738095238095237

(Note that I'm multiplying by 100 and returning a percentage.)

... but with caution:

> y_true = [3, 0.0, 2, 7]; y_pred = [2.5, -0.3, 2, 8]
> # Note the zero in y_true
> mean_absolute_percentage_error(y_true, y_pred)
-c:8: RuntimeWarning: divide by zero encountered in divide
Out[21]: inf
{ "source": [ "https://stats.stackexchange.com/questions/58391", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25058/" ] }
58,564
In a group of students, there are 2 out of 18 that are left-handed. Find the posterior distribution of left-handed students in the population assuming an uninformative prior. Summarize the results. According to the literature 5-20% of people are left-handed. Take this information into account in your prior and calculate the new posterior. I know the beta distribution should be used here. First, with $\alpha$ and $\beta$ values set to 1? The equation I found in the material for the posterior is $$\pi(r \vert Y ) \propto r^{(Y +\alpha-1)} \times (1 - r)^{(N-Y +\beta-1)}$$ with $Y=2$, $N=18$. Why is that $r$ in the equation? ($r$ denoting the proportion of left-handed people). It is unknown, so how can it be in this equation? To me it seems ridiculous to calculate $r$ given $Y$ and use that $r$ in the equation giving $r$. Well, with the sample $r=2/18$ the result was $0.0019$. What should I deduce from that? The equation giving an expected value of $r$ given known $Y$ and $N$ worked better and gave me $0.15$, which sounds about right. The equation being $E(r \vert X, N, \alpha, \beta) = (\alpha + X)/(\alpha + \beta + N)$ with value $1$ assigned to $\alpha$ and $\beta$. What values should I give $\alpha$ and $\beta$ to take the prior information into account? Some tips would be much appreciated. A general lecture on prior and posterior distributions wouldn't hurt either (I have a vague understanding of what they are, but only vague). Also bear in mind I'm not a very advanced statistician (actually I'm a political scientist by my main trade), so advanced mathematics will probably fly over my head.
Let me first explain what a conjugate prior is. I will then explain the Bayesian analyses using your specific example. Bayesian statistics involve the following steps: Define the prior distribution that incorporates your subjective beliefs about a parameter (in your example the parameter of interest is the proportion of left-handers). The prior can be "uninformative" or "informative" (but there is no prior that has no information, see the discussion here ). Gather data. Update your prior distribution with the data using Bayes' theorem to obtain a posterior distribution. The posterior distribution is a probability distribution that represents your updated beliefs about the parameter after having seen the data. Analyze the posterior distribution and summarize it (mean, median, sd, quantiles, ...). The basis of all bayesian statistics is Bayes' theorem, which is $$ \mathrm{posterior} \propto \mathrm{prior} \times \mathrm{likelihood} $$ In your case, the likelihood is binomial. If the prior and the posterior distribution are in the same family, the prior and posterior are called conjugate distributions. The beta distribution is a conjugate prior because the posterior is also a beta distribution. We say that the beta distribution is the conjugate family for the binomial likelihood. Conjugate analyses are convenient but rarely occur in real-world problems. In most cases, the posterior distribution has to be found numerically via MCMC (using Stan, WinBUGS, OpenBUGS, JAGS, PyMC or some other program). If the prior probability distribution does not integrate to 1, it is called an improper prior, if it does integrate to 1 it is called a proper prior. In most cases, an improper prior does not pose a major problem for Bayesian analyses. The posterior distribution must be proper though, i.e. the posterior must integrate to 1. These rules of thumb follow directly from the nature of the Bayesian analysis procedure: If the prior is uninformative, the posterior is very much determined by the data (the posterior is data-driven) If the prior is informative, the posterior is a mixture of the prior and the data The more informative the prior, the more data you need to "change" your beliefs, so to speak because the posterior is very much driven by the prior information If you have a lot of data, the data will dominate the posterior distribution (they will overwhelm the prior) An excellent overview of some possible "informative" and "uninformative" priors for the beta distribution can be found in this post . Say your prior beta is $\mathrm{Beta}(\pi_{LH}| \alpha, \beta)$ where $\pi_{LH}$ is the proportion of left-handers. To specify the prior parameters $\alpha$ and $\beta$ , it is useful to know the mean and variance of the beta distribution (for example, if you want your prior to have a certain mean and variance). The mean is $\bar{\pi}_{LH}=\alpha/(\alpha + \beta)$ . Thus, whenever $\alpha =\beta$ , the mean is $0.5$ . The variance of the beta distribution is $\frac{\alpha\beta}{(\alpha + \beta)^{2}(\alpha + \beta + 1)}$ . Now, the convenient thing is that you can think of $\alpha$ and $\beta$ as previously observed (pseudo-)data, namely $\alpha$ left-handers and $\beta$ right-handers out of a (pseudo-)sample of size $n_{eq}=\alpha + \beta$ . The $\mathrm{Beta}(\pi_{LH} |\alpha=1, \beta=1)$ distribution is the uniform (all values of $\pi_{LH}$ are equally probable) and is the equivalent of having observed two people out of which one is left-handed and one is right-handed. 
The posterior beta distribution is simply $\mathrm{Beta}(z + \alpha, N - z +\beta)$ where $N$ is the size of the sample and $z$ is the number of left-handers in the sample. The posterior mean of $\pi_{LH}$ is therefore $(z + \alpha)/(N + \alpha + \beta)$ . So to find the parameters of the posterior beta distribution, we simply add $z$ left-handers to $\alpha$ and $N-z$ right-handers to $\beta$ . The posterior variance is $\frac{(z+\alpha)(N-z+\beta)}{(N+\alpha+\beta)^{2}(N + \alpha + \beta + 1)}$ . Note that a highly informative prior also leads to a smaller variance of the posterior distribution (the graphs below illustrate the point nicely). In your case, $z=2$ and $N=18$ and your prior is the uniform which is uninformative, so $\alpha = \beta = 1$ . Your posterior distribution is therefore $Beta(3, 17)$ . The posterior mean is $\bar{\pi}_{LH}=3/(3+17)=0.15$ . Here is a graph that shows the prior, the likelihood of the data and the posterior You see that because your prior distribution is uninformative, your posterior distribution is entirely driven by the data. Also plotted is the highest density interval (HDI) for the posterior distribution. Imagine that you put your posterior distribution in a 2D-basin and start to fill in water until 95% of the distribution are above the waterline. The points where the waterline intersects with the posterior distribution constitute the 95%-HDI. Every point inside the HDI has a higher probability than any point outside it. Also, the HDI always includes the peak of the posterior distribution (i.e. the mode). The HDI is different from an equal tailed 95% credible interval where 2.5% from each tail of the posterior are excluded (see here ). For your second task, you're asked to incorporate the information that 5-20% of the population are left-handers into account. There are several ways of doing that. The easiest way is to say that the prior beta distribution should have a mean of $0.125$ which is the mean of $0.05$ and $0.2$ . But how to choose $\alpha$ and $\beta$ of the prior beta distribution? First, you want your mean of the prior distribution to be $0.125$ out of a pseudo-sample of equivalent sample size $n_{eq}$ . More generally, if you want your prior to have a mean $m$ with a pseudo-sample size $n_{eq}$ , the corresponding $\alpha$ and $\beta$ values are: $\alpha = mn_{eq}$ and $\beta = (1-m)n_{eq}$ . All you are left to do now is to choose the pseudo-sample size $n_{eq}$ which determines how confident you are about your prior information. Let's say you are very sure about your prior information and set $n_{eq}=1000$ . The parameters of your prior distribution are thereore $\alpha = 0.125\cdot 1000 = 125$ and $\beta = (1 - 0.125)\cdot 1000 = 875$ . The posterior distribution is $\mathrm{Beta}(127, 891)$ with a mean of about $0.125$ which is practically the same as the prior mean of $0.125$ . The prior information is dominating the posterior (see the following graph): If you are less sure about the prior information, you could set the $n_{eq}$ of your pseudo-sample to, say, $10$ , which yields $\alpha=1.25$ and $\beta=8.75$ for your prior beta distribution. The posterior distribution is $\mathrm{Beta}(3.25, 24.75)$ with a mean of about $0.116$ . The posterior mean is now near the mean of your data ( $0.111$ ) because the data overwhelm the prior. 
Here is the graph showing the situation: A more advanced method of incorporating the prior information would be to say that the $0.025$ quantile of your prior beta distribution should be about $0.05$ and the $0.975$ quantile should be about $0.2$. This is equivalent to saying that you are 95% sure that the proportion of left-handers in the population lies between 5% and 20%. The function beta.select in the R package LearnBayes calculates the corresponding $\alpha$ and $\beta$ values of a beta distribution with such quantiles. The code is library(LearnBayes) quantile1=list(p=.025, x=0.05) # the 2.5% quantile should be 0.05 quantile2=list(p=.975, x=0.2) # the 97.5% quantile should be 0.2 beta.select(quantile1, quantile2) [1] 7.61 59.13 It seems that a beta distribution with parameters $\alpha = 7.61$ and $\beta=59.13$ has the desired properties. The prior mean is $7.61/(7.61 + 59.13)\approx 0.114$, which is near the mean of your data ($0.111$). Again, this prior distribution incorporates the information of a pseudo-sample with an equivalent sample size of about $n_{eq}\approx 7.61+59.13 \approx 66.74$. The posterior distribution is $\mathrm{Beta}(9.61, 75.13)$ with a mean of $0.113$, which is comparable with the mean of the previous analysis using the highly informative $\mathrm{Beta}(125, 875)$ prior. Here is the corresponding graph: See also this reference for a short but imho good overview of Bayesian reasoning and simple analysis. A longer introduction to conjugate analyses, especially for binomial data, can be found here . A general introduction to Bayesian thinking can be found here . More slides concerning aspects of Bayesian statistics are here .
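To make the conjugate updating concrete, here is a minimal R sketch of the first (uniform-prior) analysis described above. The numbers (z = 2 left-handers out of N = 18, prior Beta(1, 1)) are taken from the answer; an equal-tailed credible interval is used here instead of the HDI purely because it is a one-liner with qbeta.
# Conjugate beta-binomial update: prior Beta(1, 1), data z = 2 out of N = 18
alpha0 <- 1; beta0 <- 1                 # uniform prior
z <- 2; N <- 18                         # observed left-handers and sample size
alpha1 <- alpha0 + z                    # posterior alpha = 3
beta1  <- beta0 + N - z                 # posterior beta  = 17
alpha1 / (alpha1 + beta1)               # posterior mean: 0.15
qbeta(c(0.025, 0.975), alpha1, beta1)   # equal-tailed 95% credible interval
curve(dbeta(x, alpha1, beta1), from = 0, to = 1,
      xlab = expression(pi[LH]), ylab = "Density")  # plot of the posterior density
The informative-prior analyses above follow the same two lines of arithmetic: just replace alpha0 and beta0 with 125 and 875 (or 1.25 and 8.75, or 7.61 and 59.13).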
{ "source": [ "https://stats.stackexchange.com/questions/58564", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25443/" ] }
58,567
I clustered data according to latitude and longitude. I used kmeans++ to try to improve the accuracy of k-means, but the results still do not change much. Here is my DB-index graph for kmeans++. For k-means I could not decide what the optimal number of clusters should be. I also plotted the within-cluster sum of squares, for kmeans++ and for k-means. What should I do? Please give me suggestions. Here are the values of boot and noise using clusterboot.
{ "source": [ "https://stats.stackexchange.com/questions/58567", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22542/" ] }
58,709
I have a theoretical economic model which is as follows, $$ y = a + b_1x_1 + b_2x_2 + b_3x_3 + u $$ So theory says that there are $x_1$, $x_2$ and $x_3$ factors to estimate $y$. Now I have the real data and I need to estimate $b_1$, $b_2$, $b_3$. The problem is that the real data set contains only data for $x_1$ and $x_2$; there are no data for $x_3$. So the model I can fit actually is: $$y = a + b_1x_1 + b_2x_2 + u$$ Is it OK to estimate this model? Do I lose anything estimating it? If I do estimate $b_1$, $b_2$, then where does the $b_3x_3$ term go? Is it accounted for by error term $u$? And we would like to assume that $x_3$ is not correlated with $x_1$ and $x_2$.
The issue you need to worry about is called endogeneity . More specifically, it depends on whether $x_3$ is correlated in the population with $x_1$ or $x_2$. If it is, then the associated $b_j$s will be biased. That is because OLS regression methods force the residuals, $u_i$, to be uncorrelated with your covariates, $x_j$s. However, your residuals are composed of some irreducible randomness, $\varepsilon_i$, and the unobserved (but relevant) variable, $x_3$, which by stipulation is correlated with $x_1$ and / or $x_2$. On the other hand, if both $x_1$ and $x_2$ are uncorrelated with $x_3$ in the population, then their $b$s won't be biased by this (they may well be biased by something else, of course). One way econometricians try to deal with this issue is by using instrumental variables . For the sake of greater clarity, I've written a quick simulation in R that demonstrates the sampling distribution of $b_2$ is unbiased / centered on the true value of $\beta_2$, when it is uncorrelated with $x_3$. In the second run, however, note that $x_3$ is uncorrelated with $x_1$, but not $x_2$. Not coincidentally, $b_1$ is unbiased, but $b_2$ is biased. library(MASS) # you'll need this package below N = 100 # this is how much data we'll use beta0 = -71 # these are the true values of the beta1 = .84 # parameters beta2 = .64 beta3 = .34 ############## uncorrelated version b0VectU = vector(length=10000) # these will store the parameter b1VectU = vector(length=10000) # estimates b2VectU = vector(length=10000) set.seed(7508) # this makes the simulation reproducible for(i in 1:10000){ # we'll do this 10k times x1 = rnorm(N) x2 = rnorm(N) # these variables are uncorrelated x3 = rnorm(N) y = beta0 + beta1*x1 + beta2*x2 + beta3*x3 + rnorm(100) mod = lm(y~x1+x2) # note all 3 variables are relevant # but the model omits x3 b0VectU[i] = coef(mod)[1] # here I'm storing the estimates b1VectU[i] = coef(mod)[2] b2VectU[i] = coef(mod)[3] } mean(b0VectU) # [1] -71.00005 # all 3 of these are centered on the mean(b1VectU) # [1] 0.8399306 # the true values / are unbiased mean(b2VectU) # [1] 0.6398391 # e.g., .64 = .64 ############## correlated version r23 = .7 # this will be the correlation in the b0VectC = vector(length=10000) # population between x2 & x3 b1VectC = vector(length=10000) b2VectC = vector(length=10000) set.seed(2734) for(i in 1:10000){ x1 = rnorm(N) X = mvrnorm(N, mu=c(0,0), Sigma=rbind(c( 1, r23), c(r23, 1))) x2 = X[,1] x3 = X[,2] # x3 is correated w/ x2, but not x1 y = beta0 + beta1*x1 + beta2*x2 + beta3*x3 + rnorm(100) # once again, all 3 variables are relevant mod = lm(y~x1+x2) # but the model omits x3 b0VectC[i] = coef(mod)[1] b1VectC[i] = coef(mod)[2] # we store the estimates again b2VectC[i] = coef(mod)[3] } mean(b0VectC) # [1] -70.99916 # the 1st 2 are unbiased mean(b1VectC) # [1] 0.8409656 # but the sampling dist of x2 is biased mean(b2VectC) # [1] 0.8784184 # .88 not equal to .64
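As a quick sanity check on the biased estimate above, the classic omitted-variable-bias formula says the expected coefficient on $x_2$ is $\beta_2 + \beta_3\,\mathrm{Cov}(x_2,x_3)/\mathrm{Var}(x_2)$. The one-line check below reuses the objects defined in the simulation code above (so it assumes you have just run that code); because the simulation sets $\mathrm{Var}(x_2)=1$ and $\mathrm{Cov}(x_2,x_3)=$ r23, the ratio is simply r23.
beta2 + beta3 * r23   # 0.64 + 0.34 * 0.7 = 0.878, matching mean(b2VectC) above
# b1 is untouched because x1 was generated independently of x3 in that run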
{ "source": [ "https://stats.stackexchange.com/questions/58709", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24973/" ] }
58,739
I am trying to use scikit-learn for polynomial regression. From what I read, polynomial regression is a special case of linear regression. I was hoping that maybe one of scikit's generalized linear models could be parameterised to fit higher-order polynomials, but I see no option for doing that. I did manage to use a Support Vector Regressor with a poly kernel. That worked well with a subset of my data, but it takes much too long to fit larger data sets, so I still need to find something faster (even if it trades away some precision). Am I missing something obvious here?
Given data $\mathbf{x}$, a column vector, and $\mathbf{y}$, the target vector, you can perform polynomial regression by appending polynomials of $\mathbf{x}$. For example, consider if $$ \mathbf{x} = \begin{bmatrix} 2 \\[0.3em] -1 \\[0.3em] \frac{1}{3} \end{bmatrix}$$ Using just this vector in linear regression implies the model: $$ y = \alpha_1 x $$ We can add columns that are powers of the vector above, which represent adding polynomials to the regression. Below we show this for polynomials up to power 3: $$ X = \begin{bmatrix} 2 & 4 & 8 \\[0.3em] -1 & 1 & -1 \\[0.3em] \frac{1}{3} & \frac{1}{3^2} & \frac{1}{3^3} \end{bmatrix}$$ This is our new data matrix that we use in sklearn's linear regression, and it represents the model: $$ y = \alpha_1 x + \alpha_2x^2 + \alpha_3x^3$$ Note that I did not add a constant vector of $1$'s, as sklearn will automatically include this.
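For what it's worth, here is the same construction sketched in R (the language used in most of the other answers in this document) rather than in scikit-learn. It is only meant to illustrate the point of the answer: appending powers of $x$ to the design matrix and running ordinary linear regression is all that "polynomial regression" involves. The data below are made up for the illustration.
set.seed(1)
x <- runif(50, -2, 2)
y <- 1 + 2 * x - 0.5 * x^2 + 0.1 * x^3 + rnorm(50, sd = 0.2)  # synthetic example data
X <- cbind(x, x^2, x^3)   # append the polynomial columns, as in the matrix above
fit <- lm(y ~ X)          # lm() adds the intercept automatically, like sklearn
coef(fit)                 # estimates roughly 1, 2, -0.5, 0.1
# equivalently: lm(y ~ poly(x, 3, raw = TRUE))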
{ "source": [ "https://stats.stackexchange.com/questions/58739", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25522/" ] }
58,741
This should be fairly straightforward, but I can't seem to figure out the answer. I'm doing some A/B(C/D/E...) testing on a website and measuring impressions and clicks. What method should I be using to determine the statistically significant winner? This is sample data of the kind I would have. There could be any number of tests, all being displayed in a roughly equal distribution, with clicks measured as a successful result. For example: Test # | Impressions | Clicks 1 | 50 | 12 2 | 55 | 15 3 | 53 | 30 4 | 49 | 22 What algorithm should I be using to determine the winner in a statistically significant way?
{ "source": [ "https://stats.stackexchange.com/questions/58741", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25523/" ] }
58,745
EDIT 2: I originally thought I needed to run a two-factor ANOVA with repeated measures on one factor, but I now think a linear mixed-effect model will work better for my data. I think I nearly know what needs to happen, but am still confused by few points. The experiments I need to analyze look like this: Subjects were assigned to one of several treatment groups Measurements of each subject were taken on multiple days So: Subject is nested within treatment Treatment is crossed with day (each subject is assigned to only one treatment, and measurements are taken on each subject on each day) My dataset contains the following information: Subject = blocking factor (random factor) Day = within subject or repeated measures factor (fixed factor) Treatment = between subject factor (fixed factor) Obs = measured (dependent) variable UPDATE OK, so I went and talked to a statistician, but he's an SAS user. He thinks that the model should be: Treatment + Day + Subject(Treatment) + Day*Subject(Treatment) Obviously his notation is different from the R syntax, but this model is supposed to account for: Treatment (fixed) Day (fixed) the Treatment*Day interaction Subject nested within Treatment (random) Day crossed with "Subject within Treatment" (random) So, is this the correct syntax to use? m4 <- lmer(Obs~Treatment*Day + (1+Treatment/Subject) + (1+Day*Treatment/Subject), mydata) I'm particularly concerned about whether the Day crossed with "Subject within Treatment" part is right. Is anyone familiar with SAS, or confident that they understand what's going on in his model, able to comment on whether my sad attempt at R syntax matches? Here are my previous attempts at building a model and writing syntax (discussed in answers & comments): m1 <- lmer(Obs ~ Treatment * Day + (1 | Subject), mydata) How do I deal with the fact that subject is nested within treatment? How does m1 differ from: m2 <- lmer(Obs ~ Treatment * Day + (Treatment|Subject), mydata) m3 <- lmer(Obs ~ Treatment * Day + (Treatment:Subject), mydata) and are m2 and m3 equivalent (and if not, why)? Also, do I need to be using nlme instead of lme4 if I want to specify the correlation structure (like correlation = corAR1 )? According to Repeated Measures , for a repeated-measures analysis with repeated measures on one factor, the covariance structure (the nature of the correlations between measurements of the same subject) is important. When I was trying to do a repeated-measures ANOVA, I'd decided to use a Type II SS; is this still relevant, and if so, how do I go about specifying that? 
Here's an example of what the data look like: mydata <- data.frame( Subject = c(13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 62, 63, 64, 65, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 62, 63, 64, 65, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 62, 63, 64, 65), Day = c(rep(c("Day1", "Day3", "Day6"), each=28)), Treatment = c(rep(c("B", "A", "C", "B", "C", "A", "A", "B", "A", "C", "B", "C", "A", "A", "B", "A", "C", "B", "C", "A", "A"), each = 4)), Obs = c(6.472687, 7.017110, 6.200715, 6.613928, 6.829968, 7.387583, 7.367293, 8.018853, 7.527408, 6.746739, 7.296910, 6.983360, 6.816621, 6.571689, 5.911261, 6.954988, 7.624122, 7.669865, 7.676225, 7.263593, 7.704737, 7.328716, 7.295610, 5.964180, 6.880814, 6.926342, 6.926342, 7.562293, 6.677607, 7.023526, 6.441864, 7.020875, 7.478931, 7.495336, 7.427709, 7.633020, 7.382091, 7.359731, 7.285889, 7.496863, 6.632403, 6.171196, 6.306012, 7.253833, 7.594852, 6.915225, 7.220147, 7.298227, 7.573612, 7.366550, 7.560513, 7.289078, 7.287802, 7.155336, 7.394452, 7.465383, 6.976048, 7.222966, 6.584153, 7.013223, 7.569905, 7.459185, 7.504068, 7.801867, 7.598728, 7.475841, 7.511873, 7.518384, 6.618589, 5.854754, 6.125749, 6.962720, 7.540600, 7.379861, 7.344189, 7.362815, 7.805802, 7.764172, 7.789844, 7.616437, NA, NA, NA, NA))
I think that your approach is correct. Model m1 specifies a separate intercept for each subject. Model m2 adds a separate slope for each subject. Your slope is across days as subjects only participate in one treatment group. If you write model m2 as follows it's more obvious that you model a separate intercept and slope for each subject m2 <- lmer(Obs ~ Treatment * Day + (1+Day|Subject), mydata) This is equivalent to: m2 <- lmer(Obs ~ Treatment + Day + Treatment:Day + (1+Day|Subject), mydata) I.e. the main effects of treatment, day and the interaction between the two. I think that you don't need to worry about nesting as long as you don't repeat subject ID's within treatment groups. Which model is correct, really depends on your research question. Is there reason to believe that subjects' slopes vary in addition to the treatment effect? You could run both models and compare them with anova(m1,m2) to see if the data supports either one. I'm not sure what you want to express with model m3 ? The nesting syntax uses a / , e.g. (1|group/subgroup) . I don't think that you need to worry about autocorrelation with such a small number of time points.
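A minimal sketch of the comparison suggested above, using the mydata example from the question. This assumes the lme4 package is available; note that with only one observation per subject per day, the random-slope model is nearly saturated, so convergence or singular-fit warnings would not be surprising.
library(lme4)
mydata$Subject <- factor(mydata$Subject)   # treat the subject IDs as a grouping factor
m1 <- lmer(Obs ~ Treatment * Day + (1 | Subject), data = mydata)        # random intercepts
m2 <- lmer(Obs ~ Treatment * Day + (1 + Day | Subject), data = mydata)  # add random Day slopes
anova(m1, m2)   # likelihood-ratio comparison; anova() refits both models with ML
summary(m2)     # fixed effects for Treatment, Day, and their interaction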
{ "source": [ "https://stats.stackexchange.com/questions/58745", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25525/" ] }
59,047
How are the t-test, ANOVA, and linear regression all versions of the same basic statistical method?
Consider that they can all be written as a regression equation (perhaps with slightly differing interpretations than their traditional forms). Regression: $$ Y=\beta_0 + \beta_1X_{\text{(continuous)}} + \varepsilon \\ \text{where }\varepsilon\sim\mathcal N(0, \sigma^2) $$ t-test: $$ Y=\beta_0 + \beta_1X_{\text{(dummy code)}} + \varepsilon \\ \text{where }\varepsilon\sim\mathcal N(0, \sigma^2) $$ ANOVA: $$ Y=\beta_0 + \beta_1X_{\text{(dummy code)}} + \varepsilon \\ \text{where }\varepsilon\sim\mathcal N(0, \sigma^2) $$ The prototypical regression is conceptualized with $X$ as a continuous variable. However, the only assumption that is actually made about $X$ is that it is a vector of known constants. It could be a continuous variable, but it could also be a dummy code (i.e., a vector of $0$'s & $1$'s that indicates whether an observation is a member of an indicated group--e.g., a treatment group). Thus, in the second equation, $X$ could be such a dummy code, and the p-value would be the same as that from a t-test in its more traditional form. The meaning of the betas would differ here, though. In this case, $\beta_0$ would be the mean of the control group (for which the entries in the dummy variable would be $0$'s), and $\beta_1$ would be the difference between the mean of the treatment group and the mean of the control group. Now, remember that it is perfectly reasonable to have / run an ANOVA with only two groups (although a t-test would be more common), and you have all three connected. If you prefer seeing how it would work if you had an ANOVA with 3 groups; it would be: $$ Y=\beta_0 + \beta_1X_{\text{(dummy code 1)}} + \beta_2X_{\text{(dummy code 2)}} + \varepsilon \\ \text{where }\varepsilon\sim\mathcal N(0, \sigma^2) $$ Note that when you have $g$ groups, you have $g-1$ dummy codes to represent them. The reference group (typically the control group) is indicated by having $0$'s for all dummy codes (in this case, both dummy code 1 & dummy code 2). In this case, you would not want to interpret the p-values of the t-tests for these betas that come with standard statistical output--they only indicate whether the indicated group differs from the control group when assessed in isolation . That is, these tests are not independent. Instead, you would want to assess whether the group means vary by constructing an ANOVA table and conducting an F-test. For what it's worth, the betas are interpreted just as with the t-test version described above: $\beta_0$ is the mean of the control / reference group, $\beta_1$ indicates the difference between the means of group 1 and the reference group, and $\beta_2$ indicates the difference between group 2 and the reference group. In light of @whuber's comments below, these can also be represented via matrix equations: $$ \bf Y=\bf X\boldsymbol\beta + \boldsymbol\varepsilon $$ Represented this way, $\bf Y$ & $\boldsymbol\varepsilon$ are vectors of length $N$, and $\boldsymbol\beta$ is a vector of length $p+1$. $\bf X$ is now a matrix with $N$ rows and $(p+1)$ columns. In a prototypical regression you have $p$ continuous $X$ variables and the intercept. Thus, your $\bf X$ matrix is composed of a series of column vectors side by side, one for each $X$ variable, with a column of $1$'s on the far left for the intercept. If you are representing an ANOVA with $g$ groups in this way, remember that you would have $g-1$ dummy variables indicating the groups, with the reference group indicated by an observation having $0$'s in each dummy variable. 
As above, you would still have an intercept. Thus, $p=g-1$.
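A small R illustration of the equivalence described above, using made-up data for two groups: the p-value for the dummy-coded slope in lm() matches the equal-variance t-test, and the one-way ANOVA F statistic is the square of that t.
set.seed(42)
group <- rep(c("control", "treatment"), each = 20)   # lm()/aov() dummy-code this factor
y <- rnorm(40, mean = ifelse(group == "treatment", 10.5, 10))
summary(lm(y ~ group))                 # beta0 = control mean, beta1 = difference in means
t.test(y ~ group, var.equal = TRUE)    # same t statistic and p-value (up to sign)
anova(aov(y ~ group))                  # F = t^2, same p-value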
{ "source": [ "https://stats.stackexchange.com/questions/59047", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25659/" ] }
59,124
I am kind of new to random forests, so I am still struggling with some basic concepts. In linear regression, we assume independent observations, constant variance, and so on. What are the basic assumptions/hypotheses we make when we use a random forest? What are the key differences between a random forest and naive Bayes in terms of model assumptions?
Thanks for a very good question! I will try to give my intuition behind it. In order to understand this, remember the "ingredients" of a random forest classifier (there are some modifications, but this is the general pipeline): At each step of building an individual tree we find the best split of the data. While building a tree we do not use the whole dataset, but a bootstrap sample. We aggregate the individual tree outputs by averaging (steps 2 and 3 together constitute the more general bagging procedure). Consider the first point. It is not always possible to find a good split: for example, in the following dataset each split gives exactly one misclassified object. I think exactly this point can be confusing: the behaviour of an individual split is indeed somewhat similar to the behaviour of a naive Bayes classifier. If the variables are dependent, there is no single good split for a decision tree, and the naive Bayes classifier also fails (as a reminder: independence of the variables is the main assumption we make in the naive Bayes classifier; all other assumptions come from the probabilistic model we choose). But here comes the great advantage of decision trees: we take any split and continue splitting further, and with the following splits we find a perfect separation (in red). And as we have no probabilistic model, just binary splits, we do not need to make any distributional assumption at all. That was about a single decision tree, but it also applies to the random forest. The difference is that for the random forest we use bootstrap aggregation. It has no model underneath, and the only assumption it relies on is that the sample is representative. But this is usually a common assumption. For example, if one class consists of two components, and in our dataset one component is represented by 100 samples and the other component by 1 sample, probably most individual decision trees will see only the first component and the random forest will misclassify the second one. Hope this gives some further understanding.
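A small sketch of the point about dependent variables, using a continuous XOR-style problem where the class is determined only by the interaction of the two features. This assumes the randomForest and e1071 packages are available, and the exact accuracies will of course vary from run to run.
library(randomForest)
library(e1071)
set.seed(1)
n  <- 600
x1 <- runif(n, -1, 1); x2 <- runif(n, -1, 1)
y  <- factor(x1 * x2 > 0)                  # class depends on the two features jointly
dat   <- data.frame(x1, x2, y)
train <- dat[1:400, ]; test <- dat[401:n, ]
rf <- randomForest(y ~ x1 + x2, data = train)   # successive splits can carve out the pattern
nb <- naiveBayes(y ~ x1 + x2, data = train)     # assumes x1, x2 independent given the class
mean(predict(rf, test) == test$y)   # well above chance
mean(predict(nb, test) == test$y)   # near 0.5: the independence assumption fails here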
{ "source": [ "https://stats.stackexchange.com/questions/59124", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17296/" ] }
59,175
I'm currently going through "Bayesian Reasoning and Machine Learning" by David Barber and it is an extremely well written and engaging book for learning the fundamentals. So a question to someone who has already done this. What are the next set of books I should go through after I have reasonable proficiency with most of the concepts in Barber?
I'd not heard of the Barber book before, but having had a quick look through it, it does look very, very good. Unless you've got a particular field you want to look into, I'd suggest the following (some/many of which you've probably already heard of): Information Theory, Inference and Learning Algorithms, by D.J.C. MacKay. A classic, and the author makes a .pdf of it available for free online, so you've no excuse. Pattern Recognition and Machine Learning, by C.M. Bishop. Frequently cited, though there looks to be a lot of crossover between this and the Barber book. Probability Theory: The Logic of Science, by E.T. Jaynes. In some areas perhaps a bit more basic. However, the explanations are excellent. I found it cleared up a couple of misunderstandings I didn't even know I had. Elements of Information Theory, by T.M. Cover and J.A. Thomas. Attacks probability from the perspective of, yes, you guessed it, information theory. Some very neat stuff on channel capacity and max ent. A bit different from the more Bayesian stuff (I can only remember seeing one prior in the whole book). Statistical Learning Theory, by V. Vapnik. Thoroughly un-Bayesian, which may not appeal to you. Focuses on probabilistic upper bounds on structural risk. Explains where support vector machines come from. Sir Karl Popper produced a series of works on the philosophy of scientific discovery, which feature quite a lot of stats (collections of them can be bought, but I don't have any titles to hand - apologies). Again, not Bayesian in the slightest, but his discussion of falsifiability and its relationship to Occam's razor is (in my opinion) fascinating, and should be read by anyone involved in doing science.
{ "source": [ "https://stats.stackexchange.com/questions/59175", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
59,213
I ran PCA on 25 variables and selected the top 7 PCs using prcomp . prc <- prcomp(pollutions, center=T, scale=T, retx=T) I have then done varimax rotation on those components. varimax7 <- varimax(prc$rotation[,1:7]) And now I wish to varimax rotate the PCA-rotated data (as it is not part of the varimax object - only the loadings matrix and the rotation matrix). I read that to do this you multiply the transpose of the rotation matrix by the transpose of the data so I would have done this: newData <- t(varimax7$rotmat) %*% t(prc$x[,1:7]) But that doesn't make sense as the dimensions of the matrix transposes above are $7\times 7$ and $7 \times 16933$ respectively and so I will be left with a matrix of only $7$ rows, rather than $16933$ rows... does anyone know what I am doing wrong here or what my final line should be? Do I just need to transpose back afterwards?
"Rotations" is an approach developed in factor analysis; there rotations (such as e.g. varimax) are applied to loadings , not to eigenvectors of the covariance matrix. Loadings are eigenvectors scaled by the square roots of the respective eigenvalues. After the varimax rotation, the loading vectors are not orthogonal anymore (even though the rotation is called "orthogonal"), so one cannot simply compute orthogonal projections of the data onto the rotated loading directions. @FTusell's answer assumes that varimax rotation is applied to the eigenvectors (not to loadings). This would be pretty unconventional. Please see my detailed account of PCA+varimax for details: Is PCA followed by a rotation (such as varimax) still PCA? Briefly, if we look at the SVD of the data matrix $X=USV^\top$, then to rotate the loadings means inserting $RR^\top$ for some rotation matrix $R$ as follows: $X=(UR)(R^\top SV^\top).$ If rotation is applied to loadings (as it usually is), then there are at least three easy ways to compute varimax-rotated PCs in R : They are readily available via function psych::principal (demonstrating that this is indeed the standard approach). Note that it returns standardized scores , i.e. all PCs have unit variance. One can manually use varimax function to rotate the loadings, and then use the new rotated loadings to obtain the scores; one needs to multiple the data with the transposed pseudo-inverse of the rotated loadings (see formulas in this answer by @ttnphns ). This will also yield standardized scores. One can use varimax function to rotate the loadings, and then use the $rotmat rotation matrix to rotate the standardized scores obtained with prcomp . All three methods yield the same result: irisX <- iris[,1:4] # Iris data ncomp <- 2 pca_iris_rotated <- psych::principal(irisX, rotate="varimax", nfactors=ncomp, scores=TRUE) print(pca_iris_rotated$scores[1:5,]) # Scores returned by principal() pca_iris <- prcomp(irisX, center=T, scale=T) rawLoadings <- pca_iris$rotation[,1:ncomp] %*% diag(pca_iris$sdev, ncomp, ncomp) rotatedLoadings <- varimax(rawLoadings)$loadings invLoadings <- t(pracma::pinv(rotatedLoadings)) scores <- scale(irisX) %*% invLoadings print(scores[1:5,]) # Scores computed via rotated loadings scores <- scale(pca_iris$x[,1:2]) %*% varimax(rawLoadings)$rotmat print(scores[1:5,]) # Scores computed via rotating the scores This yields three identical outputs: 1 -1.083475 0.9067262 2 -1.377536 -0.2648876 3 -1.419832 0.1165198 4 -1.471607 -0.1474634 5 -1.095296 1.0949536 Note: The varimax function in R uses normalize = TRUE, eps = 1e-5 parameters by default ( see documentation ). One might want to change these parameters (decrease the eps tolerance and take care of Kaiser normalization) when comparing the results to other software such as SPSS. I thank @GottfriedHelms for bringing this to my attention. [Note: these parameters work when passed to the varimax function, but do not work when passed to the psych::principal function. This appears to be a bug that will be fixed.]
{ "source": [ "https://stats.stackexchange.com/questions/59213", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/23681/" ] }
59,250
I am using sample algae data to understand data mining a bit more. I have used the following commands: data(algae) algae <- algae[-manyNAs(algae),] clean.algae <-knnImputation(algae, k = 10) lm.a1 <- lm(a1 ~ ., data = clean.algae[, 1:12]) summary(lm.a1) Subsequently I received the results below. However I can not find any good documentation which explains what most of this means, especially Std. Error,t value and Pr. Can someone please be kind enough to shed some light please? Most importantly, which variables should I look at to ascertain on whether a model is giving me good prediction data? Call: lm(formula = a1 ~ ., data = clean.algae[, 1:12]) Residuals: Min 1Q Median 3Q Max -37.679 -11.893 -2.567 7.410 62.190 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 42.942055 24.010879 1.788 0.07537 . seasonspring 3.726978 4.137741 0.901 0.36892 seasonsummer 0.747597 4.020711 0.186 0.85270 seasonwinter 3.692955 3.865391 0.955 0.34065 sizemedium 3.263728 3.802051 0.858 0.39179 sizesmall 9.682140 4.179971 2.316 0.02166 * speedlow 3.922084 4.706315 0.833 0.40573 speedmedium 0.246764 3.241874 0.076 0.93941 mxPH -3.589118 2.703528 -1.328 0.18598 mnO2 1.052636 0.705018 1.493 0.13715 Cl -0.040172 0.033661 -1.193 0.23426 NO3 -1.511235 0.551339 -2.741 0.00674 ** NH4 0.001634 0.001003 1.628 0.10516 oPO4 -0.005435 0.039884 -0.136 0.89177 PO4 -0.052241 0.030755 -1.699 0.09109 . Chla -0.088022 0.079998 -1.100 0.27265 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 17.65 on 182 degrees of freedom Multiple R-squared: 0.3731, Adjusted R-squared: 0.3215 F-statistic: 7.223 on 15 and 182 DF, p-value: 2.444e-12
It sounds like you need a decent basic statistics text that covers at least basic location tests, simple regression and multiple regression. Std. Error,t value and Pr. Std. Error is the standard deviation of the sampling distribution of the estimate of the coefficient under the standard regression assumptions. Such standard deviations are called standard errors of the corresponding quantity (the coefficient estimate in this case). In the case of simple regression, it's usually denoted $s_{\hat \beta}$, as here . Also see this For multiple regression, it's a little more complicated, but if you don't know what these things are it's probably best to understand them in the context of simple regression first. t value is the value of the t-statistic for testing whether the corresponding regression coefficient is different from 0. The formula for computing it is given at the first link above. Pr. is the p-value for the hypothesis test for which the t value is the test statistic. It tells you the probability of a test statistic at least as unusual as the one you obtained, if the null hypothesis were true . In this case, the null hypothesis is that the true coefficient is zero; if that probability is low, it's suggesting that it would be rare to get a result as unusual as this if the coefficient were really zero. Most importantly, which variables should I look at to ascertain on whether a model is giving me good prediction data? What do you mean by 'good prediction data'? Can you make it clearer what you're asking? The Residual standard error , which is usually called $s$, represents the standard deviation of the residuals. It's a measure of how close the fit is to the points. The Multiple R-squared , also called the coefficient of determination is the proportion of the variance in the data that's explained by the model. The more variables you add - even if they don't help - the larger this will be. The Adjusted one reduces that to account for the number of variables in the model. The $F$ statistic on the last line is telling you whether the regression as a whole is performing 'better than random' - any set of random predictors will have some relationship with the response, so it's seeing whether your model fits better than you'd expect if all your predictors had no relationship with the response (beyond what would be explained by that randomness). This is used for a test of whether the model outperforms 'noise' as a predictor. The p-value in the last row is the p-value for that test, essentially comparing the full model you fitted with an intercept-only model. Where do the data come from? Is this in some package?
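To connect those definitions to the output shown in the question: each t value is just the Estimate divided by its Std. Error, and the Pr(>|t|) column is the two-sided tail probability of a t distribution with the residual degrees of freedom (182 here). A quick check in R for the NO3 row, using the numbers printed above:
est <- -1.511235; se <- 0.551339                   # Estimate and Std. Error for NO3
tval <- est / se                                   # -2.741, the "t value" column
2 * pt(abs(tval), df = 182, lower.tail = FALSE)    # about 0.00674, the "Pr(>|t|)" column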
{ "source": [ "https://stats.stackexchange.com/questions/59250", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25761/" ] }
59,588
I understand that the basic definition of endogeneity is that $$ X'\epsilon=0 $$ is not satisfied, but what does this mean in a real world sense? I read the Wikipedia article, with the supply and demand example, trying to make sense of it, but it didn't really help. I've heard the other description of endogenous and exogenous as being within the system and being outside the system and that still doesn't make sense to me.
JohnRos's answer is very good. In plain English, endogeneity means you got the causation wrong: the model you wrote down and estimated does not properly capture the way causation works in the real world. When you write: \begin{equation} Y_i=\beta_0+\beta_1X_i+\epsilon_i \end{equation} you can think of this equation in a number of ways. You could think of it as a convenient way of predicting $Y$ based on $X$'s values. You could think of it as a convenient way of modeling $E\{Y|X\}$. In either of these cases, there is no such thing as endogeneity, and you don't need to worry about it. However, you can also think of the equation as embodying causation. You can think of $\beta_1$ as the answer to the question: "What would happen to $Y$ if I reached into this system and experimentally increased $X$ by 1?" If you want to think about it that way, using OLS to estimate it amounts to assuming that: (1) $X$ causes $Y$; (2) $\epsilon$ causes $Y$; (3) $\epsilon$ does not cause $X$; (4) $Y$ does not cause $X$; (5) nothing which causes $\epsilon$ also causes $X$. Failure of any one of 3-5 will generally result in $E\{\epsilon|X\}\ne0$, or, not quite equivalently, ${\rm Cov}(X,\epsilon)\ne0$. Instrumental variables is a way of correcting for the fact that you got the causation wrong (by making another, different, causal assumption). A perfectly conducted randomized controlled trial is a way of forcing 3-5 to be true. If you pick $X$ randomly, then it sure ain't caused by $Y$, $\epsilon$, or anything else. So-called "natural experiment" methods are attempts to find special circumstances out in the world where 3-5 are true even when we don't think 3-5 are usually true. In JohnRos's example, to calculate the wage value of education, you need a causal interpretation of $\beta_1$, but there are good reasons to believe that 3 or 5 is false. Your confusion is understandable, though. It is very typical in courses on the linear model for the instructor to use the causal interpretation of $\beta_1$ I gave above while pretending not to be introducing causation, pretending that "it's all just statistics." It's a cowardly lie, but it's also very common. In fact, it is part of a larger phenomenon in biomedicine and the social sciences. It is almost always the case that we are trying to determine the causal effect of $X$ on $Y$---that's what science is about, after all. On the other hand, it is also almost always the case that there is some story you can tell leading to the conclusion that one of 3-5 is false. So, there is a kind of practiced, fluid, equivocating dishonesty in which we swat away objections by saying that we're just doing associational work and then sneak the causal interpretation back in elsewhere (normally in the introduction and conclusion sections of the paper). If you are really interested, the guy to read is Judea Pearl. James Heckman is also good.
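A tiny simulated illustration of assumption 5 failing, with made-up numbers: a hidden variable causes both $X$ and $Y$, so it ends up in the error term and is correlated with $X$. The true causal effect of $X$ on $Y$ is 2, but OLS returns something much larger.
set.seed(7)
n <- 1e5
h <- rnorm(n)                  # hidden common cause of both x and y
x <- h + rnorm(n)
y <- 2 * x + 3 * h + rnorm(n)  # true causal effect of x on y is 2
coef(lm(y ~ x))                # slope near 3.5, not 2: 2 + 3 * Cov(x, h) / Var(x) = 2 + 3 * 0.5
Randomizing $X$ (as in the randomized controlled trial mentioned above) would break the link between h and x, and the OLS slope would then recover 2.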
{ "source": [ "https://stats.stackexchange.com/questions/59588", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25901/" ] }
59,590
As far as I understand, the training phase usually uses the dual optimization formulation where we can implicitly calculate the weight vector which defines the discriminant function. How about the prediction phase, how do we use these weights and the kernel function when a new test sample arrives? edit: I should clarify, I am interested in the nonlinear SVM.
{ "source": [ "https://stats.stackexchange.com/questions/59590", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17383/" ] }
59,630
I have a dataset containing at most 150 examples (split into training & test), with many features (more than 1000). I need to compare classifiers and feature selection methods that perform well on these data. So, I'm using three classification methods (J48, NB, SVM) and 2 feature selection methods (CFS, WrapperSubset) with different search methods (Greedy, BestFirst). While comparing, I'm looking at training accuracy (5-fold cross-validation) and test accuracy. Here is one of the results of J48 and CFS-BestFirst: { "accuracyTraining" : 95.83, "accuracyTest" : 98.21 } Many results are like this, and for the SVM there are many results that indicate that test accuracy is much higher than training accuracy (training: 60%, test: 98%). How can I meaningfully interpret this kind of result? If the test accuracy were lower than the training accuracy, I would say it's overfitting. Is there something to be said about bias and variance in this case by looking at all the results? What can I do to make this classification meaningful, such as re-selecting training and test sets or just using cross-validation on all data? I have 73 training & 58 test instances. Some answers didn't have this info when they were posted.
I think a first step is to check whether the reported training and test performance are in fact correct. Is the splitting during the 5-fold cross validation done in a way that yields statistically independent cv train/test sets? E.g. if there are repeated measurements in the data, do they always end up in the same set? 95.83% accuracy in a 5-fold cv of 150 samples is in line with 5 wrong out of 130 training samples for the 5 surrogate models, or 25 wrong cases for 5 * 130 training samples. 98.21% test accuracy is more difficult to explain: during one run of the cv, each case should be tested once. So the possibly reported numbers should be in steps of 100%/150. 98.21% corresponds to 2.68 wrong cases (2 and 3 wrong out of 150 test cases gives 98.67 and 98.00% accuracy, respectively). If you can extract your model, calculate the reported accuracies externally. What are the reported accuracies for random input? Do an external cross validation: split your data, and hand over only the training part to the program. Predict the "external" test data and calculate accuracy. Is this in line with the program's output? Make sure the reported "test accuracy" comes from independent data (double/nested cross validation): if your program does data driven optimization (e.g. choosing the "best" features by comparing many models), this is more like at training error (goodness of fit) than like a generalization error. I agree with @mbq that training error is hardly ever useful in machine learning. But you may be in one of the few situations where it actually is useful: If the program selects a "best" model by comparing accuracies, but has only training errors to choose from, you need to check whether the training error actually allows a sensible choice. @mbq outlined the best-case scenario for indistinguishable models. However, worse scenarios happen as well: just like test accuracy, training accuracy is also subject to variance but has an optimistic bias compared to the generalization accuracy that is usually of interest. This can lead to a situation where models cannot be distinguished although they really have different performance. But their training (or internal cv) accuracies are too close to distinguish them because of their optimistic bias. E.g. iterative feature selection methods can be subject to such problems that may even persist for the internal cross validation accuracies (depending on how that cross validation is implemented). So if such an issue could arise, I think it is a good idea to check whether a sensible choice can possibly result from the accuracies the program uses for the decision. This would mean checking that the internal cv accuracy (which is supposedly used for selection of the best model) is not or not too much optimistically biased with respect to an externally done cv with statistically independent splitting. Again, synthetic and/or random data can help finding out what the program actually does. A second step is to have a look whether the observed differences for statistically independent splits are meaningful, as @mbq pointed out already. I suggest you calculate what difference in accuracy you need to observe with your given sample size in order to have a statistically meaningful difference. If your observed variation is less, you cannot decide which algorithm is better with your given data set: further optimization does not make sense.
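For the second step, a rough way to see which accuracy differences are even resolvable with only 58 test cases is to look at the confidence interval around a single observed accuracy. The counts below are hypothetical (53 correct out of 58, i.e. about 91%), just to show the order of magnitude.
binom.test(x = 53, n = 58)$conf.int   # roughly 0.81 to 0.97: a very wide interval
# So two classifiers whose test accuracies differ by only a few percentage points
# cannot be distinguished on 58 test cases. For a formal comparison of two classifiers
# evaluated on the same test set, a paired test on the per-case correctness
# (e.g. mcnemar.test) is more appropriate than comparing the two accuracies directly.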
{ "source": [ "https://stats.stackexchange.com/questions/59630", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24357/" ] }
59,774
If you want to test whether two variables follow the same distribution, would it be a good test to simply sort both variables, and then check their correlation? If it is high (at least 0.9?), then the variables most likely come from the same distribution. With distribution here I mean "normal", "chi-square", "gamma" etc.
Let's find out whether this is a good test or not. There's a lot more to it than just claiming it's bad or showing in one instance that it doesn't work well. Most tests work poorly in some circumstances, so often we are faced with identifying the circumstances in which any proposed test might possibly be a good choice. Description of the test Like any hypothesis test, this one consists of (a) a null and alternate hypothesis and (b) a test statistic (the correlation coefficient) intended to discriminate between the hypotheses. The null hypothesis is that the two variables come from the same distribution. To be precise, let us name the variables $X$ and $Y$ and assume we have observed $n_x$ instances of $X$, called $x_i = (x_1, x_2, \ldots, x_{n_x})$, and $n_y$ instances of $Y$, called $y_i$. The null hypothesis is that all instances of $X$ and $Y$ are independent and identically distributed (iid). Let us take as the alternate hypothesis that (a) all instances of $X$ are iid according to some underlying distribution $F_X$ and (b) all instances of $Y$ are iid according to some underlying distribution $F_Y$ but (c) $F_X$ differs from $F_Y$. (Thus, we will not be looking for correlations among the $x_i$, correlations among the $y_i$, correlations between the $x_i$ and $y_j$, or differences of distribution among the $x$'s or $y$'s separately: that's assumed not to be plausible.) The proposed test statistic assumes that $n_x = n_y$ (call this common value $n$) and computes the correlation coefficient of the $(x_{[i]}, y_{[i]})$ (where, as usual, $[i]$ designates the $i^\text{th}$ smallest of the data). Call this $t(x,y)$. Permutation tests In this situation--no matter what statistic $t$ is proposed--we can always conduct a permutation test. Under the null hypothesis, the likelihood of the data $\left((x_1, x_2, \ldots, x_n), (y_1, y_2, \ldots, y_n)\right)$ is the same as the likelihood of any permutation of the $2n$ data values. In other words, the assignment of half the data to $X$ and the other half to $Y$ is a pure random coincidence. This is a simple, direct consequence of the iid assumptions and the null hypothesis that $F_X=F_Y$. Therefore, the sampling distribution of $t(x,y)$, conditional on the observations $x_i$ and $y_i$, is the distribution of all the values of $t$ attained for all $(2n)!$ permutations of the data. We are interested in this because for any given intended test size $\alpha$, such as $\alpha = .05$ (corresponding to $95$% confidence), we will construct a two-sided critical region from the sampling distribution of $t$: it consists of the most extreme $100\alpha$% of the possible values of $t$ (on the high side, because high correlation is consistent with similar distributions and low correlation is not). This is how we go about determining how large the correlation coefficient must be in order to decide the data come from different distributions. Simulating the null sampling distribution Because $(2n)!$ (or, if you like, $\binom{2n}{n}/2$, which counts the number of ways of splitting the $2n$ data into two pieces of size $n$) gets big even for small $n$, it is not practicable to compute the sampling distribution exactly, so we sample it using a simulation. (For instance, when $n=16$, $\binom{2n}{n}/2 = 300\ 540\ 195$ and $(2n)! \approx 2.63\times 10^{35}$.) About a thousand samples often suffices (and certainly will for the explorations we are about to undertake). 
There are two things we need to find out: first, what does the sampling distribution look like under the null hypothesis. Second, how well does this test discriminate between different distributions? There is a complication: the sampling distribution depends on the nature of the data. All we can do is to look at realistic data, created to emulate whatever it is we are interested in studying, and hope that what we learn from the simulations will apply to our own situation. Implementation To illustrate, I have carried out this work in R . It falls naturally into three pieces. A function to compute the test statistic $t(x,y)$. Because I want to be a little more general, my version handles different size datasets ($n_x \ne n_y$) by linearly interpolating among the values in the (sorted) larger dataset to create matches with the (sorted) smaller dataset. Because this is already done by the R function qqplot , I just take its results: test.statistic <- function(x, y) { transform <- function(z) -log(1-z^2)/2 fit <- qqplot(x,y, plot.it=FALSE) transform(cor(fit$x, fit$y)) } A little twist--unnecessary but helpful for visualization--re-expresses the correlation coefficient in a way that will make the distribution of the null statistic approximately symmetric. That's what transform is doing. The simulation of the sampling distribution. For input this function accepts the number of iterations n.iter along with the two sets of data in arrays x and y . It outputs an array of n.iter values of the test statistic. Its inner workings should be transparent, even to a non R user: permutation.test <- function(n.iter, x, y) { z <- c(x,y) n.x <- length(x) n.y <- length(y) n <- length(z) k <- min(n.x, n.y) divide <- function() { i <- sample.int(n, size=k) test.statistic(z[i], z[-i]) } replicate(n.iter, divide()) } Although that's all we need to conduct the test, in order to study it we will want to repeat the test many times. So, we conduct the test once and wrap that code within a third functional layer, just generally named f here, which we can call repeatedly. To make it sufficiently general for a broad study, for input it accepts the sizes of the datasets to simulate ( n.x and n.y ), the number of iterations for each permutation test ( n.iter ), a reference to the function test to compute the test statistic (you will see momentarily why we might not want to hard-code this), and two functions to generate iid random values, one for $X$ ( dist.x ) and one for $Y$ ( dist.y ). An option plot.it is useful to help see what's going on. f <- function(n.x, n.y, n.iter, test=test.statistic, dist.x=runif, dist.y=runif, plot.it=FALSE) { x <- dist.x(n.x) y <- dist.y(n.y) if(plot.it) qqplot(x,y) t0 <- test(x,y) sim <- permutation.test(n.iter, x, y) p <- mean(sim > t0) + mean(sim==t0)/2 if(plot.it) { hist(sim, xlim=c(min(t0, min(sim)), max(t0, max(sim))), main="Permutation distribution") abline(v=t0, col="Red", lwd=2) } return(p) } The output is a simulated "p-value": the proportion of simulations yielding a statistic that looks more extreme than the one actually computed for the data. Parts (2) and (3) are extremely general: you can conduct a study like this one for a different test simply by replacing test.statistic with some other calculation. We do that below. First results By default, our code compares data drawn from two uniform distributions. 
I let it do that (for $n.x = n.y = 16$, which are fairly small datasets and therefore present a moderately difficult test case) and then repeat it for a uniform-normal comparison and a uniform-exponential comparison. (Uniform distributions are not easy to distinguish from normal distributions unless you have a bit more than $16$ values, but exponential distributions--having high skewness and a long right tail--are usually easily distinguished from uniform distributions.) set.seed(17) # Makes the results reproducible n.per.rep <- 1000 # Number of iterations to compute each p-value n.reps <- 1000 # Number of times to call `f` n.x <- 16; n.y <- 16 # Dataset sizes par(mfcol=c(2,3)) # Lay results out in three columns null <- replicate(n.reps, f(n.x, n.y, n.per.rep)) hist(null, breaks=20) plot(null) normal <- replicate(n.reps, f(n.x, n.y, n.per.rep, dist.y=rnorm)) hist(normal, breaks=20) plot(normal) exponential <- replicate(n.reps, f(n.x, n.y, n.per.rep, dist.y=function(n) rgamma(n, 1))) hist(exponential, breaks=20) plot(exponential) On the left is the null distribution of the p-values when both $X$ and $Y$ are uniform. We would hope that the histogram is close to uniform (paying especial attention to the extreme left end, which is in the range of "significant" results)--and it actually is--and that the sequence of values obtained during the simulation, shown below it, looks random--and it does. That's good. It means we can move on to the next step to study how this changes when $X$ and $Y$ come from different distributions. The middle plots test $16$ uniform variates $x_i$ against $16$ normal variates $y_i$. More often than not, the p-values were lower than expected. That indicates a tendency for this test actually to detect a difference. But it's not a large one. For instance, the leftmost bar in the histogram shows that out of the 1000 runs of f (comprising 1000 separately simulated datasets), the p-value was less than $0.05$ only about 110 times. If we consider that "significant," then this test has only about an $11$% chance of detecting the difference between a uniform and normal distribution based on $16$ independent values from each. That's pretty low power. But maybe it's unavoidable, so let's proceed. The right-hand plots similarly test a uniform distribution against an exponential one. This result is bizarre. This test tends, more often than not, to conclude that uniform data and exponential data look the same. It seems to "think" that uniform and exponential variates are more similar than two uniform variables! What's going on here? The problem is that data from an exponential distribution will tend to have a few extremely high values. When you make a scatterplot of those against uniformly-distributed values, there will then be a few points far to the upper right of all the rest. That corresponds to a very high correlation coefficient. Thus, whenever either of the distributions generates a few extreme values, the correlation coefficient is a terrible choice for measuring how different the distributions are. This leads to another even worse problem: as the dataset sizes grow, the chances of obtaining a few extreme observations increase. Thus, we can expect this test to perform worse and worse as the amount of data increase. How very awful... . A better test The original question has been answered in the negative. However, there is a well-known, powerful test for discriminating among distributions: the Kolmogorov-Smirnov test. 
Instead of the correlation coefficient, it computes the largest vertical deviation from the line $y=x$ in their QQ plot. (When data come from the same distribution, the QQ plot tends to follow this line. Otherwise, it will deviate somewhere; the K-S statistic picks up the largest such deviation.) Here is an R implementation: test.statistic <- function(x, y) { ks.test(x,y)$statistic } That's right: it's built in to the software, so we only have to call it. But wait! If you read the manual carefully, you will learn that (a) the test supplies a p-value but (b) that p-value is (grossly) incorrect when both x and y are datasets. It is intended for use when you believe you know exactly what distribution the data x came from and you want to see whether that's true. Thus the test does not properly accommodate the uncertainty about the distribution the data in y came from. No problem! The permutation test framework is still just as valid. By making the preceding change to test.statistic , all we have to do is re-run the previous study, unchanged. Here are the results. Although the null distribution is not uniform (upper left), it's pretty uniform below $p=0.20$ or so, which is where we really care about its values. A glance at the plot below it (bottom left) shows the problem: the K-S statistic tends to cluster around a few discrete values. (This problem practically goes away for larger datasets.) The middle (uniform vs normal) and right( uniform vs exponential) histograms are doing exactly the right thing: in the vast majority of cases where the two distributions differ, this test is producing small p-values. For instance, it has a $70$% chance of yielding a p-value less than $0.05$ when comparing a uniform to a normal based on 16 values from each. Compare this to the piddling $11$% achieved by the correlation coefficient test. The right histogram is not quite as good, but at least it's in the correct direction now! We estimate that it has a $30$% chance of detecting the difference between a uniform and exponential distribution at the $\alpha=5$% level and a $50$% chance of making that detection at the $\alpha=10$% level (because the two bars for the p-value less than $0.10$ total over 500 of the 1000 iterations). Conclusions Thus, the problems with the correlation test are not due to some inherent difficulty in this setting. Not only does the correlation test perform very badly, it is bad compared to a widely known and available test. (I would guess that it is inadmissible, meaning that it will always perform worse, on the average, than the permutation version of the K-S test, implying there is no reason ever to use it.)
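For completeness, here is what "re-run the previous study, unchanged" amounts to in code. This is only a recap of pieces already shown above: it redefines the global test.statistic (which both f and permutation.test look up) and repeats the earlier replicate calls; the power figures in the comments are the approximate values quoted above, not new results.

test.statistic <- function(x, y) { ks.test(x, y)$statistic }  # the K-S statistic, as above
set.seed(17)
null <- replicate(n.reps, f(n.x, n.y, n.per.rep))
normal <- replicate(n.reps, f(n.x, n.y, n.per.rep, dist.y=rnorm))
exponential <- replicate(n.reps, f(n.x, n.y, n.per.rep, dist.y=function(n) rgamma(n, 1)))
mean(normal < 0.05)       # roughly the 70% power against a normal alternative quoted above
mean(exponential < 0.05)  # roughly the 30% power against an exponential alternative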
{ "source": [ "https://stats.stackexchange.com/questions/59774", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/16175/" ] }
59,777
When performing a multiple regression with dummy variables, is it really necessary to include an intercept term in the design matrix? By dummy variables, I mean indicator variables; a one in the design matrix if some effect is present, and a zero if not. It seems to me that without the intercept it is simpler to interpret the OLS solution. Instead of $\beta_{0}$ = $\mu_{A}$ (where $\beta_{0}$ is the intercept) $\beta_{1}$ = $\mu_{B} - \mu_{A}$ $\beta_{2}$ = $\mu_{C} - \mu_{A}$ etc. We have $\beta_{1}$ = $\mu_{A}$ $\beta_{2}$ = $\mu_{B}$ $\beta_{3}$ = $\mu_{C}$ etc. Do the computations of $R^{2}$, the F-statistic and t-statistics change? What if a continuous independent variable is then included?
{ "source": [ "https://stats.stackexchange.com/questions/59777", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13902/" ] }
59,784
I have a dataset which is statistics from a web discussion forum. I'm looking at the distribution of the number of replies a topic is expected to have. In particular, I've created a dataset which has a list of topic reply counts, and then the count of topics which have that number of replies. "num_replies","count" 0,627568 1,156371 2,151670 3,79094 4,59473 5,39895 6,30947 7,23329 8,18726 If I plot the dataset on a log-log plot, I get what is basically a straight line: (This is a Zipfian distribution ). Wikipedia tells me that straight lines on log-log plots imply a function that can be modelled by a monomial of the form $y = ax^k$. And in fact I've eyeballed such a function: lines(data$num_replies, 480000 * data$num_replies ^ -1.62, col="green") My eyeballs obviously aren't as accurate as R. So how can I get R to fit the parameters of this model for me more accurately? I tried polynomial regression, but I don't think that R tries to fit the exponent as a parameter - what is the proper name for the model I want? Edit: Thanks for the answers everyone. As suggested, I've now fit a linear model against the logs of the input data, using this recipe: data <- read.csv(file="result.txt") # Avoid taking the log of zero: data$num_replies = data$num_replies + 1 plot(data$num_replies, data$count, log="xy", cex=0.8) # Fit just the first 100 points in the series: model <- lm(log(data$count[1:100]) ~ log(data$num_replies[1:100])) points(data$num_replies, round(exp(coef(model)[1] + coef(model)[2] * log(data$num_replies))), col="red") The result is this, showing the model in red: That looks like a good approximation for my purposes. If I then use this Zipfian model (alpha = 1.703164) along with a random number generator to generate the same total number of topics (1400930) as the original measured dataset contained (using this C code I found on the web ), the result looks like: Measured points are in black, randomly generated ones according to the model are in red. I think this shows that the simple variance created by randomly generating these 1400930 points is a good explanation for the shape of the original graph. If you're interested in playing with the raw data yourself, I have posted it here .
Your example is a very good one because it clearly points up recurrent issues with such data. Two common names are power function and power law. In biology, and some other fields, people often talk of allometry, especially whenever you are relating size measurements. In physics, and some other fields, people talk of scaling laws. I would not regard monomial as a good term here, as I associate that with integer powers. For the same reason this is not best regarded as a special case of a polynomial. Problems of fitting a power law to the tail of a distribution morph into problems of fitting a power law to the relationship between two different variables. The easiest way to fit a power law is take logarithms of both variables and then fit a straight line using regression. There are many objections to this whenever both variables are subject to error, as is common. The example here is a case in point as both variables (and neither) might be regarded as response (dependent variable). That argument leads to a more symmetric method of fitting. In addition, there is always the question of assumptions about error structure. Again, the example here is a case in point as errors are clearly heteroscedastic. That suggests something more like weighted least-squares. One excellent review is http://www.ncbi.nlm.nih.gov/pubmed/16573844 Yet another problem is that people often identify power laws only over some range of their data. The questions then become scientific as well as statistical, going all the way down to whether identifying power laws is just wishful thinking or a fashionable amateur pastime. Much of the discussion arises under the headings of fractal and scale-free behaviour, with associated discussion ranging from physics to metaphysics. In your specific example, a little curvature seems evident. Enthusiasts for power laws are not always matched by sceptics, because the enthusiasts publish more than the sceptics. I'd suggest that a scatter plot on logarithmic scales, although a natural and excellent plot that is essential, should be accompanied by residual plots of some kind to check for departures from power function form.
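To make that last recommendation concrete, here is a minimal sketch of the basic log-log least-squares fit together with a residual plot. It uses the counts from the question, keeps the asker's +1 offset to avoid log(0), and is only the naive approach: it ignores the errors-in-variables and weighting issues raised above.

dat <- data.frame(num_replies = 0:8,
                  count = c(627568, 156371, 151670, 79094, 59473, 39895, 30947, 23329, 18726))
fit <- lm(log(count) ~ log(num_replies + 1), data = dat)
summary(fit)                   # slope estimates the exponent k (negative here), intercept estimates log(a)
plot(fitted(fit), resid(fit))  # systematic curvature here would signal departure from a pure power law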
{ "source": [ "https://stats.stackexchange.com/questions/59784", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25993/" ] }
60,153
Is "deep learning" just another term for multilevel/hierarchical modeling? I'm much more familiar with the latter than the former, but from what I can tell, the primary difference is not in their definition, but in how they are used and evaluated within their application domain. It looks like the number of nodes in a typical "deep learning" application is larger and uses a generic hierarchical form, whereas applications of multilevel modeling typically use hierarchical relationships that mimic the generative process being modeled. Using a generic hierarchy in an applied statistics (hierarchical modeling) domain would be regarded as an "incorrect" model of the phenomena, whereas modeling a domain-specific hierarchy might be regarded as subverting the objective of making a generic deep learning machine. Are these two things really the same machinery under two different names, used in two different ways?
Similarity Fundamentally, both types of algorithms were developed to answer one general question in machine learning applications: given predictors (factors) $x_1, x_2, \ldots, x_p$, how do we incorporate the interactions between these factors in order to increase performance? One way is to simply introduce new predictors: $x_{p+1} = x_1x_2, x_{p+2} = x_1x_3, \ldots$ But this proves to be a bad idea due to the huge number of parameters and the very specific type of interactions. Both multilevel modelling and deep learning algorithms answer this question by introducing a much smarter model of interactions. And from this point of view they are very similar. Difference Now let me try to give my understanding of the great conceptual difference between them. To explain it, let's look at the assumptions that we make in each of the models: Multilevel modelling: $^1$ layers that reflect the data structure can be represented as a Bayesian Hierarchical Network. This network is fixed and usually comes from the application domain. Deep Learning: $^2$ the data were generated by the interactions of many factors. The structure of interactions is not known, but can be represented as a layered factorisation: higher-level interactions are obtained by transforming lower-level representations. The fundamental difference comes from the phrase "the structure of interactions is not known" in Deep Learning. We can assume some priors on the type of interaction, but the algorithm defines all the interactions during the learning procedure. On the other hand, we have to define the structure of interactions for Multilevel modelling (we then only vary the parameters of the model afterwards). Examples For example, let's assume we are given three factors $x_1, x_2, x_3$ and we define $\{x_1\}$ and $\{x_2, x_3\}$ as different layers. In the Multilevel modelling regression, for example, we will get the interactions $x_1 x_2$ and $x_1 x_3$, but we will never get the interaction $x_2 x_3$. Of course, the results will partly be affected by the correlation of the errors, but that is not important for the example. In Deep learning, for example in multilayered Restricted Boltzmann machines (RBM) with two hidden layers and a linear activation function, we will have all the possible polynomial interactions of degree less than or equal to three. Common advantages and disadvantages Multilevel modelling (-) need to define the structure of interactions (+) results are usually easier to interpret (+) can apply statistical methods (evaluate confidence intervals, check hypotheses) Deep learning (-) requires a huge amount of data to train (and time for training as well) (-) results are usually impossible to interpret (provided as a black box) (+) no expert knowledge required (+) once well-trained, usually outperforms most other general methods (not application specific) Hope it will help!
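To make the "fixed structure of interactions" point concrete, here is a minimal sketch using hypothetical simulated data and a plain lm() fit rather than a full Bayesian hierarchical model; the only point is that the analyst writes the allowed interactions down by hand, so $x_2 x_3$ simply never enters the model.

set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200), x3 = rnorm(200))
d$y <- 1 + d$x1 + 0.5 * d$x1 * d$x2 - 0.3 * d$x1 * d$x3 + rnorm(200)
# The interaction structure is fixed in advance: x1:x2 and x1:x3 are included, x2:x3 never is.
fit <- lm(y ~ x1 + x2 + x3 + x1:x2 + x1:x3, data = d)
coef(fit)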
{ "source": [ "https://stats.stackexchange.com/questions/60153", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4733/" ] }
60,476
I've run a regression on U.S. counties, and am checking for collinearity in my 'independent' variables. Belsley, Kuh, and Welsch's Regression Diagnostics suggests looking at the Condition Index and Variance Decomposition Proportions: library(perturb) ## colldiag(, scale=TRUE) for model with interaction Condition Index Variance Decomposition Proportions (Intercept) inc09_10k unins09 sqmi_log pop10_perSqmi_log phys_per100k nppa_per100k black10_pct hisp10_pct elderly09_pct inc09_10k:unins09 1 1.000 0.000 0.000 0.000 0.000 0.001 0.002 0.003 0.002 0.002 0.001 0.000 2 3.130 0.000 0.000 0.000 0.000 0.002 0.053 0.011 0.148 0.231 0.000 0.000 3 3.305 0.000 0.000 0.000 0.000 0.000 0.095 0.072 0.351 0.003 0.000 0.000 4 3.839 0.000 0.000 0.000 0.001 0.000 0.143 0.002 0.105 0.280 0.009 0.000 5 5.547 0.000 0.002 0.000 0.000 0.050 0.093 0.592 0.084 0.005 0.002 0.000 6 7.981 0.000 0.005 0.006 0.001 0.150 0.560 0.256 0.002 0.040 0.026 0.001 7 11.170 0.000 0.009 0.003 0.000 0.046 0.000 0.018 0.003 0.250 0.272 0.035 8 12.766 0.000 0.050 0.029 0.015 0.309 0.023 0.043 0.220 0.094 0.005 0.002 9 18.800 0.009 0.017 0.003 0.209 0.001 0.002 0.001 0.047 0.006 0.430 0.041 10 40.827 0.134 0.159 0.163 0.555 0.283 0.015 0.001 0.035 0.008 0.186 0.238 11 76.709 0.855 0.759 0.796 0.219 0.157 0.013 0.002 0.004 0.080 0.069 0.683 ## colldiag(, scale=TRUE) for model without interaction Condition Index Variance Decomposition Proportions (Intercept) inc09_10k unins09 sqmi_log pop10_perSqmi_log phys_per100k nppa_per100k black10_pct hisp10_pct elderly09_pct 1 1.000 0.000 0.001 0.001 0.000 0.001 0.003 0.004 0.003 0.003 0.001 2 2.988 0.000 0.000 0.001 0.000 0.002 0.030 0.003 0.216 0.253 0.000 3 3.128 0.000 0.000 0.002 0.000 0.000 0.112 0.076 0.294 0.027 0.000 4 3.630 0.000 0.002 0.001 0.001 0.000 0.160 0.003 0.105 0.248 0.009 5 5.234 0.000 0.008 0.002 0.000 0.053 0.087 0.594 0.086 0.004 0.001 6 7.556 0.000 0.024 0.039 0.001 0.143 0.557 0.275 0.002 0.025 0.035 7 11.898 0.000 0.278 0.080 0.017 0.371 0.026 0.023 0.147 0.005 0.038 8 13.242 0.000 0.001 0.343 0.006 0.000 0.000 0.017 0.129 0.328 0.553 9 21.558 0.010 0.540 0.332 0.355 0.037 0.000 0.003 0.003 0.020 0.083 10 50.506 0.989 0.148 0.199 0.620 0.393 0.026 0.004 0.016 0.087 0.279 ?HH::vif suggests that VIFs >5 are problematic: library(HH) ## vif() for model with interaction inc09_10k unins09 sqmi_log pop10_perSqmi_log phys_per100k nppa_per100k black10_pct hisp10_pct 8.378646 16.329881 1.653584 2.744314 1.885095 1.471123 1.436229 1.789454 elderly09_pct inc09_10k:unins09 1.547234 11.590162 ## vif() for model without interaction inc09_10k unins09 sqmi_log pop10_perSqmi_log phys_per100k nppa_per100k black10_pct hisp10_pct 1.859426 2.378138 1.628817 2.716702 1.882828 1.471102 1.404482 1.772352 elderly09_pct 1.545867 Whereas John Fox's Regression Diagnostics suggests looking at the square root of the VIF: library(car) ## sqrt(vif) for model with interaction inc09_10k unins09 sqmi_log pop10_perSqmi_log phys_per100k nppa_per100k black10_pct hisp10_pct 2.894589 4.041025 1.285917 1.656597 1.372987 1.212898 1.198428 1.337705 elderly09_pct inc09_10k:unins09 1.243879 3.404433 ## sqrt(vif) for model without interaction inc09_10k unins09 sqmi_log pop10_perSqmi_log phys_per100k nppa_per100k black10_pct hisp10_pct 1.363608 1.542121 1.276251 1.648242 1.372162 1.212890 1.185108 1.331297 elderly09_pct 1.243329 In the first two cases (where a clear cutoff is suggested), the model is problematic only when the interaction term is included. 
The model with the interaction term has until this point been my preferred specification. I have two questions given this quirk of the data: Does an interaction term always worsen the collinearity of the data? Since the two variables without the interaction term are not above the threshold, am I OK using the model with the interaction term? Specifically, the reason I think this might be OK is that I'm using the King, Tomz, and Wittenberg (2000) method to interpret the coefficients (negative binomial model), where I generally hold the other coefficients at the mean, and then interpret what happens to predictions of my dependent variable when I move inc09_10k and unins09 around independently and jointly.
Yes, this is usually the case with non-centered interactions. Here is a quick look at what happens to the correlation of two independent variables and their "interaction": set.seed(12345) a = rnorm(10000,20,2) b = rnorm(10000,10,2) cor(a,b) cor(a,a*b) > cor(a,b) [1] 0.01564907 > cor(a,a*b) [1] 0.4608877 And then when you center them: c = a - 20 d = b - 10 cor(c,d) cor(c,c*d) > cor(c,d) [1] 0.01564907 > cor(c,c*d) [1] 0.001908758 Incidentally, the same can happen with including polynomial terms (i.e., $X,~X^2,~...$) without first centering. So you can give that a shot with your pair. As to why centering helps, let's go back to the definition of covariance: \begin{align} \text{Cov}(X,XY) &= E[(X-E(X))(XY-E(XY))] \\ &= E[(X-\mu_x)(XY-\mu_{xy})] \\ &= E[X^2Y-X\mu_{xy}-XY\mu_x+\mu_x\mu_{xy}] \\ &= E[X^2Y]-E[X]\mu_{xy}-E[XY]\mu_x+\mu_x\mu_{xy} \\ \end{align} Even given independence of $X$ and $Y$, \begin{align} \qquad\qquad\qquad\, &= E[X^2]E[Y]-\mu_x\mu_x\mu_y-\mu_x\mu_y\mu_x+\mu_x\mu_x\mu_y \\ &= (\sigma_x^2+\mu_x^2)\mu_y-\mu_x^2\mu_y \\ &= \sigma_x^2\mu_y \\ \end{align} This doesn't relate directly to your regression problem, since you probably don't have completely independent $X$ and $Y$, and since correlation between two explanatory variables doesn't always result in multicollinearity issues in regression. But it does show how an interaction between two non-centered independent variables causes correlation to show up, and that correlation could cause multicollinearity issues. Intuitively to me, having non-centered variables interact simply means that when $X$ is big, then $XY$ is also going to be bigger on an absolute scale irrespective of $Y$, and so $X$ and $XY$ will end up correlated, and similarly for $Y$.
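To connect this back to the diagnostics in the question, here is a hedged sketch of how the VIFs themselves respond to centering, continuing the simulated a/b example above. The response y is purely hypothetical (VIFs depend only on the design matrix), and vif() is taken from the car package used in the question.

library(car)
set.seed(12345)
a <- rnorm(10000, 20, 2); b <- rnorm(10000, 10, 2)
y <- rnorm(10000)                      # hypothetical response; VIFs depend only on the predictors
vif(lm(y ~ a + b + I(a * b)))          # the raw interaction term shows a large VIF
ac <- a - mean(a); bc <- b - mean(b)
vif(lm(y ~ ac + bc + I(ac * bc)))      # after centering, all VIFs are close to 1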
{ "source": [ "https://stats.stackexchange.com/questions/60476", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3488/" ] }
60,484
So just "why" is $SE = \frac{s}{\sqrt n}$? How should one interpret/articulate the reason for having $\sqrt n$ in the denominator? Why do we divide by the square root of the sample size when computing the standard error of the sample mean, intuitively speaking? And how/why is it called the standard "error"? (The question applies equally when the true population standard deviation is used: $\frac{\sigma}{\sqrt n}$.) Is there an intuitive derivation of $SE$ that can make this clear? Please assume you are explaining it to a 6 year old who understands mean and sample size :)
This comes from the fact that $\newcommand{\Var}{\operatorname{Var}}\newcommand{\Cov}{\operatorname{Cov}}\Var(X+Y) = \Var(X) + \Var(Y) + 2\cdot\Cov(X,Y)$ and for a constant $a$ , $\Var( a X ) = a^2 \Var(X)$ . Since we are assuming that the individual observations are independent the $\Cov(X,Y)$ term is $0$ and since we assume that the observations are identically distributed all the variances are $\sigma^2$ . So $\Var( \frac{1}{n} \sum X_i ) = \frac{1}{n^2} \sum \Var(X_i) = \frac{1}{n^2} \times \sum_{i=1}^n \sigma^2= \frac{n}{n^2} \sigma^2 = \frac{\sigma^2}{n}$ And when we take the square root of that (because it is harder to think on the variance scale) we get $\dfrac{\sigma}{\sqrt{n}}$ . More intuitively, think of 2 statistics classes: in the first the teacher assigns each of the students to draw a sample of size 10 from a set of tiles with numbers on them (the teacher knows the true mean of this population, but the students don't) and compute the mean of their sample. The second teacher assigns each of his/her students to take samples of size 100 from the same set of tiles and compute the mean. Would you expect every sample mean to exactly match the population mean? or to vary about it? Would you expect the spread of the sample means to be the same in both classes? or would the 2nd class tend to be closer to the population? That's why it makes sense to divide by a function of the sample size. The square root means we have a law of diminishing returns, to halve the standard error you need to quadruple the sample size. As for the name, the full name is "The estimated standard deviation of the sampling distribution of x-bar"; it only takes saying that a few times before you appreciate having a shortened form. I don't know who first substituted "error" for "deviation" this way, but it stuck. The standard deviation measures variability of individual observations; the standard error measures variability in estimates of parameters (based on observations).
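A tiny simulation of the two-classrooms picture may help. It is only a sketch (it uses a normal population for convenience rather than the tiles in the story), but it shows the spread of the sample means shrinking like $\sigma/\sqrt{n}$.

set.seed(1)
mu <- 20; sigma <- 2
means10  <- replicate(10000, mean(rnorm(10,  mu, sigma)))   # class 1: samples of size 10
means100 <- replicate(10000, mean(rnorm(100, mu, sigma)))   # class 2: samples of size 100
sd(means10);  sigma / sqrt(10)    # both approximately 0.63
sd(means100); sigma / sqrt(100)   # both approximately 0.20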
{ "source": [ "https://stats.stackexchange.com/questions/60484", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4426/" ] }
60,500
I want to assume that the sea surface temperature of the Baltic Sea is the same year after year, and then describe that with a function / linear model. The idea I had was to just input year as a decimal number (or num_months/12) and get out what the temperature should be about that time. Throwing it into lm() function in R, it doesn't recognize sinusoidal data so it just produces a straight line. So I put the sin() function within a I() bracket and tried a few values to manually fit the function, and that gets close to what I want. But the sea is warming up faster in the summer and then cooling off slower in the fall... So the model is wrong the first year, then gets more correct after a couple of years, and then in the future I guess it becomes more and more wrong again. How can I get R to estimate the model for me, so I don't have to guess numbers myself? The key here is that I want it to produce the same values year after year, not just be correct for one year. If I knew more about math, maybe I could guesstimate it as something like a Poisson or Gaussian instead of sin(), but I don't know how to do that either. Any help to get closer to a good answer would be greatly appreciated. Here is the data I use, and the code to show results so far: # SST from Bradtke et al 2010 ToY <- c(1/12,2/12,3/12,4/12,5/12,6/12,7/12,8/12,9/12,10/12,11/12,12/12,13/12,14/12,15/12,16/12,17/12,18/12,19/12,20/12,21/12,22/12,23/12,24/12,25/12,26/12,27/12,28/12,29/12,30/12,31/12,32/12,33/12,34/12,35/12,36/12,37/12,38/12,39/12,40/12,41/12,42/12,43/12,44/12,45/12,46/12,47/12,48/12) Degrees <- c(3,2,2.2,4,7.6,13,16,16.1,14,10.1,7,4.5,3,2,2.2,4,7.6,13,16,16.1,14,10.1,7,4.5,3,2,2.2,4,7.6,13,16,16.1,14,10.1,7,4.5,3,2,2.2,4,7.6,13,16,16.1,14,10.1,7,4.5) SST <- data.frame(ToY, Degrees) SSTlm <- lm(SST$Degrees ~ I(sin(pi*2.07*SST$ToY))) summary(SSTlm) plot(SST,xlim=c(0,4),ylim=c(0,17)) par(new=T) plot(data.frame(ToY=SST$ToY,Degrees=8.4418-6.9431*sin(2.07*pi*SST$ToY)),type="l",xlim=c(0,4),ylim=c(0,17))
It can be done with linear regression - You just need both a $\sin$ and a $\cos$ term at each frequency. The reason why you can use a $\sin$ and $\cos$ term in a linear regression to handle seasonality with any amplitude and phase is because of the following trigonometric identity : A 'general' sine wave with amplitude $A$ and phase $\varphi$ , $A \sin (x + \varphi)$ , can be written as the linear combination $a\sin x+b\cos x$ where $a$ and $b$ are such that $A=\sqrt{a^2+b^2}$ and $\sin\varphi = \frac{b}{\sqrt{a^2+b^2}}$ . Let's see that the two are equivalent: \begin{eqnarray} a \sin(x) + b \cos(x) &=& \sqrt{a^2+b^2} \left(\frac{a}{\sqrt{a^2+b^2}} \sin(x) + \frac{b}{\sqrt{a^2+b^2}} \cos(x)\right)\\ &=& A\left[\sin(x)\cos(\varphi) + \cos(x)\sin(\varphi)\right]\\ &=& A\sin(x+\varphi)\,\text{.} \end{eqnarray} Here's the 'basic' model: SSTlm <- lm(Degrees ~ sin(2*pi*ToY)+cos(2*pi*ToY),data=SST) summary(SSTlm) [snip] Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 8.292 0.135 61.41 <2e-16 *** sin(2 * pi * ToY) -5.916 0.191 -30.98 <2e-16 *** cos(2 * pi * ToY) -4.046 0.191 -21.19 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9355 on 45 degrees of freedom Multiple R-squared: 0.969, Adjusted R-squared: 0.9677 F-statistic: 704.3 on 2 and 45 DF, p-value: < 2.2e-16 plot(Degrees~ToY,ylim=c(1.5,16.5),data=SST) lines(SST $ToY,SSTlm$ fitted,col=2) Edit: Important note - the $2\pi\,t$ term works because the period of the function has been set up so that one period = 1 unit of $t$ . If the period is different from 1, say the period is $\omega$ , then you need $(2\pi/\omega)\, t$ instead. Here's the model with the second harmonic: SSTlm2 <- lm(Degrees ~ sin(2*pi*ToY)+cos(2*pi*ToY) +sin(4*pi*ToY)+cos(4*pi*ToY),data=SST) summary(SSTlm2) [snip] Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 8.29167 0.02637 314.450 < 2e-16 *** sin(2 * pi * ToY) -5.91562 0.03729 -158.634 < 2e-16 *** cos(2 * pi * ToY) -4.04632 0.03729 -108.506 < 2e-16 *** sin(4 * pi * ToY) 1.21244 0.03729 32.513 < 2e-16 *** cos(4 * pi * ToY) 0.33333 0.03729 8.939 2.32e-11 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.1827 on 43 degrees of freedom Multiple R-squared: 0.9989, Adjusted R-squared: 0.9988 F-statistic: 9519 on 4 and 43 DF, p-value: < 2.2e-16 plot(Degrees~ToY,ylab="Degrees",xlab="ToY",ylim=c(1.5,16.5),data=SST) lines(SSTlm2$fitted~ToY,col=2,data=SST) ... and so forth, with 6*pi*ToY etc. If there was a tiny bit of noise in the data I'd probably stop with this second model though. With enough terms, you can exactly fit asymmetric and even jagged periodic sequences, but the resulting fits may 'wiggle'. Here's an asymmetric function (it's a sawtooth - ) added to a scaled version of your periodic function), with third (red) and fourth (green) harmonics. The green fit is on average a little closer but "wiggly" (even when the fit goes through every point, the fit may be very wiggly between points). The periodicity here means there's only 12 d.f. available for a seasonal model in the data. With the intercept in the model, you only have enough degrees of freedom for 11 additional seasonal parameters. Since you are adding two terms with each harmonic, the last harmonic you can fit will only allow you one of them for the last term, the sixth harmonic (and that one has to be a $\cos$ ; the $\sin$ term will be all-zero, while the cos alternates between 1 and -1). 
If you want fits that are smoother than this approach produces on non-smooth series, you may want to look into periodic spline fits. Yet another approach is to use seasonal dummies, but the sin/cos approach is often better if it's a smooth periodic function. This kind of approach to seasonality can also adapt to situations where seasonality is changing, such as using trigonometric or dummy seasonality with state-space models. While the linear model approach discussed here is simple to use, one advantage of @COOLSerdash's nonlinear regression approach is that it can deal with a much wider range of situations - you don't have to change much before you're in a situation where linear regression is no longer suitable but nonlinear least-squares may still be used (having an unknown period would be one such case).
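As a small illustration of that last point, here is a hedged sketch of a nonlinear least-squares fit in which the period is estimated rather than fixed, using the asker's SST data frame. The starting values are rough guesses read off the linear fit above (mean about 8.3, amplitude about 7, period 1 year) and may need tweaking, and this single-sinusoid model will of course not capture the asymmetry that the higher harmonics handle.

SSTnls <- nls(Degrees ~ C + A * sin(2 * pi * ToY / period + phase),
              data = SST,
              start = list(C = 8.3, A = 7, period = 1, phase = 3.7))
summary(SSTnls)   # 'period' should come out close to 1 (year) for these data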
{ "source": [ "https://stats.stackexchange.com/questions/60500", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/26330/" ] }
60,643
I am looking for some information about the difference between binomial, negative binomial and Poisson regression, and about which situations each of these regressions is best suited to. Are there any tests I can perform in SPSS that can tell me which of these regressions is the best for my situation? Also, how do I run a Poisson or negative binomial regression in SPSS, since I can't see any such options in the regression part? If you have any useful links I would appreciate them very much.
Only the nature of your data and your question of interest can tell you which of these regressions is best for your situation. So there are no tests that will tell you which one of these methods is the best for you. (Click on the links of the regression methods below to see some worked examples in SPSS.) If you have a binary outcome (e.g. death/alive, sick/healthy, 1/0), then logistic regression is appropriate. If your outcomes are discrete counts, then Poisson regression or negative binomial regression can be used. Remember that the Poisson distribution assumes that the mean and variance are the same. Sometimes, your data show extra variation that is greater than the mean. This situation is called overdispersion, and negative binomial regression is more flexible in that regard than Poisson regression (you could still use Poisson regression in that case, but the standard errors could be biased). The negative binomial distribution has one more parameter than the Poisson distribution, which adjusts the variance independently of the mean. In fact, the Poisson distribution is a special case of the negative binomial distribution.
{ "source": [ "https://stats.stackexchange.com/questions/60643", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25739/" ] }
60,777
I'm working with a large data set (confidential, so I can't share too much), and came to the conclusion a negative binomial regression would be necessary. I've never done a glm regression before, and I can't find any clear information about what the assumptions are. Are they the same for MLR? Can I transform the variables the same way (I've already discovered transforming the dependent variable is a bad call since it needs to be a natural number)? I already determined that the negative binomial distribution would help with the over-dispersion in my data (variance is around 2000, the mean is 48). Thanks for the help!!
I'm working with a large data set (confidential, so I can't share too much), It might be possible to create a small data set that has some of the general characteristics of the real data without either the variable names nor any of the actual values. and came to the conclusion a negative binomial regression would be necessary. I've never done a glm regression before, and I can't find any clear information about what the assumptions are. Are they the same for MLR? Clearly not! You already know you're assuming response is conditionally negative binomial, not conditionally normal. ( Some assumptions are shared. Independence for example.) Let me talk about GLMs more generally first. GLMs include multiple regression but generalize in several ways: 1) the conditional distribution of the response (dependent variable) is from the exponential family , which includes the Poisson, binomial, gamma, normal and numerous other distributions. 2) the mean response is related to the predictors (independent variables) through a link function . Each family of distributions has an associated canonical link function - for example in the case of the Poisson, the canonical link is the log . The canonical links are almost always the default, but in most software you generally have several choices within each distribution choice. For the binomial the canonical link is the logit (the linear predictor is modelling $\log(\frac{p}{1-p})$, the log-odds of a success, or a "1") and for the Gamma the canonical link is the inverse - but in both cases other link functions are often used. So if your response was $Y$ and your predictors were $X_1$ and $X_2$, with a Poisson regression with the log link you might have for your description of how the mean of $Y$ is related to the $X$'s: $\text{E}(Y_i) = \mu_i$ $\log\mu_i= \eta_i$ ($\eta$ is called the 'linear predictor', and here the link function is $\log$, the symbol $g$ is often used to represent the link function) $\eta_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i}$ 3) the variance of the response is not constant, but operates through a variance-function (a function of the mean, possibly times a scaling parameter). For example, the variance of a Poisson is equal to the mean, while for a gamma it's proportional to the square of the mean. (The quasi-distributions allow some degree of decoupling of Variance function from assumed distribution) -- So what assumptions are in common with what you remember from MLR? Independence is still there. Homoskedasticity is no longer assumed; the variance is explicitly a function of the mean and so in general varies with the predictors (so while the model is generally heteroskedastic, the heteroskedasticity takes a specific form). Linearity: The model is still linear in the parameters (i.e. the linear predictor is $X\beta$), but the expected response is not linearly related to them (unless you use the identity link function!). The distribution of the response is substantially more general The interpretation of the output is in many ways quite similar; you can still look at estimated coefficients divided by their standard errors for example, and interpret them similarly (they're asymptotically normal - a Wald z-test - but people still seem to call them t-ratios, even when there's no theory that makes them $t$-distributed in general). Comparisons between nested models (via 'anova-table' like setups) are a bit different, but similar (involving asymptotic chi-square tests). If you're comfortable with AIC and BIC these can be calculated. 
Similar kinds of diagnostic displays are generally used, but can be harder to interpret. Much of your multiple linear regression intuition will carry over if you keep the differences in mind. Here's an example of something you can do with a glm that you can't really do with linear regression (indeed, most people would use nonlinear regression for this, but GLM is easier and nicer for it) in the normal case - $Y$ is normal, modelled as a function of $x$: $\text{E}(Y) = \exp(\eta) = \exp(X\beta) = \exp(\beta_0+\beta_1 x)$ (that is, a log-link) $\text{Var}(Y) = \sigma^2$ That is, a least-squares fit of an exponential relationship between $Y$ and $x$. Can I transform the variables the same way (I've already discovered transforming the dependent variable is a bad call since it needs to be a natural number)? You (usually) don't want to transform the response (DV). You sometimes may want to transform predictors (IVs) in order to achieve linearity of the linear predictor. I already determined that the negative binomial distribution would help with the over-dispersion in my data (variance is around 2000, the mean is 48). Yes, it can deal with overdispersion. But take care not to confuse the conditional dispersion with the unconditional dispersion. Another common approach - if a bit more kludgy and so somewhat less satisfying to my mind - is quasi-Poisson regression (overdispersed Poisson regression). With the negative binomial, it's in the exponential family if you specify a particular one of its parameters (the way it's usually reparameterized for GLMS at least). Some packages will fit it if you specify the parameter, others will wrap ML estimation of that parameter (say via profile likelihood) around a GLM routine, automating the process. Some will restrict you to a smaller set of distributions; you don't say what software you might use so it's difficult to say much more there. I think usually the log-link tends to be used with negative binomial regression. There are a number of introductory-level documents (readily found via google) that lead through some basic Poisson GLM and then negative binomial GLM analysis of data, but you may prefer to look at a book on GLMs and maybe do a little Poisson regression first just to get used to that.
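Since the software matters here, the following is only a hedged sketch of one common route in R: MASS::glm.nb wraps the ML estimation of the extra negative binomial parameter around the GLM fit for you. The data below are simulated purely for illustration, not taken from any real application.

library(MASS)
set.seed(1)
x <- runif(500)
y <- rnbinom(500, mu = exp(1 + 2 * x), size = 1.5)   # simulated overdispersed counts
fit.nb <- glm.nb(y ~ x)                              # log link by default
summary(fit.nb)   # coefficients are on the log scale; Theta is the estimated NB shape (size) parameter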
{ "source": [ "https://stats.stackexchange.com/questions/60777", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/26493/" ] }
60,780
I have a within-subjects design with 4 DVs and I ran a repeated measures analysis in SPSS. My DV is a physiological measurement and is not expected to be linear. My multivariate Wilks' lambda is significant at p = .004 and my Mauchly test approaches significance at p = .054. However, my univariate within-subjects tests are non-significant (uncorrected p = .07 and Greenhouse-Geisser p = .082). Given that my sphericity test is almost significant and given that I do not expect linearity in the data, is it reasonable to look at my multivariate test and ignore the non-significant univariate test? Do I even have to look at the univariate test if the multivariate is significant? I.e., am I justified in looking at my pairwise comparisons simply because the multivariate test is significant?
{ "source": [ "https://stats.stackexchange.com/questions/60780", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/26462/" ] }
60,817
I am having trouble interpreting the z values for categorical variables in logistic regression. In the example below I have a categorical variable with 3 classes and according to the z value, CLASS2 might be relevant while the others are not. But now what does this mean? That I could merge the other classes to one? That the whole variable might not be a good predictor? This is just an example and the actual z values here are not from a real problem, I just have difficulties about their interpretation. Estimate Std. Error z value Pr(>|z|) CLASS0 6.069e-02 1.564e-01 0.388 0.6979 CLASS1 1.734e-01 2.630e-01 0.659 0.5098 CLASS2 1.597e+00 6.354e-01 2.514 0.0119 *
The following explanation is not limited to logistic regression but applies equally in normal linear regression and other GLMs. Usually, R excludes one level of the categorical and the coefficients denote the difference of each class to this reference class (or sometimes called baseline class) (this is called dummy coding or treatment contrasts in R , see here for an excellent overview of the different contrast options). To see the current contrasts in R , type options("contrasts") . Normally, R orders the levels of the categorical variable alphabetically and takes the first as reference class. This is not always optimal and can be changed by typing (here, we would set the reference class to "c" in the new variable) new.variable <- relevel(old.variable, ref="c") . For each coefficient of every level of the categorical variable, a Wald test is performed to test whether the pairwise difference between the coefficient of the reference class and the other class is different from zero or not. This is what the $z$ and $p$ -values in the regression table are. If only one categorical class is significant, this does not imply that the whole variable is meaningless and should be removed from the model. You can check the overall effect of the variable by performing a likelihood ratio test : fit two models, one with and one without the variable and type anova(model1, model2, test="LRT") in R (see example below). Here is an example: mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv") mydata $rank <- factor(mydata$ rank) my.mod <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial") summary(my.mod) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -3.989979 1.139951 -3.500 0.000465 *** gre 0.002264 0.001094 2.070 0.038465 * gpa 0.804038 0.331819 2.423 0.015388 * rank2 -0.675443 0.316490 -2.134 0.032829 * rank3 -1.340204 0.345306 -3.881 0.000104 *** rank4 -1.551464 0.417832 -3.713 0.000205 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 The level rank1 has been omitted and each coefficient of rank denotes the difference between the coefficient of rank1 and the corresponding rank level. So the difference between the coefficient of rank1 and rank2 would be $-0.675$ . The coefficient of rank1 is simply the intercept. So the true coefficient of rank2 would be $-3.99 - 0.675 = -4.67$ . The Wald tests here test whether the difference between the coefficient of the reference class (here rank1 ) and the corresponding levels differ from zero. In this case, we have evidence that the coefficients of all classes differ from the coefficient of rank1 . You could also fit the model without an intercept by adding - 1 to the model formula to see all coefficients directly: my.mod2 <- glm(admit ~ gre + gpa + rank - 1, data = mydata, family = "binomial") summary(my.mod2) # no intercept model Coefficients: Estimate Std. Error z value Pr(>|z|) gre 0.002264 0.001094 2.070 0.038465 * gpa 0.804038 0.331819 2.423 0.015388 * rank1 -3.989979 1.139951 -3.500 0.000465 *** rank2 -4.665422 1.109370 -4.205 2.61e-05 *** rank3 -5.330183 1.149538 -4.637 3.54e-06 *** rank4 -5.541443 1.138072 -4.869 1.12e-06 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Note that the intercept is gone now and that the coefficient of rank1 is exactly the intercept of the first model. Here, the Wald test checks not the pairwise difference between coefficients but the hypothesis that each individual coefficient is zero. 
Again, we have evidence that every coefficient of rank differs from zero. Finally, to check whether the whole variable rank improves the model fit, we fit one model with ( my.mod1 ) and one without the variable rank ( my.mod2 ) and conduct a likelihood ratio test. This tests the hypothesis that all coefficients of rank are zero: my.mod1 <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial") # with rank my.mod2 <- glm(admit ~ gre + gpa, data = mydata, family = "binomial") # without rank anova(my.mod1, my.mod2, test="LRT") Analysis of Deviance Table Model 1: admit ~ gre + gpa + rank Model 2: admit ~ gre + gpa Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 394 458.52 2 397 480.34 -3 -21.826 7.088e-05 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 The likelihood ratio test is highly significant and we would conclude that the variable rank should remain in the model. This post is also very interesting.
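As a small supplementary sketch with the same mydata and my.mod as above, drop1() gives the same overall likelihood ratio test for the rank factor without refitting the reduced model by hand; it is just a convenience wrapper around the nested-model comparison shown above.
# LRT for dropping each term (including the whole rank factor) from the full model
drop1(my.mod, test = "LRT")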
{ "source": [ "https://stats.stackexchange.com/questions/60817", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/20563/" ] }
60,994
Although I read this post, I still have no idea how to apply this to my own data and hope that someone can help me out. I have the following data: y <- c(11.622967, 12.006081, 11.760928, 12.246830, 12.052126, 12.346154, 12.039262, 12.362163, 12.009269, 11.260743, 10.950483, 10.522091, 9.346292, 7.014578, 6.981853, 7.197708, 7.035624, 6.785289, 7.134426, 8.338514, 8.723832, 10.276473, 10.602792, 11.031908, 11.364901, 11.687638, 11.947783, 12.228909, 11.918379, 12.343574, 12.046851, 12.316508, 12.147746, 12.136446, 11.744371, 8.317413, 8.790837, 10.139807, 7.019035, 7.541484, 7.199672, 9.090377, 7.532161, 8.156842, 9.329572, 9.991522, 10.036448, 10.797905) t <- 18:65 And now I simply want to fit a sine wave $$ y(t)=A\cdot sin(\omega t+\phi) +C. $$ with the four unknowns $A$ , $\omega$ , $\phi$ and $C$ to it. The rest of my code looks is the following res <- nls(y ~ A*sin(omega*t+phi)+C, data=data.frame(t,y), start=list(A=1,omega=1,phi=1,C=1)) co <- coef(res) fit <- function(x, a, b, c, d) {a*sin(b*x+c)+d} # Plot result plot(x=t, y=y) curve(fit(x, a=co["A"], b=co["omega"], c=co["phi"], d=co["C"]), add=TRUE ,lwd=2, col="steelblue") But the result is really poor. I would very much appreciate any help.
If you just want a good estimate of $\omega$ and don't care much about its standard error: ssp <- spectrum(y) per <- 1/ssp$freq[ssp$spec==max(ssp$spec)] reslm <- lm(y ~ sin(2*pi/per*t)+cos(2*pi/per*t)) summary(reslm) rg <- diff(range(y)) plot(y~t,ylim=c(min(y)-0.1*rg,max(y)+0.1*rg)) lines(fitted(reslm)~t,col=4,lty=2) # dashed blue line is sin fit # including 2nd harmonic really improves the fit reslm2 <- lm(y ~ sin(2*pi/per*t)+cos(2*pi/per*t)+sin(4*pi/per*t)+cos(4*pi/per*t)) summary(reslm2) lines(fitted(reslm2)~t,col=3) # solid green line is periodic with second harmonic (A better fit still would perhaps account for the outliers in that series in some way, reducing their influence.) --- If you want some idea of the uncertainty in $\omega$, you could use profile likelihood ( pdf1 , pdf2 - references on getting approximate CIs or SEs from profile likelihood or its variants aren't hard to locate) (Alternatively, you could feed these estimates into nls ... and start it already converged.)
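A possible sketch of the last suggestion - turning the harmonic-regression coefficients into starting values for nls - using the identity $A\sin(\omega t+\phi) = A\cos(\phi)\sin(\omega t) + A\sin(\phi)\cos(\omega t)$:
# Feed the lm() estimates into nls as starting values
b    <- unname(coef(reslm))        # intercept, sin coefficient, cos coefficient
A0   <- sqrt(b[2]^2 + b[3]^2)      # amplitude
phi0 <- atan2(b[3], b[2])          # phase
res  <- nls(y ~ A * sin(omega * t + phi) + C,
            start = list(A = A0, omega = 2 * pi / per, phi = phi0, C = b[1]))
coef(res)                          # should start essentially converged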
{ "source": [ "https://stats.stackexchange.com/questions/60994", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/14447/" ] }
61,080
Suppose $\phi(\cdot)$ and $\Phi(\cdot)$ are density function and distribution function of the standard normal distribution. How can one calculate the integral: $$\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$$
A more conventional notation is $$y(\mu, \sigma) = \int\Phi\left(\frac{x-\mu}{\sigma}\right)\phi(x) dx = \Phi\left(\frac{-\mu}{\sqrt{1+\sigma^2}}\right).$$ This can be found by differentiating the integral with respect to $\mu$ and $\sigma$, producing elementary integrals which can be expressed in closed form: $$\frac{\partial y}{\partial \mu}(\mu, \sigma) = -\frac{1}{\sqrt{2 \pi } \sqrt{\sigma ^2+1}}e^{-\frac{1}{2}\frac{\mu ^2}{\sigma ^2+1}},$$ $$\frac{\partial y}{\partial \sigma}(\mu, \sigma) = \frac{\mu\sigma }{\sqrt{2 \pi } \left(\sigma ^2+1\right)^{3/2}}e^{-\frac{1}{2}\frac{\mu ^2}{\sigma ^2+1}}.$$ This system can be integrated, beginning with the initial condition $y(0,1)$ = $\int\Phi(x)\phi(x)dx$ = $1/2$, to obtain the given solution (which is easily checked by differentiation).
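A quick numerical check in R, with the question's $a$ and $b$ playing the roles of $\mu$ and $\sigma$ (arbitrary example values):
a <- 1.3; b <- 0.7
lhs <- integrate(function(w) pnorm((w - a) / b) * dnorm(w), -Inf, Inf)$value
rhs <- pnorm(-a / sqrt(1 + b^2))
c(numerical = lhs, closed.form = rhs)   # the two should agree to numerical precision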
{ "source": [ "https://stats.stackexchange.com/questions/61080", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10286/" ] }
61,189
I am a bit confused with the difference between an SVM and a perceptron. Let me try to summarize my understanding here, and please feel free to correct where I am wrong and fill in what I have missed. The Perceptron does not try to optimize the separation "distance". As long as it finds a hyperplane that separates the two sets, it is good. SVM on the other hand tries to maximize the "support vector", i.e., the distance between two closest opposite sample points. The SVM typically tries to use a "kernel function" to project the sample points to high dimension space to make them linearly separable, while the perceptron assumes the sample points are linearly separable.
It sounds right to me. People sometimes also use the word "Perceptron" to refer to the training algorithm together with the classifier. For example, someone explained this to me in the answer to this question . Also, there is nothing to stop you from using a kernel with the perceptron, and this is often a better classifier. See here for some slides (pdf) on how to implement the kernel perceptron. The major practical difference between a (kernel) perceptron and SVM is that perceptrons can be trained online (i.e. their weights can be updated as new examples arrive one at a time) whereas SVMs cannot be. See this question for information on whether SVMs can be trained online. So, even though a SVM is usually a better classifier, perceptrons can still be useful because they are cheap and easy to re-train in a situation in which fresh training data is constantly arriving.
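For concreteness, here is a minimal illustrative R sketch of the classical (non-kernel) perceptron update rule, assuming class labels coded as -1/+1 and a linearly separable sample:
perceptron <- function(X, y, max_epochs = 100, eta = 1) {
  X <- cbind(1, as.matrix(X))                  # prepend an intercept column
  w <- rep(0, ncol(X))                         # start from zero weights
  for (epoch in seq_len(max_epochs)) {
    errors <- 0
    for (i in seq_len(nrow(X))) {
      if (y[i] * sum(w * X[i, ]) <= 0) {       # misclassified (or on the boundary)
        w <- w + eta * y[i] * X[i, ]           # online update
        errors <- errors + 1
      }
    }
    if (errors == 0) break                     # a separating hyperplane has been found
  }
  w                                            # any separating hyperplane, not the max-margin one
}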
{ "source": [ "https://stats.stackexchange.com/questions/61189", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/8086/" ] }
61,217
I am trying to perform a multiple regression in R . However, my dependent variable has the following plot: Here is a scatterplot matrix with all my variables ( WAR is the dependent variable): I know that I need to perform a transformation on this variable (and possibly the independent variables?) but I am not sure of the exact transformation required. Can someone point me in the right direction? I am happy to provide any additional information about the relationship between the independent and dependent variables. The diagnostic graphics from my regression look as follows: EDIT After transforming the dependent and independent variables using Yeo-Johnson transformations, the diagnostic plots look like this: If I use a GLM with a log-link, the diagnostic graphics are:
John Fox's book An R companion to applied regression is an excellent resource on applied regression modelling with R . The package car which I use throughout this answer is the accompanying package. The book also has a website with additional chapters. Transforming the response (aka dependent variable, outcome) Box-Cox transformations offer a possible way for choosing a transformation of the response. After fitting your regression model containing untransformed variables with the R function lm , you can use the function boxCox from the car package to estimate $\lambda$ (i.e. the power parameter) by maximum likelihood. Because your dependent variable isn't strictly positive, Box-Cox transformations will not work and you have to specify the option family="yjPower" to use the Yeo-Johnson transformations (see the original paper here and this related post ): boxCox(my.regression.model, family="yjPower", plotit = TRUE) This produces a plot like the following one: The best estimate of $\lambda$ is the value that maximizes the profile likelihood, which in this example is about 0.2. Usually, the estimate of $\lambda$ is rounded to a familiar value that is still within the 95%-confidence interval, such as -1, -1/2, 0, 1/3, 1/2, 1 or 2. To transform your dependent variable now, use the function yjPower from the car package: depvar.transformed <- yjPower(my.dependent.variable, lambda) In the function, the lambda should be the rounded $\lambda$ you have found before using boxCox . Then fit the regression again with the transformed dependent variable. Important: Rather than just log-transform the dependent variable, you should consider fitting a GLM with a log-link. Here are some references that provide further information: first , second , third . To do this in R , use glm : glm.mod <- glm(y~x1+x2, family=gaussian(link="log")) where y is your dependent variable and x1 , x2 etc. are your independent variables. Transformations of predictors Transformations of strictly positive predictors can be estimated by maximum likelihood after the transformation of the dependent variable. To do so, use the function boxTidwell from the car package (for the original paper see here ). Use it like this: boxTidwell(y~x1+x2, other.x=~x3+x4) . The important thing here is that the option other.x indicates the terms of the regression that are not to be transformed. This would be all your categorical variables. The function produces an output of the following form: boxTidwell(prestige ~ income + education, other.x=~ type + poly(women, 2), data=Prestige) Score Statistic p-value MLE of lambda income -4.482406 0.0000074 -0.3476283 education 0.216991 0.8282154 1.2538274 In that case, the score test suggests that the variable income should be transformed. The maximum likelihood estimate of $\lambda$ for income is -0.348. This could be rounded to -0.5, which is analogous to the transformation $\text{income}_{new}=1/\sqrt{\text{income}_{old}}$. Another very interesting post on the site about the transformation of the independent variables is this one . Disadvantages of transformations While log-transformed dependent and/or independent variables can be interpreted relatively easily , the interpretation of other, more complicated transformations is less intuitive (for me at least). How would you, for example, interpret the regression coefficients after the dependent variable has been transformed by $1/\sqrt{y}$? There are quite a few posts on this site that deal exactly with that question: first , second , third , fourth . 
If you use the $\lambda$ from Box-Cox directly, without rounding (e.g. $\lambda$=-0.382), it is even more difficult to interpret the regression coefficients. Modelling nonlinear relationships Two quite flexible methods to fit nonlinear relationships are fractional polynomials and splines . These three papers offer a very good introduction to both methods: First , second and third . There is also a whole book about fractional polynomials and R . The R package mfp implements multivariable fractional polynomials. This presentation might be informative regarding fractional polynomials. To fit splines, you can use the function gam (generalized additive models, see here for an excellent introduction with R ) from the package mgcv or the functions ns (natural cubic splines) and bs (cubic B-splines) from the package splines (see here for an example of the usage of these functions). Using gam you can specify which predictors you want to fit using splines using the s() function: my.gam <- gam(y~s(x1) + x2, family=gaussian()) here, x1 would be fitted using a spline and x2 linearly as in a normal linear regression. Inside gam you can specify the distribution family and the link function as in glm . So to fit a model with a log-link function, you can specify the option family=gaussian(link="log") in gam as in glm . Have a look at this post from the site.
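As a small illustrative sketch of the ns() route mentioned above (placeholder variable and data names):
library(splines)
spline.mod <- lm(y ~ ns(x1, df = 4) + x2, data = my.data)  # natural cubic spline for x1, linear x2
summary(spline.mod)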
{ "source": [ "https://stats.stackexchange.com/questions/61217", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/26640/" ] }
61,225
I'm looking for the correct equation to compute the weighted unbiased sample covariance. Internet sources are quite rare on this theme and they all use different equations. The most likely equation I've found is this one: $q_{jk}=\frac{\sum_{i=1}^{N}w_i}{\left(\sum_{i=1}^{N}w_i\right)^2-\sum_{i=1}^{N}w_i^2} \sum_{i=1}^N w_i \left( x_{ij}-\bar{x}_j \right) \left( x_{ik}-\bar{x}_k \right) .$ From: https://en.wikipedia.org/wiki/Sample_mean_and_sample_covariance#Weighted_samples Of course, you have to compute the weighted (unbiased) sample mean beforehand. However, I have found several other formulas like: $q_{jk}= \frac{1}{\left(\sum_{i=1}^N w_i\right)-1}\sum_{i=1}^N w_i \left( x_{ij}-\bar{x}_j \right) \left( x_{ik}-\bar{x}_k \right) .$ Or I've even seen some source codes and academic papers just using the standard covariance formula but with the weighted sample mean instead of the sample mean... Can someone help me and shed some light? /EDIT: my weights are simply the number of observations for a sample in the dataset, thus weights.sum() = n
Found the solution in a 1972 paper (George R. Price, Ann. Hum. Genet., Lond., pp. 485-490, Extension of covariance selection mathematics, 1972). Biased weighted sample covariance: $\Sigma=\frac{1}{\sum_{i=1}^{N}w_i}\sum_{i=1}^N w_i \left(x_i - \mu^*\right)^T\left(x_i - \mu^*\right)$ And the unbiased weighted sample covariance given by applying the Bessel correction: $\Sigma=\frac{1}{\sum_{i=1}^{N}w_i - 1}\sum_{i=1}^N w_i \left(x_i - \mu^*\right)^T\left(x_i - \mu^*\right)$ Where $\mu^*$ is the (unbiased) weighted sample mean: $\mathbf{\mu^*}=\frac{\sum_{i=1}^N w_i \mathbf{x}_i}{\sum_{i=1}^N w_i}$ Important note: this works only if the weights are "repeat"-type weights, meaning that each weight represents the number of occurrences of one observation, and that $\sum_{i=1}^N w_i=N^*$ where $N^*$ represents the real sample size (real total number of samples, accounting for the weights). I have updated the article on Wikipedia, where you will also find the equation for the unbiased weighted sample variance: https://en.wikipedia.org/wiki/Weighted_arithmetic_mean#Weighted_sample_covariance Practical note: I advise you to first multiply column-by-column $w_i$ and $\left(x_i - \mu^*\right)$ and then do a matrix multiplication with $\left(x_i - \mu^*\right)$ to wrap things up and automatically perform the summation. E.g. in Python Pandas/NumPy code: import pandas as pd import numpy as np # X is the dataset, as a Pandas' DataFrame mean = np.ma.average(X, axis=0, weights=weights) # Computing the weighted sample mean (fast, efficient and precise) mean = pd.Series(mean, index=list(X.keys())) # Convert to a Pandas' Series (it's just aesthetic and more ergonomic; no difference in computed values) xm = X-mean # xm = X diff to mean xm = xm.fillna(0) # fill NaN with 0 (because anyway a variance of 0 is just void, but at least it keeps the other covariance's values computed correctly) sigma2 = 1./(weights.sum()-1) * xm.mul(weights, axis=0).T.dot(xm) # Compute the unbiased weighted sample covariance Did a few sanity checks using a non-weighted dataset and an equivalent weighted dataset, and it works correctly. For more details about the theory of unbiased variance/covariance, see this post.
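For completeness, a short R translation of the same "repeat"-type weight formulas (my own sketch; X is assumed to be a numeric matrix with one row per observation and w the vector of repeat counts):
weighted_cov <- function(X, w) {
  X  <- as.matrix(X)
  mu <- colSums(X * w) / sum(w)            # weighted sample mean
  Xc <- sweep(X, 2, mu)                    # centre each column
  crossprod(Xc * w, Xc) / (sum(w) - 1)     # unbiased weighted sample covariance
}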
{ "source": [ "https://stats.stackexchange.com/questions/61225", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25538/" ] }
61,230
I am running a decision tree classification using SPSS on a data set with around 20 predictors (categorical with few categories). CHAID (Chi-squared Automatic Interaction Detection) and CRT/CART (Classification And Regression Trees) are giving me different trees. Can anyone explain the relative merits of CHAID vs CRT? What are the implications of using one method over the other?
I will list some properties and later give you my appraisal for what it's worth: CHAID uses multiway splits by default (a multiway split means that the current node is split into more than two nodes). This may or may not be desired (it can lead to better segments or easier interpretation). What it definitely does, though, is thin out the sample size in the nodes and thus lead to shallower trees. When used for segmentation purposes this can quickly backfire, as CHAID needs large sample sizes to work well. CART does binary splits (each node is split into two daughter nodes) by default. CHAID is intended to work with categorical/discretized targets (XAID was for regression but perhaps they have been merged since then). CART can definitely do regression and classification. CHAID uses a pre-pruning idea . A node is only split if a significance criterion is fulfilled. This ties in with the above problem of needing large sample sizes as the Chi-Square test has only little power in small samples (which is effectively reduced even further by a Bonferroni correction for multiple testing). CART on the other hand grows a large tree and then post-prunes the tree back to a smaller version. Thus CHAID tries to prevent overfitting right from the start (only split if there is a significant association), whereas CART may easily overfit unless the tree is pruned back. On the other hand this allows CART to perform better than CHAID in- and out-of-sample (for a given tuning parameter combination). The most important difference in my opinion is that split variable and split point selection in CHAID is less strongly confounded than in CART . This is largely irrelevant when the trees are used for prediction but is an important issue when trees are used for interpretation: A tree that has those two parts of the algorithm highly confounded is said to be "biased in variable selection" (an unfortunate name). This means that split variable selection prefers variables with many possible splits (say metric predictors). CART is highly "biased" in that sense, CHAID not so much. With surrogate splits CART knows how to handle missing values (surrogate splits means that with missing values (NAs) for predictor variables the algorithm uses other predictor variables that are not as "good" as the primary split variable but mimic the splits produced by the primary splitter). CHAID has no such thing afaik. So depending on what you need it for, I'd suggest using CHAID if the sample is of some size and the aspects of interpretation are more important. Also, if multiway splits or smaller trees are desired, CHAID is better. CART on the other hand is a well-working prediction machine, so if prediction is your aim, I'd go for CART.
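To make the "grow large, then post-prune" contrast concrete, here is a hedged sketch with rpart (R's CART implementation); train and class are placeholder names:
library(rpart)
fit <- rpart(class ~ ., data = train, method = "class",
             control = rpart.control(cp = 0, minsplit = 2))          # grow a deliberately large tree
printcp(fit)                                                          # cross-validated error by complexity
best.cp <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pruned  <- prune(fit, cp = best.cp)                                   # post-prune to the best subtree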
{ "source": [ "https://stats.stackexchange.com/questions/61230", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/14188/" ] }
61,546
Computing power considerations aside, are there any reasons to believe that increasing the number of folds in cross-validation leads to better model selection/validation (i.e. that the higher the number of folds the better)? Taking the argument to the extreme, does leave-one-out cross-validation necessarily lead to better models than $K$-fold cross-validation? Some background on this question: I am working on a problem with very few instances (e.g. 10 positives and 10 negatives), and am afraid that my models may not generalize well/would overfit with so little data.
Leave-one-out cross-validation does not generally lead to better performance than K-fold, and is more likely to be worse , as it has a relatively high variance (i.e. its value changes more for different samples of data than the value for k-fold cross-validation). This is bad in a model selection criterion as it means the model selection criterion can be optimised in ways that merely exploit the random variation in the particular sample of data, rather than making genuine improvements in performance, i.e. you are more likely to over-fit the model selection criterion. The reason leave-one-out cross-validation is used in practice is that for many models it can be evaluated very cheaply as a by-product of fitting the model. If computational expense is not primarily an issue, a better approach is to perform repeated k-fold cross-validation, where the k-fold cross-validation procedure is repeated with different random partitions into k disjoint subsets each time. This reduces the variance. If you have only 20 patterns, it is very likely that you will experience over-fitting the model selection criterion, which is a much neglected pitfall in statistics and machine learning (shameless plug: see my paper on the topic). You may be better off choosing a relatively simple model and try not to optimise it very aggressively, or adopt a Bayesian approach and average over all model choices, weighted by their plausibility. IMHO optimisation is the root of all evil in statistics, so it is better not to optimise if you don't have to, and to optimise with caution whenever you do. Note also if you are going to perform model selection, you need to use something like nested cross-validation if you also need a performance estimate (i.e. you need to consider model selection as an integral part of the model fitting procedure and cross-validate that as well).
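A hand-rolled sketch of the repeated k-fold idea (purely illustrative; fit_fun and pred_fun stand in for whatever classifier is being used):
repeated_kfold_error <- function(X, y, k = 5, repeats = 20, fit_fun, pred_fun) {
  errs <- numeric(0)
  for (r in seq_len(repeats)) {
    folds <- sample(rep(seq_len(k), length.out = length(y)))  # fresh random partition each repeat
    for (f in seq_len(k)) {
      test  <- folds == f
      model <- fit_fun(X[!test, , drop = FALSE], y[!test])
      pred  <- pred_fun(model, X[test, , drop = FALSE])
      errs  <- c(errs, mean(pred != y[test]))
    }
  }
  mean(errs)   # averaging over repeated partitions reduces the variance of the estimate
}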
{ "source": [ "https://stats.stackexchange.com/questions/61546", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2798/" ] }
61,733
I want to perform a very simple linear regression in R . The formula is as simple as $y = ax + b$. However I would like the slope ($a$) to be inside an interval, let's say, between 1.4 and 1.6. How can this be done?
I want to perform ... linear regression in R. ... I would like the slope to be inside an interval, let's say, between 1.4 and 1.6. How can this be done? (i) Simple way: fit the regression. If it's in the bounds, you're done. If it's not in the bounds, set the slope to the nearest bound, and estimate the intercept as the average of $(y - ax)$ over all observations. (ii) More complex way: do least squares with box constraints on the slope; many optimizaton routines implement box constraints, e.g. nlminb (which comes with R) does. Edit: actually (as mentioned in the example below), in vanilla R, nls can do box constraints; as shown in the example, that's really very easy to do. You can use constrained regression more directly; I think the pcls function from the package "mgcv" and the nnls function from the package "nnls" both do. -- Edit to answer followup question - I was going to show you how to use it with nlminb since that comes with R, but I realized that nls already uses the same routines (the PORT routines) to implement constrained least squares, so my example below does that case. NB: in my example below, $a$ is the intercept and $b$ is the slope (the more common convention in stats). I realized after I put it in here that you started the other way around; I'm going to leave the example 'backward' relative to your question, though. First, set up some data with the 'true' slope inside the range: set.seed(seed=439812L) x=runif(35,10,30) y = 5.8 + 1.53*x + rnorm(35,s=5) # population slope is in range plot(x,y) lm(y~x) Call: lm(formula = y ~ x) Coefficients: (Intercept) x 12.681 1.217 ... but LS estimate is well outside it, just caused by random variation. So lets use the constrained regression in nls : nls(y~a+b*x,algorithm="port", start=c(a=0,b=1.5),lower=c(a=-Inf,b=1.4),upper=c(a=Inf,b=1.6)) Nonlinear regression model model: y ~ a + b * x data: parent.frame() a b 9.019 1.400 residual sum-of-squares: 706.2 Algorithm "port", convergence message: both X-convergence and relative convergence (5) As you see, you get a slope right on the boundary. If you pass the fitted model to summary it will even produce standard errors and t-values but I am not sure how meaningful/interpretable these are. So how does my suggestion (1) compare? (i.e. set the slope to the nearest bound and average the residuals $y-bx$ to estimate the intercept) b=1.4 c(a=mean(y-x*b),b=b) a b 9.019376 1.400000 It's the same estimate ... In the plot below, the blue line is least squares and the red line is the constrained least squares:
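Since nlminb was mentioned but not shown, here is a hedged sketch of the same box-constrained least-squares fit done with it directly (same a = intercept, b = slope convention, reusing x and y from the example above):
sse <- function(par) sum((y - (par[1] + par[2] * x))^2)   # par = c(a, b)
nlminb(start = c(a = 0, b = 1.5), objective = sse,
       lower = c(-Inf, 1.4), upper = c(Inf, 1.6))$par     # slope constrained to [1.4, 1.6]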
{ "source": [ "https://stats.stackexchange.com/questions/61733", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/26887/" ] }
61,747
I have a set of values $x$ and $y$ which are theoretically related exponentially: $y = ax^b$ One way to obtain the coefficients is by applying natural logarithms in both sides and fitting a linear model: > fit <- lm(log(y)~log(x)) > a <- exp(fit$coefficients[1]) > b <- fit$coefficients[2] Another way to obtain this is using a nonlinear regression, given a theoretical set of start values: > fit <- nls(y~a*x^b, start=c(a=50, b=1.3)) My tests show better and more theory-related results if I apply the second algorithm. However, I would like to know the statistical meaning and implications of each method. Which of them is better?
"Better" is a function of your model. Part of the reason for your confusion is you only wrote half of your model. When you say $y=ax^b$ , that's not actually true. Your observed $y$ values aren't equal to $ax^b$ ; they have an error component. For example, the two models you mention (not the only possible models by any means) make entirely different assumptions about the error. You probably mean something closer to $E(Y|X=x) = ax^b\,$ . But then what do we say about the variation of $Y$ away from that expectation at a given $x$ ? It matters! When you fit the nonlinear least squares model, you're saying that the errors are additive and the standard deviation of the errors is constant across the data: $\: y_i \sim N(ax_i^b,\sigma^2)$ or equivalently $\: y_i = ax_i^b + e_i$ , with $\text{var}(e_i) = \sigma^2$ by contrast when you take logs and fit a linear model, you're saying the error is additive on the log scale and (on the log scale) constant across the data. This means that on the scale of the observations, the error term is multiplicative , and so the errors are larger when the expected values are larger: $\: y_i \sim \text{logN}(\log a+b\log x_i,\sigma^2)$ or equivalently $\: y_i = ax_i^b \cdot \eta_i$ , with $\eta_i \sim \text{logN}(0,\sigma^2)$ (Note that $\text{E}(\eta)$ is not 1. If $\sigma^2$ is not very small, you will need to allow for this effect if you want a reasonable approximation for the conditional mean of $Y$ ) (You can do least squares without assuming normality / lognormal distributions, but the central issue being discussed still applies ... and if you're nowhere near normality, you should probably be considering a different error model anyway) So what is best depends on which kind of error model describes your circumstances. [If you're doing some exploratory analysis with some kind of data that's not been seen before, you'd consider questions like "What do your data look like? (i.e. $y$ plotted against $x$ ? What do the residuals look like against $x$ ?". On the other hand if variables like these are not uncommon you should already have information about their general behaviour.]
{ "source": [ "https://stats.stackexchange.com/questions/61747", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/26887/" ] }
61,783
How do different cross-validation methods compare in terms of model variance and bias? My question is partly motivated by this thread: Optimal number of folds in $K$-fold cross-validation: is leave-one-out CV always the best choice? . The answer there suggests that models learned with leave-one-out cross-validation have higher variance than those learned with regular $K$-fold cross-validation, making leave-one-out CV a worse choice. However, my intuition tells me that in leave-one-out CV one should see relatively lower variance between models than in the $K$-fold CV, since we are only shifting one data point across folds and therefore the training sets between folds overlap substantially. Or going in the other direction, if $K$ is low in the $K$-fold CV, the training sets would be quite different across folds, and the resulting models are more likely to be different (hence higher variance). If the above argument is right, why would models learned with leave-one-out CV have higher variance?
why would models learned with leave-one-out CV have higher variance? [TL:DR] A summary of recent posts and debates (July 2018) This topic has been widely discussed both on this site, and in the scientific literature, with conflicting views, intuitions and conclusions. Back in 2013 when this question was first asked, the dominant view was that LOOCV leads to larger variance of the expected generalization error of a training algorithm producing models out of samples of size $n(K−1)/K$. This view, however, appears to be an incorrect generalization of a special case and I would argue that the correct answer is: "it depends..." Paraphrasing Yves Grandvalet the author of a 2004 paper on the topic I would summarize the intuitive argument as follows: If cross-validation were averaging independent estimates : then leave-one-out CV one should see relatively lower variance between models since we are only shifting one data point across folds and therefore the training sets between folds overlap substantially. This is not true when training sets are highly correlated : Correlation may increase with K and this increase is responsible for the overall increase of variance in the second scenario. Intuitively, in that situation, leave-one-out CV may be blind to instabilities that exist, but may not be triggered by changing a single point in the training data, which makes it highly variable to the realization of the training set. Experimental simulations from myself and others on this site, as well as those of researchers in the papers linked below will show you that there is no universal truth on the topic. Most experiments have monotonically decreasing or constant variance with $K$, but some special cases show increasing variance with $K$. The rest of this answer proposes a simulation on a toy example and an informal literature review. [Update] You can find here an alternative simulation for an unstable model in the presence of outliers. Simulations from a toy example showing decreasing / constant variance Consider the following toy example where we are fitting a degree 4 polynomial to a noisy sine curve. We expect this model to fare poorly for small datasets due to overfitting, as shown by the learning curve. Note that we plot 1 - MSE here to reproduce the illustration from ESLII page 243 Methodology You can find the code for this simulation here . The approach was the following: Generate 10,000 points from the distribution $sin(x) + \epsilon$ where the true variance of $\epsilon$ is known Iterate $i$ times (e.g. 100 or 200 times). At each iteration, change the dataset by resampling $N$ points from the original distribution For each data set $i$: Perform K-fold cross validation for one value of $K$ Store the average Mean Square Error (MSE) across the K-folds Once the loop over $i$ is complete, calculate the mean and standard deviation of the MSE across the $i$ datasets for the same value of $K$ Repeat the above steps for all $K$ in range $\{ 5,...,N\}$ all the way to Leave One Out CV (LOOCV) Impact of $K$ on the Bias and Variance of the MSE across $i$ datasets. Left Hand Side : Kfolds for 200 data points, Right Hand Side : Kfolds for 40 data points Standard Deviation of MSE (across data sets i) vs Kfolds From this simulation, it seems that: For small number $N = 40$ of datapoints, increasing $K$ until $K=10$ or so significantly improves both the bias and the variance. For larger $K$ there is no effect on either bias or variance. 
The intuition is that for too small effective training size, the polynomial model is very unstable, especially for $K \leq 5$ For larger $N = 200$ - increasing $K$ has no particular impact on both the bias and variance. An informal literature review The following three papers investigate the bias and variance of cross validation Ron Kohavi (1995) Yves Grandvalet and Yoshua Bengio (2004) Zhang and Yang (2015) Kohavi 1995 This paper is often refered to as the source for the argument that LOOC has higher variance. In section 1: “For example, leave-oneout is almost unbiased, but it has high variance, leading to unreliable estimates (Efron 1983)" This statement is source of much confusion, because it seems to be from Efron in 1983, not Kohavi. Both Kohavi's theoretical argumentations and experimental results go against this statement: Corollary 2 ( Variance in CV) Given a dataset and an inducer. If the inducer is stable under the perturbations caused by deleting the test instances for the folds in k-fold CV for various values of $k$, then the variance of the estimate will be the same Experiment In his experiment, Kohavi compares two algorithms: a C4.5 decision tree and a Naive Bayes classifier across multiple datasets from the UC Irvine repository. His results are below: LHS is accuracy vs folds (i.e. bias) and RHS is standard deviation vs folds In fact, only the decision tree on three data sets clearly has higher variance for increasing K. Other results show decreasing or constant variance. Finally, although the conclusion could be worded more strongly, there is no argument for LOO having higher variance, quite the opposite. From section 6. Summary "k-fold cross validation with moderate k values (10-20) reduces the variance... As k-decreases (2-5) and the samples get smaller, there is variance due to instability of the training sets themselves. Zhang and Yang The authors take a strong view on this topic and clearly state in Section 7.1 In fact, in least squares linear regression, Burman (1989) shows that among the k-fold CVs, in estimating the prediction error, LOO (i.e., n-fold CV) has the smallest asymptotic bias and variance. ... ... Then a theoretical calculation ( Lu , 2007) shows that LOO has the smallest bias and variance at the same time among all delete-n CVs with all possible n_v deletions considered Experimental results Similarly, Zhang's experiments point in the direction of decreasing variance with K, as shown below for the True model and the wrong model for Figure 3 and Figure 5. The only experiment for which variance increases with $K$ is for the Lasso and SCAD models. This is explained as follows on page 31: However, if model selection is involved, the performance of LOO worsens in variability as the model selection uncertainty gets higher due to large model space, small penalty coefficients and/or the use of data-driven penalty coefficients
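A condensed R re-implementation of the simulation loop described above (my own sketch with an arbitrary noise level; the linked code remains the authoritative version):
set.seed(1)
cv_mse_sd <- function(N = 40, K = 5, n_datasets = 100) {
  mses <- replicate(n_datasets, {
    d <- data.frame(x = runif(N, 0, 2 * pi))
    d$y <- sin(d$x) + rnorm(N, sd = 0.5)                       # noisy sine, degree-4 polynomial below
    folds <- sample(rep(seq_len(K), length.out = N))
    mean(sapply(seq_len(K), function(f) {
      fit <- lm(y ~ poly(x, 4), data = d[folds != f, ])
      mean((d$y[folds == f] - predict(fit, newdata = d[folds == f, ]))^2)
    }))
  })
  c(mean.MSE = mean(mses), sd.MSE = sd(mses))                  # sd across datasets = "variance" of CV
}
rbind(K5 = cv_mse_sd(K = 5), LOO = cv_mse_sd(K = 40))          # K = N gives leave-one-out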
{ "source": [ "https://stats.stackexchange.com/questions/61783", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2798/" ] }
61,829
I have values for True Positive (TP) and False Negative (FN) as follows: TP = 0.25 FN = 0.75 From those values, can we calculate False Positive (FP) and True Negative (TN) ?
There is quite a bit of terminological confusion in this area. Personally, I always find it useful to come back to a confusion matrix to think about this. In a classification / screening test, you can have four different situations: Condition: A Not A Test says “A” True positive | False positive ---------------------------------- Test says “Not A” False negative | True negative In this table, “true positive”, “false negative”, “false positive” and “true negative” are events (or their probability). What you have is therefore probably a true positive rate and a false negative rate . The distinction matters because it emphasizes that both numbers have a numerator and a denominator. Where things get a bit confusing is that you can find several definitions of “false positive rate” and “false negative rate”, with different denominators. For example, Wikipedia provides the following definitions (they seem pretty standard): True positive rate (or sensitivity): $TPR = TP/(TP + FN)$ False positive rate: $FPR = FP/(FP + TN)$ True negative rate (or specificity): $TNR = TN/(FP + TN)$ In all cases, the denominator is the column total. This also gives a cue to their interpretation: The true positive rate is the probability that the test says “A” when the real value is indeed A (i.e., it is a conditional probability, conditioned on A being true). This does not tell you how likely you are to be correct when calling “A” (i.e., the probability of a true positive, conditioned on the test result being “A”). Assuming the false negative rate is defined in the same way, we then have $FNR = 1 - TPR$ (note that your numbers are consistent with this). We cannot however directly derive the false positive rate from either the true positive or false negative rates because they provide no information on the specificity, i.e., how the test behaves when “not A” is the correct answer. The answer to your question would therefore be “no, it's not possible” because you have no information on the right column of the confusion matrix. There are however other definitions in the literature. For example, Fleiss ( Statistical methods for rates and proportions ) offers the following: “[…] the false positive rate […] is the proportion of people, among those responding positive who are actually free of the disease.” “The false negative rate […] is the proportion of people, among those responding negative on the test, who nevertheless have the disease.” (He also acknowledges the previous definitions but considers them “wasteful of precious terminology”, precisely because they have a straightforward relationship with sensitivity and specificity.) Referring to the confusion matrix, it means that $FPR = FP / (TP + FP)$ and $FNR = FN / (TN + FN)$ so the denominators are the row totals. Importantly, under these definitions, the false positive and false negative rates cannot directly be derived from the sensitivity and specificity of the test. You also need to know the prevalence (i.e., how frequent A is in the population of interest). Fleiss does not use or define the phrases “true negative rate” or the “true positive rate” but if we assume those are also conditional probabilities given a particular test result / classification, then @guill11aume answer is the correct one. In any case, you need to be careful with the definitions because there is no indisputable answer to your question.
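A tiny numeric illustration with made-up counts (which, as explained, cannot be derived from the question's two numbers alone):
TP <- 40; FN <- 10; FP <- 20; TN <- 130        # hypothetical counts
c(TPR       = TP / (TP + FN),                  # sensitivity / recall
  FNR       = FN / (TP + FN),                  # = 1 - TPR
  FPR       = FP / (FP + TN),
  TNR       = TN / (FP + TN),                  # specificity
  precision = TP / (TP + FP))                  # needs the right-hand column of the table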
{ "source": [ "https://stats.stackexchange.com/questions/61829", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5907/" ] }
61,834
I have done a job satisfaction survey where the DV is a 7 point Likert type scale and 5 IVs with 6 point Likert-type scale and 6 IVs with 5 point Likert type scale, all ordinal. Which type of analysis is suitable for this kind of study? Between ordinal and multinomial regression which is best suited to analyze IVs that would predict the DV?
{ "source": [ "https://stats.stackexchange.com/questions/61834", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/26931/" ] }
62,092
I'm studying pattern recognition and statistics and almost every book I open on the subject I bump into the concept of Mahalanobis distance . The books give sort of intuitive explanations, but still not good enough ones for me to actually really understand what is going on. If someone would ask me "What is the Mahalanobis distance?" I could only answer: "It's this nice thing, which measures distance of some kind" :) The definitions usually also contain eigenvectors and eigenvalues, which I have a little trouble connecting to the Mahalanobis distance. I understand the definition of eigenvectors and eigenvalues, but how are they related to the Mahalanobis distance? Does it have something to do with changing the base in Linear Algebra etc.? I have also read these former questions on the subject: What is Mahalanobis distance, & how is it used in pattern recognition? Intuitive explanations for Gaussian distribution function and mahalanobis distance (Math.SE) I have also read this explanation . The answers are good and pictures nice, but still I don't really get it...I have an idea but it's still in the dark. Can someone give a "How would you explain it to your grandma"-explanation so that I could finally wrap this up and never again wonder what the heck is a Mahalanobis distance? :) Where does it come from, what, why? UPDATE: Here is something which helps understanding the Mahalanobis formula: https://math.stackexchange.com/questions/428064/distance-of-a-test-point-from-the-center-of-an-ellipsoid
Here is a scatterplot of some multivariate data (in two dimensions): What can we make of it when the axes are left out? Introduce coordinates that are suggested by the data themselves. The origin will be at the centroid of the points (the point of their averages). The first coordinate axis (blue in the next figure) will extend along the "spine" of the points, which (by definition) is any direction in which the variance is the greatest. The second coordinate axis (red in the figure) will extend perpendicularly to the first one. (In more than two dimensions, it will be chosen in that perpendicular direction in which the variance is as large as possible, and so on.) We need a scale . The standard deviation along each axis will do nicely to establish the units along the axes. Remember the 68-95-99.7 rule: about two-thirds (68%) of the points should be within one unit of the origin (along the axis); about 95% should be within two units. That makes it easy to eyeball the correct units. For reference, this figure includes the unit circle in these units: That doesn't really look like a circle, does it? That's because this picture is distorted (as evidenced by the different spacings among the numbers on the two axes). Let's redraw it with the axes in their proper orientations--left to right and bottom to top--and with a unit aspect ratio so that one unit horizontally really does equal one unit vertically: You measure the Mahalanobis distance in this picture rather than in the original. What happened here? We let the data tell us how to construct a coordinate system for making measurements in the scatterplot. That's all it is. Although we had a few choices to make along the way (we could always reverse either or both axes; and in rare situations the directions along the "spines"--the principal directions --are not unique), they do not change the distances in the final plot. Technical comments (Not for grandma, who probably started to lose interest as soon as numbers reappeared on the plots, but to address the remaining questions that were posed.) Unit vectors along the new axes are the eigenvectors (of either the covariance matrix or its inverse). We noted that undistorting the ellipse to make a circle divides the distance along each eigenvector by the standard deviation: the square root of the covariance. Letting $C$ stand for the covariance function, the new (Mahalanobis) distance between two points $x$ and $y$ is the distance from $x$ to $y$ divided by the square root of $C(x-y, x-y)$ . The corresponding algebraic operations, thinking now of $C$ in terms of its representation as a matrix and $x$ and $y$ in terms of their representations as vectors, are written $\sqrt{(x-y)'C^{-1}(x-y)}$ . This works regardless of what basis is used to represent vectors and matrices. In particular, this is the correct formula for the Mahalanobis distance in the original coordinates. The amounts by which the axes are expanded in the last step are the (square roots of the) eigenvalues of the inverse covariance matrix. Equivalently, the axes are shrunk by the (roots of the) eigenvalues of the covariance matrix. Thus, the more the scatter, the more the shrinking needed to convert that ellipse into a circle. Although this procedure always works with any dataset, it looks this nice (the classical football-shaped cloud) for data that are approximately multivariate Normal. 
In other cases, the point of averages might not be a good representation of the center of the data or the "spines" (general trends in the data) will not be identified accurately using variance as a measure of spread. The shifting of the coordinate origin, rotation, and expansion of the axes collectively form an affine transformation. Apart from that initial shift, this is a change of basis from the original one (using unit vectors pointing in the positive coordinate directions) to the new one (using a choice of unit eigenvectors). There is a strong connection with Principal Components Analysis (PCA) . That alone goes a long way towards explaining the "where does it come from" and "why" questions--if you weren't already convinced by the elegance and utility of letting the data determine the coordinates you use to describe them and measure their differences. For multivariate Normal distributions (where we can carry out the same construction using properties of the probability density instead of the analogous properties of the point cloud), the Mahalanobis distance (to the new origin) appears in place of the " $x$ " in the expression $\exp(-\frac{1}{2} x^2)$ that characterizes the probability density of the standard Normal distribution. Thus, in the new coordinates, a multivariate Normal distribution looks standard Normal when projected onto any line through the origin. In particular, it is standard Normal in each of the new coordinates. From this point of view, the only substantial sense in which multivariate Normal distributions differ among one another is in terms of how many dimensions they use. (Note that this number of dimensions may be, and sometimes is, less than the nominal number of dimensions.)
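For readers who want to connect the geometry to a computation, a short R sketch comparing the "by hand" formula with the built-in function (simulated data, purely illustrative):
library(MASS)
set.seed(42)
X  <- mvrnorm(200, mu = c(0, 0), Sigma = matrix(c(2, 1, 1, 1), 2))  # football-shaped cloud
mu <- colMeans(X); S <- cov(X)
d2_hand <- apply(X, 1, function(x) t(x - mu) %*% solve(S) %*% (x - mu))
d2_base <- mahalanobis(X, center = mu, cov = S)      # squared Mahalanobis distances
all.equal(as.numeric(d2_hand), d2_base)              # TRUE up to numerical error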
{ "source": [ "https://stats.stackexchange.com/questions/62092", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/18528/" ] }
62,101
I am training a rule based algorithm (PRISM or CN.2) with n classes (y_1,y_2,..,y_n) . All rules in the training RuleSet are in DFN form, like: IF t_1 OR t_2 OR ... t_m THEN y_i (terms) , where t_1 == lit_1 AND lit_2 AND ... lit_n (literals) I believe it is possible to have an instance (example) that can be classified with more than one class when predicting in the test sample based on the RuleSet. My question thus is: IF it is possible, How to proceed? (Do a major vote for the different classes?). Could you guys please provide me some links describing this type of "problem"?
Here is a scatterplot of some multivariate data (in two dimensions): What can we make of it when the axes are left out? Introduce coordinates that are suggested by the data themselves. The origin will be at the centroid of the points (the point of their averages). The first coordinate axis (blue in the next figure) will extend along the "spine" of the points, which (by definition) is any direction in which the variance is the greatest. The second coordinate axis (red in the figure) will extend perpendicularly to the first one. (In more than two dimensions, it will be chosen in that perpendicular direction in which the variance is as large as possible, and so on.) We need a scale . The standard deviation along each axis will do nicely to establish the units along the axes. Remember the 68-95-99.7 rule: about two-thirds (68%) of the points should be within one unit of the origin (along the axis); about 95% should be within two units. That makes it easy to eyeball the correct units. For reference, this figure includes the unit circle in these units: That doesn't really look like a circle, does it? That's because this picture is distorted (as evidenced by the different spacings among the numbers on the two axes). Let's redraw it with the axes in their proper orientations--left to right and bottom to top--and with a unit aspect ratio so that one unit horizontally really does equal one unit vertically: You measure the Mahalanobis distance in this picture rather than in the original. What happened here? We let the data tell us how to construct a coordinate system for making measurements in the scatterplot. That's all it is. Although we had a few choices to make along the way (we could always reverse either or both axes; and in rare situations the directions along the "spines"--the principal directions --are not unique), they do not change the distances in the final plot. Technical comments (Not for grandma, who probably started to lose interest as soon as numbers reappeared on the plots, but to address the remaining questions that were posed.) Unit vectors along the new axes are the eigenvectors (of either the covariance matrix or its inverse). We noted that undistorting the ellipse to make a circle divides the distance along each eigenvector by the standard deviation: the square root of the covariance. Letting $C$ stand for the covariance function, the new (Mahalanobis) distance between two points $x$ and $y$ is the distance from $x$ to $y$ divided by the square root of $C(x-y, x-y)$ . The corresponding algebraic operations, thinking now of $C$ in terms of its representation as a matrix and $x$ and $y$ in terms of their representations as vectors, are written $\sqrt{(x-y)'C^{-1}(x-y)}$ . This works regardless of what basis is used to represent vectors and matrices. In particular, this is the correct formula for the Mahalanobis distance in the original coordinates. The amounts by which the axes are expanded in the last step are the (square roots of the) eigenvalues of the inverse covariance matrix. Equivalently, the axes are shrunk by the (roots of the) eigenvalues of the covariance matrix. Thus, the more the scatter, the more the shrinking needed to convert that ellipse into a circle. Although this procedure always works with any dataset, it looks this nice (the classical football-shaped cloud) for data that are approximately multivariate Normal. 
In other cases, the point of averages might not be a good representation of the center of the data or the "spines" (general trends in the data) will not be identified accurately using variance as a measure of spread. The shifting of the coordinate origin, rotation, and expansion of the axes collectively form an affine transformation. Apart from that initial shift, this is a change of basis from the original one (using unit vectors pointing in the positive coordinate directions) to the new one (using a choice of unit eigenvectors). There is a strong connection with Principal Components Analysis (PCA) . That alone goes a long way towards explaining the "where does it come from" and "why" questions--if you weren't already convinced by the elegance and utility of letting the data determine the coordinates you use to describe them and measure their differences. For multivariate Normal distributions (where we can carry out the same construction using properties of the probability density instead of the analogous properties of the point cloud), the Mahalanobis distance (to the new origin) appears in place of the " $x$ " in the expression $\exp(-\frac{1}{2} x^2)$ that characterizes the probability density of the standard Normal distribution. Thus, in the new coordinates, a multivariate Normal distribution looks standard Normal when projected onto any line through the origin. In particular, it is standard Normal in each of the new coordinates. From this point of view, the only substantial sense in which multivariate Normal distributions differ among one another is in terms of how many dimensions they use. (Note that this number of dimensions may be, and sometimes is, less than the nominal number of dimensions.)
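A minimal R sketch (simulated data; all variable names here are illustrative, not taken from the text above) connecting the geometric construction to the algebraic formula: the built-in mahalanobis() returns the squared distance, so its square root matches the direct computation and the eigen-decomposition route.
# Sketch only: simulated bivariate data with correlated coordinates.
set.seed(1)
X  <- MASS::mvrnorm(500, mu = c(0, 0), Sigma = matrix(c(2, 1.5, 1.5, 2), 2))
C  <- cov(X)           # covariance matrix
mu <- colMeans(X)      # centroid = new origin

x  <- c(3, 1)          # a new point
# Direct formula: sqrt((x - mu)' C^{-1} (x - mu))
d1 <- sqrt(t(x - mu) %*% solve(C) %*% (x - mu))
# Built-in mahalanobis() returns the squared distance
d2 <- sqrt(mahalanobis(x, center = mu, cov = C))
# Rotate to the principal axes, rescale each axis by its standard
# deviation, then take the ordinary Euclidean length
E  <- eigen(C)
d3 <- sqrt(sum((t(E$vectors) %*% (x - mu) / sqrt(E$values))^2))
c(d1, d2, d3)          # all three agree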
{ "source": [ "https://stats.stackexchange.com/questions/62101", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25504/" ] }
62,621
I have read some definitions of recall and precision, though every time in the context of information retrieval. I was wondering if someone could explain them a bit more in a classification context and maybe illustrate with some examples. Say, for example, I have a binary classifier that gives me a precision of 60% and a recall of 95%: is this a good classifier? Maybe, to help my goal a bit more: which is the best classifier, in your opinion? (The dataset is imbalanced; the majority class has twice the number of examples of the minority class.) I'd personally say model 5 because of the area under the receiver operating characteristic (ROC) curve. (As you can see here, model 8 has a low precision and a very high recall, but one of the lowest AUC_ROC values; does that make it a good model or a bad one?) Edit: I have an Excel file with more information: https://www.dropbox.com/s/6hq7ew5qpztwbo8/comparissoninbalance.xlsx In this document you can find the area under the ROC curve and the area under the precision-recall curve, together with the plots.
Whether a classifier is “good” really depends on What else is available for your particular problem. Obviously, you want a classifier to be better than random or naive guesses (e.g. classifying everything as belonging to the most common category) but some things are easier to classify than others. The cost of different mistakes (false alarm vs. false negatives) and the base rate. It's very important to distinguish the two and work out the consequences as it's possible to have a classifier with a very high accuracy (correct classifications on some test sample) that is completely useless in practice (say you are trying to detect a rare disease or some uncommon mischievous behavior and plan to launch some action upon detection; Large-scale testing costs something and the remedial action/treatment also typically involve significant risks/costs so considering that most hits are going to be false positives, from a cost/benefit perspective it might be better to do nothing). To understand the link between recall/precision on the one hand and sensitivity/specificity on the other hand, it's useful to come back to a confusion matrix: Condition: A Not A Test says “A” True positive (TP) | False positive (FP) ---------------------------------- Test says “Not A” False negative (FN) | True negative (TN) Recall is TP/(TP + FN) whereas precision is TP/(TP+FP). This reflects the nature of the problem: In information retrieval, you want to identify as many relevant documents as you can (that's recall) and avoid having to sort through junk (that's precision). Using the same table, traditional classification metrics are (1) sensitivity defined as TP/(TP + FN) and (2) specificity defined as TN/(FP + TN). So recall and sensitivity are simply synonymous but precision and specificity are defined differently (like recall and sensitivity, specificity is defined with respect to the column total whereas precision refers to the row total). Precision is also sometimes called the “positive predictive value” or, rarely, the “false positive rate” (but see my answer to Relation between true positive, false positive, false negative and true negative regarding the confusion surrounding this definition of the false positive rate). Interestingly, information retrieval metrics do not involve the “true negative” count. This makes sense: In information retrieval, you don't care about correctly classifying negative instances per se , you just don't want too many of them polluting your results (see also Why doesn't recall take into account true negatives? ). Because of this difference, it's not possible to go from specificity to precision or the other way around without additional information, namely the number of true negatives or, alternatively, the overall proportion of positive and negative cases. However, for the same corpus/test set, higher specificity always means better precision so they are closely related. In an information retrieval context, the goal is typically to identify a small number of matches from a large number of documents. Because of this asymmetry, it is in fact much more difficult to get a good precision than a good specificity while keeping the sensitivity/recall constant. Since most documents are irrelevant, you have many more occasions for false alarms than true positives and these false alarms can swamp the correct results even if the classifier has impressive accuracy on a balanced test set (this is in fact what's going on in the scenarios I mentioned in my point 2 above). 
Consequently, you really need to optimize precision and not merely to ensure decent specificity because even impressive-looking rates like 99% or more are sometimes not enough to avoid numerous false alarms. There is usually a trade-off between sensitivity and specificity (or recall and precision). Intuitively, if you cast a wider net, you will detect more relevant documents/positive cases (higher sensitivity/recall) but you will also get more false alarms (lower specificity and lower precision). If you classify everything in the positive category, you have 100% recall/sensitivity, a bad precision and a mostly useless classifier (“mostly” because if you don't have any other information, it is perfectly reasonable to assume it's not going to rain in a desert and to act accordingly so maybe the output is not useless after all; of course, you don't need a sophisticated model for that). Considering all this, 60% precision and 95% recall does not sound too bad but, again, this really depends on the domain and what you intend to do with this classifier. Some additional information regarding the latest comments/edits: Again, the performance you can expect depends on the specifics (in this context this would be things like the exact set of emotions present in the training set, quality of the picture/video, luminosity, occlusion, head movements, acted or spontaneous videos, person-dependent or person-independent model, etc.) but F1 over .7 sounds good for this type of applications even if the very best models can do better on some data sets [see Valstar, M.F., Mehu, M., Jiang, B., Pantic, M., & Scherer, K. (2012). Meta-analysis of the first facial expression recognition challenge. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 42 (4), 966-979.] Whether such a model is useful in practice is a completely different question and obviously depends on the application. Note that facial “expression” is itself a complex topic and going from a typical training set (posed expressions) to any real-life situation is not easy. This is rather off-topic on this forum but it will have serious consequences for any practical application you might contemplate. Finally, head-to-head comparison between models is yet another question. My take on the numbers you presented is that there isn't any dramatic difference between the models (if you refer to the paper I cited above, the range of F1 scores for well-known models in this area is much broader). In practice, technical aspects (simplicity/availability of standard libraries, speed of the different techniques, etc.) would likely decide which model is implemented, except perhaps if the cost/benefits and overall rate make you strongly favor either precision or recall.
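A small numerical sketch in R, with invented confusion-matrix counts chosen only to illustrate the base-rate point above (none of these numbers come from a real classifier):
# Invented counts for a rare-positive problem (0.1% base rate); purely illustrative.
TP <- 95;  FN <- 5        # 100 actual positives in the test set
FP <- 900; TN <- 99000    # 99,900 actual negatives

recall      <- TP / (TP + FN)    # = sensitivity                 -> 0.95
precision   <- TP / (TP + FP)    # = positive predictive value   -> ~0.095
specificity <- TN / (FP + TN)    #                               -> ~0.991
c(recall = recall, precision = precision, specificity = specificity)
# Despite ~99% specificity, most flagged cases are false alarms,
# which is why precision collapses when positives are rare.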
{ "source": [ "https://stats.stackexchange.com/questions/62621", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17192/" ] }
62,622
I have as input a number of points that I need to partition into clusters. Each point has a number of features that should ideally be used to find the similarity between each point and the others. Some of these features are scalar values (one number) and others are vectors. For example, assume that each point has the following features: S1: scalar value; V1: 48 $\times$ 1 vector; V2: 48 $\times$ 1 vector. For example, one point may have (S1, V1, V2) as (100, {0, 100, 20, 30}, {75, 0, 10, 5}). My idea is to use cosine similarity to find how similar the vector V1 or V2 of one point is to the vector V1 or V2 of another point. I have already computed the similarity matrices between all points in terms of V1 and V2 similarities. By exploring the standard clustering algorithms in R, I have found that k-means turns out to use the Euclidean distance, which might be suitable for clustering points according to their scalar values, but it doesn't work for the situation where I have hybrid types of features (scalars and vectors). Also, K-medoid clustering seems to support only the Euclidean and Manhattan distances. I think what should be done is to generate one more distance/similarity matrix between all points based on the scalar value, so that we end up with three similarity matrices that show the similarity between each point and the other points according to each feature, regardless of whether it is a scalar or a vector, and use those matrices for finding the neighbourhood of points while clustering. I wonder if there is an implementation of a clustering algorithm that accepts as input the similarity matrices (or, alternatively, the dissimilarity/distance matrices) between the features of multiple points and uses them for clustering?
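One way to sketch the combined-matrix idea described in the question in R (the weights, the rescaling, and all object names here are arbitrary illustrative choices, not a recommended recipe): both hclust() and pam() from the cluster package accept a precomputed dissimilarity object, so the per-feature matrices can be merged and passed in directly.
library(cluster)

# Placeholder data standing in for the real per-feature dissimilarities:
# D_s from the scalar S1, D_v1 and D_v2 as (1 - cosine similarity) of V1, V2.
n    <- 50
D_s  <- as.matrix(dist(matrix(runif(n), ncol = 1)))
D_v1 <- as.matrix(dist(matrix(runif(n * 48), nrow = n)))
D_v2 <- as.matrix(dist(matrix(runif(n * 48), nrow = n)))

# Rescale each matrix so no single feature dominates, then combine
# (equal weights are just one possible choice).
rescale <- function(D) D / max(D)
D_all   <- (rescale(D_s) + rescale(D_v1) + rescale(D_v2)) / 3

cl_pam  <- pam(as.dist(D_all), k = 3)             # k-medoids on the combined matrix
cl_hier <- cutree(hclust(as.dist(D_all)), k = 3)  # hierarchical clustering
table(cl_pam$clustering, cl_hier)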
{ "source": [ "https://stats.stackexchange.com/questions/62622", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/27317/" ] }
63,152
I'm sure many people will respond with links to 'let me google that for you', so I want to say that I've tried to figure this out, so please forgive my lack of understanding here, but I cannot figure out how the practical implementation of a neural network actually works. I understand the input layer and how to normalize the data, and I also understand the bias unit, but when it comes to the hidden layer, what the actual computation in that layer is, and how it maps to the output, is still a little foggy. I've seen diagrams with question marks in the hidden layer, boolean functions like AND/OR/XOR, activation functions, input nodes that map to all of the hidden units, and input nodes that map to only a few hidden units each, so I just have a few questions on the practical aspect. Of course, a simple explanation of the entire neural network process, like you would explain it to a child, would be awesome. What computations are done in the hidden layer? How are those computations mapped to the output layer? How does the output layer work? Does it de-normalize the data from the hidden layer? Why are some layers in the input layer connected to the hidden layer and some are not?
Three sentence version: Each layer can apply any function you want to the previous layer (usually a linear transformation followed by a squashing nonlinearity). The hidden layers' job is to transform the inputs into something that the output layer can use. The output layer transforms the hidden layer activations into whatever scale you wanted your output to be on. Like you're 5: If you want a computer to tell you if there's a bus in a picture, the computer might have an easier time if it had the right tools. So your bus detector might be made of a wheel detector (to help tell you it's a vehicle) and a box detector (since the bus is shaped like a big box) and a size detector (to tell you it's too big to be a car). These are the three elements of your hidden layer: they're not part of the raw image, they're tools you designed to help you identify busses. If all three of those detectors turn on (or perhaps if they're especially active), then there's a good chance you have a bus in front of you. Neural nets are useful because there are good tools (like backpropagation) for building lots of detectors and putting them together. Like you're an adult A feed-forward neural network applies a series of functions to the data. The exact functions will depend on the neural network you're using: most frequently, these functions each compute a linear transformation of the previous layer, followed by a squashing nonlinearity. Sometimes the functions will do something else (like computing logical functions in your examples, or averaging over adjacent pixels in an image). So the roles of the different layers could depend on what functions are being computed, but I'll try to be very general. Let's call the input vector $x$, the hidden layer activations $h$, and the output activation $y$. You have some function $f$ that maps from $x$ to $h$ and another function $g$ that maps from $h$ to $y$. So the hidden layer's activation is $f(x)$ and the output of the network is $g(f(x))$. Why have two functions ($f$ and $g$) instead of just one? If the level of complexity per function is limited, then $g(f(x))$ can compute things that $f$ and $g$ can't do individually. An example with logical functions: For example, if we only allow $f$ and $g$ to be simple logical operators like "AND", "OR", and "NAND", then you can't compute other functions like "XOR" with just one of them. On the other hand, we could compute "XOR" if we were willing to layer these functions on top of each other: First layer functions: Make sure that at least one element is "TRUE" (using OR) Make sure that they're not all "TRUE" (using NAND) Second layer function: Make sure that both of the first-layer criteria are satisfied (using AND) The network's output is just the result of this second function. The first layer transforms the inputs into something that the second layer can use so that the whole network can perform XOR. An example with images: Slide 61 from this talk --also available here as a single image--shows (one way to visualize) what the different hidden layers in a particular neural network are looking for. The first layer looks for short pieces of edges in the image: these are very easy to find from raw pixel data, but they're not very useful by themselves for telling you if you're looking at a face or a bus or an elephant. The next layer composes the edges: if the edges from the bottom hidden layer fit together in a certain way, then one of the eye-detectors in the middle of left-most column might turn on. 
It would be hard to make a single layer that was so good at finding something so specific from the raw pixels: eye detectors are much easier to build out of edge detectors than out of raw pixels. The next layer up composes the eye detectors and the nose detectors into faces. In other words, these will light up when the eye detectors and nose detectors from the previous layer turn on with the right patterns. These are very good at looking for particular kinds of faces: if one or more of them lights up, then your output layer should report that a face is present. This is useful because face detectors are easy to build out of eye detectors and nose detectors, but really hard to build out of pixel intensities. So each layer gets you farther and farther from the raw pixels and closer to your ultimate goal (e.g. face detection or bus detection). Answers to assorted other questions "Why are some layers in the input layer connected to the hidden layer and some are not?" The disconnected nodes in the network are called "bias" nodes. There's a really nice explanation here . The short answer is that they're like intercept terms in regression. "Where do the "eye detector" pictures in the image example come from?" I haven't double-checked the specific images I linked to, but in general, these visualizations show the set of pixels in the input layer that maximize the activity of the corresponding neuron. So if we think of the neuron as an eye detector, this is the image that the neuron considers to be most eye-like. Folks usually find these pixel sets with an optimization (hill-climbing) procedure. In this paper by some Google folks with one of the world's largest neural nets, they show a "face detector" neuron and a "cat detector" neuron this way, as well as a second way: They also show the actual images that activate the neuron most strongly (figure 3, figure 16). The second approach is nice because it shows how flexible and nonlinear the network is--these high-level "detectors" are sensitive to all these images, even though they don't particularly look similar at the pixel level. Let me know if anything here is unclear or if you have any more questions.
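The XOR construction described above can be written out as a tiny runnable sketch; the step activation and the particular weights below are just one hand-picked choice that realizes OR, NAND, and AND.
# Two-layer network computing XOR: hidden unit 1 = OR, hidden unit 2 = NAND,
# output unit = AND of the two hidden activations.
step <- function(z) as.numeric(z > 0)    # threshold activation

forward <- function(x) {
  W1 <- rbind(c(1, 1),      # weights into the OR unit
              c(-1, -1))    # weights into the NAND unit
  b1 <- c(-0.5, 1.5)
  h  <- step(W1 %*% x + b1)        # hidden layer: f(x)
  W2 <- c(1, 1); b2 <- -1.5
  step(sum(W2 * h) + b2)           # output layer: g(f(x)) = AND(h1, h2)
}

inputs <- list(c(0, 0), c(0, 1), c(1, 0), c(1, 1))
sapply(inputs, forward)    # 0 1 1 0, i.e. XOR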
{ "source": [ "https://stats.stackexchange.com/questions/63152", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17679/" ] }
63,419
I am having difficulty understanding one logistic regression explanation. The logistic regression relates temperature to whether fish die or do not die. The slope of the logistic regression is 1.76. Then the odds that fish die increase by a factor of exp(1.76) = 5.8. In other words, the odds that fish die increase by a factor of 5.8 for each change of 1 degree Celsius in temperature. Because 50% of the fish die in 2012, a 1 degree Celsius increase on the 2012 temperature would raise the occurrence of fish dying to 82%. A 2 degree Celsius increase on the 2012 temperature would raise the occurrence of fish dying to 97%. A 3 degree Celsius increase -> 100% of the fish die. How do we calculate 1, 2 and 3? (82%, 97% and 100%)
The odds is not the same as the probability. The odds is the number of "successes" (deaths) per "failure" (continuing to live), while the probability is the proportion of "successes". I find it instructive to compare how one would estimate these two: An estimate of the odds would be the ratio of the number of successes over the number of failures, while an estimate of the probability would be the ratio of the number of successes over the total number of observations. Odds and probabilities are both ways of quantifying how likely an event is, so it is not surprising that there is a one-to-one relation between the two. You can turn a probability ($p$) into an odds ($o$) using the following formula: $o=\frac{p}{1-p}$. You can turn an odds into a probability like so: $p = \frac{o}{1+o}$. So to come back to your example: The baseline probability is .5, so you would expect to find 1 failure per success, i.e. the baseline odds is 1. This odds is multiplied by a factor of 5.8, so the odds would become 5.8, which you can transform back to a probability as $\frac{5.8}{1+5.8}\approx.85$, or 85%. A two degree change in temperature is associated with a change in the odds of death by a factor of $5.8^2=33.6$. The baseline odds is still 1, so the new odds would be 33.6, i.e. you would expect 33.6 dead fish for every live fish, and the probability of finding a dead fish is $\frac{33.6}{1+33.6} \approx .97$. A three degree change in temperature leads to a new odds of death of $1\times 5.8^3\approx195$, so the probability of finding a dead fish is $\frac{195}{1+195}\approx.99$.
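The three probabilities can be checked with a couple of lines of R; the sketch below uses the rounded factor 5.8, and the last line does the same computation on the log-odds scale with the raw slope 1.76.
# Baseline odds at 50% mortality is 1; each extra degree multiplies it by 5.8.
odds <- 1 * 5.8^(1:3)
prob <- odds / (1 + odds)
round(prob, 2)                                 # 0.85 0.97 0.99

# Equivalent, working directly on the logit scale:
round(plogis(qlogis(0.5) + 1.76 * (1:3)), 2)   # 0.85 0.97 0.99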
{ "source": [ "https://stats.stackexchange.com/questions/63419", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/27657/" ] }