93,181
How do ridge, LASSO and elasticnet regularization methods compare? What are their respective advantages and disadvantages? Any good technical paper, or lecture notes would be appreciated as well.
In The Elements of Statistical Learning book, Hastie et al. provide a very insightful and thorough comparison of these shrinkage techniques. The book is available online ( pdf ). The comparison is done in section 3.4.3, page 69. The main difference between Lasso and Ridge is the penalty term they use. Ridge uses $L_2$ penalty term which limits the size of the coefficient vector. Lasso uses $L_1$ penalty which imposes sparsity among the coefficients and thus, makes the fitted model more interpretable. Elasticnet is introduced as a compromise between these two techniques, and has a penalty which is a mix of $L_1$ and $L_2$ norms.
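As a quick hands-on illustration of the point above (my own sketch, not part of the original answer, and assuming the glmnet package is available): the alpha argument moves you between the three penalties, with alpha = 0 giving ridge, alpha = 1 the lasso, and intermediate values the elastic net.

library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- drop(x[, 1:3] %*% c(2, -1, 0.5)) + rnorm(100)   # only 3 of 20 predictors matter
ridge   <- cv.glmnet(x, y, alpha = 0)     # L2 penalty: shrinks all coefficients, none exactly zero
lasso   <- cv.glmnet(x, y, alpha = 1)     # L1 penalty: many coefficients set exactly to zero
elastic <- cv.glmnet(x, y, alpha = 0.5)   # compromise: a mix of the L1 and L2 penalties
coef(lasso, s = "lambda.min")             # note the exact zeros, i.e. the sparsity

Comparing coef(ridge, s = "lambda.min") with the lasso output makes the interpretability/sparsity contrast described by Hastie et al. easy to see.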
{ "source": [ "https://stats.stackexchange.com/questions/93181", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3269/" ] }
93,540
This seems to be a basic issue, but I just realized that I actually don't know how to test equality of coefficients from two different regressions. Can anyone shed some light on this? More formally, suppose I ran the following two regressions: $$ y_1 = X_1\beta_1 + \epsilon_1 $$ and $$ y_2 = X_2\beta_2 + \epsilon_2 $$ where $X_i$ refers to the design matrix of regression $i$, and $\beta_i$ to the vector of coefficients in regression $i$. Note that $X_1$ and $X_2$ are potentially very different, with different dimensions etc. I am interested in, for instance, whether or not $\hat\beta_{11} \neq \hat\beta_{21}$. If these came from the same regression, this would be trivial. But since they come from different ones, I am not quite sure how to do it. Does anyone have an idea or can give me some pointers? My problem in detail: My first intuition was to look at the confidence intervals, and if they overlap, then I would say they are essentially the same. This procedure does not come with the correct size of the test, though (i.e. each individual confidence interval has $\alpha=0.05$, say, but looking at them jointly will not have the same probability). My "second" intuition was to conduct a normal t-test. That is, take $$ \frac{\beta_{11}-\beta_{21}}{sd(\beta_{11})} $$ where $\beta_{21}$ is taken as the value of my null hypothesis. This does not take into account the estimation uncertainty of $\beta_{21}$, though, and the answer may depend on the order of the regressions (which one I call 1 and 2). My third idea was to do it as in a standard test for equality of two coefficients from the same regression, that is take $$ \frac{\beta_{11}-\beta_{21}}{sd(\beta_{11}-\beta_{21})} $$ The complication arises due to the fact that both come from different regressions. Note that $$ Var(\beta_{11}-\beta_{21}) = Var(\beta_{11}) + Var(\beta_{21}) -2 Cov(\beta_{11},\beta_{21}) $$ but since they are from different regressions, how would I get $Cov(\beta_{11},\beta_{21})$? This led me to ask this question here. This must be a standard procedure / standard test, but I could not find anything that was sufficiently similar to this problem. So, if anyone can point me to the correct procedure, I would be very grateful!
Although this isn't a common analysis, it really is one of interest. The accepted answer fits the way you asked your question, but I'm going to provide another reasonably well accepted technique that may or may not be equivalent (I'll leave it to better minds to comment on that). This approach is to use the following Z test: $Z = \frac{\beta_1-\beta_2}{\sqrt{(SE\beta_1)^2+(SE\beta_2)^2}}$ where $SE\beta$ is the standard error of $\beta$. This equation is provided by Clogg, C. C., Petkova, E., & Haritou, A. (1995). Statistical methods for comparing regression coefficients between models. American Journal of Sociology, 100 (5), 1261-1293. and is cited by Paternoster, R., Brame, R., Mazerolle, P., & Piquero, A. (1998). Using the correct statistical test for equality of regression coefficients. Criminology, 36 (4), 859-866. equation 4, which is available free of a paywall. I've adapted Paternoster's formula to use $\beta$ rather than $b$ because it is possible that you might be interested in different DVs for some awful reason and my memory of Clogg et al. was that their formula used $\beta$. I also remember cross checking this formula against Cohen, Cohen, West, and Aiken, and the root of the same thinking can be found there in the confidence interval of differences between coefficients, equation 2.8.6, pg 46-47.
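A small numerical sketch of this Z test (mine, not from the cited papers), comparing the coefficient on a common predictor x estimated in two separate regressions; the data are simulated placeholders.

set.seed(1)
d1 <- data.frame(x = rnorm(100)); d1$y <- 1 + 0.5 * d1$x + rnorm(100)
d2 <- data.frame(x = rnorm(150)); d2$y <- 2 + 0.8 * d2$x + rnorm(150)
s1 <- coef(summary(lm(y ~ x, data = d1)))["x", ]   # estimate and SE from regression 1
s2 <- coef(summary(lm(y ~ x, data = d2)))["x", ]   # estimate and SE from regression 2
Z <- (s1["Estimate"] - s2["Estimate"]) / sqrt(s1["Std. Error"]^2 + s2["Std. Error"]^2)
2 * pnorm(-abs(Z))   # two-sided p-value for H0: the two coefficients are equal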
{ "source": [ "https://stats.stackexchange.com/questions/93540", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/31634/" ] }
93,569
Since we are using the logistic function to transform a linear combination of the input into a non-linear output, how can logistic regression be considered a linear classifier? Linear regression is just like a neural network without the hidden layer, so why are neural networks considered non-linear classifiers and logistic regression is linear?
Logistic regression is linear in the sense that the predictions can be written as $$ \hat{p} = \frac{1}{1 + e^{-\hat{\mu}}}, \text{ where } \hat{\mu} = \hat{\theta} \cdot x. $$ Thus, the prediction can be written in terms of $\hat{\mu}$, which is a linear function of $x$. (More precisely, the predicted log-odds is a linear function of $x$.) Conversely, there is no way to summarize the output of a neural network in terms of a linear function of $x$, and that is why neural networks are called non-linear. Also, for logistic regression, the decision boundary $\{x:\hat{p} = 0.5\}$ is linear: it's the solution to $\hat{\theta} \cdot x = 0$. The decision boundary of a neural network is in general not linear.
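To make the geometry concrete, here is a minimal sketch (mine, not the answerer's) with two predictors: the fitted log-odds $\hat{\theta} \cdot x$ are linear, so the $\hat{p} = 0.5$ boundary is a straight line in the $(x_1, x_2)$ plane.

set.seed(42)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- rbinom(n, 1, plogis(1 + 2 * x1 - 3 * x2))   # simulate from a logistic model
fit <- glm(y ~ x1 + x2, family = binomial)
b <- coef(fit)                                    # b[1] intercept, b[2] for x1, b[3] for x2
plot(x1, x2, col = ifelse(y == 1, "red", "blue"))
# decision boundary: b[1] + b[2]*x1 + b[3]*x2 = 0  =>  x2 = -(b[1] + b[2]*x1)/b[3]
abline(a = -b[1] / b[3], b = -b[2] / b[3])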
{ "source": [ "https://stats.stackexchange.com/questions/93569", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/27779/" ] }
93,592
I am planning my wedding. I wish to estimate how many people will come to my wedding. I have created a list of people and the chance that they will attend in percentage. For example Dad 100% Mom 100% Bob 50% Marc 10% Jacob 25% Joseph 30% I have a list of about 230 people with percentages. How can I estimate how many people will attend my wedding? Can I simply add up the percentages and divide it by 100? For example, if I invite 10 people with each a 10% chance of coming, I can expect 1 person? If I invite 20 people with a 50% chance of coming, can I expect 10 people? UPDATE: 140 people came to my wedding :). Using the techniques described below I predicted about 150. Not too shabby!
Assuming that the decisions of invited persons to come to the wedding are independent, the number of guests that will come to the wedding can be modeled as the sum of Bernoulli random variables that have not necessarily identical probabilities of success. This corresponds to the Poisson binomial distribution. Let $X$ be a random variable corresponding to the total number of persons that will come to your wedding out of $N$ invited persons. The expected number of participants is indeed the sum of individual ''show-up'' probabilities $p_i$, that is $$ E(X) = \sum_{i = 1}^N p_i . $$ The derivation of confidence intervals isn't straightforward given the form of the probability mass function. However, they are easy to approximate with Monte Carlo simulations. The following figure shows an example of the distribution of the number of participants to the wedding based on 10000 simulated scenarios (right) using some fake show-up probabilities for the 230 invited persons (left). The R code used to run this simulation is shown below; it provides approximations of confidence intervals.

## Parameters
N <- 230        # Number of potential guests
nb.sim <- 10000 # Number of simulations

## Create example of groups of guests with same show-up probability
set.seed(345)
tmp <- hist(rbeta(N, 3, 2), breaks = seq(0, 1, length.out = 21))
p <- tmp$breaks[-1] # Group show-up probabilities
n <- tmp$counts     # Number of person per group

## Generate number of guests by group
guest.mat <- matrix(NA, nrow = nb.sim, ncol = length(p))
for (j in 1:length(p)) {
    guest.mat[, j] <- rbinom(nb.sim, n[j], p[j])
}

## Number of guest per scenario
nb.guests <- apply(guest.mat, 1, sum)

## Result summary
par(mfrow = c(1, 2))
barplot(n, names.arg = p, xlab = "Probability group", ylab = "Group size")
hist(nb.guests, breaks = 21, probability = TRUE, main = "", xlab = "Guests")
par(mfrow = c(1, 1))

## Theoretical mean and variance
c(sum(n * p), sum(n * p * (1-p)))
#[1] 148.8500  43.8475

## Sample mean and variance
c(mean(nb.guests), var(nb.guests))
#[1] 148.86270  43.23657

## Sample quantiles
quantile(nb.guests, probs = c(0.01, 0.05, 0.5, 0.95, 0.99))
#    1%     5%    50%    95%    99%
#133.99 138.00 149.00 160.00 164.00
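A shorter variant of the same idea (my own sketch, using made-up show-up probabilities): simulate each guest directly with his or her own probability instead of forming groups.

set.seed(1)
p <- runif(230, 0.1, 1)   # hypothetical per-guest show-up probabilities
sims <- replicate(10000, sum(rbinom(length(p), size = 1, prob = p)))
c(expected = sum(p), simulated = mean(sims))   # E(X) is just the sum of the p_i
quantile(sims, c(0.05, 0.95))                  # rough 90% interval for attendance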
{ "source": [ "https://stats.stackexchange.com/questions/93592", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3262/" ] }
93,705
I often hear people talking about neural networks as a kind of black box where you don't understand what they do or what they mean. I actually can't understand what they mean by that! If you understand how back-propagation works, then how is it a black box? Do they mean that we don't understand how the weights were computed, or what?
A neural network is a black box in the sense that while it can approximate any function, studying its structure won't give you any insight into the structure of the function being approximated. As an example, one common use of neural networks in the banking business is to classify borrowers into "good payers" and "bad payers". You have a matrix of input characteristics $C$ (sex, age, income, etc.) and a vector of results $R$ ("defaulted", "not defaulted", etc.). When you model this using a neural network, you are supposing that there is a function $f(C)=R$, in the proper sense of a mathematical function. This function $f$ can be arbitrarily complex, and might change according to the evolution of the business, so you can't derive it by hand. Then you use the neural network to build an approximation of $f$ that has an error rate that is acceptable for your application. This works, and the error can be made arbitrarily small - you can expand the network, fine-tune its training parameters and get more data until the precision hits your goals. The black box issue is: the approximation given by the neural network will not give you any insight into the form of $f$. There is no simple link between the weights and the function being approximated. Even the analysis of which input characteristics are irrelevant is an open problem (see this link). Plus, from a traditional statistics viewpoint, a neural network is a non-identifiable model: given a dataset and network topology, there can be two neural networks with different weights but exactly the same result. This makes the analysis very hard. As examples of "non-black box models", or "interpretable models", you have regression equations and decision trees. The first one gives you a closed-form approximation of $f$ where the importance of each element is explicit, the second one is a graphical description of some relative risks / odds ratios.
{ "source": [ "https://stats.stackexchange.com/questions/93705", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/27779/" ] }
93,830
I came across a question in a job-interview aptitude test for critical thinking. It goes something like this: The Zorganian Republic has some very strange customs. Couples only wish to have female children as only females can inherit the family's wealth, so if they have a male child they keep having more children until they have a girl. If they have a girl, they stop having children. What is the ratio of girls to boys in Zorgania? I don't agree with the model answer given by the question writer, which is about 1:1. The justification was that any birth always has a 50% chance of being male or female. Can you convince me with a more mathematically rigorous answer for $\text{E}[G]:\text{E}[B]$, where $G$ is the number of girls and $B$ is the number of boys in the country?
Start with no children, then repeat this step:

{ Every couple who is still having children has a child. Half the couples have males and half the couples have females. Those couples that have females stop having children. }

At each step you get equal numbers of males and females, and the number of couples having children reduces by half (i.e. those that had females won't have any children in the next step). So, at any given time you have an equal number of males and females, and from step to step the number of couples having children falls by half. As more couples are created the same situation recurs and, all other things being equal, the population will contain the same number of males and females.
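A quick simulation backs this up (my own sketch, not part of the original answer): under the stopping rule the number of boys per couple is geometric, every couple ends up with exactly one girl, and the overall ratio comes out close to 1:1.

set.seed(1)
n.couples <- 100000
boys  <- rgeom(n.couples, prob = 0.5)   # boys born before the first girl
girls <- rep(1, n.couples)              # each couple stops after exactly one girl
mean(boys)                              # expected number of boys per couple is also 1
sum(girls) / sum(boys)                  # ratio of girls to boys, close to 1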
{ "source": [ "https://stats.stackexchange.com/questions/93830", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12142/" ] }
93,845
I'm trying to write my own function for principal component analysis, PCA (of course there's a lot already written but I'm just interested in implementing stuff by myself). The main problem I encountered is the cross-validation step and calculating the predicted sum of squares (PRESS). It doesn't matter which cross-validation I use, it's a question mainly about the theory behind it, but consider leave-one-out cross-validation (LOOCV). From the theory I found out that in order to perform LOOCV you need to:

(1) delete an object
(2) scale the rest
(3) perform PCA with some number of components
(4) scale the deleted object according to parameters obtained in (2)
(5) predict the object according to the PCA model
(6) calculate PRESS for this object
(7) re-perform the same algorithm for the other objects
(8) sum up all the PRESS values
(9) profit

Because I'm very new to the field, in order to be sure that I'm right, I compare the results with the output from some software I have (also, in order to write some code I follow the instructions in the software). I get completely the same results calculating the residual sum of squares and $R^2$, but calculating PRESS is a problem. Could you please tell me if what I implement in the cross-validation step is right or not:

case 'loocv'

% # n - number of objects
% # p - number of variables
% # vComponents - the number of components used in CV
dataSets = divideData(n,n);
% # it is just a variable responsible for creating datasets for CV
% # (for LOOCV datasets will be equal to [1, 2, 3, ... , n]);'
tempPRESS = zeros(n,vComponents);

for j = 1:n
    Xmodel1 = X; % # X - n x p original matrix
    Xmodel1(dataSets{j},:) = []; % # delete the object to be predicted
    [Xmodel1,Xmodel1shift,Xmodel1div] = skScale(Xmodel1, 'Center', vCenter, 'Scaling', vScaling);
    % # scale the data and extract the shift and scaling factor
    Xmodel2 = X(dataSets{j},:); % # the object to be predicted
    Xmodel2 = bsxfun(@minus,Xmodel2,Xmodel1shift); % # shift and scale the object
    Xmodel2 = bsxfun(@rdivide,Xmodel2,Xmodel1div);
    [Xscores2,Xloadings2] = myNipals(Xmodel1,0.00000001,vComponents);
    % # the way to calculate the scores and loadings
    % # Xscores2 - n x vComponents matrix
    % # Xloadings2 - vComponents x p matrix
    for i = 1:vComponents
        tempPRESS(j,i) = sum(sum((Xmodel2* ...
            (eye(p) - transpose(Xloadings2(1:i,:))*Xloadings2(1:i,:))).^2));
    end
end
PRESS = sum(tempPRESS,1);

In the software (PLS_Toolbox) it works like this:

for i = 1:vComponents
    tempPCA = eye(p) - transpose(Xloadings2(1:i,:))*Xloadings2(1:i,:);
    for kk = 1:p
        tempRepmat(:,kk) = -(1/tempPCA(kk,kk))*tempPCA(:,kk); % # this I do not understand
        tempRepmat(kk,kk) = -1; % # here is some normalization that I do not get
    end
    tempPRESS(j,i) = sum(sum((Xmodel2*tempRepmat).^2));
end

So, they do some additional normalization using this tempRepmat variable: the only reason I found was that they apply LOOCV for robust PCA. Unfortunately, the support team did not want to answer my question since I only have a demo version of their software.
What you are doing is wrong: it does not make sense to compute PRESS for PCA like that! Specifically, the problem lies in your step #5. Naïve approach to PRESS for PCA Let the data set consist of $n$ points in $d$-dimensional space: $\mathbf x^{(i)} \in \mathbb R^d, \, i=1 \dots n$. To compute reconstruction error for a single test data point $\mathbf x^{(i)}$, you perform PCA on the training set $\mathbf X^{(-i)}$ with this point excluded, take a certain number $k$ of principal axes as columns of $\mathbf U^{(-i)}$, and find the reconstruction error as $\left \|\mathbf x^{(i)} - \hat{\mathbf x}^{(i)}\right\|^2 = \left\|\mathbf x^{(i)} - \mathbf U^{(-i)} [\mathbf U^{(-i)}]^\top \mathbf x^{(i)}\right\|^2$. PRESS is then equal to sum over all test samples $i$, so the reasonable equation seems to be: $$\mathrm{PRESS} \overset{?}{=} \sum_{i=1}^n \left\|\mathbf x^{(i)} - \mathbf U^{(-i)} [\mathbf U^{(-i)}]^\top \mathbf x^{(i)}\right\|^2.$$ For simplicity, I am ignoring the issues of centering and scaling here. The naïve approach is wrong The problem above is that we use $\mathbf x^{(i)}$ to compute the prediction $ \hat{\mathbf x}^{(i)}$, and that is a Very Bad Thing. Note the crucial difference to a regression case, where the formula for the reconstruction error is basically the same $\left\|\mathbf y^{(i)} - \hat{\mathbf y}^{(i)}\right\|^2$, but prediction $\hat{\mathbf y}^{(i)}$ is computed using the predictor variables and not using $\mathbf y^{(i)}$. This is not possible in PCA, because in PCA there are no dependent and independent variables: all variables are treated together. In practice it means that PRESS as computed above can decrease with increasing number of components $k$ and never reach a minimum. Which would lead one to think that all $d$ components are significant. Or maybe in some cases it does reach a minimum, but still tends to overfit and overestimate the optimal dimensionality. A correct approach There are several possible approaches, see Bro et al. (2008) Cross-validation of component models: a critical look at current methods for an overview and comparison. One approach is to leave out one dimension of one data point at a time (i.e. $x^{(i)}_j$ instead of $\mathbf x^{(i)}$), so that the training data become a matrix with one missing value, and then to predict ("impute") this missing value with PCA. (One can of course randomly hold out some larger fraction of matrix elements, e.g. 10%). The problem is that computing PCA with missing values can be computationally quite slow (it relies on EM algorithm), but needs to be iterated many times here. Update: see http://alexhwilliams.info/itsneuronalblog/2018/02/26/crossval/ for a nice discussion and Python implementation (PCA with missing values is implemented via alternating least squares). An approach that I found to be much more practical is to leave out one data point $\mathbf x^{(i)}$ at a time, compute PCA on the training data (exactly as above), but then to loop over dimensions of $\mathbf x^{(i)}$, leave them out one at a time and compute a reconstruction error using the rest. This can be quite confusing in the beginning and the formulas tend to become quite messy, but implementation is rather straightforward. Let me first give the (somewhat scary) formula, and then briefly explain it: $$\mathrm{PRESS_{PCA}} = \sum_{i=1}^n \sum_{j=1}^d \left|x^{(i)}_j - \left[\mathbf U^{(-i)} \left [\mathbf U^{(-i)}_{-j}\right]^+\mathbf x^{(i)}_{-j}\right]_j\right|^2.$$ Consider the inner loop here. 
We left out one point $\mathbf x^{(i)}$ and computed $k$ principal components on the training data, $\mathbf U^{(-i)}$. Now we keep each value $x^{(i)}_j$ as the test and use the remaining dimensions $\mathbf x^{(i)}_{-j} \in \mathbb R^{d-1}$ to perform the prediction. The prediction $\hat{x}^{(i)}_j$ is the $j$-th coordinate of "the projection" (in the least squares sense) of $\mathbf x^{(i)}_{-j}$ onto subspace spanned by $\mathbf U^{(-i)}$. To compute it, find a point $\hat{\mathbf z}$ in the PC space $\mathbb R^k$ that is closest to $\mathbf x^{(i)}_{-j}$ by computing $\hat{\mathbf z} = \left [\mathbf U^{(-i)}_{-j}\right]^+\mathbf x^{(i)}_{-j} \in \mathbb R^k$ where $\mathbf U^{(-i)}_{-j}$ is $\mathbf U^{(-i)}$ with $j$-th row kicked out, and $[\cdot]^+$ stands for pseudoinverse. Now map $\hat{\mathbf z}$ back to the original space: $\mathbf U^{(-i)} \left [\mathbf U^{(-i)}_{-j}\right]^+\mathbf x^{(i)}_{-j}$ and take its $j$-th coordinate $[\cdot]_j$. An approximation to the correct approach I don't quite understand the additional normalization used in the PLS_Toolbox, but here is one approach that goes in the same direction. There is another way to map $\mathbf x^{(i)}_{-j}$ onto the space of principal components: $\hat{\mathbf z}_\mathrm{approx} = \left [\mathbf U^{(-i)}_{-j}\right]^\top\mathbf x^{(i)}_{-j}$, i.e. simply take the transpose instead of pseudo-inverse. In other words, the dimension that is left out for testing is not counted at all, and the corresponding weights are also simply kicked out. I think this should be less accurate, but might often be acceptable. The good thing is that the resulting formula can now be vectorized as follows (I omit the computation): $$\begin{align} \mathrm{PRESS_{PCA,\,approx}} &= \sum_{i=1}^n \sum_{j=1}^d \left|x^{(i)}_j - \left[\mathbf U^{(-i)} \left [\mathbf U^{(-i)}_{-j}\right]^\top\mathbf x^{(i)}_{-j}\right]_j\right|^2 \\ &= \sum_{i=1}^n \left\|\left(\mathbf I - \mathbf U \mathbf U^\top + \mathrm{diag}\{\mathbf U \mathbf U^\top\}\right) \mathbf x^{(i)}\right\|^2, \end{align}$$ where I wrote $\mathbf U^{(-i)}$ as $\mathbf U$ for compactness, and $\mathrm{diag}\{\cdot\}$ means setting all non-diagonal elements to zero. Note that this formula looks exactly like the first one (naive PRESS) with a small correction! Note also that this correction only depends on the diagonal of $\mathbf U \mathbf U^\top$, like in the PLS_Toolbox code. However, the formula is still different from what seems to be implemented in PLS_Toolbox, and this difference I cannot explain. Update (Feb 2018): Above I called one procedure "correct" and another "approximate" but I am not so sure anymore that this is meaningful. Both procedures make sense and I think neither is more correct. I really like that the "approximate" procedure has a simpler formula. Also, I remember that I had some dataset where "approximate" procedure yielded results that looked more meaningful. Unfortunately, I don't remember the details anymore. Examples Here is how these methods compare for two well-known datasets: Iris dataset and wine dataset. Note that the naive method produces a monotonically decreasing curve, whereas other two methods yield a curve with a minimum. Note further that in the Iris case, approximate method suggests 1 PC as the optimal number but the pseudoinverse method suggests 2 PCs. (And looking at any PCA scatterplot for the Iris dataset, it does seem that both first PCs carry some signal.) 
And in the wine case the pseudoinverse method clearly points at 3 PCs, whereas the approximate method cannot decide between 3 and 5.

Matlab code to perform cross-validation and plot the results

function pca_loocv(X)

%// loop over data points
for n=1:size(X,1)
    Xtrain = X([1:n-1 n+1:end],:);
    mu = mean(Xtrain);
    Xtrain = bsxfun(@minus, Xtrain, mu);
    [~,~,V] = svd(Xtrain, 'econ');
    Xtest = X(n,:);
    Xtest = bsxfun(@minus, Xtest, mu);

    %// loop over the number of PCs
    for j=1:min(size(V,2),25)
        P = V(:,1:j)*V(:,1:j)'; %//'
        err1 = Xtest * (eye(size(P)) - P);
        err2 = Xtest * (eye(size(P)) - P + diag(diag(P)));
        for k=1:size(Xtest,2)
            proj = Xtest(:,[1:k-1 k+1:end])*pinv(V([1:k-1 k+1:end],1:j))'*V(:,1:j)';
            err3(k) = Xtest(k) - proj(k);
        end

        error1(n,j) = sum(err1(:).^2);
        error2(n,j) = sum(err2(:).^2);
        error3(n,j) = sum(err3(:).^2);
    end
end

error1 = sum(error1);
error2 = sum(error2);
error3 = sum(error3);

%// plotting code
figure
hold on
plot(error1, 'k.--')
plot(error2, 'r.-')
plot(error3, 'b.-')
legend({'Naive method', 'Approximate method', 'Pseudoinverse method'}, ...
    'Location', 'NorthWest')
legend boxoff
set(gca, 'XTick', 1:length(error1))
set(gca, 'YTick', [])
xlabel('Number of PCs')
ylabel('Cross-validation error')
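For readers who prefer R, here is a rough sketch (mine, not the author's code) of the naive and approximate LOO-PRESS formulas above on a small built-in dataset; the pseudoinverse method is omitted for brevity.

X <- as.matrix(iris[, 1:4])
d <- ncol(X)
err.naive <- err.approx <- matrix(NA, nrow(X), d)
for (n in seq_len(nrow(X))) {
    Xtrain <- X[-n, , drop = FALSE]
    mu     <- colMeans(Xtrain)
    V      <- svd(scale(Xtrain, center = mu, scale = FALSE))$v   # principal axes U^(-i)
    xtest  <- X[n, ] - mu
    for (k in 1:d) {
        P <- V[, 1:k, drop = FALSE] %*% t(V[, 1:k, drop = FALSE])
        err.naive[n, k]  <- sum((xtest %*% (diag(d) - P))^2)
        err.approx[n, k] <- sum((xtest %*% (diag(d) - P + diag(diag(P))))^2)
    }
}
colSums(err.naive)    # typically decreases all the way to k = d
colSums(err.approx)   # typically shows an interior minimum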
{ "source": [ "https://stats.stackexchange.com/questions/93845", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/43820/" ] }
94,010
If a factor variable (e.g. gender with levels M and F) is used in the glm formula, dummy variable(s) are created, and can be found in the glm model summary along with their associated coefficients (e.g. genderM). If, instead of relying on R to split up the factor in this way, the factor is encoded in a series of numeric 0/1 variables (e.g. genderM (1 for M, 0 for F), genderF (1 for F, 0 for M)) and these variables are then used as numeric variables in the glm formula, would the coefficient result be any different? Basically the question is: does R use a different coefficient calculation when working with factor variables versus numeric variables? Follow-up question (possibly answered by the above): besides just the efficiency of letting R create dummy variables, is there any problem with re-coding factors as a series of numeric 0,1 variables and using those in the model instead?
Categorical variables (called "factors" in R) need to be represented by numerical codes in multiple regression models. There are very many possible ways to construct numerical codes appropriately (see this great list at UCLA's stats help site). By default, R uses reference level coding (which R calls "contr.treatment"), and which is pretty much the default statistics-wide. This can be changed for all contrasts for your entire R session using ?options, or for specific analyses / variables using ?contrasts or ?C (note the capital). If you need more information about reference level coding, I explain it here: Regression based for example on days of the week. Some people find reference level coding confusing, and you don't have to use it. If you want, you can have two variables for male and female; this is called level means coding. However, if you do that, you will need to suppress the intercept or the model matrix will be singular and the regression cannot be fit as @Affine notes above and as I explain here: Qualitative variable coding leads to singularities. To suppress the intercept, you modify your formula by adding -1 or +0 like so: y~... -1 or y~... +0. Using level means coding instead of reference level coding will change the coefficients estimated and the meaning of the hypothesis tests that are printed with your output. When you have a two level factor (e.g., male vs. female) and you use reference level coding, you will see the intercept called (constant) and only one variable listed in the output (perhaps sexM). The intercept is the mean of the reference group (perhaps females) and sexM is the difference between the mean of males and the mean of females. The p-value associated with the intercept is a one-sample $t$-test of whether the reference level has a mean of $0$ and the p-value associated with sexM tells you if the sexes differ on your response. But if you use level means coding instead, you will have two variables listed and each p-value will correspond to a one-sample $t$-test of whether the mean of that level is $0$. That is, none of the p-values will be a test of whether the sexes differ.

set.seed(1)
y    = c(rnorm(30), rnorm(30, mean=1))
sex  = rep(c("Female", "Male"), each=30)
fem  = ifelse(sex=="Female", 1, 0)
male = ifelse(sex=="Male", 1, 0)

ref.level.coding.model   = lm(y~sex)
level.means.coding.model = lm(y~fem+male+0)

summary(ref.level.coding.model)
# ...
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept)  0.08246    0.15740   0.524    0.602
# sexMale      1.05032    0.22260   4.718 1.54e-05 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# ...

summary(level.means.coding.model)
# ...
# Coefficients:
#      Estimate Std. Error t value Pr(>|t|)
# fem   0.08246    0.15740   0.524    0.602
# male  1.13277    0.15740   7.197 1.37e-09 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# ...
{ "source": [ "https://stats.stackexchange.com/questions/94010", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/26946/" ] }
94,063
I was told that it's possible to run a two-stage IV regression where the first stage is a probit and the second stage is an OLS. Is it possible to use 2SLS if the first stage is a probit but the second stage is a probit/poisson model?
What was proposed to you is sometimes referred to as a forbidden regression and in general you will not consistently estimate the relationship of interest. Forbidden regressions produce consistent estimates only under very restrictive assumptions which rarely hold in practice (see for instance Wooldridge (2010) "Econometric Analysis of Cross Section and Panel Data", pp. 265-268). The problem is that neither the conditional expectations operator nor the linear projection carries through nonlinear functions. For this reason only an OLS regression in the first stage is guaranteed to produce fitted values that are uncorrelated with the residuals. A proof for this can be found in Greene (2008) "Econometric Analysis" or, if you want a more detailed (but also more technical) proof, you can have a look at the notes by Jean-Louis Arcand on pp. 47 to 52. For the same reason as in the forbidden regression, this seemingly obvious two-step procedure of mimicking 2SLS with probit will not produce consistent estimates. This is again because expectations and linear projections do not carry over through nonlinear functions. Wooldridge (2010) in section 15.7.3 on page 594 provides a detailed explanation for this. He also explains the proper procedure of estimating probit models with a binary endogenous variable. The correct approach is to use maximum likelihood, but doing this by hand is not exactly trivial. Therefore it is preferable if you have access to some statistical software which has a ready-canned package for this. For example, the Stata command would be ivprobit (see the Stata manual for this command, which also explains the maximum likelihood approach). If you require references for the theory behind probit with instrumental variables see for instance:

Newey, W. (1987) "Efficient estimation of limited dependent variable models with endogenous explanatory variables", Journal of Econometrics, Vol. 36, pp. 231-250
Rivers, D. and Vuong, Q.H. (1988) "Limited information estimators and exogeneity tests for simultaneous probit models", Journal of Econometrics, Vol. 39, pp. 347-366

Finally, combining different estimation methods in the first and second stages is difficult unless there exists a theoretical foundation which justifies their use. This is not to say that it is not feasible though. For instance, Adams et al. (2009) use a three-step procedure where they have a probit "first stage" and an OLS second stage without falling for the forbidden regression problem. Their general approach is:

(1) use probit to regress the endogenous variable on the instrument(s) and control variables
(2) use the predicted values from the previous step in an OLS first stage together with the control (but without the instrumental) variables
(3) do the second stage as usual

A similar procedure was employed by a user on the Statalist who wanted to use a Tobit first stage and a Poisson second stage (see here). The same fix should be feasible for your estimation problem.
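A rough R sketch of the Adams et al.-style three-step procedure described above (my own illustration with simulated data, assuming the AER package for 2SLS; all variable names are placeholders): the probit fitted probabilities are used as an instrument in an otherwise standard 2SLS.

library(AER)   # for ivreg()
set.seed(1)
n  <- 500
z  <- rnorm(n)                       # instrument
x1 <- rnorm(n)                       # exogenous control
u  <- rnorm(n)                       # unobservable creating the endogeneity
d  <- as.numeric(0.8 * z + 0.5 * x1 + u + rnorm(n) > 0)   # binary endogenous regressor
y  <- 1 + 1.5 * d + 0.7 * x1 + u + rnorm(n)
# Step 1: probit of the endogenous variable on instrument(s) and controls
phat <- fitted(glm(d ~ z + x1, family = binomial(link = "probit")))
# Steps 2-3: use the fitted probability as the instrument in ordinary (OLS-based) 2SLS
summary(ivreg(y ~ d + x1 | phat + x1))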
{ "source": [ "https://stats.stackexchange.com/questions/94063", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/43929/" ] }
94,136
I often read about a function being 'highly non linear'. In my understanding, there is "linear" and "non-linear", so what is this 'highly' about? Is there a formal difference from non linear? How is it defined?
I don't think there's a formal definition. It's my impression that it simply means that not only is it non-linear, but attempting to model it with a linear approximation won't yield reasonable results and may even cause instability in the fitting method. Someone may also use it to simply mean that small input changes can result in counter-intuitively large changes in output.
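A tiny illustration of that last point (my own, not the answerer's): both functions below are non-linear, but a straight line still tracks one of them reasonably well and captures essentially nothing of the other.

x <- seq(0, 1, length.out = 200)
mild   <- x^2            # non-linear, yet a line is a usable approximation
severe <- sin(25 * x)    # "highly" non-linear over this range
summary(lm(mild ~ x))$r.squared     # close to 1 despite the curvature
summary(lm(severe ~ x))$r.squared   # near 0: the linear approximation is useless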
{ "source": [ "https://stats.stackexchange.com/questions/94136", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/43970/" ] }
94,402
What is the difference between finite and infinite variance? My stats knowledge is rather basic; Wikipedia / Google wasn't much help here.
$\DeclareMathOperator{\E}{E} \DeclareMathOperator{\var}{var}$ What does it mean for a random variable to have "infinite variance"? What does it mean for a random variable to have infinite expectation? The explanation in both cases is rather similar, so let us start with the case of expectation, and then variance after that. Let $X$ be a continuous random variable (RV) (our conclusions will be valid more generally; for the discrete case, replace the integral by a sum). To simplify exposition, let us assume $X \ge 0$. Its expectation is defined by the integral $$ \E X = \int_0^\infty x f(x) \, d x $$ when that integral exists, that is, is finite. Else we say the expectation does not exist. That is an improper integral, and by definition is $$ \int_0^\infty x f(x) \, d x = \lim_{a \rightarrow \infty} \int_0^a x f(x) \, d x $$ For that limit to be finite, the contribution from the tail must vanish, that is, we must have $$ \lim_{a \rightarrow \infty} \int_a^\infty x f(x) \, d x =0 $$ A necessary (but not sufficient) condition for that to be the case is $\lim_{x\rightarrow \infty} x f(x) =0 $. What the above displayed condition says is that the contribution to the expectation from the (right) tail must be vanishing. If that is not the case, the expectation is dominated by contributions from arbitrarily large realized values. In practice, that will mean that empirical means will be very unstable, because they will be dominated by the infrequent very large realized values. And note that this instability of sample means will not disappear with large samples --- it is a built-in part of the model! In many situations, that seems unrealistic. Take, say, a (life) insurance model, so $X$ models some (human) lifetime. We know that, say, $X > 1000$ doesn't occur, but in practice we use models without an upper limit. The reason is clear: no hard upper limit is known; if a person is (say) 110 years old, there is no reason he cannot live one more year! So a model with a hard upper limit seems artificial. Still, we do not want the extreme upper tail to have much influence. If $X$ has a finite expectation, then we can change the model to have a hard upper limit without undue influence on the model. In situations with a fuzzy upper limit that seems good. If the model has infinite expectation, then any hard upper limit we introduce to the model will have dramatic consequences! That is the real importance of infinite expectation. With finite expectation, we can be fuzzy about upper limits. With infinite expectation, we cannot. Now, much the same can be said about infinite variance, mutatis mutandis. To make this clearer, let us look at an example. For the example we use the Pareto distribution, implemented in the R package (on CRAN) actuar as pareto1 --- the single-parameter Pareto distribution, also known as the Pareto type 1 distribution. It has probability density function given by $$ f(x) = \begin{cases} \frac{\alpha m^\alpha}{x^{\alpha+1}} &, x\ge m \\ 0 &, x<m \end{cases} $$ for some parameters $m>0, \alpha>0$. When $\alpha > 1 $ the expectation exists and is given by $\frac{\alpha}{\alpha-1}\cdot m$. When $\alpha \le 1$ the expectation does not exist, or, as we say, it is infinite, because the integral defining it diverges to infinity. We can define the first moment distribution (see the post When would we use tantiles and the medial, rather than quantiles and the median?
for some information and references) as $$ E(M) = \int_m^M x f(x) \, d x = \frac{\alpha}{\alpha-1} \left( m - \frac{m^\alpha}{M^{\alpha-1}} \right) $$ (this exists whether or not the expectation itself exists). (Later edit: I invented the name "first moment distribution"; later I learned this is related to what is "officially" named partial moments.) When the expectation exists ($\alpha> 1$) we can divide by it to get the relative first moment distribution, given by $$ Er(M) = E(M)/E(\infty) = 1-\left(\frac{m}{M}\right)^{\alpha-1} $$ When $\alpha$ is just a little bit larger than one, so the expectation "just barely exists", the integral defining the expectation will converge slowly. Let us look at the example with $m=1, \alpha=1.2$. Let us then plot $Er(M)$ with the help of R:

### Function for opening new plot file:
open_png <- function(filename) png(filename=filename, type="cairo-png")

library(actuar) # from CRAN
### Code for Pareto type I distribution:
# First plotting density and "graphical
# moments" using ideas from
# http://www.quantdec.com/envstats/notes/class_06/properties.htm
# and used some times at cross validated

m <- 1.0
alpha <- 1.2
# Expectation:
E <- m * (alpha/(alpha-1))
# upper limit for plots:
upper <- qpareto1(0.99, alpha, m)
#
open_png("first_moment_dist1.png")
Er <- function(M, m, alpha) 1.0 - (m/M)^(alpha-1.0)
### Inverse relative first moment
### distribution function, giving
# what we may call "expectation quantiles":
Er_inv <- function(eq, m, alpha) m*exp(log(1.0-eq)/(1-alpha))

plot(function(M) Er(M, m, alpha), from=1.0, to=upper)
plot(function(M) ppareto1(M, alpha, m), from=1.0, to=upper, add=TRUE, col="red")
dev.off()

which produces this plot: For example, from this plot you can read that about 50% of the contribution to the expectation comes from observations above around 40. Given that the expectation $\mu$ of this distribution is 6, that is astounding! (This distribution does not have a finite variance; for that we need $\alpha > 2$.) The function Er_inv defined above is the inverse relative first moment distribution, an analogue to the quantile function. We have:

### What this plot shows very clearly is that
### most of the contribution to the expectation
### come from the very extreme right tail!
# Example
eq <- Er_inv(0.5, m, alpha)
ppareto1(eq, alpha, m)
eq

[1] 0.984375
[1] 32

This shows that 50% of the contribution to the expectation comes from the upper 1.5% tail of the distribution! So, especially in small samples where there is a high probability that the extreme tail is not represented, the arithmetic mean, while still being an unbiased estimator of the expectation $\mu$, must have a very skewed distribution. We will investigate this by simulation. First, we use a sample size of $n=5$.

set.seed(1234)
n <- 5
N <- 10000000 # Number of simulation replicas
means <- replicate(N, mean(rpareto1(n, alpha, m) ))

mean(means)
[1] 5.846645
median(means)
[1] 2.658925
min(means)
[1] 1.014836
max(means)
[1] 633004.5
length(means[means <= 100])
[1] 9970136

To get a readable plot we only show the histogram for the part of the sample with values below 100, which is a very large part of the sample.

open_png("mean_sim_hist1.png")
hist(means[means<=100], breaks=100, probability=TRUE)
dev.off()

The distribution of the arithmetic means is very skewed:

sum(means <= 6)/N
[1] 0.8596413

Almost 86% of the empirical means are less than or equal to the theoretical mean, the expectation.
That is what we should expect, since most of the contribution to the mean comes from the extreme upper tail, which is unrepresented in most samples. We need to go back to reassess our earlier conclusion. While the existence of the mean makes it possible to be fuzzy about upper limits, we see that when "the mean just barely exists", meaning that the integral is slowly convergent, we cannot really be that fuzzy about upper limits. Slowly convergent integrals have the consequence that it might be better to use methods that do not assume that the expectation exists. When the integral is very slowly converging, it is in practice as if it didn't converge at all. The practical benefits that follow from a convergent integral are a chimera in the slowly convergent case! That is one way to understand N N Taleb's conclusion in http://fooledbyrandomness.com/complexityAugust-06.pdf
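As a small follow-up sketch tied to the original question about variance (mine, not part of the answer above): with $\alpha = 1.2$ the variance is infinite and sample variances never settle down as the sample grows, while with $\alpha = 3$ they stabilize near the theoretical value of 0.75.

library(actuar)   # for rpareto1, as above
set.seed(1)
sizes <- c(1e2, 1e3, 1e4, 1e5, 1e6)
sapply(sizes, function(n) var(rpareto1(n, shape = 1.2, min = 1)))   # keeps jumping around
sapply(sizes, function(n) var(rpareto1(n, shape = 3, min = 1)))     # settles near 0.75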
{ "source": [ "https://stats.stackexchange.com/questions/94402", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/44105/" ] }
94,511
What is the difference between a neural network, a Bayesian network, a decision tree and a Petri net, even though they are all graphical models and visually depict cause-effect relationships?
Wow, what a big question! The short version of the answer is that just because you can represent two models using diagrammatically similar visual representations, doesn't mean they are even remotely related structurally, functionally, or philosophically. I'm not familiar with FCM or NF, but I can speak to the other ones a bit. Bayesian Network In a Bayesian network, the graph represents the conditional dependencies of different variables in the model. Each node represents a variable, and each directed edge represents a conditional relationship. Essentially, the graphical model is a visualization of the chain rule. Neural Network In a neural network, each node is a simulated "neuron". The neuron is essentially on or off, and its activation is determined by a linear combination of the values of each output in the preceding "layer" of the network. Decision Tree Let's say we are using a decision tree for classification. The tree essentially provides us with a flowchart describing how we should classify an observation. We start at the root of the tree, and the leaf where we end up determines the classification we predict. As you can see, these three models really have basically nothing at all to do with each other besides being representable with boxes and arrows.
{ "source": [ "https://stats.stackexchange.com/questions/94511", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21961/" ] }
94,519
Considering that we want to use optimize() on the interval [0,1] how can I write an R code for finding the value of β that produces the minimum forecast error without using external packages like forecast ? For simplicity assume that: I want to use the following package: > require(datasets) > str(nhtemp) Time-Series [1:60] from 1912 to 1971: 49.9 52.3 49.4 51.1 49.4 47.9 49.8 50.9 49.3 51.9 ... in which nhtemp is the Yearly Average Temperatures in New Haven CT .
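One possible approach (a minimal sketch of my own, not the accepted answer): write the one-step-ahead sum of squared forecast errors of simple exponential smoothing as a function of the smoothing parameter and hand it to optimize() on [0, 1].

library(datasets)
y <- as.numeric(nhtemp)

sse <- function(beta, y) {
    level <- y[1]                    # initialise the smoothed level with the first observation
    err <- numeric(length(y) - 1)
    for (t in 2:length(y)) {
        err[t - 1] <- y[t] - level                   # one-step-ahead forecast error
        level <- beta * y[t] + (1 - beta) * level    # exponential smoothing update
    }
    sum(err^2)
}

opt <- optimize(sse, interval = c(0, 1), y = y)
opt$minimum   # value of beta that minimises the squared forecast errors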
{ "source": [ "https://stats.stackexchange.com/questions/94519", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/40077/" ] }
94,872
Is the claim that functions of independent random variables are themselves independent, true? I have seen that result often used implicitly in some proofs, for example in the proof of independence between the sample mean and the sample variance of a normal distribution, but I have not been able to find justification for it. It seems that some authors take it as given but I am not certain that this is always the case.
The most general and abstract definition of independence makes this assertion trivial while supplying an important qualifying condition: that two random variables are independent means the sigma-algebras they generate are independent. Because the sigma-algebra generated by a measurable function of a sigma-algebra is a sub-algebra, a fortiori any measurable functions of those random variables have independent algebras, whence those functions are independent. (When a function is not measurable, it usually does not create a new random variable, so the concept of independence wouldn't even apply.) Let's unwrap the definitions to see how simple this is. Recall that a random variable $X$ is a real-valued function defined on the "sample space" $\Omega$ (the set of outcomes being studied via probability). A random variable $X$ is studied by means of the probabilities that its value lies within various intervals of real numbers (or, more generally, sets constructed in simple ways out of intervals: these are the Borel measurable sets of real numbers). Corresponding to any Borel measurable set $I$ is the event $X^{*}(I)$ consisting of all outcomes $\omega$ for which $X(\omega)$ lies in $I$. The sigma-algebra generated by $X$ is determined by the collection of all such events. The naive definition says two random variables $X$ and $Y$ are independent "when their probabilities multiply." That is, when $I$ is one Borel measurable set and $J$ is another, then $\Pr(X(\omega)\in I\text{ and }Y(\omega)\in J) = \Pr(X(\omega)\in I)\Pr(Y(\omega)\in J).$ But in the language of events (and sigma algebras) that's the same as $\Pr(\omega \in X^{*}(I)\text{ and }\omega \in Y^{*}(J)) = \Pr(\omega\in X^{*}(I))\Pr(\omega\in Y^{*}(J)).$ Consider now two functions $f, g:\mathbb{R}\to\mathbb{R}$ and suppose that $f \circ X$ and $g\circ Y$ are random variables. (The circle is functional composition: $(f\circ X)(\omega) = f(X(\omega))$. This is what it means for $f$ to be a "function of a random variable".) Notice--this is just elementary set theory--that $$(f\circ X)^{*}(I) = X^{*}(f^{*}(I)).$$ In other words, every event generated by $f\circ X$ (which is on the left) is automatically an event generated by $X$ (as exhibited by the form of the right hand side). Therefore the independence condition displayed above automatically holds for $f\circ X$ and $g\circ Y$: there's nothing to check! NB You may replace "real-valued" everywhere by "with values in $\mathbb{R}^d$" without needing to change anything else in any material way. This covers the case of vector-valued random variables.
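A concrete instance may make the last step more tangible (my own illustration, not part of the original answer). Take $f(x) = x^2$ and a set of the form $I = (-\infty, a]$ with $a \ge 0$: $$(f\circ X)^{*}(I) = \{\omega : X(\omega)^2 \le a\} = \{\omega : -\sqrt{a} \le X(\omega) \le \sqrt{a}\} = X^{*}\left([-\sqrt{a}, \sqrt{a}]\right),$$ an event generated by $X$ itself. Combining this with the analogous observation for $g \circ Y$, the multiplication rule for $X$ and $Y$ immediately yields it for $X^2$ and $g(Y)$.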
{ "source": [ "https://stats.stackexchange.com/questions/94872", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/31420/" ] }
94,974
I had a discussion with a statistician back in 2009 where he stated that the exact value of a p-value is irrelevant: the only thing that is important is whether it is significant or not. I.e. one result cannot be more significant than another; your samples for example, either come from the same population or don't. I have some qualms with this, but I can perhaps understand the ideology: The 5% threshold is arbitrary, i.e. that p = 0.051 is not significant and that p = 0.049 is, shouldn't really change the conclusion of your observation or experiment, despite one result being significant and the other not significant. The reason I bring this up now is that I'm studying for an MSc in Bioinformatics, and after talking to people in the field, there seems to be a determined drive to get an exact p-value for every set of statistics they do. For instance, if they 'achieve' a p-value of p < 1.9×10 -12 , they want to demonstrate HOW significant their result is, and that this result is SUPER informative. This issue exemplified with questions such as: Why can't I get a p-value smaller than 2.2e-16? , whereby they want to record a value that indicates that by chance alone this would be MUCH less than 1 in a trillion. But I see little difference in demonstrating that this result would occur less than 1 in a trillion as opposed to 1 in a billion. I can appreciate then that p < 0.01 shows that there is less than 1% chance that this would occur, whereas p < 0.001 indicates that a result like this is even more unlikely than the aforementioned p-value, but should your conclusions drawn be completely different? After all they are both significant p-values. The only way I can conceive of wanting to record the exact p-value is during a Bonferroni correction whereby the threshold changes due to the number of comparisons made, thus decreasing the type I error. But even still, why would you want to show a p-value that is 12 orders of magnitude smaller than your threshold significance? And isn't applying the Bonferroni correction in itself slightly arbitrary too? In the sense that initially the correction is seen as very conservative, and therefore there are other corrections that one can choose to access the significance level that the observer could use for their multiple comparisons. But because of this, isn't the point at which something becomes significant essentially variable depending upon what statistics the researcher wants to use. Should statistics be so open to interpretation? In conclusion, shouldn't statistics be less subjective (although I guess the need for it to be subjective is as a consequence of a multivariate system), but ultimately I want some clarification: can something be more significant than something else? And will p < 0.001 suffice in respect to trying to record the exact p-value?
The type 1 / false rejection error rate $\alpha=.05$ isn't completely arbitrary, but yes, it is close. It's somewhat preferable to $\alpha=.051$ because it's less cognitively complex ( people like round numbers and multiples of five ). It's a decent compromise between skepticism and practicality, though maybe a little outdated – modern methods and research resources may make higher standards (i.e., lower $p$ values) preferable, if standards there must be ( Johnson, 2013 ) . IMO, the greater problem than the choice of threshold is the often unexamined choice to use a threshold where it is not necessary or helpful. In situations where a practical choice has to be made, I can see the value, but much basic research does not necessitate the decision to dismiss one's evidence and give up on the prospect of rejecting the null just because a given sample's evidence against it falls short of almost any reasonable threshold. Yet much of this research's authors feel obligated to do so by convention, and resist it uncomfortably, inventing terms like "marginal" significance to beg for attention when they can feel it slipping away because their audiences often don't care about $p$s $\ge.05$. If you look around at other questions here on $p$ value interpretation, you'll see plenty of dissension about the interpretation of $p$ values by binary fail to / reject decisions regarding the null. Completely different – no. Meaningfully different – maybe. One reason to show a ridiculously small $p$ value is to imply information about effect size. Of course, just reporting effect size would be much better for several technical reasons, but authors often fail to consider this alternative, and audiences may be less familiar with it as well, unfortunately. In a null-hypothetical world where no one knows how to report effect sizes, one may be right most often in guessing that a smaller $p$ means a larger effect. To whatever extent this null-hypothetical world is closer to reality than the opposite, maybe there's some value in reporting exact $p$s for this reason. Please understand that this point is pure devil's advocacy... Another use for exact $p$s that I've learned by engaging in a very similar debate here is as indices of likelihood functions. See Michael Lew's comments on and article ( Lew, 2013 ) linked in my answer to " Accommodating entrenched views of p-values ". I don't think the Bonferroni correction is the same kind of arbitrary really. It corrects the threshold that I think we agree is at least close-to-completely arbitrary, so it doesn't lose any of that fundamental arbitrariness, but I don't think it adds anything arbitrary to the equation. The correction is defined in a logical, pragmatic way, and minor variations toward larger or smaller corrections would seem to require rather sophisticated arguments to justify them as more than arbitrary, whereas I think it would be easier to argue for an adjustment of $\alpha$ without having to overcome any deeply appealing yet simple logic in it. If anything, I think $p$ values should be more open to interpretation! I.e., whether the null is really more useful than the alternative ought to depend on more than just the evidence against it, including the cost of obtaining more information and the added incremental value of more precise knowledge thusly gained. This is essentially the Fisherian no-threshold idea that, AFAIK, is how it all began. See " Regarding p-values, why 1% and 5%? Why not 6% or 10%? 
" If fail to / reject crises aren't forced upon the null hypothesis from the outset, then the more continuous understanding of statistical significance certainly does admit the possibility of continuously increasing significance. In the dichotomized approach to statistical significance (I think this is sometimes referred to as the Neyman-Pearson framework; cf. Dienes, 2007 ), no, any significant result is as significant as the next – no more, no less. This question may help explain that principle: " Why are p-values uniformly distributed under the null hypothesis? " As for how many zeroes are meaningful and worth reporting, I recommend Glen_b's answer to this question: " How should tiny $p$-values be reported? (and why does R put a minimum on 2.22e-16?) " – it's much better than the answers to the version of that question you linked on Stack Overflow! References - Johnson, V. E. (2013). Revised standards for statistical evidence. Proceedings of the National Academy of Sciences, 110 (48), 19313–19317. Retrieved from http://www.pnas.org/content/110/48/19313.full.pdf . - Lew, M. J. (2013). To P or not to P: On the evidential nature of P-values and their place in scientific inference. arXiv:1311.0081 [stat.ME]. Retrieved from http://arxiv.org/abs/1311.0081 .
{ "source": [ "https://stats.stackexchange.com/questions/94974", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/41650/" ] }
95,038
Here is a quote from Bishop's "Pattern Recognition and Machine Learning" book, section 12.2.4 "Factor analysis": According to the highlighted part, factor analysis captures the covariance between variables in the matrix $W$ . I wonder HOW ? Here is how I understand it. Say $x$ is the observed $p$-dimensional variable, $W$ is the factor loading matrix, and $z$ is the factor score vector. Then we have $$x=\mu+Wz+\epsilon,$$ that is \begin{align*} \begin{pmatrix} x_1\\ \vdots\\ x_p \end{pmatrix} = \begin{pmatrix} \mu_1\\ \vdots\\ \mu_p \end{pmatrix} + \begin{pmatrix} \vert & & \vert\\ w_1 & \ldots & w_m\\ \vert & & \vert \end{pmatrix} \begin{pmatrix} z_1\\ \vdots\\ z_m \end{pmatrix} +\epsilon, \end{align*} and each column in $W$ is a factor loading vector $$w_i=\begin{pmatrix}w_{i1}\\ \vdots\\ w_{ip}\end{pmatrix}.$$ Here as I wrote, $W$ has $m$ columns meaning there are $m$ factors under consideration. Now here is the point, according to the highlighted part, I think the loadings in each column $w_i$ explain the covariance in the observed data, right? For example, let's take a look at the first loading vector $w_1$, for $1\le i,j,k\le p$, if $w_{1i}=10$, $w_{1j}=11$ and $w_{1k}=0.1$, then I'd say $x_i$ and $x_j$ are highly correlated, whereas $x_k$ seems uncorrelated with them , am I right? And if this is how factor analysis explain the covariance between observed features, then I'd say PCA also explains the covariance, right?
The distinction between Principal component analysis and Factor analysis is discussed in numerous textbooks and articles on multivariate techniques. You may find the full thread , and a newer one , and odd answers, on this site, too. I'm not going to make it detailed. I've already given a concise answer and a longer one and would like now to clarify it with a pair of pictures. Graphical representation The picture below explains PCA . (This was borrowed from here where PCA is compared with Linear regression and Canonical correlations. The picture is the vector representation of variables in the subject space ; to understand what it is you may want to read the 2nd paragraph there.) PCA configuration on this picture was described there . I will repeat most principal things. Principal components $P_1$ and $P_2$ lie in the same space that is spanned by the variables $X_1$ and $X_2$ , "plane X". Squared length of each of the four vectors is its variance. The covariance between $X_1$ and $X_2$ is $cov_{12}= |X_1||X_2|r$ , where $r$ equals the cosine of the angle between their vectors. The projections (coordinates) of the variables on the components, the $a$ 's, are the loadings of the components on the variables: loadings are the regression coefficients in the linear combinations of modeling variables by standardized components . "Standardized" - because information about components' variances is already absorbed in loadings (remember, loadings are eigenvectors normalized to the respective eigenvalues). And due to that, and to the fact that components are uncorrelated, loadings are the covariances between the variables and the components. Using PCA for dimensionality/data reduction aim compels us to retain only $P_1$ and to regard $P_2$ as the remainder, or error. $a_{11}^2+a_{21}^2= |P_1|^2$ is the variance captured (explained) by $P_1$ . The picture below demonstrates Factor analysis performed on the same variables $X_1$ and $X_2$ with which we did PCA above. (I will speak of common factor model, for there exist other: alpha factor model, image factor model.) Smiley sun helps with lighting. The common factor is $F$ . It is what is the analogue to the main component $P_1$ above. Can you see the difference between these two? Yes, clearly: the factor does not lie in the variables' space "plane X". How to get that factor with one finger, i.e. to do factor analysis? Let's try. On the previous picture, hook the end of $P_1$ arrow by your nail tip and pull away from "plane X", while visualizing how two new planes appear, "plane U1" and "plane U2"; these connecting the hooked vector and the two variable vectors. The two planes form a hood, X1 - F - X2, above "plane X". Continue to pull while contemplating the hood and stop when "plane U1" and "plane U2" form 90 degrees between them. Ready, factor analysis is done. Well, yes, but not yet optimally. To do it right, like packages do, repeat the whole excercise of pulling the arrow, now adding small left-right swings of your finger while you pull. Doing so, find the position of the arrow when the sum of squared projections of both variables onto it is maximized , while you attain to that 90 degree angle. Stop. You did factor analysis, found the position of the common factor $F$ . Again to remark, unlike principal component $P_1$ , factor $F$ does not belong to variables' space "plane X". 
It therefore is not a function of the variables (a principal component is, and you can make sure from the two top pictures here that PCA is fundamentally two-directional: it predicts variables by components and vice versa). Factor analysis is thus not a description/simplification method, like PCA; it is a modeling method whereby a latent factor steers the observed variables, one-directionally. Loadings $a$'s of the factor on the variables are like loadings in PCA; they are the covariances and they are the coefficients of modeling variables by the (standardized) factor. $a_{1}^2+a_{2}^2= |F|^2$ is the variance captured (explained) by $F$. The factor was found so as to maximize this quantity - as if it were a principal component. However, that explained variance is no longer the variables' gross variance - instead, it is their variance by which they co-vary (correlate). Why so? Get back to the pic. We extracted $F$ under two requirements. One was the just mentioned maximized sum of squared loadings. The other was the creation of the two perpendicular planes, "plane U1" containing $F$ and $X_1$, and "plane U2" containing $F$ and $X_2$. This way each of the X variables appeared decomposed. $X_1$ was decomposed into variables $F$ and $U_1$, mutually orthogonal; $X_2$ was likewise decomposed into variables $F$ and $U_2$, also orthogonal. And $U_1$ is orthogonal to $U_2$. We know what $F$ is - the common factor. $U$'s are called unique factors. Each variable has its unique factor. The meaning is as follows. $U_1$ behind $X_1$ and $U_2$ behind $X_2$ are the forces that hinder $X_1$ and $X_2$ from correlating. But $F$ - the common factor - is the force behind both $X_1$ and $X_2$ that makes them correlate. And the variance being explained lies along that common factor. So, it is pure collinearity variance. It is that variance that makes $cov_{12}>0$; the actual value of $cov_{12}$ being determined by the inclinations of the variables towards the factor, by the $a$'s. A variable's variance (vector's length squared) thus consists of two additive disjoint parts: uniqueness $u^2$ and communality $a^2$. With two variables, like our example, we can extract at most one common factor, so communality = single loading squared. With many variables we might extract several common factors, and a variable's communality will be the sum of its squared loadings. On our picture, the common factor space is unidimensional (just $F$ itself); when m common factors exist, that space is m-dimensional, with communalities being variables' projections on the space and loadings being variables', as well as those projections', projections on the factors that span the space. Variance explained in factor analysis is the variance within that common factors' space, different from the variables' space in which components explain variance. The space of the variables is in the belly of the combined space: m common + p unique factors. Just glance at the current pic please. There were several (say, $X_1$, $X_2$, $X_3$) variables with which factor analysis was done, extracting two common factors. The factors $F_1$ and $F_2$ span the common factor space "factor plane". Of the bunch of analysed variables only one ($X_1$) is shown on the figure. The analysis decomposed it into two orthogonal parts, communality $C_1$ and unique factor $U_1$. Communality lies in the "factor plane" and its coordinates on the factors are the loadings by which the common factors load $X_1$ (= coordinates of $X_1$ itself on the factors).
On the picture, communalities of the other two variables - the projections of $X_2$ and of $X_3$ - are also displayed. It would be interesting to remark that the two common factors can, in a sense, be seen as the principal components of all those communality "variables". Whereas usual principal components summarize, by seniority, the multivariate total variance of the variables, the factors summarize likewise their multivariate common variance. $^1$ Why all that verbiage? I just wanted to give evidence to the claim that when you decompose each of the correlated variables into two orthogonal latent parts, one (A) representing uncorrelatedness (orthogonality) between the variables and the other part (B) representing their correlatedness (collinearity), and you extract factors from the combined B's only, you will find yourself explaining pairwise covariances by those factors' loadings. In our factor model, $cov_{12} \approx a_1a_2$ - factors restore individual covariances by means of loadings. In the PCA model it is not so, since PCA explains undecomposed, mixed collinear+orthogonal native variance. Both the strong components that you retain and the subsequent ones that you drop are fusions of the (A) and (B) parts; hence PCA can tap, by its loadings, covariances only blindly and grossly. Contrast list PCA vs FA PCA: operates in the space of the variables. FA: transcends the space of the variables. PCA: takes variability as is. FA: segments variability into common and unique parts. PCA: explains nonsegmented variance, i.e. the trace of the covariance matrix. FA: explains common variance only, hence explains (restores by loadings) correlations/covariances, the off-diagonal elements of the matrix. (PCA explains off-diagonal elements too - but in passing, in an offhand manner - simply because variances are shared in the form of covariances.) PCA: components are theoretically linear functions of variables, variables are theoretically linear functions of components. FA: variables are theoretically linear functions of factors, only. PCA: empirical summarizing method; it retains m components. FA: theoretical modeling method; it fits a fixed number m of factors to the data; FA can be tested (Confirmatory FA). PCA: is the simplest metric MDS, aims to reduce dimensionality while indirectly preserving distances between data points as much as possible. FA: factors are essential latent traits behind the variables which make them correlate; the analysis aims to reduce data to those essences only. PCA: rotation/interpretation of components - sometimes (PCA is not realistic enough as a latent-traits model). FA: rotation/interpretation of factors - routinely. PCA: data reduction method only. FA: also a method to find clusters of coherent variables (this is because variables cannot correlate beyond a factor). PCA: loadings and scores are independent of the number m of components "extracted". FA: loadings and scores depend on the number m of factors "extracted". PCA: component scores are exact component values. FA: factor scores are approximations of true factor values, and several computational methods exist. Factor scores do lie in the space of the variables (like components do) while true factors (as embodied by factor loadings) do not. PCA: usually no assumptions. FA: assumption of weak partial correlations; sometimes multivariate normality assumption; some datasets may be "bad" for analysis unless transformed. PCA: noniterative algorithm; always successful.
FA: iterative algorithm (typically); sometimes nonconvergence problem; singularity may be a problem. $^1$ For meticulous . One might ask where are variables $X_2$ and $X_3$ themselves on the pic, why were they not drawn? The answer is that we can't draw them, even theoretically. The space on the picture is 3d (defined by "factor plane" and the unique vector $U_1$ ; $X_1$ lying on their mutual complement, plane shaded grey, that's what corresponds to one slope of the "hood" on the picture No.2), and so our graphic resources are exhausted. The three dimensional space spanned by three variables $X_1$ , $X_2$ , $X_3$ together is another space. Neither "factor plane" nor $U_1$ are the subspaces of it. It's what is different from PCA: factors do not belong to the variables' space. Each variable separately lies in its separate grey plane orthogonal to "factor plane" - just like $X_1$ shown on our pic, and that is all: if we were to add, say, $X_2$ to the plot we should have invented 4th dimension. (Just recall that all $U$ s have to be mutually orthogonal; so, to add another $U$ , you must expand dimensionality farther.) Similarly as in regression the coefficients are the coordinates, on the predictors, both of the dependent variable(s) and of the prediction(s) ( See pic under "Multiple Regression", and here , too), in FA loadings are the coordinates, on the factors, both of the observed variables and of their latent parts - the communalities. And exactly as in regression that fact did not make the dependent(s) and the predictors be subspaces of each other, - in FA the similar fact does not make the observed variables and the latent factors be subspaces of each other. A factor is "alien" to a variable in a quite similar sense as a predictor is "alien" to a dependent response. But in PCA, it is other way: principal components are derived from the observed variables and are confined to their space. So, once again to repeat: m common factors of FA are not a subspace of the p input variables. On the contrary: the variables form a subspace in the m+p ( m common factors + p unique factors) union hyperspace. When seen from this perspective (i.e. with the unique factors attracted too) it becomes clear that classic FA is not a dimensionality shrinkage technique, like classic PCA, but is a dimensionality expansion technique. Nevertheless, we give our attention only to a small ( m dimensional common) part of that bloat, since this part solely explains correlations.
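To make the "covariances restored by loadings" point above concrete in code, here is a minimal sketch with simulated data (variable names are made up; factanal works with the correlation matrix, so the check is on correlations rather than raw covariances):

set.seed(1)
f <- rnorm(500)                                  # one common factor
X <- cbind(x1 = 0.8*f + rnorm(500, sd = 0.6),    # three observed variables
           x2 = 0.7*f + rnorm(500, sd = 0.7),
           x3 = 0.6*f + rnorm(500, sd = 0.8))

fa <- factanal(X, factors = 1)                   # common factor model
L  <- loadings(fa)
tcrossprod(L)        # off-diagonal elements ~ off-diagonal of cor(X)
cor(X)

pc <- prcomp(X, scale. = TRUE)
A  <- pc$rotation[, 1] * pc$sdev[1]              # PC1 loadings
tcrossprod(A)        # reproduces the correlations only "blindly and grossly"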
{ "source": [ "https://stats.stackexchange.com/questions/95038", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/30540/" ] }
95,054
I would like to get a p-value and an effect size of an independent categorical variable (with several levels) -- that is "overall" and not for each level separately, as is the normal output from lme4 in R. It is just like the thing people report when running an ANOVA. How can I get this?
Both of the concepts you mention (p-values and effect sizes of linear mixed models) have inherent issues. With respect to effect size , quoting Doug Bates, the original author of lme4 , Assuming that one wants to define an $R^2$ measure, I think an argument could be made for treating the penalized residual sum of squares from a linear mixed model in the same way that we consider the residual sum of squares from a linear model. Or one could use just the residual sum of squares without the penalty or the minimum residual sum of squares obtainable from a given set of terms, which corresponds to an infinite precision matrix. I don't know, really. It depends on what you are trying to characterize. For more information, you can look at this thread , this thread , and this message . Basically, the issue is that there is not an agreed upon method for the inclusion and decomposition of the variance from the random effects in the model. However, there are a few standards that are used. If you have a look at the Wiki set up for/by the r-sig-mixed-models mailing list , there are a couple of approaches listed. One of the suggested methods looks at the correlation between the fitted and the observed values. This can be implemented in R as suggested by Jarrett Byrnes in one of those threads: r2.corr.mer <- function(m) { lmfit <- lm(model.response(model.frame(m)) ~ fitted(m)) summary(lmfit)$r.squared } So for example, say we estimate the following linear mixed model: set.seed(1) d <- data.frame(y = rnorm(250), x = rnorm(250), z = rnorm(250), g = sample(letters[1:4], 250, replace=T) ) library(lme4) summary(fm1 <- lmer(y ~ x + (z | g), data=d)) # Linear mixed model fit by REML ['lmerMod'] # Formula: y ~ x + (z | g) # Data: d # REML criterion at convergence: 744.4 # # Scaled residuals: # Min 1Q Median 3Q Max # -2.7808 -0.6123 -0.0244 0.6330 3.5374 # # Random effects: # Groups Name Variance Std.Dev. Corr # g (Intercept) 0.006218 0.07885 # z 0.001318 0.03631 -1.00 # Residual 1.121439 1.05898 # Number of obs: 250, groups: g, 4 # # Fixed effects: # Estimate Std. Error t value # (Intercept) 0.02180 0.07795 0.280 # x 0.04446 0.06980 0.637 # # Correlation of Fixed Effects: # (Intr) # x -0.005 We can calculate the effect size using the function defined above: r2.corr.mer(fm1) # [1] 0.0160841 A similar alternative is recommended in a paper by Ronghui Xu , referred to as $\Omega^{2}_{0}$, and can be calculated in R simply: 1-var(residuals(fm1))/(var(model.response(model.frame(fm1)))) # [1] 0.01173721 # Usually, it would be even closer to the value above With respect to the p-values , this is a much more contentious issue (at least in the R / lme4 community). See the discussions in the questions here , here , and here among many others. Referencing the Wiki page again, there are a few approaches to test hypotheses on effects in linear mixed models. Listed from "worst to best" (according to the authors of the Wiki page which I believe includes Doug Bates as well as Ben Bolker who contributes here a lot): Wald Z-tests For balanced, nested LMMs where df can be computed: Wald t-tests Likelihood ratio test, either by setting up the model so that the parameter can be isolated/dropped (via anova or drop1 ), or via computing likelihood profiles MCMC or parametric bootstrap confidence intervals They recommend the Markov chain Monte Carlo sampling approach and also list a number of possibilities to implement this from pseudo and fully Bayesian approaches, listed below. 
Pseudo-Bayesian: Post-hoc sampling, typically (1) assuming flat priors and (2) starting from the MLE, possibly using the approximate variance-covariance estimate to choose a candidate distribution Via mcmcsamp (if available for your problem: i.e. LMMs with simple random effects — not GLMMs or complex random effects) Via pvals.fnc in the languageR package, a wrapper for mcmcsamp ) In AD Model Builder, possibly via the glmmADMB package (use the mcmc=TRUE option) or the R2admb package (write your own model definition in AD Model Builder), or outside of R Via the sim function from the arm package (simulates the posterior only for the beta (fixed-effect) coefficients Fully Bayesian approaches: Via the MCMCglmm package Using glmmBUGS (a WinBUGS wrapper/ R interface) Using JAGS/WinBUGS/OpenBUGS etc., via the rjags / r2jags / R2WinBUGS / BRugs packages For the sake of illustration to show what this might look like, below is an MCMCglmm estimated using the MCMCglmm package which you will see yields similar results as the above model and has some kind of Bayesian p-values: library(MCMCglmm) summary(fm2 <- MCMCglmm(y ~ x, random=~us(z):g, data=d)) # Iterations = 3001:12991 # Thinning interval = 10 # Sample size = 1000 # # DIC: 697.7438 # # G-structure: ~us(z):g # # post.mean l-95% CI u-95% CI eff.samp # z:z.g 0.0004363 1.586e-17 0.001268 397.6 # # R-structure: ~units # # post.mean l-95% CI u-95% CI eff.samp # units 0.9466 0.7926 1.123 1000 # # Location effects: y ~ x # # post.mean l-95% CI u-95% CI eff.samp pMCMC # (Intercept) -0.04936 -0.17176 0.07502 1000 0.424 # x -0.07955 -0.19648 0.05811 1000 0.214 I hope this helps somewhat. I think the best advice for somebody starting out with linear mixed models and trying to estimate them in R is to read the Wiki faqs from where most of this information was drawn. It is an excellent resource for all sorts of mixed effects themes from basic to advanced and from modelling to plotting.
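For completeness, here is a rough sketch of what options 3 and 4 from the list above (likelihood ratio test; profile or parametric bootstrap confidence intervals) might look like for the fm1 model fitted earlier, assuming a reasonably recent lme4 (>= 1.0):

# Likelihood ratio test for the fixed effect x: refit with ML (REML = FALSE),
# then compare the nested models
fm1.ml <- update(fm1, REML = FALSE)
fm0.ml <- update(fm1.ml, . ~ . - x)
anova(fm0.ml, fm1.ml)                      # chi-squared LRT and its p-value

# Profile and parametric-bootstrap confidence intervals for all parameters
confint(fm1, method = "profile")
confint(fm1, method = "boot", nsim = 200)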
{ "source": [ "https://stats.stackexchange.com/questions/95054", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/40023/" ] }
95,083
I have a data set with N ~ 5000 and about 1/2 missing on at least one important variable. The main analytic method will be Cox proportional hazards. I plan to use multiple imputation. I will also be splitting into a train and test set. Should I split the data and then impute separately, or impute and then split? If it matters, I will be using PROC MI in SAS .
You should split before pre-processing or imputing. The division between training and test set is an attempt to replicate the situation where you have past information and are building a model which you will test on future as-yet unknown information: the training set takes the place of the past and the test set takes the place of the future, so you only get to test your trained model once. Keeping the past/future analogy in mind, this means anything you do to pre-process or process your data, such as imputing missing values, you should do on the training set alone. You can then remember what you did to your training set if your test set also needs pre-processing or imputing, so that you do it the same way on both sets. Added from comments: if you use the test data to affect the training data, then the test data is being used to build your model, so it ceases to be test data and will not provide a fair test of your model. You risk overfitting, and it was to discourage this that you separated out the test data in the first place
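To illustrate the principle with a toy example (simple mean imputation in R rather than multiple imputation or PROC MI, but the same logic applies): anything estimated for the imputation comes from the training set only and is then reused, unchanged, on the test set.

set.seed(1)
d <- data.frame(y = rnorm(100), x = rnorm(100))
d$x[sample(100, 30)] <- NA                     # introduce missing values
train_id <- sample(100, 70)
train <- d[train_id, ]
test  <- d[-train_id, ]

m <- mean(train$x, na.rm = TRUE)               # imputation "model": training data only
train$x[is.na(train$x)] <- m
test$x[is.na(test$x)]   <- m                   # apply the training-set value to the test set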
{ "source": [ "https://stats.stackexchange.com/questions/95083", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/686/" ] }
95,209
I am using a confusion matrix to check the performance of my classifier. I am using Scikit-Learn, and I am a little bit confused. How can I interpret the result from: from sklearn.metrics import confusion_matrix >>> y_true = [2, 0, 2, 2, 0, 1] >>> y_pred = [0, 0, 2, 2, 0, 2] >>> confusion_matrix(y_true, y_pred) array([[2, 0, 0], [0, 0, 1], [1, 0, 2]]) How can I decide whether these predicted values are good or not?
The confusion matrix is a way of tabulating the number of misclassifications, i.e., the number of predicted classes which ended up in a wrong classification bin based on the true classes. While sklearn.metrics.confusion_matrix provides a numeric matrix, I find it more useful to generate a 'report' using the following: import pandas as pd y_true = pd.Series([2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]) y_pred = pd.Series([0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]) pd.crosstab(y_true, y_pred, rownames=['True'], colnames=['Predicted'], margins=True) which results in: Predicted 0 1 2 All True 0 3 0 0 3 1 0 1 2 3 2 2 1 3 6 All 5 2 5 12 This allows us to see that: The diagonal elements show the number of correct classifications for each class: 3, 1 and 3 for the classes 0, 1 and 2. The off-diagonal elements provides the misclassifications: for example, 2 of the class 2 were misclassified as 0, none of the class 0 were misclassified as 2, etc. The total number of classifications for each class in both y_true and y_pred , from the "All" subtotals This method also works for text labels, and for a large number of samples in the dataset can be extended to provide percentage reports. import numpy as np import pandas as pd # create some data lookup = {0: 'biscuit', 1:'candy', 2:'chocolate', 3:'praline', 4:'cake', 5:'shortbread'} y_true = pd.Series([lookup[_] for _ in np.random.random_integers(0, 5, size=100)]) y_pred = pd.Series([lookup[_] for _ in np.random.random_integers(0, 5, size=100)]) pd.crosstab(y_true, y_pred, rownames=['True'], colnames=['Predicted']).apply(lambda r: 100.0 * r/r.sum()) The output then is: Predicted biscuit cake candy chocolate praline shortbread True biscuit 23.529412 10 23.076923 13.333333 15.384615 9.090909 cake 17.647059 20 0.000000 26.666667 15.384615 18.181818 candy 11.764706 20 23.076923 13.333333 23.076923 31.818182 chocolate 11.764706 5 15.384615 6.666667 15.384615 13.636364 praline 17.647059 10 30.769231 20.000000 0.000000 13.636364 shortbread 17.647059 35 7.692308 20.000000 30.769231 13.636364 where the numbers now represent the percentage (rather than number of cases) of the outcomes that were classified. Although note, that the sklearn.metrics.confusion_matrix output can be directly visualized using: import matplotlib.pyplot as plt conf = sklearn.metrics.confusion_matrix(y_true, y_pred) plt.imshow(conf, cmap='binary', interpolation='None') plt.show()
{ "source": [ "https://stats.stackexchange.com/questions/95209", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/43653/" ] }
95,247
I am trying to wrap my head around the statistical difference between Linear discriminant analysis and Logistic regression. Is my understanding right that, for a two-class classification problem, LDA predicts two normal density functions (one for each class) that create a linear boundary where they intersect, whereas logistic regression only predicts the log-odds function between the two classes, which creates a boundary but does not assume density functions for each class?
It sounds to me that you are correct. Logistic regression indeed does not assume any specific shapes of densities in the space of predictor variables, but LDA does. Here are some differences between the two analyses, briefly. Binary Logistic regression (BLR) vs Linear Discriminant analysis (with 2 groups: also known as Fisher's LDA): BLR: Based on maximum likelihood estimation. LDA: Based on least squares estimation; equivalent to linear regression with a binary predictand (coefficients are proportional and R-square = 1 - Wilks' lambda). BLR: Estimates probability (of group membership) immediately (the predictand is itself taken as a probability, an observed one) and conditionally. LDA: estimates probability mediately (the predictand is viewed as a binned continuous variable, the discriminant) via a classificatory device (such as naive Bayes) which uses both conditional and marginal information. BLR: Not so exigent about the level of the scale and the form of the distribution in the predictors. LDA: Predictors desirably interval-level with a multivariate normal distribution. BLR: No requirements about the within-group covariance matrices of the predictors. LDA: The within-group covariance matrices should be identical in the population. BLR: The groups may have quite different $n$. LDA: The groups should have similar $n$. BLR: Not so sensitive to outliers. LDA: Quite sensitive to outliers. BLR: Younger method. LDA: Older method. BLR: Usually preferred, because less exigent / more robust. LDA: With all its requirements met, often classifies better than BLR (asymptotic relative efficiency 3/2 times higher then).
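A quick sketch of fitting both on the same simulated data (glm for BLR, MASS::lda for LDA; the data and names are made up):

library(MASS)
set.seed(1)
n   <- 200
grp <- factor(rep(c("A", "B"), each = n/2))
x   <- rnorm(n, mean = (grp == "B"))        # one interval-level predictor
fit.blr <- glm(grp ~ x, family = binomial)  # maximum likelihood
fit.lda <- lda(grp ~ x)                     # moment/least-squares based
coef(fit.blr)                               # log-odds coefficients
fit.lda$scaling                             # discriminant coefficients
predict(fit.lda)$posterior[1:5, ]           # posterior probabilities via Bayes rule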
{ "source": [ "https://stats.stackexchange.com/questions/95247", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/36680/" ] }
95,261
I implemented the following function to calculate entropy: from math import log def calc_entropy(probs): my_sum = 0 for p in probs: if p > 0: my_sum += p * log(p, 2) return - my_sum Result: >>> calc_entropy([1/7.0, 1/7.0, 5/7.0]) 1.1488348542809168 >>> from scipy.stats import entropy # using a built-in package # give the same answer >>> entropy([1/7.0, 1/7.0, 5/7.0], base=2) 1.1488348542809166 My understanding was that entropy is between 0 and 1, 0 meaning very certain, and 1 meaning very uncertain. Why do I get measure of entropy greater than 1? I know that if I increase size of log base, the entropy measure will be smaller, but I thought base 2 was standard, so I don't think that's the problem. I must be missing something obvious, but what?
Entropy is not the same as probability . Entropy measures the "information" or "uncertainty" of a random variable. When you are using base 2, it is measured in bits; and there can be more than one bit of information in a variable. In this example, one sample "contains" about 1.15 bits of information. In other words, if you were able to compress a series of samples perfectly, you would need that many bits per sample, on average.
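Numerically (sketched here in R): with K equally likely outcomes the entropy is log2(K) bits, which exceeds 1 bit as soon as K > 2, so a value above 1 is perfectly normal for a three-outcome variable.

entropy2 <- function(p) -sum(p[p > 0] * log2(p[p > 0]))
entropy2(c(1/2, 1/2))        # 1 bit: a fair coin
entropy2(rep(1/3, 3))        # ~1.585 bits: the maximum for 3 outcomes
entropy2(c(1/7, 1/7, 5/7))   # ~1.149 bits, the value from the question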
{ "source": [ "https://stats.stackexchange.com/questions/95261", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11708/" ] }
95,340
Can someone please give me some intuition as to when to choose either SVM or LR? I want to understand the intuition behind what is the difference between the optimization criteria of learning the hyperplane of the two, where the respective aims are as follows: SVM: Try to maximize the margin between the closest support vectors LR: Maximize the posterior class probability Let's consider the linear feature space for both SVM and LR. Some differences I know of already: SVM is deterministic (but we can use Platts model for probability score) while LR is probabilistic. For the kernel space, SVM is faster (stores just support vectors)
Linear SVMs and logistic regression generally perform comparably in practice. Use SVM with a nonlinear kernel if you have reason to believe your data won't be linearly separable (or you need to be more robust to outliers than LR will normally tolerate). Otherwise, just try logistic regression first and see how you do with that simpler model. If logistic regression fails you, try an SVM with a non-linear kernel like a RBF. EDIT: Ok, let's talk about where the objective functions come from. The logistic regression comes from generalized linear regression. A good discussion of the logistic regression objective function in this context can be found here: https://stats.stackexchange.com/a/29326/8451 The Support Vector Machines algorithm is much more geometrically motivated . Instead of assuming a probabilistic model, we're trying to find a particular optimal separating hyperplane, where we define "optimality" in the context of the support vectors. We don't have anything resembling the statistical model we use in logistic regression here, even though the linear case will give us similar results: really this just means that logistic regression does a pretty good job of producing "wide margin" classifiers, since that's all SVM is trying to do (specifically, SVM is trying to "maximize" the margin between the classes). I'll try to come back to this later and get a bit deeper into the weeds, I'm just sort of in the middle of something :p
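As a rough illustration of "try logistic regression first, then an SVM" (in R, assuming the e1071 package is available for the SVM; the data are made up, and a proper comparison would of course use held-out data rather than training accuracy):

library(e1071)
set.seed(1)
n <- 200
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$y <- factor(ifelse(d$x1 + d$x2 + 0.5*rnorm(n) > 0, "A", "B"))

fit.lr  <- glm(y ~ x1 + x2, data = d, family = binomial)
fit.lin <- svm(y ~ x1 + x2, data = d, kernel = "linear", cost = 1)
fit.rbf <- svm(y ~ x1 + x2, data = d, kernel = "radial")   # if a linear fit is not enough

mean(ifelse(fitted(fit.lr) > 0.5, "B", "A") == d$y)   # training accuracy, logistic regression
mean(predict(fit.lin) == d$y)                         # training accuracy, linear SVM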
{ "source": [ "https://stats.stackexchange.com/questions/95340", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/41799/" ] }
95,391
I'm reading a Scientific paper on image classification. In the experimental results they speak of top-1 and top-5 accuracy but i've never heard of the term, nor can find it using google. Can someone give me a definition or point me somewhere? :)
In top-5 accuracy you give yourself credit for having the right answer if the right answer appears in your top five guesses.
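In code, a small sketch with made-up scores (rows are images, columns are classes):

topk_accuracy <- function(scores, truth, k = 5) {
  hits <- sapply(seq_along(truth), function(i) {
    truth[i] %in% order(scores[i, ], decreasing = TRUE)[1:k]   # is the true class in the top k?
  })
  mean(hits)
}
set.seed(1)
scores <- matrix(runif(10 * 1000), nrow = 10)   # 10 images, 1000 classes
truth  <- sample(1000, 10)                      # true class of each image
topk_accuracy(scores, truth, k = 1)             # top-1 accuracy
topk_accuracy(scores, truth, k = 5)             # top-5 accuracy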
{ "source": [ "https://stats.stackexchange.com/questions/95391", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/44547/" ] }
95,939
I'm trying to create a second order polynomial fit to some data I have. Let's say I plot this fit with ggplot() : ggplot(data, aes(foo, bar)) + geom_point() + geom_smooth(method="lm", formula=y~poly(x, 2)) I get: So, a second order fit works quite well. I calculate it with R: summary(lm(data$bar ~ poly(data$foo, 2))) And I get: lm(formula = data$bar ~ poly(data$foo, 2)) # ... # Coefficients: # Estimate Std. Error t value Pr(>|t|) # (Intercept) 3.268162 0.008282 394.623 <2e-16 *** # poly(data$foo, 2)1 -0.122391 0.096225 -1.272 0.206 # poly(data$foo, 2)2 1.575391 0.096225 16.372 <2e-16 *** # .... Now, I would assume the formula for my fit is: $$ \text{bar} = 3.268 - 0.122 \cdot \text{foo} + 1.575 \cdot \text{foo}^2 $$ But that just gives me the wrong values. For example, with $\text{foo}$ being 3 I would expect $\text{bar}$ to become something around 3.15. However, inserting into above formula I get: $$ \text{bar} = 3.268 - 0.122 \cdot 3 + 1.575 \cdot 3^2 = 17.077 $$ What gives? Am I incorrectly interpreting the coefficients of the model?
My detailed answer is below, but the general (i.e. real) answer to this kind of question is: 1) experiment, mess around, look at the data, you can't break the computer no matter what you do, so ... experiment; or 2) read the documentation. Here is some R code which replicates the problem identified in this question, more or less: # This program written in response to a Cross Validated question # http://stats.stackexchange.com/questions/95939/ # # It is an exploration of why the result from lm(y_x+I(x^2)) # looks so different from the result from lm(y~poly(x,2)) library(ggplot2) epsilon <- 0.25*rnorm(100) x <- seq(from=1, to=5, length.out=100) y <- 4 - 0.6*x + 0.1*x^2 + epsilon # Minimum is at x=3, the expected y value there is 4 - 0.6*3 + 0.1*3^2 ggplot(data=NULL,aes(x, y)) + geom_point() + geom_smooth(method = "lm", formula = y ~ poly(x, 2)) summary(lm(y~x+I(x^2))) # Looks right summary(lm(y ~ poly(x, 2))) # Looks like garbage # What happened? # What do x and x^2 look like: head(cbind(x,x^2)) #What does poly(x,2) look like: head(poly(x,2)) The first lm returns the expected answer: Call: lm(formula = y ~ x + I(x^2)) Residuals: Min 1Q Median 3Q Max -0.53815 -0.13465 -0.01262 0.15369 0.61645 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.92734 0.15376 25.542 < 2e-16 *** x -0.53929 0.11221 -4.806 5.62e-06 *** I(x^2) 0.09029 0.01843 4.900 3.84e-06 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.2241 on 97 degrees of freedom Multiple R-squared: 0.1985, Adjusted R-squared: 0.182 F-statistic: 12.01 on 2 and 97 DF, p-value: 2.181e-05 The second lm returns something odd: Call: lm(formula = y ~ poly(x, 2)) Residuals: Min 1Q Median 3Q Max -0.53815 -0.13465 -0.01262 0.15369 0.61645 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.24489 0.02241 144.765 < 2e-16 *** poly(x, 2)1 0.02853 0.22415 0.127 0.899 poly(x, 2)2 1.09835 0.22415 4.900 3.84e-06 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.2241 on 97 degrees of freedom Multiple R-squared: 0.1985, Adjusted R-squared: 0.182 F-statistic: 12.01 on 2 and 97 DF, p-value: 2.181e-05 Since lm is the same in the two calls, it has to be the arguments of lm which are different. So, let's look at the arguments. Obviously, y is the same. It's the other parts. Let's look at the first few observations on the right-hand-side variables in the first call of lm . The return of head(cbind(x,x^2)) looks like: x [1,] 1.000000 1.000000 [2,] 1.040404 1.082441 [3,] 1.080808 1.168146 [4,] 1.121212 1.257117 [5,] 1.161616 1.349352 [6,] 1.202020 1.444853 This is as expected. First column is x and second column is x^2 . How about the second call of lm , the one with poly? The return of head(poly(x,2)) looks like: 1 2 [1,] -0.1714816 0.2169976 [2,] -0.1680173 0.2038462 [3,] -0.1645531 0.1909632 [4,] -0.1610888 0.1783486 [5,] -0.1576245 0.1660025 [6,] -0.1541602 0.1539247 OK, that's really different. First column is not x , and second column is not x^2 . So, whatever poly(x,2) does, it does not return x and x^2 . If we want to know what poly does, we might start by reading its help file. So we say help(poly) . The description says: Returns or evaluates orthogonal polynomials of degree 1 to degree over the specified set of points x. These are all orthogonal to the constant polynomial of degree 0. Alternatively, evaluate raw polynomials. Now, either you know what "orthogonal polynomials" are or you don't. 
If you don't, then use Wikipedia or Bing (not Google, of course, because Google is evil---not as bad as Apple, naturally, but still bad). Or, you might decide you don't care what orthogonal polynomials are. You might notice the phrase "raw polynomials" and you might notice a little further down in the help file that poly has an option raw which is, by default, equal to FALSE . Those two considerations might inspire you to try out head(poly(x, 2, raw=TRUE)) which returns: 1 2 [1,] 1.000000 1.000000 [2,] 1.040404 1.082441 [3,] 1.080808 1.168146 [4,] 1.121212 1.257117 [5,] 1.161616 1.349352 [6,] 1.202020 1.444853 Excited by this discovery (it looks right, now, yes?), you might go on to try summary(lm(y ~ poly(x, 2, raw=TRUE))) This returns: Call: lm(formula = y ~ poly(x, 2, raw = TRUE)) Residuals: Min 1Q Median 3Q Max -0.53815 -0.13465 -0.01262 0.15369 0.61645 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.92734 0.15376 25.542 < 2e-16 *** poly(x, 2, raw = TRUE)1 -0.53929 0.11221 -4.806 5.62e-06 *** poly(x, 2, raw = TRUE)2 0.09029 0.01843 4.900 3.84e-06 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.2241 on 97 degrees of freedom Multiple R-squared: 0.1985, Adjusted R-squared: 0.182 F-statistic: 12.01 on 2 and 97 DF, p-value: 2.181e-05 There are at least two levels to the above answer. First, I answered your question. Second, and much more importantly, I illustrated how you are supposed to go about answering questions like this yourself. Every single person who "knows how to program" has gone through a sequence like the one above sixty million times. Even people as depressingly bad at programming as I am go through this sequence all the time. It's normal for code not to work. It's normal to misunderstand what functions do. The way to deal with it is to screw around, experiment, look at the data, and RTFM. Get yourself out of "mindlessly following a recipe" mode and into "detective" mode.
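One last check worth doing in detective mode: the orthogonal and raw parameterisations are the same model, so their fitted values agree even though the coefficients look completely different.

all.equal(fitted(lm(y ~ poly(x, 2))),
          fitted(lm(y ~ poly(x, 2, raw = TRUE))))   # TRUE: identical predictions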
{ "source": [ "https://stats.stackexchange.com/questions/95939", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/43472/" ] }
95,947
I recently came across this identity: $$E \left[ E \left(Y|X,Z \right) |X \right] =E \left[Y | X \right]$$ I am of course familiar with the simpler version of that rule, namely that $E \left[ E \left(Y|X \right) \right]=E \left(Y\right) $ but I was not able to find justification for its generalization. I would be grateful if someone could point me to a not-so-technical reference for that fact or, even better, if someone could lay out a simple proof for this important result.
INFORMAL TREATMENT We should remember that the notation where we condition on random variables is inaccurate, although economical, as notation. In reality we condition on the sigma-algebra that these random variables generate. In other words $E[Y\mid X]$ is meant to mean $E[Y\mid \sigma(X)]$. This remark may seem out of place in an "Informal Treatment", but it reminds us that our conditioning entities are collections of sets (and when we condition on a single value, then this is a singleton set). And what do these sets contain? They contain the information with which the possible values of the random variable $X$ supply us about what may happen with the realization of $Y$. Bringing in the concept of Information, permits us to think about (and use) the Law of Iterated Expectations (sometimes called the "Tower Property") in a very intuitive way: The sigma-algebra generated by two random variables, is at least as large as that generated by one random variable: $\sigma (X) \subseteq \sigma(X,Z)$ in the proper set-theoretic meaning. So the information about $Y$ contained in $\sigma(X,Z)$ is at least as great as the corresponding information in $\sigma (X)$. Now, as notational innuendo, set $\sigma (X) \equiv I_x$ and $\sigma(X,Z) \equiv I_{xz}$. Then the LHS of the equation we are looking at, can be written $$E \left[ E \left(Y|I_{xz} \right) |I_{x} \right]$$ Describing verbally the above expression we have : "what is the expectation of {the expected value of $Y$ given Information $I_{xz}$} given that we have available information $I_x$ only ?" Can we somehow "take into account" $I_{xz}$? No - we only know $I_x$. But if we use what we have (as we are obliged by the expression we want to resolve), then we are essentially saying things about $Y$ under the expectations operator, i.e. we say "$E(Y\mid I_x)$", no more -we have just exhausted our information. Hence $$E \left[ E \left(Y|I_{xz} \right) |I_{x} \right] = E\left(Y|I_{x} \right)$$ If somebody else doesn't, I will return for the formal treatment. A (bit more) FORMAL TREATMENT Let's see how two very important books of probability theory, P. Billingsley's Probability and Measure (3d ed.-1995) and D. Williams "Probability with Martingales" (1991), treat the matter of proving the "Law Of Iterated Expectations": Billingsley devotes exactly three lines to the proof. Williams, and I quote, says "(the Tower Property) is virtually immediate from the definition of conditional expectation". That's one line of text. Billingsley's proof is not less opaque. They are of course right: this important and very intuitive property of conditional expectation derives essentially directly (and almost immediately) from its definition -the only problem is, I suspect that this definition is not usually taught, or at least not highlighted, outside probability or measure theoretic circles. But in order to show in (almost) three lines that the Law of Iterated Expectations holds, we need the definition of conditional expectation, or rather, its defining property . Let a probability space $(\Omega, \mathcal F, \mathbf P)$, and an integrable random variable $Y$. Let $\mathcal G$ be a sub-$\sigma$-algebra of $\mathcal F$, $\mathcal G \subseteq \mathcal F$. Then there exists a function $W$ that is $\mathcal G$-measurable, is integrable and (this is the defining property) $$E(W\cdot\mathbb 1_{G}) = E(Y\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal G \qquad [1]$$ where $1_{G}$ is the indicator function of the set $G$. 
We say that $W$ is ("a version of") the conditional expectation of $Y$ given $\mathcal G$, and we write $W = E(Y\mid \mathcal G) \;a.s.$ The critical detail to note here is that the conditional expectation, has the same expected value as $Y$ does, not just over the whole $\mathcal G$, but in every subset $G$ of $\mathcal G$. (I will try now to present how the Tower property derives from the definition of conditional expectation). $W$ is a $\mathcal G$-measurable random variable. Consider then some sub-$\sigma$-algebra, say $\mathcal H \subseteq \mathcal G$. Then $G\in \mathcal H \Rightarrow G\in \mathcal G$. So, in an analogous manner as previously, we have the conditional expectation of $W$ given $\mathcal H$, say $U=E(W\mid \mathcal H) \;a.s.$ that is characterized by $$E(U\cdot\mathbb 1_{G}) = E(W\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal H \qquad [2]$$ Since $\mathcal H \subseteq \mathcal G$, equations $[1]$ and $[2]$ give us $$E(U\cdot\mathbb 1_{G}) = E(Y\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal H \qquad [3]$$ But this is the defining property of the conditional expectation of $Y$ given $\mathcal H$. So we are entitled to write $U=E(Y\mid \mathcal H)\; a.s.$ Since we have also by construction $U = E(W\mid \mathcal H) = E\big(E[Y\mid \mathcal G]\mid \mathcal H\big)$, we just proved the Tower property, or the general form of the Law of Iterated Expectations - in eight lines.
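For readers who prefer a numerical sanity check to the measure-theoretic argument, here is a small simulation sketch (R, made-up data) with discrete $X$ and $Z$, where conditional expectations can be estimated by group means; the identity then holds exactly for the sample means as well.

set.seed(1)
n <- 1e5
X <- sample(1:2, n, replace = TRUE)
Z <- sample(1:2, n, replace = TRUE)
Y <- X + Z + rnorm(n)

E.Y.XZ <- ave(Y, X, Z)        # E(Y | X, Z): mean of Y within each (X, Z) cell
lhs    <- ave(E.Y.XZ, X)      # E[ E(Y | X, Z) | X ]
rhs    <- ave(Y, X)           # E(Y | X)
max(abs(lhs - rhs))           # 0 up to floating point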
{ "source": [ "https://stats.stackexchange.com/questions/95947", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/31420/" ] }
95,993
In my probability class the term "sum of random variables" is constantly used. However, I'm stuck on what exactly that means. Are we talking about the sum of a bunch of realizations from a random variable? If so, doesn't that add up to a single number? How does a sum of random variable realizations lead us to a distribution, or a cdf / pdf / function of any kind? And if it isn't random variable realizations, then what exactly is being added?
A physical, intuitive model of a random variable is to write down the name of every member of a population on one or more slips of paper--"tickets"--and put those tickets into a box. The process of thoroughly mixing the contents of the box, followed by blindly pulling out one ticket--exactly as in a lottery--models randomness. Non-uniform probabilities are modeled by introducing variable numbers of tickets in the box: more tickets for the more probable members, fewer for the less probable. A random variable is a number associated with each member of the population. (Therefore, for consistency, every ticket for a given member has to have the same number written on it.) Multiple random variables are modeled by reserving spaces on the tickets for more than one number. We usually give those spaces names like $X,$ $Y,$ and $Z$. The sum of those random variables is the usual sum: reserve a new space on every ticket for the sum, read off the values of $X,$ $Y,$ etc. on each ticket, and write their sum in that new space. This is a consistent way of writing numbers on the tickets, so it's another random variable. This figure portrays a box representing a population $\Omega=\{\alpha,\beta,\gamma\}$ and three random variables $X$, $Y$, and $X+Y$. It contains six tickets: the three for $\alpha$ (blue) give it a probability of $3/6$, the two for $\beta$ (yellow) give it a probability of $2/6$, and the one for $\gamma$ (green) give it a probability of $1/6$. In order to display what is written on the tickets, they are shown before being mixed. The beauty of this approach is that all the paradoxical parts of the question turn out to be correct: the sum of random variables is indeed a single, definite number (for each member of the population), yet it also leads to a distribution (given by the frequencies with which the sum appears in the box), and it still effectively models a random process (because the tickets are still blindly drawn from the box). In this fashion the sum can simultaneously have a definite value (given by the rules of addition as applied to numbers on each of the tickets) while the realization --which will be a ticket drawn from the box--does not have a value until it is carried out. This physical model of drawing tickets from a box is adopted in the theoretical literature and made rigorous with the definitions of sample space (the population), sigma algebras (with their associated probability measures), and random variables as measurable functions defined on the sample space. This account of random variables is elaborated, with realistic examples, at "What is meant by a random variable?" .
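The ticket-box model is easy to simulate. In this R sketch each row is one ticket; the numbers written on the tickets are made up (not the ones in the figure), but the mechanics are the same: $X+Y$ is just one more number written on every ticket, and drawing tickets gives it a distribution of its own.

box <- data.frame(member = c("alpha", "alpha", "alpha", "beta", "beta", "gamma"),
                  X      = c(1, 1, 1, 2, 2, 4),
                  Y      = c(3, 3, 3, 5, 5, 2))
box$X.plus.Y <- box$X + box$Y          # a definite number on each ticket

draws <- box[sample(nrow(box), 10000, replace = TRUE), ]   # mixing and drawing
table(draws$X.plus.Y) / 10000          # the distribution of the sum: ~3/6 for 4, ~2/6 for 7, ~1/6 for 6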
{ "source": [ "https://stats.stackexchange.com/questions/95993", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/44797/" ] }
96,739
Here @gung makes reference to the .632+ rule. A quick Google search doesn't yield an easy to understand answer as to what this rule means and for what purpose it is used. Would someone please elucidate the .632+ rule?
I will get to the 0.632 estimator, but it'll be a somewhat long development: Suppose we want to predict $Y$ with $X$ using the function $f$, where $f$ may depend on some parameters that are estimated using the data $(\mathbf{Y}, \mathbf{X})$, e.g. $f(\mathbf{X}) = \mathbf{X}\mathbf{\beta}$ A naïve estimate of prediction error is $$\overline{err} = \dfrac{1}{N}\sum_{i=1}^N L(y_i,f(x_i))$$ where $L$ is some loss function, e.g. squared error loss. This is often called training error. Efron et al. calls it apparent error rate or resubstitution rate. It's not very good since we use our data $(x_i,y_i)$ to fit $f$. This results in $\overline{err}$ being downward biased. You want to know how well your model $f$ does in predicting new values. Often we use cross-validation as a simple way to estimate the expected extra-sample prediction error (how well does our model do on data not in our training set?). $$Err = \text{E}\left[ L(Y, f(X))\right]$$ A popular way to do this is to do $K$-fold cross-validation. Split your data into $K$ groups (e.g. 10). For each group $k$, fit your model on the remaining $K-1$ groups and test it on the $k$th group. Our cross-validated extra-sample prediction error is just the average $$Err_{CV} = \dfrac{1}{N}\sum_{i=1}^N L(y_i, f_{-\kappa(i)}(x_i))$$ where $\kappa$ is some index function that indicates the partition to which observation $i$ is allocated and $f_{-\kappa(i)}(x_i)$ is the predicted value of $x_i$ using data not in the $\kappa(i)$th set. This estimator is approximately unbiased for the true prediction error when $K=N$ and has larger variance and is more computationally expensive for larger $K$. So once again we see the bias–variance trade-off at play. Instead of cross-validation we could use the bootstrap to estimate the extra-sample prediction error. Bootstrap resampling can be used to estimate the sampling distribution of any statistic. If our training data is $\mathbf{X} = (x_1,\ldots,x_N)$, then we can think of taking $B$ bootstrap samples (with replacement) from this set $\mathbf{Z}_1,\ldots,\mathbf{Z}_B$ where each $\mathbf{Z}_i$ is a set of $N$ samples. Now we can use our bootstrap samples to estimate extra-sample prediction error: $$Err_{boot} = \dfrac{1}{B}\sum_{b=1}^B\dfrac{1}{N}\sum_{i=1}^N L(y_i, f_b(x_i))$$ where $f_b(x_i)$ is the predicted value at $x_i$ from the model fit to the $b$th bootstrap dataset. Unfortunately, this is not a particularly good estimator because bootstrap samples used to produce $f_b(x_i)$ may have contained $x_i$. The leave-one-out bootstrap estimator offers an improvement by mimicking cross-validation and is defined as: $$Err_{boot(1)} = \dfrac{1}{N}\sum_{i=1}^N\dfrac{1}{|C^{-i}|}\sum_{b\in C^{-i}}L(y_i,f_b(x_i))$$ where $C^{-i}$ is the set of indices for the bootstrap samples that do not contain observation $i$, and $|C^{-i}|$ is the number of such samples. $Err_{boot(1)}$ solves the overfitting problem, but is still biased (this one is upward biased). The bias is due to non-distinct observations in the bootstrap samples that result from sampling with replacement. The average number of distinct observations in each sample is about $0.632N$ (see this answer for an explanation of why Why on average does each bootstrap sample contain roughly two thirds of observations? ). 
To solve the bias problem, Efron and Tibshirani proposed the 0.632 estimator: $$ Err_{.632} = 0.368\overline{err} + 0.632Err_{boot(1)}$$ where $$\overline{err} = \dfrac{1}{N}\sum_{i=1}^N L(y_i,f(x_i))$$ is the naïve estimate of prediction error often called training error. The idea is to average a downward biased estimate and an upward biased estimate. However, if we have a highly overfit prediction function (i.e. $\overline{err}=0$) then even the .632 estimator will be downward biased. The .632+ estimator is designed to be a less-biased compromise between $\overline{err}$ and $Err_{boot(1)}$. $$ Err_{.632+} = (1 - w) \overline{err} + w Err_{boot(1)} $$ with $$w = \dfrac{0.632}{1 - 0.368R} \quad\text{and}\quad R = \dfrac{Err_{boot(1)} - \overline{err}}{\gamma - \overline{err}} $$ where $\gamma$ is the no-information error rate, estimated by evaluating the prediction model on all possible combinations of targets $y_i$ and predictors $x_i$. $$\gamma = \dfrac{1}{N^2}\sum_{i=1}^N\sum_{j=1}^N L(y_i, f(x_j))$$. Here $R$ measures the relative overfitting rate. If there is no overfitting (R=0, when the $Err_{boot(1)} = \overline{err}$) this is equal to the .632 estimator.
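A rough R sketch of $Err_{boot(1)}$ and the .632 estimator for a simple linear model with squared-error loss (the .632+ correction is omitted for brevity; this is an illustration, not a reference implementation):

set.seed(1)
n <- 100; B <- 200
x <- rnorm(n); y <- 1 + 2*x + rnorm(n)

loss <- matrix(NA, B, n)
for (b in 1:B) {
  idx <- sample(n, replace = TRUE)                      # bootstrap sample
  fit <- lm(y ~ x, data = data.frame(x = x[idx], y = y[idx]))
  out <- setdiff(1:n, idx)                              # observations left out of this sample
  loss[b, out] <- (y[out] - predict(fit, data.frame(x = x[out])))^2
}
err.boot1 <- mean(colMeans(loss, na.rm = TRUE))         # leave-one-out bootstrap error
err.train <- mean(residuals(lm(y ~ x))^2)               # apparent (training) error
err.632   <- 0.368 * err.train + 0.632 * err.boot1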
{ "source": [ "https://stats.stackexchange.com/questions/96739", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/196/" ] }
96,742
I am using the following regression: $$\text{Test score} = \beta_0+\beta_1\text{Mother's employment}+\beta_2\text{Mother's education}$$ where "Mother's employment" is a set of dummy variables indicating whether the mother works more than 35 hours a week, is unemployed or is absent , and "Mother's education" is also a set of dummy variables indicating if the mother has a high school diploma, a college degree or a PhD. If the mother is absent, then "Mother's education" is not applicable, i.e. there is no answer. How do I deal with this in Stata ? Mean imputation? How do I do that with dummy variables?
{ "source": [ "https://stats.stackexchange.com/questions/96742", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/45115/" ] }
96,751
I have a test which contains 24 items, so the maximum result is 24 and the minimum result is 0. How can I calculate the maximum standard deviation possible? Thank you!
{ "source": [ "https://stats.stackexchange.com/questions/96751", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/45118/" ] }
97,014
Gradient descent has a problem of getting stuck in local minima. We would need to run gradient descent an exponential number of times in order to find the global minimum. Can anybody tell me about any alternatives to gradient descent as applied in neural network learning, along with their pros and cons?
This is more a problem to do with the function being minimized than with the method used. If finding the true global minimum is important, then use a method such as simulated annealing . This will be able to find the global minimum, but may take a very long time to do so. In the case of neural nets, local minima are not necessarily that much of a problem. Some of the local minima are due to the fact that you can get a functionally identical model by permuting the hidden layer units, or negating the inputs and output weights of the network etc. Also if the local minimum is only slightly non-optimal, then the difference in performance will be minimal and so it won't really matter. Lastly, and this is an important point, the key problem in fitting a neural network is over-fitting, so aggressively searching for the global minimum of the cost function is likely to result in overfitting and a model that performs poorly. Adding a regularisation term, e.g. weight decay, can help to smooth out the cost function, which can reduce the problem of local minima a little, and is something I would recommend anyway as a means of avoiding overfitting. The best method however of avoiding local minima in neural networks is to use a Gaussian Process model (or a Radial Basis Function neural network), which has fewer problems with local minima.
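As a toy illustration of the simulated-annealing idea (nothing neural-network specific), base R's optim() offers method = "SANN"; the bumpy one-dimensional objective below is made up, and the temperature and iteration settings are arbitrary.

# Compare a gradient-based optimiser with simulated annealing on a
# made-up function with many local minima (illustrative only)
f <- function(x) x^2 + 10 * sin(3 * x)

start <- 4
grad.based <- optim(start, f, method = "BFGS")   # may settle in a nearby local minimum
annealed   <- optim(start, f, method = "SANN",   # stochastic search, slower but more global
                    control = list(maxit = 20000, temp = 10))
c(BFGS = grad.based$value, SANN = annealed$value)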
{ "source": [ "https://stats.stackexchange.com/questions/97014", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/32467/" ] }
97,098
This isn't a strictly stats question--I can read all the textbooks about ANOVA assumptions--I'm trying to figure out how actual working analysts handle data that doesn't quite meet the assumptions. I've gone through a lot of questions on this site looking for answers and I keep finding posts about when not to use ANOVA (in an abstract, idealized mathematical context) or how to do some of the things I describe below in R. I'm really trying to figure out what decisions people actually make and why. I'm running analysis on grouped data from trees (actual trees, not statistical trees) in four groups. I've got data for about 35 attributes for each tree and I'm going through each attribute to determine if the groups differ significantly on that attribute. However, in a couple of cases, the ANOVA assumptions are slightly violated because the variances aren't equal (according to a Levene's test, using alpha=.05). As I see it, my options are to: 1. Power transform the data and see if it changes the Levene p-val. 2. Use a non-parametric test like a Wilcoxon (if so, which one?). 3. Do some kind of correction to the ANOVA result, like a Bonferroni (I'm not actually sure if something like this exists?). I've tried the first two options and gotten slightly different results--in some cases one approach is significant and the other is not. I'm afraid of falling into the p-value fishing trap, and I'm looking for advice that will help me justify which approach to use. I've also read some things that suggest that heteroscedasticity isn't really that big of a problem for ANOVA unless the means and variances are correlated (i.e. they both increase together), so perhaps I can just ignore the Levene's result unless I see a pattern like that? If so, is there a test for this? Finally, I should add that I'm doing this analysis for publication in a peer-reviewed journal, so whatever approach I settle on has to pass muster with reviewers. So, if anyone can provide links to similar, published examples that would be fantastic.
I'm trying to figure out how actual working analysts handle data that doesn't quite meet the assumptions. It depends on my needs, which assumptions are violated, in what way, how badly, how much that affects the inference, and sometimes on the sample size. I'm running analysis on grouped data from trees in four groups. I've got data for about 35 attributes for each tree and I'm going through each attribute to determine if the groups differ significantly on that attribute. However, in a couple of cases, the ANOVA assumptions are slightly violated because the variances aren't equal (according to a Levene's test, using alpha=.05). 1) If sample sizes are equal, you don't have much of a problem. ANOVA is quite (level-)robust to different variances if the n's are equal. 2) testing equality of variance before deciding whether to assume it is recommended against by a number of studies. If you're in any real doubt that they'll be close to equal, it's better to simply assume they're unequal. Some references: Zimmerman, D.W. (2004), "A note on preliminary tests of equality of variances." Br. J. Math. Stat. Psychol. , May ; 57 (Pt 1): 173-81. http://www.ncbi.nlm.nih.gov/pubmed/15171807 Henrik gives three references here 3) It's the effect size that matters, rather than whether your sample is large enough to tell you they're significantly different. So in large samples, a small difference in variance will show as highly significant by Levene's test, but will be of essentially no consequence in its impact. If the samples are large and the effect size - the ratio of variances or the differences in variances - are quite close to what they should be, then the p-value is of no consequence. (On the other hand, in small samples, a nice big p-value is of little comfort. Either way the test doesn't answer the right question.) Note that there's a Welch-Satterthwaite type adjustment to the estimate of residual standard error and d.f. in ANOVA, just as there is in two-sample t-tests. Use a non-parametric test like a Wilcoxon (if so, which one?). If you're interested in location-shift alternatives, you're still assuming constant spread. If you're interested in much more general alternatives then you might perhaps consider it; the k-sample equivalent to a Wilcoxon test is a Kruskal-Wallis test. Do some kind of correction to the ANOVA result See my above suggestion of considering Welch-Satterthwaite, that's a 'kind of correction'. (Alternatively you might cast your ANOVA as a set of pairwise Welch-type t-tests, in which case you likely would want to look at a Bonferroni or something similar) I've also read some things that suggest that heteroscedasticity isn't really that big of a problem for ANOVA unless the means and variances are correlated (i.e. they both increase together) You'd have to cite something like that. Having looked at a number of situations with t-tests, I don't think it's clearly true, so I'd like to see why they think so; perhaps the situation is restricted in some way. It would be nice if it were the case though because pretty often generalized linear models can help with that situation. Finally, I should add that I'm doing this analysis for publication in a peer-reviewed journal, so whatever approach I settle on has to pass muster with reviewers. It's very hard to predict what might satisfy your reviewers. Most of us don't work with trees.
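For reference, the Welch-type one-way ANOVA and the Kruskal-Wallis test mentioned above are both available in base R; the simulated data frame below is only a placeholder for the real tree attributes.

# Placeholder data: one attribute measured in four groups of trees
set.seed(42)
trees.df <- data.frame(group = factor(rep(1:4, each = 20)),
                       attr  = rnorm(80,
                                     mean = rep(c(10, 11, 10, 12), each = 20),
                                     sd   = rep(c(1, 1.5, 2, 2.5), each = 20)))

# Classical one-way ANOVA (assumes equal variances)
summary(aov(attr ~ group, data = trees.df))

# Welch-Satterthwaite style one-way ANOVA (does not assume equal variances)
oneway.test(attr ~ group, data = trees.df, var.equal = FALSE)

# Kruskal-Wallis test: the k-sample analogue of the Wilcoxon rank-sum test
kruskal.test(attr ~ group, data = trees.df)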
{ "source": [ "https://stats.stackexchange.com/questions/97098", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/42170/" ] }
97,515
I'm reading a paper where the authors are leading from a discussion of maximum likelihood estimation to Bayes' Theorem, ostensibly as an introduction for beginners. As a likelihood example, they start with a binomial distribution: $$p(x|n,\theta) = \binom{n}{x}\theta^x(1-\theta)^{n-x}$$ and then take the log of both sides $$\ell(\theta|x, n) = x \ln (\theta) + (n-x)\ln (1-\theta)$$ with the rationale that: "Because the likelihood is only defined up to a multiplicative constant of proportionality (or an additive constant for the log-likelihood), we can rescale ... by dropping the binomial coefficient and writing the log-likelihood in place of the likelihood" The math makes sense, but I can't understand what is meant by "the likelihood is only defined up to a multiplicative constant of proportionality" and how this allows dropping the binomial coefficient and going from $p(x|n,\theta)$ to $\ell(\theta|x,n)$. Similar terminology has come up in other questions ( here and here ), but it is still not clear what, practically, likelihood being defined or bringing information up to a multiplicative constant means. Is it possible to explain this in layman's terms?
The point is that sometimes, different models (for the same data) can lead to likelihood functions which differ by a multiplicative constant, but the information content must clearly be the same. An example: We model $n$ independent Bernoulli experiments, leading to data $X_1, \dots, X_n$ , each with a Bernoulli distribution with (probability) parameter $p$ . This leads to the likelihood function $$ \prod_{i=1}^n p^{x_i} (1-p)^{1-x_i} $$ Or we can summarize the data by the binomially distributed variable $Y=X_1+X_2+\dotsm+X_n$ , which has a binomial distribution, leading to the likelihood function $$ \binom{n}{y} p^y (1-p)^{n-y} $$ which, as a function of the unknown parameter $p$ , is proportional to the former likelihood function. The two likelihood functions clearly contains the same information, and should lead to the same inferences! And indeed, by definition, they are considered the same likelihood function. Another viewpoint: observe that when the likelihood functions are used in Bayes theorem, as needed for bayesian analysis, such multiplicative constants simply cancel! so they are clearly irrelevant to bayesian inference. Likewise, it will cancel when calculating likelihood ratios, as used in optimal hypothesis tests (Neyman-Pearson lemma.) And it will have no influence on the value of maximum likelihood estimators. So we can see that in much of frequentist inference it cannot play a role. We can argue from still another viewpoint. The Bernoulli probability function (hereafter we use the term "density") above is really a density with respect to counting measure, that is, the measure on the non-negative integers with mass one for each non-negative integer. But we could have defined a density with respect to some other dominating measure. In this example this will seem (and is) artificial, but in larger spaces (function spaces) it is really fundamental! Let us, for the purpose of illustration, use the specific geometric distribution, written $\lambda$ , with $\lambda(0)=1/2$ , $\lambda(1)=1/4$ , $\lambda(2)=1/8$ and so on. Then the density of the Bernoulli distribution with respect to $\lambda$ is given by $$ f_{\lambda}(x) = p^x (1-p)^{1-x}\cdot 2^{x+1} $$ meaning that $$ P(X=x)= f_\lambda(x) \cdot \lambda(x) $$ With this new, dominating, measure, the likelihood function becomes (with notation from above) $$ \prod_{i=1}^n p^{x_i} (1-p)^{1-x_i} 2^{x_i+1} = p^y (1-p)^{n-y} 2^{y+n} $$ note the extra factor $2^{y+n}$ . So when changing the dominating measure used in the definition of the likelihood function, there arises a new multiplicative constant, which does not depend on the unknown parameter $p$ , and is clearly irrelevant. That is another way to see how multiplicative constants must be irrelevant. This argument can be generalized using Radon-Nikodym derivatives (as the argument above is an example of.)
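A quick numerical check of the first point (with toy numbers, not from the paper in question): the Bernoulli and binomial log-likelihoods differ only by an additive constant in $p$, so they are maximised at the same value.

# Toy illustration: the two log-likelihoods differ only by a constant in p
set.seed(7)
n <- 20
x <- rbinom(n, size = 1, prob = 0.3)   # Bernoulli data
y <- sum(x)                            # binomial summary of the same data

loglik.bern  <- function(p) sum(x * log(p) + (1 - x) * log(1 - p))
loglik.binom <- function(p) dbinom(y, size = n, prob = p, log = TRUE)

p.grid <- seq(0.01, 0.99, by = 0.01)
# The difference is constant in p and equals log(choose(n, y)) ...
range(loglik.binom(p.grid) - sapply(p.grid, loglik.bern))
log(choose(n, y))
# ... so both are maximised at the same value, the sample mean y/n
optimize(loglik.bern,  interval = c(0.001, 0.999), maximum = TRUE)$maximum
optimize(loglik.binom, interval = c(0.001, 0.999), maximum = TRUE)$maximum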
{ "source": [ "https://stats.stackexchange.com/questions/97515", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/597/" ] }
97,832
I am a bit lost in the process of WLS regression. I have been given a dataset and my task is to test whether there is heteroscedasticity, and if so I should run WLS regression. I have carried out the test and found evidence for heteroscedasticity, so I need to run the WLS. I have been told that WLS is basically OLS regression of a transformed model, but I am a bit confused about finding the transformation function. I have read some articles which suggested that the transformation can be a function of the squared residuals from the OLS regression, but I would appreciate it if someone could help me to get on the right track.
Weighted least squares (WLS) regression is not a transformed model. Instead, you are simply treating each observation as more or less informative about the underlying relationship between $X$ and $Y$. Those points that are more informative are given more 'weight', and those that are less informative are given less weight. You are right that weighted least squares (WLS) regression is technically only valid if the weights are known a-priori. However, (OLS) linear regression is fairly robust against heteroscedasticity and thus so is WLS if your estimates are in the ballpark. A rule of thumb for OLS regression is that it isn't too impacted by heteroscedasticity as long as the maximum variance is not greater than 4 times the minimum variance. For example, if the variance of the residuals / errors increases with $X$, then you would be OK if the variance of the residuals at the high end were less than four times the variance of the residuals at the low end. The implication of this is that if your weights get you within that range, you are reasonably safe. It's kind of a horseshoes and hand grenades situation. As a result, you can try to estimate the function relating the variance of the residuals to the levels of your predictor variables. There are several issues pertaining to how such estimation should be done: (1) Remember that the weights should be the reciprocal of the variance (or whatever you use). (2) If your data occur only at discrete levels of $X$, like in an experiment or an ANOVA, then you can estimate the variance directly at each level of $X$ and use that. If the estimates are at discrete levels of a continuous variable (e.g., 0 mg., 10 mg., 20 mg., etc.), you may want to smooth those, but it probably won't make much difference. (3) Estimates of variances, due to the squaring, are very susceptible to outliers and/or high leverage points, though. If your data are not evenly distributed across $X$, or you have relatively few data, estimating the variance directly is not recommended. It is better to estimate something that is expected to correlate with variance, but which is more robust. A common choice would be to use the square root of the absolute values of the deviations from the conditional mean. (For example, in R, plot(model, which=3) displays the square roots of the absolute standardized residuals against the fitted values, known as the scale-location or "spread level" plot, to help you diagnose potential heteroscedasticity; see my answer here .) Even more robust might be to use the conditional interquartile range, or the conditional median absolute deviation from the median . (4) If $X$ is a continuous variable, the typical strategy is to use a simple OLS regression to get the residuals, and then regress one of the functions in (3) (most likely the root absolute deviation) onto $X$. The predicted value of this function is used for the weight associated with that point. (5) Getting your weights from the residuals of an OLS regression is reasonable because OLS is unbiased, even in the presence of heteroscedasticity. Nonetheless, those weights are contingent on the original model, and may change the fit of the subsequent WLS model. Thus, you should check your results by comparing the estimated betas from the two regressions. If they are very similar, you are OK. If the WLS coefficients diverge from the OLS ones, you should use the WLS estimates to compute residuals manually (the reported residuals from the WLS fit will take the weights into account).
Having calculated a new set of residuals, determine the weights again and use the new weights in a second WLS regression. This process should be repeated until two sets of estimated betas are sufficiently similar (even doing this once is uncommon, though). If this process makes you somewhat uncomfortable, because the weights are estimated, and because they are contingent on the earlier, incorrect model, another option is to use the Huber-White 'sandwich' estimator . This is consistent even in the presence of heteroscedasticity no matter how severe, and it isn't contingent on the model. It is also potentially less hassle. I demonstrate a simple version of weighted least squares and the use of the sandwich SEs in my answer here: Alternatives to one-way ANOVA for heteroscedastic data .
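As a rough sketch of the strategy described above for a continuous predictor (the simulated data and the particular spread model below are one reasonable choice, not the definitive recipe):

# Illustrative sketch of estimating weights from an initial OLS fit
set.seed(123)
n <- 200
x <- runif(n, 1, 10)
y <- 2 + 3 * x + rnorm(n, sd = 0.5 * x)      # residual spread grows with x
dat <- data.frame(x = x, y = y)

ols <- lm(y ~ x, data = dat)

# Model the spread: regress absolute OLS residuals on x, then use the
# fitted values to build weights = 1 / (variance proxy)
spread <- lm(abs(resid(ols)) ~ x, data = dat)
w <- 1 / fitted(spread)^2

wls <- lm(y ~ x, data = dat, weights = w)

# Compare coefficients; if they are close, one round is usually enough
cbind(OLS = coef(ols), WLS = coef(wls))

# The sandwich (heteroscedasticity-consistent) SEs mentioned above are an
# alternative check (needs the 'sandwich' and 'lmtest' packages):
# library(sandwich); library(lmtest)
# coeftest(ols, vcov = vcovHC(ols, type = "HC3"))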
{ "source": [ "https://stats.stackexchange.com/questions/97832", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/45599/" ] }
97,845
I'm trying to fit a spline for a GLM using R. Once I fit the spline, I want to be able to take my resulting model and create a modeling file in an Excel workbook. For example, let's say I have a data set where y is a random function of x and the slope changes abruptly at a specific point (in this case @ x=500). set.seed(1066) x<- 1:1000 y<- rep(0,1000) y[1:500]<- pmax(x[1:500]+(runif(500)-.5)*67*500/pmax(x[1:500],100),0.01) y[501:1000]<-500+x[501:1000]^1.05*(runif(500)-.5)/7.5 df<-as.data.frame(cbind(x,y)) plot(df) I now fit this using library(splines) spline1 <- glm(y~ns(x,knots=c(500)),data=df,family=Gamma(link="log")) and my results show summary(spline1) Call: glm(formula = y ~ ns(x, knots = c(500)), family = Gamma(link = "log"), data = df) Deviance Residuals: Min 1Q Median 3Q Max -4.0849 -0.1124 -0.0111 0.0988 1.1346 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 4.17460 0.02994 139.43 <2e-16 *** ns(x, knots = c(500))1 3.83042 0.06700 57.17 <2e-16 *** ns(x, knots = c(500))2 0.71388 0.03644 19.59 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for Gamma family taken to be 0.1108924) Null deviance: 916.12 on 999 degrees of freedom Residual deviance: 621.29 on 997 degrees of freedom AIC: 13423 Number of Fisher Scoring iterations: 9 At this point, I can use predict function within r and get perfectly acceptable answers. The problem is that I want to use the model results to build a workbook in Excel. My understanding of the predict function is that given a new "x" value, r plugs that new x into the appropriate spline function (either the function for values above 500 or the one for values below 500), then it takes that result and multiplies it by the appropriate coefficient and from that point treats it like any other model term. How do I get these spline functions? (Note: I realize that a log-linked gamma GLM may not be appropriate for the data set provided. I am not asking about how or when to fit GLMs. I am providing that set as an example for reproducibility purposes.)
You can reverse-engineer the spline formulae without having to go into the R code. It suffices to know that A spline is a piecewise polynomial function. Polynomials of degree $d$ are determined by their values at $d+1$ points. The coefficients of a polynomial can be obtained via linear regression. Thus, you only have to create $d+1$ points spaced between each pair of successive knots (including the implicit endpoints of the data range), predict the spline values, and regress the prediction against the powers of $x$ up to $x^d$. There will be a separate formula for each spline basis element within each such knot "bin." For instance, in the example below there are three internal knots (for four knot bins) and cubic splines ($d=3$) were used, resulting in $4\times 4=16$ cubic polynomials, each with $d+1=4$ coefficients. Because relatively high powers of $x$ are involved, it is imperative to preserve all the precision in the coefficients. As you might imagine, the full formula for any spline basis element can get pretty long! As I mentioned quite a while ago , being able to use the output of one program as the input of another (without manual intervention, which can introduce irreproducible errors) is a useful statistical communication skill. This question provides a nice example of how that principle applies: instead of copying those $64$ sixteen-digit coefficients manually, we can hack together a way to convert the splines computed by R into formulas that Excel can understand. All we need do is extract the spline coefficients from R as described above, have it reformat them into Excel-like formulas, and copy and paste those into Excel. This method will work with any statistical software, even undocumented proprietary software whose source code is unavailable. Here is an example taken from the question, but modified to have knots at three internal points ($200, 500, 800$) as well as at the endpoints $(1, 1000)$. The plots show R 's version followed by Excel's rendering. Very little customization was performed in either environment (apart from specifying colors in R to match Excel's default colors approximately). (The vertical gray gridlines in the R version show where the internal knots are.) Here is the full R code. It's an unsophisticated hack, relying entirely on the paste function to accomplish the string manipulation. (A better way would be to create a formula template and fill it in using string matching and substitution commands.) # # Create and display a spline basis. # x <- 1:1000 n <- ns(x, knots=c(200, 500, 800)) colors <- c("Orange", "Gray", "tomato2", "deepskyblue3") plot(range(x), range(n), type="n", main="R Version", xlab="x", ylab="Spline value") for (k in attr(n, "knots")) abline(v=k, col="Gray", lty=2) for (j in 1:ncol(n)) { lines(x, n[,j], col=colors[j], lwd=2) } # # Export this basis in Excel-readable format. 
# ns.formula <- function(n, ref="A1") { ref.p <- paste("I(", ref, sep="") knots <- sort(c(attr(n, "Boundary.knots"), attr(n, "knots"))) d <- attr(n, "degree") f <- sapply(2:length(knots), function(i) { s.pre <- paste("IF(AND(", knots[i-1], "<=", ref, ", ", ref, "<", knots[i], "), ", sep="") x <- seq(knots[i-1], knots[i], length.out=d+1) y <- predict(n, x) apply(y, 2, function(z) { s.f <- paste("z ~ x+", paste("I(x", 2:d, sep="^", collapse=")+"), ")", sep="") f <- as.formula(s.f) b.hat <- coef(lm(f)) s <- paste(c(b.hat[1], sapply(1:d, function(j) paste(b.hat[j+1], "*", ref, "^", j, sep=""))), collapse=" + ") paste(s.pre, s, ", 0)", sep="") }) }) apply(f, 1, function(s) paste(s, collapse=" + ")) } ns.formula(n) # Each line of this output is one basis formula: paste into Excel The first spline output formula (out of the four produced here) is "IF(AND(1<=A1, A1<200), -1.26037447288906e-08 + 3.78112341937071e-08*A1^1 + -3.78112341940948e-08*A1^2 + 1.26037447313669e-08*A1^3, 0) + IF(AND(200<=A1, A1<500), 0.278894459758071 + -0.00418337927419299*A1^1 + 2.08792741929417e-05*A1^2 + -2.22580643138594e-08*A1^3, 0) + IF(AND(500<=A1, A1<800), -5.28222778473101 + 0.0291833541927414*A1^1 + -4.58541927409268e-05*A1^2 + 2.22309136420529e-08*A1^3, 0) + IF(AND(800<=A1, A1<1000), 12.500000000002 + -0.0375000000000067*A1^1 + 3.75000000000076e-05*A1^2 + -1.25000000000028e-08*A1^3, 0)" For this to work in Excel, all you need do is remove the surrounding quotation marks and prefix it with an "=" sign. (With a bit more effort you could have R write a file which, when imported by Excel, contains copies of these formulas in all the right places.) Paste it into a formula box and then drag that cell around until "A1" references the first $x$ value where the spline is to be computed. Copy and paste (or drag and drop) that cell to compute values for other cells. I filled cells B2:E:102 with these formulas, referencing $x$ values in cells A2:A102.
{ "source": [ "https://stats.stackexchange.com/questions/97845", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/45603/" ] }
98,953
What are theoretical reasons to not handle missing values? Gradient boosting machines, regression trees handle missing values. Why doesn't Random Forest do that?
Gradient boosting uses CART trees (in the standard setup, as proposed by its authors). CART trees are also used in Random Forests. What @user777 said is true: RF trees handle missing values either by imputation with the average, by a rough average/mode, or by an average/mode based on proximities. These methods were proposed by Breiman and Cutler and are used for RF. This is a reference from the authors: Missing values in training set . However, one can build a GBM or RF with other types of decision trees. The usual replacement for CART is C4.5, proposed by Quinlan. In C4.5 the missing values are not replaced in the data set. Instead, the impurity function takes the missing values into account by penalizing the impurity score with the ratio of missing values. At test time, when evaluation reaches a node whose test involves a missing value, a prediction is built for each child node and these are aggregated later (by weighting). Now, in many implementations C4.5 is used instead of CART. The main reason is to avoid expensive computation (CART has more rigorous statistical approaches, which require more computation); the results seem to be similar, and the resulting trees are often smaller (since CART is binary and C4.5 is not). I know that Weka uses this approach. I do not know about other libraries, but I expect it is not a singular situation. If that is the case with your GBM implementation, then this would be an answer.
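For completeness, the two Breiman-Cutler imputation options mentioned above are exposed directly in R's randomForest package; the iris-based example below is purely illustrative.

# Illustrative use of the two imputation options in the randomForest package
library(randomForest)
set.seed(1)
iris.na <- iris
iris.na[sample(150, 20), "Sepal.Width"] <- NA   # poke some holes in one predictor

# (a) rough fix: impute with column medians / modes before fitting
rf.rough <- randomForest(Species ~ ., data = na.roughfix(iris.na))

# (b) proximity-based imputation, iterated a few times, then fit
iris.imp <- rfImpute(Species ~ ., iris.na)
rf.prox  <- randomForest(Species ~ ., data = iris.imp)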
{ "source": [ "https://stats.stackexchange.com/questions/98953", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/38286/" ] }
98,971
After centering, the two measurements $x$ and $-x$ can be assumed to be independent observations from a Cauchy distribution with probability density function: $$f(x;\theta) = \frac{1}{\pi\,(1+(x-\theta)^2)}, \qquad -\infty < x < \infty$$ Show that if $x^2\leq 1$ the MLE of $\theta$ is $0$, but if $x^2>1$ there are two MLEs of $\theta$, equal to $\pm\sqrt{x^2-1}$. I think to find the MLE I have to differentiate the log likelihood: $$\frac{dl}{d\theta}=\sum \frac{2(x_i-\theta)}{1+(x_i-\theta)^2} = \frac{2(-x-\theta)}{1+(-x-\theta)^2} + \frac{2(x-\theta)}{1+(x-\theta)^2} = 0$$ So, $$\frac{2(x-\theta)}{1+(x-\theta)^2} = \frac{2(x+\theta)}{1+(x-\theta)^2}$$ which I then simplified down to $5x^2 = 3\theta^2+2\theta x+3$. Now I've hit a wall. I've probably gone wrong at some point, but either way I'm not sure how to answer the question. Can anyone help?
There is a math typo in your calculations. The first order condition for a maximum is: \begin{align} \frac {\partial L}{\partial \theta}= 0 &\Rightarrow \frac {2(x+\theta)}{ 1+(x+\theta)^2} - \frac{2(x-\theta)}{ 1+(x-\theta)^2}&=0 \\[5pt] &\Rightarrow (x+\theta)+(x+\theta)(x-\theta)^2 - (x-\theta)-(x-\theta)(x+\theta)^2&=0 \\[3pt] &\Rightarrow 2\theta +(x+\theta)(x-\theta)\left[(x-\theta)-(x+\theta)\right]&=0 \\[3pt] &\Rightarrow2\theta -2\theta(x+\theta)(x-\theta) =0\Rightarrow 2\theta -2\theta(x^2-\theta^2)&=0 \\[3pt] &\Rightarrow2\theta(1-x^2+\theta^2)=0 \Rightarrow 2\theta\big(\theta^2+(1-x^2)\big)&=0 \end{align} If $x^2\leq 1$ then the term in parentheses cannot be zero (for real solutions of course), so you are left only with the solution $\hat \theta =0$. If $x^2 >1$ you have $2\theta\big[\theta^2-(x^2-1)\big]=0$ so, apart from the candidate point $\theta =0$, you also get $$\frac {\partial L}{\partial \theta}= 0,\;\; \text{for}\;\;\hat \theta = \pm\sqrt {x^2-1}$$ You also have to justify why in this case $\hat \theta =0$ is no longer an MLE. ADDENDUM: For $x =\pm 0.5$ the graph of the log-likelihood shows a single maximum at $\theta = 0$, while for $x =\pm 1.5$ it shows two maxima at $\pm\sqrt{x^2-1}$ (figures omitted here). Now all you have to do is to prove it algebraically and then wonder "fine, now which of the two should I choose?"
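The two log-likelihood graphs in the addendum are easy to reproduce; here is a small R sketch using the same values $x = \pm 0.5$ and $x = \pm 1.5$ (only the plotting choices are mine).

# Log-likelihood of theta for the two observations -x and x (up to a constant)
loglik <- function(theta, x) {
  -log(1 + (-x - theta)^2) - log(1 + (x - theta)^2)
}
theta <- seq(-3, 3, length.out = 601)

op <- par(mfrow = c(1, 2))
plot(theta, loglik(theta, x = 0.5), type = "l", ylab = "log-likelihood",
     main = "x = 0.5 (x^2 <= 1): single maximum at 0")
abline(v = 0, lty = 2)
plot(theta, loglik(theta, x = 1.5), type = "l", ylab = "log-likelihood",
     main = "x = 1.5 (x^2 > 1): two maxima")
abline(v = c(-1, 1) * sqrt(1.5^2 - 1), lty = 2)
par(op)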
{ "source": [ "https://stats.stackexchange.com/questions/98971", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/40378/" ] }
99,162
I was using one class SVM , implemented in scikit-learn, for my research work. But I have no good understanding of this. Can anyone please give a simple, good explanation of one class SVM ?
The problem addressed by One Class SVM, as the documentation says, is novelty detection . The original paper describing how to use SVMs for this task is " Support Vector Method for Novelty Detection ". The idea of novelty detection is to detect rare events, i.e. events that happen rarely, and hence, of which you have very few samples. The problem then is that the usual way of training a classifier will not work. So how do you decide what a novel pattern is? Many approaches are based on estimating the probability density of the data. Novelty corresponds to those samples where the probability density is "very low". How low depends on the application. Now, SVMs are max-margin methods, i.e. they do not model a probability distribution. Here the idea is to find a function that is positive for regions with high density of points, and negative for small densities. The gritty details are given in the paper. ;) If you really intend to go through the paper, make sure that you first understand the setting of the basic SVM algorithm for classification. It will make it much easier to understand the bounds and the motivation of the algorithm.
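The question used scikit-learn, but the same one-class formulation is available in R through the e1071 package (a libsvm wrapper); the toy data below are invented just to show the mechanics, and the choice of nu is arbitrary.

# Toy illustration of a one-class SVM as a novelty detector
library(e1071)
set.seed(1)
train <- matrix(rnorm(200 * 2), ncol = 2)                  # "normal" data only
test  <- rbind(matrix(rnorm(20 * 2), ncol = 2),            # more normal points
               matrix(rnorm(10 * 2, mean = 4), ncol = 2))  # obvious novelties

oc <- svm(train, y = NULL, type = "one-classification",
          kernel = "radial", nu = 0.05)

pred <- predict(oc, test)   # TRUE = looks like the training data, FALSE = novelty
table(pred, truth = rep(c("normal", "novel"), c(20, 10)))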
{ "source": [ "https://stats.stackexchange.com/questions/99162", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/40856/" ] }
99,171
I read that 'Euclidean distance is not a good distance in high dimensions'. I guess this statement has something to do with the curse of dimensionality, but what exactly? Besides, what is 'high dimensions'? I have been applying hierarchical clustering using Euclidean distance with 100 features. Up to how many features is it 'safe' to use this metric?
A great summary of non-intuitive results in higher dimensions comes from " A Few Useful Things to Know about Machine Learning " by Pedro Domingos at the University of Washington: [O]ur intuitions, which come from a three-dimensional world, often do not apply in high-dimensional ones. In high dimensions, most of the mass of a multivariate Gaussian distribution is not near the mean, but in an increasingly distant “shell” around it; and most of the volume of a high-dimensional orange is in the skin, not the pulp. If a constant number of examples is distributed uniformly in a high-dimensional hypercube, beyond some dimensionality most examples are closer to a face of the hypercube than to their nearest neighbor. And if we approximate a hypersphere by inscribing it in a hypercube, in high dimensions almost all the volume of the hypercube is outside the hypersphere. This is bad news for machine learning, where shapes of one type are often approximated by shapes of another. The article is also full of many additional pearls of wisdom for machine learning. Another application, beyond machine learning, is nearest neighbor search: given an observation of interest, find its nearest neighbors (in the sense that these are the points with the smallest distance from the query point). But in high dimensions, a curious phenomenon arises: the ratio between the nearest and farthest points approaches 1, i.e. the points essentially become uniformly distant from each other. This phenomenon can be observed for wide variety of distance metrics, but it is more pronounced for the Euclidean metric than, say, Manhattan distance metric. The premise of nearest neighbor search is that "closer" points are more relevant than "farther" points, but if all points are essentially uniformly distant from each other, the distinction is meaningless. From Charu C. Aggarwal, Alexander Hinneburg, Daniel A. Keim, " On the Surprising Behavior of Distance Metrics in High Dimensional Space ": It has been argued in [Kevin Beyer, Jonathan Goldstein, Raghu Ramakrishnan, Uri Shaft, " When Is 'Nearest Neighbor' Meaningful? "] that under certain reasonable assumptions on the data distribution, the ratio of the distances of the nearest and farthest neighbors to a given target in high dimensional space is almost 1 for a wide variety of data distributions and distance functions. In such a case, the nearest neighbor problem becomes ill defined, since the contrast between the distances to diferent data points does not exist. In such cases, even the concept of proximity may not be meaningful from a qualitative perspective: a problem which is even more fundamental than the performance degradation of high dimensional algorithms. ... Many high-dimensional indexing structures and algorithms use the [E]uclidean distance metric as a natural extension of its traditional use in two- or three-dimensional spatial applications. ... In this paper we provide some surprising theoretical and experimental results in analyzing the dependency of the $L_k$ norm on the value of $k$ . More specifically, we show that the relative contrasts of the distances to a query point depend heavily on the $L_k$ metric used. This provides considerable evidence that the meaningfulness of the $L_k$ norm worsens faster within increasing dimensionality for higher values of $k$ . Thus, for a given problem with a fixed (high) value for the dimensionality $d$ , it may be preferable to use lower values of $k$ . 
This means that the $L_1$ distance metric (Manhattan distance metric) is the most preferable for high dimensional applications, followed by the Euclidean metric ( $L_2$ ). ... The authors of the "Surprising Behavior" paper then propose using $L_k$ norms with $k<1$ . They produce some results which demonstrate that these "fractional norms" exhibit the property of increasing the contrast between farthest and nearest points. However, later research has concluded against fractional norms. See: " Fractional norms and quasinorms do not help to overcome the curse of dimensionality " by Mirkes, Allohibi, & Gorban (2020). (Thanks to michen00 for the comment and helpful citation.)
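The concentration effect itself is easy to see in a small simulation (the sample size and dimensions below are arbitrary): the ratio between the farthest and nearest Euclidean distances from a query point shrinks towards 1 as the dimension grows.

# Ratio of farthest to nearest Euclidean distance from a random query point,
# for uniformly distributed data, as the dimension grows
set.seed(1)
contrast <- sapply(c(2, 10, 100, 1000), function(d) {
  X <- matrix(runif(100 * d), ncol = d)    # 100 points in d dimensions
  q <- runif(d)                            # a query point
  dist.to.q <- sqrt(colSums((t(X) - q)^2))
  max(dist.to.q) / min(dist.to.q)
})
names(contrast) <- c(2, 10, 100, 1000)
contrast    # approaches 1 as d increases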
{ "source": [ "https://stats.stackexchange.com/questions/99171", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/31975/" ] }
99,738
I can train a logistic regression in R using glm(y ~ x, family=binomial(logit))) but, IIUC, this optimizes for log likelihood. Is there a way to train the model using the linear ($L_1$) loss function (which in this case is the same as the total variation distance )? I.e., given a numeric vector $x$ and a bit (logical) vector $y$, I want to construct a monotonic (in fact, increasing) function $f$ such that $\sum |f(x)-y|$ is minimized. See also How do I train a logistic regression in R using L1 loss function?
What you want to do does not exist because it is, for lack of a better word, mathematically flawed. But first, I will stress why I think the premises of your question are sound. I will then try to explain why I think the conclusions you draw from them rest on a misunderstanding of the logistic model and, finally, I will suggest an alternative approach. I will denote $\{(\pmb x_i,y_i)\}_{i=1}^n$ your $n$ observations (the bolder letters denote vectors) which lie in $p$ dimensional space (the first entry of $\pmb x_i$ is 1) with $p<n$, $y_i\in \{0,1\}$ and $f(\pmb x_i)= f(\pmb x_i'\pmb\beta)$ is a monotone function of $\pmb x_i'\pmb\beta$, say like the logistic curve to fix ideas. For expediency, I will just assume that $n$ is sufficiently large compared to $p$. You are correct that if you intend to use TVD as the criterion to evaluate the fitted model, then it is reasonable to expect your fit to optimize that same criterion among all possible candidates, on your data. Hence $$\pmb\beta^*=\underset{\pmb\beta\in\mathbb{R}^{p}}{\arg\min}\;\;\;\;\;||\pmb y-f(\pmb x_i'\pmb\beta)||_1$$ The problem is the error term : $\epsilon_i=y_i-f(\pmb x_i'\pmb\beta)$ and if we enforce $E(\pmb\epsilon)=0$ (we simply want our model to be asymptotically unbiased ), then $\epsilon_i$ must be heteroskedastic . This is because $y_i$ can take on only two values, 0 and 1. Therefore, given $\pmb x_i$ , $\epsilon_i$ can also only take on two values: $1-f(\pmb x_i'\pmb\beta)$ when $y_i=1$ , which occurs with probability $f(\pmb x_i'\pmb\beta)$ , and $-f(\pmb x_i'\pmb\beta)$ when $y_i=0$ , which occurs with probability $1-f(\pmb x_i'\pmb\beta)$ . These considerations together imply that: $$\text{var}(\pmb\epsilon)=E(\pmb\epsilon^2)=(1-f(\pmb x'\pmb\beta))^2f(\pmb x'\pmb\beta)+(-f(\pmb x'\pmb\beta))^2(1-f(\pmb x'\pmb\beta))\\ =(1-f(\pmb x'\pmb\beta))f(\pmb x'\pmb\beta)\\ =E(\pmb y|\pmb x)E(1-\pmb y|\pmb x)$$ hence $\text{var}(\pmb\epsilon)$ is not constant but concave parabola shaped and is maximized when $\pmb x$ is such that $E(y|\pmb x)\approx .5$ . This inherent heteroskedasticity of the residuals has consequences . It implies among other things that when minimizing the $l_1$ loss function, you are asymptotically over-weighting part of your sample. That is, the fitted $\pmb\beta^*$ don't fit the data at all but only the portion of it that is clustered around places where $\pmb x$ is such that $E(\pmb y|\pmb x)\approx .5$ . To wit, these are the least informative data points in your sample : they correspond to those observations for which the noise component is the largest. Hence, your fit is pulled towards $\pmb\beta^*=\pmb\beta:f(\pmb x'\pmb\beta)\approx .5$ , i.e. made irrelevant. One solution, as is clear from the exposition above, is to drop the requirement of unbiasedness. A popular way to bias the estimator (with some Bayesian interpretation attached) is by including a shrinkage term.
If we re-scale the response: $$y^+_i=2(y_i-.5),1\leq i\leq n$$ and, for computational expediency, replace $f(\pmb x'\pmb\beta)$ by another monotone function $g(\pmb x,[c,\pmb\gamma])=\pmb x'[c,\pmb\gamma]$ --it will be convenient for the sequel to denote the first component of the vector of parameter as $c$ and the remaining $p-1$ ones $\pmb\gamma$ -- and include a shrinkage term (for example one of the form $||\pmb\gamma||_2$ ), the resulting optimization problem becomes: $$[c^*,\pmb\gamma^{*}]=\underset{\pmb[c,\pmb\gamma]\in\mathbb{R}^{p}}{\arg\min}\;\;\sum_{i=1}^n\max(0,1-y_i^+\pmb x_i'\pmb[c,\pmb\gamma])+\frac{1}{2}||\pmb\gamma||_2$$ Note that in this new (also convex) optimization problem, the penalty for a correctly classified observations is 0 and it grows linearly with $\pmb x'\pmb[c,\gamma]$ for a miss-classified one --as in the $l_1$ loss. The $[c^*,\pmb\gamma^*]$ solution to this second optimization problem are the celebrated linear svm (with perfect separation) coefficients. As opposed to the $\pmb\beta^*$ , it makes sense to learn these $[c^*,\pmb\gamma^{*}]$ from the data with an TVD-type penalty ('type' because of the bias term). Consequently, this solution is widely implemented. See for example the R package LiblineaR .
{ "source": [ "https://stats.stackexchange.com/questions/99738", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13538/" ] }
99,829
I have a bunch of raw data values that are dollar amounts and I want to find a confidence interval for a percentile of that data. Is there a formula for such a confidence interval?
This question, which covers a common situation, deserves a simple, non-approximate answer. Fortunately, there is one. Suppose $X_1, \ldots, X_n$ are independent values from an unknown distribution $F$ whose $q^\text{th}$ quantile I will write $F^{-1}(q)$. This means each $X_i$ has a chance of (at least) $q$ of being less than or equal to $F^{-1}(q)$. Consequently the number of $X_i$ less than or equal to $F^{-1}(q)$ has a Binomial$(n,q)$ distribution. Motivated by this simple consideration, Gerald Hahn and William Meeker in their handbook Statistical Intervals (Wiley 1991) write A two-sided distribution-free conservative $100(1-\alpha)\%$ confidence interval for $F^{-1}(q)$ is obtained ... as $[X_{(l)}, X_{(u)}]$ where $X_{(1)}\le X_{(2)}\le \cdots \le X_{(n)}$ are the order statistics of the sample. They proceed to say One can choose integers $0 \le l \le u \le n$ symmetrically (or nearly symmetrically) around $q(n+1)$ and as close together as possible subject to the requirements that $$B(u-1;n,q) - B(l-1;n,q) \ge 1-\alpha.\tag{1}$$ The expression at the left is the chance that a Binomial$(n,q)$ variable has one of the values $\{l, l+1, \ldots, u-1\}$. Evidently, this is the chance that the number of data values $X_i$ falling within the lower $100q\%$ of the distribution is neither too small (less than $l$) nor too large ($u$ or greater). Hahn and Meeker follow with some useful remarks, which I will quote. The preceding interval is conservative because the actual confidence level, given by the left-hand side of Equation $(1)$, is greater than the specified value $1-\alpha$. ... It is sometimes impossible to construct a distribution-free statistical interval that has at least the desired confidence level. This problem is particularly acute when estimating percentiles in the tail of a distribution from a small sample. ... In some cases, the analyst can cope with this problem by choosing $l$ and $u$ nonsymmetrically. Another alternative may be to use a reduced confidence level. Let's work through an example (also provided by Hahn & Meeker). They supply an ordered set of $n=100$ "measurements of a compound from a chemical process" and ask for a $100(1-\alpha)=95\%$ confidence interval for the $q=0.90$ percentile. They claim $l=85$ and $u=97$ will work. The total probability of this interval, as shown by the blue bars in the figure, is $95.3\%$: that's as close as one can get to $95\%$, yet still be above it, by choosing two cutoffs and eliminating all chances in the left tail and the right tail that are beyond those cutoffs. Here are the data, shown in order, leaving out $81$ of the values from the middle: $$\matrix{ 1.49&1.66&2.05&\ldots&\mathbf {24.33}&24.72&25.46&25.67&25.77&26.64\\ 28.28&28.28&29.07&29.16&31.14&31.83&\mathbf{33.24}&37.32&53.43&58.11}$$ The $85^\text{th}$ largest is $24.33$ and the $97^\text{th}$ largest is $33.24$. The interval therefore is $[24.33, 33.24]$. Let's re-interpret that. This procedure was supposed to have at least a $95\%$ chance of covering the $90^\text{th}$ percentile. If that percentile actually exceeds $33.24$, that means we will have observed $97$ or more out of $100$ values in our sample that are below the $90^\text{th}$ percentile. That's too many. If that percentile is less than $24.33$, that means we will have observed $84$ or fewer values in our sample that are below the $90^\text{th}$ percentile. That's too few. 
In either case--exactly as indicated by the red bars in the figure--it would be evidence against the $90^\text{th}$ percentile lying within this interval. One way to find good choices of $l$ and $u$ is to search according to your needs. Here is a method that starts with a symmetric approximate interval and then searches by varying both $l$ and $u$ by up to $2$ in order to find an interval with good coverage (if possible). It is illustrated with R code. It is set up to check the coverage in the preceding example for a Normal distribution. Its output is Simulation mean coverage was 0.9503; expected coverage is 0.9523 The agreement between simulation and expectation is excellent. # # Near-symmetric distribution-free confidence interval for a quantile `q`. # Returns indexes into the order statistics. # quantile.CI <- function(n, q, alpha=0.05) { # # Search over a small range of upper and lower order statistics for the # closest coverage to 1-alpha (but not less than it, if possible). # u <- qbinom(1-alpha/2, n, q) + (-2:2) + 1 l <- qbinom(alpha/2, n, q) + (-2:2) u[u > n] <- Inf l[l < 0] <- -Inf coverage <- outer(l, u, function(a,b) pbinom(b-1,n,q) - pbinom(a-1,n,q)) if (max(coverage) < 1-alpha) i <- which(coverage==max(coverage)) else i <- which(coverage == min(coverage[coverage >= 1-alpha])) i <- i[1] # # Return the order statistics and the actual coverage. # u <- rep(u, each=5)[i] l <- rep(l, 5)[i] return(list(Interval=c(l,u), Coverage=coverage[i])) } # # Example: test coverage via simulation. # n <- 100 # Sample size q <- 0.90 # Percentile # # You only have to compute the order statistics once for any given (n,q). # lu <- quantile.CI(n, q)$Interval # # Generate many random samples from a known distribution and compute # CIs from those samples. # set.seed(17) n.sim <- 1e4 index <- function(x, i) ifelse(i==Inf, Inf, ifelse(i==-Inf, -Inf, x[i])) sim <- replicate(n.sim, index(sort(rnorm(n)), lu)) # # Compute the proportion of those intervals that cover the percentile. # F.q <- qnorm(q) covers <- sim[1, ] <= F.q & F.q <= sim[2, ] # # Report the result. # message("Simulation mean coverage was ", signif(mean(covers), 4), "; expected coverage is ", signif(quantile.CI(n,q)$Coverage, 4))
{ "source": [ "https://stats.stackexchange.com/questions/99829", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9495/" ] }
100,041
The formula for computing variance has $(n-1)$ in the denominator: $s^2 = \frac{\sum_{i=1}^N (x_i - \bar{x})^2}{n-1}$ I've always wondered why. However, reading and watching a few good videos about "why" it is, it seems, $(n-1)$ is a good unbiased estimator of the population variance. Whereas $n$ underestimates and $(n-2)$ overestimates the population variance. What I'm curious to know, is that in the era of no computers how exactly was this choice made? Is there an actual mathematical proof proving this or was this purely empirical and statisticians made A LOT of calculations by hand to come up with the "best explanation" at the time? Just how did statisticians come up with this formula in the early 19th century with the aid of computers? Manual or there is more to it than meets the eye?
The correction is called Bessel's correction and it has a mathematical proof. Personally, I was taught it the easy way: using $n-1$ is how you correct the bias of $E[\frac{1}{n}\sum_1^n(x_i - \bar x)^2]$ (see here ). You can also explain the correction based on the concept of degrees of freedom, simulation isn't strictly needed.
{ "source": [ "https://stats.stackexchange.com/questions/100041", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4426/" ] }
100,137
I would like to try the following algorithm in order to win in the roulette: Be an observer until there are 3 same parity numbers in a row ($0$ has no defined parity in this context) Once there were achieved 3 numbers with the same parity in a row: init $factor$ to be $1$ Bet the next number will be the opposite parity if you were wrong double $factor$ and return 3 else goto 1 . Here is a python code for it - which bring profit all the time: import random nums = range(37) bank = 10**10 games = 3000 factor=1 bet_amount=100 next_bet = None parity_in_row ={"odd":0, "even":0} for i in xrange(games): num = nums[random.randint(0,36)] if next_bet == "odd": if num % 2 == 1: bank += factor*bet_amount factor = 1 next_bet = None else: bank -= factor*bet_amount factor *= 2 elif next_bet == "even": if num % 2 == 0: bank += factor*bet_amount factor = 1 next_bet = None else: bank -= factor*bet_amount factor *= 2 if num > 0 and num % 2 == 0: parity_in_row["even"] += 1 parity_in_row["odd"] = 0 elif num % 2 == 1: parity_in_row["odd"] += 1 parity_in_row["even"] = 0 else: parity_in_row ={"odd":0, "even":0} if parity_in_row["odd"] > 2: next_bet = "even" elif parity_in_row["even"] > 2: next_bet = "odd" else: next_bet = None print bank If I did the calculation correctly the probability to have 4 numbers in a row with the same parity $ < (1/2-\epsilon)^4 $ [where $\epsilon < 1/36$ is a compensation for the probability to achieve $0$]. Keep doubling $factor$ ensures you will have positive expectation, right? Isn't the probability of getting odd number is $0.5$ ? since $Prob(odd | even, even, even) = \frac{1}{2}$ ? Please try to supply a rigorous proof why it is wrong, and try to explain why my python code always return with a positive outcomes
You have discovered the Martingale . It is a perfectly reasonable betting strategy, which was played by Casanova and also by Charles Wells , but unfortunately it doesn't win in the long run. Essentially the strategy is no different from the following idea: Keep betting. Each time you lose, bet more than your total losses up to now. Since you know that you'll eventually win, you will always end up on top. Your plan of only betting after getting three the same in a row is a red herring, because spins are independent. It makes no difference if there have already been three in a row. After all, how could it make a difference? If it did, this would imply that the roulette wheel magically possessed some kind of memory, which it doesn't. There are various explanations for why the strategy doesn't work, among which are: you don't have an infinite amount of money to bet with, the bank doesn't have an infinite amount of money, and you can't play for an infinite amount of time. It's the third of these that is the real problem. Since each bet in roulette has a negative expected value , no matter what you do, you expect to end up with a negative amount of money after any finite number of bets. You can only win if there is no upper limit to the amount of bets you can ever make. For your last question, the probability of getting an odd number on a real roulette wheel is actually less than $0.5$, because of a 00 on the wheel. But this doesn't change the fact that you have discovered a nice winning strategy; it's just that your strategy can't win (on average) in any finite amount of bets.
{ "source": [ "https://stats.stackexchange.com/questions/100137", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21743/" ] }
100,151
It is well known that researchers should spend time observing and exploring existing data and research before forming a hypothesis and then collecting data to test that hypothesis (referring to null-hypothesis significance testing). Many basic statistics books warn that hypotheses must be formed a priori and can not be changed after data collection otherwise the methodology becomes invalid. I understand that one reason why changing a hypothesis to fit observed data is problematic is because of the greater chance of committing a type I error due to spurious data, but my question is: is that the only reason or are there other fundamental problems with going on a fishing expedition? As a bonus question, are there ways to go on fishing expeditions without exposing oneself to the potential pitfalls? For example, if you have enough data, could you generate hypotheses from half of the data and then use the other half to test them? update I appreciate the interest in my question, but the answers and comments are mostly aimed at what I thought I established as background information. I'm interested to know if there are other reasons why it's bad beyond the higher possibility of spurious results and if there are ways, such as splitting data first, of changing a hypothesis post hoc but avoiding the increase in Type I errors. I've updated the title to hopefully reflect the thrust of my question. Thanks, and sorry for the confusion!
Certainly you can go on fishing expeditions, as long as you admit that it's a fishing expedition and treat it as such. A nicer name for such is "exploratory data analysis". A better analogy might be shooting at a target: You can shoot at a target and celebrate if you hit the bulls eye. You can shoot without a target in order to test the properties of your gun. But it's cheating to shoot at a wall and then paint a target around the bullet hole. One way to avoid some of the problems with this is to do the exploration in a training data set and then test it on a separate "test" data set.
{ "source": [ "https://stats.stackexchange.com/questions/100151", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/46134/" ] }
100,214
I am a little bit confused on what the assumptions of linear regression are. So far I checked whether: all of the explanatory variables correlated linearly with the response variable. (This was the case) there was any collinearity among the explanatory variables. (there was little collinearity). the Cook's distances of the datapoints of my model are below 1 (this is the case, all distances are below 0.4, so no influence points). the residuals are normally distributed. (this may not be the case) But I then read the following: violations of normality often arise either because (a) the distributions of the dependent and/or independent variables are themselves significantly non-normal, and/or (b) the linearity assumption is violated. Question 1 This makes it sound as if the independent and depend variables need to be normally distributed, but as far as I know this is not the case. My dependent variable as well as one of my independent variables are not normally distributed. Should they be? Question 2 My QQnormal plot of the residuals look like this: That slightly differs from a normal distribution and the shapiro.test also rejects the null hypothesis that the residuals are from a normal distribution: > shapiro.test(residuals(lmresult)) W = 0.9171, p-value = 3.618e-06 The residuals vs fitted values look like: What can I do if my residuals are not normally distributed? Does it mean the linear model is entirely useless?
First off, I would get yourself a copy of this classic and approachable article and read it: Anscombe FJ. (1973) Graphs in statistical analysis . The American Statistician . 27:17–21. On to your questions: Answer 1: Neither the dependent nor independent variable needs to be normally distributed. In fact they can have all kinds of loopy distributions. The normality assumption applies to the distribution of the errors ( $Y_{i} - \widehat{Y}_{i}$ ). Answer 2: You are actually asking about two separate assumptions of ordinary least squares (OLS) regression: One is the assumption of linearity . This means that the trend in $\overline{Y}$ across $X$ is expressed by a straight line (Right? Straight back to algebra: $y = a +bx$ , where $a$ is the $y$ -intercept, and $b$ is the slope of the line.) A violation of this assumption simply means that the relationship is not well described by a straight line (e.g., $\overline{Y}$ is a sinusoidal function of $X$ , or a quadratic function, or even a straight line that changes slope at some point). My own preferred two-step approach to address non-linearity is to (1) perform some kind of non-parametric smoothing regression to suggest specific nonlinear functional relationships between $Y$ and $X$ (e.g., using LOWESS , or GAM s, etc.), and (2) to specify a functional relationship using either a multiple regression that includes nonlinearities in $X$ , (e.g., $Y \sim X + X^{2}$ ), or a nonlinear least squares regression model that includes nonlinearities in parameters of $X$ (e.g., $Y \sim X + \max{(X-\theta,0)}$ , where $\theta$ represents the point where the regression line of $\overline{Y}$ on $X$ changes slope). Another is the assumption of normally distributed residuals. Sometimes one can validly get away with non-normal residuals in an OLS context; see for example, Lumley T, Emerson S. (2002) The Importance of the Normality Assumption in Large Public Health Data Sets . Annual Review of Public Health . 23:151–69. Sometimes, one cannot (again, see the Anscombe article). However, I would recommend thinking about the assumptions in OLS not so much as desired properties of your data, but rather as interesting points of departure for describing nature. After all, most of what we care about in the world is more interesting than $y$ -intercept and slope. Creatively violating OLS assumptions (with the appropriate methods) allows us to ask and answer more interesting questions.
{ "source": [ "https://stats.stackexchange.com/questions/100214", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/46234/" ] }
101,274
I am working with a small dataset (21 observations) and have the following normal QQ plot in R: Seeing that the plot does not support normality, what could I infer about the underlying distribution? It seems to me that a distribution more skewed to the right would be a better fit, is that right? Also, what other conclusions can we draw from the data?
If the values lie along a line the distribution has the same shape (up to location and scale) as the theoretical distribution we have supposed. Local behaviour : When looking at sorted sample values on the y-axis and (approximate) expected quantiles on the x-axis, we can identify from how the values in some section of the plot differ locally from an overall linear trend by seeing whether the values are more or less concentrated than the theoretical distribution would suppose in that section of a plot: As we see, less concentrated points increase more and more concentrated points increase less rapidly than an overall linear relation would suggest, and in the extreme cases correspond to a gap in the density of the sample (shows as a near-vertical jump) or a spike of constant values (values aligned horizontally). This allows us to spot a heavy tail or a light tail and hence, skewness greater or smaller than the theoretical distribution, and so on. Overall apppearance: Here's what QQ-plots look like (for particular choices of distribution) on average : But randomness tends to obscure things, especially with small samples: Note that at $n=21$ the results may be much more variable than shown there - I generated several such sets of six plots and chose a 'nice' set where you could kind of see the shape in all six plots at the same time. Sometimes straight relationships look curved, curved relationships look straight, heavy-tails just look skew, and so on - with such small samples, often the situation may be much less clear: It's possible to discern more features than those (such as discreteness, for one example), but with $n=21$ , even such basic features may be hard to spot; we shouldn't try to 'over-interpret' every little wiggle. As sample sizes become larger, generally speaking the plots 'stabilize' and the features become more clearly interpretable rather than representing noise. [With some very heavy-tailed distributions, the rare large outlier might prevent the picture stabilizing nicely even at quite large sample sizes.] You may also find the suggestion here useful when trying to decide how much you should worry about a particular amount of curvature or wiggliness. A more suitable guide for interpretation in general would also include displays at smaller and larger sample sizes.
{ "source": [ "https://stats.stackexchange.com/questions/101274", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/31420/" ] }
101,344
If in kernel PCA I choose a linear kernel $K(\mathbf{x},\mathbf{y}) = \mathbf x^\top \mathbf y$, is the result going to be different from the ordinary linear PCA ? Are the solutions fundamentally different or does some well defined relation exist?
Summary: kernel PCA with linear kernel is exactly equivalent to the standard PCA. Let $\mathbf{X}$ be the centered data matrix of $N \times D$ size with $D$ variables in columns and $N$ data points in rows. Then the $D \times D$ covariance matrix is given by $\mathbf{X}^\top\mathbf{X}/(n-1)$, its eigenvectors are principal axes and eigenvalues are PC variances. At the same time, one can consider the so called Gram matrix $\mathbf{X}\mathbf{X}^\top$ of the $N \times N$ size. It is easy to see that it has the same eigenvalues (i.e. PC variances) up to the $n-1$ factor, and its eigenvectors are principal components scaled to unit norm. This was standard PCA. Now, in kernel PCA we consider some function $\phi(x)$ that maps each data point to another vector space that usually has larger dimensionality $D_\mathrm{new}$, possibly even infinite. The idea of kernel PCA is to perform the standard PCA in this new space. Since the dimensionality of this new space is very large (or infinite), it is hard or impossible to compute a covariance matrix. However, we can apply the second approach to PCA outlined above. Indeed, Gram matrix will still be of the same manageable $N \times N$ size. Elements of this matrix are given by $\phi(\mathbf{x}_i)\phi(\mathbf{x}_j)$, which we will call kernel function $K(\mathbf{x}_i,\mathbf{x}_j)=\phi(\mathbf{x}_i)\phi(\mathbf{x}_j)$. This is what is known as the kernel trick : one actually does not ever need to compute $\phi()$, but only $K()$. Eigenvectors of this Gram matrix will be the principal components in the target space, the ones we are interested in. The answer to your question now becomes obvious. If $K(x,y)=\mathbf{x}^\top \mathbf{y}$, then the kernel Gram matrix reduces to $\mathbf{X} \mathbf{X}^\top$ which is equal to the standard Gram matrix, and hence the principal components will not change. A very readable reference is Scholkopf B, Smola A, and Müller KR, Kernel principal component analysis, 1999 , and note that e.g. in Figure 1 they explicitly refer to standard PCA as the one using dot product as a kernel function:
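A quick numerical check of this equivalence in base R (a sketch on simulated data, not part of the original derivation): the leading eigenvalues of the Gram matrix, divided by $N-1$, match the eigenvalues of the covariance matrix.
set.seed(1)
X <- scale(matrix(rnorm(200 * 5), 200, 5), center = TRUE, scale = FALSE)  # centered data
N <- nrow(X)
ev_cov  <- eigen(crossprod(X) / (N - 1))$values          # standard PCA: covariance eigenvalues
ev_gram <- eigen(tcrossprod(X))$values[1:5] / (N - 1)    # linear-kernel Gram matrix, rescaled
round(ev_gram - ev_cov, 10)                              # numerically identical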
{ "source": [ "https://stats.stackexchange.com/questions/101344", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/44086/" ] }
101,485
Normally in principal component analysis (PCA) the first few PCs are used and the low variance PCs are dropped, as they do not explain much of the variation in the data. However, are there examples where the low variation PCs are useful (i.e. have use in the context of the data, have an intuitive explanation, etc.) and should not be thrown away?
Here's a cool excerpt from Jolliffe (1982) that I didn't include in my previous answer to the very similar question, " Low variance components in PCA, are they really just noise? Is there any way to test for it? " I find it pretty intuitive. $\quad$Suppose that it is required to predict the height of the cloud-base, $H$, an important problem at airports. Various climatic variables are measured including surface temperature $T_s$, and surface dewpoint, $T_d$. Here, $T_d$ is the temperature at which the surface air would be saturated with water vapour, and the difference $T_s-T_d$, is a measure of surface humidity. Now $T_s,T_d$ are generally positively correlated, so a principal component analysis of the climatic variables will have a high-variance component which is highly correlated with $T_s+T_d$,and a low-variance component which is similarly correlated with $T_s-T_d$. But $H$ is related to humidity and hence to $T_s-T_d$, i.e. to a low-variance rather than a high-variance component, so a strategy which rejects low-variance components will give poor predictions for $H$. $\quad$The discussion of this example is necessarily vague because of the unknown effects of any other climatic variables which are also measured and included in the analysis. However, it shows a physically plausible case where a dependent variable will be related to a low-variance component, confirming the three empirical examples from the literature. $\quad$Furthermore, the cloud-base example has been tested on data from Cardiff (Wales) Airport for the period 1966–73 with one extra climatic variable, sea-surface temperature, also included. Results were essentially as predicted above. The last principal component was approximately $T_s-T_d$, and it accounted for only 0·4 per cent of the total variation. However, in a principal component regression it was easily the most important predictor for $H$ . [Emphasis added] The three examples from literature referred to in the last sentence of the second paragraph were the three I mentioned in my answer to the linked question . Reference Jolliffe, I. T. (1982). Note on the use of principal components in regression. Applied Statistics, 31 (3), 300–303. Retrieved from http://automatica.dei.unipd.it/public/Schenato/PSC/2010_2011/gruppo4-Building_termo_identification/IdentificazioneTermodinamica20072008/Biblio/Articoli/PCR%20vecchio%2082.pdf .
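The cloud-base example is easy to mimic with a small R simulation (all names and numbers below are made up purely for illustration): two strongly correlated predictors and a response that depends on their difference, i.e. on the low-variance component.
set.seed(2)
n  <- 200
Ts <- rnorm(n)                        # "surface temperature"
Td <- Ts + rnorm(n, sd = 0.2)         # "dewpoint": strongly correlated with Ts
H  <- (Ts - Td) + rnorm(n, sd = 0.1)  # response driven by the difference
pc <- prcomp(cbind(Ts, Td))
summary(pc)                           # PC2 accounts for a tiny share of the variance
summary(lm(H ~ pc$x))                 # yet PC2 is by far the stronger predictor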
{ "source": [ "https://stats.stackexchange.com/questions/101485", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/46897/" ] }
101,546
I have read that using R-squared for time series is not appropriate because in a time series context (I know that there are other contexts) R-squared is no longer unique. Why is this? I tried to look this up, but I did not find anything. Typically I do not place much value in R-squared (or Adjusted R-Squared) when I evaluate my models, but a lot of my colleagues (i.e. Business Majors) are absolutely in love with R-Squared and I want to be able to explain to them why R-Squared in not appropriate in the context of time series.
Some aspects of the issue: If somebody gives us a vector of numbers $\mathbf y$ and a conformable matrix of numbers $\mathbf X$, we do not need to know what is the relation between them to execute some estimation algebra, treating $y$ as the dependent variable. The algebra will result, irrespective of whether these numbers represent cross-sectional or time series or panel data, or of whether the matrix $\mathbf X$ contains lagged values of $y$ etc. The fundamental definition of the coefficient of determination $R^2$ is $$R^2 = 1 - \frac {SS_{res}}{SS_{tot}}$$ where $SS_{res}$ is the sum of squared residuals from some estimation procedure, and $SS_{tot}$ is the sum of squared deviations of the dependent variable from its sample mean. Combining, the $R^2$ will always be uniquely calculated, for a specific data sample, a specific formulation of the relation between the variables, and a specific estimation procedure, subject only to the condition that the estimation procedure is such that it provides point estimates of the unknown quantities involved (and hence point estimates of the dependent variable, and hence point estimates of the residuals). If any of these three aspects change, the arithmetic value of $R^2$ will in general change -but this holds for any type of data, not just time-series. So the issue with $R^2$ and time-series, is not whether it is "unique" or not (since most estimation procedures for time-series data provide point estimates). The issue is whether the "usual" time series specification framework is technically friendly for the $R^2$, and whether $R^2$ provides some useful information. The interpretation of $R^2$ as "proportion of dependent variable variance explained" depends critically on the residuals adding up to zero. In the context of linear regression (on whatever kind of data), and of Ordinary Least Squares estimation, this is guaranteed only if the specification includes a constant term in the regressor matrix (a "drift" in time-series terminology). In autoregressive time-series models, a drift is in many cases not included. More generally, when we are faced with time-series data, "automatically" we start thinking about how the time-series will evolve into the future. So we tend to evaluate a time-series model based more on how well it predicts future values , than how well it fits past values . But the $R^2$ mainly reflects the latter, not the former. The well-known fact that $R^2$ is non-decreasing in the number of regressors means that we can obtain a perfect fit by keeping adding regressors ( any regressors, i.e. any series' of numbers, perhaps totally unrelated conceptually to the dependent variable). Experience shows that a perfect fit obtained thus, will also give abysmal predictions outside the sample. Intuitively, this perhaps counter-intuitive trade-off happens because by capturing the whole variability of the dependent variable into an estimated equation, we turn unsystematic variability into systematic one, as regards prediction (here, "unsystematic" should be understood relative to our knowledge -from a purely deterministic philosophical point of view, there is no such thing as "unsystematic variability". But to the degree that our limited knowledge forces us to treat some variability as "unsystematic", then the attempt to nevertheless turn it into a systematic component, brings prediction disaster). 
In fact this is perhaps the most convincing way to show somebody why $R^2$ should not be the main diagnostic/evaluation tool when dealing with time series: increase the number of regressors up to a point where $R^2\approx 1$. Then take the estimated equation and try to predict the future values of the dependent variable.
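A small simulation along these lines, with made-up data (the AR(1) series and the 35 junk regressors are arbitrary choices): the in-sample $R^2$ is essentially 1, yet the out-of-sample forecasts are typically worse than simply using the training-sample mean.
set.seed(3)
n_total <- 50; n_train <- 40
y    <- as.numeric(arima.sim(list(ar = 0.5), n_total))   # an AR(1) series
junk <- matrix(rnorm(n_total * 35), ncol = 35)           # 35 unrelated regressors
fit  <- lm(y[1:n_train] ~ junk[1:n_train, ])
summary(fit)$r.squared                                   # very close to 1
pred <- cbind(1, junk[-(1:n_train), ]) %*% coef(fit)
mean((y[-(1:n_train)] - pred)^2)                         # out-of-sample squared error: large
mean((y[-(1:n_train)] - mean(y[1:n_train]))^2)           # naive "sample mean" forecast usually does better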
{ "source": [ "https://stats.stackexchange.com/questions/101546", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/46705/" ] }
101,560
The tanh activation function is: $$tanh \left( x \right) = 2 \cdot \sigma \left( 2 x \right) - 1$$ Where $\sigma(x)$, the sigmoid function, is defined as: $$\sigma(x) = \frac{e^x}{1 + e^x}$$. Questions: Does it really matter between using those two activation functions (tanh vs. sigma)? Which function is better in which cases?
Yes, it matters, essentially for optimization reasons. It is worth reading Efficient Backprop by LeCun et al. There are two reasons for that choice (assuming you have normalized your data, and this is very important): (1) Having stronger gradients: since the data are centered around 0, the derivatives are larger. To see this, calculate the derivative of the tanh function: its range (output values) is (0, 1], whereas the derivative of the sigmoid has range (0, 0.25]. Also, the range of the tanh function itself is [-1, 1] while that of the sigmoid function is [0, 1]. (2) Avoiding bias in the gradients. This is explained very well in the paper, and it is worth reading it to understand these issues.
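As a quick check of the "stronger gradients" point (this snippet is mine, not from the paper):
sigma   <- function(x) 1 / (1 + exp(-x))
x       <- seq(-4, 4, length.out = 401)
d_sigma <- sigma(x) * (1 - sigma(x))   # derivative of the sigmoid, maximum 0.25
d_tanh  <- 1 - tanh(x)^2               # derivative of tanh, maximum 1
c(max(d_sigma), max(d_tanh))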
{ "source": [ "https://stats.stackexchange.com/questions/101560", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/46194/" ] }
101,590
As we all know, if you flip a coin that has an equal chance of landing heads as it does tails, then if you flip the coin many times, half the time you will get heads and half the time you will get tails. When discussing this with a friend, they said that if you were to flip the coin 1000 times, and lets say the first 100 times it landed heads, then the chances of landing a tail was increased (the logic being that if it is unbiased, then by the time you have flipped it 1000 times you will have roughly 500 heads and 500 tails, so tails must be more likely). I know that to be a fallacy, as past results don't influence future results. Is there a name for that particular fallacy? Also, is there a better explanation of why this is fallacious?
It's called the Gambler's fallacy .
{ "source": [ "https://stats.stackexchange.com/questions/101590", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/46957/" ] }
102,778
I would like to find the correlation between a continuous (dependent variable) and a categorical (nominal: gender, independent variable) variable. Continuous data is not normally distributed. Before, I had computed it using the Spearman's $\rho$. However, I have been told that it is not right. While searching on the internet, I found that the boxplot can provide an idea about how much they are associated; however, I was looking for a quantified value such as Pearson's product moment coefficient or Spearman's $\rho$. Can you please help me on how to do this? Or, inform on which method would be appropriate? Would Point Biserial Coefficient be the right option?
The reviewer should have told you why the Spearman $\rho$ is not appropriate. Here is one version of that: Let the data be $(Z_i, I_i)$ where $Z$ is the measured variable and $I$ is the gender indicator, say it is 0 (man), 1 (woman). Then Spearman's $\rho$ is calculated based on the ranks of $Z, I$ respectively. Since there are only two possible values for the indicator $I$, there will be a lot of ties, so this formula is not appropriate. If you replace rank with mean rank, then you will get only two different values, one for men, another for women. Then $\rho$ will become basically some rescaled version of the mean ranks between the two groups. It would be simpler (more interpretable) to simply compare the means! Another approach is the following. Let $X_1, \dots, X_n$ be the observations of the continuous variable among men, $Y_1, \dots, Y_m$ same among women. Now, if the distribution of $X$ and of $Y$ are the same, then $P(X>Y)$ will be 0.5 (let's assume the distribution is purely absolutely continuous, so there are no ties). In the general case, define $$ \theta = P(X>Y) $$ where $X$ is a random draw among men, $Y$ among women. Can we estimate $\theta$ from our sample? Form all pairs $(X_i, Y_j)$ (assume no ties) and count for how many we have "man is larger" ($X_i > Y_j$)($M$) and for how many "woman is larger" ($ X_i < Y_j$) ($W$). Then one sample estimate of $\theta$ is $$ \frac{M}{M+W} $$ That is one reasonable measure of correlation! (If there are only a few ties, just ignore them). But I am not sure what that is called, if it has a name. This one may be close: https://en.wikipedia.org/wiki/Goodman_and_Kruskal%27s_gamma
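Here is a small R illustration of the proposed measure on simulated data (group sizes and means are arbitrary); it also shows that the plug-in estimate of $\theta$ coincides with the Mann-Whitney statistic from wilcox.test divided by the number of pairs:
set.seed(4)
x <- rnorm(30, mean = 1)     # continuous measurements, group "men"
y <- rnorm(25, mean = 0)     # continuous measurements, group "women"
theta_hat <- mean(outer(x, y, ">"))          # proportion of pairs with X > Y
theta_hat
unname(wilcox.test(x, y)$statistic) / (length(x) * length(y))  # the same number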
{ "source": [ "https://stats.stackexchange.com/questions/102778", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/18482/" ] }
103,345
I have a problem with the normality of some data I have: I've done a Kolmogorov test which says it isn't normal with p=.0000, I don't understand: the skewness of my distribution =-.497, and the kurtosis =-0,024 Here is the plot of my distribution which looks very much normal ... (I have three scores, and each one of this scores isn't normal with a significant p-value for the Kolmogorov test ... I really don't understand )
You have no basis to assert your data are normal. Even if your skewness and excess kurtosis both were exactly 0, that doesn't imply your data are normal. While skewness and kurtosis far from the expected values indicate non-normality, the converse doesn't hold. There are non-normal distributions that have the same skewness and kurtosis as the normal. An example is discussed here , the density of which is reproduced below: As you see, it's distinctly bimodal. In this case, the distribution is symmetric, so as long as sufficient moments exist, the typical skewness measure will be 0 (indeed all the usual measures will be). For the kurtosis, the contribution to 4th moments from the region close to the mean will tend to make the kurtosis smaller, but the tail is relatively heavy, which tends to make it larger. If you choose just right, the kurtosis comes out with the same value as for the normal. Your sample skewness is actually around -0.5, which is suggestive of mild left-skewness. Your histogram and Q-Q plot both indicate the same - a mildly left-skew distribution. (Such mild skewness is unlikely to be a problem for most of the common normal-theory procedures.) You're looking at several different indicators of non-normality which you shouldn't expect to agree a priori , since they consider different aspects of the distribution; with smallish mildly non-normal samples, they'll frequently disagree. Now for the big question: *Why are you testing for normality?* [edited in response to comments:] I'm not really sure; I thought I should before doing an ANOVA. There are a number of points to be made here. i. Normality is an assumption of ANOVA if you're using it for inference (such as hypothesis testing), but it's not especially sensitive to non-normality in larger samples - mild non-normality is of little consequence, and as sample sizes increase the distribution may become more non-normal and the test may be only a little affected. ii. You appear to be testing normality of the response (the DV). The (unconditional) distribution of the DV itself is not assumed to be normal in ANOVA. You check the residuals to assess the reasonableness of the assumption about the conditional distribution (that is, it's the error term in the model that's assumed normal) - i.e. you don't seem to be looking at the right thing. Indeed, because the check is done on residuals, you do it after model fitting, rather than before. iii. Formal testing can be next to useless. The question of interest here is 'how badly is the degree of non-normality affecting my inference?', which the hypothesis test really doesn't respond to. As the sample size gets larger, the test becomes more and more able to detect trivial differences from normality, while the effect on the significance level in the ANOVA becomes smaller and smaller. That is, if your sample size is reasonably large, the test of normality is mostly telling you that you have a large sample size, which means you may not have much to worry about. At least with a Q-Q plot you have a visual assessment of how non-normal it is. iv. At reasonable sample sizes, other assumptions - like equality of variance and independence - generally matter much more than mild non-normality. Worry about the other assumptions first ... but again, formal testing isn't answering the right question. v. Choosing whether you do an ANOVA or some other test based on the outcome of a hypothesis test tends to have worse properties than simply deciding to act as if the assumption doesn't hold.
(There are a variety of methods that are suitable for one-way ANOVA-like analyses on data that isn't assumed to be normal that you can use whenever you don't think you have reason to assume normality. Some have very good power at the normal, and with decent software there's no reason to avoid them.) [I believe I had a reference for this last point but I can't locate it right now; if I find it I'll try to come back and put it in]
{ "source": [ "https://stats.stackexchange.com/questions/103345", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/48360/" ] }
103,459
I am trying to figure out which cross validation method is best for my situation. The following data are just an example for working through the issue (in R), but my real X data ( xmat ) are correlated with each other and correlated to different degrees with the y variable ( ymat ). I provided R code, but my question is not about R but rather about the methods. Xmat includes X variables V1 to V100 while ymat includes a single y variable. set.seed(1233) xmat <- matrix(sample(-1:1, 20000, replace = TRUE), ncol = 100) colnames(xmat) <- paste("V", 1:100, sep ="") rownames(xmat) <- paste("S", 1:200, sep ="") # the real y data are correlated with xmat ymat <- matrix(rnorm(200, 70,20), ncol = 1) rownames(ymat) <- paste("S", 1:200, sep="") I would like to build a model for predicting y based on all the variables in xmat . So it will be a linear regression model y ~ V1 + V2 + V3+ ... + V100 . From a review, I can see the following three cross validation methods: Split data in about half and use one for training and another half for testing (cross validation): prop <- 0.5 # proportion of subset data set.seed(1234) # training data set training.s <- sample (1:nrow(xmat), round(prop*nrow(xmat),0)) xmat.train <- xmat[training.s,] ymat.train <- ymat[training.s,] # testing data set testing.s <- setdiff(1:nrow(xmat), training) xmat.test <- xmat[testing.s,] ymat.test <- ymat[testing.s,] K-fold cross validation - using 10 fold cross validation: mydata <- data.frame(ymat, xmat) fit <- lm(ymat ~ ., data=mydata) library(DAAG) cv.lm(df=mydata, fit, m=10) # ten-fold cross validation Masking one value or few values at a time : In this method we randomly mask a value in dataset (y) by replacing it with NA and predict it. The process is repeated n times. n = 500 predicted.v <- rep(NA, n) real.v <- rep(NA, n) for (i in 1:n){ masked.id <- sample (1:nrow(xmat), 1) ymat1 <- ymat real.v[i] <- ymat[masked.id,] ymat1[masked.id,] <- NA mydata <- data.frame(ymat1, xmat) fit <- lm(ymat1 ~ ., data=mydata) predicted.v[i] <- fit$fitted.values[masked.id] } How do I know which is best for any situation? Are there other methods? Bootstrap validation vs CV ? Worked examples would be appreciated.
Since the OP has placed a bounty on this question, it should attract some attention, and thus it is the right place to discuss some general ideas, even if it does not answer the OP directly. First, names: a) cross-validation is the general name for all estimation/measure techniques that use a test set different from the train set. Synonym: out-of-sample or extra-sample estimations. Antonym: in-sample estimation. In-sample estimation refers to techniques that use some information on the training set to estimate the model quality (not necessarily error). This is very common if the model has a high bias – that is – it makes strong assumptions about the data. In linear models (a high bias model), as in the example of the question, one uses R-squared, AIC, BIC, deviance as a measure of model quality – all these are in-sample estimators. In SVM, for example, the ratio of the number of support vectors to the number of data points is an in-sample estimation of the error of the model. There are many cross validation techniques: b) hold-out is method #1 above. Split the set into one training and one test set. There is a long history of discussion and practices on the relative sizes of the training and test set. c) k -fold – method #2 above. Pretty standard. d) Leave-one-out – method #3 above. e) bootstrap : if your set has N data, randomly select N samples WITH REPLACEMENT from the set and use it as training. The data from the original set that has not been sampled any time is used as the test set. There are different ways to compute the final estimation of the error of the model which use both the error for the test set (out-of-sample) and the error for the train set (in-sample). See, for example, the .632 bootstrap. I think there is also a .632+ formula – they are formulas that estimate the true error of the model using both out-of-sample and in-sample errors. f) Orthogonal to the selection of the method above is the issue of repetition. Except for leave-one-out, all methods above can be repeated any number of times. In fact one can talk about REPEATED hold-out, or REPEATED k -fold. To be fair, almost always the bootstrap method is used in a repeated fashion. The next question is, which method is "better". The problem is what "better" means. 1) The first answer is whether each of these methods is biased for the estimation of the model error (for an infinite amount of future data). 2) The second alternative is how fast or how well each of these methods converges to the true model error (if they are not biased). I believe this is still a topic of research. Let me point to these two papers (behind pay-wall) but the abstracts give us some understanding of what they are trying to accomplish. Also notice that it is very common to call k -fold "cross-validation" by itself. Measuring the prediction error. A comparison of cross-validation, bootstrap and covariance penalty methods Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap There are probably many other papers on these topics. Those are just some examples. 3) Another aspect of "better" is: given a particular measure of the model error using one of the techniques above, how certain can you be that it is close to the correct model error. In general, in this case you want to take many measures of the error and calculate a confidence interval (or a credible interval if you follow a Bayesian approach). In this case, the issue is how much can you trust the variance of the set of error measures.
Notice that except for the leave-one-out, all techniques above will give you many different measures ( k measures for a k -fold, n measures for an n -repeated hold-out) and thus you can measure the variance (or standard deviation) of this set and calculate a confidence interval for the measure of error. Here things get somewhat complicated. From what I understand from the paper No unbiased estimator of the variance of k -fold cross-validation (not behind paywall), one cannot trust the variance you get from a k -fold – so one cannot construct a good confidence interval from k -folds. Also from what I understand from the paper Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms (not behind paywall), techniques that use repeated measures (repeated k -fold, repeated hold-out – not sure about bootstrap) will underestimate the true variance of the error measure (it is somewhat easy to see why: since you are sampling from a finite set, if you repeat the measure a very large number of times the same values will keep repeating, which keeps the mean the same but reduces the variance). Thus repeated measures techniques will be too optimistic about the confidence interval. This last paper suggests doing a 5-times-repeated 2-fold – which it calls 5×2 CV – as a good balance between many measures (10) and not too many repetitions. EDIT: Of course there are great answers in Cross Validated to some of these questions (although sometimes they do not agree among themselves). Here are some: Cross-validation or bootstrapping to evaluate classification performance? Differences between cross validation and bootstrapping to estimate the prediction error Understanding bootstrapping for validation and model selection In general, the tag cross-validation is your friend here. So what is the best solution? I don't know. I have been using 5×2 CV when I need to be very rigorous, when I need to be sure that one technique is better than another, especially in publications. And I use a hold out if I am not planning to make any measure of variance or standard deviation, or if I have time constraints – only one model is learned in a hold-out.
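For what it's worth, here is a minimal 5×2 CV sketch in R on placeholder data (the linear model and the variables are stand-ins for whatever model you are actually evaluating): five random half/half splits, each half used once for training and once for testing, giving ten error measures whose mean and spread you can report.
set.seed(6)
dat <- data.frame(y = rnorm(100), matrix(rnorm(100 * 5), 100))   # placeholder data
cv52 <- replicate(5, {
  halves <- split(sample(nrow(dat)), rep(1:2, length.out = nrow(dat)))
  sapply(1:2, function(k) {
    test_id <- halves[[k]]
    fit <- lm(y ~ ., data = dat[-test_id, ])                     # train on one half
    mean((dat$y[test_id] - predict(fit, dat[test_id, ]))^2)      # test on the other
  })
})
c(mean(cv52), sd(cv52))                                          # 10 error measures in total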
{ "source": [ "https://stats.stackexchange.com/questions/103459", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/19762/" ] }
103,466
I am testing whether price per ounce of beer (continuous variable, range of values mostly between 0.1 and 0.5 dollars) and the presence of promotion, advertisement, and display (all binary) have effect on the total amount of ounces purchased (continuous variable). Here is my residual vs. fitted plot before the log transformation of y: This is the residuals vs. fitted plot after the log transformation of y: Heteroscedasticity is very high (White's general t statistics is nearly 800). This is the histogram of the transformed y: Any ideas or suggestions on how to improve my model or where to look for errors in order to improve the problem of heteroskedasticity are greatly appreciated.
{ "source": [ "https://stats.stackexchange.com/questions/103466", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/48419/" ] }
103,731
I have a study where many outcomes are represented like percentages and I'm using multiple linear regressions to asses the effect of some categorical variables on these outcomes. I was wondering, since a linear regression assume that the outcome is a continuous distribution, are there methodological problems in applying such model to percentages, which are limited between 0 and 100?
I'll address the issues relevant to either discrete or continuous possibility: A problem with the description of the mean You have a bounded response. But the model you're fitting isn't bounded, and so can blast right through the bound; some of your fitted values may be impossible, and predicted values eventually must be. The true relationship must eventually become flatter than it is at the middle as it approaches the bounds, so it would be expected to bend in some fashion. A problem with the description of the variance As the mean approaches the bound, the variance will tend to decrease as well, other things being equal. There's less room between the mean and the bound, so the overall variability tends to reduce (otherwise the mean would tend to be pulled away from the bound by points being on average further away on the side not close to the bound. (Indeed, if all the population values in some neighborhood were exactly at the bound, the variance there would be zero.) A model that deals with such a bound should take such effects into consideration. If the proportion is for a count variable, a common model for the distribution of the proportion is a binomial GLM. There are several options for the form of the relationship of the mean proportion and the predictors, but the most common one would be a logistic GLM (several other choices are in common use). If the proportion is a continuous one (like the percentage of cream in milk), there are a number of options. Beta regression seems to be one fairly common choice. Again, it might use a logistic relationship between the mean and the predictors, or it might use some other functional form. See also Regression for an outcome (ratio or fraction) between 0 and 1 .
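A minimal R sketch of the GLM route on made-up data (the group labels, effect sizes and the quasi-binomial choice are all illustrative assumptions; if your percentages run from 0 to 100, divide by 100 first):
set.seed(7)
grp <- gl(3, 30, labels = c("A", "B", "C"))                              # made-up grouping
p   <- plogis(qlogis(0.3) + c(0, 0.8, 1.6)[grp] + rnorm(90, sd = 0.4))   # proportions in (0, 1)
fit <- glm(p ~ grp, family = quasibinomial(link = "logit"))              # respects the bounds and the mean-variance link
summary(fit)
# For continuous proportions, beta regression is another option:
# install.packages("betareg"); library(betareg); summary(betareg(p ~ grp))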
{ "source": [ "https://stats.stackexchange.com/questions/103731", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6479/" ] }
103,800
I have two normal distributions defined by their averages and standard deviations. Sample 1: Mean=5.28; SD=0.91 Sample 2: Mean=8.45; SD=1.36 You can see how they look like in the next image: How can I get the probability to obtain an individual from the overlapping area (green)? Is the probability the same as the area?
It is not quite clear what you mean by probability to obtain an individual from the overlapping area . This solves for the area of the green zone in your diagram: Let: $X_1 \sim N(\mu_1,\sigma_1^2)$ with pdf $f_1(x_1)$ and cdf $ F_1(x_1)$ and $X_2 \sim N(\mu_2,\sigma_2^2)$ with pdf $f_2(x_2)$ and cdf $ F_2(x_2)$, where $\mu_1 < \mu_2$. In your example, the 'black variable' corresponds to $X_1$. Let $c$ denote the point of intersection where the pdf's meet in the green zone of your plot Then, the area of your green intersection zone is simply: $$P(X_1>c) + P(X_2<c) = 1 - F_1(c) + F_2(c) = 1-\frac{1}{2} \text{erf}\left(\frac{c-\mu _1}{\sqrt{2} \sigma _1}\right)+\frac{1}{2} \text{erf}\left(\frac{c-\mu _2}{\sqrt{2} \sigma _2}\right)$$ where erf(.) is the error function. Point $c$ is the solution to $f_1(x) = f_2(x)$ within the green zone, which yields: $$c = \frac{\mu _2 \sigma _1^2-\sigma _2 \left(\mu _1 \sigma _2+\sigma _1 \sqrt{\left(\mu _1-\mu _2\right){}^2+2 \left(\sigma _1^2-\sigma _2^2\right) \log \left(\frac{\sigma _1}{\sigma _2}\right)}\right)}{\sigma _1^2-\sigma _2^2}$$ For your example, with $ {\mu_1 = 5.28, \mu_2 = 8.45, \sigma_1 = 0.91, \sigma_2 = 1.36}$, this yields: $c = 6.70458...$, and the area of the green section is: 0.158413 ...
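The numbers in this answer are easy to reproduce in base R; this sketch simply finds the crossing point numerically rather than via the closed form:
m1 <- 5.28; s1 <- 0.91     # sample 1
m2 <- 8.45; s2 <- 1.36     # sample 2
c_pt <- uniroot(function(x) dnorm(x, m1, s1) - dnorm(x, m2, s2),
                interval = c(m1, m2))$root        # crossing point of the two densities
c_pt                                              # about 6.7046
(1 - pnorm(c_pt, m1, s1)) + pnorm(c_pt, m2, s2)   # overlap area, about 0.1584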
{ "source": [ "https://stats.stackexchange.com/questions/103800", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/40547/" ] }
103,801
There are two Boolean vectors, which contain 0 and 1 only. If I calculate the Pearson or Spearman correlation, are they meaningful or reasonable?
The Pearson and Spearman correlation are defined as long as you have some $0$ s and some $1$ s for both of two binary variables, say $y$ and $x$ . It is easy to get a good qualitative idea of what they mean by thinking of a scatter plot of the two variables. Clearly, there are only four possibilities $(0,0), (0,1), (1, 0), (1,1)$ (so that jittering to shake identical points apart for visualization is a good idea). For example, in any situation where the two vectors are identical, subject to having some 0s and some 1s in each, then by definition $y = x$ and the correlation is necessarily $1$ . Similarly, it is possible that $y = 1 -x$ and then the correlation is $-1$ . For this set-up, there is no scope for monotonic relations that are not linear. When taking ranks of $0$ s and $1$ s under the usual midrank convention the ranks are just a linear transformation of the original $0$ s and $1$ s and the Spearman correlation is necessarily identical to the Pearson correlation. Hence there is no reason to consider Spearman correlation separately here, or indeed at all. Correlations arise naturally for some problems involving $0$ s and $1$ s, e.g. in the study of binary processes in time or space. On the whole, however, there will be better ways of thinking about such data, depending largely on the main motive for such a study. For example, the fact that correlations make much sense does not mean that linear regression is a good way to model a binary response. If one of the binary variables is a response, then most statistical people would start by considering a logit model.
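A short simulated example in R of these points (the data-generating numbers are arbitrary): the Pearson and Spearman coefficients coincide for 0/1 data, and a logit model is the natural framework when one of the variables is a response.
set.seed(8)
x <- rbinom(200, 1, 0.5)
y <- rbinom(200, 1, plogis(-1 + 2 * x))      # y constructed to depend on x
cor(x, y)                                    # Pearson (the "phi" coefficient)
cor(x, y, method = "spearman")               # identical for 0/1 data
summary(glm(y ~ x, family = binomial))       # logit model when y is a response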
{ "source": [ "https://stats.stackexchange.com/questions/103801", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/8267/" ] }
103,843
I took several statistics courses in college but I found that my education was very theory driven. I was wondering if any of you had a text in Applied Statistics (at the graduate level) that you recommend or have had good experience with.
Some very good books: "Statistics for Experimenters: Design, Innovation, and Discovery , 2nd Edition" by Box, Hunter & Hunter. This is formally an introductory text (more for chemistry & engineering people) but extremely good on the applied side. "Data Analysis Using Regression and Multilevel/Hierarchical Models" by Andrew Gelman & Jennifer Hill. Very good on application of regression modelling. "The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition" (Springer Series in Statistics) 2nd (2009) Corrected Edition by Hastie Trevor, Tibshirani Robert & Friedman Jerome. More theoretical than the two first in my list, but also extremely good on the whys and ifs of applications. -- PDF Released Version "An Introduction to Statistical Learning" (Springer Series in Statistics) 6th (2015) by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani -- PDF Released Version Working your way through these three books should give a very good basis for applications.
{ "source": [ "https://stats.stackexchange.com/questions/103843", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/48578/" ] }
103,847
Is there anything nice I can say about the sum of two independent generalized Laplace variables, with different scales and sizes? i.e. are they distributed same as another generalized Laplace variable with some function of the moments, etc. Edit: the PDF of generalized {Laplace or Gaussian} is: $f(x) = C\exp\{-|(x-\mu)/a|^b\}$ where $a$ is the scale and $b$ is the shape ($C$ is a normalization constant). Thanks.
{ "source": [ "https://stats.stackexchange.com/questions/103847", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/40946/" ] }
103,865
I've been looking up all across the internet to see what "lag length" is. I want to perform an Engle Granger - Augmented Dickey Fuller test, but for ADF, it always asks to specifiy a lag. It seems to me that whenever I look up lag length, most people just have trouble determining what it should be. What is the lag length? What does it mean?
{ "source": [ "https://stats.stackexchange.com/questions/103865", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/48590/" ] }
103,969
This is the constructivist sequel of this question . If we can't have a discrete uniform random variable having as support all the rationals in the interval $[0,1]$, then the next best thing is: Construct a random variable $Q$ that has this support, $Q\in \mathbb{Q}\cap[0,1]$, and that it follows some distribution. And the craftsman in me requires that this random variable is constructed from existing distributions, rather than created by abstractly defining what we desire to obtain. So I came up with the following: Let $X$ be a discrete random variable following the Geometric Distribution-Variant II with parameter $0<p<1$, namely $$ X \in \{0,1,2,...\},\;\;\;\; P(X=k) = (1-p)^kp,\;\;\; F_X(X) = 1-(1-p)^{k+1}$$ Let also $Y$ be a discrete random variable following the Geometric Distribution-Variant I with identical parameter $p$, namely $$ Y \in \{1,2,...\},\;\;\;\; P(Y=k) = (1-p)^{k-1}p,\;\;\; F_Y(Y) = 1-(1-p)^k$$ $X$ and $Y$ are independent. Define now the random variable $$Q = \frac {X}{Y}$$ and consider the conditional distribution $$P(Q\leq q \mid \{X\leq Y\})$$ In loose words "conditional $Q$ is the ratio of $X$ over $Y$ conditional on $X$ being smaller or equal than $Y$." The support of this conditional distribution is $\{0,1,1/2,1/3,...,1/k,1/(k+1),...,2/3,2/4,...\} = \mathbb{Q}\cap[0,1]$. The "question" is: Can somebody please provide the associated conditional probability mass function? A comment asked "should it be closed-form"? Since what constitutes a closed-form nowadays is not so clear cut, let me put it this way: we are searching for a functional form into which we can input a rational number from $[0,1]$, and obtain the probability (for some specified value of the parameter $p$ of course), leading to an indicative graph of the pmf. And then vary $p$ to see how the graph changes. If it helps, then we can make the one or both bounds of the support open, although these variants will deprive us of the ability to definitely graph the upper and/or lower values of the pmf . Also, if we make open the upper bound, then we should consider the conditioning event $\{X<Y\}$. Alternatively, I welcome also other r.v.'s that have this support(s), as long as they come together with their pmf . I used the Geometric distribution because it has readily available two variants with the one not including zero in the support (so that division by zero is avoided). Obviously, one can use other discrete r.v.'s, using some truncation. I most certainly will put a bounty on this question, but the system does not immediately permit this.
Consider the discrete distribution $F$ with support on the set $\{(p,q)\,|\, q \ge p \ge 1\}\subset \mathbb{N}^2$ with probability masses $$F(p,q) = \frac{3}{2^{1+p+q}}.$$ This is easily summed (all series involved are geometric) to demonstrate it really is a distribution (the total probability is unity). For any nonzero rational number $x$ let $a/b=x$ be its representation in lowest terms: that is, $b\gt 0$ and $\gcd(a,b)=1$. $F$ induces a discrete distribution $G$ on $[0,1]\cap \mathbb{Q}$ via the rules $$G(x) = G\left(\frac{a}{b}\right) = \sum_{n=1}^\infty F\left(an, bn\right)=\frac{3}{2^{1+a+b}-2}.$$ (and $G(0)=0$). Every rational number in $(0,1]$ has nonzero probability. (If you must include $0$ among the values with positive probability, just take some of the probability away from another number--like $1$--and assign it to $0$.) To understand this construction, look at this depiction of $F$: $F$ gives probability masses at all points $p,q$ with positive integral coordinates. Values of $F$ are represented by the colored areas of circular symbols. The lines have slopes $p/q$ for all possible combinations of coordinates $p$ and $q$ appearing in the plot. They are colored in the same way the circular symbols are: according to their slopes. Thus, slope (which clearly ranges from $0$ through $1$) and color correspond to the argument of $G$ and the values of $G$ are obtained by summing the areas of all circles lying on each line. For instance, $G(1)$ is obtained by summing the areas of all the (red) circles along the main diagonal of slope $1$, given by $F(1,1)+F(2,2)+F(3,3)+\cdots$ = $3/8 + 3/32 + 3/128 + \cdots = 1/2$. This figure shows an approximation to $G$ achieved by limiting $q\le 100$: it plots its values at $3044$ rational numbers ranging from $1/100$ through $1$. The largest probability masses are $\frac{1}{2},\frac{3}{14},\frac{1}{10},\frac{3}{62},\frac{3}{62},\frac{1}{42},\ldots$. Here is the full CDF of $G$ (accurate to the resolution of the image). The six numbers just listed give the sizes of the visible jumps, but every part of the CDF consists of jumps, without exception:
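A few lines of R reproduce the quoted masses and check that the total probability is numerically one (the gcd helper and the truncation at denominator 50 are just implementation conveniences):
G   <- function(a, b) 3 / (2^(1 + a + b) - 2)            # mass at a/b in lowest terms
gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)  # small helper
c(G(1, 1), G(1, 2), G(1, 3), G(1, 4), G(2, 3), G(1, 5))  # 1/2, 3/14, 1/10, 3/62, 3/62, 1/42
tot <- 0
for (b in 1:50) for (a in 1:b) if (gcd(a, b) == 1) tot <- tot + G(a, b)
tot                                                      # numerically indistinguishable from 1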
{ "source": [ "https://stats.stackexchange.com/questions/103969", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/28746/" ] }
103,981
I have an 5297X26 imbalanced dataset, the class1 has 588 samples and class2 has 4709 samples. I used the following code to perform random forest: rfp<-randomForest(label~.,data=data,importance=TRUE,proximity=TRUE,replace=TRUE,sampsize=c(588,588)) Thus I could solve the imbalanced problem by selecting 588 samples from each class in each iteration. But I also want to perform cross validation for feature selection. The function I plan to use is rfcv .I tried to add sampsize=c(588,588) to the function but it didn't work. How to perform the sampling in this function?
{ "source": [ "https://stats.stackexchange.com/questions/103981", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/48648/" ] }
103,989
I'm working on 2D time series data where two attributes are depth and temperature. When I plotted depth-vs-temp curve and saw its variation over time, the fluctuation occurs at few places only. I'm not saying temperature is dependent on depth. But given these data, what should I look into to establish relationships between depth, temperature, time? Ideas: Look for depth regions where fluctuation in temperatures occurs over time. And develop time series prediction for these given regions. What models/papers should I study to get insights out of the data? I want to understand and study what are the various methods that could be tested and visualizations built over these data. It's a data exploration problem. Data shape: (2000,2,100) (samples,depth-temperature,time-sample) .i.e 2000 depth,temperature samples for each of 100 time stamps. Thanks.
{ "source": [ "https://stats.stackexchange.com/questions/103989", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/48651/" ] }
104,040
I am trying to understand difference between different resampling methods (Monte Carlo simulation, parametric bootstrapping, non-parametric bootstrapping, jackknifing, cross-validation, randomization tests, and permutation tests) and their implementation in my own context using R. Say I have the following situation – I want to perform ANOVA with a Y variable ( Yvar ) and X variable ( Xvar ). Xvar is categorical. I am interested in the following things: (1) Significance of p-values – false discovery rate (2) effect size of Xvar levels Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4) Xvar <- c(rep("A", 5), rep("B", 5), rep("C", 5)) mydf <- data.frame (Yvar, Xvar) Could you gel me to explain the sampling differences with explicit worked examples how these resampling method work? Edits: Here are my attempts: Bootstrap 10 bootstrap samples, sample number of samples with replacement, means that samples can be repeated boot.samples <- list() for(i in 1:10) { t.xvar <- Xvar[ sample(length(Xvar), length(Xvar), replace=TRUE) ] t.yvar <- Yvar[ sample(length(Yvar), length(Yvar), replace=TRUE) ] b.df <- data.frame (t.xvar, t.yvar) boot.samples[[i]] <- b.df } str(boot.samples) boot.samples[1] Permutation: 10 permutation samples, sample number of samples without replacement permt.samples <- list() for(i in 1:10) { t.xvar <- Xvar[ sample(length(Xvar), length(Xvar), replace=FALSE) ] t.yvar <- Yvar[ sample(length(Yvar), length(Yvar), replace=FALSE) ] b.df <- data.frame (t.xvar, t.yvar) permt.samples[[i]] <- b.df } str(permt.samples) permt.samples[1] Monte Caro Simulation Although the term "resampling" is often used to refer to any repeated random or pseudorandom sampling simulation, when the "resampling" is done from a known theoretical distribution, the correct term is "Monte Carlo" simulation. I am not sure about all above terms and whether my above edits are correct. I did find some information on jacknife but I could not tame it to my situation.
We can find different Resampling methods , or loosely called " simulation " methods, that depend upon resampling or shuffling of the samples. There might be differences in opinions with respect to proper terminology, but the following discussion tries to generalize and simplify what is available in the appropriate literature: Resampling methods are used in (1) estimating precision / accuracy of sample statistics through using a subset of data (e.g. Jackknifing) or drawing randomly with replacement from a set of data points (e.g. bootstrapping) (2) Exchanging labels on data points when performing significance tests (permutation tests, also called exact tests, randomization tests, or re-randomization tests) (3) Validating models by using random subsets (bootstrapping, cross validation) (see wikipedia: resampling methods ) BOOTSTRAPPING " Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample". The method assigns measures of accuracy (defined in terms of bias , variance , confidence intervals , prediction error or some other such measure) to sample estimates. The basic idea of bootstrapping is that inference about a population from sample data ( sample → population ) can be modeled by resampling the sample data and performing inference on (resample → sample). As the population is unknown, the true error in a sample statistic against its population value is unknowable. In bootstrap-resamples, the 'population' is in fact the sample, and this is known; hence the quality of inference from resample data → 'true' sample is measurable." see wikipedia Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4) #To generate a single bootstrap sample sample(Yvar, replace = TRUE) #generate 1000 bootstrap samples boot <-list() for (i in 1:1000) boot[[i]] <- sample(Yvar,replace=TRUE) In univariate problems, it is usually acceptable to resample the individual observations with replacement ("case resampling"). Here we resample the data with replacement, and the size of the resample must be equal to the size of the original data set. In regression problems, case resampling refers to the simple scheme of resampling individual cases - often rows of a data set. In regression problems, the explanatory variables are often fixed, or at least observed with more control than the response variable. Also, the range of the explanatory variables defines the information available from them. Therefore, to resample cases means that each bootstrap sample will lose some information (see Wikipedia ). So it will be logical to sample rows of the data rather than just Yvar . Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4) Xvar <- c(rep("A", 5), rep("B", 5), rep("C", 5)) mydf <- data.frame (Yvar, Xvar) boot.samples <- list() for(i in 1:10) { b.samples.cases <- sample(length(Xvar), length(Xvar), replace=TRUE) b.mydf <- mydf[b.samples.cases,] boot.samples[[i]] <- b.mydf } str(boot.samples) boot.samples[1] You can see some cases are repeated as we are sampling with replacement. " Parametric bootstrap - a parametric model is fitted to the data, often by maximum likelihood, and samples of random numbers are drawn from this fitted model . Usually the sample drawn has the same sample size as the original data. Then the quantity, or estimate, of interest is calculated from these data. This sampling process is repeated many times as for other bootstrap methods.
The use of a parametric model at the sampling stage of the bootstrap methodology leads to procedures which are different from those obtained by applying basic statistical theory to inference for the same model."(see Wikipedia ). The following is a parametric bootstrap with a normal distribution assumption, with mean and standard deviation as parameters. Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4) # parameters for Yvar mean.y <- mean(Yvar) sd.y <- sd(Yvar) #To generate a single bootstrap sample with assumed normal distribution (mean, sd) rnorm(length(Yvar), mean.y, sd.y) #generate 1000 bootstrap samples boot <-list() for (i in 1:1000) boot[[i]] <- rnorm(length(Yvar), mean.y, sd.y) There are other variants of the bootstrap, please consult the wikipedia page or any good statistical book on resampling. JACKKNIFE "The jackknife estimator of a parameter is found by systematically leaving out each observation from a dataset and calculating the estimate and then finding the average of these calculations. Given a sample of size N, the jackknife estimate is found by aggregating the estimates from each of the N subsamples of size N − 1." see: wikipedia The following shows how to jackknife the Yvar . jackdf <- list() jack <- numeric(length(Yvar)-1) for (i in 1:length (Yvar)){ for (j in 1:length(Yvar)){ if(j < i){ jack[j] <- Yvar[j] } else if(j > i) { jack[j-1] <- Yvar[j] } } jackdf[[i]] <- jack } jackdf "the regular bootstrap and the jackknife, estimate the variability of a statistic from the variability of that statistic between subsamples, rather than from parametric assumptions . For the more general jackknife, the delete-m observations jackknife, the bootstrap can be seen as a random approximation of it. Both yield similar numerical results, which is why each can be seen as approximation to the other." See this question on Bootstrap vs Jackknife. RANDOMIZATION TESTS "In parametric tests we randomly sample from one or more populations. We make certain assumptions about those populations, most commonly that they are normally distributed with equal variances. We establish a null hypothesis that is framed in terms of parameters, often of the form m1 -m2 = 0 . We use our sample statistics as estimates of the corresponding population parameters, and calculate a test statistic (such as a t test). For example: in Student's t - test for differences in means when variances are unknown, but are considered to be equal. The hypothesis of interest is that H0: m1 = m2 . One alternative hypothesis would be stated as: HA: m1 < m2 . Given two samples drawn from populations 1 and 2, assuming that these are normally distributed populations with equal variances, and that the samples were drawn independently and at random from each population, then a statistic whose distribution is known can be elaborated to test H0 . One way to avoid these distributional assumptions has been the approach now called non - parametric, rank - order, rank - like, and distribution - free statistics. These distribution - free statistics are usually criticized for being less "efficient" than the analogous test based on assuming the populations to be normally distributed. Another alternative approach is the randomization approach - "process of randomly assigning ranks to observations independent of one's knowledge of which sample an observation is a member. A randomization test makes use of such a procedure, but does so by operating on the observations rather than the joint ranking of the observations.
For this reason, the distribution of an analogous statistic (the sum of the observations in one sample) cannot be easily tabulated, although it is theoretically possible to enumerate such a distribution" ( see ) Randomization tests differ from parametric tests in almost every respect. (1) There is no requirement that we have random samples from one or more populations—in fact we usually have not sampled randomly. (2) We rarely think in terms of the populations from which the data came, and there is no need to assume anything about normality or homoscedasticity (3) Our null hypothesis has nothing to do with parameters, but is phrased rather vaguely, as, for example, the hypothesis that the treatment has no effect on how participants perform. (4) Because we are not concerned with populations, we are not concerned with estimating (or even testing) characteristics of those populations (5) We do calculate some sort of test statistic, however we do not compare that statistic to tabled distributions. Instead, we compare it to the results we obtain when we repeatedly randomize the data across the groups, and calculate the corresponding statistic for each randomization. (6) Even more than parametric tests, randomization tests emphasize the importance of random assignment of participants to treatments." see . A very popular type of randomization test is the permutation test. If our total sample size is 12 with a group of 5, the total number of possible arrangements is C(12,5) = 792 . If our sample sizes had been 10 and 15 then over 3.2 million arrangements would have been possible. This is a computing challenge: What then? Sample . When the universe of possible arrangements is too large to enumerate, why not sample arrangements from this universe independently and at random? The distribution of the test statistic over this series of samples can then be tabulated, its mean and variance computed, and the error rate associated with a hypothesis test estimated. PERMUTATION TEST According to wikipedia "A permutation test (also called a randomization test , re-randomization test , or an exact test ) is a type of statistical significance test in which the distribution of the test statistic under the null hypothesis is obtained by calculating all possible values of the test statistic under rearrangements of the labels on the observed data points. Permutation tests exist for any test statistic, regardless of whether or not its distribution is known. Thus one is always free to choose the statistic which best discriminates between hypothesis and alternative and which minimizes losses." The difference between permutation and bootstrap is that bootstraps sample with replacement, and permutations sample without replacement . In either case, the time order of the observations is lost and hence volatility clustering is lost — thus assuring that the samples are under the null hypothesis of no volatility clustering. The permutations always have all of the same observations, so they are more like the original data than bootstrap samples. The expectation is that the permutation test should be more sensitive than a bootstrap test. The permutations destroy volatility clustering but do not add any other variability . See the question on permutation vs bootstrapping - "The permutation test is best for testing hypotheses and bootstrapping is best for estimating confidence intervals ". So to perform a permutation in this case we can just change replace = FALSE in the above bootstrap example.
Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4) #generate 1000 permutation samples permutes <-list() for (i in 1:1000) permutes[[i]] <- sample(Yvar,replace=FALSE) In the case of more than one variable, just picking the rows and reshuffling their order will not make any difference, as the data will remain the same. So we reshuffle the y variable. Something like what you have done, but I do not think we need double reshuffling of both x and y variables (as you have done). Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4) Xvar <- c(rep("A", 5), rep("B", 5), rep("C", 5)) mydf <- data.frame (Yvar, Xvar) permt.samples <- list() for(i in 1:10) { t.yvar <- Yvar[ sample(length(Yvar), length(Yvar), replace=FALSE) ] b.df <- data.frame (Xvar, t.yvar) permt.samples[[i]] <- b.df } str(permt.samples) permt.samples[1] MONTE CARLO METHODS "Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results; typically one runs simulations many times over in order to obtain the distribution of an unknown probabilistic entity. The name comes from the resemblance of the technique to the act of playing and recording results in a real gambling casino. " see Wikipedia "In applied statistics, Monte Carlo methods are generally used for two purposes: (1) To compare competing statistics for small samples under realistic data conditions. Although Type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i.e., infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions. (2) To provide implementations of hypothesis tests that are more efficient than exact tests such as permutation tests (which are often impossible to compute) while being more accurate than critical values for asymptotic distributions. Monte Carlo methods are also a compromise between approximate randomization and permutation tests . An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations ( exchanging a minor loss in precision if a permutation is drawn twice – or more frequently—for the efficiency of not having to track which permutations have already been selected )." Both MC and permutation tests are sometimes collectively called randomization tests . The difference is that in MC we sample the permutation samples, rather than using all possible combinations [see] . CROSS VALIDATION The idea behind cross validation is that models should be tested with data that were not used to fit the model. Cross validation is perhaps most often used in the context of prediction . "Cross-validation is a statistical method for validating a predictive model. Subsets of the data are held out for use as validating sets ; a model is fit to the remaining data (a training set) and used to predict for the validation set. Averaging the quality of the predictions across the validation sets yields an overall measure of prediction accuracy. One form of cross-validation leaves out a single observation at a time; this is similar to the jackknife. Another, K-fold cross-validation, splits the data into K subsets; each is held out in turn as the validation set." see Wikipedia .
Cross validation is usually done with quantitative data. You can convert your qualitative (factor) data to quantitative in some way in order to fit a linear model and test this model. The following is a simple hold-out strategy where 50% of the data is used for model fitting while the rest is used for testing. Let's assume Xvar is also a quantitative variable. Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4) Xvar <- c(rep(1, 5), rep(2, 5), rep(3, 5)) mydf <- data.frame (Yvar, Xvar) training.id <- sample(1:nrow(mydf), round(nrow(mydf)/2,0), replace = FALSE) test.id <- setdiff(1:nrow(mydf), training.id) # training dataset mydf.train <- mydf[training.id, ] #testing dataset mydf.test <- mydf[test.id, ] Unlike bootstrap and permutation tests, the cross-validation datasets for training and testing are different. The following figure shows a summary of resampling in different methods. Hope this helps a bit.
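To complement the hold-out example above, here is a rough sketch of K-fold cross-validation on the same toy data (my own addition; the fold assignment and the use of mean squared prediction error are arbitrary choices for illustration):

Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4)
Xvar <- c(rep(1, 5), rep(2, 5), rep(3, 5))
mydf <- data.frame(Yvar, Xvar)
K <- 5
fold <- sample(rep(1:K, length.out = nrow(mydf)))  # randomly assign each row to a fold
cv.err <- numeric(K)
for (k in 1:K) {
  train <- mydf[fold != k, ]
  test  <- mydf[fold == k, ]
  fit   <- lm(Yvar ~ Xvar, data = train)
  pred  <- predict(fit, newdata = test)
  cv.err[k] <- mean((test$Yvar - pred)^2)          # prediction error in fold k
}
mean(cv.err)                                       # cross-validated error estimate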
{ "source": [ "https://stats.stackexchange.com/questions/104040", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7397/" ] }
104,255
From what I understand, we can only build a regression function that lies within the interval of the training data. For example (only one of the panels is necessary): How would I predict into the future using a KNN regressor? Again, it appears to only approximate a function that lies within the interval of the training data. My question: What are the advantages of using a KNN regressor? I understand that it is a very powerful tool for classification, but it seems that it would perform poorly in a regression scenario.
Local methods like K-NN make sense in some situations. One example that I did in school work had to do with predicting the compressive strength of various mixtures of cement ingredients. All of these ingredients were relatively non-volatile with respect to the response or each other and KNN made reliable predictions on it. In other words none of the independent variables had disproportionately large variance to confer to the model either individually or possibly by mutual interaction. Take this with a grain of salt because I don't know of a data investigation technique that conclusively shows this but intuitively it seems reasonable that if your features have some proportionate degree of variances, I don't know what proportion, you might have a KNN candidate. I'd certainly like to know if there were some studies and resulting techniques developed to this effect. If you think about it from a generalized domain perspective there is a broad class of applications where similar 'recipes' yield similar outcomes. This certainly seemed to describe the situation of predicting outcomes of mixing cement. I would say if you had data that behaved according to this description and in addition your distance measure was also natural to the domain at hand and lastly that you had sufficient data, I would imagine that you should get useful results from KNN or another local method. You are also getting the benefit of extremely low bias when you use local methods. Sometimes generalized additive models (GAM) balance bias and variance by fitting each individual variable using KNN such that: $$\hat{y}=f_1(x_1) + f_2(x_2) + \dots + f_n(x_n) + \epsilon$$ The additive portion (the plus symbols) protect against high variance while the use of KNN in place of $f_n(x_n)$ protects against high bias. I wouldn't write off KNN so quickly. It has its place.
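As a concrete illustration of the extrapolation issue raised in the question, here is a minimal hand-rolled KNN regressor in R (a sketch of my own; in practice one would likely reach for a package, but this keeps the mechanics visible):

# k-nearest-neighbour regression in one dimension
knn_reg <- function(x_train, y_train, x_new, k = 3) {
  sapply(x_new, function(x0) {
    d  <- abs(x_train - x0)      # distances from the query point
    nn <- order(d)[1:k]          # indices of the k nearest neighbours
    mean(y_train[nn])            # average their responses
  })
}
set.seed(1)
x <- sort(runif(50, 0, 10))
y <- sin(x) + rnorm(50, sd = 0.2)
grid <- seq(0, 12, by = 0.1)     # part of the grid lies outside the training range
yhat <- knn_reg(x, y, grid, k = 5)
# beyond x = 10 the fit is flat at the mean of the last few neighbours,
# which is why KNN cannot extrapolate a trend into the future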
{ "source": [ "https://stats.stackexchange.com/questions/104255", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
104,500
It seems that data mining and machine learning became so popular that now almost every CS student knows about classifiers, clustering, statistical NLP ... etc. So it seems that finding data miners is not a hard thing nowadays. My question is: What are the skills that a data miner could learn that would make him different than the others? To make him a not-so-easy-to-find-someone-like-him kind of person.
What it's about Just knowing about techniques is akin to knowing the animals in a zoo -- you can name them, describe their properties, perhaps identify them in the wild. Understanding when to use them, formulating, building, testing, and deploying working mathematical models within an application area while avoiding the pitfalls --- these are the skills that distinguish, in my opinion. The emphasis should be on the science , applying a systematic, scientific approach to business, industrial, and commercial problems. But this requires skills broader than data mining & machine learning, as Robin Bloor argues persuasively in "A Data Science Rant" . So what can one do? Application areas : learn about various application areas close to your interest, or that of your employer. The area is often less important than understanding how the model was built and how it was used to add value to that area. Models that are successful in one area can often be transplanted and applied to different areas that work in similar ways. Competitions : try the data mining competition site Kaggle , preferably joining a team of others. (Kaggle: a platform for predictive modeling competitions. Companies, governments and researchers present datasets and problems and the world’s best data scientists compete to produce the best solutions.) Fundamentals : There are four: (1) solid grounding in statistics, (2) reasonably good programming skills, (3) understanding how to structure complex data queries, (4) building data models. If any are weak, then that's an important place to start. A few quotes in this respect: ``I learned very early the difference between knowing the name of something and knowing something. You can know the name of a bird in all the languages of the world, but when you're finished, you'll know absolutely nothing whatever about the bird... So let's look at the bird and see what it's doing -- that's what counts.'' -- Richard Feynman, "The Making of a Scientist", p14 in What Do You Care What Other People Think, 1988 Keep in mind: ``The combination of skills required to carry out these business science [data science] projects rarely reside in one person. Someone could indeed have attained extensive knowledge in the triple areas of (i) what the business does, (ii) how to use statistics, and (iii) how to manage data and data flows. If so, he or she could indeed claim to be a business scientist (a.k.a., “data scientist”) in a given sector. But such individuals are almost as rare as hen’s teeth.'' -- Robin Bloor, A Data Science Rant , Aug 2013, Inside Analysis And finally: ``The Map is Not the Territory.'' -- Alfred Korzybski, 1933, Science & Sanity. Most real, applied problems are not accessible solely from ``the map''. To do practical things with mathematical modelling one must be willing to get grubby with details, subtleties, and exceptions. Nothing can substitute for knowing the territory first-hand.
{ "source": [ "https://stats.stackexchange.com/questions/104500", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/27779/" ] }
104,713
To me, it seems that hold-out validation is useless. That is, splitting the original dataset into two parts (training and testing) and using the testing score as a generalization measure is somewhat useless. K-fold cross-validation seems to give better approximations of generalization (as it trains and tests on every point). So, why would we use the standard hold-out validation? Or even talk about it?
NOTE : This answer is old, incomplete, and thoroughly out of date. It was only debatably correct when it was posted in 2014, and I'm not really sure how it got so many upvotes or how it became the accepted answer. I recommend this answer instead, written by an expert in the field (and with significantly more upvotes). I am leaving my answer here for historical/archival purposes only. My only guess is that you can do hold-out validation with three hours of programming experience; the other takes a week in principle and six months in practice. In principle it's simple, but writing code is tedious and time-consuming. As Linus Torvalds famously said, "Bad programmers worry about the code. Good programmers worry about data structures and their relationships." Many of the people doing statistics are bad programmers, through no fault of their own. Doing k-fold cross validation efficiently (and by that I mean, in a way that isn't horribly frustrating to debug and use more than once) in R requires a vague understanding of data structures, but data structures are generally skipped in "intro to statistical programming" tutorials. It's like the old person using the Internet for the first time. It's really not hard, it just takes an extra half hour or so to figure out the first time, but it's brand new and that makes it confusing, so it's easy to ignore. You have questions like this: How to implement a hold-out validation in R . No offense intended, whatsoever, to the asker. But many people just are not code-literate. The fact that people are doing cross-validation at all is enough to make me happy. It sounds silly and trivial, but this comes from personal experience, having been that guy and having worked with many people who were that guy.
{ "source": [ "https://stats.stackexchange.com/questions/104713", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
104,988
I see that both functions are part of data mining methods such as Gradient Boosting Regressors. I see that those are separate objects too. What is the relationship between the two in general?
A decision function is a function which takes a dataset as input and gives a decision as output. What the decision can be depends on the problem at hand. Examples include: Estimation problems: the "decision" is the estimate. Hypothesis testing problems: the decision is to reject or not reject the null hypothesis. Classification problems: the decision is to classify a new observation (or observations) into a category. Model selection problems: the decision is to choose one of the candidate models. Typically, there are an infinite number of decision functions available for a problem. If we, for instance, are interested in estimating the height of Swedish males based on ten observations $\mathbf{x}=(x_1,x_2,\ldots,x_{10})$, we can use any of the following decision functions $d(\mathbf{x})$: The sample mean: $d(\mathbf{x})=\frac{1}{10}\sum_{i=1}^{10}x_i$. The median of the sample: $d(\mathbf{x})=\mbox{median}(\mathbf{x})$ The geometric mean of the sample: $d(\mathbf{x})=\sqrt[10]{x_1\cdots x_{10}}$ The function that always returns 1: $d(\mathbf{x})=1$, regardless of the value of $\mathbf{x}$. Silly, yes, but it is nevertheless a valid decision function. How then can we determine which of these decision functions to use? One way is to use a loss function , which describes the loss (or cost) associated with all possible decisions. Different decision functions will tend to lead to different types of mistakes. The loss function tells us which type of mistakes we should be more concerned about. The best decision function is the function that yields the lowest expected loss . What is meant by expected loss depends on the setting (in particular, whether we are talking about frequentist or Bayesian statistics). In summary: Decision functions are used to make decisions based on data. Loss functions are used to determine which decision function to use.
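To make the last point concrete, here is a small R sketch (the numbers are invented for illustration) comparing the expected squared-error loss of three of the decision functions listed above in a frequentist simulation:

set.seed(42)
true_mean <- 180    # hypothetical population mean height in cm
risk <- function(decision, n_sim = 10000) {
  losses <- replicate(n_sim, {
    x <- rnorm(10, mean = true_mean, sd = 7)  # a sample of ten observations
    (decision(x) - true_mean)^2               # squared-error loss of this decision
  })
  mean(losses)                                # estimated expected loss
}
risk(mean)             # the sample mean
risk(median)           # the sample median
risk(function(x) 1)    # the silly "always 1" decision function
# under squared-error loss the sample mean comes out best here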
{ "source": [ "https://stats.stackexchange.com/questions/104988", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/36447/" ] }
105,115
I cannot understand the usage of polynomial contrasts in regression fitting. In particular, I am referring to an encoding used by R in order to express an interval variable (ordinal variable with equally spaced levels), described at this page . In the example of that page , if I understood correctly, R fits a model for an interval variable, returning some coefficients which weight its linear, quadratic, or cubic trend. Hence, the fitted model should be: $${\rm write} = 52.7870 + 14.2587X - 0.9680X^2 - 0.1554X^3,$$ where $X$ should take values $1$ , $2$ , $3$ , or $4$ according to the different levels of the interval variable. Is this correct? And, if so, what is the purpose of polynomial contrasts?
Just to recap (and in case the OP hyperlinks fail in the future), we are looking at a dataset hsb2 as such: id female race ses schtyp prog read write math science socst 1 70 0 4 1 1 1 57 52 41 47 57 2 121 1 4 2 1 3 68 59 53 63 61 ... 199 118 1 4 2 1 1 55 62 58 58 61 200 137 1 4 3 1 2 63 65 65 53 61 which can be imported here . We turn the variable read into an ordered / ordinal variable: hsb2$readcat <- cut(hsb2$read, 4, ordered = TRUE) (means = tapply(hsb2$write, hsb2$readcat, mean)) (28,40] (40,52] (52,64] (64,76] 42.77273 49.97849 56.56364 61.83333 Now we are all set to just run a regular ANOVA - yes, it is R, and we basically have a continuous dependent variable, write , and an explanatory variable with multiple levels, readcat . In R we can use lm(write ~ readcat, hsb2) 1. Generating the contrast matrix: There are four different levels to the ordered variable readcat , so we'll have $n-1=3$ contrasts. table(hsb2$readcat) (28,40] (40,52] (52,64] (64,76] 22 93 55 30 First, let's go for the money, and take a look at the built-in R function: contr.poly(4) .L .Q .C [1,] -0.6708204 0.5 -0.2236068 [2,] -0.2236068 -0.5 0.6708204 [3,] 0.2236068 -0.5 -0.6708204 [4,] 0.6708204 0.5 0.2236068 Now let's dissect what went on under the hood: scores = 1:4 # 1 2 3 4 These are the four levels of the explanatory variable. y = scores - mean(scores) # scores - 2.5 $y = \small [-1.5, -0.5, 0.5, 1.5]$ $\small \text{seq_len(n) - 1} = [0, 1, 2, 3]$ n = 4; X <- outer(y, seq_len(n) - 1, "^") # n = 4 in this case $\small\begin{bmatrix} 1&-1.5&2.25&-3.375\\1&-0.5&0.25&-0.125\\1&0.5&0.25&0.125\\1&1.5&2.25&3.375 \end{bmatrix}$ What happened there? the outer(a, b, "^") raises the elements of a to the elements of b , so that the first column results from the operations $\small(-1.5)^0$ , $\small(-0.5)^0$ , $\small 0.5^0$ and $\small 1.5^0$ ; the second column from $\small(-1.5)^1$ , $\small(-0.5)^1$ , $\small0.5^1$ and $\small1.5^1$ ; the third from $\small(-1.5)^2=2.25$ , $\small(-0.5)^2 = 0.25$ , $\small0.5^2 = 0.25$ and $\small1.5^2 = 2.25$ ; and the fourth, $\small(-1.5)^3=-3.375$ , $\small(-0.5)^3=-0.125$ , $\small0.5^3=0.125$ and $\small1.5^3=3.375$ . Next we do a $QR$ orthonormal decomposition of this matrix and take the compact representation of Q ( c_Q = qr(X)$qr ). Some of the inner workings of the functions used for QR factorization in R in this post are further explained here . $\small\begin{bmatrix} -2&0&-2.5&0\\0.5&-2.236&0&-4.584\\0.5&0.447&2&0\\0.5&0.894&-0.9296&-1.342 \end{bmatrix}$ ... of which we save the diagonal only ( z = c_Q * (row(c_Q) == col(c_Q)) ). What lies in the diagonal: Just the "bottom" entries of the $\bf R$ part of the $QR$ decomposition. Just? well, no... It turns out that the diagonal of an upper triangular matrix contains the eigenvalues of the matrix! Next we call the following function: raw = qr.qy(qr(X), z) , the result of which can be replicated "manually" by two operations: 1. Turning the compact form of $Q$ , i.e. qr(X)$qr , into $Q$ , a transformation that can be achieved with Q = qr.Q(qr(X)) , and 2. Carrying out the matrix multiplication $Qz$ , as in Q %*% z .
Crucially, multiplying $\bf Q$ by the eigenvalues of $\bf R$ does not change the orthogonality of the constituent column vectors, but given that the absolute value of the eigenvalues appears in decreasing order from top left to bottom right, the multiplication of $Qz$ will tend to decrease the values in the higher order polynomial columns: Matrix of Eigenvalues of R [,1] [,2] [,3] [,4] [1,] -2 0.000000 0 0.000000 [2,] 0 -2.236068 0 0.000000 [3,] 0 0.000000 2 0.000000 [4,] 0 0.000000 0 -1.341641 Compare the values in the later column vectors (quadratic and cubic) before and after the $QR$ factorization operations, and to the unaffected first two columns. Before QR factorization operations (orthogonal col. vec.) [,1] [,2] [,3] [,4] [1,] 1 -1.5 2.25 -3.375 [2,] 1 -0.5 0.25 -0.125 [3,] 1 0.5 0.25 0.125 [4,] 1 1.5 2.25 3.375 After QR operations (equally orthogonal col. vec.) [,1] [,2] [,3] [,4] [1,] 1 -1.5 1 -0.295 [2,] 1 -0.5 -1 0.885 [3,] 1 0.5 -1 -0.885 [4,] 1 1.5 1 0.295 Finally we call (Z <- sweep(raw, 2L, apply(raw, 2L, function(x) sqrt(sum(x^2))), "/", check.margin = FALSE)) turning the matrix raw into an orthonormal vectors: Orthonormal vectors (orthonormal basis of R^4) [,1] [,2] [,3] [,4] [1,] 0.5 -0.6708204 0.5 -0.2236068 [2,] 0.5 -0.2236068 -0.5 0.6708204 [3,] 0.5 0.2236068 -0.5 -0.6708204 [4,] 0.5 0.6708204 0.5 0.2236068 This function simply "normalizes" the matrix by dividing ( "/" ) columnwise each element by the $\small\sqrt{\sum_\text{col.} x_i^2}$ . So it can be decomposed in two steps: $(\text{i})$ apply(raw, 2, function(x)sqrt(sum(x^2))) , resulting in 2 2.236 2 1.341 , which are the denominators for each column in $(\text{ii})$ where every element in a column is divided by the corresponding value of $(\text{i})$ . At this point the column vectors form an orthonormal basis of $\mathbb{R}^4$ , until we get rid of the first column, which will be the intercept, and we have reproduced the result of contr.poly(4) : $\small\begin{bmatrix} -0.6708204&0.5&-0.2236068\\-0.2236068&-0.5&0.6708204\\0.2236068&-0.5&-0.6708204\\0.6708204&0.5&0.2236068 \end{bmatrix}$ The columns of this matrix are orthonormal , as can be shown by (sum(Z[,3]^2))^(1/4) = 1 and z[,3]%*%z[,4] = 0 , for example (incidentally the same goes for rows). And, each column is the result of raising the initial $\text{scores - mean}$ to the $1$ -st, $2$ -nd and $3$ -rd power, respectively - i.e. linear, quadratic and cubic . 2. Which contrasts (columns) contribute significantly to explain the differences between levels in the explanatory variable? We can just run the ANOVA and look at the summary... summary(lm(write ~ readcat, hsb2)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 52.7870 0.6339 83.268 <2e-16 *** readcat.L 14.2587 1.4841 9.607 <2e-16 *** readcat.Q -0.9680 1.2679 -0.764 0.446 readcat.C -0.1554 1.0062 -0.154 0.877 ... to see that there is a linear effect of readcat on write , so that the original values (in the third chunk of code in the beginning of the post) can be reproduced as: coeff = coefficients(lm(write ~ readcat, hsb2)) C = contr.poly(4) (recovered = c(coeff %*% c(1, C[1,]), coeff %*% c(1, C[2,]), coeff %*% c(1, C[3,]), coeff %*% c(1, C[4,]))) [1] 42.77273 49.97849 56.56364 61.83333 ... or... ... or much better... Being orthogonal contrasts the sum of their components adds to zero $\displaystyle \sum_{i=1}^t a_i = 0$ for $a_1,\cdots,a_t$ constants, and the dot product of any two of them is zero. 
If we could visualize them they would look something like this: The idea behind orthogonal contrasts is that the inferences that we can extract (in this case generating coefficients via a linear regression) will be the result of independent aspects of the data. This would not be the case if we simply used $X^0, X^1, \cdots, X^n$ as contrasts. Graphically, this is much easier to understand. Compare the actual means by groups in large square black blocks to the predicted values, and see why a straight line approximation with minimal contribution of quadratic and cubic polynomials (with curves only approximated with loess) is optimal: If, just for effect, the ANOVA coefficients for the other approximations (quadratic and cubic) had been as large as for the linear contrast, the nonsensical plot that follows would depict more clearly the polynomial plots of each "contribution": The code is here .
{ "source": [ "https://stats.stackexchange.com/questions/105115", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/18086/" ] }
105,122
I am trying to work out is there is any association between occurrence or not of an event in around 500 individuals and their randomised binary grouping (intervention vs placebo). However I also want to see if this holds true when broken down by one of 5 locations in which the individuals were treated. A chi square test on the entire group shows a significant result. However if I repeat the test on each of the 5 subgroups only one subgroup is significant (<0.05) and all of the others are non-signficant. Further still I have then corrected for multiple testing by multiplying the p-values by 6 (5 tests for each centre and then 1 test for entire group). This does not affect the subgroups but does affect the overall group which is no longer significant after multiple testing correction. I know this issue is controversial. Had I just stuck with the original hypothesis and performed one chi square test I would be saying it is significant but once I do subgroup testing I am then having to say this is not the case. Does anyone have any suggestions on how to approach this problem?
{ "source": [ "https://stats.stackexchange.com/questions/105122", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/49166/" ] }
105,602
I have read about the log-sum-exp trick in many places (e.g. here , and here ) but have never seen an example of how it is applied specifically to the Naive Bayes classifier (e.g. with discrete features and two classes). How exactly would one avoid the problem of numerical underflow using this trick?
In $$ p(Y=C|\mathbf{x}) = \frac{p(\mathbf{x}|Y=C)p(Y=C)}{~\sum_{k=1}^{|C|}{}p(\mathbf{x}|Y=C_k)p(Y=C_k)} $$ both the denominator and the numerator can become very small, typically because the $p(x_i \vert C_k)$ can be close to 0 and we multiply many of them with each other. To prevent underflows, one can simply take the log of the numerator, but one needs to use the log-sum-exp trick for the denominator. More specifically, in order to prevent underflows: If we only care about knowing which class $(\hat{y})$ the input $(\mathbf{x}=x_1, \dots, x_n)$ most likely belongs to with the maximum a posteriori (MAP) decision rule, we don't have to apply the log-sum-exp trick, since we don't have to compute the denominator in that case. For the numerator one can simply take the log to prevent underflows: $log \left( p(\mathbf{x}|Y=C)p(Y=C) \right) $. More specifically: $$\hat{y} = \underset{k \in \{1, \dots, |C|\}}{\operatorname{argmax}}p(C_k \vert x_1, \dots, x_n) = \underset{k \in \{1, \dots, |C|\}}{\operatorname{argmax}} \ p(C_k) \displaystyle\prod_{i=1}^n p(x_i \vert C_k)$$ which becomes after taking the log: $$ \begin{align} \hat{y} &= \underset{k \in \{1, \dots, |C|\}}{\operatorname{argmax}} \log \left( p(C_k \vert x_1, \dots, x_n) \right)\\ &= \underset{k \in \{1, \dots, |C|\}}{\operatorname{argmax}} \log \left( \ p(C_k) \displaystyle\prod_{i=1}^n p(x_i \vert C_k) \right) \\ &= \underset{k \in \{1, \dots, |C|\}}{\operatorname{argmax}} \left( \log \left( p(C_k) \right) + \ \displaystyle\sum_{i=1}^n \log \left(p(x_i \vert C_k) \right) \right) \end{align}$$ If we want to compute the class probability $p(Y=C|\mathbf{x})$, we will need to compute the denominator: $$ \begin{align} \log \left( p(Y=C|\mathbf{x}) \right) &= \log \left( \frac{p(\mathbf{x}|Y=C)p(Y=C)}{~\sum_{k=1}^{|C|}{}p(\mathbf{x}|Y=C_k)p(Y=C_k)} \right)\\ &= \log \left( \underbrace{p(\mathbf{x}|Y=C)p(Y=C)}_{\text{numerator}} \right) - \log \left( \underbrace{~\sum_{k=1}^{|C|}{}p(\mathbf{x}|Y=C_k)p(Y=C_k)}_{\text{denominator}} \right)\\ \end{align} $$ The element $\log \left( ~\sum_{k=1}^{|C|}{}p(\mathbf{x}|Y=C_k)p(Y=C_k) \right)\\ $ may underflow because $p(x_i \vert C_k)$ can be very small: it is the same issue as in the numerator, but this time we have a summation inside the logarithm, which prevents us from transforming the $p(x_i \vert C_k)$ (can be close to 0) into $\log \left(p(x_i \vert C_k) \right)$ (negative and not close to 0 anymore, since $0 \leq p(x_i \vert C_k) \leq 1$). To circumvent this issue, we can use the fact that $p(x_i \vert C_k) = \exp \left( {\log \left(p(x_i \vert C_k) \right)} \right)$ to obtain: $$\log \left( ~\sum_{k=1}^{|C|}{}p(\mathbf{x}|Y=C_k)p(Y=C_k) \right) =\log \left( ~\sum_{k=1}^{|C|}{} \exp \left( \log \left( p(\mathbf{x}|Y=C_k)p(Y=C_k) \right) \right) \right)$$ At that point, a new issue arises: $\log \left( p(\mathbf{x}|Y=C_k)p(Y=C_k) \right)$ may be quite negative, which implies that $ \exp \left( \log \left( p(\mathbf{x}|Y=C_k)p(Y=C_k) \right) \right) $ may become very close to 0, i.e. underflow. This is where we use the log-sum-exp trick : $$\log \sum_k e^{a_k} = \log \sum_k e^{a_k}e^{A-A} = A+ \log\sum_k e^{a_k -A}$$ with: $a_k=\log \left( p(\mathbf{x}|Y=C_k)p(Y=C_k) \right)$, $A = \underset{k \in \{1, \dots, |C|\}} \max a_k.$ We can see that introducing the variable $A$ avoids underflows. E.g. 
with $k=2, a_1 = - 245, a_2 = - 255$, we have: $\exp \left(a_1\right) = \exp \left(- 245\right) =3.96143\times 10^{- 107}$ $\exp \left(a_2\right) = \exp \left(- 255\right) =1.798486 \times 10^{-111}$ Using the log-sum-exp trick we avoid the underflow, with $A=\max ( -245, -255 )=-245$: $\begin{align}\log \sum_k e^{a_k} &= \log \sum_k e^{a_k}e^{A-A} \\&= A+ \log\sum_k e^{a_k -A}\\ &= -245+ \log\sum_k e^{a_k +245}\\&= -245+ \log \left(e^{-245 +245}+e^{-255 +245}\right) \\&=-245+ \log \left(e^{0}+e^{-10}\right) \end{align}$ We avoided the underflow since $e^{-10}$ is much farther away from 0 than $3.96143\times 10^{- 107}$ or $1.798486 \times 10^{-111}$.
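For completeness, the trick is only a few lines of code. Here is a sketch in R (the second example with $a=(-1000,-1010)$ is my own, chosen so that the naive computation actually underflows in double precision):

log_sum_exp <- function(a) {
  A <- max(a)
  A + log(sum(exp(a - A)))        # log(sum(exp(a))), computed stably
}
log_sum_exp(c(-245, -255))        # about -244.99995, matching the example above
log(sum(exp(c(-1000, -1010))))    # naive version: exp() underflows to 0, giving -Inf
log_sum_exp(c(-1000, -1010))      # stable version: about -999.99995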
{ "source": [ "https://stats.stackexchange.com/questions/105602", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/27838/" ] }
105,745
The maximum of $X_1,\dots,X_n \sim$ i.i.d. standard normals converges to the standard Gumbel distribution according to Extreme Value Theory . How can we show that? We have $$P(\max X_i \leq x) = P(X_1 \leq x, \dots, X_n \leq x) = P(X_1 \leq x) \cdots P(X_n \leq x) = F(x)^n $$ We need to find/choose sequences of constants $a_n>0,b_n\in\mathbb{R}$ such that: $$F\left(a_n x+b_n\right)^n\xrightarrow{n\rightarrow\infty} G(x) = e^{-\exp(-x)}$$ Can you solve it or find it in the literature? There are some examples pg.6/71 , but not for the Normal case: $$\Phi\left(a_n x+b_n\right)^n=\left(\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{a_n x+b_n} e^{-\frac{y^2}{2}}dy\right)^n\rightarrow e^{-\exp(-x)}$$
An indirect way, is as follows: For absolutely continuous distributions, Richard von Mises (in a 1936 paper "La distribution de la plus grande de n valeurs" , which appears to have been reproduced -in English?- in a 1964 edition with selected papers of his), has provided the following sufficient condition for the maximum of a sample to converge to the standard Gumbel, $G(x)$: Let $F(x)$ be the common distribution function of $n$ i.i.d. random variables, and $f(x)$ their common density. Then, if $$\lim_{x\rightarrow F^{-1}(1)}\left (\frac d{dx}\frac {(1-F(x))}{f(x)}\right) =0 \Rightarrow X_{(n)} \xrightarrow{d} G(x)$$ Using the usual notation for the standard normal and calculating the derivative, we have $$\frac d{dx}\frac {(1-\Phi(x))}{\phi(x)} = \frac {-\phi(x)^2-\phi'(x)(1-\Phi(x))}{\phi(x)^2} = \frac {-\phi'(x)}{\phi(x)}\frac {(1-\Phi(x))}{\phi(x)}-1$$ Note that $\frac {-\phi'(x)}{\phi(x)} =x$. Also, for the normal distribution, $F^{-1}(1) = \infty$. So we have to evaluate the limit $$\lim_{x\rightarrow \infty}\left (x\frac {(1-\Phi(x))}{\phi(x)}-1\right) $$ But $\frac {(1-\Phi(x))}{\phi(x)}$ is Mill's ratio, and we know that the Mill's ratio for the standard normal tends to $1/x$ as $x$ grows. So $$\lim_{x\rightarrow \infty}\left (x\frac {(1-\Phi(x))}{\phi(x)}-1\right) = x\frac {1}{x}-1= 0$$ and the sufficient condition is satisfied. The associated series are given as $$a_n = \frac 1{n\phi(b_n)},\;\;\; b_n = \Phi^{-1}(1-1/n)$$ ADDENDUM This is from ch. 10.5 of the book H.A. David & H.N. Nagaraja (2003), "Order Statistics" (3d edition) . $\xi_a = F^{-1}(a)$. Also, the reference to de Haan is "Haan, L. D. (1976). Sample extremes: an elementary introduction. Statistica Neerlandica, 30(4), 161-172. " But beware because some of the notation has different content in de Haan -for example in the book $f(t)$ is the probability density function, while in de Haan $f(t)$ means the function $w(t)$ of the book (i.e. Mill's ratio). Also, de Haan examines the sufficient condition already differentiated.
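A quick simulation check of those normalizing constants (my own sketch; note that convergence for normal maxima is notoriously slow, so the agreement is only approximate for moderate $n$):

set.seed(1)
n   <- 1000
b_n <- qnorm(1 - 1/n)                 # b_n = Phi^{-1}(1 - 1/n)
a_n <- 1 / (n * dnorm(b_n))           # a_n = 1 / (n * phi(b_n))
m <- replicate(20000, max(rnorm(n)))  # sample maxima
z <- (m - b_n) / a_n                  # centred and scaled maxima
x <- c(-1, 0, 1, 2, 3)
rbind(empirical = ecdf(z)(x),         # empirical CDF of the normalised maxima
      gumbel    = exp(-exp(-x)))      # standard Gumbel CDF G(x)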
{ "source": [ "https://stats.stackexchange.com/questions/105745", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/48639/" ] }
106,121
Assume I have a dataset for a supervised statistical classification task, e.g., via a Bayes' classifier. This dataset consists of 20 features and I want to boil it down to 2 features via dimensionality reduction techniques such as Principal Component Analysis (PCA) and/or Linear Discriminant Analysis (LDA). Both techniques are projecting the data onto a smaller feature subspace: with PCA, I would find the directions (components) that maximize the variance in the dataset (without considering the class labels), and with LDA I would have the components that maximize the between-class separation. Now, I am wondering if, how, and why these techniques can be combined and if it makes sense. For example: transforming the dataset via PCA and projecting it onto a new 2D subspace transforming (the already PCA-transformed) dataset via LDA for max. in-class separation or skipping the PCA step and using the top 2 components from a LDA. or any other combination that makes sense.
Summary: PCA can be performed before LDA to regularize the problem and avoid over-fitting. Recall that LDA projections are computed via eigendecomposition of $\boldsymbol \Sigma_W^{-1} \boldsymbol \Sigma_B$, where $\boldsymbol \Sigma_W$ and $\boldsymbol \Sigma_B$ are within- and between-class covariance matrices. If there are less than $N$ data points (where $N$ is the dimensionality of your space, i.e. the number of features/variables), then $\boldsymbol \Sigma_W$ will be singular and therefore cannot be inverted. In this case there is simply no way to perform LDA directly, but if one applies PCA first, it will work. @Aaron made this remark in the comments to his reply, and I agree with that (but disagree with his answer in general, as you will see now). However, this is only part of the problem. The bigger picture is that LDA very easily tends to overfit the data. Note that within-class covariance matrix gets inverted in the LDA computations; for high-dimensional matrices inversion is a really sensitive operation that can only be reliably done if the estimate of $\boldsymbol \Sigma_W$ is really good. But in high dimensions $N \gg 1$, it is really difficult to obtain a precise estimate of $\boldsymbol \Sigma_W$, and in practice one often has to have a lot more than $N$ data points to start hoping that the estimate is good. Otherwise $\boldsymbol \Sigma_W$ will be almost-singular (i.e. some of the eigenvalues will be very low), and this will cause over-fitting, i.e. near-perfect class separation on the training data with chance performance on the test data. To tackle this issue, one needs to regularize the problem. One way to do it is to use PCA to reduce dimensionality first. There are other, arguably better ones, e.g. regularized LDA (rLDA) method which simply uses $(1-\lambda)\boldsymbol \Sigma_W + \lambda \boldsymbol I$ with small $\lambda$ instead of $\boldsymbol \Sigma_W$ (this is called shrinkage estimator ), but doing PCA first is conceptually the simplest approach and often works just fine. Illustration Here is an illustration of the over-fitting problem. I generated 60 samples per class in 3 classes from standard Gaussian distribution (mean zero, unit variance) in 10-, 50-, 100-, and 150-dimensional spaces, and applied LDA to project the data on 2D: Note how as the dimensionality grows, classes become better and better separated, whereas in reality there is no difference between the classes. We can see how PCA helps to prevent the overfitting if we make classes slightly separated. I added 1 to the first coordinate of the first class, 2 to the first coordinate of the second class, and 3 to the first coordinate of the third class. Now they are slightly separated, see top left subplot: Overfitting (top row) is still obvious. But if I pre-process the data with PCA, always keeping 10 dimensions (bottom row), overfitting disappears while the classes remain near-optimally separated. PS. To prevent misunderstandings: I am not claiming that PCA+LDA is a good regularization strategy (on the contrary, I would advice to use rLDA), I am simply demonstrating that it is a possible strategy. Update. Very similar topic has been previously discussed in the following threads with interesting and comprehensive answers provided by @cbeleites: Should PCA be performed before I do classification? Does it make sense to run LDA on several principal components and not on all variables? See also this question with some good answers: What can cause PCA to worsen results of a classifier?
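For reference, the PCA-then-LDA pipeline itself is only a couple of lines in R (a sketch assuming the MASS package and a data matrix X with class labels y; the simulated data and the choice of 10 components are my own, and as noted above rLDA would often be preferable):

library(MASS)
set.seed(1)
n <- 180; p <- 150
X <- matrix(rnorm(n * p), n, p)
y <- factor(rep(1:3, each = n / 3))
X[, 1] <- X[, 1] + as.numeric(y)   # classes differ slightly in the first coordinate
Z    <- prcomp(X)$x[, 1:10]        # step 1: keep the first 10 principal components
fit  <- lda(Z, grouping = y)       # step 2: LDA on the reduced data
proj <- predict(fit)$x             # discriminant scores, e.g. for a 2D plot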
{ "source": [ "https://stats.stackexchange.com/questions/106121", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
106,129
While I was preparing an internet survey, I read from multiple sources which say it is more important to have a high response rate than a large sample size. Q1. I don't quite understand the logic behind this. Can you explain? If I can afford to survey the entire population, would that be better than a random sub-sample? Also, the importance of random sampling is often stressed. Q2. But what if I just invite the entire population of interest to the survey (because I can... using the internet). Wouldn't this be better than random sampling (even though the response rate might be lower because N is larger?) I understand the answer to the questions above might depend on the objective of the study. If this is the case please delineate this for me. Thank you! Update After seeing several responses, I guess I have asked a misleading question. I am not comparing response rate and sample size as in a contest of which is more important. I am trying to find out: if it costs me nothing more, relative to random sub-sampling, to survey the entire population of interest, is there any reason why I shouldn't do that and should instead stick with a random sub-sample?
{ "source": [ "https://stats.stackexchange.com/questions/106129", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/28309/" ] }
106,334
The cost function of a neural network is $J(W,b)$, and it is claimed to be non-convex. I don't quite understand why that is, since as far as I can see it's quite similar to the cost function of logistic regression, right? If it is non-convex, then the 2nd-order derivative $\frac{\partial^2 J}{\partial W^2} < 0$, right? UPDATE Thanks to the answers below as well as @gung's comment, I got your point: if there are no hidden layers at all, it's convex, just like logistic regression. But if there are hidden layers, by permuting the nodes in the hidden layers as well as the weights in subsequent connections, we could have multiple solutions of the weights resulting in the same loss. Now more questions: 1) There are multiple local minima, and some of them should be of the same value, since they correspond to some permutations of nodes and weights, right? 2) If the nodes and weights won't be permuted at all, then it's convex, right? And the minima will be the global minima. If so, the answer to 1) is that all those local minima will be of the same value, correct?
The cost function of a neural network is in general neither convex nor concave. This means that the matrix of all second partial derivatives (the Hessian) is neither positive semidefinite nor negative semidefinite. Since the second derivative is a matrix, it's possible that it's neither one nor the other. To make this analogous to one-variable functions, one could say that the cost function is neither shaped like the graph of $x^2$ nor like the graph of $-x^2$. Another example of a non-convex, non-concave function is $\sin(x)$ on $\mathbb{R}$. One of the most striking differences is that $\pm x^2$ has only one extremum, whereas $\sin$ has infinitely many maxima and minima. How does this relate to our neural network? A cost function $J(W,b)$ also has a number of local maxima and minima, as you can see in this picture, for example. The fact that $J$ has multiple minima can also be interpreted in a nice way. In each layer, you use multiple nodes which are assigned different parameters to make the cost function small. Except for the values of the parameters, these nodes are the same. So you could exchange the parameters of the first node in one layer with those of the second node in the same layer, and account for this change in the subsequent layers. You'd end up with a different set of parameters, but the value of the cost function would be exactly the same (basically you just moved a node to another place, but kept all the inputs/outputs the same).
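A tiny numerical check of this permutation symmetry (an illustrative sketch only; the one-hidden-layer network, the data and the squared-error loss are all made up for the example):

# permuting the hidden units (and the corresponding weights) leaves the loss unchanged
set.seed(1)
n <- 50; d <- 3; h <- 4
X  <- matrix(rnorm(n * d), n, d)
y  <- rnorm(n)
W1 <- matrix(rnorm(d * h), d, h); b1 <- rnorm(h)   # input  -> hidden
W2 <- matrix(rnorm(h), h, 1);     b2 <- rnorm(1)   # hidden -> output

loss <- function(W1, b1, W2, b2) {
  H    <- tanh(sweep(X %*% W1, 2, b1, "+"))        # hidden activations
  yhat <- H %*% W2 + b2
  mean((y - yhat)^2)
}

p <- sample(h)                                     # a random permutation of the hidden units
loss(W1, b1, W2, b2)
loss(W1[, p], b1[p], W2[p, , drop = FALSE], b2)    # same loss, different parameter vector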
{ "source": [ "https://stats.stackexchange.com/questions/106334", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/30540/" ] }
107,563
Do the pdf and the pmf and the cdf contain the same information? For me, the pdf gives the whole probability up to a certain point (basically the area under the probability). The pmf gives the probability of a certain point. The cdf gives the probability under a certain point. So to me the pdf and cdf have the same information, but the pmf does not, because it gives the probability for a point x on the distribution.
Where a distinction is made between probability function and density*, the pmf applies only to discrete random variables, while the pdf applies to continuous random variables. * formal approaches can encompass both and use a single term for them The cdf applies to any random variables, including ones that have neither a pdf nor pmf (such as a mixed distribution - for example, consider the amount of rain in a day, or the amount of money paid in claims on a property insurance policy, either of which might be modelled by a zero-inflated continuous distribution). The cdf for a random variable $X$ gives $P(X\leq x)$ The pmf for a discrete random variable $X$ , gives $P(X=x)$ . The pdf doesn't itself give probabilities , but relative probabilities; continuous distributions don't have point probabilities. To get probabilities from pdfs you need to integrate over some interval - or take a difference of two cdf values. It's difficult to answer the question 'do they contain the same information' because it depends on what you mean. You can go from pdf to cdf (via integration), and from pmf to cdf (via summation), and from cdf to pdf (via differentiation) and from cdf to pmf (via differencing), so when you have a pmf or a pdf, it contains the same information as the cdf (but in a sense 'encoded' in a different way).
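A short R sketch of the differencing/differentiating relationships mentioned above (the Poisson and normal examples are arbitrary choices, purely for illustration):

# discrete case: the pmf is a difference of cdf values
k <- 0:5
all.equal(dpois(k, lambda = 2), ppois(k, 2) - ppois(k - 1, 2))   # TRUE

# continuous case: the pdf is (numerically) the derivative of the cdf
x0 <- 1.3; h <- 1e-6
(pnorm(x0 + h) - pnorm(x0 - h)) / (2 * h)   # approximately equal to ...
dnorm(x0)

# probabilities for a continuous variable come from differences of the cdf
pnorm(1) - pnorm(0)                          # P(0 < X <= 1) for a standard normal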
{ "source": [ "https://stats.stackexchange.com/questions/107563", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/37155/" ] }
107,610
I once heard that the log transformation is the most popular one for right-skewed distributions in linear regression or quantile regression. I would like to know whether there is any reason underlying this statement. Why is the log transformation suitable for a right-skewed distribution? How about a left-skewed distribution?
Economists (like me) love the log transformation. We especially love it in regression models, like this: \begin{align} \ln{Y_i} &= \beta_1 + \beta_2 \ln{X_i} + \epsilon_i \end{align} Why do we love it so much? Here is the list of reasons I give students when I lecture on it: It respects the positivity of $Y$. Many times in real-world applications in economics and elsewhere, $Y$ is, by nature, a positive number. It might be a price, a tax rate, a quantity produced, a cost of production, spending on some category of goods, etc. The predicted values from an untransformed linear regression may be negative. The predicted values from a log-transformed regression can never be negative. They are $\widehat{Y}_j=\exp{\left(\beta_1 + \beta_2 \ln{X_j}\right)} \cdot \frac{1}{N} \sum \exp{\left(e_i\right)}$ (See an earlier answer of mine for derivation). The log-log functional form is surprisingly flexible. Notice: \begin{align} \ln{Y_i} &= \beta_1 + \beta_2 \ln{X_i} + \epsilon_i \\ Y_i &= \exp{\left(\beta_1 + \beta_2 \ln{X_i}\right)}\cdot\exp{\left(\epsilon_i\right)}\\ Y_i &= \left(X_i\right)^{\beta_2}\exp{\left(\beta_1\right)}\cdot\exp{\left(\epsilon_i\right)}\\ \end{align} Which gives us: That's a lot of different shapes. A line (whose slope would be determined by $\exp{\left(\beta_1\right)}$, so which can have any positive slope), a hyperbola, a parabola, and a "square-root-like" shape. I've drawn it with $\beta_1=0$ and $\epsilon=0$, but in a real application neither of these would be true, so that the slope and the height of the curves at $X=1$ would be controlled by those rather than set at 1. As TrynnaDoStat mentions, the log-log form "draws in" big values which often makes the data easier to look at and sometimes normalizes the variance across observations. The coefficient $\beta_2$ is interpreted as an elasticity. It is the percentage increase in $Y$ from a one percent increase in $X$. If $X$ is a dummy variable, you include it without logging it. In this case, $\beta_2$ is the percent difference in $Y$ between the $X=1$ category and the $X=0$ category. If $X$ is time, again you include it without logging it, typically. In this case, $\beta_2$ is the growth rate in $Y$---measured in whatever time units $X$ is measured in. If $X$ is years, then the coefficient is annual growth rate in $Y$, for example. The slope coefficient, $\beta_2$, becomes scale-invariant. This means, on the one hand, that it has no units, and, on the other hand, that if you re-scale (i.e. change the units of) $X$ or $Y$, it will have absolutely no effect on the estimated value of $\beta_2$. Well, at least with OLS and other related estimators. If your data are log-normally distributed, then the log transformation makes them normally distributed. Normally distributed data have lots going for them. Statisticians generally find economists over-enthusiastic about this particular transformation of the data. This, I think, is because they judge my point 8 and the second half of my point 3 to be very important. Thus, in cases where the data are not log-normally distributed or where logging the data does not result in the transformed data having equal variance across observations, a statistician will tend not to like the transformation very much. The economist is likely to plunge ahead anyway since what we really like about the transformation are points 1,2,and 4-7.
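As a quick check of the elasticity interpretation in point 4, here is a small R sketch (the data are simulated so the true elasticity is known; all the numbers are arbitrary):

# simulate Y = exp(1.5) * X^0.8 * exp(eps), so the true elasticity is 0.8
set.seed(123)
n <- 500
X <- exp(rnorm(n))                              # a positive regressor
Y <- exp(1.5) * X^0.8 * exp(rnorm(n, sd = 0.3))

fit <- lm(log(Y) ~ log(X))
coef(fit)                                       # slope should be close to 0.8
# reading: a 1% increase in X is associated with roughly a 0.8% increase in Y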
{ "source": [ "https://stats.stackexchange.com/questions/107610", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3269/" ] }
107,643
For a linear regression model obtained by the R function lm , I would like to know if it is possible to obtain the Mean Squared Error by a command. I had the following output of an example: > lm <- lm(MuscleMAss~Age,data) > sm<-summary(lm) > sm Call: lm(formula = MuscleMAss ~ Age, data = data) Residuals: Min 1Q Median 3Q Max -16.1368 -6.1968 -0.5969 6.7607 23.4731 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 156.3466 5.5123 28.36 <2e-16 *** Age -1.1900 0.0902 -13.19 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 8.173 on 58 degrees of freedom Multiple R-squared: 0.7501, Adjusted R-squared: 0.7458 F-statistic: 174.1 on 1 and 58 DF, p-value: < 2.2e-16 Is Multiple R-squared the same as sum of square error? If the answer is no, could someone explain the meaning of Multiple R-squared?
The multiple R-squared that R reports is the coefficient of determination , which is given by the formula $$ R^2 = 1 - \frac{SS_{\text{res}}}{SS_{\text{tot}}}.$$ The sum of squared errors is given (thanks to a previous answer ) by sum(sm$residuals^2) . The mean squared error is given by mean(sm$residuals^2) . You could write a function to calculate this, e.g.: mse <- function(sm) mean(sm$residuals^2)
{ "source": [ "https://stats.stackexchange.com/questions/107643", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/37141/" ] }
108,007
I have a data frame with many observations and many variables. Some of them are categorical (unordered) and the others are numerical. I'm looking for associations between these variables. I've been able to compute correlation for the numerical variables (Spearman's correlation), but: I don't know how to measure correlation between unordered categorical variables. I don't know how to measure correlation between unordered categorical variables and numerical variables. Does anyone know how this could be done? If so, are there R functions implementing these methods?
It depends on what sense of a correlation you want. When you run the prototypical Pearson's product moment correlation, you get a measure of the strength of association and you get a test of the significance of that association. More typically however, the significance test and the measure of effect size differ. Significance tests: Continuous vs. Nominal: run an ANOVA . In R, you can use ?aov . Nominal vs. Nominal: run a chi-squared test . In R, you use ?chisq.test . Effect size (strength of association): Continuous vs. Nominal: calculate the intraclass correlation . In R, you can use ?ICC in the psych package; there is also an ICC package. Nominal vs. Nominal: calculate Cramer's V . In R, you can use ?assocstats in the vcd package.
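A rough R sketch of the tests and effect-size measures listed above (simulated data; assumes the vcd package is installed; the psych ICC step is omitted because it expects data in a rater-by-target layout):

set.seed(1)
g1  <- factor(sample(c("a", "b", "c"), 200, replace = TRUE))   # nominal
g2  <- factor(sample(c("x", "y"),      200, replace = TRUE))   # nominal
num <- rnorm(200) + as.integer(g1)                             # numeric, related to g1

# continuous vs. nominal: ANOVA
summary(aov(num ~ g1))

# nominal vs. nominal: chi-squared test, then Cramer's V as the effect size
tab <- table(g1, g2)
chisq.test(tab)
library(vcd)
assocstats(tab)    # the output includes Cramer's V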
{ "source": [ "https://stats.stackexchange.com/questions/108007", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/52117/" ] }
108,010
I have a series of physicians' claims submissions. I would like to perform cluster analysis as an exploratory tool to find patterns in how physicians bill based on things like Revenue Codes, Procedure Codes, etc. The data are all polytomous, and from my basic understanding, a latent class algorithm is appropriate for this kind of data. I am trying my hand at some of R's cluster packages, & specifically poLCA & mclust for this analysis. I'm getting alerts after running a test model on a sample of the data using poLCA . > library(poLCA) > # Example data structure - actual test data has 200 rows: > df <- structure(list(RevCd = c(274L, 320L, 320L, 450L, 450L, 450L, 636L, 636L, 636L, 450L, 450L, 450L, 301L, 305L, 450L, 450L, 352L, 301L, 300L, 636L, 301L, 450L, 636L, 636L, 307L, 450L, 300L, 300L, 301L, 301L), PlaceofSvc = c(23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L), TypOfSvc = c(51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L), FundType = c(3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), ProcCd2 = c(1747L, 656L, 656L, 1375L, 1376L, 1439L, 1623L, 1645L, 1662L, 176L, 1374L, 1376L, 958L, 1032L, 1368L, 1374L, 707L, 960L, 347L, 1662L, 859L, 1375L, 1654L, 1783L, 882L, 1440L, 332L, 332L, 946L, 946L)), .Names = c("RevCd", "PlaceofSvc", "TypOfSvc", "FundType", "ProcCd2"), row.names = c(1137L, 1138L, 1139L, 1140L, 1141L, 1142L, 1143L, 1144L, 1145L, 1146L, 1147L, 1945L, 1946L, 1947L, 1948L, 1949L, 1950L, 1951L, 1952L, 1953L, 1954L, 1955L, 1956L, 1957L, 1958L, 1959L, 2265L, 2266L, 2267L, 2268L), class = "data.frame") > clust <- poLCA(cbind(RevCd, PlaceofSvc, TypOfSvc, FundType, ProcCd2)~1, df, nclass = 3) ========================================================= Fit for 3 latent classes: ========================================================= number of observations: 200 number of estimated parameters: 7769 residual degrees of freedom: -7569 maximum log-likelihood: -1060.778 AIC(3): 17659.56 BIC(3): 43284.18 G^2(3): 559.9219 (Likelihood ratio/deviance statistic) X^2(3): 33852.85 (Chi-square goodness of fit) ALERT: number of parameters estimated ( 7769 ) exceeds number of observations ( 200 ) ALERT: negative degrees of freedom; respecify model My novice assumption is that I need to run a greater number of iterations before I can get results that are robust? e.g. "...it is essential to run poLCA multiple times until you can be reasonably certain that you have found the parameter estimates that produce the global maximum likelihood solution." ( http://www.sscnet.ucla.edu/polisci/faculty/lewis/pdf/poLCA-JSS-final.pdf ). Alternatively, perhaps certain variables, particularly CPT & Revenue Codes, have too many unique values, and that I need to aggregate these variables into higher level categories to reduce the number of parameters? When I run the model using package mclust , which optimizes the model based on BIC, I don't get any such alert. > library(mclust) > clustBIC <- mclustBIC(df) > summary(clustBIC, data = df) classification table: 1 2 141 59 best BIC values: VEV,2 VEV,3 EEV,3 -4562.286 -4706.190 -5655.783 If anyone can shed a bit of light on the above alerts, it would be much appreciated. 
I was also planning on using the script found in the poLCA documentation to run multiple iterations of the model until the log-likelihood is maximized. However, it's computationally intensive and I'm afraid the process will crash before I have a chance to post this. Sorry in advance if I've missed something obvious here; I'm new to cluster analysis.
{ "source": [ "https://stats.stackexchange.com/questions/108010", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/40501/" ] }
108,441
Are these merely stylistic conventions (whether italicized or non-italicized), or are there substantive differences in the meanings of these notations? Are there other notations meaning " the probability of " that should be considered in this question?
Stylistic conventions, mainly, but with some underlying rationale. $\mathbb{P}()$ and $\Pr()$ can be seen as two ways to "free up" the letter $\text{P}$ for other use—it is used to denote other things than "probability", for example in research with complicated and extensive notation where one starts to exhaust available letters. $\mathbb{P}()$ requires special fonts, which is a disadvantage. $\Pr()$ may be useful when the author wants the reader to think of probability in abstract and general terms, using the second, lower-case letter "$r$" to disassociate the symbol as a whole from the usual way we write up functions. For example, some problems are solved when one remembers that the cumulative distribution function of a random variable can be written and treated as a probability of an "inequality-event", and to apply the basic probability rules rather than functional analysis. In some cases, one may also see $\text{Prob}()$, again, usually in the beginning of an argument that will end up in a specific formulation of how this probability is functionally determined. The italics version $P()$ is also used, and also in lower-case form, $p()$—this last version is especially used when discussing discrete random variables (where the probability mass function is a probability). $\pi(\;,\;)$ is used for conditional ("transition") probabilities in Markov theory.
{ "source": [ "https://stats.stackexchange.com/questions/108441", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/44269/" ] }
108,466
Different software implementations are available for the lasso. I know the Bayesian vs. frequentist approach has been discussed a lot in different forums. My question is very specific to the lasso: what are the differences or advantages of the Bayesian lasso vs. the regular lasso? Here are two examples of implementation in the package: # just example data set.seed(1233) X <- scale(matrix(rnorm(30),ncol=3))[,] set.seed(12333) Y <- matrix(rnorm(10, X%*%matrix(c(-0.2,0.5,1.5),ncol=1), sd=0.8),ncol=1) require(monomvn) ## Lasso regression reg.las <- regress(X, Y, method="lasso") ## Bayesian Lasso regression reg.blas <- blasso(X, Y) So when should I go for one or the other method? Or are they the same?
The standard lasso uses an L1 regularisation penalty to achieve sparsity in regression. Note that this is also known as Basis Pursuit (Chen & Donoho, 1994). In the Bayesian framework, the choice of regulariser is analogous to the choice of prior over the weights. If a Gaussian prior is used, then the Maximum a Posteriori (MAP) solution will be the same as if an L2 penalty was used. Whilst not directly equivalent, the Laplace prior (which is sharply peaked around zero, unlike the Gaussian, which is smooth around zero) produces the same shrinkage effect as the L1 penalty. Park & Casella (2008) describe the Bayesian lasso. In fact, when you place a Laplace prior over the parameters, the MAP solution should be identical (not merely similar) to regularization with the L1 penalty, and the Laplace prior will produce a shrinkage effect identical to that of the L1 penalty. However, due to either approximations in the Bayesian inference procedure or other numerical issues, solutions may not actually be identical. In most cases, the results produced by both methods will be very similar. Depending on the optimisation method and whether approximations are used, the standard lasso will probably be more efficient to compute than the Bayesian version. The Bayesian version automatically produces interval estimates for all of the parameters, including the error variance, if these are required. Chen, S., & Donoho, D. (1994). Basis pursuit. In Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers (Vol. 1, pp. 41-44). IEEE. https://doi.org/10.1109/ACSSC.1994.471413 Park, T., & Casella, G. (2008). The Bayesian lasso. Journal of the American Statistical Association, 103(482), 681-686. https://doi.org/10.1198/016214508000000337
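To make the MAP correspondence concrete, here is the standard short derivation (a sketch, assuming Gaussian errors with variance $\sigma^2$ and independent Laplace priors with rate $\tau$ on the coefficients): the posterior satisfies $p(\beta \mid y) \propto \exp\!\left(-\frac{1}{2\sigma^2}\lVert y - X\beta\rVert_2^2\right)\prod_j \frac{\tau}{2}\exp\left(-\tau\lvert\beta_j\rvert\right)$, so $-\log p(\beta \mid y) = \frac{1}{2\sigma^2}\lVert y - X\beta\rVert_2^2 + \tau\lVert\beta\rVert_1 + \text{const}$, and maximising the posterior is therefore the same as minimising $\lVert y - X\beta\rVert_2^2 + \lambda\lVert\beta\rVert_1$ with $\lambda = 2\sigma^2\tau$, i.e. exactly the lasso objective.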
{ "source": [ "https://stats.stackexchange.com/questions/108466", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/19762/" ] }
108,477
How can I create a feature vector from text with a deep learning approach? I'm new to this topic; could anybody advise me where to start and how to approach this task?
{ "source": [ "https://stats.stackexchange.com/questions/108477", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/52351/" ] }
108,705
I've recently encountered the bivariate Poisson distribution, but I'm a little confused as to how it can be derived. The distribution is given by: $P(X = x, Y = y) = e^{-(\theta_{1}+\theta_{2}+\theta_{0})} \displaystyle\frac{\theta_{1}^{x}}{x!}\frac{\theta_{2}^{y}}{y!} \sum_{i=0}^{\min(x,y)}\binom{x}{i}\binom{y}{i}i!\left(\frac{\theta_{0}}{\theta_{1}\theta_{2}}\right)^{i}$ From what I can gather, the $\theta_{0}$ term is a measure of correlation between $X$ and $Y$; hence, when $X$ and $Y$ are independent, $\theta_{0} = 0$ and the distribution simply becomes the product of two univariate Poisson distributions. Bearing this in mind, my confusion is predicated on the summation term - I'm assuming this term explains the correlation between $X$ and $Y$. It seems to me that the summand constitutes some sort of product of binomial cumulative distribution functions where the probability of "success" is given by $\left(\frac{\theta_{0}}{\theta_{1}\theta_{2}}\right)$ and the probability of "failure" is given by $i!^{\frac{1}{\min(x,y)-i}}$, because $\left(i!^{\frac{1}{\min(x,y)-i}}\right)^{(\min(x,y)-i)} = i!$, but I could be way off with this. Could somebody provide some assistance on how this distribution can be derived? Also, if it could be included in any answer how this model might be extended to a multivariate scenario (say three or more random variables), that would be great! (Finally, I have noted that there was a similar question posted before ( Understanding the bivariate Poisson distribution ), but the derivation wasn't actually explored.)
In a slide presentation , Karlis and Ntzoufras define a bivariate Poisson as the distribution of $(X,Y)=(X_1+X_0,X_2+X_0)$ where the $X_i$ independently have Poisson $\theta_i$ distributions. Recall that having such a distribution means $$\Pr(X_i=k) = e^{-\theta_i}\frac{\theta_i^k}{k!}$$ for $k=0, 1, 2, \ldots.$ The event $(X,Y)=(x,y)$ is the disjoint union of the events $$(X_0,X_1,X_2) = (i,x-i,y-i)$$ for all $i$ that make all three components non-negative integers, from which we may deduce that $0 \le i \le \min(x,y)$. Because the $X_i$ are independent their probabilities multiply, whence $$F_{(\theta_0,\theta_1,\theta_2)}(x,y)=\Pr((X,Y)=(x,y)) \\= \sum_{i=0}^{\min(x,y)} \Pr(X_0=i)\Pr(X_1=x-i)\Pr(X_2=y-i).$$ This is a formula; we are done. But to see that it is equivalent to the formula in the question, use the definition of the Poisson distribution to write these probabilities in terms of the parameters $\theta_i$ and (assuming neither of $\theta_1,\theta_2$ is zero) re-work it algebraically to look as much as possible like the product $\Pr(X_1=x)\Pr(X_2=y)$: $$\eqalign{ F_{(\theta_0,\theta_1,\theta_2)}(x,y)&= \sum_{i=0}^{\min(x,y)} \left( e^{-\theta_0} \frac{\theta_0^i}{i!}\right) \left( e^{-\theta_1} \frac{\theta_1^{x-i}}{(x-i)!}\right) \left( e^{-\theta_2} \frac{\theta_2^{y-i}}{(y-i)!}\right) \\ &=e^{-(\theta_1+\theta_2)}\frac{\theta_1^x}{x!}\frac{\theta_2^y}{y!}\left(e^{-\theta_0}\sum_{i=0}^{\min(x,y)} \frac{\theta_0^i}{i!}\frac{x!\theta_1^{-i}}{(x-i)!}\frac{y!\theta_2^{-i}}{(y-i)!}\right). }$$ If you really want to--it is somewhat suggestive--you can re-express the terms in the sum using the binomial coefficients $\binom{x}{i}=x!/((x-i)!i!)$ and $\binom{y}{i}$, yielding $$F_{(\theta_0,\theta_1,\theta_2)}(x,y) = e^{-(\theta_0+\theta_1+\theta_2)}\frac{\theta_1^x}{x!}\frac{\theta_2^y}{y!}\sum_{i=0}^{\min(x,y)}i!\binom{x}{i}\binom{y}{i}\left(\frac{\theta_0}{\theta_1\theta_2}\right)^i,$$ exactly as in the question. Generalization to multivariate scenarios could proceed in several ways, depending on the flexibility needed. The simplest would contemplate the distribution of $$(X_1+X_0, X_2+X_0, \ldots, X_d+X_0)$$ for independent Poisson distributed variates $X_0, X_1, \ldots,X_d$. For more flexibility additional variables could be introduced. For instance, use independent Poisson $\eta_i$ variables $Y_1, \ldots, Y_d$ and consider the multivariate distribution of the $X_i + (Y_i + Y_{i+1} + \cdots + Y_d)$, $i=1, 2, \ldots, d.$
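For anyone who wants a numerical sanity check of this construction, here is a small R sketch (the parameter values are arbitrary; the function just evaluates the sum in the first display above):

set.seed(1)
t0 <- 0.5; t1 <- 1.2; t2 <- 0.8
N  <- 1e6
X0 <- rpois(N, t0); X1 <- rpois(N, t1); X2 <- rpois(N, t2)
X  <- X1 + X0; Y <- X2 + X0                      # the Karlis & Ntzoufras construction

mean(X == 2 & Y == 1)                            # empirical estimate of P(X = 2, Y = 1)

bivpois <- function(x, y, t0, t1, t2) {          # sum_i P(X0 = i) P(X1 = x - i) P(X2 = y - i)
  i <- 0:min(x, y)
  sum(dpois(i, t0) * dpois(x - i, t1) * dpois(y - i, t2))
}
bivpois(2, 1, t0, t1, t2)                        # should agree with the simulation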
{ "source": [ "https://stats.stackexchange.com/questions/108705", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9171/" ] }
108,911
I was just reading this article on the Bayes factor for a completely unrelated problem when I stumbled upon this passage Hypothesis testing with Bayes factors is more robust than frequentist hypothesis testing, since the Bayesian form avoids model selection bias, evaluates evidence in favor the null hypothesis, includes model uncertainty, and allows non-nested models to be compared (though of course the model must have the same dependent variable). Also, frequentist significance tests become biased in favor of rejecting the null hypothesis with sufficiently large sample size. [emphasis added] I've seen this claim before in Karl Friston's 2012 paper in NeuroImage , where he calls it the fallacy of classical inference . I've had a bit of trouble finding a truly pedagogical account of why this should be true. Specifically, I'm wondering: why this occurs how to guard against it failing that, how to detect it
Answer to question 1: This occurs because the $p$-value becomes arbitrarily small as the sample size increases in frequentist tests for difference (i.e. tests with a null hypothesis of no difference/some form of equality) when a true difference exactly equal to zero, as opposed to arbitrarily close to zero, is not realistic (see Nick Stauner's comment to the OP). The $p$-value becomes arbitrarily small because the error of frequentist test statistics generally decreases with sample size, with the upshot that all differences are significant to an arbitrary level with a large enough sample size. Cosma Shalizi has written eruditely about this. Answer to question 2: Within a frequentist hypothesis testing framework, one can guard against this by not making inference solely about detecting difference. For example, one can combine inferences about difference and equivalence so that one is not favoring (or conflating!) the burden of proof on evidence of effect versus evidence of absence of effect. Evidence of absence of an effect comes from, for example: two one-sided tests for equivalence (TOST), uniformly most powerful tests for equivalence, and the confidence interval approach to equivalence (i.e. if the $1-2\alpha$% CI of the test statistic is within the a priori-defined range of equivalence/relevance, then one concludes equivalence at the $\alpha$ level of significance). What these approaches all share is an a priori decision about what effect size constitutes a relevant difference and a null hypothesis framed in terms of a difference at least as large as what is considered relevant. Combined inference from tests for difference and tests for equivalence thus protects against the bias you describe when sample sizes are large, in this way (two-by-two table showing the four possibilities resulting from combined tests for difference—positivist null hypothesis, $\text{H}_{0}^{+}$—and equivalence—negativist null hypothesis, $\text{H}_{0}^{-}$): Notice the upper left quadrant: an overpowered test is one where, yes, you reject the null hypothesis of no difference, but you also reject the null hypothesis of relevant difference, so yes, there's a difference, but you have a priori decided you do not care about it because it is too small. Answer to question 3: See answer to 2.
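A small R sketch of the point in the answer to question 1 (the effect size, sample sizes and number of replications are arbitrary; it is only meant to show the p-value shrinking as n grows):

# a true difference of 0.01 SD -- negligible for most practical purposes
set.seed(1)
avg_p <- function(n, delta = 0.01, reps = 50)
  mean(replicate(reps, t.test(rnorm(n, mean = delta), mu = 0)$p.value))

sapply(c(1e2, 1e4, 1e6), avg_p)
# the average p-value heads towards 0 as n grows, so a trivially small true
# difference eventually becomes "statistically significant"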
{ "source": [ "https://stats.stackexchange.com/questions/108911", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17054/" ] }
108,995
How to interpret the Null and Residual Deviance in GLM in R? Like, we say that smaller AIC is better. Is there any similar and quick interpretation for the deviances also? Null deviance: 1146.1 on 1077 degrees of freedom Residual deviance: 4589.4 on 1099 degrees of freedom AIC: 11089
Let LL = loglikelihood Here is a quick summary of what you see from the summary(glm.fit) output: Null Deviance = 2(LL(Saturated Model) - LL(Null Model)) on df = df_Sat - df_Null Residual Deviance = 2(LL(Saturated Model) - LL(Proposed Model)) on df = df_Sat - df_Proposed The Saturated Model is a model that assumes each data point has its own parameters (which means you have n parameters to estimate). The Null Model assumes the exact "opposite", in that it assumes one parameter for all of the data points, which means you only estimate 1 parameter. The Proposed Model assumes you can explain your data points with p parameters + an intercept term, so you have p+1 parameters. If your Null Deviance is really small, it means that the Null Model explains the data pretty well. Likewise with your Residual Deviance. What does really small mean? If your model is "good" then your Deviance is approx Chi^2 with (df_sat - df_model) degrees of freedom. If you want to compare your Null model with your Proposed model, then you can look at (Null Deviance - Residual Deviance) approx Chi^2 with df = df_Null - df_Proposed = (n-1) - (n-(p+1)) = p Are the results you gave directly from R? They seem a little bit odd, because generally you should see that the degrees of freedom reported on the Null are always higher than the degrees of freedom reported on the Residual. That is because, again, Null Deviance df = Saturated df - Null df = n-1 Residual Deviance df = Saturated df - Proposed df = n-(p+1)
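In R these quantities can be read straight off a fitted glm object; a small sketch (the model below is just a placeholder example on a built-in dataset):

fit <- glm(am ~ hp + wt, family = binomial, data = mtcars)

fit$null.deviance    # Null deviance
fit$deviance         # Residual deviance
fit$df.null          # n - 1
fit$df.residual      # n - (p + 1)

# likelihood-ratio (deviance) test of the proposed model against the null model
pchisq(fit$null.deviance - fit$deviance,
       df = fit$df.null - fit$df.residual, lower.tail = FALSE)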
{ "source": [ "https://stats.stackexchange.com/questions/108995", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/46050/" ] }
109,076
This is a favorite of mine. This example is in a humorous vein (credit goes to a former professor of mine, Steven Gortmaker), but I am also interested in graphs that you feel beautifully capture and communicate a statistical insight or method, along with your ideas about same. One entry per answer. Of course, this question is along the same lines as What is your favorite "data analysis" cartoon? Kindly provide proper credit/citations with any images you provide.
I think that Anscombe's quartet deserves a place here as an example and reminder to always plot your data because datasets with the same numeric summaries can have very different relationships: Anscombe, Francis J. (1973) Graphs in statistical analysis . American Statistician , 27 , 17-21.
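R ships the quartet as the built-in anscombe data frame, so the "same numeric summaries" point is easy to verify yourself; a quick sketch:

data(anscombe)

# near-identical means, variances and correlations across the four pairs
sapply(1:4, function(i) c(mean = mean(anscombe[[paste0("y", i)]]),
                          var  = var(anscombe[[paste0("y", i)]]),
                          cor  = cor(anscombe[[paste0("x", i)]],
                                     anscombe[[paste0("y", i)]])))

# ...and yet the scatterplots look completely different
op <- par(mfrow = c(2, 2))
for (i in 1:4)
  plot(anscombe[[paste0("x", i)]], anscombe[[paste0("y", i)]],
       xlab = paste0("x", i), ylab = paste0("y", i))
par(op)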
{ "source": [ "https://stats.stackexchange.com/questions/109076", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/44269/" ] }
110,004
If we are fitting a glmer, we may get a warning that tells us the model is having a hard time converging, e.g. >Warning message: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model failed to converge with max|grad| = 0.00389462 (tol = 0.001) Another way to check convergence, discussed in this thread by @Ben Bolker, is: relgrad <- with(model@optinfo$derivs,solve(Hessian,gradient)) max(abs(relgrad)) #[1] 1.152891e-05 If max(abs(relgrad)) is <0.001 then things might be OK... so in this case we have conflicting results? How should we choose between methods and feel safe with our model fits? On the other hand, when we get more extreme values like: >Warning message: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model failed to converge with max|grad| = 35.5352 (tol = 0.001) relgrad <- with(model@optinfo$derivs,solve(Hessian,gradient)) max(abs(relgrad)) #[1] 0.002776518 Does this mean we have to ignore the model results/estimates/p-values? Is 0.0027 far too large to proceed? When different optimisers give different results, and centering of variables / removing parameters (stripping models down to the minimum) does not help, but VIFs are low, models are not overdispersed, and model results make logical sense based on a priori expectations, it seems hard to know what to do. Advice on how to interpret the convergence problems, how extreme they need to be to really get us worried, and possible ways to try to manage them beyond those mentioned would be very helpful. Using: R version 3.1.0 (2014-04-10) and lme4_1.1-6
Be afraid. Be very afraid. Last year, I interviewed John Nash, the author of optim and optimx, for an article on IBM's DeveloperWorks site. We talked about how optimizers work and why they fail when they fail. He seemed to take it for granted that they often do. That's why the diagnostics are included in the package. He also thought that you need to "understand your problem", and understand your data. All of which means that warnings should be taken seriously, and are an invitation to look at your data in other ways. Typically, an optimizer stops searching when it can no longer improve the loss function by a meaningful amount. It doesn't know where to go next, basically. If the gradient of the loss function is not zero at that point, you haven't reached an extremum of any kind. If the Hessian is not positive, but the gradient is zero, you haven't found a minimum, but possibly, you did find a maximum or saddle point. Depending on the optimizer, though, results about the Hessian might not be supplied. In Optimx, if you want the KKT conditions evaluated, you have to ask for them -- they are not evaluated by default. (These conditions look at the gradient and Hessian to see if you really have a minimum.) The problem with mixed models is that the variance estimates for the random effects are constrained to be positive, thus placing a boundary within the optimization region. But suppose a particular random effect is not really needed in your model -- i.e. the variance of the random effect is 0. Your optimizer will head into that boundary,be unable to proceed, and stop with a non-zero gradient. If removing that random effect improved convergence, you will know that was the problem. As an aside, note that asymptotic maximum likelihood theory assumes the MLE is found in an interior point (i.e. not on the boundary of licit parameter values) - so likelihood ratio tests for variance components may not work when indeed the null hypothesis of zero variance is true. Testing can be done using simulation tests, as implemented in package RLRsim. To me, I suspect that optimizers run into problems when there is too little data for the number of parameters, or the proposed model is really not suitable. Think glass slipper and ugly step-sister: you can't shoehorn your data into the model, no matter how hard you try, and something has to give. Even if the data happen to fit the model, they may not have the power to estimate all the parameters. A funny thing happened to me along those lines. I simulated some mixed models to answer a question about what happens if you don't allow the random effects to be correlated when fitting a mixed effects model. I simulated data with a strong correlation between the two random effects, then fit the model both ways with lmer: positing 0 correlations and free correlations. The correlation model fit better than the uncorrelated model, but interestingly, in 1000 simulations, I had 13 errors when fitting the true model and 0 errors when fitting the simpler model. I don't fully understand why this happened (and I repeated the sims to similar results). I suspect that the correlation parameter is fairly useless and the optimizer can't find the value (because it doesn't matter). You asked about what to do when different optimizers give different results. John and I discussed this point. Some optimizers, in his opinion, are just not that good! And all of them have points of weakness -- i.e., data sets that will cause them to fail. 
This is why he wrote optimx, which includes a variety of optimizers. You can run several on the same data set. If two optimizers give the same parameters, but different diagnostics -- and those parameters make real world sense -- then I would be inclined to trust the parameter values. The difficulty could lie with the diagnostics, which are not fool-proof. If you have not explicitly supplied the gradient function and/or Hessian matrix, the optimizer will need to estimate these from the loss function and the data, which is just something else that can go wrong. If you are getting different parameter values as well, then you might want to try different starting values and see what happens then. Some optimizers and some problems are very sensitive to the starting values. You want to be starting in the ball park.
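A practical sketch for the lme4 setting in the question above (illustrative only: the formula, data and family are placeholders, and allFit()/glmerControl() assume a reasonably recent version of lme4; check ?convergence and ?allFit in your installation):

library(lme4)

# refit the same (placeholder) model with a different optimizer and more iterations
m1 <- glmer(y ~ x + (1 | g), data = dat, family = binomial,
            control = glmerControl(optimizer = "bobyqa",
                                   optCtrl = list(maxfun = 2e5)))

# refit with every available optimizer and compare the results
aa <- allFit(m1)
summary(aa)   # if the fixed effects and log-likelihoods agree across optimizers,
              # the warning is less worrying; if they differ, take it seriously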
{ "source": [ "https://stats.stackexchange.com/questions/110004", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17865/" ] }
110,008
I have two linear regression parameters of interest, b1 and b2. Both parameters are from a linear model built from 14 datapoints and having 7 model parameters, including an intercept. Interest lies in testing the null hypothesis b2 - b1 = 0 versus its two-sided alternative. I have two questions concerning this test: First, according to Slutsky's lemma, the subtraction of model parameters b1 and b2 converges to the subtraction of the distributions of b2 and b1 (t-distributions with n-p=7 degrees of freedom). However, I am afraid I am not allowed to work asymptotically since I have such a low number of data points, or can I? Second, how can I calculate a subtraction of these two t-distributions?
{ "source": [ "https://stats.stackexchange.com/questions/110008", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/42808/" ] }
110,359
As per Wikipedia, I understand that the t-distribution is the sampling distribution of the t-value when the samples are iid observations from a normally distributed population. However, I don't intuitively understand why that causes the shape of the t-distribution to change from fat-tailed to almost perfectly normal. I get that if you're sampling from a normal distribution then if you take a big sample it will resemble that distribution, but I don't get why it starts out with the fat-tailed shape it does.
I'll try to give an intuitive explanation. The t-statistic* has a numerator and a denominator. For example, the statistic in the one sample t-test is $$\frac{\bar{x}-\mu_0}{s/\sqrt{n}}$$ *(there are several, but this discussion should hopefully be general enough to cover the ones you are asking about) Under the assumptions, the numerator has a normal distribution with mean 0 and some unknown standard deviation. Under the same set of assumptions, the denominator is an estimate of the standard deviation of the distribution of the numerator (the standard error of the statistic on the numerator). It is independent of the numerator. Its square is a chi-square random variable divided by its degrees of freedom (which is also the d.f. of the t-distribution) times $\sigma^2_\text{numerator}$. When the degrees of freedom are small, the denominator tends to be fairly right-skew. It has a high chance of being less than its mean, and a relatively good chance of being quite small. At the same time, it also has some chance of being much, much larger than its mean. Under the assumption of normality, the numerator and denominator are independent. So if we draw randomly from the distribution of this t-statistic we have a normal random number divided by a second randomly* chosen value from a right-skew distribution that's on average around 1. * without regard to the normal term Because it's on the denominator, the small values in the distribution of the denominator produce very large t-values. The right-skew in the denominator makes the t-statistic heavy-tailed. The right tail of the distribution, when on the denominator, makes the t-distribution more sharply peaked than a normal with the same standard deviation as the t. However, as the degrees of freedom become large, the distribution becomes much more normal-looking and much more "tight" around its mean. As such, the effect of dividing by the denominator on the shape of the distribution of the numerator reduces as the degrees of freedom increase. Eventually - as Slutsky's theorem might suggest to us could happen - the effect of the denominator becomes more like dividing by a constant and the distribution of the t-statistic is very close to normal. Considered in terms of the reciprocal of the denominator: whuber suggested in comments that it might be more illuminating to look at the reciprocal of the denominator. That is, we could write our t-statistics as numerator (normal) times reciprocal-of-denominator (right-skew). For example, our one-sample-t statistic above would become: $${\sqrt{n}(\bar{x}-\mu_0)}\cdot{1/s}$$ Now consider the population standard deviation of the original $X_i$, $\sigma_x$. We can multiply and divide by it, like so: $${\sqrt{n}(\bar{x}-\mu_0)/\sigma_x}\cdot{\sigma_x/s}$$ The first term is standard normal. The second term (the square root of a scaled inverse-chi-squared random variable) then scales that standard normal by values that are either larger or smaller than 1, "spreading it out". Under the assumption of normality, the two terms in the product are independent. So if we draw randomly from the distribution of this t-statistic we have a normal random number (the first term in the product) times a second randomly-chosen value (without regard to the normal term) from a right-skew distribution that's 'typically' around 1. When the d.f. are large, the value tends to be very close to 1, but when the df are small, it's quite skew and the spread is large, with the big right tail of this scaling factor making the tail quite fat:
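If it helps to see the construction above numerically, here is a brief R sketch (the df value and plotting choices are arbitrary):

# build the t-statistic directly: standard normal / sqrt(chi-square / df)
set.seed(1)
N  <- 1e5
df <- 3
tt <- rnorm(N) / sqrt(rchisq(N, df) / df)

hist(tt[abs(tt) < 6], breaks = 100, freq = FALSE,
     main = paste("simulated construction vs dt, df =", df))
curve(dt(x, df), add = TRUE, col = "red", lwd = 2)
# rerun with df = 300 and the same picture is essentially a standard normal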
{ "source": [ "https://stats.stackexchange.com/questions/110359", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9162/" ] }
110,999
Root mean square error, residual sum of squares, residual standard error, mean squared error, test error: I thought I used to understand these terms, but the more I do statistics problems the more I have gotten myself confused and second-guess myself. I would like some reassurance & a concrete example. I can find the equations easily enough online, but I am having trouble getting an 'explain like I'm 5' explanation of these terms so I can crystallize in my head the differences and how one leads to another. If anyone can take this code below and point out how I would calculate each one of these terms I would appreciate it. R code would be great. Using this example below: summary(lm(mpg~hp, data=mtcars)) Show me in R code how to find: rmse = ____ rss = ____ residual_standard_error = ______ # I know it's there but need understanding mean_squared_error = _______ test_error = ________ Bonus points for explaining like I'm 5 the differences/similarities between these. Example: rmse = squareroot(mss)
As requested, I illustrate using a simple regression using the mtcars data: fit <- lm(mpg~hp, data=mtcars) summary(fit) Call: lm(formula = mpg ~ hp, data = mtcars) Residuals: Min 1Q Median 3Q Max -5.7121 -2.1122 -0.8854 1.5819 8.2360 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 30.09886 1.63392 18.421 < 2e-16 *** hp -0.06823 0.01012 -6.742 1.79e-07 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 3.863 on 30 degrees of freedom Multiple R-squared: 0.6024, Adjusted R-squared: 0.5892 F-statistic: 45.46 on 1 and 30 DF, p-value: 1.788e-07 The mean squared error (MSE) is the mean of the square of the residuals: # Mean squared error mse <- mean(residuals(fit)^2) mse [1] 13.98982 Root mean squared error (RMSE) is then the square root of MSE: # Root mean squared error rmse <- sqrt(mse) rmse [1] 3.740297 Residual sum of squares (RSS) is the sum of the squared residuals: # Residual sum of squares rss <- sum(residuals(fit)^2) rss [1] 447.6743 Residual standard error (RSE) is the square root of (RSS / degrees of freedom): # Residual standard error rse <- sqrt( sum(residuals(fit)^2) / fit$df.residual ) rse [1] 3.862962 The same calculation, simplified because we have previously calculated rss : sqrt(rss / fit$df.residual) [1] 3.862962 The term test error in the context of regression (and other predictive analytics techniques) usually refers to calculating a test statistic on test data, distinct from your training data. In other words, you estimate a model using a portion of your data (often an 80% sample) and then calculating the error using the hold-out sample. Again, I illustrate using mtcars , this time with an 80% sample set.seed(42) train <- sample.int(nrow(mtcars), 26) train [1] 30 32 9 25 18 15 20 4 16 17 11 24 19 5 31 21 23 2 7 8 22 27 10 28 1 29 Estimate the model, then predict with the hold-out data: fit <- lm(mpg~hp, data=mtcars[train, ]) pred <- predict(fit, newdata=mtcars[-train, ]) pred Datsun 710 Valiant Merc 450SE Merc 450SL Merc 450SLC Fiat X1-9 24.08103 23.26331 18.15257 18.15257 18.15257 25.92090 Combine the original data and prediction in a data frame test <- data.frame(actual=mtcars$mpg[-train], pred) test$error <- with(test, pred-actual) test actual pred error Datsun 710 22.8 24.08103 1.2810309 Valiant 18.1 23.26331 5.1633124 Merc 450SE 16.4 18.15257 1.7525717 Merc 450SL 17.3 18.15257 0.8525717 Merc 450SLC 15.2 18.15257 2.9525717 Fiat X1-9 27.3 25.92090 -1.3791024 Now compute your test statistics in the normal way. I illustrate MSE and RMSE: test.mse <- with(test, mean(error^2)) test.mse [1] 7.119804 test.rmse <- sqrt(test.mse) test.rmse [1] 2.668296 Note that this answer ignores weighting of the observations.
{ "source": [ "https://stats.stackexchange.com/questions/110999", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/53610/" ] }
111,001
I am trying to construct a formula which will take a student's previous exam results (for example, SAT) taken at particular dates and predict his future test result. One X is previous test result 1; another X is the date of previous test 1 (which can be converted to the number of days between this test and the last test for simplicity); and other Xs are similar variables for additional previous tests, of which there are 3-5 per person. My Y is the result of the last test. Normally I would use simple linear regression to model this relation, but the problem is that this relation is not linear, because improving one's score from 100 to 200 is easier than from 300 to 400, for example, and also because of the upper limit of the test score (700, for example). Is there a way to create a more or less meaningful model for such a prediction given 3-5 previous test results? Thank you!
{ "source": [ "https://stats.stackexchange.com/questions/111001", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/53620/" ] }
111,005
Is it necessary to estimate a VAR before the Granger causality test, so that we can obtain the lag length to be used in the Granger causality test?
{ "source": [ "https://stats.stackexchange.com/questions/111005", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/53612/" ] }
111,010
I have read enough threads on QQ plots here to understand that a QQ plot can be more informative than other normality tests. However, I am inexperienced with interpreting QQ plots. I googled a lot; I found a lot of graphs of non-normal QQ plots, but no clear rules on how to interpret them, other than what seems to be comparison with known distributions plus "gut feeling". I would like to know if you have (or you know of) any rule of thumb to help decide on non-normality. This question came up when I saw these two graphs: I understand that the decision of non-normality depends on the data and what I want to do with them; however, my question is: generally, when do the observed departures from the straight line constitute enough evidence to make the approximation of normality unreasonable? For what it's worth, the Shapiro-Wilk test failed to reject the null hypothesis of normality in both cases.
Note that the Shapiro-Wilk is a powerful test of normality. The best approach is really to have a good idea of how sensitive any procedure you want to use is to various kinds of non-normality (how badly non-normal does it have to be in that way for it to affect your inference more than you can accept). An informal approach for looking at the plots would be to generate a number of data sets that are actually normal, of the same sample size as the one you have (for example, say 24 of them). Plot your real data among a grid of such plots (5x5 in the case of 24 random sets). If it's not especially unusual-looking (the worst-looking one, say), it's reasonably consistent with normality. To my eye, data set "Z" in the center looks roughly on a par with "o" and "v" and maybe even "h", while "d" and "f" look slightly worse. "Z" is the real data. While I don't believe for a moment that it's actually normal, it's not particularly unusual-looking when you compare it with normal data. [Edit: I just conducted a random poll --- well, I asked my daughter, but at a fairly random time --- and her choice for the least like a straight line was "d". So 100% of those surveyed thought "d" was the most-odd one.] A more formal approach would be to do a Shapiro-Francia test (which is effectively based on the correlation in the QQ-plot), but (a) it's not even as powerful as the Shapiro-Wilk test, and (b) formal testing answers a question (sometimes) that you should already know the answer to anyway (the distribution your data were drawn from isn't exactly normal), instead of the question you need answered (how badly does that matter?). As requested, code for the above display. Nothing fancy involved: z = lm(dist~speed,cars)$residual n = length(z) xz = cbind(matrix(rnorm(12*n), nr=n), z, matrix(rnorm(12*n), nr=n)) colnames(xz) = c(letters[1:12],"Z",letters[13:24]) opar = par(no.readonly=TRUE) par(mfrow=c(5,5)); par(mar=c(0.5,0.5,0.5,0.5)) par(oma=c(1,1,1,1)); ytpos = (apply(xz,2,min)+3*apply(xz,2,max))/4 cn = colnames(xz) for(i in 1:25) { qqnorm(xz[, i], axes=FALSE, ylab= colnames(xz)[i], xlab="", main="") qqline(xz[,i],col=2,lty=2) box("figure", col="darkgreen") text(-1.5,ytpos[i],cn[i]) } par(opar) Note that this was just for the purposes of illustration; I wanted a small data set that looked mildly non-normal, which is why I used the residuals from a linear regression on the cars data (the model isn't quite appropriate). However, if I were actually generating such a display for a set of residuals for a regression, I'd regress all 25 data sets on the same $x$'s as in the model, and display QQ plots of their residuals, since residuals have some structure not present in normal random numbers. (I've been making sets of plots like this since the mid-80s at least. How can you interpret plots if you are unfamiliar with how they behave when the assumptions hold --- and when they don't?) See more: Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D. F. and Wickham, H. (2009), Statistical inference for exploratory data analysis and model diagnostics, Phil. Trans. R. Soc. A, 367, 4361-4383. doi: 10.1098/rsta.2009.0120 Edit: I mentioned this issue in my second paragraph but I want to emphasize the point again, in case it gets forgotten along the way.
What usually matters is not whether you can tell something is not actually normal (whether by formal test or by looking at a plot) but rather how much it matters for what you would be using that model to do: how sensitive are the properties you care about to the amount and manner of lack of fit you might have between your model and the actual population? The answer to the question "is the population I'm sampling actually normally distributed" is, essentially always, "no" (you don't need a test or a plot for that), but the question is rather "how much does it matter?". If the answer is "not much at all", the fact that the assumption is false is of little practical consequence. A plot can help somewhat, since it at least shows you something of the 'amount and manner' of deviation between the sample and the distributional model, so it's a starting point for considering whether it would matter. However, whether it does depends on the properties of what you are doing (consider a t-test vs a test of variance, for example; the t-test can in general tolerate much more substantial deviations from the assumptions made in its derivation than an F-ratio test of equality of variances can).
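One way to make the "how much does it matter?" question concrete (a minimal sketch of my own, not code from the answer) is to simulate the procedure you actually intend to use under a plausible departure from normality and see how its behaviour changes. Here, clearly skewed exponential data with n = 30:
set.seed(1)
# type I error of a two-sided one-sample t-test when the data are exponential
# (shifted so the true mean is 0); compare the rejection rate with the nominal 0.05
p.t <- replicate(10000, t.test(rexp(30) - 1)$p.value)
mean(p.t < 0.05)
# the same check for an F test of equal variances on two such samples; variance
# tests are generally far more sensitive to this kind of departure
p.f <- replicate(10000, var.test(rexp(30), rexp(30))$p.value)
mean(p.f < 0.05)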
{ "source": [ "https://stats.stackexchange.com/questions/111010", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/53356/" ] }
111,445
Let us consider the following two probability distributions: P Q 0.01 0.002 0.02 0.004 0.03 0.006 0.04 0.008 0.05 0.01 0.06 0.012 0.07 0.014 0.08 0.016 0.64 0.928 I have calculated the Kullback-Leibler divergence, which is equal to $0.492820258$, and I want to know in general what this number shows me. Generally, the Kullback-Leibler divergence shows me how far one probability distribution is from another, right? It is similar to entropy terminology, but in terms of numbers, what does it mean? If I have a result of 0.49, can I say that one distribution is roughly 50% away from the other?
The Kullback-Leibler Divergence is not a proper metric, since it is not symmetric and it does not satisfy the triangle inequality. So the "roles" played by the two distributions are different, and it is important to distribute these roles according to the real-world phenomenon under study. When we write (the OP has calculated the expression using base-2 logarithms) $$\mathbb K\left(P||Q\right) = \sum_{i}\log_2 (p_i/q_i)p_i $$ we consider the $P$ distribution to be the "target distribution" (usually considered to be the true distribution), which we approximate by using the $Q$ distribution. Now, $$\sum_{i}\log_2 (p_i/q_i)p_i = \sum_{i}\log_2 (p_i)p_i-\sum_{i}\log_2 (q_i)p_i = -H(P) - E_P\left(\log_2 Q\right)$$ where $H(P)$ is the Shannon entropy of distribution $P$ and $-E_P\left(\log_2 Q\right)$ is called the "cross-entropy of $P$ and $Q$", which is also non-symmetric. Writing $$\mathbb K\left(P||Q\right) = H(P,Q) - H(P)$$ (here too, the order in which we write the distributions in the expression of the cross-entropy matters, since it too is not symmetric) permits us to see that KL-Divergence reflects an increase in entropy over the unavoidable entropy of distribution $P$. So, no, KL-divergence is better not interpreted as a "distance measure" between distributions, but rather as a measure of the entropy increase due to the use of an approximation to the true distribution rather than the true distribution itself. So we are in Information Theory land. To hear it from the masters (Cover & Thomas): "...if we knew the true distribution $P$ of the random variable, we could construct a code with average description length $H(P)$. If, instead, we used the code for a distribution $Q$, we would need $H(P) + \mathbb K (P||Q)$ bits on the average to describe the random variable." The same wise people say: "...it is not a true distance between distributions since it is not symmetric and does not satisfy the triangle inequality. Nonetheless, it is often useful to think of relative entropy as a “distance” between distributions." But this latter approach is useful mainly when one attempts to minimize KL-divergence in order to optimize some estimation procedure. For the interpretation of its numerical value per se, it is not useful, and one should prefer the "entropy increase" approach. For the specific distributions of the question (always using base-2 logarithms) $$ \mathbb K\left(P||Q\right) = 0.49282,\;\;\;\; H(P) = 1.9486$$ In other words, you need about 25% more bits ($0.49282/1.9486 \approx 0.25$) to describe the situation if you are going to use $Q$ while the true distribution is $P$. This means longer codewords, more time to write them, more memory, more time to read them, a higher probability of mistakes, etc. It is no accident that Cover & Thomas say that KL-Divergence (or "relative entropy") "measures the inefficiency caused by the approximation."
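A minimal R check of the numbers quoted above (my addition; it uses only the P and Q given in the question):
p <- c(0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.64)
q <- c(0.002, 0.004, 0.006, 0.008, 0.01, 0.012, 0.014, 0.016, 0.928)
kl <- sum(p * log2(p / q))   # K(P||Q) in bits: about 0.4928
hp <- -sum(p * log2(p))      # Shannon entropy H(P): about 1.9486
c(kl = kl, entropy = hp, relative.increase = kl / hp)   # roughly 25% extra bits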
{ "source": [ "https://stats.stackexchange.com/questions/111445", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5419/" ] }
111,467
I'm building regression models. As a preprocessing step, I scale my feature values to have mean 0 and standard deviation 1. Is it necessary to normalize the target values also?
Let's first analyse why feature scaling is performed. Feature scaling improves the convergence of steepest-descent algorithms, which do not possess the property of scale invariance. In stochastic gradient descent, training examples inform the weight updates iteratively, like so: $$w_{t+1} = w_t - \gamma\nabla_w \ell(f_w(x),y)$$ where $w$ are the weights, $\gamma$ is a step size, $\nabla_w$ is the gradient with respect to the weights, $\ell$ is a loss function, $f_w$ is the function parameterized by $w$, $x$ is a training example, and $y$ is the response/label. Compare the following convex functions, representing proper scaling and improper scaling. A step through one weight update of size $\gamma$ will yield a much better reduction in the error in the properly scaled case than in the improperly scaled case. Shown below is the direction of $\nabla_w \ell(f_w(x),y)$ of length $\gamma$. Normalizing the output will not affect the shape of $f$, so it's generally not necessary. The only situation I can imagine where scaling the outputs has an impact is if your response variable is very large and/or you're using f32 variables (which is common with GPU linear algebra). In this case it is possible to get a floating-point overflow in an element of the weights. The symptom is either an Inf value or a wrap-around to the other extreme representation.
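To see the convergence point in action, here is a minimal sketch (my own illustration with made-up data, not the answerer's code) of plain gradient descent on a least-squares problem where one feature lives on a much larger scale than the other; only the features are standardized, and the response is left untouched:
set.seed(1)
n  <- 200
x1 <- rnorm(n)              # small-scale feature
x2 <- rnorm(n, sd = 100)    # large-scale feature
y  <- 1 + 2 * x1 + 0.05 * x2 + rnorm(n)

# plain gradient descent on 0.5 * mean squared error; returns the final training MSE
gd <- function(X, y, gamma, iters = 2000) {
  w <- rep(0, ncol(X))
  for (i in seq_len(iters)) {
    w <- w - gamma * crossprod(X, X %*% w - y) / nrow(X)
  }
  mean((X %*% w - y)^2)
}

X.raw    <- cbind(1, x1, x2)
X.scaled <- cbind(1, scale(x1), scale(x2))   # features standardized; y is untouched

gd(X.raw,    y, gamma = 1e-4)   # step size must stay tiny to avoid divergence, so progress is slow
gd(X.scaled, y, gamma = 0.1)    # the same algorithm converges quickly once the features share a scale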
{ "source": [ "https://stats.stackexchange.com/questions/111467", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/32397/" ] }
111,968
I have a computer science background but am trying to teach myself data science by solving problems on the internet. I have been working on this problem for the last couple of weeks (approx. 900 rows and 10 features). I was initially using logistic regression, but now I have switched to random forests. When I run my random forest model on my training data I get really high values for AUC (> 99%). However, when I run the same model on the test data the results are not so good (accuracy of approximately 77%). This leads me to believe that I am overfitting the training data. What are the best practices for preventing overfitting in random forests? I am using R and RStudio as my development environment. I am using the randomForest package and have accepted the defaults for all parameters.
To avoid overfitting in a random forest, the main thing you need to do is optimize the tuning parameter that governs the number of features that are randomly chosen to grow each tree from the bootstrapped data. Typically, you do this via $k$-fold cross-validation, where $k \in \{5, 10\}$, and choose the tuning parameter that minimizes test sample prediction error. In addition, growing a larger forest will improve predictive accuracy, although there are usually diminishing returns once you get up to several hundred trees.
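A rough sketch of that workflow in R (my addition; your.data and outcome are placeholders for the asker's own data frame and binary response, and the mtry grid is just one plausible choice for 10 features):
library(caret)          # wraps randomForest and handles the cross-validation
library(randomForest)

set.seed(123)
ctrl <- trainControl(method = "cv", number = 5)
grid <- expand.grid(mtry = c(1, 2, 3, 5, 8))   # candidate numbers of features tried at each split

# your.data must contain a factor column named outcome for classification
fit <- train(outcome ~ ., data = your.data, method = "rf",
             ntree = 500, trControl = ctrl, tuneGrid = grid)
fit$bestTune    # the mtry value with the best cross-validated performance
fit$results     # cross-validated accuracy for each candidate mtry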
{ "source": [ "https://stats.stackexchange.com/questions/111968", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/48876/" ] }
112,147
This question has been triggered by something I read in this graduate-level statistics textbook and also (independently) heard during this presentation at a statistical seminar. In both cases, the statement was along the lines of "because the sample size is pretty small, we decided to perform estimation via bootstrap instead of (or along with) this parametric method $X$". They didn't get into the details, but probably the reasoning was as follows: method $X$ assumes the data follow a certain parametric distribution $D$. In reality the distribution is not exactly $D$, but it's OK as long as the sample size is large enough. Since in this case the sample size is too small, let's switch to the (non-parametric) bootstrap that doesn't make any distributional assumptions. Problem solved! In my opinion, that's not what the bootstrap is for. Here is how I see it: the bootstrap can give one an edge when it's more or less obvious that there are enough data, but there is no closed-form solution to get standard errors, p-values and similar statistics. A classic example is obtaining a CI for the correlation coefficient given a sample from a bivariate normal distribution: the closed-form solution exists, but it is so convoluted that bootstrapping is simpler. However, nothing implies that the bootstrap can somehow help one to get away with a small sample size. Is my perception right? If you find this question interesting, there is another, more specific bootstrap question from me: Bootstrap: the issue of overfitting. P.S. I can’t help sharing one egregious example of the “bootstrap approach”. I am not disclosing the author’s name, but he is one of the older generation “quants” who wrote a book on Quantitative Finance in 2004. The example is taken from there. Consider the following problem: suppose you have 4 assets and 120 monthly return observations for each. The goal is to construct the joint 4-dimensional cdf of yearly returns. Even for a single asset, the task appears hardly attainable with only 10 yearly observations, let alone the estimation of a 4-dimensional cdf. But not to worry, the “bootstrap” will help you out: take all of the available 4-dimensional observations, resample 12 with replacement and compound them to construct a single “bootstrapped” 4-dimensional vector of annual returns. Repeat that 1000 times and, lo and behold, you got yourself a “bootstrap sample” of 1000 annual returns. Use this as an i.i.d. sample of size 1000 for the purpose of cdf estimation, or any other inference that can be drawn from a thousand-year history.
I remember reading that using the percentile confidence interval for bootstrapping is equivalent to using a Z interval instead of a T interval and using $n$ instead of $n-1$ for the denominator. Unfortunately I don't remember where I read this and could not find a reference in my quick searches. These differences don't matter much when n is large (and the advantages of the bootstrap outweigh these minor problems when $n$ is large), but with small $n$ this can cause problems. Here is some R code to simulate and compare: simfun <- function(n=5) { x <- rnorm(n) m.x <- mean(x) s.x <- sd(x) z <- m.x/(1/sqrt(n)) t <- m.x/(s.x/sqrt(n)) b <- replicate(10000, mean(sample(x, replace=TRUE))) c( t=abs(t) > qt(0.975,n-1), z=abs(z) > qnorm(0.975), z2 = abs(t) > qnorm(0.975), b= (0 < quantile(b, 0.025)) | (0 > quantile(b, 0.975)) ) } out <- replicate(10000, simfun()) rowMeans(out) My results for one run are: t z z2 b.2.5% 0.0486 0.0493 0.1199 0.1631 So we can see that using the t-test and the z-test (with the true population standard deviation) both give a type I error rate that is essentially $\alpha$ as designed. The improper z test (dividing by sample standard deviation, but using Z critical value instead of T) rejects the null more than twice as often as it should. Now to the bootstrap, it is rejecting the null 3 times as often as it should (looking if 0, the true mean, is in the interval or not), so for this small sample size the simple bootstrap is not sized properly and therefore does not fix problems (and this is when the data is optimally normal). The improved bootstrap intervals (BCa etc.) will probably do better, but this should raise some concern about using bootstrapping as a panacea for small sample sizes.
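If you want to see how the improved intervals behave for a particular small sample, the boot package reports them directly (a sketch on my part, not code from the original answer):
library(boot)
set.seed(1)
x <- rnorm(5)
b <- boot(x, function(d, i) mean(d[i]), R = 10000)
boot.ci(b, type = c("perc", "bca"))   # compare the percentile and BCa endpoints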
{ "source": [ "https://stats.stackexchange.com/questions/112147", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54099/" ] }
112,442
While building a regression model in R ( lm ), I frequently get this message: "there are aliased coefficients in the model". What exactly does it mean? Because of this, predict() also gives a warning. Though it's just a warning, I want to know how we can detect/remove aliased coefficients before building a model. Also, what are the probable consequences of neglecting this warning?
I suspect this is not an error of lm , but rather of vif (from the package car ). If so, I believe you have run into perfect multicollinearity . For instance x1 <- rnorm( 100 ) x2 <- 2 * x1 y <- rnorm( 100 ) vif( lm( y ~ x1 + x2 ) ) produces your error. In this context, "alias" refers to variables that are linearly dependent on others (i.e. they cause perfect multicollinearity). The first step towards the solution is to identify which variable(s) are the culprit(s). Run alias( lm( y ~ x1 + x2 ) ) to see an example.
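Continuing that toy example (my addition, not part of the original answer): once alias() shows that x2 is just 2 * x1, the usual remedy is to drop one of the offending terms and refit.
set.seed(1)
x1 <- rnorm(100)
x2 <- 2 * x1
y  <- rnorm(100)
alias(lm(y ~ x1 + x2))   # reports x2 as a linear combination of x1
fit <- lm(y ~ x1)        # drop the aliased term and refit
summary(fit)             # full-rank fit: no aliased coefficients, and predict() no longer warns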
{ "source": [ "https://stats.stackexchange.com/questions/112442", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54324/" ] }
112,451
Could anyone explain maximum likelihood estimation (MLE) to me in detail, but in layman's terms? I would like to understand the underlying concept before going into the mathematical derivations and equations.
Say you have some data. Say you're willing to assume that the data comes from some distribution -- perhaps Gaussian. There are an infinite number of different Gaussians that the data could have come from (which correspond to the combination of the infinite number of means and variances that a Gaussian distribution can have). MLE will pick the Gaussian (i.e., the mean and variance) that is "most consistent" with your data (the precise meaning of consistent is explained below). So, say you've got a data set of $y = \{-1, 3, 7\}$ . The most consistent Gaussian from which that data could have come has a mean of 3 and a variance of 32/3 (the maximum-likelihood variance divides by $n$ rather than $n-1$). It could have been sampled from some other Gaussian. But one with a mean of 3 and a variance of 32/3 is most consistent with the data in the following sense: the probability of getting the particular $y$ values you observed is greater with this choice of mean and variance than it is with any other choice. Moving to regression: instead of the mean being a constant, the mean is a linear function of the data, as specified by the regression equation. So, say you've got data like $x = \{ 2,4,10 \}$ along with $y$ from before. The mean of that Gaussian is now the fitted regression model $X'\hat\beta$ , where $\hat\beta =[-1.9,.9]$. Moving to GLMs: replace the Gaussian with some other distribution (from the exponential family). The mean is now a linear function of the data, as specified by the regression equation, transformed by the inverse link function. So, it's $g(X'\beta)$ , where $g(x) = e^x/(1+e^x)$ is the inverse of the logit link (with binomial data).
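A minimal R sketch of those toy numbers (my addition, using only the data quoted in the answer):
y <- c(-1, 3, 7)
mu.hat     <- mean(y)                # 3
sigma2.hat <- mean((y - mu.hat)^2)   # 32/3, about 10.67; note the divisor is n, not n - 1
# the regression version: the Gaussian mean becomes a linear function of x
x <- c(2, 4, 10)
coef(lm(y ~ x))                      # about (-1.9, 0.9); least squares coincides with the Gaussian MLE here
# the GLM version follows the same idea with a different distribution and a link, e.g.
# glm(cbind(successes, failures) ~ x, family = binomial) maximizes a binomial likelihood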
{ "source": [ "https://stats.stackexchange.com/questions/112451", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/48756/" ] }